Dataset preview columns: text (string, 1.23k–293k characters), tokens (float64, 290–66.5k), created (date, 0001-01-01 to 2024-12-01), fields (list, 1–6 items).
Effect of Inonotus obliquus Extract Supplementation on Endurance Exercise and Energy-Consuming Processes through Lipid Transport in Mice

Inonotus obliquus (IO) is used as a functional food to treat diabetes. This study investigated the effect of IO supplementation on body composition in relation to changes in energy expenditure and exercise performance. Male Institute of Cancer Research mice were divided into four groups (n = 8 per group) and orally administered IO once daily for 6 wk at 0 (vehicle), 824 (IO-1×), 1648 (IO-2×), and 2472 mg/kg (IO-3×). IO supplementation increased muscle volume, exhaustive treadmill time, and glycogen storage in mice. Serum free fatty acid levels after acute exercise improved in the IO supplementation groups, which exhibited changes in energy expenditure through the peroxisome proliferator-activated receptor (PPAR) pathway. RNA sequencing revealed significantly increased PPAR signaling; phenylalanine, ascorbate, aldarate, and cholesterol metabolism; chemical carcinogenesis; and ergosterol biosynthesis in the IO groups compared with the vehicle group. Thus, IO supplements as nutraceuticals have a positive effect on lipid transport and exercise performance. Note that this study examined IO supplementation alone, without training-related procedures.

Introduction

Inonotus obliquus (IO), also known as chaga, belongs to the Profundidae genus of the Poriaceae family [1]. It is a medicinal, edible fungus distributed mainly between latitudes 45° and 50° N, in regions such as Russia (Siberia), North America, Japan, and China (Heilongjiang) [2]. Studies have revealed that the polyphenoloxicarbonic complex of IO contains two to three types of polymer structures of different molecular mobilities. IO water extracts vary slightly according to the combination of compounds, such as phenolic substances [3], polysaccharides [4], steroid substances [5], unsaturated fatty acids, vitamin K, coenzyme Q, phospholipids, and glycolipids. The ethyl acetate extract of IO contains phenol-carbonic acids, simple phenols, flavonoids, iridoids, and triterpenes [6]. IO has many pharmacological effects and has been demonstrated to alleviate hyperglycemia, hyperinsulinemia, and impaired glucose tolerance in mice with type 2 diabetes mellitus [7]. Its insulin-stimulated glucose uptake is directly linked to the phosphoinositide 3-kinase pathway, which increases glucose uptake through the translocation of the glucose transporter type 4 protein and suppresses interleukin (IL)-2 and IL-2R levels through the inhibition of the NF-κB protein complex in mice with streptozotocin (STZ)-induced diabetes [8]. IO also affects the peroxisome proliferator-activated receptor (PPAR)-γ and sterol regulatory element-binding transcription factor 1 (SREBP1-c) transcription factors, which play critical roles in lipogenesis, thereby changing lipid metabolism [9]. Thus, IO can act as a PPAR-activating agent. PPAR is a heterotrimeric complex with a key role in stimulating energy-generating processes, such as glucose uptake and fatty acid oxidation, and decreasing energy-consuming processes, such as protein and lipid synthesis [10]. Previous studies have found that IO supplementation can accelerate glucose transport from the bloodstream into skeletal muscle and may increase mitochondrial lipid oxidation, which could delay the lactate threshold and ultimately improve endurance performance [11][12][13].
Therefore, the decrease in lipid accumulation with IO is most likely due to reduced fatty acid uptake into the muscle coupled with enhanced mitochondrial lipid oxidation [14], together with an increased potential for muscle growth through lipid-mediated insulin sensitization. IO contains betulin, betulinic acid, inotodiol, and trametenolic acid, which are its major triterpenoids [15]. Studies have revealed that ergostane-type triterpenoids are active antifatigue components [16]. Fatigue is an extremely complex physiological process, generally referring to a decline in physical and mental states that makes it difficult to maintain the physiological functions of the body in a steady state [17]. Therefore, IO is a candidate antifatigue herbal supplement. This study explored the effect of IO supplementation on exercise performance and energy-consuming processes and identified the possible mechanism through which IO can increase exercise performance.

Animals and Experiment Design

The IO full-spectrum mushroom extract was purchased from Jilin Ruikang Biotechnology (Jilin, China). Male Institute of Cancer Research (ICR) mice (6 wk old), grown under specific pathogen-free conditions, were purchased from LASCO Biotechnology (Taipei, Taiwan). All the mice had free access to a standard laboratory diet (No. 5001; PMI Nutrition International, Brentwood, MO, USA) and distilled water and were housed in a 12 h light and 12 h dark cycle at room temperature (22 ± 1 °C) and 30–40% humidity. The animal protocol (LAC-2019-0459) was reviewed and approved by the Institutional Animal Care and Use Committee of Taipei Medical University, Taipei City, Taiwan. The 1× dose of IO used for humans is typically 4 g per day. The 1× mouse dose (824 mg/kg) used in this study was converted from a human-equivalent dose (HED) based on body surface area according to the US Food and Drug Administration formula: assuming a human weight of 60 kg, the HED of 4000 mg/60 kg ≈ 67 mg/kg was multiplied by the conversion coefficient 12.3, giving 824 mg/kg; the coefficient 12.3 accounts for differences in body surface area between mice and humans, as previously described. An a priori power analysis (G*Power version 3.1.9.4; Heinrich Heine University Düsseldorf, Düsseldorf, Germany) showed that a minimum of 8 mice per group was required on the basis of conventional α (0.05) and power (0.80) values and a Cohen's d of 1.313. A sample size of n = 8 was sufficient, as reported in Chen et al. [18]. In total, 32 mice were randomly assigned to four groups (8 mice/group) for daily vehicle or IO oral gavage every morning at 9:00 a.m. for 6 wk. Animals were restrained by tightly scruffing with the nongavage hand, and oral gavage was performed using 1.5-in., curved, 20-gauge, stainless steel feeding needles with a 2.25 mm ball. The four groups were the vehicle, IO-1× (824 mg/kg), IO-2× (1648 mg/kg), and IO-3× (2472 mg/kg) groups [19]. The vehicle group received an equivalent volume of the vehicle solution.
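As an illustration of this body-surface-area dose conversion, the following minimal sketch (a hypothetical helper, not part of the original study) reproduces the stated assumptions of a 4 g/day human intake, a 60 kg body weight, and a human-to-mouse conversion coefficient of 12.3:

def mouse_equivalent_dose(human_dose_mg_per_day, human_weight_kg=60.0, km_ratio=12.3):
    """Convert a human daily dose to a mouse dose (mg/kg) via body surface area scaling."""
    human_dose_mg_per_kg = human_dose_mg_per_day / human_weight_kg
    return human_dose_mg_per_kg * km_ratio

dose_1x = mouse_equivalent_dose(4000)   # ~820 mg/kg, reported as 824 mg/kg after rounding
# approximately the IO-1x, IO-2x, and IO-3x doses (824, 1648, and 2472 mg/kg in the study)
print(round(dose_1x), round(dose_1x * 2), round(dose_1x * 3))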
Exercise Performance Test

Exercise performance was evaluated as described previously. A low-force testing system (PicoScope 2000, Pico Technology, Cambridgeshire, UK) was used to measure the forelimb grip strength of mice in the vehicle or IO treatment groups after 6 wk, 1 h after the administration of the last treatment dose. To allow IO to be hydrolyzed, digested, and absorbed in vivo, all exercise performance tests started 1 h after IO administration. The amount of tensile force was measured using a force transducer equipped with a metal bar (2 mm diameter and 7.5 cm long). A treadmill test until exhaustion was conducted as in our previous study, in which the incline started at 10° and was progressively increased every min until exhaustion [20]. We used time to exhaustion as the main index of endurance performance. To reduce stress and ensure familiarization with the treadmill and swimming water, all mice underwent seven days of acclimatization. On day 1, animals were placed on a static treadmill and in water (length 65 cm and radius 20 cm) with a 40 cm water depth maintained at 37 ± 1 °C. Subsequently, the treadmill was set to a speed of 10 m/min for 5 min on 3 consecutive days prior to the test.

Fatigue-Associated Biochemical Indices, Serum Biochemical Index, and Hormone Index

The effects of IO on serum lactate, ammonia, creatine kinase (CK), glucose, lactate dehydrogenase, and free fatty acid (FFA) levels were evaluated postexercise. All the mice were fasted 5 h prior to a 15 min swimming test. Prior to the test, the mice were provided with the IO supplementation; after 1 h, a 15 min unloaded swimming test commenced as described previously [21]. After the 15 min swim exercise, blood samples were immediately collected and centrifuged at 1500× g at 4 °C for 10 min for serum separation. The fatigue-associated biochemical indices in the serum were determined using an autoanalyzer (SYSMEX XT-2000iv, Sysmex, Kobe, Japan).

Tissue Glycogen Determination and Visceral Organ Weight

Glycogen, the stored form of glucose, exists mostly in liver and muscle tissues. The liver and muscle tissues were excised after the mice were euthanized and weighed for glycogen content analysis, as described previously [21].

Magnetic Resonance Imaging

Magnetic resonance imaging (MRI) was conducted using a 0.5 T mouse MRI scanner (MesoMR, Niumag Analytical, Suzhou, China) 1 h after the intragastric administration of IO, after the mice were anesthetized with 1.5% isoflurane. The respiratory rate was monitored using a pneumatic sensor to detect the depth of anesthesia during MRI acquisition, and we selected the third cross-section for MRI imaging. Subsequently, MRI was used to capture the muscle volume area.

Measuring Energy Metabolism and In-Cage Spontaneous Physical Activity

After 5 wk of IO supplementation, the mice were transferred to an energy-metabolism system and housed singly for 5 d. During this period, the mice were provided with free access to the food hopper and water. Respiratory gases, including water vapor, were measured with an integrated fuel-cell oxygen analyzer, a spectrophotometric carbon dioxide analyzer, and a capacitive water vapor partial pressure analyzer. The respiratory quotient (RQ) was calculated as the ratio of carbon dioxide production (VCO2) to oxygen consumption (VO2) [22]. Energy expenditure (EE) was calculated using the Weir equation: kcal/h = 60 × (0.003941 × VO2 + 0.001106 × VCO2) [23]. All the metabolism cages had running wheels to measure in-cage spontaneous physical activity (SPA) for 24 h over 5 d. Total wheel revolutions were recorded daily, and the total distance run per day was determined by multiplying the number of wheel revolutions by the circumference of the wheel. This type of exercise is considered voluntary exercise [24].
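A short sketch (with hypothetical readings, not the study's metabolic-cage software) of the RQ, Weir-equation EE, and SPA distance calculations described above, assuming VO2 and VCO2 are expressed in mL/min:

def respiratory_quotient(vo2_ml_min, vco2_ml_min):
    """RQ = VCO2 / VO2."""
    return vco2_ml_min / vo2_ml_min

def energy_expenditure_kcal_h(vo2_ml_min, vco2_ml_min):
    """Weir equation as used in the study: kcal/h = 60 * (0.003941*VO2 + 0.001106*VCO2)."""
    return 60.0 * (0.003941 * vo2_ml_min + 0.001106 * vco2_ml_min)

def daily_wheel_distance_m(revolutions, wheel_circumference_m):
    """Spontaneous physical activity: distance = wheel revolutions * circumference."""
    return revolutions * wheel_circumference_m

print(respiratory_quotient(3.0, 2.5))        # ~0.83
print(energy_expenditure_kcal_h(3.0, 2.5))   # ~0.88 kcal/h
print(daily_wheel_distance_m(5000, 0.4))     # 2000 m over one day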
RNA Sequencing of Muscle Tissue

Muscle tissue was obtained after the mice were euthanized, and an RNA-sequencing analysis was conducted as described previously [25,26]. After RNA extraction, purification, and library establishment, the libraries were sequenced using next-generation sequencing on the Illumina HiSeq platform. Gene counts were subsequently used as input for analysis using DESeq2 [27]. A gene ontology (GO) pathway analysis was performed using GOseq [28], and significantly expressed Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways were identified using Gage [29] and visualized using Pathview.

Statistical Analysis

All data are expressed as mean ± standard deviation. Statistical differences between the groups were analyzed using a one-way analysis of variance, and a Cochran-Armitage test was used for dose-effect trend analysis. All statistics were performed using PyCharm (version 2021.2.3, Prague, Czech Republic), with p values < 0.05 considered statistically significant.
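As an illustration of the group comparison described above, the following sketch (with simulated measurements drawn from the reported group means and SDs, not the authors' analysis script) runs a one-way ANOVA across the four dose groups:

import numpy as np
from scipy import stats

# Simulated per-mouse liver glycogen values (ug/g), n = 8 per group,
# drawn from the group means and SDs reported in the Results
rng = np.random.default_rng(0)
vehicle = rng.normal(9.3, 5.5, 8)
io_1x = rng.normal(14.8, 3.2, 8)
io_2x = rng.normal(23.9, 3.9, 8)
io_3x = rng.normal(24.2, 6.7, 8)

f_stat, p_value = stats.f_oneway(vehicle, io_1x, io_2x, io_3x)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 indicates a group difference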
Effect of Six-Week IO Supplementation on General Characteristics and Body Composition

Table 1 lists the body weight (BW), food and water intake, and tissue weight of the mice with IO supplementation over 6 wk. These data revealed that IO supplementation led to different epididymal fat pad (EFP) and muscle weights in the IO groups compared with those in the vehicle group. The body composition data were obtained using a mouse body composition analyzer (Figure 1A). The fat-free mass (FFM) was significantly higher in the IO-1× (1.12-fold, p < 0.0001), IO-2× (1.14-fold, p < 0.0001), and IO-3× (1.12-fold, p < 0.0001) groups than in the vehicle group. No significant difference was identified in initial BW, final BW, or water intake between the groups (Figure 1B, Table 1).

The MRI results are presented in Figure 2A. At wk 6, a quantitative MRI analysis of mouse muscle volume indicated a significant increase in muscular mass in the IO groups compared with that in the vehicle group (Figure 2A,B). A further increase in muscle volume was observed in the IO-1× (241 ± 2 mm²), IO-2× (230 ± 2 mm²), and IO-3× (227 ± 3 mm²) groups compared with the vehicle group (189 ± 3 mm²). Subsequently, muscle volume continued to increase in the IO groups. We observed no abnormalities in the MRI analysis of the IO groups compared with the vehicle group after 6 wk of IO supplementation.

Effect of Six-Week IO Supplementation on Exercise Performance

As illustrated in Figure 3A, the grip strength of the vehicle, IO-1×, IO-2×, and IO-3× group mice was 206.9 ± 69.5, 235.0 ± 60.4, 259.4 ± 52.2, and 288.3 ± 80.4 g, respectively. The grip strength of the IO-3× group mice was significantly higher (1.39-fold, p = 0.0029) than that of the vehicle group. In the trend analysis, IO supplementation had a significant dose-dependent effect on grip strength (p = 0.0067).

Effect of Six-Week IO Supplementation on Glycogen Content

As depicted in Figure 5, the muscle glycogen of the vehicle, IO-1×, IO-2×, and IO-3× groups was 27.6 ± 5.8, 33.2 ± 10.6, 35.9 ± 14.9, and 36.7 ± 11.1 µg/g, respectively. Although no significant difference was identified between the IO groups and the vehicle group in terms of muscle glycogen, the trend analysis demonstrated a significant dose dependence when IO supplementation was increased (p = 0.0259). The liver glycogen content of the vehicle, IO-1×, IO-2×, and IO-3× groups was 9.3 ± 5.5, 14.8 ± 3.2, 23.9 ± 3.9, and 24.2 ± 6.7 µg/g, respectively. The liver glycogen content of the IO-1×, IO-2×, and IO-3× groups was significantly higher (1.60-fold [p = 0.0368], 2.58-fold [p < 0.0001], and 2.61-fold [p < 0.0001], respectively) than that of the vehicle group.

Figure 5. Effect of IO on muscle and liver glycogen levels at rest. All the mice were tested for glycogen levels in muscle and liver tissues 1 h after the final treatment. Data are expressed as mean ± SD of eight mice in each group. A one-way analysis of variance (ANOVA) was used. Different letters (a, b, c) indicate a significant difference at p < 0.05.

Effect of Six-Week IO Supplementation on Biochemical Variables

The levels of alanine aminotransferase, aspartate aminotransferase, total protein, creatinine, urea assay, and glucose did not differ significantly between the groups. Albumin levels and the lipid profile (total cholesterol [TC] and triacylglycerol [TG]) were significantly lower in the IO groups than in the vehicle group. In addition, IO supplementation resulted in a dose-dependent decrease in TC and TG levels (Table 2). Data are expressed as mean ± SD, n = 8 mice/group. Different letters (a, b) in the same row indicate significant differences at p < 0.05 using one-way ANOVA. AST, aspartate aminotransferase; ALT, alanine aminotransferase; TP, total protein; ALB, albumin; BUN, blood urea nitrogen; CRE, creatinine; UA, urea assay; TC, total cholesterol; TG, triacylglycerol.
Effect of Six-Week IO Supplementation on the Gene Ontology Analysis and Enrichment Analysis of the Kyoto Encyclopedia of Genes and Genomes in Muscle

To explore the beneficial effect of IO supplementation, we analyzed the action mechanism in the vehicle- and IO-supplemented mice using transcriptome sequencing of the mouse muscle. Figure 7A presents the analysis of gene expression in the IO and vehicle groups. The DESeq R package was used to analyze the differences in gene expression, and the screening conditions were an expression difference of |log2 fold change| > 1 with a significance level of p < 0.05. The vehicle versus IO-2× comparison demonstrated the largest amount of gene upregulation compared with vehicle versus IO-1× and vehicle versus IO-3×. In the IO-2× group, 298 genes were upregulated and 62 genes were downregulated. We then constructed a Venn diagram to obtain the 16 key target genes for the IO supplementation group (Supplementary Table S1). To explore the mechanism underlying the IO effect on improved exercise performance and changes in body composition, the target genes were imported into the DAVID database for the GO enrichment (Figure 7B) and biological pathway analysis of KEGG (Figure 7C). As presented in Figure 7B, with regard to the top 20 significantly upregulated genes from the GO enrichment analysis, IO treatment had the lowest false discovery rate values for lipid oxidation activity, monocarboxylic acid metabolic process, small molecule metabolic process, extracellular region, oxoacid metabolic process, carboxylic acid metabolic process, and organic acid metabolic process. The top 20 pathways involved in IO treatment with the most significant expression were the PPAR signaling pathway, phenylalanine metabolism, ascorbate and aldarate metabolism, cholesterol metabolism, chemical carcinogenesis, and steroid hormone biosynthesis (Figure 7C). We used the target genes and their corresponding effective components to analyze the effect of IO treatment on PPAR signaling pathways (Figure 7D). Red represents the upregulated pathways after IO treatment. The results demonstrated that fatty acid transport proteins, fatty acid binding protein (FABP), apolipoprotein (APO)-AI, APO-AII, APO-CIII, APO-AV, FABP1, cytochrome P450 (CYP4A1), and perilipin are the key nodes in the network. The lipid transport pathway had the most upregulated genes after IO treatment.
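A minimal pandas sketch (with a hypothetical results table, not the authors' pipeline) of the screening step described above, keeping genes with |log2 fold change| > 1 and p < 0.05:

import pandas as pd

# Hypothetical DESeq-style results table, one row per gene
results = pd.DataFrame({
    "gene": ["Fabp1", "Apoa1", "Apoa2", "Apoc3", "Cyp4a", "Plin1", "GeneX"],
    "log2FoldChange": [1.8, 1.4, 1.2, 1.1, 1.6, -1.3, 0.3],
    "pvalue": [0.001, 0.004, 0.010, 0.020, 0.002, 0.030, 0.600],
})

# Screening conditions used in the study: |log2 fold change| > 1 and p < 0.05
significant = results[(results["log2FoldChange"].abs() > 1) & (results["pvalue"] < 0.05)]
upregulated = significant[significant["log2FoldChange"] > 0]
downregulated = significant[significant["log2FoldChange"] < 0]
print(len(upregulated), "upregulated;", len(downregulated), "downregulated")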
Discussion

The results demonstrated that 6-wk IO supplementation increased muscle volume and decreased fat weight (EFP tissue) in the mice. IO supplementation enhanced mouse exercise performance through energy expenditure (Figure 2) by increasing glucose availability during exercise. Studies have also revealed that IO increases swimming time and the glycogen content of the liver and muscle, which are important fuel sources during exercise [11]. In other studies, the IO polysaccharide increased GRAF1 expression in mouse gastrocnemius muscles [32], which maintains muscle membrane integrity and supports muscle repair [33]. According to our results, IO may influence cross-talk between muscle and lipid metabolism to improve glucose regulation. The reduction in lipid tissue (Figure 2) and the increase in glucose uptake are consistent with the increases in glycogen levels and FFA content observed after 6-wk IO treatment (Figure 5). Consistent with this, a recent study suggested that elevated FFA immediately after exercise can provide an energy source for skeletal muscles, restoring glycogen concentrations [25]. A previous study revealed that a cholesterol-rich food (similar to IO) can restore liver glycogen content and thereby extend exercise time [34], which is further evidence that IO has the potential to condition glucose absorption and increase insulin sensitivity, both of which are related to exercise performance. In addition, we identified CK as a metabolic biomarker after acute exercise. CK levels were reduced after acute exercise in the IO treatment groups; CK is deemed to play a key role during strenuous exercise with muscle fatigue [35]. Numerous studies have focused on IO for treating diabetes [7,19,36]. Our results suggest that IO could affect insulin secretion and glycemic control [37], both of which are adjusted after exercise and are involved in fatigue. Previous studies have also reported that IO treatment increases insulin-stimulated glucose uptake [9]. The increase in FFA may alter hepatic lipid storage and/or insulin resistance to some extent, supporting the increase in blood glucose after exercise [38,39].
Consequently, IO treatment alters skeletal muscle insulin resistance to reduce acute exercise metabolic waste. Our results demonstrated that IO treatment for 6 wk increased the effects of exercise performance on glucose and lipid metabolism rather than moderating acute exercise, and this may, in part, explain the improved exercise performance. To examine how IO affects body composition and exercise performance, we analyzed data related to activated PPAR, a lipid-activated transcription factor that regulates APO-AI, APO-AII, and APO-CIII gene expression. According to Szychowski et al. [40], IO extract consists of triterpenoid, steroid, and polysaccharide fractions, and these substances are the key regulators that activate PPARs. The mice treated with IO became exhausted later than those in the vehicle group and exhibited higher rates of glycogen depletion in the liver than in skeletal muscles. According to the metabolic data, IO supplementation increased EE and reduced the RQ value, indicating an increase in fat oxidation and a decrease in carbohydrate oxidation [40]. These data indicate that during 6-wk IO treatment, the mechanisms underlying improved exercise performance involved increased glycogen content [41] and the rapid repletion of glycogen stores, both of which maximize lipid oxidation and oxidative capacity for the performance recovery of endurance athletes [42]. Although EE increased during IO treatment, BW exhibited no difference between the groups in our study; the roles of changes in EE and fat oxidation in weight change are contentious [43]. IO may be used to treat obesity and may be able to modulate fat oxidation, similar to nutrients and drugs used for treating diabetes. In addition, the present study revealed that the increases in the expression of lipid-activated transcription factor genes in energy balance pathways and the body composition changes resulting from IO treatment are similar to responses to exercise training programs [44]. The mice in the IO-2× group increased their SPA through an enhanced wheel speed; thus, IO could be used to increase lipid oxidation as a predominant energy substrate [45]. In the present study, IO supplementation decreased the lipid levels in mice. This can be attributed to an increase in APO-A1 expression in the IO treatment groups. The APO-A1 gene in the high-density lipoprotein pathway is associated with muscle differentiation [10]. IO supplementation activated the PPAR pathway, inducing gene expression associated with glucose uptake, fatty acid synthesis, and lipid storage [46]. We investigated the glycogen levels in the muscle and liver. Liver glycogen levels indicated increased glucose storage through IO treatment. Furthermore, the increase in glycogen storage was consistent with the increase in the mouse exercise exhaustion time, the reduced fat accumulation, and the enhanced FFA after acute exercise following IO treatment. This observation suggests that the activation of the PPAR pathway regulates lipid catabolism in skeletal muscles. In the dose-dependence analysis, some IO-3× data were not more effective than those of IO-1× and IO-2×; we therefore infer that increasing the dosage to 3× provides no significant additional benefit. Thus, IO-2× was the acceptable dose as a nutraceutical. Notably, this increase in gene expression in the PPAR pathway was mirrored by improved insulin sensitivity in skeletal muscle; our data showed an increase in absolute and relative muscle mass.
The increase in muscle mass could lead to greater mitochondrial content and respiration, which ultimately facilitates an increase in VO2 and hence endurance performance as assessed by time to exhaustion. Therefore, IO may counteract skeletal muscle fatigue to extend exercise time. This study has some possible limitations. It examined IO supplementation alone, without training-related procedures. Because the mice did not undergo any exercise training during the supplementation period, we do not know whether combining exercise training with IO supplementation can further increase endurance performance compared with IO supplementation alone. We also did not compare moderate (6 wk) versus prolonged (12 wk) supplementation with respect to muscle and endurance performance in mice. However, this has not been investigated by any study to date and thus warrants further investigation.

Conclusions

In this study, 6-wk IO supplementation significantly decreased the EFP weight and had beneficial effects on muscle content. Exercise performance was significantly improved in the IO groups through increased glucose and FFA content after acute exercise. The glycogen storage ability also increased. The results of RNA sequencing revealed that FATP, FABP, APO-AI, APO-AII, APO-CIII, APO-AV, FABP1, CYP4A1, and perilipin expression increased in the network. The lipid transport pathway was the most upregulated during IO treatment. These data indicate that IO increases lipid transport and has positive effects on glucose uptake during exercise. Further studies are needed to understand the upregulated pathways. To isolate the effect of IO supplementation on lipid transport, further studies should examine IO supplementation with and without resistance exercise so that the separate and combined effects of IO supplementation can be revealed. However, this was beyond the main purpose of this study and thus warrants further investigation.
tokens: 6,311.2 · created: 2022-11-25 · fields: Medicine, Biology
Enzymatic catalysis for sustainable production of high omega-3 triglyceride oil using imidazolium-based ionic liquids

Abstract

Two different fish oil preparations, namely triglycerides and ethyl esters containing, respectively, 30.02% and 74.38% omega-3 fatty acids, were employed as the substrates for transesterification. Catalyzed by immobilized lipase using imidazolium-based ionic liquid systems, the total content of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) in the resulting triglyceride reached 63.60% when 4% hydrophobic ionic liquid was used, which was 11.74% higher than that of the triglyceride produced in a solvent-free reaction system. The activation energy of the product (triglyceride-type fish oil) was 173.64 kJ mol−1, which was not significantly different from that of the commercial ethyl ester-type fish oil; the other thermal oxidative kinetic parameters were likewise similar. The kinetic parameters depicting the thermal and oxidative stability of the fish oil product provide a basis for industrial processing, storage, and applications.

Since the structures of cations and anions are crucial in making ILs hydrophilic or hydrophobic, the anion portion has been found to be more crucial in determining the water miscibility of ILs (Huddleston et al., 2001). The ILs based on [PF6]− (hexafluorophosphate) and [Tf2N]− {bis[(trifluoromethyl)sulfonyl]amide} are normally water-immiscible, making them the solvents of choice for forming biphasic systems in most IL applications (Zhao et al., 2005). Moreover, the pioneering work on the recovery of n-3 PUFA methyl esters from fish oil by combining ILs and silver salts was first reported by Li and Li (2008). In their work, hydrophobic ILs with large anions of low lattice energy were identified as the best. For ester synthesis via direct esterification, lipase/esterase-mediated catalysis using functionalized ionic liquids as dual solvent-catalysts could be considered a green approach in terms of catalysts and green solvents (Tao et al., 2016). The main advantages of lipase/esterase-mediated esterification in nonaqueous systems compared to acid/base catalysis include the mild reaction conditions, high selectivity, and creation of less waste (Brígida, Amaral, Coelho, & Goncalves, 2014; Reetz, 2013). Specifically, ILs containing aromatic rings, i.e., the imidazole and pyridinium types, have been reported to be capable of selectively extracting the healthful n-3 PUFA, thus subsequently enhancing their purity, by forming reversible π-bonding that governs the selective adsorption of PUFA and ethyl esters from fish oil (Cheong et al., 2011; Ventura et al., 2017). Nevertheless, there remains a void in the literature on how such a green system could be applied to the production of fish oil containing high omega-3 triglyceride and on the stability of the resulting products. In the present study, imidazolium-based ILs containing different anions were employed to conduct enzymatic catalysis of esterification in the production of high omega-3 triglyceride oil. The thermal stability and oxidative kinetics of the products were investigated to gain insights on how to increase product stability so that it can be further optimized to meet industrial processing, storage, and application needs.

| Substrates and reagents
N-octanoic acid and n-hexane (HPLC grade) were purchased, respectively, from Sinopharm Chemical Reagent Co., Ltd. (Shanghai, China) and Shandong Yuwang Industrial Co., Ltd (Shandong, China).
Thin-layer chromatography silica gel G (analytical grade) was purchased from Qingdao Marine Chemical Co., Ltd (Shandong, China). Boron trifluoride diethyl ether and carboxymethyl cellulose sodium (analytical grade) were purchased from Aladdin Chemicals Co., Ltd (Shanghai, China).

| Transesterification reaction and separation of triglycerides
The following general procedures were employed for the transesterification reaction. First, 5 g of triglyceride fish oil was mixed with 5 g of ethyl ester fish oil in a double-beaker system. The temperature was kept constant (52°C) using a controlled water bath. The reaction began with the addition of 3% lipase (0.3 g) to the mixture under sealed and light-free conditions. After 24 hr, a 2.5-ml mixture of anhydrous ethanol and acetone was added to terminate the reaction. To obtain the mixed fish oil, the reaction product was filtered to remove the lipase and evaporated to remove residual organic solvents (Ruzich & Bassi, 2011). The product was then spotted onto a chromatographic plate and chromatographed. The chromatographic silica gel containing the triglyceride fraction was scraped off, and the fraction was retained for the methyl esterification reaction. At the end of the reactions, all ILs used were separated and recovered by centrifugation, based on the ILs' immiscibility and drastically higher densities compared with the other components in the system (Huddleston, Willauer, Swatloski, Visser, & Rogers, 1998; Liang, Zeng, Yao, & Wei, 2012; Sunitha, Kanjilal, Reddy, & Prasad, 2007).

| Methyl esterification of fatty acids
For methyl esterification, 10 ml of 0.5 M NaOH methanol solution was added to the triglyceride fish oil sample and shaken for 20 min in a constant-temperature water bath at 65°C until the oil droplets disappeared. After cooling, 10 ml of methanol-boron trifluoride diethyl ether solution (3:1, v/v; used immediately after preparation) was added, and the mixture was shaken in a constant-temperature water bath (70°C) for 20 min. After cooling, 10 ml of n-hexane was added to the mixture and shaken sufficiently for extraction. Saturated salt water was then added to precipitate the oil sample dissolved in n-hexane. Next, the supernatant was transferred into a centrifuge tube, and the n-hexane was evaporated under a nitrogen flush. The sample was then injected into the gas chromatograph (see below) for analysis.

| Determination of lipase transesterification activity
Five g of n-octanoic acid and 5 g of triglycerides were added into a 25-ml conical flask with a stopper. After stirring uniformly, 1 ml of ionic liquid and 0.1 g of lipase were added under oscillation at 60°C for 2 hr. Approximately 0.2 g of the reaction mixture was transferred immediately and dissolved in 5 ml of n-hexane before 1 M KOH solution was added to remove free fatty acids. After methyl esterification, the relative percentage of n-octanoic acid in glycerides was analyzed by gas chromatography (Goujard, Ferre, Gil, Ruaudel, & Farnet, 2009). One transesterification enzyme unit is defined as the amount of enzyme required to transfer 1 μmol of n-octanoic acid to glyceride per min (Janssen, Oosten, Paul, Arends, & Hollmann, 2014).

| Thermal analysis
Thermal analysis of the fish oil samples was carried out with a synchronous thermal analyzer (STA 449F3; Netzsch Co., Germany). Approximately 6 mg of the sample was placed in a ceramic crucible, capped, and subjected to a thermal reaction in an air atmosphere under 'sample + calibration' mode, and the heating rate was set at 5, 10, 15, and 20°C/min, respectively.
At each heating rate, an empty crucible was used as a control and scanned as a baseline in the corresponding temperature range. The airflow was fixed at 60 ml/min (Tan & Che, 2002).

| Gas chromatographic (GC) analysis
Gas chromatographic analysis was performed with an Agilent 7890A gas chromatograph equipped with a 112-88A7 HP-88 capillary column and an FID detector. The chromatographic conditions were as follows: injection volume 1 μl; column temperature programmed to 140°C, held for 5 min, then ramped at 4°C/min to 220°C and held until the analysis was complete. When the EPA and DHA contents were analyzed, the inlet and detector temperatures were set at 250 and 280°C, respectively. When the n-octanoic acid content was analyzed, the inlet and detector temperatures were set at 240 and 250°C, respectively. The gas flow rates were 1.0 ml N2/min (carrier), 35.0 ml H2/min, 350 ml air/min, and a tail gas flow of 40 ml N2/min; the split ratio was 20:1 when EPA and DHA content was analyzed and 100:1 when analyzing n-octanoic acid content (Ha, Mai, & Sang, 2007). The fatty acids were quantified by Agilent ChemStation software, and the relative percentage of the various fatty acids was determined by area normalization. The tests were carried out in parallel, and all sample measurements were conducted in triplicate.

| Selection of lipase
The effect of different lipases on the transesterification reaction was first assessed to select the most effective enzyme as the catalyst for further studies. Figure 1 shows the total EPA and DHA attained in the esterified triglyceride after equal amounts of enzymes were used in the reaction at room temperature without light. Among the three enzymes studied, Lipozyme TL IM produced triglyceride with the highest EPA and DHA contents (51.86%), followed by PPL (47.91%) and Novozyme 435 (47.18%). The effect of the enzymes on the full composition of fatty acids (saturated, monounsaturated, and polyunsaturated fatty acids) has been reported elsewhere by the authors (Li et al., 2018). Carvalho, Campos, Noffs, Fregolente, and Fregolente (2009) reported that EPA and DHA, as well as other n-3 PUFA, tend to attach to the stereospecific Sn-2 position on the glycerol backbone. Therefore, the Sn-1,3-specific Lipozyme TL IM (Kim, Kim, Lee, Chung, & Ko, 2002) was able to clip off low-degree fatty acids from the Sn-1,3 positions while leaving long-chain n-3 PUFA attached to Sn-2 on the glycerol backbone, yielding higher EPA and DHA contents. Additionally, it has been shown that alcoholysis of vegetable oils is faster with Lipozyme TL IM than with Novozyme 435 (Hernández-Martín & Otero, 2008), indicating that Lipozyme TL IM performs more effectively in hydrolytic reactions than Novozyme 435. Lipozyme TL IM, an immobilized lipase produced by Thermomyces (Humicola) lanuginosa, has a hydrophilic character besides its Sn-1,3 specificity. It is less expensive than the commonly applied commercial lipase Novozyme 435, thus offering an opportunity for industry to reduce the processing cost of high omega-3 triglyceride oil.
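A small sketch (with a hypothetical peak table, not the ChemStation output format) of the area-normalization quantification described in the GC analysis section above, where each fatty acid's relative percentage is its peak area divided by the total peak area:

# Hypothetical GC-FID peak areas for a few fatty acid methyl esters
peak_areas = {"C16:0": 1200.0, "C18:1": 950.0, "EPA (C20:5 n-3)": 2100.0, "DHA (C22:6 n-3)": 1750.0}

total_area = sum(peak_areas.values())
relative_percent = {fa: 100.0 * area / total_area for fa, area in peak_areas.items()}
for fa, pct in relative_percent.items():
    print(f"{fa}: {pct:.2f}%")

# Total EPA + DHA content, the quantity tracked throughout the study
epa_dha = relative_percent["EPA (C20:5 n-3)"] + relative_percent["DHA (C22:6 n-3)"]
print(f"EPA + DHA: {epa_dha:.2f}%")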
| Effect of ILs on transesterification
To characterize the effect of ILs on the enzymatic activity during transesterification, n-octanoic acid was employed because of its low cost and low viscosity, which favor the enzymatic reaction with triglyceride without needing any solvent. As can be seen in Figure 2, owing to the drastically higher density of ILs compared with the fish oil, triglyceride, and immobilized enzymes used in the present study, the enhanced transesterification could be attributed to the phase separation caused by the ILs, which significantly increased the surface area between the enzyme and the reactants, resulting in an accelerated reaction rate (Nowicki & Muszynski, 2014). According to Ventura et al. (2017), the lipase-catalyzed transesterification of fish oil could be categorized under liquid-liquid extraction with hydrophobic ILs. Although ILs are still quite expensive at present, hydrophobic ([PF6]-based) ILs have been effectively applied to extract different bioactive compounds (Erbeldinger et al., 2000). It has been reported that the process reaches equilibrium relatively quickly and can be further enhanced by increasing the volume ratio of IL to solvent (Zhao et al., 2005). The possibility of recycling, as well as the simplicity of the procedure, results in cost efficiency compared with the existing transesterification process. However, the number of available hydrophobic, water-immiscible ILs is much more limited than that of water-miscible ones. This suggests certain limitations in terms of variability and tuning of the IL chemical structures aimed at optimizing the performance of such systems. A low concentration of [BMIM][PF6] not only preserved the water layer around the lipase but also provided an adequate microenvironment that reduced the distance between reactants, hence enhancing the activity and stability of the enzyme (Linder, Kochanowski, Fanni, & Parmentier, 2005). However, as the concentration of [BMIM][PF6] increased to 6% and beyond, the reaction rate decreased, which could be attributed to (a) dilution of the reactants in the system, which prevented the reaction from going forward, and (b) the increase in system viscosity due to the viscosity of [BMIM][PF6], which hindered the transesterification reaction, reducing the total EPA and DHA yield.

Figure 4. The thermal stability (mW/mg) of the triglyceride-type fish oil produced in the present study up to 700°C under various temperature gradients (5, 10, 15, and 20°C/min).
Figure 5. The thermal stability (mW/mg) of a commercial ethyl ester-type fish oil product containing 40.04% EPA and 20.36% DHA up to 700°C under various temperature gradients (5, 10, 15, and 20°C/min).
Figure 6. The fatty acid composition of the triglyceride-type fish oil produced in the present study.

| Effect of ILs on lipase-catalyzed fish oil transesterification
Figure 7. The fatty acid composition of the commercial ethyl ester-type fish oil employed in the study.
Figure 8. The activation energy (E) of triglyceride-type and ethyl ester-type fish oil could be obtained from the slope of the regression lines when plotting log β against 1/T based on the Ozawa-Flynn-Wall isoconversional method.
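A brief sketch (with synthetic peak temperatures, not the study's measurements) of the Ozawa-Flynn-Wall estimate referred to in the Figure 8 caption above, where the slope of log β versus 1/T gives E ≈ −slope · R / 0.4567 under Doyle's approximation:

import numpy as np

R = 8.314  # J mol^-1 K^-1

# Heating rates used in the study (deg C/min) and hypothetical peak temperatures (K)
beta = np.array([5.0, 10.0, 15.0, 20.0])
T_peak = np.array([640.0, 652.0, 660.0, 666.0])

# Ozawa-Flynn-Wall: log10(beta) = const - 0.4567 * E / (R * T)
slope, intercept = np.polyfit(1.0 / T_peak, np.log10(beta), 1)
E_activation = -slope * R / 0.4567  # J/mol, same order as the ~174 kJ/mol reported in the study
print(f"Estimated activation energy: {E_activation / 1000:.1f} kJ/mol")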
| Thermal analysis
The thermal stability of the triglyceride-type fish oil produced in the present study was examined using DSC (Figure 4) in comparison with a compatible commercial ethyl ester-type fish oil product containing 40.04% EPA and 20.36% DHA (Figure 5). In general, the DSC curves in Figures 4 and 5 showed similar trends, and the peak positions shifted to higher temperatures with increasing heating rate. Thermal stability of fish oil is one of the major concerns of the fish oil industry because the inherent oxidation tendency of the high n-3 PUFA content could result in oxidized products that pose toxic threats to human health, including accelerated lipid oxidation in the human body, destruction of cell membranes that consequently deteriorates cell functionality, and cardiovascular diseases (Choe & Min, 2006; Seppanen & Csallany, 2002). Equally noteworthy is that the DSC curves of the triglyceride-type fish oil (Figure 4) showed more small peaks at lower temperatures (around 200 and 300°C) than those of the commercial ethyl ester-type, indicating that the triglyceride-type oil produced in the study contained fatty acids spread over a wider range of molecular weights than the ethyl ester-type. This could be attributed to the wide spectrum of acyl groups formed during the transesterification process when producing the triglyceride-type fish oil. This observation was further confirmed by analyzing the fatty acid composition of both the triglyceride-type and the commercial ethyl ester-type fish oil (Figures 6 and 7). As can be seen in Figure 6, the triglyceride-type fish oil contained 40 fatty acids, whereas the ethyl ester-type fish oil contained only 29 fatty acids (Figure 7).

| Assessment of kinetic parameters
To assess the activation energy of the transesterification reaction, as well as to identify critical kinetic parameters, the Ozawa-Flynn-Wall isoconversional method was employed (Mothé & de Miranda, 2013; Ozawa, 1965). The activation energy (E) of the triglyceride-type and ethyl ester-type fish oil could be obtained from the slope of the regression lines when plotting log β against 1/T (Figure 8). As can be seen in Table 1, the activation energy of the commercial ethyl ester-type fish oil was slightly higher than that of the triglyceride-type fish oil, indicating that the ethyl ester-type fish oil had a better stability against thermal oxidation than the triglyceride-type. However, as the triglyceride-type fish oil carried a higher degree of unsaturation than the commercial ethyl ester-type, the difference in activation energy was acceptable, especially since both showed similar kinetic parameters (Table 1).

| CONCLUSION
This study successfully produced triglyceride-type fish oil containing an elevated concentration of omega-3 fatty acids by carrying out lipase-catalyzed transesterification in an IL system. In the presence of 4% [BMIM][PF6], Lipozyme TL IM was able to catalyze triglyceride transesterification, reaching a total EPA and DHA content of 63.60%. The thermodynamic characteristics of the triglyceride-type fish oil were comparable to those of the commercially available ethyl ester-type, but the former carried more fatty acids than the latter. The kinetic parameters acquired from the present study could serve as the foundation for process scale-up.

ACKNOWLEDGMENT
The authors would like to express gratitude for the financial support provided by the Fujian Province Developing and Special Project of Marine High-Tech Industry, project number 2015-21.

CONFLICT OF INTEREST
None declared.

ETHICAL STATEMENT
There is no conflict of interest to declare. This article does not contain any studies with human participants or animals performed by any of the authors.
tokens: 3,668 · created: 2018-08-06 · fields: Agricultural And Food Sciences, Chemistry
Technique for Large-Scale Antenna Beamforming Based on Neural Network

The commercialization of the fifth-generation technology and the rapid development of Internet of Things technology have made mobile communication networks increasingly complex. Simultaneously, the massive terminal connections have caused serious interference between networks. Large-scale array antennas have become a hot topic of recent studies to improve the wireless transmission characteristics and spectrum resource utilization efficiency. This study was aimed at exploring the use of large-scale array antennas combined with neural networks in ultradense cells to improve the wireless signal transmission quality. The simulation results showed that the scheme realized not only wireless signal transmission with a neural network but also the effective recovery of the source signal. Similarly, the application of this scheme effectively reduced the power consumption during signal transmission, the delay, and the interference.

Introduction

In wireless communication systems, large-scale antennas play an important role [1][2][3], especially with the gradual commercialization of fifth-generation communication and the requirements of Internet of Things applications [4]. They are essential for the massive terminal connections, which demand higher system capacity. The application of large-scale antennas can overcome the multipath fading in the communication environment and improve resource utilization, further enhancing the system capacity through space division multiple access technology. Therefore, large-scale antennas have become an important technology in modern and future communication systems. Large-scale antennas can also be deployed in ultradense cells and hotspots to effectively reduce path loss, delay, and interference and thereby reduce power consumption [5]. Some studies [6][7][8] have investigated large-scale antenna techniques that simulate stationary beamforming and channel state precoding. Two studies [9,10] focused on the capacity of downlink beamforming based on multiple-input and multiple-output (MIMO) and gave a closed-form expression. According to previous studies, this approach effectively increased the channel capacity by N times compared with the conventional approach. Based on the statistical properties of the wireless channel and considering the superiority of artificial intelligence neural networks in exploring the transmission channel and perceiving the received information, previous studies [11,12] combined neural networks and MIMO to build an artificial intelligence-based MIMO architecture model. This approach can improve the channel conditions, reduce signal distortion due to channel fading, and increase the spectral efficiency through channel estimation and symbol discovery. The multipolarized uniform linear large-scale MIMO with plane and spherical waves and the multipolarized circularly distributed large-scale MIMO techniques were investigated to reduce the orthogonal properties of the channel [13,14]. The authors suggested that the performance of the multipolarized, circularly distributed large-scale MIMO antenna array is better than that of the multipolarized, linearly distributed large-scale MIMO antenna array. The transmission environment is harsh, and interference is extremely serious due to the complexity of the wireless network structure. Studies [15,16] investigated the interference management technique based on deep-learning MIMO beamforming to reduce the interference.
The authors used a combination of maximum ratio transmission beamforming and zero-forcing beamforming techniques, and the results showed that this approach greatly improved the data transmission rate. Similarly, in a previous study [17], the authors adopted a cooperative strategy to compare techniques for interference cancellation in large-scale MIMO downlinks between large-scale MIMO and network MIMO. The results showed that the quality of service of large-scale MIMO systems is better than that of network MIMO systems in a cellular small-scale fading environment. However, the scheme has a more complex theoretical model. In previous studies [10,15,18,19], the authors used an antenna selection algorithm for maximum-capacity transmission to reduce system complexity. The study compared the sensitivity to transmission power of three kinds of schemes: beamforming for maximum rate transmission, zero-forcing beamforming, and minimum mean-squared error beamforming. Other studies [17,20] investigated the multiantenna coordinated beamforming technique based on multiple cells to improve the communication quality of users at the edge of the cell, and the results showed that the scheme significantly improved the communication quality of the users. A hybrid large-scale MIMO beam assignment scheme was proposed in previous studies [20][21][22] to overcome the deficiency that the current large-scale MIMO beam assignment only serves a single user. The scheme obtained the user spectrum resource allocation via the spatial user covariance matrix, and the results showed that the scheme significantly reduced the power consumption. In this study, a massive MIMO beamforming technique was proposed on the basis of neural networks. This technique used the learning function of neural networks to enhance the tracking and identification of user signals. Therefore, in-depth research and exploration were conducted in this area in this study. The main contributions of this study are reflected in the following aspects: This study has been organized as follows: the second part introduces the neural network-based large-scale MIMO beam assignment technique; the third part discusses the model simulation and analysis; and finally, the conclusion is presented.

System Model

To simplify the system model, we assumed a region with a massive MIMO cell serving multiple users simultaneously. Each cell had M transceiver antennas with an array element interval of d (one-half wavelength), wavelength λ, plane waves, and speed of light c. There were N single-antenna user terminals, which were randomly distributed. The elevation angle of the users to the array antennas was θ_i (i = 1, 2, ⋯, N). The neural network-based beam assignment model of the large-scale MIMO antenna is shown in Figure 1. In Figure 1, the array antennas (1, 2, ⋯, N) are used as input signal units to the neural network. According to the antenna theory described in previous studies [2,6], the delay of the plane wave emitted by the source at each antenna was (0, d sin θ/c, 2d sin θ/c, ⋯, (N − 1)d sin θ/c), where θ is the azimuth from the signal source to the antenna. The symbol "О" in the figure represents the neuron nodes of each layer; b_i (i = 1, 2, ⋯, N) is an additive signal for each neuron; w_ij (i = 1, 2, ⋯, N; j = 1, 2, ⋯, N) is the weight value; and a_i (i = 1, 2, ⋯, N) is the output signal of the neuron. The source signal is denoted by s(t).
The signal received by each antenna was expressed as s(t − τ_i), i = 1, 2, ⋯, N. The output signal of each input-layer neuron was denoted by a_i(t), where f_i(·) is the activation function of the input layer. The input signal of the output-layer neurons was denoted by net(t). The output signal of each output-layer neuron was expressed as o_i(t), i = 1, 2, ⋯, N, where Φ_i(·) is the activation function of the output layer. The last output result was obtained by summing the output-layer signals through an adder. Next, we used a back-propagation (BP) neural network to establish the weights and thresholds of each layer of the neural network. According to the error propagation direction of the BP neural network, a gradient descent algorithm was used to calculate the weights and thresholds layer by layer and finally output an approximation of the expected value. The sample mean-squared error function [23] was expressed in terms of the target value T of the output signal. According to equation (5), using the gradient descent algorithm, the threshold correction and weight correction for each layer were obtained, where the parameter ε is the adaptive learning rate factor. From equations (2)-(4), the adjustment formulas for the output-layer weights w, the output-layer thresholds a_i, and the input-layer thresholds b_i were derived, and using equations (1)-(5), the corresponding corrections to the weights and thresholds were obtained. For the algorithm design, the steepest descent method in nonlinear programming is usually used [11,15,23] to minimize the error in equation (5); that is, the weights are modified along the negative gradient direction of the error function. However, this method usually suffers from low learning efficiency, slow convergence, and a tendency to fall into local minima. To avoid these problems, this study adopted the additional momentum approach described in previous studies [23,14] to change the effect of the trend of the error minima on the error surface. The algorithm is as follows: (1) Calculate the delay of the signal source arriving at each array antenna according to (N − 1)d sin θ/c, and obtain the signal data at a certain moment by sampling.

Simulation

In this study, we simulated the network in Figure 1. The array antennas are the input elements of the input layer in the neural network. In this case, each antenna receives a signal from a source with azimuth θ. Each neural unit in the hidden layer was connected to an antenna with an additional threshold b. The output layer of the network had n neurons, where the outputs of the neurons and the neurons in the hidden layer were connected by a weight w. Finally, the output unit in the output layer produced a signal through an adder, which was the signal received from the source by the array antenna. Next, we set the following parameters for the simulation, as shown in Table 1. First, a random number generator was used to generate a sequence of M decimal numbers as the signal source, which was denoted by X. Similarly, a matching random azimuth was generated for each source information, which was denoted by θ. Then, the delay of the signal arriving at each antenna was calculated using equation (5).
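A compact numpy sketch (with illustrative parameters such as the carrier frequency, which are assumptions rather than values from the paper) of the per-antenna delay model described above, where antenna i receives the source signal delayed by i·d·sin(θ)/c and d is half a wavelength:

import numpy as np

c = 3.0e8                  # speed of light (m/s)
f = 3.5e9                  # assumed carrier frequency (Hz)
wavelength = c / f
d = wavelength / 2.0       # half-wavelength element spacing
N = 8                      # number of array antennas
theta = np.deg2rad(30.0)   # azimuth of the source

# Delay of the plane wave at antenna i: tau_i = i * d * sin(theta) / c
tau = np.arange(N) * d * np.sin(theta) / c

def source(t):
    """Narrowband source signal s(t)."""
    return np.cos(2 * np.pi * f * t)

t0 = 1e-6                    # sampling instant
snapshot = source(t0 - tau)  # s(t - tau_i): one sample per antenna, fed to the network input layer
print(snapshot)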
Finally, the signals received by each antenna were sampled and sent to the neural network. The neural network was then trained, and the outputs of the output neurons were used to synthesize the signal source through an adder. To verify the proposed scheme, we simulated two cases. In the first case, all data samples participated in the training, and the gradient descent method used a learning rate of 0.05 and 20 iteration steps. Figure 2 shows the results for the convergence of the weights and the learning rate. Figure 2 shows that the gradient rate of change approached a value of 1.9172 after 6 iterations of the program, and the learning rate decreased from 0.05 to 0.007 and stabilized at 20 iterations. During this process, the training-result validation check was triggered up to two times. Figure 3 shows that during the 20 iterations, the minimum mean-squared error of the training and test data approached the optimal result and the target accuracy at the sixth iteration. The optimal results and accuracy were fully achieved after the 20th iteration, when the minimum mean-squared error was about 0.2474. Figure 4 shows the linear relationship between the sample data and the output target in the training, testing, and validation states of the network; a value of R = 0 would indicate no correlation between the sample data and the output target value. The fitted equation is shown in Figure 4. Figure 5 shows the comparison between the sample data and the simulation data for the two cases of all sample data and 20% of the sample data involved in the training. Figure 5 illustrates that in both cases the network simulation data closely matched the sample data. Conclusions Large-scale array antennas are important devices for current and future wireless information transmission, given their advantages such as high data rates, resistance to multipath interference, and the ability to exploit spatial resources. Therefore, this study investigated a beam assignment technique for large-scale array antennas based on a neural network. The principle of this technique is to use the array antennas as the input units of the neural network and to output the final data from the error backpropagation network in the output layer through an adder. Through simulations, this study used all of the data for training as well as only 20% of the data for training with the other parameters unchanged. The results revealed that both cases could recover the sample data very well. In conclusion, combining the antenna array with a neural network is a feasible scheme for data reception. Data Availability The dataset used in this study was a systematically generated random sequence of M decimals, as described in the paper. Conflicts of Interest The authors declare that they have no conflicts of interest.
2,955.4
2022-09-06T00:00:00.000
[ "Engineering", "Computer Science" ]
Probing double hadron resonances by the complex scaling method Many newly discovered excited states are interpreted as bound states of hadrons. Can these hadrons also form resonant states? In this paper, we extend the complex scaling method (CSM) to calculate the bound state and resonant state consistently for the $\Lambda_c D(\bar D)$ and $\Lambda_c \Lambda_c (\bar \Lambda_c)$ systems. For these systems, the $\pi, \eta, \rho$ meson exchange contributions are suppressed, the contributions of intermediate- and short-range forces from $\sigma/\omega$ exchange are dominant. Our results indicate that $\Lambda_c D$ system can not form bound state and resonant state. There exist resonant states in a wide range of parameters for $\Lambda_c \bar D$ and $\Lambda_c \Lambda_c (\bar \Lambda_c)$ systems. For these systems, the larger bound state energy, the easier to form resonant states. Among all the resonant states, the energies and widths of the P wave resonant states are smaller and more stable, which is possible to be observed in the experiments. The energies of D and F wave resonant states can reach dozens of MeV and the widths can reach hundreds of MeV. I. INTRODUCTION Since the observations of the charmonium-like state X(3872) [1], many new type hadrons have been discovered. The quantum numbers of these states are different from the traditional qq mesons and qqq baryons, so they are called the exotic hadrons, such as X(3872) [1], Y(4260)/Y(4360) [2,3], Z b (10610)/Z b (10650) [4], P c (4380)/P c (4450) [5]. Since many of these exotic states X/Y/Z/P c are close to the thresholds of two hadrons, they are naturally considered as candidates of the hadronic molecules. The non-relativistic effective field theories and lattice QCD are the suitable to study the structure of these hadronic molecules. Among various explanations, hadronic molecules gain more attention, especially for the hadrons containing one or two heavy quark(s). Deuteron is the only stable molecule state of hadrons, which can be well explained by the one-bosonexchange model [6,7]. Along this way, we suppose that the deuteron-like hadron states have similar structures and interaction potentials. The one-boson-exchange model is a reasonable theoretical method for explaining the hadronic molecular states. According to the mass of exchanged mesons, the mesons π, η, σ, ρ and ω exchange contribute to the long, medium, and short range interaction, respectively. In the last decade, the molecular states of hadrons have been extensively explored for the discoveried charmonium-like X/Y/Z and P c (4380)/P c (4450) states [8][9][10][11][12][13]. In the framework of one-boson-exchange model, these hadrons can not only bind to hadron molecular states, but also form resonant states with high angular momentum, which is less studied in hadron physics. The resonance is one of most striking phenomenon in the scattering experiment, which exists widely in atoms, molecules, nuclei and chemical reactions. Therefore, researchers have developed many methods to study resonances, including: R-matrix method [14,15], K-matrix method [16], scattering phase shift method, continuous spectrum theory, * Electronic address<EMAIL_ADDRESS>J-matrix method [17], coupling channel method, real stabilization method (RSM) [18], analytic continuation method of coupling constant (ACCC) [19] and complex scaling method (CSM) [20,21], etc. Among them, RSM, ACCC and CSM are bound-state-type methods, which can be conveniently dealt with bound-like state. 
In the framework of non-relativistic theory, these methods have obtained dramatically improvements. For example, RSM has been able to effectively determine the parameters of the resonant state through improved calculation methods. In combination with the cluster model, the ACCC has been used to calculate the energy and lifetime of some light nuclear resonant states, as well as the wave function of the resonant states. The complex scaling method can describe the bound state, resonant state and continuum in a consistent way, which is widely used to exploring the resonance in atomic, molecular and nuclear physics. The CSM have been extended from nonrelativistic to the relativistic framework [22][23][24][25][26]and from spherical nuclei to deformed nuclei [27,28], which have been applied in halo nuclei. We will apply CSM to hardon physics for searching the resonant states. In the system of heavy molecular states, we need consider the interaction of π, η, ρ, σ, ω mesons. Unusually, the effect of one-π meson exchange is dominant, and obscures the contribution of other mesons. In Ref. [29], the authors have discussed the bound states of DD(D), Λ c D(D) and Λ c Λ c (Λ c ) through one-σ-exchange(OSE) and one-ω-exchange(OOE). Based on the spin and isospin conservation, there is no coupling πΛ c Λ c and πDD, and the contribution of π, η, ρ meson exchanges are forbidden or suppressed heavily. The Λ cΛc system has been investigated in several previous works [30][31][32]. In this paper, we will further investigate whether two hardons are possible to form resonant states by exchanging σ and ω mesons for DD(D), Λ c D(D) and Λ c Λ c (Λ c ) systems. This paper is organized as follows. After the introduction, we present the theoretical framework and calculation method in Section II. The numerical results and discussion are given in Section III. A short summary is given in Section IV. II. THEORETICAL FRAMEWORK Under the heavy-quark symmetry, the effective Lagrangians for one-σ-exchange and one-ω-exchange are expressed, Here, υ is the four velocity of the heavy meson, which has the form of υ = (1, 0). The coupling strengths can be estimated by using the quark model, where the σ and ω mesons couple to light quarks in heavy hadrons. Since the σ and ω mesons couple dominantly to the light quarks, the Lagrangian of the light quarks (q = u, d) and σ/ω can be written as Compared with the vertices of D † Dσ/ω,Λ c Λ c σ/ω, and qqσ/ω in Eqs. (1)−(3), the coupling constants can be related, i.e., In a σ model [33], the value of g q σ is taken as g q σ = 3.65. For the ω coupling g q ω , in the Nijmegen model, g q ω = 3.45, whereas it is equal to 5.28 in the Bonn model [34]. In Ref. [35], g q ω was roughly assumed to be 3.00. In the following calculation, all the possible choices will be considered. According to the effective Lagrangians in Eqs. (1) and (2), all the relevant OBE scattering amplitudes can be collected in Table I. Here, H(q, m) is defined as H(q, m) = 1/(q 2 + m 2 ). According to the G-parity rule [36], the OSE and OOE effective potentials for the DD, Λ cD , and Λ cΛc systems are related to the potentials for the DD, Λ c D, and Λ c Λ c systems. The interactions from the σ exchange are same, and from the ω exchange are contrary for DD and DD, Λ cD and Λ c D, Λ cΛc and Λ c Λ c systems. 
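As a purely schematic illustration of this sign pattern (not the regularized potentials of Table II), a bare Yukawa-type parametrization of σ and ω exchange can be written down. The masses, the omission of form factors, and the overall normalization below are assumptions made only to show that the σ term is attractive while the ω term flips sign between G-parity partner systems.

import numpy as np

hbarc = 0.19733                          # GeV*fm
m_sigma, m_omega = 0.60, 0.783           # assumed exchanged-meson masses (GeV)
g_sigma, g_omega = 3.65, 3.45            # quark-level couplings quoted in the text

def yukawa(r_fm, g, m_gev):
    """Plain Yukawa form g^2 exp(-m r)/(4 pi r), returned in GeV for r in fm."""
    return g**2 * np.exp(-(m_gev / hbarc) * r_fm) / (4.0 * np.pi * r_fm) * hbarc

r = np.linspace(0.2, 2.0, 10)
V_sigma = -yukawa(r, g_sigma, m_sigma)           # sigma exchange: always attractive
V_omega_repulsive = +yukawa(r, g_omega, m_omega) # omega exchange for one member of a pair
V_omega_attractive = -V_omega_repulsive          # sign flipped for its G-parity partner

print(np.round(V_sigma + V_omega_repulsive, 4))
print(np.round(V_sigma + V_omega_attractive, 4))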
With the Breit approximation, one can get the relation between the effective potentials in momentum space V f i and the scattering amplitude M f i in the momentum space, i.e., is the scattering amplitude for the process h 1 h 2 → h 3 h 4 . m i and m f are the masses of the initial (h 1 , h 2 ) and final particles (h 3 , h 4 ), respectively. The effective potential in the coordinate space V(r) is obtained by performing the Fourier transformation as In order to regularize the off shell effect of the exchanged mesons and the structure effect of the hadrons, a monopole form factor F (q 2 ) is introduced at every vertex, here, Λ is the cutoff parameter, m and q correspond to the mass and momentum of the exchanged meson, respectively. In Refs. [6,7], Λ is related to the root-mean-square radius of the source hadron which propagate the interaction through the intermediate boson (σ or ω). According to the previous experience of the deuteron, the cutoff Λ is taken around 1.0 GeV. After adding the monopole form factor F (q 2 ), the effective potentials in coordinate space are obtained in Table II. Systems Quarks In Table II, we can find that the interactions of one-σexchange are always attractive for these systems. The depth of the one-σ-exchange effective potentials depend on the number of the light quarks and/or antiquark combinations (qq, qq,qq), where the light quark or anti-quark is reserved in different hadrons of the hadron-hadron systems, respectively. The one-σ-exchange and one-ω-exchange interactions are corresponding intermediate-and short-range forces, and therefore they are suppressed when the radius r reaches 1.0 fm or larger. Since the force of the one-σ-exchange is the dominant, the total effective potentials for all the systems are attractive. The one-σ-exchange can always provide an attractive force. However, the one-ω-exchange is repulsive for the system including the same light quarks or antiquarks in its components of the investigated systems. The interaction strengths for the one-σ-exchange and one-ω-exchange depend on the light-quark combination numbers. We extend the CSM to describe the resonance of Λ c D(D) and Λ c Λ c (Λ c ) systems. The advantage of this approach is that both bound and resonant states can be treated consistently, since the complex scaled functions of the resonant states are square integrable same as the bound state. The Hamiltonian can be written as: where T is the kinetic energy, and V is the effective potential of the system, which are obtained from the double hadron scattering. In the CSM, the relative coordinate r in Hamiltonian H and wave function ψ is complex scaled as We can get the transformed Hamiltonian and the wave function H θ = U (θ) HU −1 (θ) and ψ θ = U (θ) ψ, where ψ θ is square integrable. Then, the corresponding complex scaled equation is obtained, Based on the Aguilar-Balslev-Combes(ABC) theorem [37], the energy spectrum is a set of poles of the Green function in the complex energy plane, which consists of three parts: (i) the bound states are discrete set of real points on the negative energy axis (ii) the resonant states correspond to the discrete set of points in the lower half of complex energy plane; and (iii) the continuous spectrum is rotated around the origin of the complex energy plane by an angle 2θ. To solve the complex scaled equation, the basis expansion method is applied. The total wave function ψ θ can be expanded as where φ i = R nl (r)Y lm l (ϑ, ϕ) and the index i sum over all the quantum numbers n, l, m l . 
R nl (r) is the radial function of a spherical harmonic oscillator potential, Here, x = r/b 0 is the radius measured in units of the oscillator length b 0 . Y lm l (ϑ, ϕ) is the spherical harmonics, and describes the angular distribution of particles. Inserting the wave function (11) into the complex scaled equation (10) and applying the orthogonality of wave functions φ i , we get a symmetric matrix diagonalization, where T i ′ ,i and V i ′ ,i are presented as Here, M = m 1 m 2 /(m 1 + m 2 ) is the reduced mass of particle 1 and particle 2 system. Substituting φ i into the above equation, the matrix elements T i ′ ,i are obtained as Similarly, the matrix elements V i ′ ,i are obtained as ǫ k and Λ nk are the eigenvalues and eigenvectors of the matrix JK with elements JK n,n = 2(n + l); JK n,n+1 = − √ n(n + 2l + 1). With the matrix elements T i ′ ,i and V i ′ ,i , the solutions of the complex scaled equation (10) can be obtained by diagonalizing the matrix H θ . The eigenvalues of H θ representing bound states or resonant states do not change with θ, while the eigenvalues representing the continuous spectrum rotate with θ. The former are associated with resonance complex energies E − iΓ/2, where E is the resonance position and Γ is its width. III. NUMERICAL RESULTS Our purpose is to extend the CSM to describe the resonances of two hardons. In this section, we discuss and analysis the effects of the OSE and OOE interactions for the systems of Λ c D(D) and Λ c Λ c (Λ c ) by solving the Schrödinger equation in the CSM. The related parameters are summarized in Table III. The information about the resonant state is obtained af- ter we diagonalize the Hamiltonian. We take Λ-D system as an example to describe how to find out the resonant state by complex rotation. The process is shown in Figs. 1 and 2. As shown in Fig. 1, all the eigenvalues of H θ are divided into three parts: the bound state, the resonant state, and the continuum, respectively, labeled as black squares, orange solid dots and gray triangles. The bound state locates on the negative energy axis, while the continuous spectrum of H θ rotates clockwise with the angle 2θ and resonant state locates in the lower half of the complex energy plane, which is bounded by the rotated continuum line and the positive energy axis and become isolated. In order to clarify how the resonant states isolate from continuum by complex rotation, we present the variation of resonant states with the complex scale angle θ, and the other parameters are fixed same as in Fig. 1. In Fig. 2(a), when θ = 30 • , all dots are nearly in one straight line, and it is hard to distinguish resonant state in the continuous spectrum. In Fig. 2(b), when θ = 40 • , the resonant state begin to separate from the continuous spectrum. When complex scale angle θ reaches 50 • in Fig. 2(c), the resonant state is completely separated from the continuous spectrum. Then, with the angle θ increasing, the continuum spectra rotate with, the position of the resonant state in the complex plane is almost not movement as shown in Fig. 2(d). In the complex scaling method, the resonance energy should not depend on the choice of angle θ. In practice, the number of basis chosen is finite for basis expansion method, the position of resonance state is not completely independent on θ. In order to get a reasonable value for the number of basis N, we plot the trajectories of the resonant energies for different values of N with angle θ in Fig. 3. 
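Before examining those trajectories, the mechanics of the procedure can be illustrated with a minimal, self-contained one-channel example on a radial grid, unrelated to the OBE potentials of this paper. The model potential 7.5 r² e^{−r} with ħ = 2μ = 1 is a common test case in the complex-scaling literature and is assumed here only for demonstration.

import numpy as np

def csm_eigenvalues(theta_deg, n_grid=800, r_max=40.0):
    """Eigenvalues of the complex-scaled H(theta) for the model potential 7.5 r^2 exp(-r)."""
    theta = np.deg2rad(theta_deg)
    r = np.linspace(r_max / n_grid, r_max, n_grid)
    h = r[1] - r[0]
    lap = (np.diag(np.full(n_grid, -2.0))
           + np.diag(np.ones(n_grid - 1), 1)
           + np.diag(np.ones(n_grid - 1), -1)) / h ** 2
    rs = r * np.exp(1j * theta)                       # complex scaling r -> r e^{i theta}
    H = -np.exp(-2j * theta) * lap + np.diag(7.5 * rs ** 2 * np.exp(-rs))
    return np.linalg.eigvals(H)

# The rotated continuum sweeps clockwise by 2*theta, while a resonance stays
# (almost) fixed as an isolated eigenvalue E - i*Gamma/2 in the lower half plane.
ev1, ev2 = csm_eigenvalues(25.0), csm_eigenvalues(35.0)
stable = [z for z in ev1
          if z.real > 0.5 and -2.0 < z.imag < 0.0
          and np.min(np.abs(ev2 - z)) < 0.05]
print(np.round(stable, 4))   # the narrow resonance of this test potential sits near 3.43 - 0.013j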
We can see that as the number of basis increases from 30 to 80, and the angle θ increase from 50 • to 85 • , these trajectories tend to converge to same location. This means that the numerical results are insensitive to the number of basis N, and for any number N between 30 to 80, we can obtain the sufficient precision numerical value. In each θ trajectory, there is a point corresponding to the minimal rate of change with the rotation angle θ, which indicate the optimal value for the resonance parameters. From the Fig. 3, we know that the resonant state is a function of θ, and the existence of minimal rate of change point indicates that the energy of the resonant state has the optimal value for the rate of change θ. In order to better determine this point, we show the θ trajectory of N=80 in Fig. 4, with the other parameters are same as in Fig. 3. The optimal energy value of the resonant state appears at around θ = 85 • , which is almost independent of θ, i.e. dE dθ = 0. In Table IV, we present the numerical results of bound and resonant states for the Λ c D(D) and Λ c Λ c (Λ c ) systems with Λ = 1.1 GeV. By solving the Schrödinger equation with CSM, the solutions are divided into two parts: one is the bound state solution with negative energy; others are resonant states, and have the form E − iΓ/2, where E is the resonance energy and Γ is its width. In the table, we list the energies of the bound states, the energies and widths of the resonant states for differ- The energy and width of bound and resonant states for the Λ c D(D) and Λ c Λ c (Λ c ) systems. E r and Γ represent the energy and width of resonant states in units of MeV, respectively. The cutoff Λ is set as 1.1GeV. The value of coupling constant g q ω are set as g q ω = 3.00 in Ref. [35], 3.45 in the Nijmegen model, and 5.28 in the Bonn model [34], respectively. The notation . . . stands for no bound or resonant state solutions. ent angular momentum L. We find that there are neither bound states nor resonant states for DD and DD systems. The total interaction of the Λ cD system includes the σ exchange attraction and the ω exchange repulsion. As the cutoff parameter Λ is increased, the σ meson exchange becomes more prominent. Due to the stronger ω exchange repulsion, they can not form bound state, much less to resonant states. Compared to the Λ cD system, both the σ and ω meson provide attractive contribution in the Λ c D system, and it is strong enough to form a shallow S wave bound state. For cases g q ω = 3.45 and 5.28, this system can form a P wave resonant state, the energy is about several MeV, the width arrive dozens of MeV. For the Λ c Λ c (Λ c ) systems, the interaction strength is two times stronger than that in the Λ cD and Λ c D systems. Therefore, it is more easy form bound and resonant states for Λ c Λ c and Λ cΛc systems. When the cutoff Λ = 1.1 GeV, the binding energies can reach from several to dozens MeV. For Λ c Λ c system, they can form a P wave resonant state; For Λ cΛc system, they can form more high angular momentum resonant states. The cutoff parameter Λ has a great effect on the energy and width of the resonant states. In Fig. 5 and Fig. 6, we present the energy and width curves as a function of Λ. From Fig. 5 and Fig. 6, we can see that the energy of the P wave resonant state is only a few to a dozen MeV, and the corresponding width is dozens of MeV with Λ changing from 0.9 to 1.5 GeV. With the variation of Λ, the change of energy and width was not very dramatic. 
For the D and F wave resonant states of the Λ_c Λ̄_c system, with increasing cutoff Λ, the resonant energy can reach dozens or even a hundred MeV, and the corresponding widths can reach about 200 MeV and 480 MeV, respectively. The resonance width increases with increasing angular momentum L for the Λ_c Λ̄_c system, which means the D and F wave resonant states become even more unstable. IV. SUMMARY A large number of new excited hadronic states have been found in experiments, and many of these excited states are interpreted as hadronic molecular states. We extend the complex scaling method (CSM) to describe bound and resonant states consistently. For the Λ_c D(D̄) and Λ_c Λ_c(Λ̄_c) systems, under heavy-quark symmetry, the contributions of π, η, and ρ meson exchanges are greatly suppressed, and the σ and ω exchanges dominate these processes. Our results indicate that resonant states may exist for the Λ_c D(D̄) and Λ_c Λ_c(Λ̄_c) systems. The positions of the resonant states in the complex energy plane remain almost unchanged under variation of θ. We also checked the independence of the results on the basis number N using the θ trajectories; the results show that the numerical precision is insensitive to the number of basis states. Our results indicate that resonant states may form for the Λ_c D and Λ_c Λ_c(Λ̄_c) systems. Among these resonant states, the P waves have relatively smaller energies and widths, making them more stable and easier to search for in experiments. In general, for these systems, resonant states can form over a wide range of parameters and may be observed in future experiments.
4,736.2
2021-09-03T00:00:00.000
[ "Physics" ]
Jordanian deformations of the AdS5 × S5 superstring We consider Jordanian deformations of the AdS5 × S5 superstring action. These deformations correspond to non-standard q-deformations. In particular, it is possible to perform a partial deformation, for example, of the AdS5 part only, or of the S5 part only. Then the classical action and the Lax pair are constructed with a linear, twisted and extended R operator. It is shown that the action preserves the κ-symmetry. Introduction One of the fascinating topics in string theory is the AdS/CFT correspondence [1][2][3]. The most well-studied example is the duality between type IIB superstring on the AdS 5 ×S 5 background [4] (often called the AdS 5 ×S 5 superstring) and the N = 4 SU (N ) super Yang-Mills (SYM) theory in four dimensions (in the large N limit). It has been revealed that an integrable structure exists behind the duality and it plays a fundamental role in testing the correspondence of physical quantities (for a comprehensive review, see [5]). Recently, a q-deformed AdS 5 ×S 5 superstring action was constructed [42] by generalizing the result in [36]. Then the bosonic part of the action was determined and, by using this action, the world-sheet S-matrix of bosonic excitations was computed in [43]. The resulting S-matrix exactly agrees with the q-deformed S-matrix in the large tension limit. Thus the two approaches are now related to each other and there are many directions to study q-deformations of the AdS 5 ×S 5 superstring. In this paper, we consider how to twist the q-deformed AdS 5 ×S 5 superstring action. This twisting is regarded as a non-standard q-deformation. Indeed, it would also be seen as a higher-dimensional generalization of 3D Schrödinger sigma models in which the qdeformed Poincaré algebra [44,45] and its infinite-dimensional extension are realized as shown in a series of works [30][31][32]. In particular, it is possible to perform a partial deformation, for example, of the AdS 5 part only, or of the S 5 part only. This would make the resulting geometry much simpler. Some extensions of the twisted R operators are also discussed. Then the classical action and the Lax pair are constructed with a linear, twisted and extended R operator. It is shown that the action preserves the κ-symmetry. The paper is organized as follows. Section 2 is a short review of the q-deformed AdS 5 ×S 5 action. Section 3 describes how to twist the q-deformed action. Then we construct the Jordanian deformed action of the AdS 5 ×S 5 superstring preserving the κsymmetry. The Lax pair is also presented. Section 4 is devoted to conclusion and discussion. Appendix A describes the notation of the superconformal generators. In appendix B, the notation of the classical R-matrix is explained. A general prescription to twist the classical r-matrix for the standard q-deformation of Drinfeld-Jimbo type is also provided. 2 A review of the q-deformed AdS 5 ×S 5 superstring In this section, we will give a short review of the q-deformed AdS 5 ×S 5 superstring action constructed in [42], using the notation therein. The linear R operator A key ingredient in the construction is the classical R-matrix, which is a linear map R : g → g over a Lie algebra g satisfying the modified classical Yang-Baxter equation (mCYBE); JHEP04(2014)153 where M, N ∈ g and c is a complex parameter. When c = 0, the parameter is regarded as a scaling of the R-matrix. When c = 0, the mCYBE is nothing but the classical Yang-Baxter equation (CYBE). 
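For reference, the mCYBE for a linear operator R on g is commonly written (up to conventions for the sign and the normalization of c) as below; this is a reconstruction from the surrounding text rather than a quotation of equation (2.1):

\[
[R(M), R(N)] - R\big([R(M), N] + [M, R(N)]\big) = -c^{2}\,[M, N], \qquad M, N \in \mathfrak{g},
\]

so that setting c = 0 reduces the relation to the CYBE, as stated above.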
The standard q-deformation of the superstring action presented in [42] is described by the following R-matrix, where E ij (i, j = 1, · · · , 8) are the gl(4|4) generators. For the standard notation of the superconformal generators, see appendix A. The parity of the indices is given byī = 0 for i = 1, · · · , 4 andī = 1 for i = 5, · · · , 8. The associated tensorial r-matrix is where the super skew-symmetric symbol is introduced as The relations between the linear R operator and the tensorial notation r are summarized in appendix B. The classical r-matrix given in (2.3) describes the standard q-deformation of Drinfeld-Jimbo (DJ) type [39][40][41]. Note that the linear R operator is defined here as a map from gl(4|4) to gl(4|4) , while the original action of the AdS 5 ×S 5 superstring is concerned with su(2, 2|4) , rather than gl(4|4). The compatibility of R operator with the real-form condition fixes the normalization factor in (2.3) as c = i up to real scalar multiplication. The classical action and the Lax pair With the help of the linear R operator defined in (2.2), the q-deformed classical action S is given by 2 Here τ and σ are time and spatial coordinates of the string world-sheet and the periodic boundary conditions are imposed for the σ direction. The real constant η ∈ [0, 1) measures the deformation. 3 The super Maurer-Cartan one-form A α is defined as Here we have normalized the parameter as c = 1 in (2.2). 3 Since the deformation is measured by η , it is often called "η-deformation". On the other hand, η is related to the q parameter of the standard q-deformation by Drinfeld-Jimbo [39][40][41] as shown in [36]. Hence we will refer this deformation as to q-deformation, following [42]. JHEP04(2014)153 and A α takes the value in the Lie superalgebra su(2, 2|4). The action of the R-matrix (2.2) on A α is induced from gl(4|4) by imposing a suitable reality condition. Note that A α automatically satisfies the flatness condition, The projection operators P αβ ± are defined as Then operators d andd are linear combinations of the projection operators P i (i = 1, 2, 3) , (2.7) The symbol R g indicates a chain of the adjoint operation and the linear R operation, Note that the usual AdS 5 ×S 5 superstring action is reproduced from (2.5) when η = 0. For a pedagogical review of the undeformed AdS 5 ×S 5 superstring, see [46]. It is convenient to introduce the following notations, Then the equations of motion are written in a simpler form, The Lax pair is given by where λ is the spectral parameter that takes a complex value. The flatness condition (2.6) can be rewritten in terms of J α − and J α + like With the definition L α ≡ L +α + M −α , the zero-curvature condition is equivalent to the equations of motion given in (2.10) and the flatness condition (2.12). JHEP04(2014)153 3 Jordanian deformations of the AdS 5 ×S 5 superstring In this section we shall consider Jordanian deformations of the AdS 5 ×S 5 superstring action. The deformations correspond to a non-standard q-deformation and contain twists of the linear R operator. The twist procedure is realized as an adjoint operation for the linear R operator with an arbitrary bosonic root. We first explain how to construct Jordanian R operators by twisting the linear R operator used in the q-deformed AdS 5 ×S 5 superstring action (2.5). There are two remarkable features of Jordanian R operators. The first is that they satisfy the CYBE rather than the mCYBE (2.1). The second is the nilpotency of them. That is for M, N ∈ g. 
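The CYBE property of a Jordanian r-matrix can be checked numerically in the simplest bosonic setting. The sketch below uses the prototypical sl(2) Jordanian r-matrix r = h ∧ e rather than the gl(4|4) superalgebra of the text, so it only illustrates the structural claim made above, not the superstring construction itself.

import numpy as np

h = np.diag([1.0, -1.0])                  # Cartan generator
e = np.array([[0.0, 1.0], [0.0, 0.0]])    # positive root generator, [h, e] = 2e
I = np.eye(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

def embed(r_terms, slots):
    """Embed a sum of tensor products a (x) b into two of the three tensor slots."""
    out = np.zeros((8, 8))
    for a, b in r_terms:
        factors = [I, I, I]
        factors[slots[0]], factors[slots[1]] = a, b
        out += kron3(*factors)
    return out

r_terms = [(h, e), (-e, h)]               # r = h (x) e - e (x) h  (skew-symmetric)
r12, r13, r23 = (embed(r_terms, s) for s in ((0, 1), (0, 2), (1, 2)))

def comm(a, b):
    return a @ b - b @ a

cybe = comm(r12, r13) + comm(r12, r23) + comm(r13, r23)
print(np.max(np.abs(cybe)))               # 0.0 -> the CYBE holds for this Jordanian r-matrix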
Then, by using the Jordanian R operators, the Jordanian deformed action with the κ-symmetry and the Lax pair are presented. Jordanian R operators from twists and their extension We shall give a description to twist the linear R operator for basic examples of Jordanian R operators here. Then some extensions of twisted R operators are discussed. First of all, note that the classical r-matrix of Drinfeld-Jimbo type (2.3) has vanishing Cartan charges, where the coproduct is given by On the other hand, one may introduce a classical r-matrix which has non-zero Cartan charges for the deformation of AdS 5 × S 5 superstring. In this sense, we refer to these as to Jordanian r-matrices. In general, such an r-matrix can be constructed by a twist of r DJ with an arbitrary bosonic root E ij with i < j , One may also consider twists by negative bosonic roots E ij (i > j), but the corresponding r-matrix has the same property because gl(4|4) algebra enjoys the automorphism Thus positive roots E ij (i < j) are enough for our later argument. The twisted, linear R operator is defined as JHEP04(2014)153 It is straight forward to read off the R operator from the tensorial r-matrix via (3.6) and inner product (A.2). So far, we have constructed the Jordanian R operators via twists of r DJ . One may also consider the extension of the twisted R operators by adding bilinear terms of fermionic root generators. It should be noted that the latter cannot be obtained with the twists. Thus there are the two classes: 1) Jordanian R operators stemming from twists and 2) extended Jordanian R operators. We will introduce some examples below. 1) Jordanian R operators from twists The first example is twists by simple roots. Then the corresponding subsectors of the superstring action are deformed. For instance, let us consider twists by positive simple root generators E k,k+1 (k = 1, . . . ,4, . . . , 7). 4 Then the associated classical r-matrix is given by The twists give rise to deformations of the AdS 3 or S 3 subspace. For each of the values k = 1, 2, 3 , the resulting geometry is given by a deformed AdS 3 spacetime. It would contain a three-dimensional Schrödinger spacetime and may be regarded as a generalization of the previous works [30][31][32]. The explicit relation is presented in [47]. More interesting examples are deformations of either AdS 5 or S 5 . These partial deformations are realized by twists with the maximal bosonic generators E 14 = P 14 in su(2, 2) and E 58 = R 58 in su(4) , respectively; 5 The deformation of S 5 should be interesting because it would provide a simpler geometry without deforming AdS 5 . The associated linear operator acts on the generators as follows: Remarks. More generally, the Reshetikhin twist [48] or the Jordanian twist [49,50] is closely related to the present prescription. The Jordanian twists for Lie superalgebras are considered in [51][52][53][54]. This relation will be elaborated somewhere else. 2) Extended Jordanian R operators Let us now consider some extensions of the twisted classical r-matrices given in (3.7), (3.8) and (3.9). Recall that these are obtained by twisting r DJ . Here we are concerned with some extensions of the twisted r-matrices, which are not described as twists. It is easy to see that a linear combination of (3.8) and (3.9) Furthermore, these r-matrices may be extended to contain supercharges in their tails, including two parameters, like Here α, β, α ′ , β ′ are arbitrary parameters. The extended r-matrices satisfy the CYBE (3.1). 
As a remark, it would not be obvious that the multi-parameter deformations lead to consistent string theories. The vanishing β-function has not been confirmed even for the single parameter case. The multi-parameter case would be much more difficult. Comments on fermionic twists One may think of twists by fermionic generators called fermionic twists. An example is given by the maximal root E 18 , (Also see appendix B.2) (3.14) Note that c is a Grassmann odd element [51], so that the r-matrix is Grassmann even. This is a solution of the CYBE. For this fermionic twist, we have no clear understanding for the physical interpretation because the deformation is measured by a Grassmann odd parameter. It would be interesting to interpret the fermionic twist in type IIB supergravity. Generically the r-matrices of the fermionic twists do not satisfy the CYBE (3.1). As an example, let us consider E 45 =S 45 . This is a simple root generator but it gives rise to JHEP04(2014)153 the maximal twist. That is, the corresponding geometry is also maximally deformed. The associated classical r-matrix is given by However it is not a solution of the CYBE. Jordanian deformed action Next we consider Jordanian deformations of the classical action of the AdS 5 ×S 5 superstring. Although the construction is almost parallel to the one in [42], it is necessary to take account of small modifications coming from the fact that the Jordanian linear operator R Jor satisfies the CYBE rather than the mCYBE. In the following, R Jor is used as a representative of arbitrary (extended) Jordanian R operators. 6 The detail expression of R Jor is not relevant to the subsequent analysis. The Jordanian deformed classical action is given by Here, by using Jordanian R-matrix R Jor , a chain of the operations [R Jor ] g is defined as In the present case, d andd are not deformed and do not contain η like and the overall factor of the action (2.5) is not needed to be multiplied. As in the case of [42], the equations of motion can be written simply with the following quantities: There are two ways to rewrite the action given in (3.16). The first is based on J α and the action is written as JHEP04(2014)153 The second is based on J α and the action becomes The two expressions are useful to discuss the Virasoro conditions and the κ-invariance. Then equations of motion are given by 22) and the flatness condition is represented by Note that the flatness condition does not contain η 2 terms, in comparison to the one given in (2.12). This modification comes from the fact that the Jordanian operator R Jor satisfies the CYBE, rather than the mCYBE. For later computations, it is convenient to decompose the equations of motion (3.22) and the flatness condition (3.23) as follows: Then the Lax pair is given by L +α +M −α , it is an easy task to show that the zero curvature condition (2.13) is equivalent to the equation of motion (3.22) and the flatness condition (3.23). The next is to consider the Virasoro conditions. The expression given in (3.20) leads to the Virasoro conditions, On the other hand, the expression in (3.21) gives rise to The above two representations of the Virasoro conditions given in (3.27) and (3.28) should be equivalent. κ-symmetry Let us consider the κ-symmetry of the action (3.16). We consider a fermionic local transformation (called the κ-transformation) of g given by where ρ (1) and ρ (3) are arbitrary functions on the string world-sheet to be determined later, and hence ǫ also depends on the world-sheet coordinates. 
Then the variation of the action given in (3.16) is described as Here the following relations have been used in the second equality, Now let the forms of ρ (1) and ρ (3) be +α + J JHEP04(2014)153 The derivation is the following, The second equality comes from the fact that J −τ is proportional to J −σ . Similarly, one can show the relation, (3.34) Furthermore, for any grade 2 traceless matrix A (2) ±α , the following relation is satisfied [46] , by using the matrix representation, where Υ is the following 8 × 8 matrix: Then we will show that this variation is canceled out with the variation of the action with respect to the world-sheet metric γ αβ . Let the variation of γ αβ be JHEP04(2014)153 Then, by using the expressions of the classical action given in (3.20) and (3.21) , the variation of the action is evaluated as In order to show the second equality, the following relations have been used, Thus, the total variation of the classical action (3.16) becomes zero, That is, the action (3.16) is invariant under the κ-transformation. Comment on the real-form condition Here we would like to discuss the real-form condition of su(2, 2|4). So far, we are working with a linear R operator from gl(4|4) to gl(4|4) , hence the image is not necessarily su(2, 2|4) , even if the domain is restricted to su(2, 2|4). In the case of the standard qdeformation [36], the real-form condition is preserved. On the other hand, in the case of Jordanian deformations, it is not preserved. However, this fact does not always lead to serious problems like complex actions. Preserving the real-form condition is just a sufficient condition for real actions and it is not necessary to impose it inevitably. In fact, without preserving the real-form condition, one can get the real actions for some Jordanian deformations as shown in [47]. In particular, different r-matrices may give rise to identical string action, up to coordinate transformations and (double) Wick rotations. So far, we have not found the general criterion for which of Jordanian deformations lead to real and physical actions. It would be interesting to specify it in order to classify the physical Jordanian deformations. As a matter of course, Jordanian deformations contain some unphysical ones where there are two time directions or the action contains imaginary parts. For example, a Jordanian deformed S 5 contains imaginary parts but it might have some gauge-theoretical interpretations as a complex integrable deformation. Conclusion and discussion We have discussed Jordanian deformations of the AdS 5 ×S 5 superstring action. The description to construct Jordanian R operators via twists has been explained in detail. Notably JHEP04(2014)153 the Jordanian R operators satisfy the CYBE rather than the mCYBE and they have nonvanishing Cartan charge. Then we have constructed the Jordanian deformed action that preserves the κ-symmetry. The Lax pair has also been presented. It should be remarked that partial deformations are possible in our procedure. This fact implies that one may perform a deformation of the AdS 5 part only, or of the S 5 part only, for example. Then the background geometry for the deformed S 5 would be much simpler because the AdS 5 part is not modified and the gauge-theory dual would be identified with a deformation of the scalar sector such as Leigh-Strassler deformations [55]. A promising way is to consider a twist of the q-deformation of the SO(6) sector argued in [56,57]. 
As a matter of course, even for the maximal twist, the metric of the twisted geometry can be determined, for example, by following [43]. The background geometries associated with simple Jordanian twists are presented in [47] as well as the solution of type IIB supergravity. In principle, it should be possible to classify all of the skew-symmetric classical rmatrices of gl(4|4) and its real form su(2, 2|4). This classification would enable us to reveal all of the possible deformations of the AdS 5 ×S 5 superstring from the algebraic point of view. We believe that the study of integrable deformations of the AdS 5 ×S 5 superstring will shed light on new aspects of the integrable structure behind the AdS/CFT correspondence. B Constant classical R-matrix We summarize here the notation of the classical R-matrix, which is independent of the spectral parameter (For example, see [58]). B.1 Classical Yang-Baxter equation Let g be a bosonic Lie algebra over C. For a i , b i ∈ g, an element denoted by where the action of r ij is extended to three sites g ⊗ g ⊗ g such as Suppose that there exists the invariant non-degenerate symmetric bilinear form , on g. With the bilinear form, the linear operator R : g → g can be introduced though the following relation; JHEP04(2014)153 The twists by the bosonic roots E αβ and E ab with α < β and a < b (α, β = 1, · · · , M and a, b = M + 1, · · · , N + M ) are given by where the coproduct is defined in (3.1). These are solutions of the CYBE rather than the mCYBE. We will call them the bosonic twists. Also, one may consider a twist by a fermionic root, which is referred as to a fermionic twist. An example is given by E 1,M +N , where c is a Grassmann odd parameter rather than a complex number, so that the r-matrix should be Grassmann even [51]. This is a solution of the CYBE. When M = N = 4, it reproduces (3.14). In general, the fermionic twist by the fermionic root E α,b gives rise to However it does not seem to be a solution of the (m)CYBE except for (B.11). Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
4,816
2014-04-01T00:00:00.000
[ "Mathematics" ]
An open‐source general purpose machine learning framework for individual animal re‐identification using few‐shot learning Animal re‐identification remains a challenging problem due to the cost of tagging systems and the difficulty of permanently attaching a physical marker to some animals, such as sea stars. Due to these challenges, photo identification is a good fit to solve this problem whether evaluated by humans or through machine learning. Accurate machine learning methods are an improvement over manual identification as they are capable of evaluating a large number of images automatically and recent advances have reduced the need for large training datasets. This study aimed to create an accurate, robust, general purpose machine learning framework for individual animal re‐identification using images both from publicly available data as well as two groups of sea stars of different species under human care. Open‐source code was provided to accelerate work in this space. Images of two species of sea star (Asterias rubens and Anthenea australiae) were taken using a consumer‐grade smartphone camera and used as original datasets to train a machine learning model to re‐identify an individual animal using few examples. The model's performance was evaluated on these original sea star datasets which contained between 39–54 individuals and 983–1204 images, as well as using six publicly available re‐identification datasets for tigers, beef cattle noses, chimpanzee faces, zebras, giraffes and ringed seals ranging between 45–2056 individuals and 829–6770 images. Using time aware‐splits, which are a data splitting technique ensuring that the model only sees an individual's images from a previous collection event during training to avoid information leaking, the model achieved high (>99%) individual re‐identification mean average precision for the top prediction (mAP@1) for the two species of sea stars. The re‐identification mAP@1 for the mammalian datasets was more variable, ranging from 83% to >99%. However, this model outperformed published state‐of‐the‐art re‐identification results for the publicly available datasets. The reported approach for animal re‐identification is generalizable, with the same machine learning framework achieving good performance in two distinct species of sea stars with different physical attributes, as well as seven different mammalian species. This demonstrates that this methodology can be applied to nearly any species where individual re‐identification is required. This study presents a precise, practical, non‐invasive approach to animal re‐identification using only basic image collection methods. | INTRODUC TI ON Re-identifying individuals is important for animals in managed care as well as wildlife research; however, it represents a surprisingly challenging feat in many species and scenarios.In zoos, individual animal identification is needed to track where the animal came from, reproductive history, medical records and evaluate lifespan (Reuther, 1968); while in agricultural settings, the ability to identify individuals is also important in traceability during disease outbreaks (Bowling et al., 2008;Murphy et al., 2008).Individual re-identification of free-ranging animals is necessary to facilitate research in many areas of ecology, evolutionary biology and conservation; including information on life history, population dynamics and social structure (Clutton-Brock & Sheldon, 2010;Pradel, 1996). 
Tagging sea stars for individual re-identification has proven to be an exceptionally challenging problem.Passive integrated transponder devices (PIT tags), one of the most common methods of tagging animals (Gibbons & Andrews, 2004), are poorly retained in sea stars (Olsen et al., 2015), and currently available alternatives are either invasive or temporary.Genotyping methods can be used in individual re-identification of animals but these methods are expensive, labour-intensive, and require handling of animals to obtain samples (Taberlet & Luikart, 1999;Weller et al., 2006).Both freeranging sea stars and those in managed care may need to be individually identified.Predatory sea stars are important members of their ecosystems, and have been recognized as keystone species for their role in maintaining biodiversity and structuring aquatic ecosystems (Paine, 1969;Saier, 2001).The loss of large numbers of sea stars in mass mortality events has resulted in trophic cascades (Schultz et al., 2016), underscoring the need to have a better understanding of basic life history traits in these important species.Sea stars are used in research and are popular display animals in public aquaria. Sea stars in managed care are often group-housed, and the inability to individually identify them precludes the ability to have medical and husbandry records at the individual level.In addition, captive breeding and reintroduction programs for the critically endangered sunflower star (Pycnopodia helianthoides) (Hodin et al., 2021) will benefit from the ability to monitor the success of released individuals. The majority of published sea star tagging methods are invasive and may have detrimental effects on individual fitness, as well as potential impacts on research results due to the response to trauma and behavioural changes.Published methods include electronic tags affixed by piercing the arm with a wire (Lamare et al., 2009), branding with a soldering iron, injection of visible implant elastomer (Martinez et al., 2013), scratching a number into the body wall with a sharp pencil (Scheibling, 1980), sewing a plastic label with thread through the body wall (Savy, 1987) and attaching acoustic tags by threading monofilament line through the depth of the arm (Chim & Tan, 2013).Non-invasive methods have included vital staining and surface tags, but these methods are temporary.Vital staining techniques with Nile Blue Sulfate have involved immersion of the entire animal (Feder, 1955;Loosanoff, 1937), specific arms (Barahona & Navarrete, 2010) or use of a marking pen (Kvalvågnaes, 1972). Before the recent developments in machine learning methods, individual animal re-identification was performed using a number of computer vision methods which required the development of bespoke algorithms to target the distinguishing features of the individual animals such as zebra stripes (Lahiri et al., 2011), bear faces (Clapham et al., 2022), whale shark skin patterns and whale flukes (Berger-Wolf et al., 2017).In computer vision techniques, two or more images of the same individual are compared using a feature matching algorithm such as Scale Invariant Feature Transform (Lowe, 1999).Newer, more robust methods use convolutional neural networks (i.e. 
machine learning) which require significantly less preprocessing of images and are easier to adapt across different species (Schneider et al., 2018).In addition to incremental improvements in neural network-based architectures due to affordable access to more efficient and better performing hardware, these new methods have been made more robust against small datasets due to transfer learning (Shaha & Pawar, 2018).The final and most recent leap in performance has come from the development of methodologies specifically designed to improve performance in the individual reidentification domain, such as the triplet loss function (Hermans et al., 2017).and can be coupled with transfer learning (Shaha & Pawar, 2018) to take advantage of pretrained neural networks capable of extracting key features from images (Orenstein & Beijbom, 2017) for general purpose image classification tasks.To use FSL methods, diverse images for each individual, such as those from multiple angles and lighting conditions, need to be used in model training. Machine learning methods One of the key developments in machine learning that made FSL methods attractive for the individual re-identification problem domain was the development of loss functions specifically designed to work with small amounts of data (Parnami & Lee, 2022), such as a loss function known as triplet loss.A loss function is a metric that measures the error between the model's prediction and the ground truth data.Images from the same individual will be mapped to embeddings that have a small Euclidean distance between them, while different individuals will be mapped to embeddings with a longer Euclidean distance between them; the computed distance corresponds to the loss function which facilitates the re-identification of the same individual.This process is known as triplet loss learning when three examples are used (Le Cacheux et al., 2019;Nepovinnykh et al., 2020;Schneider et al., 2020).In animal re-identification, the objective for the model is to learn a dense (i.e.compressed) representation of images such that, given an image of an individual (anchor), its dense representation will be closer to the representation of another image of the same individual (positive example) than that of a different individual (negative example); as measured by the Euclidean distance in a high-dimensional space. The extensive resources invested in the machine learningpowered human face re-identification domain (Schroff et al., 2015) can be leveraged for animal re-identification due to recent developments which improve the usability of those technologies.While machine learning methods have many advantages, there have been no fully automated machine learning models published for sea star reidentification.This may be due to difficulties in implementing crucial components of triplet loss learning which are sometimes deemed as "complicated" (Yao et al., 2020), such as the triplet selection criteria, also known as triplet mining.There are also logistical challenges related to the large number of images traditionally required for training models, in addition to the limited existence of publicly available datasets.Furthermore, published models for use in other species have not made their code freely available (Li et al., 2022;Nepovinnykh et al., 2020;Schneider et al., 2020) which poses a challenge in reproducing results and making direct comparisons to other machine learning frameworks. 
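A minimal sketch of the triplet loss described above is given below; the use of the squared Euclidean distance, the margin value, and the random stand-in embeddings are assumptions for illustration only (the framework described later produces 128-dimensional embeddings from images).

import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """max(0, d(anchor, positive) - d(anchor, negative) + margin)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(1)
anchor = rng.normal(size=128)
positive = anchor + 0.05 * rng.normal(size=128)   # same individual, small perturbation
negative = rng.normal(size=128)                   # a different individual

print(triplet_loss(anchor, positive, negative))   # ~0 for an "easy" triplet like this one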
The individual animal re-identification problem domain can be categorized into several different varieties depending on how the problem is framed, similarly to the person re-identification problem domain (Bedagkar-Gala & Shah, 2014).On one hand, the closed-set problem refers to the re-identification of individuals when the pool of possible choices is fixed, and no new individuals can enter.This is typically the case with captive animals under human care, but it can also happen with free-ranging animal populations that have an extremely limited number of individuals.On the other hand, the open-set problem refers to the re-identification of individuals when the pool of possible choices is not known (i.e.any individual may have never been seen before) (Scheirer et al., 2013).This is a more common occurrence in wildlife conservation and ecology scenarios, such as with images collected using camera traps.Although both problems require distinct evaluation metrics, much of the underlying technology used to solve them can be the same. The objectives of this study were to (1) create an accurate, robust, general-purpose machine learning framework for individual re-identification of sea stars and other animals using images and (2) make the code open source to accelerate work in this space and to facilitate the creation of applications. | Ethics statement Sea stars are not currently covered by research oversight guidelines in the United States or Australia and thus no animal ethics committee review was performed.Sea stars were handled and housed in accordance with guidelines for aquatic species published in the Guide for the Care and Use of Laboratory Animals, 8th edition (National Research Council, 2010).Sea stars were maintained in natural seawater systems; and every effort was made to minimize stress on individuals.During data collection, sea stars were always maintained under the water. | Data collection The machine learning model was initially developed using images of In both datasets, a small number of images were later discarded due to image quality issues, such as occlusions or blurry subjects. The total number of images left in the ASRU dataset was 1204 across 39 individuals and five collection events.The processed ANAU dataset contained 983 images across 56 individuals and three collection events. | Data splitting For the sea star datasets, all images, 1204 for the ASRU dataset and 983 for the ANAU dataset, were separated into a training set and a testing set such that images from every individual were present in both sets.The training and testing sets were produced using time-aware data splitting, which is demonstrated to be significantly less likely to lead to performance overestimation compared to time-unaware (random) data splitting (Papafitsoros et al., 2022) as illustrated in Figure 5.Using time-aware splitting, images were divided based on collection events; so, groups of images collected at the same event were all part of the same set.This was used in the closed-set variation of the evaluation metrics. 
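A minimal sketch of this event-level splitting is given below; the record structure and field names are illustrative assumptions rather than the actual data layout used in this study.

def time_aware_split(records, test_event):
    """Hold out an entire collection event so no event is split across train and test."""
    train = [r for r in records if r["event"] != test_event]
    test = [r for r in records if r["event"] == test_event]
    return train, test

# Hypothetical records: one entry per photo, tagged with individual and collection event.
records = [
    {"image": "IMG_0001.jpg", "individual": "A01", "event": 1},
    {"image": "IMG_0002.jpg", "individual": "A01", "event": 2},
    {"image": "IMG_0003.jpg", "individual": "B07", "event": 1},
    {"image": "IMG_0004.jpg", "individual": "B07", "event": 3},
]

events = sorted({r["event"] for r in records})
for held_out in events:                    # one split per collection event
    train, test = time_aware_split(records, held_out)
    print(held_out, len(train), len(test))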
To further prevent overestimation, the model was evaluated using a separate time-aware split for each available collection event, and the average of the individual splits is reported.The ASRU dataset consists of five collection events, separated into five different splits leaving one collection event out as the testing set and using all others for the training set.The ANAU dataset consists of three collection events, which were separated into two splits.The first split only used the last two collection events, where the individuals were placed in plastic containers, as the training and testing sets respectively.The second split combined the last two collection events as the training set, and used the first collection event, where the individuals were in the naturalistic environment, as the testing set. This was intended to simulate a catch-and-release scenario, where an individual might be taken from its natural environment for sample collection and later released and photographed back in its natural environment. For the publicly available datasets, randomized time-unaware splitting (see Figure 5) was performed in order to make the results | Machine learning framework The machine learning model described here uses the DenseNet (Huang et al., 2017) neural network architecture, specifically the DenseNet121 available in the Keras software (Chollet, 2015).This architecture was chosen given its reported performance in the individual re-identification problem domain (Schneider et al., 2020). The final classification layer was replaced with a densely connected embedding layer consisting of 128 units; this is called the embedding examples, with a minimum margin (α).If the value is negative, the output of the loss function is set to 0, One of the most challenging aspects of this approach is efficient sampling implementation to draw an anchor, positive, and negative example in each iteration; this is known as triplet mining.Triplets were drawn in the following manner: First, a batch of samples consisting of image and label pairs were drawn.Then, for each element (i.e.image) in the batch (anchor), a pair of samples within the batch with the same (positive) and different (negative) labels that met the following conditions were identified: 1.The loss function (Figure 6) produced a positive value. 2. The negative sample was farther from the anchor than the positive sample, within margin α.This is known as semi-hard triplet mining (Hermans et al., 2017), which is one of several published triplet mining strategies (Chechik et al., 2010).However, this methodology poses another problem, which is the requirement for all elements in a batch to have the presence of a positive and negative example within the same batch.To overcome this, the batches were created as follows: 1.An element was drawn from the dataset using random order. 2. A second element with the same label was selected at random from the dataset. 3. Steps 1 and 2 were repeated until the desired batch size was achieved. 4. If the batch only contained elements of a single label, the process was restarted. | Model evaluation The metric used to evaluate the model was the mean average precision (mAP) for the top N predictions (mAP@N).Results are reported for the top 1 (mAP@1) and top 5 (mAP@5) predictions, as has been previously reported for machine learning photo identification (Nepovinnykh et al., 2020;Schneider et al., 2020).1.An individual animal's identification number (label L) was selected to be evaluated. 2. 
2. An image from the same individual (L) that had not been previously seen by the model during training was selected and its embedding vector (V) was computed.
3. Using Euclidean distance, the closest 10 embedding vectors to V from the training set each cast a vote for their label.
4. If one of the N most voted labels (N = 1 for mAP@1 and N = 5 for mAP@5) was the same as L, it was considered an accurate identification.
5. The process was repeated for all other images from the same individual (L).
6. The average precision was computed as the count of accurate identifications divided by the total number of possible identifications.
7. The mAP was computed as the mean of the average precision across all individuals.
Model evaluation using the closed-set evaluation metric was performed only for the two sea star datasets (ASRU and ANAU). To compare our methodology with existing results, performance against existing, publicly available datasets was evaluated using randomized, time-unaware splits and the open-set evaluation metric as described in previous research (Dlamini & van Zyl, 2021). Using the open-set evaluation metric, the model is evaluated using only previously unseen individuals, and the mAP@N is calculated based on whether one of the nearest N neighbours belongs to the same individual. Data processing was replicated as described in each respective publication; when provided, pre-processed images were used. These publicly available datasets range in size from 820 to 6770 images and 45 to 2056 individuals (Table 2). These databases covered seven different mammalian species. They include the Amur Tiger Re-identification in the Wild database (Li et al., 2019) and the Great Zebra and Giraffe Count and ID database (Parham et al., 2017), whose images have a maximum dimension of 3000 pixels. Images of the pelage pattern from Saimaa ringed seals (Pusa hispida saimensis) that are 160 × 160 pixels are included in the Ringed Seal Image dataset (SealID) (Nepovinnykh et al., 2022). The StripeSpotter database (Lahiri et al., 2011) contains full-body images including both flanks of plains zebra (E. quagga) and Grévy's zebra (Equus grevyi).

| RESULTS
For the ASRU and ANAU datasets and all respective splits, a mean average precision of over 99% was achieved using the closed-set evaluation metric. The mAP@1, which shows whether the top prediction in testing was accurate, ranged from 0.9945 to 0.9992. This indicates that in 1000 image evaluations, the model will correctly identify the individual 995 to 999 times. In terms of individual re-identification, at most one image from a single individual was misclassified in some of the experiments, leading to the high mAP@1 metric. The mAP@5, which indicates whether one of the top five identifications from the model was the accurate identification, ranged from 0.9973 to 0.9992. Both of these metrics indicated reliable re-identification of the same individual across collection events. These results are summarized in Table 1. Even after using principal component analysis (PCA) to reduce the dimensionality of the embedding vectors from 128 to only 2 dimensions, the clustering of individuals is readily apparent (Figure 7).
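For reference, the closed-set voting procedure above can be sketched in a few lines; it assumes embeddings and labels are held in NumPy arrays and is not the authors' exact implementation.

```python
import numpy as np
from collections import Counter

def closed_set_map_at_n(train_emb, train_labels, test_emb, test_labels, n=1, k=10):
    """Mean average precision for the top-n voted labels, with the k nearest
    training embeddings (Euclidean distance) acting as voters."""
    average_precisions = []
    for label in np.unique(test_labels):
        hits, total = 0, 0
        for vec in test_emb[test_labels == label]:
            dists = np.linalg.norm(train_emb - vec, axis=1)   # step 3: distances
            voters = train_labels[np.argsort(dists)[:k]]      # k nearest cast votes
            top_n = [lab for lab, _ in Counter(voters).most_common(n)]
            hits += int(label in top_n)                        # step 4: accurate?
            total += 1
        average_precisions.append(hits / total)                # step 6
    return float(np.mean(average_precisions))                  # step 7
```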
The performance of the model using the open-set evaluation metric and randomized time-unaware splitting, when tested against these publicly available datasets, was more variable (Table 2). The mAP@1 ranged from 0.8385 to 0.9974. For each dataset, the mAP@1 was higher than the reported corresponding state-of-the-art results, which ranged from 0.746 to 0.987. The mAP@5 was 1.0 for the Amur tiger, beef cattle noseprint, and StripeSpotter datasets, which indicates perfect recall. The mAP@5 was 0.9527 for the chimpanzee dataset, 0.9682 for the Great Zebra and Giraffe Count and ID dataset, and 0.9139 for the ringed seal dataset. Examples of the quality of the images in the publicly available datasets are shown in Figures 8 and 9. See Figure 8 for the impact of image quality on model performance. When the model was tested on an image with an occluded subject, identification performance decreased. The lack of diversity in pose, lighting, and background in some datasets is illustrated in Figure 9.

TA B L E 1 Mean average precision (mAP) metrics and dataset split summary for the sea star datasets. Note: Mean average precision was calculated for the top choice (mAP@1) and the top five choices of the model (mAP@5). The ANAU dataset contains images of Anthenea australiae sea stars; the ASRU dataset contains images of Asterias rubens sea stars. For the ASRU dataset, the mAP is presented with standard deviation due to the presence of five separate splits in the training process.

F I G U R E 7 Dimensionality-reduced embedding vectors using principal component analysis. The dense vector representation was compressed further into two components (x, y) using principal component analysis. Each point represents a single image and each colour on the plot represents a different individual sea star. Seven individuals from the ASRU dataset previously unseen by the model are included in this figure for illustration purposes; see Table 1 for the number of images and individuals used during training and testing.

The optimal choice of hyper-parameters was partly dependent on the specific dataset, highlighting the need for hyper-parameter tuning in order to achieve maximum performance, as seen in Table 3. The effects of individual hyper-parameters across all datasets are summarized in Table 4. Larger batch sizes did not have a positive effect on performance. Regularization and image augmentation methods were also counter-productive in most cases, but caution must be applied when extrapolating those results due to the limited number of epochs used for training during the hyper-parameter selection process. The choice of triplet loss margin value did not have a large impact, with less than a 0.02 difference in mAP@1, whereas a larger embedding size as well as a larger number of layers in the base model whose weights were updated were correlated with greater performance.

TA B L E 2 Mean average precision (mAP) metrics and dataset split summary for existing publicly available animal re-identification datasets.
Dataset (citation); Dataset size; mAP@1; Reported mAP@1 (citation):
Amur Tiger Re-identification in the Wild (S. Li et al., 2019); 1887 images (92 individuals); 0.9974; 0.889 (Dlamini & van Zyl, 2021)
Beef Cattle Muzzle/Noseprint Database (Xiong et al., 2021); 4923 images (268 individuals); 0.9937; 0.987 (Xiong et al., 2021)
Chimpanzee Faces (Freytag et al., 2016); 6770 images (78 individuals); 0.8385; 0.797 (Dlamini & van Zyl, 2021)
Great Zebra and Giraffe Count and ID (Parham et al., 2017); 4948 images

These results represent an improvement in the accuracy of previously published animal re-identification work, particularly for sea stars (Glynn, 1982). A photo recognition program and code has been developed to identify individual knobby stars (Protoreaster nodosus) by coloration and tubercles (Chim & Tan, 2012). However, that method was applicable only to a single species and required manual processing of each image into a coding system that was still not very reliable, with a 23% error rate in its first test. Methodologies for animal re-identification across multiple species using similarity learning networks have been previously described (Dlamini & van Zyl, 2021; Miele et al., 2020; Schneider et al., 2022); however, none of these publications made their code publicly available. With an mAP@1 greater than 99% in multiple datasets with different species of sea stars and over 83% in all other species, the methodology described herein advances the field of animal re-identification by improving state-of-the-art results and providing open-source code that can help reproduce these results and accelerate the creation of applications using this technology. In most cases, expert human evaluation can be used to distinguish between individuals; however, this can be time-intensive and can be challenging when the individual markers between animals are small or hard to visualize, as is the case when using sea star spines for individual animal identification. Even in these cases where individual re-identification is extremely challenging for humans, the reported methodology performs with high accuracy provided the images are taken under a controlled environment. This method is also less expensive and more practical than other forms of individual re-identification, such as electronic tags. The ANAU (Split 2) dataset attempted to simulate an independent, natural-environment collection event without a significant decrease in performance, but time-aware splitting was not available for the datasets with larger numbers of individuals. Since images of free-ranging individuals (open-set problem) are often only available from a single collection event, typically in the form of frames extracted from a single video, the performance metrics reported in this work might not translate to real-world results. The lack of diversity of poses and backgrounds is readily apparent in some cases, as seen in the two individuals shown in Figure 9. Furthermore, a model trained using this methodology can be employed in a number of ways, with different applications imposing different constraints and requirements. For example, a fully automatic re-identification system might be desirable for low-risk scenarios such as the feeding of a population of individuals under human care. On the other hand, tracking of free-ranging populations could use a hybrid approach and present researchers with the most likely individuals but let the final determination be hand-picked by the user.
The metrics used to determine classification of images can also be subject to fine-tuning to meet specific application requirements; for example, a voting mechanism could be implemented across multiple images if they were taken consecutively and there is certainty that the images belong to the same individual. Our goal is that, by making our source code available and demonstrating state-of-the-art results, practical solutions and products can be developed that will result in improvements in animal management and care, population research in situ, and animal welfare. We encourage other authors to make their code freely available to facilitate work in this space and allow for reproduction of results.

Individual re-identification improves the ability to provide and record individualized care for animals, including monitoring feeding, behaviour, and veterinary care. A non-invasive method for animal re-identification is especially advantageous in aquatic invertebrate species, which have been historically challenging to tag permanently. Non-invasive tagging methods also represent an improvement in animal welfare. Previously used invasive methods of tagging animals will likely lose their social licence as standards of care and legal protections increase, particularly with the rising interest in invertebrate animal welfare (Horvath et al., 2013; Mather, 2019; Perkins, 2021).

The methodology was also accurate when applied to seven mammalian species. This demonstrates that it can be applied to nearly any species where individual re-identification is required. This study presents a precise, practical, non-invasive approach to animal re-identification using only basic image collection methods.
K E Y W O R D S animal re-identification, artificial intelligence, few-shot learning, machine learning, sea star, starfish

Machine learning methods are an ideal tool for animal re-identification since they have the capability to evaluate a large number of images without the input of human operators once the model has been trained. Machine learning refers to computer-based algorithms that use many examples to derive a relationship between the given examples and their corresponding labels without being explicitly programmed for it. During the training phase, the algorithm is shown both the examples and their labels. During the testing phase, it is shown only the examples, and the derived labels are evaluated against the ground truth (i.e. the real labels).
One common issue in machine learning applications is the need for a large training corpus of data. Few-shot learning (FSL) methods require very few examples of a given individual animal to capture the information required to produce accurate identification predictions for unlabeled images. FSL methods were specifically developed to overcome the often small amount of training data available in real-world scenarios.

The model was first developed using images of adult North Atlantic common sea stars (Asterias rubens) [ASRU dataset]. Common sea stars (n = 39) were individually housed in 53 L tanks and had no visible external lesions. Each of the 39 individual common sea stars was removed from its tank and placed in a food-grade clear plastic container with 2 L of water from its home tank. Images were taken with the common sea stars in the plastic containers, with a solid white background, under the same lighting conditions in a controlled laboratory environment. A minimum of seven images from five different angles were taken of each individual at each of five separate collection events (different days) using an iPhone camera (version 8 Plus or SE 2020; Apple, Cupertino, California, United States). See Figure 1 for example images from common sea stars. Figure 2 demonstrates intra- and inter-individual variability of the common sea stars. To determine whether the methodology was transferable to a different species of sea star, images were taken of adult Australian cushion stars (Anthenea australiae) [ANAU dataset]. Cushion stars (n = 54) were group housed in a naturalistic touch tank exhibit.

F I G U R E 3 Sample images for the ANAU dataset from a single Anthenea australis sea star. The first image (a) was from the simulated natural environment. The second image (b) was a close-up of the central disk. All other images were taken consecutively from different angles.

Time-unaware splitting randomly splits images into the training and test sets using an 80/20 split: 80% of images in the training set and 20% in the testing set. The training set was used during model training, while the testing set was used to evaluate the model's performance on unseen data. This splitting was used in the open-set variation of the evaluation metrics.
The embedding vector is conceptually equivalent to a dense representation of the image. The size of the embedding vector was chosen after an initial exploratory analysis, which showed little improvement when larger embedding vectors were used with all other variables held equal. Each of the artificial neurons within the neural network has an associated weight, which is assigned as part of weight initialization, updated during training, and used to determine the final classification at inference time (i.e. testing). A model with the same architecture, pretrained on the ImageNet dataset (Deng et al., 2009), was used for the purpose of weight initialization, in a technique called transfer learning. Transfer learning reduces the need to train models on a large dataset by pretraining on a similar task with a different dataset (Torrey & Shavlik, 2010). With transfer learning, the pretrained weights of a neural network are used as the starting values for weight initialization. During training, a number of layers of the base model were updated, between 64 and all 427 (the exact count was determined by a choice of hyper-parameter). Classification of individuals in the test set was done after the training phase using the nearest neighbours of the embedding vector. The nearest neighbours were defined as the most similar embedding vectors using Euclidean distance. In the closed-set variation, when the model was shown an image of an unknown individual, votes were cast by the ten nearest neighbours and the model generated a ranked list of potential matches. In the open-set variation, the number of nearest neighbours depended on the evaluation metric used, such that the single nearest neighbour was used for mAP@1 and the five nearest neighbours were used for mAP@5. Once the model is trained, new individuals can be re-identified without the need for additional training, since the nearest neighbours may correspond to any individual, seen or unseen. The loss function used during training was triplet loss (Hermans et al., 2017). The loss function (L) computes the difference of the squares of the distances from the anchor (A) to the positive (P) and negative (N) examples (Figure 6).

F I G U R E 4 Example of intra- and inter-individual variability for Anthenea australis sea stars. Images of sea stars with the same letter represent the same individual.
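The embedding network and loss described above can be assembled along the following lines in Keras; the input resolution, dropout value, and the hand-written loss are illustrative assumptions rather than the authors' released code.

```python
import tensorflow as tf

def build_embedding_model(embedding_size=128, dropout_rate=0.2):
    """DenseNet121 base (transfer learning from ImageNet) with the classifier
    replaced by a densely connected embedding layer."""
    base = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet", pooling="avg")
    inputs = tf.keras.Input(shape=(224, 224, 3))    # input size is an assumption
    x = base(inputs)
    x = tf.keras.layers.Dropout(dropout_rate)(x)     # dropout regularization
    embedding = tf.keras.layers.Dense(embedding_size)(x)
    return tf.keras.Model(inputs, embedding)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """max(d(A,P)^2 - d(A,N)^2 + margin, 0), computed on embedding vectors."""
    d_ap = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    d_an = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    return tf.maximum(d_ap - d_an + margin, 0.0)
```

Semi-hard mining then keeps only triplets for which this loss is positive and the negative lies farther from the anchor than the positive, within the margin.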
F I G U R E 5 Illustration of data splitting for model training and testing. (a) represents time-aware data splitting. (b) represents time-unaware (random) data splitting.

This batch construction guaranteed that, for every element in the batch, there was at least one corresponding positive and negative example within the same batch. Batches were selected until all elements had been drawn in random order, which completed a training epoch. A maximum of 50 epochs was used during training, but training was stopped early once more than five consecutive epochs passed without a decrease in the average loss computed on the training set. None of the experiments reached the maximum number of epochs. Given the small number of total images available for training, multiple techniques were employed to reduce the risk of overfitting the model to the training data. First, a number of image augmentation procedures were applied to samples from the training set as they were drawn into batches. Image augmentation is used to simulate a higher number of images than what was included in the original dataset by applying pseudo-random transformations to the images. The number of augmentations was controlled by an augmentation count parameter, which determined how many image augmentations were randomly chosen to be applied to an individual image. The image augmentations applied included pixel dropout, Gaussian noise, horizontal flip, translation, rotation, cropping or zoom-out, and changes in hue and saturation. The augmentation count parameter was set between 2 and 8, meaning each image had between two and eight transformations applied each time it was seen during training. Another technique used to reduce overfitting was dropout regularization (Wager et al., 2013), which uses a dropout factor parameter to control the probability of a connection between the base model and the final embedding layer being dropped. Both the image augmentations and the dropout regularization were applied only during training, and not when evaluating the model against the test set. To find an optimal model, a series of different hyper-parameters was varied across trials. These hyper-parameters included batch size, dropout regularization rate, count of image augmentation transformations, image augmentation factor, triplet loss margin value, number of layers in the base model whose weights were updated, and size of the embedding. Due to computational constraints, not all combinations of hyper-parameters were exhaustively tested; instead, for each dataset, a random sample of 100 different hyper-parameter combinations was used to train a model for 10 epochs. Then, the best-performing set of hyper-parameters for each dataset was used to train a separate model for 50 epochs across multiple trials using different selections of training and testing sets (five for the ASRU dataset, three for all others).
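The label-aware batch construction (steps 1 to 4 above) can be sketched in plain Python; the record layout and restart logic below are a simplified sketch, not the authors' implementation.

```python
import random

def make_batch(samples, batch_size):
    """Build one batch of (image, label) pairs so that every drawn element has
    a same-label partner in the batch; restart if only one label is present."""
    by_label = {}
    for image, label in samples:
        by_label.setdefault(label, []).append(image)

    while True:
        batch = []
        pool = list(samples)
        random.shuffle(pool)                               # step 1: random order
        for image, label in pool:
            if len(batch) >= batch_size:
                break
            batch.append((image, label))
            partner = random.choice(by_label[label])       # step 2: same label
            if len(batch) < batch_size:
                batch.append((partner, label))
        if len({label for _, label in batch}) > 1:         # step 4: need more than one label
            return batch                                   # otherwise restart
```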
The embedding vectors correspond to the output of the model after it has gone through the training process. The mAP evaluation metric was computed differently for the open-set and closed-set variations. The closed-set evaluation metric was designed to determine the model's ability to re-identify a previously seen individual given a previously unseen image of that individual at a new collection event; it was computed following the numbered procedure given above under Model evaluation.

F I G U R E 6 Illustration of the training objective derived from the triplet loss function. The anchor and positive samples are two different images from the same individual selected from a single training batch; the negative sample is an image from a different individual selected from the same training batch. The anchor, positive, and negative samples form a triplet which is used in the loss function by minimizing the distance between the anchor and positive samples while maximizing the distance between the anchor and negative samples.

The Amur Tiger Re-identification in the Wild database contains full-body images of Amur tigers (Panthera tigris altaica) with visible stripes, extracted from 1080p resolution video from trap cameras. Images within the Beef Cattle Muzzle/Noseprint Database (Xiong et al., 2021) are close-up images of the noseprints of feedlot beef cattle at 26 megapixel maximum resolution. The Chimpanzee Faces database (Freytag et al., 2016) is a combination of two datasets, the C-Zoo dataset and the C-Tai dataset, and contains images that are centred on the cropped faces of chimpanzees (Pan troglodytes) with an unspecified resolution. Full-body photographs of plains zebra (Equus quagga) and Masai giraffe (Giraffa camelopardalis tippelskirchi), taken by 55 citizen scientist photographers in total, are in the Great Zebra and Giraffe Count and ID database.
Note: Mean average precision was calculated for the top choice (mAP@1).

F I G U R E 8 Representative images from the Chimpanzee Faces dataset (Freytag et al., 2016) demonstrating model performance with differing image quality. The leftmost column contains images from the test subset never seen during training. The remaining images in the same row contain the five nearest neighbours in order of Euclidean distance. ☒ indicates that the image does not correspond to the same individual. ☑ indicates that the image does correspond to the same individual. (a) None of the nearest neighbours corresponds to the correct individual. (b) Although the closest neighbour does not correspond to the correct individual, two of the five nearest neighbours correspond to the correct individual. (c) All five nearest neighbours correspond to the correct individual.

4 | DISCUSSION
The proposed methodology demonstrates state-of-the-art results in the animal re-identification domain, with accurate results across multiple populations and two different species of sea stars (Asterias rubens and Anthenea australiae). While this research used sea stars to create and test the model, the methodology was accurate when applied to the re-identification of seven mammalian species and required no changes in the approach. The additional datasets evaluated had diverse properties, including varying distance to the subject, image resolution, and identifying features of the individuals. This indicates that the described framework is versatile and can be used in closed populations of animals in managed care settings (closed-set problem) as well as open populations of free-ranging wildlife (open-set problem).
F I G U R E 9 Representative images from the Amur Tiger Re-identification in the Wild dataset (Li et al., 2019) demonstrating a lack of diversity in pose, lighting, and background. (a) Entire set of images available for one individual. (b) Entire set of images available for a different individual.

Determining what markers are used by the machine learning model for individual re-identification is a challenging task. Future work could reach an approximation by studying which components of the underlying neural network are activated when the relevant pixels of the image change, or by computing saliency maps (Simonyan et al., 2014). However, definitive determination of the markers used by the model is unlikely to be necessary to use the model effectively for animal re-identification. Due to the lack of other naturally occurring distinguishable features in A. rubens, such as colour or shape, it is reasonable to assume the model used the spine pattern on the central disc as the distinguishing feature. Other known methodologies, including computer vision and human labelling, prove intractable when using the spine pattern as the distinguishing feature.

TA B L E 3 Optimal choice of hyper-parameters for each individual dataset, selected after training a model for 10 epochs and identifying the model with the highest mAP@1 using unseen individuals (open-set evaluation) or unseen collection events (closed-set evaluation). Columns: Dataset (citation); Batch size; Dropout; Augmentation count; Augmentation factor; Loss margin; Embedding size; Retrain layer count.

With a sufficiently diverse set of pictures to use as training data, including a variety of poses, lighting conditions, and backgrounds, a model trained using the methodology described herein is able to re-identify even unseen individuals without the need for re-training, as seen in the open-set evaluation metrics. This is achievable due to the inherent FSL properties (Parnami & Lee, 2022) of the methodology and will be especially useful in re-identification of free-ranging animals. The results of machine learning models for animal re-identification will be highly dependent on access to high-quality imagery. Model performance can vary based on the properties of the datasets used in training and testing, including number of images per individual, resolution of images, lighting conditions, diversity of poses and backgrounds, distinctness of individuals (i.e. identifying features), and overall quality of the images. Image quality likely played a role in the variable performance of the model with the publicly available datasets; however, this cannot be definitively determined without additional performance testing. The described methodology has proven to work remarkably well with images taken under a controlled environment, with relatively good lighting conditions, and at close distance to the subject. While images taken with a smartphone camera were used for the sea star dataset images, the model is expected to perform similarly with high-quality images taken with an underwater camera or other consumer-grade cameras.

TA B L E 4 Effect of individual hyper-parameter values as measured by the difference in mAP@1 for the average across all trials compared to the trials for which the specific hyper-parameter value was chosen.
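The random hyper-parameter search (100 configurations trained for 10 epochs each) could be organised as below; the value ranges marked as illustrative are assumptions, while the augmentation count and retrainable layer ranges follow the text.

```python
import random

SEARCH_SPACE = {
    "batch_size":          [16, 32, 64],            # illustrative values
    "dropout":             [0.0, 0.2, 0.5],          # illustrative values
    "augmentation_count":  list(range(2, 9)),        # 2-8, as stated in the text
    "augmentation_factor": [0.1, 0.2, 0.4],          # illustrative values
    "loss_margin":         [0.1, 0.2, 0.5, 1.0],     # illustrative values
    "embedding_size":      [64, 128, 256],           # 128 was the size finally used
    "retrain_layers":      [64, 128, 256, 427],      # between 64 and all 427 layers
}

def random_search(train_and_score, n_trials=100, epochs=10):
    """train_and_score(config, epochs) -> mAP@1 on held-out data; returns the best config."""
    best_config, best_score = None, -1.0
    for _ in range(n_trials):
        config = {name: random.choice(values) for name, values in SEARCH_SPACE.items()}
        score = train_and_score(config, epochs)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
```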
8,927.8
2024-01-04T00:00:00.000
[ "Computer Science" ]
Kinetic mixing and symmetry breaking dependent interactions of the dark photon

We examine spontaneous symmetry breaking of a renormalisable U(1) × U(1) gauge theory coupled to fermions when kinetic mixing is present. We do not assume that the kinetic mixing parameter is small. A rotation plus scaling is used to remove the mixing and put the gauge kinetic terms in the canonical form. Fermion currents are also rotated in a non-orthogonal way by this basis transformation. Through suitable redefinitions the interaction is cast into a diagonal form. This framework, where mixing is absent, is used for the subsequent analysis. The symmetry breaking determines the fermionic current which couples to the massless gauge boson. The strength of this coupling as well as the couplings of the massive gauge boson are extracted. This formulation is used to consider a gauged model for dark matter by identifying the massless gauge boson with the photon and the massive state with its dark counterpart. Matching the coupling of the residual symmetry with that of the photon sets a lower bound on the kinetic mixing parameter. We present analytical formulae for the couplings of the dark photon in this model and indicate some physics consequences.

I Introduction
In a non-abelian gauge theory the field tensor F^a_μν is gauge covariant and the kinetic term is L = −(1/4) F^{aμν} F^a_{μν}, where μ, ν are Lorentz indices and a is a gauge index, both of which are summed over. This form is determined by Lorentz and gauge invariance. For a U(1) gauge theory, on the other hand, F_μν by itself is gauge invariant. Therefore, if there are several U_i(1) (i = 1, ..., n) factors in a theory, the possibility of mixed terms in the Lagrangian of the form −α_ij F^{iμν} F^j_{μν} (i ≠ j), where the α_ij quantify the mixing (in a given basis), opens up. Indeed, such kinetic mixing has been noted in the literature [1] and its origin, especially in the context of grand unification where two U(1) factors are often encountered, examined [2]. If both U(1)s are embedded in a grand unified theory (GUT) such as E_6, then at the unification scale the mixing will vanish, but it could be generated at low energy, where the GUT symmetry is not exact, by renormalisation group (RG) effects. Phenomenological applications in the context of dark matter have considered the kinetic mixing of a "dark sector" U(1) gauge field with the U(1)_Y of the standard model [3]. The detectability of such a dark photon in a number of experiments using different approaches has been examined [4]. The alternative of kinetic mixing of the "dark sector" U(1) with U(1)_EM has also been proposed [5]. Possible tests of such a photon-dark photon mixing scenario are available in the literature [6]. Consequences of mixing between several dark sector U(1) factors have been illustrated in [7]. In this work our endeavour has been to take a detailed look at the effect of spontaneous symmetry breaking on a theory with two kinetically mixed U(1) factors where the gauge bosons also couple to fermions, that is, when interaction terms are present. In the literature such models are usually analysed with the assumption that the coefficient of the mixing term, c, is small. We have not imposed this restriction. On the contrary, we show that depending on the two U(1) gauge coupling strengths, g_1 and g_2, a lower bound on the magnitude of c will exist if the coupling of the final unbroken gauge symmetry is to match that of electromagnetism, e.
In particular, we show that in this case c must satisfy a lower bound, quoted as inequality (1), which is fixed by g_1, g_2, and e. In the special case g_1 = g_2 = e the lower bound on c becomes zero. Also, if (g_1^2 + g_2^2) > 4e^2 then there is no solution. In general the presence of two U(1) symmetries entails that all particles carry two distinct charges. Consequently there are two fermionic currents. These currents couple exclusively to the two gauge bosons of the theory without any cross terms. In this way the starting basis of our analysis is defined. We proceed in the following stages. In the first, the mixing term F^1_μν F^{2μν} is removed by a transformation of gauge bosons involving an orthogonal rotation and a scaling. The initial basis, where kinetic mixing terms are present, is denoted the A basis, and the second, where non-diagonal terms are removed, the B basis. Thereafter spontaneous symmetry breaking of the type U_1(1) × U_2(1) → U_3(1) takes place. This causes a further orthogonal rotation of gauge bosons taking the B basis to the mass eigenstates, which we term the X basis. One of the states, X^1_μ, associated with the unbroken U_3(1), remains massless while the other state, X^2_μ, is massive. These mass eigenstates form an orthonormal basis. The original A basis, where mixing terms are present and which is related to this mass basis X by orthogonal and scaling transformations, cannot then be orthogonal. This leads to no conflict, as we can define charges of fermions and scalars consistently in the B basis, which is orthogonal. We evaluate the couplings of the massless and massive gauge boson states to fermions after symmetry breaking. The massless eigenstate, X^1_μ, couples to one particular combination of the two fermion currents. We express this specific combination in terms of the direction of symmetry breaking in the U_1(1) × U_2(1) space and further determine the coupling strength of the massless boson to the current related to the unbroken charge. The coupling of the massive gauge boson, X^2_μ, is conveniently given in terms of the above current combination and another current which is orthogonal to it. We observe that the coupling of X^2_μ to the unbroken combination is controlled by the kinetic mixing parameter c. We have indicated how these results on couplings may offer a window on the physics of Dark Matter via a calculable ordinary matter-dark matter interaction strength. We have given two examples. In the first, ordinary matter does not have any dark charges and, as expected, the dark matter does not have electric charge. In the second example, an extra U(1) of the dark sector is kinetically mixed with normal U(1)_EM. Spontaneous symmetry breaking occurs in such a manner that the unbroken direction remains along U(1)_EM. This means that only the dark U(1) is broken. In this case, due to the presence of kinetic mixing in the unbroken theory, we derive relations between gauge couplings in the broken theory. Such relations would not exist if kinetic mixing were absent.

Our paper is arranged as follows. In the next section we set up the notation and the transformation from the A (mixed) to the B (unmixed) basis. Symmetry breaking is considered in the following section, and the X basis is defined as the (orthogonal) mass basis for gauge bosons. Analytic expressions for the couplings of the gauge boson mass eigenstates to fermions are given in the next section. Possible applications of these ideas in the context of dark matter are then considered. We end with our conclusions.
II Removing kinetic mixing by rotation and scaling In general, the kinetic terms for a gauge theory consisting of two U (1) groups can be written as 3 Here the field strengths are expressed in terms of gauge fields by the usual formula and c is a real kinetic mixing parameter. Gauge invariance cannot fix the magnitude of c. In fact c has different values for different basis choices. Once we are able to fix the basis A 1 µ , A 2 µ using a set of physical arguments, then c becomes a meaningful parameter. The Lagrangian for the interaction of fermions with gauge bosons is Above, j µ r (r = 1, 2), is the fermionic current due to the presence of U (1) r charge g i is the corresponding coupling strength. We see that the initial basis, called A basis, is fixed by demanding that couplings of gauge bosons to fermions is diagonal. Through an orthogonal rotation by π/4 in the A 1 µ − A 2 µ sector 4 followed by scaling, by a factor which can always be chosen to be real, one can remove the kinetic mixing and bring the gauge Lagrangian in Eq. (2) to the canonical form. After these transformations one has Here the redefined field strength tensors are The new basis is defined by the transformation equation In this new basis, here termed the B basis, there is no kinetic mixing, but we have lost the diagonal form of interaction with fermions. In the transformation matrix given in Eq. (8), the parameters λ 1 , λ 2 are given by We observe that λ 1 , λ 2 are the eigenvalues of a real symmetric matrix formed by the coefficients of terms in Eq. (2). If the scaling transformations in Eq. (8) are real then λ 1 and λ 2 must be positive, which results in the following inequalities 5 : Under the transformation c ↔ −c we get λ 1 ↔ λ 2 . So, we can keep c > 0 and λ 1 > λ 2 in this analysis with the understanding that the results for negative c can be obtained by the prescription noted above. It is to be borne in mind that Eq. (8) is not an orthogonal transformation so if the B basis is orthogonal 6 the A basis is not. We will define the U (1) × U (1) charges in the B basis which is orthonormal keeping in mind that in this basis off-diagonal interactions with fermions are present. However, the mixing parameter c, defined in the A basis can still be constrained as we discuss now. Let us rewrite Eq. (4) as It is amply evident now that these are two equivalent formulations of the same phenomenon. The first description corresponds to Eq. (2) and Eq. (4) where there is kinetic mixing among the U (1) gauge field strengths (i.e., c = 0) and the currents couple only to the corresponding gauge bosons, i.e., j µ r to A r µ for r = 1, 2. In the second picture given by Eq. (6) and Eq. (11) there is no kinetic mixing among gauge boson fields, B 1,2 µ , but the currents j µ 1,2 couple to both gauge bosons. It is to be noted that in Eq. (11) a change of c only affects the scaling within the matrix and in the limit c → 0 we have From Eq. (8) we can easily see that in this limit B 1 µ and B 2 µ have equal admixtures of A 1 µ and A 2 µ . This result is reminiscent of degenerate perturbation theory. An alternate but useful way of rewriting interactions in Eq. (11) is to express it in terms of a redefined set of currents J µ 1,2 which couple diagonally to the gauge bosons B 1,2 µ . One then has In this process, currents involving fermions are scaled and rotated now, as, which is a non-orthogonal transformation, and For c > 0 we have λ 1 > λ 2 which leads tog 2 >g 1 . 
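As a worked illustration of this rotation-plus-scaling step, a minimal sketch is given below with the mixing term written as −(ε/2)F^1·F^2; the paper's parameter c may differ from ε by a normalization convention, so the numerical window for ε here (|ε| < 1) need not coincide with the bound quoted for c.

```latex
% Sketch: removing U(1) x U(1) kinetic mixing by a pi/4 rotation plus a scaling.
% The mixing strength is called \epsilon here; the paper's c may be normalized differently.
\begin{align}
\mathcal{L}_{\rm gauge}
  &= -\tfrac{1}{4}F^{1\mu\nu}F^{1}_{\mu\nu}
     -\tfrac{1}{4}F^{2\mu\nu}F^{2}_{\mu\nu}
     -\tfrac{\epsilon}{2}F^{1\mu\nu}F^{2}_{\mu\nu}, \\
A^{\pm}_{\mu} &\equiv \tfrac{1}{\sqrt{2}}\,\bigl(A^{1}_{\mu}\pm A^{2}_{\mu}\bigr)
  \;\Longrightarrow\;
  \mathcal{L}_{\rm gauge}
  = -\tfrac{1+\epsilon}{4}F^{+\mu\nu}F^{+}_{\mu\nu}
    -\tfrac{1-\epsilon}{4}F^{-\mu\nu}F^{-}_{\mu\nu}, \\
B^{1}_{\mu} &\equiv \sqrt{1+\epsilon}\,A^{+}_{\mu},\qquad
B^{2}_{\mu} \equiv \sqrt{1-\epsilon}\,A^{-}_{\mu}
  \;\Longrightarrow\;
  \mathcal{L}_{\rm gauge}
  = -\tfrac{1}{4}B^{1\mu\nu}B^{1}_{\mu\nu}
    -\tfrac{1}{4}B^{2\mu\nu}B^{2}_{\mu\nu}.
\end{align}
```

The scaling is real only for |ε| < 1, the analogue of the positivity condition on λ_1 and λ_2 above; applying the same substitution to the interaction terms shows why the currents j^μ_1 and j^μ_2 each couple to both B^1_μ and B^2_μ after the transformation.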
In the special case g 1 = g 2 ≡ g, which we consider in an example later, the relations in Eq. (15) become It is to be noted that though Eq. (13) bears a strong resemblance to Eq. (4) a major difference is that the currents j µ and J µ are related by a non-orthogonal rotation. III Spontaneous symmetry breaking At this point we are in a position to consider symmetry breaking of the U 1 (1) × U 2 (1) theory. For a scalar field Φ with U 1,2 (1) charges q s 1,2 the covariant derivative is In our convention, we have assigned charges Q i in the B basis and the q i charges are in the A basis. They are related through Eq. (14). Thus We can now consider spontaneous breaking of the U 1 (1)×U 2 (1) symmetry by the scalar field developing a vacuum expectation value Φ = v/ √ 2 = 0. The gauge boson mass matrix in the B 1,2 basis is The mass eigenstates are denoted by X 1 µ , X 2 µ . One eigenstate has a zero eigenvalue while the other one is massive 7 . Because X 1 µ and X 2 µ are eigenvectors of a real symmetric matrix with distinct eigenvalues, 7 The complex scalar field Φ provides the longitudinal mode for X 2 µ and also results in a real scalar boson. The latter can couple to the SM Higgs boson through quartic terms in the scalar potential leading to a 'Higgs portal' for the dark matter [9]. they are orthogonal. Furthermore, we know that the diagonalizing matrix is an orthogonal matrix. The mixing angle is where the normalization factor is given by, The two eigenvalues of the mass matrix are, Interactions of the mass eigenstates X 1 µ and X 2 µ can now be written neatly. From Eq. (13) one has When U (1) × U (1) symmetry is spontaneously broken to a residual U (1), there is an associated conserved charge which is a linear combination of Q 1 and Q 2 . This conserved charge can be written in a normalised form as such that the scalar field Φ which acquires a vacuum expectation value triggering the symmetry breaking satisfies 8 This implies (up to an overall sign) One can also define another charge, which is not conserved, and is orthogonal in direction to Q: The non-conservation of Q ′ is due to the fact that the corresponding U (1) symmetry is broken. IV Fermion interactions Recall that we had originally defined fermionic currents in Eq. (5) in the presence of kinetic mixing. These currents had a diagonal interaction with gauge bosons in the A basis. A combination of j µ 1,2 was identified in Eq. (14) to form J µ 1,2 which had a diagonal form of interaction in the B basis. These currents will now be further mixed during spontaneous symmetry breaking. We can express fermionic interactions of the gauge boson mass eigenstates in terms of the currents defined through the charges Q f and Q ′f asĴ It is convenient to write the interaction Lagrangian of massive and massless gauge bosons as Here X 1 µ corresponds to the surviving U (1) and it couples only toĴ µ 1 . On the contrary X 2 µ couples to bothĴ µ 1 as well as the orthogonal combination, namely,Ĵ µ 2 . To determine the coupling strengths g ij we reexpress Eq. (24) as In particular, using Eqs. (21) and (27) the interaction of the massless gauge boson X 1 µ is We can now read off the coupling strengths g 11 and g 21 from Eq. (33). We see that By a rearrangement of terms, one obtains the more familiar expression An interesting consequence of Eq. (35) is that As X 1 µ corresponds to a surviving U (1) symmetry, it couples only toĴ 1 µ . 
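To make the statement about the massless combination explicit, a sketch of the B-basis mass matrix generated by ⟨Φ⟩ = v/√2 is written out below, up to an overall normalization factor; the charges Q_1, Q_2 are those of Φ in the B basis and the tilded couplings are those defined above.

```latex
% Sketch: gauge-boson mass matrix in the B basis from <Phi> = v/sqrt(2),
% written up to an overall normalization that may differ from the paper's conventions.
\begin{align}
M^2_{B} &\propto v^2
\begin{pmatrix}
\tilde g_1^2 Q_1^2 & \tilde g_1 \tilde g_2\, Q_1 Q_2 \\
\tilde g_1 \tilde g_2\, Q_1 Q_2 & \tilde g_2^2 Q_2^2
\end{pmatrix}, \\
X^{1}_{\mu} &\propto \tilde g_2 Q_2\, B^{1}_{\mu} - \tilde g_1 Q_1\, B^{2}_{\mu}
\qquad (m_{X^1} = 0), \\
m^2_{X^2} &\propto v^2\bigl(\tilde g_1^2 Q_1^2 + \tilde g_2^2 Q_2^2\bigr).
\end{align}
```

The determinant of this matrix vanishes identically, so one eigenvalue is always zero; the corresponding eigenvector defines the unbroken direction and hence the conserved charge Q, while the orthogonal direction carries the non-conserved charge Q′.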
The interaction of X 2 µ can be expressed as whence, we can again read off the couplings of the heavy gauge boson X 2 µ with the two currents, viz. Here we emphasise that X 2 µ couples to bothĴ µ 1 as well asĴ µ 2 because there is no symmetry which can force it to couple toĴ µ 2 only. V Applications Even though the existence of dark matter was known for a long time [10], in fact since the 1930's, recent satellite-based experiments such as COBE and WMAP have brought the issue to the foreground [11]. Analysis of temperature anisotropies of Cosmic Microwave Background (CMB) data found in the PLANCK experiment has shown that in the universe 26.8% of all matter and energy is dark matter [12]. Dark Matter interacts with the visible sector by gravitational interactions. Other possible interactions of the dark sector with the visible one has to be tightly controlled in order for it to qualify as dark matter. Any such interaction, if it exists at all, cannot be stronger than the weak interactions, i.e., the candidate dark matter could be a weakly interacting massive particle (WIMP) [13]. Here we examine the possiblity of gauge kinetic mixing between the ordinary photon and a dark counterpart being at the origin of such an interaction. This can also account for the fact that halo properties of galaxies, studied in cosmological simulations, hint towards dark matter self-interactions [14,15]. The idea of a "dark photon" kinetically mixed with the ordinary photon has been invoked before in theories of dark matter [5,6]. In these theories the dark matter (DM) coupling to the dark photon is of comparable strength as the coupling of the ordinary photon to standard model (SM) matter. Though the dark matter does not couple to the ordinary photon, the SM matter develops a tiny coupling to the dark photon through the small kinetic mixing. This leads to an effective interaction between the dark and SM sectors whose strength is controlled by kinetic mixing. In the subsections below we examine how our calculations can be useful for such considerations without the assumption of a small kinetic mixing. In other words in the following discussions the value of the mixing parameter is not necessarily small. V.1 Couplings and charges of dark photons In a realistic theory the residual unbroken U (1) group has to be identified with U (1) EM , i.e., the massless gauge boson, X 1 µ , has to be the photon. The immediate consequence of this identification is that g 11 is related with the fine structure constant by As g 11 is now expressed in terms of e, we can rewrite Eq. (35) as, This is a key equation, arising from the identification of X 1 µ with the photon, which relates symmetry breaking parameters (α 1,2 ) with the kinetic mixing strength c and gauge couplingsg 1,2 of X 1,2 µ which are mass basis states. Using Eq. (15) along with (9) one can rewrite it as: Eq. (41) results in a lower bound on the kinetic mixing parameter c. To see this, using Eq. (15) we express the couplingsg 1 andg 2 in units of e as Here we have defined a new quantity ξ: In terms of ξ, Eq. (41) takes the shape of We may recall that λ 1,2 are determined by the kinetic mixing parameter c in Eq. (9). Using α 2 1 +α 2 2 = 1 one can solve for α 1,2 , Now since 0 ≤ α 2 1 ≤ 1 and 0 ≤ α 2 2 ≤ 1 we arrive at Using Eq. (44) one immediately arrives at the inequality (1) stated in the Introduction. We see that c can vanish only for ξ = 1/4, i.e., (g 2 1 + g 2 2 ) = 2e 2 . 
In general, when this condition will not be met, we obtain a lower bound on c depending on the value of ξ. From Eqs. (38) and (39) the two other couplings g 12 and g 22 can be expressed in this notation as As is evident from the above discussion both g 12 and g 22 are determined once c and ξ are fixed. Choosing several values of ξ within the permitted range, we display in Fig. 1 the dependence of g 22 and g 12 on c. We have taken the central value of ξ = 1/4 and also other values equidistantly at higher and lower sides of this central value. We have presented the results for five values of ξ = 1/12 (red solid), 1/6 (green dotted), 1/4 (blue dot-dashed), 1/3 (magenta dotted), and 5/12 (blue solid). It is seen that for large c ∼ 1/4 the two couplings are of comparable size. At the small c end g 12 tends to zero while g 22 tends to a non-zero limiting value. Both couplings diverge as c tends towards 1/4. This is a reflection of the factor √ λ 1 λ 2 in the denominator in the expressions for g 12 and g 22 in Eq. (48) since from Eq. (9): Nonethesless physical processes remain finite in the c → 1/4 limit as the mass of X µ 2 also diverges. Using Eqs. (29) and (45) one has: For any δ between 0 and 1/4 when ξ changes from 1/4 − δ to 1/4 + δ α 1 and α 2 are exchanged, as can be seen from Eq. (46). Because g 12 depends on the product α 1 α 2 , the curves for g 12 for these cases overlap. For a given value of c, larger g 22 corresponds to a higher ξ. Since α 1,2 ≤ 1, we see from Eq. (46) that for larger values of ξ the kinetic mixing strength c can take values in a restricted range. Here we have considered only positive values of c since, as noted, for negative c one has α 1 interchanged with α 2 whereas λ 1 and λ 2 are exchanged. This will take g 12 to -g 12 , while g 22 will be unaffected. We observe that once α 1 is fixed by c and ξ, the electric charge of a fermion, Q, is given by Eq. (25) in terms of the U (1) × U (1) charges Q 1,2 . The orthogonal charge combination, Q ′ , is similarly defined in Eq. (28). In the next two subsections we present two illustrative models. In the first one U (1) sm × U (1) dm breaks to U (1) EM . Here suffixes sm and dm indicate visible and dark sectors respectively whereas the suffix EM denotes electromagnetism. In the second example U (1) EM × U (1) dm breaks to U (1) EM . In the first example gauge bosons of U (1) sm and U (1) dm mix during the spontaneous symmetry breaking process, whereas in the second case the mixing between photon and the dark gauge boson is solely due to the kinetic mixing. V.2 Example 1: A toy model for dark matter In this model there are two sectors, a visible sector denoted by U (1) sm and a dark sector denoted by U (1) dm . Even though this model is not realistic as it stands, key features of our analysis can be demonstrated by this simplified version. Symmetry breaking is along the following line, To apply this formulation of kinetic mixing to models of dark matter we consider two classes of particles specified in terms of the nature of their charges Q and Q ′ . Of these, Q corresponds to the currentĴ µ 1 which is associated with U (1) EM . It is a conserved charge unlike Q ′ which corresponds to the orthogonal broken direction. The photon (X µ 1 ) couples through only Q while the dark photon (X µ 2 ) couples to both Q -coupling g 12 -as well as Q ′ -coupling g 22 . There are two classes of particles, namely, (a) Dark Matter which is decoupled from the photon by having Q dm = 0, and (b) Normal matter which has Q ′sm = 0. 
By choice, we have the photon coupling only to the SM sector. It will be of our interest to discuss the coupling of SM with Dark Matter through the dark photon, X µ 2 , mediated interactions. This is shown in the left panel of Fig. 2. For momentum transfers small compared to m X 2 from Eqs. (48) the probability amplitude will be where Q sm and Q ′dm are respectively the electric charge of the SM particle and the dark charge of the DM particle. One can readily extract the dependence of the above amplitude on c. One finds By a suitable choice of ξ near the limiting values the right-hand-side of Eq. (53) can be made arbitrarily small. Hence, a small and controllable interaction cross section between the standard and dark sectors is a natural consequence of the model. On the other hand, one may be tempted to think that for large values of the mixing parameter |c| ∼ 1/4 this interaction can be enhanced. However, this will also modify cross sections of purely standard processes such as e − e − → e − e − and is very tightly constrained. For example, the X µ 2 coupling to SM fermions will result in interactions within the SM sector as depicted in the right panel of Fig. 2. This leads to the probability amplitude The dependence of the above amplitude on c is Needless to say, one can similarly calculate scattering within the dark matter sector via X µ 2 -exchange. V.3 Example 2: Realistic U(1) dm mixing with QED A simple and realistic model which appears in the literature of kinetic mixing is one in which the photon mixes with a U (1) dm gauge field. Because the other gauge boson is not yet detected experimentally, U (1) dm symmetry is broken and the dark photon is massive. Usually the mixing term is considered as a perturbation and its effects examined. In our approach, which is exact, one must identify the remaining unbroken symmetry as QED which is also one of the two initial U (1) symmetries. Thus, one must demand X µ 1 ≡ A µ 1 . From Eqs. (18), and (25) we can write The requirement that Q = q 1 can be achieved by Of these, the second option is untenable as it implies g 2 = 0 as a consequence of Eq. (15). This result identifies electric charge as the coupling of one of the factor groups that existed before symmetry breaking. From Eq. (15) it implies Thus, in the A basis where gauge bosons have diagonal couplings with fermions, gauge coupling must be identical for the two U (1) factors. To the best of our knowledge this is a new result. Then from Eqs. (34), (38) and (39), Using Eq. (44) we get ξ = 1/4 for which as shown earlier |c| ≥ 0. Two noteworthy features here are that g 22 , the coupling within the Dark Matter sector mediated by X 2 µ , is stronger than normal electromagnetism. Also Dark Matter couples to ordinary matter via the coupling g 12 which goes to zero as the kinetic mixing parameter c → 0. Before moving on we would like to draw attention to another mode of handling kinetic mixing that is often used. It is common in the literature to define the mixing in the basis in which the gauge bosons are already the mass eigenstates, one of which is massless while the other has a non-zero mass typically through a Stückelberg mechanism. In such scenarios the removal of the kinetic mixing is enabled through the transformation Note that this leads precisely to the couplings in eqs. (62) for X 1 µ and X 2 µ . VI Summary and conclusion When a theory has two (or more) U (1) symmetries then the possibility of gauge kinetic mixing opens up. 
We have examined kinetic mixing in a generic model with two U (1) factors where the symmetry is spontaneously broken as U 1 (1) × U 2 (1) → U 3 (1). These models are usually considered in the literature using various approaches that commonly assume a small mixing parameter, c, and study physical effects by varying it. In this paper in contrast we have focussed on c without restricting it to be small. We show that in certain cases the range of c is bounded. Here, as a first step the kinetic mixing term is removed by an orthogonal rotation and a scaling. It is convenient to use the charges, Q 1,2 , of fermions and scalars in this new orthonormal basis to discuss the spontaneous symmetry breaking. The symmetry breaking identifies a charge, Q = α 1 Q 1 + α 2 Q 2 , corresponding to the unbroken gauge symmetry. The interactions are then readily expressed in terms of Q and an orthogonal charge Q ′ . While the massless gauge boson couples only to Q (with coupling g 11 ) the heavy gauge boson has a coupling to Q ′ of strength g 22 and also to Q given by g 12 . We derive analytical formulae for these couplings and show that both g 12 and g 22 are controlled by the mixing parameter c. An important result, which can be seen from Fig. 1 is the following. To be able to identify the unbroken U (1) coupling with that of electromagnetism for a fixed ξ = (g 2 1 + g 2 2 )/8e 2 there is a lower bound on the magnitude of c given in Eq. (47). The bound is quoted in a basis where couplings of fermions to gauge bosons is diagonal. As noted, a nonzero g 12 is responsible for interactions between the dark and ordinary sectors. The coupling g 22 leads to interactions within the dark sector which have been suggested as an ingredient for the explanation of the halo structure of satellite galaxies [16]. We note that g 22 need not be a small coupling unlike g 12 , which is controlled by kinetic mixing. Such self-interaction is also needed to resolve conflicts between observation and simulation at the galactic scale and smaller [14]. Selfinteraction in the dark sector is also needed to explain signals obtained in the DAMA experiment [15]. We have illustrated this theory by two examples related to Dark Matter. In both cases we have identified the unbroken U (1) as the electromagnetic group U (1) EM . In the first example, ordinary matter has only the Q charge, which is now the electric charge, whereas dark matter has only the Q ′ charge. The heavy gauge boson is identified with the dark photon and it couples to visible as well as dark matter. We have shown the manner in which the coupling of the dark photon to the ordinary matter depends on the mixing parameter c. In the limit of no kinetic mixing the dark photon does not interact with the ordinary matter at all (except by gravity) and therefore cannot be searched easily in scattering experiments. In the second example we have examined the case where U (1) EM is kinetically mixed with another U (1). This situation can occur only when the two gauge groups have same gauge couplings initially. In this model also we have given analytical formulae for the coupling strengths of heavy and massless gauge bosons. In both cases we have derived analytical expressions for the Dark Matter self-coupling strengths. The dark photon has been considered here as an intermediary in interactions linking dark matter with ordinary matter. There is also the possibility that a dark photon may be produced on-shell in physical processes, e.g., in dark matter annihilation. 
In the literature it has been proposed to look for comparatively light dark photon signals using e+e− colliders or electron beam-dump experiments, where an emitted dark photon could decay to a pair of lighter dark matter particles [17]. If the dark photon coupling to dark matter is enhanced to large values by an appropriate choice of the mixing parameter c, as indicated in Sec. V.3, decays to dark matter will become more prominent. This will permit the dark photon to be detected through these proposed tests. If the dark photon is relatively light, with mass around 10 MeV, then it can decay only to e+e− pairs, i.e., with branching ratio unity, and with a lifetime which goes as 1/c^2. Detection of electron-positron pairs with invariant mass matching the dark photon mass would be a clear signal. If the mass is such that μ+μ− decays are kinematically possible, then that too could be an alternative detection channel. As formulated, the dark photon coupling to all SM particles should be proportional to the respective electric charges. So, the branching ratios to muons and electrons will differ simply due to phase space considerations. Electrons and muons of such energy can be observed in neutrino detectors, e.g., Super-Kamiokande. If the dark photons are produced in the annihilation of much heavier dark matter particles, then one can expect them to be relativistic. In such an event, the decay products will be collimated in the forward direction. A magnetic field will help in separating the decay products and also in determining their energy-momentum. A sufficiently high-energy charged particle, e.g., at an accelerator, will emit dark photons by bremsstrahlung, which, needless to say, will be suppressed compared to similar γ emission by a factor of O(c^2). There are therefore several avenues for testing the scenario of kinetic mixing discussed in this paper.
7,834.4
2014-09-07T00:00:00.000
[ "Physics" ]
Dirac-based solutions for JUNO production system

The JUNO (Jiangmen Underground Neutrino Observatory) Monte Carlo production tasks are composed of complicated workflow and dataflow linked by data. This paper presents the design of the JUNO production system, based on the DIRAC transformation framework, to meet the requirements of the JUNO Monte Carlo production activities among the JUNO data centres according to the JUNO computing model. The approach allows the JUNO data-driven workflow and dataflow to be chained automatically with the availability of data, and also provides a convenient interface for production groups to create and monitor production tasks. The functions and performance tests used to evaluate the prototype system are also presented.

Introduction
JUNO [1] is a multipurpose neutrino experiment. JUNO plans to take about 2 PB of raw data each year, starting from 2022 and continuing for more than 10 years. The JUNO Monte Carlo (MC) production activities will be arranged and operated on the JUNO distributed computing system, which can integrate heterogeneous resources from the JUNO data centres globally. The experiment data, including Monte Carlo data and raw data, will be stored at IHEP, while multiple copies will be replicated to the European data centres. Therefore, a production system is needed to handle the MC production workflow and dataflow in an automatic way.

JUNO Monte Carlo simulation
In JUNO, Monte Carlo simulation algorithms and software are built and run on the framework called SNIPER [2]. The JUNO Monte Carlo simulation [3] is used for detector design and optimization, algorithm validation, and physics studies. As shown in Figure 1, each JUNO MC simulation is composed of five parts: Physics Generator (PhyGen), Detector Simulation (DetSim), Electronics Simulation (EleSim), PMT Reconstruction (PmtRec/Cal), and Event Reconstruction (EvtRec). Each JUNO production task includes data processing and data replication activities. The data processing includes four steps (DetSim, EleSim, PmtRec, EvtRec), where the PhyGen step is combined into the DetSim step. Every step generates a different type of event data. All steps except DetSim require input event data. These four steps are interconnected by data, which forms the JUNO simulation workflow. The data produced by these steps are replicated between data centres and sites, which forms the JUNO dataflow. The JUNO production tasks include large samples of physical processes, such as Inverse Beta Decay, backgrounds, positrons and electrons with different momenta, muons, etc. It is hard for data production groups to handle these large and complex production tasks manually in a distributed environment.

JUNO distributed computing and computing model
The JUNO distributed computing system has been built on DIRAC [4], which provides a complete grid solution and framework for high energy physics experiments. The resource types integrated in the JUNO distributed computing system include cluster, grid, and cloud. The JUNO distributed computing plans to use a "Tier" architecture composed of three layers, as shown in Figure 2. The IHEP data centre, as Tier0, will hold the central Storage Element (SE), receive and store raw data from the experiment site, and also store one copy of all other data types, including simulation data, reconstruction data, calibration data, etc. Tier0 will be responsible for the first-time full reconstruction and calibration, and will also perform simulation and user analysis.
The data centres in Europe (IN2P3, JINR, CNAF) as Tier1 will hold another copy of the whole data and perform re-reconstruction, simulation, user analysis. Small and opportunistic sites without SE as Tier2 will perform some part of simulation. Small sites with cache linked to the SEs in data centres will also support user analysis. The JUNO production tasks will be assigned by the production group through the distributed computing system to all the JUNO centres and sites. To reduce the burden of central SE, two SEs will be used to receive the output data produced from local SEs. That means the data produced in local SE will be replicated to IHEP or one of data centres in Europe, and synchronized between IHEP and this data centre, and replicated to other data centres if needed. JUNO production system The purpose of JUNO production system is to provide a convenient interface for the JUNO product groups to submit production tasks and manage the JUNO MC simulation workflow and dataflow in an easy way. Architecture and Functions The JUNO production system is designed based on the DIRAC Transformation System (TS) [5]. The TS provides a framework to handle "repetitive" work and chain production workflow and dataflow in a data-driven way. As shown in Figure 3, the JUNO production system mainly comprises of four parts: production manager and transformation system, DIRAC Workload Management System (WMS) and Data Management System (DMS). The JUNO production manager allows the production group to define production requests with a steering file. Interfaced with the TS, the JUNO production manager transforms these requests into transformation instances according to the JUNO computing model. These transformation instances are interconnected with the availability of data by checking DIRAC File Catalogue (DFC) with metadata. Each type of transformation instance creates a sequence of jobs or a list of data replications for each step of production tasks. These jobs and replications created by the transformations are submitted to the WMS and DMS separately, where they will be scheduled to the related services and resources for real operations. The JUNO production system also provides an interface for the production group to monitor and control of those workflow and dataflow. Design of transformation modules for JUNO workflow and dataflow The transformation is the heart of production system. Design of transformation modules according to JUNO workflow and dataflow is the most important part. Each production task includes five steps: DetSim, EleSim, PmtRec/Cal, EvtRec/Rec and replications from closest SEs to final SEs. As shown in Figure 4, accordingly each step is taken care by a transformation module. These transformation modules are chained by the metadata query. The first transformation module DetSim without need of inputs is launched directly by the production system. Other transformation modules are started when their inputs are found to be available by looking into DFC with metadata query. When the query returns file lists, those modules are triggered to generate jobs or data transfer tasks with these files. All the output data from jobs in last step is downloaded to closest SEs and registered in DFC with predefined metadata when arriving in SE, which can be immediately known by the next step and in turn trigger the following step. Implementation The whole system is mainly implemented in three parts: production manager, workflow, dataflow. 
The production manager is to accept JUNO production requests, transform these requests into transformation tasks and launch the production chain. This part is JUNOspecific, and closely integrate with JUNO data processing activities. The workflow and dataflow implementation are general, and can be used in other experiments as well. More details on workflow and dataflow implementations are explained in the followings. Workflow The workflow setup aims to create jobs by the workflow transformations and assign them to the WMS. Three systems are involved: DFC, TS and WMS. In TS, three agents are used to create and submit data-driven jobs: InputData agent, Workflow Task agent, Transformation agent. The InputData agent queries the DFC with metadata to see if the files as inputs to jobs are available in SE to create jobs. As shown in Figure 5, when the files are ready in SE, the Transformation agent creates jobs and the Workflow Task agent is responsible to submit the jobs to the DIRAC WMS and also keep track of the status of jobs to report to the monitoring part. When the jobs arrived in the Task Queue, the DIRAC WMS will schedule jobs to sites. Dataflow The dataflow setup creates and assigns data replication tasks by transformations to the FTS (File Transfer System) [6] which can take care of file-by-file transfers between SEs. The dataflow part can also be acted as an independent data replication system which accepts only data replication requests. As shown in Figure 6, five systems are involved to complete dataflow work: DFC, TS, RMS (Request Management System), DIRAC FTS service and FTS. Just as what does in the workflow part, first the InputData Agent queries DFC to check the availability of data. When data is ready, the Request Task Agent creates data replication tasks and puts them into queues of the RMS. The RMS submits tasks to FTS service which is interfaced with external FTS to do real replications between SEs. Tests The JUNO software is deployed through CVMFS. The JUNO software version used in the tests is J17v1r1. The production tasks for testing are to create samples of positron at different momenta which includes 0.0 MeV, 1.398 MeV, 4.460 MeV and 6.469 MeV. For each momentum, eight transformation instances are created, including four workflow transformation types (detsim, elecsim, cal, rec) and four replication transformation types (detsim-replication, elecsim-replication, cal-replication, rec-replication). Each type of workflow transformation instance generates 100 jobs and each job processes 1000 events. Figure 7 shows two plots of these tests. All the jobs and replications were successfully completed. Fig. 7. Tests done for JUNO production system. The left plot shows the jobs running in sites and the right plot shows the replication speed between sites Summary and Plans The JUNO production system have been designed and tested for the JUNO Monte Carlo production, and also can be extended to other activities such as data reconstruction if needed. The tests with real JUNO production tasks have proved that the system is working well as planned. The system was also successfully applied to replicate raw and reconstruction data from IHEP to other sites. In the near future, heavier tests with more resource involved will be deployed, to check the whole system for possible bottleneck and tune its performance.
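To make the data-driven chaining described above concrete, the sketch below mimics, in plain Python, the pattern the production system uses: each transformation repeatedly queries a file catalogue with a metadata query and, whenever new matching files appear, turns them into jobs (or replication requests) for the next step. The class and method names here are illustrative only; they are not the DIRAC Transformation System API.

```python
from dataclasses import dataclass, field

@dataclass
class FileCatalogue:
    """Toy stand-in for a metadata file catalogue (the role played by the DFC)."""
    entries: list = field(default_factory=list)   # list of (lfn, metadata dict)

    def register(self, lfn, **metadata):
        self.entries.append((lfn, metadata))

    def find(self, **query):
        return [lfn for lfn, md in self.entries
                if all(md.get(k) == v for k, v in query.items())]

@dataclass
class Transformation:
    """One step of the chain: a metadata query plus a job template."""
    name: str
    input_query: dict                 # which files trigger this step
    output_datatype: str              # metadata tag its outputs would carry
    processed: set = field(default_factory=set)

    def poll(self, catalogue):
        """Create one 'job' per new input file; return the created jobs."""
        jobs = []
        for lfn in catalogue.find(**self.input_query):
            if lfn not in self.processed:
                self.processed.add(lfn)
                jobs.append({"transformation": self.name,
                             "input": lfn,
                             "output_datatype": self.output_datatype})
        return jobs

# Chain: detsim output triggers elecsim, elecsim output triggers cal, and so on.
catalogue = FileCatalogue()
elecsim = Transformation("elecsim", {"datatype": "detsim"}, "elecsim")
cal     = Transformation("cal",     {"datatype": "elecsim"}, "cal")

catalogue.register("/juno/prod/positron_0MeV/detsim_000.root", datatype="detsim")
print(elecsim.poll(catalogue))     # job created from the new detsim file
catalogue.register("/juno/prod/positron_0MeV/elecsim_000.root", datatype="elecsim")
print(cal.poll(catalogue))         # next step triggered by the new elecsim file
print(elecsim.poll(catalogue))     # nothing new -> no additional jobs
```

In the real system this polling is done by the InputData, Transformation and Task agents against the DFC, and the created items go to the WMS (jobs) or the RMS/FTS (replications) rather than being returned to the caller.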
2,261
2020-01-01T00:00:00.000
[ "Computer Science", "Physics" ]
Language Mixing in an Indonesian-Balinese Simultaneous Bilingual Child -- When children get regular and continual exposure to two or more languages, they can develop the competence to use those languages. However, the languages used by the children may include mixing, where features of different languages converge in a single language use. The present research investigates the speech of a three-year-old child exposed to Indonesian and Balinese since birth. The study focused on analyzing the language mixing produced by the child. The child was observed for three months. The child was exposed to Indonesian by the parents and to Balinese by the grandparents and other extended family members. They all lived in the same compound. In collecting the data, diary notes were used, supplemented with video recordings. The result shows that language mixing occurred when the child substituted content words, phrases, and function words in both languages. The language mixing happens due to the salience of the words, the frequent use of the words or phrases in the child's environment, the availability of mixing in the input, and an effort to emphasize meaning. Pragmatically, the result shows that the child can use the two languages appropriately with different interlocutors. Introduction Studies of bilingual children's linguistic development portray the early circumstances of language production, which is the combination of constituents from each language (Riksem, 2017; Rubin & Toribio, 1996). Research shows that parents guide children's attitudes and play a huge role in raising a bilingual (Hakuta & D'Andrea, 1992; Nakamura, 2019). Parents are believed to foster the children into bilinguals. In a bilingual society, language mixing is a common phenomenon. It distinguishes bilingual speakers from monolinguals (Alexiadou & Lohndal, 2018). Meisel (2007) states that language mixing is common in bilingual and multilingual communities. Speakers are considered to be mixing languages when a word or an utterance of language A contains elements of language B (Cantone, 2007). MacSwan (2006) defines language mixing as the production of speech combining lexical items and grammatical features of two languages in one utterance. Grammars are assumed to be tightly integrated in the speaker's mind and to adapt from one language to another. This can be seen in the blend of two languages, at what is called the word-internal level, in a common context. As a result, the question of what the smallest components of language mixing are has been raised. Furthermore, language mixing is the act of, and the ability for, mixing two or more languages in a conversation (Wehler, 2016). Language mixing is unpredictable. Some people may switch their language within a conversation, across a full sentence, or within a single sentence. It happens both consciously and unconsciously. The factors that may affect the code-switching act are the speaker, the audience, the context, and the relationship between the speakers.
It was strongly believed that the human brain, especially children's, shall be confused by learning several languages (Jernigan, 2015).However, this myth is not supported by evidence, research, health professionals, or educators.Cognitive control is the part that supports the development of the ability in language mixing, either in bilingual adults or children, has been studied in recent research (Gross & Kaushanskaya, 2015).Meanwhile, other research shows that bilingualism prevents cognitive decline in older age (Bialystok, 2017;Grundy & Anderson, 2017).Various researches have shown that bilingual children have better cognitive development in tasks and can see the matter from different perspectives (Garraffa et al., 2018). Language mixing does not serve any pragmatic purpose and is considered the same as code-switching and code-mixing (Doğruöz et al., 2021).Concerning parental discourse strategies, the strategy of one-person-one-language appears to give input in the various patterns in developing a child's linguistic mixed utterances in many ways (Juan-Garau & Perez-Vidal, 2001).This strategy of one-person-one-language has become a popular object of child language development studies for many years (Adnyani et al., 2018;Purniawati et al., 2019;Yip & Matthews, 2000).Meisel (1994) claims that bilinguals can differentiate two languages from a very young age, even children.Between the ages of 2 and 3 years, the language mixing frequency shall be decreased (Genesee, 1989;Koppe, 1996).It happens due to children acquiring more lexical and grammatical knowledge.Many simultaneous bilingual language developments have been observed where the strategies used were one-parent-one language.Cases, where bilingual children are raised in one language by the parents and the other by extended family members have rarely been conducted.Therefore, this study focuses on the language mixing that happened to a three-year-old child exposed to the bilingual environment of Bahasa Indonesia and Balinese Language.Bahasa Indonesia is exposed by the parents, and Balinese by the extended family members such as the grandmother, the grandfather, the aunties, and other relatives. Method Participants and Linguistic Environments This research is a case study of a child using Balinese and Indonesian languages, which is observed using longitudinal observational data.The child lives with his family and extended family members in Bali.Two housekeepers also live with them.The parents used Indonesian dominantly to the child.The parents speak Balinese to one another.The extended family members, however, spoke Balinese mostly.One of the housekeepers communicates with the child in Indonesian, and the other housekeeper communicates in Balinese with the child. The Data Collection The data was collected for three months and followed the child between 3;3 (39 months) and 3;6 (42 months).The data was collected from conversational text or speech based on spontaneous interactions amongst family members in daily notes and video recordings of the interactions and conversations in unset situations, mostly in the house environment.The parents converse with the child using Indonesian while the extended family members speak Balinese.The child's language use was observed daily.The videos were taken at a minimum one-hour recording every week. Transcription and analysis The speech produced by the child was segmented based on utterances, Balinese and Indonesian.Esposito & Aversano (2004) and Gilbert et al. 
(2021) used this method of segmenting speech to provide automatic speech recognition in their study about bilingual development.Based on the data, 381 utterances were collected in Indonesian and 142 in Balinese.The data was transcribed orthographically.Moreover, every utterance produced by the child was accompanied by a contextual description and explanation. Finding and Discussion As indicated, this study focuses on the language mixing on simultaneous bilingualism.According to Mayers-Scotton (2006), bilingual child language acquisition is the acquisition of two or more languages at a young age.Meisel (1989) proposed that bilingual first language acquisition refers to children who grow up hearing two languages from the time they are born.In this study, the child hears Balinese and Indonesian since birth in the family environment. Finding As previously mentioned, the data was analyzed by segmenting the child's speech.Every utterance in both Balinese and Indonesian is recorded.Utterances that contain mixing are coded.The findings are shown in the following excerpts. 'When did you buy it?With whom?' Child : Petengne, isana beli.'In the evening, I bought it there.'Excerpt #1 shows the language mixing in the conversation between the child and the grandmother, whose native language is Balinese.The topic of the conversation was the meal that was had by the child that night.In the conversation, it can be seen that the child inserted the Indonesian phrase [yam kal] ayam bakar 'grilled chicken' into the Balinese conversation initiated by the grandmother.Besides, the child also mixed the Indonesian words [isana] di sana 'there' and beli 'to buy.'The child mixed the Indonesian words or phrases into Balinese utterances to tell the common object known in Indonesian words rather than Balinese.The child used the phrase ayam bakar might be that the child does not know the corresponding phrase in Balinese be siap metunu/be siap mepanggang.Another reason can be that ayam bakar in Indonesian is more frequently used in the child's environment.Thus, the phrase ayam bakar is more salient in the child's linguistic environment.This finding corroborates the previous study by Grimstad et al. (2018) and Goral et al. (2019), which found that language mixing happened to emphasize the lexical item in the other language used by the speakers. Child : Engken batis umpik to? 'What happened to great grandmother's foot?' Great grandmother : Batis pik kene tiuk 'My foot was cut by a knife.'Child : Getih to? 'Is that blood?' Great grandmother : Ae.Nak metatu kan pesu getihne ye 'Yes.The foot is cut, so it is bleeding.'Child : Be misi obat to Umpik? 'Have you put medicine on it, Great grand?' Great grandmother : Ne be misi lidah buaya 'I put aloe vera on it already.'Child : Banyak misi lidah buaya 'That is a lot of aloe vera added.' 
The conversation happened when the child's great-grandmother accidentally got her foot cut.The main language used was Balinese.The Indonesian word obat 'medicine' was inserted into the Balinese utterance in the conversation.Another mixing was found in the use of the Indonesian word banyak 'much'.The phrase lidah buaya happens to be Indonesian and Balinese words to refer to 'aloe vera'.The reason of the mixing is because the child is more salient with the Indonesian word to say the object and also the fact that the great-grandmother was using the same word to refer the same object previously before the child did.Yip (2013) found that it is common for the parents in this case the great-grandmother, to language mixing their utterance even though they were intentionally used only one language at a time.Language mixing could be unavoidable. 'That belongs to the person in that TV.' This conversation occurred when the child watched MotoGP on the TV with his grandfather.In the conversation, the language used by the child when talking to the grandfather was Balinese, yet the child switched the word from Balinese to Indonesian in the middle of the conversation.The child used the word [obing] mobil 'car', an Indonesian word in Balinese utterance to point the object car.Besides, the child mixed the Balinese utterance with the Indonesian word by saying the word siapa 'who', and itu 'that.'The Indonesian word siapa was inserted in the Balinese utterance to ask for the ownership of a certain object, that was a car.The word [lah] is a Balinese word, a shortened form of the word ngelah 'have/own.'Thus, the Indonesian word siapa is combined with ngelah 'to have' in Balinese to utter who owns.The conversation happened when the child saw his dog was outside the cage one evening while it was raining.Some language mixings happened in this excerpt #4.First, the language mixing happened where the child mixed the Balinese utterance with the Indonesian word masuk 'to get in.'Second, the language mixing of Indonesian phrases could be seen from masuk kandang 'get into the cage'.The language mixing also happened in the sentence "uujan belus nyen itu kuluknya".The word itu 'that' is used to point to something in Indonesian. #4 The conversation between grandmotherchild Although the conversation was in Balinese, the child switched the language in the middle of the conversation between the grandmother and the child. Moreover, there was a mix of the Balinese word kuluk 'dog' with the Indonesian suffix -nya (possessive) in the word kuluknya 'the dog.'The child might mix languages because he knows that his grandmother could understand his utterance despite being in Indonesian or Balinese.It deals with the listener's perceptions of the conversation (Gonzales et al., 2019). 
#5 The conversation between grandfatherchild Grandfather The conversation happened between the child and his grandfather when he played with a soccer ball in the middle of the house's yard.The child kicked the ball randomly, and it almost hit the bird's cage at the house.The child's grandfather tried to warn him not to kick randomly, or else the birds' cage might get hit that time, and the child would get scolded by his father.The language mixing happened in the child's utterance of excerpt #5, of which the child mixed the Indonesian phrase in Balinese utterance.The child said ndak boleh ngopak Bapak 'He is not allowed to be angry, my father.'The phrase 'ndak boleh' is 'not allowed' in Indonesian.The child switched his language from Balinese to Indonesian even though he spoke in Balinese with his grandfather.The reason for the mixing of the language is to emphasize the meaning of the idea to his grandfather.Similar to this finding is the study conducted by Riksem et al. (2019) about the American-Norwegian language mixing found the probability of the mixing due to the need to emphasize the idea of an utterance in the other language. #6 The conversation between grandmotherchild Grandmother This conversation happened between the child and the grandmother when the child was ill of cough and flu.He needed to drink the medicine, but he refused to drink it with his grandmother and preferred to have it with his mother instead.The child said ndak mau nenek 'I don't want it, grandmother' to refuse his grandmother's offer.It was an Indonesian statement that occurred in the conversation of the Balinese language.Besides, the child said John mau sama mama minum obat, which means he wants to drink the medicine with his mother in a full Indonesian utterance.However, the next utterance was followed by a Balinese utterance, 'John sing nyak minum obat ajak nenek 'John does not want to drink the medicine with grandmother'.The verb phrase minum obat are the Indonesian to say 'drink the medicine.'In this situation, the phrase minum obat is used in Balinese utterance.People usually do not change the language into Balinese even though they are talking in Balinese linguistic environment.Riksem et al. (2019) found an indication of bilinguals to insert the verbs and nouns of the other language to the other language because of the common sense of using the language in a social environment.The conversation occurred when the child saw his aunt's eyes-blindfold in his aunt's bedroom.Two language mixings occurred in the conversation.The child mixed Indonesian with Balinese to indicate where he wanted to buy the blindfold.Ditu is a Balinese word to say 'there', and the rest of the sentence is Indonesian.Besides, the child inserted the Balinese word ajak 'and' to connect the sentence.The mixes happened directly and subconsciously done by the child.The language mixing was that the child lacked vocabulary knowledge in Indonesian to say prepositions.According to Montanari et al. 
(2019), this is the usage of language mixing to fill the utterance's vocabulary gaps.The conversation between the child and his mother happened when the child tried to ask his mother to buy a necklace for him, mistakenly saying different things to refer to the necklace.The child started the sentence by using the Indonesian language and at the end, it directly switched to Balinese.The child produced intra-sentential codemixing once the topic of accessories with the mother that was illustrated by excerpt #8 (Poplack, 1980).The italic word constituted of boongnya 'the neck', a Balinese word boong 'neck' and Indonesian suffix -nya (possessive) 'it's' or 'the.'The mother needs to emphasize and explain the meaning and function of the word 'bracelet' by referring to where it was going to be worn on the body, which is on the neck.Riksem et al. (2019) found that it is very commonly found in bilinguals to add suffixes from the other language to another language and this is considered to be the ability of bilinguals to acknowledge the syntactical structures of both languages.This conversation occurred when the father was taking the child to a kiosk that sells food for pets.At the beginning of the conversation, the sentence uttered by the child was started with the Balinese language and ended up with Indonesian.The phrase kel baang in Balinese means 'will give.'Moreover, the child mixed the Balinese word kedis 'bird' and the Indonesian suffix -nya 'the' to say 'the bird.' Riksem et al. (2019) found similar data of bilinguals to mix the language in terms of verbs and nouns in conversations.Besides, the young bilinguals might do this language mixing to fill the vocabulary gaps in their utterances (Montanari et al., 2019).The conversation occurred when the child wanted to watch the movie Rio 2, which happened to have a scene that showed a snake.However, the situation was before the mother turned the tv on.So the mother got a little confused with the sudden request of the snake.The child tried to explain where to find the snake by pointing at the TV therefore, the mother understood that he meant the snake in the movie.The language mixing happened in the sentence itu ulal yang lantang itu Mama.The child mispronounced the word [ulal] ular 'snake'.The language used by the child with his mother was Indonesian.However, in the sentence, the child inserted the Balinese word lantang 'long' to show the snake's size in the middle of the Indonesian utterance.The sentence still made sense because the Balinese word only emphasizes the snake's size or shape.The child tried to tell the mother that the snake was long.Based on Riksem et al. (2019), this belongs to the ability of young bilinguals to switch the adjective from one language to the other language to emphasize the description within an object.This conversation occurred when the father, mother, and the child and the child's brother were playing and relaxing in the bedroom while watching a performance of masks on the television.The child was curious about the headpiece on top of the masked performer and considered it a towel.The language mixing of Balinese words in Indonesian utterances happened in excerpt #11.John was trying to explain the situation to his father by saying ndak ini, isi handuk ini, ajak mamak?'No, this is, is this a towel, with mama?' In English, ajak is a Balinese word to say 'with'.The utterance used Indonesian while the child added one Balinese word in the middle. 
#11 The Besides, the switch from Indonesian to Balinese in one utterance as Apa ini? Mama misi ape ne? Misi handuk?'What is this?Mama what is she wearing?It has a towel?'The phrase apa ini? 'what is this' is Indonesian.The other two sentences were in the Balinese language.This mixing emphasized the meaning of the question, which was using Indonesian at first and then switched to Balinese after.It supports the finding of the previous research by Hoff et al. (2018) about the English-Spanish language mixing, whose language mixing shows children's proficiency in using both languages, especially for the expressive domain.The conversation occurred between the child and the mother when the child was about to sleep at night.The child just came back from traveling with his grandfather and grandmother.The language used in the conversation is Indonesian; however, the child mixed Balinese words in Indonesian utterances to tell the activity he had done.The Indonesian word is probably not yet familiar to him.The word melali is the Balinese word to say bepergian 'travelling' in Indonesian.Besides, the language mixing also occurred in the last utterance, with the Balinese word ajak 'with.'This occurred simply because the child subconsciously switched the language from Indonesian to Balinese.The child is familiar with the Indonesian word sama 'with', the shorter version of bersama in Indonesian, which was said earlier in the same utterance.The data supports Bosma & Blom (2018) about language switching to fill the vocabulary gaps.Riksem et al. (2019) found that children switch nouns and verbs in their utterance using data from children acquiring American and Norwegian languages. Discussion The natural tendency of bilingual children is to mix their languages, which means they use both languages in a single sentence.Child bilingualism involves borrowing and code-switching (Grosjean, 2013).The child frequently mixed the languages of Bahasa Indonesia into Balinese and vice versa.These were also documented in the previous study of English-Spanish language mixing of pre-school children (Montanari et al., 2019).However, from observation and data analysis, the child can differentiate the two languages appropriately depending on the interlocutors.For instance, when the child talks to his grandmother, grandfather, and great-grandmother, he uses Balinese Balinese in his utterances instead of Indonesian.Meanwhile, during his interaction with his parents, the child uses Bahasa Indonesia more than the Balinese. 
In terms of language mixing, it happened for several reasons.First, the child does not know the corresponding words or phrases in the other language or lacks lexical entry in the appropriate language.Second, the words or phrases inserted are more familiar in the child's ears.In other words, they are used more frequently in the child's environment, as confirmed by Lindholm & Padilla (1978).Third, the child is mixing languages because the adults do language mixings, such as the parents and other family members.Thus, mixing is available in the input.In the conversation between the grandmother and the child, for example, in Balinese utterance, the grandmother inserted the Indonesian word obat to the Balinese conversation instead of ubad, a Balinese word.Yip (2013) stated that even parents who claim not to mix themselves are unlikely to avoid it consistently.Fourth, language mixing emphasizes the meaning of the idea or the child's intention by clarifying the words in the other language corroborating the finding of Martiana (2013).In this study, the child often switches the word from Balinese to Indonesian and from Indonesian to Balinese, implying that the child can use both languages in different contexts (Gonzales et al., 2019). On the other hand, this implies quite the opposite, which can also mean that the child was still learning the vocabularies of Indonesian and Balinese.According to Bosma & Blom (2018), children tend to code-switch their language as a strategy to replace unknown words and fill the gaps in linguistic knowledge.Now, even though the language mixing can be the lack of vocabulary or as the strategy to fill the gaps, according to Green & Wei (2014), this proves that children involve cognitive control while doing it.Halmari (2005) also confirms that language mixing in young bilinguals is not necessarily evidence of attrition of a weaker language or failure of inhibitory control. Conclusion This study concludes that language mixing occurs in the language production of a child simultaneously exposed in Balinese and Indonesian from birth.The child inserted content words, phrases as well as function words.This study also proves that the bilingual child can differentiate two linguistic systems from a very young age.It is shown where the child can communicate using different languages appropriately to different interlocutors.The child can communicate in Bahasa Indonesia to the parents and Balinese to the extended family members.The language mixing happens due to the salient of the words in the other language, the frequent use of the words or phrases in the child's environment, the availability of mixing in the input, and an effort to emphasize meaning.The study's findings are applicable to the subject at hand.Additional research involving children growing up in similar language environments is necessary.
5,138
2022-09-07T00:00:00.000
[ "Linguistics" ]
Sensitive and Reproducible Photocatalytic Activity Evaluation Instrument for Transparent Coatings A new photocatalytic activity (PC) measurement instrument based on the measurement of the photo-induced reduction of Ag ions was proposed. The feature of this system is that it performs “ultraviolet irradiation for PC activation” and “Ag film thickness determination for PC evaluation” simultaneously and automatically. This realizes a PC measurement system with high sensitivity, a wide dynamic range, easy operation and good reproducibility, which is especially suited for the PC measurement of coatings on transparent substrates. Introduction Since the discovery of the photosensitizing effect of the TiO2 electrode on the electrochemical decomposition of water [1], TiO2 has attracted significant attention, and the photocatalytic properties of TiO2 have become a major area of intensive research. TiO2 has shown interesting phenomena, such as photo-induced oxidation and reduction and photo-induced hydrophilic/hydrophobic switching [2][3][4][5], and has been widely considered the most promising photocatalytic material. For the practical use of this photocatalyst, the coating technology of TiO2 is the key technology and has been studied intensively for achieving higher photocatalytic activity [6][7][8]. For this purpose, reproducible and high-throughput photocatalytic activity evaluation for coatings is necessary, and several methods have been proposed. The effective surface area of photocatalytic coatings is very small compared with that of powders, and hence the photocatalytic activity measurement for coatings should be not only reproducible but also highly sensitive. Photocatalytic reduction of Ag ions results in Ag film formation on the surface of the photocatalyst [9], and this phenomenon has been used for photocatalytic activity evaluation [8,9]. This method is very sensitive and applicable to the evaluation of the photocatalytic activity of coatings. However, a well-trained operator is necessary for reproducible measurement at high throughput, since this method requires “ultraviolet irradiation” and “Ag film thickness determination” to be alternated [8]. In this study, we propose a modified Ag photoreduction method which is capable of performing “ultraviolet irradiation” and “Ag film thickness determination” simultaneously and automatically. Accordingly, a sensitive, reproducible and high-throughput photocatalytic activity evaluation method, especially for transparent coatings, was realized.
Experimental Three kinds of TiO2 films coated on quartz substrates were used to demonstrate the potential of the new photocatalytic activity measurement method. A pulse-powered sputtering apparatus, which is described elsewhere in detail [7], was applied for coating the TiO2 films. The deposition conditions for the three kinds of films are listed in Table 1. The crystalline structures of the films were evaluated with an x-ray diffractometer (RIGAKU RINT). The schematic drawing of the photocatalytic activity measurement apparatus is shown in Figure 1. The TiO2-film-coated quartz glass is placed in a quartz cell, and the cell is filled with an aqueous AgNO3 solution of 0.1 mol/l. The light from the D2 lamp and the Xe lamp is mixed by a half mirror and irradiated onto the surface of the TiO2 coating. The ultraviolet light intensity measured by an ultraviolet photometer (MINOLTA UM-10 + UM-360) at the sample was 0.03 mW/cm². Note that the irradiation should be done from the coated side of the quartz glass. The ultraviolet light with wavelengths smaller than 380 nm acts as excitation light for the photocatalytic reaction and is absorbed while traveling through the TiO2 coating. The light with wavelengths larger than 380 nm travels through the TiO2 coating, the substrate and the cell, finally reaching the spectrometer, which is capable of measuring the optical transmittance of the sample. On the surface of the TiO2 coating, the ultraviolet irradiation induces photocatalytic reduction of the Ag ions, resulting in Ag film formation on the surface of the TiO2 coating. The optical transmittance of the sample gradually decreases with the growth of the Ag film on the surface. The higher the photocatalytic activity of the coating is, the quicker the decrease of the optical transmittance becomes. Hence, by recording the transition curve of the optical transmittance of the sample (transmittance vs. time), the photocatalytic activity of the coatings can be evaluated. The optical transmittance was measured every 12 seconds and recorded automatically by the optical spectrometer (Shimadzu Multispec 1500). The measurement time was set to 120 minutes for one sample, resulting in 600 points for one transition curve. All the operator has to do for measuring one sample is to put the sample in the quartz cell filled with aqueous AgNO3, set it on the photometer and start the measurement software. Hence this method is reproducible (not dependent on the skill of the operator) as well as easy to operate.
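The paper ranks samples by how quickly the visible-light transmittance drops under UV irradiation; it does not define a single numerical figure of merit. As an illustration only, the sketch below (plain Python/NumPy with hypothetical variable names) turns a recorded transition curve — 600 transmittance readings taken every 12 s over 120 min — into a simple activity index, taken here to be the initial rate of transmittance decrease from a straight-line fit over the first few minutes. This choice of metric is an assumption of the sketch, not the procedure used in the paper.

```python
import numpy as np

def activity_index(transmittance, dt_s=12.0, fit_window_s=600.0):
    """Estimate a photocatalytic activity index from a transmittance transition curve.

    transmittance : sequence of visible-light transmittance readings (%, one per point)
    dt_s          : time between readings in seconds (12 s in the setup described above)
    fit_window_s  : length of the initial interval used for the linear fit

    Returns the initial decrease rate of transmittance in %/min; a larger value means
    the Ag film grows faster, i.e. higher photocatalytic activity.
    """
    t = np.arange(len(transmittance)) * dt_s          # elapsed time in seconds
    n = max(2, int(fit_window_s / dt_s))              # number of points in the fit window
    slope, _ = np.polyfit(t[:n], np.asarray(transmittance[:n], dtype=float), 1)
    return -slope * 60.0                              # convert %/s decrease to %/min

# Toy usage with synthetic curves: a "more active" sample decays faster.
time_s = np.arange(600) * 12.0
sample_A = 90.0 * np.exp(-time_s / 30000.0)   # slow decay  -> low activity
sample_C = 90.0 * np.exp(-time_s / 3000.0)    # fast decay  -> high activity
print(activity_index(sample_A), activity_index(sample_C))
```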
Results and Discussion Figure 2 shows the x-ray diffraction patterns of the three kinds of TiO2 films coated on quartz glass substrates. An amorphous film (sample A: Figure 2(a)), a slightly crystallized film (sample B: Figure 2(b)) and a well-crystallized film (sample C: Figure 2(c)) were obtained, respectively. The x-ray diffraction peaks of sample C were assigned to those of TiO2 with anatase and rutile structures. It was revealed that sample C was mainly crystallized into the anatase structure, and a small amount of the rutile structure was observable by x-ray diffraction. Sample B showed only one broad peak, which was assigned to the rutile structure. The photocatalytic activity measurement for coatings should be much more sensitive than that for powder materials, since the effective surface area of the coatings is much smaller than that of the powders. As a result, it has been very hard to evaluate the photocatalytic activities of TiO2 coatings, especially those with poor crystalline quality (e.g. samples A and B), since the photocatalytic activities of those coatings are quite low. Figure 3 shows the results of the photocatalytic activity measurement for the three samples (A, B and C) by the method described above. The transmittances of visible light of the samples are plotted against UV irradiation time. The higher the photocatalytic activity of the sample is, the further the transmittance curve shifts to the lower left of the graph (shown by an arrow in Figure 3). The transmittance curve of sample C was observed at the lower left side of those of samples A and B, reflecting the better crystalline quality of sample C. Note that the photocatalytic activity of the amorphous sample (sample A) was measurable, and a clear decrease in transmittance due to the Ag film formation on the sample surface was observed (Figure 3(a)), demonstrating the very high sensitivity of this method for photocatalytic activity measurement. It is also worth noticing that the photocatalytic activities of the amorphous sample (sample A) and the slightly crystallized sample (sample B) were clearly distinguishable (Figures 3(a) and (b)). Thus the high sensitivity combined with the wide dynamic range of this photocatalytic activity measurement was demonstrated. Conclusion A new photocatalytic activity measurement system based on the measurement of the photo-induced reduction of Ag ions was proposed. The feature of this system is that it performs "ultraviolet irradiation for photocatalytic activation" and "Ag film thickness determination for photocatalytic activity evaluation" simultaneously and automatically. This method was applied to the TiO2 coatings on quartz glass substrates and was proven to be sensitive enough to evaluate the TiO2 coating with an amorphous structure. Thus a photocatalytic activity measurement system with high sensitivity, a wide dynamic range, easy operation and reproducibility, which is especially suited for the photocatalytic activity measurement of coatings on transparent substrates, was realized. Figure 1. The schematic drawing of the photocatalytic activity measurement apparatus proposed in this study. Figure 2. The x-ray diffraction patterns of the three kinds of TiO2 films coated on quartz glass substrates. An amorphous film (sample A: (a)), a slightly crystallized film (sample B: (b)) and a well-crystallized film (sample C: (c)) were obtained, respectively. The letters "a" and "r" denote the anatase and rutile structures, respectively.
1,779.4
2012-02-28T00:00:00.000
[ "Materials Science", "Chemistry" ]
First-principles treatment of Mott insulators: linearized QSGW+DMFT approach The theoretical understanding of emergent phenomena in quantum materials is one of the greatest challenges in condensed matter physics. In contrast to simple materials such as noble metals and semiconductors, the macroscopic properties of quantum materials cannot be predicted from the properties of individual electrons. One example of scientific importance is the strongly correlated electron system. The neither-localized-nor-itinerant behavior of electrons in partially filled 3d, 4f, and 5f orbitals gives rise to rich physics such as Mott insulators, high-temperature superconductors, and superior thermoelectricity, but hinders a quantitative understanding of the low-lying excitation spectrum. Here we present a new first-principles approach to strongly correlated solids. It is based on a combination of the quasiparticle self-consistent GW approximation and the dynamical mean-field theory. The sole input in this method is the projector to the set of correlated orbitals for which all local Feynman graphs are being evaluated. For that purpose, we choose very localized quasiatomic orbitals spanning a large energy window, which contains the most strongly hybridized bands as well as the upper and lower Hubbard bands. The self-consistency is carried out on the Matsubara axis. This method enables the first-principles study of Mott insulators in both their paramagnetic and antiferromagnetic phases. We illustrate the method on the archetypical charge-transfer correlated insulators La2CuO4 and NiO, and obtain spectral properties and magnetic moments in good agreement with experiments. Introduction. The first-principles description of strongly correlated materials is currently regarded as one of the greatest challenges in condensed matter physics. The interplay between localized electrons in open d- or f-shells and itinerant band states gives rise to rich physics that makes these materials attractive for a wide range of applications such as oxide electronics, high-temperature superconductors and spintronic devices. Various theoretical approaches are currently being pursued [1]. One of the most successful approaches is the dynamical mean field theory (DMFT) [2]. In combination with density functional theory [3,4], it has described many features of strongly correlated materials successfully and highlighted the surprising accuracy of treating correlations local to a small subset of orbitals exactly, while treating the remainder of the problem in a static mean-field manner [5,6]. Correlations are built into the full electron Green's function G through the electron self-energy Σ, $G(\mathbf{r},\mathbf{r}',i\omega_n) = \langle\mathbf{r}|\,[\,i\omega_n\hat{1} - \hat{H}_H - \hat{\Sigma}(i\omega_n)\,]^{-1}\,|\mathbf{r}'\rangle$ (1), where r is the position vector, ω_n is the Matsubara frequency and $\hat{H}_H$ is the Hartree Hamiltonian. The successes of the DMFT method suggest that, in a certain energy range, a good expression for the self-energy Σ is provided by $\Sigma(\mathbf{r},\mathbf{r}',i\omega_n) = \sum_{R\alpha\beta} P^{*}_{R\alpha}(\mathbf{r})\,\Sigma^{\alpha\beta}_{\mathrm{loc}}(i\omega_n)\,P_{R\beta}(\mathbf{r}') + V(\mathbf{r},\mathbf{r}')$ (2), with Σ_loc described by an impurity model, where R is a lattice vector and P_Rα(r) is the projector to a set of local orbitals α centered at R. The numerous successes of DMFT in different classes of correlated materials revived the interest in the long-sought goal of achieving a diagrammatically controlled approach to the quantum many-body problem of solids. The free energy functional can be expressed in terms of the Green's function G and the screened Coulomb interaction W [7,8].
The lowest order perturbation theory in this functional gives rise to the GW approximation [9] while the local approximation applied to the most correlated orbitals gives rise to an extended DMFT approach to the electronic structure problem [8]. The addition of the GW and DMFT graphs was proposed and implemented in model Hamiltonian studies [10] and in realistic electronic structure [11,12]. There is now intense activity in this area with many recent publications [13][14][15][16] triggered by advances in the quality of the impurity solvers [17][18][19], insights into the analytic form of the high frequency behavior of the self-energy [20] and improved electronic structure codes. Several conceptual issues remain to be clarified before the long sought goal of a robust electronic structure method for solids is attained. The first issue is the choice of local orbitals (P Rα (r) in Eq. (2)) on which to perform the DMFT method (summation of all local Feynman graphs). The second issue is the level of self-consistency that should be used in the calculation of various parts of the diagrams included in the treatment ( free or bare Green's function G 0 vs self-consistent interacting Green's functions G). The self-consistency issue appears already at the lowest order, namely the GW level, and it has been debated over time. The corresponding issue in GW+DMFT is expected to be at least as important, but has not been explored, except for model Hamiltonians [21,22]. At the GW level, it is now well established that Hedin's fully self-consistent formulation [9], while producing good total energies in solids [23], atoms and molecules [24,25], does not produce a good approximation to the spectra of even simple semiconductors or weakly correlated metals. Instead, using a free (quasiparticle) Green's function in the evaluation of the polarization graph of the GW method gives much better results for spectral functions. This is the basis of the one-shot quasiparticle (QP) GW, starting from LDA [26] or from others [27,28]. Unfortunately, the answer depends on the starting point. A solution for this problem is to impose a self-consistency equation to determine G 0 . This method, called the quasiparticle self-consistent GW (QSGW) [29], is very success- ful reproducing the spectra of many systems [30][31][32]. Previous GW+DMFT studies typically used one shot QPGW and projectors spanning a relatively small energy window [13][14][15][16]. In this work, we propose a different approach to the two issues: the level of self-consistency and the choice of the DMFT orbital. First we use the quasiparticle self-consistent GW in combination with DMFT to address the self-consistency issue. Next, we choose a very localized orbital for the summation of the higher order Feynman diagrams in DMFT, therefore the hybridization spans much larger energy window. In the LDA+DMFT context, the choice of very localized orbitals has provided a great deal of universality since the interactions do not vary much among compounds of the same family. This has been demonstrated in the studies of iron pnictides [33] and transition metal oxides [34]. This choice results in a second advantage as we will show below, namely the frequency dependence of the interaction matrix can be safely ignored. After the orbitals are chosen, all the parameters are selfconsistently determined. This is the first ab initio quasiparticle self-consistent GW+DMFT implementation and the first study on a paramagnetic Mott insulator within the GW+DMFT method. Methods. 
Our approach is carried it out entirely on the Matsubara axis, which requires a different approach to the quasiparticle self-consistency in GW [35], called Matsubara Quasiparticle Self-consistent GW (MQSGW), where the quasiparticle Hamiltonian is constructed by linearizing the self-energy and renormalization factor [36]. Working on the Matsubara axis, is numerically very stable, provide a natural interface with advanced DMFT solvers such as continuous-time quantum Monte-Carlo (CTQMC) [17][18][19] and has very good scaling in system size as in the space-time method [37]. (see Supplemental Material [38] for details). For DMFT, it is essential to obtain bandstructure in a fine enough crystal momentum (k) mesh to attain desired frequency resolution of physical quantities. To achieve such momentum resolution, we use a Wannier-interpolated MQSGW bandstructure in a large energy window using Maximally localized Wannier function (MLWF) [39], and than constructed local projector in a fine momentum mesh. In contrast to SrVO 3 [13][14][15][16] where a set of t 2g states is reasonably well separated from the other bands, correlated 3d orbitals in La 2 CuO 4 are strongly hybridized with other itinerant bands. In this case, it is necessary to construct local projectors from states in a wide enough energy windows to make projectors localized near the correlated atoms. We constructed local projectors in the energy window E F ±10eV Fig. 1(b). The Dashed lines in (b) and (c) represent electronic bandstructures within non spin-polarized MQSGW and spin-polarized MQSGW, respectively in which there are ∼82 bands at each k point, where E F is the Fermi level. Then we confirmed that absolute value of its overlap to the muffin-tin orbital (of which radial function is determined to maximize electron occupation in it) is more than 95%. Our choice of energy window is justified by the Cu-3d spectra being entirely contained in this window. Using constructed MLWFs in large energy window, we defined our local-projector is quasiparticle wavefunction with an index n, and N k is the number of k points in the first Brillouin zone. Static U d and J H are evaluated by a modification of the constrained RPA method [40], which avoids screening by the strongly hybridized bands. This screening by hybridization is included in our large energy window DMFT. For details, see Supplemental Material [38]. We divide dynamic polarizability within MQSGW approximation χ QP into two parts, χ QP = χ low QP + χ high QP . Here, χ low QP is defined by all transitions between the states in the energy window accounted for by the DMFT method (E F ±10eV ). Using χ high QP , we evaluate partially screened Coulomb interaction U −1 (r, r ′ , k, iω n ) = V −1 (r, r ′ , k) − χ high QP (r, r ′ , k, iω n ) and parametrize static U d and J H by Slater's integrals [41,42], where V is bare Coulomb interaction. The Feynman graphs included in both MQSGW and DMFT (double-counting) are the local Hartree and the local GW diagram. They are computed using the local projection of the MQSGW Green's function (Ĝ QP ) Finally, for the stable numerics, we approximated Σ DC (iω n ) ≃Σ DC (iω n = 0) since these low order diagrams are dominated by the Hartree-Fock contribution. Results. Fig. 2(a) shows the frequency dependence of real and imaginary parts of U d . It is calculated on an imaginary frequency axis and analytically continued using a Pade approximant [44]. We also plot the fully screened Coulomb interaction W d for comparison. 
Static U d is 12.5 eV and U d remains almost constant up to 10 eV. In contrast, in W d , there are several peaks due to low-energy collective excitations below 10 eV. At very high energy, U d approaches the bare coulomb interaction of 28 eV. Calculated J H is 1.4 eV and has negligible frequency dependence. By contrast, conventional constrained-RPA, in which 10 bands of mostly Cu-3d character are excluded from screening, results in static U d = 7.6 eV, which is too small to open the Mott gap, and which is also inconsistent with photoemission experiments on CuO charge transfer insulators [45]. We also computed the static U d and J H by requiring that the calculated excitation spectra of MQSGW+DMFT with (local) GW as the impurity solver matches the spin-polarized MQSGW spectra. Here we used non spin-polarized MQSGW band structure and allowed spontaneous magnetic long range order by embedding impurity self energy, which is function U d and J H , within spin-polarized GW approximation. In Fig. 2(b), we allowed U d to vary between 8-13 eV (at fixed J H = 1.4 eV) and we plot the size of the indirect gap. The gap size of this method matches the gap of spin-polarized MQSGW when U d ≈ 12 eV. If the choice of U d and J H is correct, the resulting spectra must be similar to the prediction of spin-polarized MQSGW method. We show this comparison in Fig. 2(c) to confirm a good match. In addition, the relative position of Cu-d band (the lowest en- ergy conduction band at S) to the La-d band (the lowest energy conduction band at Y) is also well matched justifying the approximation ofΣ DC (iω n ) ≃Σ DC (iω n = 0). Σ DC (iω n = 0) for Cu-d x 2 −y 2 orbital differs from nominal double counting energy [46] by only 1%, highligting again the advantages of using a broad window and narrow orbitals. We now discuss the magnetic moment associated with Cu and the electronic excitation spectra by using MQSGW+DMFT (with U d = 12.5eV , J H = 1.4eV ) in which the impurity is solved by the numerically exact CTQMC [17,18] and compare them with other methods. LSDA does not have a magnetic solution. In contrast, spin-polarized MQSGW, QSGW [29], and MQSGW+DMFT predict 0.7 µ B , 0.7 µ B , and 0.8 µ B , respectively. This is consistent with experimental measurements, although the later span quite large range 0.4µ B ∼ 0.8µ B [47][48][49]. We find that MQSGW opens too large gap in the antiferromagnetic (AFM) phase, while it remains metallic in the paramagnetic (PM) phase. LDA+DMFT predict too small excitation energy of Laf levels. We show that these deficiencies can be remedied by adding all local Feynman diagrams for the Cu-d orbitals using the DMFT and treating the other states within GW approximation. In the low-energy spectrum, LSDA does not have a insulating solution; there is a single non-magnetic solution with zero energy gap as shown in the bandstructure (Fig. 3(a)) and total density of states ( Fig. 4(a)). The non spin-polarized MQSGW also predicts metal as shown in Fig. 4(a), but the two bands of primarily Cu-d x 2 -y 2 character near the Fermi level are here well separated from the rest of the bands (dashed lines in Fig. 3(b)). Spinpolarized MQSGW calculation (dashed lines in Fig. 3(c)) yields qualitative different results from LSDA and non spin-polarized MQSGW calculation. The two Cu-d x 2 -y 2 bands are now well separated from each other with a bandgap of 3.4 eV. Spin-polarized QSGW [29] also yields insulating phase with a gap of 4.0 eV. 
In the experiment, the larger direct gap, as measured by optics, is ∼ 2eV [50,51]. We show that these deficiencies of LDA, QSGW and MQSGW in the low-energy spectra can be remedied by adding all local Feynman diagrams for the Cu-d orbitals using the DMFT. The LDA+DMFT calculation in Fig. 4(a), carried out by the all-electron LDA+DMFT method [34,46], predicts reasonable gap of 1.5 eV and 1.8 eV in PM and AFM phases, in good agreement with experiment and previous LDA+DMFT studies [34,[52][53][54][55]. Within MQSGW+DMFT, we find gaps of 1.5 eV and 1.6 eV in PM and AFM phases, respectively. The excitation spectra of MQSGW+DMFT in PM and AFM phase as shown in Fig. 3(b) and 3(c) are very similar as both are insulating with well separated Cu-d x 2 -y 2 bands, which is now also substantially broadened due to large scattering rate in Hubbard-like bands. The projected density of states of Cu-d x 2 -y 2 orbital in Fig. 4(b) shows the size of the gap more apparently. This gap is of correlated type as it results from the non-perturbative pole in the impurity self-energy near zero frequency. This pole is connected to spin fluctuations, which at low temperature results in an ordered AFM phase. The Zhang-Rice peak appears around ∼−2eV and strongly overlapps with O-p states . In the high energy region, the most distinctive difference is the position of La-f peak. It appears at ∼ 3eV within LDA and LDA+DMFT, but at around ∼ 9eV , in the inverse-photoemission spectra (cyan dotted line in Fig. 4(a)) [43]. By treating La-f within GW approximation, it appears at ∼ 10eV within MQSGW and MQSGW+DMFT. In summary, we introduced a new methodology within MQSGW+DMFT and tested it in the classic charge transfer insulator La 2 CuO 4 . Our methodology predicts a Mott-insulating gap in the PM phase, thus overcoming the limitation of LDA and QSGW. It yields more precise peak positions of the La-f states, thus improving the results of LDA+DMFT. The method should be useful in understanding electronic excitation spectrum of other strongly-correlated materials, in particular those where precise position of both the itinerant and correlated states is important. COMPUTATIONAL DETAILS: BASIS SET All calculations are performed using our relativistic spin-polarized, full-potential, linearizedaugmented-plane-wave package (RSPFLAPW) [1,2], which is based on full-potential linearized augmented plane wave plus local orbital (FPLAPW+lo) method. For this particular calculation, experimental lattice constants and atomic positions at the low-temperature orthorhombic phase [3] MATSUBARA QSGW CALCULATIONS The electron self-energy can be systematically expanded in terms of the dressed Green's function G and the screened Coulomb interaction W . Within GW approximation, we keep only the first term of the series expansion of self-energy in W : where r is the position vector in a unit cell, k is the crystal momentum, R is the lattice vector, ω n is Matsubara frequency, τ is Matsubara time, and s is spin index. Within Matsubara quasiparticle self-consistent GW (MQSGW) approximation, we calculate the dynamic polarizability and the electron self-energy by using quasiparticle Green's function (G QP ) instead of Full GW Green's function. First, we construct QP Green's function using the quasiparticle Hamiltonian (H QP ) by For the first iteration, we regard Hamiltonian within local density approximation as H QP . 
Next, the screened Coulomb interaction $W_{\rm QP}$ is evaluated within the random phase approximation (RPA) from the dynamic polarizability $\chi_{\rm QP}(\mathbf{r},\mathbf{r}',\mathbf{k},i\omega_n)$,

$W_{\rm QP}^{-1}(\mathbf{r},\mathbf{r}',\mathbf{k},i\omega_n) = V^{-1}(\mathbf{r},\mathbf{r}',\mathbf{k}) - \chi_{\rm QP}(\mathbf{r},\mathbf{r}',\mathbf{k},i\omega_n),$

where $V$ is the bare Coulomb interaction. The MQSGW electron self-energy is then calculated from $G_{\rm QP}$ and $W_{\rm QP}$, and a new quasiparticle Hamiltonian is constructed from the Hartree Hamiltonian $\hat{H}_H(\mathbf{k})$ together with the linearized self-energy and its renormalization factor. This process is repeated until self-consistency is attained.

The static $U_d$ and $J_H$ are evaluated by a method similar to constrained RPA [4], except that here we avoid removing the screening associated with the strongly hybridized bands, because screening by hybridization is already accounted for by our large-energy-window DMFT. We divide the dynamic polarizability into two parts, $\chi_{\rm QP} = \chi_{\rm QP}^{\rm low} + \chi_{\rm QP}^{\rm high}$, where $\chi_{\rm QP}^{\rm low}$ contains all transitions between states inside the energy window treated by the DMFT method ($E_F \pm 10$ eV). Using $\chi_{\rm QP}^{\rm high}$, we evaluate the partially screened Coulomb interaction

$U^{-1}(\mathbf{r},\mathbf{r}',\mathbf{k},i\omega_n) = V^{-1}(\mathbf{r},\mathbf{r}',\mathbf{k}) - \chi_{\rm QP}^{\rm high}(\mathbf{r},\mathbf{r}',\mathbf{k},i\omega_n).$

We then form the local matrix elements

$U_{m_1 m_2 m_3 m_4} = \int d\mathbf{r}\, d\mathbf{r}'\, U(\mathbf{r},\mathbf{r}',\mathbf{R}=0,i\omega_n=0)\, W^{*}_{\mathbf{R}=0,m_1}(\mathbf{r})\, W^{*}_{\mathbf{R}=0,m_2}(\mathbf{r}')\, W_{\mathbf{R}=0,m_3}(\mathbf{r}')\, W_{\mathbf{R}=0,m_4}(\mathbf{r}),$

where $W_{\mathbf{R},m}(\mathbf{r})$ is the Wannier function centered at $\mathbf{R}$ with orbital index $m$, expand them in spherical harmonics $Y_{lm}$, and parametrize the static interaction by Slater's integrals [5] as $U = F^0$ and $J_H = (F^2 + F^4)/14$.

Here, $P_{i,n}(\mathbf{k}) = \sum_{\mathbf{R}} \langle W_{\mathbf{R}i} | \psi_{n\mathbf{k}} \rangle\, e^{i\mathbf{k}\cdot\mathbf{R}} / \sqrt{N_k}$ is the projector onto the correlated subspace at each $\mathbf{k}$, $\psi_{n\mathbf{k}}(\mathbf{r})$ is the quasiparticle wavefunction with band index $n$, and $N_k$ is the number of $\mathbf{k}$ points in the first Brillouin zone. $\hat{E}_{\rm imp}$ and $\hat{\Delta}_{\rm imp}$ are the impurity level and hybridization function, which are the inputs to the impurity solver. $\hat{\Sigma}_{\rm embedded}(\mathbf{k},i\omega_n) = P^{\dagger}(\mathbf{k}) \left[ \Sigma_{\rm imp}(s,i\omega_n) - \hat{\Sigma}_{\rm DC}(i\omega_n) \right] P(\mathbf{k})$ is the embedded self-energy built from the impurity self-energy $\Sigma_{\rm imp}$ and the double-counting correction $\hat{\Sigma}_{\rm DC}$.
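A schematic illustration of the partial-screening construction described above, with the polarizability and bare interaction represented as small matrices in an arbitrary product basis. All numbers are toy values; the real calculation works with the full dynamic quantities defined in the text.

```python
import numpy as np

# Toy matrices in a small auxiliary basis (illustrative values only).
V = np.array([[28.0, 2.0],
              [2.0, 20.0]])                  # bare Coulomb interaction
chi_total = np.array([[-0.030, -0.002],
                      [-0.002, -0.020]])     # full RPA polarizability chi_QP
chi_low = np.array([[-0.012, -0.001],
                    [-0.001, -0.008]])       # transitions inside the E_F +/- 10 eV window

# Remove only the low-energy (DMFT-window) screening channels.
chi_high = chi_total - chi_low

# Partially screened interaction: U^{-1} = V^{-1} - chi_high
U = np.linalg.inv(np.linalg.inv(V) - chi_high)

# Fully screened interaction for comparison: W^{-1} = V^{-1} - chi_total
W = np.linalg.inv(np.linalg.inv(V) - chi_total)

print("partially screened U:\n", U)
print("fully screened W:\n", W)
```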
4,548.6
2015-04-28T00:00:00.000
[ "Physics" ]
Design and application of location error teaching aids in measuring and visualization As an abstract concept, ‘location error’ in <Machinery Manufacturing Technology> is considered to be an important element with great difficult to understand and apply. The paper designs and develops an instrument to measure the location error. The location error is affected by different position methods and reference selection. So we choose position element by rotating the disk. The tiny movement transfers by grating ruler and programming by PLC can show the error on text display, which also helps students understand the position principle and related concepts of location error. After comparing measurement results with theoretical calculations and analyzing the measurement accuracy, the paper draws a conclusion that the teaching aid owns reliability and a promotion of high value. Introduction At present, engineering colleges in China explain related contents to location error analysis and calculation with the assistance of PPT and CAI as teaching aids, lacking of visible location error analysis model.Location error is relatively abstract in practical production, especially when positioned with cylindrical surface, the position reference axis cannot be observed, which is difficult for students to make further understanding about calculation principle of location error.As a result, it is necessary to develop a set of visualized teaching aids of location error for machine manufacturing teaching. Location error analysis 2.1 Concept of location error Processing error will arise in workpiece location.Under the condition of adjustment method processing, the maximum position variation of process standard in the direction of working size resulted from workpiece positioning is called location error which is signified with .The location error includes datum mismatch error and datum displacement error .The location error resulted from inconformity between position standard and process standard is called datum mismatch error.The location error resulted from subsidiary manufacturing error of position and location error resulted from its fit clearance are called datum displacement error. excircle positioning mode and typical location error analysis There are two kinds of position modes of outside surface: centralized positioning and bearing positioning.The positioning standard of centralized positioning is axial lead of external cylindrical surface with the commonly used positioning device being spring head.The commonly used positioning device of support positioning is V-Block.The thesis adopts different types of position modes, and selects different process standards for finished surfaces of workpiece to make location error analysis and comparison, which can be more comprehensive.The simple instruction about location error calculation will be made with the example of V piece in the following contents.(1) Including as diameter tolerance of workpiece When adopting three kinds of process standards, the corresponding location error calculation is showed in table 1. Design scheme of visualized teaching aids of location error The teaching aids designed in the thesis are aimed to gain location error stated in above contents through measurement.As a result, the teaching aid should possess functions like generation, measurement, and display of location error.The teaching aids designed by the thesis are mainly composed by mechanical part, measuring part and control part, like in figure 2. 
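To make the calculation summarized in Table 1 concrete, the textbook datum-displacement relations for a cylinder of diameter tolerance T_d located in a V-block of included angle alpha can be sketched as follows. These are the standard formulas, stated here as an assumption since the paper's own table is not reproduced; if the 18 mm and 20 mm workpieces used later are taken as the two extremes of a 2 mm diameter variation, they give roughly 1.41 mm, 2.41 mm, and 0.41 mm, close to the measured 1.42 mm, 2.44 mm, and 0.4 mm.

```python
import math

def v_block_location_error(T_d: float, alpha_deg: float = 90.0) -> dict:
    """Datum-displacement error of a cylinder with diameter tolerance T_d in a V-block.

    As the diameter varies over its tolerance, the centre moves along the V
    bisector by T_d / (2*sin(alpha/2)); the upper and lower generatrices
    (up-bus / down-bus) additionally move by +/- T_d/2 relative to the centre.
    """
    s = math.sin(math.radians(alpha_deg / 2.0))
    return {
        "center":   T_d / (2.0 * s),
        "up-bus":   T_d / 2.0 * (1.0 / s + 1.0),
        "down-bus": T_d / 2.0 * (1.0 / s - 1.0),
    }

# Example: treat the 18 mm and 20 mm workpieces as the extremes of a 2 mm variation.
print(v_block_location_error(2.0))  # {'center': ~1.41, 'up-bus': ~2.41, 'down-bus': ~0.41} mm
```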
switching mechanism design of positioning parts In order to select different kinds of position element, we refer to indexing plate to design structure.We choose position element by rotating the disk.Then fix the disk to bracket by index pin, to make sure the plate won t move a tiny displacement during the experiment.After that, we fix position element to disk by bolts and locating pin.We also use the compression screw installed on the axle to clamp the workpieces. The two-dimensional assembling of location error teaching aids designed by the thesis is showed in figure 3. measurement mechanism design of location error We connect height gauge to grading ruler so as to transfer the tiny movement from the location of scriber to grading sensor.By adjusting the location of scriber, we obtain the up-bus, down-bus and the axis position of workpieces.After finishing design of location error analysis, make three-dimensional modeling with SolidWorks software.In addition, motion simulation is made to guarantee the usability of device. Three dimensional modeling and simulation of location error teaching aids 5 Design for measurement and control system and operation interface of location error teaching aid function of the location error teaching aid Function1: Select different kinds of workpieces (we take two different sizes of cylinders for example) Function2: Switch to various position element (we illustrate how V-block works in the experiment) Function3: Process basis selection Function4: Position Measurement-we get the location error based on up-bus and down-bus of workpieces.Function5: Data process and display (location error, datum mismatch error, datum displacement error can be visualized on the test display.) Grading ruler selection Location error teaching aid performs a series of operations by using grading ruler to measure the tiny movement and the text display to show the data.The grading ruler can transmit signal to PLC, utilizing highspeed counter to enumerate.It could give a clear presentation of movement and its accuracy can reach up to [5][6] Test display selection We choose cost-effective and easy connection test display as HMI to show results.It is of great economical and practical, and the programming is of great convenient.5.2.3 PLC (programmable logic controller) selection S7-200PLC has an advantage over rapidity, accuracy and stability.As a result, we take it as the control unit based on economy and well performance of its nuclear-CPU224. [7] The measurement and control section of teaching aid can be divided into four system unit.Figure 4 shows the transitive relation of data and signal. (1)grading ruler The grading ruler will transform a tiny movement to an electrical signal, then transmit the signal to PLC when measuring the position of process basis.After simple operation of location error, the system transports the relevant data to the test display.The series make structure clear and simple. 6 Take V-block as a position element to do location error experiments Figure 5 shows the location error teaching aids we developed.Through joint commissioning of hardware and software, we make the aids operate normally and meet the requirements.We select and position element to do location error experiment.We compare the measuring actual value to the theoretical value, in order to verify the feasibility and accuracy of teaching aids. 
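A small sketch of the data path just described, assuming the grating ruler is read through the PLC high-speed counter and that the location error is taken as the spread of the process-basis position over the measured workpieces. The resolution and the count values below are illustrative placeholders, not the actual experimental readings.

```python
GRATING_RESOLUTION_MM = 0.005  # assumed 5 um per count, matching the stated accuracy

def counts_to_mm(counts: int) -> float:
    """Convert high-speed-counter pulses from the grating ruler to displacement."""
    return counts * GRATING_RESOLUTION_MM

def location_error(basis_positions_mm: list[float]) -> float:
    """Location error = maximum variation of the process-basis position."""
    return max(basis_positions_mm) - min(basis_positions_mm)

# Example: scriber heights (mm) derived from up-bus/down-bus readings of two workpieces.
up_bus = [counts_to_mm(c) for c in (9050, 9010)]
down_bus = [counts_to_mm(c) for c in (5460, 5420)]
centers = [(u + d) / 2 for u, d in zip(up_bus, down_bus)]

print("location error (centre as process basis):", round(location_error(centers), 3), "mm")
```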
Measure the position of process basis During the experiment of location error, Adjustment Act is regarded as a prerequisite.By selecting different process bases and measure the position of them, we obtain the actual values of location error, Table 2 and table 3 show the result of the first set of experiment, table 2 and table 3 show the result of the second set of experiment, the theoretical results are displayed in table 4. Besides, workpiece1 is a cylinder with a diameter of 18 mm, and workpiece1 is a cylinder with a diameter of 20 mm. Processing and analysis of measurement data Take an average of measured in two experiments, table 5 shows the averaged errors. Table 7 the average of two experiments center as the process basis up-bus as the process basis down-bus as the process basis Comparison and analysis between experimental values and theoretical values of location error Compare the experimental results with the theoretical result and calculate the absolute difference of location error, results are as follows. Table 8 absolute difference of location error center as the process basis up-bus as the process basis down-bus as the process basis By comparison, we keep the difference of location error less than 0.05mm, and successfully control the relative error under 2%, which tests and verifies authenticity of the results as well as the ability of experimenting of the teaching aid. Measurement accuracy of location error teaching aid The cooperative relationship of assembling and the manufacture of dimensional tolerances among each part, will make an impact on the measurement precision of the grading ruler.So it is necessarily to construct dimensionchain analysis of the teaching aid. -the closed-loop regarded as the final form of loop during assembling process, the distance from the scriber plane of height gauge to the level.The definition and accuracy requirements of composing link are as follows: -the distance from horizontal plane to the centerline of hole on mounting plate with a dimensional tolerance of 0.1mm. -the deviation from the centerline of disk to the centerline of hole on mounting plate with a concentricity error of 0.1mm. -the deviation from the axis of mandrel to the centerline of disk with a concentricity error of 0.05mm. -the deviation from the centerline of disk to the axis of bearing with a concentricity error of 0.1mm. -the distance from the axis of bearing to the bottom of V-block, with a dimensional tolerance of 0.15mm. -the distance from the bottom of V-block to the centerline of work piece, with a dimensional tolerance of 0.1mm 03004-p.4 -the radius dimension of the workpiece, with a dimensional tolerance of 0.1mm.The functional relationship between closed-loop and composing link is (2) After taking total derivative for the equation of dimension-chain, we get (3) Plugging the values into the formula, we obtain (4) According to the formula of linear closed-loop, we get the dimensional tolerances of 0.05mm.From the experiment, we get the location error of 1.42mm, 0.4mm, 2.44mm, corresponding to the process basis based on upbus, center and down-bus.The dimensional tolerance of closed-loop is less than 15% of the minimum of the location error.For that case, we can testify the reliability and authenticity of the results. 
Conclusion The teaching aid we developed has a simple structure and is convenient to operate. With its demonstration and measurement functions, location errors can be visualized. The aid has the following characteristics. Visualization of location error: the abstract concept of location error is made tangible by experiment, and a direct demonstration helps students gain a deeper understanding of its principles. High accuracy: the grating scale is a highly sensitive device that reaches an accuracy of 5 um, markedly improving the resolution and yielding more reliable measurements; comparing the measured location errors with theoretical calculations confirms the reasonableness and accuracy of the teaching aid. Applicability to different locating modes and procedure sizes: the teaching aid supports different locating modes, and it can not only measure and display location errors under different locating modes but also analyze the location errors of workpieces of different procedure sizes under the same locating mode. It is a fully functioning device. In conclusion, the teaching aid fills the gap in practical observation of location errors. Positive reviews were received from students after the aid was demonstrated in class, indicating that it is valuable for application and popularization and is an ideal aid for teaching location error. Figure and table captions: Figure 1, location error analysis of a workpiece in a V-block (a keyway is milled in the workpiece, with the horizontal plane through the workpiece centerline taken as reference); Figure 2, basic composition of the teaching aid; Figure 3, assembly drawing of the location error teaching aid; Figure 4, module components; Figure 5, the location error teaching aid; Table 1, location error analysis of a workpiece in a V-block; Tables 2 and 3, the first set of experiments (position size and measurement error); Tables 4 and 5, the second set of experiments (position size and measurement error); Table 6, the theoretical results.
2,650.2
2015-01-01T00:00:00.000
[ "Engineering", "Computer Science", "Education" ]
Raman Spectroscopic Signature of Ectoine Conformations in Bulk Solution and Crystalline State Abstract Recent crystallographic results revealed conformational changes of zwitterionic ectoine upon hydration. By means of confocal Raman spectroscopy and density functional theory calculations, we present a detailed study of this transformation process as part of a Fermi resonance analysis. The corresponding findings highlight that all resonant couplings are lifted upon exposure to water vapor as a consequence of molecular binding processes. The importance of the involved molecular groups for water binding and conformational changes upon hydration is discussed. Our approach further shows that the underlying rapid process can be reversed by carbon dioxide saturated atmospheres. For the first time, we also confirm that the conformational state of ectoine in aqueous bulk solution coincides with crystalline ectoine in its dihydrate state, thereby highlighting the important role of a few bound water molecules. Introduction Ectoine (2-Methyl-1,4,5,6-tetrahydropyrimidine-4-carboxylic acid) is a zwitterionic, hygroscopic, low-weight organic molecule which is produced by extremophilic bacteriae in presence of harsh environmental conditions like high salinity. [1,2] Recent experimental and computational results revealed a broad plethora of interesting effects for ectoine in aqueous bulk solution. The presence of strongly bound water molecules around ectoine and the corresponding effects on the water structure were discussed as a rationale for the stabilizing and protective effects on proteins and lipid bilayers. [3,4,5] In addition to this remarkable water binding behaviour and the resulting hygroscopicity, [2] it was assumed that the properties of the first hydration shell around ectoine are responsible for further effects, ranging from the stabilization of proteins [6,7,8,9,10,1] to the protection of DNA from ultraviolet [11] and ionizing radiation damage. [12,13] In contrast to its stabilizing effects, previous studies also reported a destabilizing impact of ectoine on charged macromolecules with regard to direct and local interactions [14,15,16,17,18,19,11] in addition to radical scavenging properties. [12,20,11] Besides the aforementioned effects in bulk solution, recent single-crystal and powder X-ray, as well as single crystal neutron diffraction measurements also revealed conformational changes of crystalline ectoine upon dehydration. [21] In more detail, ectoine in its crystalline dihydrate state forms nanometer-sized channels with bound water molecules. Over a few days and even at ambient conditions, the dihydrate state undergoes a loss of water and finally transforms into a highly hygroscopic anhydrate form (Figure 1). This transition also involves a significant conformational change of the carboxylate group from axial position into an energetically more favorable equatorial conformation. It was shown by density functional theory (DFT) calculations, that the corresponding changes in the conformations and the associated metastable and stable states are strongly influenced by the amount of hydrating water molecules. [21] In this article, we study the transformational change of solid ectoine upon hydration and dehydration by means of confocal Raman spectroscopy and DFT calculations. The strong binding of water molecules to ectoine provides unique conditions for spectroscopic analyses as part of vibrational studies. 
The associated vibrational coupling and Fermi resonances allow us to monitor and to study in full detail the crucial interactions with water molecules. With regard to these points, our approach thus extends previous experimental studies [21] and elucidates the strong interactions between water and ectoine in combination with certain consequences for conformational changes. Our findings are then applied to the situation of ectoine in bulk water which shows by comparison a conformational agreement with crystalline ectoine in its dihydrate form. These results demonstrate that only a few bound water molecules are necessary to switch between different conformations as well as stable and metastable states. Theoretical Background: Fermi Resonances Fermi resonance is a very common mechanism in vibrational spectra of polyatomic molecules with complex structure. [22] It appears when a fundamental vibrational frequency lies closely to an overtone or combination frequency. Usually these vibrations concern the same part of the molecule and necessarily belong to the same symmetry point group such that any interaction with further molecules leads to their disappearance. As a result of the Fermi resonance, two peaks can be observed in the spectra so that the energy is transferred between the two frequencies. Most often, one mode is increased in its magnitude whereas the other one is decreased. According to the treatment of Betran et al. [23] and Devendorf et al., [24] the frequency gap D between an observed Fermi resonance doublet reads D ¼ ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi D 2 0 À 4W 2 q with the unperturbed frequency spacing D 0 and the anharmonic coupling strength W. Furthermore, the coupling strength W is calculated from the experimentally determined spectrum via the intensity ratio R according to ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi D 2 À 4W 2 p D À ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi ffi where I a and I b are the observed peak intensities (or integrated peak areas) of the Fermi doublet. It has been shown by Placzek [25] that the intensity ratio can be approximated by when ignoring all prefactors. Experimental and Numerical Details Anhydrate crystalline ectoine powder with > 95 % purity was purchased from Sigma-Aldrich. Ultra pure water (LiChrosolv for chromatography) and heavy water (Uvasol, for NMR spectroscopy with deuteration degree > 99:99 %) were obtained from Merck (Germany). Confocal Raman measurements were performed with a confocal Alpha300R instrument (WITec, Germany), equipped with a 20x Zeiss EX Epiplan DIC objective, a 532 nm laser (Excelsior 532-60) with a laser power of 20 mW. The spectrometer was an UHTS-300-VIS (grid of 600 gratings/mm) and an thermoelectrically cooled CCD-camera Andor DV-401A-BV-532 at À 64°C. The spectra were obtained through high precision Zeiss cover glasses focusing on one of the ectoine crystals. A picture of the crystal is shown in the supporting material. After obtaining adequate and optimized spectra of anhydrate ectoine, the sample was covered with a small vessel containing a few drops of the respective solvent or some CO 2 crystals to rapidly form a fully saturated H 2 O, D 2 O or CO 2 atmosphere, respectively. A series of spectral measurements at 4 s time-resolution (integration time 3 s, measurements interval 1 s) were obtained. 
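Since the two expressions above did not survive extraction intact, the relations of Refs. [23,24] are restated here in their commonly used form. This is a reconstruction consistent with the numbers quoted later, not a verbatim transcription of the original equations.

```latex
% Unperturbed spacing D0 in terms of the observed doublet splitting D and
% the anharmonic coupling strength W (equivalently D = sqrt(D0^2 + 4W^2)):
D_0 = \sqrt{D^2 - 4W^2}
% Intensity ratio of the Fermi doublet, from which D0 and W are extracted:
R = \frac{I_a}{I_b} = \frac{D - D_0}{D + D_0}
\quad\Longrightarrow\quad
D_0 = D\,\frac{1 - R}{1 + R}.
```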
The anhydrate-to-dihydrate ectoine transformation under the influence of water vapour took a few minutes to start and additionally few tens of seconds to reach the final stable hydration state. During this time, the solvent molecules (D 2 O or H 2 O) penetrated the probed confocal volume, expected to be few microns under present experimental conditions. [26] The reverse dihydrate-to-anhydrate transformation in CO 2 atmosphere took somewhat longer, about few tens of minutes to produce fully anhydrated ectoine within the confocal volume. DFT calculations In accordance with previous publications, [14,5] all DFT calculations were performed with the software package Orca 4.0.0.2 [27,28] and with the generalized gradient approximation BLYP functional. [29,30] The Kohn-Sham orbitals were expanded into the def2-TZVPP basis set [31] with dispersion corrections [32] and with the Becke-Johnson damping scheme. [33] For the calculation of the spectra with the method described in Ref. [34], we used an identical approach like in Ref. [14] where ectoine is interacting with 4 water molecules. In addition, we also calculated the spectrum of a single zwitterionic ectoine molecule embedded by a continuum solvent with a dielectric constant e r ¼ 80:4. In addition, ground-state energies were calculated for single zwitterionic ectoine molecules with axial as well as equatorial carboxylate group conformations, both in continuum solvent [35] and in gas phase approximation. Before the calculation of spectra, a minimization of the total energy (geometry optimization) was performed for 100 cycles. Results and Discussion Representative confocal Raman spectra of anhydrated and dihydrated ectoine and in aqueous 1 M ectoine bulk solution are shown in Figure 2. Two important anharmonic couplings in the regions 750 cm À 1 to 900 cm À 1 and 2900 cm À 1 to 3000 cm À 1 can be identified. These frequencies correspond to the two Fermi resonances, one involving ring deformation modes and the other the methyl group of ectoine, respectively. A visualization of these vibrations is shown in the supporting material. As can be seen, these two spectral features can be used to distinguish anhydrated from dihydrated solid ectoine in combination with the two conformations of the carboxylate group 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 ( Figure 1) as discussed in Ref. [21]. Furthermore, the preponderance of these bands is directly related to the hydration process as will be discussed in detail in the following. Fermi resonances in anhydrate ectoine The prominent Fermi resonance of anhydrate solid ectoine (doublet located at 775 and 878 cm À 1 in Figure 2, black A) involves the combination band of two ring deformation modes at 352 cm À 1 and 482 cm À 1 which form a Fermi resonance with a ring breathing mode at about 852 cm À 1 . From previous DFT calculations [14] it is known, that the combination mode has the same point group symmetry as the ring breathing mode, thereby satisfying the necessary condition for the occurrence of a Fermi resonance as shown in the supporting material. Importantly, all involved features contain contributions from the scissor vibrational mode of the carboxylate group. 
According to the discussion above and by setting the doublet splitting to 2 W = 103 cm À 1 in combination with the peak area intensity ratio R � 0.869, a separation of Δ � 7 cm À 1 is obtained for the coupling modes before Fermi resonance. This is in good agreement with the observed shift of Δ = 12 cm À 1 from 364 cm À 1 to 352 cm À 1 for the ring deformation mode upon hydration (supporting material). The assignment of the resonance is corroborated by the study of p-cresol and related molecules where an observed Raman doublet reveals Fermi resonance between the symmetric ring-breathing fundamental and the overtone mode of the non-planar ring vibration at 413 cm À 1 . [36] Fermi resonances also often occur in the CÀ H stretching region of molecules. [37] For the methyl group of ectoine, the Fermi resonance of interest occurs in the 2900 cm À 1 spectral region, where a coupling between the CÀ H symmetric stretch fundamental and a CÀ H bend overtone gives rise to two prominent bands (Figure 2, box B). In fact, several overtones and combination bands from the bending region are expected to be close to the CH 3 stretching mode with the same symmetry, so that multiple resonances can be expected. Therefore, it is difficult to clearly establish which CH 3 bending overtone (or combination of them) is involved in the Fermi resonance. However, the Fermi resonance of gaseous methanol in the CÀ H stretching region was assigned to bands at 2925 cm À 1 and 2955 cm À 1 [37] which is in reasonable agreement with the bands of anhydrated ectoine at 2934 cm À 1 and 2970 cm À 1 (Figure 2, box B). The disappearance of the resonances upon hydration clearly implies the important role of the CH 3 group for water binding. Monitoring ectoine hydration The corresponding integrated spectral densities of the aforementioned bands as a function of time are presented in Figure 3. As can be seen, all anharmonic spectral features of anhydrate ectoine quickly disappear upon exposure to water Figure 2. Confocal Raman spectra of anhydrated and dihydrated solid ectoine and in 1 M ectoine aqueous solution (red). The consecutively obtained spectra concerned pristine anhydrate ectoine (black), ectoine exposed to H 2 O atmosphere (blue), followed by exposure to CO 2 atmosphere (green). The spectra are normalised to the band at 1140 cm À 1 which is known to be relatively insensitive to the ectoine hydration state. The two spectral ranges of interest here are marked by rectangles: A) the Fermi resonance of the anhydrate ectoine ring deformations and ring breathing modes with the doublet observed at 775 cm À 1 and 878 cm À 1 , and dihydrate ectoine with the single band of the ring breathing mode at 852 cm À 1 . B) the Fermi resonance of the methyl group of the anhydrate with the doublet observed at 2934 cm À 1 and 2970 cm À 1 , concerning coupling of a CÀ H bending overtone and CÀ H stretching, the stretching in ectoine dihydrate is observed as a narrow predominant band at 2943 cm À 1 . See text for details. Figure 3. Integrated area intensities of bands (difference between dihydrate and anhydrate ectoine spectra at specified wavenumbers) as a function of exposure time to water-saturated atmosphere. In addition to single bands, the area intensity of the water stretching region above 3240 cm À 1 is shown in the inset. The hydration of ectoine within the confocal volume stops at the point at which the two Fermi resonance bands at 775 and 878 cm À 1 disappear. vapour. 
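Before continuing, the numbers quoted above for the ring-mode resonance can be verified directly with the relations restated earlier. This is a small worked example; the relation between R, D, and D0 is the standard one and is assumed to be the one used here.

```python
import math

D = 103.0   # observed doublet splitting, 878 - 775 cm^-1
R = 0.869   # peak-area intensity ratio of the Fermi doublet

delta0 = D * (1 - R) / (1 + R)          # unperturbed spacing before Fermi resonance
W = 0.5 * math.sqrt(D**2 - delta0**2)   # anharmonic coupling strength

print(f"unperturbed spacing: {delta0:.1f} cm^-1")                 # ~7 cm^-1
print(f"coupling strength W: {W:.1f} cm^-1 (2W ~ {2*W:.0f} cm^-1)")  # 2W ~ 103 cm^-1
```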
In more detail, a few minutes after exposure to water vapour, the integrated area densities for wavenumbers 775 cm À 1 and 878 cm À 1 (Figure 2A) start to decrease rapidly, while area intensities for 852 cm À 1 increase (Figure 3). This behavior correlates directly with the binding of water to ectoine as revealed by an increase in the intensity of the OÀ H stretching region above 3240 cm À 1 (inset of Figure 3). It is worth noting here that the OÀ H stretching peak of bulk water reveals a very broad band with a complex line shape which is fitted in the experimental spectrum with four Gaussian functions. [5] The bands below 3600 cm À 1 are attributed to the distinct contribution of the collective modes to the molecular polarizability. Thereby the intensity of the modes within distorted tetrahedral networks contribute at a higher frequency and, those involved in a ice-like network contribute at a lower frequency. The hydration behaviour is not restricted to water, as a comparable behaviour can also be observed upon exposure of anhydrated ectoine to saturated D 2 O atmosphere ( Figure 4). As can be seen a D 2 O-ectoine complex reveals at least 7 different states in the DÀ O stretching region around 2500 cm À 1 . Since, there is no overlap between ectoine CÀ H and DÀ O stretching frequencies (in contrast to H 2 O at Raman shifts larger than 2800 cm À 1 (Figure 2)), the spectrum can also be used to estimate the amount of D 2 O molecules bound to crystalline ectoine. This was achieved by the normalization of the CÀ H stretching region of ectoine and in comparison with the area intensities of the DÀ O stretching region for D 2 O hydrated ectoine with 1 M ectoine D 2 O solution (molar ratio D 2 O:ectoine = 50 : 1). The corresponding calculation yields a value of 3.2 D 2 O molecules bound to one ectoine molecule within the crystal. While this number is slightly higher when compared to the recently reported results for dihydrate crystal structures with 2 water molecules per ectoine, [21] it has to be noted that we did not include the unknown partial molar volume occupied by ectoine in 1 M solution for corrections. Interestingly, the reverse process as represented by the rapid transformation from dihydrated to anhydrated solid ectoine, can be triggered by the exposure to CO 2 saturated atmospheres ( Figure 2, green line). Although this conformational change is slower than the hydration process, it still can be considered as a rapid exchange mechanism, such that a complete dehydration of ectoine within the considered confocal volume can be observed on a time scale of few tens of minutes. The mechanistic aspects of this reaction are not known, however, we propose by reasons of chemical intuition that the transiently formed unstable H 2 CO 3 species within the nanometer size channels in the ectoine dihydrate crystal decompose under water removal. In the case of 1 M ectoine in aqueous bulk water solution (Figure 2, red line) a strong band near 850 cm À 1 is observed, while the two bands at 775 cm À 1 and 877 cm À 1 are missing. The solution spectrum is also closely related to the dihydrate spectrum with respect to the CÀ H stretching region around 2950 cm À 1 (Figure 2, box B). The excellent qualitative agreement between solid dihydrate ectoine and ectoine in bulk water implies that the carboxylate group of ectoine adopts an axial conformation in both environments. 
Therefore, it can be concluded that the axial conformation is stabilized under the influence of water molecules [21] and does not change upon the formation of a full hydration shell. Computed Raman spectra from DFT calculations The differences in the electronic ground state energies DE ¼ E eq ð Þ À E ax ð Þ as calculated for single ectoine molecules with the carboxylate group in axial (ax) and in equatorial (eq) conformation embedded into a continuum water model as well as in gas phase are presented in Table 1. Despite all previous findings, the corresponding results reveal that the equatorial state is more stable than the axial conformation in both gas phase and continuum water. The respective energy differences are DE ¼ À 5:51 kJ/mol in continuum water and DE ¼ À 10:03 kJ/mol in gas phase. As we will point out in the remainder of this section, these results can be rationalized by the missing crucial contribution of explicit water molecules. Notably, the energy differences between gas phase and continuum solvent DE S ax=eq ð Þ ¼ E sol ax=eq ð Þ À E gas ax=eq ð Þ reveal that the axial conformation (DE S ax ð Þ ¼ À 139:86 kJ/mol) is more stabilized than the equatorial state ( DE S eq ð Þ ¼ À 135:34 kJ/mol) in presence of continuum water which highlights the strong water affinity of the axial state. In contrast to continuum solvent and gas phase approximations, the outcomes of the DFT calculations change in presence of explicit water molecules. By means of energy minimization routines in terms of geometry optimization, one can observe that ectoine surrounded by four water molecules and in presence of a continuum water dielectric background indeed prefers the axial conformation ( Figure 5). Moreover, any initial equatorial conformation transforms into an axial state after a sufficient amount of geometry optimization steps which points to the fact that the equatorial state is not stable in presence of explicit water molecules. The corresponding results are in reasonable agreement with recent discussions and emphasize the crucial role of direct water binding for the conformational behavior of ectoine. [21] The computed Raman spectra for ectoine in the axial state with four water molecules as well as for single ectoine in a dielectric continuum solvent are shown in Figure 6. Both conformations reveal an axial conformation of the carboxylate group. In more detail, three water molecules strongly interact via hydrogen bonds with the carboxylate group while a fourth water molecules interacts with the ring nitrogen atom. From the fingerprint region between 250 cm À 1 and 1050 cm À 1 it becomes clear that a number of features in the calculated Raman spectra are consistent with the experimental results as discussed above. The presence of hydrating water molecules around ectoine implies the emergence of a strong band around 817 cm À 1 . A comparable band can also be observed in the experimental results for dihydrated ectoine at 852 cm À 1 (Figure 2). In absence of explicit water molecules around ectoine in a continuum solvent, a strong band near 800 cm À 1 is missing, while two medium intensity bands at 754 and 870 cm À 1 can be observed. In agreement with the considerations above, a pronounced ring deformation mode at about 300 cm À 1 becomes visible which borrows the intensity from the Fermi coupling. With regard to these findings, one has to conclude that the observed disappearance of Fermi resonances upon hydration requires the interaction with water molecules. 
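Returning to the energies in Table 1, a brief consistency check on the four values quoted above: the difference between the solvent stabilizations of the two conformers must equal the change of the axial-equatorial energy gap between gas phase and continuum water, and indeed

```latex
\Delta E_S(\mathrm{eq}) - \Delta E_S(\mathrm{ax})
  = -135.34 - (-139.86) = +4.52~\mathrm{kJ/mol}
  = \Delta E(\mathrm{solvent}) - \Delta E(\mathrm{gas})
  = -5.51 - (-10.03).
```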
Summary and Conclusion Confocal Raman spectroscopy measurements were used to study the vibrational behavior of anhydrated and dihydrated ectoine crystals. The spectra reveal that the vibrational spectrum of solid and anhydrated ectoine is dominated by a significant amount of anharmonic interactions. Upon exposure to a saturated water (or D 2 O) atmosphere, these interactions are rapidly lifted as a result of the formation of the ectoine-water complex. The associated conformational changes, meaning the transition of the carboxylate group from the equatorial to an axial conformation influences the ring deformation modes which are spectroscopically characterized. We are able to monitor the conformational change as well as the hydration process which takes place on a time scale of a few tens of seconds. Our results reveal that only a few water molecules instead of a full hydration shell are required to initiate this transition. Main molecular groups to identify this process are represented by the carboxylate as well as the methyl group. Further DFT calculations highlight that the disappearance of Fermi resonances is solely attributed to the interaction with water molecules instead of any conformational changes. The juxtaposition of the present data and the literature crystallographic results permits unambiguous assignment of the solution ectoine structure with axial carboxylate group conformation. The results of this study shed more light on the crucial interaction between ectoine and water molecules. The underlying hydration process can be reversed under carbon dioxide atmosphere which highlights the subtle balance of stable and metastable conformations. One may ask if these slight changes as well as the strong water interactions are of further importance for stabilizing and destabilizing effects of ectoine in macromolecular environments.
5,163.2
2020-07-06T00:00:00.000
[ "Chemistry" ]
EVS 25 Shenzhen , China , Nov 5-9 , 2010 Key Technologies of Hybrid Solar Electric Midibus Development The solar and lithium-ion hybrid electric midibus is a new energy vehicles.According to the 21 low-speed integrated Li-ion power midibus’s design and development requirements,this paper introduces some key Technologies need to be resolved,such as vehicle dynamic parameter matching technology, Lightweight design, high efficiency motor and controler technology, solar panels and charger technology, Battery management technology and advanced vehicle control technology. Based on the analysis of the key technologies, this paper put forward the corresponding solutions and implementation plan, and the midibus prototype is also developed. Through the vehicle and some components testing and experiments, which verify the correctness of such Key Techniques. Introduction Electric Vehicles are using batteries as the power source Vehicles.Solar electric car is the most clean, most promising green vehicles, which are transform the solar light energy into electricity stored in high-energy batteries, used to drive permanent magnet motor rotation, thereby driving vehicles to move.As a new green transport, it has zero emissions, low noise, wide source of energy, and many additional advantages.At present, because the conversion efficiency of solar cells can not very satisfactory, solar power is more suitable for short distance transport of vehicles or low speed situations [1], Such as attractions, schools, large communities, small towns, suburbs, etc.And now at home and abroad most ferry bus under 14 seats are Low energy density, low power system efficiency whose Power system adopts DC motor, the battery used lead-acid batteries.At the same time such ferry bus use fiberglass body, leading to lower load, and thus the energy utilization efficiency is low.Base on the market demand of 21-midibus, the lightweight, high energy efficiency of a new generation of lithiumion battery integrated solar electric midibus are developed. Application Scope The vehicle can be used under light rainfall condition.It is also suitable for campus, spots, and large communities such as regional vehicle. Key Techniques of Solar and Lithium-ion Hybrid Electric Midibus Development Vehicle dynamic matching technology The design parameters of electric vehicle power train components, Such like motor power and torque, transmission gear ratio, And a reasonable matching between them , etc, have significant impact on electric vehicle power, economy, driving range.As the vehicle speed is low, taking into account the motor's constant torque and highspeed low-speed characteristics, considering reduce the mass of transmission, improve the transmission efficiency and reduce the difficulty of driving, the fixed ratio transmission solution is adopted.Taking into account the load capacity and vehicle style, the Rear-wheel drive solution is used.Reference to the current light vehicle buses or similar loading vehicle transmission parameters, we select 4.55 as final reduction ratio, 3.652 or 2.39 as fixed-gear ratio.Combining vehicle model parameters, the electric speed, power and torque matching calculation can be carried out.Using the Independent development software EVSIM of electric automobile power matching technology to carry on Simulation calculation, and get the required curve shown in Figure 1. 
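As an illustration of the power and torque demand calculation behind Figure 1, a standard longitudinal road-load model can be sketched as follows. This is not the EVSIM implementation; the mass, drag, and rolling-resistance values are placeholders, while the gear ratios are the ones quoted above.

```python
import math

# Placeholder vehicle parameters (illustrative, not the paper's exact values).
m = 2800.0       # laden mass, kg
g = 9.81
f = 0.015        # rolling-resistance coefficient
rho = 1.2        # air density, kg/m^3
Cd, A = 0.6, 4.0 # drag coefficient and frontal area (m^2)
eta = 0.85       # driveline efficiency
r_wheel = 0.35   # wheel radius, m

i0 = 4.55        # final reduction ratio (from the text)
ig = 3.652       # fixed gear ratio (alternative: 2.39)

def demand(v_kmh: float, grade: float = 0.0):
    """Return (power at the motor shaft in kW, motor torque in N*m) at speed v."""
    v = v_kmh / 3.6
    theta = math.atan(grade)
    F = m * g * (f * math.cos(theta) + math.sin(theta)) + 0.5 * rho * Cd * A * v**2
    P_motor = F * v / eta / 1000.0
    T_motor = F * r_wheel / (i0 * ig * eta)
    return P_motor, T_motor

for v in (10, 20, 30, 40):
    P, T = demand(v)
    print(f"{v:2d} km/h: {P:5.1f} kW, {T:5.1f} N*m at the motor")
```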
High-efficiency Motors and Drive Systems Development Motor and Drive System is the one of the electric vehicle core development.There are currently DC motor drive system, AC induction motor drive control system, Switched reluctance motor drive system(SRM), and Permanent magnet synchronous motor (PMSM) drive control system, etc. the permanent magnet synchronous motor with permanent magnet excitation is high power density, large starting torque, smooth torque, low vibration and noise, easy to form a new magnetic circuit.It is very well satisfy the demands such like the electric vehicle load, wide speed range, low-speed high torque, high-speed low-torque, high Instantaneous Power, etc.It is the direction of development of electric vehicle motors [3].To meet the requirements of Matching vehicle dynamic parameters , Considering the vehicle driving range, 192V ,100AH lithium-ion battery is used as power supply.Corresponding permanent magnet synchronous motor and its controller are also designed and developed.The motor and controller parameters are shown in table 1. Distributed Battery Management System Electric vehicles usually use lead-acid battery, nickel metal hydride batteries or lithium ion battery as power source.Lithium-ion batteries because of its high energy density, high number times of rechargeable are considered the best battery for electric vehicles at present.Electric vehicles' batteries are generally composed by the battery packs which are making up by dozens or even hundreds of single cells in series or in parallel. During using it will cause the battery to the inconsistent, and affect the battery life and battery energy storage and so on.All these will bang inconvenience for battery performance, and security management and cause impact on the application of vehicle.For this reason, electric vehicles are typically equipped with battery power management system to achieve the estimated battery state of charge, and vehicle safety management and communication functions [4].Figure5: The solar and lithium-ion hybrid electric midibus Prototype and Test The following is the integration of solar electric vehicle parameters: Conclusion The low speed and large capacity electric bus in the city plays roads, slip roads to public transport, small and medium sized cities and urban public transport and large factories, mines, businesses, schools and tourist spots have a broad application space.The modified traditional bus program reduce energy use efficiency, is higher production costs, difficult to use solar energy and other clean energy as Auxiliary energy.This paper according to the characteristics of electric vehicles, design a new electric vehicle model having fashionable appearance, Develop a new motor and drive system, solar battery and charge system and lithium-ion battery management system, apply lightweight design.And these greatly improve the energy efficiency of electric vehicle, achieve to better operating results.Based on the prototype in operation, we will further design and develop closed electric midibus, and achieve industrialization as soon as possible. 
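Returning to the battery management functions described in this section, one of the most basic of them, state-of-charge estimation for the 192 V / 100 Ah pack, can be sketched by simple coulomb counting. This is a simplified placeholder; a production BMS combines it with voltage-based correction, cell balancing, and thermal management.

```python
class SocEstimator:
    """Simple coulomb-counting SOC estimator for a 100 Ah pack."""

    def __init__(self, capacity_ah: float = 100.0, soc0: float = 1.0):
        self.capacity_as = capacity_ah * 3600.0   # amp-seconds
        self.soc = soc0

    def update(self, current_a: float, dt_s: float) -> float:
        """current_a > 0 means discharge; dt_s is the sampling interval in seconds."""
        self.soc -= current_a * dt_s / self.capacity_as
        self.soc = min(max(self.soc, 0.0), 1.0)
        return self.soc

# Example: 30 minutes of driving at an average discharge current of 60 A.
est = SocEstimator()
for _ in range(1800):
    est.update(60.0, 1.0)
print(f"SOC after 30 min at 60 A: {est.soc:.2%}")   # about 70%
```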
Figure 1: Vehicle power demand curve. Figure 2: Electric motor characteristic curves. The solar and lithium-ion hybrid electric midibus uses 55 square 100 Ah lithium-ion cells, divided into 5 smart battery packs of 11 cells in series each, as shown in Figure 3. Each smart battery pack comprises the lithium-ion cells and an intelligent manager with smart sensors, thermal management, and equalization functions. The distributed battery management system, developed on embedded hardware, includes a parameter-setting module, a battery dynamic database module, a statistical analysis module, an equalization charging algorithm module, a state-of-charge (SOC) module, a state-of-health (SOH) module, an alarm and warning module, a thermal balance management module, a display module, and a control and communication module. Figure 3: Distributed battery management system. 3.4 Development of the Solar Cell and Charging System The design of the solar modules covers both their dimensions and their performance parameters. The dimensions depend on the structure and size of the roof; the layout is shown in Figure 4. To reduce weight, the module substrate is a lightweight, high-toughness resin material. The performance parameters are chosen according to the vehicle and battery characteristics [5], with a rated pack voltage of 192 V and a maximum voltage of 212 V. The design parameters of a single module are listed in Table 2: maximum power Pm = 60 W, module efficiency 17.5%, open-circuit voltage Voc = 20.1 V, short-circuit current Isc = 4.01 A, optimum operating voltage Vm = 16.9 V, optimum operating current Im = 3.57 A, color white, cell cutting size 125 x 93 (116.3). Connecting 16 modules in series gives a total open-circuit voltage of 321.6 V and a total optimum operating voltage of 270.4 V. Table 2: Solar module performance parameters. Item 1: Pm 60 W; efficiency 17.5%; cell cutting size 125x93 (116.3); Voc 20.1 V; Isc 4.01 A; Vm 16.9 V; Im 3.57 A; color white. The solar charger uses a step-down (buck) charging topology assisted by maximum power point tracking (MPPT). The charger circuit diagram is shown in Figure 4. The main DC/DC stage is a buck converter; the control circuit uses a C8051F330 microcontroller to process the sampled signals from the solar modules and the battery pack, perform MPPT, modulate the PWM waveform that drives the MOSFET through the gate-drive circuit, and send the solar voltage and current signals to the host computer. Figure 4: Charger circuit diagram.
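The maximum power point tracking mentioned above can be realized in several ways; a perturb-and-observe loop is one common choice and is sketched below. This is an illustrative sketch only, not the firmware running on the C8051F330 controller; read_v, read_i, and set_duty stand in for the ADC sampling and PWM routines.

```python
def mppt_perturb_and_observe(read_v, read_i, set_duty,
                             duty=0.2, step=0.01, iterations=100):
    """Simple perturb-and-observe MPPT loop for the buck charging stage."""
    set_duty(duty)
    p_prev = read_v() * read_i()
    direction = +1
    for _ in range(iterations):
        duty = min(max(duty + direction * step, 0.05), 0.95)
        set_duty(duty)
        p = read_v() * read_i()
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return duty

# Minimal usage with a mock PV panel whose power peaks at a duty cycle of ~0.5.
state = {"duty": 0.2}
mock_v = lambda: 20.0 * (1.0 - state["duty"])
mock_i = lambda: 6.0 * state["duty"]
def mock_set_duty(d): state["duty"] = d
print("converged duty:", round(mppt_perturb_and_observe(mock_v, mock_i, mock_set_duty), 2))
```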
3.5 Optimization of Vehicle Structure and Lightweight Design Since high-energy-density, high-performance batteries are not yet available, lightweight design is particularly important for electric vehicles [6]. The solar and lithium-ion hybrid electric midibus carries 21 passengers, equivalent to a traditional midibus. Conventional midibuses mostly use a multi-parallel double-beam chassis structure weighing close to 1800 kg, which increases the vehicle mass, lowers the payload ratio, and reduces energy efficiency. Considering the low vehicle speed and the operating environment, the project team selected a chassis similar in size, weight, and structure to a conventional midibus chassis, optimized the chassis structure, and analyzed its dynamics and structural strength. The body, seat frames, handrails, and other accessories are made of aluminum, and the seats are made of wood, to reduce vehicle weight. The vehicle is shown in Figure 5. Figure 6: Appearance of the solar and lithium-ion hybrid electric midibus. Table 1: Motor design parameters.
2,241.4
2010-03-26T00:00:00.000
[ "Engineering", "Computer Science" ]
An MPS-BNS Mixed Strategy Based on Game Theory for Wireless Mesh Networks To achieve a valid effect of wireless mesh networks against selfish nodes and selfish behaviors in the packets forwarding, an approach named mixed MPS-BNS strategy is proposed in this paper. The proposed strategy is based on the Maximum Payoff Strategy (MPS) and the Best Neighbor Strategy (BNS). In this strategy, every node plays a packet forwarding game with its neighbors and records the total payoff of the game. After one round of play, each player chooses the MPS or BNS strategy for certain probabilities and updates the strategy accordingly. In MPS strategy, each node chooses a strategy that will get the maximum payoff according to its neighbor's strategy. In BNS strategy, each node follows the strategy of its neighbor with the maximum total payoff and then enters the next round of play. The simulation analysis has shown that MPS-BNS strategy is able to evolve to the maximum expected level of average payoff with faster speed than the pure BNS strategy, especially in the packets forwarding beginning with a low cooperation level. It is concluded that MPS-BNS strategy is effective in fighting against selfishness in different levels and can achieve a preferable performance. Introduction In wireless networks such as ad hoc and mesh networks, the steady operation of the networks must rely on the cooperation for which packets should be forwarded sufficiently between nodes, namely, the routers in networks. However, for the sake of saving battery life for their own communications, some nodes in wireless network do not cooperate to the basic network functioning, making the network lack trust between nodes. ese nodes refuse to forward packets by dropping other's packets and trying to use others to send their own. is noncooperative behavior is named sel�sh behavior, and these routers are sel�sh nodes. �sers or nodes that want to maximize their own welfare and do not contribute to the network are de�ned as sel�sh nodes or free riders [1]. Now the sel�sh behavior is an important issue and is being vigorously researched. e simulation study presented in Michiardi and Molva [2] showed that the performance of multihop wireless networks severely degrades in face of sel�sh node's misbehavior. To deal with this problem, many mechanisms have been proposed to detect the sel�sh nodes and restrict their sel�sh behaviors in wireless networks. In terms of the previous work on this problem, there are main approaches based on the credit approach, the reputation approach, network coding approach, game theory, and so on. In the credit approach [3][4][5][6][7], cooperation is induced by means of payments received every occasion when a node acts as a relay and forwards a packet, and such credit can later be used by these nodes to encourage others to cooperate. Reputation-based schemes observe the behavior of their neighboring nodes through promiscuous overhearing and accordingly assign them a reputation rating which is used for identifying the sel�sh nodes. In reputation-based schemes [8][9][10][11][12][13][14], a node's behavior is measured by its neighbors using a watchdog mechanism, and nodes can be punished for noncooperation. To deal with the issue, cooperative nodes sometimes are perceived as being sel�sh because of unreliable transmission. Joang et al. [15] proposed a method based on subjective logic to discover the trust networks between speci�c parties. 
Kane and Browne [16] successfully transplanted and applied subjective logic to a wireless network environment. Due to the unreliability and lack of information of the wireless networks, the pure subjective logic-based model may lead to a high uncertainty value in some cases. To solve the problem, a reputation computation model was proposed to discover and prevent sel�sh behaviors by combining familiarity values with subjective opinions [17]. And to compute the reputation of the nodes in wireless networks, some techniques such as network coding and fuzzy recommendation were proposed [18,19]. For the credit approach and reputation approach, because the system needs a central control requiring an infrastructure, this method cannot be used in the distributed environment. �ame theory can easily cope with sel�sh behaviors and therefore was introduced to the research of sel�sh nodes related to wireless mesh networks. To adapt with the distributed environment, some detecting methods based on game theory were proposed in recent studies [20][21][22][23][24][25]. A distinct and novel approach named best neighbor strategy (BNS) was proposed to stimulate and enforce cooperation among such sel�sh nodes in an ad hoc or wireless mesh network environment, where there is no central authority to monitor their unacceptable behaviors [26]. Komathy and Narayanasamy [26] pointed out that there were four basic methods using game theory to defend sel�sh nodes in an ad hoc network. is paper is to discuss the �rst and third ones. e �rst is to choose the best response as a rational expectation according to the expected behavior of the other players, and this method is called maximum payoff strategy (MPS) in this paper. e third one is to choose the other player's strategy, if it is achieving more than others. Based on the third method, Komathy and Narayanasamy proposed the BNS [26] against sel�sh neighbors. �n BNS, each node records the total payoff of its every neighbor in each round of the packet forwarding game. Once one round is completed, the player changes its current strategy to its neighbor's current strategy, if the neighbor has achieved a higher total return than any other neighbor. BNS is able to converge faster to enforce cooperation among sel�sh nodes and is robust against sel�shness when invaded by sel�sh behaviors. In our paper, beginning with high cooperation level, BNS is able to provide a superior performance; however, it displays inferior behaviors beginning with low cooperation level, which will be discussed in the simulation sections in this paper. To improve the performance of BNS in low cooperation level, we then combine MPS and BNS to propose a scheme identi�ed as mixed MPS-BNS strategy against sel�sh nodes and sel�sh behaviors. In this paper, the basic and complete models of mixed MPS-BNS strategy are �rstly discussed and analy�ed, respectively, with two repeated packet forwarding (RPF) games of two player and multiplayer. en, via the simulation of the RPF games, the performance of the true BNS and the mixed MPS-BNS will be compared and discussed on average payoff and cooperation level. In MPS-BNS strategy, each node has a strategy space , consisting of two policies, namely, MPS-BNS and ALL Drop. e strategy space of a player is shown in Table 1, in which each player always drops others' packets in a certain probability of (0 ≤ ≤ ) and chooses MPS-BNS strategy in a probability of ( − ). e probability represents the sel�shness level and also simulates the mutation proportion. 
Each player chooses MPS and BNS strategies in probabilities of − 0 ≤ ≤ and − − , respectively. MPS and BNS have two substrategies Forward (F) and Drop (D) as shown in Table 2, which also shows the payoff matrix. In MPS strategy, the player chooses the strategy (F or D) with which they will get the maximum payoff in the next round according to its neighboring player. In BNS strategy, the player chooses the strategy (F or D) with which its neighboring player gets the maximum payoff in the previous round. MPS-BNS Strategic Game of Two Player According to different prime strategies, there exits four states in Markov process as follows. (i) State (1,1): both player 1 and player 2 have MPS-BNS as their prime strategies for the current game. (ii) State (1,2): player 1 has MPS-BNS as its prime strategy, and player 2 has ALL-D as its prime-strategy for the current game. (iii) State (2,1): player 1 has ALL-D as its prime strategy for the current game, and player 2 has MPS-BNS as its prime strategy. (iv) State (2,2): both player 1 and player 2 have ALL-D as their prime strategy for the current game. e expected payoff for player 1 in state (1,1) is simulated using Table 3 with parameters , , and , which represent different payoffs gained by the nodes in different strategies. In MPS strategy, every node chooses the strategy which will get the maximum payoff corresponding to the strategies of its neighbors; while in BNS strategy, every node follows the strategy of its neighbor having the maximum payoff in previous round. Accordingly, the payoff of MPS is higher than that of BNS; thus, 1 = 5.5, 2 = 5, 3 = 4.5, 4 = 4 and 1 = 3.5, 2 = 3, = 1, = 1. e 1 and 2 are the proportions of ALL-D chosen by player 1 and player 2, respectively. Evolution with MPS-BNS as Pure Strategy. e pure strategy means that every node in network has a uniform strategy pro�le (�orward or Drop) against all its neighbors. When MPS-BNS is implemented as pure strategy, as shown in Table 4, each node in topology has MPS and BNS strategies with the probability of (0 1) and (1 − ), respectively, against all its neighbors ( = 1, 2, , ), where is the number of neighbors. Scheme 2 is a �ow chart to illustrate the strategic game of multiplayer using MPS-BNS as a pure strategy. e computing method of average payoff per player in Scheme 1 is given as follows: where is the total payoff of all nodes, and × is the size of topology. e proposed model MPS-BNS is implemented using tool MATLAB. e topology size ranges from 10 × 10 to 100 × 100, and the simulation runs for 100 generations. random initial strategies and probability of choosing MPS (0 ≤ ≤ ). In the strategy without MPS-BNS, all nodes have random strategies Forward or Drop, choosing each strategy with probability of 0.5. In Scheme 3, the outcome of MPS-BNS as pure strategy is much higher than that without MPS-BNS and is able to evolve to the maximum expected level at about 32. e average payoff of the strategy without MPS-BNS remains the same at 16 in all topology grid sizes. When the size of topology increases to 50 × 50, the evolution speed slows down, and the maximum average payoff per player remains at the maximum expected level. Evolution with MPS-BNS as Mixed Strategy. Instead of having a uniform strategy pro�le in pure strategy, in mixed strategy, every node in topology maintains a strategy pro�le with different strategies against neighbors. 
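A compact sketch of the lattice game just described, in which every node plays a forward/drop game with its neighbours and then updates its strategy either by MPS (best response to the neighbours' current strategies) or by BNS (copy the best-scoring neighbour) with a given probability. The payoff values are placeholders because the exact payoff-matrix entries did not survive extraction, and periodic boundaries are used instead of the corner/edge neighbourhoods of Scheme 4.

```python
import random

PAYOFF = {('F', 'F'): 3.0, ('F', 'D'): 0.0,
          ('D', 'F'): 5.0, ('D', 'D'): 1.0}   # placeholder payoff matrix

def neighbours(i, j, n):
    return [((i + di) % n, (j + dj) % n)
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

def play_round(strategy, n):
    """Accumulate each node's total payoff against all of its neighbours."""
    payoff = {cell: 0.0 for cell in strategy}
    for (i, j), s in strategy.items():
        for nb in neighbours(i, j, n):
            payoff[(i, j)] += PAYOFF[(s, strategy[nb])]
    return payoff

def update(strategy, payoff, n, q_mps=0.1):
    new = {}
    for (i, j), s in strategy.items():
        nbs = neighbours(i, j, n)
        if random.random() < q_mps:
            # MPS: best response to the neighbours' current strategies
            new[(i, j)] = max('FD', key=lambda cand: sum(
                PAYOFF[(cand, strategy[nb])] for nb in nbs))
        else:
            # BNS: copy the strategy of the best-scoring node in the neighbourhood
            best = max(nbs + [(i, j)], key=lambda cell: payoff[cell])
            new[(i, j)] = strategy[best]
    return new

n = 20
strategy = {(i, j): random.choice('FD') for i in range(n) for j in range(n)}
for generation in range(50):
    payoff = play_round(strategy, n)
    strategy = update(strategy, payoff, n)
print("cooperation level:", sum(s == 'F' for s in strategy.values()) / n**2)
```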
Scheme 4 depicts an n × n topology grid, in which each corner node has three neighbors, each edge node (excluding the corners) has five neighbors, and each of the other nodes has eight neighbors. Every node in the grid maintains a strategy profile, as in Table 5, where (x, y) is the coordinate of the node, s_j is the strategy adopted against neighbor node j, and k is the number of neighbors. With p = 0, representing the true BNS strategy, BNS performs well, evolving to the maximum level with a fast convergence speed when the niceness is more than 70%; however, as the niceness decreases, the convergence slows down, and with a niceness of less than 25% the outcomes are unable to evolve to the maximum level and settle at a lower average payoff instead. The mixed MPS-BNS (p > 0) performs better than the true BNS at low niceness. When p ≥ 0.01, that is, when the proportion of MPS in the mixed MPS-BNS is equal to or greater than 0.01, the outcomes converge to the maximum average payoff of 31 for all proportions of niceness, and as p increases the convergence becomes faster; for the largest values of p, convergence occurs within the first five generations for all proportions of niceness. The evolution of the percentage of cooperation in Scheme 7 closely mirrors that of the average payoff in Scheme 6. As the proportion of MPS (p) increases, the network experiences better cooperation among nodes and converges faster. From the simulation, we find that even though the probability of choosing MPS is only slightly more than zero and much less than the probability of choosing BNS, the outcomes of MPS-BNS are much better than those of the true BNS, especially at a low level of niceness; this is because in MPS the maximum payoff strategy is chosen in a more direct way than in BNS, bringing a much faster convergence speed. The true BNS maintains a high level of average payoff and cooperation at all levels of selfishness when the niceness is ≥ 50%; however, it performs much worse with a niceness of 25% than with a niceness ≥ 50%, especially at a low level of selfishness. The mixed MPS-BNS with p = 0.5, representing the choice of MPS and BNS with the same probability, maintains approximately the same high level of performance for various proportions of niceness. MPS-BNS, especially with a niceness of 25%, gives a distinctly better share of average payoff at all levels of selfishness. In addition, MPS-BNS with various proportions of niceness achieves a higher average payoff and cooperation than BNS when 75% and 90% of the nodes are selfish. Schemes 8 and 9 lead to the conclusion that MPS-BNS has better networking performance and better robustness than BNS in fighting against selfishness. Conclusion. This paper has proposed a mixed MPS-BNS strategy based on game theory against selfish nodes and selfish behaviors in the packet forwarding process in wireless mesh networks. Through our research, the true BNS is able to provide superior network performance with high initial cooperation levels but behaves poorly with low initial cooperation levels. To overcome this problem, we combine BNS with the MPS strategy, each of which is chosen with its respective probability. In the MPS strategy, every node selects the strategy that will obtain the maximum payoff given the strategies of its neighbors. In the BNS strategy, every node follows the strategy of the neighbor having the maximum payoff. A basic MPS-BNS strategic game of two players is discussed and then extended to a more complex strategic game involving multiple players.
The simulation and discussion of the proposed strategy, as both a pure strategy and a mixed strategy, are carried out in terms of average payoff and cooperation level. The results show that MPS-BNS is able to converge to the expected maximum level for various initial percentages of cooperation and converges more rapidly than BNS. The simulation of MPS-BNS against selfishness shows that MPS-BNS exhibits superior robustness to BNS in defending against selfish nodes. Thus, the proposed MPS-BNS strategy is much more effective and efficient in defending against selfishness in wireless networks.
3,348.8
2013-01-17T00:00:00.000
[ "Computer Science" ]
EFT approach to the electron Electric Dipole Moment at the two-loop level The ACME collaboration has recently reported a new bound on the electric dipole moment (EDM) of the electron, $|d_e|<1.1 \times 10^{-29}\, {\rm e\cdot cm}$ at 90$\%$ confidence level, reaching an unprecedented accuracy level. This can translate into new relevant constraints on theories beyond the SM lying at the TeV scale, even when they contribute to the electron EDM at the two-loop level. We use the EFT approach to classify these corrections, presenting the contributions to the anomalous dimension of the CP-violating dipole operators of the electron up to the two-loop level. Selection rules based on helicity and CP play an important role in simplifying this analysis. We use this result to provide new bounds on BSM scenarios with leptoquarks or extra Higgses, and constraints on sectors of the MSSM and composite Higgs models. The new ACME bound pushes natural theories significantly further into fine-tuned territory, unless they have a way to accidentally preserve CP. Introduction Electric dipole moments (EDM) provide one of the best indirect probes of new physics. Since a non-zero EDM requires a violation of the CP symmetry, and the Standard Model (SM) contributions are accidentally highly suppressed, the EDM is an exceptionally clean observable for uncovering beyond the SM (BSM) physics. Indeed, if BSM physics lies at the TeV scale, we expect new interactions and therefore new sources of CP violation to be present, inducing sizable EDMs to be observed in the near future. For this reason, experimental bounds on the electron and neutron EDM have provided the most substantial constraints on the best motivated BSM scenarios, such as supersymmetry or composite Higgs models. The ACME experiment has recently released a new bound on the electron EDM that improves on their previous bound [1] by a factor of ∼ 8.6: |d_e| < 1.1 × 10^{-29} e·cm at 90% C.L. (1.1) This unprecedented level of accuracy allows for a sensitivity to BSM effects even if they appear at the two-loop level. Indeed, using a rough estimate, we get from Eq. (1.1) a bound on the scale of new physics Λ ≳ 2.5 TeV (a rough numerical illustration is sketched below), which is competitive with direct LHC searches. It is therefore of crucial interest to understand how and which BSM sectors affect the electron EDM up to the two-loop level, and which constraints can be derived from the bound Eq. (1.1). The purpose of this paper is to use the Effective Field Theory (EFT) approach to provide a classification of the leading BSM effects on the EDM of the electron up to the two-loop level. In the EFT approach, BSM indirect effects are encoded in the Wilson coefficients of higher-dimensional SM operators. At the loop level these Wilson coefficients can enter, via operator mixing, into the renormalization of the CP-violating dipole operators responsible for the EDM. By calculating the anomalous dimensions of these operators, we can provide all log-enhanced contributions to the EDM coming from new physics. At the leading order in an m_W^2/Λ^2 expansion, the electron EDM arises from two dimension-6 operators, O eB and O eW (see below). We will present here the relevant anomalous dimensions of the imaginary part of the corresponding Wilson coefficients, C eB and C eW , up to the two-loop level. In particular, we will provide the leading correction (either at the one-loop level or two-loop level) of the different Wilson coefficients C i to the imaginary part of C eB and C eW .
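To make the quoted scale concrete, here is a small numerical cross-check in Python. The specific form of the rough estimate is not spelled out above, so we assume a two-loop, weak-coupling-suppressed dipole, d_e/e ∼ (g²/16π²)² m_e/Λ², with g the SU(2)_L gauge coupling and all O(1) factors dropped; inverting the ACME bound under this assumption lands near the quoted Λ ≳ 2.5 TeV, but the assumed prefactor is ours, not the paper's.

```python
import math

HBARC_GEV_CM = 1.973e-14          # hbar*c in GeV*cm, to convert e*cm -> 1/GeV
d_e_bound_cm = 1.1e-29            # ACME 2018: |d_e|/e < 1.1e-29 cm (90% C.L.)
d_e_bound_GeV = d_e_bound_cm / HBARC_GEV_CM   # ~5.6e-16 GeV^-1

m_e = 0.511e-3                    # electron mass in GeV
g = 0.65                          # SU(2)_L gauge coupling (assumed coupling strength)
loop = g**2 / (16 * math.pi**2)   # one weak-coupling loop factor

# Assumed scaling: d_e/e ~ loop^2 * m_e / Lambda^2  ->  Lambda ~ sqrt(loop^2 * m_e / bound)
Lambda_GeV = math.sqrt(loop**2 * m_e / d_e_bound_GeV)
print(f"Lambda ~ {Lambda_GeV / 1e3:.1f} TeV")   # ~2.6 TeV, same ballpark as the quoted 2.5 TeV
```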
We will see that due to selection rules, only few Wilson coefficients enter into the renormalization of C eB and C eW at the one-loop or two-loop level. Calculating these leading corrections will allow to extract bounds on these Wilson coefficients from the recent EDM measurement. In addition, we will also provide the most relevant one-loop anomalous dimensions of the dimension-8 operators affecting the electron EDM. Although sub-leading in the m 2 W /Λ 2 expansion, dimension-8 operators give contributions of order Our results can be useful to derive from Eq. (1.1) new bounds on BSM particle masses. As an example, we will provide bounds on BSM with leptoquarks or extra Higgs, showing that we can exclude masses below hundreds of TeV. We will also present constraints on new regions of the parameter space of the MSSM, as well as bounds on top-partners in composite Higgs models. These bounds can be better than those from present and future direct searches at the LHC, unless the BSM preserve CP. EDM of the electron in the EFT approach We are interested in calculating new physics contributions to the electron EDM following the EFT approach. This is a valid approximation whenever the new-physics scale Λ is larger than the electroweak (EW) scale, such that new-physics effects on the SM can be characterized by the Wilson coefficients C i (µ) of higher-dimensional SM operators. Assuming lepton number conservation, the leading effects arise from dimension-6 operators where the Wilson coefficients are induced at the new-physics scale, C i (µ = Λ), and must be evolved via the renormalization group equations (RGEs) down to the relevant physical scale at which the measurement takes place. Since the Wilson coefficients mix via loop effects in the RGEs, precise measurements, such as the EDM, can be sensitive to different Wilson coefficients induced by different sectors of the BSM. The EDM of the electron is measured at low-energies µ m e , and can then be extracted in the EFT approach from the coefficient of the operator where F µν is the field-strength of the photon. The RG evolution of d e (µ) from Λ to m e must be computed in the EFT made with the states lighter than µ. This means that from the new-physics scale Λ down to the EW scale we must use the SM EFT, while below the EW scale we must use the effective theory including only light SM fermions, gluons and photons. Let us start discussing the contributions to d e (µ) in the SM EFT. SM EFT basis We will work mainly within the Warsaw basis [2], as the loop operator mixing is simpler in this basis due to the presence of many non-renormalization results. Nevertheless, we will make two changes in the four-fermion operators of the Warsaw basis. In particular, we will make the replacement 2 where a, b denote the SU(2) L doublet indices, and L L and e R denote only the first generation lepton multiplets, while L L and e R the second and third generation ones. 3 We also relabel the operator Our labeling is to make clear that there are two types of operators O ψψψψ and O ψψψψ that in Weyl notation are respectively ψ 4 of total helicity 2 and ψ 2ψ2 of total helicity zero. As we will see in the following, the helicity of the operator plays a crucial role in understanding the properties of the operator mixing at the loop level [3]. In Dirac notation these two type of operators could also be written (after Fierzing) respectively as operators of type (Ψγ µ Ψ)(Ψγ µ Ψ) and of type (Ψ L Ψ R )(Ψ L Ψ R ). 
In this case, for example, we would have O ledq = −(L a L γ µ Q L a )(d R γ µ e R )/2. These operators are related to the ones in the Warsaw basis [2] by Fierz identities. Notice that in our formulae we will suppress the fermion generation indices, except in the cases in which they cannot be straightforwardly reconstructed from the context. Tree-level contributions. At tree level there are only two dimension-6 operators that contribute to the electron EDM, namely the dipole operators O eW and O eB , given in Table 1. We have expressions in which we define v ≃ 246 GeV as the Higgs VEV, and s θ W ≡ sin θ W with θ W the weak angle (similarly for the other trigonometric functions). Notice that contributions to the electron EDM arise only from the imaginary part of C eW,eB . For this reason, we will only be interested in loop contributions to the dipole operators that can generate nonzero Im[C eW,eB ]. One-loop effects. At the one-loop level, however, other dimension-6 operators can mix with the dipole operators O eW and O eB , giving a contribution to d e . Selection rules, mainly based on helicity arguments, dictate that only a few operators contribute at the one-loop order to the anomalous dimension of the Wilson coefficients C eW and/or C eB , as has been argued in Refs. [3,4] based on the analysis of Ref. [5]. The relevant selection rules for our analysis are given in Table 2 (selection rules [3,4] for the mixing at the one-loop level between the different types of dimension-6 operators, in Weyl notation), where we use Weyl notation, and denote with F any SM field strength, with ψ any Weyl fermion, and with H any Higgs insertion. The O eW and O eB operators are of type Hψ 2 F and can then only receive contributions from operators of type ψ 4 , F 3 and H 2 F 2 of total helicity ≥ 2. There are four dimension-6 operators of type ψ 4 , but only two contain two leptons, O luqe and O (1) lequ , given in Table 1. The second one, however, after closing the quark loop, can only give rise to the Lorentz-singlet structure L L e R , and therefore cannot contribute to the electron dipoles. Hence only O luqe contributes at the one-loop level to the anomalous dimension of O eW and O eB . This contribution is given by Eq. (2.8), where Y f refers to the hypercharge of the fermion f (Y Q = 1/6, Y u = 2/3 and Y d = −1/3), and N c = 3 is the number of QCD colors. Since we can work in a basis where the Yukawa matrix y u is diagonal, the renormalization of the imaginary part of C eW,eB from Eq. (2.8) only arises from the imaginary part of C luqe . A second type of operators, involving SM bosons, is of the form H 2 F µν F µν . There are three operators of this type in the SM, presented in the bottom-left of Table 1. All of them contribute to the EDM at the one-loop level [6]. It is instructive to write Eq. (2.9) in a more physically oriented way, by relating the contributions to the EDM to those to the CP-violating Higgs couplings hγγ, hγZ, and to the anomalous triple gauge coupling δκ γ , defined in Eq. (2.10). As explained in Ref. [4], this is easily seen in Weyl notation where the dipole operator is ∝ LαE β F αβ with Lα and E β being respectively the SU(2) L doublet and singlet Weyl electron. Therefore, only four-fermion operators containing LαE β (antisymmetric under α ↔ β) can contribute at the loop level to the dipole. The only one is LαE β U α Q β that corresponds to O luqe in Dirac notation. Notice that this argument applies also to one-loop finite parts. We can then express the EDM in terms of these couplings: using Eq.
(2.9), one finds relations in which Q e ≡ −1/2 + Y L = Y e = −1 is the electric charge of the electron. Due to the approximate accidental cancellation in the electron vector coupling to the Z, (1/2 + 2Q e s 2 θ W ) ∼ 0.04, the main contribution to the EDM comes from κ γγ and δκ γ . In fact, this second contribution is often found to be small in many BSM scenarios, such as the MSSM or composite Higgs models, as we will see later. In these models the contribution from κ γγ is the dominant one. This allows us, in many cases, to establish a direct relation between the electron EDM and the CP-violating Higgs coupling to photons. Another class of operators that can in principle mix at the one-loop level with the electron dipole operators are other types of dipole operators Hψ 2 F , for example, those involving other fermions in the SM. It is easy to see, however, that there are no possible Feynman diagrams from quark dipole operators contributing to the electron EDM at the one-loop level. For dipole operators involving other SM leptons, for example, HeµF , these contributions are also absent at the one-loop level. Indeed, since we can work in a basis where the SM lepton Yukawa matrix y e is diagonal, none of these operators can affect the electron EDM at the one-loop level. Below we will see that there can be, however, contributions at the two-loop level. Finally, there are operators of type F 3 that could potentially mix with the dipole operators. In the SM there are two of these operators, either with gluons (O G ) or W a bosons (O W ). The O G operator obviously cannot give corrections to the electron dipole operators at the one-loop or two-loop level, as there are no possible Feynman diagrams at these orders. On the other hand, the O W operator can contribute to the O eW dipole at one loop. It turns out, however, that this contribution is finite, so it does not induce a running for the dipole operator. The finite contribution can be readily computed in dimensional regularization, leading to the result [7] Im[C eW ] = (3/64π 2 ) y e g 2 C W . (2.13) Two-loop effects. Two-loop contributions to the electron dipoles can arise either from operators that first mix into O luqe at one loop and then feed into the dipoles, or from operators that mix directly into the dipoles at the two-loop level. The first case can potentially give larger corrections, as in the leading-log approximation they will contain two logarithms, i.e. ∝ ln 2 (Λ 2 /m 2 W ). From the selection rules of Table 2, we see that only two classes of operators can contribute at this order. One is given by the ψ 4 operators that could not generate an electron dipole at one loop due to the absence of Feynman diagrams, namely the O (1) lequ operator. Notice that, as we pointed out before, there is an exception to the selection rules of Table 2, corresponding to a possible mixing of ψ 2 ψ 2 operators into ψ 4 when the pair of Yukawas either y u y e or y u y d is involved in the loop [3,4]. Nevertheless, by working in the basis in which the lepton and up-type quark Yukawa matrices are real and diagonal, one can easily find that there are no ψ 2 ψ 2 operators contributing to the imaginary part of O luqe at the one-loop level. Indeed, in this basis y u y e is real and diagonal, and the only ψ 2 ψ 2 operators that could contribute to O luqe are the ones involving two electron fields and two same-generation quarks. The Wilson coefficients of these operators are necessarily real and do not induce CP-violating effects. Therefore the one-loop mixing pattern and RGEs are the following. The O lequ operator can mix with O luqe at the one-loop level [6]:
(2.14) The dipole operators, on the other hand, mix with the O luqe operator, Flavor indices are easily understood as we can always work with diagonal Yukawa matrix y e and either y u or y d . The operator O W also enters at two loops via renormalization of the O VṼ operators [6], This leads to a two loop, double log contribution to the electron EDM, to be compared with the finite contribution at one loop in Eq. (2.13). The two loop contribution becomes comparable to the one-loop one already for Λ ∼ 10 TeV. Let us now discuss those dimension-6 operators that can directly contribute at the two-loop level to the anomalous dimension of the electron dipole operators. In fact, we are only interested in the EDM, i.e. the imaginary part of the the dipole operators, and therefore only complex Wilson coefficients can contribute, as the SM interactions preserve CP up to small Yukawa couplings that we neglect. This reduces the list of possible dimension-6 operators to those to the right of Table 1. For example, operators of the typeΨγ µ ΨH † D µ H are Hermitian (and then have real Wilson coefficients) unless the two fermions involved are different, meaning that they must involve different flavors. But since in the SM we can work in the basis where y e and either y u or y d are diagonal, we cannot draw any two-loop Feynman diagram contributing to the electron EDM operators. Similar conclusions can be obtained for four-fermion operators, except for those in Table 1, namely O ledq and O leē l . There is however an important subtlety related to these operators, which results in an ambiguity in the determination of their contributions to the electron EDM. Within the basis we are using, in which O ledq and O leē l are written as the product of scalar currents, it is simple to check that the 1-loop contributions to the electron EDM trivially vanish due to the tensor structure. However, through a Fierz rearrangement, O ledq and O leē l can also be rewritten in the form (Ψγ µ Ψ)(Ψγ µ Ψ), i.e. as a product of vector currents. With this choice, if dimensional regularization (in particular the MS scheme) is used to compute the contributions to the electron EDM, a finite 1-loop effect is found. The origin of this contribution is related to the presence of additional four-fermion interactions involving multiple gamma matrices that are generated at intermediate steps of the calculation. These are known as "evanescent operators" (for a review, see for example [10]). The coefficients of these interactions carry an = 4 − d factor, but they can give finite effects in the presence of 1/ poles. As an example, we report the contributions to the electron EDM induced by the O e = C e (L L γ µ L L )(ē R γ µ e R ) operator (see for instance [11,12]) A similar result is obtained for the O ledq operator in vector-current form. Summarizing the above discussion, one finds that the contributions from the O ledq and O leē l 4-fermion operators crucially depend on the choice of the operator basis and on the regularization procedure. This, in turn, can affect the matching from a UV model. In particular, finite 1-loop contributions can be shifted from the matching to the 4-fermion operators into the EDM operators O eB and O eW and vice versa. 8 As already mentioned, for our analysis we choose the scalar-current form for the 4-fermion operators, (L L e R )(ΨΨ). Therefore no finite one-loop contributions arise for the electron EDM from the O ledq and O leē l operators. As we will see in sec. 
3.2.3, this choice is particularly convenient for studying UV models including heavy Higgs-like states, in which case only scalar-current 4-fermion operators are obtained from the matching. We have then only O ye , O ledq and O leē l giving a two-loop mixing with the electric dipole operators. From O ye (see left-hand side of Fig. 2), we obtain while from O ledq and O leē l (see right-hand side of Fig. 2), we get respectively We summarize our results by schematically presenting in Fig. 1 the mixing patterns of the effective operators contributing to the electron dipoles at 2-loop order. For completeness we also include the 2-loop mixing of the O W operator, which, at 1-loop order, only induces finite corrections to the electron dipoles. We also provide below the leading-log approximation to the electron EDM that can be good enough since the new physics scale is not constrained yet to be far away from the electroweak scale, and therefore we do not need to resum the logs by exactly solving the RGEs. We find that the one-loop corrections are The double-log 2-loop corrections are given by Finally, the single-log 2-loop corrections are Notice that we did not include in the above formula finite corrections that can arise at the matching scales. In fact, few other dimension-6 operators can enter in this way into the renormalization of the electron EDM. As we saw in Sec. 2.3, an example is provided by the O W operator, whose dominant corrections to the electron EDM arise as finite one-loop contributions. An analogous result is valid for the H 3f f operator (f = u, d, e ) that modify the quarks and heavy lepton Yukawa couplings. These operators start to contribute to the electron EDM at the two-loop level through finite Barr-Zee-type diagrams [14]. We will discuss these operators in Sec. 2.6. Dimension-8 operators Since we are calculating corrections to the EDM at the two-loop level coming from dimension-6 operators, it is appropriate to ask whether dimension-8 operators can also give similar contributions. The effects of these operators are suppressed with respect to dimension-6 operators by an extra m 2 W /Λ 2 , but this could be overcome if their contributions to the EDM arises at the one-loop level instead of two-loops. We will only be interested here in dimension-8 operators that can be generated from integrating new physics at tree-level and that have not been constrained from previous dimension-6 operators. For example, dimension-8 operators involving extra |H| 2 or extra derivatives will not be relevant. We only find two of these operators, that at the one-loop level can mix with dimension-8 operators of dipole type, and equivalently for C le l e with the replacement We get a contribution to EDM of order (2.30) Threshold effects at the EW scale: the impact of CP-violating Yukawa's The log-enhanced contributions we considered in the previous sections are expected to give the dominant corrections to the electron EDM. Nevertheless, being this logarithm not always so large, there are cases in which finite corrections can be more important. A noticeable example, that we will discuss here, is the case of two-loop corrections to the EDM generated when integrating out the top and the Higgs at the EW scale. 
These corrections come from CP-violating Yukawa couplings induced by the SM dimension-6 operator O y f at the EW scale: For the particular case of C ye we obtain, from Barr-Zee diagrams, that these contributions are given by where we only kept the leading corrections due to diagrams with a virtual photon and a top loop. Contributions from a virtual Z-boson are highly suppressed by the small vector Z coupling to the electron, whereas diagrams involving light quarks are suppressed by the light quark Yukawa's. Diagrams involving a virtual W boson are expected to be subleading with respect to the photon contribution, although not negligible. For simplicity we however neglect these contribution as often done in the literature. Notice that in the above formula we only included the leading terms in an expansion for large m 2 t /m 2 h , which reproduce the full result with an accuracy ∼ 10%. 9 The appearance of ln m 2 t /m 2 h in Eq. (2.32) can be understood as a RG running of the electron EDM from the top mass, where the top is integrated out generating a hF µν F µν term at the one-loop level, down to the Higgs mass. In particular, this arises from a loop diagram involving hF µν F µν and the CP-violating Yukawa Eq. (2.31). Comparing the result in Eq. (2.32) with the two-loop running in Eq. (2.26), we find that the former is typically dominant and is a factor of few larger than the latter for a cut-off scale in the 10 TeV range. This is because the contributions of Eq. (2.26) are slightly smaller than the naive estimate since they are proportional to g and are suppressed by an accidental factor 3/8. The two contributions become comparable only for a very large cut-off scale ∼ 10 4 TeV. Similarly, C y u,d and C y e , which can give CP-violating corrections to the quark and to the heavy lepton Yukawa's, can lead to finite Barr-Zee contributions to the electron EDM. For the top case, Again, the ln m 2 t /m 2 h can be understood as a RG running of the electron EDM from the top mass, where a hFF term is generated from the CP-violating top Yukawa after integrating out the top, down to the Higgs mass. EFT below the electroweak scale and relevant RGEs Let us now discuss the RG running effects below the EW scale. At the scale µ ∼ m W , we must integrate out all the heavy SM particles, the W , Z, the Higgs and top, and work for µ < m W with the EFT built with the light fermions, the photon and the gluons. The EDM of the electron arises in this EFT from a dimension-5 operator whose coefficient, from the tree-level matching with the SM EFT at the EW scale, is given by (2.35) Other relevant operators below the EW scale are four-fermion operators made with the light SM fermions. The matching at the EW scale is given by and where we have also included the matching of the dimension-8 SM operators The above four-fermion operators can enter into the anomalous dimension of O eγ at the one or two loop level. Using our previous results, we can easily extract the RGE for C eγ since the renormalization of the photon can be read from that of the U(1) Y -boson in the SM. At the one-loop level we only have, equivalently to Eq. (2.8) and Eq. (2.29), (see also [15]) 40) or by a direct contribution to C eγ : The RGE running should be considered from m W down to the mass of the heaviest fermion in the four-fermion operator, where the state should be integrated out. 10 Therefore this running can be important for lighter fermions. 
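To make the leading-log bookkeeping used throughout this discussion concrete, here is a schematic Python sketch of a single leading-log mixing step, C_dipole(µ_low) ≈ C_dipole(µ_high) + [γ/(16π²)] C_source(µ_high) ln(µ_high/µ_low). The anomalous-dimension coefficient γ and the input Wilson coefficients below are placeholders, not the values computed in the text.

```python
import math

def leading_log_mix(c_dipole_high, c_source_high, gamma, mu_high, mu_low):
    """One-step leading-log estimate of operator mixing into a dipole coefficient.

    gamma is a placeholder one-loop anomalous-dimension mixing coefficient;
    the loop factor 1/(16 pi^2) is kept explicit, as in the text.
    """
    return c_dipole_high + gamma / (16 * math.pi**2) * c_source_high * math.log(mu_high / mu_low)

# Placeholder numbers: run a four-fermion coefficient from m_W ~ 80 GeV down to m_b ~ 4.2 GeV.
c_eg = leading_log_mix(c_dipole_high=0.0, c_source_high=1e-3,
                       gamma=1.0, mu_high=80.4, mu_low=4.2)
print(f"C_egamma(m_b) ~ {c_eg:.2e}")
```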
We can compare our results with those obtained from Barr-Zee diagrams arising from CPviolating Yukawa interactions Eq. (2.31). For light quarks and leptons e , one finds respectively For light quarks, however, a better bound on the corresponding Wilson coefficient can be obtained from constraints on CP-violating electron-nucleon interactions, that the ACME collaboration has also recently reported [1]: Neglecting isospin breakings that in the ThO are small [1], we have for the down-quarks We have checked that bounds from d e are slightly better than those from C S for operators involving the bottom (using N |bb|N 74 MeV/m b [16]), while the bound from C S is better for ligher quarks. Power counting of the Wilson coefficients So far we have presented the leading contributions of the dimension-6 operators to the anomalous dimension of the electron electric dipole operators up to two loops. All the possible contributing operators are given in Table 1. Nevertheless, the importance of the different operators depends on the size of their Wilson coefficients, which crucially depends on the BSM dynamics. In the following we will be interested in BSM theories that can be described as weakly-coupled renormalizable theories. These includes all possible extensions of the SM with extra particles with renormalizable interactions. In these BSM theories we can classify the Wilson coefficients as those that can be generated at tree-level and those generated at most at the loop level. For example, among the operators of Table 1, the only ones with Wilson coefficients that can be induced at tree-level by integrating out new heavy states are all the four-fermion operators and O ye ; the rest, involving always field-strengths, can only be generated by loops. Therefore, we expect where g * refers to a generic coupling of the BSM dynamics to the SM, f = e, u, d and V = W, B. In this class of BSM theories the contributions to the electron EDM have the following loop expansion. The leading contributions are of order d e /e ∼ (g 2 * /16π 2 )(m u /Λ 2 ) ln(Λ/m W ) and can arise from those particular BSM that contribute to O luqe at tree-level. There are only two types of particles that can generate O luqe at tree-level (see Table 3), the leptoquarks R 2 and S 1 that will be discussed below. Next, we can have BSM dynamics contributing directly to the Wilson coefficients of the electron dipole operators, that can give corrections of one-loop order (but without a log-enhancement), d e /e ∼ (g 2 * /16π 2 )(g * v/Λ 2 ). This can happen in BSM theories that contain fermions and bosons coupled to the electron, with at least one of them charged, as for example, the selectron and wino in supersymmetric theories. Contributing at the two-loop level, we have those BSM theories inducing O lequ at tree-level, which leads to a double-log enhanced EDM, d e /e ∼ g 2 g 2 * /(16π 2 ) 2 (m u /Λ 2 ) ln 2 (Λ/m W ). As shown in Table 3, this includes BSM theories with extra Higgses. Single-log two-loop contributions can come from BSM scenarios generating O VṼ at the loop level (those BSM containing extra charged fermions coupled to the Higgs), or generating O ledq , O leē l or O ye at tree-level (see Table 3). Also BSM theories generating O luqe at one-loop level can lead to a single-log two-loop contribution to the electron EDM, as we will see later for the case of the MSSM. 
Finally, BSM theories with extra SU(2) L fermions can generate O W at the loop level, leading to a contribution to the electron EDM at the two-loop level with no log enhancement. On the other hand, BSM theories contributing to the EDM of a fermion different from the electron are expected to have negligible effects on the electron EDM, as these arise at best at the three-loop level. Examples of these classes of BSM theories will be given below. We must also notice that there is a large class of low-energy effective descriptions of strongly-coupled theories that follow the same power counting described above. These are those theories that were assumed to follow the "minimal coupling" assumption [17], and correspond to holographic models as well as their deconstructed versions. It is also important to keep in mind that operators of Table 1 containing the fields L L and e R can potentially give a contribution to the electron mass. This places a constraint on the natural size of their Wilson coefficients, and in particular provides an upper bound on how large they can naturally be. In fact, in most of the UV-complete BSM theories (e.g. supersymmetry, composite Higgs or theories with flavor symmetries only broken by Yukawas) we expect operators with chirality flips to carry Yukawa couplings, i.e., C f V ∝ y f , C ye ∝ y e , C lequ ∝ y e y u , C luqe ∝ y e y u , C ledq ∝ y e y d , C leē l ∝ y e y e , (3.3) implying that we only expect sizable contributions to the electron EDM from four-fermion operators involving the third family. All these considerations can be useful for a proper interpretation of the recent ACME bound. In Table 4 we list the bounds on individual Wilson coefficients that can be inferred from the new electron EDM measurement Eq. (1.1). To derive the bounds we considered the various Wilson coefficients one at a time. (Table 4, tree-level entries: C eW : 5.5 × 10 −5 y e g; C eB : 5.5 × 10 −5 y e g; one-loop and two-loop entries not reproduced here.) Leptoquarks and extra Higgs. As a first example of an application of the above EFT analysis, we focus here on new-physics models containing states of Table 3. In particular, we focus on leptoquarks and heavy Higgs-like states. As can be seen from Table 3, four leptoquark multiplets can give rise to electron EDM contributions up to two-loop order. Among scalar leptoquarks only the R 2 and the S 1 multiplets give rise to contributions to d e . In both cases the contributions arise at the one-loop level and include a logarithmically-enhanced term. On the other hand, vector leptoquarks, in particular the V 2 and the U 1 multiplets, can contribute to the electron EDM at one-loop order only with finite contributions. We analyze the various cases in the following, providing the matching with the EFT operators in the limit of heavy multiplet masses. For the scalar leptoquarks, we also compare the leading running contributions to the EDM with the full results, which are already known in the literature. For simplicity we only include couplings to third-generation quarks, since interactions with the light generations give rise to EDM contributions suppressed by the light-fermion Yukawa couplings. Bounds on effective operators coming from measurements of the electron EDM were previously derived in the literature in refs. [12,18]. Notice that, in addition to the electron EDM, leptoquarks and heavy Higgs-like states, as well as supersymmetric scenarios, can also be constrained by the EDM of the 199 Hg atom through the CP-odd electron-nucleon interaction [19]. See Ref. [20] for a review of leptoquark properties and for the nomenclature. See also Ref.
[21] for a recent reappraisal of the contributions of the scalar leptoquarks to electron and light quark EDMs. Scalar leptoquarks. We start our discussion with the case of scalar leptoquarks. The R 2 leptoquark. The first case we consider is the R 2 multiplet, whose SU(3) c ×SU(2) L ×U(1) Y quantum numbers are (3, 2, 7/6). The Lagrangian describing the relevant leptoquark interactions with the SM fermions involves the couplings y 2 LR and y 2 RL , where L L i and Q L i label the i-th generation lepton and quark doublets, respectively. In the limit of large mass, the R 2 leptoquark gives rise to a contribution to the O luqe effective operator. Using Eq. (2.24), we can obtain the log-enhanced one-loop contribution to the electron EDM, Eq. (3.6). The full one-loop contribution to the electron EDM is also known in the literature [22], Eq. (3.8), where Q t = 2/3 and Q LQ = 5/3 are the electric charges of the top quark and R 2 leptoquark, and the I 2 and J 2 functions are defined accordingly. One can easily check that the leading logarithmic term in Eq. (3.8) agrees with the result of the EFT calculation in Eq. (3.6). In fact, the leading-log contribution in Eq. (3.6) provides a quite good approximation of the full result even for relatively small leptoquark masses. The discrepancy is below 25% for m R 2 > 300 GeV and below 10% for m R 2 > 360 GeV. The recent bound on the electron EDM, Eq. (1.1), translates into a constraint in which we normalize y 2 LR y 2 RL to the electron and top Yukawa couplings, following the estimates presented in Eq. (3.3). The S 1 leptoquark. The second leptoquark state that can give rise to one-loop contributions to the electron EDM is the S 1 multiplet, which has (3, 1, 1/3) quantum numbers. Its interactions with the SM fermions can be parametrized by couplings y 1 LL and y 1 RR , where the C superscript denotes the charge conjugation operation, namely ψ C ≡ Cψ T with C = iγ 2 γ 0 . Integrating out S 1 gives rise to a contribution to O luqe and O (1) equ , Eq. (3.12). The full one-loop result is also known [20], where the G function is defined by Eq. (3.14). We find that the leading-log approximation in Eq. (3.12) is in fair agreement with the full result, the difference being 30% for m S 1 ≃ 220 GeV. Eq. (1.1) translates into the bound m S 1 ≳ 400 TeV × [|Im(y 1 LL y 1 RR * )|/(y e y t )] 1/2 [1 + 0.081 ln(|Im(y 1 LL y 1 RR * )|/(y e y t ))]. (3.15) Vector leptoquarks. We now consider the case of vector leptoquarks. Before specializing the discussion to the V 2 and U 1 cases, we discuss some generic features of these models. As we already mentioned, vector leptoquarks can give rise to a finite contribution to the electron EDM at the 1-loop order. In order to compute the EDM effects, one needs first of all to embed the vector leptoquark into a well-behaved (i.e. renormalizable) UV theory. For this purpose one can consider GUT-like extensions of the SM, as done in refs. [23,24], which always lead to a Lagrangian for the couplings of the leptoquark V µ to the photon A µ in which Q V is the leptoquark electric charge. The couplings of a vector leptoquark to the electron and a quark q can be parametrized accordingly. The 1-loop contribution to the electron EDM is then given by an expression in which m q and Q q are the mass and the electric charge of the quark, while m V is the leptoquark mass. Notice that contributions come from two types of diagrams, one in which the photon is attached to the quark line and one in which it is attached to the leptoquark line. The two contributions are therefore proportional to Q q and Q V , respectively.
Within our EFT description, these contributions must be matched directly onto the Wilson coefficients C eW and C eB at the scale m V . Notice, however, that the leptoquark also gives rise to effective 4-fermion interactions of the form (L a L γ µ Q L a )(b R γ µ e R ), which after Fierzing lead to O ledq . As we discussed, however, the O ledq operator does not contribute to the electron EDM at 1-loop but only at the 2-loop order. The V 2 leptoquark. Let us now consider the V 2 multiplet with quantum numbers (3, 2, 5/6). Using the above formulae for its interactions with the SM fermions, we find the 1-loop contribution to the electron EDM given in Eq. (3.20). The new bound from the ACME collaboration leads to the constraint in Eq. (3.21). The U 1 leptoquark. The second possible vector leptoquark that gives a contribution to the electron EDM is the U 1 state, with quantum numbers (3, 1, 2/3). The 1-loop contributions to the electron EDM follow from the general expression above; the bound in Eq. (1.1) leads to the constraint in Eq. (3.24). If the 4-fermion operators are written in vector-current form and MS regularization is used, the 1-loop contribution to the electron EDM proportional to the quark charge would be matched onto the Wilson coefficient of (L a L γµQL a)(bRγ µ eR). We checked that the finite 1-loop contribution coming from this operator indeed matches the Q q term in Eq. (3.18). The Heavy Higgs. The last type of massive multiplet we consider is a Higgs-like state H with quantum numbers (1, 2, 1/2). Depending on their allowed couplings to the SM fermions, such states can give rise to different sets of contributions to the electron EDM. In general a Higgs-like multiplet can have couplings to all SM fermion species. For simplicity we consider only couplings to the electron family and to third-generation fermions, discarding flavor-violating couplings, and parametrize the relevant heavy Higgs interactions in terms of these couplings. Notice that the resulting bound, for η ∼ 1, is significantly stronger than the ones derived in the presence of couplings to the bottom or τ , but is much weaker than the one expected in the presence of a sizeable coupling to the top quark. The MSSM. We will work within the MSSM assuming that the superpartner masses are larger than the EW scale. CP-violating phases can appear in several terms of the MSSM: either in the supersymmetric parameter µ (which corresponds to the Higgsino mass), or in soft supersymmetry-breaking terms, namely the Bino and Wino masses, M 1 and M 2 respectively, the Higgs mixing mass term m 2 12 H u H d , and the scalar trilinears, e.g. y u A u H u Q L ũ R . Nevertheless, only those combinations of MSSM parameters whose phase cannot be removed by redefinitions of fields can lead to physical CP-violating effects. A recent analysis of the impact of the new ACME bound on the MSSM can be found in Ref. [26]. The main contribution from the MSSM arises from one-loop contributions to C eW and C eB , which generate an electron EDM calculated long ago; see for example [27]. From Winos and left-handed selectrons (L L ), we obtain a contribution in which ρ ≡ |M 2 /µ| 2 . Using Eq. (3.37) and Eq. (2.12), we can obtain the contribution to the electron EDM, in agreement with Ref. [28] in the large-log approximation. From the ACME bound Eq. (1.1), we get a limit on the Wino and Higgsino masses, where we have taken sin 2β ∼ sin ϕ ∼ 1. Another type of two-loop contribution to the electron EDM can arise from one-loop contributions to C luqe . From loops involving selectrons, squarks, Winos and Higgsinos (see Fig.
3), we have , (3.41) with i running over the mass of the Higgsino, Wino,ũ R andL L . These results are valid for any quark generation u → u, c, t. For equal superpartner masses, we have F (m 2 i ) = 1/(12m 4 i ), and Eq. (2.24) and the ACME bound lead to Composite Higgs As a last example we consider the class of composite Higgs models. For definiteness we focus on minimal scenarios based on the SO(5) → SO(4) symmetry breaking pattern, which gives rise to a single Higgs doublet [29]. Depending on the implementation of the flavor structure different contributions to the electron EDM can arise. In models based on the anarchic partial compositeness paradigm naively extended to both quark and lepton sectors [30], large contributions arise at the one-loop level due to the presence of partners of the SM leptons and/or composite vector resonances. These contributions are generated at the mass scale of the composite states and can be estimated as [31,32] which pushes these scenarios into highly fine-tuned territory. The one-loop contributions to the electron EDM can be efficiently suppressed by either introducing flavor symmetries (in particular a U(2) family symmetry involving the light fermion generations [33]) or generating the light fermion masses by a bilinearf f mixing with the strong sector [34]. In both cases the leading corrections to the electron EDM arise at the two-loop level due to the presence of relatively light fermionic partners of the top quark [35,36]. In a large set of minimal models, 19 only derivative interactions involving the Higgs and the top partners give rise to CP-violating effects. Using a CCWZ notation (see Ref. [32] for a review of the CCWZ formalism), a typical representative of such operators is given by where d i µ denotes the CCWZ d-symbol, while Ψ 1,4 are composite fermions in the singlet and fourplet SO(4) representation respectively. The c t coefficient is in general complex, thus containing a CPviolating phase. The two-loop corrections to the electron EDM arises from Barr-Zee-type diagrams and contain a leading, log-enhanced contribution. The origin of the latter can be traced back to a two-step evolution. At the energy scale of the top partners a finite contribution to the O W W , O B B and O W B operators is generated, which then according to Eq. (2.9) induces a running for the electron EDM [38]. As an explicit example we report the results for the 14 + 1 model with a light SO(4) fourplet and a fully composite right-handed top. 20 By integrating out the heavy top-partners, we obtain, at leading order in the v/f expansion, (3.46) 19 These models are the ones in which only one SO(4)-invariant effective operator exists which gives rise to the Yukawa couplings. This happens, for instance, in the original holographic MCHM theories [29,37], as well as in "minimally tuned" scenarios with a fully-composite right-handed top quark [36]. 20 For more details on the model see refs. [36,39]. while at the tree-level, that will be relevant later, we get where we have defined with m T being the mass of the charged-2/3 top partner T , and y L4 , y Lt the mixing of the Q L doublet with the composite states as defined in Ref. [38]. Notice that the contribution to C yt is purely imaginary, i.e. CP-violating. Using the RGE Eq. (2.12), we obtain an electron EDM given by The three terms in square brackets come from diagrams containing a virtual photon, a virtual Z-boson and a virtual W -boson respectively. 
Since, as we said, the Z-boson vector coupling to the electron is quite suppressed, the main contribution comes from the photon loop, whereas the W-boson term gives a ∼ 40% correction. In Eq. (3.49) the RG running of the EDM starts at m T and stops at the top mass. At that scale, indeed, we have to integrate out the top, inducing an additional finite contribution to C F F due to Eq. (3.47). Surprisingly, this contribution exactly cancels the one coming from top-partner loops, Eq. (3.46), so that no net contribution to O F F is left below m t . In Fig. 4 we give the constraints on the mass of the T top partner in the (y L4 , Im[c t ]) plane. To derive these bounds we assumed that the result in Eq. (3.49) provides the main correction to the electron EDM and that no additional contributions (or at least no strong cancellations) are present. We can see that for natural values of the parameters of the theory, y L4 ∼ 1 and Im[c t ] ∼ 1, the bounds from the electron EDM measurements can exclude top partner masses up to ∼ 15 TeV. This bound is significantly stronger than the current direct LHC exclusions and cannot be matched even in the high-luminosity LHC runs [40]. Conclusions We performed a two-loop analysis of the EDM of the electron using the EFT approach. In particular, we calculated the RGEs of the dimension-6 CP-violating dipole operators at the two-loop order, as well as the one-loop RGEs of the most relevant dimension-8 operators. We have shown that, due to selection rules, only a few operators can mix with the EDM operators, as can be appreciated in Figure 1, where we summarize which dimension-6 operators enter into the EDM and how. We also commented on the RG running of the Wilson coefficients below the electroweak scale, and on when CP-violating electron-nucleon interactions can be competitive with bounds on the electron EDM. These results are important to provide a proper interpretation of the new ACME bound on the electron EDM in terms of constraints on BSM particles. The recent improvement on the bound allows us to constrain TeV new physics even when it only contributes at the two-loop level. We have shown this with some examples. In particular, we considered theories with leptoquarks or extra Higgses, obtaining bounds ranging from 1 to 100 TeV. We also considered two of the most motivated BSM scenarios for TeV new physics, supersymmetry and composite Higgs. We first reinterpreted previous calculations in the EFT language. Then, we used our two-loop RGE results to understand which sectors or which new regions of the parameter space of these BSM scenarios are now constrained by the recent ACME result. In the MSSM case, for example, we showed how our two-loop results can provide new constraints on the small tan β region in the selectron and Wino sector. For the composite Higgs, after reinterpreting calculations on top partners in the EFT language, we showed that bounds on these particles put them out of the reach of the LHC, unless they have CP-conserving couplings. Therefore, we conclude that, unless we find a reason why, contrary to the SM, the interactions in these BSM scenarios respect CP, the ACME result makes these theories much less natural. More importantly, future improvements on the electron EDM bound (see for example [42]) could constrain BSM physics beyond the reach of future colliders. As noticed in Ref. [38], the cancellation of the contributions to the O F F operators at low energy is a direct consequence of the fact that the derivative Higgs operator in Eq. (3.45) induces purely off-diagonal couplings with the composite fermions.
For this reason the trace of the coupling matrix vanishes and the top loop exactly cancels the contributions from the top partners. The cancellation is rather generic and happens in a large class of models. Indeed, since the d µ CCWZ symbol transforms non-trivially under SO(4) (it is in the representation 4), it can only give rise to couplings involving fermions in two different SO(4) representations, which are therefore purely off-diagonal. We have only calculated the leading effect on the EDM for each Wilson coefficient C i up to the two-loop order. This means that we have not included, for example, self-renormalization effects, nor two-loop effects from Wilson coefficients entering the renormalization of the EDM at the one-loop level. These effects are only expected to correct the derived bounds by less than O(1), and could be easily incorporated if needed (see for example [41] for the case of the magnetic dipole moment of the muon). A. EDM contributions from Barr-Zee diagrams In this appendix we report the full expressions for the contributions to the electron EDM coming from CP-violating Higgs Yukawas originating from H 3 f f operators. Before giving the formulae for the EDM contributions, it is useful to introduce a few definitions. We define the j(r, s) function as [25] j(r, s) ≡ [1/(r − s)] [ r ln r/(r − 1) − s ln s/(s − 1) ]. (A.1) The Barr-Zee results with a CP-violating quark Yukawa can be rewritten in terms of the following integrals [43], F 1 (a, 0) and related functions, together with the expansion of the F 1 (a, 0) integral for large a. We can now give the results for the Barr-Zee contributions to the electron EDM. The contribution from an O y f operator for a generic quark or a heavy lepton is given by an expression in which g V e,f denote the vector couplings of the gauge boson V , namely g γ f = eQ f for the photon. The contribution from the O ye operator is instead one in which we only included the contributions from top loops, since those from the other fermions are suppressed by the small Yukawas.
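Since Eq. (A.1) is fully specified above, a small numerical helper can be sketched directly. The following is a Python illustration of the j(r, s) function, with a finite-difference fallback for nearly equal arguments; it is not code from the paper, and the example masses are only indicative.

```python
import math

def _f(x):
    """f(x) = x ln x / (x - 1), the building block of j(r, s) in Eq. (A.1)."""
    return x * math.log(x) / (x - 1.0)

def j(r, s, eps=1e-7):
    """j(r, s) = [f(r) - f(s)] / (r - s), cf. Eq. (A.1).

    For r ~ s the ratio is evaluated as a symmetric finite difference,
    which approximates the equal-argument limit j(r, r) = f'(r).
    """
    if abs(r - s) > eps:
        return (_f(r) - _f(s)) / (r - s)
    return (_f(r + eps) - _f(r - eps)) / (2.0 * eps)

# Example arguments of Barr-Zee type, e.g. r = (m_t / m_h)^2 with m_t ~ 173 GeV, m_h ~ 125 GeV.
r = (173.0 / 125.0) ** 2
print(j(r, 0.5))   # generic unequal arguments
print(j(r, r))     # equal-argument limit via finite difference
```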
11,711
2018-10-22T00:00:00.000
[ "Physics" ]
Metabolic Alterations in a Slow-Paced Model of Pancreatic Cancer-Induced Wasting Cancer cachexia is a devastating syndrome occurring in the majority of terminally ill cancer patients. Notably, skeletal muscle atrophy is a consistent feature affecting the quality of life and prognosis. To date, limited therapeutic options are available, and research in the field is hampered by the lack of satisfactory models to study the complexity of wasting in cachexia-inducing tumors, such as pancreatic cancer. Moreover, currently used in vivo models are characterized by an explosive cachexia with a lethal wasting within few days, while pancreatic cancer patients might experience alterations long before the onset of overt wasting. In this work, we established and characterized a slow-paced model of pancreatic cancer-induced muscle wasting that promotes efficient muscular wasting in vitro and in vivo. Treatment with conditioned media from pancreatic cancer cells led to the induction of atrophy in vitro, while tumor-bearing mice presented a clear reduction in muscle mass and functionality. Intriguingly, several metabolic alterations in tumor-bearing mice were identified, paving the way for therapeutic interventions with drugs targeting metabolism. Introduction More than half of cancer patients are suffering from a systemic wasting disorder referred to as cachexia (from Greek "bad condition"), a syndrome strongly affecting the quality of life and prognosis in cancer patients. This syndrome is characterized by unstoppable consumption of adipose and skeletal muscle tissues leading to an excessive body weight loss that cannot be fully reverted by conventional nutritional support [1]. Cancer cachexia is a complex syndrome accounting for multiple organ dysfunction and systemic metabolic deregulations [2]. Cachectic patients experience symptoms ranging from anorexia, elevated inflammation, and insulin resistance to increased energy expenditure, which ultimately promote malaise, fatigue, and impaired tolerance to chemotherapy [3], further worsening patients' prognosis. Besides being associated with a poor prognosis, cachexia is estimated to be the direct cause of one-third of cancer deaths [4]. Several tissue dysfunctions emerge during cachexia, such as liver steatosis, fat deposit lipolysis, intestinal dysbiosis, and, most notably, skeletal muscle wasting, which account for the steep decrease in quality of life, weakness, and respiratory distress of cancer patients. Skeletal muscle atrophy is a highly regulated process driven by an unbalance between protein synthesis and degradation. Activation of the ubiquitin-dependent proteasome pathway (UPP) and the autophagy-lysosome system are two important mechanisms leading to increased protein breakdown. This process is orchestrated by a set of genes called atrogenes, such as Atrogin-1/MAFbx or Murf1 [5]. Compelling evidence shows that an impairment of mitochondrial metabolism and an increase in mitochondrial ROS are also strongly associated with the cachectic phenotype [6,7]. Several tumor types, such as lung, gastrointestinal tract, and pancreatic cancer, are emerging as strong promoters of cancer cachexia [8]. In particular, pancreatic ductal adenocarcinoma (PDAC) presents a high penetrance of wasting, a process that seems to occur even in earlier phases of tumor transformation [9]. Despite the burden of cachexia in PDAC, there are still limited experimental models available. 
Particularly, our understanding of the biology underlying cachexia is mostly based on the extensively used and well-characterized C26 carcinoma model, in which mice are drastically losing muscle and total body weight in a short period [10], thus contrasting with the progressive wasting occurring in the human pathology. It is known that the C26 model is associated with high levels of IL6 that play a central role in mediating muscle wasting [11], even though other inducers are probably involved in cachexia. In order to better characterize early stages of cachexia, we established a model of pancreatic cancer-induced cachexia able to promote mitochondrial metabolic alterations and a progressive wasting both in vivo and in vitro. Materials and Methods 2.1. Animals. Young adult female C57BL/6J mice (9-12 weeks old) were used. All animal experiments were authorized by the Italian Ministry of Health and carried out according to the European Community guiding principles in the care and use of animals. 2.2. Generation of a PDAC Model. KPC tumor cells were derived from a primary culture of pancreatic tumor cells of the genetically engineered mouse model of PDAC (K-ras LSL.G12D/+ ; p53 R172H/+ ; Pdx-Cre (KPC)). 0.7 × 10 6 KPC cells in 200 μl PBS were injected subcutaneously into the flank of C57bl/6J mice. Mice were sacrificed 5 weeks after injection, when tumor volume was approaching 5 mm of radius. Grip Test. An automatic grip strength meter was used to measure the maximum forelimb grip strength of mice. The machine measures the peak resistance force of the mouse while the latter is pulled away from the grid of the device. Each animal was assessed several times, and the final value corresponds to the average of 5 repeated force measurements in order to minimize procedure-related variability. Hanging Test. A wire-hanging test was used to assess whole-body muscle strength and endurance. The test was performed as previously described [12]. Briefly, mice were subjected to a 180-second hanging test on a wire, during which "falling" and "reaching" scores were recorded. When a mouse fell from the wire, the falling score was diminished by 1, and when a mouse reached one of the side of the wire, the reaching score was increased by 1. A final score was then established using both falling and reaching scores and was represented in the form of a Kaplan-Meier-like curve; scores have been normalized with respect to control. MRI. Magnetic resonance images were acquired on a 1 Tesla M2 system (Aspect, Israel) equipped with a 30 mm transmitter/receiver (TX/RX) solenoid coil to determine body composition [13]. T 1 -weighted spin-echo images were acquired with high-resolution whole-body coronal orientation (repetition time/echo time/flip angle/number excitations [TR/TE/FA/NEX]: 400 ms/9.5 ms/90°/3, field of view [FOV]: 10 cm, matrix: 192 × 192, number of slices: 18, slice thickness: 1.5 mm, in-plane spatial resolution: 521 μm, and acquisition time: 4 min). All T 1 -weighted images were processed by an in-house Matlab-developed script (MATLAB R2008, The MathWorks Inc.). The T 1 -weighted image histogram has three dominating classes, background, lean mass, and fat, so the total fat volume was isolated by segmenting the image into three categories by using a k-means clustering algorithm [14,15]. 2.7. Tissue Collection and Histology. Gastrocnemius muscle was excised, weighted, frozen in isopentane cooled in liquid nitrogen, and stored at −80°C. 
2.7. Tissue Collection and Histology. The gastrocnemius muscle was excised, weighed, frozen in isopentane cooled in liquid nitrogen, and stored at −80°C. Transverse sections (7 μm) from the medial belly were cut on a cryostat and collected on Superfrost Plus glass slides. Cryosections were then processed for laminin staining. In detail, sections were fixed in 4% paraformaldehyde (PFA) for 10 min before being incubated with a laminin antibody (1:200; Dako) and visualized with an anti-mouse IgG Alexa Fluor 488 secondary antibody (Thermo Fisher Scientific). Pictures of the whole slides were acquired with the Pannoramic Midi 1.14 slide scanner (3D Histech, Budapest, Hungary), and the cross-sectional area (CSA) was measured automatically with ImageJ software. 2.8. Succinate Dehydrogenase Activity. Succinate dehydrogenase (SDH) enzymatic activity was determined on 15 μm cryosections by specific staining (Bio-Optica, Milan, Italy) following the manufacturer's instructions. Briefly, the cryosections were incubated with the rehydrated SDH solution for 45 min at 37°C, washed, fixed, and mounted on slides. Images were then acquired with the Pannoramic Midi 1.14 slide scanner. Conditioned medium (CM) was prepared as follows: KPC cells were grown in DMEM with 10% FBS supplemented with 1% penicillin and streptomycin. When cells reached full confluence, the medium was removed; cells were washed twice with phosphate-buffered saline (PBS) and once with serum-free DMEM. Cells were grown in serum-free DMEM for a further 24 h; then, the medium was collected, centrifuged at 4000 rpm for 10 min, aliquoted, and stored at −80°C. Atrophy in C2C12 myotubes was induced by treatment with 10% CM for 48 h. 2.10. Myotube Diameter Quantification. C2C12 myotubes were treated with differentiation medium supplemented with 10% conditioned medium from KPC cells for 48 h. Pictures of myotubes were taken with bright-field microscopy (Zeiss), and the diameters of myotubes were measured using the software JMicroVision as previously described [16]. 2.11. ROS Measurement In Vitro. ROS production was assessed in C2C12 myotubes by using the oxidant-sensitive fluorescent dye 2′,7′-dichlorodihydrofluorescein diacetate (H2DCFDA; Molecular Probes Inc., Eugene, OR). Cells were incubated with 10 μM H2DCFDA in PBS for 30 minutes at 37°C under a 5% CO2 atmosphere in darkness. Excess probe was washed out with PBS. Fluorescence was recorded at excitation and emission wavelengths of 485 nm and 530 nm, respectively, by a fluorescence plate reader (Promega). Fluorescence intensity was expressed in arbitrary units. Mitochondrial Isolation. Mitochondrial fractions were isolated as previously reported [17], with minor modifications. Samples were lysed in 0.5 ml buffer A (50 mM Tris, 100 mM KCl, 5 mM MgCl2, 1.8 mM ATP, and 1 mM EDTA (pH 7.2)), supplemented with protease inhibitor cocktail III (Calbiochem), 1 mM PMSF, and 250 mM NaF. Samples were clarified by centrifuging at 650 ×g for 2 min at 4°C, and the supernatant was collected and centrifuged at 13,000 ×g for 5 min at 4°C. This supernatant was discarded, and the pellet containing mitochondria was washed in 0.5 ml buffer A and resuspended in 0.25 ml buffer B (250 mM sucrose, 15 mM K2HPO4, 2 mM MgCl2, 0.5 mM EDTA, and 5% BSA (w/v)). A 50 μl aliquot was sonicated and used for the measurement of protein content or for Western blotting; the remaining part was stored at −80°C. 2.13. Electron Transport Chain. The activity of complexes I-III was measured on 25 μl of non-sonicated mitochondrial samples resuspended in 145 μl buffer C (5 mM KH2PO4, 5 mM MgCl2, and 5% BSA (w/v)) and transferred into a 96-well plate.
Then, 100 μl buffer D (25% saponin (w/v), 50 mM KH2PO4, 5 mM MgCl2, 5% BSA (w/v), 0.12 mM cytochrome c (oxidized form), and 0.2 mM NaN3) was added for 5 min at room temperature. The reaction was started with 0.15 mM NADH and followed for 5 min, and the absorbance was measured at 550 nm with a Synergy HT spectrophotometer (BioTek Instruments). Under these experimental conditions, the rate of cytochrome c reduction, expressed as nmol cyt c reduced/min/mg mitochondrial proteins, was dependent on the activity of both complex I and complex III [18]. 2.14. Intramitochondrial ATP Levels. The amount of ATP was measured on 20 μg of mitochondrial extracts with the ATPlite assay (PerkinElmer), according to the manufacturer's instructions. Data were converted into nmol/mg mitochondrial proteins using a previously established calibration curve. 2.15. Intramitochondrial ROS Levels. The amount of ROS in mitochondrial extracts was measured fluorimetrically by incubating the mitochondrial suspension at 37°C for 10 minutes with 10 μM 5-(and-6)-chloromethyl-2′,7′-dichlorodihydrofluorescein diacetate acetoxymethyl ester (DCFDA-AM); mitochondria were then washed and resuspended in 0.5 ml of PBS. Results were expressed as nmol/mg mitochondrial proteins, using a calibration curve previously set with a serial dilution of H2O2. 2.16. Fatty Acid β-Oxidation. Long-chain fatty acid oxidation was measured as described by Gaster et al. [19] with minor modifications. A 100 μl mitochondrial suspension was mixed with 100 μl of 20 mM HEPES containing 0.24 mM fatty acid-free BSA, 0.5 mM L-carnitine, and 2 μCi [1-14C]palmitic acid (3.3 mCi/mmol, PerkinElmer). Samples were incubated at 37°C for 1 h; then, 100 μl of a 1:1 (v/v) phenylethylamine (100 mM)/methanol solution was added. After one hour at room temperature, the reaction was stopped by adding 100 μl of 0.8 N HClO4. Samples were centrifuged at 13,000 ×g for 10 min. Both the precipitates containing 14C acid-soluble metabolites (ASM) and the supernatants containing 14CO2 derived from oxidation (used as an internal control and expected to be less than 10% of the ASM) were counted by liquid scintillation. Results are expressed as nmol/min/mg cellular proteins. 2.17. Statistics. Statistical significance was evaluated with one-way or two-way analysis of variance (ANOVA) for multiple groups, followed by a post hoc test as defined in the figure legends. Student's unpaired t-test was used to compare two groups. All error bars indicate SEM. Significance was established at P < 0.05. Data were obtained from multiple independent experiments for in vitro assays and from at least 4 mice for in vivo experiments. All the analyses were performed with the software PRISM5 (GraphPad Software).
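As a concrete illustration of this statistical workflow, the sketch below runs an unpaired two-tailed t-test for a two-group comparison and a one-way ANOVA followed by a post hoc test for a multi-group comparison, using SciPy and statsmodels. The group names and values are placeholders invented for illustration, and Tukey's test simply stands in for whichever post hoc test a given figure legend specifies.

    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Placeholder measurements (e.g., gastrocnemius mass in mg); not data from this study
    groups = {
        "control": np.array([128.0, 131.5, 125.2, 130.1]),
        "KPC": np.array([102.3, 98.7, 105.4, 100.9]),
    }

    # Two groups: Student's unpaired (two-tailed) t-test, significance at P < 0.05
    t_stat, p_value = stats.ttest_ind(groups["control"], groups["KPC"])
    print(f"t = {t_stat:.2f}, P = {p_value:.4f}")

    # Three or more groups: one-way ANOVA followed by a post hoc test
    groups["hypothetical_treatment"] = np.array([115.0, 118.2, 112.9, 117.5])
    f_stat, p_anova = stats.f_oneway(*groups.values())
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))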
Establishment of a Slow-Paced Cancer-Induced Muscle Wasting Model. PDAC is known to induce muscle wasting with high penetrance [3]. Since cancer cachexia is a complex syndrome involving various pathological processes that promote wasting, such as anorexia and chronic inflammation, it is difficult to assess the direct contribution of the tumor. Therefore, we decided to assess the direct role of cancer cells in skeletal muscle atrophy via an in vitro model of atrophy, thus excluding other systemic confounding atrophic factors and hypothesizing that, in this type of cancer, atrophy can be mediated directly by tumor cell-secreted factors. To this aim, we took advantage of KPC cells, a stable cell line derived from a spontaneous primary tumor arising in the C57BL/6 KRAS G12D; P53 R172H; Pdx-Cre (KPC) mouse [20], a genetically modified mouse model known to develop spontaneous wasting [9]. Similar to other cancer models [21], in our experimental conditions KPC cells were able to directly promote muscle atrophy in vitro. Treatment of C2C12-derived myotubes with 10% KPC cell-conditioned media induced a consistent reduction in myotube thickness, similar to that elicited by dexamethasone, used as a positive control of atrophy induction (Figure 1(a)). The reduction in fiber thickness was associated with higher ROS generation (Figure 1(b)). A recent report from Michaelis et al. [20] showed that a subcutaneous injection of 5 × 10⁶ of these cells consistently promotes anorexia, hormonal dysfunctions, and lethal cachexia within 2 weeks. In order to establish a progressive model of wasting, we subcutaneously injected 0.7 × 10⁶ KPC cells, the minimal number able to consistently induce tumor growth without exacerbating factors such as excessive tumor burden and anorexia. Indeed, 5 weeks after KPC cell injection, neither food intake alterations (Figure 1(c)) nor macroscopic features of wasting were observed, despite a nonsignificant decreasing trend in body weight (Figure 1(d)). Tumor weight at the end of the experiment was approximately 0.6 grams (Figure 1(e)), whereas in the previous work it was between 1 and 2 grams [20]. Remarkably, despite the absence of overt signs of cachexia, skeletal muscle functionality, checked by rotarod evaluation twice a week (not shown), was drastically affected, although only at week 5, the week of sacrifice. Accordingly, tumor-bearing mice showed reduced muscle performance, as assessed by the hanging-wire test [12] (Figure 1(f)), suggesting muscle deterioration in tumor-bearing animals. Along with reduced performance in stamina-related assays, mice also displayed a reduction in grip strength, indicating that the maximal force developed was reduced (Figure 1(g)). Consistent with the decrease in muscle functionality, 5 weeks after KPC injection mice presented a consistent loss of gastrocnemius mass of roughly 20% (Figure 2(a)). The decrease was related to a reduction in average fiber size, as detailed by histological analysis (Figures 2(b)-2(d)). Accordingly, analysis of the fiber cross-sectional area (CSA) distribution highlighted a shift towards smaller areas (Figure 2(e)). The reduction in muscle mass was not associated with transcriptional regulation of the atrogenes Atrogin1, Musa, and Murf1 or of cathepsin L (Figures 2(f)-2(i)), nor with altered expression of ATG7, BECLIN1, and LC3 in gastrocnemii (not shown). Nevertheless, muscle protein lysates presented increased protein ubiquitination (Figure 2(j)), indicative of an activation of the UPP. Along with increased protein ubiquitination, we identified higher AMPK phosphorylation, in line with the emerging role of AMPK as a functional player in cancer cachexia [22]. Mice Undergoing Muscle Dysfunction Present Altered Lipid Metabolism. Given the importance of energy metabolism in regulating skeletal muscle mass and functionality [23,24], we investigated potential alterations of mitochondrial metabolism in the skeletal muscle of KPC-bearing mice. To this aim, we assessed basal complex II activity in gastrocnemii by performing the succinate dehydrogenase (SDH) activity assay.
Intriguingly, gastrocnemius sections from KPC-bearing animals presented increased complex II activity, as evidenced by the more intense blue tetrazolium staining (Figure 3(a)). However, this increased activity was not coupled with an elevated flux through the electron transport chain (ETC). Indeed, ETC activity, as measured by the cytochrome c reduction rate in uncoupled mitochondria, was similar in the two groups (Figure 3(b)). While SDH activity supports the ETC, SDH is also part of the tricarboxylic acid (TCA) cycle and is linked to fatty acid oxidation, allowing ketone bodies generated from acetyl coenzyme A during excessive fatty acid oxidation to enter the TCA cycle. To clarify whether the increased SDH activity was indicative of increased fatty acid oxidation, we measured this metabolic pathway in mitochondria isolated from the gastrocnemius of either control or KPC-bearing mice, and we observed a significant increase in fatty acid oxidation in the muscles of tumor-bearing mice (Figure 3(c)), consistent with the increase in complex II activity. To determine whether the altered intramuscular lipid oxidation was correlated with a systemic dysregulation during this precachectic process, we performed T1-weighted magnetic resonance imaging (MRI). Intriguingly, 4 weeks post-KPC injection (one week before sacrifice), precachectic mice presented reduced bright hyperintensity regions, indicative of reduced fat deposits (Figures 3(d) and 3(e)). Consistently, at the time of sacrifice, KPC-bearing mice presented a significant reduction in inguinal fat tissue mass (Figure 3(f)). Therefore, we speculated that the reduced fat content might be related to increased fatty acid oxidation, a feature previously associated with cancer cachexia [23] in other tumor types. High SDH activity [25,26] and excessive fatty acid oxidation might lead to ROS accumulation [27], ultimately promoting mitochondrial dysfunction [28] and fiber damage. Hence, we investigated the impact of tumor growth on mitochondrial ROS and energetic balance. Mitochondria extracted from KPC-bearing animals indeed had increased ROS (Figure 3(g)), coupled with reduced ATP (Figure 3(h)), suggesting that the increased fatty acid oxidation may have a detrimental rather than beneficial effect on mitochondria. Discussion Pancreatic cancer is a pathology with a dismal prognosis associated with a stark decrease in quality of life, mostly because of cachexia development [29,30]. While cachexia is considered the last step of cancer progression, it is important in PDAC to model the earliest steps of the disease (i.e., precachexia). Indeed, Mayers et al. [9] found that a spontaneous PDAC mouse model presents an increased release of amino acids from the skeletal muscle months before the development of cachexia, which is consistent with data from PDAC patients [9]. These data advocate for the importance of defining the alterations in skeletal muscle that occur in the early phases of the disease, before the establishment of overt cachexia. To this aim, we modified the cancer cachexia model described by Michaelis et al. to reproducibly induce cachexia with KPC cells [20]. Since KPC-bearing male mice present hormonal dysfunctions, we performed the study in female mice, although these animals are characterized by a moderate degree of wasting.
While Michaelis and coworkers modeled cachexia by injecting up to 5 × 10⁶ cells per mouse, thus promoting anorexia and subsequent animal death starting from day 11-14, we injected only 0.7 × 10⁶ cells (the minimal number necessary to consistently promote tumor growth) in order to obtain slower tumor growth, thus allowing the characterization of precachectic events. This reduced cell number resulted in barely palpable tumors at 2 weeks after injection (the time point at which the mice from Michaelis et al. had already started to die of cachexia). Moreover, in contrast to the effects reported with the injection of a higher number of KPC cells, a smaller tumor mass (up to 75% reduction) and no effect on heart weight were observed (data not shown). Despite the moderate degree of wasting, mice presented reduced muscle function and strength. In order to detect alterations in muscle morphology occurring in the early phases of wasting, we sacrificed the mice at the first sign of decreased performance. At necropsy, mice presented a statistically significant reduction in gastrocnemius weight and fat deposits, but no signs of anorexia, excluding the involvement of food intake in the skeletal muscle mass reduction. Intriguingly, we found only a trend towards decreased body weight, similar to the findings of Brown and colleagues [7]. Of note, it is known that many tumors hijack organ function, especially of the liver and spleen [3], by inducing increased organ size, which might counterbalance the drop in muscle and fat in the first phases. Muscle from tumor-bearing mice did not present any major transcriptional regulation of the investigated atrogenes. However, we identified increased levels of AMPK T172 phosphorylation, indicative of ongoing energy stress, and increased protein ubiquitination, indicating that the alterations present in muscle were mediated by a different pathway, that is, an alternative ubiquitin ligase, regulation at the protein level, or alterations in protein deubiquitinases. As previously shown in clear cell kidney cancer, muscle undergoing wasting is causally linked to increased fatty acid oxidation [23,31], which can potentially raise noxious ROS generation in the mitochondria. Notably, also in our model, the significant drop in lipids was coupled with increased fatty acid utilization and mitochondrial ROS generation, pointing to a potential source of oxidative stress causing reduced muscle function and degeneration. Intriguingly, we identified an increased activity of SDH that was not coupled with an increased ETC flux. Moreover, the ATP content was decreased, suggesting a profound mitochondrial alteration. This observation further supports the concept that mitochondrial alterations occur in the early phases of cachexia. While SDH does not contribute to increasing mitochondrial energy metabolism in cachectic muscles, it promotes the metabolism of ketone body derivatives that are produced in conditions of high fatty acid oxidation, that is, in the same metabolic conditions as KPC-bearing muscles. Redox cycles occurring at complex I and complex III of the ETC are generally considered the key sources of ROS within mitochondria; however, the SDH complex has also been recognized as an important source of intramitochondrial ROS [25]. Taken together, our findings suggest that muscles consume fatty acids, forcing SDH activity in early cachexia. The final result is an energetic catastrophe that may severely impair muscle physiological performance.
Interestingly, in vitro myotubes did not show increased fatty acid oxidation during atrophy (not shown), in line with the fact that culture and differentiation media contain a limited amount of fatty acids. However, C2C12 cells treated with the medium of pancreatic cancer cells displayed the same alterations observed in cachectic muscles, notably high ROS levels and AMPK phosphorylation (not shown), suggesting that common alterations in mitochondrial metabolism occur in the early phase of cachexia both in vitro and in vivo. In conclusion, we report a novel model of precachexia causing a drastic reduction in muscle function and an initial reduction in skeletal muscle mass. Interestingly, the onset of increased fatty acid oxidation and mitochondrial ROS generation occurs before the emergence of muscle mass reduction. Further tests inhibiting fatty acid oxidation or mitochondrial ROS generation will be instrumental in understanding the relative contribution of such pathways to the pathogenesis of cachexia, as well as in identifying the factors secreted by PDAC cells that cause muscle atrophy, both in vitro and in vivo. Conflicts of Interest The authors declare no conflict of interest.
5,468.4
2018-02-26T00:00:00.000
[ "Biology", "Medicine" ]
Modern and Digitalized USB Device with Extendable Memory Capacity This paper proposes an advanced technology that is completely innovative and creative. The motivation for this proposal lies in the idea of giving a pen drive an extendable memory capacity along with a modern, digitalized look. This device can operate without the use of a computer system or a mobile phone. The computerized pen drive has a display unit to display the contents of the pen drive and in-built USB slots to perform data transmission to other pen drives directly, without the use of a computer system. The addition of extendable memory slots to the computerized pen drive makes it a modern, digitalized, and extendable USB device. The implementation of an operating system and a processor in the pen drive is the main challenge of the proposed system. The design of the system is done in such a way that the device is cost-efficient and user-friendly. The design process and the hardware and software structures of this modern and digitalized USB device with extendable memory capacity are explained in detail in this paper. INTRODUCTION This paper primarily focuses on the innovation of a new gadget that is useful for storage and data transfer. The core subject of this proposal is the pen drive device. The major reason for choosing it is that, in spite of the development of many advanced technologies in the field of data exchange and transmission, such as the Internet, through which we can send and receive large amounts of data and files, the usage of pen drive devices is still dominant throughout the world and very common in the student and working communities. In the present scenario, it is rare to see a person using a computer system, whether in a personal or official working environment, without owning a pen drive. There are also many newer devices and media in which data can be stored; for example, cloud computing offers a way to store data in the cloud space provided to us. Imagining a normal pen drive with a display gives it an attractive look and makes it possible to know the details of the contents inside the pen drive. The implementation of in-built USB ports in the body of the pen drive provides a medium through which another pen drive can be connected directly and data transmission can be performed even without the use of a computer system. Pen drive models in today's market are normally categorized based on their memory size, such as 2 GB and 4 GB up to 64 GB. The main idea of this paper is to change this trend and introduce the advanced concept of adjustable-memory-capacity pen drive devices. This approach allows the pen drive to have extendable slots through which additional memory cards can be inserted in order to increase the memory capacity of the computerized pen drive. This is a brief introduction to the modern and digitalized USB device with extendable memory capacity; the concept and its implementation are described as follows. A. Look and Dependence on Computer Systems to Operate The normal pen drives available at present have no special features in their look, as shown in Figure 1. There is no display unit or in-built USB port for data transmission. A normal pen drive cannot operate without a computer system; it depends completely on the computer to work.
B. Fixed Memory Capacity Normal pen drives have a fixed memory size; all models are limited to a constant memory space. The memory capacity becomes a major issue when extra space is needed for storage and for other operations performed through the drive. This paper provides a complete solution to this limitation and proposes a new approach to the sale of pen drives, which will no longer be based on memory size as it is now. The memory can be varied whenever there is a need for it. III. COMPUTERIZED PEN DRIVE A computerized pen drive is a storage device for data such as text, images, and videos. The pen drive was invented by Amir Ban, Dov Moran, and Oron Ogdan, and it was first brought to market by IBM in 2000. It uses the Universal Serial Bus (USB) interface and does what a floppy disk does. The essential components are a USB plug, a microcontroller, a flash memory chip, a crystal oscillator, jumpers, LEDs, and write-protect switches. It supports operating systems such as UNIX, Linux, macOS, and Windows. The main features of the computerized pen drive are a touch screen, two USB slots, and a charge terminal; it does much of what an entire computer does. We can transfer data from one pen drive to the computerized pen drive and edit the data according to our needs. It is smaller, faster, cheaper, and portable. It has a charge terminal for charging, like a laptop. It uses flash memory, which, together with an advanced microprocessor, lowers power consumption. We can even play songs and videos. Large amounts of data are transferred in less time, and the appearance is very attractive and colorful. The file system includes features such as defragmentation, even distribution, and hard drive emulation. It is used for storing data for booting a system, computer forensics and law enforcement, booting an operating system, Windows Vista and Windows 7 ReadyBoost, audio players, media marketing and storage, arcades, brand and product promotions, operating system installation, media, graphics, security systems, and backups. It belongs to the family of portable storage devices, such as tape, floppy disks, optical media, flash memory cards, and external hard disks, and it supports encryption and security. This computerized pen drive has a security code: a numeric security lock can be set so that no other person can access it. It has a touch screen through which it performs its job and shows the contents of the memory in the pen drive.
There is a need for behavior analysis of flash-memory storage systems and other USB storage devices and for their evaluation. In particular, a set of evaluation metrics and their corresponding access patterns have been proposed, and the behavior of flash memory has been analyzed in terms of performance and reliability issues [1]. The security of the pen drive can be implemented with techniques that can identify a specific drive. Methods of digital evidence analysis [2] for USB thumb drives also apply to devices such as the computerized pen drive. The price of pen drives implemented with these features can be as low as that of the normal drives currently on the market. This can really create a dynamic storage and transmission device, and this proposal, when implemented, will have a strong impact on pen drive sales. The advantages of using such USB pen drives and flash drives derive mainly from their power consumption and energy overhead characteristics [3]. The main advantages are as follows: the device is very handy and compact; it has a display unit to show its contents; it offers extendable memory slots; and it includes a high-speed processor and a user-friendly OS. A. Display Unit The display unit is an advanced feature of the modern pen drive. The display of the pen drive can be designed using a touch screen. Using this unit, we can see the contents of the pen drive through the touch screen, and using a touch screen reduces the space that would be occupied by buttons. There are many kinds of display technologies. The storage tube graphic display is the oldest display technology, but newer display technologies are now used for modern pen drives. The quality of the display depends on the technology used, so factors such as quality and speed have to be considered while choosing the display technology. B. In-Built USB Port The computerized pen drive has in-built USB port slots. Using these slots, two pen drives can be inserted in the space provided. In this way, the computerized pen drive works like a computer system by providing all the required features. C. Data Transmission Data transmission is also an advanced feature of the computerized pen drive. Using this feature, we can transmit data to any other USB device, and transmission from any other USB device becomes very easy and simple using the computerized pen drive. The working of the computerized pen drive is similar to that of a computer system, and data transmission through it reduces power consumption. The need to connect portable electronic devices to each other has accelerated the adoption of USB On-The-Go as an industry-standard wired interface [4] for interfacing two devices. Data transmission in the computerized pen drive can also be done without the use of a computer system. D. NIDR Processor The NIDR processor is used to increase the speed, so that data can be transmitted faster than with other processors [5]. First, it fetches an instruction and sends it to the identifier unit to identify the type of instruction. Multiple instruction queues are implemented to increase the temporary memory. Based on the type of the instructions, all the instructions are decoded at a time, and finally those instructions are sent to the execute unit for execution. The operation of this processor is illustrated in the sketch that follows.
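No figure of the processor's operation survives in this text, so the toy Python sketch below only illustrates the fetch, identify, per-type instruction queue, and decode/execute flow described above. The instruction format, type names, and handlers are invented for illustration and are not taken from any published NIDR specification.

    from collections import defaultdict, deque

    PROGRAM = ["LOAD r1, 4", "LOAD r2, 8", "ADD r3, r1, r2", "ADD r4, r3, r1"]

    def identify(instruction):
        """Route each fetched instruction to a queue for its instruction type."""
        opcode = instruction.split()[0]
        return "memory" if opcode in ("LOAD", "STORE") else "alu"

    def run(program):
        queues = defaultdict(deque)
        for instr in program:                      # fetch + identify stage
            queues[identify(instr)].append(instr)

        registers, memory = defaultdict(int), {4: 7, 8: 35}
        for kind in ("memory", "alu"):             # decode + execute each queue
            while queues[kind]:
                op, *args = queues[kind].popleft().replace(",", "").split()
                if op == "LOAD":
                    registers[args[0]] = memory[int(args[1])]
                elif op == "ADD":
                    registers[args[0]] = registers[args[1]] + registers[args[2]]
        return dict(registers)

    print(run(PROGRAM))   # {'r1': 7, 'r2': 35, 'r3': 42, 'r4': 49}

In this toy version the memory queue is drained before the ALU queue purely to keep the example short; a real design would schedule the queues concurrently.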
E. Operating System The operating system is very important in this pen drive because, without it, the contents of the pen drive cannot be shown through the display unit. The other main use of the operating system in the computerized pen drive is to maintain the formats of the contents in the pen drive and to manage the transmission of contents from the computerized pen drive. The use of an operating system in the pen drive also provides a user-friendly environment so that the user feels more comfortable while using the computerized pen drive. We can also consider the Caernarvon operating system, which demonstrated that a high-assurance system for smart cards is technically feasible and commercially viable; the entire system was designed to be evaluated under the Common Criteria at EAL7 [6]. A processor must also be implemented for the functioning of the computerized pen drive. We use Nurture IDR segmentation and multiple instruction queues in a superscalar pipelining processor [7], which is a very fast and efficient processor. F. Charge Terminals While using the computerized pen drive, if the battery is low, we can charge the pen drive by using a mobile phone adapter in the charge terminal slot. It requires only a small amount of power. In the computerized pen drive, the charge terminals are optional during construction. IV. DESIGNING EXTENDABLE MEMORY SLOTS The core idea of this proposal is creating extendable memory slots in the body of the pen drive. These can be used in case there is a need for extra memory space. The memory can be extended by adding memory cards in the slots provided, thereby increasing the space available for storage. By implementing this concept, the division of pen drive sales by memory size can be brought to an end. A. Reasons for Implementation of Extendable Memory The two main reasons that led us to design the concept of extendable memory slots in pen drive devices are explained as follows. 1) Insufficient memory space in the pen drive: Suppose a person is working with a pen drive of 2 GB memory capacity and wants to store and transmit a digital file larger than 2 GB, say 3 GB. In this case, a 2 GB memory card can be inserted into one of the available memory slots of the pen drive, and the capacity of the pen drive can be extended to 4 GB and used for storing the data. 2) Enabling the pen drive to act as a card reader: If the pen drive has in-built slots for memory cards, then when a card is inserted, the contents of the memory card can be read through the pen drive itself. The pen drive can thus also act as a card reader. B. Memory Card A memory card, also called a flash card, is basically a type of electronic storage device based on flash memory for storing digital information such as films, songs, and photos. Memory cards can be used in many electronic devices, such as mobile phones, digital cameras, and MP3 players, and they can also be used in laptop and desktop computers. Memory cards are small, rewritable storage devices, and the data stored in them are retained even without power.
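To make the extendable-memory idea concrete, the sketch below shows one possible way firmware could expose the built-in flash plus any inserted cards as a single logical address space by concatenating their capacities. The class and method names are illustrative only and do not correspond to any existing USB mass-storage or card-reader API.

    class ExtendableStorage:
        """Present the built-in flash plus any inserted cards as one logical address space."""

        def __init__(self, builtin_capacity):
            self.devices = [bytearray(builtin_capacity)]   # device 0 = built-in flash

        def insert_card(self, capacity):
            """Inserting a card appends its capacity to the end of the logical volume."""
            self.devices.append(bytearray(capacity))

        @property
        def capacity(self):
            return sum(len(d) for d in self.devices)

        def _locate(self, address):
            """Map a logical address to (device, local offset)."""
            for device in self.devices:
                if address < len(device):
                    return device, address
                address -= len(device)
            raise IndexError("address beyond current capacity")

        def write(self, address, data):
            for offset, value in enumerate(data):
                device, local = self._locate(address + offset)
                device[local] = value

        def read(self, address, length):
            out = bytearray()
            for i in range(length):
                device, local = self._locate(address + i)
                out.append(device[local])
            return bytes(out)


    # Toy capacities stand in for 2 GB units; a write that would overflow the built-in
    # flash alone succeeds once a card is inserted, spanning the device boundary.
    drive = ExtendableStorage(2048)
    drive.insert_card(2048)
    drive.write(2040, b"hello world")
    print(drive.capacity, drive.read(2040, 11))

With toy capacities standing in for 2 GB units, inserting a second "card" doubles the reported capacity and lets a write span the boundary between the built-in flash and the card, mirroring the 2 GB + 2 GB = 4 GB example above.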
C. Structure of the Memory Card Slots The hardware required for implementing the memory slots in the body of the pen drive does not occupy much space, since the size of a memory card is very small. The following figure shows the structure of the memory card slots in the body of the pen drive. The inclusion of this particular feature makes the computerized pen drive a complete, advanced model, thus giving rise to a new gadget for storage and data transmission purposes. The major factor that has to be kept under consideration is the size of the pen drive device: the main advantage of these USB pen drives is that they are very handy and compact, and by implementing this feature we must not spoil the nature of the pen drive's look and smart size. One more important point is that the cost of implementing these features on a pen drive device must be as low as possible. V. CONCLUSION In this paper we have tried to throw light on the idea of creating a new computerized and digitalized gadget that is extremely smart and innovative. The primary goal of this proposal is to create a new pen drive model that can have an extendable memory and also be used as a card reader at the same time. Adding a few features such as the display screen and an operating system, together with a processor using the concept of Nurture IDR segmentation and multiple instruction queues in a superscalar pipelining processor, is central to the proposed design. Figure captions: Figure 1. Look of a normal pen drive. Figure 5. Types of memory cards; there are many types of memory cards in terms of structure and purpose, and the different types available in the market are listed in the figure. Figure 6. Modern and digitalized USB device with extendable memory capacity.
3,413.2
2012-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Improved Performance of Electron Blocking Layer Free AlGaN Deep Ultraviolet Light-Emitting Diodes Using Graded Staircase Barriers To prevent electron leakage in deep ultraviolet (UV) AlGaN light-emitting diodes (LEDs), an Al-rich p-type AlxGa(1−x)N electron blocking layer (EBL) has been utilized. However, the conventional EBL can mitigate the electron overflow only to some extent and, adversely, holes are depleted in the EBL due to the formation of positive sheet polarization charges at the heterointerface of the last quantum barrier (QB) and the EBL. Subsequently, the hole injection efficiency of the LED is severely limited. In this regard, we propose an EBL-free AlGaN deep UV LED structure using graded staircase quantum barriers (GSQBs) instead of conventional QBs, without affecting the hole injection efficiency. The reported structure exhibits significantly reduced thermal velocity and mean free path of electrons in the active region, thus greatly confining the electrons there and strongly decreasing the electron leakage into the p-region. Moreover, such specially designed QBs reduce the quantum-confined Stark effect in the active region, thereby improving the overlap of the electron and hole wavefunctions. As a result, both the internal quantum efficiency and output power of the GSQB structure are ~2.13 times higher than those of the conventional structure at 60 mA. Importantly, our proposed structure exhibits only ~20.68% efficiency droop during 0-60 mA injection current, which is significantly lower compared to the regular structure. Introduction AlGaN-based ultraviolet (UV) light-emitting diodes (LEDs) offer tremendous potential for a wide range of applications, including air/water purification, surface disinfection, biochemical sensing, cancer cell elimination, and many more [1]. These UV LEDs have the potential to replace bulky and toxic conventional UV lamps owing to advantages such as environment-friendly material composition, longer lifetime, low power consumption due to a low DC drive voltage, compact size, and tunable emission across the UV region from ~200 nm to ~365 nm [2]. Nevertheless, the external quantum efficiency (EQE) and light output power of AlGaN deep UV LEDs are still low due to several challenges. For instance, strong induced polarization fields and the quantum-confined Stark effect (QCSE) contribute significantly to the separation of the electron and hole wave functions, leading to reduced carrier confinement and radiative recombination in the device active region. Consequently, the electron overflow, which is one of the primary causes of efficiency droop, increases [3]. To eliminate the electron overflow, a p-doped Al-rich electron blocking layer (EBL) has been introduced between the active region and the p-region [4]. This can mitigate the electron leakage only to an extent. However, the hole injection efficiency is affected owing to the positive sheet polarization charges formed at the last QB/EBL heterointerface, which deplete holes in the EBL. Device Structure and Parameters Firstly, to validate our device model and parameters, we considered a conventional EBL-based AlGaN deep UV LED structure grown on a c-plane AlN template with 284 nm emission as the reference structure, denoted LED 1; this structure was experimentally reported by Yan et al. [21].
LED 1 consists of a 3 µm n-Al0.6Ga0.4N layer (Si doping concentration: 5 × 10¹⁸ cm⁻³), followed by the active region, a 20 nm p-Al0.65Ga0.35N EBL (Mg doping concentration: 2 × 10¹⁹ cm⁻³), a 50 nm p-Al0.5Ga0.5N hole injection layer (Mg doping concentration: 2 × 10¹⁹ cm⁻³), and finally a 120 nm p-GaN contact layer (Mg doping concentration: 1 × 10²⁰ cm⁻³). The active region comprises five intrinsic 3 nm Al0.4Ga0.6N QWs sandwiched between six intrinsic 12 nm Al0.5Ga0.5N QBs. The schematic diagram of LED 1 is presented in Figure 1a, and the Al composition (%) profile related to the conduction band energy diagram of LED 1 is shown in Figure 1b. The mesa area of the deep UV LED chip is 400 µm × 400 µm. As illustrated in Figure 1c, LED 2 has the same structure as LED 1 except for the QBs, whose Al composition gradually increases from QB 2 to QB 6 as 0.51, 0.54, 0.57, 0.60, and 0.75, respectively. The proposed structure, referred to as LED 3, is identical to LED 2 except that the uniform-composition QBs are replaced with GSQBs. As depicted in Figure 1d, each 12 nm thick QB consists of AlxGa(1−x)N (4 nm)/Al(x+0.5)/2Ga1−(x+0.5)/2N (4 nm)/Al0.5Ga0.5N (4 nm) step layers. The Al composition (x) in the last five QBs is 0.51, 0.54, 0.57, 0.60, and 0.75, respectively. The x values are chosen by carefully optimizing the structure, similar to our previous study [13]. In this study, the above-mentioned LED structures are numerically studied using the Advanced Physical Models of Semiconductor Devices (APSYS) tool. The energy bandgaps of GaN and AlN are estimated using the Varshni formula [22], Eg(T) = Eg(0) − aT²/(T + b), where Eg(T) and Eg(0) are the energy bandgaps at temperatures T and 0 K, respectively, and a and b are material constants. The values of a, b, and Eg(0) for GaN are 0.909 meV/K, 830 K, and 3.507 eV [23]; the corresponding values for AlN are 1.799 meV/K, 1462 K, and 6.23 eV, respectively [23]. The band offset ratio and bowing parameter for AlGaN are taken as 0.67/0.33 and 0.94 eV, respectively [24]. The carrier mobility is estimated using the Caughey-Thomas approximation [25], and the energy band diagrams of the LED structures are calculated using a 6 × 6 k·p model [26]. Additionally, the Mg activation energy of the AlxGa(1−x)N alloy for 0 < x < 1 is set to scale linearly from 170 meV to 510 meV [6]. The Shockley-Read-Hall (SRH) recombination lifetime, radiative recombination coefficient, Auger recombination coefficient, and light extraction efficiency are set as 15 ns, 2.13 × 10⁻¹¹ cm³/s, 2.88 × 10⁻³⁰ cm⁶/s, and 15%, respectively [27]. Moreover, the built-in polarization due to spontaneous and piezoelectric polarization is estimated using the method proposed by Fiorentini et al. [28] and is considered as 50% of the theoretical value. All simulations are performed at room temperature, and other band parameters can be found elsewhere [29].
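The parameter set above is enough to reproduce the alloy bandgap used in the simulations; the short Python sketch below combines the Varshni temperature dependence of the GaN and AlN binaries with Vegard's law and the 0.94 eV bowing parameter. It is an illustrative calculation only, not an excerpt from the APSYS input.

    def varshni(eg0_eV, a_meV_per_K, b_K, T_K):
        """Varshni formula: Eg(T) = Eg(0) - a*T^2 / (T + b)."""
        return eg0_eV - (a_meV_per_K * 1e-3) * T_K**2 / (T_K + b_K)

    def algan_bandgap(x, T_K=300.0, bowing_eV=0.94):
        """AlxGa(1-x)N bandgap from the binaries via Vegard's law with bowing."""
        eg_gan = varshni(3.507, 0.909, 830.0, T_K)    # GaN constants from [23]
        eg_aln = varshni(6.230, 1.799, 1462.0, T_K)   # AlN constants from [23]
        return x * eg_aln + (1 - x) * eg_gan - bowing_eV * x * (1 - x)

    # Barrier (x = 0.50) and well (x = 0.40) compositions of LED 1 at room temperature
    for x in (0.50, 0.40):
        eg = algan_bandgap(x)
        print(f"x = {x:.2f}: Eg = {eg:.3f} eV (photon wavelength about {1239.84 / eg:.0f} nm)")

For the x = 0.40 wells this gives a bulk bandgap of roughly 4.3 eV (about 290 nm); quantum confinement and the QCSE then shift the actual QW emission, which is reported here at ~284 nm.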
Results The numerical device model and parameters implemented in this study are optimized based on the experimentally measured data of LED 1 published by Yan et al. [21]. Figure 2 shows that the numerically calculated light-current-voltage curves of LED 1 closely match the experimentally obtained curves, which validates our device model. To investigate the performance of the proposed structure, we performed a numerical study on the three LEDs and carefully analyzed the results. As a part of this study, we calculated the energy-band diagrams of LED 1, LED 2, and LED 3 at 60 mA injection current, as shown in Figure 3. The effective conduction band barrier height (CBBH) at the corresponding barrier (n) and at the EBL are denoted as фen and фEBL, respectively. In the same way, фhn denotes the effective valence band barrier height (VBBH) at the corresponding barrier (n). The values of each CBBH are extracted from the energy band diagrams and listed in Table 1. The value of фEBL is 235 meV in the case of LED 1 due to the EBL, and it is the maximum CBBH available to block electron overflow in LED 1. This value is comparatively low in contrast to LED 2 and LED 3, which have no EBL. In LED 2 and LED 3, the value of фen progressively increases with each QB, effectively blocking electron overflow by preventing the electrons from jumping out of the QWs. Further, the value of the maximum CBBH, i.e., фe6 in LED 3, is even higher than that of LED 2, as listed in
Table 1, demonstrating that LED 3 is the optimal choice to confine the electrons in the active region. As a result, in comparison with the other LEDs, an improved and maximum electron concentration in the active region was observed for LED 3, as shown in Figure 4a. Though LED 2 has a boosted electron concentration compared to LED 1, it is lower than that of LED 3. It is also noticed that, due to the improved electron confinement in the active region, electron leakage into the p-region is significantly reduced in LED 3, as shown in Figure 4b. Subsequently, this would reduce the non-radiative recombination between the overflowed electrons and the incoming holes in the p-region, thereby contributing to a better hole injection efficiency into the active region. However, LED 2 has even higher electron leakage than LED 1.
Due to this, the non-radiative recombination in the p-region of LED 2 would increase and reduce the hole injection efficiency into the active region, irrespective of the creation of negative sheet polarization charges at the interface of the last QB and the p-Al0.5Ga0.5N layer. It is worthwhile to note that in the last QB of LED 1, a sharp bending of the conduction band is formed due to the induced positive polarization sheet charges at the heterointerface of the last QB and the EBL. This area accumulates a large number of electrons, i.e., 3.66 × 10¹⁶ cm⁻³, which eventually contributes to non-radiative recombination [30]. In addition, due to these induced positive polarization sheet charges in LED 1, a hole depletion region is formed at the heterointerface of the last QB and the EBL, as shown in Figure 3a, which reduces the hole injection efficiency [5]. This hole depletion problem is eliminated in LED 2 and LED 3 by removing the EBL. In the case of LED 2, a hole accumulation region is formed at the interface of the last QB and the p-region, which in general should improve the hole injection efficiency, whereas in LED 3 the hole injection efficiency should improve even further due to the formation of two hole accumulation regions, as shown in Figure 3c. The boosted hole injection efficiency in LED 3 can be seen in Figure 4c. This is also because of the reduced electron overflow in LED 3 resulting from the improved electron confinement in the active region. Moreover, the effective VBBH, фhn, due to each QB is calculated and listed in Table 2. As фhn increases with the increase in Al composition in the QBs, the values of фhn are higher in LED 2 and LED 3 than in LED 1. This supports the improved hole confinement and increased hole concentration in the active region. However, a very high фhn can at the same time also hinder hole transport within the active region, which is the case in LED 2. Moreover, the hole injection efficiency is already poor in LED 2; altogether, the hole concentration is very low in LED 2, as shown in Figure 4c. In this regard, LED 3 has a smaller value of фhn than LED 2, again owing to the GSQB structure. Altogether, due to the effective hole injection efficiency along with a comparable фhn in the active region, the hole concentration in LED 3 is relatively evenly distributed compared to the other LEDs. Overall, the hole concentrations in the active region of the three LEDs are 7.2 × 10¹⁶ cm⁻³, 4.8 × 10¹⁵ cm⁻³, and 7.8 × 10¹⁶ cm⁻³, respectively. Importantly, the overlap levels of the electron and hole wave functions in the active region for LED 1 and LED 3 are summarized in Table 3. It is seen that, even though the hole concentration of LED 3 is close to that of LED 1, the proposed structure in LED 3 improves the electron and hole wavefunction overlap compared to LED 1. As a result, the radiative recombination is significantly increased in LED 3, as depicted in Figure 4d. Finally, the IQE and output power of LED 1, LED 2, and LED 3 as a function of injection current are illustrated in Figure 5a,b, respectively. Figure 5c depicts the electroluminescence (EL) spectra of the three LEDs. As shown in Figure 5a, LED 3 exhibits a maximum IQE of 44.34%, whereas it is only 35.69% and 29.46% in the case of LED 1 and LED 2, respectively. In addition, the droop in the IQE during 0-60 mA injection current is remarkably reduced to 20.68% in the proposed structure, as compared to 53.68% and 94.7% in LED 1 and LED 2, respectively.
This is due to the enhanced carrier transport and confinement in the active region, and the consequently reduced electron overflow into the p-region, because of the GSQBs in the proposed structure. As depicted in Figure 5b, the output power of LED 3 is 2.13 times higher than that of LED 1 and 22.56 times higher than that of LED 2. As shown in Figure 5c, LED 3 exhibits a higher EL intensity than LED 1 and LED 2 at the emission wavelength of ~284 nm due to the improved radiative recombination in the active region. The EL intensity of LED 3 is ~2.12 times higher than that of LED 1 and ~22.24 times higher than that of LED 2. The different parameters related to the IQE and output power of the three LED structures are summarized in Table 4. To better understand the role of the GSQB structure in LED 3, a schematic model for the transport of electrons in LED 1 and LED 3 is depicted in Figure 6. In this study, the total number of electrons injected into the n-Al0.6Ga0.4N region is considered as N0 for both LED 1 and LED 3. For the simplicity of the model, electron loss through non-radiative recombination in the n-Al0.6Ga0.4N region is neglected. The number of electrons captured in the quantum wells (Ncapture) is correlated with the electron mean free path (lMFP) through Equation (2) of [31], in which the captured fraction increases as the quantum well thickness tQW becomes large compared with lMFP. As illustrated in Figure 6a,b, the incoming electrons (N0) are scattered and fall into the quantum wells, denoted by process 1. Some of the captured electrons recombine radiatively with holes or non-radiatively at crystal defects, as depicted by process 2, while the remaining electrons escape from the QWs, as illustrated by process 3. In addition, some electrons with a longer lMFP travel to a remote position without being captured by the QWs, as indicated by process 4. To increase Ncapture, the lMFP of these electrons needs to be reduced so that the electron concentration in the QWs increases, which favors a higher radiative recombination rate in the active region. At the same time, lMFP depends on the thermal velocity (vth) and the scattering time (τsc), as shown in Equation (3): lMFP = vth × τsc. For LED 1, vth can be further expressed as in Equation (4) [31]: vth = [2(E + qV1 + ΔEc − ΔEc)/me]^1/2,
where E is the excess kinetic energy in the n-Al0.6Ga0.4N layer, qV1 is the work done on the electrons by the induced polarization electric field in the QBs of LED 1, and me is the effective mass of the electrons. +ΔEc denotes the conduction band offset between QBn and QWn, while −ΔEc represents the conduction band offset between QWn and QBn+1. On the other hand, the GSQB structure in LED 3 forms discontinuities in the conduction band of the QB layers, due to which the probability of the electrons being scattered increases. Therefore, the electrons are thermalized more efficiently by interacting with longitudinal optical (LO) phonons, thereby reducing vth and lMFP; as a result, the electron confinement in the active region increases [19]. Hence, vth in LED 3 can be expressed as in Equation (5): vth = [2(E + ΔEc1 + qV2 − ΔEc2 − ℏωLO)/me]^1/2, where +ΔEc1 represents the conduction band offset between QBn and QWn, whereas −ΔEc2 is the conduction band offset between QWn and QBn+1. As the QB heights vary along the growth direction in LED 3, ΔEc1 − ΔEc2 cannot be eliminated. qV2 is the work done on the electrons by the induced polarization electric field in the GSQBs of LED 3.
From Equations (4) and (5), it is understood that (E + ∆E c1 + qV 2 − ∆E c2 − ħω LO ) < (E + qV 1 ). Consequently, v th for LED 3 would be lower than for LED 1. As a result, l MFP would be reduced, which improves the electron capture (N capture ) ability of the QWs in LED 3. In addition, the electron overflow occurring through process 4 can also be reduced by increasing the barrier height, as in the proposed structure shown in Figure 6b. Here, the QB heights before and after the QWs are not at the same level; rather, they increase progressively along the growth direction, due to which some of the electrons from processes 3 and 4 bounce back, denoted as process 5, which also helps to improve the electron concentration in the QWs in comparison with LED 1, as shown in Figure 4a. The proposed AlGaN deep-UV LEDs using graded staircase barriers can also be realized experimentally owing to the simple device architecture, since different AlGaN-based UV LEDs with thinner epilayers than our proposed structure have already been grown by metal-organic chemical vapor deposition (MOCVD) [33][34][35] and molecular beam epitaxy (MBE) [36,37]. Therefore, it is anticipated that the proposed structure can be grown by both MBE and MOCVD. Conclusions We have numerically demonstrated and investigated the performance of EBL-free AlGaN UV LEDs emitting light at ~284 nm wavelength with the incorporation of GSQB structures.
The reduced thermal velocity and mean free path of electrons improved the electron capture efficiency in the multi QWs, thus electron overflow was suppressed eminently. In addition, carefully engineered GSQBs promoted the hole injection by forming negative sheet polarization charges and improved the spatial overlap of the electron-hole wavefunction. Therefore, the proposed structure exhibited higher radiative recombination and recorded output power of 13.9 mW at 60 mA injection current, which is 2.13 times higher than the conventional structure. Hence, the reported structure shows incredible potential to develop high-efficiency UV light emitters for real-world applications. Conflicts of Interest: The authors declare no conflict of interest.
6,910.4
2021-03-01T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
Approach development to the pipeline networks design an integrated heat and cold supply system on the example of Yakutsk city . The pipeline networks design approach for integrated heat and cold supply systems based on absorption chillers has been developed. Within the framework of the approach, following tasks are solved: modeling consumers demand for cooling; hydraulic calculations of district cooling pipeline systems; technical and economic calculations to choose the optimal layout of pipeline systems; heat supply and cold supply systems repair and commissioning works compatibility assessment, and modeling of thermal interaction between permafrost massif and cold supply pipeline. The study of integrated heat and cold supply systems was carried out on the example of a Yakutsk city quarter using the developed approach to determine optimal variant of the system. The simulation results showed that district cooling pipelines can be laid underground with the implementation of measures to ensure preservation of permafrost soil temperature regime. Problem statement Currently, studies on integrated energy systems are increasing worldwide aimed at optimizing the production, distribution and consumption of energy in several forms (electricity, heat, cold, gas supply, etc.), considering them as interconnected subsystems of a single system. The introduction of integrated energy systems can improve energy efficiency, reduce energy losses, and enhance the overall reliability of energy systems [1,7]. For example, waste heat from a power plant can be used to supply consumers with cold [2][3][4]. Overall, research and development on integrated energy systems are an important step towards an efficient and sustainable energy future. The introduction of integrated systems in the Russian Far East requires re-evaluation of existing methods and models due to specific climates. For example, the climate in the Republic of Sakha (Yakutia) has cold winters and hot summers in the permafrost zone, and higher standards of reliability and performance are imposed on engineering systems. Climatological data analysis shows that the republic has low calculated outdoor air temperatures for heating systems design (up to -60°C in the Oymyakon village), and there are summer temperature peaks, which often reach plus 35 -38°C in some parts of the republic [6]. Furhtermore, permafrost thickness in the republic central zone reaches several hundred meters in some places. The objective of this study is the development of integrated heating and cooling systems pipeline networks in regions with cold winters and hot summers in Russia, located in the permafrost zone. An approach to the design of pipeline systems has been developed with additional consideration of interaction processes with permafrost. The purpose of the district cooling (DC) pipeline networks design approach is to select the most appropriate technical solution: pipeline laying method, consumers connecting scheme, and pipeline material. The end result is a complete list of DC pipeline system elements and equipment, their technical parameters, connection schemes, hydraulic modes, and pipeline temperature regime. Figure 1 shows a schematic diagram of the approach. District cooling pipeline networks design approach Stage 1. The design methodology begins with calculations of cooling demand. The potential consumers of the district cooling system (DCS) are city buildings located near the chiller station, which can be situated next to central heating points (CHP). 
Consumers are divided into five archetypes: residential, administrative, hospital, retail space, and hotel. Cooling demand is calculated using building energy modeling (BEM) methods for the mathematical modeling of energy consumption in buildings. The modeling is performed with the eQuest program, which is based on the DOE-2 engine. Hourly cooling consumption is modeled as a function of ambient temperature, solar insolation, number of floors, floor area, internal heat gains, wall thermal conductivity, etc. The Yakutsk city energy system was modeled using the average statistical meteorological data of EnergyPlus [8]. The maximum values of the hourly cooling-load simulation are taken as the DCS operating parameters. Stage 2. The hydraulic calculation of the cooling supply pipeline system with fan coil units inside the buildings is made on the basis of the calculated cooling loads. The purpose of the second stage is to calculate the required pressure at the building entrance. Pipeline diameters are calculated from the recommended fluid velocity in the pipelines and the available pressure drop [9][10][11][12][13]. The temperature difference in cooling system pipelines is very small and the change in water density is insignificant, so the (negative) natural circulation pressure can be neglected [14]. The available head at the consumer, H_available, must take into account the height of the building pile foundations h_pile (the buildings are built on permafrost soils), the height of the lower technical floor h_technical floor, the building height h_building, and the sum of pressure losses in the building's cold supply system ∆P (an illustrative sketch of this sizing step is given at the end of this example). Stage 3. The optimal parameters of the quarter-level (city-block) pipeline networks of the district cooling system are determined. The design task for the district cooling pipeline networks is to minimize the discounted payback period while satisfying technological restrictions and conditions. This task includes the calculation of hydraulic friction pressure losses, local energy losses in the various elements of the pipeline and the other elements of the hydraulic system, heat-exchanger parameters, thermal calculations of the insulating structures, pumping equipment parameters, and the technical and economic parameters of the system [11][12][13]. Given are: the time period; pipeline lengths; number of pipelines; air-conditioning demand; operating water temperatures; water dynamic viscosity; outdoor air temperature; pipe thermal conductivity; insulation thermal conductivity; and the heat transfer coefficient from the outer insulation layer to the air. As a result of solving these tasks, the optimal pipeline network parameters are calculated for each DCS variant: optimal internal pipe diameters; pipe wall thickness; insulation thickness; average flow velocities; and liquid mass flow. In further work, it is planned to use new-generation algorithms for determining the optimal parameters of heat supply systems, implemented in the SOSNA software package [15]. The program uses the method of multi-loop optimization with successive improvement of the solution, based on dynamic programming. It will need to be upgraded to determine the optimal parameters of the DCS hydraulic system: changing the operating temperatures to 5 °C in the supply pipelines and 15 °C in the return pipelines; increasing the values of the liquid's dynamic and kinematic viscosity; and adding the hydraulic resistance of the chiller and fan coil units. Stage 5.
The optimal variant choice for the district cooling pipeline system layout is carried out in accordance with a standard integrated methodology for assessing investment project effectiveness. The options differences are the method of laying pipelines (underground, aboveground), consumer connection scheme (dependent, independent), and pipeline material (high-density polyethylene (HDPE), steel). A technical solution with the shortest discounted payback period is selected, subject to the following technical restrictions: average flow rate, similarly according to formula (2), consumer water pressure: balance between total load and consumers cooling demand: liquid flow balance between main section and branching consumer's sections: Stage 6. The compatibility assessment of heat supply and cold supply systems repair and commissioning works is carried out, along with the verification of the possibility of carrying out additional work during summer for maintaining cold networks. In further work, it is planned to take into account the operation impact assessment in the heat and cold supply mode on depreciation costs and premature wear of heat supply networks. Stage 7. Permafrost soils in Yakutsk city create technical restrictions for DCS underground pipelines that must be taken into account when designing. A thermotechnical calculation of cold losses during transportation through DCS underground pipelines networks must be carried out. It is necessary to perform a numerical calculation of dangerous geocryological processes that affect the stability and reliability of the system permafrost -pipeline, resulting from their thermal interaction. It is required to assess the thermal interaction of DC pipelines with the frozen soil array and to check the technical feasibility of underground laying, to assess the impact on the thermal regime of frozen soils. The heat transfer process in soil mass is described by a quasilinear parabolic equation with discontinuous coefficients, taking into account the "water-ice" phase transition [16]. Thermal interaction modeling of the soil mass with the pipeline is carried out in COMSOL Multiphysics 6.0 software package, taking into account pore moisture phase transitions. The computational domain is a transverse section of a soil mass 10 m wide and 6 m deep with pipelines laid at a depth of 1 m parallel to each other [4]. The pipelines laying method is underground, non-channel. The approach practical application example The developed approach is applied to the study of Yakutsk quarter 167. This quarter includes 8 buildings, including 7 five-story residential buildings and 1 hospital. All consumers are connected to district heating through central heating station No. 400. The pipeline heat networks are made in above-ground laying on concrete supports. The total length of heating networks in a single-pipe design is 445.1 meters. District cooling system layout options considered in the study differ in pipe laying method, consumer connection scheme, and pipelines material. Technical and economic parameters calculations were carried out according to a standard integrated methodology for assessing investment projects effectiveness. As a result of applying the developed approach, the best DC pipeline networks layout option was chosen -underground laying of pipelines with direct connection of consumers, the pipeline material is HDPE. The distance between pipes for underground laying is one meter in the developed model. 
The internal diameter of the pipelines is 268.6 mm and the wall thickness is 23.2 mm. The pipelines are covered with a heat-insulating sheath made of polyurethane foam with a thickness of 25.4 mm. The cooling medium is chilled water, with a supply temperature of 5 °C and a return temperature of 15 °C. The water velocity in the pipelines is 1.059 m/s. The thermal interaction modeling results showed that operation of the engineering system has a strong impact on the temperature regime of the upper soil layers, while below a depth of 2.5 m the change in soil temperature is not significant [4]. In turn, the bearing capacity of the concrete support piles, which are 6-12 meters long, depends on the strength of the soil well below a depth of 2.5 m. Conclusion An approach has been developed for designing district cooling pipeline networks using chiller-fan coil building cooling technology, and the stages of its application have been described. The approach was applied in practice to the example of Yakutsk city quarter 167. Calculation results for the hydraulic regimes were obtained, and the pipeline diameters, pumping equipment parameters, pipeline laying methods, consumer connection schemes, and pipeline material were determined. Economic indicators were assessed for the system. The calculations resulted in the determination of the optimal DCS variant: underground pipeline laying with direct connection of consumers and HDPE as the pipeline material. The simulation results showed that underground laying of the district cooling pipelines is possible provided that measures are taken to preserve the temperature regime of the permafrost soils. According to an initial assessment, the cooling cost can be 1.9-2.4 rubles/kWh, and the discounted payback period is 17-27 years.
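To illustrate the Stage 2 sizing step referenced above, the following Python sketch computes a pipe inner diameter from a design cooling load and a recommended flow velocity, and assembles the consumer's available head from the listed building components. All numerical values (load, velocity, temperature difference, heights) are hypothetical placeholders, and the simple head balance is an assumed combination of the terms named in the text; it is not the authors' implementation.

# Hedged sketch of the Stage 2 hydraulic sizing described above.
# All inputs are hypothetical; the head balance is an assumed form.
import math

RHO = 1000.0      # water density, kg/m^3
CP = 4187.0       # specific heat of water, J/(kg*K)
G = 9.81          # gravitational acceleration, m/s^2

def pipe_inner_diameter(q_cool_w, dt_k, v_rec):
    """Inner diameter (m) for a cooling load q_cool_w (W), a supply/return
    temperature difference dt_k (K), and a recommended velocity v_rec (m/s)."""
    m_dot = q_cool_w / (CP * dt_k)          # mass flow, kg/s
    v_dot = m_dot / RHO                     # volumetric flow, m^3/s
    area = v_dot / v_rec                    # required cross-section, m^2
    return math.sqrt(4.0 * area / math.pi)

def available_head(h_pile, h_tech_floor, h_building, dp_building_pa):
    """Assumed head requirement (m of water): geometric lift over the piles,
    technical floor and building height, plus internal pressure losses."""
    return h_pile + h_tech_floor + h_building + dp_building_pa / (RHO * G)

if __name__ == "__main__":
    d = pipe_inner_diameter(q_cool_w=500e3, dt_k=10.0, v_rec=1.0)   # 500 kW, dT = 10 K
    h = available_head(h_pile=1.5, h_tech_floor=2.5, h_building=15.0,
                       dp_building_pa=40e3)                          # 40 kPa internal losses
    print(f"inner diameter ~ {d*1000:.0f} mm, required head ~ {h:.1f} m")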
2,466.8
2023-01-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Berrington, Cooling rate of thermal electrons by electron impact excitation of fine structure levels of atomic oxygen The atomic oxygen fine-structure cooling rate of thermal electrons, based on new effective collision strengths for electron impact excitation of the ground-state 3P fine-structure levels in atomic oxygen, has been fitted to an analytical expression which is available to the researcher for quick reference and accurate computer modeling with a minimum of calculations. We found that at F-region altitudes of the ionosphere the new cooling rate is much less than the currently used fine-structure cooling rates (up to a factor of 2–4), and this cooling rate is not the dominant electron cooling process in the F region of the ionosphere at middle latitudes. Introduction The electron temperature in the ionosphere is of great significance in that it usually controls the rates of many physical and chemical ionospheric processes. The theoretical computation of the electron temperature distribution in the ionosphere requires the knowledge of various heating and cooling rates, and of heat transport through conduction. Schunk and Nagy (1978) have reviewed the theory of these processes and presented the generally accepted electron cooling rates. Pavlov (1998a, c) has revised and evaluated the electron cooling rates by vibrational and rotational excitation of N2 and O2 and concluded that the generally accepted electron cooling rates of Prasad and Furman (1973) due to the excitation of O2(1Δg) and O2(1Σg) are negligible in comparison with those for vibrational excitation of O2. The thermal electron impact excitation of the fine-structure levels of the 3P ground state of atomic oxygen is presently believed to be one of the dominant electron cooling processes in the F region of the ionosphere (Dalgarno and Degges, 1968; Hoegy, 1976; Schunk and Nagy, 1978; Carlson and Mantas, 1982; Richards et al., 1986; Richards and Khazanov, 1997). To evaluate the energy loss rate for this process, Dalgarno and Degges (1968) employed the theoretical O(3P) excitation cross sections given by Breig and Lin (1966). The electron cooling rates of Hoegy (1976) and Carlson and Mantas (1982), which are currently used in models of the ionosphere, are based on the excitation cross sections calculated by Tambe and Henry (1974, 1976) and Le Dourneuf and Nesbet (1976). The shortcomings of the theoretical approach of Tambe and Henry (1974, 1976) and Le Dourneuf and Nesbet (1976) were summarised by Berrington (1988) and Bell et al. (1998). Bell et al. (1998) improved the work of Berrington (1988) and presented numerical calculations of the rate coefficient of this electron cooling rate for electron temperatures Te = 200, 500, 1000, 2000, and 3000 K and neutral temperatures Tn = 100, 300, 1000, and 2000 K. The primary object of this study is to use the theoretical O(3P) excitation cross sections of Bell et al. (1998) to calculate, and to fit to, a new analytical expression for the atomic oxygen fine-structure cooling rate of thermal electrons. 2 The electron cooling rate by electron impact excitation of fine-structure levels of atomic oxygen The O(3P) ground state is split into three fine-structure levels 3Pj (j = 2, 1, 0) with the level energies given by Radzig and Smirnov (1980) as Ej=2 = 0, Ej=1 = 227.7 K (or 0.01962 eV), and Ej=0 = 326.6 K (or 0.02814 eV). Collisions of thermal electrons with the ground state of atomic oxygen produce transitions among the O(3Pj) fine-structure levels and hence electron cooling. Sharma et al.
(1994) found that, within an accuracy of 1–2%, the fine-structure levels are in local thermodynamic equilibrium at the local neutral-atom translational temperature, Tn, for altitudes up to 400 km, where gi = 2i + 1 is the statistical weight of the i-th level and N(O) is the full number density of atomic oxygen. This study has also been conducted assuming that the velocity distribution of electrons is described by a Maxwellian distribution with a thermal electron gas of temperature Te. In this approximation the oxygen fine-structure cooling rate is given by the expression (1) of Stubbe and Varnum (1972), where k is Boltzmann's constant; the i = 2 ground level with Ei=2 = 0 and the i = 1 level with Ei=1 = 0.01962 eV of O(3Pi) are excited by thermal electrons; the O(3Pj) de-excitation levels are the j = 0 upper level with Ej=0 = 0.02814 eV and the j = 1 level; ΔEij = Ej − Ei > 0; x = E(kTe)−1, where E is the energy of the electrons and me denotes the mass of the electrons; and σij is the cross section for excitation of O(3P) by electrons from the i-th to the j-th state. It should be noted that the de-excitation j → i cross sections of O(3P) are related to the excitation i → j cross sections of O(3P) through the principle of detailed balancing. As a result, the excitation term of the electron cooling rate is defined as a double sum, proportional to Ne N(O), over the transitions i = 1, 2 and j < i of the corresponding excitation rates, and the de-excitation term of the electron cooling rate is the same sum with each transition weighted by exp[ΔEij (Te−1 − Tn−1) k−1]. It follows from this definition of the cooling rate that, in the energy balance equation for electrons, the cooling rate is subtracted from the electron heating rate received by thermal electrons from photoelectrons. The value of the cooling rate is positive when Te > Tn and negative when Te < Tn. The cross section, σij(E), for the transition i → j is obtained from the collision strength, Ωij(E) (see Eq. 1 of Hoegy, 1976), where a0 is the Bohr radius and Ry is the Rydberg constant. Using Eq. (4), we conclude that the cooling rate can be expressed in terms of the effective collision strength, Ωij(Te), which is determined as Ωij(Te) = ∫0∞ Ωij(x) exp(−x) dx (6). Carlson and Mantas (1982) found that the collision strengths calculated by Tambe and Henry (1974, 1976) and Le Dourneuf and Nesbet (1976) can be approximated by an empirical formula whose constants eij, fij, gij, and qij are given in Table 1 of Carlson and Mantas (1982). In the approximation of Eq. (7), the effective collision strength is calculated as Ωij(Te) = gij(kTe)^fij + eij[1 + kTe qij]−1 (8). Figure 1 shows the effective collision strengths for the fine-structure transitions 2 → 1 (left panel) and 2 → 0, calculated from the collision strengths of Le Dourneuf and Nesbet (1976) and Tambe and Henry (1974, 1976) in the approximation of Eq. (8), with the constants eji, fji, gji, and qji given in Table 1 of Carlson and Mantas (1982)
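Several displayed equations in this excerpt were lost in extraction. As a hedged aid to the reader, the following LaTeX sketch gives standard forms consistent with the surrounding definitions: the LTE populations of the 3Pj levels (the likely content of Eq. 1) and the conversion between excitation cross section and collision strength (the likely content of Eq. 4). These are textbook relations, not necessarily the exact expressions used by the authors.

% LTE populations of the O(3P_j) fine-structure levels at neutral temperature T_n
% (plausible form of Eq. 1); g_j = 2j + 1, E_j are the level energies.
n_j \;=\; N(\mathrm{O})\,
      \frac{g_j \exp\!\left(-E_j/kT_n\right)}
           {\sum_{j'=0}^{2} g_{j'} \exp\!\left(-E_{j'}/kT_n\right)}

% Relation between excitation cross section and collision strength
% (plausible form of Eq. 4); a_0 is the Bohr radius, R_y the Rydberg constant,
% g_i the statistical weight of the initial level, E the electron energy.
\sigma_{ij}(E) \;=\; \frac{\pi a_0^{2}\,R_y}{g_i\,E}\;\Omega_{ij}(E)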
1,587.6
1999-07-31T00:00:00.000
[ "Physics" ]
IJBF TESTING THE PERFORMANCE OF ASSET PRICING MODELS IN DIFFERENT ECONOMIC AND INTEREST RATE REGIMES USING INDIVIDUAL STOCK RETURNS Using return data for all stocks continuously traded on the NYSE over the period July 1963 to December 2006, we tested the performance of the two-moment Capital Asset Pricing Model (CAPM) and the Fama French three-factor model in explaining individual stock returns. We found the performance of the Fama French three-factor model to be marginally better than the CAPM. We further test the models for the significance and stability of parameters in the bull/bear periods and the Federal increasing/decreasing interest rate periods and found the performance of the two models comparable. Introduction Sharpe's two-moment capital asset pricing model is the model most widely used to obtain the discount rate (required rate of return or the cost of equity capital). Graham and Harvey (2001) survey a sample of 392 firms and find that "CAPM is by far the most popular method of estimating the cost of equity capital: 73.5% of respondents always or almost always use the CAPM". Even though practitioners use asset pricing models to predict the required return on individual assets, most researchers have used returns on portfolios to test different asset pricing models. 1 The formation of portfolios in asset pricing tests was introduced initially by researchers such as Blume (1970), Friend and Blume (1970) and Black, Jensen and Scholes (1972) and further enhanced by Fama and MacBeth (1973) to improve the precision of estimated betas for use in cross-sectional regression analysis. Using portfolio returns, researchers find the performance of CAPM less promising as compared to its most prominent rival, the Fama French three-factor model. In this paper we use individual stock returns data to test the performance of CAPM and the Fama French three-factor model and found that contrary to the highly superior performance of the Fama French three-factor model using portfolio returns data, when tested on individual stock return data, the Fama French three-factor model performs marginally better in explaining the stock returns and the proportion of stocks that have a significant alpha is comparable for both models. We further investigated the significance and stability of the parameters of CAPM and the Fama French three-factor model under changing economic and interest rate cycles and found that unlike the return on the market portfolio which is significant over the entire period, the significance of SMB and HML varies with both economic cycles and interest rate cycles. Over the last four decades, several studies 2 have appeared in the literature that empirically demonstrate that Sharpe's (1963, 1964) two-moment capital asset pricing model does not fully explain the asset pricing mechanism. In general, researchers found four shortcomings in the CAPM; namely, the model is not a good fit to the actual rates of return data because of very low coefficient of determination, the intercept term is statistically significant signaling specification error problem, the model overestimates (underestimates) the discount rates for low (high) beta stocks and beta is unstable over time. One alternative to the CAPM that has received a great deal of attention in the finance literature is the Fama French three-factor model. Fama and French (1993) developed a three-factor model that explains the average returns of investment opportunities better than any of the previous models.
Whereas the central theme of the CAPM is that the return on a market portfolio is suffi cient to explain asset returns, the threefactor model postulates that in addition to the loading on a market portfolio, loadings on two additional replicating portfolios, SMB, the difference between the rates of return on a portfolio of small stocks and large stocks and HML, the difference between the rates of return on a portfolio of high-book-to-market, and a portfolio of low-book-to-market stocks, are needed to explain the returns on assets. Fama and French (1996) test their three-factor model on portfolios constructed based on the market value and book value of stocks. They found that, not only is the average coeffi cient of determination (R 2 ) for the Fama French three-factor model close to one, but the constant term is insignifi cant as well, suggesting that the model does not suffer from misspecifi cation error. The high R 2 and the insignifi cance of the constant term suggest that the Fama French three-factor model does not suffer from the problem of under-or overestimation of excess returns. 3 There is ample evidence reported in the literature indicating that the widely used twomoment capital asset pricing model (CAPM) shows signifi cantly different results in bear and bull market periods (see, for example, Black (1972), Levy (1974) Chen (1982) Whitelaw (2000), Perez-Quiros and Timmermann (2000), and Ang and Chen (2002)). The cross sectional superiority of the Fama French three-factor model over the CAPM is already academically established and started with the Fama and French (1992) claim that CAPM as a model is "dead". In a recent paper, Lawrence, Geppert and Prakash. (2007) compared the performance of the twomoment CAPM, the three-moment CAPM and the Fama French three-factor model using the Fama-French 25 portfolio data. Based on the time series and the cross sectional tests, they found that the Fama French three-factor model outperforms the other models. In this paper we do not compare CAPM and Fama and French three-factor model cross-sectionally as the question of interest here is not if the risk premiums are priced. We tested the two models in time series regressions to investigate the predictive powers of the models using individual stock returns. Using individual stock returns, we also tested the stability of the parameters of the two asset pricing models. Fama and French (1996) did not test whether the parameters of their three-factor model depend on the market conditions. Since most of the models for portfolio selection and allocation of long-term resources (capital budgeting) use asset pricing models to compute the investors' required rate of return and/or the cost of capital, any inherent instability of the parameters 3 in changing market conditions may result in an incorrect decision. Therefore, it becomes imperative to search for the model that remains largely immune to the changing market conditions. Using individual stock returns, in this paper, we test the stability of the parameters of CAPM and Fama and French three-factor model in the bear and bull market periods. There has been a plethora of empirical studies on the effect of Federal discount rate change announcements on the asset prices (Waud (1970), Cook and Hahn (1988), Smirlock and Yawitz (1985), Jensen and Mercer (2002)). There seems to be no empirical study that has specifi cally examined the effect of interest changes on the parameters of asset pricing models. 
The Federal (Fed) monetary policies are designed to infl uence the overall economy and the Fed regularly use the discount rates to revive (restrict) the slowing (growing) economy by reducing (increasing) the discount rates. Though the discount rate changes are used to trigger changes in the macroeconomic variables such as overall output, employment and infl ation, the most prominent and direct effect of the discount rate changes is felt in the fi nancial markets through the changes in asset prices and their returns. If this is so, then the discount rate changes should affect the parameters of asset pricing models as well. According to Waud (1970), the stock market reacts positively to discount rate decreases and negatively to rate increases. Cook and Hahn (1988) and Smirlock and Yawitz (1985) found negative short-term market reaction to discount rate increases and vice versa. Jensen and Johnson (1995) fi nd evidence that the long-term stock market performance is correlated with changes in the Fed discount rate. Jensen, Merces and Johnson (1996) claim that the monetary environment infl uences investor's required returns. Jensen, Johnson and Bauman (1997) provide evidence regarding the relevance of monetary conditions for asset pricing. Bernanke and Kuttner (2005) found strong and consistent response of stock markets to the unexpected changes in the Fed interest rates. These studies clearly document the infl uence of Fed interest rate regimes on the security prices and their returns; however none of the studies so far have studied the effect of the Fed interest rate changes on the parameters of the asset pricing models. In this paper we made an attempt to fi ll this gap. We tested the two asset pricing models in the chronologically delineated non-overlapping (such as bear and bull periods 4 and the up and down interest rate regimes) market periods. Success of an asset pricing model should necessarily be gauged on how well it explains the returns on single assets. Our fi rst contribution was to show that the superior performance of the three-factor model is largely in explaining portfolio returns and not the stock returns. We performed time series analysis of the performance of the two models using both stock and portfolio return data over the 522 months, from July 1963 to December 2006. For portfolio returns we found that the average R 2 of the Fama French three-factor model is a convincing 18% more than that of the CAPM. However, when these models are used on the individual stock returns, the differential average R 2 falls to 5% and for those stocks where both models perform exceptional the increment is only 3%. Furthermore, the proportion of stocks that have a signifi cant alpha is comparable for both models; 7% in the Fama French three-factor model and 11% in CAPM. Our second contribution was an investigation of the signifi cance and stability of the SMB and HML under changing economic and interest rate cycles. The period of our study is conducive to such an investigation since over the period there have been a number of both bull/bear markets as well as a large number of increasing/decreasing discount rate periods. We found that unlike the return on the market portfolio which is signifi cant over the entire period, the signifi cance of the other two factors varies with both economic cycles and interest rate cycles. 
In the bull and bear periods, both SMB and HML are signifi cant in nearly all of the 25 Fama French portfolios but the signifi cance of both SMB and HML reduces for individual stocks; SMB is signifi cant in 60% of stocks in bull periods and 45% of stocks in bear periods whereas HML is signifi cant in 64% of the stocks in bull periods and 70% of stocks in the bear periods. Similar to our fi nding for bull/bear market periods, both SMB and HML are signifi cant in nearly all portfolios for the increasing and decreasing interest rate time periods. However SMB is signifi cant in 54% of stocks in increasing interest rate periods and 53% in the decreasing interest rate periods whereas HML is signifi cant in 69% of the stocks in the increasing interest rate time periods and is signifi cant in 60% of the stocks in the decreasing interest rate periods. Our results indicate that the parameters for SMB and HML are signifi cant for most of the portfolios returns but they are not signifi cant for the individual stock returns. Also, the Fama French three-factor model shows weaker results in the bear periods and in the increasing interest rate regimes. With respect to the stability of parameters we found the two models comparable. In the bull/bear periods, we found that the parameter for the market is different in 9% of the stocks using CAPM and 3% of the stocks using the Fama French three-factor model but the parameters for SMB and HML are different in respectively 9% and 8% of the stocks. In the Fed increasing and decreasing interest rate regimes, the parameter for the market remains nearly the same for the two models, 7% for CAPM and 8% for the three-factor model while the differences in the parameters for SMB and HML are 5% and 3% respectively. The layout of the paper is as follows: in Section 2 we briefl y discuss CAPM and the Fama and French three-factor model. In Section 3 we provide data and methodology. Section 4 has the empirical results. The conclusions are in Section 5. CAPM and Fama French Three-factor Model Under the assumptions for the CAPM, the market portfolio is effi cient and there is a risk free rate available to all investors. The following pricing relationship of the security market line (SML) holds for all individual assets and their portfolios: where R i denotes the return on any portfolio or asset i, R M is the return on some proxy of the market portfolio and  i,M = cov (R i R M ) / var (R M ). The above SML relationship allows a test of the CAPM using the following excess return market model regression equation: Taking expectations in the above market model we get: Comparing equation 3 with the SML equation 1, we see that CAPM imposes the restriction that the intercept  i is not signifi cantly different from zero and the coeffi cient on the excess market return (the beta coeffi cient) is statistically signifi cant. In the late 70s and 80s a number of anomalies concerning certain fi rm specifi c characteristics that seem to have explanatory power for the cross-section of returns beyond the market beta of the CAPM were reported. For example, Basu (1977) provides evidence that when common stocks are sorted on earnings-price ratios, future returns on high E/P stocks are higher than predicted by the CAPM and Banz (1981) document a size effect where low market capitalization fi rms have higher sample mean returns than would be expected if the market portfolio was mean-variance effi cient. 
Other researchers document a leverage effect and a role for the ratio of the book value of a fi rm's equity to its market value, (BE/ ME). 5 Fama and French (1992) investigate the joint role of all these variables by including all of them in their Fama-MacBeth style cross-sectional regression using portfolios formed fi rst on size and then on betas. Using a sample of monthly returns for non-fi nancial fi rms on NYSE, AMEX and Nasdaq from 1962-1989, they fi nd that beta does not explain the cross-section of average stock returns, there is a negative relation between size and return, book-to-market equity is signifi cantly positively related to average returns and the combination of size and book-to-market equity seem to absorb the roles of leverage and E/P ratio. They conclude that the two dimensions of risk which are priced are proxied by size and the ratio of book value of equity to market value of equity. Fama and French (1996) also report similar fi ndings using the time-series regression approach applied to portfolios of stocks sorted on price ratios. The evidence provided by Fama and French (1992) started the claims that CAPM as a model is "dead". This has however been countered by other researches who consider Fama and French results to be spurious and the result of data mining, (Kothari et al. 1995). In using time series regressions to test if the factors in the three-factor model are suffi cient to explain asset returns, the following model is used. If the three-factor model holds, then all three-factor coeffi cients are signifi cantly different from zero and the intercept is not signifi cantly different from zero. The three-factor model is now widely used in empirical research that requires a model of expected returns. It has been used in event studies to test for abnormal performance (Loughran and Ritter (1995); Mitchell and Stafford (2000) as well as models that study mutual fund performance (Carhart, 1997). However, todate there is no theory underlying this model. A. Data The data for this study consisted of all fi rms with monthly return data on CRISP return on the benchmark portfolios, HML, SMB and the monthly risk-free rate of return for the sample period are obtained from Kenneth French's website. We also obtained monthly value-weighted return on the 25 Fama-French portfolios which are the intersection of 5-size sort and 5-BE/ME sort from Kenneth-French's website. Table 1 provides summary statistics of the data. We included only those stocks that have been continuously traded over the sample period, a total of 245 stocks. Over the sample period the mean monthly excess return on the market is 0.476% which is similar to the value of 0.47% that was reported by Fama and French (2006) for their July 1963 to December 2004 period. B. Individual Asset Returns We performed time series analysis on each of the individual stocks and each of the 25 Fama French portfolios using the following two models: In the above models we tested for the signifi cance of the coeffi cient of determination of CAPM and FF3F. In addition, we also test if the signifi cance of the intercept is close to zero and ,  i s i and h i are signifi cantly different from zero. C. Stability Tests over Different Market Conditions We investigate the stability of the parameters in CAPM and the Fama French three-factor model over bear/bull economic cycles and the Fed interest rate cycles. 
Similar to the models used by Fabozzi and Francis (1977) we extend the CAPM and the three-factor model to include dummy variables for Bull/Bear market conditions and Increasing/Decreasing interest rate periods. The extended models that we use to test the stability of the parameters over bull/bear market conditions are: BB is a dummy variable which has a value of "1" for months that are part of bull market periods and zero otherwise. We used similar models to test the stability of the parameters over different discount rate periods, by including a dummy variable, DR, which takes a value of "1" for months when the discount rate is increasing and zero otherwise: We first tested the significance of the market factor in explaining individual stock returns over different market conditions using models CAPMBB and CAPMDR. Then, using stock data, we investigate the stability of the additional factors in the Fama French three-factor model by estimating models FF3FBB and FF3FDR. Specifically, our null hypotheses are: A. Individual Asset Returns vs. Portfolio Returns Panel A of Table 2 provides regression results of the CAPM and the Fama French three-factor model for the individual stocks in our sample and Panel B provides similar results for the Fama French 25 portfolios. For portfolio returns, our results are similar to those of Fama and French (1993). For the CAPM the R 2 ranges from a low of 58% to a high of 87% with an average of 73%; while for the Fama French three-factor model, the lowest R 2 is 79%, the highest is 95% and the average R 2 is a convincing 91%. In addition, whereas the intercept is significant in 15 of the portfolios when CAPM is used, this number is reduced to 8 using the Fama French three-factor model. The results for the individual stocks reported in Panel A are less convincing. Here the range of R 2 is similar for the two models, 3% to 60% for the CAPM and 4% to 63% for the three-factor model. Also, unlike the results for the Fama French 25 portfolios in Panel B where we get an 18% improvement in average R 2 using the Fama French three-factor model versus the CAPM model, with individual stocks, the difference in average R 2 is only 5%. In addition, for CAPM, the intercept is significant in 11% of the stocks compared to 7% for the three-factor model. Whereas SMB and HML are significant in explaining the returns of all 25 portfolios, they are significant only in 67% (SMB) and 78% (HML) of the stocks. Together these results provided evidence that the improvement of the Fama French three-factor model over the CAPM lies in explaining portfolio returns rather than individual stock returns. B. Bull/Bear Markets In Appendix 1, we provided the start and end date of the bull/bear periods and a summary of the total number of months for each. In Table 3 we present the results of the regression models when the CAPM and the Fama French three-factor models are extended to include dummies for the Bull/Bear months. Comparison of Panels A and B shows that only the market factor is consistently significant in explaining both stock and portfolio returns during the bull and bear market periods. In the bear period, both SMB and HML are significant in all the portfolios but in bull periods, SMB is significant in 96% of portfolios whereas HML is significant in 88% of the portfolios.
The signifi cance of both SMB and HML reduces for individual stocks; SMB is signifi cant in 60% of stocks in the bull period and 45% of stocks in the bear period whereas HML is signifi cant in 64% of stocks in the bull period and 70% of stocks in the bear period. The results indicated that parameters for SMB and HML are signifi cant for most of the portfolio returns but they are not signifi cant for nearly half of the individual stock returns. The three-factor model shows weaker results in the bear periods where the parameter for SMB is insignifi cant for 55% of the stocks. C. Increasing/Decreasing Interest Rates In Appendix 2, we provided the start and end date of the increasing and decreasing interest rate periods and a summary of the total number of months for each. Over the sample period, the number of months during which the interest rates was increasing is approximately equal to the number of months when interest rate was decreasing (266 vs. 256). DR is a dummy variable which is 1 for months in which the discount rate is increasing and 0 otherwise. In Panel A we provide results for the time series regression with the dependent variable being the monthly stock return on each of the 245 stocks continuously traded over the sample period of July 1963 to December 2006. In Panel B we present the regression results for the 25 Size/BEME portfolios. In this table we report the regression results for the following two models: In 69% of the stocks in the increasing interest rate time periods and is signifi cant in 60% of the stocks in the decreasing interest rate periods. In Table 5, we provide the results for the F-tests of equivalence of the slope coeffi cients for each of the two models in the bear/bull and the Fed interest rate increasing/decreasing time periods. For CAPM, beta is different in 9% of the stocks in the bear/bull periods and 18% of the stocks in the increasing/ decreasing interest rate periods. For the Fama French three-factor model, beta is different in 3% of stocks in the bear/bull periods and 8% of stocks in the increasing/decreasing interest rate periods. The parameters for SMB and HML are different in respectively 9% and 8% stocks in the bear/ bull time periods and respectively 5% and 3% stocks in the increasing/decreasing interest rate periods. The differences in the parameters are more prominent in the portfolio returns where for the bear/bull market period, beta values are different for 40% of the portfolios for CAPM and 16% of the portfolios for the three-factor model. For the increasing/decreasing interest rate periods there is no difference in the beta values for CAPM but the beta values are different for 8% of portfolios for the three-factor model. The parameter for SMB is not different for any of the 25 portfolios in the bear/bull periods. It is different for 28% portfolios in the increasing/decreasing interest rate periods. The parameter for HML is different in 48% of the portfolios in the bear/bull periods and 28% of the portfolios in the increasing/decreasing interest rate periods. Panels A and B shows that the market factor is generally more stable than SMB and HML in explaining both stocks and portfolio returns over different market conditions. Conclusions In practice, asset pricing models are used to compute the expected returns of individual assets. These returns are then used in the computation of fundamental price of stock by investors and the net present value of projects by corporate managers. 
Even though asset pricing models are used for individual assets, they are invariably tested using portfolio return data to avoid the problem of errors in variables. Though CAPM is inarguably the model most used by practitioners, it performs poorly when tested against the Fama French three-factor model using portfolio return data. In this paper we tested the performance of CAPM and the Fama French three-factor model using individual stock return data and found that the Fama French three-factor model performs marginally better than the CAPM. We also tested the stability of the parameters of the two models under different economic conditions (bear and bull periods and the Federal increasing and decreasing interest rate regimes) and found the two models comparable.
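The time-series specifications described above (CAPM and the three-factor model, with and without bull/bear or interest-rate dummies) lend themselves to a compact illustration. The sketch below, in Python with statsmodels, estimates a plain CAPM regression and a dummy-interacted three-factor regression for one stock; the simulated returns, column names, and the dummy construction are hypothetical placeholders, not the authors' dataset or code.

# Hedged sketch of the CAPM / three-factor time-series regressions with a
# bull/bear dummy (analogous to the CAPMBB / FF3FBB specifications described
# in the text). All data below are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 522                                   # monthly observations, July 1963 - Dec 2006
df = pd.DataFrame({
    "mkt_rf": rng.normal(0.005, 0.04, n), # excess market return
    "smb":    rng.normal(0.002, 0.03, n),
    "hml":    rng.normal(0.003, 0.03, n),
    "bull":   rng.integers(0, 2, n),      # 1 in bull months, 0 otherwise
})
# placeholder excess return of one stock
df["r_rf"] = 0.8 * df["mkt_rf"] + 0.3 * df["smb"] + rng.normal(0, 0.05, n)

# CAPM: r - rf = alpha + beta*(Rm - rf) + e
capm = sm.OLS(df["r_rf"], sm.add_constant(df[["mkt_rf"]])).fit()

# Three-factor model with bull/bear interactions: each loading may shift in bull months
X = df[["mkt_rf", "smb", "hml"]].copy()
for col in ["mkt_rf", "smb", "hml"]:
    X[col + "_bull"] = X[col] * df["bull"]
ff3_bb = sm.OLS(df["r_rf"], sm.add_constant(X)).fit()

print(capm.params)      # alpha and market beta
print(ff3_bb.params)    # base loadings plus bull-period shifts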
5,831
2010-01-01T00:00:00.000
[ "Economics" ]
FRK: An R Package for Spatial and Spatio-Temporal Prediction with Large Datasets FRK is an R software package for spatial/spatio-temporal modelling and prediction with large datasets. It facilitates optimal spatial prediction (kriging) on the most commonly used manifolds (in Euclidean space and on the surface of the sphere), for both spatial and spatio-temporal fields. It differs from many of the packages for spatial modelling and prediction by avoiding stationary and isotropic covariance and variogram models, instead constructing a spatial random effects (SRE) model on a fine-resolution discretised spatial domain. The discrete element is known as a basic areal unit (BAU), whose introduction in the software leads to several practical advantages. The software can be used to (i) integrate multiple observations with different supports with relative ease; (ii) obtain exact predictions at millions of prediction locations (without conditional simulation); and (iii) distinguish between measurement error and fine-scale variation at the resolution of the BAU, thereby allowing for reliable uncertainty quantification. The temporal component is included by adding another dimension. A key component of the SRE model is the specification of spatial or spatio-temporal basis functions; in the package, they can be generated automatically or by the user. The package also offers automatic BAU construction, an expectation-maximisation (EM) algorithm for parameter estimation, and functionality for prediction over any user-specified polygons or BAUs. Use of the package is illustrated on several spatial and spatio-temporal datasets, and its predictions and the model it implements are extensively compared to others commonly used for spatial prediction and modelling. Introduction Fixed rank kriging (FRK) is a spatial/spatio-temporal modeling and prediction framework that is scaleable, works well with large datasets, and can deal easily with data that have different spatial supports. FRK hinges on the use of a spatial random effects (SRE) model, in which a spatially correlated mean-zero random process is decomposed using a linear combination of spatial basis functions with random coefficients plus a term that captures the random process' fine-scale variation. Dimensionality reduction through a relatively small number of basis functions ensures computationally efficient prediction, while the reconstructed spatial process is, in general, non-stationary. The SRE model has a spatial covariance function that is always nonnegative-definite and, because any (possibly non-orthogonal) basis functions can be used, it can be constructed so as to approximate standard families of covariance functions (Kang and Cressie 2011). For a detailed treatment of FRK, see Johannesson (2006, 2008), Shi and Cressie (2007), and Nguyen, Cressie, and Braverman (2012). There are numerous R (R Core Team 2021) packages available for modeling and prediction with spatial or spatio-temporal data (see Bivand 2021), although relatively few of these make use of a model with spatial basis functions. A few variants of FRK have been developed to date, and the one that comes closest to the present software is LatticeKrig (Nychka, Bandyopadhyay, Hammerling, Lindgren, and Sain 2015;Nychka, Hammerling, Sain, Lenssen, Smirniotis, and Iverson 2019). LatticeKrig implements what we call a LatticeKrig (LTK) model, which is made up of Wendland basis functions (that have compact support) decomposing a spatially correlated process. 
LatticeKrig models use a Markov assumption to construct a precision matrix (the matrix K −1 in Section 2.1) to describe the dependence between the coefficients of these basis functions. This, in turn, results in efficient computations and the potential use of a large number (> 10, 000) of basis functions. LatticeKrig models do not cater for what we term fine-scale-process variation and, instead, the finest scale of the process is limited to the finest resolution of the basis functions used. The package INLA (Lindgren and Rue 2015) is a general-purpose package for model fitting and prediction. One advantage of INLA is that it contains functionality for fitting Gaussian processes that have covariance functions from the Matérn class (see Lindgren and Rue 2015, for details on the software interface) by approximating a stochastic partial differential equation (SPDE) using a Gaussian Markov random field (GMRF). Specifically, the process is decomposed using basis functions that are triangular 'tent' functions, and the coefficients of these basis functions are normally distributed with a sparse precision matrix. Thus, these models, which we term SPDE-GMRF models, share many of the features of LatticeKrig models. A key advantage of INLA over LatticeKrig is that once the spatial or spatio-temporal model is constructed, one has access to all the approximate-inference machinery and likelihood models available within the package. Kang and Cressie (2011) develop Bayesian FRK; they keep the spatial basis functions fixed and put a prior distribution on K. The predictive-process approach of Banerjee, Gelfand, Finley, and Sang (2008) can also be seen as a type of Bayesian FRK, where the basis functions are constructed from the postulated covariance function of the spatial random effects and hence depend on parameters (see Katzfuss and Hammerling 2017, for an equivalence argument). An R package that implements predictive processes is spBayes (Finley, Banerjee, and Carlin 2007;Finley, Banerjee, and Gelfand 2015). It allows for multivariate spatial or spatio-temporal processes, and Bayesian inference is carried out using Markov chain Monte Carlo (MCMC), thus allowing for a variety of likelihood models. Because the implied basis functions are constructed based on a parametric covariance model, a prior distribution on parameters results in new basis functions generated at each MCMC iteration. Since this can slow down the computation, the number of knots used in predictive processes is usually chosen to be small, which has the effect of limiting their ability to model finer scales. Our software package FRK differs from spatial prediction packages currently available by constructing an SRE model on a discretized domain, where the discrete element is known as a basic areal unit (BAU; see, e.g., Nguyen et al. 2012). The BAU can be viewed as the smallest spatial area or spatio-temporal volume that can be resolved by the process and, to reflect this, the process itself is assumed to be piecewise constant over the set of BAUs. The BAUs serve many purposes in FRK: They define a fine grid over which to do numerical integrations for change-of-support problems; a fine lattice of discrete points over which to predict (although FRK implements functions to predict over any arbitrary user-defined polygons); and a set of bins within which to average large spatio-temporal datasets, if so desired, for computational efficiency. 
BAUs do not need to be square or all equal in size, but they do need to be 'small, ' in the sense that they should be able to reconstruct the (undiscretized) process with minimal error. In the standard 'flavor' of FRK (Cressie and Johannesson 2008), which we term vanilla FRK (FRK-V), there is an explicit reliance on multi-resolution basis functions to give complex nonstationary spatial patterns at the cost of not imposing any structure on K, the covariance matrix of the basis function weights. This can result in identifiability issues and hence in over-fitting the data when K is estimated using standard likelihood methods (e.g., Nguyen, Katzfuss, Cressie, and Braverman 2014), especially in regions of data paucity. Therefore, in FRK we also implement a model (FRK-M) where a parametric structure is imposed on K (e.g., Stein 2008;Nychka et al. 2015). The main aim of the package FRK is to facilitate spatial and spatio-temporal analysis and prediction for large datasets, where multiple observations come with different spatial supports. We see that in 'big data' scenarios, lack of consideration of fine-scale variation may lead to over-confident predictions, irrespective of the number of basis functions adopted. In Section 2, we describe the modeling, estimation, and prediction approach we adopt in FRK. In Section 3, we discuss further details of the package and provide a simple example on the classic meuse dataset. In Section 4, we evaluate the SRE model implemented in FRK in controlled cases, against LatticeKrig models and SPDE-GMRF models through use of the packages LatticeKrig and INLA. In Section 5, we show its capability to deal with change-of-support issues and anisotropic processes. In Section 6, we show how to use FRK with spatio-temporal data and illustrate its use on the modeling and prediction of columnaveraged carbon dioxide on the globe from remote sensing data produced by NASA's OCO-2 mission. The spatio-temporal dataset contains millions of observations. Finally, Section 7 discusses future work. Outline of FRK: Modeling, estimation and prediction In this section we present the theory behind the operations implemented in FRK. In Section 2.1 we introduce the SRE model, in Section 2.2 we discuss the EM algorithm for parameter estimation, and in Section 2.3 we present the spatial prediction equations. The SRE model Denote the spatial process of interest as {Y (s) : s ∈ D}, where s indexes the location of Y (s) in our domain of interest D. In what follows, we assume that D is a spatial domain but extensions to spatio-temporal domains are natural within the framework (Section 6). Consider the classical spatial statistical model, In order to cater for different observation supports {B j } (defined below), it is convenient to assume a discretized domain of interest D G ≡ {A i ⊂ D : i = 1, . . . , N } that is made up of N small, non-overlapping basic areal units or BAUs (Nguyen et al. 2012) and N is the number of BAUs. At this BAU level, where for i = 1, . . . , N, and ξ i is specified below. The SRE model specifies that the small-scale random variation is υ(·) = φ(·) η, and hence in terms of the discretization onto D G , so that υ = Sη, where S is the N × r matrix defined as follows: In FRK, we assume that η is an r-dimensional Gaussian vector with mean zero and r × r covariance matrix K, and estimation of K is based on likelihood methods; we denote this variant of FRK as FRK-V (where recall that 'V' stands for 'vanilla'). 
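Several displayed equations in this part of the text (the classical spatial model, its BAU-level discretization, and the definition of the matrix S) did not survive extraction. The following LaTeX sketch gives forms consistent with the surrounding prose; it is a hedged reconstruction under those definitions, not a verbatim copy of the package documentation.

% Hedged reconstruction of the SRE-model equations described in the text.
% Classical spatial statistical model (plausible form of Eq. 1):
Y(\mathbf{s}) \;=\; \mathbf{x}(\mathbf{s})^{\top}\boldsymbol{\alpha}
                 \;+\; \upsilon(\mathbf{s}) \;+\; \xi(\mathbf{s}),
\qquad \mathbf{s}\in D,

% BAU-level discretization (plausible form of Eq. 2), with Y_i the average of Y(.) over BAU A_i:
Y_i \;=\; \mathbf{x}_i^{\top}\boldsymbol{\alpha} \;+\; \upsilon_i \;+\; \xi_i,
\qquad i = 1,\dots,N,

% Basis-function expansion upsilon = S eta, where the (i,j)-th entry of S is the
% BAU average of the j-th basis function, approximated by its value at the BAU centroid:
S_{ij} \;=\; \frac{1}{|A_i|}\int_{A_i}\phi_j(\mathbf{s})\,d\mathbf{s}
        \;\approx\; \phi_j(\mathbf{s}_i).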
If some structure is imposed on VAR(η) in terms of parameters ϑ, then K = K • (ϑ) and ϑ needs to be estimated; we denote this variant as FRK-M (where recall that 'M' stands for 'model'). Frequently, the resolution of the BAUs is sufficiently fine, and the basis functions are sufficiently smooth, so that S can be approximated: where {s i : i = 1, . . . , N } are the centroids of the BAUs. Since small BAUs are always assumed, this approximation is used throughout FRK. In FRK, we do not directly model ξ(s), since we are only interested in its discretized version. Rather, we assume that ξ i ≡ 1 |A i | A i ξ(s)ds has a Gaussian distribution with mean zero and variance ξ is a parameter to be estimated, and the weights {v ξ,1 , . . . , v ξ,N } are known and represent heteroscedasticity. These weights are typically generated from domain knowledge; they may, for example, correspond to topographical features such as terrain roughness (Zammit-Mangion, Rougier, Schön, Lindgren, and Bamber 2015). Since we specified ξ(·) to be 'almost' spatially uncorrelated, it is reasonable to assume that the variables representing the discretized fine-scale variation, {ξ i : i = 1, . . . , N }, are uncorrelated. From (2), we can write We now assume that the hidden (or latent) process, Y (·), is observed with m footprints (possibly overlapping) spanning one or more BAUs, where typically m r (note that both m > N and N ≥ m are possible). We thus define the observation domain as D O ≡ {∪ i∈c j A i : j = 1, . . . , m}, where c j is a non-empty set in 2 {1,...,N } , the power set of {1, . . . , N }, and m = |D O |. For illustration, consider the simple case of the discretized domain being made up of three BAUs. Then D G = {A 1 , A 2 , A 3 } and, for example, D O = {B 1 , B 2 }, where B 1 = A 1 ∪A 2 (i.e., c 1 = {1, 2}) and B 2 = A 3 (i.e., c 2 = {3}). Catering for different footprints is important for remote sensing applications in which satellite-instrument footprints can widely differ (e.g., Zammit-Mangion et al. 2015). Each B j ∈ D O is either a BAU or a union of BAUs. Measurement of Y is imperfect: We define the measurement process as noisy measurements of the process averaged over the footprints where the weights, depend on the areas of the BAUs, and I(·) is the indicator function. Currently, in FRK, BAUs of equal area are assumed, but we give (6) in its most general form. The random quantities {δ i } and { i } capture the imperfections of the measurement. Better known is the measurement-error component i , which is assumed to be mean-zero Gaussian distributed. The component δ i captures any bias in the measurement at the BAU level, which has the interpretation of an intra-BAU systematic error. These systematic errors are BAU-specific, that is, the {δ i } are uncorrelated with mean zero and variance where σ 2 δ is a parameter to be estimated, and {v δ,1 , . . . , v δ,N } represent known heteroscedasticity. We assume that Y and δ are independent. We also assume that the observations are conditionally independent, when conditioned on Y and δ. Equivalently, we assume that the measurement errors { j : j = 1, . . . , m} are independent with VAR( j ) = σ 2 v ,j . We represent the data as Z ≡ (Z j : j = 1, . . . , m) . Then, since each element in D O is the union of subsets of D G , one can construct a matrix where the three components are independent, ≡ ( j : j = 1, . . . , m) , and VAR( The matrix Σ is assumed known from the properties of the measurement. 
If it is not known, V is fixed to I and σ 2 is estimated using variogram techniques (Kang, Liu, and Cressie 2009). Notice that the rows of the matrix C Z sum to 1. It will be convenient to re-write where In practice, it is not always possible for each B j to include entire BAUs. For simplicity, in FRK we assume that the observation footprint overlaps a BAU if and only if the BAU centroid lies within the footprint. Frequently, point-referenced data are included in Z. In this case, each data point is attributed to a specific BAU and it is possible to have multiple observations of the process defined on the same BAU. We collect the unknown parameters in the set θ ≡ {α, σ 2 ξ , σ 2 δ , K} for FRK-V and θ • ≡ {α, σ 2 ξ , σ 2 δ , ϑ} for FRK-M for which K = K • (ϑ); their estimation is the subject of Section 2.2. If the parameters in θ or θ • are known, an inversion that uses the Sherman-Woodbury identity (Henderson and Searle 1981) allows spatial prediction at any BAU in D G . Estimates of θ are substituted into these spatial predictors to yield FRK-V. Similarly, estimates of θ • substituted into the spatial-prediction equations yield FRK-M. In FRK, we allow the prediction set D P to be as flexible as D O ; specifically, D P ⊂ {∪ i∈c k A i : k = 1, . . . , N P }, wherec k is a non-empty set in 2 {1,...,N } and N P is the number of prediction areas. We can thus predict both at the individual BAU level or averages over an area spanning multiple BAUs, and these prediction regions may overlap. This is an important change-ofsupport feature of FRK. We provide the FRK equations in Section 2.3. Parameter estimation using an EM algorithm In all its generality, parameter estimation with the model of Section 2.1 is problematic due to confounding between δ and ξ. In FRK, the user thus needs to choose between modeling the intra-BAU systematic errors (in which case σ 2 ξ is fixed to 0) or the process' fine-scale variation (in which case σ 2 δ is fixed to 0). We describe below the estimation procedure for the latter case; due to symmetry, the estimation equations of the former case can be simply obtained by replacing the subscript ξ with δ. However, which case is chosen by the user has a considerable impact on the prediction equations for Y (Section 2.3). Recall that the measurement-error covariance matrix Σ is assumed known from measurement characteristics, or estimated using variogram techniques prior to estimating the remaining parameters described below. For conciseness, in this section we use θ to denote the parameters in both FRK-V and FRK-M, only distinguishing when necessary. We carry out parameter estimation using an expectation-maximization (EM) algorithm (similar to Katzfuss and Cressie 2011;Nguyen et al. 2014) with (7) as our model. Define the complete-data likelihood L c (θ) ≡ [η, Z | θ] (with ξ Z integrated out), where [ · ] denotes the probability distribution of its argument. The EM algorithm proceeds by first computing the conditional expectation (conditional on the data) of the complete-data log-likelihood at the current parameter estimates (the E-step) and, second, maximizing this function with respect to the parameters (the M-step). In mathematical notation, in the E-step the function is found for some current estimate θ (l) . In the M-step, the updated parameter estimates The E-step boils down to finding the conditional distribution of η at the current parameter estimates. 
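For concreteness, a sketch of this conditional distribution, obtained from standard Gaussian conditioning under the model above, is given below. Here D_Z denotes the total data-level error covariance (a notational shorthand introduced only for this sketch), and S_Z ≡ C_Z S and T_Z ≡ C_Z T map the basis functions and covariates to the observation supports; the expressions used internally by the package may be arranged differently.

\[
\eta \mid Z, \theta^{(l)} \sim \mathrm{Gau}\!\left(\mu_\eta^{(l)}, \Sigma_\eta^{(l)}\right), \qquad
\Sigma_\eta^{(l)} = \left( S_Z^\top \big(D_Z^{(l)}\big)^{-1} S_Z + \big(K^{(l)}\big)^{-1} \right)^{-1},
\]
\[
\mu_\eta^{(l)} = \Sigma_\eta^{(l)}\, S_Z^\top \big(D_Z^{(l)}\big)^{-1} \left( Z - T_Z \alpha^{(l)} \right), \qquad
D_Z^{(l)} \equiv \big(\sigma_\xi^2\big)^{(l)} V_{\xi,Z} + \Sigma.
\]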
One can use standard results in Gaussian conditioning (e.g., Rasmussen and Williams 2006, Appendix A) to show from the joint distribution, [η, Z | θ (l) ], that In FRK-V, the update for K (l+1) is while in FRK-M, where recall that K = K • (ϑ), the update is which is numerically optimized using the function optim with ϑ (l) as the initial vector. The update for σ 2 ξ requires the solution to where The solution to (9), namely (σ 2 ξ ) (l+1) , is found numerically using uniroot after (8) is substituted into (10). Then α (l+1) is found by substituting (σ 2 ξ ) (l+1) into (8). Computational simplifications are possible when V ξ,Z and Σ are diagonal, since then only the diagonal of Ω needs to be computed. Further simplifications are possible when V ξ,Z and Σ are proportional to the identity matrix, with constants of proportionality γ 1 and γ 2 , respectively. In this case, where recall that m is the dimension of the data vector Z and α (l+1) is, in this special case, the ordinary-least-squares estimate given µ (l) η (see (8)). These simplifications are used by FRK whenever possible. Convergence of the EM algorithm is assessed using the (incomplete-data) log-likelihood function at each iteration, Efficient computation of the log-likelihood is facilitated through the use of the Sherman-Morrison-Woodbury matrix identity and a matrixdeterminant lemma (e.g., Henderson and Searle 1981). Specifically, the operations ensure that we only deal with vectors of length m and matrices of size r × r, where typically the fixed rank r m, the dataset size. Prediction The prediction task is to make inference on the hidden Y -process over a set of prediction regions D P . Consider the process {Y P (B k ) : k = 1, . . . , N P }, which is derived from the Y process and, similar to the observations, is constructed using the BAUs {A i : i = 1, . . . , N }. Here, N P is the number of areas at which spatial prediction takes place, and is equal to |D P |. Then, where the weights arẽ Define Y P ≡ (Y P,k : k = 1, . . . , N P ) . Then, since each element in D P is the union of subsets of D G , one can construct a matrix, the rows of which sum to 1, such that where T P ≡ C P T, S P ≡ C P S, ξ P ≡ C P ξ and VAR(ξ P ) = σ 2 ξ V ξ,P ≡ σ 2 ξ C P V ξ C P . As with the observations, the prediction regions {B k } may overlap. In practice, it may not always be possible for eachB k to include entire BAUs. In this case, we assume that a prediction region contains a BAU if and only if the BAU centroid lies within the region. Let l * denote the EM iteration number at which convergence is deemed to have been reached. The final estimates are then Recall from Section 2.2 that the user needs to attribute fine-scale variation at the BAU level to either the measurement process or the hidden process Y . This leads to the following two cases. Case 1: σ 2 ξ = 0 and estimate σ 2 δ . The prediction vector Y P and covariance matrix Σ Y P |Z , corresponding to the first two moments from the predictive distribution [Y P | Z] when σ 2 ξ = 0, are Under the assumptions taken, [Y P | Z] is a Gau( Y P , Σ Y P |Z ) distribution. Note that all calculations are made after substituting in the EM-estimated parameters, and that σ 2 δ is present in the estimated parameters. Case 2: σ 2 δ = 0 and estimate σ 2 ξ (default). To cater for arbitrary observation and prediction support, we predict Y P by first carrying out prediction over the full vector Y, that is, at the BAU level, and then transforming linearly to obtain Y P through the use of the matrix C P . 
It is easy to see that if Ŷ is an optimal (squared-error-loss matrix criterion) predictor of Y, then AŶ is an optimal predictor of AY, where A is any matrix with N columns. Let W ≡ (η⊤, ξ⊤)⊤ and Π ≡ (S, I). Then (5) can be re-written as Y = Tα + ΠW, where VAR(W) is the block-diagonal matrix Λ ≡ bdiag(K, σ²_ξ V_ξ), and bdiag(·) returns a block-diagonal matrix of its matrix arguments. Note that all calculations are made after substituting in the EM-estimated parameters. For both Cases 1 and 2 it follows that Ŷ_P = E(Y_P | Z) = C_P Ŷ and Σ_{Y_P|Z} = C_P Σ_{Y|Z} C_P⊤. Note that for Case 2 we need to obtain predictions for ξ_P which, unlike those for η, are not a by-product of the EM algorithm of Section 2.2. Sparse-matrix operations (Zammit-Mangion and Rougier 2018) are used to facilitate the computation of (13) when possible.

FRK package structure and usage

In this section we discuss the layout and the interface of the package, and we show its use on the meuse dataset under 'simple usage' and 'advanced usage.' The former attempts to construct the SRE model automatically from characteristics of the data, while the latter gives the user more control through use of additional commands. The meuse dataset is not large and contains 155 readings of heavy-metal abundance in a region of The Netherlands along the river Meuse. For more details on the dataset see the vignette titled 'gstat' in the package gstat (Pebesma 2004).

Usage overview

By leveraging the flexibility of the spatial and spatio-temporal objects in the sp (Bivand, Pebesma, and Gómez-Rubio 2013) and spacetime (Pebesma 2012) packages, FRK provides a consistent, easy-to-use interface for the user, irrespective of whether the datasets have different spatial supports, irrespective of the manifold being used, irrespective of whether or not a temporal dimension needs to be included, and irrespective of the 'prediction resolution.' In Figure 1 we provide a partial unified modeling language (UML) diagram summarizing the important package classes and their interaction with the packages sp and spacetime, while in Table 1 we provide a brief summary of these classes.

Figure 1: Partial UML diagram of the important package classes and their interaction with the packages sp and spacetime; see Table 1 for a brief description of these classes. For conciseness, in each class diagram (yellow box) only a few attributes are shown and no class operations are listed. Italicized class names indicate virtual classes, an arrow with an open arrowhead indicates inheritance, and a line with a diamond at one end and an arrowhead at another indicates a compositional ("has a") relationship. The numbers on these lines indicate the number of instances involved in the relationship. For example, the 'SRE' class always has two or more sp or spacetime instances (BAUs and data), while a 'TensorP_Basis' object may or may not be needed when setting up the SRE model. In the former case we use the notation '2..*' to denote 'two or more,' while in the latter we use '0..1' to note that the user may have 0 or 1 'TensorP_Basis' objects when using FRK.

Table 1: Brief summary of the main classes in FRK.
• 'Basis': Defines basis functions on a specified manifold.
• 'Basis_obj': A virtual class that other basis classes inherit from.
• 'manifold': A virtual class that other manifold classes inherit from.
• 'measure': Defines objects that compute distances on a specified manifold.
• 'plane', 'real_line', 'sphere', 'STplane', 'STsphere', 'STmanifold': Subclasses that inherit from the virtual class 'manifold'.
• 'SRE': Defines the spatial-random-effects model, which is used to do FRK.
• 'TensorP_Basis': Tensor product of two basis functions.

BAUs should be 'Spatial' or 'ST' pixel or polygon objects, while the data can also be point objects (although they are subsequently mapped to BAUs by FRK). Each 'Spatial' and 'ST' object is equipped with a coordinate reference system (CRS), which needs to be identical across objects. The main class is the 'SRE' class, the object of which incorporates all information about fitting and prediction using the data, BAUs, and basis functions. The basis functions are constructed on a manifold which, at the time of writing, can be R (real_line), R² (plane), S² (surface of sphere), and their spatio-temporal counterparts (STplane and STsphere). Some consistency checks are made to ensure that the CRS in the BAUs and the data objects are compatible with the manifold on which the basis functions are constructed.
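To make the CRS requirement concrete, the following sketch (mirroring the sphere set-up used for the AIRS example in Section 4.2; the object my_points is hypothetical) declares lon-lat data on the sphere and constructs basis functions on the matching manifold:

R> library("sp")
R> coordinates(my_points) <- ~lon + lat
R> proj4string(my_points) <- CRS("+proj=longlat +ellps=sphere")
R> G <- auto_basis(manifold = sphere(), data = my_points, nres = 3, type = "bisquare")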
As with spDists in the sp package, distances on the manifold are either Euclidean or great-circle. The function spDists in sp is not used; rather, a function in an object of class 'measure' is used for abstraction. This seemingly redundant structure is intended to facilitate future implementation of FRK on arbitrary manifolds and with arbitrary distance functions. The package FRK has support for spatio-temporal data (see Section 6); in this case, basis functions are of class 'TensorP_Basis' and, as the name implies, are constructed through the tensor product of spatial and temporal basis functions. The package is built around a straightforward model (outlined in Section 2) and has the capability of handling large datasets (up to a few hundred thousand data points on a standard desktop machine, and a few million on a big-memory machine). For linear algebraic calculations, it leverages routines from the sparseinv package (Zammit-Mangion 2018), which is built from C code written by Davis (2021).

The user has two levels of control; for simple problems one can call the function FRK, in which case basis-function construction and BAU generation are done automatically based on characteristics of the data. Alternatively, for more (advanced) control, the user can follow these six steps.

• Step 1: Place the data into an object with class defined in sp or spacetime, specifically either 'SpatialPointsDataFrame' or 'STIDF' for point-referenced data, and either 'SpatialPolygonsDataFrame' or 'STFDF' for polygon-referenced data (Pebesma 2012).

• Step 2: Construct a prediction grid of BAUs using auto_BAUs, where each BAU is representative of the finest scale upon which we wish to carry out inference (the process is discretized at the BAU level). The BAUs are usually of class 'SpatialPixelsDataFrame' for spatial problems (or they could also be of class 'SpatialPolygonsDataFrame'), and they are of class 'STFDF' for spatio-temporal problems.

• Step 3: Construct, using auto_basis, a set of regularly or irregularly spaced basis functions. The basis functions can be of various types (e.g., bisquare, Gaussian, or exponential functions).

• Step 4: Construct an SRE model using SRE from an R formula that identifies the response variable, the covariates, the data, the BAUs, and the basis functions.

• Step 5: Estimate the parameters within the SRE model using SRE.fit. Estimation is carried out using the EM algorithm described in Section 2.2.

• Step 6: Predict either at the BAU level or over arbitrary polygons, specified as 'SpatialPolygons' or 'SpatialPolygonsDataFrame' objects in the spatial case, or as 'STFDF' objects in the spatio-temporal case, using predict.

A compact sketch that strings these six steps together is given below.
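The following skeleton assembles the six steps on the meuse data; the specific argument values (BAU cell size, number of resolutions, EM settings) are illustrative rather than prescriptive, and each call is discussed at length in the remainder of this section.

R> library("FRK")
R> library("sp")
R> data("meuse", package = "sp")
R> coordinates(meuse) <- ~x + y                             # Step 1: point-referenced data
R> GridBAUs <- auto_BAUs(manifold = plane(), data = meuse,  # Step 2: BAUs on the plane
+                        cellsize = c(100, 100), type = "grid")
R> GridBAUs$fs <- 1                                         # fine-scale weights (constant here)
R> G <- auto_basis(manifold = plane(), data = meuse,        # Step 3: basis functions
+                  nres = 3, type = "bisquare")
R> S <- SRE(f = log(zinc) ~ 1, data = list(meuse),          # Step 4: construct the SRE model
+           BAUs = GridBAUs, basis = G, est_error = TRUE)
R> S <- SRE.fit(SRE_model = S, n_EM = 100, tol = 0.01)      # Step 5: EM estimation
R> Pred <- predict(S, obs_fs = FALSE)                       # Step 6: predict at the BAU level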
In Table 2 we provide some of the important methods and functions, together with brief descriptions, available to the user of FRK.

Table 2: Some of the important methods and functions available to the user of FRK, grouped by purpose.

Basis functions:
• auto_basis: Automatically constructs a set of basis functions on a given manifold based on a supplied dataset.
• local_basis: Manually constructs a set of 'local' basis functions from a set of centroids and scale parameters.
• eval_basis: Evaluates basis functions over arbitrary points or polygons.
• remove_basis: Removes basis functions from an object of class 'Basis'.
• show_basis: Visualizes basis functions.

BAUs:
• auto_BAUs: Automatically constructs a set of BAUs on a given manifold around a supplied dataset.
• BAUs_from_points: Constructs BAUs from point-level data.

Information:
• coef: Returns regression coefficients from a fitted SRE model.
• info_fit: Returns information from the EM algorithm (e.g., information on convergence).
• nbasis: Returns the number of basis functions in a 'Basis' or 'SRE' object.
• nres: Returns the number of basis-function resolutions in a 'Basis' or 'SRE' object.
• opts_FRK$get: Returns current option settings.
• opts_FRK$set: Sets an option.
• summary: Returns information on the 'Basis' or 'SRE' object.

FRK operations:
• FRK: Constructs and fits an SRE model from a supplied R formula and dataset.
• predict: Predicts over BAUs or at newdata using a fitted SRE model.
• SRE: Constructs an SRE model from an R formula, data, BAUs, and basis functions.
• SRE.fit: Fits (estimates parameters in) an SRE model.

Simple usage

In simple cases, the user constructs and fits the SRE model using the function FRK, and then prediction is carried out using the function predict. The main function FRK takes two compulsory arguments: a standard R formula f and a list of data objects data, and it returns an object of class 'SRE'. Each of the data objects in the list must be of class 'SpatialPointsDataFrame', 'SpatialPolygonsDataFrame', 'STIDF', or 'STFDF', and each must contain the dependent variable defined in f. If there are covariates, then the user must supply the covariate data with all the BAUs, that is, at both the BAU measurement locations and at the BAU prediction locations. The BAUs should be of class 'SpatialPolygonsDataFrame' or 'SpatialPixelsDataFrame' (in the spatial case) or 'STFDF' (in the spatio-temporal case). Note that, unlike conventional spatial modeling tools, covariate information should not be supplied with the data, but with the BAUs. Also note that the intersection of the data support and that of the BAUs should never be null. When no basis functions or BAUs are supplied, then these are elicited automatically based on characteristics of the supplied dataset(s). The number of basis functions used depends on whether K is unstructured or not, on whether the data are spatial only or are spatio-temporal, and on the number of data points. For details, see the package's manual (Zammit-Mangion and Sainsbury-Dale 2021). The number of BAUs depends on the domain boundary and on whether the dataset is spatial or spatio-temporal. Domain construction and basis-function placement may make use of geometric functions available in INLA. If INLA is unavailable, simple geometric methods are used instead. FRK was not built for small datasets, for which standard exact kriging is fast and memory efficient. However, to illustrate the utility of FRK, we consider the meuse dataset in the package sp. We first consider a simple model with no covariates, in which we model the logarithm of zinc concentrations.
Basis functions can either be arranged on a grid by setting regular = 1 or as a function of data density (using the INLA mesher) by setting regular = 0. The meuse dataset is first loaded and cast into a 'SpatialPointsDataFrame'. R> library("FRK") R> f <-log(zinc)~1 R> S <-FRK(f = f, data = list(meuse), regular = 0) The returned 'SRE' object S contains all the information about the fitted SRE model, which can be displayed using the summary command. If we wish to use covariate information, we need to consider BAUs that have covariate information attached to them. Such BAUs are available for this problem in the package sp in meuse.grid, which we first cast into a 'SpatialPixelsDataFrame' using the function gridded before using them in the SRE model. R> data("meuse.grid", package = "sp") R> coordinates(meuse.grid) <-~x + y R> gridded(meuse.grid) <-TRUE In this example, based on prior exploratory data analysis (see the vignette 'gstat' in the package gstat), we consider the square root of the distance from the centroid of a BAU to the nearest point on the river Meuse as the covariate. Recall that all covariates need to be supplied with the BAUs and not with the data, and FRK will throw an error if the data and BAUs have fields in common. In the code below, we first set any common fields to NULL in the data object, before running FRK using the user-specified BAUs. The other core function, which is also needed for 'advanced usage,' is predict, which is used to compute prediction and prediction standard errors at all prediction locations. This function takes as compulsory argument the SRE model S. If no polygons are specified, prediction is carried out at the BAU level (in space and/or time). An important argument is the flag obs_fs, which acts as a choice between Case 1 (process' fine-scale variance σ 2 ξ = 0; obs_fs = TRUE) and Case 2 (systematic intra-BAU variance σ 2 δ = 0; obs_fs = FALSE) of Section 2.3. R> Pred <-predict(S, obs_fs = FALSE) The function Pred returns the polygons (or, in this case, the BAUs) containing the prediction mu and the prediction variance var, which can be readily used for visualization. The predictions and prediction standard errors of the model having sqrt(dist) as a covariate are depicted in Figure 2. In this instance, Case 2 was used, and the estimate of the fine-scale variance σ 2 ξ was positive. Hence, the prediction and prediction-error maps exhibit 'bulls-eye' features, where the prediction standard errors are much lower in BAUs containing data than in neighboring BAUs not containing data. Point-level data and predictions In many cases, the user has one data object or data frame containing both observations and prediction locations with accompanying covariates. Missing observations are then usually denoted as NA. Since in FRK all covariates are associated with the BAUs and not the data, that one data object needs to be used to construct (i) a second data object where no data are missing and that does not contain missing covariates, and (ii) BAUs at both the observation and prediction locations supplied with their associated covariate data. For example, assume that the first 10 log-zinc concentrations are missing in the meuse dataset. R> data("meuse", package = "sp") R> meuse[1:10, "zinc"] <-NA Once the data frame is appropriately subsetted, it is then cast as a 'SpatialPointsDataFrame' as usual. 
R> meuse2 <-subset(meuse, !is.na(zinc)) R> meuse2 <-meuse2[, c("x", "y", "zinc")] R> coordinates(meuse2) <-~x + y The BAUs, on the other hand, should contain all the data and prediction locations, but not the response variable itself. Their construction is facilitated by the function BAUs_from_points which constructs tiny BAUs around the data and prediction locations. Advanced usage The package FRK provides several helper functions for facilitating basis-function construction and BAU construction when more control is needed. Harnessing the extra functionality requires following the six steps outlined in Section 3.1. Step 1: As before, we first load the data and cast it into a 'SpatialPointsDataFrame'. R> data("meuse", package = "sp") R> coordinates(meuse) <-~x + y Step 2: Based on the geometry of the data we now generate BAUs. For this, we use the helper function auto_BAUs, which takes several arguments (see help(auto_BAUs) for details). In the code below, we instruct the helper function to construct BAUs on the plane, centered around the data in meuse with each BAU of size 100 × 100 m. The type = "grid" input indicates that we want a rectangular grid and not a hexagonal lattice (type = "hex") and convex = -0.05 is a parameter controlling the shape of the domain boundary when nonconvex_hull = TRUE (see the help file of INLA::inla.nonconvex.hull and Lindgren and Rue 2015 for more details), and the extension of the convex hull of the data when nonconvex_hull = FALSE (default). For the ith BAU, we also need to supply the element v ξ,i (or v δ,i ) that describes the heteroscedasticity of the fine-scale variation for that BAU. As described in Section 2.1, this component encompasses all process variation that occurs at the BAU scale and only needs to be known up to a constant of proportionality, σ 2 ξ or σ 2 δ (depending on the chosen model); this constant is estimated using maximum likelihood with SRE.fit, which uses the EM algorithm of Section 2.2. Typically, geographic features such as altitude are appropriate, but in this illustration of the package we just set this value to be 1 for all BAUs. This field is labeled fs, and SRE will throw an error if it is not set. R> GridBAUs1$fs <-1 The data and BAUs are illustrated using the plot function in Figure 3. At this stage, the BAUs only contain geographical information. To add covariate information to these BAUs from other 'Spatial' objects, the function sp::over can be used. Step 3: FRK decomposes the spatial process as a sum of basis functions that can be constructed using the helper functions auto_basis as follows: R> G <-auto_basis(manifold = plane(), data = meuse, regular = 0, + nres = 3, type = "bisquare") The argument nres indicates the number of basis-function resolutions to use, while type indicates the function to use, in this case the bisquare function, where A is the amplitude and r is the aperture. Other options are "exp" (the exponential covariance function) and "Matern32" (the Matérn covariance function with smoothness parameter equal to 1.5). The basis functions do not need to be positive-definite and users may (7) need to contain at least one non-zero element in order for η to be identifiable. See help(auto_basis) for details. The basis functions can be visualized using show_basis(G); see Figure 4. Step 4: The SRE model is constructed by supplying an R formula, the data, the BAUs, and the basis functions, to the function SRE. 
If the model contains covariates, one must make sure that they are specified at the BAU-level (and hence attributed to GridBAUs1). We use the following formula. R> f <-log(zinc)~1 The SRE model is then constructed using the function SRE, which essentially bins the data into the BAUs, constructs all the matrices required for estimation, and provides initial guesses for the parameters that need to be estimated. By default, K_type = "block-exponential", which signals the construction of the matrices where d ijn is the distance between the centroids of the ith and jth basis functions at the nth resolution, r n is the number of basis functions at the nth resolution, n = 1, . . . , n res , n res is the number of resolutions, ϑ 1n is the marginal variance at the nth resolution, and ϑ 2n is the e-folding length-scale (i.e., the distance at which the correlation is 1/e) at the nth resolution. Then the default is K • (ϑ) = bdiag({K n (ϑ) : n = 1, . . . , n res }), where ϑ ≡ (ϑ 11 , ϑ 21 , ϑ 12 , . . . , ϑ 2nres ) and bdiag(·) returns a block-diagonal matrix constructed from its arguments. R> S <-SRE(f = f, data = list(meuse), BAUs = GridBAUs1, basis = G, + est_error = TRUE, average_in_BAU = FALSE) K_type = "unstructured" can be used to invoke FRK-V. When calling the function SRE, we supplied the formula f containing information on the dependent variable and the covariates; the data (as a list that can include additional datasets); the BAUs; the basis functions; a flag est_error; and another flag average_in_BAU. The flag est_error = TRUE is used to estimate the measurement-error variance σ 2 (where Σ ≡ σ 2 I) using semivariogram methods (Kang et al. 2009). At the time of writing, est_error = TRUE was only available for spatial data, not for spatio-temporal data. When not set to TRUE, each dataset needs to also contain a field std, the standard deviation of the measurement error (that can vary with the measurement). FRK is built on the concept of a BAU, and hence the smallest spatial support of an observation has to be equal to that of a BAU. However, in practice, several datasets (such as the meuse dataset) are point-referenced. We reconcile this difference by assigning a support to every point-referenced datum equal to that of a BAU. Multiple point-referenced data falling within the same BAU are thus assumed to be noisy observations of the same random variable; see (6). As a consequence of this, when multiple observations fall into the same BAU, the matrices V ξ,Z and V δ,Z will be sparse but not diagonal (since C Z will contain more than one non-zero element per column). This can increase the computational time required for estimation considerably. For large point-referenced datasets, such as the dataset considered in Section 4.2, it is reasonable to summarize the data at the BAU level. Since FRK is designed for use with large datasets, the argument average_in_BAU of the function SRE is defaulted to TRUE. In this default setting, all data falling into one BAU is averaged, and the standard deviation of the measurement error of the averaged data point is taken to be the average standard deviation of the measurement error of the individual data points. Consequently, the dataset is thinned. With large datasets and small BAUs, this thinning frequently does not cause performance degradation (see Section 4.2). Since the meuse dataset is relatively small, we set average_in_BAU = FALSE. Step 5: The SRE model is fitted using the function SRE.fit. 
Maximum likelihood estimation is carried out using the EM algorithm of Section 2.2, which is assumed to have converged either when n_EM is exceeded or when the log-likelihood across subsequent steps does not change by more than tol. In this example, the EM algorithm converged in about 30 iterations; see Figure 5. R> S <-SRE.fit(SRE_model = S, n_EM = 400, tol = 0.01, print_lik = TRUE) Step 6: Finally, we predict at all the BAUs with the fitted spatial model. This is done using the function predict. The argument obs_fs dictates whether we attribute the fine- scale variation to intra-BAU systematic error (Case 1) or to the process model (Case 2). In the code below, we use the default setting and allocate it to the process model. R> GridBAUs1 <-predict(S, obs_fs = FALSE) The object GridBAUs1 now contains the prediction vector, the prediction standard error, and the square of the prediction standard error at the BAU level in the fields mu, sd, and var, respectively. These can then be visualized using standard plotting commands. Predicting over larger polygons/areas Now, assume that we wish to predict over regions encompassing several BAUs such that the matrix C P in (11) contains multiple non-zeros per row. We can create this larger regionalization by using the function auto_BAUs and specifying the cell size. This gives a 'super-grid' shown in Figure 6. R> Pred <-predict(S, newdata = Pred_regions, obs_fs = FALSE) The predictions and the corresponding prediction standard errors on this super-grid are shown in Figure 6. Computational considerations While FRK beats the curse of 'data dimensionality' by dealing with matrices of size r × r instead of matrices of size m × m, one must ensure that the number of basis functions, r, remains reasonably small. The reasons are two-fold. First, the computational time required to invert an r × r matrix increases cubicly with r, and several such inversions are required when running the EM algorithm. Second, it is likely that more EM-algorithm iterations are required when r is large. In practice, r should not exceed a few thousand. The number of basis functions r can usually be controlled through the argument nres. The function auto_basis also takes an argument max_basis that automatically finds the number of resolutions required to not exceed the desired maximum number of basis functions. The fitting and prediction algorithms scale linearly with the number of data points m and the number of BAUs N . However, if one has millions of data points, then the number of BAUs must exceed this and a big-memory machine will probably be required. Irrespective of problem size, we have noted considerable improvements in speed when using the OpenBLAS libraries (Wang, Zhang, Zhang, and Yi 2013). Some of the operations in FRK can be run in parallel. To use a parallel back-end, one needs to set an option as follows: R> opts_FRK$set("parallel", numcores) where numcores is of class 'integer' (e.g., numcores = 4L to use 4 cores). When this option is set, the parallel package is used to set up a parallel backend using makeCluster, which is subsequently used for parallel operations. Currently, parallelism is limited in FRK to • computing the integrals in (3) using Monte Carlo integration or, when appropriate, the approximation (4); • finding which data are influenced by which BAUs and computing the weights in (6). Unfortunately the EM algorithm, which is the bottleneck in a spatial analysis using FRK, is serial in nature and difficult to parallelize. 
Hence, SRE.fit takes as argument method, in recognition that in the future other, possibly parallelizable, estimation methods might be implemented to speed up the fitting process. Comparison studies In this section we compare the utility of the SRE model in FRK to standard kriging using gstat, and to two other popular models for modeling and predicting with large datasets in R: the LatticeKrig model that can be implemented with the package LatticeKrig (Nychka et al. 2019), and the SPDE-GMRF model that can be implemented with the package INLA (Lindgren and Rue 2015). In both these models the spatial field is decomposed as and K −1 is modeled in such a way that it is sparse. These two models allow for feasible computation with large r, however neither includes an extra fine-scale effect ξ(·). The SPDE-GMRF model has the added interpretable feature that, for a given set of basis functions, K −1 is such that the resulting field approximates a Gaussian process with a stationary covariance function from the Matérn class. In Section 4.1, we first analyze a 2D simulated dataset. We shall see that while FRK may sometimes perform less well in terms of prediction accuracy due to the practical limit on the number of basis functions it uses, it does not under-fit (i.e., it gives valid results) since fine-scale variation is taken into account. In fact, we see that the SRE model in FRK provides better coverage in terms of prediction intervals, even with large datasets, when compared to other models that use considerably more basis functions but that do not account for fine-scale variation. In the second case study (Section 4.2), we consider three days of column-averaged carbon dioxide data from the Atmospheric InfraRed Sounder instrument on board the Aqua satellite . A 2D simulated dataset Let D = [0, 1] × [0, 1] ⊂ R 2 , and consider a process Y (·) with covariance function COV(Y (s), Y (s + h)) ≡ σ 2 exp(− h /τ ), where σ 2 is the marginal variance of the process and τ is the e-folding length-scale. Further, let m be the number of observations, and define the signal-tonoise ratio (SNR) to be the ratio of the marginal variance σ 2 to that of the measurement-error process, σ 2 . In the inter-comparison, we consider cases where m is either 1, 000 or 50, 000, SNR is 0.2, 1, or 5, and τ is either 0.015 or 0.15. These choices of parameters help highlight the strengths and weaknesses of FRK with respect to the other approaches. For example, due to the relatively small number r of basis functions employed, we expect FRK to have lower prediction precision when the SNR is high and τ is low, but we expect the prediction intervals to be valid. We further split the domain into two side-by-side partitions, and we placed 95% of the observations in the left half (LH) and 5% in the right half (RH). This partitioning helps identify the different methods' capability of borrowing strength from a region with dense measurements to a region with sparse measurements. The measurement locations for the m = 1, 000 case are shown in Figure 7, left panel. We simulated the process on a 1,000 × 1,000 grid using the package RandomFields (Schlather, Malinowski, Menck, Oesting, and Strokorb 2015). We used the 10 6 cells of the grid as our set of BAUs, D G , and therefore each BAU was of size 0.001 × 0.001. One such spatial-process realization for τ = 0.15 and σ 2 = 1 is shown in Figure 8, left panel, while one with τ = 0.015 and σ 2 = 1 is shown in Figure 8, right panel. 
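For readers who wish to generate a comparable test field, a minimal sketch using RandomFields is given below. This is not the simulation code used for the study; the partitioning into left/right halves, the selection of measurement locations, and seed handling are omitted, and the parameter values correspond to one configuration only.

R> library("RandomFields")
R> xseq <- seq(0, 1, length.out = 1000)
R> model <- RMexp(var = 1, scale = 0.15)        # exponential covariance: sigma^2 = 1, tau = 0.15
R> sim <- RFsimulate(model, x = xseq, y = xseq, grid = TRUE)
R> Y <- sim@data[, 1]                           # simulated process on the 1,000 x 1,000 grid
R> Z <- Y + rnorm(length(Y), sd = sqrt(1/5))    # noisy observations corresponding to SNR = 5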
With gstat, which we used to implement simple kriging (denoted gstat), we assumed the true underlying covariance function was known. Hence, when available (for the m = 1, 000 case), the results of gstat should be taken as the gold standard. As m gets larger, simple kriging quickly becomes infeasible, since it is O(m 3 ) in computational complexity. We implemented the LatticeKrig model (denoted LTK) using the package LatticeKrig. We used nlevel = 3 resolutions of Wendland basis functions, set the smoothness parameter nu = 0.5, and the number of grid points per spatial dimension at the coarsest resolution to NC = 33. The first resolution contained 1,849 basis functions, the second resolution contained 5,625, and the third resolution contained 19,321. In the case where m = 1, 000, we set findAwght = TRUE for the effective process range to be estimated by maximum likelihood methods. Setting findAwght = TRUE was prohibitive for m = 50, 000, but separate experiments showed that predictions from LTK were largely insensitive to this option for this value of m. We implemented and fit the SPDE-GMRF model (denoted SPDE) using the package INLA. We constructed a triangular mesh using inla.mesh.2d with max.edge = c(0.05, 0.05) and cutoff = 0.012. This gave a mesh with a higher density of basis function on the lefthand side of the domain and (as with the LatticeKrig model) a buffer to reduce edge effects. The basis functions are defined by the triangles, and their number was around 2,500 for m = 1, 000, and 5,000 for m = 50, 000, while the parameter α = 3/2 was used to reproduce Gaussian fields with a Matérn covariance function with smoothness parameter of 1/2 (i.e., an exponential covariance function). Unlike LatticeKrig and FRK, INLA uses an approximate Bayesian method for inference, and thus it requires the specification of prior distributions of the parameters, for which we use penalized complexity priors (Fuglstad, Simpson, Lindgren, and Rue 2019). For these simulation settings and our choice of prior distributions, we do not expect the inferential method to be a factor that largely influences the prediction and prediction errors. For the SRE model implemented in FRK (denoted FRK) we put a block-exponential covariance structure on K • (ϑ) (K_type = "block-exponential"), and we set nres = 3, yielding, in total, 819 basis functions regularly distributed in the domain D. In this study we used LatticeKrig v7.0, INLA v18.07.12, and FRK v0.2.2. For each configuration in the simulation experiment (i.e., the factorial design defined by m ∈ {1, 000, 50, 000}, SNR ∈ {0.2, 1, 5}, and τ ∈ {0.015, 0.15}), we simulated L = 100 datasets. For prediction locations we took 1, 000 locations at random on the left-hand side of the gridded domain D G that coincided with measurement locations, and another 1, 000 that did not; and we did the same for the right-hand side. When there were less than 1,000 measurement locations on a given side, all measurement locations were chosen as prediction locations for that side. The sets of locations are denoted as D O LH , D M LH , D O RH , and D M RH , respectively. These locations were kept constant across all simulation experiments for a given m. In addition to the stationary, exponential, Gaussian process, we also simulated from the nonstationary process Y NS (·), where with COV(Y 1 (s), Y 1 (s+h)) = σ 2 1 exp(− h /τ 1 ) and COV(Y 2 (s), Y 2 (s+h)) = σ 2 2 exp(− h 2 /τ 2 ). 
For this additional experiment, we set m = 10, 000, σ 1 = σ 2 = 0.5, and τ 1 = τ 2 = 0.15, and we used all configurations in the original experiment as described above. The measurement locations for this case are shown in Figure 7, right panel. As prediction-performance measures ('responses' of the experiment), we considered the following: • Root mean-squared prediction error: LetŶ X (s; l) denote the 'model-X' predictor of Y (s; l), where Y (s; l) is the lth simulated process evaluated at location s and X = gstat, LTK, SPDE, FRK. Then the model-X predictor root-mean-squared prediction error (RMSPE) for the lth simulation is where Since we are interested in benchmarking the model we use in FRK, we considered a measure of relative skill (RS), relative to FRK: where X = gstat, LTK, SPDE. Hence, RS > 1 (< 1) indicates that FRK has better (worse) prediction accuracy. We intentionally focus on coverage in order to highlight the strengths and weaknesses of the models in terms of uncertainty quantification. Other related measures, such as the interval score , penalize for both prediction interval width and poor coverage and are thus less suited to assess the issue of validity (i.e., whether the prediction intervals are correct, on average). The measures RS X and I 90 X were considered for {Y (s) : s ∈ D G } simulated from the stationary process with exponential covariance function and from the nonstationary process Y NS (·) in (14). Distributions of RS X for the original experiment with m = 1, 000 and m = 50, 000 are shown in Figures 9 and 10, respectively. While it is possible to proceed with an analysis of variance to analyze these results (e.g., Zhuang and Cressie 2014), here we discuss their most prominent features. First, when there are few (m = 1, 000) data points (Figure 9), there is little difference between the methods for low SNR but, for high SNR, SPDE and LTK perform better in terms of RS when τ is small (τ = 0.015; see, for example, the bottom left two panels of Figure 9). This was expected since the number of basis functions used begins to play an important role as the SNR increases (Zammit-Mangion, Cressie, and Shumack 2018). As expected, all prediction methods perform worse than or as well as, simple kriging with gstat (under a known covariance function). The comparison in terms of RS is less clear when m = 50, 000 ( Figure 10). First, at unobserved locations, FRK is frequently outperformed in terms of RS by the other methods, since the relatively small number of basis functions is unable to adequately reconstruct the optimal (simple-kriging) predictor. On the other hand, at observed locations, the performance is SNR dependent and data-density dependent. In much of the design space, FRK performs worse (in terms of RS) than the other predictors at the measurement locations, but it begins to outperform SPDE and LTK as the SNR increases and when τ is small (τ = 0.015). Now we turn to the question of 'validity' of the predictors. An equally important performance measure to RMSPE is coverage. Distributions of I 90 X for m = 1, 000 and m = 50, 000 are shown in Figures 11 and 12, respectively. In the smalldata case (m = 1, 000), all methods are over-confident (more so in the left-hand part of the domain) and by varying degrees. In the large-data case (m = 50, 000), both SPDE and LTK perform poorly in terms of coverage, providing over-confident predictions, especially when the SNR is large (SNR = 5). 
This is a result of these models relying on basis functions to reproduce the fine-scale variation and not attempting to separate out fine-scale variation from measurement error. The model implemented in FRK places a white-noise process at the BAU level to capture the fine-scale variation and can thus yield good coverage despite the use of a relatively low-dimensional manifold. It is worth noting that it is straightforward in INLA to include an extra fine-scale-variation term and fix the standard deviation of the measurement error to some pre-specified value, although this is rarely done. Here we are illustrating that not doing this may lead to severe deleterious effects on coverage.

Figure 12: As in Figure 11, but with the number of data points m = 50,000. Note that, in this case, simple kriging (with gstat) is computationally prohibitive and hence does not appear in the figure.

The model used in FRK was also found to yield 90% interval scores that were at least as good as, or better than, the other two models for the case with m = 50,000 (results not shown). To further investigate this issue, we re-ran the simulations and generated coverage diagnostics for predicted data, Y_P + ε_P, rather than for just Y_P. The coverage for all methods was very good (results not shown), indicating that all methods are able to correctly apportion total variability. Consequently, these results show that inclusion of the fine-scale variation term is critical in reduced-rank approaches (irrespective of the number of basis functions) with large datasets when predicting the hidden process. (It is not critical when predicting missing data.) The simple semivariogram method employed by FRK for estimating the measurement-error variance is a step in the right direction, and it appears to yield good results in the first instance. However, ideally, the standard deviation of the measurement error is known from the application and fixed a priori.

Overall, all models have their own relative strengths and weaknesses, largely arising from the differences in (i) the type and number of basis functions employed, and (ii) the presence or otherwise of a fine-scale-process term. In this experiment we saw that the model employed by FRK produces predictions that are valid, on average. However, for large-data situations, our experiment shows FRK predictions to be less efficient, as expected due to the restriction on the number of basis functions that can be used.

Table 3: Root-mean-squared prediction error (RMSPE) and 90% coverage (CR) for the case where m = 10,000 data were simulated from the nonstationary process Y_NS(·).

In the nonstationary case (14), all methods performed similarly, with LTK being slightly overconfident and FRK being slightly underconfident; see Table 3. This similarity is not surprising since in (14) we set τ_1 = τ_2 = 0.15, which results in a process that is highly spatially correlated as well as rather smooth. The resulting process has a similar overall length scale and SNR to that simulated in the original experiment that yielded the results shown in the second row (SNR = 1) of the third (LH, '10'), fourth (LH, '11'), seventh (RH, '10'), and eighth (RH, '11') columns of Figures 9 and 11. We see that all three methods performed similarly, and satisfactorily, in this case.

Modeling and prediction with data from the AIRS instrument

The US National Aeronautics and Space Administration (NASA) launched the Aqua satellite on 2002-05-04, with several instruments on board, including the Atmospheric Infrared Sounder (AIRS).
AIRS retrieves column-averaged CO 2 mole fraction (in units of parts per million), denoted XCO 2 (with particular sensitivity in the mid-troposphere), amongst other geophysical quantities (Chahine et al. 2006). The data we shall use consist of XCO 2 measurements taken between 2003-05-01 and 2003-05-03 (inclusive). These data are a subset of those available with FRK. We compare LTK, SPDE, and FRK on the three-day AIRS dataset, and we assess the utility of the methods on a validation dataset that we hold out. Modeling on the sphere with FRK proceeds in a very similar fashion to the plane, except that a coordinate reference system (CRS) on the surface of the sphere needs to be declared for the data. This is implemented using a 'CRS' object with string "+proj=longlat +ellps=sphere". We next outline the six steps required to fit these data using FRK. Step 1: Fifteen days of XCO 2 data from AIRS (in May 2003) are loaded by using the command data("AIRS_05_2003", package = "FRK"). In this case study, we subset the data to include only the first three days, which contains 43,059 observations of XCO 2 in parts per million (ppm). We subsequently divide the data into a training dataset of 30,000 observations, chosen at random (AIRS_05_2003_t), and a validation dataset (AIRS_05_2003_v) containing the remaining observations. To instruct FRK to fit the SRE model on the surface of a sphere, we assign the appropriate 'CRS' object to the data as follows: R> coordinates(AIRS_05_2003) <-~lon + lat R> proj4string(AIRS_05_2003) <-CRS("+proj=longlat +ellps=sphere") Step 2: The next step is to create BAUs. This is done using the auto_BAUs function but this time with the manifold specified to be the surface of a sphere with radius equal to that of Earth. We also specify that we wish the BAUs to form an icosahedral Snyder equal area aperture 3 hexagon (ISEA3H) discrete global grid (DGG) at resolution 9 (for a total of 186,978 BAUs). Resolutions 0-6 are included with FRK; higher resolutions are available in the package dggrids (Zammit-Mangion 2020). By default, this will create a hexagonal grid on the surface of the sphere, however it is also possible to have the more traditional lon-lat grid by specifying type = "grid" and declaring a cellsize in units of degrees. An example of an ISEA3H grid, at resolution 5 (which would yield a total of 6,910 BAUs), is shown in Figure 13, left panel, while a 5 • × 5 • lon-lat grid using type = "grid" is shown in Figure 13, right panel. R> G <-auto_basis(manifold = sphere(), data = AIRS_05_2003_t, nres = 3, + isea3h_lo = 2, type = "bisquare") Steps 4-5: Since XCO 2 , a column-averaged CO 2 mole fraction in units of ppm, has a latitudinal gradient, we use latitude as a covariate in our model. The SRE object is then constructed in the same way as in Section 3.3. The AIRS footprint is approximately 50 km in diameter, which is smaller than the BAUs we use (approximately 100 km in diameter), and hence it is possible that multiple observations fall into the same BAU. Recall from Section 3.3 (Step 4) that when multiple data points fall into the same BAU that these are correlated through either intra-BAU systematic error (Case 1) or fine-scale process variation at the BAU level (Case 2). However, recall also that when multiple observations fall into the same BAU, the matrices V ξ,Z and V δ,Z are sparse but not diagonal, and this can increase computational time considerably. 
For large datasets in which each datum has relatively small (relative to the BAU) spatial support, such as the AIRS dataset, it is frequently reasonable to let the argument average_in_BAU = TRUE (as it is by default) to indicate that one wishes to summarize the data at the BAU level. Here we implement FRK using the default case (Case 2, where σ²_δ = 0 and σ²_ξ is estimated).

Step 6: To predict at the BAU level, we invoke the predict function.

R> pred_isea3h <- predict(S)

The prediction and prediction standard error maps obtained using FRK, together with the observations, are shown in Figure 15. We denote the above implementation of FRK as FRK-Ma, where "M" denotes the modeled VAR(η) = K•(ϑ) and "a" denotes average_in_BAU set to FALSE. We evaluated the utility of the SRE model used in FRK, the LatticeKrig model (with LatticeKrig), and the SPDE-GMRF model (with INLA), using out-of-sample prediction at the validation-data locations. We also re-ran FRK with average_in_BAU set to TRUE (denoted FRK-Mb) and K_type = "unstructured" (FRK-V). With INLA we approximated an SPDE with α = 2 on a global mesh of 6,550 basis functions and used penalized complexity prior distributions for the parameters (Fuglstad et al. 2019). With LatticeKrig we used three resolutions with a total of 12,703 basis functions on R², using the lon-lat coordinates to denote spatial locations. As comparison measures we used the RMSPE (15) between the validation values and their respective predicted observations, the continuous ranked probability score (CRPS, Gneiting, Balabdaoui, and Raftery 2007), and the actual coverage of a 90% prediction interval (16), but obtained with respect to the validation data instead of the process Y at the validation-data locations. The results are summarized in Table 4.

In this example, we see that there is little practical difference in performance between the five methods despite the large difference in the number of basis functions and the form of the models; FRK performs about 2% worse than the others in terms of RMSPE. As expected, since we are validating against data (and not against the true process, which is unknown here), all methods perform acceptably in capturing total variation. However, the FRK methods gave prediction standard errors of the process that were, on average, double those provided by LTK and SPDE. This mirrors what was seen in Section 4, where SPDE and LTK were generally overconfident, although in this case the coverage with respect to the validation data remained acceptable for all methods. Computation time for all three packages was similar under the chosen configurations (except for FRK-Ma, which assumes intra-BAU correlations). For FRK, we computed the predictions and prediction standard errors directly using sparse-matrix operations, while we used INLA's predictor functionality to obtain the prediction and prediction standard errors for the SPDE-GMRF model. We obtained LatticeKrig's prediction errors using 30 conditional simulations.

Change-of-support, anisotropy, and custom basis functions

Sections 1-4 introduced the core spatial functionality of FRK. The purpose of this section is to present additional functionality that may be of use to the spatial analyst.

Multiple observations with different supports

In FRK, one can make use of multiple datasets with different spatial supports with little difficulty. Consider the meuse dataset.
We synthesize observations with a large support by changing the meuse object into a 'SpatialPolygonsDataFrame', where each polygon is a square of size 300 m × 300 m centered around the original meuse data point (see Figure 16, left panel). For reference, the constructed BAUs are of size 50 m × 50 m. Once this object is set up, which we name meuse_pols, we assign zinc values to the polygons by fitting a spherical semi-variogram model to the log zinc concentrations in the original meuse dataset, generating a realization by conditionally simulating once at the BAU centroids, exponentiating the simulated values, aggregating accordingly, and adding on measurement error with variance 0.01. The analysis proceeds in precisely the same way as in Section 3, but with meuse_pols used instead of meuse. The predictions and the prediction standard errors using meuse_pols are shown in Figure 16, center and right panels, respectively. We note that the supports of the observations and the BAUs do not precisely overlap: Recall that, for simplicity, we assumed that an observation is taken to overlap a BAU if and only if the centroid of the BAU lies within the observation footprint. (A refinement of this simple method will require a more detailed consideration of the BAU and observation footprint geometry and is the subject of future work.) Anisotropy: Changing the distance measure Highly non-stationary and anisotropic fields may be easier to model on a deformed space on which the process is approximately stationary and isotropic (e.g., Sampson and Guttorp 1992;Kleiber 2016). In FRK, a deformation can be introduced to capture geometric anisotropy by changing the distance measure associated with the manifold. As an illustration, we simulated an anisotropic, noisy, spatial process on a fine grid in D = [0, 1] × [0, 1] and assumed that 1,000 randomly-located grid points were observed. The process and the sampled data are shown in Figure 17. In this simple case, to alter the modified distance measure we note that the spatial frequency in x is approximately four times that in y. Therefore, in order to generate anisotropy, we use a measure that scales x by 4. In FRK, a 'measure' object requires a distance function and the dimension of the manifold on which it is used, and it is constructed as follows: R> dist_fun <-function(x1, x2 = x1) { + scaler <-diag(c(4, 1)) + distR(x1 %*% scaler, x2 %*% scaler) + } R> asymm_measure <-measure(dist = dist_fun, dim = 2L) The distance function can be assigned to the manifold as follows. R> TwoD_manifold <-plane(measure = asymm_measure) We now generate a grid of basis functions (here at a single resolution) manually. First, we create a 5 × 14 grid on D, which we will use as centers for the basis functions. We then call the function local_basis to construct bisquare basis functions centered at these locations with a range parameter (i.e., the radius in the case of a bisquare) of 0.4. Due to the scaling used, this implies a range of 0.1 in x and a range of 0.4 in y. Basis-function number 23 is shown in Figure 18. Customized basis functions and BAUs The package FRK provides the functions auto_BAUs and auto_basis to help the user construct the BAUs and basis functions based on the data that are supplied. However, these could be done manually. The object containing the basis functions needs to be of class 'Basis' that defines 5 slots: • dim: The dimension of the manifold. • fn: A list of functions. 
Customized basis functions and BAUs

The package FRK provides the functions auto_BAUs and auto_basis to help the user construct the BAUs and basis functions based on the data that are supplied. However, these can also be constructed manually. The object containing the basis functions needs to be of class 'Basis', which defines five slots:

• dim: The dimension of the manifold.
• fn: A list of functions. By default, distances used in these functions (if present) are those attributed to the manifold, but arbitrary distance measures can be used.
• pars: A list of parameters associated with each basis function. For the local basis functions used in this paper (constructed using auto_basis or local_basis), each list item is another list with fields loc and scale, where length(loc) is equal to the dimension of the manifold and length(scale) is equal to 1.
• df: A data frame with number of rows equal to the number of basis functions, containing auxiliary information about the basis functions (e.g., resolution number).
• n: An integer equal to the number of basis functions.

The constructor Basis can be used to instantiate an object of this class. There are fewer restrictions for constructing BAUs; for spatial applications, they are usually either 'SpatialPixelsDataFrame's or 'SpatialPolygonsDataFrame's. In a spatio-temporal setting, the BAUs need to be of class 'STFDF', where the spatial component is usually either a 'SpatialPixels' or a 'SpatialPolygons' object. In either case, the data slot of the object must contain

• all covariates used in the model;
• a field fs containing elements proportional to the fine-scale variation at the BAU level; and
• fields that can be used to summarize the BAU as a point, typically the centroid of each polygon. The names of these fields need to equal those returned by coordnames(BAUs) (typically c("x", "y") or c("lon", "lat")).

Spatio-temporal FRK

Fixed rank kriging in space and time is different from fixed rank filtering (Cressie, Shi, and Kang 2010), where a temporal autoregressive structure is imposed on the temporally evolving basis-function weights {η_t}, and where Rauch-Tung-Striebel smoothing is used for inference on {η_t}. In FRK, the basis functions can also have a temporal dimension; then the only new aspect beyond the purely spatial analysis given above is specifying these spatio-temporal basis functions. We describe how this can be done in Section 6.1. In Section 6.2 we show how this can be applied to modeling and prediction with data from the more recent Orbiting Carbon Observatory-2 (OCO-2) satellite that measures XCO2.

Basis-function construction

To illustrate their construction, consider the following set of spatial basis functions.

R> centroids <- as.matrix(expand.grid(x = 1:3, y = 1:3))
R> G_spatial <- local_basis(manifold = plane(), loc = centroids,
+    scale = rep(2, 9), type = "bisquare")

The function call above returns a set of bisquare basis functions centered at loc with aperture equal to scale. The same call, given below, can be used to construct temporal basis functions; note that now manifold = real_line(), and we choose Gaussian basis functions instead (in which case scale represents the standard deviation). As in the spatial case, other basis functions (such as the bisquare) can also be used.

R> G_temporal <- local_basis(manifold = real_line(), loc = matrix(seq(2,
+    28, by = 4)), scale = rep(3, 7), type = "Gaussian")

The generated basis functions can be visualized using show_basis; see Figure 20. The spatio-temporal basis functions are then constructed using the function TensorP as follows:

R> G <- TensorP(G_spatial, G_temporal)

The object G can subsequently be used for constructing SRE models, as in the spatial case. Note that since we have nine spatial basis functions and seven temporal basis functions, we have 63 spatio-temporal basis functions in total.
Care should be taken that the total number of basis functions does not become prohibitively large (say, greater than 4,000).

Global prediction of column-averaged carbon dioxide from OCO-2

The NASA OCO-2 satellite was launched on 2014-07-02, and it produces between 100,000 and 300,000 usable retrievals per day. Between the beginning of October 2014 and the end of February 2017, the satellite produced around 100 million retrievals. The specific data product we used was the OCO-2 Data Release 7R Lite File Version B7305Br (OCO-2 Science Team, Gunson, and Eldering 2015). Following pre-processing, we reduced the number of data entries in the product to around 50 million. Each retrieval produces a number of variables; in this example, we consider the most commonly used one, XCO2, the column-averaged mole fraction in ppm.

Obtaining reliable global predictions of XCO2 does not require consideration of all the data simultaneously. The atmosphere mixes quickly within hemispheres, and temporal correlation-length scales are on the order of days. Hence, we consider the OCO-2 data in a moving window of 16 days, and for each 16 days we fit a spatio-temporal SRE model in order to obtain a global prediction of XCO2 in the middle (i.e., the 8th day) of the window. We use this moving-window approach to obtain daily XCO2 global predictions and prediction errors between 2014-10-01 and 2017-03-01.

We first put the OCO-2 data into an 'STIDF' object, before using the function auto_BAUs to construct spatio-temporal voxels. In constructing the gridded BAUs, we instruct FRK to use 1 day as the smallest temporal unit and to limit the latitude grid to the minimum and maximum latitude of the OCO-2 data locations, rounded to the nearest degree. Following pre-processing, we had about one million usable soundings per 16-day window. However, several of these are in quick succession, and thus also in close proximity to each other, so that they fall within the same spatio-temporal BAU. We therefore keep the flag average_in_BAU set to TRUE when calling SRE, as in Section 4.2. Following averaging, the number of observations per 16-day window reduces by a factor of about 100. We did not need to estimate the measurement-error variance σ² in this case, since measurement-error variances are provided with the OCO-2 data. However, we forced all measurement-error standard deviations that were below 2 ppm to be equal to 2 ppm, since the reported values are likely to be optimistic. The total time needed to fit and predict with FRK in a single 16-day window was about 1 hour on a standard desktop computer.

In Figure 21 we show the measurement locations and values for the 16 days, and for the central day, of the 16-day window centered on 2016-09-08. Predictions and prediction standard errors for the central day are shown in Figure 22. Note how the error map reflects the data coverage over the entire 16-day window and not just the day at the center of the window. Prediction standard error maps on other days clearly show when the satellite is only taking readings over the ocean and when it is not taking any readings due to an instrument reset or satellite manoeuvres. An animation showing these and other interesting features of predicted column-averaged CO2 (i.e., XCO2) between 2014-10-01 and 2017-03-01 is available at https://www.youtube.com/watch?v=wEws67WXvkY.
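The code for this analysis is not reproduced above. The sketch below only outlines the moving-window logic under stated assumptions: oco2_STIDF, BAUs and G stand for the 'STIDF' data object, the spatio-temporal BAUs and the basis functions discussed earlier; window_data() is a hypothetical helper that extracts the 16 days of data surrounding a given date; and the formula, the field names xco2 and std, and the EM settings are illustrative assumptions rather than values used in the paper.

dates <- seq(as.Date("2014-10-01"), as.Date("2017-03-01"), by = "1 day")
for (d in dates) {
  df16     <- window_data(oco2_STIDF, centre = d, ndays = 16)  # hypothetical helper: 16 days of data around d
  df16$std <- pmax(df16$std, 2)                                # floor reported error std. deviations at 2 ppm
  S    <- SRE(f = xco2 ~ 1, data = list(df16), basis = G,
              BAUs = BAUs, average_in_BAU = TRUE)              # summarize soundings falling in the same BAU
  S    <- SRE.fit(S, n_EM = 100, tol = 0.01)                   # EM estimation
  pred <- predict(S)                                           # BAU-level prediction; retain the central day d
}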
Future work

There are a number of useful features that could be implemented in future versions of FRK, some of which are listed below:

• Currently, FRK is designed to work with local basis functions having an analytic form. However, the package could also accommodate basis functions that have no known functional form, such as empirical orthogonal functions (EOFs) and classes of wavelets defined iteratively; future work will attempt to incorporate the use of such basis functions. Vanilla FRK (FRK-V), where the entire positive-definite matrix K is estimated, is particularly suited to the former (EOF) case, where one has very few basis functions that explain a considerable amount of observed variability.

• There is currently no component of the model that caters for sub-BAU process variation, and each datum that is point-referenced is mapped onto a BAU. Going below the BAU scale is possible, and intra-BAU correlation can be incorporated if the covariance function of the process at the sub-BAU scale is known (Wikle and Berliner 2005).

• Most work and testing in FRK has been done on the real line, the 2D plane, and the surface of the sphere (S²). Other manifolds can be implemented, since the SRE model always yields a valid spatial covariance function, no matter the manifold. Some, such as the 3D hyperplane, are not too difficult to construct. Ultimately, it would be ideal if the package allowed the user to specify his/her own manifold, along with a function that can compute the appropriate distances on the manifold.

• Currently, only the EM algorithm is implemented, and hence the argument method = "EM" is implicit in the function SRE.fit. The EM algorithm has been seen to be slow to converge to a local maximum in other contexts (e.g., McLachlan and Krishnan 2007). Other methods for finding maximum, or restricted maximum, likelihood estimates for the SRE model (e.g., Tzeng and Huang 2018) will be considered for future versions of FRK.

• Although designed for very large data, FRK begins to slow down when several hundreds of thousands of data points are used. The flag average_in_BAU can be used to summarize the data and hence reduce the size of the dataset; however, it is not always obvious how the data should be summarized (or whether one should summarize them in the first place). Future work will aim to provide the user with different options for summarizing the data.

• Currently, in FRK, all BAUs are assumed to be of equal area even if they are not. Unequal BAU area (see, for example, the lon-lat grid shown in Figure 13) is important when aggregating the process or the predictions. In FRK there is the option to use equal-area icosahedral grids on the surface of the sphere, and regular grids on the real line and the plane, for when areal data or large prediction regions are used. (Note that in Section 6.2 an equal-area grid was not used, but also note that we did not spatially aggregate our results and that our data were point-referenced.)

In conclusion, the package FRK is designed to provide core functionality for spatial and spatio-temporal prediction with large datasets. The low-rank model used by the package has validity (accurate coverage) in a big-data scenario when compared to high-rank models that do not explicitly cater for fine-scale variation. However, it is likely to be less statistically efficient (larger root mean squared prediction errors) than other methods when data density is high and the basis functions are unable to capture the spatial variability.
FRK is available on the Comprehensive R Archive Network (CRAN) at https://CRAN.R-project.org/package=FRK. Its development page is https://github.com/andrewzm/FRK. Users are encouraged to report any bugs or issues relating to the package on this web page.
Predicting the Health Status of an Unmanned Aerial Vehicles Data-Link System Based on a Bayesian Network

Unmanned aerial vehicles (UAVs) require a data-link system to link ground data terminals to the real-time controls of each UAV. Consequently, the ability to predict the health status of a UAV data-link system is vital for safe and efficient operations. The performance of a UAV data-link system is affected by the health status of both the hardware and the UAV data-links. This paper proposes a method for predicting the health state of a UAV data-link system based on a Bayesian network fusion of information about potential hardware device failures and link failures. Our model employs the Bayesian network to describe the information and uncertainty associated with a complex multi-level system. To predict the health status of the UAV data-link, we use the health status information about the root-node equipment with various life characteristics, along with the health status of the links as affected by the bit error rate. In order to test the validity of the model, we used it to predict the health of a multi-level solar-powered unmanned aerial vehicle data-link system, and the result shows that the method can quantitatively predict the health status of the solar-powered UAV data-link system. The results can provide guidance for improving the reliability of UAV data-link systems and lay a foundation for accurately predicting the health status of a UAV data-link system.

Introduction

Unmanned aerial vehicles (UAVs) are used widely in military and civilian applications because of their low initial cost, high cost-effectiveness over time and ability to operate without casualties [1]. UAVs serve in diverse areas such as exploration, investigation, weather forecasting, pipe network inspection, aerial photography and express delivery services. However, unlike manned aircraft, UAVs require a data-link system to link ground terminals to the real-time control of each vehicle. The condition of a UAV data-link system determines whether the UAV can perform its tasks successfully. Therefore, it is particularly important to develop a model for predicting the health status of UAV data-link systems. Because of the complexity and diversity of the tasks carried out by UAVs and the harsh environments in which they may operate, the networking modes of UAV data-links are complex and diverse in order to provide effective control. The failure of a UAV data-link that results in the degradation or failure of performance may also involve accidental failure of hardware and even failure of the link itself. Complex and volatile environments often have an impact on the health of UAV data-link systems. For example, an increase in the bit error rate will reduce the quality of information transmission and affect the health of a UAV data-link. However, we have not found any existing research on health status prediction for the UAV data-link system itself. Many scholars [2][3][4][5][6][7] have conducted extensive research on health status prediction. For example, Nguyen et al. studied the selection of different degradation models using a large amount of health monitoring data [8]. Most systems for health status prediction have been modeled based on one of several approaches: the gamma process [9][10][11], the Wiener process [12,13], the Markov process [14], the general generation function [15], or Monte Carlo simulation [16][17][18].
However, these methods can only describe the physical health status caused by performance degradation or accidental failure, whereas the health status of the data-link system is affected by the health status of the hardware as well as the health status of the links. Schumann et al. designed a real-time, on-board system health management (SHM) framework to assess the health status of UAVs and adopted Bayesian network methods for fault diagnosis [19]. Chonlagarn et al. developed a method to predict the online health status of the UAV system based on a hybrid dynamic Bayesian network [20]. Khan et al. proposed a method for predicting the state of health of systems based on artificial intelligence [21], but there is not enough data to build such a model. Bayesian networks (BN), as proposed by Pearl [22], provide a reasoning model based on Bayesian theory and graph theory. Graph theory is used to describe a complex system clearly and qualitatively, and the probabilistic method is used for quantitative analysis. BNs have obvious advantages for modeling complex systems in areas such as financial risk analysis, wireless sensor networks, system reliability analysis [23] and system health management [24]. Through the use of qualitative network topologies and quantitative conditional probability descriptions, Bayesian networks can clearly represent the inter-component correlation and can integrate information from different sources, including experimental data, historical data and expert experience. In addition, BNs have obvious advantages for describing multi-level systems [25] and are used widely in communication quality prediction [26] and in systems involving information interaction [27][28][29]. Many scholars have adopted the Bayesian network to conduct research on the system health management of UAV systems. Therefore, we adopted a BN in this research to evaluate and predict the health status of the UAV data-link system.

This paper proposes a UAV data-link health status prediction method based on a BN. This approach combines information about the health status of hardware devices that have different lifetime characteristics with data about the health status of the links as affected by the environment. The degraded health status of the UAV data-link system due to hardware device performance degradation and link failure can be described by this model, which provides a unified framework for health status prediction of the UAV data-link system. The remainder of this paper is organized as follows: Section 2 provides an overview of BNs, including a summary of the concept, construction and inference algorithms used in Bayesian network models. Section 3 proposes a Bayesian network modeling method for UAV data-links that considers the networking mode and the bit error rate. In Section 4, we present a case study in which we apply our research to a type of solar-powered UAV. Finally, Section 5 provides our conclusions and future work.

Overview of Bayesian Networks

A Bayesian network is a graphical network model for probabilistic reasoning based on Bayesian theory. It consists of a directed acyclic graph (DAG) and a conditional probability table (CPT). The former determines the qualitative network structure between variables and the latter determines the quantitative relationship between variables. Figure 1a is a 5-node DAG. The attributes of the node variables are arbitrary and can be an abstraction of any problem.
The directed edges between nodes represent the interdependencies between nodes, and a directed edge always points from a parent node to a child node. A variable with no parent node is a root node, a variable with no child node is a leaf node, and the remaining variables are intermediate nodes. The CPT for node C, constructed according to the logical relationship, is shown in Figure 1b.

As shown in Equation (1), BN probabilistic reasoning is based on the conditional independence assumption that the probability of a child node depends only on the probability of its parent node and is independent of the other nodes:

P(X_i | X_pi, X_pai) = P(X_i | X_pi),  (1)

where X_pi is the parent node of X_i and X_pai denotes the other children of the parent node X_pi, excluding X_i. Applying conditional independence to the chain rule enables computation of the joint probability, as follows:

P(X_1, X_2, ..., X_n) = ∏_{i=1}^{n} P(X_i | Pa(X_i)).

Building a BN requires the following steps:
1. Define the node variables;
2. Connect the node variables through directed edges;
3. Establish a CPT for each non-root node.

After the BN model is constructed, an appropriate inference algorithm is selected for probabilistic reasoning. The Junction Tree (JT) [30] algorithm has been widely used among exact inference algorithms because of its high search efficiency and simple application. The procedure is shown in Figure 2. The first step is to construct the moral graph corresponding to the Bayesian network structure, the second step is to triangulate the moral graph to obtain the triangulated graph, the third step is to construct the junction tree, and the fourth step is to assign parameters to the clusters in the junction tree. The final step is the belief update, which updates the beliefs in the junction tree using the message propagation algorithm after the evidence is added.
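As a concrete numerical illustration of the chain-rule factorization above, consider a binary child node C with binary parents A and B; the probabilities below are hypothetical and do not correspond to Figure 1.

pA  <- c(0.7, 0.3)                        # P(A = 0), P(A = 1)
pB  <- c(0.6, 0.4)                        # P(B = 0), P(B = 1)
pC1 <- matrix(c(0.05, 0.60, 0.50, 0.95),  # P(C = 1 | A, B), rows indexed by A, columns by B
              nrow = 2)
pC1_marginal <- sum(outer(pA, pB) * pC1)  # P(C = 1) = sum_{a,b} P(a) P(b) P(C = 1 | a, b)
pC1_marginal                              # 0.383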
Model for Predicting the Health of an Unmanned Aerial Vehicle Data-Link System

In this section, we present our model for predicting the health of a UAV data-link system based on a Bayesian network. The data-link system for a UAV consists of a part that is airborne and a part that is on the ground. The airborne portion includes the airborne data terminal and antenna. The on-ground portion comprises the ground data terminal and antenna. Both the airborne data terminal and the ground data terminal include a radio frequency receiver, radio frequency transmitter and modem. When the distance between them is relatively close, the UAV and the ground data terminal establish line-of-sight (LOS) wireless communication via the airborne and ground communication units. When the wireless communication line-of-sight link cannot be established because the signal is weakened by long distances or ground obstructions such as buildings, repeater satellites are utilized to establish non-line-of-sight (NLOS) wireless communication. The UAV data-link communication mode is shown in Figure 3.

The link is divided into an uplink and a downlink according to the transmission direction of the information in the link. The ground data terminal transmits a telecontrol command to the UAV through the uplink; the UAV transmits telemetry data, such as the position and attitude of the UAV, and other data such as pictures, to the ground data terminal. Both the telecontrol link and the telemetry link must work properly to ensure the UAV data-link system works properly. Therefore, to predict the health status of the UAV data-link system, it is necessary to consider the health status of the telecontrol link and the telemetry link comprehensively. To improve the reliability of UAV data-links, the networking modes of the data-links often adopt a redundant design. UAV data-link networking modes may vary and the information transmission paths may also be different. Because it is often the case that UAVs are used to perform repetitive or dangerous tasks, degradation of the hardware will affect the health status and performance of the UAV data-link. The complex external environment can have a severe impact as well.
Constructing a Bayesian Network Root Node Prediction Model

By analyzing the path of information transmission in the UAV data-link system and the connection relationship between devices, we can use the radio frequency receiver, radio frequency transmitter and modem as the root nodes of the Bayesian network. For the ground data terminals, the UAV and the repeater satellites, the radio frequency receiver, radio frequency transmitter and modem can be described with tandem (series) logic as terminals that perform communication functions. Considering that the radio frequency receiver, radio frequency transmitter and modem have no influence on the quality of information transmission, this paper combines these three nodes into communication terminal module nodes for modeling. Under this approach, sensors are placed in each communication terminal module to collect the corresponding data, and a prediction model of the BN root node is established. Considering that the most important characteristic of the link terminal equipment is the transmission power, the radio frequency device is identified as the key component after failure mode and mechanism analysis. The power-MOSFET has been adopted for all the airborne terminal equipment in this paper [31].
To establish a predictive model, we combined the power-law degradation model [32] (considering time), the Arrhenius model [33] (considering temperature) and the Eyring model [34] (considering electrical stress) with the Wiener stochastic degradation process [35]. Taking equipment whose performance degrades according to the Wiener process as an example, we introduce the prediction model. Assuming the performance parameter P is a key indicator of product health status and is sensitive to a stress S, the parameter follows the Wiener degradation process [36]:

P(t) = P_0 + d(s) t + σ B(t),

where P(t) is the product performance at time t and d(s) is the drift reflecting the performance degradation rate, which in an accelerated model is a function of the stress S. The constant σ is the diffusion coefficient, which does not depend on the environment or time, B(t) ∼ N(0, t) is standard Brownian motion, and P_0 is the initial value of the parameter. From the properties of the Wiener process, the amount of degradation within a time increment Δt is Δp ∼ N(d(s)Δt, σ²Δt). L is defined as the failure threshold of the performance p, and the time t at which the performance parameter first passes through L follows the inverse Gaussian distribution. This distribution function is the unhealth-state function of the product, and the corresponding probability density function is given by

f(t) = (L − P_0) / sqrt(2π σ² t³) · exp( −(L − P_0 − d(s) t)² / (2 σ² t) ).

The corresponding health-state function, which is the prediction model of the equipment, is

H(t) = 1 − ∫_0^t f(u) du.
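A small numerical sketch of this health-state function is given below. The drift, diffusion and threshold values are illustrative only and are not parameters taken from the paper; the closed-form inverse Gaussian first-passage distribution is used in place of numerical integration.

health_state <- function(t, d, sigma, L) {
  ## Inverse-Gaussian first-passage distribution of a Wiener process with drift d,
  ## diffusion sigma and threshold L (measured relative to the initial value P0).
  F_unhealthy <- pnorm((d * t - L) / (sigma * sqrt(t))) +
                 exp(2 * d * L / sigma^2) * pnorm(-(d * t + L) / (sigma * sqrt(t)))
  1 - F_unhealthy
}
t_grid <- 1:2000                                            # hours (illustrative horizon)
H      <- health_state(t_grid, d = 0.004, sigma = 0.05, L = 5)
t_grid[which(H < 0.5)[1]]                                   # first time the health probability drops below 0.5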
For the p root nodes, the probability of the health status is solved by the prediction model of each corresponding device and discretized according to the unsupervised equal-width interval method with a time sequence, to achieve state prediction over the future time horizon T; that is, the p × n order health-state probability prediction matrix [36]

H_{p×n} = ( H_{i,τ} ),  i = 1, 2, ..., p,  τ = 1, 2, ..., n,

with a time sequence T = (t + ς, t + 2ς, ..., t + nς). The element H_{i,τ} represents the probability that root node X_i is in the healthy state at the τth prediction point. The prior probability of each root node is assigned according to the probability prediction matrix; the probability of the root node X_i at time t is P(RX_i(t)) = H(R_i(t)), i = 1, 2, ..., p. For the state probability of the intermediate nodes, it is assumed that the parent-node set F = {R_1, R_2, ..., R_i} exists for the node M_j. According to the conditional independence assumption, the probability prediction of an intermediate node at time t can be obtained from

P(RM_j(t)) = Σ_F P(RM_j(t) | F) ∏_{R_i ∈ F} P(R_i(t)).

Based on the probabilities of the root nodes and intermediate nodes, the predicted probability that the leaf node is in the healthy state can then be solved according to

P(RL(t)) = Σ_{X,M} P(RL(t) | RPa(L)(t)) ∏_q P(RM_q(t) | RPa(M_q)(t)) ∏_{i=1}^{p} P(RX_i(t)),

where Pa(∗) is the parent node of the node (∗). According to the above formula, the JT estimation algorithm traverses the DAG and the state of the system node L can be predicted. From the probability prediction matrix R_{p×n} = (R(t), R(t + ς), R(t + 2ς), ..., R(t + nς)) of the root nodes, the corresponding prediction sequence of probabilities at the system level is obtained, achieving continuous prediction of the health state.

A Bayesian Network Model of an Unmanned Aerial Vehicle Data-Link System Considering the Bit Error Rate

In addition to the possible degradation of hardware equipment performance and accidental failure, the health status of the UAV data-link system can be affected by the health status of the UAV data-links. Factors such as the bit error rate (BER), packet loss rate, path loss, UAV speed, weather and channel capacity all affect the health status of UAV data-links. This can be expressed as

h = f(x_1, x_2, ..., x_n),

where h indicates the health status of the UAV data-links and x_i is a factor that affects the health status of the UAV data-links; for example, x_1 is the UAV speed, x_2 is the weather, x_3 is the bit error rate (BER), and so forth. In what follows, we introduce the bit error rate as a factor affecting the health status of the UAV data-links. In communications, the bit error rate is an important indicator that measures the accuracy of data transmission within a specified time. Often, bit errors are caused by a combination of factors, such as the decay of the signal transmission or pulse interference caused by noise, alternating current, lightning and equipment failure. Since the bit error rate is the number of bit errors divided by the total number of transferred bits during a studied time interval, it is a probability value that can reflect the error (i.e., unhealthy state) of information transmission, and therefore it can be used in a BN prediction model as a representation between the nodes that are associated with each of the error transmissions.
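To make the link between a simulated bit stream and the "transmission normal" probability concrete, the sketch below estimates a BER for one link over a simple AWGN channel; it is an illustration only and does not reproduce the BPSK spread-spectrum/Walsh-code simulation used in the paper (the Eb/N0 value is arbitrary).

set.seed(1)
n_bits  <- 1e5
EbN0_dB <- 6
bits    <- sample(c(0, 1), n_bits, replace = TRUE)
tx      <- 2 * bits - 1                              # BPSK mapping: 0 -> -1, 1 -> +1
sigma   <- sqrt(1 / (2 * 10^(EbN0_dB / 10)))         # noise std. deviation for unit-energy symbols
rx      <- tx + rnorm(n_bits, sd = sigma)
BER_n     <- mean((rx > 0) != (bits == 1))           # empirical bit error rate of link n
P_Error_n <- 1 - BER_n                               # probability that transmission on link n is normal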
In this way, we can modify the BN model to predict the UAV data-link health state considering not only the networking mode: we can add bit error rate nodes to represent how the data-link is affected by the external environment. The BER value of each newly added root node can be simulated and generated. Each link generates a random bit stream for information transmission by means of BPSK spread spectrum. The signal is processed by generating a sixteenth-order Walsh code, and the simulation can then be performed. As shown in Figure 5, interfering with the modulated bit stream generates random errors; collecting statistics on the bit error rate yields a bit error rate value BER_n for each link, thereby giving the probability that the link information transmission is normal: P_Error_n = 1 − BER_n.

The method for predicting the health status of the UAV data-link system proposed in this paper can be summarized as follows:
1. Determine the nodes of the UAV data-link system;
2. Construct a DAG of the UAV data-link system and establish the CPTs of the non-root nodes;
3. Consider the impact of the bit error rate, add the bit error rate nodes and establish their CPTs;
4. Use the JT estimation algorithm to solve the joint probability of the relevant nodes, to update the conditional probability values of each node and to achieve the deduction of the state probability of the UAV data-link system nodes.

Case Study

To test and verify our model, we applied it to a type of solar-powered UAV data-link system. The solar-powered UAV data-link system consists of two non-line-of-sight links and three line-of-sight links.
The B-line-of-sight link (HF band) cannot transmit large task-load data such as pictures and images because of its limited bandwidth, and it cannot transmit telemetry information; therefore, only telecontrol information can be uploaded, in simplex mode. The A-line-of-sight link (UHF band) and the C-line-of-sight link (UHF band) are mutually redundant, and their bandwidth is sufficient for simultaneous telecontrol and telemetry. Outside the line-of-sight range, control commands are received through the non-line-of-sight links and long-distance data back-transmission is completed. The α-non-line-of-sight link (Ku band) and the β-non-line-of-sight link (Ka band) form a redundant pair. The non-line-of-sight links are relayed by two satellites respectively, but the airborne communication terminal can only point to one of them at any given time, so the two non-line-of-sight links share the airborne communication terminal.

The normal operation of the UAV is inseparable from the real-time control provided by the data-link system. The data-link, the subsystem of the unmanned aircraft system that provides information transmission, is highly sensitive. The airborne communication terminal, the repeater satellite's communication terminal and the ground data communication terminal of the unmanned aircraft system are in motion relative to each other at all times. The radio frequency transmitter, the radio frequency output power, external interference and the type and gain of the transmitting/receiving antenna determine the maximum acceptable distance at which the UAV data-link can establish wireless communication. Data transmission through the UAV data-link will experience channel fading as the communication distance increases beyond the range limit, which has a negative impact on the continuity and stability of signal transmission. This unreliable condition is reflected in the communication quality as the error rate. Based on these considerations, we took the bit error rate and the communication distance together as the parameters affecting the quality of information transmission, to establish the health-state prediction model of the UAV data-link system. The factors influencing the quality of each channel of the link were composed of two parts: channel fading caused by increases in the communication distance, and the bit error rate caused by random fluctuations. In our model, the communication interruption rate affected by the communication distance is described by the Barnett-Vignant formula, in which the outage probability depends on the fading margin FM (dB), the transmission distance d (km), a terrain-roughness factor A, a climate and environment factor B, and the carrier frequency f (GHz) of the channel; the probability that the information will not be interrupted is one minus this outage probability.

Figure 6 shows the mode of the solar-powered UAV data-link system with three line-of-sight links and two non-line-of-sight links. The root node prediction model is shown in Table 1, which gives, for each root node, a description and the corresponding prediction model and parameters (for example, the α-chain airborne data terminal uses the degradation model for the power MOSFET).
We built a BN according to the networking mode of the solar-powered UAV data-link system, and then we modified the BN by considering the impact of the communication interruption rate and the bit error rate. We added 13 root nodes to indicate the factors affecting the health of each channel due to the communication distance and bit error rate. P_Error_1 ∼ P_Error_13 are the BERs of the α-chain non-line-of-sight uplink, the α-chain non-line-of-sight downlink, the A-chain line-of-sight uplink, the A-chain line-of-sight downlink, the C-chain line-of-sight uplink, the C-chain line-of-sight downlink and the B-chain line-of-sight uplink, respectively. Accordingly, we constructed the BN as shown in Figure 7; the leaf node X_50 represents the health state of the solar-powered UAV data-link. According to the communication distance (Figure 8) and the fading margin (6~10 dB) requirements of each channel, we combined the communication interruption rate simulation curves obtained using the Barnett-Vignant formula with the bit error rate to get the health status of each channel. Figure 9 shows the curves P_Error_n (except P_Error_1, P_Error_2, P_Error_7 and P_Error_8) as functions of time.

For P_Error_1 (α-chain ground-satellite uplink), P_Error_2 (α-chain ground-satellite downlink), P_Error_7 (β-chain ground-satellite uplink) and P_Error_8 (β-chain ground-satellite downlink), the distance from the repeater satellite to the ground data terminal can be considered a fixed value, because this distance is much larger than the relative motion distance between the repeater satellite and the ground data terminal. Therefore, the information interruption rate affected by the communication distance was considered a fixed value. We combined the information interruption rate and the bit error rate to obtain the curves of P_Error_1, P_Error_2, P_Error_7 and P_Error_8, as shown in Figure 10. The communication distance of each of these channels is considered a fixed value, but the health of the channel still produces random fluctuations over time. The health status of the uplink is slightly better than the health status of the downlink.
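The exact rule used to combine the distance-related interruption rate and the BER into a single channel-health probability is not given above; one simple possibility, assuming the two failure mechanisms act independently, is sketched below (the values are illustrative and not taken from Table 1).

channel_health <- function(p_outage, BER_n) (1 - p_outage) * (1 - BER_n)
channel_health(p_outage = 0.02, BER_n = 0.005)   # illustrative values only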
Figure 12 shows that the health of the line-of-sight links is better than the health of the non-line-of-sight links. Because of the redundancy of the links, neither the telemetry link nor the telecontrol link reached its median life at 860 h. The telemetry link is made redundant by three links and the telecontrol link by two, so the health status of the telemetry link is slightly better than that of the telecontrol link. Figure 13 shows that the health status of the UAV data-link system is worse than the health status of the telemetry link and the telecontrol link; after 250 h, there is a significant difference. The UAV data-link system reaches its median life at about 840 h.

Figure 13 also shows that, although changes in the position of the solar-powered UAV during the mission changed the communication distance of each channel periodically, the health status of the entire solar-powered UAV data-link system tended to be stable. It can be seen from the data-link system health-state prediction curve that both node 46 (telemetry link) and node 49 (telecontrol link) have an impact on the health status of the data-link system node 50 over the next 840 h, but the latter is the main reason for the decline of the system health status. The weak link is analyzed layer by layer using the prediction curves of the intermediate nodes: telecontrol → non-line-of-sight link → β-chain non-line-of-sight link → UAV-RS uplink → repeater satellite b. At the same time, it can be seen from the DAG diagram of the solar-powered UAV data-link system that the repeater satellite b node in the β non-line-of-sight link is the parent node of a total of four nodes, the β-chain non-line-of-sight uplinks and downlinks. So the health status of repeater satellite b has a critical and direct impact on the health of the solar-powered UAV data-link.

Conclusions and Future Work

In this paper, we proposed a method for predicting the health status of a UAV data-link system based on a Bayesian network. This model employs the Bayesian network to describe the information and uncertainty associated with a complex multi-level system.
In addition, the proposed approach considers both hardware equipment degradation and the health status of the UAV data-links. This method combines the health status of hardware with different life characteristics and the health status of UAV data-links affected by the external environment to predict the health status of the UAV data-link system. We provide a framework for predicting the health status of the UAV data-link system; other factors that affect the health status of the UAV data-link system can also be incorporated into this method. In this paper, we describe the hardware health status of components with different life/performance characteristics and the BER value of each link to predict the health status of the UAV data-link system with a unified state-probability index. Through this approach, we can describe the health status of a UAV data-link system quantitatively and comprehensively. The case study of a multi-level solar-powered UAV data-link system shows that the model can quantitatively describe the health status of a solar-powered UAV data-link system with hardware degradation failure and with link failure affected by the communication distance and the BER value. Based on the predicted results, we can understand the health status of the UAV data-link in real time, improve the networking mode of the UAV data-link system, and provide guidance for maintenance decisions for the UAV data-link system. At the same time, the study lays a foundation for accurately predicting the health of the UAV data-link system.
However, factors such as the weather (rain, cloud), UAV speed and electromagnetic interference, which have an important impact on the communication quality of the UAV data-link system, are not quantified in this paper. In our future work, we will perform experiments to obtain data with which to verify the indicators of these influencing factors; more factors can then be incorporated into this model, so that we can predict the health status of the UAV data-link system more accurately. At the same time, this method has a high dependence on the prediction model: more accurate lifetime prediction for the UAV data-link system relies on accurate device-level prediction information, which imposes higher requirements on information acquisition, processing and analysis from multiple sensors.
Application of neurite orientation dispersion and density imaging (NODDI) to a tau pathology model of Alzheimer's disease

Increased hyperphosphorylated tau and the formation of intracellular neurofibrillary tangles are associated with the loss of neurons and cognitive decline in Alzheimer's disease and related neurodegenerative conditions. We applied two diffusion models, diffusion tensor imaging (DTI) and neurite orientation dispersion and density imaging (NODDI), to in vivo diffusion magnetic resonance images (dMRI) of a mouse model of human tauopathy (rTg4510) at 8.5 months of age. In grey matter regions with the highest degree of tau burden, microstructural indices provided by both NODDI and DTI discriminated the rTg4510 (TG) animals from wild-type (WT) controls; however, only the neurite density index (NDI) (the volume fraction that comprises axons or dendrites) from the NODDI model correlated with the histological measurements of the levels of hyperphosphorylated tau protein. Reductions in diffusion directionality were observed when implementing both models in the white matter region of the corpus callosum, with lower fractional anisotropy (DTI) and higher orientation dispersion (NODDI) observed in the TG animals. In comparison to DTI, histological measures of tau pathology were more closely correlated with NODDI parameters in this region. This in vivo dMRI study demonstrates that NODDI identifies potential tissue sources contributing to DTI indices, and NODDI may provide greater specificity to pathology in Alzheimer's disease.

Introduction

The progression of soluble tau protein to neurofibrillary tangles (NFTs) is at the centre of many human neurodegenerative diseases, including Alzheimer's disease (AD) (Spillantini and Goedert, 2013). A known function of tau protein is to stabilise axonal microtubules. However, in AD these proteins dissociate, resulting in a breakdown of the microtubules and aggregation of insoluble hyperphosphorylated tau filaments, impairing neural pathways (Ballatore et al., 2007). Tau aggregates form within discrete neurites in select nerve cells and propagate, affecting downstream synaptically connected brain regions (de Calignon et al., 2012; Pooler et al., 2013; Ahmed et al., 2014). Monitoring the progression of tau and its effect on neuronal reorganization due to tau-induced neurodegeneration in vivo is a key requirement in understanding the progression of AD and determining the efficacy of current therapeutic attempts to target tauopathies (de Calignon et al., 2012; Ahmed et al., 2014). To study these effects, we used an animal model of tau pathology: the TG mouse model, which overexpresses a mutant form of human tau (P301L), resulting in tau accumulation in the form of NFTs largely restricted to the hippocampus, cortex, olfactory bulb and striatum (Santacruz et al., 2005). These NFTs are an intracellular hallmark of AD and are believed to lead to neuronal dysfunction, neurotoxicity and brain atrophy, resulting in neurological deficits and neuronal cell death (Santacruz et al., 2005). Current understanding of tau propagation in synaptically connected brain regions is primarily obtained from invasive tissue measurements using immunohistological methods (Liu et al., 2012; Hardy and Revesz, 2012), which are restricted to single time point analyses and do not allow in vivo dynamic assessment of tissue remodelling due to pathology.
Diffusion magnetic resonance imaging (dMRI) is sensitive to the diffusion process of MR-visible molecules. In biological tissue, dMRI techniques provide indices associated with the dispersion pattern of water molecules, which reflect the integrity of the tissue microstructure in vivo (Basser and Pierpaoli, 1996). Most previous studies of AD-related pathologies have applied a tensor model referred to as diffusion tensor imaging (DTI) to investigate changes in neuronal cytoarchitecture (Assaf and Pasternak, 2008; Horsfield and Jones, 2002) and age-related changes in the TG mouse model (Sahara et al., 2014; Wells et al., 2015). However, the common diffusion tensor model is based on the assumption of a simple underlying Gaussian diffusion process (Alexander et al., 2002). In contrast, the neurite orientation dispersion and density imaging (NODDI) technique uses a non-Gaussian biophysical model, providing higher sensitivity to the non-monoexponential diffusion in the brain parenchymal environment (Alexander et al., 2002; Syková and Nicholson, 2008). In tauopathies, where the densely packed microtubules in neurite structures become destabilised and tau protein misfolds to form NFTs, diffusion MRI techniques that use biophysical models of tissue (Vestergaard-Poulsen et al., 2011; Jespersen et al., 2012; Hansen et al., 2011; Wang et al., 2013) may have higher specificity to changes in tissue cytoarchitecture and the pathological processes of tau accumulation.

NODDI is a recent microstructure imaging technique based on diffusion MRI, but using diffusion gradients of different strengths to provide more specific indices of tissue microstructure than DTI (Zhang et al., 2012). NODDI relies on a biophysical model that separates the diffusion of water in brain tissue into three types of microstructural environment: intra-neurite, extra-neurite, and cerebrospinal fluid (CSF) compartments (Zhang et al., 2012). To achieve this, NODDI applies a two-level approach by separating the volume fraction of Gaussian isotropic diffusion (IsoVF), representing the freely diffusing water (i.e., CSF), from the neural tissue (Zhang et al., 2012). The remaining signal is sub-compartmentalised into components from intra- and extra-neurite water, which are non-exchanging. This separation yields important markers: the neurite density index (NDI) (the fraction of tissue that comprises axons or dendrites) and the interdependent extra-neurite volume fraction (the fraction of tissue other than neurites) (Zhang et al., 2012). The dispersion of the neurite structures is further characterised by the orientation dispersion index (ODI), which reflects the spatial configuration of the neurite structures, with large values of ODI corresponding to highly dispersed neurites (e.g., grey matter) and small values of ODI to highly aligned axons (e.g., white matter tracts) (Zhang et al., 2012).

In this study, we sought to determine the effectiveness of NODDI as an in vivo imaging marker to detect tau pathology and demonstrate the improved specificity of the technique over standard DTI measures for detecting AD-like pathology. We report the correlations of both models with immunohistological stains for filamentous tau and evaluate each technique's specificity to tau pathology.
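For reference, the three-compartment separation described above is commonly written in the compact form used by Zhang et al. (2012); the equation below is that standard formulation rather than one reproduced from the present article:

A = (1 − ν_iso) [ ν_ic A_ic + (1 − ν_ic) A_ec ] + ν_iso A_iso,

where A is the normalized diffusion signal, ν_iso is the isotropic (CSF) volume fraction (IsoVF), ν_ic is the intra-neurite volume fraction (NDI), and A_ic, A_ec and A_iso are the signal models of the intra-neurite, extra-neurite and CSF compartments, respectively.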
Mice were imported to the UK for imaging studies at the Centre for Advanced Biomedical Imaging (CABI), London. All studies were carried out in accordance with the United Kingdom's Animals (Scientific Procedures) Act of 1986. Five TG and five WT female litter-matched mice (8.5 months of age) were imaged. The animals were placed in an induction chamber and anaesthetised with inhaled isoflurane (2% isoflurane at 1 l/min O2) until the pedal withdrawal reflex was lost. They were then transferred to an MRI-compatible head holder to minimise motion artefacts and maintained at 1.5% isoflurane at 1 l/min O2 for the duration of scanning. Core temperature was maintained at 37°C using a warm air blower feedback system and rectal probe (SA Instruments). Physiological monitoring of temperature and respiration was recorded throughout the scan (SA Instruments). The scans were performed on a 9.4T Agilent scanner with VnmrJ 3.1 front-end software using the Agilent 205/120HD gradient set. RF transmission was performed with a 72 mm inner diameter volume coil and a 4-channel receiver coil (Rapid Biomedical). Diffusion-weighted images were acquired using a 4-shot spin echo-planar imaging (EPI) sequence over sixteen slices. The olfactory bulbs were used as an anatomical landmark to maintain consistency in slice positioning between animals, and the slices covered the cortex and subcortical structures up to the cerebellum. Imaging parameters were: FoV 20 × 20 mm², resolution 200 × 200 μm, 16 × 500 μm axial slices, TR = 2 s, TE = 24 ms. To enable the NODDI analysis, the diffusion MRI protocol consisted of two shells. Images were corrected for motion using 2D rigid body registration to a reference volume (first b = 0 image) using DTI-TK (Zhang et al., 2006). Brain masks were created manually with Matlab (Mathworks version 7.1), and the NODDI microstructure parameter maps were estimated from motion-corrected images using the NODDI toolbox (Zhang et al., 2012). The DTI metrics of fractional anisotropy (FA) and mean diffusivity (MD) were generated from the Camino toolbox (Cook et al., 2006) using weighted linear least squares. The intraneurite compartment in healthy grey matter is highly dispersed due to sprawling dendritic processes (Zhang et al., 2012). In this TG mouse model there is extensive dendritic degeneration in cortical regions of the grey matter (Santacruz et al., 2005). This would cause a reduction in dispersion and would be reflected by reduced ODI. The reduction in dendritic volume would also cause a reduction in NDI (the volume fraction that comprises the intraneurite compartment). Healthy white matter mainly contains highly orientated axon bundles. In tau pathology, the microtubules of the axon bundles dissociate (Santacruz et al., 2005), which should lead to higher diffusion tortuosity in the intraneurite compartment. An increase in diffusion tortuosity would be reflected by an increase in dispersion (higher ODI) and, as the white matter region thins, by a reduction in the neurite density volume (lower NDI). Any increase or decrease in free isotropic diffusion is reflected by a correspondingly higher or lower isotropic tissue volume fraction (Zhang et al., 2012). Directly after in vivo imaging the animals were removed from the MRI scanner and terminally anaesthetised with Euthanal (0.1 mL) administered via intraperitoneal injection.
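The image analysis described above used DTI-TK for motion correction, the MATLAB NODDI toolbox for the NODDI maps, and Camino for the DTI fit. As a rough illustration of the DTI part only, the sketch below fits a tensor with weighted linear least squares using dipy in Python; the use of dipy (rather than Camino) and the file names are assumptions made purely for illustration, not the authors' pipeline.

```python
# Illustrative only: the study used Camino for the DTI fit; this shows an equivalent FA/MD fit with dipy.
# File names ("dwi.nii.gz", "dwi.bval", "dwi.bvec", "mask.nii.gz") are hypothetical placeholders.
import nibabel as nib
from dipy.core.gradients import gradient_table
from dipy.io.gradients import read_bvals_bvecs
from dipy.reconst.dti import TensorModel

data = nib.load("dwi.nii.gz").get_fdata()              # motion-corrected diffusion volumes
mask = nib.load("mask.nii.gz").get_fdata().astype(bool)
bvals, bvecs = read_bvals_bvecs("dwi.bval", "dwi.bvec")
gtab = gradient_table(bvals, bvecs)                     # two-shell acquisition scheme

model = TensorModel(gtab, fit_method="WLS")             # weighted linear least squares, as in the paper
fit = model.fit(data, mask=mask)

fa, md = fit.fa, fit.md                                 # fractional anisotropy and mean diffusivity maps
```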
The thoracic cavities were opened and the animals perfused through the left ventricle of the heart with 15-20 mL of saline (0.9%) followed by 50 mL of buffered formal saline at a flow rate of 3 mL per minute. Following perfusion, the animal was decapitated, defleshed and the lower jaw removed. All brains were stored in-skull in buffered formal saline at 4°C before being dispatched for histology. Brain samples were then processed using the Tissue-Tek® VIP processor (GMI Inc, MN, USA). After processing, sections were embedded in paraffin wax to allow coronal brain sections to be cut. Serial sections (6-8 μm) were taken using HM 200 and HM 355 (Thermo Scientific Microm, Germany) rotary microtomes. Immunohistochemistry (IHC) was performed using a primary antibody for tau phosphorylated at serine 409 (PG-5; 1:500 from Peter Davies, Albert Einstein College of Medicine, NY, USA). Secondary antibody was applied and slides were then incubated with avidin biotin complex (ABC) reagent for 5 min (M.O.M. kit PK-2200, Elite ABC rabbit kit PK-6101, or Elite RTU ABC PK-7100, Vector Labs). After rinsing, slides were treated with the chromogen 3,3′-diaminobenzidine (DAB; Vector Laboratories, SK-4100) to allow visualisation. The slides were then cover-slipped, dried and digitised using an Aperio Scanscope XT (Aperio Technologies Inc., CA, USA). Images were viewed and analysed with Aperio ImageScope software (version 10.2.2319). In this study, two sections that align with the diffusion slices for each mouse were analysed. For each section stained, regions of specific interest (in this case the cortex, corpus callosum, hippocampus and thalamus) were delineated. The density of PG-5 immunoreactivity was quantified within these regions of interest using Aperio ImageScope and exported into spreadsheets for statistical comparison with diffusion measurements. t-tests of group differences between WT and TG and Pearson correlations of diffusion metrics with PG-5 immunoreactivity were carried out using GraphPad Prism (version 5) (Motulsky et al., 1994). Adjustment for multiple comparisons was performed for each model using false discovery rate (FDR) correction, and the level of significance was set at 0.05. Results Representative maps of fractional anisotropy (FA), mean diffusivity (MD), orientation dispersion (ODI), neurite density (NDI), and isotropic diffusion (IsoVF) for WT and TG mice are presented in Fig. 1. The FA and ODI maps show the highly anisotropic nature of the corpus callosum white matter. The IsoVF map clearly delineates CSF and the associated ventricular enlargement in the TG. The outputs of the ROI analysis for the four selected regions, three grey matter regions (cortex, hippocampus and thalamus) and one white matter region (corpus callosum) (Fig. 2B), were compared with the percentage tau burden measured through immunohistology in the same four regions. The cortex and the hippocampus, as expected, presented with the highest degree of tau burden, and the thalamus had the lowest tau burden (Fig. 2A). Thalamus In the thalamus (the region with the lowest tau burden, Fig. 2A), FA, MD, ODI and NDI could not discriminate between WT and TG groups. However, there was a significantly lower isotropic diffusion volume fraction (IsoVF) in the TG mice compared with WT animals (t = 2.9; df = 8; p < 0.05) (Fig. 3D).
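A minimal sketch of the statistical comparison described in the Methods above (group t-tests, Pearson correlation against PG-5 immunoreactivity, and FDR correction) is given below. The authors used GraphPad Prism; this Python version with scipy and statsmodels is only illustrative, and the example arrays are hypothetical placeholders for per-animal ROI values.

```python
# Hypothetical per-animal ROI values (n = 5 per group), for illustration only.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

wt_ndi = np.array([0.42, 0.44, 0.41, 0.43, 0.45])     # NDI in one ROI, wild type
tg_ndi = np.array([0.35, 0.33, 0.36, 0.34, 0.37])     # NDI in the same ROI, rTg4510
tau_burden = np.array([2.1, 3.4, 2.8, 3.0, 3.9])      # % PG-5 immunoreactivity, TG animals

t_stat, p_group = stats.ttest_ind(wt_ndi, tg_ndi)     # WT vs TG group difference
r, p_corr = stats.pearsonr(tg_ndi, tau_burden)        # correlation with tau burden

# Collect p-values across metrics/ROIs for one diffusion model, then FDR-correct (alpha = 0.05).
pvals = [p_group, p_corr]
reject, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(t_stat, r, p_fdr, reject)
```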
Discussion In this work, we have shown that NODDI indices correlated with histological measurements of tau pathology in grey matter regions in a mouse model of human tauopathy, whereas traditional DTI indices of MD and FA did not. The significantly lower FA in the white matter tract of the corpus callosum (CC) is in good agreement with previous studies of the TG animals at 8 months (Sahara et al., 2014; Wells et al., 2015). Lower FA and higher MD in the CC have been reported in AD patients when compared with healthy volunteers (Bozzali et al., 2001), and more recent studies have found that reductions in white matter FA in AD patients correlate with CSF AD biomarkers of total and phosphorylated tau (Amlien et al., 2013). The higher ODI and reduced FA in the white matter reflect the previously reported disorganisation of the white matter due to elevated tau burden (Sahara et al., 2014). The ability of the NODDI technique to separate the microstructural compartments that all contribute to DTI metrics is reflected by the higher sensitivity of ODI to the reduced directionality of white matter tissue due to tau pathology (Sahara et al., 2014) (Supplementary Table 1). Furthermore, we examined two additional white matter regions, the fimbria and internal capsule. In these two regions we found changes similar to those in the corpus callosum in the TG group, with higher ODI, reduced NDI and no significant difference in IsoVF in the NODDI technique (Supplementary Fig. 2). In the cortex, a grey matter region which is normally characterised by sprawling dendritic processes, the dispersion (i.e. ODI) was reduced in the TG, and the neurite density (NDI) and isotropic diffusion (IsoVF) were higher. The increase in filamentous tau burden (Fig. 2A) and intraneuronal inclusions has resulted in cortical thinning (Fig. 2B). Previous findings have reported a loss of pyramidal cells, a reduction in cortical volume and an increase in neurite density at this time point in this TG animal model (Ludvigson et al., 2011). Lower ODI and higher NDI in the TG animals imply a similar reduction in dendritic complexity, due to atrophy of the cortical layers and the accompanying loss of neurite structures as previously reported in this animal model (Ludvigson et al., 2011). As such, the tissue would become less dispersed, leading to lower ODI, while the reduced cortical volume with higher packing density is reflected by higher NDI in the TG group. As the cortical structure atrophies, the space becomes occupied by CSF. The higher CSF contamination increases the free diffusion compartment, reflected by a higher isotropic volume fraction (IsoVF). The higher MD in this region is likely due to partial volume effects of the CSF contamination, and NODDI's ability to remove the effect of CSF contamination increases its specificity to the cytoarchitecture. Interestingly, NDI was the only parameter to correlate with the degree of tau pathology, indicating that this parameter may be sensitive to the underlying nature of the disease process. However, two of the animals may be driving the significant differences in the isotropic volume fraction and neurite density observed in the cortex. Similarly, in the hippocampus the neurite density was the only parameter that correlated with histological measures of tau burden. There are also region-specific effects of tau pathology in the marked atrophy of the hippocampus, with previously reported neurite loss at 8.5 months amounting to 85% in the dentate gyrus, 82% in CA1 and 69% in CA2/3 (Spires et al., 2006).
Currently we do not have the resolution in dMRI data to separate each compartment of the hippocampus; however, the dramatic loss of neurite projections is reflected by the reduced NDI, and the reduced dispersal of neuronal projections is reflected by a lower ODI. The correlations in both the cortex and hippocampus indicate that NDI may provide higher specificity to tau pathology formation in grey matter relative to MD or FA. Correspondingly, in the thalamus, which has a very low degree of tau burden in comparison to the hippocampus and cortex, the FA, MD, ODI and NDI did not discriminate TG from WT, although the IsoVF was lower in the TG group. However, no correlation was found between IsoVF and tau burden in the thalamus (Supplementary Table 1). Previous studies have reported a significantly higher MD in the thalamus at 8.5 months in this animal model (Wells et al., 2015). However, due to the lower numbers in this study, this change may not have been detected. DTI has great potential value as a clinical biomarker in AD (Oishi et al., 2011; Keihaninejad et al., 2013; Ryan et al., 2013). However, although attempts have been made (Molinuevo et al., 2014; Patil and Ramakrishnan, 2014), it is difficult to relate the parameters directly to pathological or biological processes. In contrast, a NODDI parameter derived from a biological model of brain cytoarchitecture has displayed a close correlation with the degree of tau pathology in the TG mouse model (Supplementary Table 1). This approach could extend the applicability of diffusion imaging in both preclinical models and clinical AD by providing insight into grey matter microstructural alterations and their relationship to underlying disease processes, other biomarkers of AD pathology and clinical or behavioural phenotype. Limitations Further work is required to apply this technique to a multi-parameter longitudinal study over the time course of disease. This would determine the sensitivity of the NODDI metrics relative to other novel and established MR methods in the pre-symptomatic phase of the disease. We envisage that a multi-parameter longitudinal study would have three separate animal groups of (1) WT littermates, (2) untreated TG and (3) a doxycycline-treated TG group (which suppresses tau expression), investigated from 2 months to 8 months. This study would aim to determine the specificity of the NODDI metrics to pathology in the pre-symptomatic phase of the disease and during controlled expression. Although we have demonstrated specificity of NODDI metrics in this tau model, the number of mice in this study is low and greater confidence may be reached by increasing the sample size. Conclusion The results of this cross-sectional study suggest that NODDI measurements could provide a higher degree of specificity to the pathological effects of tau in grey matter, and higher interpretability with respect to the underlying biological processes, in comparison to traditional DTI measures. The NODDI metrics discriminated between the TG and WT groups, and the correlations between the immunohistological measures of pathological tau and NODDI measures in the cortex, hippocampus and corpus callosum demonstrated the heightened specificity of this technique in comparison to DTI (Supplementary Table 1).
The correlation of microstructural grey and white matter changes with pathology represents a new target of investigation in AD, which may serve as an early diagnostic marker of pathology; however, in light of the limitations outlined above, these results must be considered exploratory at this point.
4,517.8
2016-01-15T00:00:00.000
[ "Biology" ]
Research on Application of Computer Virtual Technology in Basketball Training. Sports training is an important way to improve the level of sports competition. By making full use of the advantages of virtual technology, training outcomes in various sports can be improved and athletes can master movement details, helping them adjust their movements in time and raise their level of basketball skill. In basketball training, making full use of computer virtual technology not only reconstructs the basketball training system but also greatly improves students' understanding of basketball, elevating the training system as a whole. Based on the application background of computer virtual technology in basketball training, and combined with the requirements of basketball training, this paper puts forward strategies for applying computer virtual technology. Immersion experience characteristics. VR technology can simulate different situations according to people's needs and engage the body's senses, thereby fully meeting students' basic needs for a given situation. This technology can make the situation more lifelike and guide students to experience the appeal of the physical education curriculum by constructing an idealised space. Computer virtual technology is a new type of technology that has emerged with the development of science and technology. It mainly consists of sensing technology, computer technology, simulation technology and so on, which together simulate the real world. By simulating real vision and various actions, users can deepen their understanding of a given subject, perceive the phenomena constructed by the virtual world, and strengthen their interaction with the virtual environment, bringing it closer to the real world.
THE APPLICATION VALUE OF COMPUTER VIRTUAL TECHNIQUE IN BASKETBALL TRAINING As a forward-looking technology, virtual technology plays an increasingly important role in modern sports and offers advantages over traditional sports teaching and training. Accidents can occur in daily training, so such sports can be effectively combined with computer virtual technology, which can solve many problems that arise during training and reduce unnecessary safety incidents. Virtual training can improve overall sports skills, correct non-standard movements and improve training effectiveness. Computer virtual technology can virtualise sports movements and handle difficult movements flexibly, avoiding the serious injuries that complicated movements can cause and allowing athletes to experiment with new moves. It can also make up for deficiencies in modern basketball teaching and training conditions. In modern basketball teaching and training, weather, venue, equipment, materials, funding and other constraints often mean that some teaching and training courses cannot be carried out. A virtual reality system can make up for these deficiencies: students can study basketball techniques and tactics without leaving home and gain an experience close to the real thing, thereby enriching perceptual knowledge and deepening understanding of the teaching content. It can also help avoid sports injuries caused by difficult and complicated technical movements. With the development of modern basketball, competition is increasingly intense and the technical difficulty keeps rising; using virtual reality technology to run virtual movement experiments can remove this concern. It can also break the limits of space and time completely. Using virtual reality technology, we can watch the techniques and tactics of the world's best athletes and the instruction of the world's famous coaches, and experience modern basketball ideas. Virtual avatars are another possibility: a virtual reality system can present images of famous athletes, coaches and others to create a humanised learning environment. In a virtual classroom atmosphere, learners can exchange and discuss with virtual coaches, teachers and athletes, discuss problems of technique and tactics in learning and training, and carry out cooperative learning. Virtual reality technology can provide students and athletes with a vivid, lifelike learning environment in which they become participants and play a role; they can compete alongside the world's famous athletes and coaches, which arouses enthusiasm for learning, helps break through the key points and difficulties in techniques and tactics, and plays a positive role in developing students' skills. Virtual simulation technology can simulate famous athletes and coaches around the world, as well as their techniques and tactics, so that sports training can move away from a purely experience-based state into a theoretical, digital era.
Virtual reality technology should only be used as an auxiliary means in basketball, applied appropriately in auxiliary teaching, technical design and analysis, tactical simulation and analysis, application training and result display; it should not be over-emphasised, let alone replace traditional basketball teaching and training. At present, virtual reality hardware is still relatively expensive and the technology is not yet widespread. However, with the continuous development and improvement of virtual reality technology and the falling price of hardware, virtual reality as a new teaching platform, with its strong teaching advantages and potential, can be applied not only to the teaching of basketball technique but also to the teaching of other technical movements, and it will play an important role across the field of sports. APPLICATION STRATEGY OF COMPUTER VIRTUAL TECHNOLOGY IN BASKETBALL TRAINING In the rapid development of modern basketball, a large number of direct and vivid teaching materials are needed and basketball skills are constantly improving; it is difficult for ordinary teachers and coaches to perform the demonstration movements in teaching and training, and it is also difficult for coaches to express some tactics clearly because of their complicated and changeable content. At the same time, during teaching and training, athletes often need many observation, imitation, feedback and correction signals in addition to proprioception, especially audio-visual cues. Virtual simulation technology based on 3D computer graphics can not only assist traditional teaching and training methods but also stimulate athletes' interest in learning and training and their sense of agency; it benefits the acquisition of knowledge and the effective organisation and management of teaching and training information, and it provides an ideal environment for teaching and training.
Virtual reality technology is also widely used in the training of coaches and referees. As a brand-new technology, it has brought new educational thinking and solved problems that could not be solved before. Traditional training of coaches and referees usually relies on classroom theory and on-the-spot guidance in the field. Virtual reality not only makes up for the deficiencies of traditional basketball teaching but also provides a new teaching platform that gives the feeling of being on site without leaving home. New 3D virtual animation technology enables complex content to be expressed in simple ways, making it easier for teachers to convey their teaching ideas and content and making that content more intuitive and visual for learners, who can grasp it more easily. As a new technology, virtual simulation not only makes up for the deficiencies of modern basketball teaching methods but also enriches teaching methods and means. Modern basketball techniques and tactics are complex and changeable; when teachers and coaches find it difficult to express them clearly in simple language, virtual simulation can have an unexpectedly good effect. Building virtual characters and environments: virtual simulation technology can model the world's best athletes and coaches, and its interactive features allow learners to compete with them. Three-dimensional simulation of technical movements: through visualisation, athletes can master the key points of technical movements more easily and quickly, improving their overall skill level. New movement design and standardisation of technical movements: movements can be edited, modified and designed; combined with basketball practice, human body science and the laws of human motion, coaches can use this tool to design the "ideal" movements they have in mind, establish standard technical movements for teaching and training, and improve competition results. Technical motion analysis: quantitative analysis of technical movements can be carried out and the results displayed graphically, including displacement, speed and force; a deep comparison of the "ideal" movement with the athlete's actual technical movement yields guidance for improving the athlete's technique. Tactical simulation and analysis: modern basketball techniques and tactics are rich, complex and changeable. Coaches can use simulation technology to model the tactical changes and characteristics of virtual opponents and display them with intuitive, lifelike 3D animation; in this way athletes can observe their opponents' tactical characteristics vividly, and the coach can formulate corresponding tactics and countermeasures according to those characteristics, making it easier to win the game.
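To make the idea of quantitative motion analysis mentioned above more concrete, the sketch below estimates displacement and speed from sampled position data by finite differences. The tracked wrist trajectory and the 50 Hz sampling rate are hypothetical values chosen only for illustration; they do not come from the paper.

```python
# Illustrative kinematics sketch: displacement and speed from sampled positions (finite differences).
import numpy as np

fs = 50.0                                    # sampling rate of the capture system (Hz), assumed
t = np.arange(0, 1.0, 1.0 / fs)
# Hypothetical 2D wrist positions (metres) during a shooting motion.
pos = np.column_stack([0.3 * t, 1.8 + 2.2 * t - 4.9 * t**2])

disp = np.diff(pos, axis=0)                  # frame-to-frame displacement vectors
speed = np.linalg.norm(disp, axis=1) * fs    # speed (m/s) via finite differences
total_path = np.sum(np.linalg.norm(disp, axis=1))

print(f"peak speed: {speed.max():.2f} m/s, path length: {total_path:.2f} m")
```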
When optimising the functional design of computer virtual reality technology, it is necessary to construct an effective target experience function and to simulate it fully against the actual conditions of different scenarios, for example by presenting venue information such as grass, cement, plastic tracks and basketball courts scientifically and incorporating three-dimensional specifications into the functional requirements. In this process, students' experience needs to be analysed accurately. It is therefore necessary to compare the technical requirements with the actual conditions of the sports events, to optimise in combination with the design policies of different scenarios, and to plan projects around the strengths of the technology, so that the design requirements of specific functions are clear. According to the request patterns of the computing back end, the basic requirements of different actions should be separated and an effective communication space constructed, so as to improve the effectiveness of physical education teaching. Particular attention should be paid to integrating training requirements with simulation training and to comparing methods against the data structure of the cloud network, so as to improve the validity of the simulation function. In designing sports scenes, an effective simulated training environment should be developed according to what students can take in. At the same time, teachers should combine multimedia forms of physical education and let students feel the appeal of the actual scene through multimedia equipment, so that they recognise the targeted sports scene more quickly. In particular, the technology should be used to design three-dimensional data models with different structures and to integrate the corresponding contextual design content, so that effective competition standards and situations can be designed; in VR, students can come to understand the rules of different events and the training objectives and methods for events on the field, which subtly deepens their understanding of different sports. CONCLUSION To carry out basketball training, athletes need to train ball handling, team cooperation and reaction. When applying computer virtual technology in basketball training, it is necessary to capture athletes' individual wrong movements; coaches then need to explain the errors to the athletes, analyse the causes of the problems in training, highlight in the video the key points of tactical coordination affected by the error, and let the athletes explore the different details of the movement. ACKNOWLEDGEMENTS Project level: Provincial Social Sciences. Project name: Research on the construction and development of adolescent sports health promotion system in Jilin Province. Subject No.: jjkh20220625sk.
2,787.4
2022-01-01T00:00:00.000
[ "Computer Science", "Education" ]
Identifying influential nodes in complex networks using a gravity model based on the H-index method Identifying influential spreaders in complex networks is a widely discussed topic in the field of network science. Numerous methods have been proposed to rank key nodes in the network, and while gravity-based models often perform well, most existing gravity-based methods rely on node degree, k-shell values, or a combination of both to differentiate node importance, without considering the overall impact of neighboring nodes. Relying solely on a node's individual characteristics to identify influential spreaders has proven to be insufficient. To address this issue, we propose a new gravity centrality method called HVGC, based on the H-index. Our approach considers the impact of neighboring nodes, path information between nodes, and the positional information of nodes within the network. Additionally, it is better able to identify nodes with smaller k-shell values that act as bridges between different parts of the network, making it a more reasonable measure compared to previous gravity centrality methods. We conducted several experiments on 10 real networks and observed that our method outperformed previously proposed methods in evaluating the importance of nodes in complex networks. Preliminaries Centrality measures In the context of an undirected and unweighted simple network G = <V, E>, V and E respectively represent the sets of nodes and links. The cardinality of V and E can be expressed as |V| = N and |E| = M, indicating the presence of N nodes and M links within the network. The network's connectivity structure is typically captured by its adjacency matrix A = (a_ij)_{N×N}, where a_ij = 1 if node i and node j are linked, and 0 otherwise. Degree centrality 17 of node i is defined as $DC(i) = k_i = \sum_{j} a_{ij}$, where k_i is the degree of node i. The maximum integer H(i) such that node i has at least H(i) neighbors whose degrees are all at least H(i) is known as the H-index 18 of node i. The k-shell decomposition method 24 (KS) operates through an iterative process of decomposing the network into distinct shells. Initially, KS removes nodes with a degree of 1 from the network, resulting in a decrease in the degree values of the remaining nodes. This process is repeated by removing nodes with residual degrees less than or equal to 1 until all remaining nodes have residual degrees greater than 1. The nodes removed in this first step constitute the 1-shell, and their k-shell values are assigned as 1. This process is then iteratively applied to obtain the 2-shell, 3-shell, and so on. The decomposition continues until all nodes in the network have been accounted for. Gravity centrality 32 (G) of node i is defined as $G(i) = \sum_{j \in \psi_i,\, j \neq i} \frac{k_s(i)\, k_s(j)}{d(i,j)^2}$, where k_s(i) is the k-shell value of node i, d(i,j) is the shortest path distance from node i to node j, and ψ_i is the set of nodes whose distance from node i does not exceed 3.
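As a minimal sketch (not the authors' code) of the gravity centrality G(i) just defined, the snippet below uses networkx: k-shell values come from nx.core_number, only nodes within shortest-path distance 3 contribute, and the karate club graph is just an example network.

```python
# Minimal sketch of the k-shell-based gravity centrality G(i) defined above.
import networkx as nx

def gravity_centrality(G, radius=3):
    ks = nx.core_number(G)                                   # k-shell value of every node
    scores = {}
    for i in G.nodes():
        # shortest-path distances from i, truncated at the given radius
        dist = nx.single_source_shortest_path_length(G, i, cutoff=radius)
        scores[i] = sum(ks[i] * ks[j] / d**2 for j, d in dist.items() if j != i)
    return scores

G = nx.karate_club_graph()                                   # example network
scores = gravity_centrality(G)
top5 = sorted(scores, key=scores.get, reverse=True)[:5]      # five highest-ranked nodes
```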
Extended gravity centrality 32 (G+) of node i is described as $G^{+}(i) = \sum_{j \in \Gamma_i} G(j)$, where Γ_i is the nearest neighborhood of node i. The improved gravity centrality 33 (IGC) of node i is measured by a truncated gravity formula, where R is the truncation radius and the optimal truncation radius R* can be estimated from the average distance d of the network. The extended improved gravity centrality 33 (IGC+) of node i is described analogously by summing the IGC values over Γ_i, the nearest neighborhood of node i. The local gravity model 34 (LGM) of node i is determined by a gravity formula restricted to nodes within the truncation radius R. The generalized gravity centrality 35 (GGC) of node i is defined by a gravity formula that incorporates the local clustering coefficient C_i of node i, where n_i denotes the number of edges between neighbors of node i and α = 2. The k-shell based gravity centrality 36 (KSGC) is defined by a gravity formula in which c_ij, the coefficient of attraction exerted by node i on node j, depends on the k-shell values k_s(i) and k_s(j) of the two nodes relative to ks_max and ks_min, the largest and smallest k-shell values present in the network; d(i,j) is the shortest path distance from node i to node j. The DK-based gravity model 37 (DKGM) is measured by a gravity formula based on an improved k-shell index: assume that the k-shell value of node i is k_s(i); for the k-degree iteration process, the total number of iterations is q(k), and node i is removed in the p(i)-th iteration of the k-degree process; k*_s(i) is called the improved k-shell index of node i. The multi-characteristics gravity model 38 (MCGM) is measured by a gravity formula that combines several node characteristics, where k_mid, ks_mid and x_mid denote the medians of the degree, k-shell and eigenvector centrality values, respectively, and k_max, ks_max and x_max denote the corresponding maximum values. The entropy-based gravity model 39 (SEGM) is defined by a gravity formula based on information entropy, where E(i) is the information entropy of node i, Γ(i) represents the set of neighboring nodes of node i, and I(i) is the importance of node i. The SIR model used in this paper To evaluate the ranking of impact generated by the algorithm and the simulation, we employed the widely used SIR model 40. In the beginning, a single node in the network, referred to as the "source node," is in the infected state (I), while the remaining nodes are in the susceptible state (S). An infected node has the potential to infect its susceptible neighbors with a probability of β, and each infected node enters the recovered (R) state with a fixed recovery probability, after which it ceases to participate in the dynamics. This propagation process continues until no infected nodes remain in the network. The impact of any given node i can be estimated by N_r, the number of nodes that have recovered after the diffusion process has stabilized. For the sake of simplicity, the recovery probability has been set to 1. Subsequently, the corresponding epidemic threshold 41 can be computed as $\beta_c = \frac{\langle k \rangle}{\langle k^2 \rangle - \langle k \rangle}$, where ⟨k⟩ and ⟨k²⟩ are the average degree and the second-order moment of the degree distribution.
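The SIR procedure described above can be sketched in a few lines of Python. This is only an illustrative implementation under the stated simplification (recovery probability equal to 1), with networkx's karate club graph as a stand-in network and β chosen relative to the epidemic threshold; it is not the authors' code.

```python
# Discrete-time SIR spreading with recovery probability 1; each run returns N_r for one seed node.
import random
import networkx as nx

def sir_spread(G, seed, beta):
    infected, recovered = {seed}, set()
    while infected:
        new_infected = set()
        for u in infected:
            for v in G.neighbors(u):
                if v not in infected and v not in recovered and random.random() < beta:
                    new_infected.add(v)
        recovered |= infected           # every infected node recovers after one step
        infected = new_infected - recovered
    return len(recovered)               # N_r, the node's estimated spreading capacity

G = nx.karate_club_graph()
degs = [d for _, d in G.degree()]
beta_c = sum(degs) / (sum(d * d for d in degs) - sum(degs))   # <k> / (<k^2> - <k>)

# Average N_r over independent runs for each seed node.
influence = {i: sum(sir_spread(G, i, 1.5 * beta_c) for _ in range(100)) / 100 for i in G.nodes()}
```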
Measures Kendall's tau coefficient Kendall's tau coefficient 42 is a measure of correlation between two sequences, with a larger value indicating a greater similarity between the sequences. The definition of Kendall's tau coefficient is as follows. Given two sequences X and Y of the same length, the i-th values are represented by x_i and y_i, respectively; each pair of elements x_i and y_i forms a pair, denoted by (x_i, y_i). If x_i > x_j and y_i > y_j, or x_i < x_j and y_i < y_j, the pairs (x_i, y_i) and (x_j, y_j) are considered concordant. They are considered discordant if x_i > x_j and y_i < y_j, or x_i < x_j and y_i > y_j. If x_i = x_j and y_i = y_j, the pair is neither concordant nor discordant. Therefore, the Kendall's tau coefficient τ is defined as $\tau = \frac{2(n_{+} - n_{-})}{n(n-1)}$, where n is the length of the sequences, n_+ is the number of concordant pairs, and n_− is the number of discordant pairs. Jaccard similarity coefficient In some applications, concentrating on the top-ranked nodes rather than all nodes may be appropriate. In contrast to the Kendall correlation coefficient, the Jaccard similarity coefficient is utilized to assess the similarity between the top-k nodes in two ranking lists 25,43. The Jaccard similarity is calculated by dividing the number of common nodes by the number of unique nodes in the two lists, and its expression is $J(X, Y) = \frac{|X \cap Y|}{|X \cup Y|}$, where X and Y represent the top-k nodes with the highest influence as determined by two different methods. In the context of our experiments, X represents the top-k nodes identified by HVGC and the other baseline methods, while Y represents the top-k nodes obtained through the SIR simulation. We use the Jaccard similarity coefficient to measure the similarity between these two sets of top-k nodes. The Jaccard similarity coefficient ranges from 0 to 1, where a higher value indicates a greater degree of similarity between the two ranking results. A Jaccard similarity coefficient of 0 indicates completely distinct results, while a value of 1 indicates that the two sets of top-k nodes are identical. The monotonicity index The monotonicity 25 M is used to quantitatively measure the resolution of different indices in a ranking list X, and can be calculated by $M(X) = \left[1 - \frac{\sum_{c} N_c (N_c - 1)}{N(N-1)}\right]^2$, where N is the size of the network and N_c is the number of nodes with the same index value c. Algorithms Previous research has utilized the gravity model approach to analyze node importance in complex networks. Degree and k-shell values are commonly used metrics to consider the number of neighbors a node has and its position within the network, respectively. However, these metrics alone do not capture the overall influence of a node's neighbors. While the H-index considers the importance of a node's neighbors, it may overlook certain information from neighboring nodes, failing to account for the collective impact of all neighbors. We take the toy network shown in Fig. 1 to illustrate the problem for the H-index, where the node spreading capacity derived from 1000 independent runs of the SIR model has been numerically labeled in Fig. 1. Obviously, several distinct nodes share the same H-index value, where H(i) represents the H-index of node i; the H-index often assigns the same value to different nodes, which leads to a lack of excellence in the ability to differentiate the influence of nodes. The same issue exists in DC 17 and KS 24. Additionally, from Fig.
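A compact sketch of the three evaluation measures above (Kendall's tau, top-k Jaccard similarity, and monotonicity) is given below for two hypothetical score dictionaries. Note that scipy's kendalltau implements the tie-corrected tau-b, which coincides with the definition above when there are no ties; the dictionaries method_score and sir_score are placeholders, not data from the paper.

```python
# Evaluation measures for a centrality ranking (method_score) against an SIR ranking (sir_score).
from collections import Counter
from scipy.stats import kendalltau

def kendall(method_score, sir_score):
    nodes = sorted(method_score)
    tau, _ = kendalltau([method_score[n] for n in nodes], [sir_score[n] for n in nodes])
    return tau

def jaccard_topk(method_score, sir_score, k):
    top_m = set(sorted(method_score, key=method_score.get, reverse=True)[:k])
    top_s = set(sorted(sir_score, key=sir_score.get, reverse=True)[:k])
    return len(top_m & top_s) / len(top_m | top_s)

def monotonicity(score):
    N = len(score)
    same = sum(n_c * (n_c - 1) for n_c in Counter(score.values()).values())
    return (1 - same / (N * (N - 1))) ** 2
```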
1, it can be observed that Node 3 has a higher propagation capability compared to Node 9, but Node 3 has a lower H-index than Node 9. This indicates that the H-index overlooks some information from the neighbors of a node. From this, we take all neighboring nodes of node i with degree values greater than or equal to H(i) and add up the degree values of these nodes to measure the overall influence of the neighboring nodes on node i. The value obtained is denoted as HV(i), and its expression is $HV(i) = \sum_{j \in \Gamma(i),\, k_j \ge H(i)} k_j$, where Γ(i) is the nearest neighborhood of node i, k_j is the degree of node j, and H(i) represents the H-index of node i. By incorporating the overall influence of node neighbors into the definition, it enhances the discriminative power of node identification compared to the H-index. However, it is still insufficient to accurately distinguish cluster-like nodes: because of their close connections, such nodes can more easily achieve large HV values, but their actual influence may not be greater than that of nodes with lower HV values, as shown in Fig. 1. HV(6) = 8, HV(9) = 7 and HV(3) = 4, yet the actual propagation capacity from high to low is nodes 3, 9, and 6; a similar problem with the k-shell approach was noted by Liu et al. 44. In other words, removing node 3 from the network would result in nodes 1, 2, and 4 losing their interactions with the core nodes, while removing node 6 has a minimal impact on information transmission in the network. This finding demonstrates the higher importance of nodes that serve as bridges between different clusters compared to those within individual clusters. Based on this, we considered the structural hole position of nodes to enhance the algorithm's ability to identify nodes within community networks. This allows us to identify those bridge nodes that may not have high HV values but play a crucial role in facilitating information flow across different parts of the network. The network constraint coefficient measures the level of constraint imposed on nodes forming a structural hole (SH) in a network 45, and it can be calculated as $c(i) = \sum_{j \in \Gamma(i)} \left(p_{ij} + \sum_{w \in \Gamma(i) \cap \Gamma(j)} p_{iw}\, p_{wj}\right)^2$, where Γ(i) represents the set of neighboring nodes of node i, and w ∈ Γ(i) ∩ Γ(j) indicates the nodes that are common neighbors of both node i and node j. Here $p_{ij} = \frac{z_{ij}}{\sum_{k} z_{ik}}$ represents the proportion of energy invested by node i to maintain its relationship with node j, where z_ij = 1 (i ≠ j) if there is a link between nodes i and j, and z_ij = 0 otherwise. Based on the above discussion, the gravity centrality based on the H-index (HVGC) measure proposed in this paper combines HV with the structural hole constraint coefficient c(i) in a gravity-type formula. A smaller value of c(i) indicates that the node occupies more structural holes and has a stronger ability to bridge different parts of the network. Finally, the metrics, including HVGC, H-index, HV, DC, and KS, were computed for each node in the toy network and compared with the node's spreading capability (SC). The results are presented in Table 1, revealing that HVGC achieves a nearly identical ranking to SC, indicating excellent performance. The algorithmic description of HVGC is provided in Algorithm 1. In addition, Fig.
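The ingredients defined above (the H-index H(i), the neighbour-degree sum HV(i), and Burt's constraint c(i)) can be computed as in the sketch below; networkx provides the structural hole constraint directly via nx.constraint. The final HVGC combination of these quantities is not reproduced here, since its exact form is given by the paper's Algorithm 1; the karate club graph is only an example.

```python
# Ingredients of HVGC: H-index, HV, and Burt's structural hole constraint (via networkx).
import networkx as nx

def h_index(G, i):
    degs = sorted((G.degree(j) for j in G.neighbors(i)), reverse=True)
    h = 0
    for rank, d in enumerate(degs, start=1):
        if d >= rank:       # at least `rank` neighbours have degree >= rank
            h = rank
    return h

def hv(G, i):
    h = h_index(G, i)
    # sum of degrees over neighbours whose degree is at least H(i)
    return sum(G.degree(j) for j in G.neighbors(i) if G.degree(j) >= h)

G = nx.karate_club_graph()
constraint = nx.constraint(G)        # Burt's constraint c(i) for every node
features = {i: (h_index(G, i), hv(G, i), constraint[i]) for i in G.nodes()}
```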
2 depicts a network with a clear community structure, where the four nodes with the strongest propagation capabilities are marked in green. The propagation capabilities of these nodes were determined through 1000 independent experiments using the SIR model. Data description This paper evaluates the efficacy of HVGC by analyzing ten real networks from six distinct domains, including a transportation network (USAir 46), an infrastructure network (Power 47), a communication network (Email 48), a technology network (Router 49), two collaborative networks (Jazz 50 and NS 51), and four social networks (Facebook 52, PB 53, WV 54, and Sex 55). Table 3 presents the fundamental topological properties of these networks. N represents the number of nodes in the network, and M represents the number of links. The average degree of nodes is denoted as k, and the average distance between pairs of nodes is denoted as d. The clustering coefficient 47 of the network is denoted by C, while r represents the assortativity coefficient 56. The degree heterogeneity 57 of the network is denoted by H. Additionally, β_c represents the epidemic threshold 58 of the SIR model 40 used to simulate the diffusion process. Empirical results Based on the aforementioned real networks, we conducted simulations and compared the influence rankings of various algorithms using the SIR model. In order to ensure the credibility of our findings and a standard ranking of node influence, we conducted 1000 independent experiments for each given network and transmission probability β, with each node being chosen as the seed node once during each run. The processor and runtime environment used for the calculations are an i7-12700H and Python 3. The development platform used for this paper is Anaconda 3, and the code was executed in Jupyter Notebook. Kendall's tau (τ) was utilized to evaluate the accuracy of the algorithms, with a higher value indicating a greater correlation between the observed sequences and better algorithm performance. Table 4 provides a comparison of the accuracy of the proposed algorithm (HVGC) and ten benchmark algorithms, which include degree centrality 17 (DC), the k-shell decomposition method 24 (KS), the extended version of gravity centrality 32 (G+), the extended version of improved gravity centrality 33 (IGC+), the local gravity model 34 (LGM), generalized gravity centrality 35 (GGC), the improved gravitational centrality based on k-shell values 36 (KSGC), the DK-based gravity model 37 (DKGM), the multi-characteristics gravity model 38 (MCGM), and the entropy-based gravity model 39 (SEGM). Additionally, Fig. 3 displays the accuracy of the different algorithms for varying values of β, within the range of 0.5β_c to 1.5β_c. As shown in Table 4, the methods that utilise the gravitational formula (G+, IGC+, LGM, GGC, KSGC, DKGM, MCGM, SEGM, and HVGC) exhibit significant advantages over the classical methods (DC and KS). These advantages are especially prominent in the Power, Router, NS, and Sex networks. Furthermore, it is noteworthy that among all gravity-based algorithms tested on the ten networks, HVGC exhibited the best overall performance. Its Kendall coefficient ranked first in six out of ten networks, with a remarkable 70% proportion in the top two ranks. Specifically, HVGC ranked first in the Jazz, Email, Facebook, PB, WV, and USAir networks and second in the Router network. Additionally, as shown in Fig.
3, when β = β_c, although HVGC did not perform best in the NS, Power, and Sex networks, as β increases its performance becomes very close to, or even surpasses, the previously best-performing algorithm. Taking into account HVGC's superior performance in community-type networks discussed earlier, it demonstrates a stronger overall performance, affirming the robustness of our findings. Furthermore, Fig. 4 displays the optimal truncation radius of HVGC in the ten real networks, revealing that the majority of networks concentrate their optimal truncation radius at R = 1. This indicates that HVGC achieves remarkably high accuracy by considering only the influence of the first-order neighbouring nodes of a node, while most other gravity model methods require considering information from second- or third-order neighbouring nodes. In other words, HVGC achieves a high level of accuracy while incurring lower time costs. Discussion This paper introduces a novel method called HVGC for identifying influential nodes in a network. While the original gravity model considered both neighbourhood and path information, this new method enhances the existing gravity centrality approaches by taking into account the overall influence of a node's neighbourhood, considering the structural hole position of nodes, and incorporating the differences in interactions between nodes. This method addresses the limitations of existing gravity centrality methods and strengthens the ability to identify important nodes in networks with clear community structures. Therefore, this approach demonstrates a high level of comprehensive performance. We conducted an analysis of the SIR dynamic propagation process in 10 real networks to compare the performance of HVGC with previous state-of-the-art methods. The results, as shown in Table 4, indicate the strong competitiveness of our method.
In certain scenarios, it is necessary to identify the top-k influential nodes for controlling information propagation. Therefore, in addition to evaluating the different ranking methods for individual nodes, we also assessed their performance in identifying the top-k influential spreaders. In other words, we compared the ranked lists of node influence obtained from the ranking methods with the ranked lists of node influence obtained from the SIR simulation, both sorted in descending order. Subsequently, we analysed the similarity between the two lists by considering the top-k nodes. Figure 5 illustrates the results of the Jaccard coefficient for identifying the top-k influential spreaders, with k ranging from 5 to 100 in steps of 5. The X-axis shows the number of top influential spreaders, and the Y-axis shows the Jaccard similarity coefficients. We can observe that, except for the Sex, Power, PB, and Router networks, HVGC exhibits the best and most stable overall performance in identifying the top-k influential spreaders in the other networks. Specifically, across all networks, as the number of selected top-k nodes increases, HVGC consistently maintains a high or steadily increasing Jaccard coefficient, while other methods display varying degrees of fluctuation. Furthermore, we provide detailed plots for the top-25 nodes, revealing that HVGC consistently ranks among the top three in identifying the top-25 influential spreaders and, in some cases, even secures the first position, except for the Sex network. Therefore, we can conclude that HVGC not only accurately ranks the influence of all nodes in the network but also successfully identifies the top-k nodes with the highest impact. After applying the monotonicity index 25, we assessed the resolution of the various algorithms. Table 5 illustrates that HVGC and MCGM demonstrate similar performance in terms of monotonicity. However, HVGC excels in the majority of networks by solely considering the first-order neighbour information of nodes, whereas MCGM, even with the inclusion of second-order neighbour information, does not necessarily outperform HVGC and incurs higher computational complexity. Furthermore, HVGC demonstrates significantly better performance in identifying important nodes in networks with community structure compared to MCGM. Therefore, overall, HVGC surpasses other gravity model algorithms. Based on the results presented in Table 5, HVGC consistently ranks either at the top or very close to the best-performing algorithm in terms of monotonicity.
Based on the above discussion, it is evident that centrality based on the gravitational model is more accurate than classical centrality. However, many of these models tend to identify false core nodes in the network and do not take into account the influence of neighbouring nodes. In our proposed HVGC (H-index-based gravity centrality), we address this limitation by comprehensively considering the overall impact of a node's neighbours and its position within the network's structural holes. This approach effectively overcomes the drawbacks of gravity-based methods and demonstrates superior performance compared to other algorithms. The optimal truncation radius R* of HVGC at β = β_c is presented in Fig. 4. Each pentagram in the figure corresponds to a network, with a total of ten networks represented, and the blue line corresponds to R* = 1. Specifically, for HVGC, the value of R* is 1 in the Email, Facebook, Jazz, PB, USAir, NS and WV networks, 2 in the Router and Sex networks, and 4 in the Power network. The majority of the networks have an optimal truncation radius of 1, with the next most common radius being 2. This outcome aligns with the characteristics of neighbourhood-based (domain) centrality, which typically considers first-order and second-order neighbour nodes; HVGC is a significant advancement over the H-index, itself a neighbourhood-based centrality, and is consistent with this characteristic. However, this does not impede its competitiveness relative to other algorithms, as it manages to achieve both simplicity and accuracy. Despite the excellent performance exhibited by HVGC, it shares a common limitation with other gravity-based methods, namely the need to determine the optimal truncation radius R. However, this disadvantage is mitigated by the fact that most real networks exhibit small-world characteristics 47,59, and the optimal truncation radius is approximately linearly related to the average distance 34. Furthermore, since HVGC is derived from a neighbourhood-based centrality method, even considering only the first-order neighbour nodes in the ten real networks studied can lead to very high performance and accurate results. In conclusion, while HVGC demonstrates better overall performance compared to other gravity-based methods and introduces improvements to existing gravity models, there are still areas that require further refinement. For example, the current approach does not consider the influence of weight factors associated with the different indicators; instead, it operates directly on the indicator values of the nodes. The weights of HV and the structural hole constraint coefficient c(i) in the computation may affect the accuracy of the algorithm. In networks with clear community structures, a higher weight for c(i) may lead to better performance, while in other types of networks a lower weight may yield better results. Therefore, future work may involve incorporating adjustable parameters to balance the weights of different indicators, which is a direction for further exploration. Additionally, these algorithms have not been evaluated in weighted networks, where the impact of the path from node i to node j may differ from that of the path from node j to node i, and the link heterogeneity 60 in a weighted network may result in varying node impact. Lastly, future research may involve incorporating adjustable parameters to modify the interplay of gravitational forces among nodes and balance the weights of different metrics in order to improve the performance of the algorithm.
Figure 1. A toy network. The red node is ranked first in terms of H-index, while green and yellow represent second and third, respectively.
Figure 3. Kendall's tau was utilized to measure the accuracy of the algorithms at various β values. The different colour symbols represent different methods, and the red symbol represents the HVGC algorithm.
Figure 5. The Jaccard similarity coefficients on the top-k influential spreaders.
Figure 5. (continued)
We compared HVGC with other gravity model-based methods in identifying the top-5 nodes in this network, and the results are presented in Table 2.
Table 2. Comparison of the rankings of the top-5 nodes identified by different methods and the rankings based on the SIR propagation ability in the sample network.
Table 3. The topological features of ten real networks.
Table 4. The algorithms' accuracies for β = β_c, measured by Kendall's tau (τ). The top-ranked value in each row of the table is marked in italics, the second in bold.
Table 5. Monotonicity of the various algorithms, with the best algorithm for each network highlighted in bold.
5,639.2
2023-09-29T00:00:00.000
[ "Computer Science" ]
Local renormalization group functions from quantum renormalization group and holographic bulk locality The bulk locality in the constructive holographic renormalization group requires miraculous cancellations among various local renormalization group functions. The cancellation comes not only from the properties of the spectrum but from more detailed aspects of operator product expansions in relation to the conformal anomaly. It is remarkable that the one-loop computation of the universal local renormalization group functions in the weakly coupled limit of the $\mathcal{N}=4$ super Yang-Mills theory fulfils the necessary condition for the cancellation in the strongly coupled limit in its $SL(2,\mathbf{Z})$ duality invariant form. From the consistency between the quantum renormalization group and the holographic renormalization group, we determine some unexplored local renormalization group functions (e.g. the diffusive term in the beta function for the gauge coupling constant) in the strongly coupled limit of the planar $\mathcal{N}=4$ super Yang-Mills theory. The idea that the renormalization group scale may be regarded as the holographic direction has been appreciated since the birth of the AdS/CFT correspondence. In recent years, there have been various attempts to sharpen the idea so that we may derive the AdS/CFT from the first principles of the local (or quantum) renormalization group in the dual field theories [1][2][3][4][5][6]. It is, however, still mysterious what kind of cut-off and renormalization prescription should be used or how the bulk locality emerges in the strongly coupled regime. See also [7] for a generalized viewpoint on the cut-off and the renormalization inspired by the bulk diffeomorphism. Certainly, we do not expect that all quantum field theories should reveal the bulk locality in their holographic descriptions, if any. There are various necessary conditions such as the factorization of correlation functions ("the large N limit"), the sparseness of the low scaling dimension operators, the conditions on the central charges (or more generally those of OPEs) and so on (see e.g. [8] for summaries). These conditions are typically deduced from what we empirically observe in direct computations on the bulk gravity side, so it is instructive to see how (or even whether) these conditions show up from the first-principles approach of the constructive holography alluded to above.
From the viewpoint of the constructive holography based on the local (quantum) renormalization group, it seems rather surprising that the higher derivative terms disappear in the bulk action. First of all, since there is no natural suppression parameter in the derivative expansions of the local renormalization group, there is no apparent reason why we can truncate the expansions at a certain derivative order. If we could keep only the leading term in each expansion, we might say that such truncations would be attributed to (still mysterious) properties of strongly coupled gauge theories. However, the actual situation is much more involved [9]. The constructive approach demands that certain higher derivative terms should necessarily appear as a consequence of the universal aspect of the local renormalization group, and what is actually happening is not the naive truncation of the derivative expansion in each term but miraculous cancellations among various seemingly unrelated terms in the local renormalization group evolution. We should emphasize that these local renormalization group functions are determined (in principle) by the OPE coefficients, so the cancellation reflects not only the properties of the spectrum of the CFT but also the dynamical information contained in the OPE structure. In our recent paper [9], we have discussed how such a cancellation mechanism is tightly related to the holographic condition on the central charges a = c, and how a non-zero difference in central charges a − c leads to a unique structure of the higher derivative corrections. In a certain sense the AdS/CFT computation had predicted these results more than 15 years ago (because AdS/CFT works!), yet we managed to obtain new predictions for the structure of the local renormalization group functions (such as the metric beta functions) as a byproduct of the consistency check.¹ In this paper, we investigate the cancellation mechanism further in the massless "universal" sector of the AdS/CFT given by the metric (dual to the energy-momentum tensor) and the axion-dilaton (dual to the gauge coupling constant and theta angle). As is clear, if we add more (massless) fields, the cancellation becomes more involved because the disparity in number grows with more fields between the available local renormalization group functions that can be adjusted² and the possible higher derivative terms that must be cancelled. Nevertheless, we do find that the cancellation does happen if we choose the non-universal local renormalization group functions properly. Since the local renormalization group functions in the non-massless sector are generically not universally computable in a power-counting renormalization scheme, we may declare that our results essentially complete the consistency check of the bulk locality of the constructive holography in the N = 4 super Yang-Mills theory. While we do present some plausible arguments for the nature of the non-universal terms in our paper, we leave it for a future study to establish the precise Wilsonian cut-off scheme in which the non-universal local renormalization group functions can be unambiguously computed in agreement with our predictions.
Let us investigate the local renormalization group flow of the planar $\mathcal{N}=4$ super Yang-Mills theory in d = 4 dimensions. We are particularly interested in the renormalization of the single trace energy-momentum tensor $T_{\mu\nu}$ that couples to the metric $g_{\mu\nu}(x)$ and the single trace "holomorphic Lagrangian" $\mathcal{L} = \mathrm{Tr}\left(F_{\mu\nu}F^{\mu\nu} + i F_{\mu\nu}\tilde{F}^{\mu\nu}\right) + \cdots$ that couples to the (space-time dependent) "holomorphic coupling constant" $\tau(x) = \frac{\theta(x)}{2\pi} + \frac{4\pi i}{g^{2}(x)}$ (and its multi trace composites). The main idea of the quantum renormalization group [2][3][4] is that we do not introduce independent sources for the multi-trace operators out of $T_{\mu\nu}$ or $\mathcal{L}$, but rather try to encode them in the bulk fields as a "second quantization". (Footnote 1: Strictly speaking, it is unfortunate that our paper did not come out as "a new prediction", because the purely AdS/CFT computation [10][11] (without reference to the quantum renormalization group construction) appeared a couple of weeks earlier than ours while we were preparing the final draft. Footnote 2: In principle, these functions are determined once the theory is fixed, so they are not even adjustable.) According to the prescription given in [2][3][4], the Schwinger functional for the sources $g_{\mu\nu}(x)$ and $\tau(x)$ is expressed in terms of a bulk d + 1 = 5 dimensional path integral, with the bulk action written in the Hamiltonian formulation. The radial (dimensionless) coordinate r is conventionally chosen as the renormalization group direction, and to put the radial coordinate and the spatial coordinates $x^{\mu}$ on the same footing, it is convenient to introduce the scaling coordinate z through $r = \log(z/z_0)$, so that z has the dimension of length. Then, by introducing the lapse field $\tilde{n} = n\, z_0/z$ and the shift field $\tilde{n}_{\mu} = n_{\mu}\, z_0/z$, we may reconstruct the 1 + 4 dimensional metric in the standard ADM form. In addition, we identify the 1 + 4 dimensional dilaton-axion field as $\tau(x^{\mu}, z) = C + i e^{-\phi}$. We will later fix the arbitrary dimensionful parameter $z_0$ from the requirement of manifest 1 + 4 dimensional diffeomorphism invariance, but such a fixing is of course just conventional. Furthermore, in order to simplify the following equations, we choose the gauge in which the shift vector $n_{\mu}$ vanishes by using the scheme independence of the local renormalization group. We can always recover the shift dependence by imposing 1 + 4 dimensional diffeomorphism invariance (as long as the physics of the local renormalization group is scheme independent). The Hamiltonian density $\mathcal{H}$ is determined by the local renormalization group flow of the $\mathcal{N}=4$ super Yang-Mills theory; here $\Lambda[g_{\mu\nu}, \tau]$ is the local renormalization of the "cosmological constant" in the dual field theory. In order to determine the bulk action, we have to compute these local renormalization group functions in the (strongly coupled) $\mathcal{N}=4$ super Yang-Mills theory. Since we should work in the Wilsonian local renormalization scheme, they contain higher derivative terms.
As we have already emphasized, in general, it means that there is no reason to expect that the resulting bulk action is local.Indeed, we are going to see that each terms do contain higher derivative terms even in the strongly coupled limit, but they should eventually cancel out after integrating out the canonical momenta to obtain the Lagrangian formulation of the bulk action.This cancellation determines several renormalization group functions that are not computable within the power-counting renormalization scheme, but also requires the very specific structure of the local renormalization group functions that are computable within the power-counting renormalization scheme. The power-counting renormalization scheme is tightly related to the removal of UV divergence in quantum field theories, and it computes the universal part of the more generic (Wilsonian) renormalization scheme.For instance, we may use the dimensional regularization, and subtract the power diverging part.The usual renormalizability argument shows that the higher dimensional operator does not mix with the lower dimensional operator in this scheme. In relation, we are quite implicit about the scheme choice.Even if we started with the scheme in which the higher derivative terms do not exsit for some reasons, we may always change the scheme by g µν → gµν (g µν , τ, R µνρσ , ∂ µ τ • • • ) so that the renomalization group functions apparently contain higher derivative terms.From the 1 + 4 dimensional viewpoint, this is nothing but the field redefinition of the bulk fields. 4Our locality condition "there is no higher derivative terms in the bulk" should be understood up to this field redefinition.In other words, when we require the cancellation among higher derivative terms, we are implicitly choosing such a good scheme. Let us look at each local renormalization group functions.The computation of the local renormalization of the cosmological constant Λ[g µν , τ ] in the N = 4 super Yang-Mills theory with U(N) gauge group was recently completed by [12] (as a generalization of the contribution from the gauge fields obtained in [13]) in one-loop perturbation theory.The universal part of their results show which is identified as the conformal anomaly with a = c = N 2 4(4π) 2 .It is noteworthy that this one-loop result is SL(2, Z) invariant under τ → aτ +b cτ +d . 5This fact, however, is not so deep as it sounds.The trick is that at one-loop level, the conformal anomaly is just N 2 times that of the U(1) theory which is obviously SL(2, Z) invariant.We will work in the renormalization group scheme in which SL(2, Z) invariance of the N = 4 super Yang-Mills theory is manifest.It will be inherited to the SL(2, Z) invariance of the dilaton-axion sector of the bulk type IIB gravity. We should also realize that the SL(2, Z) invariant "metric" ds 2 = 1 (Imτ ) 2 dτ dτ that appears in front of the (generalized) Riegert four derivative kinetic operator 6 4) is related to the Zamolodchikov metric of the N = 4 super Yang-Mills theory.This is because the two-point functions of dimension four operators in d = 4 dimensions is logarithmically divergent in flat space-time as k 4 log k 2 and this is the origin of the four derivative terms with quadratic on τ in the conformal anomaly. 
Obviously, this computation provides the universal part determined from the powercounting renormalization scheme.In the Wilsonian local renormalization group scheme, we would add further lower dimensional terms 5 For reader's convenience, we quote: Imτ → 6 This operator, also known as Paneitz operator in some mathematics literature, first appeared in [14] in the context of superconformal gravity, and given our context, it may be most appropriate to name it after this paper, but we followed the convention. where Λ 1 [τ ], Λ dτ dτ [τ ], and Λ R [τ ] are some SL(2, Z) invariant modular functions.The power-counting renormalization scheme such as dimensional regularization does not say anything about these lower dimensional terms.Note, however, that if we use the supersymmetric scheme, the potential term must not be renormalized so that it is natural to assume Λ 1 [τ ] = 0, and since this is the only origin of the potential term in the bulk action when the single trace beta function vanishes for constant τ , it is indeed agreement with the fact that the axion-dilaton does not have any potential term in the type IIB supergravity.Furthermore, within the same scheme, it was argued that the induced Newton constant Λ R [τ ] vanishes for the N = 4 super Yang-Mills theory at the one-loop order [21] [22]. It is worth noting that the condition that the Newton constant is not renormalized at one-loop is different from the other more familiar holographic prediction a = c.If we demand the both conditions for general massless field theories with weakly coupled Lagrangian descriptions, the matter contents must be multiples of 6 real scalars, 4 Majorana fermions and 1 real vector.Indeed, the AdS/CFT with weakly coupled Lagrangian descriptions always have this matter contents (including N = 4 super Yang-Mills theory and its orbifold cousins). As a working hypothesis, we assume that the one-loop computation (4) continues to be valid in the strongly coupled regime (see also [12][23]).We do not know of the formal proof of the non-renormalization theorem in particular for dilaton-axion interaction terms (e.g.∂ µ τ ∂ µ τ ∂ ν τ ∂ ν τ terms), but we will eventually find that there is no room to modify it in order to reproduce what we know in the AdS/CFT in the strongly coupled limit.A possible argument for the non-renormalization theorem is that in the N = 4 superconformal field theories, the OPE of the energy-momentum tensor superconformal multiplet is completely fixed by one number given by the central charge a = c.The Wess-Zumino consistency condition of the conformal anomaly [15] (which amounts to the consistency of the Hamiltonian constraint in terms of quantum renormalization group) demands that this number does not depend on τ and all the quadratic terms in (4) are not renormalized.Since the OPE depends on the central charge alone, it is thus plausible that not only the quadratic term but also the entire conformal anomaly coming from the energy-momentum tensor superconformal multiplet are fixed by this number.Presumably, this conformal anomaly is uniquely fixed from the N = 4 superconformal invariance with additional SU(1, 1) symmetry up to overall coefficient determined by the central charge a = c, but this has not been demonstrated in the literature. 7We will come back to this point at the end of this paper, but let us proceed under the working hypothesis for now. 
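For concreteness, the standard conventions behind the modular structure invoked above can be collected in one place. The display below is a reference sketch assembled from textbook definitions rather than a quotation of the paper's own numbered equations, and the last line is only a schematic way to organize the three lower-dimensional Wilsonian structures named in the text (how the modular factors are distributed between the coefficient functions and the kinetic terms is a scheme choice we assume here).

```latex
% Reference conventions (standard definitions, not the paper's numbered equations)
\[
  \tau(x) \;=\; \frac{\theta(x)}{2\pi} + \frac{4\pi i}{g^{2}(x)},
  \qquad
  \tau \;\to\; \frac{a\tau + b}{c\tau + d},
  \quad
  \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL(2,\mathbf{Z}),
\]
\[
  ds^{2}_{\text{moduli}} \;=\; \frac{d\tau\, d\bar{\tau}}{(\mathrm{Im}\,\tau)^{2}}
  \qquad
  \text{(the $SL(2,\mathbf{Z})$-invariant metric multiplying the four-derivative kinetic operator).}
\]
% Schematic organization of the extra Wilsonian (non-universal) terms named above:
\[
  \Lambda[g_{\mu\nu},\tau] \;\supset\;
  \Lambda_{1}[\tau]
  \;+\; \Lambda_{d\tau d\bar{\tau}}[\tau]\,
        \frac{\partial_{\mu}\tau\, \partial^{\mu}\bar{\tau}}{(\mathrm{Im}\,\tau)^{2}}
  \;+\; \Lambda_{R}[\tau]\, R ,
\]
with all three coefficient functions $SL(2,\mathbf{Z})$-invariant modular functions.
```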
We move on to the double trace beta functions.In the strongly coupled regime, it is natural to assume that the derivative expansions of the local renormalization group functions (up to a scheme choice).For the double trace beta functions, we have where η(τ ), λ(τ ) and κ(τ ) are SL(2, Z) invariant modular functions. First thing to notice here is that after integrating out the canonical momentum π µν and P τ , these double trace beta functions appear as the metric in the kinetic term of , where G µν;ρσ is the inverse of β µν;ρσ (i.e.β µν;ρσ G ρσ;ηκ = δ η µ δ κ ν ).Thus if we would like to obtain the second order action, there should be no higher derivative corrections in (6).It also determines the modular , Z) . There are further symmetry constraints.As discussed in [4] (see also [28] [29]), the consistency of the Hamiltonian constraint demands λ = 1/3.Of course, this is the value that we would get in the Einstein-Hilbert action.It is related to the fact that we did not have bare R 2 term in the conformal anomaly from the holographic renormalization group viewpoint [31]. We now turn to the single trace beta functions.One important consistency requirement in the holographic interpretation of the quantum renormalization group is that the single trace beta functions must be derived from the "local boundary counter-term" as a gradient flow with respect to the double trace beta functions as their metric [3][4]: Only when this gradient condition is satisfied, we may get rid of the first order z derivative 7 Rather, this field theoretic one-loop computation has been the only way to fix the SU (1, 1) invariant N = 4 superconformal gravity so far known [12].Note, however, SU (1, 1) invariance is stronger than SL(2, Z) invariance we could demand. terms in the final bulk action as a boundary term after integrating out the canonical momenta. 8e will expand the boundary counter-term as where Λ B (τ ), κ B (τ ) and λ B (τ ) are some modular functions invariant under SL(2, Z). From the double trace beta functions discussed just above and the gradient condition (7), the single trace beta functions must be Let us fix some parameters from what we know in N = 4 super Yang-Mills theory. Since the gauge coupling constant of the N = 4 super Yang-Mills theory does not run when τ = const, Λ B (τ ) must be independent of τ .It is presumably also true that the beta function does not depend on the background curvature R, so κ B (τ ) = κ B , which is independent of τ , is plausible, but we do not assume it for now, and rather we will derive it later from the requirement of the cancellation among beta functions. Since we have introduced all the ingredients at this point, now we would like to demonstrate how the cancellation of various four derivative terms happen in the final bulk action. We first integrate out the canonical momenta to get the Lagrangian formulation of the bulk action Substituting ( 6) and ( 9), we have the contribution to the four derivative terms from G µν;ρσ 4 β µν β ρσ + β −1 τ τ β τ β τ as well as from Λ univ [g µν , τ ] contained in Λ[g µν , τ ].It is interesting to observe that given the gradient nature, the potential term from G µν;ρσ β µν β ρσ gives the interaction that satisfies the detailed balance condition advocated in [24][25]. 
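Before imposing the cancellations, it may help to record schematically the gradient-flow and detailed-balance structure referred to above. The following display is a generic sketch of that structure (the indices I, J run over the set of sources, here the metric and the axion-dilaton), not a transcription of the paper's numbered equations.

```latex
% Generic sketch of the gradient-flow / detailed-balance structure
\[
  \beta^{I} \;=\; \mathcal{G}^{IJ}\,
      \frac{\delta S_{B}[g_{\mu\nu},\tau,\bar{\tau}]}{\delta \phi^{J}},
  \qquad
  \mathcal{G}^{IJ}\,\mathcal{G}_{JK} \;=\; \delta^{I}_{K},
\]
where $S_{B}$ is the local boundary counter-term and $\mathcal{G}_{IJ}$ is the metric
supplied by the double trace beta functions. After integrating out the canonical
momenta, the potential part of the bulk Lagrangian takes the detailed-balance form
\[
  V \;\propto\; \mathcal{G}_{IJ}\,\beta^{I}\beta^{J}
  \;=\; \mathcal{G}^{IJ}\,
        \frac{\delta S_{B}}{\delta \phi^{I}}\,
        \frac{\delta S_{B}}{\delta \phi^{J}} ,
\]
which is the combination that must cancel against the conformal-anomaly contribution
for the four-derivative terms to drop out of the final second-order bulk action.
```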
9In order to get rid of all the four derivative terms in the bulk action, the terms that satisfy the detailed balanced condition must cancel against the conformal anomaly.Or in other words, the conformal anomaly must satisfy the detailed balance condition, which is by far the very non-trivial necessary condition for the AdS/CFT with the second order bulk action to work. In addition to the condition Λ B (τ ) = Λ B we have already discussed from the exactly marginal coupling constant of the N = 4 super Yang-Mills theory, the cancellation demands various relations among the beta functions such as which essentially remove all the possible non-trivial modular functions in relation to the instanton contributions to the local renormalization group functions beyond one-loop. 10here are further normalization condition we would employ.The first is the overall normalization between the conformal anomaly and the four derivative terms giving κκ 2 B = 4c, where we recall c = N 2 4(4π) 2 is the central charge.Relatedly, in order to precisely connect the local renormalization group and the conformal anomaly, we had to provide the canonical normalization of the local renormalization group transformation.The computation that led to [12] should have corresponded to κΛ B = 1 so that the engineering Weyl weight of the d = 4 dimensional metric is two (i.e.β µν = 2g µν + O(R)) from (9). In this way, we have determined the single trace as well as double trace beta functions of the N = 4 super Yang-Mills theory in the strongly coupled limit as In particular, there is no dimensionless arbitrary parameter left in the local renormalization group functions. We have a couple of comments about the so-determined local renormalizaiton group functions.Firstly, we see that the local renormalization group flow of the metric and the coupling constant has a diffusive nature as observed in [10].The purely curvature dependent part of the metric beta function given by the Schouten tensor in holography has been given in [32][10] (up to the scheme choice elucidated in [11]), and our results are in agreement.However, its microscopic origin is not so obvious.It appears that the leading derivative corrections come from the zeroth order in coupling constant, but we do not know how this is obtained in the perturbative computation, and it may be very particular to the strongly coupled limit.Secondly, while the local renormalization group equation may suggest that the single trace beta functions show the gradient property in some situations (see e.g.[15]), the metric used for the gradient condition there does not have to coincide with the double trace beta functions.Again this seems very peculiar to the models with holographic duals. Once we determine these local renormalization group beta functions, we see that all the higher derivative terms are automatically cancelled, and the final bulk action is second order in derivative expansions.We still do not know the precise details of Λ dτ dτ [τ ] and Λ R [τ ] in (5) in the strongly coupled regime, but one-loop results quoted there suggests that they vanish.Independently if we would like to reproduce what we know in the AdS/CFT correspondence, they should vanish in the strongly coupled limit.In either non-trivial cancellation among various local renormalization group functions.By using the same construction above, however, it is conceivable that we may construct the entire low energy effective bulk action (with some further ansatz on the single trace as well as double trace beta functions). 
For instance, it is trivial to see that the gauge kinetic term $\mathrm{Tr}\, F_{\mu\nu}F^{\mu\nu}$ of SO(6) for the vector fields induced by the conformal anomaly (see e.g. [12]) is precisely what we would get in the effective type IIB supergravity, which reproduces the correct current central charges from the holographic dictionary. One thing to be noted, however, is that the gauging of SO(6) is anomalous. Thus, the Schwinger functional for the SO(6) background gauge fields is not gauge invariant. It is well known (see e.g. [34]) that such non-gauge-invariance is supplied by the 1 + 4 dimensional Chern-Simons functional, and this is precisely what we need in the type IIB supergravity, in agreement with the holography. Since the Chern-Simons functional is topological, there is no problem in identifying the extra direction appearing in the anomaly action with our holographic direction. We also note that the essential features of the local renormalization group analysis are not affected by the anomaly (see e.g. [19][20] for detailed analysis). Let us come back to the question of the one-loop exactness of the conformal anomaly contribution to the local renormalization of the cosmological constant. As observed in [13], the bosonic contribution to the conformal anomaly in the $\mathcal{N}=4$ super Yang-Mills theory shows non-zero two-loop corrections to the interaction terms of the form (15), and it was conjectured that the coefficients ζ(τ) and ξ(τ) would be certain modular invariant functions. However, as we have already seen in (11) and (12), we have fixed all the other local renormalization group functions to cancel the other one-loop exact terms of Λ[g_{μν}, τ]. If terms like (15) were generated by the renormalization group flow and took a different functional form in the strongly coupled regime, the AdS/CFT that we know today would not work. Therefore, we believe that the entire conformal anomaly is not renormalized beyond the one-loop order, and for this two-loop computation we conjecture that the fermionic contribution precisely cancels against it.¹² It would be very interesting to see whether this is indeed the case by explicit two-loop computations and/or by the constraint from $\mathcal{N}=4$ superconformal invariance.
5,567.8
2015-02-25T00:00:00.000
[ "Physics" ]
Method of estimating the effective zone induced by rapid impact compaction This paper proposes a method for estimating the effective zone, including effective depth and effective range of compaction degree, from rapid impact compaction (RIC) on sand layer whose fines content is less than 10%. The proposed method utilizes a string of microelectromechanical system accelerometers to monitor the acceleration at various depths and propagation distances during compaction. To interpret and extract useful information from monitored data, peak-over-threshold (POT) processing and normal distribution function were used to analyze the recorded acceleration. The mean and standard deviation of the threshold peak acceleration were used to evaluate the effective depth and the effective range of compaction degree during RIC compaction. Moreover, spatial contours were used to determine the correlation of the threshold peak acceleration against depth and propagation distance from the RIC impact point. These contours help indicating the distribution of the effect zone after compaction. Lastly, a proposed method is suggested for frequent use in trial tests to quickly determine RIC’s required depth and impact spacing. www.nature.com/scientificreports/ formed 8 , particularly at sites with an erratic soil condition and a high groundwater table. The said phenomenon also reveals the uniqueness of effective depths caused by different compaction parameters and different sites. Therefore, effective depth must be confirmed through site investigation for each compaction project. When the site investigation indicates that the trial compaction cannot achieve the required depth or compaction degree, the impact energy, including the foot weight, fall height and blow count must be adjusted until the design requirements are met before a formal construction can be launched. However, this approach leads to a waste of construction time and cost. To accurately evaluate the effective depth during each compaction project, this study conducted RIC trial tests and proposed a monitoring method synchronized with a compaction activity. The proposed method can be performed at any impact point in the test area using a string of microelectromechanical system (MEMS) accelerometers (or called a shape accel array string, SAA string) to simultaneously record the soil particle acceleration induced by impact energy at various depths and various propagation distances under field conditions. Furthermore, normal distribution function was used in the data process and reduction. Specifically, the mean (µ) and the standard deviation (σ) of the threshold peaks from recorded accelerations along various depths were used as indicators to evaluate the effective depth. In addition, being taken out from the recorded accelerations along various propagation distances, they evaluate the effective range of compaction degree for obtaining a reasonable reference for the arrangement of spacing between impact points. Background In general, the RIC method involves using a hydraulic hammer installed on a compactor with a fall height of 1.2-1.5 m and frequency of 30-60 blows/min to impact cohesiveless soils, such as sand, silty sand and gravel. The impact energy is transmitted to the compaction foot (common diameter is approximately 1-2.4 m and weight is 5-16 t) in contact with the ground through the hydraulic hammer and then transferred to the cohesiveless soil. 
Consequently, the soil particles are rearranged and densified, which leads to soil compaction and an increase in the soil density. Over the past few decades, considerable progress has been made in the RIC technology in terms of the operation experience and functions of the compaction equipment, particularly the positioning system, digital parameter control unit and data acquisition system. These pieces of equipment can be implemented into the RIC compactor to record the working position of the compaction foot and impact performance. Accordingly, RIC technology is regarded as a mature in-situ compaction method. Further exploration is required on the evaluation of the effective depth after RIC compaction. Some studies 9-12 have suggested empirical equations based on the energy per blow, enabling effective depth prediction for most in-situ compaction methods. However, Watts and Cooper 6 noted that the empirical equations suggested by the previous studies are unsuitable for effective depth prediction in the RIC method, which is influenced by the cumulative energy contribution, because these equations did not consider the accumulation of energy and cannot reasonably reflect the variation of soil properties. Oshima and Takada 13 suggested the use of the total impact energy or total momentum compaction theory for determining the effective depth of the RIC method. Berry et al. 14 also proposed an empirical rule of using three to four times the diameter of the compaction foot to estimate the effective depth for this method. These approaches have increased the practicality of the effective depth estimation in RIC method. Published case studies involving RIC compaction in-situ are summarized in Table 1, when the compacted soil is heterogeneous and the particle size distribution spreads in a wide range (e.g., granular, miscellaneous, waste or ash fills), the achievable effective depth may vary largely. Table 1 also indicates that the effective depth is reduced when the soil particles are decreasing. Furthermore, the effective depth after RIC compacted in silty sand to gravel sand ranges between 2 and 9 m. Although the variation of soil properties (e.g., the particle size distribution, fines content and soil saturation), groundwater table, and applied impact energy are not indicated in Table 1, the large variation of the effective depth of sand with different particle sizes after RIC compaction highlights the risk of effective depth control due to the uncertainty of site conditions and 19 , and Vukadin 20 used trial tests to evaluate the compaction performance of different soils. Their adopted methods involved both invasive and non-invasive site investigation techniques, including the standard penetration test (SPT), the cone penetration test (CPT), the dynamic CPT, the plate bearing test and continuous surface wave (CSW) measurement, to determine the effective depth and compaction degree. These previous studies have contributed to the advancement of RIC, but unfortunately, the past data were obtained at each particular test site and they were not guaranteed to be completely applicable to different sites even if they have similar soil properties. In addition to the compaction performance, Thilakasiri et al. 21 , Parvizi and Merrifield 22 , and Parvizi 23 performed centrifuge model tests to explore the behaviors of excess pore water pressure, wave propagation, and the stress-strain relationship for sand and organic soil at different relative densities under impact load. 
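As a rough illustration of the two kinds of effective-depth estimates cited above (the energy-per-blow empirical equations and the foot-diameter rule of Berry et al.), the short sketch below compares them for RIC-scale parameters. This is only an order-of-magnitude illustration: the empirical coefficient n is site-dependent (values of roughly 0.3-0.8 are often quoted for deep dynamic compaction) and both the chosen n and the assumed foot diameter are assumptions, not values from this study, which in any case notes that such single-blow rules tend to be unsuitable for RIC because they ignore energy accumulation.

```python
import math


def effective_depth_energy(weight_t: float, drop_m: float, n: float = 0.5) -> float:
    """Menard-type estimate D = n * sqrt(W * H), with W in tonnes and H in metres.

    The coefficient n is empirical and site-dependent (assumed 0.5 here); the
    studies cited above note that rules of this type were developed for dynamic
    compaction and do not account for the cumulative energy that drives RIC.
    """
    return n * math.sqrt(weight_t * drop_m)


def effective_depth_foot(diameter_m: float, factor: float = 3.5) -> float:
    """Berry et al.-style rule: roughly three to four times the foot diameter."""
    return factor * diameter_m


if __name__ == "__main__":
    # RIC-scale inputs similar to those used later in this study (14 t, 1.2 m drop);
    # the 1.5 m foot diameter is an assumption for illustration.
    print(f"Energy rule : {effective_depth_energy(14.0, 1.2):.1f} m per blow")
    print(f"Foot rule   : {effective_depth_foot(1.5):.1f} m")
```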
Ghanbari and Hamidi 24 and Allouzi et al. 25 presented the finite element simulation for effective depth prediction and proposed a new evaluation method for determining the optimal blow counts required to meet the ground improvement requirements. Overall, these studies focused on verifying the rationality of the RIC compaction parameters but did not conduct advanced quality control and quality assurance method of the effective zone on site (including effective depth and suitable spacing between impact points). Measurement of the particle acceleration caused by soil distortion due to RIC When compaction energy is effectively applied to the ground, the compression wave (P wave) and shear wave (S wave) interact with soil producing compressions, vibrations, and distortions while increasing the soil density; thus, the larger the soil distortion and soil particle vibration, the greater the densification action on the subsoil 26 . However, the distortion properties of general soil are often not as sensitive as the vibration response of soil particles. Therefore, this study evaluates the effective zone that used SAA string to monitor the particle acceleration induced by the soil shear distortion at various depths and propagation distances during the RIC. To ensure the suitability of the proposed method, this study assumed that the interaction of impact points could only improve the compaction degree within the critical effective depth but could not increase the effective depth. This assumption was supported by the fact that the groundwater is nearly incompressible. When the RIC foot impact ground, it is necessary to synchronize the SAA string to monitor the soil particle accelerations at any impact point selected from the trial test area. As shown in Fig. 1, two SAA strings were installed vertically (2.5 m from the impact point) and horizontally (1.5 m from the impact point) in the ground. The vertical SAA string comprised 20 MEMS accelerometers, which were installed into the ground at an interval of 0.5 m over a depth of 0.25-9.75 m. This string was mainly used to record the particle acceleration response along the soil depth. The horizontal SAA string with 40 MEMS accelerometers was embedded in a 0.75 m excavated trench. These accelerometers were placed at 0.5 m intervals to record the particle acceleration response at a distance of 1.75-21.25 m along the horizontal propagation distance. www.nature.com/scientificreports/ The SAA string is a rope-like rigid sensor array segment with flexible and watertight joints constructed from a hydraulic hose. Each rigid sensor has a length of 0.5 m and contains three MEMS accelerometers that measure the accelerations in the X-direction, Y-direction, and Z-direction at specific locations. In this study, the acceleration response in the Z direction mainly generated by the P wave during RIC compaction, which might be disturbed by the incompressible characteristics of groundwater and soil particles. According to Richart et al. 27 , the wave energy transmitted from the source of footing compaction, the intensity of S wave is stronger than the P wave. Therefore, it can be assumed that the soil shear distortion is mainly caused by the S wave without estimating the acceleration in the Z direction caused by the P wave. As reported by Bennett et al. 28 , real-time monitoring of infrastructure and geotechnical facilities can be exercised by SAA string. Figure 2 displays photos of the on-site SAA string installed or embedded in the ground. 
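To make the monitoring geometry concrete, the short sketch below reproduces the two sensor layouts described above and the per-blow time resolution implied by the 40 Hz sampling rate quoted in the following paragraph. The array positions follow directly from the stated intervals; the blow frequency used for the timing estimate is an assumed mid-range value for RIC hammers (30-60 blows/min) and is only for illustration.

```python
import numpy as np

# Vertical SAA string: 20 MEMS sensors, 0.5 m apart, depths 0.25-9.75 m
vertical_depths_m = np.arange(0.25, 9.75 + 0.25, 0.5)

# Horizontal SAA string: 40 MEMS sensors, 0.5 m apart, offsets 1.75-21.25 m
horizontal_offsets_m = np.arange(1.75, 21.25 + 0.25, 0.5)

SAMPLING_RATE_HZ = 40.0          # as set in the field programme (next paragraph)
DT_S = 1.0 / SAMPLING_RATE_HZ    # 0.025 s between samples

# Assumed blow frequency for illustration only
BLOWS_PER_MIN = 40.0
samples_per_blow = (60.0 / BLOWS_PER_MIN) / DT_S

print(len(vertical_depths_m), len(horizontal_offsets_m))   # 20, 40
print(f"~{samples_per_blow:.0f} samples recorded between successive blows")
```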
In this study, the SAA string was specifically designed for repeated installation or embedding to measure the ground vibration (or soil particle acceleration) during the RIC process. Therefore, the SAA string which had a diameter of 23 mm was protected by a polyvinyl chloride (PVC) pipe with an inner diameter of 24 mm and outer diameter of 32 mm. Although the SAA string and PVC pipe are almost laminated to ensure the accuracy of the recorded data, the zone between the PVC pipe and the drilled hole (or the excavated trench) was backfilled and compacted with in-situ sand to ensure that the acceleration response of the RIC compaction could be effectively captured. Moreover, to obtain the low-strain soil acceleration response effectively, the sensing range of all the MEMS accelerometers in the SAA string was set as 0-1000 Hz, whereas the sampling rate was set as 40 Hz (∆t = 0.025 s). Full-scale field test A hydraulic reclaimed land in southern Taiwan was used for field testing. The dynamic compaction (DC) was selected as the main ground improvement method in the design stage. To demonstrate the suitability of the proposed monitoring method for the effective depth and spacing between compaction points evaluation, this study used the low-impact-energy RIC for conducting a trial test adjacent to the DC working zone. The subsoil conditions of the trial test, compaction sequence and grid pattern, and monitored acceleration records are described as follows. Subsoil conditions. The test field is a typical hydraulic fill reclaimed land, whose main filling material is seabed sediments. According to the soil classification system used by the American Association of State Highway and Transportation Officials (AASHTO), the filled soil can be categorized as fine sand. The SPT and the conventional soil tests were conducted on samples collected from four in situ boreholes, the results of which are shown in Fig. 3. It is indicated that the geology of the site within 10 m below the ground surface was almost homogeneous, the mean standard penetration test blow count (SPT-N) was 5-15, the mean unit weight (γ t ) was approximately 16-20 kN/m 3 , the mean specific gravity (G s ) was approximately 2.75, the mean void ratio (e) www.nature.com/scientificreports/ was approximately 0.65-0.9, and the mean water content (ω) was approximately 20-30%. The properties of soil particle size distribution and fines content were crucial factors that affected the soil compaction effectiveness. The particle size distribution in the compaction zone located 5 and 10 m below the ground surface revealed that the average particle size of the soil (D 50 ) was approximately 0.16 mm and that the fines content (< 0.075 mm) was less than 10% (Fig. 4). Thus, the soil in the test field was sand with a minimal fines content that was suitable for soil compaction. The groundwater table of the reclaimed land at the time of trial test was close to 3 m below the ground surface; however, the groundwater table may vary by ± 1 m due to the tidal fluctuation near the port area. Based on the groundwater table and subsoil conditions of the reclaimed land, the design specification of DC compaction indicated that the reclaimed land had high liquefaction potential (liquefaction potential index, PL > 15). Therefore, to evaluate the effectiveness of RIC against liquefaction after compaction, a two-hole CPT test (CPT 1 and CPT 2) was conducted in the trial test area before compaction. 
The investigation results showed that the average cone tip resistance (q_c), average sleeve friction (f_s), and average friction ratio (f_s/q_c) were approximately 4 MPa, 0.5-0.6 MPa, and 0.5, respectively (Fig. 5). Furthermore, the liquefaction resistance of the test area after RIC was obtained using the CPT-based evaluation procedure recommended by Robertson 29 for determining the soil liquefaction potential. According to Robertson 29, the average q_c within a depth of 10 m should be > 8 MPa. RIC sequence and grid pattern. The area of the trial test was 14 m × 14 m, and the impact points of RIC were arranged in a grid pattern (Fig. 6). To focus on the suitability of the proposed monitoring method, this study did not examine in detail the effect of various grid spacings and improvement rates but considered the field conditions and previous studies 30,31 to arrange the layout of the RIC sequence and grid pattern. Compaction was conducted in three passes with a grid spacing of 3.5 m. The detailed compaction sequence and grid layout are displayed in Fig. 6. Information related to the RIC equipment and impact energy is presented in Table 2. Each impact point was produced by impacting a 14 t round hammer 50 times on the ground over a fall height of 1.2 m. The total input energy for each impact point was 840 t-m. Meanwhile, when the depth of compaction foot penetration reached 0.9 m, the impact pit was backfilled with the in-situ soil to the initial elevation and compacted to the designed blow count (50 blows) before moving to the next impact point. To verify the effectiveness of RIC, two CPT tests were conducted at the center of the trial test area 7 days after the three-pass compaction procedure was completed. The CPT results (CPT 3 and CPT 4) are displayed in Fig. 5. The post-treatment q_c down to a depth of 6-7 m was close to the design requirement of 8 MPa. This study determined that under the adopted grid pattern, the design effective depth of 10 m could not be achieved; the achieved effective depth was approximately 6-7 m. This result is consistent with the effective depths reported by Adam and Paulmich 3, Watts and Cooper 6, and Berry et al. 14 for sand layers (Table 1). Monitored acceleration records. As shown in Figs. 1 and 6 and mentioned above, the monitored points can be selected from the trial test area. Two SAA strings were installed vertically (with 20 accelerometers) and embedded horizontally (with 40 accelerometers) around impact point 1-8. During the RIC process, all the accelerometers simultaneously detected the acceleration generated by the soil shear distortion at each monitoring position. Consider the vertically installed SAA string and the specific depths of 1.25 m, 4.75 m, and 8.25 m as an example. Figure 7 displays the acceleration records due to soil shear distortion in the X-direction (Fig. 7a-c) and Y-direction (Fig. 7d-f) for each blow. The transient waveforms and peak amplitudes of each blow can be clearly observed in the acceleration records of the selected window in Fig. 8 (taken from the 6th to 15th blows). However, the real-time acceleration recorded during the RIC process was a random-vibration signal, and useful information was difficult to obtain without data processing. Nevertheless, the recorded accelerations at depths of 1.25 m, 4.75 m, and 8.25 m indicate that a larger soil distortion results in a larger peak amplitude. Moreover, the soil distortion increases down to a critical depth and then decreases with further increasing depth.
Thus, the characteristics of the peak acceleration distribution can be captured at each recording position to evaluate the effective depth of compaction. The distributions of the peak acceleration captured from horizontally embedded SAA string may decrease as the propagation distances increase. It can be used to evaluate the suitable spacing between impact points. Table 2. RIC equipment and compaction information. Data processing and interpretation When monitoring work is performed at a soil compaction site, the recorded data may contain white noise to varying degrees due to the uneven soil shear distortion (interference of large gravel) or changes in the working direction of the compaction equipment. Because the subsoil at the trial test was almost uniformly distributed, the effect of white noise was non-significant in this study. In addition, in-situ compaction may cause the interruption of monitoring signals due to expected or unexpected conditions (e.g., backfilling of impact pits and moving of machinery), and these unusable signals should also be eliminated. Hence, this study adopted the peak-over-threshold (POT) method to threshold peak accelerations from the vertical SAA string recorded (the response along different depths) and the horizontal SAA string recorded (the response along different propagation distances). Moreover, normal distribution function was used to determine the dispersion degree of the captured peak acceleration at each recorded position. The details are described as follows. Distribution of the threshold peak acceleration. In time-domain signal processing and feature extraction, peaks, amplitudes, and means are often used as essential indicators for signal interpretation. Therefore, this study took monitored data from Fig. 8 as an example and adopted the POT method to threshold the peak acceleration. Figure 9 illustrates the POT processing result, after eliminating the unusable records and ignoring whether the vibration direction was positive or negative, acceleration is used as the absolute value of the signal to calculate the mean. Moreover, the maximum amplitude of each blow that exceeded the mean was threshold as the peak acceleration. Similar procedure as above, Fig. 10 shows that both the X-direction and Y-direction threshold peak acceleration distribution extracted from the vertically installed SAA string recorded. It can be found that the peak acceleration distribution in X-direction (Fig. 10a-e) and Y-direction (Fig. 10f-j) have the same trend. The peak acceleration is enlarged to a certain depth and decreased with increasing depth from the impact point. Therefore, the peak acceleration of the recorded data can be an important indicator for evaluating the effective depth of the soil compaction. Based on the findings in Fig. 10, the normal distribution function is used herein to represent the distribution of the threshold peaks from the field (Fig. 11). Figure 12 shows the distribution of the mean (µ) and standard deviation (σ) calculated by threshold peak acceleration. It indicates that a larger µ and σ represent a more significant compaction effect (larger soil distortion). Moreover, this finding could be extracted within 1-10 blows and the trend was almost the same for 11-50 blows. As depicted in Fig. 12a, c, the µ at the depth of 0-6 m in the X-direction and Y-direction were 0.42-0.84 g and 0.30-0.70 g within 1-10 blows, respectively. 
In addition, the µ at a depth of 6-10 m in the X-direction and Y-direction were 0.06-0.32 g and 0.05-0.18 g within 1-10 blows, respectively. Obviously, the depth of 6 m was approximately the boundary, reflecting that large vibration intensity has stronger compaction effect. Furthermore, the σ of threshold peak accelerations were maintained within the 0.10-0.36 range above the compaction effect reached its limit (6 m) within 1-10 blows (Fig. 12b in X-direction and Fig. 12d in Y-direction). In comparison, there was a clear drop to 0.01-0.12 when the compaction effect limit (6 m) of the ground was reached. Therefore, the decrease in σ is an indication of the recorded acceleration from the inconspicuous soil shear distortion and weak soil densification effect (Appendix I display the detailed values calculated from the measurements recorded www.nature.com/scientificreports/ by the vertically SAA string). As mentioned above, the effective depth was determined to be approximately 6 m when the energy per blow was 16.8 t-m for impact over 50 blows. This result was consistent with the CPT results obtained after RIC trial test. Moreover, the result is in an agreement with the conclusion of Berry et al. 14 , who stated that the effective depth of compaction can generally be determined in the first few blow counts of the RIC (< 10 blows). With an increase in the blow count, achieving an increase in the effective depth becomes more difficult subjected to incompressible groundwater unless the compaction parameters related to the energy per blow are readjusted (e.g., compaction foot weight and fall height). The same data processing was also applied to the recorded data from the horizontally embedded SAA string. Figure 13 displays that both the X-direction and Y-direction threshold peak acceleration distribution extracted from the horizontally SAA string recorded. Same as the trend obtained from vertical SAA string recorded, whatever 1-10 blows, 11-20 blows, 21-30 blows, 31-40 blows, and 41-50 blows, the peak acceleration is maintained to a limit distance and drops down with increasing distance from the compaction point. Therefore, the µ and www.nature.com/scientificreports/ σ calculated from threshold peak acceleration within 1 to 10 blows can be an important indicator for adjusting the suitable spacing between impact points. As depicted in Fig. 14a, c, the µ at the distance of 1.5-5.25 m from the impact point in the X-direction and Y-direction were 1.33-1.61 g and 0.25-0.54 g, respectively. The µ at 5.25-21.5 m from the impact point in the X-direction and Y-direction were 0.39-0.60 g and 0.07-0.27 g, respectively. Unfortunately, because the Y-direction recorded (Fig. 14c) is in parallel to the horizontally embedded SAA string, its response is not obvious in the adjusted of the suitable spacing. Therefore, we need the σ indicator to supply the reliability of impact spacing adjustment. As shown in Fig. 14b (X-direction), d (Y-direction), the σ of threshold peak accelerations were maintained above the 0.10 when the compaction degree reached the distance of 5.25 m within 1-10 blows. In comparison, there was decrease below the value of 0.1 when the compaction degree limit of 5.25 m was reached (Appendix II display the detailed values calculated from the measurements recorded by the horizontally SAA string). Thus, the suitable spacing between impact points was determined to be approximately 5.25 m under energy per blow of 16.8 t-m for 50 blows. 
The cumulative impact energy only affected the densification in the range of 5.25 m; thus, the spacing between impact points was conservatively designed within this range. Spatial analysis of the threshold peak acceleration. This study compared the distribution of µ and σ threshold by peak acceleration with the CPT results obtained after RIC. It is demonstrated that used the acceleration caused by the soil shear distortion to evaluate the suitability of the effective zone (combined with effective depth and effective range of compaction degree). The kriged spatial contours were used to analyze the correlations of the threshold peak acceleration with the depth and propagation distance from the RIC impact point for obtaining a clear understanding of the spatial distribution characteristics of the monitored data. Although its reliability depends on the number of measurements recorded in the compaction field, this study adopted the limited data recorded from two vertical and horizontal SAA strings to quickly explain whether the compaction parameters can be used for achieving the required design. By overlapping the threshold peak acceleration from vertical and horizontally SAA strings recorded, Fig. 15 displays the impact points 1-8 and the features of the threshold peak acceleration spatial contours in the X-direction (Fig. 15a-e) and Y-direction (Fig. 15f-j) from the recorded data. The cumulative impact energy for 10, 20, 30, 40, and 50 blows was 168, 336, 504, 672, and 840 t-m, respectively. It illustrates that the spatial contours in the X-direction and Y-direction will change with the cumulative impact energy, but the effective zone of compaction were consistent with the accumulation of impact energy. This finding verified that the compaction effectiveness (effective depth and compaction degree) can be determined within the initial few blow counts (< 10 blows). The key features of the spatial contour map for 50 blows and a cumulative impact energy of 840 t-m (Fig. 15e, j) are explained in the following as an example. A soil plug was formed in the impact process. This plug moved downward to penetrate into deeper soil. The wedging effect caused by the compaction foot induced the development of shear bands that extended from the edges of the impact pit to the ground surface. Under the impact pit, the main compaction zone was formed due to body wave propagation. This zone extended approximately 5 m laterally from the impact point and approximately 6 m www.nature.com/scientificreports/ vertically from the ground surface. Beyond the main compaction zone, a moderately affected or unaffected zone was formed. This zone had almost no influence on the effective depth and compaction degree. The conclusion is consistent with the idealized spatial profile for RIC at a single impact point reported by Becker 30 and Jia 32 . Additionally, it can be found from the spatial contour that the distributions of peak acceleration were different in different directions (X-direction and Y-direction) due to soil anisotropy. However, when the vibration recorded direction is in parallel to the accelerometers (SAA string), the acceleration response is not obvious, and its value may be underestimated, resulting in the invisible development of the shear band. Therefore, if we ignore the influence of soil anisotropy, we infer that the recorded data of X-direction will be more representative than Y-direction. 
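A minimal sketch of the peak-over-threshold processing and the μ/σ reduction described above is given below. The array shapes, the per-blow segmentation, and the 0.10 g cut-off used to flag the drop in σ are assumptions chosen to mirror the behaviour reported in this study (σ remaining above roughly 0.1 inside the effective zone and falling below it outside); they are illustrative, not part of the original procedure's definitions.

```python
import numpy as np


def threshold_peaks(acc: np.ndarray, blow_idx: list[tuple[int, int]]) -> np.ndarray:
    """Peak-over-threshold (POT) reduction of one accelerometer channel.

    acc      : recorded acceleration time series for one sensor (in g)
    blow_idx : (start, stop) sample indices of each usable blow window
    Returns the per-blow peak of |acc| for blows whose peak exceeds the
    channel mean of |acc| (the threshold used in this study).
    """
    abs_acc = np.abs(acc)                  # sign of the vibration is ignored
    threshold = abs_acc.mean()             # mean of the absolute signal
    peaks = np.array([abs_acc[s:e].max() for s, e in blow_idx])
    return peaks[peaks > threshold]        # keep only peaks over the threshold


def mu_sigma_profile(channels: dict[float, np.ndarray],
                     blow_idx: list[tuple[int, int]]) -> dict[float, tuple[float, float]]:
    """Normal-distribution summary (mu, sigma) of the threshold peaks,
    keyed by sensor position (depth or horizontal offset in metres)."""
    profile = {}
    for position, acc in channels.items():
        peaks = threshold_peaks(acc, blow_idx)
        profile[position] = (float(peaks.mean()), float(peaks.std(ddof=1)))
    return profile


def pick_effective_limit(profile: dict[float, tuple[float, float]],
                         sigma_cut: float = 0.10) -> float:
    """First position at which sigma drops below the cut-off (assumed 0.10 g),
    mirroring the drop in dispersion used above to locate the effective zone."""
    for position in sorted(profile):
        if profile[position][1] < sigma_cut:
            return position
    return max(profile)  # no drop observed within the instrumented range
```

Applied to the vertical string this reduction yields a μ/σ-versus-depth profile of the kind shown in Fig. 12; applied to the horizontal string, the same reduction gives the spacing indicator discussed above.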
Conclusions RIC is a cost-effective and time-efficient method that is often used for ground improvement to satisfy the bearing capacity, reducing the excessive settlement and resisting liquefaction on the reclaimed land at medium (< 10 m) compaction depth. This study used SAA strings (MEMS accelerators) to monitor the acceleration caused by RIC. Moreover, normal distribution function was used in the data processing to calculate the µ and σ of the threshold peak accelerations from the recorded data, which can be used in compaction operations for evaluating the effective depth and adjusting the suitable spacing between impact points. In contrast to general site investigation tests (e.g., the SPT or CPT), the proposed method cannot provide acceptance values (the SPT-N or q c value) for compaction performance evaluation. However, the trial test results of RIC indicated that the proposed method is suggested for frequent use in individual trial tests of compaction projects to quickly determine the required depth and suitable spacing between compaction points. The following conclusions can be drawn from the findings of this study. 1. The proposed method can be synchronized with the RIC procedure for the real-time monitoring of the acceleration response for various depths and various propagation distances. The statistical analysis of the dispersion degree (µ and σ) of the threshold peak accelerations from recorded data helps effectively evaluating the effective depth and adjusting suitable spacing between impact points of compaction projects. 2. Normal distribution function was employed to obtain the µ and σ of the threshold peak acceleration. These two parameters were used to evaluate the dispersion degree of recorded acceleration for various depths and various propagation distances. A higher vibration intensity (e.g., µ) or dispersion degree (e.g., σ) indicated a stronger shear distortion and soil densification effect, which can be used as indicators of the effective depth and the spacing between impact points. 3. The results of the RIC trial tests revealed that the effective depth and the spacing between impact points could be determined within the initial blow counts (within 10 blows or less). Moreover, no obvious changes were observed as the cumulative energy increased. The achieved performance of a compaction project depends on the site conditions. It can be determined in the early stage of the compaction plan. 4. Spatial contours can be used to establish the correlations of the threshold peak acceleration with the various depths and various propagation distances from the RIC impact point. Moreover, when the number of measurements is adequate, they can indicate the distributions of the effective depth and the spacing between impact points of the vertical and horizontal profiles of the compactor, which can improve the effectiveness of quality control for the compaction project. 5. In addition to SAA string, other useful sensing equipment can be used to measure soil particle acceleration. The amplitude of peak acceleration may change marginally with changes in the accelerometer type and sampling rate. In that, the changes in the accelerometer type and sampling rate did not affect the distribution characteristics of the recorded acceleration at different monitoring locations because the dispersion degree of the calculated peak acceleration represents the compaction performance that can be achieved by the site conditions and setting of compaction parameters.
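As a complement to conclusion 4, the contouring step can be sketched as follows. SciPy's griddata is used here only as a simple stand-in for the kriging interpolation reported in the paper, and the sensor coordinates and values are placeholders that follow the instrument layout described above.

```python
import numpy as np
from scipy.interpolate import griddata

# Placeholder measurement locations: (horizontal offset, depth) of each sensor,
# paired with its threshold-peak mean from the POT reduction (values in g).
points = np.array([[2.5, d] for d in np.arange(0.25, 10.0, 0.5)] +    # vertical string
                  [[x, 0.75] for x in np.arange(1.75, 21.5, 0.5)])    # horizontal string
values = np.random.default_rng(0).uniform(0.05, 1.5, len(points))     # stand-in data

# Regular grid covering the monitored cross-section
xi = np.linspace(0.0, 21.0, 211)
zi = np.linspace(0.0, 10.0, 101)
X, Z = np.meshgrid(xi, zi)

# Linear interpolation as a simple substitute for ordinary kriging;
# cells outside the convex hull of the sensors are returned as NaN.
contour = griddata(points, values, (X, Z), method="linear")

# 'contour' can then be drawn (e.g. with matplotlib's contourf) to delineate the
# effective zone, analogous to the kriged maps in Fig. 15.
```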
6,848.8
2021-09-15T00:00:00.000
[ "Geology" ]
Large anomalous Hall effect in the chiral-lattice antiferromagnet CoNb3S6 An ordinary Hall effect in a conductor arises due to the Lorentz force acting on the charge carriers. In ferromagnets, an additional contribution to the Hall effect, the anomalous Hall effect (AHE), appears proportional to the magnetization. While the AHE is not seen in a collinear antiferromagnet, with zero net magnetization, recently it has been shown that an intrinsic AHE can be non-zero in non-collinear antiferromagnets as well as in topological materials hosting Weyl nodes near the Fermi energy. Here we report a large anomalous Hall effect with Hall conductivity of 27 Ω−1 cm−1 in a chiral-lattice antiferromagnet, CoNb3S6 consisting of a small intrinsic ferromagnetic component (≈0.0013 μB per Co) along c-axis. This small moment alone cannot explain the observed size of the AHE. We attribute the AHE to either formation of a complex magnetic texture or the combined effect of the small intrinsic moment on the electronic band structure. T he anomalous Hall effect (AHE) is a signature of emergent electromagnetic fields in solids that affect the motion of the electrons, and hence it has been a recent topic of intense study in the context of the topological materials 1,2 . The Hall effect in general is an intrinsic property of a conductor due to the Lorentz force experienced by the charge carriers. In systems with spontaneously broken time-reversal symmetry, an additional contribution, independent of the Lorentz force, is observed, the AHE 1 . AHE was first observed in ferromagnets where its origin lies in the interplay between spin-orbit coupling (SOC) and magnetization. Reformulation of the SOC-induced intrinsic mechanism of AHE in ferromagnets to the Berry phase curvature in momentum space has been fruitful in predicting and describing the AHE in several other systems, including Weyl (semi)metals 3 , non-collinear antiferromagnets 4 , non-coplanar magnets [5][6][7] , and other nontrivial spin textures [8][9][10][11] . Recent observations of the large anomalous Hall effect in metals with possible Weyl [12][13][14] and massive Dirac fermions 15,16 and/or complex spin textures, e.g., skyrmion bubbles 17 , have generated interest in such materials, especially for the role of correlated topological states in the emergent electronic properties. Here we present a large AHE in CoNb 3 S 6 that cannot be understood in terms of conventional mechanisms of the AHE. CoNb 3 S 6 is a member of a large class of intercalated transition metal dichalcogenides, where a 3d-transition metal sandwiches between layers of a 5d-transition metal dichalcogenide that are coupled by a weak Van der Waals force. CoNb 3 S 6 represents the 1/3 fractional intercalation of Co atoms between the layers of NbS 2 18 . It crystallizes in the hexagonal chiral space group P6 3 22 19,20 . It orders antiferromagnetically below~26 K and is known to have a rather complex susceptibility for the magnetic field applied parallel to the c-axis 21,22 . At 4 K, neutron diffraction measurement has suggested a collinear antiferromagnetic state in which the spins are in the ab-plane pointing along a certain crystallographic axis 20 . By itself, however, such a spin structure cannot give rise to the anomalous Hall effect. The key finding of this work is a large c-axis anomalous Hall effect in the antiferromagnetic CoNb 3 S 6 . 
Although CoNb 3 S 6 shows a small, intrinsic ferromagnetic component (≈0.0013 μ B per Co) along the c-axis, this small moment alone cannot explain the observed size of the AHE. Based on its chiral crystal structure and the calculated band structure, we attribute the AHE in CoNb 3 S 6 either to the formation of a complex magnetic texture or to an influence of the small intrinsic ferromagnetic moment on the underlying electronic band structure. Results Crystal structure and physical properties of CoNb 3 S 6 . We verified the room temperature crystal structure of CoNb 3 S 6 in the chiral space group P6 3 22 by means of single crystal X-ray diffraction (see Supplementary Note 1 and Supplementary Fig. 1). A sketch of the crystal structure adopted by CoNb 3 S 6 is shown in Fig. 1a, where the Co atoms occupy the octahedral position between the triangular prismatic layers of the parent compound 2H-NbS 2 . Figure 1b shows the magnetic susceptibility of CoNb 3 S 6 as a function of temperature measured with a magnetic field of 0.1 T applied along a-and c-axis (χ a and χ c , respectively). Over the entire temperature range, χ c exceeds χ a , consistent with behavior reported in the literature 21 . χ a shows a kink at T N = 27.5 K corresponding to the antiferromagnetic transition and decreases on further cooling down to 1.8 K. The field cooled (FC) and zero field cooled (ZFC) measurements show identical behavior. Along the c-axis, the susceptibility below T N shows irreversibility between FC and ZFC measurements. The ZFC χ c shows a behavior similar to that of χ a . However, the FC χ c increases on cooling below T N , becomes maximum at 25 K and decreases on further cooling. This increase in FC susceptibility along the c-axis near the transition temperature has been reported in previous studies 20,21 and implies the presence of a small ferromagnetic component along the c-axis. With increasing magnetic field, the magnitude of this jump decreases, and at 3 T, FC susceptibility shows a continuous decrease below T N , as shown in the inset of Fig. 1b, suggesting that the moment along the c-axis is suppressed by a sufficiently large magnetic field. At higher temperatures, the susceptibility shows Curie-Weiss behavior. A Curie-Weiss fit to the data between 200 and 300 K (Supplementary Note 2 and Supplementary Fig. 2) yields the powder averaged effective moment of 3.0 μ B per Co. This value is smaller than the spin-only moment expected for Co 2+ (3.87 μ B ) established by neutron diffraction and optical studies 20,23 . Resistivity shows metallic behavior over the measured temperature range of 1.8-300 K, with a sudden drop below 27.5 K (T N ), presumably due to the reduction of electron scattering with the onset of the spin ordering (Fig. 1c). CoNb 3 S 6 has a very small magnetoresistance of 0.2% at 2 K in the field of 9 T applied along c-axis, as shown in the inset of Fig. 1c. Magnetization. M vs H measured between 29 and 22 K, with the magnetic field along c-axis, is shown in Fig. 2a, and the corresponding first-order derivatives are shown in Fig. 2b. Despite the nearly linear M-H curves, a hysteresis with very small remanent magnetization appears below T N , which becomes maximum at 25 K (0.0013 μ B per formula unit) and decreases at lower temperatures, as shown in Fig. 2b, c. The hysteresis becomes more clear in dM/dH vs H plots shown in Fig. 2b. At 29 K, which is above T N , dM/dH is featureless. At 27 K, when decreasing the field from 6 T to −6 T, a peak in dM/dH appears at −1 T. 
When increasing the field from −6 T to 6 T, no peak is seen at −1 T; instead, the peak is observed at 1 T. These peaks appear at the coercive field of the hysteresis in M vs H. With decreasing temperature, the coercive field increases and reaches 3 T at 24 K. No such peaks are seen below 24 K. These data suggest that there is an intrinsic, hard ferromagnetic component along the c-axis in the ordered state that is flipped by a temperature-dependent critical field. Switching of this ferromagnetic component gives rise to the hysteresis in M vs H. Below 24 K, a magnetic field of 6 T cannot switch the ferromagnetic component; as a result, no hysteresis is observed. Application of a larger magnetic field is expected to reveal the hysteresis even below 24 K. In fact, this hysteresis is evident in the Hall effect measurements presented below. In Fig. 2c we show the ferromagnetic component as a function of the magnetic field, obtained by subtracting a linear antiferromagnetic background from the M vs H curves presented in Fig. 2a. On the other hand, when the magnetic field is applied along the a-axis, M vs H shows linear behavior at all temperatures measured, as depicted in Fig. 2d. Hall effect. The Hall resistivity (ρ_yx) vs H measured between 28 and 23 K, with current along the a-axis and magnetic field along the c-axis, is depicted in Fig. 3a. The Hall effect at each temperature was measured by cooling the sample in a magnetic field of 9 T and then changing the magnetic field from 9 to −9 T and subsequently from −9 to 9 T. At 28 K, which is just above T_N, the Hall resistivity is linear, as expected for a normal conductor. The sign of the Hall resistivity indicates that holes are the majority charge carriers in CoNb3S6. Within a single-band model, the estimated carrier concentration (n = 1/|eR_0|) is 2.49 × 10^21 cm^−3, where e is the charge of an electron and R_0 is the ordinary Hall coefficient. When the temperature is lowered below T_N, an additional contribution appears in the Hall resistivity. At 27 K, this anomalous Hall signal is small. It increases both in magnitude and in field with decreasing temperature, becoming maximum at 23 K. Below 23 K, no switching behavior of the Hall resistivity is observed when measured with a maximum field of 9 T. However, the Hall resistivity increases down to 20 K (Supplementary Note 3 and Supplementary Fig. 3). At 15 K, a very small AHE is observed, and at 2 K there is no anomalous Hall effect. These observations reveal two important points. (1) Hysteresis in the AHE is observed when the magnetic field switches the small FM component (Fig. 2a-c). (2) The AHE is observed due to the stabilization of the FM component, which occurs between T_N and 23 K. At 20 K, a magnetic field of 9 T cannot switch the FM component, and hence the AHE does not switch sign but gives the same large value. At 2 K, a magnetic field of 9 T cannot induce the FM component at all; as a result, there is no AHE observed for |H| ≤ 9 T. Conventionally, the Hall resistivity in a ferromagnet is given by the relation ρ_yx = R_0B + ρ^A_yx, where R_0B is the ordinary Hall resistivity and ρ^A_yx is the anomalous Hall resistivity 1. In Fig. 3a, we see that the Hall resistivity measured in the paramagnetic state, i.e., at 28 K, and in the antiferromagnetic state, except near the switching fields, has the same slope. This indicates that R_0B is constant between 28 and 23 K, and allows us to subtract the ordinary Hall resistivity to obtain the anomalous part.
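As a side note on the quantities quoted above, the single-band carrier-density estimate and the subtraction of the ordinary Hall background are both simple numerical operations. A minimal sketch in Python is given below; the field and resistivity arrays are placeholders standing in for the measured 28 K and lower-temperature sweeps (which are not reproduced here), so only the procedure, not the data, should be taken from it.

```python
import numpy as np

# Ordinary Hall coefficient quoted in the text (single-band estimate), SI units.
R0 = 2.4e-9           # m^3 C^-1
e = 1.602176634e-19   # C

# n = 1/|e R0|; converting m^-3 to cm^-3 reproduces the ~2.5e21 cm^-3 quoted above.
n_cm3 = 1.0 / (e * R0) * 1e-6
print(f"single-band carrier density ~ {n_cm3:.2e} cm^-3")

# Subtraction of the ordinary (linear) Hall background measured just above T_N.
# Placeholder sweeps: rho_28K is purely linear, rho_25K carries an extra anomalous part.
H = np.linspace(-9.0, 9.0, 181)                       # magnetic field in tesla
rho_28K = R0 * H                                      # ordinary Hall resistivity (Ohm m)
rho_25K = R0 * H - 2.0e-8 * np.tanh(1.5 * (H - 3.0))  # invented low-T sweep
rho_A = rho_25K - rho_28K                             # anomalous part, rho^A_yx
```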
Figure 3b shows ρ^A_yx vs H, where ρ^A_yx is obtained by subtracting the data measured at 28 K from ρ_yx measured below T_N. As there is a finite ferromagnetic component along the c-axis, we first examine the effect of this small ferromagnetic component on the anomalous Hall effect. For a ferromagnet, ρ^A_yx = R_S μ_0 M, where R_S is the anomalous Hall coefficient, μ_0 is the permeability, and M is the magnetization. R_S scales the M-H curve to the anomalous part of the Hall resistivity. To check for this scaling in CoNb3S6, we obtained the magnetization of the ferromagnetic component by subtracting a straight line from the M vs H curve measured at a representative temperature of 25 K, which shows the maximum remanent magnetization. The obtained magnetization multiplied by −1, (−ΔM) vs H, is plotted in Fig. 3c. The anomalous Hall resistivity measured at 25 K is also plotted in the same figure. These plots show that the anomalous Hall effect does scale with the magnetization. R_S is negative and has an exceedingly large value compared to R_0. R_S estimated from this scaling at 25 K is −1.41 × 10^−4 m^3 C^−1, which is nearly five orders of magnitude larger than R_0 = 2.4 × 10^−9 m^3 C^−1. In a canonical ferromagnet like Fe 24, or in ferromagnets with a closely related or even the same crystal structure as CoNb3S6, such as Fe1/4TaS2 25 and Fe1/3TaS2 26, R_S/R_0 ≈ 100, more than two orders of magnitude smaller than in CoNb3S6. This significant boost of the coefficient of the AHE in CoNb3S6 is unlikely to be accounted for by the effect of the weak c-axis ferromagnetic component alone and therefore must have an additional contribution. The contribution of the ferromagnetic component to the anomalous Hall resistivity, i.e., R_S, can be estimated from the high-magnetic-field part of the anomalous Hall resistivity vs magnetization plot. As ρ^A_yx = R_S μ_0 M, the slope of ρ^A_yx vs M gives μ_0 R_S. In Fig. 3d, we see that ρ^A_yx vs M resembles ρ^A_yx vs H in that ρ^A_yx changes sign at the coercive fields. After the abrupt sign change, ρ^A_yx shows a linear variation with M. This linear behavior of ρ^A_yx vs M at high field arises from the continuous spin tilting by the magnetic field toward the c-axis and represents the contribution from the field-induced ferromagnetic component. The slope in this linear regime gives μ_0 R_S for this compound, which is represented by dashed lines in Fig. 3d. R_S estimated from this slope is −1.12 × 10^−7 m^3 C^−1, which is comparable to that of the ferromagnet Fe1/3TaS2 26. The extra component in the anomalous Hall resistivity, which we label ρ^A′_yx, can be obtained by subtracting this linear part from the anomalous Hall effect: ρ^A′_yx = ρ^A_yx − R_S μ_0 M. This quantity is plotted in Fig. 3e, which shows that the contribution of the FM component arising from spin tilting is negligibly small compared to ρ^A_yx. In Fig. 3f we show the corresponding anomalous Hall conductivity, which reaches ≈27 Ω−1 cm−1. Discussion We now discuss possible origins of the anomalous Hall effect in CoNb3S6. First, we can rule out impurity-related extrinsic mechanisms of the AHE from the fact that the AHE vanishes at low temperatures, where the laboratory magnetic fields are insufficient to alter the spin state, e.g., at 2 K as shown in Fig. 3a and Supplementary Fig. 3a. Second, a collinear antiferromagnetic state cannot give rise to the anomalous Hall effect. Finally, we showed above that an FM component due to a simple magnetic field-induced spin tilt alone cannot account for the observed AHE.
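The scaling analysis above reduces to fitting the high-field, linear part of ρ^A_yx vs M and comparing the resulting R_S with R_0. The sketch below (Python) illustrates that fit on hypothetical (M, ρ^A_yx) points chosen only to mimic the quoted order of magnitude; it is not the authors' analysis code.

```python
import numpy as np

mu0 = 4.0 * np.pi * 1e-7   # vacuum permeability, T m A^-1
R0 = 2.4e-9                # ordinary Hall coefficient quoted above, m^3 C^-1

# Hypothetical points from the high-field linear regime of rho^A_yx vs M
# (M in A/m, rho^A_yx in Ohm m); slope chosen to echo R_S ~ -1e-7 m^3 C^-1.
M = np.array([1.0e4, 2.0e4, 3.0e4, 4.0e4])
rho_A = -1.41e-13 * M - 5.0e-10

# rho^A_yx = R_S * mu0 * M, so R_S is the fitted slope divided by mu0.
slope, _ = np.polyfit(M, rho_A, 1)
R_S = slope / mu0
print(f"R_S ~ {R_S:.2e} m^3/C, |R_S/R_0| ~ {abs(R_S / R0):.0f}")
```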
We thus consider other scenarios for the origin of the AHE: (1) non-collinear, non-coplanar, or other complex magnetic textures, or (2) an interplay of electronic and magnetic degrees of freedom. CoNb3S6 has a chiral crystal structure. In a magnet with such a chiral lattice, competition among the exchange interaction, Dzyaloshinskii-Moriya interaction (DMI), magnetocrystalline anisotropy, and Zeeman energy may result in complex magnetic textures [28][29][30], including non-collinear or non-coplanar spins that can give rise to a large AHE [10][11][12][13]27,31. The neutron diffraction experiment reported by Parkin et al. 20 revealed no such complex spin structure. However, this is not unexpected given that (1) the measurement was carried out in zero field at 4 K, where our data do not show an AHE for fields up to 9 T, and (2) a diffraction experiment such as that carried out by Parkin et al. would be insensitive to large-scale spin structures. Thus, the nature of the spin structure in CoNb3S6 near T_N remains an open question. Figure 4a-c shows the paramagnetic band structure of CoNb3S6; Fig. 4a shows the bands calculated without the inclusion of SOC. The metallic character of CoNb3S6 is evident from the presence of the hole pockets along Γ−A, which is consistent with the measured hole-like character of the charge carriers. There are two linear electron and hole band crossings along Γ−M and K−Γ. Inclusion of SOC (Fig. 4b) opens a small gap at the band crossings along these lines and splits the bands due to the lack of inversion symmetry 32,33. The band structure along Γ−A is enlightening in this regard. In the calculation without SOC (Fig. 4a), both at Γ and A, two doubly degenerate bands touch at a point ≈40 meV above the Fermi energy (E_F). The splitting of these bands due to SOC along Γ−A (Fig. 4b) is remarkably large and results in the formation of several linearly crossing bands and avoided crossings. In Fig. 4d, we show the band dispersion along A-Γ-A near E_F (see Supplementary Note 4 and Supplementary Fig. 4 for symmetry analysis). Bands cross linearly at all of these points. Two of these points lie within ≈15 meV of E_F. It has been established rigorously that in nonmagnetic crystals, in directions parallel to a 6₃ screw axis (along Γ−A in CoNb3S6), band degeneracies at these points are Weyl nodes 34. The AHE in CoNb3S6 is observed only when the external magnetic field induces a small moment along the c-axis, which suggests that the large enhancement of the AHE may result from the combined effect of such a field and the near-E_F Weyl nodes. In summary, CoNb3S6 is a member of the class of antiferromagnets that show a large anomalous Hall effect. The large AHE is not explained by the reported collinear antiferromagnetic structure, and we suggest the formation of complex, non-collinear magnetic textures or an interplay between the magnetic texture and the electronic band structure as two possible mechanisms for the large AHE. CoNb3S6 is a member of a large family of 1/3-intercalated TX2 compounds, where T is a 4d or 5d transition metal element and X is a chalcogen (S, Se). As changing the intercalated 3d element changes the nature of both the magnetic and the electronic structure, our realization of the AHE in CoNb3S6 opens a platform to explore and perhaps manipulate the interplay among spin texture, electronic band structure, and the associated emergent phenomena in a large class of poorly explored materials. Methods Crystal growth and characterization.
Single crystals of CoNb 3 S 6 were grown by chemical vapor transport using iodine as the transport agent. First, a polycrystalline sample was prepared by heating stoichiometric amounts of cobalt powder (Alfa Aesar 99.998%), niobium powder (Johnson Matthey Electronics 99.8%), and sulfur pieces (Alfa Aesar 99.9995%) in an evacuated silica ampoule at 900°C for 5 days. Subsequently, 2 g of the powder was loaded together with 0.5 g of iodine in a fused silica tube of 14 mm inner diameter. The tube was evacuated and sealed under vacuum. The ampoule of 11 cm length was loaded in a horizontal tube furnace in which the temperature of the hot zone was kept at 950°C and that of the cold zone was ≈850°C for 7 days. Several CoNb 3 S 6 crystals formed with a distinct, wellfaceted flat plate-like morphology. The crystals of CoNb 3 S 6 were examined by single crystal X-ray diffraction at beamline 15-ID-D at the APS, Argonne National Laboratory (ANL), where the data were collected with an APEX2 Area Detector using synchrotron radiation (λ = 0.41328 Å) at 293 K. Compositional analysis was done using an energy dispersive X-ray spectroscopy (EDS) at the Electron Microscopy Center, ANL. Magnetic and transport property measurements. Magnetization measurements were made using a Quantum Design VSM SQUID. Both in FC and ZFC mode, susceptibility data were measured by sweeping temperature up from 1.8 K. In the derivative of M vs H, we observed a small peak around H = 0 both above and below T N and both along a-and c-axis, possibly due to an unknown paramagnetic impurity. For data at all temperatures presented in Fig. 2, a background measured at 30 K was subtracted. A small asymmetry of the peak position in the dM/dH vs H was observed when only one loop of M vs H was measured. We did not see such an asymmetry when a second loop of the M vs H was measured, which is presented in Fig. 2. Transport measurements were performed on a quantum design PPMS following a conventional 4-probe method. Au wires of 25 μm diameter were attached to the sample with Epotek H20E silver epoxy. An electric current of 1 mA was used for the transport measurements. In magnetoresistance measurements, the contact misalignment was corrected by field symmetrizing the measured data. The following method was adopted for the contact misalignment correction in Hall effect measurements. The Hall resistance was measured at H = 0 by decreasing the field from the positive magnetic field (R H+ ), where H represents the external magnetic field. Again the Hall resistance was measured at H = 0 by increasing the field from negative magnetic field (R H− ). Average of the absolute value of (R H+ ) and (R H− ) was then subtracted from the measured Hall resistance. The conventional antisymmetrization method was also used for the Hall resistance measured at 28 K (above T N ) and at 2 K (where no anomalous Hall effect was observed), which gave same result as obtained from the former method. Electronic structure calculations. The electronic structure calculations were carried out within density functional theory (DFT) using the all-electron, full potential code WIEN2K 35 based on the augmented plane wave plus local orbital (APW + lo) basis set 36 . The Perdew-Burke-Ernzerhof (PBE) version of the generalized gradient approximation (GGA) 37 was chosen as the exchange correlation potential. Spin-orbit coupling (SOC) was introduced in a second variational procedure 38 . A dense k-mesh of 28 × 28 × 12 was used for the Brillouin zone (BZ) sampling. 
An R_MT K_max value of 7.0 was chosen for all calculations. Muffin-tin radii were 2.5 a.u. for Nb, 2.45 a.u. for Co, and 2.01 a.u. for S. Data availability. The authors declare that the main data supporting the findings of this study are available within the article and its Supplementary Information files. Extra data are available from the corresponding author on request.
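The contact-misalignment corrections described in the transport methods above amount to simple symmetry operations on the measured field sweeps; the conventional antisymmetrization mentioned as a cross-check at 28 K and 2 K keeps only the part of the Hall signal that is odd in H, while the magnetoresistance correction keeps the even part. The sketch below (Python) operates on invented placeholder sweeps rather than the actual data and is only meant to illustrate those two operations.

```python
import numpy as np

def antisymmetrize_hall(H, R_xy):
    """Keep only the part of R_xy that is odd in H (conventional Hall correction)."""
    order = np.argsort(H)
    H_s, R_s = H[order], R_xy[order]
    R_neg = np.interp(-H_s, H_s, R_s)   # R_xy evaluated at -H
    return H_s, 0.5 * (R_s - R_neg)

def symmetrize_mr(H, R_xx):
    """Keep only the part of R_xx that is even in H (magnetoresistance correction)."""
    order = np.argsort(H)
    H_s, R_s = H[order], R_xx[order]
    R_neg = np.interp(-H_s, H_s, R_s)
    return H_s, 0.5 * (R_s + R_neg)

# Placeholder sweep: a linear Hall signal plus a spurious even-in-H admixture.
H = np.linspace(-9.0, 9.0, 37)
R_xy = 2.0e-3 * H + 5.0e-4 * H**2 + 1.0e-3
H_s, R_odd = antisymmetrize_hall(H, R_xy)   # recovers the 2.0e-3 * H component
```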
5,268.2
2018-08-16T00:00:00.000
[ "Physics" ]
Option Pricing Formulas in a New Uncertain Mean-Reverting Stock Model with Floating Interest Rate Options play a very important role in the financial market, and option pricing has become one of the central issues discussed by scholars. This paper proposes a new uncertain mean-reverting stock model with floating interest rate, where the interest rate is assumed to follow the uncertain Cox-Ingersoll-Ross (CIR) model. The European option and American option pricing formulas are derived via the α-path method. In addition, some mathematical properties of the uncertain option pricing formulas are discussed. Subsequently, several numerical examples are given to illustrate the effectiveness of the proposed model. Introduction Previous studies of option pricing are based on the assumption that the underlying asset price follows a stochastic differential equation [1][2][3][4]. According to the viewpoint of behavioral finance, however, the change of the underlying asset price is not completely random. In fact, investors' belief degrees usually play an important role in real financial practice. So some scholars have argued that stochastic differential equations may not be appropriate to describe the stock price process. Liu [5] founded a branch of axiomatic mathematics for modeling belief degrees. Liu [6] proposed the uncertain stock model and deduced the European option pricing formulas. After uncertainty theory was introduced into the financial field, uncertain financial theory was formed. American option price formulas were derived by Chen [7]. Peng and Yao [8] proposed an uncertain stock model with a mean-reverting process. Yao [9] gave no-arbitrage determinant theorems for the uncertain mean-reverting stock model in an uncertain financial market. Zhang and Liu [10] investigated the pricing problem of the geometric average Asian option. Yin et al. [11] gave the lookback option pricing formulas for the uncertain exponential Ornstein-Uhlenbeck model, and Wang and Chen [12] derived Asian option pricing formulas in an uncertain stock model with floating interest rate. Zhang et al. [13] investigated the pricing problem of lookback options for the uncertain financial market, and so on. In this paper, we propose a new uncertain stock model with floating interest rate. The European option and American option pricing formulas are investigated under the assumption that the underlying stock price follows an uncertain mean-reverting stock model and the interest rate follows an uncertain CIR model. Preliminaries An uncertain measure M is a real-valued set function on a σ-algebra L over a nonempty set Γ satisfying the normality, duality, subadditivity, and product axioms [5]. Definition 1 (see [6]). An uncertain variable is a function ξ from an uncertainty space (Γ, L, M) to the set of real numbers. The uncertainty distribution Φ of an uncertain variable ξ is defined by Φ(x) = M{ξ ≤ x} for any real number x. If the uncertainty distribution Φ(x) is a continuous and strictly increasing function with respect to x at all points where 0 < Φ(x) < 1, then Φ(x) is said to be a regular distribution, and the inverse function Φ^−1(α) is called the inverse uncertainty distribution of ξ. Definition 2 (see [14]). Let ξ be an uncertain variable. Then the expected value of ξ is defined by E[ξ] = ∫_0^{+∞} M{ξ ≥ x} dx − ∫_{−∞}^0 M{ξ ≤ x} dx, provided that at least one of the two integrals is finite. Theorem 1 (see [5]). Let ξ be an uncertain variable with uncertainty distribution Φ. If the expected value exists, then E[ξ] = ∫_0^{+∞} (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx. Theorem 2 (see [14]). Let ξ be an uncertain variable with regular uncertainty distribution Φ.
Then E[ξ] = ∫_0^1 Φ^−1(α) dα. Definition 3 (see [6]). The uncertain variables ξ_1, ξ_2, . . . , ξ_n are said to be independent if M{∩_{i=1}^n (ξ_i ∈ B_i)} = min_{1≤i≤n} M{ξ_i ∈ B_i} for any Borel sets B_1, B_2, . . . , B_n of real numbers. An uncertain process is a sequence of uncertain variables indexed by a totally ordered set T, which is used to model the evolution of uncertain phenomena. Definition 4 (see [6]). An uncertain process C_t is said to be a canonical Liu process if (i) C_0 = 0 and almost all sample paths are Lipschitz continuous; (ii) C_t has stationary and independent increments; and (iii) every increment C_{s+t} − C_s is a normal uncertain variable with expected value 0 and variance t^2, whose uncertainty distribution is Φ_t(x) = (1 + exp(−πx/(√3 t)))^{−1}. Definition 5 (see [6]). Suppose C_t is a canonical Liu process and f and g are two real functions. Then dX_t = f(t, X_t) dt + g(t, X_t) dC_t is called an uncertain differential equation with an initial value X_0. Definition 6 (see [15]). Let α be a number with 0 < α < 1. An uncertain differential equation dX_t = f(t, X_t) dt + g(t, X_t) dC_t is said to have an α-path X_t^α if it solves the corresponding ordinary differential equation dX_t^α = f(t, X_t^α) dt + |g(t, X_t^α)| Φ^−1(α) dt, where Φ^−1(α) is the inverse standard normal uncertainty distribution, i.e., Φ^−1(α) = (√3/π) ln(α/(1 − α)). Theorem 3 (see [15]). Assume that X_t and X_t^α are the solution and α-path of the uncertain differential equation, respectively. Then M{X_t ≤ X_t^α for all t} = α and M{X_t > X_t^α for all t} = 1 − α. Theorem 4 (see [15,16]). Let X_t and X_t^α be the solution and α-path of the uncertain differential equation, respectively. Then the solution X_t has an inverse uncertainty distribution Ψ_t^−1(α) = X_t^α. Theorem 5 (see [16]). Let X_t and X_t^α be the solution and α-path of the uncertain differential equation, respectively. Then, for any time s > 0 and strictly increasing function J(x), the supremum sup_{0≤t≤s} J(X_t) has an inverse uncertainty distribution Ψ_s^−1(α) = sup_{0≤t≤s} J(X_t^α), and the time integral ∫_0^s J(X_t) dt has an inverse uncertainty distribution Ψ_s^−1(α) = ∫_0^s J(X_t^α) dt. Theorem 6 (see [16]). Let X_t and X_t^α be the solution and α-path of the uncertain differential equation, respectively. Then, for any time s > 0 and strictly decreasing function J(x), the supremum sup_{0≤t≤s} J(X_t) has an inverse uncertainty distribution Ψ_s^−1(α) = sup_{0≤t≤s} J(X_t^{1−α}), and the time integral ∫_0^s J(X_t) dt has an inverse uncertainty distribution Ψ_s^−1(α) = ∫_0^s J(X_t^{1−α}) dt. Liu [17] proposed that the uncertain processes X_{1t}, X_{2t}, . . . , X_{nt} are independent if, for any positive integer k and any times t_1, t_2, . . . , t_k, the uncertain vectors (X_{it_1}, X_{it_2}, . . . , X_{it_k}), i = 1, 2, . . . , n, are independent. Theorem 7 (see [18]). Assume that X_{1t}, X_{2t}, . . . , X_{nt} are independent uncertain processes derived from the solutions of some uncertain differential equations. If the function f(x_1, x_2, . . . , x_n) is strictly increasing with respect to x_1, x_2, . . . , x_m and strictly decreasing with respect to x_{m+1}, x_{m+2}, . . . , x_n, then the uncertain process X_t = f(X_{1t}, X_{2t}, . . . , X_{nt}) has an α-path X_t^α = f(X_{1t}^α, . . . , X_{mt}^α, X_{(m+1)t}^{1−α}, . . . , X_{nt}^{1−α}). Uncertain Mean-Reverting Stock Model with Floating Interest Rate In the real market, the interest rate is an important economic indicator, which is always affected by uncertain factors. To meet the needs of actual financial markets, Yao [18] assumed that both the interest rate r_t and the stock price X_t follow uncertain differential equations and presented an uncertain stock model with floating interest rate, in which μ_1 and σ_1 are the drift and diffusion of the interest rate, respectively, μ_2 and σ_2 are the drift and diffusion of the stock price, respectively, and C_{1t} and C_{2t} are independent canonical Liu processes. Considering the long-term fluctuations of the stock price and the change of the interest rate over time, Sun and Su [19] proposed an uncertain mean-reverting stock model with floating interest rate to describe the stock price and interest rate.
In Sun and Su's model, the interest rate was assumed to follow the uncertain Vasicek model. There is no doubt that the Vasicek model can produce negative values of the interest rate. The CIR model overcomes this problem and ensures that the interest rate remains positive at all times. In this paper, we make some improvements to the stock model (27). To ensure that the interest rate is always positive, we assume the interest rate process to follow the uncertain CIR model and introduce a new uncertain mean-reverting stock model with floating interest rate, where a_1 represents the rate of adjustment of r_t, b_1 represents the average interest rate, σ_1 represents the interest rate diffusion, a_1, b_1, σ_1, a_2, b_2, σ_2 are constants, and C_{1t} and C_{2t} are independent canonical Liu processes. European Call Option. A European call option offers the holder the right, without the obligation, to buy a certain asset at an expiration time T with a strike price K, where X_t is the stock price at time t. The payoff of the European call option is given by (X_T − K)^+. Definition 7. Assume a European call option has a strike price K and an expiration time T. Then the European call option price is Theorem 8. Assume a European call option for the uncertain stock model (3) has a strike price K and an expiration time T. Then the European call option price is where r_s^{1−α} solves the following ordinary differential equation Proof. According to Theorem 3, we can get the α-path of the stock price X_t. Similarly, we also get that r_s^{1−α} satisfies the differential equation. It follows from Theorem 5 that the α-path of ∫_0^T r_s ds is ∫_0^T r_s^α ds. Since y = exp(−x) is strictly decreasing with respect to x, from Theorem 6, the discount rate has an α-path. Since (X_T − K)^+ is an increasing function with respect to X_T, it has an α-path. Therefore, the present value of the option has an α-path according to Theorem 7. We obtain the price of the European call option according to Theorems 2 and 4. The theorem is proved. □ Theorem 9. Let f_Ec be the European call option price of the uncertain stock model (28). Then (1) f_Ec is an increasing function of X_0; (2) f_Ec is an increasing function of a_2; (3) f_Ec is a decreasing function of K. Proof. According to Theorem 8, where r_s^{1−α} solves the following ordinary differential equation: (1) The function above is increasing with respect to X_0, and the European call option price f_Ec is therefore increasing with respect to the initial stock price X_0. This means that the higher the initial stock price, the higher the European call option price. (2) The function above is increasing with respect to a_2, and the European call option price f_Ec is therefore increasing with respect to the parameter a_2. (3) Since the function is decreasing with respect to K, the European call option price f_Ec is decreasing with respect to the strike price K; this means that the higher the strike price, the lower the European call option price. European Put Option. Suppose that a European put option has a strike price K and an expiration time T, and X_t is the stock price at time t. The payoff of the European put option is given by (K − X_T)^+. Definition 8. Assume a European put option has a strike price K and an expiration time T. Then the European put option price is where r_s^α solves the following ordinary differential equation: Proof.
According to the proof of Theorem 8, we can get that the discount rate has an α-path. Since (K − X_T)^+ is a decreasing function with respect to X_T, it has an α-path. Therefore, the present value of the option has an α-path according to Theorem 7. We obtain the price of the European put option according to Theorems 2 and 4. The theorem is proved. □ Theorem 11. Let f_Ep be the European put option price of the uncertain stock model (28). Then (1) f_Ep is a decreasing function of X_0; (2) f_Ep is a decreasing function of a_2; (3) f_Ep is an increasing function of K. Proof. According to Theorem 10, (1) since exp(−b_2 T) > 0, the function −X_0 exp(−b_2 T) is decreasing with respect to X_0, and the European put option price f_Ep is therefore decreasing with respect to the initial stock price X_0. This means that the higher the initial stock price, the lower the European put option price. (2) The function above is decreasing with respect to a_2, and the European put option price f_Ep is therefore decreasing with respect to the parameter a_2. (3) Since the function is increasing with respect to K, the European put option price f_Ep is increasing with respect to the strike price K; this means that the higher the strike price, the higher the European put option price. American Call Option. The American call option gives the holder the right, without obligation, to buy an agreed quantity of stock at any time before the expiration date T with a strike price K. Clearly, the best choice for the holder is to exercise the right at the supremum, so the payoff of the American call option is given by sup_{0≤t≤T} (X_t − K)^+. Definition 9. Assume that the American call option has a strike price K and an expiration time T. Then the American call option price is
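The α-path recipe behind Theorem 8 lends itself to a straightforward numerical implementation: solve the two ordinary differential equations on a grid of α values, discount the payoff, and average over α. The sketch below (Python) follows that recipe, but the concrete drift and diffusion forms chosen for the interest rate and the stock are illustrative assumptions on my part (a CIR-type rate and a mean-reverting stock), since the model equations themselves are not reproduced in the text above; all parameter values are likewise hypothetical.

```python
import numpy as np

SQRT3 = np.sqrt(3.0)

def inv_std_normal(alpha):
    # Inverse standard normal uncertainty distribution: (sqrt(3)/pi) * ln(alpha/(1-alpha)).
    return (SQRT3 / np.pi) * np.log(alpha / (1.0 - alpha))

def euler_alpha_path(x0, drift, diff, alpha, T, n_steps):
    """Euler scheme for the alpha-path ODE dX = drift(X) dt + |diff(X)| * Phi^-1(alpha) dt."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    q = inv_std_normal(alpha)
    for i in range(n_steps):
        x[i + 1] = x[i] + (drift(x[i]) + abs(diff(x[i])) * q) * dt
    return x

def euro_call_price(X0, r0, K, T, n_alpha=99, n_steps=500):
    """European call price via the alpha-path method (illustrative, assumed model forms)."""
    a1, b1, s1 = 0.8, 0.03, 0.05   # assumed CIR-type rate: dr = a1*(b1 - r) dt + s1*sqrt(r) dC1
    a2, b2, s2 = 3.0, 0.1, 2.0     # assumed mean-reverting stock: dX = (a2 - b2*X) dt + s2 dC2
    dt = T / n_steps
    alphas = (np.arange(n_alpha) + 0.5) / n_alpha   # midpoint grid avoids alpha = 0 and 1
    prices = []
    for a in alphas:
        # Payoff is increasing in X_T, so it uses the alpha-path of X; the discount
        # factor is decreasing in the rate, so it uses the (1 - alpha)-path of r.
        X = euler_alpha_path(X0, lambda x: a2 - b2 * x, lambda x: s2, a, T, n_steps)
        r = euler_alpha_path(r0, lambda x: a1 * (b1 - x),
                             lambda x: s1 * np.sqrt(max(x, 0.0)), 1.0 - a, T, n_steps)
        disc = np.exp(-np.sum(r[:-1]) * dt)          # discount factor exp(-integral of r)
        prices.append(disc * max(X[-1] - K, 0.0))
    return float(np.mean(prices))                    # expected value = integral over alpha

print(euro_call_price(X0=30.0, r0=0.03, K=28.0, T=1.0))
```

The midpoint grid over α is a deliberate choice: the inverse standard normal uncertainty distribution diverges at α = 0 and α = 1, so averaging the α-quantiles at interior points approximates the integral in Theorem 2 without numerical blow-up.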
3,143
2020-11-03T00:00:00.000
[ "Mathematics", "Business" ]
Genetic Virulence Profile of Enteroaggregative Escherichia coli Strains Isolated from Danish Children with Either Acute or Persistent Diarrhea Enteroaggregative Escherichia coli (EAEC) is frequently found in diarrheal stools worldwide. It has been associated with persistent diarrhea, weight loss, and failure to thrive in children living in developing countries. A number of important EAEC virulence genes are identified; however, their roles in acute and persistent diarrhea have not been previously investigated. The aim of this study was to identify specific EAEC virulence genes associated with duration and type of diarrhea in Danish children. We aimed to improve the current diagnostics of EAEC and enable targeting of strains with an expected severe disease course. Questionnaires answered by parents provided information regarding duration of diarrhea and presence of blood or mucus. A total of 295 EAEC strains were collected from children with acute (≤7 days) and persistent diarrhea (≥14 days) and were compared by using multiplex PCR targeting the genes sat, sepA, pic, sigA, pet, astA, aatA, aggR, aaiC, aap, agg3/4C, ORF3, aafA, aggA, agg3A, agg4A, and agg5A. Furthermore, the distribution of EAEC genes in strains collected from cases of bloody, mucoid, and watery diarrhea was investigated. The classification and regression tree analysis (CART) was applied to investigate the relationship between EAEC virulence genes and diarrheal duration and type. Persistent diarrhea was associated with strains lacking the pic gene (p = 0.002) and with the combination of the genes pic, sat, and absence of the aggA gene (p = 0.05). Prolonged diarrhea was associated with the combination of the genes aatA and astA (p = 0.03). Non-mucoid diarrhea was associated with strains lacking the aatA gene (p = 0.004). Acute diarrhea was associated with the genes aggR, aap, and aggA by individual odds ratios. Resistance toward gentamicin and ciprofloxacin was observed in 7.5 and 3% of strains, respectively. Multi-drug resistance was observed in 38% of strains. Genetic host factors have been associated with an increased risk of EAEC-associated disease. Therefore, we investigated a panel of risk factors in two groups of children (EAEC-positive and EAEC-negative) to identify additional factors predisposing to disease. The duration of breastfeeding was positively correlated with the likelihood of belonging to the EAEC-negative group of children.
INTRODUCTION Enteroaggregative Escherichia coli (EAEC) is an established pathotype within the group of diarrheagenic E. coli (DEC), which also includes the enteropathogenic E. coli (EPEC), enterotoxigenic E. coli (ETEC), enteroinvasive E. coli (EIEC), and verocytotoxin-producing E. coli (VTEC). EAEC is associated with diarrhea, failure to thrive, weight loss, and stunted growth in children living in developing countries (Steiner et al., 1998;Albert et al., 1999;Lima et al., 2000;Medina et al., 2010;Hebbelstrup Jensen et al., 2014). In a Brazilian study, EAEC-positive children were seen to have increased levels of fecal lactoferrin and IL-1β, regardless of the presence of gastrointestinal symptoms (Steiner et al., 1998). This indicates a considerable inflammatory potential of EAEC and severe illness, which may also be present in children in industrialized countries. EAEC has been associated with childhood diarrhea in Germany, England, and America (Huppertz et al., 1997;Jenkins et al., 2006;Vernacchio et al., 2006). Furthermore, several outbreaks of EAEC have been reported in children in Serbia, Japan, and Korea (Cobeljić et al., 1996;Harada et al., 2007;Shin et al., 2015). Persistent diarrhea has been described in EAEC-infected children (Bhan et al., 1989), which may result in considerable loss of electrolytes and impaired absorption of micronutrients. Several genetic host factors have been associated with an increased susceptibility toward EAEC infection, including single nucleotide polymorphisms (SNPs) in the interleukin-8 promoter region (Jiang et al., 2003) and in the CD14 gene (Mohamed et al., 2011). However, only a few general risk factors associated with EAEC infection have been investigated (Hebbelstrup Jensen et al., 2014). Susceptibility testing of EAEC strains has revealed considerable resistance toward antibiotics, including resistance toward ciprofloxacin (Aslani et al., 2011), multi-drug resistance (Khoshvaght et al., 2014), and extended-spectrum beta-lactamases (Guiral et al., 2011), which is a cause for concern. The pathogenic potential of EAEC in children in industrialized countries warrants further research, and the role of EAEC virulence factors in acute and persistent diarrhea needs clarification. The gold standard for the identification of EAEC is the HEp-2 cell assay. This test is performed in reference laboratories only; it requires cell culture facilities and is time-consuming (Hebbelstrup Jensen et al., 2014).
This method is depend on recognition of the so-called "stacked-brick" appearance by special trained personnel and it has been observed to be subject to interobserver variability (Hebbelstrup Jensen et al., 2014). In addition, this phenotypical assay does not distinguish between pathogenic and non-pathogenic strains. Molecular techniques have been developed to detect pathogenic EAEC strains, and various gene targets have been used. It was originally suggested that EAEC could be detected by the master regulator aggR (Morin et al., 2013), which led to a general classification of EAEC based on the presence of the aggR gene into typical and atypical strains (Morin et al., 2013), but not all diarrheagenic EAEC strains possessed the aggR gene (Jenkins et al., 2007). Conventionally, the genes aggR, aatA, and aap are used as initial detection for the EAEC virulence plasmid encoding the master regulator, a secretion system and the dispersin protein, respectively (Vial et al., 1988). In addition, the chromosomal gene aaiC is frequently used for diagnosis of EAEC. The aaiC gene is encoded on a genomic island encoding a type VI secretion system (Dudley et al., 2006). The ORF3 gene encodes a cryptic protein and it was the most frequently detected EAEC virulence gene in children in Mali (Boisen et al., 2012). In spite of these research findings, no consensus has been reached on which EAEC genes are unambiguously pathogenic. Colonization of the gut is accomplished by adhesion facilitated by the EAEC aggregative adhesion fimbriae (AAFs) with five variants AAF/I-AAF/V (Czeczulin et al., 1997;Shamir et al., 2010;Jønsson et al., 2015;Nezarieh et al., 2015). An important pathogenic trait of EAEC is the formation of biofilm (Hicks et al., 1996), which is in general associated with persistent infection (Costerton et al., 1999). Biofilm formation has been associated with the genes aatA, aggR, pic, sepA, and sigA Mendez-Arancibia et al., 2008;Nezarieh et al., 2015) and the AAFs (Czeczulin et al., 1997;Shamir et al., 2010;Jønsson et al., 2015;Nezarieh et al., 2015). The genes sat and the pet are toxins that causes considerable damage to the intestinal epithelium Taddei et al., 2005). In a Brazilian study investigating childhood diarrhea, the toxin gene astA was associated with acute diarrhea, and the CVD432 probe (corresponding to the aatA gene) was associated with persistent diarrhea (Pereira et al., 2007). The role of EAEC in childhood diarrhea in Denmark has only been investigated sparsely (Olesen et al., 2005;Hebbelstrup Jensen et al., 2016b). To date a number of EAEC virulence genes have been identified, inducing adhesion, cytotoxicity, mucosal inflammation, and immune evasion (Hebbelstrup Jensen et al., 2014). The aim of this study was to investigate the clinical manifestations and risk factors for EAEC infection in Danish children with diarrhea. Specific EAEC virulence genes (Table 1) were investigated for an association with diarrheal type and duration. Improvement of the current diagnostics of EAEC could be achieved by targeting EAEC genes with an expected protracted course of disease. Furthermore, identification of specific EAEC genes associated with diarrheal type would contribute to the general understanding of the pathophysiological mechanisms of EAEC. Study Population In the period between January 2011 and October 2013, we conducted a multi-center study to identify children with diarrhea and EAEC detected in stool samples. 
The Group of EAEC-Positive Children In the period between January 2011 and October 2013, 295 children tested positive for EAEC at the three participating Departments of Clinical Microbiology (Figure 1). From the 295 children, 89 (30%) were excluded due to co-infection with one or several additional enteric pathogens. Among co-infections, the most frequently detected were the attaching-and-effacing E. coli (AEEC; n = 24), rotavirus (n = 12), ETEC (n = 12), and C. difficile (n = 10). From the remaining 206 EAEC-positive children, the parents of 50 children had no phone number listed, could not be contacted, did not speak Danish, or did not live in Denmark and were excluded. We included 156 children positive for EAEC only in the EAEC-positive group. The EAEC-Negative Group of Children The EAEC-negative group consisted of 155 healthy children aged 0-6 years, who were included in a cohort study, we had conducted in the period between 2009 and 2013 (Hebbelstrup Jensen et al., 2016a,b;Jokelainen et al., 2017). In that study, we had investigated the incidence and pathological significance of EAEC in Danish children in municipal daycare in the Copenhagen area. Each child had been observed for a 1 year period with registration of gastrointestinal symptoms and submission of stool samples. We included children in the EAECnegative group from the cohort at their first point of observation. Six EAEC-positive children were excluded from the EAECnegative group, where two children had reports of diarrhea or vomiting. In the remaining 155 EAEC-negative children, 34 had reports of diarrhea. From the 34 children with diarrhea the following pathogens were detected: norovirus (n = 7), sapovirus (n = 4), rotavirus (n = 3), adenovirus and AEEC (n = 2), sapovirus and norovirus (n = 1). AEEC, EPEC, VTEC, and C. difficile were each detected in one child ( Table 2). No pathogens were detected in 13 children with diarrhea in the EAEC-negative group. The characteristics of the EAEC-positive and the EAECnegative groups are presented in Table 3. Microbiological Analysis Stool samples were analyzed for the enteropathogens as a part of the routine diagnostics at the participating Departments of Clinical Microbiology. Culturing of the stool samples was performed by using the SSI selective enteric medium (Blom et al., 1999) for detection of Salmonella spp., Yersinia spp., Shigella spp., Vibrio spp. Aeromonas spp., and E. coli spp. For identification of Campylobacter spp. and C. difficile the modified charcoal cefoperazone deoxychocolate agar medium was used (Hutchinson and Bolton, 1984). The microbiological analysis included PCR for detection of ETEC, VTEC, EPEC, EIEC, AEEC, Aeromonas spp., and C. difficile (Persson et al., 2007(Persson et al., , 2008(Persson et al., , 2015. E. coli colonies with different morphology were handpicked and were sub-cultured on MacConkey agar. To diagnose EAEC, the genes aatA (dispersin transporter protein), aggR (transcription activator), and aaiC (secreted protein) were targeted by PCR at the Department of Bacteria, Parasites, and Fungi, at Statens Serum Institut (Boisen et al., 2012). Detection of one of these genes was considered diagnostic of EAEC. Initial screening for EAEC was performed at the Departments of Clinical Microbiology at Slagelse Hospital and Copenhagen University Hvidovre Hospital by PCR targeting the aggR gene. EAEC positive strains were forwarded to the Danish National Reference Center at Statens Serum Institut for further characterization as described below. 
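The diagnostic rule described above (detection of any one of aatA, aggR, or aaiC in the multiplex PCR counts as EAEC, with aggR alone used for the initial screening at the local laboratories) maps directly onto a small helper function. The sketch below (Python) encodes that rule for illustration only, using a hypothetical isolate record rather than real PCR output.

```python
# Diagnostic genes used for EAEC detection in the study; detection of any one of
# these multiplex PCR targets was considered diagnostic of EAEC.
DIAGNOSTIC_GENES = {"aatA", "aggR", "aaiC"}

def is_eaec(detected_genes):
    """Return True if the PCR result is diagnostic of EAEC under the study's rule."""
    return bool(DIAGNOSTIC_GENES & set(detected_genes))

def passed_initial_screen(detected_genes):
    """Initial screening at the local laboratories targeted the aggR gene only."""
    return "aggR" in detected_genes

# Hypothetical isolate: a typical (aggR-positive) strain also carrying pic and sat.
isolate = {"aggR", "aatA", "pic", "sat"}
print(is_eaec(isolate), passed_initial_screen(isolate))
```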
In selected cases, the treating physician had requested microbiological analysis for enteric viruses or parasites to diagnose the cause of diarrhea. Only samples without detection of other pathogens were included in the study. Questionnaires When EAEC was the only pathogen detected by the microbiological analysis, the children's parents were contacted by telephone by a medical doctor and were interviewed using a questionnaire. The parents of the children in the EAECnegative group had answered the same questionnaire. The questionnaire inquired about gastrointestinal symptoms and exposures, including foreign travel, use of antibiotics, and contact with sick animals. In addition, information regarding birth-weight, breastfeeding, infant colic, and pet ownership was inquired. Ethics The study was carried out in accordance with The National Committee on Health Research Ethics with written informed Statistics We performed a CART analysis using the Chi-Squared Automatic Interaction Detection (CHAID) growth method with a minimum of five cases for each parent and child node. We used likelihood ratio tests to identify statistically significant branching points between specific EAEC virulence genes and duration of diarrhea. Associations were considered statistically significant, when p < 0.05. The p-values were Bonferroni corrected to account for multiple testing. We present two CART trees: One where the duration of diarrhea is treated as an interval level construct and one where we treat it as a categorical construct distinguishing between acute diarrhea and persistent diarrhea, respectively. Furthermore, we investigated whether we could find a significant association between EAEC genes and watery diarrhea, bloody diarrhea or mucoid diarrhea. To identify risk factors associated with EAEC infection, we compared the EAEC-positive group of children with the EAEC-negative group of children. For these analyses, we used independent samples t-tests for the interval level constructs, such as age and birth weight, and a difference of proportions test for the categorical constructs, such as gender and pet ownership. In addition, we performed a logistic regression to assess the effect of breastfeeding on the risk of EAEC infection. Finally, the odds ratios were calculated for each individual EAEC gene in the group of children with acute and persistent diarrhea provided with a 95% confidence interval and p-values, by using Fisher's exact test. For data analysis, we used SPSS version 23.0 software for windows (SPSS Inc., USA). The study was approved by the Danish Data Protection Agency, protocol number (2013-41-2338). Significance of EAEC Virulence Genes in Disease To assess the association between EAEC genes in acute and persistent diarrhea, we performed a CART tree analysis. This analysis clusters the genes in a stepwise manner according to the investigated categories. With each branch, a statistical significance between the absence and/or presence of a gene further divides the tree into new branches and provides a discrimination between acute and persistent diarrhea, respectively. Five outliers with reports of diarrhea lasting 200 days or longer were removed from the analysis leaving 83 observations. From the 83 observations, 27 EAEC strains were collected from children with acute diarrhea, 45 strains from children with persistent diarrhea. Eleven children had diarrhea lasting 8-13 days. 
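The per-gene odds ratios mentioned in the Statistics section come from 2×2 tables of gene presence versus diarrheal category tested with Fisher's exact test. The sketch below (Python with SciPy) shows that calculation on hypothetical counts, chosen only so that the column totals match the 27 acute and 45 persistent cases analyzed here; the actual per-gene tables are not reproduced in the text.

```python
from scipy.stats import fisher_exact

def gene_vs_duration_or(pos_acute, neg_acute, pos_persistent, neg_persistent):
    """Odds ratio and two-sided p-value for gene presence vs acute/persistent diarrhea."""
    table = [[pos_acute, pos_persistent],
             [neg_acute, neg_persistent]]
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    return odds_ratio, p_value

# Hypothetical counts for a single gene among the 27 acute and 45 persistent cases.
or_, p = gene_vs_duration_or(pos_acute=18, neg_acute=9,
                             pos_persistent=17, neg_persistent=28)
print(f"OR = {or_:.2f}, p = {p:.3f}")
```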
First, we present the results from the CART tree, where we treat duration of diarrhea as a categorical construct using the categories acute and persistent. EAEC strains lacking the pic gene were more likely to cause persistent diarrhea (≥14 days), p = 0.002 (Figure 2). EAEC strains possessing the genes pic, sat but lacking the aggA gene were also more likely to cause persistent diarrhea, p = 0.05. When duration of diarrhea in days is treated as an interval level construct, we found the absence of the aatA gene to be associated with highly prolonged diarrhea (mean duration = 74 days) compared to the grand mean in the sample of 29 days (Figure 3). Presence of the genes aatA and astA was associated with slightly prolonged diarrhea (mean duration = 38 days; p = 0.03). We calculated the individual odds ratios for EAEC virulence genes in cases of acute and persistent diarrhea. By this analysis, we found the genes pic, aggR, aap, and aggA to be associated with acute diarrhea (Table 5). Furthermore, the CART analysis was used to investigate any statistically significant association between EAEC virulence genes and watery, mucoid, and/or bloody diarrhea, respectively. Only the absence of the aatA gene was a significant predictor and was associated with not having mucoid diarrhea, p = 0.004 (Figure 4). Clinical Manifestations Associated with EAEC Infection Parents of children infected with EAEC were interviewed by a medical doctor and symptoms were registered ( Table 6). Fever was reported for 35% of the EAEC-positive children and was usually ≤39 • C. Other clinical presentations associated with EAEC infection were abdominal cramping (55%), reduced appetite (52%), vomiting (39%), and weight loss (39%). The median weight loss was 1,000 g. Acute diarrhea defined as ≤7 days was reported from 36 (23%) children, but the majority of EAEC-positive children suffered from persistent diarrhea defined as > 14 days (n = 83, 53%). The duration of diarrhea was 8-13 days for 12 children (8%) and unknown for 25 children (16%). The median duration of EAEC-associated diarrhea was 14 days. The median number of passing diarrheal stools was 6.5 per day at disease maximum. Different categories of diarrhea was reported among EAEC cases, including mixed watery and mucoid diarrhea (n = 74, 47%), watery diarrhea (n = 56, 36%), mucoid diarrhea only (n = 15, 10%), and bloody diarrhea (n = 10, 6%). EAEC Infection and Risk Factors To identify factors associated with an increased risk or protective effect against EAEC infection we compared age, gender, pet-ownership, contact with sick animals, the use of antibiotics, infant colic, and birth-weight between EAEC-positive children and EAEC-negative children. The only statistically significant difference between the two groups was the duration of breastfeeding. The children in the EAEC-negative group had been breastfeed significantly longer compared with the children in the EAEC-positive group (p < 0.00; see Table 3). We performed a logistic regression to assess the relationship between EAEC status and the duration of breastfeeding. To illustrate the relationship between duration of breastfeeding and having EAEC infection in a more understandable metric, we graphed the relationship in Figure 4. Antibiotic Resistance in EAEC Strains Susceptibility testing toward antibiotics was performed for 134 EAEC strains. In general, a high level of resistance toward antibiotics was observed, and multi-drug resistance was seen in 38% (n = 51) of the EAEC strains (Figure 6). 
From the multidrug resistant strains, 35 (69%) were collected from children with reports of foreign travel. Multidrug-resistant EAEC strains were collected from children, who had visited Asia (n = 7, 57%, 15 in total), Southern Europe (n = 6, 27%, 22 in total), Northern Europe (n = 1, 14%, 7 in total), Africa (n = 20, 59%, 34 in total) and the Middle East (n = 1, 14%, 8 in total). Resistance toward broad-spectrum antibiotics was detected in FIGURE 2 | The CART tree analysis shows the affiliations between EAEC genes and acute diarrhea ≤ 7 days compared with persistent diarrhea ≥ 14 days. N is the number of children. The number 0 in each branch indicates the absence of a gene and the number 1 indicates the presence of a gene. The EAEC genes included in the analysis were sat, sepA,pic,sigA,pet,astA,aap,aaiC,aggR,aatA,ORF3,agg3/4C,agg3A,aafA,aggA,agg4A,and agg5A. Each branch in the CART tree terminates in a "node," which represents a specific combination of genes, or absence of genes, statistically significant in the categories of acute and persistent diarrhea. Persistent diarrhea was associated with EAEC strains lacking the pic gene. EAEC strains possessing the pic and sat genes, but lacking the aggA gene were associated with persistent diarrhea. Frontiers in Cellular and Infection Microbiology | www.frontiersin.org pic,sigA,pet,astA,aap,aggR,aaiC,aatA,ORF3,agg3/4C,agg3A,aafA,aggA,agg4A,and agg5. Each branch in the CART tree terminates in a "node," which represents a specific set of genes, or absence of genes, with a statistical significant association with different durations of diarrhea in days. Strains lacking the aatA gene were associated with diarrhea with the longest median duration (74 days). EAEC strains with the combination of the aatA and astA genes were associated with prolonged diarrhea with a higher median duration (38 days) compared with EAEC strains lacking the astA gene and presence or absence of the pic gene (29 and 12 days, respectively). FIGURE 4 | The CART tree analysis shows the association between EAEC genes and mucoid diarrhea. N is the number of children. The number 0 in each branch indicates the absence of a gene and the number 1 indicates the presence of a gene. EAEC genes included in the analysis were sat, sepA,pic,sigA,pet,astA,aap,aggR,aaiC,aatA,ORF3,agg3/4C,agg3A,aafA,aggA,agg4A,and agg5. Each branch in the CART tree terminates in a "node," which represents a specific set of genes or absence of genes, which are statistically significant in the categories investigated. Strains lacking the aatA gene were associated with non-mucoid diarrhea, p = 0.004. No association between EAEC genes and watery or bloody diarrhea was observed. EAEC strains collected from 14 children with reports of foreign travel, which included resistance against ciprofloxacin in 4 strains and resistance against gentamicin in 10 strains. Hospitalization of EAEC Infected Children Twenty-three (15%) of the 156 EAEC-positive children with diarrhea were hospitalized. Ten of the hospitalized children were treated with different antibiotics, including broad-spectrum penicillin (n = 3), macrolides (n = 2), and cefuroxime (n = 1). For three children the antibiotics used were unknown. Enteric parasites were examined for in 10 of 23 hospitalized children and enteric viruses in seven hospitalized children. 
Thirteen hospitalized children (57%) had a history of foreign travel within a period of 2 months prior to examinations, where traveling to Egypt was reported from eight children, while two children had visited Turkey. One child had either visited Ethiopia, Somalia, or Pakistan. The genetic profile of the strains collected from hospitalized children did not differ statistically significantly from other strains (data not shown). DISCUSSION EAEC is an acknowledged common diarrheal pathogen, but the identification of a sole and causative pathogenic EAEC trait remains inconclusive. A large number of virulence factors and combinations hereof, have been associated with clinical illness in epidemiologic studies (Hebbelstrup Jensen et al., 2014). It is plausible that the host immunity and exposure (endemic presence of EAEC) plays an important role in EAEC pathogenicity, therefore, the combination of virulence genes in EAEC provides the necessary variety for local disease. The mosaic nature of EAEC genomes described today seems to enhance our idea that EAEC pathogenicity is very dependent on the host and its environment. In this study, we have characterized a number of EAEC strains collected from children with acute and persistent diarrhea. The EAEC strains were characterized in respect of classical EAEC virulence genes and antibiotic resistance profiles. We found the absence of the pic gene to be associated with persistent diarrhea, p = 0.02. On the other hand, the combination of the genes pic and sat and absence of the aggA gene was associated with persistent diarrhea, p = 0.05 (Figure 2). The pic gene has mucinolytic activity, it causes hemagglutination and serum resistance (Henderson et al., 1999). Pic is a protease, which is secreted by EAEC and Shigella flexneri (Henderson et al., 1999) and it has been shown to play a key role in the colonization and growth of EAEC in mucus in a mouse model (Harrington et al., 2009). The combination of the genes pic, sepA, and agg4A has been associated with the formation of strong biofilm (Nezarieh et al., 2015). Biofilm formation is a key pathogenic trait for the development of persistent infection (Costerton et al., 1999) and the formation of biofilm is what separates EAEC from other diarrheagenic E. coli pathotypes. The sat gene has been shown to cause intestinal damage with fluid accumulation and villus necrosis in a rabbit ileal loop model (Taddei et al., 2005). Sat was one of the most prevalent genes detected in children with diarrhea in an Iranian study (Nezarieh et al., 2015). The aggA gene encodes a subunit of the aggregative fimbria type I, which has been shown to elicit an immunogenic response in a mouse vaccination study (Bouzari et al., 2010). It could be speculated that the children were partly protected from antibodies toward the aggregative fimbria type I, due to previous exposure resulting in a shortened span of disease. EAEC strains with the combination of the aatA and astA genes were associated with prolonged diarrhea (p = 0.03; Figure 3). The aatA gene is a key EAEC gene and it encodes part of an outer membrane transport system involved in translocation of the dispersin protein. The aatA gene corresponds to a part of the EAEC virulence plasmid pCVD432 (Baudry et al., 1990). It was one of the first genes targeted by PCR to diagnose EAEC (Nishi et al., 2003). The genes astA and pic are both present in the prototype EAEC strain 042, which has been associated with diarrhea in a volunteer study (Nataro et al., 1995). 
The aatA gene has been associated with the formation of biofilm , which is characterized by a thick mucus layer in the intestine and predisposes to persistent infection (Costerton et al., 1999). The astA gene encodes the EAST-1 toxin, and it was one of the first virulence factors associated with diarrheagenic EAEC strains (Ménard and Dubreuil, 2002), but astA is not restricted to EAEC (Savarino et al., 1996). The EAST-1 toxin has been shown to elicit a secretory response in a rabbit ileal model (Savarino et al., 1991). It could be speculated that this response induces an inflammatory response with protracted disease manifestations. Thus, in combination, the aatA and astA genes could be responsible for prolonged inflammation and formation of biofilm in the intestine resulting in prolonged diarrhea. Strains lacking the aatA gene were associated with nonmucoid diarrhea, p = 0.00 (Figure 4). A wide range of EAEC genes have been associated with increased secretion of mucus, such as pic (Navarro-Garcia et al., 2010) and pet . The combination of the aatA, aggR, astA, and aap genes has been associated with gross mucus and leukocytes in stools from patients with diarrhea compared with healthy controls, p < 0.05 (Cennimo et al., 2009). Collectively, our results suggest that the pathophysiology of EAEC enteric infection involves a complex and dynamic modulation of several virulence genes. The CART analysis suggested the sat and astA genes for virulent EAEC strains with persistent disease manifestations. Strains, with the aggA gene in different combinations with other EAEC genes were associated with less-pathogenic EAEC strains, with shorter duration of disease. Suggesting that the fimbriae themselves might not be the sole cause for disease. The odds ratio for the individual EAEC genes between acute and persistent diarrhea suggested an association between the genes aap, aggR, pic and aggA, and acute diarrhea ( Table 5). In two other studies where the CART analysis was applied (Boisen et al., 2012;Lima et al., 2013), it was found that virulent EAEC strains in Mali harbored the EAEC heat-stable toxin 1 (EAST-1 enterotoxin), and the flagellar type H33 correlated with diarrhea. A Brazilian study (Lima et al., 2013), identified trait clusters in EAEC strains (isolated genes or in combination), which correlated with both children with diarrhea (pet and aafA) and healthy children (agg4A and ORF61). In Ghana, it was found that the presence of the aap gene was significantly associated with diarrhea, even though the aatA was the most prevalent gene among the EAEC isolates tested (Opintan et al., 2010). These findings suggest that EAEC infection involves a complex and dynamic modulation of several virulence genes for the bacteria in combination with its endemic presence in the population and the immunity present in the population. The variation of the EAEC genetic repertoire in relation to disease determined in Danish strains, as well as from different populations of distinct geographical regions of the world, confirms again the genetic heterogeneity of EAEC (Huppertz et al., 1997;Glandt et al., 1999;Sarantuya et al., 2004;Huang et al., 2007;Oundo et al., 2008). Furthermore, it is very possible that since many of the virulence genes and antibiotic resistance genes are encoded on plasmids, the dynamic horizontal acquisition and loss of genetic traits can be explained by the favoring of the variety of genetic profiles found in EAEC strains. 
In order to investigate general predisposing factors associated with EAEC infection, we compared children with and without EAEC. Previous studies have shown that infant colic predisposes to gastrointestinal diseases later in life (Collado et al., 2012). Although EAEC is not generally considered to be transmitted through animals it has been isolated from animals (Puño-Sarmiento et al., 2013), and we investigated if having a pet could pose a risk for EAEC infection. None of the factors investigated were represented more frequently in the EAEC-positive group, however, prolonged breastfeeding was discovered to be strongly associated with the EAEC-negative group of children (Table 3 and Figure 5). It is known that breastfeeding is protective toward infectious diarrhea (Mølbak et al., 1994). Here, we observed a protective effect of breastfeeding against EAEC infection beyond the age of weaning. Only three (2%) children in the EAECnegative group and nine (6%) in the EAEC-positive group were still breastfed while included in the study (Table 3). In the period prior to inclusion in the study, more children had been breastfed in the EAEC-negative group compared to the EAEC-positive group (94 vs. 80%). Immunological components in the breast milk such as immunoglobulins, lactoferrin, and lymphocytes have been shown to be crucial for the development of the immune system (Hanson, 1999). Furthermore, benefits in term of long-term protection against infections have previously been observed with prolonged breastfeeding (Hanson, 2004). A health bias in the EAEC negative group could be speculated to effect the analysis since resourceful parents may be more likely to attend such a study. On the other hand, the groups were very comparable in many other aspects (Table 3), and the children were recruited within the same geographical area and in the same period in time. Examination for enteric viruses was performed for 23 (15%) of the EAEC-positive children. It has been shown, that enteric viruses are highly prevalent in diarrheal episodes in this age group (Olesen et al., 2005) and this could be a confounding factor in the children in our study, who lacked examinations for enteric viruses. However, gastrointestinal viruses often causes acute diarrhea with a high diarrheal output, which was not the clinical manifestations of disease in the majority of children in this study (Table 6). Enteric parasites were examined for in 53 (34%) of the EAEC-positive children. Parasitic infection is mostly associated with traveler's diarrhea in Denmark, which 51% of the children in this study suffered from. Therefore, parasitic infection may be a confounding factor for the 17% of the children with reports of travel in this study. Hospitalization was reported for 23 (15%) of the EAECpositive children and was mainly seen in children at the age of ≤2 years. It is well-known, that very young children are more susceptible to dehydration caused by infectious diarrhea (Ramaswamy and Jacobson, 2001), and it has been described for other diarrheagenic E. coli pathotypes, such as EPEC (Essers et al., 2000). We did not discover any particular EAEC virulence gene to be associated with hospitalization. The EAEC-positive children had more reports of foreign travel within a period of 2 months prior to examination, when compared to the EAEC-negative group of children (51 vs. 16%; Table 3). 
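Group comparisons such as the foreign-travel rates just cited (51 vs. 16%) can be tested with a simple chi-square on the 2 × 2 table. The group sizes of 150 assumed below are for illustration only; the actual denominators are those reported in Table 3.

```python
# Sketch of a two-group proportion comparison (foreign travel: 51% vs. 16%).
# Group sizes are assumed, not taken from the paper.
from scipy.stats import chi2_contingency

n_pos, n_neg = 150, 150                               # assumed group sizes
travel = [round(0.51 * n_pos), round(0.16 * n_neg)]   # EAEC-positive, EAEC-negative
no_travel = [n_pos - travel[0], n_neg - travel[1]]

chi2, p, dof, expected = chi2_contingency([travel, no_travel])
print(f"chi2 = {chi2:.2f}, p = {p:.2e}")
```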
However, the majority of the included children were recruited from Copenhagen University Hvidovre Hospital, where only patients suffering from travelers' diarrhea were investigated for EAEC (Figure 1). EAEC was diagnosed in all categories of diarrhea at SSI, and a total of 48 children were included from this site. Of the 48 children diagnosed at SSI, 14 (29%) had reports of foreign travel. Other studies have shown a strong association between foreign travel and EAEC-associated diarrhea in children (Huppertz et al., 1997; Denno et al., 2012). We detected a high level of antibiotic resistance among the EAEC strains, which considerably limited the treatment possibilities in diarrheal cases. Resistance toward gentamicin was observed in 10 (7.5%) of the EAEC strains and resistance toward ciprofloxacin in 4 (3%) of the EAEC strains (Figure 6). FIGURE 6 | Susceptibility toward antibiotics was tested in the EAEC strains by using the disc diffusion method. A high level of resistance was observed toward trimethoprim, sulfonamides, tetracycline, and nalidixic acid, and to a lesser extent toward cefotaxime and chloramphenicol. An Indian study showed a high level of resistance toward ciprofloxacin in 63.4% (n = 40) of strains tested (Raju and Ballal, 2009); this phenomenon is mostly described in cases of travelers' diarrhea (Vila et al., 2001). Gentamicin resistance in EAEC strains is rarely reported and only at low levels (Khoshvaght et al., 2014; Hebbelstrup Jensen et al., 2016b). Multidrug-resistance was most frequently detected in cases of travelers' diarrhea (69 vs. 31%). Yet, multidrug-resistance in EAEC strains is reported in several other studies (Sang et al., 1997; Raju and Ballal, 2009; Aslani et al., 2011; Hebbelstrup Jensen et al., 2016b) in both industrialized and developing countries, which is a major cause for concern. However, the efficacy of antibiotics in the treatment of EAEC infection remains to be determined, and antibiotic treatment should be restricted to severe cases only. The vast majority of EAEC gastrointestinal infections should be treated with fluid and electrolyte replacement, as for infectious diarrhea in general. However, all strains in this study were susceptible to mecillinam and piperacillin-tazobactam, which must be regarded as the drugs of choice in the few selected cases of EAEC-associated diarrhea that require treatment. CONCLUSION Persistent diarrhea was associated with EAEC strains without the pic gene, and with strains with the combination of the genes pic and sat and absence of the aggA gene. The combination of the aatA and astA genes was associated with prolonged diarrhea. Acute diarrhea was associated with the genes aggR, aap, and aggA by individual odds ratios. Strains lacking the aatA gene were associated with non-mucoid diarrhea. Breastfeeding was seen to be protective against EAEC infection beyond the age of weaning. AUTHOR CONTRIBUTIONS KK, AMP, and JE acquired the funding for this research. BH and AP performed the interviews of participating parents. SH performed the statistical analysis and critical appraisal. CS, RJ, AMP, NB, JE, and RP were responsible for the microbial analysis performed. BH, CS, NB, and RJ were responsible for the interpretation of the microbial analysis performed. All authors contributed to the writing of this manuscript and approved the final draft. All authors take responsibility for the integrity and accuracy of this research and the interpretation hereof.
FUNDING This work was partly supported by The Danish Council for Strategic Research, Innovation and Higher Education (grant number 2101-07-0023) to KK and by the Regional Department of Research in the Zealand Region (grant number 12-000095/jun 2014) to JE.
8,218
2017-05-30T00:00:00.000
[ "Biology", "Medicine" ]
Calibration of Acousto-Optic Interaction Geometry Based on the Analysis of AOTF Angular Performance Acousto-optic interaction geometry determines the spectral and spatial response of an acousto-optic tunable filter (AOTF). The precise calibration of the acousto-optic interaction geometry of the device is a necessary process before designing and optimizing optical systems. In this paper, we develop a novel calibration method based on the polar angular performance of an AOTF. A commercial AOTF device with unknown geometry parameters was experimentally calibrated. The experimental results show high precision, in some cases falling within 0.01°. In addition, we analyzed the parameter sensitivity and Monte Carlo tolerance of the calibration method. The results of the parameter sensitivity analysis show that the principal refractive index has a large influence on the calibration results, while other factors have little influence. The results of the Monte Carlo tolerance analysis show that the probability of the results falling 0.1° using this method is greater than 99.7%. This work provides an accurate and easy-to-perform method for AOTF crystal calibration and can contribute to the characteristic analysis of AOTFs and the optical design of spectral imaging systems. Introduction An AOTF device is a spectral-splitting device based on the acousto-optic effect [1]. When compared with traditional optical splitters, such as prism, grating and interferometer, it has advantages in terms of a large-angle aperture, high spectral resolution, arbitrary wavelength configuration, fast tuning speed and it does not require moving elements [2,3]. Therefore, it is widely used in many applications, such as spectral imaging [4,5], polarization analysis [6,7], stereoscopic imaging [8] and notch filtering [9], etc. In addition, in the case of monochromatic input light, AOTFs also have the abilities of spatial filtering and edge enhancement [10,11]. There are two basic configurations, collinear and noncollinear, for AOTFs. Furthermore, in the collinear configuration, the interacting optical and acoustic waves propagate in identical directions, while the directions of the optical and acoustic waves are different in the noncollinear configuration. The first AOTF with a collinear design was reported by Harris and Wallace in 1969 [12]. Subsequently, Chang described noncollinear AOTFs in 1974 [13], which are commonly used today. Noncollinear AOTFs have some advantages, as a larger angular aperture and more materials with a large acousto-optic figure of merit can be chosen. Among them, TeO 2 crystals are very suitable for noncollinear AOTFs that cover the spectral range of 350-4500 nm [14,15]. Acousto-optic interaction geometry is a key inherent property of an AOTF device, which greatly affects the spectral and spatial response of the device. Previously, Voloshinov analyzed the acousto-optic effect under three kinds of acoustic cut angles, and the results showed that acousto-optic interaction geometry affects the angular apertures and spectral resolution of an AOTF device [16]. In addition, acousto-optic interaction geometry will also affect the tuning relationship, sound field distribution, aberration and chromatic aberration characteristics of the device, thus greatly affecting all aspects of the AOTFs in characteristic analysis and optical system design [4,17]. Therefore, AOTFs must be designed in detail to meet the special output requirements. 
Chang first proposed the parallel tangent condition for the design of noncollinear AOTFs with large angular apertures in the 1970s [13]. Yano discussed some properties of AOTFs using the simplified treatment [18]. Moreover, Gass corrected the birefringence approximation for the accurate design of the acousto-optic interaction geometry in the analysis [19]. These theories are all typically under the parallel tangent condition. For the non-parallel tangent condition, Yushkov expressed an exact phase-matching calculation equation at an arbitrary incident angle and recently proposed an alternative method for analyzing the Bragg angle curve in wide-angle AOTFs [10,20]. Zhang discussed the function of the phase mismatching condition and proposed a new tuning method with a non-radio-frequency signal [21]. After the processes of analysis and design, an AOTF will be manufactured using a series of fabrication technologies. Many steps are involved in the fabrication of AOTFs, including X-ray orientation, cutting, polishing, transducer orientation and fabrication, mounting and grinding [22]. The fabrication technologies of an AOTF are so complex that they easily lead to machining tolerances between the designed and actual device. If the designed values were used for device characteristic analysis and optical system design, it would lead to inaccurate results. Therefore, it is vital to calibrate the acousto-optic interaction geometry of an actual AOTF before use. In the past, our team proposed an acousto-optic interaction geometry calibration method by the tested tuning frequency curve under the parallel tangent condition [23]. However, this multi-wavelength method is not conducive to calibration accuracy as it relies on more constant parameters. In addition, to find the parallel tangent condition, as shown in Figure 1a, the incident angles must be adjusted accurately, which requires high precision in the experiment. To overcome these shortcomings, here we develop a calibration method based on the polar angular performance of an AOTF. Firstly, we establish an AOTF angular frequency relationship model that can be solved analytically. Moreover, based on this model, a novel method is developed to calibrate the acousto-optic interaction geometry of an actual AOTF device. It does not introduce principal refractive index errors between multiple wavelengths and works with a single monochromatic light source. Furthermore, this method does not depend on determining characteristic incident angles, as shown in Figure 1b. Finally, using the principle of the minimum root mean square error (RMSE) between the measured and theoretical data, the acousto-optic interaction geometry of the actual AOTF device can be calculated through the use of the parameter traversal method. This method is an improvement of the calibration process in terms of simplicity and robustness and has been tested with high precision in experiments. Simultaneously, the method analyzes the influence of crystal constants on calibration results in the visible range. This work is significant and provides a database for a range of research related to AOTF devices. Methods The acousto-optic interaction geometry of an AOTF refers to the front facet angle (θ * i ), the acoustic cut angle (θ α ) and the back facet angle (θ β ), respectively. As shown in Figure 2, this is the top view of the AOTF device and corresponds to the polar plane. In the AOTF, an acoustic wave is generated by a transducer and absorbed by an absorber. 
By switching the radio frequency signals applied to the transducer, the AOTF can scan the spectral regions of interest [24]. Given that the polarization state of the incident light (L 0 ) is inconsistent with one of the eigenwave modes in TeO 2 , four types of emitted light are produced, namely diffraction ordinary polarized light (L o d ), transmission extraordinary polarized light (L e t ), transmission ordinary polarized light (L o t ) and diffraction extraordinary polarized light (L e d ). Two coordinate systems have been established for analysis in this paper: the optical axes coordinate system (x 0 oy 0 ) and the crystal axes coordinate system (xoy), as shown in Figure 2. The optical axes coordinate system is a rectangular coordinate system, where the y 0 axis is the intersection line of the incident surface and polar plane, while the x 0 axis is perpendicular to the incident surface. The crystal axes coordinate system is also a rectangular coordinate system, wherein the y axis is the crystal axis [110], while the x axis is the crystal axis [001]. The model between acousto-optic interaction geometry and polar angular performance for AOTFs involves two processes: (a) the calculation of the refraction at the plane of incidence, shown in Section 2.1, and (b) wave vector analysis of acousto-optic interaction, shown in Section 2.2. In addition, the relationship between the incident polar angles and matching frequencies is independent of the back facet, which will be analyzed in Section 2.1. Therefore, we need two measurements to obtain complete calibration results by swapping the front and back facets, as shown in Figure 3. For the exchange of the input and output facets, the AOTF device needs to be rotated about 180 • around the axis perpendicular to the xoy plane. Refraction at the Plane of Incidence Firstly, the refraction of light in the plane of incidence obeys Snell's law as follows: where θ 0 is the incident polar angle between the incident light and the normal of the incident plane. θ 1 is the refraction angle in the crystal between the refracted light and the normal to the plane of incidence. n 0 and n 1 are the refractive indices in the air and crystal, respectively. Moreover, it is known that n 0 is equal to 1 in the air. TeO 2 crystal has the anisotropy and n 1 can be solved by: for o-polarized and e-polarized lights, respectively. The difference between θ 1 and θ 2 is as follows: where θ c i is the angle between the incident plane and the crystal axis [110], for which c = 1 corresponds to the positive mode, as shown in Figure 3a, while c = 2 corresponds to the reverse mode, as shown in Figure 3b. When switching from the positive mode to the reverse mode, the AOTF device must be rotated for swapping the input and output facets. Furthermore, the relationships between θ c i and the acousto-optic interaction geometry (θ * i , θ α and θ β ) of an AOTF are as follows: which means that both the positive and reverse modes are necessary to obtain the complete acousto-optic interaction geometry of an AOTF device. From Equations (1)-(3), we can obtain: where F e and F o correspond to the conditions under which the incident lights are the e-polarized and o-polarized, respectively. Furthermore, (5) contains both the quadratic equations and can easily be solved. Then, using Equations (1)-(5), the relationship between θ 0 and θ 2 can be expressed as follows: where F 1 is an implicit function. 
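A numerical sketch of this refraction step is given below. Because Equations (1)-(5) are not reproduced in this text, the mapping between the refraction angle and the optic axis is an assumption made for illustration, and the principal indices are approximate literature values for TeO2 at 632.8 nm; this is not the authors' implementation.

```python
# Minimal numerical sketch of refraction at the incident facet: Snell's law with
# an angle-dependent extraordinary index.  psi(theta1), the wave-normal angle to
# the optic axis, is an assumed mapping for illustration only.
import numpy as np
from scipy.optimize import brentq

n_o, n_e = 2.26, 2.41            # approximate TeO2 principal indices at 632.8 nm
theta_facet = np.radians(74.9)   # assumed angle between facet normal and optic axis

def refraction_angle(theta0_deg, polarization="o"):
    theta0 = np.radians(theta0_deg)
    if polarization == "o":
        return np.degrees(np.arcsin(np.sin(theta0) / n_o))
    def n_eff(theta1):
        psi = theta_facet - theta1   # assumed wave-normal / optic-axis angle
        return n_o * n_e / np.sqrt(n_e**2 * np.cos(psi)**2 + n_o**2 * np.sin(psi)**2)
    # solve sin(theta0) = n_eff(theta1) * sin(theta1) for theta1
    f = lambda t1: n_eff(t1) * np.sin(t1) - np.sin(theta0)
    return np.degrees(brentq(f, -np.pi / 4, np.pi / 4))

print(refraction_angle(5.0, "o"), refraction_angle(5.0, "e"))
```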
Wave Vector Analysis of Acousto-Optic Interaction Acousto-optic interaction in the AOTF is usually analyzed through a wave vector diagram [25]. When the momentum-matching condition is satisfied, the incident wave vector k i , the acoustic wave vector k α and the diffraction wave vector k d constitute a closed triangle (Figure 4), shown as k i ± k α = k d . Some are dependent on the incident wavelength and refraction indices, as follows: where n i and n d are the refractive indices of the incident and diffracted light in TeO 2 . They satisfy: In addition, acoustic wave vector k α satisfies: where f α is the acoustic frequency and V α is the acoustic wave velocity. In the crystal, V α is given by [26]: where V 001 and V 110 are the acoustic wave velocities along the respective crystal axes. According to Equation (9), in order to solve acoustic frequency f α , we need to calculate the → AB . As shown in Figure 4, point A (x A , y A )satisfies tan θ 2 = y A x A and Equation (8), which can be solved by following: AB are the same one as: Therefore, point B can be solved by following: which are all quadratic equations and easy to be solved exactly. F o→e corresponds to the condition wherein the incident light is o-polarized and the diffraction light is e-polarized, while F e→o corresponds to the condition wherein the incident light is e-polarized and the diffraction light is o-polarized. From Equations (9)- (13), the relationship between f α and θ 2 can be expressed as follows: where F 2 is an implicit function. In summary, with Equations (6) and (14), the model between the acousto-optic interaction geometry and the incident polar angular frequencies of AOTFs can be established as follows: where F is an implicit function. This means that, for an actual AOTF device with inherent acousto-optic interaction geometry (θ * i , θ α and θ β ), acoustic frequencies and incident polar angles are correlated when the wavelength (λ) of the incident lights is fixed. Therefore, the acousto-optic interaction geometry of the AOTF device can be calibrated by analyzing the incident polar angles and corresponding acoustic frequencies. The numerical values of the constants used in the calculations in this paper are provided in Table 1 [27]. Experimental Setup The schematic diagram of our experimental setup is shown in Figure 5. The monochromatic source we used was a 632.8 nm He-Ne laser (DH-HN250), from which the linearly polarized light was generated. A group of frosted glasses was used to reduce light intensity, and the effect of the frosted glasses can be replaced by multiple polarizers. The commercial AOTF used in the experiment was manufactured by China Electronics Technology Group Corporation (CETC) and is referred to as SGL100-400/850-20LG-K. The polarizer, located ahead of the AOTF, was used to adjust the polarization state of incident lights so that o-polarized and e-polarized components were close and convenient for measurements. The turntable (GCM-1107M) was used to accurately control the incident polar angles of the incident light into the AOTF, and here the accuracy of the rotation angle was 2 . A Basler acA640-120gm camera with an 8 mm focal length lens was used as the detector in the experiments. Compared with the optical power meter, a camera can simultaneously detect the intensities of transmitted and diffracted lights ( Figure 6) to effectively avoid the measurement error caused by the instability of the power intensity and the polarization state of the laser. 
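Before turning to the measurements, the acoustic-branch quantities entering the wave-vector diagram can be sketched numerically. The angular dependence of the slow shear velocity used below is the approximation commonly quoted for TeO2 and stands in for Equation (10); the velocity values are approximate literature figures, not the constants of Table 1.

```python
# Sketch of the acoustic quantities used in the wave-vector diagram.
# V(theta_a) is the commonly used approximation for the slow shear mode in TeO2.
import numpy as np

V_001, V_110 = 2104.0, 616.0   # m/s, approximate shear velocities along [001] and [110]

def acoustic_velocity(theta_a_deg):
    """Phase velocity of the slow shear wave propagating at theta_a from [110]."""
    t = np.radians(theta_a_deg)
    return np.sqrt((V_110 * np.cos(t))**2 + (V_001 * np.sin(t))**2)

def acoustic_wavevector(f_mhz, theta_a_deg):
    """|k_alpha| = 2*pi*f / V_alpha for a drive frequency f (cf. Equation (9))."""
    return 2 * np.pi * f_mhz * 1e6 / acoustic_velocity(theta_a_deg)

print(acoustic_velocity(6.5))          # m/s near the calibrated cut angle
print(acoustic_wavevector(80.0, 6.5))  # rad/m at an 80 MHz drive
```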
Results and Discussions For incident polar angle analysis, we needed to measure the matching frequency at each incident polar angle, which corresponds to the peak diffraction intensity. In practice, the potential range of the matching frequency can be estimated from Equation (15) using the design geometry parameters from the AOTF manufacturer or through the direct observation of the maximum diffraction intensity. It should be noted that in some special applications, the changing ultrasonic signal and the angle of the incident light will greatly change the shape of the AOTF transfer function, which needs further discussion [28,29]. The processes of the tests are organized as follows: • Step 1: Adjust the polar angle of the AOTF by using the turntable and make sure that the incident plane of the AOTF is perpendicular to the incident light. This step can be judged by whether the reflected laser point coincides with the exit point. We recorded the scale of the turntable at this point as the "0" scale, and the other incident polar angles were able to be adjusted with this scale. • Step 2: After adjusting the AOTF incident polar angle, the laser, AOTF and detector must be switched on. Then a montage of images, including transmitted and diffracted light, can be taken by scanning the acoustic frequencies, as shown in Figure 6. For each image, both transmitted and diffracted light can be captured, or only o-polarized and e-polarized light can be measured separately by adjusting the polarizer. Given that, in some cases, the AOTFs do not have the wedge angle compensation, the directions of transmitted o-polarized and e-polarized light are coincident. In these experiments, the frequency step was 0.05 MHz. • Step 3: To find the matching frequency corresponding to the peak diffraction intensity, use the relative diffraction efficiency to evaluate as: where I t and I d are the light intensity values for the transmitted and diffracted light from the same incident light. The intensity values are quantified by the digital number (DN) values with 8-bit digitization. For each order of emitted light, we used the sum of DN values in the effective area, where nine adjacent pixels were selected for calculation, as shown in Figure 7a. • Step 4: The matching frequencies were able to be solved by quartic polynomial fitting, as shown in Figure 7b, and at least five frequency points are required for each incident polar angle. • Step 5: In order to ensure that the temperature of each measurement is close to the room temperature, the AOTF needs to be switched off for a few minutes because the temperature of the AOTF rises during operation, which would affect its polar angular performance. • Step 6: Adjust another incident polar angle of the AOTF, switch on the AOTF and repeat Steps 2-6 again. The coefficients of determination (R 2 ) of all the results were better than 0.98, and the fitting residuals were less than 0.003. Some other data fitting results at different incident polar angles are shown in Figure 8. From these results, we found that the matching frequencies of o-polarized and e-polarized lights are generally not consistent under the same incident polar angles. In other words, at the same acoustic frequency, the diffracted wavelengths of the o-polarized and e-polarized lights are not consistent. However, under a specific incident polar angle, we obtained the same diffracted wavelengths of the opolarized and e-polarized lights at the same acoustic frequency. 
In some research, they named this condition the equivalent point, wherein the matching frequencies are the same for o-polarized and e-polarized light at the same incident polar angle [30,31]. Here, we obtained this condition by adjusting the incident polar angle. Moreover, it would be exactly calculated with an exact acousto-optic interaction geometry of the AOTF. Therefore, we will discuss it after the acousto-optic interaction geometry calibration. In this paper, a total of 21 incident polar angles were sampled in the positive mode, while 17 incident polar angles were sampled in the reverse mode. In the experiments, the minimum angle sampling step was 0.5 • . All of the matching frequencies in positive and reserve modes can be found in Figure 9c,d. After sampling all of the measured data, we used the parameter traversal method to calculate the acousto-optic interaction geometry of the AOTF with the principle of minimum RMSE. For each input of the geometry parameters, the RMSE between the theoretical data and measured data is as follows: (17) where N is the number of sampling points for each mode, and there are two types of sampling points for both o-polarized and e-polarized lights at some incident polar angles. N was 31 for the positive mode in this paper, and 26 for the reserve mode. The θ 0 (i) is the incident polar angle, and the incident wavelength (λ 0 ) was 632.8 nm. F m is the measured data of the matching frequency at each incident polar angle. As shown in Figure 9, we obtained the RMSE distributions in both positive and reserve modes. Then, the acoustooptic interaction geometry of the AOTF device, corresponding to the minimum RMSE, was able to be obtained. As shown in Figure 9a, we obtained θ 1 i = 15.074 • and θ 1 α = 6.484 • in the positive mode with the minimum RMSE (0.032 MHz). Meanwhile, we also obtained θ 2 i = 10.435 • and θ 2 α = 6.486 • in the reserve mode with the minimum RMSE (0.042 MHz) in Figure 9b. The difference of θ α between the two measurements was 0.002 • , which means this calibration method has a high precision and can be better than 0.01 • . We took the average of two results as the calibration value that θ α = 6.485 • . As shown in Figure 9c,d, the measured data were very close to the theoretical data. In summary, we obtained the calibration results of θ * i = 15.074 • , θ α = 6.485 • and θ β = 4.639 • with Equation (4). In addition, according to the reference [19], we calculated the acousto-optic interaction geometry, meeting the parallel tangent condition, whereby θ * i = 15.074 • and θ α = 6.548 • under normal incidence of e-polarized light at 632.8 nm. Therefore, we found that the calibration result of the actual AOTF device was close but did not meet the parallel tangent condition at 632.8 nm. From Figure 9a,b, we found that the acoustic cut angle of the AOTF was more sensitive than the front facet angle. Therefore, we further analyzed the angular frequency relationship under the different acoustic cut angles and front facet angles, and the results are shown in Figure 10. The deviation caused by the change of acoustic cut angles (±0.01 • ) was higher than that caused by the change of front facet angles (±0.1 • ). These results confirm that changing the acoustic cut angle has a greater influence. Moreover, changing the acoustic cut angle makes the angular frequency curves shift up and down, and the larger acoustic cut angle corresponds to the state of shifting up. 
In comparison, changing the front facet angle makes the angular frequency curves shift left and right, and the larger front facet angle corresponds to the state of shifting right. In order to further verify the accuracy of the calibration result, the equivalent points in two modes are calculated and tested here. According to the calibration results and Equation (15), the equivalent points can be solved as the incident polar angle is 2.19 • in the positive mode and −8.34 • in the reserve mode. The measured results are shown in Figure 11a,b, respectively. The results show that the matching frequencies are approximately the same at the same incident polar angle when ignoring the bandwidth for o-polarized and e-polarized lights. Furthermore, the differences in the peak diffraction efficiency are both less than 0.01 MHz. This work is significant for non-polarization AOTF applications. Tolerance Analysis The constant parameters and measurement parameters involved in the calibration method may have some tolerances, which have not been taken into account above. The constant parameters mainly include the principal refractive index tolerance and the acoustic wave velocity tolerance. The principal refractive index tolerance was taken from reference [27], and the acoustic wave velocity tolerance was set to ±0.5 m/s here. The measurement parameters mainly include the rotational accuracy of the precision turntable (2 ) and the sampling step of the tuning frequency (0.05 MHz). The specific range of the tolerance setting can be found in Table 2, and the tolerance distribution of all parameters assumes the uniform distribution probability. We performed a tolerance analysis on the calibration method. The tolerance analysis includes two aspects: parameter sensitivity analysis and Monte Carlo analysis [32]. The variable-controlled method was used to analyze the parameter sensitivity, wherein only one of the parameters varied for 100 times at a time. The standard deviations of the calibration results were used as the parameter sensitivity analysis index. The results show that the principal refractive index tolerance had the greatest influence on the calibration results, especially for the front facet angle and the acoustic cut angle. In comparison, other parameter tolerances had very little effect. Monte Carlo analysis was then used to analyze the statistical tolerance of the entire calibration process. In this paper, a total of 1000 simulated calibrations were performed as the statistical sample in Figure 12. The statistical results of the error distribution of the calibration results are shown in Table 3, and the cumulative probability of a result within than 0.1 • was greater than 99.7%. Moreover, the cumulative probability of the front facet angle falling within 0.01 • was greater than 18.4%. The cumulative probability of an acoustic cut angle falling within 0.01 • was greater than 35.3%. The cumulative probability of a back facet angle falling within 0.01 • was greater than 83.0%. Furthermore, the cumulative probability of maximum a cut-angle deviation falling within 0.01 • was greater than 15.0%. Conclusions In summary, we proposed a method for calibrating the acousto-optic interaction geometry of AOTFs based on polar angular analysis. Moreover, based on this method, we obtained the complete acousto-optic interaction geometry of an actual AOTF device, including the front facet angle (θ * i ), the acoustic cut angle (θ α ) and the back facet angle (θ β ). 
Specifically, we carried out the following research: (a) We established a model of the AOTF angular frequency relationship that can be solved analytically. (b) We proposed a novel and easy-to-perform method for calibrating the acousto-optic interaction geometry of an actual AOTF device; the experimental results showed high precision, with the acoustic cut angle result falling within 0.01°. (c) We analyzed the polar angular performance in relation to the acousto-optic interaction geometry of the AOTF, and the results showed that the acoustic cut angle of the AOTF is more sensitive than the front facet angle. Specifically, changing the acoustic cut angle makes the angular frequency curves shift up and down, and a larger acoustic cut angle corresponds to an upward shift. In comparison, changing the front facet angle makes the angular frequency curves shift left and right, and a larger front facet angle corresponds to a rightward shift. (d) We calculated and tested the equivalent points for the o-polarized and e-polarized lights in both positive and reverse modes, which is vital to the non-polarization applications of AOTFs. (e) We analyzed the parameter sensitivity and Monte Carlo tolerance of the calibration method. The results of the parameter sensitivity analysis showed that the principal refractive index of the crystal has a large influence on the calibration results, while other factors have little influence. The results of the Monte Carlo tolerance analysis showed that the cumulative probability of the results falling within 0.1° with this method is greater than 99.7%. Moreover, the probability of the front facet angle falling within 0.01° is greater than 18.4%; for the acoustic cut angle and the back facet angle, the corresponding probabilities are greater than 35.3% and 83.0%, respectively. These works are of great significance for studies of AOTFs, such as ray tracing, characteristic analysis and optical system design.
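As a compact recap of the computational steps described above (matching-frequency extraction by quartic fitting and parameter traversal by minimum RMSE, in the spirit of Equations (16) and (17)), the sketch below strings the two steps together. The relative-efficiency data and the forward frequency model are placeholders, not the paper's implicit function F, so the printed numbers are illustrative only.

```python
# Sketch of the calibration workflow: (1) find each matching frequency as the
# peak of a quartic fit to the relative diffraction efficiency; (2) traverse
# candidate geometry parameters and keep the pair with minimum RMSE against the
# measured matching frequencies.
import numpy as np

def matching_frequency(freqs_mhz, efficiency):
    """Peak of a quartic polynomial fitted to efficiency-vs-frequency samples."""
    coeffs = np.polyfit(freqs_mhz, efficiency, deg=4)      # needs >= 5 points
    fine = np.linspace(freqs_mhz.min(), freqs_mhz.max(), 2001)
    return fine[np.argmax(np.polyval(coeffs, fine))]

def model_frequency(theta0_deg, theta_i_deg, theta_a_deg):
    # placeholder forward model standing in for F(theta0; theta_i, theta_a, lambda)
    return 80.0 + 0.8 * theta0_deg * (1 + 0.02 * (theta_i_deg - 15.0)) - 2.0 * (theta_a_deg - 6.5)

# synthetic "measurements": one frequency scan per incident polar angle
rng = np.random.default_rng(0)
theta0 = np.arange(-5.0, 5.5, 0.5)
f_meas = []
for t0 in theta0:
    center = model_frequency(t0, 15.074, 6.485)
    scan = center + np.arange(-0.5, 0.51, 0.05)
    eta = np.exp(-((scan - center) / 0.2) ** 2) + 0.01 * rng.normal(size=scan.size)
    f_meas.append(matching_frequency(scan, eta))
f_meas = np.array(f_meas)

# parameter traversal with the minimum-RMSE criterion
best = (np.inf, None, None)
for ti in np.arange(14.8, 15.4, 0.005):
    for ta in np.arange(6.3, 6.7, 0.002):
        rmse = np.sqrt(np.mean((model_frequency(theta0, ti, ta) - f_meas) ** 2))
        if rmse < best[0]:
            best = (rmse, ti, ta)
print(f"RMSE = {best[0]:.4f} MHz at theta_i = {best[1]:.3f} deg, theta_a = {best[2]:.3f} deg")
```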
5,987.8
2023-05-01T00:00:00.000
[ "Physics" ]
Mass Spectrometry of Cardiac Calsequestrin Characterizes Microheterogeneity Unique to Heart and Indicative of Complex Intracellular Transit* Cardiac calsequestrin concentrates in junctional sarcoplasmic reticulum in heart and skeletal muscle cells by an undefined mechanism. During transit through the secretory pathway, it undergoes an as yet uncharacterized glycosylation and acquires phosphate on CK2-sensitive sites. In this study, we have shown that active calsequestrin phosphorylation occurred in nonmuscle cells as well as muscle cells, reflecting a widespread cellular process. To characterize this post-translational modification and resolve individual molecular mass species, we subjected purified calsequestrin to mass spectrometry using electrospray ionization. Mass spectra showed that calsequestrin glycan structure in nonmuscle cells was that expected for an endoplasmic reticulum-localized glycoprotein and showed that each glycoform existed as four mass peaks representing molecules that also had 0–3 phosphorylation sites occupied. In heart, mass peaks indicated carbohydrate modifications characteristic of transit through Golgi compartments. Phosphorylation did not occur on every glycoform present, suggesting a far more complex movement of calsequestrin molecules in heart cells. Significant amounts of calsequestrin contained glycan with only a single mannose residue, indicative of a novel post-endoplasmic reticulum mannosidase activity. In conclusion, glyco- and phosphoforms of calsequestrin chart a complex cellular transport in heart, with calsequestrin following trafficking pathways not present or not accessible to the same molecules in nonmuscle. Both CSQ isoforms are substrates for protein kinase CK2 in vitro (16,23), and phosphorylation sites have been determined for canine cardiac and rabbit fast-twitch isoforms (16).
The fast-twitch isoform is phosphorylated on Thr373, whereas the cardiac isoform is phosphorylated on a cluster of three serine residues that reside in the cardiac-specific tail (Ser378,382,386). These three serine residues were previously shown to be partially phosphorylated in the purified cardiac isoform, whereas no phosphate appeared in the rabbit fast-twitch isoform (16). A function for CSQ phosphorylation by CK2 has not been determined; however, a mechanism for sorting of resident ER and Golgi proteins by CK2 phosphorylation on cytosolic sites has been characterized (24-30). In this study, we report definitive structural findings for CSQ revealed by mass spectrometry, reflecting its cellular transport in heart and muscle cells. CSQ glycosylation and phosphorylation, although highly similar in nonmuscle cells as diverse as human embryonic kidney (HEK) and insect Sf21 cells, were distinctly different in muscle cells, reflecting a pathway in muscle not present or not accessible in nonmuscle, one in which phosphorylation on CK2 sites exists in a complex compartmental relationship with glycan-modifying reactions. * This work was funded by Grant HL62586 from the NHLBI, National Institutes of Health. Animals-Canine tissues were obtained from three separate mongrel dogs under anesthesia. Animals were obtained from authorized suppliers and maintained in accordance with National Institutes of Health guidelines. The Division of Laboratory Animal Resources of Wayne State University is fully equipped and licensed by the appropriate agencies. Construction of Recombinant CSQ Viruses-Replication-deficient adenoviruses containing wild-type canine cardiac CSQ cDNA (Ad.CSQ) or a triple point S378A,S382A,S386A mutant (Ad.nPP) that removed CK2 phosphorylation sites (16) were amplified from the gt10 clone IC3A (14) by PCR (31) and primers containing restriction sites for directional cloning. The forward primer contained 63 bp of 5′-untranslated sequence; the reverse primer, 5 bp of 3′-untranslated sequence. The mutating reverse primer (67-mer) contained 3 single-base changes necessary for the Ser to Ala conversions. PCR products were cloned into pBluescript (Stratagene), sequenced by the dideoxy method (32), and then subcloned into a transfer plasmid (pAbl.CMV) containing the cytomegalovirus major immediate-early promoter. Recombinant viruses were identified by restriction endonuclease digestion and isolated by CsCl density centrifugation, with titers of viral stocks determined by plaque assay using HEK 293 cells (33). Plaque-purified clones were expanded in HEK 293 cells, and virus preparations were purified by CsCl density centrifugation. Titers of viral stocks were determined by plaque assay using HEK 293 cells. Recombinant baculovirus encoding canine cardiac CSQ was prepared as previously described (34). Infection of Cells with Viruses-HEK 293 cells were infected with recombinant adenoviruses at a multiplicity of infection of ~1.0 for 4 h in Dulbecco's modified Eagle's medium without serum, then incubated under our normal conditions for 48-72 h. Sf21 cells were infected with recombinant baculovirus for 48 h in ESF 921 medium.
All recombinant adenovector work was carried out in accordance with National Institutes of Health guidelines for research involving recombinant DNA molecules and the policies of the Wayne State University Office of Environmental Health and Safety. Purification of Recombinant Cardiac CSQ from Nonmuscle Cells-Purification of CSQ from cultures of HEK 293 and Sf21 cells was carried out 2 d post-infection. Cell pellets were resuspended at 1 mg/ml in buffer A containing 20 mM MOPS, pH 7.5, 250 mM NaCl, 1% CHAPS, 0.5 mM EGTA, and 0.5% of a protease inhibitor mixture (Sigma). 10 mM NaF and 10 mM β-glycerophosphate were included in the buffer to inhibit protein phosphatases. Extracts were centrifuged at 50,000 × g for 20 min, bound to DEAE-Sephacel (Amersham Biosciences), and then washed extensively in buffer B (buffer A without CHAPS) until detergent was removed. CSQ was eluted in buffer B with 750 mM NaCl. Eluate was loaded onto phenyl-agarose and purification carried out as previously described (35), with calsequestrin eluted in purified form by adding 20 mM CaCl2 to the elution buffer. SDS-PAGE was carried out according to Laemmli (36). Protein assays were carried out using a Lowry protocol (37). Purification of Native CSQ from Canine Cardiac and Skeletal Muscle Tissue-Native CSQs from canine left ventricles and hind leg muscles were also purified as previously described (35), with minor modifications. Homogenization was carried out in a buffer containing 20 mM MOPS, pH 7.5, 50 mM NaCl, 0.1 mM dithiothreitol, 0.5% protease inhibitor mixture (Sigma), and phosphatase inhibitors as described above. Membrane vesicles were isolated from homogenates following removal of only a 500 × g pellet to recover a larger portion of CSQ-containing membranes. Dephosphorylation and Phosphorylation of Purified CSQ-CSQ phosphorylation and dephosphorylation were carried out as previously described (16). For phosphorylation, 0.2 µg of calsequestrin was incubated in 20 mM MOPS, pH 7.5, 150 mM NaCl, 0.5 mM EGTA, 10 mM MgCl2, 0.1% Triton X-100, 20 µM [γ-32P]ATP, and 10 ng of purified CK2 (Promega). When samples were pretreated with acid phosphatase, 1 µg of calsequestrin was incubated for 20 min in 30 µl of 30 mM MES buffer, pH 5.8, 0.1 mM EGTA, with or without the addition of 0.05 units of acid phosphatase (Sigma), determined to be the minimal amount capable of removing all radioactivity resulting from CK2. Phosphatase- or control-treated samples were diluted 10-fold to neutral pH in the presence of sufficient kinase to label calsequestrin to high stoichiometry. Mass Spectrometry-For mass spectrometry, salts and buffers were removed from calsequestrin samples by repeated centrifugations through a Centricon-30 concentrator (Amicon) after the addition of 20 mM EGTA to phenyl-agarose-purified samples to chelate Ca2+. Electrospray ionization mass spectrometry was carried out at the Biotechnology Resource Facility of the HHMI Biopolymer Facility/W. M. Keck Foundation, Yale University, using a Q-ToF mass spectrometer (Micromass, Altrincham, UK). Prior to analysis, samples were desalted using C-4 ZipTips (Millipore Corp., Bedford, MA). The eluted samples in 50% acetonitrile/0.1% formic acid were analyzed using the nanospray technique in positive ion mode. Masses were calculated using Q-Tof MassLynx software. Spectra were calibrated using either sodium iodide or the fragment ions from the MS/MS spectrum of (Glu)fibrinogen (Sigma).
RESULTS Cardiac CSQ purified from dog heart contains significant levels of phosphate (Ͼ1 mol/mol) on a carboxyl-terminal cluster of serine residues that are in vitro substrates of protein kinase CK2. To determine whether this reaction is unique to heart cells, we analyzed the purified CSQ from HEK and Sf21 cells, following heterologous expression in nonmuscle cells using a recombinant adenovirus or baculovirus. Cardiac CSQ was phosphorylated by purified CK2 either under control conditions or following treatment with acid phosphatase to remove endogenous phosphate. Pretreatment with phosphatase led to increases in 32 P incorporation into CSQ, showing that CK2 sites in CSQ existed predominantly in a phosphorylated form in nonmuscle cells. Levels of endogenous CSQ phosphorylation in ER were comparable with that observed in cardiac SR (Fig. 1A). Phosphatase treatments increased subsequent CK2 phosphorylation by 2-4-fold, depending upon the cell type. CSQ purification from canine skeletal muscle led to isolation of both CSQ isoforms, with the fast-twitch form (63 kDa) being the predominant one, as previously reported (38). Application of our phosphatase/CK2 kinase assay to the skeletal muscle preparation showed that, as in heart tissue, cardiac CSQ (55 kDa) was present mainly as the phosphoprotein (Fig. 1B), whereas the canine fast-twitch isoform did not accumulate in a phosphorylated form (Fig. 1B, upper band). To try and resolve individual forms of phosphorylated CSQ, we subjected purified CSQ to mass spectrometry using electrospray ionization. Mass spectrometry of purified wild-type CSQ from HEK cells showed a series of mass peaks differing by about 81 Da, consistent with a mixture of molecules differing by a single phosphate moiety ( Fig. 2A). In contrast, the phosphorylation site mutant nPP showed a series of mass peaks that differed by ϳ162 Da, consistent with a mixture of glycoforms differing by a single mannose (Fig. 2B). Mass peaks for CSQ versus the nPP mutant also reflected the ϳ47 Da difference between three serines and three alanines. The mass peaks for wild-type CSQ and the nPP mutant in Fig. 2 are offset by 47 Da to align equivalent mass peaks. Comparison of the aligned spectra for wild-type CSQ and the nPP mutant from HEK cells (compare Fig. 1, A and B) shows that wild-type CSQ consists of two glycoforms (Man8-and Man9GlcNAc) along with three additional phosphoforms. For example, the mass expected for unmodified cardiac CSQ tran- script is 45,269 Da, and CSQ containing a single high mannose oligosaccharide is 47,135 Da (Man9GlcNAc2 ϭ 1865 Da). As predicted, this exact mass peak (47,135 Da) appears in the CSQ spectrum along with 3 peaks of higher mass at 81-Da intervals, corresponding to protein molecules (with the oligosaccharide Man9GlcNAc2) in unphosphorylated, singly, doubly, and triply phosphorylated states. All CSQ phosphate resided on the cardiac-specific carboxyl-terminal serine cluster Ser 378,382,386 , because the latter three mass forms were absent from the nPP mutant. CSQ molecules in which the oligosaccharide has been trimmed to Man8GlcNAc2 (and contains no phosphate) have an expected mass of ϳ46,974 Da, in agreement with the first mass peak observed in Fig. 1A. Peaks corresponding to phosphorylated Man8GlcNAc2 are obscured by the fact that the difference in mass due to a single mannose residue (162 Da) is very nearly the same as the mass change from two phosphates (2 ϫ 81 Da). 
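The peak assignments above follow simple additive mass arithmetic. The short sketch below tabulates the expected masses (backbone plus glycan plus 0-3 phosphates) using the values quoted in the text; differences of 1-2 Da from the quoted peaks reflect rounding of the average residue masses.

```python
# Tabulate expected CSQ masses: a 45,269 Da backbone, a Man(n)GlcNAc2 glycan
# (Man9GlcNAc2 = 1865 Da, 162 Da per mannose), and 0-3 phosphates at ~80 Da each.
BACKBONE = 45269        # deduced mass of unmodified cardiac CSQ, Da
MAN9_GLYCAN = 1865      # Man9GlcNAc2 oligosaccharide, Da
MANNOSE = 162           # mass removed per trimmed mannose, Da
PHOSPHATE = 80          # mass added per phosphorylated serine, Da

for n_man in (9, 8):    # the two glycoforms seen for wild-type CSQ in HEK cells
    glycan = MAN9_GLYCAN - (9 - n_man) * MANNOSE
    masses = [BACKBONE + glycan + p * PHOSPHATE for p in range(4)]
    print(f"Man{n_man}GlcNAc2 + 0-3 phosphates: {masses} Da")
```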
Thus, for example, the peak of 47,135 Da corresponds both to the Man8GlcNAc2 glycoform with two phosphates and Man9GlcNAc2 with no phosphates. Therefore, wild-type CSQ from HEK cells existed in only two glycoforms, and both glycoforms appeared to exist in each of four phosphoforms (0–3 sites occupied, summarized in Fig. 4). The nPP form of CSQ existed as five glycoforms, indicating more extensive glycan processing than occurred for the wild-type protein. The mass spectrum for wild-type CSQ overexpressed in Sf21 insect cells was very similar to that from HEK cells, yielding a spectrum containing the identical six peaks (data not shown, but see Fig. 4). To validate our interpretation of the spectra for CSQ and nPP in HEK cells, we analyzed CSQ and nPP from HEK cells treated with 0.5 µg/ml tunicamycin over the entire time course of overexpression, a treatment known to prevent N-linked glycosylation (39). The mass spectrum for CSQ treated with tunicamycin yielded a mass peak corresponding to the amino acid backbone (deduced mass = 45,269 Da) and three higher mass peaks separated by roughly 80 Da, corresponding to molecules having one, two, and three phosphates (Fig. 2C). The vast majority of CSQ molecules contained two or three phosphates, showing that glycosylation per se was not necessary for phosphorylation to occur. If CK2 phosphorylation sites were also removed, then all of the CSQ was synthesized as a single mass species of 45,269 Da (Fig. 2D), which is the deduced mass of the expressed canine CSQ clone without any covalent modifications. Mass spectrometry of native CSQ purified from canine ventricular tissue gave a mass spectrum that was more complex than that from HEK and Sf21 cells (Fig. 3, upper panel). In Fig. 3, the putative identification of glyco- and phosphoforms for the cardiac isoform is indicated above the peaks, and glycoforms that do not show phosphate distributed on the serine cluster (0–3 sites occupied) are indicated by arrows; the mass peak of 46,398 Da (asterisk) probably contains components of both Man3 (plus 2 phosphates) and Man4 glycoforms (see Fig. 4). As for HEK cells, tissue CSQ showed a succession of mass peaks separated by 81 Da, representing a similar combination of glyco- and phosphoforms. The masses of native CSQ molecules, however, were lower by 70 Da than the recombinant protein from HEK and Sf21 cells. This difference may be because of a polymorphism within the animal source of our native CSQ sample, compared with the protein encoded by the well characterized cardiac CSQ cDNA IC3A (14) used in both virus constructs (and from which the deduced mass was calculated). Thus, for example, assuming that the lowest peak seen for tissue CSQ corresponded to a glycoform with no phosphate, the peak of 45,916 Da represents the structure Man1GlcNAc2 but with an additional 79 Da moiety attached to the protein backbone. Therefore, it does not affect any of the differences in mass observed for CSQ isoforms. The nature of this mass discrepancy was not further investigated. Compared with nonmuscle, molecules of CSQ from heart tissue exhibited a greater degree of mannose trimming and a larger range of glycoforms (Man1 through Man6), and glycoforms were less uniformly phosphorylated. For example, for one of the highest mass peaks observed (46,559 Da, Man5), no protein peak was found having a mass 81 Da higher. A much smaller peak at 162 Da higher, therefore, represents an additional mannose (Man6), which also lacked additional phosphate-containing mass peaks.
On the other hand, the lowest glycoform masses (45,916 and 46,238 Da, Man1,3) appeared to exist in all 4 phosphoforms (0–3 sites occupied). Again, glycoforms and phosphoforms are schematically shown in Fig. 4. The mass peak at 46,398 Da represents Man3 plus 2 phosphates but may also contain molecules of Man4 with no phosphate, in which case it appears that these molecules are also not phosphorylated in vivo, because there is no peak corresponding to Man4 plus 3 phosphates. In summary, Man1,3 are partially to fully phosphorylated in vivo, whereas Man4-6 remain unphosphorylated. To compare the mass spectrum for the fast-twitch isoform, we purified CSQ from canine hind leg muscle by a procedure identical to that for heart tissue. The mass spectrum for the skeletal muscle protein (Fig. 3, lower panel) consisted of only two protein peaks, separated by 323 Da, a difference of two mannose residues. This pattern appears to be maintained in the cardiac isoform as well (compare arrowheads in both panels). Based upon a deduced mass of 42,216 Da for the canine fast-twitch skeletal muscle isoform (L. Jones, personal communication), the 2 peaks in the mass spectrum of 42,810 and 43,133 Da are likely to represent the Man1GlcNAc2 and Man3GlcNAc2 forms but with each population of molecules leaving an additional 25 Da unaccounted for (again most likely representing a polymorphism within the sample studied here, compared with that from which the cDNA clone was derived). Most notable in the fast-twitch isoform was the absence of phosphate in the two observed glycoforms, a finding in agreement with our in vitro phosphorylation data (Fig. 1B). In Fig. 4, we have summarized the data from the mass spectra, providing a schematic view of the distributions of CSQ glyco- and phosphoforms. In cases where mass peaks probably contain contributions from more than one glycoform, we have approximated relative contributions, as detailed in the figure legend. FIG. 4. Schematic summary of CSQ glycoforms and phosphoforms in SR and ER. Results of mass spectrometry of native dog heart, native dog fast-twitch skeletal muscle, cardiac CSQ (wild-type and nPP mutant) from human HEK cells, and cardiac CSQ from Sf21 insect cells. Amounts of individual glycoforms were derived from relative peak heights and are expressed as number of mannose residues in the oligosaccharide chain. For the two extremes of oligosaccharide trimming (Man1GlcNAc2 and Man9GlcNAc2), the structures are drawn with N-acetylglucosamine shown as squares and one to nine mannose residues as circles. Relative levels of singly, doubly, and triply phosphorylated CSQs (dark gray bars) are indicated next to each non-phosphorylated glycoform (black bars). In some cases, contributions from more than one isoform are predicted to comprise a single peak in the mass spectrum, and individual components are approximated as follows. a, mass peak 46,398 Da in heart CSQ (Fig. 3, upper panel) was assumed to contain Man3 (plus 2 phosphates) and Man4 glycoforms (indicated by connecting bar) in a 2 to 1 ratio. b, mass peak 47,135 Da in HEK CSQ (Fig. 2, panel A) was assumed to contain equal amounts of two isoforms as shown. c, mass peak 47,295 Da (same spectrum) was assumed to contain Man9 and 8 glycoforms in a 2 to 1 ratio. Individual glycoform contributions from overlapping mass peaks for Sf21 CSQ were approximated the same as for HEK CSQ. DISCUSSION Cellular trafficking of CSQ in muscle and nonmuscle cells is readily visualized by mass spectrometric analysis of intact CSQ molecules, which is unusually revealing given the nature of CSQ as a soluble ER/SR glycoprotein with a single oligosaccharide. Resolution of CSQ microheterogeneity yields important insights into its intracellular trafficking by charting the actions of intracellular mannosidases and yields insights into CSQ phosphorylation by revealing differences in the degree of phosphorylation among many CSQ glycoforms. Phosphorylation of CSQ Is an Active Process in All Cells-Cardiac CSQ contained between 1.5 and 2.0 mol Pi/mol protein on CK2 sites, whether biosynthesized in canine heart or in nonmuscle cells. Phosphorylation of cardiac CSQ varied from non-phosphorylated to fully (3 mol/mol) phosphorylated. Interestingly, nonmuscle cells contained sufficient calsequestrin kinase to produce high steady levels of the phosphorylated protein, even upon overexpression to levels comparable with that of heart cells. Mass spectrometry showed similar patterns of phosphorylation on similar glycoforms in human and insect cells, indicating an extraordinary conservation of protein processing and suggesting that phosphorylation of lumenal ER/SR proteins may be a ubiquitous cellular reaction. Although the mechanism and precise cellular compartmentation remain uncertain, it appears that CSQ phosphorylation involves CK2 or a CK2-like protein kinase which co-localizes with CSQ, if only transiently. CSQ Glycan Processing in ER and SR-In all cells, CSQ glycosylation occurred on only one of its two potential N-glycosylation sites, likely Asn316, because this site is highly conserved among species and isoforms (40). Prevention of glycosylation with tunicamycin did not prevent phosphorylation on CK2 sites, and besides N-glycosylation and phosphorylation on Ser378,382,386 no other modifications of CSQ occurred. Nonetheless, glycan processing was different in muscle and nonmuscle cells. The predominant glycoforms of cardiac CSQ in nonmuscle cells were Man9GlcNAc2 and Man8GlcNAc2, consistent with processing by ER α1,2-mannosidase and indicative of a protein that does not leave the ER (41-43). The mechanism for ER retention of CSQ is unknown, as has previously been discussed (44,45) and investigated (46,47). Although many resident ER proteins contain the carboxyl-terminal peptide retrieval signal -KDEL (48), this sequence is absent from CSQ and other muscle-specific resident SR/ER proteins (44,45). In contrast to Man8 and 9 forms in nonmuscle ER, maturation of cardiac CSQ in heart led predominantly to Man1, 3, and 4 forms of the glycan with most present as Man1 or 3, indicative of transit through the Golgi and the actions of post-ER mannosidases (41-43). These data are in agreement with previous biochemical analyses by Jorgensen et al. (21), who reported that fast-twitch CSQ exists predominantly as the Man3GlcNAc2 glycoform. Golgi transit of CSQ was also reported by Thomas et al. (22), who showed that CSQ is transported to terminal cisternae of the SR junction in clathrin-coated vesicles in developing chick skeletal myotubes. A mechanism by which CSQ could move through the Golgi complex in muscle but could not move beyond the ER in nonmuscle represents an interesting area for future research.
Cell Biology of Phosphorylated Cardiac CSQ-Cardiac CSQ exhibited a complex pattern of phosphorylation on molecules that are clearly processed within the secretory pathway, whereas the fast-twitch isoform was not phosphorylated, consistent with results of point mutants using this isoform (49,50). Cardiac CSQ existed in a broad range of glycoforms: those that contained phosphate (Man1,3) and those lacking phosphate (Man4-6). The pattern of phosphorylated glycoforms suggests that transit to specific compartments may require prior phosphorylation or that some compartments may contain a CSQ phosphatase. It could also be that phosphorylated glycoforms of cardiac CSQ do not co-localize with non-phosphorylated glycoforms, a possibility that is currently under investigation. Nevertheless, our data support the idea that there exists a cellular relationship between glycosylation, a modification generally viewed as a marker for cell transport, and the phosphorylation state of the molecule. Among CSQ glycoforms that do undergo phosphorylation (Man1,3), 10-20% of the molecules contained no phosphate. It is hypothesized that these phosphate-free molecules are part of a dynamic phosphorylation-dephosphorylation cycle. In nonmuscle cells, deletion of phosphorylation sites from CSQ resulted in a roughly 2-fold decrease in the Man9 glycoform in favor of a mannose-trimmed glycan, reflecting the actions of ER α1,2-mannosidase and Golgi α1,2-mannosidase I, the enzymes generally thought to reduce glycans to Man5 (41)(42)(43). This divergence between glycosylation and phosphorylation of cardiac CSQ in nonmuscle as well as muscle cells may reflect divergent pathways that lead to differential glycosylation in a phosphorylation-dependent manner. A similar divergence of phosphorylation and glycosylation was previously reported for the lumenal ER glycoprotein GRP94 (34), where CK2 phosphorylation in intact cells occurred for only one of two distinct pathways for glycoprotein processing. In conclusion, heart cells process CSQ through two primary modifications within the secretory pathway on the way to retention by terminal cisternae: N-glycosylation and phosphorylation on CK2 sites. The distribution of phosphate among CSQ glycoforms suggests that the phosphorylation and glycosylation processes involve both a common and a distinct cellular compartmentation. Details of CSQ transit in cardiac cells may shed light on mechanisms that regulate calcium transients by maintaining levels of component proteins in terminal cisternae.
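The glycoform and phosphoform assignments discussed above reduce to simple mass arithmetic: each additional mannose adds roughly 162 Da, each GlcNAc roughly 203 Da, and each occupied CK2 site roughly 80 Da. The short sketch below rechecks the peak assignments quoted in the text using these approximate average residue masses; the residue masses are standard textbook values rather than figures from the paper, so small discrepancies (such as the ~25 Da polymorphism noted above) are expected.

```python
# Hedged sketch using approximate average residue masses (Da); not the authors' calculation.
MAN = 162.14      # mannose (hexose) residue
GLCNAC = 203.19   # N-acetylglucosamine residue
PHOS = 79.98      # phosphate (HPO3) added per phosphorylated CK2 site

def glycoform_mass(polypeptide, n_man, n_glcnac=2, n_phos=0):
    """Mass of a CSQ molecule carrying a Man(n)GlcNAc2 glycan and n_phos phosphates."""
    return polypeptide + n_glcnac * GLCNAC + n_man * MAN + n_phos * PHOS

# Fast-twitch skeletal isoform: deduced polypeptide mass 42,216 Da (from the text).
for n_man, observed in [(1, 42810), (3, 43133)]:
    calc = glycoform_mass(42216, n_man)
    print(f"Man{n_man}GlcNAc2: calc {calc:.0f} Da, observed {observed} Da, "
          f"difference {observed - calc:+.0f} Da")   # ~+25 Da, as noted in the text

# Cardiac isoform: from the observed Man1 peak at 45,916 Da, two extra mannoses
# predict the Man3 peak, and two phosphates on Man3 predict the 46,398 Da peak.
man3 = 45916 + 2 * MAN          # ~46,240 Da (observed 46,238)
man3_plus_2p = 46238 + 2 * PHOS  # ~46,398 Da (observed 46,398)
print(round(man3), round(man3_plus_2p))
```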
5,553.8
2002-10-04T00:00:00.000
[ "Biology" ]
Preparation of nitrogen-doped TiO2 nanotube deposited with gold nanoparticles for photocatalytic phenol degradation under visible light One of the many functions of TiO2 photocatalysts is degrading organic compounds. TiO2 photocatalysts have a band gap of 3.2 eV, which is equivalent to the energy of ultraviolet (UV) photons with a wavelength of 388 nm and allows them to be effectively activated only by UV light. However, they are less effective under visible (Vis) light. Researchers have developed many ways to use the photocatalytic properties of TiO2 under visible light via nitrogen doping. Further, the photocatalytic activity of N-doped TiO2 can be improved via surface plasmon resonance. In this study, N-doped TiO2 nanotubes (N-TiO2 NTs) are prepared and deposited with Au nanoparticles (Au/N–TiO2 NTs) via electrodeposition. The TiO2 NTs and Au/N-TiO2 NTs are characterized using UV-VIS spectrometry, Fourier transform infrared spectroscopy (FTIR), X-ray diffractometry (XRD), field emission scanning electron microscopy, energy dispersive spectroscopy (EDS), and linear sweep voltammetry (LSV). The photocatalytic test under tungsten lamp illumination of a 20-ppm phenol solution in a batch reactor shows a phenol degradation of 60 %, whereas a 20-ppm bisphenol A (BPA) solution is degraded by 40 %. Introduction The increasing human needs for energy, in particular for fossil fuels, have led the industry to rapidly evolve in this field. However, this must be balanced with the development of exploration technologies that are environmentally friendly because the untreated waste produced by such fuels exploration will pollute the environment. Phenol and its derivatives (phenolics) are among such hazardous waste products [1]. An alternative way proposed to deal with pollutants such as phenols and phenolics is using photocatalysts. Photocatalytic reactions can be applied for environmental repair using sunlight as energy source. TiO2 has environmentally friendly properties and is nontoxic, inexpensive, and stable. Because of these advantages, TiO2 is developed for several environmentally friendly applications [2]. For example, it is used for coating the vessel bottoms to make them water resistant by its hydrophobic properties. TiO2 can be used to degrade organic pollutants, such as phenols and derivatives, via photocatalysis. TiO2 anatase phase has a quite wide energy gap (band gap) of ~3.2 eV, which is equivalent to the energy of ultraviolet (UV) photons with a wavelength of 388 nm. Hence, TiO2 is photocatalytically active only under UV light and cannot be effectively used under visible light, although this is very abundant and easily available owing to sunlight. Therefore, activating the photocatalytic activity of TiO2 using visible light would be beneficial. In this study, TiO2 nanotubes were modified via nitrogen doping (N-TiO2 NTs) and decorated with Au nanoparticles (Au/N-TiO2 NTs) via electrodeposition. Then, their ability to degrade phenols under visible light was tested. Materials The materials used to prepare the modified nanotubes and test their photocatalytic activity included a titanium plate (imported from China), deionized water, ammonium fluoride (Merck), sodium nitrate (Merck), ethylene glycol (Merck), a 4.5 × 1-cm 2 Pt electrode, absolute ethanol, and a 5-mM chloroauric (III) acid solution. 
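The correspondence between a band gap and the longest photon wavelength able to activate the photocatalyst, quoted in the introduction (3.2 eV, equivalent to 388 nm), follows from E = hc/λ, or λ(nm) ≈ 1239.84/E(eV). A minimal check; the function name is illustrative, and the 2.16 eV figure comes from the characterization results reported below:

```python
def bandgap_to_wavelength_nm(e_gap_ev: float) -> float:
    """Longest wavelength (nm) whose photon energy still exceeds the band gap."""
    return 1239.84 / e_gap_ev

print(bandgap_to_wavelength_nm(3.2))    # ~387.5 nm, i.e. the UV threshold quoted for anatase TiO2
print(bandgap_to_wavelength_nm(2.16))   # ~574 nm, well inside the visible range for Au/N-TiO2
```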
Synthesis The Ti plate was cut into smaller plates with a size of 4.5 × 1 cm², which were sanded with grade-1500 abrasive paper. To obtain polished and clean surfaces, the plates were sonicated for 30 min with acetone and for another 30 min with ethanol, rinsed with distilled water, and dried. Then, the plates were anodized by placing them in an electrolytic solution with the Pt electrode as the cathode for 1 h at a potential of 40 V. The electrolytic solution comprised urea and 0.3% NH4F in ethylene glycol with 2% distilled water [3]. The amorphous TiO2 obtained was calcined at 450 °C for 2 h. Then, the N-doped TiO2 was deposited with metallic Au nanoparticles via electrodeposition by placing it as the cathode in an electrolytic solution (HAuCl4; 2.5 × 10⁻⁴ M), whereas the anode was a woven Pt wire. Electrodeposition was performed at a potential of −1.5 V for 60 s. Then, the cathode was removed and dried in a desiccator for 24 h. Characterization The TiO2 plates obtained were analyzed via UV-Vis diffuse reflectance spectroscopy (UV-DRS) to determine the band gap, by Fourier transform infrared spectroscopy (FTIR) to identify the functional groups, by X-ray diffractometry (XRD) to determine the crystal structures, by scanning electron microscopy and energy dispersive spectroscopy (SEM-EDS) to study the morphology, and by linear sweep voltammetry (LSV) to evaluate the photocatalytic activity. Testing phenol and BPA degradation ability Phenol standard solutions were prepared by taking aliquots from the stock solution to obtain concentrations of 5, 10, 15, 20, and 25 ppm; standard bisphenol A (BPA) solutions were made with the same concentrations. After analyzing the standard solutions using the UV-Vis spectrophotometer, the degradation of phenol and BPA was measured by analyzing their 20-ppm standard solutions without potential bias for 1 h in three different modes: 1) without photocatalyst, 2) with photocatalyst under UV light, and 3) with photocatalyst under visible light. Characterization The addition of the modifying Au nanoparticles on the surface of the N-TiO2 plates did not shift their energy gap, but a new peak characteristic of Au nanoparticles appeared at around 560 nm. This peak indicates that the Au/N-TiO2 photocatalyst is active in the visible light region. This absorbance peak is the result of the plasmon resonance phenomenon on the surface of the Au nanoparticles, which initiates the photocatalytic activity of the Au/N-TiO2 NTs. The presence of two peaks, in the UV and visible light ranges, indicates that the Au/N-TiO2 photocatalyst can be activated in both cases. From the curve in figure 1, the energy gap required to activate the Au/N-TiO2 photocatalyst, obtained by extrapolating to F(R)² = 0, was calculated to be 2.16 eV. Figure 2 shows the XRD spectrum of the Au/N-TiO2 NTs; the crystal structures related to the 2θ values are listed in table 1. Based on JCPDS 2-1095, the characteristic peaks of Au are those at 38°, 44°, 82°, 98°, 111°, and 116°. Yazid et al. [4] reported a peak of Au at 47.6°, and in our XRD spectrum there is a peak at 47.61°; this suggests that Au was deposited on the surface of the TiO2 NTs, with a crystallite size of 26.64 nm. The elemental analysis shown in figure 3 and table 2 proves that the Au nanoparticles had been successfully deposited on the surface of the N-TiO2 plate via electrodeposition and that no impurity element is present. Figure 4 shows that the current response of the Au/N-TiO2 NTs illuminated by the UV and tungsten lamps was higher than that without illumination.
When illuminated by the tungsten lamp (visible light), the current response was smaller than when exposed to UV light; however, the response was still higher than that without illumination. Table 3 reports the absorbance values of the phenolic compounds from the beginning to the end of a 60-min degradation reaction. The absorbance peak at 216 nm reached a value of 0.752 after 60 min (table 3a), while that at 276 nm was still present, with a value of 0.250 (table 3b). This is indicative of n→σ* or n→π* transitions attributed to the intermediate compounds of short-chain acids. Table 4 reports the absorbance values of BPA from the beginning to the end of a 60-min degradation reaction. The absorbance peak at 224 nm reached a value of 0.748 after 60 min (table 4a), whereas that at 275 nm was still present, with a value of 0.169 (table 4b). This is likewise indicative of n→σ* or n→π* transitions attributed to the intermediate compounds of short-chain acids. Conclusions Gold nanoparticles were successfully deposited on the surface of N-doped TiO2 nanotubes via electrodeposition. The synthesized Au/N-TiO2 NTs had a band gap of 2.16 eV. The Au/N-TiO2 NTs exhibited better photocatalytic activity under visible light than the TiO2 NTs and N-TiO2 NTs, according to the results of phenol and BPA degradation; however, the TiO2 NTs still showed better photocatalytic activity under UV light.
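The degradation percentages quoted in the abstract and conclusions follow from comparing the absorbance of the characteristic peak before and after illumination, absorbance being proportional to concentration in the Beer-Lambert regime. A minimal sketch; the paper's tables list only the final absorbances, so the initial value below is hypothetical and chosen purely to illustrate how a ~60% figure would arise:

```python
def degradation_percent(a_initial: float, a_final: float) -> float:
    """Degradation efficiency (%) assuming absorbance is proportional to concentration."""
    return (a_initial - a_final) / a_initial * 100.0

# Hypothetical initial absorbance of 1.88 falling to the reported final phenol value of 0.752.
print(round(degradation_percent(1.88, 0.752)))  # ~60 (%)
```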
1,831.2
2020-01-01T00:00:00.000
[ "Materials Science" ]
More Results from the Opera Experiment at the Gran Sasso Underground Lab The OPERA experiment reached its main goal by proving the appearance of ντ in the CNGS νµ beam. Five ντ candidates fulfilling the analysis defined in the proposal were detected with a S/B ratio of about ten, allowing the null hypothesis to be rejected at 5.1σ. The search has been extended by loosening the selection criteria in order to obtain a statistically enhanced, lower purity, signal sample. One such interesting neutrino interaction with a double vertex topology, having a high probability of being a ντ interaction with charm production, is reported. Based on the enlarged data sample, the estimation of Δm²₂₃ in appearance mode is presented. The search for νe interactions has been extended over the full data set with a more than twofold increase in statistics with respect to published data. The analysis of the νµ → νe channel is updated, and the implications of the electron neutrino sample in the framework of the 3+1 neutrino model are discussed. An analysis of νµ → ντ interactions in the framework of the sterile neutrino model has also been performed. Finally, the results of the study of charged hadron multiplicity distributions are presented. The OPERA experiment The Oscillation Project with Emulsion tRacking Apparatus (OPERA) experiment was designed to search for ντ appearance in an almost pure νµ beam. The OPERA detector was located in the underground Gran Sasso Laboratory (LNGS), 730 km away from the neutrino source at CERN 1-2. The neutrino beam, produced by 400 GeV protons accelerated in the SPS, had an average energy of about 17 GeV, optimized for the observation of ντ charged current (CC) interactions in the OPERA detector. In terms of interactions, the ν̄µ contamination was 2.1%, the νe and ν̄e contaminations were together below 1%, while the number of prompt ντ was negligible. A CC ντ interaction in the lead-emulsion target can be identified by detecting the decay of the short-lived τ lepton in the high space-resolution nuclear emulsions. The OPERA detector was composed of two identical super modules, each consisting of a target region followed by a muon spectrometer. The target had an overall mass of ≈ 1.25 kt and a modular structure with approximately 150000 units, called bricks. A brick was made of 56 1-mm-thick lead plates, acting as target, interleaved with 57 nuclear emulsion films, used as a micro-metric tracking device. Each film was composed of two 44-µm-thick emulsion layers on either side of a 205-µm-thick plastic base. Bricks were arranged in walls interleaved with planes of scintillator strips forming the Target Tracker (TT). Each brick was a stand-alone device allowing momentum measurement through Multiple Coulomb Scattering (MCS) 3 in the lead plates, and electromagnetic shower energy reconstruction. For each event, the information provided by the electronic detectors allows identification of the brick containing the neutrino interaction as well as determination of the muon charge and momentum. Discovery of νµ → ντ in the CNGS neutrino beam During the physics runs from 2008 to 2012, OPERA collected data corresponding to 1.8×10²⁰ protons on target. The electronic detectors recorded 19505 neutrino interactions in the target.
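For orientation, the size of the expected ντ appearance signal at the CNGS baseline can be estimated with the standard two-flavour formula P(νµ→ντ) = sin²(2θ₂₃)·sin²(1.27·Δm²[eV²]·L[km]/E[GeV]). The sketch below uses the beam parameters quoted above (L = 730 km, mean energy ≈ 17 GeV) and illustrative oscillation parameters (Δm² = 2.5 × 10⁻³ eV², maximal mixing); it is a back-of-the-envelope check, not a reproduction of the OPERA analysis:

```python
import math

def p_mu_to_tau(delta_m2_ev2: float, L_km: float, E_gev: float, sin2_2theta: float = 1.0) -> float:
    """Two-flavour vacuum oscillation probability P(numu -> nutau)."""
    return sin2_2theta * math.sin(1.27 * delta_m2_ev2 * L_km / E_gev) ** 2

# CNGS baseline and mean beam energy from the text; oscillation parameters are illustrative.
print(p_mu_to_tau(2.5e-3, 730.0, 17.0))  # ~0.019, i.e. roughly a 2% appearance probability
```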
Decay channels of observed ντ events are given in Tab. 1 4-6. Expected background events amounted to 0.25 ± 0.05. The main sources of background are misidentified charmed events, hadronic re-interactions, and large-angle muon scattering. Given the low background and the observed candidate events, the discovery of ντ appearance was reported with a significance of 5.1σ 6. Table 1. Decay channels of observed ντ events. 3. Δm²₂₃ and ντ cross-section measurements In order to increase the number of ντ candidates and to reduce the statistical error, a new search strategy was implemented based on looser kinematical cuts and multivariate analysis. Five additional ντ candidates were collected, with a signal-to-background ratio reduced from 10 to 3. Δm²₂₃ was evaluated using the Feldman-Cousins method from 10 observed events with 6.8 ± 1.4 expected signal and 2.0 ± 0.5 expected background events. The resulting Δm²₂₃ value is in agreement with the PDG 2016 value 8 within 1σ. The ντ cross-section was determined assuming Δm²₂₃ = 2.5 × 10⁻³ eV² and maximal mixing. 4. νµ → νe oscillations Thirty-five νe events were observed. Most of them are CC interactions of the νe and ν̄e beam components; other contributions are ντ CC interactions with the τ decaying into an electron and νµ events with a π⁰ misidentified as an electron. Using the values of θ13, θ23, and Δm²atm and the standard parameterization of the mixing matrix U, taken from 8, the total number of expected νe candidates is 34.6 ± 3.2. No matter effects were taken into account. The number of observed events is compatible with the 3-flavour oscillation model. The OPERA νe data sample has been used to set limits on the oscillation parameters of a massive sterile neutrino in the 3+1 neutrino hypothesis. Δm²₄₁ and sin²(2θµe) = 4|Ue4|²|Uµ4|² are the parameters of interest. The preliminary 90% C.L. exclusion plot obtained by the OPERA experiment is shown in Fig. 1. Event with two secondary vertices OPERA detected a neutral-current-like interaction with two secondary vertices. The total hadronic energy is about 20 GeV. The primary vertex (V_I), shown in Fig. 2, is 581.8 µm upstream of the top emulsion layer of plate 32, while the secondary vertex (V_II) is just 102.6 µm downstream of V_I. The primary vertex is composed of tracks 2, 4 and 5, while V_II is composed of tracks 1 and 3. Track 4 (parent) exhibits a kink with track 6 (daughter). The kink point is labelled as V_III. Dedicated simulations and an artificial-neural-network analysis were performed to distinguish between the possible interpretations. The most likely interpretation is that vertex II originates from a charm decay and vertex III from a τ decaying into a hadron. The event is classified as a ντ CC interaction with charm production with a significance of 3.5σ. Study of charged hadron multiplicity distributions The multiplicity distribution of charged hadrons is an important characteristic since it reflects the dynamics of the interaction. An unbiased sub-sample of νµ CC interactions occurring in the lead was selected. Only events with the square of the invariant mass of the hadronic system (W²) larger than 1 GeV²/c⁴ were used in order to eliminate the quasi-elastic contribution. The selected νµ CC events were inspected carefully, and tracks were classified depending on their ionization features: minimum ionization particle (mip), grey, and black. The mip tracks are highly relativistic charged particles generated by the neutrino-nucleon interaction. Black tracks are produced by low energy fragments.
Grey tracks are produced by slow particles interpreted as recoil nucleons emitted during the nuclear cascade. The charged hadron multiplicity (n_ch) is defined as the number of mip tracks (n_mip) excluding the muon track, so n_ch = n_mip − 1. The multiplicity distribution of mip tracks is shown in Fig. 3. The average charged hadron multiplicity ⟨n_ch⟩ is well described by a linear function of ln W², as shown in Fig. 4 (left). The dispersion D_ch is defined as D_ch = sqrt(⟨n_ch²⟩ − ⟨n_ch⟩²). The dispersion as a function of ⟨n_ch⟩ is presented in Fig. 4 (right); the dependence is approximately linear. Conclusions After the discovery of ντ appearance in the CNGS neutrino beam, an extended analysis has been performed to increase the number of ντ candidate events. Based on the enlarged data sample, the value of Δm²₂₃ in appearance mode and the ντ cross-section were determined. The νµ → νe oscillation search results are updated: the number of observed events is in agreement with the expectation of the 3-flavour oscillation model. An upper limit on the mixing with a fourth sterile neutrino is set.
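The two moments used in the hadron-multiplicity analysis, the mean ⟨n_ch⟩ and the dispersion D_ch = sqrt(⟨n_ch²⟩ − ⟨n_ch⟩²), are straightforward to compute from a list of per-event mip-track counts. A minimal sketch with made-up counts, not OPERA data:

```python
import math

def multiplicity_moments(n_mip_per_event):
    """Mean charged-hadron multiplicity and dispersion from per-event mip-track counts.

    The muon track is excluded, so n_ch = n_mip - 1 for each event, as defined in the text.
    """
    n_ch = [n - 1 for n in n_mip_per_event]
    mean = sum(n_ch) / len(n_ch)
    mean_sq = sum(n * n for n in n_ch) / len(n_ch)
    dispersion = math.sqrt(mean_sq - mean ** 2)
    return mean, dispersion

# Illustrative per-event mip-track counts (hypothetical, for demonstration only).
print(multiplicity_moments([3, 4, 2, 6, 5, 3, 7, 4]))
```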
1,993.6
2018-05-03T00:00:00.000
[ "Physics" ]
Quality Test of Flexible Flat Cable (FFC) With Short Open Test Using Law Ohm Approach through Embedded Fuzzy Logic Based On Open Source Arduino Data Logger Technological development, especially in the field of electronics, is very fast. One such development in electronics hardware is the Flexible Flat Cable (FFC), which serves as a connecting medium between the main board and other hardware parts. The production of the Flexible Flat Cable (FFC) goes through a process of testing and measuring FFC quality. Currently, testing and measurement are still done manually by an operator observing a Light Emitting Diode (LED), which causes many problems. In this study, a computational quality test of the Flexible Flat Cable (FFC) is built using an open-source embedded system. The method used is measurement with the Short Open Test method using a 4-wire (Kelvin) Ohm's Law approach, with fuzzy logic as the decision maker for the measurement results, based on an open-source Arduino data logger. The system uses an INA219 current sensor to read the voltage value, from which the resistance value of the Flexible Flat Cable (FFC) is obtained. To obtain a good system, black-box testing is performed, as well as accuracy and precision testing with the standard deviation method. In testing the system with three sample models, the standard deviations obtained were 1.921 for the first model, 4.567 for the second model, and 6.300 for the third model, while the Standard Error of the Mean (SEM) was 0.304 for the first model, 0.736 for the second, and 0.996 for the third. The average measurement tolerance of the resistance values was −3.50% for the first model, 4.45% for the second model, and 5.18% for the third model, within the standard resistance measurement tolerance, and productivity improved to 118.33%. From these test results, the system is expected to improve quality and productivity in the Flexible Flat Cable (FFC) testing process. Introduction Technological developments affect changes in all fields; one of them is the development of electronic hardware, which changes very quickly and significantly. One example is the interface cable that connects the main Printed Circuit Board (PCB) with the other parts of an electronic device. An interface now widely used in electronic devices is the Flexible Flat Cable (FFC), which has a very thin design and size so that it can follow the shape of the mechanical grooves of an electronic device without requiring a wide space for cable placement within the device design. The manufacturing process of the Flexible Flat Cable (FFC) goes through a testing process before the cable is sent to the company that ordered it. One purpose of testing the Flexible Flat Cable (FFC) is to identify whether the FFC is in a PASS (Good) or FAIL (No Good) condition by supplying a DC voltage and using an indicator lamp in the form of a Light Emitting Diode (LED) as the output. The testing process is separated into two stages: the first stage tests the Open (disconnected) condition of each leg of the Flexible Flat Cable (FFC), and the second stage tests the Short (connected) condition between the legs of the Flexible Flat Cable (FFC). Both are done manually by the operator, and the condition of the Flexible Flat Cable (FFC) is determined by observing the Light Emitting Diode [1].
This study builds a quality test system for the Flexible Flat Cable (FFC) with the Short Open Test using Ohm's Law through an embedded fuzzy logic approach based on an open-source Arduino data logger. With this system, precise measurements according to Ohm's law can be obtained, together with faster measurement and better accuracy in determining whether a Flexible Flat Cable (FFC) is in a PASS (Good) or FAIL (No Good) condition. In addition, the system stores the results of each measurement in memory and in the data logger, which can be used as a report on the production of the Flexible Flat Cable (FFC). The research problem can be formulated as follows: 1. Based on the identification and restriction of the problems described above, how can the quality test of the Flexible Flat Cable (FFC) be carried out by an open-source Arduino-based embedded system applying the Short Open Test method with a 4-wire (Kelvin) Ohm's law approach as the process for measuring the resistance value of a Flexible Flat Cable (FFC), with fuzzy logic as the decision maker for the measurement results and a data logger as storage for the processed measurement results? Related Research In [2], an Arduino-based LCR meter was designed using the AD5933 impedance converter for measuring reactance. The system incorporates an on-board frequency generator with a 12-bit, 1 MSPS Analog to Digital Converter (ADC). The impedance response signal is sampled by the Analog to Digital Converter (ADC), and its Discrete Fourier Transform (DFT) is processed by the on-board DSP engine. The processing results can be read from the I2C serial interface, and the Arduino controls the AD5933 via the I2C serial protocol. Measurements of high capacitance values and lower inductance values, with impedances in the range of 1 Ω-10 Ω, use an additional reference-resistor circuit added to the unknown reactive component to correct the measured impedance value. In [4], a weather station was created to monitor the characteristics of lightning. A data logger on an SD Card Shield is used to store the read distance of a lightning strike, its frequency, and the movement of the lightning from the lightning sensor, together with GPS data; this design has the advantages of low power consumption, small size, greater portability, and lower cost. In [4], a tool was created to test the functionality of an Insulation Displacement Cable (IDC) for errors or malfunctions. The tool uses an ARM7 LPC microcontroller and 74HC574 octal D-latches; the test applies binary 1 and 0 data and compares them with Ex-OR logic, a 24×2 LCD is used as an indicator to display Short or Open error messages, and the testing covered several scenarios combining the Short Circuit and Open Circuit methods. In [5], a personal-computer-based measuring instrument for the voltage, current, and frequency of single-phase alternating current was designed, consisting of two parts, namely the microcontroller and the computer hardware; its implementation produced an average error of 0.45% for voltage measurement, 2% for current measurement, and 0.09% for frequency measurement. Research Method The method used in this research is the Short Open Test method using an Ohm's law approach with 4-wire (Kelvin) measurement. The Short Open Test is the method used to determine whether or not current flows in a circuit.
In the Short Open Test method, a circuit is given a voltage so that current flows. The circuit condition is said to be "Short" if a large current flows because the resistance value is close to 0 Ω (Ohm), and the circuit is said to be "Open" if no current flows, i.e. the current value is 0 Ampere, because the resistance value is very large or infinite. The 4-wire (Kelvin) Ohm's law approach is used to measure the small resistance value of a conductor material, with the aim of obtaining more precise resistance measurements. The next method, fuzzy logic, is implemented in the Arduino-based embedded data-logger system. Fuzzy logic is used to determine whether the measurement results correspond to a PASS (Good) or FAIL (No Good) condition of the Flexible Flat Cable (FFC). The value resulting from the measurement process is stored in a data logger, which serves both as storage for the measurement results and as a production report. The system development method used in this study is the waterfall method, and the system is tested using black-box testing; in addition, the system's measurements are compared with the results of measurements from other instruments and with the calculated resistance value of a conductor, and the standard deviation and tolerance are used to obtain the accuracy of the measurement. Sampling Selection Method Sample selection is based on the quota sampling method, in which samples are taken according to the needs of the researchers and predetermined criteria or conditions. The samples are Flexible Flat Cables (FFC) with an 8-leg specification and lengths of 210 mm, 420 mm, and 630 mm. The sample Flexible Flat Cables (FFC) are in good condition, and several scenarios are prepared for testing a system that fulfils the Short Open Test measurement of conductor resistance values. Data Collection Method The data in this study were collected using several methods, principally direct observation. Framework Concepts Proposed here is a system for quality testing of the Flexible Flat Cable (FFC) with the Short Open Test using Ohm's Law through an embedded fuzzy logic approach based on an open-source Arduino data logger. The purpose of the proposal is to change tests and measurements that were previously manual into computational ones and to use the measurement data as a production report that can serve as feedback to improve quality. By computing all of the testing, measurement, and determination of the measurement results with a program embedded on an Arduino-based board, errors in determining the quality of the Flexible Flat Cable (FFC) are avoided. Overview Figure 2 shows the short-checking process flow. Step-by-step testing process: 1. Prepare the tools used and turn the key to the ON position. Implementation System The implementation of the system in this study consists of two parts, namely the hardware implementation and the software implementation, which together establish a system that works computationally. Before implementing the system, the process flow of the system must be understood so that the implementation can be carried out in accordance with the design; this process flow runs from the beginning to the end of the process to produce the output of the system.
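As a rough illustration of the Short Open Test logic described above, the resistance of each FFC conductor follows from Ohm's law, R = V/I, using the voltage and current read back from the current sensor; near-zero current indicates an open conductor, while continuity with a sufficiently low resistance is a pass. The sketch below is a simplified host-side model written in Python (the actual system runs on an Arduino with an INA219 sensor); the drive current mirrors the 88.51 mA value used later in the paper, and the threshold values are hypothetical:

```python
def classify_conductor(voltage_v: float, current_a: float,
                       open_current_a: float = 1e-6,
                       max_resistance_ohm: float = 0.5) -> tuple:
    """Classify one FFC conductor from a forced-current 4-wire (Kelvin) reading.

    Returns (resistance_ohm, verdict). Thresholds are illustrative only.
    """
    if current_a < open_current_a:           # essentially no current flows -> broken conductor
        return float("inf"), "FAIL (open)"
    resistance = voltage_v / current_a        # Ohm's law, R = V / I
    if resistance <= max_resistance_ohm:      # continuity with acceptably low resistance
        return resistance, "PASS"
    return resistance, "FAIL (high resistance)"

# Example: 8.85 mV measured across a conductor carrying 88.51 mA -> ~0.1 ohm -> PASS.
print(classify_conductor(0.00885, 0.08851))
```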
Hardware The hardware implementation of the system in this study includes the schematic design, PCB layout design, and PCB assembly. The hardware is made in two separate parts that cooperate with each other, namely the Load Relay Board and the Arduino Main Control Board. The Load Relay Board runs the commands from the main control that implement the short open test readings of voltage and current values; the hardware implementation can be seen in Figure 5. Software In this study, the drive current was set to 88.51 mA. The fuzzy logic process was designed using the Sugeno model and the height defuzzification method. The fuzzy process uses two input variables: a voltage variable, shown in Figure 6, and a threshold variable, shown in Figure 7. The voltage variable has five membership functions (three triangular and two trapezoidal), whereas the threshold variable has one triangular and two trapezoidal membership functions, so a fuzzy rule base of 15 rules can be constructed. Figure 12 describes the measurement and data processing flow used to obtain the FFC resistance value and determine whether the measured value is in accordance with the existing specifications. Accuracy, Precision and Tolerance Testing Testing of accuracy, precision, and tolerance is focused on the output data of the system functions. The method used is the standard deviation method, where precision is determined by the spread of the measured variable around the average, accuracy depends on the standard deviation and the number of samples, and tolerance is the deviation of the measurements from the actual values. This test uses 40 measurement data points for each model, i.e. 5 Flexible Flat Cables (FFC) with 8 legs, with the measurement data expressed in mΩ; the measurement data obtained by the system are given in Table 3 and Table 4. In the standard deviation method, the smaller the standard deviation the better, and the smaller the standard error the closer the average value is to the true value. From the test results, the smallest standard deviation and standard error are obtained for the measurement of the Flexible Flat Cable (FFC) with a length of 210 mm, followed by the 420 mm length and finally the 630 mm length. Table 5, Table 6, and Table 7 show that the average measurement tolerance for a length of 210 mm is −3.50%, for the Flexible Flat Cable (FFC) with a length of 420 mm it is 3.42%, and for the Flexible Flat Cable (FFC) with a length of 630 mm it is 5.18%. It can therefore be stated that the measurements made by the system meet the allowed standard resistance measurement tolerance, where the maximum tolerance is ±25%.
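The precision, accuracy, and tolerance figures above reduce to three standard formulas: the sample standard deviation, the standard error of the mean (SEM = s/√n), and the relative deviation from a reference value. A minimal sketch in Python with made-up resistance readings in mΩ, not the paper's measurement data:

```python
import math

def measurement_stats(readings_mohm, reference_mohm):
    """Sample standard deviation, standard error of the mean, and tolerance (%) vs. a reference value."""
    n = len(readings_mohm)
    mean = sum(readings_mohm) / n
    variance = sum((x - mean) ** 2 for x in readings_mohm) / (n - 1)   # sample variance
    std_dev = math.sqrt(variance)
    sem = std_dev / math.sqrt(n)
    tolerance_pct = (mean - reference_mohm) / reference_mohm * 100.0
    return mean, std_dev, sem, tolerance_pct

# Hypothetical readings for one FFC conductor against a hypothetical 100 mOhm reference.
print(measurement_stats([97.1, 96.4, 95.8, 97.5, 96.2], 100.0))
```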
Process Time Testing (Take Time) The processing time (take time) test compares the processing time of the currently running system with the processing time of the system developed in this research. The turnaround time of the running system is obtained from the production targets and the production per hour. Table 8 compares the data processing time (take time) of the running system with that of the system developed in this study. From Table 8, the data for the running system show that the hourly target of the Flexible Flat Cable (FFC) testing process for the 3 specifications, 360 pcs, is not met, because production averages only 91% of the production target, while the mean processing time (take time) per Flexible Flat Cable (FFC) test on the running system is 10.931 s for FFC_210, 11.043 s for FFC_420, and 11.099 s for FFC_630. The developed system achieves a processing time (take time) of 8.451 s for FFC_210, giving an estimated output of 426 pcs per hour; 8.451 s for FFC_420, giving an estimated production of 426 pcs per hour; and 8.457 s for FFC_630, giving an estimated output of 425 pcs per hour. From the data obtained, the test results using the computing system increase productivity from the target of 360 pcs to 426 pcs per hour, i.e. to 118.33% of the target. Research Implications The Flexible Flat Cable (FFC) is essential in electronic devices, and FFC quality strongly influences electronic products. The quality test system for the Flexible Flat Cable (FFC) is therefore very useful for producing good-quality FFCs and for improving productivity, since the resistance value of the Flexible Flat Cable (FFC) is measured and tested in accordance with the predetermined standards so that the Flexible Flat Cable (FFC) used in electronic devices can work optimally. 4. The average measurement tolerance of the resistance of the Flexible Flat Cable (FFC) was −3.50% for a length of 210 mm, 3.42% for a length of 420 mm, and 5.18% for a length of 630 mm. 5. The processing time (take time) test results show that the system can increase production to 118.33%, or 18.33% above the production target of 360 pcs per hour. Suggestions Based on the results of this research, the author's suggestions are as follows: 1. The research may be continued by improving the quality of the hardware so that the levels of accuracy, precision, and tolerance become better. 2. The research can proceed by implementing the system using other open-source embedded systems and integrating it with other object-oriented systems to support the production process.
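The productivity figures in this section follow from simple takt-time arithmetic: hourly output is 3600 s divided by the per-piece test time (the take times are read here as seconds, which is what the hourly production figures imply), and the productivity ratio compares that output with the 360 pcs/h target. A quick check of the numbers quoted above:

```python
def hourly_output(test_time_s: float) -> float:
    """Pieces tested per hour for a given per-piece test time in seconds."""
    return 3600.0 / test_time_s

target_per_hour = 360
new_output = hourly_output(8.451)                    # ~426 pcs/h with the proposed system
old_output = hourly_output(10.931)                   # ~329 pcs/h with the existing manual process
print(round(new_output), round(old_output))
print(round(new_output / target_per_hour * 100, 2))  # ~118.33% of the 360 pcs/h target
```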
3,799.6
2016-12-01T00:00:00.000
[ "Computer Science" ]
Brain Shift in Neuronavigation of Brain Tumors: An Updated Review of Intra-Operative Ultrasound Applications Neuronavigation using pre-operative imaging data for neurosurgical guidance is a ubiquitous tool for the planning and resection of oncologic brain disease. These systems are rendered unreliable when brain shift invalidates the patient-image registration. Our previous review in 2015, Brain shift in neuronavigation of brain tumours: A review offered a new taxonomy, classification system, and a historical perspective on the causes, measurement, and pre- and intra-operative compensation of this phenomenon. Here we present an updated review using the same taxonomy and framework, focused on the developments of intra-operative ultrasound-based brain shift research from 2015 to the present (2020). The review was performed using PubMed to identify articles since 2015 with the specific words and phrases: “Brain shift” AND “Ultrasound”. Since 2015, the rate of publication of intra-operative ultrasound based articles in the context of brain shift has increased from 2–3 per year to 8–10 per year. This efficient and low-cost technology and increasing comfort among clinicians and researchers have allowed unique avenues of development. Since 2015, there has been a trend towards more mathematical advancements in the field which is often validated on publicly available datasets from early intra-operative ultrasound research, and may not give a just representation to the intra-operative imaging landscape in modern image-guided neurosurgery. Focus on vessel-based registration and virtual and augmented reality paradigms have seen traction, offering new perspectives to overcome some of the different pitfalls of ultrasound based technologies. Unfortunately, clinical adaptation and evaluation has not seen as significant of a publication boost. Brain shift continues to be a highly prevalent pitfall in maintaining accuracy throughout oncologic neurosurgical intervention and continues to be an area of active research. Intra-operative ultrasound continues to show promise as an effective, efficient, and low-cost solution for intra-operative accuracy management. A major drawback of the current research landscape is that mathematical tool validation based on retrospective data outpaces prospective clinical evaluations decreasing the strength of the evidence. The need for newer and more publicly available clinical datasets will be instrumental in more reliable validation of these methods that reflect the modern intra-operative imaging in these procedures.
INTRODUCTION Neuronavigation using pre-operative imaging data for neurosurgical guidance is a ubiquitous tool for the planning and resection of oncologic disease in the brain and has become common practice in many centers. It is well known that these systems are rendered unreliable when brain shift is present. Any factor, physical, surgical, or biological, that violates the rigid body assumption of neuronavigation causes the tissues of the brain to shift and move away from the pre-operative images, creating a difference between the reported location of anatomy in the virtual image and patient spaces. Simply put, brain shift invalidates the patient-to-image mapping (1). In our previous 2015 review of brain shift in neuronavigation (1), we offered a new taxonomy, classification system, and a historical perspective related to the causes, measurement, and pre- and intra-operative compensation of this phenomenon. In this work, we present an updated and focused review using the same taxonomy and framework on the developments of intra-operative ultrasound-based brain shift applications over the last five years, i.e. from 2015 to the present. A visual representation of the previously described classification system along with the highlighted trajectory of the focus of this review can be seen in Figure 1. The first use of A-mode (1D) ultrasound (US) for adult neurosurgery was completed by Dr. William Peyton in 1951 and reported by Wild and Reid in 1953 (2). The first use of B-mode (2D) US in adult neurosurgery of the spine was in 1978 by Reid (3) and in the brain in 1980 by Rubin et al. (4). In the latter, they observed intra-cranial anatomy with real-time ultrasound as well as a grade III astrocytoma and postulated that there may be benefit for this technology as a tool for surgical planning and biopsy procedures. Since then, and throughout the 2000s, intraoperative ultrasound (iUS) has been used in many capacities to evaluate, quantify, and correct for brain shift and modify surgical plans in real-time without the use of ionizing radiation exposure (e.g.
from CT) all while minimizing any disruption to the surgical workflow. Over the last 5 years the rate of publication for intraoperative based ultrasound intervention for brain shift evaluation, quantification, and correction has dramatically increased. In the context of these advances, we review the current state, potential, and challenges that remain in the context of iUS for neuronavigation of brain tumors. BRAIN SHIFT TAXONOMY In order to assist with the clarity of the review and the discussions to follow, this review follows the same taxonomy and classification system as the 2015 publication: Brain shift in neuronavigation of brain tumours: A review (1). To begin, brain shift is defined as-any factor, physical, surgical, or biological, that violates the rigid body assumption of neuronavigation creating a difference between the reported location of anatomy in the virtual image and patient spaces. The discussion of brain shift is further separated into three categories; 1) factors that cause brain shift, 2) methods for quantifying brain shift, and 3) methods to correct or account for brain shift, followed by more specific subclassifications. As highlighted in Figure 1, the articles in this review are primarily those that describe either the measurement or compensation of brain shift using intraoperative ultrasound imaging in the context of image registration, biomechanical modeling, or predictive modeling. INTRA-OPERATIVE ULTRASOUND FOR NEUROSURGERY Ultrasound imaging uses high frequency sound waves that are emitted and detected by different probes and transducers. In the context of neurosurgery, the optimal choice of transducer and type of acquisition frequency will depend on the location and sonographic properties of the lesion of interest, the size of the craniotomy in which the probe can be placed, the surrounding anatomy, and of course, surgeon preference. The intensity of structures in these images directly reflects the amplitude of the detected signal driven by micro reflectors within tissue and the interfaces between tissues with different acoustic impedance. As a general principle, tissues that are acoustically homogeneous will generate low intensity signals, while structures with high gradients of acoustic impedance, such as bone or necrotic tissue, generate strong echoes and can obscure other structures deeper in the imaging plane. In a normal human brain, anatomical structures that give a hyperechoic signal on ultrasound imaging include the sulci, falx cerebri, choroid plexus, and vessel walls. In contrast, the ventricles and other spaces filled with cerebrospinal fluid are generally acoustically homogenous and create a low intensity hypoechoic signal. Lesions in the brain can have varying appearance depending on the mass density, necrotic infiltration, or fluid filled cavities but generally appear hyperechoic with areas of mixed echogenicity depending on the above specific features. Intra-operative ultrasound, in the context of brain shift, was first introduced in 1997 by Bucholz (5) where they provided the first documented quantitative measurement of brain shift during hematoma and tumor neurosurgery. Before this, ultrasound had been previously introduced as an intra-operative neurosurgical tool to assist in small lesion identification in the context of arterio-venous malformation surgery by Chandler in 1987 (6). 
Since these initial publications, numerous investigators have implemented unique applications and procedures to harness this low-cost and widely available intra-operative imaging tool to gather real-time anatomical information for measuring and compensating for brain shift. The primary link between intraoperative imaging, such as ultrasound, and brain shift measurement or compensation is a registration procedure that relates intra-operative and pre-operative images to each other. In the context of iUS, the main challenge stemming from these registration procedures relates the widely different nature and quality of the iUS images as compared with the pre-operative MRI images. While voxel intensity of both modalities is directly dependent on the specific tissues imaged, there is an additional dependence for iUS on probe orientation and depth that leads to significant image intensity non-uniformity due to the presence of acoustic impedance transitions. The quality of individual ultrasound images is known to vary among users adding another obstacle when developing tools and methods to use this modality reliably for brain shift related interventions. METHODS This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2009 guidelines without prior publication of the review protocol (7) and was performed using PubMed 1 on August 17, 2020, to identify articles since 2015 with the specific words and phrases: ("brain shift" OR brainshift) AND "ultrasound" The returned titles were screened for any non-English, duplicate, or clearly irrelevant entries, which were excluded. The inclusion criterion used during the selection was that the work must be focused on brain shift in the paradigm of imageguided neurosurgery of brain tumors. Exclusion criteria included review papers and work with animal-based studies and no clinical validation. For publications that were more mathematical in nature focusing on modeling, compensation, or prediction, validation of the methods on clinical datasets was required. Thirty-eight (38) relevant publications were found using the search query, of which 22 were included in this review. A PRISMA diagram is presented in Figure 2. RESULTS A summary of the papers reviewed, as they relate to the described taxonomy, location of the measured brain shift, pre-resection vs. post-resection measurement, and quantitative findings can be found in Table 1. In total, the list includes four qualitative retrospective case reviews, eight brain shift compensation methods papers, and the remaining 10 articles focused on prospective evaluation of brain shift measurement and/ or compensation. Qualitative Retrospective Case Reviews Since 2015, four groups have published qualitative analysis in the form of a retrospective case review of their center's experience with using intra-operative ultrasound for neurosurgical guidance. The first was published in 2015 by Petridis et al. (23) that reviewed 34 patients undergoing low grade glioma (LGG) resection between 2011 and 2014 in a German center. The retrospective analysis compared iUS use for localization of surgical targets with cases where iUS was not performed. They found in the 15 cases where iUS was used that the surgical target was properly found for either resection or biopsy, whereas in five of 19 cases where iUS was not used, the target was missed. 
The improvement was qualitatively attributed to intra-operative update of real-time information about brain shift as provided by the iUS imaging during these cases. In 2016, Steno et al. (26) described a qualitative use of iUS during awake resection of insular low grade gliomas (LGG), with a focus on visualization of the lenticulostriate arteries. These landmarks served to measure brain shift compensation and to guide an increased extent of resection, when compared to non-iUS interventions, without creating any new deficit while operating near anatomic brain structures with important functional roles. Overall, their retrospective review of six cases demonstrated this to be a useful tool for this anatomical location of LGG. In 2018 (27), this group published a follow-up cohort case series of 49 patients undergoing awake resections for insular LGG near eloquent cortical and subcortical structures, with 21 cases using only neuronavigation and the remaining 28 using iUS guidance. The mean extent of resection was significantly improved with iUS guidance (87 vs. 76%) without the addition of any new functional neurologic deficit. Altieri et al. (8) describe a retrospective analysis of 264 patients with high-grade gliomas undergoing resection with neuronavigation and iUS guidance at the University of Turin between 2013 and 2016. The goal of their work was to improve the detection and characterize the echogenicity (the visual characteristics on ultrasound) of both normal and pathologic anatomical structures using different probes. The main challenge identified by the analysis, as often reported, was related to the surgeon's comfort in interpreting the anatomy in oblique planes, a characteristic that increased with iUS experience. Finally, in 2019, Liang et al. (18) published a retrospective case review on a cohort of patients that underwent iUS alone without registration during resection and iUS with pre-operative MRI registration to review the extent of resection (EOR) improvement. Of the 45 total patients reviewed, only 6/19 cases using iUS alone achieved gross total resection (GTR), whereas 22/26 (85%) cases using MRI registered to iUS had GTR. This significant clinical improvement was attributed primarily to the comfort and quality of using MRI images for guidance after registration as compared to iUS images alone. The authors also described a significantly lower postoperative morbidity rate in the iUS registration group and concluded that iUS-MRI registration is an essential tool to improve EOR and functional protection. Brain Shift Compensation Based on Clinical Datasets Currently, there exist only two widely used and publicly available clinical databases with pre-operative MRI and iUS images that can be used for validation of new brain shift compensation registration or predictive modeling algorithms: the Brain Images of Tumors for Evaluation (BITE) (31) and the REtroSpective Evaluation of Cerebral Tumors (RESECT) (30). Both databases have different internal limitations; however, they provide a necessary tool for the comparison of brain shift compensation methods, which are summarized in Table 1 and described below in more detail. Farnia et al. have recently described brain shift compensation in a series of three articles (10-12) through matching of echogenic structures, specifically sulci, and optimization of the residual complexity value in the wavelet domain, a strategy to balance between feature and intensity-based registration approach advantages in multi-modal registration.
With the introduction of the method in 2015, they validated the novel approach on both phantom and the BITE datasets, demonstrating a noted robustness to noise which is commonly encountered in iUS imaging. The following updates to their methods in 2016 and 2018 focused on improving computational time and the addition of a joint co-sparsity function to obtain a clinically acceptable and useful algorithm for intra-operative use. They report a registration accuracy of 0.90-1.82 mm depending on the method being evaluated. In all three of their works, they have shown significant improvement for both accuracy and efficiency that only lacks validation in a prospective setting. Zhou and Rivaz 2016 (29) propose a non-rigid symmetric registration framework focused on pre-and post-resection ultrasound images to compensate for brain shift and assess for residual tumor that is difficult to assess on normal post-resection images due to the immense post-operative changes when compared with pre-operative MRI. This novel framework was validated on pre-and post-resection ultrasound images from the BITE database to identify "outlier regions" that may be consistent with possible residual tumor. The registration showed acceptable registration with reported accuracy, on the order of 1.5 mm, between the sets of images with the main drawback being long computation times not conducive to clinical workflow. Continuing with the theme of novel registration strategies for brain shift compensation, in 2019, Masoumi et al. (21) describe an approach based on affine transformation that utilized a covariance matrix adaptation evolutionary strategy (CMA-ES) to optimize the registration. This work built upon their previous work in 2018 (32) that used a gradient descent optimization 2 . The method was evaluated on both the BITE and RESECT databases with statistically significant improvement of the mean target registration error (mTRE) on the order of 2.8 mm. Their proposed fully automatic registration improvement offers another option for iUS-MRI brain shift correction. The main advantages of this work compared to similar methods include an optimization step that is less susceptible to patch sizes and noise and is reported as the first use of CMA-ES specifically for MRI and US images. In 2019, Canalini et al. (9) described a segmentation-based registration approach for brain shift compensation where the falx cerebri and different sulci were automatically segmented in preresection iUS volumes on the dura mater and used to register with iUS at different phases of the operation. The method is based on a trained convolutional neural network using manually annotated structures in the pre-resection ultrasound that are then used to segment and register the corresponding structures ad different phases of the operation. In contrast to previous work done, in this domain their solution focuses on iUS-iUS registration rather than iUS-MRI registration. They validated their method by comparing the mTRE between manually identified landmarks from the BITE and RESECT databases and showed significant improvement among both. In one of the more complete series of brain shift compensation methodology papers, Machado et al. (19) published a registration procedure based on automatic feature detection followed by nearest-neighbor descriptor matching and probabilistic voting models similar to a Hough transform focused on scale-invariant features (SIFT). 
Their method was validated on two publicly available databases (BITE and RESECT) and, additionally, prospectively validated on a nine patient case series that they describe as the Multimodal Images of Brain Shift (MIBS) database. They report accuracy on the order of 2.2 mm with efficient registration results on all three data sets without the need to manually identify landmarks for evaluation. Within the same vein, in 2019 (20), this group described a correlation-based approach for brain shift compensation through extraction of multi-scale and multi-orientation attribute vectors with robust similarity measures on these attributes while simultaneously explicitly handling field-of-view differences between images as an approach to improve generalization and accuracy across different publicly available datasets. Their approach was validated on the BITE, RESECT, and MIBS databases, and tested against 15 other accepted multimodal registration algorithms. They consistently obtained one of the best results across the three datasets without deviation from their predefined parameters (compared to the often dataset specific tuning described in other papers). This approach highlights the potential need for more robust similarity functions and automatic feature detection frameworks that can be generalizable to the limited public data for future algorithm development. Prospective Brain Shift Measurement and Compensation While retrospective data is important for method development and testing, it is critical to evaluate with prospective data to see how well methods generalize. The first of 10 prospective brain shift evaluation papers is from Prada et al. (24). They described their experience in 58 cases using an iUS-guided neuronavigation system. The measurements and compensation details for each individual case are not included in the article; however, they report that in 42 cases they were able to restore accuracy of the navigation system to below a critical threshold of 2 mm when compared with manually selected anatomic and vascular landmarks. In the 16 remaining cases, despite not reaching this critical clinical threshold, they were accurate to within 3 mm, and visualization of cerebral structures intra-operatively with iUS was achieved. Despite the lack of quantitative details on brain shift measurement and compensation this article highlights the expanding reach of iUS within the neurosurgical clinical community. In 2017, Riva et al. (25) published an eight-patient case series to measure and compensate brain shift using 3D-iUS and an iterative deformation correction framework. Ultrasound was acquired at three time points during surgery: before dural opening, after dural opening, and following complete resection of the brain tumor. The goal of their work was to evaluate the robustness of mono-modal registration from serial iUS acquisitions at different time points in surgery in its ability to maintain accuracy of the navigation system and compensate for brain shift. The initial iUS volume is registered with a rigid transformation to the pre-operative MRI planning images using linear correlation of linear combination as a similarity metric. Following dural opening, iUS volumes are registered with the initial pre-dural iUS using both rigid normalized crosscorrelation registration and deformable B-spline registration procedures and then applied to the original pre-operative planning volume. 
Their method was evaluated using expert neurosurgeon anatomic landmark identification to evaluate the target registration error. They report significant compensation of brain shift between the rigid registration of the initial iUS and pre-operative images both before (5.9 to 2.7 mm) and after (6.2 to 4.2 mm) dural removal, with no significant improvement following complete resection (7.5 to 6.7 mm). The authors conclude that combining both mono- and multi-modal iUS registration in an iterative framework successfully measured and compensated brain shift and was easily integrated into the surgical workflow. This technique also has the potential to be easily expanded to other user-defined time points between those investigated in this work, which could further help enable more real-time brain shift correction throughout resection. Xiao et al. (28) described a five-patient case series evaluating a registration procedure between MRI tractography and iUS based on a correlation-ratio non-linear deformation framework. While the analysis was performed in a retrospective fashion and not intra-operatively, this article was included because the clinical data on which the algorithm was evaluated were not part of any previously published, publicly available database. This is the only report to describe MRI tractography-iUS registration in the context of brain shift measurement and compensation, and registration accuracy on the order of 1.7 mm was reported. As a relatively new imaging modality for surgical planning, tractography offers important information that, when accurately registered with preoperative imaging, can help preserve white matter tracts important for proper brain function. The main limitation of this study results from the lack of data to validate their method and the limited literature from which to draw for comparison. Despite this, they were successfully able to measure and compensate for brain shift in this short case series, making it an intriguing avenue for future research. In another prospective study, Gerard et al. (15) presented a unique approach to brain shift measurement and compensation with the combined use of iUS and augmented reality in a pilot study of eight cases using the Intraoperative Brain Imaging System (IBIS) for neuronavigation (33). Brain shift was measured both with iUS and a compensation method based on gradient orientation alignment multimodal registration, as well as with a calibrated augmented reality view in which the two-dimensional pixel misalignment error in a specified view was reported, providing both a qualitative and a quantitative assessment of the associated brain shift. The main drawbacks of this work relate to the reporting of a non-universal metric (pixel misalignment error) and the limited number and subjectivity of the manually identified landmarks. Despite these limitations, the authors demonstrate a combination of complementary technologies that requires more extensive validation. In 2019, Frisken et al. (13) describe a two-patient proof-of-concept study for brain shift measurement and compensation using thin-plate spline registration and finite element method (FEM) modeling with physical and geometric constraints along with the known biophysical properties of the different tissues. During these two cases, they measured brain shift with both manually identified landmarks and automatic features using the SIFT method (19), with similar results for both the manual and automatically detected features.
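Several of the studies above, including the landmark-based evaluations just described, summarize accuracy as the mean target registration error (mTRE) over corresponding landmark pairs. As a minimal, generic illustration (not any particular group's code), the NumPy sketch below computes the mTRE of a set of landmarks before and after applying a hypothetical rigid correction; all coordinates and the transform are made-up placeholders.

```python
import numpy as np

def mtre(points_a, points_b):
    """Mean target registration error: average Euclidean distance (mm)
    between corresponding landmarks in two spaces."""
    return float(np.mean(np.linalg.norm(points_a - points_b, axis=1)))

# Hypothetical landmark coordinates (N x 3, in mm): MRI-space targets and the
# corresponding points identified in the intra-operative ultrasound volume.
mri_landmarks = np.array([[10.2, 34.1, 22.0], [15.8, 30.5, 25.3], [12.4, 28.9, 19.7]])
ius_landmarks = np.array([[12.9, 36.0, 23.1], [18.1, 32.2, 26.8], [14.7, 30.6, 20.9]])

print("mTRE before compensation:", mtre(mri_landmarks, ius_landmarks))

# Apply a hypothetical rigid correction (rotation R, translation t) estimated by
# some registration algorithm, then re-evaluate the residual error.
R = np.eye(3)                    # placeholder rotation
t = np.array([2.5, 1.8, 1.1])    # placeholder translation (mm)
corrected = mri_landmarks @ R.T + t
print("mTRE after compensation:", mtre(corrected, ius_landmarks))
```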
The brain shift was then compensated using two independent methods, thin-plate splines and FEM modeling, and the results were compared with one another. The main drawback, as stated by the authors, is that they were unable to compare the behavior of FEM and thin-plate splines for the automatically detected features, since these features were used to train the splines and resulted in near-zero residuals; however, the FEM method had better results than the thin-plate spline method for the manually identified landmarks, and given the similarity of brain shift measured between both the automatic features and manual landmarks, it is possible the FEM may have outperformed on these features as well if the splines had been trained on different features. This preliminary work motivated a more complete study, published in 2020 (14), using a similar methodology with additional measurement and compensation of brain shift with serial iUS (i.e., ultrasound acquired at multiple time points during the operation) and registration in a series of 19 cases. In their follow-up prospective study, the authors conclude that the FEM method provided more consistent brain shift correction and better compensation at locations further from the driving feature displacement than the thin-plate splines; however, in the cases with smaller deformations, the thin-plate splines performed better but without statistical significance. These results highlight the fact that multiple strategies are likely to be required when trying to account for brain shift in real time, and that the appropriate strategy may evolve even throughout a single procedure.
Prospective Brain Shift Measurement and Compensation Using Cerebral Vasculature
Contrast-enhanced ultrasound (CEUS) is a technique not often used in neurosurgical procedures; however, in 2016, Ilunga-Mbuyamba et al. (16) reported on using CEUS for vascular structure identification and brain shift compensation in a series of 10 patients. The difficulty in reviewing this work for this article stems from the fact that only the similarity measures between the pre-segmented MRI images and the CEUS segmentation after registration are reported, with no absolute registration error. Looking past this limitation, nine of the 10 cases evaluated in this report had successful brain shift compensation (reported as usable for clinical guidance), suggesting this unique approach could prove useful in highly vascular regions of operation. Morin et al. (22) also focus on the cerebral vasculature in a constraint-based biomechanical simulation of brain shift compensation for a series of five patients undergoing neurosurgery with iUS guidance. For each patient, a patient-specific biomechanical model was built from pre-operative imaging and intra-operatively registered with both iUS B-mode and Doppler imaging after a constraint-based simulation of the shift of the cerebral vascular tree. Manually chosen landmarks are used to assess the total brain shift and validate the compensation, with reported accuracy on the order of 1.8 mm. The authors compared their work to their previously described rigid registration techniques, with successful results and a workflow that is efficient enough for clinical integration. In another prospective study, Iversen et al. (17) describe their experience using the CustusX platform (34) in a series of 13 patients.
Intra-operative ultrasound was acquired pre-resection to update the guidance system in all 13 cases, and the amount of brain shift and the subsequent compensation following registration with pre-operative MRI were evaluated using manually placed anatomic landmarks. They report that their system was deemed accurate enough for tumor resection guidance in nine of 13 cases following neurosurgeon evaluation and showed significant brain shift compensation in all 13 of their cases. The mean reported registration error was on the order of 4 mm, with the median being 2.7 mm. This work highlights experience with one of the few open-source neuronavigation systems that support intra-operative ultrasound acquisition prospectively during navigation.
DISCUSSION AND CONCLUSION
Brain shift is a very complex problem that has many pre- and intra-operative contributing factors. Strategies for measuring and compensating for brain shift continue to evolve, and intra-operative ultrasound continues to show promise as an effective, efficient, and low-cost solution for intra-operative accuracy management. Indeed, the rate of publication of intra-operative ultrasound brain shift-related work has seen an increase from two to three articles per year, from 2005 to 2015, to eight to 10 articles per year since 2015. One of the primary issues with the current research landscape is that mathematical tool development in the form of registration, FEM, and predictive modeling continues to progress at a fast rate, but validation is repeatedly performed on a small cohort of publicly available retrospective data that, while invaluable to the field, is nearly a decade old and may not accurately portray the quality and character of imaging used for guidance and surgical planning today. Additionally, as highlighted in Machado et al. (19), many of these publicly available datasets require cohort-specific parameter tuning, and the compensation methods presented do not generalize well over the entirety of available data. Indeed, one of the major needs for the field is newer and larger publicly available clinical datasets, in the vein of the Brain Images of Tumors for Evaluation (BITE) (31) and Retrospective Evaluation of Cerebral Tumors (RESECT) (30) databases. The Multimodal Images of Brain Shift (MIBS) (19) dataset is interesting, but not publicly available. Currently there are roughly 50 cases of pre-operative and intra-operative data freely available for research, a small number that reduces the strength and quality of validation and generalization of these compensation procedures. Another database that has yet to be published publicly but that has been described is the Brain Images of Tumors for Evaluation 2 (BITE2) database (35), which aims to build on the strengths of the original BITE database from the same group. This form of data sharing will be instrumental in more reliable and appropriate validation of these methods that reflects the modern pre- and intra-operative imaging landscape in neurosurgical oncologic procedures. Among the many advances in the field since our last published review, the variety of applications that iUS has seen in neurosurgery over the last half-decade speaks to the extent to which the potential of the technology is being realized.
Applications involving the cerebral vasculature, as a tool for measuring brain shift, a feature for brain shift compensation, and a landmark for improving extent of resection, are exciting for the field in terms of the breadth of ways this tool will be used to treat patients and maintain accuracy for clinicians. With the advancement in technology come additional challenges; as highlighted in the articles above, there are numerous metrics reported for the evaluation of brain shift compensation, ranging from fiducial registration errors, target registration errors, extent of resection, segmentation similarity metrics, and qualitative evaluations to pixel misalignment errors. The differences in evaluation metrics and the limited number of cases on which techniques are evaluated make comparison of new methodology especially difficult. In the context of extent of resection, for example, it is hard to know whether a percentage improvement is clinically significant, and it is difficult to determine whether it can be attributed completely to brain shift compensation as opposed to specific patient and tumor anatomy. Two additional points can be made with respect to reporting of results. While a standard for reporting target registration accuracy is desirable and some of the methods described here report accuracies better than 2 mm, are these results useful clinically? Resection metrics such as EOR and GTR provide valuable clinical information, but given that their value is relative in nature, it is difficult to compare them with other objective measures not specifically related to the tumor volumes. For example, Steno 2018 (27) reports a small but statistically significant improvement in EOR (from 76 to 87%), while Liang 2019 (18) reports an increase of tumor GTR from 32% without ultrasound navigation to 85% with ultrasound-corrected navigation. Both studies demonstrate statistically significant objective and clinical benefits of using iUS-based technology but are nearly impossible to compare with work that uses registration errors to quantify results. The lack of a universally accepted evaluation metric and the non-reporting of absolute registration errors when assessing brain shift compensation remain a major challenge in the field for which there is currently no clear solution. Intra-operative ultrasound for surgical guidance is a well-established tool and has seen applications in many organ systems, including hepatobiliary, genitourinary, lung, mediastinal, vascular, and breast applications (36). In many of the above applications, US has evolved from a complementary tool to one that has become almost standard of care for therapeutic intervention, especially within the hepatobiliary system. In vascular surgery, within both the cardiac and peripheral systems, B-mode and Doppler iUS have been used, often to assist with surgical repair and to assess the adequacy of the repaired tissue (36,37). It also plays a very important role in the vascular reconstruction phase of transplantation surgery for flow assessment and minimizing vascular complications (37). Additionally, there has been significant work using iUS navigation in the context of skull-base surgery (38,39). One of the main challenges during skull base tumor surgery is identifying the relationships between the lesion and the principal intracranial vessels, a task that is often mediated by neuronavigation systems.
While inaccuracies due to brain shift at the skull base are generally minimal, there can still be other sources of inaccuracy making the pre-operative navigation images less reliable (39). Intra-operative US, often in the form of Doppler imaging and contrast-enhanced B-mode, can help improve the understanding of the relationship between the skull base and the intracranial vessels to avoid vascular damage and assist with lesion resection (38). The application of this technology in the brain adds an additional level of complexity, as it is primarily used as a tool for re-calibrating MRI images for guidance as opposed to direct guidance. Despite this, it is important to learn from both the successes and failures from the many years of experience in these other surgical domains to maximize the potential of this technology and to avoid repeating failures or strategies that have proven to be inefficient from both clinical and technical points of view. A final point of discussion stems from the timing of brain shift correction. While accurate navigation seems to be imperative for pre- and early peri-operative planning, it is unclear both how often and at which specific time points during surgery accurate brain shift correction relative to pre-operative MRI is necessary. Once a neurosurgeon has begun operating and has an open cavity from which they can see the surrounding anatomy comfortably, are highly accurate navigation images imperative to improving the success of that operation? As we saw reported in this review, Steno et al. (26,27) evaluated the improvement of EOR with frequently updated navigation images for resections; however, very few studies report on these types of clinical outcomes, and there currently does not exist any report evaluating either the perceived need among surgeons for improved navigation images or an objective analysis of the effects on workflow and large-scale clinical outcomes. As iUS technology becomes more reliable and easily accessible, it will be important to have studies that identify the optimal times during surgery where navigation accuracy is of high importance to improve clinical outcomes while not disrupting surgical workflow. To push the discussion to a further extreme, one may ask whether we need to update navigation at all. With real-time imaging, like that provided through iUS, showing up-to-date anatomy and even functional information when Doppler mode is used, perhaps a better strategy would be to focus on improving surgeon comfort and technical proficiency with iUS image interpretation to remove the need for correlation with pre-operative MRI images, which seems to have an upper limit of accuracy. In some select cases, however, where the resection and anatomy are complex and iUS images are difficult to interpret, it may prove beneficial to combine the information with pre-operative MRI, as is the practice now, for a more complete integration of information. It is clear from the increasing rate of publication, specifically in qualitative retrospective case reviews and quasi-quantitative analyses from different neurosurgery centers across the world, that the comfort and training in using iUS during surgery is expanding and its potential is being realized by more clinicians. Unfortunately, the lack of prospective evidence continues to limit the overall reliability of the technology.
Moving forward, it will be imperative to conduct multi-center prospective trials that focus on improving clinical criteria among patients undergoing iUS-guided tumor resection for this technology to take the next step and broaden into more clinical practices worldwide. With continued improvement in ultrasound hardware, including portable probes with a smaller footprint such as the Clarius, Lumify, and Butterfly IQ, further support and easier clinical workflow integration for future trials are possible. In conclusion, the growth of iUS in the field of neurosurgery is exciting and encouraging for both clinicians and researchers, and the technology continues to show major promise as a multi-faceted tool for measuring and compensating brain shift and improving both the safety and completeness of neurosurgical tumor resections.
8,608.6
2021-02-08T00:00:00.000
[ "Medicine", "Engineering" ]
Analysing the relationship between CO2 emissions and GDP in China: a fractional integration and cointegration approach
This paper examines the relationship between the logarithms of carbon dioxide (CO2) emissions and real Gross Domestic Product (GDP) in China by applying fractional integration and cointegration methods. These are more general than the standard methods based on the dichotomy between stationary and non-stationary series, allow for a much wider variety of dynamic processes, and provide information about the persistence and long-memory properties of the series and thus on whether or not the effects of shocks are long-lived. The univariate results indicate that the two series are highly persistent, their orders of integration being around 2, whilst the cointegration tests (using both standard and fractional techniques) imply that there exists a long-run equilibrium relationship between the two variables in first differences, i.e. their growth rates are linked together in the long run. This suggests the need for environmental policies aimed at reducing emissions during periods of economic growth.
According to the environmental Kuznets curve (EKC) hypothesis, the growth process initially damages the environment, but as income per capita increases, environmental legislation is introduced to reduce emissions and pollution (see Krueger, 1991, and Shafik & Bandyopadhyay, 1992, among others). The relationship between economic growth and CO2 emissions in China has been analysed in numerous papers using a variety of approaches (e.g. Du et al., 2012; Haisheng et al., 2005; Jalil & Mahmud, 2009; Jalil & Feridun, 2011; Wang et al., 2011a, 2011b; Xie et al., 2018; Xu et al., 2014), as discussed in the literature review below. However, these have focused on factors that affect CO2 emissions or examined the evidence on the EKC, whilst the present study analyses the persistence of the two series and their linkages using fractional integration and cointegration methods, respectively. Thus, its contribution is threefold. First, it improves on earlier works on the relationship between CO2 emissions and GDP in China by applying fractional integration and cointegration methods that are more general than those based on the classical stationary I(0) (integrated of order 0) v. non-stationary I(1) (integrated of order 1) dichotomy which had been previously used. In the standard framework the order of integration of the variables, d, can only be an integer, which is a rather restrictive assumption; the setup used in the present paper is instead much more general and flexible, since this parameter is also allowed to take fractional values and as a result a much wider range of stochastic behaviours can be modelled. Whether a variable is stationary or not matters a great deal, since in the former case the effects of shocks are only temporary whilst in the latter they are permanent, and therefore a variable, if hit by a shock, will not revert to its long-run equilibrium, regardless of any policy measures. The more general framework used for the analysis in this paper also sheds light on intermediate cases when equilibrium is eventually restored, but deviations from it resulting from exogenous shocks are long-lived. Second, it is informative about both the dynamics and the long-run equilibrium, and it shows that both series of interest are highly persistent, but linked together in the long run. Third, it provides important implications for both academics and policymakers.
Specifically, to the former it suggests an interesting avenue for future research, namely investigating in greater depth the functional form of the relationship that has been established empirically in order to gain a better understanding of the issues of interest; to the latter it highlights the need for appropriate environmental policies during periods of economic growth. In particular, environmental innovation measures aimed at reducing carbon emissions, increasing energy efficiency and promoting green development have been shown also to provide growth opportunities for entrepreneurship in China (Chen & Lee, 2020; Zhang et al., 2017a, 2017b). The layout of the paper is as follows. "Literature review" Section reviews the relevant literature. "Methodology" Section outlines the empirical framework. "Results and discussion" Section describes the data and discusses the empirical findings, while "Conclusions" Section offers some concluding remarks.
Literature review
Carbon dioxide (CO2) emissions are the main cause of climate change and global warming, and therefore they are among the most used indicators of environmental degradation (Apergis & Payne, 2009; Du et al., 2012; Lean & Smyth, 2010; Shahbaz et al., 2013, 2016; Tiwari et al., 2013). Higher CO2 levels in the Earth's atmosphere are a serious issue as they can cause greenhouse effects and higher air temperatures (Bacastow et al., 1985; Hofmann et al., 2009; IPCC, 2015; Liu et al., 2016). If the burning of fossil fuels continues, atmospheric CO2 concentration will double sometime during this century and air temperature will rise by 1.5-5 °C by 2100 (Baes et al., 1977; Kraaijenbrink et al., 2017; Mahlman, 1997). Carbon dioxide is associated with economic and other human activities, and accounts for three-quarters of global greenhouse gas (GHG) emissions (Huaman & Jun, 2014; IPCC, 2015). The linkage between economic growth and environmental degradation is examined in many studies providing mixed evidence. Some of them find that the relationship between CO2 emissions and economic growth is negative (Ajmi et al., 2015; Azomahou et al., 2006; Baek & Pride, 2014; Dogan & Aslan, 2017; Roca et al., 2001; Salahuddin et al., 2016), or that it is initially positive but then turns negative (Riti et al., 2017; Shahbaz et al., 2014, 2016). Other papers report instead a positive relationship (Ahmad & Du, 2017; Bakhsh et al., 2017; Chaabouni et al., 2016; Ma et al., 2016; Nasir & Rehman, 2011; Ozturk & Acaravci, 2013; Saidi & Mbarek, 2016). Some more recent research has used the autoregressive distributed lag (ARDL) model and a nonlinear version of the same to analyse the relationship between economic growth and CO2 emissions and has concluded that there is a positive long-term relationship between these two variables (Ahmad et al., 2018; Akalpler & Hove, 2019; Chen et al., 2019a, 2019b; Cosmas et al., 2019; Dong et al., 2018; Gill et al., 2018; Khan et al., 2019; Riti et al., 2017; Toumi & Toumi, 2019). Differences in terms of sample period, country-specific characteristics, model specifications, econometric methods and pollution indicators are possible reasons for the mixed results of those papers.
The environmental Kuznets curve (EKC), analogous to the inverted U-shaped curve originally used by Kuznets (1955) to model the relationship between income inequality and income levels, has become the most common framework to study the linkages between CO2 emissions and economic growth, either in single countries or in groups of countries. Early studies include Krueger (1991, 1995), Stern and Common (2001) and Dinda (2004), and later Friedl and Getzner (2003), Dinda and Coondoo (2006) and Managi and Jena (2008). The results are mixed, some studies supporting the existence of an EKC (Ahmad, 2016; Ang, 2007; He & Richard, 2010; Iwata et al., 2010; Katz, 2015; Lau et al., 2014; López-Menéndez et al., 2014); others not finding any evidence for it (Jia et al., 2009; Liu et al., 2007a, 2007b; Magazzino, 2014a, 2014b, 2015; Pao et al., 2012; Riti & Shu, 2016); some reporting an N-shaped relationship (Kijima et al., 2010). Mikaylov et al. (2018) use a variety of cointegration methods (Johansen, ARDL, DOLS (dynamic ordinary least squares), FMOLS (fully modified ordinary least squares) and CCR (canonical cointegrating regression)) to test for the existence of an EKC in Azerbaijan and find that economic growth has a positive and statistically significant long-run effect on emissions, which implies that the EKC hypothesis does not hold. Ru et al. (2018) apply a recently developed methodology based on long-term growth rates (Stern et al., 2017) to model the income-emission relationship for four sectors (power, industry, residential, and transportation) and three different types of pollutants (SO2 (sulfur dioxide), CO2, and BC (black carbon)); the analysis uses data for various countries from the global emission inventory developed at Peking University and finds that the results are both sector and pollutant specific. Barassi and Spagnolo (2012) estimate panel models of the emissions-growth relationship. Panel studies not finding evidence of an EKC include instead Onafowora and Owoye (2014) for Brazil, China, Egypt, Japan, Mexico, Nigeria, South Korea, and South Africa (only finding evidence of an EKC in Japan and South Korea); Mallick and Tandi (2015) for Bangladesh, India, Nepal, Pakistan, and Sri Lanka; Zoundi (2017) for 25 countries; and Wang (2012) for 98 countries. These contradictory results reflect the issue of heterogeneity in the context of panels. Among single-country studies, those supporting the existence of an EKC include Ozturk and Oz (2016), and Dogan and Turkekul (2016) for the USA. In this case, differences in the results can be attributed to the different pollution indices, model specifications, estimation techniques and sample periods used. A few studies apply fractional integration/cointegration methods to analyse CO2 emissions. For instance, Galeotti et al. (2009) implement such tests for 24 OECD (Organisation for Economic Cooperation and Development) countries and find only limited evidence supporting the EKC hypothesis. Barassi et al. (2018) use this approach to examine stochastic convergence of relative per capita CO2 emissions; according to their results this is relatively weak in the case of the OECD countries whilst there is stronger evidence in the case of the BRICS (Brazil, Russia, India, China, and South Africa); in addition, the former cannot be attributed to the presence of structural breaks. Gil-Alana et al.
(2017) analyse the stochastic behaviour of CO2 emissions applying a long-memory approach with nonlinear trends and structural breaks to a long span of data for the BRICS and G7 countries (USA, UK, France, Italy, Germany, Japan and Canada). They conclude that shocks to CO2 emissions have permanent effects in most cases, except in Germany, the US and the UK. Compared to theirs, the present paper, though considering only China rather than various countries, goes one step further since it also carries out fractional cointegration tests to establish whether there exist any long-run linkages between the growth rates of GDP and CO2 emissions. Several other studies have focused on China given the size of its economy and the high level of its CO2 emissions. Some of them analyse the factors driving the latter, such as economies of scale, population and energy structure (Xie et al., 2018; Xu et al., 2014), and economic activity and energy intensity (Jalil & Mahmud, 2009; Liu et al., 2007a, 2007b; Zhang et al., 2009; Wang et al., 2005). Using SDA (structural decomposition analysis), Peters et al. (2007) conclude that in China the growth in CO2 emissions from infrastructure construction, household consumption in cities, the urbanization process and lifestyle has been greater than the savings from efficiency improvements. On the other hand, Li and Wei (2015) find that the impact of the industrial structure on carbon dioxide emissions is gradually changing from positive to negative and that the main driver of the reduction of CO2 emissions in China is carbon intensity. Zhang et al. (2015) use SDA to investigate the factors that influence China's pollutants and conclude that increasing efficiency and the intensity of emissions are important in reducing industrial pollution. The inconclusiveness of the results discussed above makes the design of appropriate environmental policies difficult for the Chinese authorities, who have been under increasing pressure to deal more effectively with climate change issues. The aim of the present study is to obtain more robust evidence informing policy choices; this is achieved by applying fractional integration and cointegration methods, whose features are outlined below.
Methodology
The order of integration of a time series is the differencing parameter required to make it stationary I(0). Specifically, a covariance stationary process {x_t, t = 0, ±1, …} is said to be I(0) if the infinite sum of all its autocovariances γ_j is finite, i.e.

$\sum_{j=-\infty}^{\infty} |\gamma_j| < \infty. \qquad (1)$

However, many processes do not satisfy (1); when the sum of the autocovariances is infinite, the process is said to exhibit long memory or long-range dependence, so named because of the high degree of association between observations which are far apart in time. Within this category, the fractional integration, or I(d), model is one of the most widely used. Specifically, a process is said to be integrated of order d, or I(d), if it can be represented as

$(1 - L)^d x_t = u_t, \qquad t = 1, 2, \ldots, \qquad (2)$

where L is the lag operator (L^k x_t = x_{t-k}) and u_t is I(0). One can use a binomial expansion in Eq. (2) such that

$(1 - L)^d = \sum_{j=0}^{\infty} \binom{d}{j} (-L)^j = 1 - dL + \frac{d(d-1)}{2}L^2 - \ldots;$

then, if d is fractional, x_t can be expressed as

$x_t = d\,x_{t-1} - \frac{d(d-1)}{2}\,x_{t-2} + \ldots + u_t.$

In other words, x_t is a function of all its past history, and the higher the value of d is, the higher is the level of dependence between observations distant in time. Thus, the parameter d measures the degree of persistence of the series.
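To make the expansion above concrete, the short Python sketch below computes the binomial weights of (1 - L)^d recursively and applies the corresponding truncated fractional difference to a simulated series. It is only a minimal illustration of the I(d) machinery under assumed simulation settings, not the estimation procedure used in the paper.

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n weights of (1 - L)^d from the recursion
    psi_0 = 1, psi_j = psi_{j-1} * (j - 1 - d) / j."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - d) / j
    return w

def frac_diff(x, d):
    """Apply (1 - L)^d to x, treating pre-sample values as zero."""
    n = len(x)
    w = frac_diff_weights(d, n)
    return np.array([np.dot(w[: t + 1], x[t::-1]) for t in range(n)])

rng = np.random.default_rng(0)
u = rng.standard_normal(500)     # an I(0) white-noise input
x = frac_diff(u, -0.8)           # fractionally "integrating" with d = 0.8
back = frac_diff(x, 0.8)         # differencing by the same order recovers u
print(np.allclose(back, u))      # True (up to floating-point error)
```

The same weights underlie the persistence interpretation in the text: the more slowly they decay (the larger d is), the longer past observations continue to influence the current value.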
A very interesting case occurs when d belongs to the interval (0.5, 1), which implies non-stationary but mean-reverting behaviour, with shocks having transitory but long-lived effects. The fact that d might take a fractional value allows for a higher degree of flexibility compared to the standard models based on d = 0 for stationary series and d = 1 for non-stationary ones; in particular, it is more suitable to shed light on whether or not the series is mean-reverting, with shocks having longer-lasting effects as the parameter d approaches 1. In a similar way, fractional cointegration allows one to test for the existence of long-run equilibrium relationships within a more general framework. To estimate d for the individual series, we use the Whittle function in the frequency domain (Dahlhaus, 1989) following the procedure described in Robinson (1994) (see also Gil-Alana & Robinson, 1997). The bivariate analysis is based on the concept of fractional cointegration, and uses the two-step approach of Engle and Granger (1987) extended to the fractional case as in Cheung and Lai (1993) and Gil-Alana (2003), as well as the FCVAR (fractional cointegration vector autoregressive) model proposed by Nielsen (2010, 2012). The latter approach is an extension of the CVAR (Cointegrated Vector AutoRegressive) model (Johansen, 1996) to the fractional case, and it allows for series that are integrated of order d and cointegrate with order d-b, with b > 0. It considers the following model:

$\Delta^d X_t = \alpha \beta' \Delta^{d-b} L_b X_t + \sum_{i=1}^{k} \Gamma_i \Delta^d L_b^i X_t + \varepsilon_t,$

where $L_b = 1 - \Delta^b$ is the fractional lag operator, ∆ is the first difference operator, i.e. (1 - L), and X_t is the vector of the time series under examination. β is a matrix whose columns are the cointegrating relationships in the system, that is to say the long-run equilibria, while the Γ_i are the parameters that govern the short-run behaviour of the variables. The coefficients in α represent the speed of adjustment to the long-run equilibrium in response to temporary deviations from it and the short-run dynamics of the system.
Results and discussion
We use quarterly data on real GDP and CO2 emissions in China, from 1978 to 2015, obtained from the Eikon database, which merges data from different sources into a single platform. Figure 1 contains plots of these two series, both of which appear to be upward trended, and also, as additional information, of CO2 intensity (kg per kg of oil equivalent energy use) for the period 1971-2015 (source: the World Bank); this series is also upward trended but has started to decline most recently. As a first step we examine the orders of integration of the two individual series under examination, i.e. of the logs of CO2 emissions and real GDP, respectively. For this purpose, we consider the following model:

$y_t = \beta_0 + \beta_1 t + x_t, \quad (1 - B)^d x_t = u_t, \quad t = 1, 2, \ldots, \qquad (4)$

and test the null hypothesis $H_0: d = d_0$ in (4) for d_0-values of -2, -1.99, …, -0.01, 0, 0.01, …, 1.99 and 2, under two alternative assumptions for the I(0) error term u_t, namely that it follows a white noise process and a weakly autocorrelated process as in the exponential spectral model of Bloomfield (1973), respectively. The latter fits extremely well in the framework suggested by Robinson (1994) and it is stationary for all values, unlike the AR (autoregressive) case (see e.g. Gil-Alana, 2004).
As for the deterministic terms, we consider the three cases of (i) no terms, (ii) a constant, and (iii) a constant and a linear time trend, and choose the specification with statistically significant coefficients. The results are displayed in Table 1. The two individual series appear to be highly persistent. In the case of white noise residuals the estimated values of d are 1.87 and 1.92, respectively, for the log CO2 and log GDP series, and a significant positive time trend is found in the latter case. When allowing for autocorrelation, the estimated values are 1.91 and 1.82, and the null hypothesis of I(2) (integrated of order 2) behaviour cannot be rejected since the 95% confidence intervals include the value of 2 for both series. These values indicate high persistence, with shocks having permanent or long-lasting effects both on the levels and the growth rates of the series. Table 2 displays the estimates of d using the "local" Whittle semi-parametric approach of Robinson (1995). It is semi-parametric in the sense that no specific model is assumed for the I(0) error term. This method (Robinson, 1995) was later extended and improved by Phillips and Shimotsu (2005) and Abadir et al. (2007), among others, but the latter approaches require other user-chosen parameters in addition to the bandwidth and the results are very sensitive to those. When using this method, the estimates must be in the range (-0.5, 0.5), and therefore we carry out the analysis using the second differences. The null of I(0) behaviour cannot be rejected in any case regardless of the bandwidth parameter. Thus, both the parametric and semi-parametric results indicate that the two series are non-stationary with orders of integration around 2, with shocks having permanent effects and producing changes in the level-trend structure of the data. Next we examine the possibility of fractional cointegration by using in the first instance the method suggested by Gil-Alana (2003), which is an extension of the Engle and Granger (1987) approach to the fractional case. Thus, in the first step, we test for the order of integration of the two variables (in first differences). Since the previous results imply that they are I(1), in the second step, we regress one variable against the other and test whether the residuals are integrated of order d-b, these two parameters corresponding to the orders of integration of the two variables of interest. We display in Tables 3 and 4 the results for the two cases of uncorrelated and autocorrelated errors for three different estimation approaches for the coefficients in the regression model: (i) OLS (ordinary least squares) in the time domain, i.e.

$\hat{\beta}_{LSTD} = \frac{\sum_{t} (1-L)\log GDP_t \,(1-L)\log CO2_t}{\sum_{t} \left[(1-L)\log GDP_t\right]^2}; \qquad (5)$

(ii) OLS in the frequency domain, i.e.

$\hat{\beta}_{LSFD} = \frac{\sum_{j} \mathrm{Re}\, I_{xy}(\lambda_j)}{\sum_{j} I_{x}(\lambda_j)}, \qquad (6)$

where x_t and y_t denote the first differences of log GDP and log CO2, respectively, λ_j = 2πj/T, j = 1, …, T, are the Fourier frequencies, and where for arbitrary sequences w_t and v_t we define the cross-periodogram and the periodogram, respectively, as $I_{wv}(\lambda) = \omega_w(\lambda)\,\omega_v(-\lambda)$ and $I_w(\lambda) = I_{ww}(\lambda)$, with ω_w(λ) being the discrete Fourier transform of w_t:

$\omega_w(\lambda) = \frac{1}{\sqrt{2\pi T}} \sum_{t=1}^{T} w_t e^{i\lambda t}.$

Table 3 reports the cointegrating regressions using first-differenced data, where LSTD denotes least squares in the time domain (5), LSFD least squares in the frequency domain (6), and NBLS narrow band least squares (7).
(iii) Finally, we also employ a NBLS (narrow band least squares) estimator, which is related to the band estimator proposed by Hannan (1963), and which is given by

$\hat{\beta}_{NBLS} = \frac{\sum_{j=0}^{m} s_j\, \mathrm{Re}\, I_{xy}(\lambda_j)}{\sum_{j=0}^{m} s_j\, I_{x}(\lambda_j)}, \qquad (7)$

where 1 ≤ m ≤ T/2, s_j = 1 for j = 0, T/2 and s_j = 2 otherwise; the motivation for this third approach is that, since cointegration is a long-run phenomenon, when estimating the slope coefficient in the regression model one might concentrate only on the low frequencies, which are those corresponding to the long run, hence neglecting information about the high frequencies, which might distort the estimation results (see Gil-Alana & Hualde, 2009). In the first of these cases, the estimated values of d are 0.83 and 0.87, respectively, and while the I(1) hypothesis cannot be rejected with autocorrelated errors, it is rejected in favour of I(d), d < 1, with white noise residuals, i.e. in the latter case we find mean reversion and fractional cointegration, though with a very slow rate of adjustment; however, when using the frequency domain least squares estimator, the values are much smaller, providing evidence of fractional cointegration in the two cases of uncorrelated and autocorrelated errors; finally, when using the NBLS estimator in (7) the estimates are very sensitive to the choice of the bandwidth parameter, and with m = T^0.5 the null of standard cointegration, i.e. d = b = 1, cannot be rejected. Thus, the results seem to be very sensitive to the estimation method used for the cointegrating regression and the bandwidth parameter, but in all cases there is a reduction in the degree of integration in the long-run equilibrium relationship. Given the lack of robustness of the above results, we also apply the FCVAR method of Nielsen (2010, 2012), first under the assumption that d = b, these two parameters being the orders of (fractional) integration of the individual series, which implies that the cointegrating errors will be I(d-b) = I(0). Their estimated order of integration is 1.024, which supports the hypothesis of classical cointegration, with the individual series being I(1) and the cointegrating errors I(0). Further, the null d = b cannot be rejected by means of a LR (likelihood ratio) test, which again implies standard cointegration. This finding is in contrast to the previous test results, which implied that standard cointegration should be rejected in favour of fractional cointegration (i.e. d-b > 0), and suggests that the earlier tests might be biased in favour of higher degrees of integration because of the method used for the estimation of the coefficients in the cointegrating regression (see Gil-Alana, 2003; Gil-Alana & Hualde, 2009). Classical cointegration between the two series is also supported by the tests of Johansen (1988, 1996) and Johansen and Juselius (1990) (these test results are not reported for reasons of space). Therefore, there is evidence of a stable long-run equilibrium relationship between the growth rates of CO2 emissions and real GDP in China, implying long-run co-movement between the two variables. Concerning the implications of our findings in the context of existing research, one should notice that the previous literature predominantly focused on the causal relationship between energy consumption and economic growth, the determinants of CO2 emissions, or the EKC hypothesis, whereas the present study provides novel evidence on the persistence of, and the link between, the growth rates of CO2 emissions and real GDP in China.
As for the limitations of our analysis, it should be stressed that it does not investigate the functional form of the equilibrium relationship that has been identified, or the possible presence of non-linearities and structural breaks; future work could examine these issues by carrying out appropriate tests. Seasonal patterns and turning points could also be relevant in this context and should be a special focus of attention in future research.
Conclusions
This paper has analysed the relationship between the logarithms of CO2 emissions and real GDP in China by applying fractional integration and cointegration methods. These are more general than the standard methods based on the dichotomy between stationary and non-stationary series, allow for a much wider variety of dynamic processes, and provide information about the persistence and long-memory properties of the series and thus on whether or not the effects of shocks are long-lived. For all these reasons, our study makes a novel contribution to the literature. In particular, the univariate results indicate that the two series are non-stationary and highly persistent, their orders of integration being around 2, whilst the cointegration tests (using both standard and fractional techniques) imply that there exists a long-run equilibrium relationship between the two variables in first differences, i.e. their growth rates are linked together in the long run. Our results also have important policy implications. Specifically, they suggest to policymakers the need for environmental policies aimed at reducing emissions during periods of economic growth: if China wants to be on a sustainable development path, decisive environmental policies appear to be necessary. In particular, the Chinese government should adopt more environmental innovation measures, especially to increase energy efficiency through energy consumption restructuring, to promote social awareness of the advantages of a low-carbon economy and of environmental protection, and to ensure the implementation of environmental protection legislation and compliance; this type of green development also offers new opportunities for entrepreneurship.
5,882.8
2021-01-25T00:00:00.000
[ "Economics" ]
The effect of the use of information and communication technology skills with empowerment indicators on staff in the Ministry of Sports and Youth of Islamic Republic of Iran Information and communication technology (ICT) is one of the most important areas of development in the world. The purpose of this study was to survey the effect of the use of ICT Skills with empowerment indicators on staff in the Ministry of Sports and Youth of Islamic Republic of Iran. This study was an applied research and its method was descriptive correlational. The statistical population of this study was all staff in Youth and Sports Departments of Guilan Province. The Cronbach’s alpha coefficient was reported above 0.90 in both questionnaires. The collected data were analyzed by K-S test and Pearson and Spearman correlation coefficient. The results showed that 59.43% of subjects were men and 40.57% of them were women and 90.10% of them were married, and 9.90% were single. The bachelor’s degree (49.82%) had the highest mean of academic education and doctorate (35.0%) had the lowest mean. The performance improvement index (4.461) had the highest mean and job variety (3.851) had the lowest mean among variables. Therefore, it is suggested that high level managers of the Ministry of Sports and Youth can empower their staff through the creation of a climate for staff’s more participation of ICT, the sufficient freedom for job responsibilities, the allow for the creativity and innovation, and the attention to low level staff’s comments and suggestions. Introduction ICT has been able to be a component of modern society during a short period of time, so that the understanding of ICT along with reading, writing, and counting, and the mastery of basic skills and concepts have been considered as a part of the core of these societies in many countries 1 .Also, human resources should understand that the lack of ICT will lead to inequality in the use of instructional opportunities.The depth of these inequalities has huge differences between the advanced and developing countries.Societies that have no or less ability to use technology are steadily moving back from the participation in a society that moves based on science and technology 2 .ICT is considered as one of the most important areas of development in the world.Many countries in the world consider the development of information technology as one of the most important infrastructures of their development.The integration key of ICT in organizations is the qualifications of ICT and its experiences, so that its skill depends on its professional qualification 3 .Today, the role and effect of ICT is obvious for everybody.In other words, it can be argued that information technology is used for the coordination of affairs and environmental changes commensurate with the community, the thinking speed, the optimal utilization of resources, and the transformation of lifestyle 4 .It must be admitted that today, ICT is a set of hardware, software, and thoughts that facilitates the flow and efficiency of information, so that ICT at the international level has become a powerful force in social and economic 5 , political, and educational developments 6 .On the other hand, many countries and regions of the world do not make any progress without the joining to the era of ICT 7 .It can be stated differently that many of the efforts in the field of ICT in different academic departments have been unsuccessful because the production of thought and creativity 8 , the coordination with this 
changes, and the creation of novel changes require special training capability using ICT tools 9 .Today, the world is a world with enormous transformations and the imagination of the future without the support of ICT seems unlikely.In the last decade, we have seen the revolution of information and communication, so that this century has been called ICT.Information technology is creating a new revolution all over the world that has created new and significant capacities in the field of human knowledge and it has created tools that have changed the nature of work and life and this has led to widespread developments in all social and economic areas of humanity 10 .Today, we can observe the ascending and amazing growth in the rate of changes through ICT and the increase of staff's knowledge and awareness level in sports organization.The most important and effective advantage of the use of ICT is to reduce the information poverty in the society, making a variety of information and knowledge available, and the increase of the awareness level of public in the society.According that the next century is the century of information and the coordination of the information society, this can contribute to the efficiency and effectiveness of the community in the reduction of information poverty and the development of a comprehensive information in the society 11 .Information technology is a tool in staff's hands to increase their ability, but this tool is the main goal in many organizations and staff focus on the use of technology 12 .On the other hand, any organization that ignores this issue will decline due to the rapid growth of information and communication technologies and the need of organizations for their survival in today's era.Today, the attention to this technology is inevitable and necessary to achieve the goals of the organization 13 .In this regard, employees of state institutions are considered as the resources of the organization and it is necessary to pay attention to them in order to achieve the goals.Staff will be able to do their job properly with the proper information and information and communication technologies will provide this opportunity to process and maintain information 14 .We can find with a look to the past that a human has used information technology since the creation till now.The diversity and development in information technology began in the late twentieth century.The most important feature of information technologies is that those continuously increase their technological capabilities and reduce the cost of their use.At the moment, the investment in this sector is dramatically rising, so that the cost of the purchasing of technological products is more than 50% of the costs of an organization in the United States 15 .Today, government organizations of the country require changes in their activities and online services in accordance with the growth of information technology.The provision of fast and up-to-date services has an important place in goals of an organization and staff that are the human capital of an organization must also keep their goals in line with the goals of an organization and lead to the improvement of productivity in an organization in the direction of the synergy of forces and talents.Therefore, the enhancement of staff's capability level of this organization about information and communication technologies is the inevitable requirement 16 .The emergence and adoption of ICT has accelerated the transformations of the world in the age of 
information and knowledge and affected the different areas of life in societies.ICT is an important issue for the developing countries to improve the productivity and the quality of life 17 .The adoption of ICT is a powerful and effective tool for developing countries in the world and plays a decisive role in executive organizations such as Youth and Sports General Directorate 18 .Government organizations have invested heavily in the use of ICT due to their capabilities that ICT has in the creation of value for them.One of the most important values that organizations can create in their subsystems using these technologies is the empowerment of their subsets with the use of these tools.successful managers use these technologies to develop their staff's capabilities in order to achieve the goals of the organization due to the potential abilities of ICT to meet the needs of the organization in this field 19 .Empowerment means that we simply encourage individuals to play a more active role in their work, so that they take responsibility for the improvement of their activities and can make key decisions without referring to a top responsible 20 .The staff's empowerment has led to specific attitudinal and behavioral outcomes for organizations and enhances their ability to compete in both internal and external environments.The staff's empowerment is an important strategic for the development of different organizations to adapt to external changes and is one of the main issues of organizations.This has led that successful organizations using different tools and mechanisms try to provide empowerment programs for a subset 21 .Successful managers use these technologies to develop their staff's capabilities to achieve the goals of the organization due to the potential ability of ICT to meet staff's needs in this field 22 .The creation of a balance between empowerment and ICT, the provision of background for the maintaining, recording, and transmission of a part of staff and managers' valuable experiences to new employees, the prevention of staff's gradual burnout and the gradual burnout of organization, the avoidance of costs due to staff's lack of awareness from the developmental plans of organization, the conflict or challenges of organization and the continuous improvement of the organization, the balance between powerful staff's needs and needs of the organization, and the use of opportunities and the prevention of waste of human capital and resources are the part of a set of reasons that can be considered for the importance of these two variables in this study.Therefore, the purpose of this study was to survey the effect of the use of ICT Skills with empowerment indicators on staff in the Ministry of Sports and Youth of Islamic Republic of Iran. Methods This study was a descriptive correlational research that has been conducted through field method. Participants The statistical population of this study was all staff in Youth and Sports Departments of Guilan Province (N= 310).The statistical sample was equal to the statistical population.According to Morgan's table, 175 people completed the questionnaire. Instruments and Tasks The instrument of this study was included a demographic questionnaire, the Organizational Citizenship Behavior Questionnaire, and the Perceived organizational support questionnaire.The questionnaires were distributed among a number of experts (N=15) to determine the validity of questionnaires and the accuracy of the questions. 
The questionnaires were then distributed in the statistical population after the obtained result proved reassuring. The reliability of the questionnaires was also calculated using Cronbach's alpha method. The Cronbach's alpha coefficient was above 0.90 for both questionnaires, supporting the reliability of the measurement instruments of this study. The ICT questionnaire included 15 items. Its questions were drawn from Davis's technology acceptance model (Questions 1-3), Chan and Huff's information systems model (Questions 4-10), and Dinev and Koufteros's Internet self-efficacy model (Questions 12-15). A valid, combined questionnaire developed by the researchers was also used to assess sports teachers' empowerment components. The teachers' empowerment questionnaire comprised 35 questions: 7 questions from Spreitzer's psychological empowerment questionnaire, 10 questions from Robbins's job satisfaction questionnaire, 5 questions from Paterson's job performance questionnaire, 1 question from Scott and Jaffe's climate empowerment questionnaire, 6 questions from Whetten and Cameron's empowerment questionnaire, and 6 questions from Neefe's organizational learning questionnaire.
Procedure
The purpose and the process of this study were explained to the subjects. The participants were assured that their data would be kept confidential and would not be made available to anyone. All subjects then completed a consent form to participate in this study and took part willingly. The researchers distributed the questionnaires among the subjects, and the subjects then completed them.
Data Analysis
The collected data were classified using descriptive statistical methods and were analyzed with the Kolmogorov-Smirnov (K-S) test and the Pearson and Spearman correlation coefficients. The SPSS software (version 23) was used for data analysis (α ≤ 0.05).
Results
The descriptive results of this study showed that 90.10% of subjects were married and 9.99% of them were single. The bachelor's degree (49.82%) had the highest mean of academic education and the doctorate (35.0%) had the lowest mean. The performance improvement index (4.461) had the highest mean and job variety (3.851) had the lowest mean among variables. 52.65% of staff had a formal contract-agreement and 47.35% of them were contract staff. 34.98% of staff had studied the field of physical education and sport sciences and 65.02% of them had studied other fields.
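As a hedged illustration of the reliability and correlation analyses reported above (not the authors' SPSS workflow), the Python sketch below computes Cronbach's alpha for a set of questionnaire items and the normality check plus Pearson and Spearman correlations between an ICT-skills score and an empowerment score; the data are randomly generated placeholders with the study's reported sample size.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """items: respondents x items matrix of Likert-type scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(42)
n = 175                                    # sample size reported in the study
latent = rng.normal(size=n)                # simulated common factor
ict_items = latent[:, None] + rng.normal(scale=0.7, size=(n, 15))        # 15 ICT items
emp_items = 0.6 * latent[:, None] + rng.normal(scale=0.8, size=(n, 35))  # 35 empowerment items

print("Cronbach's alpha (ICT items):", round(cronbach_alpha(ict_items), 3))

ict_score = ict_items.mean(axis=1)
emp_score = emp_items.mean(axis=1)

# Normality check (K-S test on standardized scores), then the two correlations.
print(stats.kstest(stats.zscore(ict_score), "norm"))
print(stats.pearsonr(ict_score, emp_score))
print(stats.spearmanr(ict_score, emp_score))
```

In a real analysis the item matrices would of course come from the collected questionnaires rather than simulation, and Spearman's coefficient would be preferred when the K-S test rejects normality.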
Discussion The purpose of this study was to examine the relationship between the use of information and communication technology (ICT) skills and empowerment indicators among staff of the Ministry of Sports and Youth of the Islamic Republic of Iran. The results showed a significant relationship between ICT skills and staff empowerment indicators. It can be inferred that the development of ICT, and especially of the Internet, has created new conditions in the world and has had many positive and immediate effects on behaviors, skills, relationships, and social interactions, particularly for staff of government organizations in the executive branch. The ever-growing volume of information has made its organization, recording, and storage more complex and access to it more difficult day by day 23. ICT has become a main focus of human attention owing to daily changes in hardware and software technology, globalization, competitive pressure, organizations' need for expert staff familiar with these technologies, the diminishing need for individuals' physical presence at the workplace, organizations' need to improve staff efficiency and effectiveness, the widening scope of information flow, and its effective contribution to decision-making, planning, and the hierarchy of most large government organizations. Computer science, storage, retrieval, the volume of information, and human knowledge thus combined, and information technology was formed 24. ICT helps staff become more aware in their lives: they learn about the role of ICT in daily life, become familiar with related information-technology tools, and use these tools both independently and in groups. They become familiar with different methods of gathering, organizing, and presenting information, which affects their performance; indeed, with the development of technology and the automation of tasks, the nature of human activity in organizations has shifted toward intellectual work 17. The results of this study are consistent with the results of Jafari and Azmoon 7 ; Nemamyan and Emami 14 ; Nadifard and Shahtalabi 13 ; Choi, et al. 25 ; Dewettinck, et al. 22 ; and Srivastava, et al.
's 20 study.It is suggested to avoid the traditional views and methods of training due to the rapid changes in the field of ICT and these methods are continuously evaluated from several points for the continuous optimization and updated trainings in this regard.It can be said in the explanation of this approach that the greatest effect of technology may has been on human relationships in the 21st century and with the advancement of the technology world and humans anywhere on the planet can communicate with each other with the help of this technology 26 the effect of information technology on their social, personal, occupational, and civic life due to it is becoming more widespread in the society.Technologies such as computers, mobile phones, the Internet, multi-media, and other popular media have affected the appearance and inner aspects of human life in the current era.These massive informationcommunications developments have rebuilt the cultural and social context of the society and have had a deep effect on the existential dimensions of the community 11 .According to the results of the results of this study, it is recommended that the staff announce their information needs and shortcomings in the field of ICT to the authorities to enrich their information load.Also, honorable authorities develop and reform infrastructure structures and provide the necessary facilities for the development of computer literacy among students.The results of Goktas' s 27 study showed that the lack of access to computers, the low speed of Internet, the lack of access to resources in the area of residence are identified as the most important problems in using ICT.ICT is an approach that can be more effective than any other method in staffs' efficiency and it can also meet the needs of an organization with the consideration of training opportunities and facilities better than any other system.ICT uses all factors that play the important role in the process of productivity and tries to create a desirable organizational conditions in terms of goals and intentions through the precision design of factors 28 .The correct use of ICT can have a deep positive effect on staffs' engagement and positive attitudes and the facilitation of executive affairs.Therefore, ICT as an effective resource and a way for the quick sharing of information in today's society can be effective to increase empowerment skills in a society 29 .On the other hand, information technology can also be effective on daily communications and play a role of the facilitator.Also, the emergence of technologies can develop the range of human communications in all fields.The subject of ICT and empowerment has been the subject of debate among researchers of technology field, so that it has affected staffs' knowledge, attitudes, values, and coverings in recent decades 30 .Today, the ascending and amazing development of the rate of changes is observable due to ICT and an increase of the human's knowledge and awareness.The most important and effective advantage of using ICT is to reduce information poverty in the community and make information and knowledge available and increase public awareness of the community.Therefore, this can play an important role in the reduction of information poverty and the development of the inclusive information in the society and the efficiency and effectiveness of the community 7 .the investment and the application of hardware and software of new technologies and its use for information storage are not the emphasized issue in 
the information society, especially in the executive branch of government organizations; the most important issue is rather learners' empowerment to acquire self-leadership skills in learning. An administrative system that wants to establish a place for ICT must examine what this technology requires from the interaction of its infrastructures 31. In other words, recognizing the place of ICT in the country's bureaucracy should make it possible to identify and explain its role in the quantitative and qualitative development of the executive field, but we should not expect extraordinary short-term results from ICT in the administrative culture 7. Traditional structures and processes do not meet the needs of the human community in the information age of the new millennium, because being knowledge-based is the contemporary human's greatest goal. We must therefore find opportunities that enrich the outcome of the training process, because the illiterates of the new century are not those who cannot read and write but those who do not know how to learn. Using new training technologies will enhance the efficiency and effectiveness of administrative systems 32. Conclusion In general, in-service training that matches the knowledge of the present era and the relevant work areas can support staff self-efficacy in the Ministry of Sports and Youth by reinforcing their sense of competence and developing the skills and abilities they need to meet possible work challenges. Through training courses aimed at organizational upgrading, managers can also clarify each employee's role and contribution to the goals of the organizational departments, strengthen staff's sense of control over administrative and operational outcomes, and develop staff's capacity to align the environment with their demands. High-level managers of the Ministry of Sports and Youth can empower their staff by creating a climate for greater participation through ICT, giving sufficient freedom in job responsibilities, allowing creativity and innovation, and paying attention to the comments and suggestions of lower-level staff. Table 1 The status of subjects' age. Table 2 The status of subjects' gender. Table 3 The results of the variables' distribution (normality of data). Table 4 The results of the ICT data and empowerment components. Table 5 The results of the Spearman correlation coefficient. Table 6 The results of the Pearson correlation coefficient.
4,702.6
2019-04-26T00:00:00.000
[ "Education", "Computer Science" ]
Clock Transitions Versus Bragg Diffraction in Atom-interferometric Dark-matter Detection Atom interferometers with long baselines are envisioned to complement the ongoing search for dark matter. They rely on atomic manipulation based on internal (clock) transitions or state-preserving atomic diffraction. In principle, dark matter can act on both the internal and the external degrees of freedom, to both of which atom interferometers are susceptible. We therefore study in this contribution the effects of dark matter on the internal atomic structure and on the atoms' motion. In particular, we show that the atomic transition frequency depends on the mean coupling and the differential coupling of the involved states to dark matter, scaling with the unperturbed atomic transition frequency and the Compton frequency, respectively. The differential coupling is only relevant when internal states change, which makes detectors based, e.g., on single-photon transitions sensitive to both coupling parameters. For sensors generated by state-preserving diffraction mechanisms like Bragg diffraction, the dominant contribution is the mean coupling, which modifies only the motion of the atom. Finally, we compare both effects as observed in terrestrial dark-matter detectors. I. INTRODUCTION Many envisioned atom-interferometric dark-matter 1 detectors [2][3][4][5][6] rely on internal atomic transitions, even though the atom-optical interaction also manipulates the center-of-mass (COM) motion. While in principle both degrees of freedom can be affected by dark matter (DM), the details of the coupling are key to interpreting and understanding the potential signal measured by atom interferometers (AIs) [7][8][9]. We identify the mean and the differential coupling of the involved atomic states as key quantities and discuss their effect on the atomic transition frequency, as well as on motional effects in terrestrial DM detectors based on atom interferometry. Terrestrial detectors with both 4 horizontal 10,11 and vertical 2,3,12 orientations are at the planning stage or under construction, and their site evaluation requires a thorough analysis of the noise environment. 13,14 Possible designs differ in their orientations, geometries, source distribution along the baseline, and techniques for atomic manipulation. 6 Here, two-photon transitions 15 can be used to induce Bragg diffraction, which preserves the internal state, or Raman diffraction, which additionally changes the internal state, although only at hyperfine energy scales. In contrast, single-photon transitions at optical energy scales can be used to transfer momentum [16][17][18][19] and to simultaneously generate superpositions of two clock states. 20 In differential setups the latter benefit from common-mode suppression of laser-phase noise. 21,22 However, two-photon transitions are more flexible, as they allow momentum corresponding to optical wavelengths to be transferred with reasonable laser power and without the need for narrow transition lines. DM may affect both the internal energies of the atom and its COM motion. These effects can in principle be detected by AIs, since they have clock-like properties [23][24][25] while being used as accelerometers. 26 The planned detectors will mainly rely on internal transitions, as those have been identified as the dominant contribution to a DM-induced signal.
9,27To include DM, one can introduce extensions of the Standard Model that couple to conventional matter, [28][29][30][31] i. e., the constituents of the atom, as well as other elementary particles.By that, each internal energy of the involved atomic states is modified, and through energy-mass equivalence, its motion as well. In this article we highlight the difference of the internal and external degrees of freedom with the help of two relevant coupling parameters: Describing the mean coupling of both involved internal states to DM as well as their differential coupling.Remarkably, both may contribute to the change of the atomic transition frequency and may be detected by AIs based on state-changing diffraction, e. g., using single-photon transitions.3][34][35] Moreover, the relevant energy scales are the unperturbed atomic transition frequency and the Compton frequency, respectively, and are hence of extremely different orders of magnitude. In AIs based on single-photon transitions like in planned detectors, [2][3][4]6 the phase from the clock contribution, i. e., originating from the change of the atomic transition frequency, is dominant. Hwever, the signal in principle also includes motional effects of the coupling to the COM motion.In contrast, Bragg-type AIs, as implemented in MIGA, 10 are only susceptible to the latter.Furthermore, our results are also of relevance for setups that rely on Raman diffraction, 4 where the internal energy scale is in the megahertz regime and much lower than in the envisioned setups built with optical single-photon transitions.It therefore plays a role in between both limiting cases.Moreover, the mean coupling identified in our model is closely related to the parameter measured in tests of the Einstein equivalence principle 36,37 (EP), where possible violations of the universality of free fall between different atomic species 38,39 or isotopes 40 the differential-coupling parameter highlights other facets of the EP, namely violations of the universality of the gravitational redshift 41 and of clock rates.42 To shine light on the influence of these coupling parameters, we furthermore discuss different orders of magnitude of various contributions. II. COUPLING OF ATOMS TO DARK MATTER We model DM and violations of the EP by introducing an extension of the Standard Model.While different approaches are possible that in turn depend on the mass range of interest, our focus lies on ultralight DM. 7 For that, a classical scalar dilaton field 28,29,43,44 is a simple generic extension.This dilaton field is linearly coupled 41,43 to all elementary particles and forces of the Standard Model.Consequently, masses of elementary particles and natural constants become dilaton dependent.Hence, they also introduce a dependence of composite, bound particles through their constituents.To describe the resulting effect on atoms, we rely on an effective coupling of their mass and internal states to the dilaton field. In this article we describe an atom by a two-level system with ground state |⟩ and excited state |⟩.The external degrees of freedom of the atom such as momentum p and position ẑ are included since the interferometer also acts as an accelerometer. 
26They obey the canonical commutation relation [ ẑ, p] = iℏ 1ext with the reduced Planck constant ℏ.Therefore, our starting point is the dilaton-modified Hamiltonian (1) Here the relativistic mass defect [45][46][47][48][49] is incorporated through state-dependent masses ( ), which in turn depend on the dilaton field.In accordance with Einstein's mass-energy equivalence, we find the rest energy ( ) 2 , where is the speed of light.The mass defect already encodes the internal energy difference, which is explicitly given by [ ( ) − ( )] 2 .Regarding the external degrees of freedom, we consider terrestrial setups modeled by a linear gravitational field.Both the kinetic and potential energy also become dilaton-dependent.Additionally, we allow for a time-modulated gravitational acceleration 7 () = 0 [1 + ()] with 0 being the gravitational acceleration caused by a source mass and the dimensionless coupling constant of the source mass to the dilaton field.This coupling constant can be defined in principle in analogy to the atomic mass below, cf.Eq. ( 2).Here, is some timedependent modulation induced by the coupling of the dilaton field to the source mass. We assume that the change of the atomic mass due to the dilaton field is small.Consequently, a first-order expansion of the mass 41 at the Standard-Model value = 0, i. e., gives rise to the effective coupling to the dilaton field. Here we introduce the unperturbed, state-dependent mass ,0 (0) and the dimensionless effective coupling parameter ln( )| =0 .In principle one can connect to the individual constituents and natural constants of the atom, introducing coupling parameters that are independent of the atomic species. 35While we refrain from such approaches and focus on signatures of DM in the detector signal, we emphasize that this discussion can be helpful for the design of the sensor and the choice of the atomic source.The (dimensionless) dilaton field 50,51 (, ) includes the dimensionless coupling constant of the source mass.The dilaton field consists of two contributions: (i) The first part introduces modifications of the gravitational potential leading to EP violations. 42For example, it implies that the gravitational acceleration may be state-dependent, providing hints to extend general relativity.(ii) The second part is an oscillating background field which can model cosmic DM. 52 It behaves like a plane wave with the amplitude 0 and wave vector at frequency .The initial phase is unknown, so that only a stochastic background of cosmic DM will be observed by the detector.Through its coupling to the energies of the individual states and by that through mass-energy equivalence to the mass of the atom, see Fig. 
1, this oscillating background field directly influences both the COM motion and the atomic transition frequency.In turn, such effects induce signatures of DM in the detector's signal.3).The atomic-energy levels of the ground state (light green solid line), exited state (blue solid line), the atom's mean mass (green solid line), and energy difference between both states (red solid line), which corresponds to the mass defect, oscillate and are plotted against the phase .The relevant parameters are the mean coupling ε and differential coupling Δ of the internal states to the dilaton field, as well as the atom's mass defect Δ 0 = Δ 0 / m0 .Due to the oscillation of the internal energies, the mean mass as well as the atomic transition frequency also oscillate.For single-photon-like schemes the three relevant situations are shown from left to right: (i) The general case in which the mean and differential coupling are non-vanishing, i. e., ε ≠ 0 ≠ Δ.(ii) The internal states couple exactly opposite to the dilaton field, i. e., ε = 0.Only this case leads to a constant mean mass. (iii) Both internal states can couple identically to the dilaton field, i. e., Δ = 0.For Bragg-type schemes we have Δ 0 = 0 = Δ (right panel), as no internal transition is relevant.In this case we find only an oscillating mean mass that affects the motion of the atom. Incorporating ≠ 0 is not straightforward since one has to perform a non-relativistic limit of a dilaton-modified field theory to avoid operator-ordering issues.However, galactic and cosmic observations 7,9,53 suggest small momenta compared to the rest mass of the dilaton, such that we assume = 0 in the following.Thus, the dilaton field's frequency reduces to its Compton frequency = 2 /ℏ and is solely determined by its mass .In this case, the dimensionless dilaton amplitude 34,51 becomes where the DM energy density DM 0.4 GeV/cm 3 is compared to the energy scale given by the Planck mass P = (ℏ/) 1/2 and a volume defined by the Planck length P = (ℏ/ 3 ) 1/2 , with the gravitational constant . In addition, one could model the coupling of the dilaton field to the source of the gravitational field by yet another dilaton field, possibly incoherent to the one interacting with the atoms.However, we assume that only one field is present, but allow for a phase shift compared to the oscillation of the dilaton field.Hence, we model the time-dependent modulation of the gravitational acceleration via where + is the phase of the dilaton field interacting with the atoms.For example, a phase shift of /2 corresponds to an oscillation of the atoms and gravitational acceleration out of phase.We still can include incoherence between the dilatoninduced atomic properties and a gravitational modification by averaging independently over and in analogy to the treatment discussed below. III. DARK-MATTER-INDUCED PERTURBATIONS ON ATOMS With these insights, we expand the Hamiltonian from Eq. 
( 1) with respect to , the unperturbed mass defect Δ 0 = ,0 − ,0 , and 1/ 2 .As a result, in first order the state-dependent mass can be replaced by including the unperturbed mean mass m0 = ( ,0 + ,0 )/2 and different dimensionless modifications summarized in Table I with / = ±1 to denote state-dependent perturbations.We observe the following effects: The mean mass oscillates due to μDM () and thereby influencing the COM motion.The dimensionless mass defect Δ 0 gets further modified by Δ DM (), which results in the oscillation of the transition energy explained below.Furthermore, the dilaton field effectively leads to a modification of the gravitational acceleration 0 which becomes state dependent, i. e., () TABLE I. Summary of all parameters that give rise to the atomic structure and dilaton-induced perturbations.We define mean and differential values for the unperturbed mass and coupling parameters through the respective quantities of the individual states, as well as the dimensionless mass defect.The dilaton field induces an oscillation with amplitude 0 and phase = + of the mean mass, the energies of the individual states, as well as gravity, included in the dimensionless parameters μDM (), Δ DM (), and γDM (), respectively.For quantities already including one perturbative parameter, the contribution in braces must be neglected for consistency.Moreover, we introduce the dilaton's coupling to the gravitational source mass, which leads to mean and differential EP violations. Cause Parameter Definition mean mass m0 where γEP parametrizes EP violations between different atomic species depending on the mean coupling of the dilaton field to the atom. 38,39,41An EP violation between different internal states 42,54 is encoded in Δ EP .In addition, the gravitational acceleration changes dynamically via γDM () through the DM coupling to the source mass.We separate the resulting Hamiltonian into an unperturbed part Ĥ0 and perturbations V of the rest mass, Vkin of the kinetic energy, and Vpot of the potential energy.Besides, we generalize to () to allow for time-dependent changes of the internal state.We discuss the explicit form of these four contributions in the following: (i) The unperturbed Hamiltonian describes a particle of mean mass m0 moving in a linear gravitational potential without any state-dependent effects or internal structure. (ii) The rest-mass perturbation does not affect the motion of the atom, but changes the Compton frequency m0 2 /ℏ and atomic transition frequency Ω Δ 0 2 /ℏ.The phase measured by atomic clocks, Mach-Zehnder and comparable interferometers is not affected by modifications of the Compton frequency to lowest order. 55owever, they are sensitive to the atomic transition frequency between both internal states, which is modified to The unperturbed atomic transition frequency Ω is connected to the mass defect and is modulated by an oscillation with amplitude Ω ( Δ + Ω ε) 0 .Here, ε is the mean coupling of both internal states to the dilaton, whereas Δ is the difference of their coupling constants, see Table I.Commonly, it is assumed that Ω is proportional to the atomic transition frequency, 9,24,27,35,52 i. e., the coupling of the dilaton field to both internal states is identical resulting in Δ = 0. Since the details of the coupling are a priori unknown, it is important to allow a different coupling of both internal states, i. e., to allow for Δ ≠ 0. 
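As a rough numerical illustration of this point, the sketch below compares the two terms in the modulation amplitude of the transition frequency, which scale with the Compton frequency and the atomic transition frequency, respectively, for the two frequency ratios quoted later in the text. The coupling values are hypothetical placeholders chosen only to show how a small differential coupling can still dominate because the Compton frequency vastly exceeds the transition frequency.

```python
# Hedged sketch: relative size of the two contributions to the oscillation
# amplitude of the atomic transition frequency,
#     delta_omega ~ (d_eps * omega_C + mean_eps * Omega) * phi_0 ,
# expressed in units of omega_C * phi_0. The coupling parameters below are
# hypothetical placeholders, not measured or predicted values.
frequency_ratios = {
    "optical clock transition (e.g. Sr)": 1e-11,   # Omega / omega_C as quoted in the text
    "hyperfine/Raman transition (e.g. Rb)": 1e-16,
}
mean_eps = 1.0     # assumed mean coupling (arbitrary normalisation)
d_eps = 1e-13      # assumed differential coupling, much smaller than mean_eps

for label, ratio in frequency_ratios.items():
    mean_term = mean_eps * ratio   # scales with the atomic transition frequency
    diff_term = d_eps              # scales with the Compton frequency
    winner = "differential coupling" if diff_term > mean_term else "mean coupling"
    print(f"{label}: mean term {mean_term:.1e}, differential term {diff_term:.1e} "
          f"-> {winner} dominates")
```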
Generally, Ω is a linear combination of both the Compton and the atomic transition frequency with respect to the different coupling parameters.While the coupling parameters Δ and ε may be of very different orders, the Compton and atomic transition frequency also differ by multiple orders of magnitude so that in principle both contributions to Ω have to be considered.Remarkably, the change of the atomic transition frequency depends on the Compton frequency, which could enhance the DM signature in the detector.Besides, an observable change in the atomic transition frequency present for Raman 4 or microwave transitions cannot be completely ruled out.Such transitions are relatively simple to implement and benefit from a much lower recoil velocity, suppressing the impact of gravity-gradient noise. ( As a side note we mention that both Vkin and Vpot depend on the perturbation parameters μDM () and Δ DM ().In principle, they include products of a dilaton coupling with Δ 0 / m0 , which are next order in perturbation.Therefore, they are omitted in further calculations and enclosed in Table I by braces. The discussed perturbations reflect themselves in the signals of atom interferometers that can be used to construct DM detectors.In Secs.IV and V, we discuss the individual contributions and their order of magnitude in an exemplary setup. IV. DARK-MATTER SIGNAL IN ATOM GRADIOMETERS Quantum sensors such as atomic clocks or atom interferometers are affected 35 by the previously derived perturbations.We focus on the signal observed by DM detectors generated from light-pulse atom interferometers.Such high-precision quantum sensors have been proposed for DM searches, 7,9,23,24,32 whose signal can be enhanced by multi-diamond 26,[56][57][58] along with large-momentum-transfer techniques. 22,59,60To focus on the fundamental effects on these detectors, we study an atomic Mach-Zehnder interferometer (MZI) cf.Fig. 2 without transferring large momenta.However, the treatment introduced below can easily be generalized to different interferometer types and geometries. In an MZI, the wave packet is initially split by a beam-splitter pulse into two separate arms.One arm continues along the initial path, while the atoms on the other arm gain momentum due to diffraction, so that the arms become spatially separated.After an interrogation time , a mirror pulse interchanges the momenta of the two arms.At the final time of 2, a second beam-splitter pulse interferes both arms, resulting in two output ports.Each interaction of the atoms with light that transfers momentum might also change their internal state.Here, the specific implementation is the key for the sensitivity of a detector. In delta-pulse approximation 61,62 the effect of light-matter interaction on the atomic motion is described by an effective arm-dependent potential 63 Vem with ℓ being the ℓ th (effective) wave vector acting at time ℓ , transferring the momentum ℏ ( ) ℓ on arm , and (•) denoting the Delta distribution.We assume that both arms are diffracted at the same time, neglecting the finite speed of light on the scale of the arm separation. 64Moreover, we do not include frequency chirps necessary in terrestrial, vertical configurations and omit the standard laser-phase contribution. 
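To make the delta-pulse picture concrete, the following sketch evaluates the unperturbed Mach-Zehnder phase from the laser phases at the three pulse times, using the standard shortcut (valid for at most quadratic potentials) of taking the second difference of the freely falling mean trajectory; it reproduces the phase quoted below. All numbers are illustrative and are not the planned detectors' parameters; the dilaton-induced terms discussed next are perturbations on top of this.

```python
# Minimal sketch: in the delta-pulse picture the unperturbed Mach-Zehnder phase
# follows from k*z evaluated on the freely falling mean trajectory at the three
# pulse times, phi = k*(z(0) - 2*z(T) + z(2T)) = -k*g0*T**2.
import math

k = 2 * math.pi / 698e-9     # effective wave vector for an optical transition [1/m]
g0 = 9.81                    # gravitational acceleration [m/s^2]
T = 0.5                      # interrogation time [s]
z0, v0 = 0.0, 0.0            # initial position and velocity of the wave packet

def z(t):
    """Freely falling centre-of-mass trajectory."""
    return z0 + v0 * t - 0.5 * g0 * t**2

phi = k * (z(0.0) - 2.0 * z(T) + z(2.0 * T))
print(phi, -k * g0 * T**2)   # both evaluate to the same number
```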
This interaction allows for different momentum-transfer mechanisms and can in principle incorporate large momentum transfer.Typically, the interaction is categorized into singlephoton 19 and (counter-propagating) two-photon 15 transitions.In the former, ( ) ℓ simply corresponds to the laser's wave vector, while in the latter, ( ) ℓ is the effective wave vector given by the sum of both lasers' wave vectors.Two-photon transitions suffer from laser-phase noise in differential setups with long baselines, which is suppressed for single-photon transitions.Nevertheless, two-photon transitions are generally more flexible.They allow to only transfer momentum without changing the internal state (Bragg) and drive effectively hyperfine-structure transitions while transferring momentum that corresponds to optical wavelengths (Raman). Assuming | ( ) ℓ | = for all interaction times and vanishing gravity gradients leads to a closed, unperturbed MZI.Deviations are introduced by the dilaton field as discussed in Eqs. ( 10)- (13).In a perturbative treatment, 63 we find to first order the phase with the wave packet's initial position 0 , as well as initial momentum 0 , depending on the time 0 at which the interferometer is initialized.It includes the standard phase − 0 2 of an MZI, which is perturbed by the contributions and induced by the dilaton field, compare Table II in Appendix A for their explicit expressions.Common-mode operation of two MZIs spatially separated by the distance > 0, e. g., where the first MZI is located at 0 and the second one at 0 + , suppresses noise and the dominant phase − 0 2 for vanishing gravity gradients. 22We account for the finite speed of light on the separation scale of the two interferometers, but not on the extent of the arm separation of a single one.The first MZI starts its sequence at time 0 with an initial momentum 0 of the wave packet, while the second interferometer starts at time 0 + / with an initial momentum 1 .Subtracting the phases of both interferometers gives rise to the differential phase, which in our approximation only depends on dilaton-introduced perturbations with Δ ( 0 + , 1 , 0 + ) − ( 0 , 0 , 0 ) and the propagation delay / of the light pulse.Since the dilaton phase may vary, we can only measure this stochastic background and we therefore have to average.Thus, we assume a uniform distribution of in accordance with the principle of maximum entropy. 65The signal amplitude 9 is the square of the phase difference Δ averaged over .Due to the square, various correlations Δ 2 , of the individual phases contribute. In the following we identify dominant contributions to the signal, i. e., Δ 2 , , for different experimental realizations, namely for single-photon-type and Bragg-type interferometers. V. SINGLE-PHOTON INTERFEROMETERS Many planned DM detectors 2,3 based on atom interferometry rely on single-photon transitions, due to their intrinsic suppression of laser-phase noise. 21,26In this section, we consider the effects of the dilaton field on such interferometers and focus on the dominant contributions of the observed signal amplitude.Using single-photon transitions for atomic diffraction not only changes the momentum of the atom, but also its internal state.The results discussed in this section therefore also transfer to Raman transitions, also planned for some sensors, 4 where only the frequency scales have to be adjusted. We first focus on the phase difference introduced by the modified atomic transition frequency cf.Eq. 
( 11) which gives rise to where the timescale 1 ( 0 ) is proportional to 1/ and is listed in Table III in Appendix A. Recalling 0 ∝ 1/ from Eq. ( 4) shows that this contribution plays an important role in the search for ultralight DM.In particular, the change of the atomic transition frequency yields the dominant contribution Δ 2 , = 32 to the signal amplitude, with S() sin /2 .We refer to factors like S as interferometric factors that include the interrogation-mode function. 58Perturbations and scaling factors like 2 0 (Δ + εΩ) 2 / 2 are independent of the explicit interferometer geometry and appear in a similar form for different geometries, including those with large momentum transfer. Equation ( 19) is a generalization of other treatments of atom-interferometric DM detectors 9 but has a similar form.In fact, the differential coupling Δ is usually neglected, which introduces another frequency scale.In principle, this contribution also arises for Raman-type interferometers, where the atomic energy difference is in the microwave range and much smaller than for optical transitions.However, due to the coupling to the Compton frequency that is orders of magnitude larger, one can also expect a sensitivity to the parameter Δ for Raman setups.The same holds true for single-photon transitions between hyperfine states in the microwave range.For this type of transitions, in contrary, the momentum transfer is negligible, resulting in interferometers that are less sensitive to gravity-gradient noise. As discussed above, generally the coupling scheme is unknown so that both Δ and ε might contribute.Since their order of magnitude is unknown, we consider in the following two limiting cases: Either the coupling of both internal states is completely identical, i. e., Δ = 0, or exactly opposite, i. e., ε = 0. A. Vanishing Mean Coupling ( ε = 0) For vanishing mean coupling, the change of the atomic transition frequency reduces to Ω = Δ 0 .Hence, the relevant scale is given by the Compton frequency and not by the atomic transition frequency.This is different to what is usually postulated in most treatments for AIs and clocks, where Ω ∝ Ω is assumed.Since ≫ Ω generally applies, we expect a larger suppression of contributions which do not originate from the clock phase but keep in mind that we probe for a different coupling parameter.For example, strontium is a promising candidate 66 for future single-photon AIs 16,18 and gives rise to Ω/ 10 −11 . Consequently, we neglect all Δ 2 , for , ≠ as their scale factors have to be compared to the Compton frequency, and arrive at the signal amplitude We recognize the twofold effect of the Compton frequency: On the one hand it leads to the suppression of phase contributions competing with Δ .And on the other hand, it enhances the signal.This result also persists for setups where small transition frequencies are used, e. g., hyperfine or Raman transitions.In this case, the Compton frequency clearly sets the relevant scale, even though the coupling parameter Δ might be small. B. 
Vanishing Differential Coupling (Δ𝜀 = 0) If both internal states couple identically to the dilaton field, as assumed in most previous treatments, the modulation amplitude of the atomic transition frequency takes the form Ω = Ω ε 0 .Hence, the relevant scale is now the atomic transition frequency.For single-photon transitions, the atomic transition frequency benefits from an optical regime and has a clear advantage over hyperfine or Raman transitions.Similar to the previous discussion, is the dominant contribution.But, by decreasing the relevant scale by several orders of magnitude, we include Δ 2 , as next-order contributions to the signal amplitude. After averaging over , the surviving next-order contributions to the signal amplitude Δ 2 ,1 , Δ (listed in this order) give rise to with the recoil frequency = ℏ 2 /(2 m0 ) and the initial mean momentum ℘0 ( 1 + 0 )/(2ℏ).The dominant part of the signal, i. e., Δ 2 , , has already been discussed. 9We provide the next sub-leading corrections and observe that even in spaceborne experiments Δ 2 ,1 provides a purely kinetic contribution.Additionally, for terrestrial setups long interrogation times 2 increase the significance of Δ 2 ,2 .The contribution Δ 2 ,9 induced by oscillating gravity vanishes for an in-phase oscillation, i. e., = 0, but can be enhanced by = ±/2. C. Influence of Both Couplings Since one usually neglects the differential coupling Δ, we briefly discuss its influence for different values of Ω/ .For that, we compare this approximation to the full expression by studying the ratio Δ 2 , Δ=0 Δ 2 , .It is plotted in Fig. 3 as a function of Ω/ and Δ/ ε > 0. It shows a drastic change between the two regimes discussed above.However, Fig. 3 gives a hint where the general signal-amplitude contribution has to be considered in the analysis for an expected range of coupling parameters and depending on the specific atomic species and transition frequency.The figure highlights that even if the ratio Δ/ ε is small, it can be compensated due to the different order of magnitude of the transition frequency and the Compton frequency.For reference, we have highlighted the frequency ratios for singlephoton transitions (Ω/ 10 −11 for clock transitions in strontium) and for Raman-type schemes (Ω/ 10 −16 for hyperfine transitions in rubidium) by vertical lines.The bottom right corresponds to the standard case described in Sec.V B, while the upper left corresponds to the regime introduced in Sec.V A. The figure highlights the transition between both regimes, where the general expression has to be taken into account.Even if the ratio Δ/ ε is small, this difference can be compensated due to the different order of magnitude of the transition frequency and the Compton frequency. VI. BRAGG-TYPE INTERFEROMETERS (Δ𝜇 Finally, we turn to Bragg-type MZIs, where a two-photon process is used to only transfer momentum without changing the internal state.Therefore, we have Δ 0 = 0 = Δ, which directly implies that such interferometers are only susceptible to the mean coupling ε of the atom to the dilaton field.While there have been proposals for DM detectors that focus on the COM motion 7 and not on the internal structure of the atom, which effectively corresponds to Bragg-based setups, differential configurations have not yet been discussed in detail.Currently, in the context of very-long-baseline atom interferometry differential setups, which can in principle be used for DM detection, are envisioned to rely on Bragg diffraction, e. g., MAGIS-100, 2 MIGA, 10 or ZAIGA. 
4or Bragg-type interferometers only a few phase contributions remain in the differential setup cf.Table II in Appendix A, some of which have been also calculated before. 7These terms also arise for single-photon transitions, where, as previously discussed, the clock contribution dominates.Among these remaining phase contributions, it is not straightforward to determine which one is the dominant one in the signal amplitude.We therefore list all of them in Table V in Appendix A. With the help of this overview one can identify three different scales in the signal amplitude, independent of the violation parameters and interferometric factors.For each experiment it has to be checked individually which scale is the dominant one.However, the limiting case 0 = 0, which is important for spaceborne experiments or horizontal setups like the MIGA project, simplifies the signal amplitude to where the interferometric factor originates from Δ 2 1,1 , which is the only remaining contribution cf.Table V.Since all other terms vanish, these types of setups are less susceptible for ultralight DM searches compared to terrestrial setups, where gravity-induced contributions are present. In summary, we can identify different relevant scales in the various limiting cases by comparing Eqs. ( 20)-( 23).This comparison leads to ≫ Ω ≫ .We see that the singlephoton-type interferometers benefit from the Compton and atomic transition frequency, whereas Bragg-type interferometers in zero gravity only depend on the recoil frequency. VII. CONCLUSIONS In our article, we demonstrated that assuming a linear scaling of the dark-matter-induced change of the atomic transition frequency with the unperturbed transition frequency is only an approximation for vanishing differential coupling that is made in most discussions. 7,9,32Atom interferometers that rely on the change of internal states, e. g., single-photon or Raman transitions, are susceptible to both types of coupling, where the signal is dominated by clock-type phases. 25In these cases, the atomic transition and Compton frequency are relevant frequency scales weighted by the mean and differential coupling of both states to dark matter, respectively.However, Bragg-type atom interferometers that preserve the internal state are less susceptible to dark matter in the sense, that they only depend on the mean coupling and the recoil frequency.Therefore, they can only measure effects on the motion like accelerometers. 7,26ur results support the dark-matter search with single-photon atom interferometers in terrestrial setups since their signal profits from any dark-matter-related contributions.Nevertheless, a detailed noise analysis is necessary.Additionally, various mechanisms for atomic diffraction can be used to isolate different coupling parameters, e. g., the results from spaceborne Braggtype setups can give bounds on the mean coupling.These limits can be combined with the results from other setups to give bounds to both independent coupling parameters and make connections to the tests of the Einstein equivalence principle. In perspective, a generalization of our treatment to different geometries might be useful to boost dark-matter signatures in the signal, e. g., large-momentum-transfer techniques.In this context the effect of the modified Compton frequency can be studied in recoil measurements for dark-matter detection or Ramsey-Bordé-type interferometers. 
67Following current developments in gravitational-wave detection, various differential schemes can be considered and atom-interferometer-based networks 3,8 can be simultaneously used for dark-matter searches and the detection of gravitational waves.Finally, implications for quantum-clock interferometry 66,68 can be studied, i. e., propagating a superposition of internal states along each interferometer arm, to focus on the effect of the different degrees of freedom. TABLE II.Dilaton-field-induced perturbations in a two-level atom and their respective phase contributions in a Mach-Zehnder interferometer.Perturbations to the rest mass ( V ), to the kinetic energy ( Vkin ), and to the potential energy ( Vpot ) are divided into different expressions due to their physical origin: the mass defect Δ 0 , the mean-mass oscillation μDM (), the state-dependent-mass oscillation Δ DM (), the mean-mass EP violation γEP , the state-dependent EP violation Δ EP (), and the oscillation of gravity γDM ().Their explicit definitions are summarized Table I.Each of these perturbations introduces a contribution to the interferometer phase, that is calculated perturbatively. 63Here, we introduced dimensionless momenta ℘ 0 0 /(ℏ), and ℘ 0 + ( 0 − m0 0 + ℏ/2)/(ℏ), which depend on the initial momentum 0 the initialization time 0 of the interferometer, as well as the (effective) wave vector .For compact expressions, we use the time scales 1 ( 0 ), 2 ( 0 ), 3 ( 0 ), ( 0 ), and EP ( 0 ) introduced in Table III Time Scale Definition Non-vanishing and next sub-leading contributions Δ 2 , to the signal amplitude for single-photon Mach-Zehnder interferometers.The signal contributions are calculated by averaging the square of a differential signal in accordance with Eq. ( 17) and based on the phases provided in Table II.The dominant contribution, i. e., Δ 2 , is given in the main body of the article.The explicit expressions of these terms are split into a numerical factor, a scale, the violation parameters, and some interferometric factor that includes the interrogation-mode function. 58For different interferometer geometries, multiple loops, or large momentum transfer one expects other interferometric and numerical factors.Note that we have Δ 2 ,13 = Δ 17) and based on the phases provided in Table II.Their explicit expression is split into a numerical factor, a scale, the violation parameters, and some interferometric factor.For different interferometers one expects other interferometric and numerical factors.In contrast to single-photon transitions, no dominant scale can be identified. FIG. 1 . FIG.1.The atomic energy structure oscillates around its Standard-Model values due to the oscillating part of the dilaton field, as apparent from Eq. (3).The atomic-energy levels of the ground state (light green solid line), exited state (blue solid line), the atom's mean mass (green solid line), and energy difference between both states (red solid line), which corresponds to the mass defect, oscillate and are plotted against the phase .The relevant parameters are the mean coupling ε and differential coupling Δ of the internal states to the dilaton field, as well as the atom's mass defect Δ 0 = Δ 0 / m0 .Due to the oscillation of the internal energies, the mean mass as well as the atomic transition frequency also oscillate.For single-photon-like schemes the three relevant situations are shown from left to right: (i) The general case in which the mean and differential coupling are non-vanishing, i. 
e., ε ≠ 0 ≠ Δ.(ii) The internal states couple exactly opposite to the dilaton field, i. e., ε = 0.Only this case leads to a constant mean mass.(iii) Both internal states can couple identically to the dilaton field, i. e., Δ = 0.For Bragg-type schemes we have Δ 0 = 0 = Δ (right panel), as no internal transition is relevant.In this case we find only an oscillating mean mass that affects the motion of the atom. FIG. 2 . FIG.2.Space-time diagram of dark-matter detectors based on Mach-Zehnder gradiometers implemented via (a) single-photon transitions and (b) two-photon transitions (Bragg).The individual trajectories are shown in a freely falling frame for simplicity and are therefore solely influenced by the coupling of DM to the COM motion, which causes an (exaggerated) oscillation.Additionally, the lower panels show the oscillating energy scales, which also affects the rest-mass perturbation.Initially, two wave packets in their ground state |⟩ are separated by the distance > 0. The first beam-splitter pulse splits the wave packet located at ( 0 , 0 ) into two arms: (a) An upper arm in the excited state |⟩ with increased momentum ℏ trough single-photon absorption and a lower arm in which the atoms are still in the ground state.(b) Bragg-diffraction leaves the internal state unaffected but generates a superposition of two momenta through a two-photon process.After an additional time a mirror pulse interchanges the roles of both arms.Then, at the time 0 + 2 a second beam-splitter pulse interferes them.The wave packet located at 0 + undergoes the same process, but with a delay of = / due to the propagation time of light.While the oscillating energy scales influence the motion of the atom, for single-photon absorption the atomic transition frequency is additionally probed at different times, visualized through arrows in the term diagram by arrows.Due to the propagation delay, the transitions of the upper interferometer (orange) are shifted compared to the lower one (red). 1 FIG. 3 . FIG.3.Dominant contribution to the signal amplitude for the case considered in most treatments, i. e., with Δ = 0, compared to the general expression that also allows for a differential coupling.The figure shows the fraction Δ 2 , Δ=0 Δ 2 , with Δ/ ε > 0. For reference, we have highlighted the frequency ratios for singlephoton transitions (Ω/ 10 −11 for clock transitions in strontium) and for Raman-type schemes (Ω/ 10 −16 for hyperfine transitions in rubidium) by vertical lines.The bottom right corresponds to the standard case described in Sec.V B, while the upper left corresponds to the regime introduced in Sec.V A. The figure highlights the transition between both regimes, where the general expression has to be taken into account.Even if the ratio Δ/ ε is small, this difference can be compensated due to the different order of magnitude of the transition frequency and the Compton frequency. TABLE III . 63me scales relevant for the individual phase contributions listed in Table II that naturally arise in the perturbative calculation.63Theydepend on the initial time 0 and are obtained by averaging of the phase = + .For a compact notation we defined ⟨ℎ⟩
8,582
2023-09-18T00:00:00.000
[ "Physics" ]
Yang-Mills theory on the lattice : continuity and the disappearance of the deconfinement transition Fermion boundary conditions play a relevant role in revealing the confinement mechanism of N = 1 supersymmetric Yang-Mills theory with one compactified space-time dimension. A deconfinement phase transition occurs for a sufficiently small compactification radius, equivalent to a high temperature in the thermal theory where antiperiodic fermion boundary conditions are applied. Periodic fermion boundary conditions, on the other hand, are related to the Witten index and confinement is expected to persist independently of the length of the compactified dimension. We study this aspect with lattice Monte Carlo simulations for different values of the fermion mass parameter that breaks supersymmetry softly. We find a deconfined region that shrinks when the fermion mass is lowered. Deconfinement takes place between two confined regions at large and small compactification radii, that would correspond to low and high temperatures in the thermal theory. At the smallest fermion masses we find no indication of a deconfinement transition. These results are a first signal for the predicted continuity in the compactification of supersymmetric Yang-Mills theory. Introduction Gauge theories with adjoint fermions (adjQCD) have interesting thermodynamical properties and the study of their phase transitions provides a deeper understanding of strong interactions at finite temperature. The N = 1 supersymmetric Yang-Mills theory (SYM) is a special case among adjQCD theories with a different number of fermions. One main motivation to study this theory has been its role as gauge part of extensions of the standard model. The phase diagram of N = 1 SYM has been analysed at finite temperatures in a previous publication [1]. Supersymmetry is broken at non-zero temperature as a consequence of the different thermal statistics of fermions and bosons. In this contribution we focus our attention on the phase transitions of the compactified SYM with periodic fermion boundary conditions. Supersymmetry is preserved in this theory and is expected to have a considerable influence on the phase diagram. Confinement and fermion condensation are the two relevant phenomena of QCD-like theories regardless of whether the fermions are in the fundamental or adjoint representation. At low temperatures the theory is in a confined phase with colourless strongly bound particles and unbroken centre symmetry. Chiral symmetry is broken by a non-vanishing fermion condensate. At high temperatures there is a phase transition to a deconfined phase with spontaneously broken centre symmetry. The chiral condensate melts away leading to a restoration of chiral symmetry. However, the deconfinement transition is only a mild crossover in QCD and other theories with fermions in the fundamental representation, due to the explicit breaking of centre symmetry by the quark action. By contrast, the transition from the confined to the deconfined phase is a true phase transition in adjQCD models for any value of the fermion mass and in the massless limit chiral symmetry restoration defines a second one that can have a different critical temperature. JHEP12(2014)133 The picture changes completely when the boundary conditions of fermions are changed from thermal, i.e. antiperiodic, to periodic. 
The path integral of the compactified theory on R 3 × S 1 with periodic fermion boundary conditions (adjQCD R 3 ×S 1 ) corresponds to a twisted partition function instead of the usual thermal partition function Z = tr[e −H/T ]. For SYM this twisted partition function represents the Witten index [2] tr where the fermion number F is odd for a fermionic state and otherwise even. If the same periodic boundary conditions (PBC) are applied in a compactified theory for adjoint quark and gauge fields, then an interesting interplay exists between bosonic and fermionic degrees of freedom which avoids, in case of SYM, an explicit supersymmetry breaking in contrast to the thermal case. The fermionic contributions can cancel the confining potential of the gauge bosons leading to a restoration of centre symmetry. In SYM there is a cancellation to all orders in the perturbative expansion and a centre stabilisation by non-perturbative semi-classical contributions [3][4][5]. A complicated breaking pattern is obtained for general SU(N c ) gauge groups, where additional phases appear when only parts of the Z Nc centre symmetry are broken [6]. Such phases were also found in Yang-Mills theory extended by adjoint Polyakov loop terms, which are similar to the heavy quark limit of adjQCD R 3 ×S 1 [7]. There are different theoretical concepts related to adjQCD R 3 ×S 1 . The first of them is the Hosotani mechanism [8], the possibility that a partial breaking of the gauge symmetry in the compactified theory allows to interpret the gauge field of the compactified direction as a Higgs field in a lower dimensional theory. This gauge-Higgs unification plays an important role in extensions of the standard model with an extra dimension. A further motivation for the investigations of adjQCD R 3 ×S 1 is the large N c volume independence of gauge theories, known as Eguchi-Kawai reduction [9]. This reduction implies an equivalence between the full four-dimensional gauge theory and a simple single site matrix model in the large N c limit. However, volume independence is known to fail for pure Yang-Mills due to the spontaneous breaking of centre symmetry driven by the compactification [10,11]. Adding adjoint fermions to the model (adjoint Eguchi-Kawai models) can in principle resolve the centre symmetry breaking keeping the large N c volume independence intact [11,12]. The dependence of the ground state on the parameters of the theory can be determined from the effective potential. A perturbative loop expansions of the effective potential is characterised by powers of the coupling constant g 2 and a complete semi-classical expansion adds non-perturbative contributions, that typically come with exponentials of the coupling like e −1/g 2 . The one-loop approximation of the effective potential for pure Yang-Mills theory (YM) predicts the deconfined phase with spontaneously broken centre symmetry at high temperatures, and in QCD, with fermions in the fundamental representation, the explicit breaking of centre symmetry is reproduced at one-loop order. The applicability of semi-classical methods in QCD at lower temperatures and towards the deconfinement transition is limited. With intact supersymmetry there is an exact cancellation between JHEP12(2014)133 confined thermal b.c. deconfined thermal and period. b.c. deconfined Figure 1. The phase diagram of SYM according to the theoretical predictions [5]. In the theory with thermal, i.e. 
antiperiodic, fermion boundary conditions the critical deconfinement radius R is the inverse of the critical temperature T . The thermal theory has a larger critical deconfinement radius than the one with periodic fermion boundary. The dark shaded part indicates the deconfined region for both theories. fermionic and bosonic perturbative contributions in the loop expansion of the effective potential. The non-perturbative semi-classical effects are the dominant part of the effective potential [4]. Compactified SYM is thus an interesting theory for the investigation of semiclassical non-perturbative contributions. In this work we consider compactified SU(2) SYM on R 3 × S 1 with periodic (PSYM) and thermal 1 (TSYM) boundary conditions and investigate different aspects of the deconfinement transition. For the first time we perform lattice simulations of this theory that capture in principle all perturbative and non-perturbative contributions. In particular we are interested in the differences with respect to the thermal deconfinement transition that we have studied in our previous investigations. adjQCD R 3 ×S 1 was the subject of earlier investigations on the lattice in the context of the Hosotani mechanism [13,14]. Note also the related studies in [15]. Adjoint Eguchi-Kawai models reduced to a single lattice site or small volume were investigated in [16][17][18][19][20][21]. Recently a method for numerical simulations based on the semi-classical analysis was tested in [22,23]. Compactified supersymmetric Yang-Mills theory In a previous publication [1] we have analysed the thermal phase transitions of SU(2) SYM theory. We start with a brief review of these results. The Euclidean on-shell action in the continuum is where F µν is the field strength tensor and D µ the gauge covariant derivative with the structure constants f abc of the gauge group. JHEP12(2014)133 The fields λ represent Majorana fermions in the adjoint representation of the gauge group. There is only one Majorana fermion in SYM (N f = 1) and it is the supersymmetric partner of the gluon called gluino. The additional non-zero gluino mass term leads to a soft supersymmetry breaking. Full supersymmetry is recovered in the limit where the renormalised gluino mass vanishes. The theory is confined at low temperature, confirmed by the linear rise of the static quark-antiquark potential in lattice simulations. The bound state spectrum has been investigated in earlier studies of our collaboration [24,25]. Chiral U(1) R symmetry has a non-trivial breaking pattern in this theory. This symmetry is broken by an anomaly as in one flavor QCD, but a discrete Z 2Nc subgroup is left intact in theories with fermions in the adjoint representation. This remaining symmetry is spontaneously broken down to Z 2 by a non-vanishing expectation value of the gluino condensate. A deconfined phase with restored Z 2Nc chiral symmetry is expected at sufficient high temperatures. There are no simple theoretical connections between centre and chiral symmetry, therefore two phase transitions can occur independently at two different temperatures in SYM and other theories with adjoint fermions. Chiral and deconfinement phase transitions have been found to occur roughly at the same temperature in our previous lattice simulations of SU(2) SYM within our current precision, leaving the question on whether there exists a dynamical hidden link between them. 
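For reference, the Euclidean on-shell action referred to above, whose displayed formula did not survive extraction, has the standard form sketched below; normalisation conventions and the coefficient of the soft-breaking gluino mass term may differ slightly from the paper's own.

```latex
S = \int \mathrm{d}^4x \,\Big[
      \tfrac{1}{4} F^{a}_{\mu\nu} F^{a}_{\mu\nu}
    + \tfrac{1}{2} \bar{\lambda}^{a} \gamma_{\mu} (D_{\mu}\lambda)^{a}
    + \tfrac{m_{\tilde g}}{2}\, \bar{\lambda}^{a} \lambda^{a}
  \Big],
\qquad
(D_{\mu}\lambda)^{a} = \partial_{\mu}\lambda^{a} + g\, f_{abc}\, A^{b}_{\mu} \lambda^{c}.
```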
The present work is focused on understanding how the deconfinement phase transition is affected by the fermion boundary conditions. According to our investigations of thermal SYM on the lattice, the temperature of the deconfinement transition in TSYM is lower than in pure YM. These observations of the thermal transition are opposed to what we expect to find in our new simulations of PSYM (see figures 1 and 2): when the gluino mass decreases, the critical compactification radius R, which can be identified with an inverse temperature, decreases. The critical R is even expected to vanish in the supersymmetric limit (m → 0), and the deconfinement transition should completely disappear in PSYM without the soft supersymmetry breaking of the mass term. This supersymmetric limit is approached smoothly by the predicted transition line [5]. This implies that at very small R the theory is confined only for very small values of the gluino mass m. The reduced deconfined region in the phase diagram is induced by the adjoint fermions with periodic boundary conditions. Therefore a larger N_f is expected to increase the confined region even further; according to [5] the deconfinement transition completely disappears already at a finite m for N_f > 1. In this case the confined region at large R is connected to a confined region at small R. At very small R the theory is confined up to a large value of m that tends to infinity as R goes to zero. At infinite mass there is, of course, always the deconfined region of pure YM. The results of our lattice SYM simulations are summarised in figure 3 and represented in terms of the bare parameters β = 2N_c/g^2 and the hopping parameter κ, which is related to the bare gluino mass. The lattice spacing depends on the gauge coupling g; in particular, it is an exponentially decreasing function of β. At a fixed temporal extent N_τ of the lattice, a larger critical parameter β_c^dec is hence equivalent to a smaller critical compactification radius, or a larger critical temperature. The absence of the deconfinement transition is confirmed in the supersymmetric limit. At finite m the results are, however, in favour of the picture expected for larger values of N_f. Within the limited volume and mass range accessible in our simulations we find no evidence for a deconfinement transition below a certain value of the bare mass parameter, and a shrinking deconfined region at smaller values of R. Implications and limitations of these findings are discussed in the conclusions, section 5. After a short introduction of our methods, already applied in [1], we summarise our numerical results providing evidence for this scenario in the following sections. We have performed scans in the bare parameter space for many different β and fixed bare mass parameters κ. An important point in this analysis is the investigation of finite size effects. The theory at small R has an almost flat effective potential for the Polyakov loop, leading to large fluctuations and autocorrelations of this observable. The effect appears similar to the broadening induced by the tunnelling between the two Z_2 minima in the deconfined phase at smaller volumes. Only in a comparison of different volumes is it possible to discriminate the broad distribution in the confined region at small R from the broadening of the distribution by tunnelling due to finite volume effects in the deconfined region. A comparison of the Polyakov loop histograms for different volumes provides an estimate of the finite volume effects.
We have performed this study at certain points in the phase diagram, as sketched in figure 3.
Lattice simulations
In our simulations we have used a tree-level Symanzik improved gauge action, built from the plaquette P_{μν}(x) and the rectangle R_{μν}(x) of gauge links introduced as an improvement of the standard Wilson gauge action, together with Wilson-Dirac fermions. U_μ(x) and V_μ(x) denote the link variables in the fundamental and in the adjoint representation, respectively. The adjoint links V_μ(x) are related to the fundamental links U_μ(x) through the well-known formula quadratic in the fundamental links, where the generators in the fundamental representation τ^F_a are normalised such that tr(τ^F_a τ^F_b) = ½ δ_ab; a small illustration of this mapping is given at the end of this section. The Wilson-Dirac operator D_W acts on the gluino field λ in the standard way for Wilson fermions in the adjoint representation. On the lattice, chiral symmetry and supersymmetry are explicitly broken with this type of discretisation. The tuning of the bare gluino mass m is enough in SYM to recover both symmetries in the continuum limit [27]. The chiral limit can be reached by approaching the point where the adjoint pion mass, defined in a partially quenched setup [28], vanishes. From previous studies [29] the critical value of κ where the adjoint pion mass vanishes is known to be 0.20300(5) at β = 1.6. At β = 1.8 it goes down to 0.1909(1). There is a sign problem in the lattice discretised theory if the total number of Majorana fermions N_f is odd, as in the case of SYM. Fermions are integrated out to perform numerical simulations, and the result is the Pfaffian of the Wilson-Dirac operator. The modulus of the Pfaffian is the square root of the determinant, leaving an additional sign factor for odd N_f. The sign of the Pfaffian is positive in the continuum limit, but on the lattice configurations with negative sign can occur, and the probability that the sign changes during a Monte Carlo simulation increases at smaller gluino masses for a fixed lattice spacing. In the compactified theory with periodic boundary conditions sign changes are more likely compared to the theory with thermal boundary conditions. In most of our current investigations we avoid entering the region with a relevant number of sign changes by keeping the gluino mass far enough from its critical value. This is checked by a measurement of the Pfaffian signs on a subset of configurations for the runs with the most critical parameters, using the method introduced in [30]. Note that with periodic boundary conditions the problem becomes relevant already at κ ≈ 0.19 for β ≈ 1.7. As in our previous investigations [1], the simulations are done with the RHMC algorithm. Towards the supersymmetric limit (vanishing renormalised gluino mass), the cost of an RHMC trajectory increases drastically. This problem is common to all simulations with dynamical fermions and becomes even more significant with periodic boundary conditions. Therefore the limit of small gluino masses can only be reached at a high cost.
Numerical results for compactified supersymmetric Yang-Mills theory
In this section we provide numerical evidence for the following facts for the compactified SU(2) SYM theory on R^3 × S^1: as expected, there is no difference between PSYM and TSYM at small β, where both are in the low temperature T (or large radius R) confined phase. Moving towards the deconfinement transition line, we observe that the difference in the fermion boundary conditions becomes significant even at a rather large gluino mass.
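Before turning to the detailed results, here is the illustration promised above of how adjoint links can be built from fundamental SU(2) links. This is a minimal NumPy sketch, not the simulation code; the convention V_ab = 2 tr(U^dagger tau_a U tau_b) is one common choice and an assumption here.

import numpy as np

# Pauli matrices; fundamental generators tau_a = sigma_a / 2, so tr(tau_a tau_b) = delta_ab / 2
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
tau = sigma / 2.0

def adjoint_link(U):
    # Map a fundamental SU(2) link U to the 3x3 real adjoint link
    # via V[a, b] = 2 Re tr(U^dagger tau_a U tau_b).
    V = np.empty((3, 3))
    for a in range(3):
        for b in range(3):
            V[a, b] = 2.0 * np.trace(U.conj().T @ tau[a] @ U @ tau[b]).real
    return V

# Example: a random SU(2) element U = cos(t/2) 1 - i sin(t/2) n.sigma
omega = np.random.randn(3)
t = np.linalg.norm(omega)
n = omega / t
U = np.cos(t / 2) * np.eye(2) - 1j * np.sin(t / 2) * sum(n[i] * sigma[i] for i in range(3))
print(np.allclose(adjoint_link(U) @ adjoint_link(U).T, np.eye(3)))  # adjoint links are real orthogonal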
At large β and a wide range of the bare mass parameter κ we find a phase with unbroken centre symmetry, similar to the "re-confined phase" in [14]. At larger gluino masses there is a clear signal for spontaneously broken centre symmetry and a deconfined phase between the re-confined phase and the confined phase at small β. The two confined phases are connected: at lower gluino masses the signal for deconfinement vanishes. The deconfined phase close to the pure Yang-Mills limit shrinks towards larger values of β, leading to a sharp transition when κ is increased. The volume-averaged Polyakov loop P_L is an order parameter of the deconfinement transition. Note that SYM is the limit of supersymmetric QCD with infinitely heavy quarks. The expectation value of P_L is related to the free energy of these static quarks in the fundamental representation. The constraint effective potential of the Polyakov loop has either a minimum at zero, in the confined phase, or two degenerate minima representing the spontaneously broken Z_2 centre symmetry in the deconfined phase. The histogram of the Polyakov loop is another representation of the constraint effective potential. The distribution is either centred around a maximum at P_L = 0 in the confined phase, or around two symmetric non-zero peaks in the deconfined phase. There is a finite tunnelling rate in the deconfined phase between the two minima corresponding to these peaks, which is suppressed in the infinite volume limit. The broad distribution of the Polyakov loop induced by tunnelling is hard to distinguish from a signal for confinement. To identify the different phases it is necessary to compare the histograms of simulations at different volumes, in particular for the confined phase at large β, which is characterised by a rather broad distribution of the Polyakov loop. Due to this broad distribution it is expected that the signal for the transition point can only be identified at rather large volumes. In this work we study the expectation value of the modulus of the volume-averaged Polyakov loop, <|P_L|>, since it provides a clearer signal for the deconfinement at finite volumes.
The three different phases at large values of the gluino mass
We begin our investigations with a scan of the relevant range of β values at fixed κ and compare the behaviour of the order parameter P_L for the two boundary conditions. As expected, at low β, corresponding to a large R or low T, the theory is confined regardless of the boundary condition and the fermion mass. Consistent with our previous investigations, we find a decreasing β_c^dec for smaller bare mass parameters, i.e. larger κ, in TSYM. In PSYM the opposite behaviour is observed: the onset of the order parameter shifts towards larger β values as the bare mass is decreased (see figure 4). We observe that <|P_L|> reaches a maximum at intermediate β and decreases again at large β. This is a first indication of three different phases: a confined phase at large R connected to the low temperature phase of the thermal theory, an intermediate deconfined phase, and a second confined, or re-confined, phase at small R. In the re-confined phase <|P_L|> has a larger expectation value compared to the low temperature confined phase. This is, however, not a signal for a deconfined phase.
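A minimal sketch (illustrative only, with an assumed array layout, not the production code) of how the volume-averaged Polyakov loop can be measured on a stored SU(2) configuration:

import numpy as np

def polyakov_loop(U):
    # U: complex array of shape (Nt, V3, 4, 2, 2); U[t, x, mu] is the SU(2) link
    # at time slice t, spatial site x, direction mu, with mu = 0 the compact direction.
    # Returns the spatial average of (1/2) tr prod_t U_0(t, x).
    Nt, V3 = U.shape[0], U.shape[1]
    total = 0.0
    for x in range(V3):
        loop = np.eye(2, dtype=complex)
        for t in range(Nt):
            loop = loop @ U[t, x, 0]
        total += 0.5 * np.trace(loop).real
    return total / V3

# Collecting polyakov_loop(U) over many configurations gives the Monte Carlo time
# series from which the histograms of P_L (and of |P_L|) discussed here are built.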
In a deconfined phase the larger expectation value of the modulus of the Polyakov loop indicates a peak of the distribution of the order parameter at P_L ≠ 0, which corresponds to a minimum of the constraint effective potential at this point. On the other hand, a small rise of <|P_L|> can also be an effect of the modulus function induced by a broadening of the distribution of the order parameter, even though the histogram is peaked at zero. The minimum of the constraint effective potential remains in this case at P_L = 0, but its curvature at the minimum gets smaller. We expect a broadening of the distributions at large β due to the flat perturbative effective potential. Different phases can hence be clearly distinguished only from a detailed investigation of the shape of the histogram of the order parameter. The best way to distinguish a phase transition from a broadening of the distribution is the investigation of finite size effects. Comparing different volumes, we are able to distinguish the broad distribution generated by tunnelling between the two Z_2 symmetric minima of the constraint effective potential in the deconfined phase from a broad distribution peaked at zero in a confined phase. If the contributions close to zero are suppressed in the histograms at larger volumes, the theory is in the deconfined phase. The comparison of the histograms for κ = 0.16 and β = 1.8, 2.0, and 2.2 is shown in figure 5. These data show a deconfined phase at β = 1.8, a transition close to β = 2.0, and the second confined phase at β = 2.2. The suppressed tunnelling between the two Z_2 symmetric vacua for larger volumes at β = 1.8 is clearly visible. The second confined (re-confined) phase at β = 2.2 is indicated by a distribution peaked at the origin. At larger volumes the tendency towards one clear maximum at zero is even increased. Compared to the distribution in the confined phase at small β, the fluctuations in this second confined phase are large, leading to a rather broad distribution. The larger values of <|P_L|> in the confined phase at large β are hence not a signal for deconfinement; they only indicate this broad distribution. Different phases can be clearly separated by the peaks in the susceptibility of P_L, as shown in figure 6. We have found that the separation is only possible at rather large volumes. The first peak indicates the transition from the confined phase at small β to the deconfined phase, in correspondence to the thermal deconfinement transition. At large β there is a second peak separating the deconfined phase from the third phase with unbroken centre symmetry. The transition is characterised by large values of the susceptibility at the peak and in the confined region after the peak. The large susceptibility reflects the aforementioned broad distribution of the order parameter. With thermal boundary conditions the expectation value of the order parameter increases as the gluino mass gets smaller, whereas a significant decrease is observed in PSYM. This is the signal of the second confined (re-confined) phase at larger β values. At very heavy gluino masses the expectation value of the Polyakov loop tends to its pure YM limit, and there is always a deconfined region close to the κ = 0 axis. The boundary of that region can be identified by the steepest decrease of <|P_L|> as a function of κ for each β and also from the susceptibility (figure 8). Our results depicted in figure 7 show that the deconfined region shrinks and the transition gets sharper at larger β values.
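The susceptibility used to locate these peaks can be estimated directly from the Monte Carlo time series of the Polyakov loop; a minimal sketch follows (the normalisation by the spatial volume is one common convention and is assumed here):

import numpy as np

def polyakov_susceptibility(pl_series, spatial_volume):
    # chi = V3 * ( <|P_L|^2> - <|P_L|>^2 ), computed from a series of measurements
    p = np.abs(np.asarray(pl_series, dtype=float))
    return spatial_volume * (np.mean(p**2) - np.mean(p)**2)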
We therefore conjecture that the transition changes from first order at very large β towards a crossover at smaller β, still above the phase transition of pure YM. At smaller bare gluino masses, the remaining rise of <|P_L|> at intermediate β might still correspond to an intermediate deconfined phase, but it could also be due to a mere broadening of the distribution of the order parameter. A closer investigation of the histograms points towards the latter situation.
Indications for a connection between the two confined phases
The histograms of the data from simulations on a 4 × 8^3 lattice at κ = 0.188 and different β (see figure 9(b)) never show a two-peak structure. We take the point with the largest <|P_L|> as a reference for the finite volume analysis. The histogram of the order parameter for three different volumes is shown in figure 9(a). For larger volumes the distribution tends to sharpen around the peak at zero. There is hence no indication in our results for a transition to a deconfined phase at κ = 0.188. This also means that there is a connection between the low β and large β confined phases. The confined and "re-confined" phases are in fact one large confined region in the phase diagram.
Conclusions
We have shown the results of the first lattice simulations of compactified SU(2) SYM on R^3 × S^1 with a soft supersymmetry breaking gluino mass term and periodic fermion boundary conditions. In accordance with theoretical predictions, our results clearly point towards the absence of the deconfinement transition in the supersymmetric limit. Already at rather large gluino masses we have found no indications for the transition in the histograms of the order parameter down to a very small compactification radius. The deconfinement transition line is even more strongly influenced by the different fermion boundary conditions than suggested by the theoretical predictions [3], which assume a continuity (i.e. absence of deconfinement) only in the supersymmetric limit at zero fermion mass. In addition, an intermediate deconfined phase between two confined regions in the scans at a larger bare mass is not predicted for PSYM. These observations are more consistent with the theoretical predictions for theories with a larger number of Majorana fermions than with those for SYM. Especially the results obtained with a fixed bare coupling constant (figure 7) clearly confirm the difference between the periodic and antiperiodic fermion boundary conditions and also indicate the connection to the pure Yang-Mills limit, i.e. infinite gluino mass. Close to this limit, there is always a deconfined region for β larger than β_c^dec of YM, with a transition to the confined phase at a certain critical gluino mass. The deconfined region shrinks as β is increased. This fact also supports the scenario depicted in figure 2 rather than the one in figure 1 for the phase transitions in SYM on R^3 × S^1. It is important to note that the finite lattice spacing leads to a breaking of supersymmetry, which spoils the balance between fermionic and bosonic contributions. The breaking is induced by the Wilson mass in the Dirac operator and the violation of the Leibniz rule on the lattice [31]. The rather flat effective potential might be sensitive even to small perturbations by lattice artefacts. This might explain the observed difference between the measured and predicted transition lines. Therefore, an important next step is a detailed comparison of different N_τ, which corresponds to a study of the theory at finer lattice spacings.
We have also started investigations with a clover-improved version of the SYM lattice action. This might be relevant to reduce the lattice artefacts in the fermionic part of the action. Nevertheless one might expect that the lattice artefacts have a small impact on the general picture, in particular on the results at large β values. Besides the most important investigation of the dependence on the lattice spacing, further investigations are still required to confirm these results, and there are several aspects that we plan to consider in further, more demanding, numerical simulations. The scale should be set by measurements of mass ratios in order to change the axes of the phase diagram from bare parameters to renormalised quantities. The influence of the boundary condition on the chiral transition line should also be investigated. On large volumes the clear separation of the phases allows in principle an extrapolation of the transition lines. In this way the critical bare mass for the disappearance of the deconfinement transition can be estimated with a much better precision than in our current measurements. A first exploratory study of our collaboration [32] also considers the compactification of more than one space-time dimension, which can relate the results to the investigation of finite size effects [24].
6,584.2
0001-01-01T00:00:00.000
[ "Physics" ]
On convergence of general wavelet decompositions of nonstationary stochastic processes The paper investigates uniform convergence of wavelet expansions of Gaussian random processes. The convergence is obtained under simple general conditions on processes and wavelets which can be easily verified. Applications of the developed technique are shown for several classes of stochastic processes. In particular, the main theorem is adjusted to the fractional Brownian motion case. New results on the rate of convergence of the wavelet expansions in the space $C([0,T])$ are also presented. Introduction In the book [11] wavelet expansions of non-random functions bounded on R were studied in different spaces. However, developed deterministic methods may not be appropriate to investigate wavelet expansions of stochastic processes. For example, in the majority of cases, which are interesting from theoretical and practical application points of view, stochastic processes have almost surely unbounded sample paths on R. It indicates the necessity of elaborating special stochastic techniques. Recently, a considerable attention was given to the properties of the wavelet orthonormal series representation of random processes. More information on convergence of wavelet expansions of random processes in various spaces, references and numerous applications can be found in [3,7,14,15,16,17,18,21,20,24]. Most known stochastic results concern the mean-square or almost sure convergence, but for various practical applications one needs to require uniform convergence. To give an adequate description of the performance of wavelet approximations in both cases, for points where the processes are relatively smooth and points where spikes occur, we can use the uniform distance instead of global integral L p metrics. A more in depth discussion, further references and various applications in econometrics, simulations of stochastic processes and functional data analysis can be found in [6,9,10,20,23]. In his 2010 Szekeres Medal inauguration speech, an eminent leader in the field, Prof. P. Hall stated the development of uniform stochastic approximation methods as one of frontiers in modern functional data analysis. Figures 1 and 2 illustrate some features of wavelet expansions of stochastic processes. Figure 1 presents a simulated realization of the Wiener process and its wavelet reconstructions by two sums with different numbers of terms. The figure has been generated by the R package wmtsa [22]. Besides providing a realization of the Wiener process and its wavelet reconstructions, we also plot corresponding reconstruction errors. Figure 2 shows maximum absolute reconstruction errors for 100 simulated realizations. To reconstruct each realization of the Wiener process two approximation sums (as in Figure 1) were used. We clearly see that empirical probabilities of obtaining large reconstruction errors become smaller if the number of terms in the wavelet expansions increases. Although this effect is expected, it has to be established theoretically in a stringent way for different classes of stochastic processes and wavelet bases. It is also important to obtain theoretical estimations of the rate of convergence for such stochastic wavelet expansions. In this paper we make an attempt to derive general results on stochastic uniform convergence which are valid for wavelet expansions of wide classes of stochastic processes. 
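Figures 1 and 2 above were produced with the R package wmtsa; the following Python/PyWavelets sketch is an analogue of that experiment (the wavelet family, truncation depths and sample size are illustrative choices, not the ones used for the figures): simulate a Wiener path, truncate its wavelet expansion at two different depths, and record the maximum absolute reconstruction error.

import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 1024
w = np.cumsum(rng.normal(scale=np.sqrt(1.0 / n), size=n))  # simulated Wiener path on [0, 1]

def truncated_reconstruction(signal, wavelet="db4", keep_levels=3):
    # Decompose, keep the approximation plus the keep_levels coarsest detail
    # levels, zero the finer ones, and reconstruct.
    coeffs = pywt.wavedec(signal, wavelet)
    kept = [coeffs[0]] + [c if i <= keep_levels else np.zeros_like(c)
                          for i, c in enumerate(coeffs[1:], start=1)]
    return pywt.waverec(kept, wavelet)[: len(signal)]

for levels in (2, 5):  # two expansions with different numbers of terms
    rec = truncated_reconstruction(w, keep_levels=levels)
    print(levels, np.max(np.abs(w - rec)))  # maximum absolute reconstruction error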
The 2 paper deals with the most general class of such wavelet expansions in comparison with particular cases considered by different authors, see, for example, [1,3,8,13,20]. Applications of the main theorem to special cases of practical importance (stationary processes, fractional Brownian motion, etc.) are demonstrated. We also prove the exponential rate of convergence of the wavelet expansions. Throughout the paper, we impose minimal assumptions on the wavelet bases. The results are obtained under simple conditions which can be easily verified. The conditions are weaker than those in the former literature. These are novel results on stochastic uniform convergence of general finite wavelet expansions of nonstationary random processes. The specifications of established results are also new (for example, for the case of stationary stochastic processes, compare [15,16]). Finally, it should be mentioned that the analysis of the rate of convergence gives a constructive algorithm for determining the number of terms in the wavelet expansions to ensure the uniform approximation of stochastic processes with given accuracy. It provides a practical way to obtain explicit bounds on the sharpness of finite wavelet series approximations. The organization of the article is the following. In the second section we introduce the necessary background from wavelet theory and certain sufficient conditions for mean-square convergence of wavelet expansions in the space L 2 (Ω). In §3 we formulate and discuss the main theorem on uniform convergence in probability of the wavelet expansions of Gaussian random processes. The next section contains the proof of the main theorem. Two applications of the developed technique are shown in section 4. In §5 the main theorem is adjusted to the fractional Brownian motion case. Lastly, we obtain the rate of convergence of the wavelet expansions in the space C([0, T ]). In what follows we use the symbol C to denote constants which are not important for our discussion. Moreover, the same symbol C may be used for different constants appearing in the same proof. Wavelet representation of random processes Let φ(x), x ∈ R, be a function from the space L 2 (R) such that φ(0) = 0 and φ(y) is continuous at 0, where Suppose that the following assumption holds true: There exists a function m 0 (x) ∈ L 2 ([0, 2π]), such that m 0 (x) has the period 2π and φ(y) = m 0 (y/2) φ (y/2) (a.e.) In this case the function φ(x) is called the f -wavelet. Let ψ(x) be the inverse Fourier transform of the function Then the function where φ(x) and ψ(x) are defined as above. Remark 1. f -wavelets and m-wavelets are scaling functions. Proof. The wavelets φ and ψ admit the following representations, see [11], If k ≥ 0, then Similarly, for k ≤ 0 we get |h k | ≤ CΦ φ (|k|/3) . Thus, for all k ∈ Z, Note that the series in the right-hand side of (2) converges in the L 2 (R)-norm. Therefore, there exists a subsequence of partial sums which converges to ψ (x) a.e. on R. Thus, by (2) and (3) we obtain a.e. on R. If u := 2x − 1, then for u > 0 Notice also that Therefore, for u > 0 We are to prove that, and, similarly, Thus, Thus, we conclude that for u > 0 Since for u < 0 for every u ∈ R. By (4), (5), (6), and (7), The desired result follows from (8) if we chose Motivated by Lemma 1, we will use the following assumption instead of two separate assumptions S ′ (γ) for the f -wavelet φ and the m-wavelet ψ. 
If sample trajectories of this process are in the space L 2 (R) with probability one, then it is possible to obtain the representation (wavelet representation) where The majority of random processes does not possess the required property. For example, sample paths of stationary processes are not in the space L 2 (R) (a.s.). However, in many cases it is possible to construct a representation of type (9) for X(t). Consider the approximants of X(t) defined by where k n := (k ′ 0 , k 0 , ..., k n−1 ). Theorem 1 below guarantees the mean-square convergence of X n,kn (t) to X(t) if k ′ 0 → ∞, k j → ∞, j ∈ N 0 , and n → ∞. The latter means that we increase the number n of multiresolution analysis subspaces which are used to approximate X(t). For each multiresolution analysis subspace j = 0 ′ , 0, 1, 2... the number k j of its basis vectors, which are used in the approximation, increases too, as n tends to infinity. Thus, for each fixed k and j there is n 0 ∈ N 0 that the terms ξ 0k φ 0k (t) and η jk ψ jk (t) are included in all X n,kn (t) for n ≥ n 0 (i.e., each ξ 0k φ 0k (t) and η jk ψ jk (t) can be absent only in the finite number of X n,kn (t)). Uniform convergence of wavelet expansions for Gaussian random processes In this section we show that, under suitable conditions, the sequence X n,kn (t) converges in probability in Banach space More details on the general theory of random processes in the space C([0, T ]) can be found in [2]. where σ(h) is a function, which is monotone increasing in a neighborhood of the origin and where σ (−1) (u) is the inverse function of σ(u). If the random variables X n (t) converge in probability to the random variable X(t) for all t ∈ [0, T ], then X n (t) converges to X(t) in the space C([0, T ]). The following theorem is the main result of the paper. Theorem 3. Let a Gaussian process X(t), t ∈ R, its covariance function, the f -wavelet φ, and the corresponding m-wavelet ψ satisfy the assumptions of Theorem 1. Suppose that (i) assumption S(γ), γ ∈ (0, 1), holds true for φ and ψ; Then X n,kn (t) → X(t) uniformly in probability on each interval [0, T ] when n → ∞, k ′ 0 → ∞, and k j → ∞ for all j ∈ N 0 . Remark 4. If both wavelets φ and ψ have compact supports, then some assumptions of Theorem 3 are superfluous. In the following theorem we give an example by considering approximants of the form Theorem 4. Let X(t), t ∈ R, be a separable centered Gaussian random process such that its covariance function R(t, s) is continuous. Let the f -wavelet φ and the corresponding m-wavelet ψ be continuous functions with compact supports and the integrals R ln α (1 + |u|)| ψ(u)| du and R ln α (1 + |u|)| φ(u)| du converge for some α > 1/2(1 − γ), γ ∈ (0, 1). If there exist constants c j , j ∈ N 0 , such that E|η jk | 2 ≤ c j for all k ∈ Z, and assumption (13) is satisfied, then X n (t) → X(t) uniformly in probability on each interval [0, T ] when n → ∞. Proof. The assumptions of Theorem 1 and S(γ), 0 < γ < 1, are satisfied because φ and ψ have compact supports. Therefore, the desired result follows from Theorem 3. Proof of the main theorem To prove Theorem 3 we need some auxiliary results. Lemma 2. If δ(x) is a scaling function satisfying assumption S ′ (γ), then where Proof. The lemma is a simple generalization of a result from [11]. Since S γ (x) is a periodic function with period 1, it is sufficient to prove (14) for x ∈ [0, 1]. Notice, that for x ∈ [0, 1] and integer |k| ≥ 2 the inequality |x − k| ≥ |k|/2 holds true. Hence, Φ(|x − k|) ≤ Φ (|k|/2) and Lemma 3. 
Letδ(x) denote the Fourier transform of the scaling function δ(x), δ jk (x) := 2 then for all x, y ∈ R and k ∈ Z Note that for v = 0 the following inequality holds: By (15) and (16) we obtain The assertion of the lemma follows from this inequality. Lemma 4. If a scaling function δ(x) satisfies the assumptions of Lemmata 2 and 3, then for γ ∈ (0, 1) and α > 0 : 10 Proof. By lemma 3 for j ≥ 1 we obtain We now make use of the inequality |a + b| α ≤ q α (|a| α + |b| α ) , where By lemma 2 we get Inequality (17) follows from this estimate. The proof of inequality (18) is similar. Now we are ready to prove Theorem 3. Note that (11) implies We will only show how to handle S j . A similar approach can be used to deal with the remaining term S. Examples In this section we consider some examples of wavelets and stochastic processes which satisfy assumption (13) of Theorem 3. Example 1. Let ψ be a Lipschitz function of order Assume that for the covariance function R(t, s) Now we show that E|η jk | 2 ≤ c j for all k ∈ Z and find suitable upper bounds for c j . By Parseval's theorem, By properties of the m-wavelet ψ we have ψ(0) = 0. Therefore, using the Lipschitz conditions, we obtain This means that √ c j ≤ C/2 j/2(1+2κ) and assumption (13) holds. In the following example we consider the case of stationary stochastic processes. This case was studied in detail by us in [15]. Note that assumptions in the example are much simpler than those used in [15]. Example 2. Let X(t) be a centered short-memory stationary stochastic process and ψ be a Lipschitz function of order κ > 0. Assume that the covariance function R(t − s) := EX(t)X(s) satisfies the following condition R R(z) · |z| 2κ dz < ∞. By Parseval's theorem we deduce Thus, by the Lipschitz conditions, for all k, l ∈ Z : This means that √ c j ≤ C/2 j/2(1+2κ) and assumption (13) is satisfied. Application to fractional Brownian motion In this section we show how to adjust the main theorem to the fractional Brownian motion case. Let W α (t), t ∈ R, be a separable centered Gaussian random process such that W α (−t) = W α (t) and its covariance function is Lemma 5. If assumption S(γ), 0 < γ < 1, holds true and for some α > 0 then for the coefficients of the process W α (t), defined by (10), for all k, l ∈ Z. In some case, for example, for the fractional Brownian motion the assumption |Eξ 0k ξ 0l | ≤ b 0 of Theorem 3 doesn't hold true. The following theorem gives the uniform convergence of wavelet expansions without this assumption. Theorem 5. Let a random process X(t), t ∈ R, the f -wavelet φ, and the corresponding mwavelet ψ satisfy the assumptions of Theorem 1 and assumptions (i) and (ii) of Theorem 3. Suppose that there exist (iii') constants c j , j ∈ N 0 , such that E|η jk | 2 ≤ c j for all k ∈ Z and (13) holds true; (iv) some ε > 0 such that Then X n,kn (t) → X(t) uniformly in probability on each interval [0, T ] when n → ∞, k ′ 0 → ∞, and k j → ∞ for all j ∈ N 0 . Then assumption (iv) of Theorem 5 holds true for W α (t). Analogously to (22) and (23) we obtain To estimate |φ 0k (t) − φ 0k (s)|, we use the representations Repeatedly using integration by parts and the assumptions of the lemma, we obtain that for k = 0 : By inequalities (8) and (12) given in [15] we get where c α,T andc α,T are constants which do not depend on t, s and z. Applying these inequalities to (24) we obtain If k = 0, then Consequently, we can estimate S as follows Theorem 6. 
If the assumptions of Lemmata 5, 6, and assumptions (i) and (ii) of Theorem 3 are satisfied, then the wavelet expansions of the fractional Brownian motion uniformly converge to W α (t). Convergence rate in the space C[0, T ] Returning now to the general case introduced in Theorem 3, let us investigate what happens when the number of terms in the approximants (11) becomes large. First we specify an estimate for the supremum of Gaussian processes. where H(ε) is the metric entropy of the space where u > 8I(ε 0 ). Proof. Since is an even function. Then the assertion of the theorem follows from Now we formulate the main result of this section. Theorem 7. Let a separable Gaussian random process X(t), t ∈ [0, T ], the f -wavelet φ, and the corresponding m-wavelet ψ satisfy the assumptions of Theorem 3. Then where u > 8δ(ε kn ) and the decreasing sequence ε kn is defined by (29) in the proof of the theorem. Proof. Let us verify that Y(t) := X(t) − X n,kn (t) satisfies (26) with σ(ε) given by (27). First, we observe that We will only show how to handle S ′ j . A similar approach can be used to deal with the remaining terms S ′ and R ′ j . By Lemmata 2 and 3 we get where Similarly to (28) by lemma 8 we obtain Remark 8. In the theorem we only require that k ′ 0 , and all k j , j ≥ 0, approach infinity. If we narrow our general class of wavelet expansions X n,kn (t) by specifying rates of growth of the sequences k n we can enlarge classes of wavelets bases and random processes in the theorem and obtain explicit rates of convergence by specifying ε kn . For instance, consider the examples in Section 5. It was shown that √ c j ≤ C/2 j/2(1+2κ) . Remark 9. Lemma 8 and formula (29) provide simple expressions to computer ε kn and δ(ε kn ). It allows specifying Theorem 7 for various stochastic processes and wavelets.
4,020.8
2013-07-25T00:00:00.000
[ "Mathematics" ]
PredCRG: A computational method for recognition of plant circadian genes by employing support vector machine with Laplace kernel Background Circadian rhythms regulate several physiological and developmental processes of plants. Hence, the identification of genes with the underlying circadian rhythmic features is pivotal. Though computational methods have been developed for the identification of circadian genes, all these methods are based on gene expression datasets. In other words, we could not find any sequence-based model, and that motivated us to develop the present computational method to identify the proteins encoded by the circadian genes. Results Support vector machine (SVM) with seven kernels, i.e., linear, polynomial, radial, sigmoid, hyperbolic, Bessel and Laplace, was utilized for prediction by employing compositional, transitional and physico-chemical features. A higher accuracy of 62.48% was achieved with the Laplace kernel, following the fivefold cross-validation approach. The developed model further secured 62.96% accuracy with an independent dataset. The SVM also outperformed other state-of-the-art machine learning algorithms, i.e., Random Forest, Bagging, AdaBoost, XGBoost and LASSO. We also performed proteome-wide identification of circadian proteins in two cereal crops, namely Oryza sativa and Sorghum bicolor, followed by the functional annotation of the predicted circadian proteins with Gene Ontology (GO) terms. Conclusions To the best of our knowledge, this is the first computational method to identify the circadian genes with sequence data. Based on the proposed method, we have developed an R-package PredCRG (https://cran.r-project.org/web/packages/PredCRG/index.html) for the scientific community for proteome-wide identification of circadian genes. The present study supplements the existing computational methods as well as wet-lab experiments for the recognition of circadian genes. Supplementary Information The online version contains supplementary material available at 10.1186/s13007-021-00744-3. Background Rhythms of biological activity with a periodicity of 24 h are called circadian rhythms (CR) and are generated endogenously [1,2]. There are molecular components with the underlying rhythmic features defining the circadian clock (CC). The three-component (input, output and oscillator) model of the CC is the widely adopted one [3]. In this model, the input connects the environmental cues to the core component, the oscillator, and the output links the functions of the oscillator with different biological processes [4]. So far, the CR has been extensively investigated in Arabidopsis thaliana, and the same clock mechanism has been extended to several dicot [5][6][7][8] and monocot [9,10] plants as well. The roles of CR in regulating different metabolic pathways, including carbon fixation and the allocation of starch and sugar in leaf tissues, have been reported in earlier studies [11,12]. Anticipation by plants of environmental fluctuations (on a daily basis) is facilitated by the CC [13], where the daily timing of biological processes is organized to specific times of the day and night [11,14,15] to increase performance and reproductive fitness [16][17][18].
Including contribution to the agronomic traits of crops [19,20], correct circadian regulations have been reported to enhance biomass accumulation, seed viability and photosynthesis [21,22]. The roles of the circadian system in regulating plant response to different biotic and abiotic stresses have also been well studied [23,24]. Plant growth and development related metabolisms are also regulated by CC, where it affects the quality and productivity of crops by bringing changes in the metabolites [25,26]. The CC comprises several genes that form the transcriptional-translational feedback loops, resulting in rhythmic expression [11,27]. The CC genes are reportedly involved in hormonal signaling [28,29], growth and development of plant species [30,31]. As reported in earlier studies [32,33], crop productivity can be enhanced by manipulating the CC, particularly through circadian up-regulation of photosynthetic carbon assimilation. A plethora of computational methods such as COS-OPT [34], Fisher's G-test [35], HAYSTACK [36], JTK-CYCLE [37], ARSER [38] and LSPR [39] have been developed for the identification of potential circadian genes using the gene expression data. A supervised learning approach ZeitZeiger [40] has also been developed for the identification of clock-associated genes from genome-wide gene expression data. In this study, we made an attempt to discriminate protein sequences associated with the circadian rhythms from the proteins that are not involved in the circadian clock. The motivations behind the present study are that (i) the existing computational methods use the genome-wide gene expression data for identifying the genes associated with the CC, (ii) identification of the circadian genes through wet-lab experiments require more time and resource, and (iii) no computational method based on the sequence (protein) data is available. In this study, we have employed the support vector machine with the Laplace kernel for discriminating circadian genes (CRGs) from non-CRGs by using the sequence dataset. We have also developed an R-package for easy prediction of CRGs by using the proteome-wide sequence data. This package is unique and we anticipate that our computational model will supplement the existing efforts for the identification of circadian genes in plants. Collection of protein sequences The protein sequences encoded by the experimentally validated oscillatory genes were collected from the Circadian Gene Database (CGDB) [41]. In this comprehensive database, about 73,000 genes encompassing 68 animals, 39 plants and 41 fungal species were available. A total of 12,041 protein sequences were retrieved from 9 plant species, i.e., A. thaliana (6981), Glycine max (4810), O. sativa (110), Zea mays (72), Hodeum vulgare (22), Arabidopsis lyrata (21), Physcomitrella patens (10), Solanum tuberosum (10) and Triticum aestivum (5). The 12,041 sequences were used to build the positive dataset. Further, 22,586 reviewed protein sequences of Viridi plantae collected from the UniProt (https:// www. unipr ot. org) were used to construct the negative dataset. The positive dataset thus comprised the protein sequences encoded by the circadian genes (CRG) and the negative dataset comprised the protein sequences encoded by other than the circadian genes (non-CRG). The positive and negative datasets were also referred to as CRG and non-CRG classes, respectively. 
Processing of positive and negative datasets The CD-HIT program [42] was employed to remove the sequences that were > 40% identical to any other sequences. In order to avoid the homologous bias in the prediction accuracy, both positive and negative datasets were subjected to homology reduction. After removing the redundant sequences, 8211 and 6371 sequences were obtained for the negative and positive datasets, respectively. The sequences with residues B, J, O, U, X and Z were also excluded to avoid ambiguity for generating numeric features because these six letters do not stand for any of the amino acids that function as the building blocks of proteins. After removing such sequences, 8202 negative and 6370 positive sequences were retained for the analysis. It was also noticed that the lengths of the sequences in the positive dataset were highly heterogeneous (39-4218 residues). Thus, the positive dataset was divided into four homogeneous subsets (P1, P2, P3 and P4) based on quartile values of the sequence length in order to improve the prediction accuracy, where 39 ≤ P1 < 221 , 221 ≤ P2 < 363 , 363 ≤ P3 < 538 and 538 ≤ P4 < 1001( Table 1). Since the sequences with > 1000 amino acids were detected as outliers (Fig. 1a), using such sequences may generate noisy feature vectors. Hence, the sequences with > 1000 residues were further excluded from the analysis. Similar to the positive set, four subsets (N1, N2, N3 and N4) were created from the negative dataset, where 43 ≤ N 1 < 407 , 407 ≤ N 2 < 485 , 485 ≤ N 3 < 607 and 607 ≤ N 4 < 1001 (Table 1). In this way, we prepared four homogeneous sub-datasets, i.e., Q1 (P1, N1), Q2 (P2, N2), Q3 (P3, N3) and Q4 (P4, N4) instead of a single heterogeneous dataset ( Table 2). Generation of numeric features For each protein sequence, we generated amino acid composition (AAC), ProtFP features [43], FASGAI features [44], Cruciani properties [45], transitional properties [46,47] and other physico-chemical properties (hydrophobicity, instability index, molecular weight and iso-electric point). The AAC is one of the popular features of protein sequences [48][49][50][51] which comprises a 20-dimensional numeric vector of amino acid frequencies. Given its simplicity and computational ease, the AAC is a well-performing feature set in terms of accuracy [51]. The ProtFP descriptor comprises the first 8 principal components obtained from the principal component analysis of 58 AAindex [52] properties of 20 amino acids. Based on the ProtFP features, each sequence was transformed into an 8-dimensional numeric feature vector. The FASGAI is a set of 6 numeric descriptors that represent 6 different properties of protein sequences, i.e., bulky properties, hydrophobicity, compositional characteristics, alpha and turn propensities, electronic properties and local flexibility. The Cruciani properties comprise 3 descriptors (polarity, hydrophobicity and H-bonding) that are based on the interaction of amino acids with different chemical groups. The transitional features represent the frequencies of amino acid residues of one type followed by residues of other types. Pertaining to transitional features, three types of residues for hydrophobicity (polar, neutral and hydrophobic), three types of residues corresponding to secondary structure (strand, helix and coil) and two types of residues for solvent accessibility (exposed and buried) were utilized. By using 8 types of residues, a total of 21 transitional descriptors were generated for each protein sequence. 
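As a concrete illustration of the simplest of the feature sets described above, here is a short Python sketch (the authors' pipeline used R packages; this analogue computes the 20-dimensional amino acid composition plus one physico-chemical descriptor, using the Kyte-Doolittle hydropathy scale as an example of a hydrophobicity feature):

from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
# Kyte-Doolittle hydropathy values for the 20 standard amino acids
KD = {"A": 1.8, "C": 2.5, "D": -3.5, "E": -3.5, "F": 2.8, "G": -0.4, "H": -3.2,
      "I": 4.5, "K": -3.9, "L": 3.8, "M": 1.9, "N": -3.5, "P": -1.6, "Q": -3.5,
      "R": -4.5, "S": -0.8, "T": -0.7, "V": 4.2, "W": -0.9, "Y": -1.3}

def aac(seq):
    # 20-dimensional amino acid composition (relative frequencies)
    counts = Counter(seq)
    return [counts.get(a, 0) / len(seq) for a in AMINO_ACIDS]

def mean_hydropathy(seq):
    # Average hydropathy, a simple physico-chemical descriptor
    return sum(KD[a] for a in seq) / len(seq)

features = aac("MKTAYIAKQR") + [mean_hydropathy("MKTAYIAKQR")]  # 21 numeric features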
After combining all the feature sets, a total of 62 numeric features were obtained. A brief description of these features and the R-packages used to generate them is provided in Additional file 1: Table S1. Prediction with support vector machine Support vector machines (SVM) [53] have been widely and successfully employed in the field of bioinformatics [54][55][56][57][58][59][60], and hence we have utilized the SVM for prediction in the present study. A binary SVM classifier was employed for the classification of CRG and non-CRG proteins. Let x_i be the 62-dimensional numeric feature vector for the i-th protein sequence, where i = 1, 2, ..., N. Further, N_1 and N_2 are the respective numbers of protein sequences for the CRG and non-CRG classes, such that N = N_1 + N_2. Also, let us denote y_i as the class label for x_i, where y_i ∈ {-1, 1}, with 1 and -1 as the class labels for the CRG and non-CRG classes, respectively. The decision function of the binary SVM classifier for a new observation vector x can be formulated as f(x) = sign(Σ_{i=1}^N α_i y_i K(x_i, x) + b). The values of α_i can be obtained by solving a convex quadratic programming problem subject to the constraints 0 ≤ α_i ≤ C and Σ_{i=1}^N α_i y_i = 0. Here, C is the regularization parameter that controls the trade-off between margin and misclassification error, and b is the bias term. Choosing an appropriate kernel function in SVM is important because the kernel function maps the input dataset to a high-dimensional feature space where the observations of different classes are linearly separable. In this study, 7 different kernel functions K(x_i, x_j) were utilized (Table 3). The performances of the kernels were first evaluated with the default parameters (Additional file 2: Table S2) by using a sample dataset. Then, the kernel functions with higher accuracies were chosen for the subsequent analysis.
Fig. 1 (a) Box plot of the sequence lengths of the positive dataset, where it can be seen that sequences with more than 1000 amino acids are outlying observations; thus, the maximum sequence length considered is 1000 amino acids. (b) Overall accuracy for the four homogeneous sub-datasets and the heterogeneous full dataset; accuracies are higher for the sub-datasets with homogeneous sequence lengths than for the dataset with highly heterogeneous sequence lengths. (c) Performance metrics for seven different kernel functions with respect to classification of circadian and non-circadian proteins using the support vector machine; among all the kernel functions, the Laplace, linear and radial kernels are superior with regard to overall classification accuracy.
Cross-validation approach In the present study, we employed fivefold cross-validation to control the bias-variance trade-off [61] and assess the performance of the SVM classification models. To perform the fivefold cross-validation, observations of the CRG and non-CRG classes were randomly partitioned into 5 equal-sized subsets each. In each fold of the cross-validation, one randomly selected subset from each of the CRG and non-CRG classes was used as the test set, and the remaining four subsets of the CRG and non-CRG classes together were used as the training set. The classification was repeated five times with different training and test sets in each fold. The accuracy was computed by taking an average over all the five test sets.
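A minimal sketch of this classification step, assuming scikit-learn in place of the R tooling actually used by the authors; the Laplace kernel K(x, z) = exp(-γ ||x - z||_1) is supplied as a precomputed Gram matrix via sklearn's laplacian_kernel, and the values of γ and C below are placeholders:

import numpy as np
from sklearn.metrics.pairwise import laplacian_kernel
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def laplace_svm_cv(X, y, gamma=0.05, C=1.0, n_splits=5, seed=1):
    # Fivefold cross-validated accuracy of an SVM with a precomputed Laplace kernel
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    accuracies = []
    for train, test in skf.split(X, y):
        K_train = laplacian_kernel(X[train], X[train], gamma=gamma)
        K_test = laplacian_kernel(X[test], X[train], gamma=gamma)
        clf = SVC(C=C, kernel="precomputed").fit(K_train, y[train])
        accuracies.append(clf.score(K_test, y[test]))
    return float(np.mean(accuracies))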
Prediction with balanced dataset In all the four sub-datasets (Q1, Q2, Q3, Q4), the size of the negative set was higher than that of the positive set (Table 2). By using such an imbalanced dataset, the SVM classifier may produce accuracy biased towards the class having a larger number of instances. Thus, a balanced dataset was preferred for prediction using the SVM classifier. The balanced dataset was prepared by taking all the instances of the positive class and an equal number of instances from the negative class. For instance, the balanced dataset for Q1 contained 1588 positive and 1588 randomly drawn negative (from 2045) instances. Further, using only one random negative set means the remaining negative instances are left out of the evaluation. To overcome this problem, the classification experiment was repeated 10 times with a different (randomly drawn) negative set each time, along with the same positive set. So, the class imbalance was handled by following the repeated cross-validation procedure, without training the SVM model on unbalanced data. Performance metrics were measured by following the fivefold cross-validation technique, and the final metrics were obtained by taking an average over all the 10 experiments. Using predicted class as a feature The labels of each instance were represented as -1 and 1 for the CRG and non-CRG classes, respectively. The predicted labels of the instances obtained after classification were considered as a numeric feature and added to the existing feature set. Then, the prediction using the same dataset (with different training and test sets) was performed again by using the new feature set. This process was repeated 50 times and the accuracy was analyzed after adding the new feature each time. The idea of using the predicted label as a numeric feature was implemented to achieve higher classification accuracy. Performance metrics The true positive rate (TPR or sensitivity), true negative rate (TNR or specificity), accuracy, positive predictive value (PPV or precision), area under the receiver operating characteristic curve (auROC) and area under the precision-recall curve (auPRC) were computed to evaluate the performance of the classifier. The TPR, TNR, accuracy and PPV are defined as TPR = TP/(TP + FN), TNR = TN/(TN + FP), accuracy = (TP + TN)/(TP + TN + FP + FN) and PPV = TP/(TP + FP). The TP and TN are the numbers of correctly classified instances of the CRG and non-CRG classes, respectively. The FN and FP are the numbers of misclassified instances of the CRG and non-CRG classes, respectively.
Table 3 List of kernel functions and their mathematical expressions; γ, d, r and order are kernel parameters and < , > denotes the inner product.
The ROC curve was obtained by taking the sensitivity on the y-axis and 1-specificity on the x-axis, whereas the PR curve was plotted by taking the precision and recall (sensitivity) on the x- and y-axes, respectively. Prediction analysis with different sequence length categories Prediction was performed with the full dataset and the sub-datasets, where 50% randomly drawn observations from both the CRG and non-CRG classes were utilized. For comparing the accuracy between the full dataset (diverse sequence lengths) and the sub-datasets (homogeneous sequence lengths), prediction was done only with the RBF kernel, because the trend in accuracy between the homogeneous and full datasets was expected to remain the same with the other kernels as well. The accuracies were observed to be higher (~4-6%) for the homogeneous sub-datasets (Q1, Q2, Q3, Q4) as compared to the heterogeneous full dataset (Fig. 1b). Thus, the four sub-datasets (i.e., Q1, Q2, Q3 and Q4) were used hereafter instead of the full dataset.
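A sketch of the repeated balanced sub-sampling scheme described at the beginning of this section, assuming NumPy arrays and reusing the cross-validation helper sketched earlier; the number of repeats and the random seed are placeholders:

import numpy as np

def repeated_balanced_evaluation(X_pos, X_neg, evaluate, n_repeats=10, seed=7):
    # Keep all positive instances, draw an equally sized random negative subset,
    # evaluate (e.g. with fivefold cross-validation), repeat with fresh negative
    # draws, and average the resulting metric over all repeats.
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_repeats):
        idx = rng.choice(len(X_neg), size=len(X_pos), replace=False)
        X = np.vstack([X_pos, X_neg[idx]])
        y = np.concatenate([np.ones(len(X_pos)), -np.ones(len(X_pos))])
        scores.append(evaluate(X, y))  # e.g. laplace_svm_cv from the sketch above
    return float(np.mean(scores))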
Prediction analysis with different kernel functions Performance of the kernel functions were compared by using a random sample of 50% observations. The sensitivity and specificity were respectively higher with the Laplace and linear kernels for the sub-datasets Q2, Q3 and Q4 (Fig. 1c). For sub-dataset Q1, sensitivity and specificity were higher with the RBF and polynomial kernels, respectively (Fig. 1c). The linear and Laplace kernels achieved similar accuracy for Q2 and Q3 sub-datasets, whereas the linear kernel achieved a little higher accuracy than the Laplace for Q1 and Q4 (Fig. 1c). Thus, no single kernel was found to perform better for each sub-dataset. It was also observed that the performance accuracies were higher for Q2 and Q3 (~ 65%) than that of Q1 and Q4 (~ 60%). Further, the Bessel kernel function achieved the lowest (~ 50%), followed by the hyperbolic kernel (Fig. 1c). As the Laplace, linear and RBF kernels achieved higher accuracies as compared to the other kernel functions, these three kernels were chosen for the subsequent prediction analysis. The mathematical representations of the Laplace and RBF functions are similar except for the distance between the feature vectors which is expressed in squared term for the RBF and in linear term for the Laplace. This may be the reason the variability captured by the Laplace kernel could be higher than that of RBF kernel, resulting in higher classification accuracy with the Laplace kernel. Further, the polynomial, hyperbolic and sigmoid kernels are the transformation of the linear kernel with additional parameters. So, the variability with respect to the discrimination of the CRG and non-CRG classes couldn't be captured well by these kernels. This may be one of the possible reasons that the linear kernel achieved higher accuracy as compared to the other three kernels. Prediction analysis with iteratively added features Either a little or no improvement in accuracies were observed with the Laplace and linear kernels, even after adding 50 predicted label features (results not provided). On the other hand, 2-4% improvement in accuracies was observed with the RBF kernel after including the additional features. Specifically, accuracies in Q1, Q2, Q3 and Q4 reached plateau after addition of 26, 25, 20 and 45 features, respectively (Fig. 2a). The probable reason for not improvement in accuracy for the linear and Laplace kernels may be the variability introduced in the dataset with the inclusion of features (only -1 s and 1 s) was not captured well by these two kernels. On the other hand, the non-linear RFB kernel could have captured that variability which contributed towards the discrimination of both the classes. Nevertheless, accuracies of the linear and Laplace without iterated features and RBF with iterated features were found to be similar. Thus, we employed these three kernels for the subsequent prediction analysis. The linear kernel achieved higher accuracy and precision for Q1, whereas the aucPR, aucROC and specificity were higher with the Laplace kernel. For Q2, the specificity, accuracy, precision and aucPR were higher with the Laplace kernel, whereas the linear kernel achieved higher accuracy in terms of sensitivity and aucROC. In Q3, the specificity, precision and aucPR were higher with the linear kernel, whereas the sensitivity, accuracy and aucROC were higher with the Laplace kernel. 
For Q4, though RBF secured higher accuracy in terms of specificity, accuracy, precision and aucPR, the Laplace kernel achieved higher accuracy in terms of sensitivity and aucROC than that of RBF. Thus, no kernel was found to be an obvious choice with regard to higher prediction accuracy. Therefore, we employed a multiple criteria decision making (MCDM) approach to determine the best kernel function, which is explained in the next section.
Fig. 2 (a) Classification accuracy with respect to classification of circadian and non-circadian proteins by using the support vector machine with the radial (RBF) kernel with addition of iteratively generated features; the accuracies improve with the addition of iteratively generated features in all four sub-datasets. (b) ROC and PR curves with regard to the classification of circadian and non-circadian proteins by using the support vector machine with the linear, Laplace and RBF kernels.
TOPSIS analysis The MCDM method TOPSIS [62], with different performance metrics as the multiple criteria, was used to determine the best kernel (in terms of accuracy). The TOPSIS scores were higher with the Laplace kernel for Q1 (61.12) and Q3 (58.11), whereas the linear and RBF kernels achieved higher scores for the Q2 (67.50) and Q4 (57.91) sub-datasets, respectively (Table 5). Overall, the highest score (73.20) was achieved by the Laplace kernel as compared to the linear (70.09) and RBF (23.77) kernel functions (Table 5). Thus, the Laplace kernel function was chosen as the best kernel function and utilized for the subsequent analysis. Prediction with the independent test dataset The SVM with the Laplace kernel was used for the prediction of the independent dataset. The independent dataset was built with the circadian clock-associated sequences collected from existing studies. We collected 30 sequences from [63], 27 sequences from [64], 13 sequences from [33] and 26 sequences from [65]. Out of these 96 sequences (30 + 27 + 13 + 26), some sequences were not found in NCBI (while searching with the gene ID) and some others were found to be present in the training (positive) dataset. After excluding such sequences, the remaining 54 circadian protein sequences were used as an independent dataset. Prediction for the independent dataset was made by using the models trained with the Q1, Q2, Q3 and Q4 sub-datasets. Out of 54 sequences, 34 sequences were correctly predicted as circadian proteins and 20 sequences were wrongly predicted as non-circadian proteins. In other words, an accuracy of 62.96% was obtained with the independent dataset, which was similar to the fivefold cross-validation accuracy with the Laplace kernel, i.e., 62.48% ((61.04 + 64.20 + 65.69 + 59.01)/4). Thus, it may be said that the prediction accuracy was neither overestimated nor underestimated. Comparative analysis with other machine learning algorithms The performance of SVM with the Laplace kernel (proposed approach) was further compared with that of other state-of-the-art machine learning algorithms, i.e., Random Forest (RF) [66], Bagging [67], Adaptive Boosting (AdaBoost) [68], eXtreme Gradient Boosting (XGBoost) [69] and the L1-penalized logistic regression LASSO [70]. The RF, Bagging, AdaBoost, XGBoost and LASSO were implemented by using the R-packages randomForest [71], ipred [72], adabag [73], xgboost [74] and glmnet [75], respectively. All the predictions were made with default parameters (Additional file 3: Table S3) and the performance metrics were measured by following fivefold cross-validation.
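Returning briefly to the TOPSIS ranking described above, a compact sketch of the scoring (equal criterion weights and all-benefit criteria are assumptions here; the authors' exact weighting is not stated in this excerpt):

import numpy as np

def topsis_scores(decision_matrix, weights=None):
    # Rows are alternatives (e.g. Laplace, linear, RBF kernels); columns are
    # benefit criteria (e.g. sensitivity, specificity, accuracy, precision,
    # auROC, auPRC). Returns the relative closeness to the ideal solution.
    M = np.asarray(decision_matrix, dtype=float)
    if weights is None:
        weights = np.full(M.shape[1], 1.0 / M.shape[1])
    V = weights * M / np.linalg.norm(M, axis=0)      # vector-normalise, then weight
    ideal, anti_ideal = V.max(axis=0), V.min(axis=0)
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti_ideal, axis=1)
    return d_minus / (d_plus + d_minus)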
In terms of sensitivity, specificity, accuracy and precision, the performance of LASSO and of the proposed approach was observed to be higher than that of the other four algorithms (Fig. 3). RF achieved higher aucROC for Q1 (55.08%), Q2 (52.69%) and Q3 (52.23%), whereas XGBoost did so for the Q4 (50.36%) sub-dataset (Fig. 3). The proposed approach achieved higher aucPR for Q1 (53.01%) and Q2 (50.13%), whereas XGBoost and AdaBoost did so for Q3 (50.67%) and Q4 (60.66%), respectively. Between LASSO and the proposed approach, higher specificities were achieved by LASSO (Q2: 65.45%, Q3: 64.46%, Q4: 57.43%) than by the proposed approach (Q2: 64.32%, Q3: 60.81%, Q4: 53.11%). On the other hand, higher sensitivities were observed for the proposed approach (Q2: 64.07%, Q3: 70.56%, Q4: 64.91%) than for LASSO (Q2: 63.26%, Q3: 66.6%, Q4: 60.14%). However, the accuracy and precision of the proposed approach and LASSO were found to be similar (Fig. 3). Thus, LASSO and the proposed approach achieve similar accuracy and perform better than the other algorithms considered.
(Table 4 Classification accuracy of the support vector machine with three different kernels with default parameters. Classification was made with each sub-dataset and performance metrics were computed following repeated cross-validation, where the experiment was repeated 100 times. In terms of accuracy, performance is higher for the Laplace kernel for the Q2 and Q3 sub-datasets, whereas the linear and RBF kernels performed better in Q1 and Q4, respectively. Performance metrics are higher for the Q2 and Q3 sub-datasets than for Q1 and Q4. The accuracies are more stable for the RBF kernel, barring a few exceptions.)
Proteome-wide identification and functional annotation
The developed computational model was further employed for proteome-wide identification of proteins associated with the CR (CR-proteins). We collected the proteome-wide sequence datasets of two crop species, i.e., rice (proteome id: UP000059680) and sorghum (proteome id: UP000000768), from the proteome database (https://www.uniprot.org/proteomes/). There were four trained models in the background corresponding to Q1, Q2, Q3 and Q4. Based on the sequence length of the supplied test sequence, the trained model was first selected and the prediction was subsequently made. Out of 48,903 rice sequences, only 1538 were predicted as CR-proteins with > 0.8 probability. Similarly, 1510 out of 41,298 sorghum sequences were predicted as CR-proteins with > 0.8 probability. The probability threshold of 0.8 was used to minimize the number of false positives. Functional analysis of the predicted 1538 rice sequences and 1510 sorghum sequences was also carried out with Gene Ontology (GO) terms. The GO annotation (biological process and molecular function) was performed using PANTHER [76]. In rice, 1260 out of 1538 sequences were mapped to biological processes (BP) and molecular functions (MF). In sorghum, 1140 out of 1510 were mapped to BP and MF. For BP in rice, biological_process (GO:0008150; 51.98%), cellular process (GO:0009987; 39.44%), metabolic process (GO:0008152; 38.57%), organic substance metabolic process (GO:0071704; 33.33%) and cellular metabolic process (GO:0044237; 31.19%) showed the maximum number of hits (Fig. 4). With regard to MF in rice, the most represented GO terms were molecular_function (GO:0003674; 55.31%), catalytic activity (GO:0003824; 39.04%), binding (GO:0005488; 33.57%) and ion binding (GO:0043167; 20.79%) (Fig. 4).
In sorghum, metabolic process (GO:0008152; 39.47%), organic substance metabolic process (GO:0071704; 33.15%), cellular metabolic process (GO:0044237; 32.11%) and nitrogen compound metabolic process (GO:0006807; 26.22%) were the most represented BP terms, whereas molecular_function (GO:0003674; 57.36%), catalytic activity (GO:0003824; 40.78%) and hydrolase activity (GO:0016787; 14.12%) were the most represented MF terms (Fig. 4). The metabolic process showed significant enrichment in the BP category, whereas the catalytic, hydrolase and transferase activities were significantly enriched in the MF category in both rice and sorghum (Fig. 4).
An R package for users
Based on the proposed computational model, we developed an R package, "PredCRG" (https://cran.r-project.org/web/packages/PredCRG/index.html), for proteome-wide identification of proteins encoded by circadian genes. There are three main functions in this package, i.e., PredCRG, PredCRG_Enc and PredCRG_training. With the function PredCRG, users can predict the labels of test protein sequences as circadian (CRG) or non-circadian (non-CRG) along with their probabilities. The function PredCRG_Enc can be used to encode protein sequences based on the features of the PredCRG model. Most importantly, with the function PredCRG_training, users can develop their own prediction models using four different kernel functions (Laplace, RBF, linear and polynomial) with their own training datasets. The trained model can subsequently be used for the prediction of test sequences of interest. In summary, the developed R package will be of great help to researchers working on the identification of circadian genes via wet-lab experiments.
Discussion
The distribution of common CR-related genes in plants is yet to be fully understood [63]. Identification of the molecular components underlying the plant CR will certainly facilitate understanding of plant behavior in response to different environmental stimuli [77]. Manipulation of circadian genes may help in breeding crop cultivars with enhanced reproductive fitness [1,33]. Circadian genes also interact with defense signaling genes in plants [78]. Keeping in mind the roles of circadian genes, a computational model was developed in the present study to recognize the proteins encoded by circadian genes. We collected the experimentally validated circadian gene sequences of plant species from the CGDB database (http://cgdb.biocuckoo.org/) and constructed the positive set. As far as non-circadian gene sequences are concerned, no database of such sequences is available. Thus, the protein sequences of the Viridiplantae clade collected from the UniProt database were used as the negative set. Further, we employed the CD-HIT algorithm to remove redundant sequences from both the positive and negative sets. The CD-HIT algorithm sorts the input sequences from long to short and processes them sequentially from the longest to the shortest. The first sequence becomes the representative sequence of the first cluster. Then, each of the remaining sequences is compared to the existing representative sequences and is classified as redundant if it is found to be similar (at the given sequence identity cut-off) to an existing representative sequence; otherwise, it becomes the representative of a new cluster. This process is repeated until all the sequences are classified as either redundant or representative. Finally, the non-redundant dataset (at the given threshold) is obtained by combining all the representative sequences.
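The greedy incremental clustering described above can be summarized in a short sketch. The snippet below is only an illustration of the idea; the identity function is a crude placeholder and does not reproduce CD-HIT's short-word filtering and banded alignment.

```python
def sequence_identity(a: str, b: str) -> float:
    """Placeholder identity measure: fraction of matching positions over the
    shorter sequence. CD-HIT itself uses short-word filters plus alignment,
    which this toy function does not reproduce."""
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a, b)) / n if n else 0.0

def greedy_cluster(sequences, cutoff=0.40):
    """Greedy incremental clustering in the spirit of CD-HIT."""
    # 1. Sort from longest to shortest
    ordered = sorted(sequences, key=len, reverse=True)
    representatives = []
    for seq in ordered:
        # 2. Compare against existing representatives
        if any(sequence_identity(seq, rep) > cutoff for rep in representatives):
            continue                   # redundant: belongs to an existing cluster
        representatives.append(seq)    # otherwise it starts a new cluster
    # 3. The representatives form the non-redundant dataset
    return representatives

seqs = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "MKTAYIAKQRQISFVK", "MSDNERVVV"]
print(greedy_cluster(seqs, cutoff=0.40))
```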
In this study, we applied a 40% sequence identity cut-off and obtained a dataset in which none of the sequences was > 40% identical to any other sequence. The positive (39-4218 amino acids) and negative (43-5400 amino acids) datasets were found to be highly diverse with regard to sequence length. As sequence length plays an important role in determining the physico-chemical properties of protein sequences, both the positive and negative sets were partitioned into four homogeneous sub-datasets. As expected, improvements in accuracy were found with the homogeneous sub-datasets as compared to the heterogeneous full dataset. One probable reason for this may be the generation of noisy observation vectors with diverse sequence lengths. Amino acid composition and physico-chemical features of proteins determine their functions to a large extent [79][80][81]. Thus, compositional and physico-chemical features were adopted for the generation of discriminative features. The considered kernel functions are expressed either as the inner product of the feature vectors (polynomial, hyperbolic, linear and sigmoid) or as the distance between the feature vectors (radial, Laplace and Bessel). Among the kernel functions, the Laplace kernel emerged as the best kernel, followed by the linear and RBF kernels, for the classification of circadian and non-circadian proteins. Although the Laplace kernel was found to be more appropriate in the present study, accuracy may vary with different positive and negative datasets. When compared with other state-of-the-art machine learning methods such as RF, XGBoost, AdaBoost and Bagging, the SVM was found to outperform them. We also noticed that the accuracy obtained with LASSO was similar to that of the SVM with the Laplace kernel. Although LASSO produces biased estimates, an advantage of LASSO is that it may yield higher accuracy by ignoring redundant features. When we plotted the correlation matrix among the generated numeric features in the form of heat maps (Fig. 5), a high degree of correlation was observed among certain features. The high correlations among the features might have induced redundancy in the feature set. So, one probable reason for the higher accuracy with LASSO may be its use of only non-redundant features. Motivated by earlier studies [82,83], the predicted label of each observation was utilized as an additional feature. With the addition of such features, little or no improvement in accuracy was found with the linear and Laplace kernels. On the other hand, improvement in accuracy was noticed with the RBF kernel. The improvement with the RBF kernel and the lack of improvement with the linear and Laplace kernels may be due to the non-linear relationship between the iteratively generated features (-1 s and 1 s only) and the response vector. The developed computational model achieved ~63% classification accuracy when assessed through the fivefold cross-validation procedure. Similar accuracy was also obtained with the independent test dataset. The equivalent accuracy for fivefold cross-validation and the independent test set implies that the accuracy of the proposed model was neither overestimated nor underestimated. We further performed proteome-wide identification of circadian proteins using the proteome datasets of rice and sorghum, followed by functional annotation of the predicted circadian proteins. For reproducibility of the work, we have developed the R package "PredCRG".
We anticipate that this package will be helpful not only for users to predict their test sequences, but also for building their own prediction models with their own training datasets.
Conclusions
This study presents a novel computational approach for the recognition of proteins encoded by circadian genes. The prediction accuracy is not very high; however, this is the first computational approach for predicting circadian genes (proteins) from sequence data alone. We therefore believe that further improvement can be made by including more discriminatory feature sets. The developed approach is expected to supplement the existing models that are based on gene expression data. The R package "PredCRG" is believed to be of great help to the scientific community for proteome-wide identification of circadian genes. Our future endeavor will be to develop a more accurate model using the sequence dataset.
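To make the comparison discussed above (Laplace-kernel SVM versus an L1-penalized linear model) concrete, the following Python sketch uses scikit-learn stand-ins for the R packages mentioned earlier. The feature matrix and labels are random placeholders for the encoded compositional/physico-chemical features and the CRG/non-CRG classes, so the printed accuracies are illustrative only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.metrics.pairwise import laplacian_kernel

# Hypothetical stand-ins for the encoded sequence features and class labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))       # 200 sequences x 60 numeric features
y = rng.integers(0, 2, size=200)     # 1 = CRG, 0 = non-CRG

# SVM with a Laplace (Laplacian) kernel, supplied as a callable kernel
laplace_svm = SVC(kernel=lambda A, B: laplacian_kernel(A, B, gamma=0.05))

# L1-penalized logistic regression (LASSO-style), which can zero out
# redundant, highly correlated features
lasso_like = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)

for name, model in [("Laplace SVM", laplace_svm), ("L1 logistic", lasso_like)]:
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {acc.mean():.3f} +/- {acc.std():.3f}")
```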
7,585.2
2021-04-26T00:00:00.000
[ "Computer Science", "Biology" ]
Ensemble classification of integrated CT scan datasets in detecting COVID-19 using feature fusion from contourlet transform and CNN
The COVID-19 disease caused by coronavirus is constantly changing due to the emergence of different variants, and thousands of people are dying every day worldwide. Early detection of this new form of pulmonary disease can reduce the mortality rate. In this paper, an automated method based on machine learning (ML) and deep learning (DL) has been developed to detect COVID-19 using computed tomography (CT) scan images extracted from three publicly available datasets (a total of 11,407 images; 7397 COVID-19 images and 4010 normal images). An unsupervised clustering approach, namely a modified region-based clustering technique, has been proposed for segmenting the COVID-19 CT scan images. Furthermore, the contourlet transform and a convolutional neural network (CNN) have been employed to extract features individually from the segmented CT scan images and to fuse them into one feature vector. A binary differential evolution (BDE) approach has been employed as a feature optimization technique to obtain comprehensible features from the fused feature vector. Finally, an ML/DL-based ensemble classifier using the bagging technique has been employed to detect COVID-19 from the CT images. Fivefold and generalization cross-validation techniques have been used for validation. Classification experiments have also been conducted with several pre-trained models (AlexNet, ResNet50, GoogleNet, VGG16, VGG19), and it was found that the ensemble classifier technique with fused features provided state-of-the-art performance with an accuracy of 99.98%.
For automated COVID-19 lung segmentation and severity assessment in 3D chest CT scans, He et al. 16 proposed a synergistic learning framework. They created a multi-task multi-instance deep network (M2UNet) to assess the severity of COVID-19 patients and segment the lung lobe at the same time, where the context data supplied by the segmentation could be utilized to improve the performance of the severity evaluation. To begin with, they represented each input image by a bag to deal with the challenging problem that the severity is attributed to local infected regions in the CT scan image. In M2UNet, a hierarchical multi-instance learning technique was also suggested for severity evaluation. Through experimental analysis of their prepared dataset (666 CT scans), they demonstrated that their method outperformed several cutting-edge techniques by obtaining an accuracy of 98.5%. In order to identify COVID-19 utilizing relatively small-sized CT images, Li et al.
17 presented a deep learning methodology based on transfer learning. Their suggested approach made use of transfer learning principles, which move information from one or more source tasks to a target domain when the latter has fewer training sets. CheXNet was employed for COVID-19 identification by fine-tuning the network weights on the limited dataset for the target task. Evaluation was carried out on the freely accessible COVID-19-CT dataset (349 CT scans of 216 COVID-19 patients). According to the experimental findings, their method provided good performance in comparison to six state-of-the-art approaches by achieving an accuracy of 87%. However, their network design and optimizer still have scope for further development. In addition, they continued to encounter data dependence, one of the most serious issues with deep learning, which makes it impossible to train models in some specialized fields, particularly at the early stages of the COVID-19 spread when attempting to capture the characteristics of COVID-19 and non-COVID-19. Table 1 summarizes existing image-based methodologies for COVID-19 detection and their limitations.
According to the findings of the above studies, four key challenges in COVID-19 detection research have been identified: (a) segmentation of the COVID-19 image region, (b) extraction of discriminating characteristics, (c) the detection or classification approach based on the retrieved features and (d) the limited number of images in the datasets. Many region clustering algorithms for segmentation have been offered by researchers; however, the best one is yet to be found. On the other hand, feature extraction methods can be based on a single strategy or a combination/fusion of strategies. In most cases, the fusion approach yields better results. Besides, selecting an appropriate detection method can be challenging as the number of choices is large. Hence, an improved method capable of executing region-based segmentation, fusing features extracted by more than one technique, selecting appropriate features, and conducting accurate classification from a large number of images would be required to overcome the existing limitations.
Artificial intelligence (AI) has proven to have outstanding capability in the autonomous diagnosis of COVID-19 based on CT, thanks to deep learning's strong representation learning ability. AI offers several benefits: (1) it makes a speedy diagnosis, especially if the medical system is overburdened; (2) it lightens the load on radiologists; and (3) it assists underdeveloped areas in getting a proper diagnosis. Most critically, as COVID-19 is a new pandemic, there is a lack of systematic consensus on the sensitivity and particular signs of the disease. AI can develop discriminative features automatically based on the available data, which can help in distinguishing COVID-19 from other pneumonia 18. Despite the success of AI in COVID-19 CT diagnosis, the models' generalization is still lacking and must be enhanced further to improve detection accuracy. This paper aims to develop a machine learning (ML) based automatic system that detects COVID-19, either positive or negative, from CT scan images and provides better output compared to the existing methods. The main contributions are as follows.
1. The proposed method has developed a new database by collecting two different categories of CT scan images, consisting of normal and COVID-19, from three publicly available major data sources [19][20][21].
2.
A modified region-based clustering method has been applied to segment the whole CT scan image, leading to a better classification result.
3. A fused feature vector has been proposed from two different feature extraction methods, namely the contourlet transform and CNN.
4. Hybrid binary differential evolution (BDE) has been selected for obtaining meta-heuristic features from the fused feature vector and achieving optimized features.
5. A voting-based technique has been suggested for detecting COVID-19 using an ensemble of three base classifiers.
The rest of the paper is organized as follows: the "Model development" section describes the methodology to detect COVID-19 using CT images and deep neural networks. The "Results and analysis of the proposed method" section illustrates the experiments conducted, with the corresponding classification performance and model validation. The "Discussion" section presents a performance comparison with existing methods, complexity analysis and limitations. Conclusions and future research directions are outlined in the "Conclusion" section.
Proposed methodology
The architecture of the proposed model, as shown in Fig. 1, takes CT scan images as the input to detect COVID-19 or non-COVID-19 images. The CT scan image datasets were collected and merged from three publicly available datasets. Since the dataset images were not of the same size, they were resized and then merged. The images were then converted from RGB to grayscale. A modified region-based clustering method was proposed to segment the grayscale CT scan images. Furthermore, the model considered two feature extraction techniques, the contourlet transform and CNN; each technique extracted its own feature vector. These two vectors were fused into one feature vector, which was used as the input to train the classification model. The fused feature vector contained a large number of features that helped to accurately identify the COVID-19 or normal images. The system also proposed a feature selection technique that extracted meta-heuristic features by using BDE. This optimized vector was subsequently used to recognize COVID-19 CT scan test images using an ensemble classifier.
The most important step in designing a computer-aided diagnostic (CAD) system for detecting COVID-19 at an early stage is CT scan image segmentation 22. Segmentation is widely used in the area of medical imaging to diagnose unusual disorders. Manual segmentation of medical images is possible; however, image segmentation using segmentation algorithms has higher accuracy than manual segmentation. The original fuzzy c-means (FCM) algorithm 23 works well for segmenting noise-free images; however, it fails to accurately segment images with noise, outliers, or other imaging artifacts. The modified region-based clustering technique was used in this work to segment the CT images. The objective of the modified region-based clustering algorithm was updated to reduce intensity inhomogeneities by including spatial neighborhood information and altering the membership weighting of each cluster. The proposed segmentation algorithm has the following advantages: (a) it propagates more homogeneous regions than the classical fuzzy c-means algorithms, (b) it manages noisy spots and (c) it is comparatively less sensitive to noise. These techniques have produced excellent output images with the simplest approach to isolate the objects from the background.
Dataset used
A chest CT scan is a useful medical imaging tool for accurately diagnosing COVID-19 cases 24. As the open repositories had a limited quantity of CT scan images, the images from all three databases were integrated to form a new database for this work, giving a total of 11,407 CT images with 7397 images from the COVID-19 class and 4010 images from the non-COVID-19 class.
Preprocessing
Image pre-processing is a key step in medical image processing to obtain meaningful information and appropriate classification by eliminating noisy or distorted pixels from each CT scan image. In this stage, the images were first resized to 256 × 256 pixels and transformed from RGB to grayscale using a MATLAB function as the input for model development. Color has no significance in detecting COVID-19 from CT scan images; hence, grayscale images were employed during model building to avoid any false classification and unnecessary complexity. Grayscale images are simpler and easier to process than color images because they contain only one color channel, which represents the intensity of each pixel. Figure 3 displays the preprocessing steps employed in this work.
Histogram equalization is an image processing technique that is frequently applied to CT scan images to improve image quality in black-and-white scales. The input images and their contrast-enhanced (after histogram equalization) counterparts are shown in Fig. 3 with the related histograms. Histogram equalization was achieved by efficiently spreading out the most frequent intensity values, extending the image intensity range. The adoption of a spatially variable histogram equalization technique seems to improve the visibility of anatomic structures in various clinical scenarios 25. However, the technique increased the amount of noise and artifacts in the presented image.
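The preprocessing chain described above (resizing to 256 × 256, RGB-to-grayscale conversion, histogram equalization) was implemented in MATLAB in the paper; a minimal equivalent sketch in Python with OpenCV is shown below. The synthetic array stands in for a real CT slice and is an assumption for illustration only.

```python
import numpy as np
import cv2

# Synthetic stand-in for an RGB CT slice; in practice this would be cv2.imread(path)
rgb = (np.random.rand(512, 420, 3) * 255).astype(np.uint8)

# 1. Resize to the common 256 x 256 input size
resized = cv2.resize(rgb, (256, 256), interpolation=cv2.INTER_AREA)

# 2. Convert from RGB to a single-channel grayscale image
gray = cv2.cvtColor(resized, cv2.COLOR_RGB2GRAY)

# 3. Histogram equalization to spread out the most frequent intensities
equalized = cv2.equalizeHist(gray)

print(resized.shape, gray.shape, equalized.dtype)  # (256, 256, 3) (256, 256) uint8
```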
Modified region-based clustering techniques
Region-based clustering was employed to simplify the COVID-19 image region, which ensured lower computational complexity and relatively accurate analysis. K-means, C-means, thresholding, morphology-based, edge-based, watershed, region-growing, and cluster-based approaches are among the various segmentation algorithms 26. The authors of this paper proposed a cluster-based algorithm that segmented the image effectively and provided better performance in terms of the evaluation metrics SSIM (structural similarity index), PSNR (peak signal-to-noise ratio) and RMSE (root mean square error). The proposed segmentation method partitioned the COVID-19 image into four clusters (C1 to C4), described as gray matter (GM), cerebrospinal fluid (CSF), white matter (WM), and the necrotic focus of glioblastoma multiforme (GBM). The proposed segmentation technique employs an iterative process to locate the cluster regions. In each iteration, the cluster's centroid is modified to reduce the distance between the pixels and the centroid. The mean brightness of all pixels within a cluster and the distance are obtained by using Eqs. (1) and (2), respectively:
μ_k = (1/N) Σ_{i=1..N} Z_i  (1)
r = |x_i - C_k|  (2)
where μ_k is the cluster's mean intensity and r is a pixel's distance from the cluster's centroid. The intensity of the ith pixel within a cluster is Z_i, C_k is the center of the kth cluster, and x_i is the intensity of the ith pixel. The number of pixels in a cluster is denoted by N. The COVID-19 segmentation process is depicted in Algorithm 1, and Figure 4 illustrates the grouping of the COVID-19 image data step by step.
Extraction of contourlet transform features
The contourlet transform tries to capture curves rather than points and includes anisotropy and directionality. It was created to address the wavelet transform's limitations, such as poor directionality, shift sensitivity and lack of phase information 27. At each scale, it allows for a variable and flexible number of directions while obtaining virtually critical sampling. The contourlet transform 28 is accomplished in two steps: Laplacian pyramid decomposition and directional filter banks (DFB). At every level of the Laplacian pyramid, a down-sampled lowpass version of the source image is generated, as well as the difference between the source image and the down-sampled lowpass image, resulting in a high-pass image. The next level of the Laplacian pyramid builds an iterative structure linked with the down-sampled lowpass version of the original signal. DFBs are used to create high-frequency sub-bands with a variety of directions. The contourlet transform acts on two-dimensional CT scan images. This work generated sixteen different multi-directional multiscale images using a four-level contourlet transform with the '9-7' filter and computed thirteen image features, including entropy, homogeneity, energy, correlation, and others, from the segmented images by enumerating the gray-level co-occurrence matrix (GLCM) of each image. Figure 5 presents the contourlet transformed images capturing edges, lines, textures and contours, in contrast to the wavelet transform.
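Referring back to the modified region-based clustering subsection above, the iterative centroid update of Eqs. (1)-(2) can be sketched as a simple intensity-based clustering loop. This is an illustrative k-means-style re-implementation under the stated assumptions (four clusters, grayscale intensities), not the authors' exact algorithm.

```python
import numpy as np

def cluster_intensities(gray, k=4, n_iter=50):
    """Iteratively assign pixels to k clusters by intensity and update each
    cluster centre to the mean brightness of its pixels (Eqs. (1)-(2))."""
    pixels = gray.astype(float).ravel()
    # Initialize centroids spread across the intensity range
    centroids = np.linspace(pixels.min(), pixels.max(), k)
    for _ in range(n_iter):
        # Distance of every pixel to every centroid (Eq. (2))
        distances = np.abs(pixels[:, None] - centroids[None, :])
        labels = distances.argmin(axis=1)
        # New centroid = mean intensity of the cluster's pixels (Eq. (1))
        new_centroids = np.array([
            pixels[labels == j].mean() if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels.reshape(gray.shape), centroids

gray = (np.random.rand(256, 256) * 255).astype(np.uint8)   # stand-in CT slice
label_map, centres = cluster_intensities(gray, k=4)
print(centres)
```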
Extraction of CNN-based features
For feature extraction, the proposed system employed the benchmark VGG19 CNN model, which outperformed the other CNN models such as AlexNet, GoogleNet, and ResNet50. A 19-layer version of VGGNet 29 was used to create this network. Figure 6 shows the VGG19 architecture, which includes sixteen convolution layers and three fully connected (dense) layers. For each convolution layer's output, a non-linear ReLU was employed as the activation function. The entire convolutional section was divided into five sub-regions by five consecutive max-pooling layers. The first two sub-regions each contained two convolution layers, with depth dimensions of 64 and 128, respectively. Each of the other three sub-regions was made up of four consecutive convolution layers, with depth sizes of 256, 512, and 512, respectively. In this case, a convolutional kernel of size 3 × 3 was chosen. The last layer of the proposed VGG19 model was replaced by a softmax classification layer. Two fully connected layers with 1024 and 4096 neurons were installed before the output layer. As a result, the fully connected layer yields 4096 features for classification.
Feature fusion and generation of optimized features
A fused feature vector was created by combining the extracted features from the contourlet transform and the CNN. Overlapping, redundancy, and dimensional expansion are regular occurrences in all fusion-based techniques; therefore, dimension reduction, as well as redundancy minimization or the elimination of irrelevant features, is required to obtain the optimum features. Many researchers obtain optimized features using Principal Component Analysis (PCA) 30 and minimum Redundancy-Maximum Relevance (mRMR) 31, but the BDE feature optimization method provides better performance than the others. For the dataset used in this study, three feature optimization approaches were tested and BDE performed best.
In the mRMR feature selection algorithm, the mutual dependency of the variables x and y can be determined using Eq. (3), where p(x), p(y) and p(x,y) are the probability density functions. Equation (4) approximates the maximal relevance D(S,c) as the mean of the mutual dependencies between each feature x_i and the class c. The function R(S), represented by Eq. (5), is then used to impose minimal redundancy, where S is the feature combination. In the PCA algorithm, the covariance of the features is determined to obtain uncorrelated features. PCA uses Eq. (6) to combine the correlated features.
The BDE feature selection technique is a heuristic evolutionary strategy; the notion of advanced binary differential evolution (ABDE) is extended here to feature selection problems. Three random vectors P_u1, P_u2, and P_u3 are chosen for the vector P_k for the mutation operation, such that u1 ≠ u2 ≠ u3 ≠ k, where k indexes the current population vector. The dth component of the difference vector (Eq. (7)) is zero if the dth dimensions of the vectors P_u1 and P_u2 are equal; otherwise, it has the same value as the vector P_u1. Following that, the mutation and crossover processes are carried out, as illustrated by Eqs. (8) and (9).
Here, W denotes the trial vector, CR ∈ (0, 1) is the crossover amount, and γ ∈ (0, 1) denotes the mutation amount. If the trial vector W_k has a higher fitness value than the current vector P_k, then it replaces P_k in the selection phase; otherwise, the current vector P_k is kept for the next generation. Finally, this fused method yielded 1300 optimized features. Figure 7 illustrates the steps in obtaining the optimized features in a single vector by fusing the feature vectors extracted by the contourlet transform and CNN. The size of this fused feature vector is 4109. The BDE-based feature selection method was then employed to obtain the 1300 most discriminating features.
Hybrid selective mean filtering (HSMF) method
The authors suggested a novel, straightforward hybrid selective mean filter (HSMF) technique 32 to calculate the average value selectively, unlike the traditional mean filter (MF) method, which calculates the average pixel value utilizing all pixels in a given kernel region. A threshold value (h) was used to define the pixel selection. An adjacent pixel in a kernel was not considered in the noise reduction procedure if it was higher or lower than the value of the core pixel by more than the threshold value. The pixel selection was performed according to Eq. (10): if |I(x, y) - I(x + i, y + j)| ≤ h for every i and j, then N′(x, y) = N - 1. The noise-reduced image is then calculated using Eq. (11). In Eqs. (10) and (11), the disparities between the nearby pixel values and the central pixel value are likely to exceed h in the edge areas; the pixel value I_SMF(x, y) is equal to I(x, y) in this situation. In contrast, in the homogeneous regions, the disparities between the nearby pixel values and the central pixel value are likely to be smaller than h; the pixel value I_SMF(x, y) is then equivalent to I_MF(x, y). Figure 8 depicts the noise reduction process of the HSMF method. The mean pixel value at the central pixel at position (x, y) was calculated only from the black area, where the differences in pixel values from the value of the central pixel were less than the threshold value, and not from all the pixels in a particular square kernel (i.e., the union of the black and red areas). The pixels outside the black region, as well as those still inside the kernel of interest whose pixel values differed by more than the threshold value, were not included in the calculation.
The threshold (h) was calculated using the magnitude of the standard deviation (SD) of the pixel values of an image, which is a measure of noise 33. To cover the majority of the image noise in this study, a threshold of 3 SD was utilized. An approach proposed in Ref. 34 was used to determine the SD automatically; it selects the minimum value of the standard deviation map (SDM), as defined by Eq. (12). The HSMF was expected to reduce the noise dramatically while maintaining good spatial resolution. The technique is computationally light and fast because it is based on the MF, making it easier to employ in clinical imaging than the BF (bilateral filter). Figure 9 displays the filtered image obtained by using the HSMF method.
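A minimal numpy sketch of the selective mean filtering idea follows: for each pixel, only the neighbours whose intensity differs from the central pixel by no more than the threshold h contribute to the average. This is an illustrative re-implementation rather than the authors' code; the kernel size is a placeholder and the 3-SD threshold is taken from the description above.

```python
import numpy as np

def hybrid_selective_mean_filter(img, kernel=3, n_sd=3.0):
    """Selective mean filter: average only neighbours within +/- h of the
    central pixel, where h is a multiple of the image standard deviation."""
    img = img.astype(float)
    h = n_sd * img.std()                    # threshold from the image noise level
    pad = kernel // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    rows, cols = img.shape
    for y in range(rows):
        for x in range(cols):
            window = padded[y:y + kernel, x:x + kernel]
            center = img[y, x]
            selected = window[np.abs(window - center) <= h]   # Eq. (10) idea
            out[y, x] = selected.mean()                        # Eq. (11) idea
    return out

noisy = np.random.rand(64, 64) * 255
print(hybrid_selective_mean_filter(noisy).shape)   # (64, 64)
```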
Ensemble classifier
To detect COVID-19, an ML/DL-based ensemble classifier was employed 35. Four ensemble models are commonly used to create predictive classifiers: boosting, bagging, stacking, and voting 36. The bagging (bootstrap aggregation) approach of the ensemble methods was used in this experiment. To compare the classification performance utilizing the optimized feature vector, three distinct types of classifiers, namely Long Short-Term Memory (LSTM), ResNet50 and Support Vector Machine (SVM), were employed as base classifiers. The ensemble decision class is the one that receives the majority of the votes from the base classifiers, i.e., more than n/2 votes, as indicated in Eq. (13), where the total number of base classifiers is n. Figure 10 represents the ensemble classifier based on the bagging approach, where C1, C2, and C3 depict the LSTM, ResNet50, and SVM base classifiers, respectively, and P1, P2, and P3 signify the votes they cast. The final classification result combines the votes P1, P2, and P3 using Eq. (13) to yield the predicted class based on the majority of votes. To train the base classifiers, the training dataset was divided into three subsets, D1, D2, and D3, and testing was performed after training.
Segmentation performance
In this work, evaluation metrics such as PSNR (peak signal-to-noise ratio), SSIM (structural similarity index), and RMSE (root mean square error) were calculated to measure the segmentation performance (Table 3). It was clear that the proposed modified region-based clustering method produced better performance compared to the other segmentation methods in terms of PSNR and SSIM. However, the RMSE value was slightly worse than those of the Fast C-means and K-means clustering methods.
Filtered method performance
The HSMF has the potential to lower a given noise level by up to 75% without sacrificing spatial resolution. For a similar noise level, the bilateral filter (BF) was only able to lower the noise by 50%, from 3.0 to 1.5 mGy. At a higher noise level, the BF cannot achieve a 50% reduction; for instance, a noise reduction from 6.0 to 3.0 mGy would not be possible. According to the current experiments, the HSMF reduced the noise by 75% (from 6.0 to 1.5 mGy), implying that the HSMF proved to be a better filter than the BF. In this study, PSNR was also used to compare the performance of different filtering techniques. The highest PSNR value was obtained for the HSMF (29.34), compared to the adaptive median filter (AMF) (28.54) and the BF (28.75).
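Returning to the ensemble described above, the hard-voting rule of Eq. (13) can be written in a few lines. The snippet below is a generic illustration; the three base-prediction arrays are hypothetical stand-ins for the LSTM, ResNet50 and SVM outputs.

```python
import numpy as np

def majority_vote(predictions):
    """Hard voting over base-classifier predictions.

    predictions : 2D array of shape (n_classifiers, n_samples) with
                  class labels 1 (COVID-19) or 0 (non-COVID-19).
    A class wins when it receives more than n/2 of the votes.
    """
    predictions = np.asarray(predictions)
    n_classifiers = predictions.shape[0]
    votes_for_positive = predictions.sum(axis=0)
    return (votes_for_positive > n_classifiers / 2).astype(int)

# Hypothetical outputs of the three base classifiers on five test images
p_lstm = [1, 0, 1, 1, 0]
p_resnet = [1, 0, 0, 1, 0]
p_svm = [1, 1, 1, 0, 0]
print(majority_vote([p_lstm, p_resnet, p_svm]))   # -> [1 0 1 1 0]
```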
Classification performance: feature fusion
The deep features from the pre-trained CNN (VGG19) model were additionally merged with the features extracted using the contourlet transform. When concatenated with deep CNN features, the contourlet transform exhibited superior classification results compared to interpolation-oriented descriptors such as the scale-invariant feature transform (SIFT) 37 and the histogram of oriented gradients (HOG) 38. Table 4 shows the comparison results utilizing the contourlet transform, SIFT and HOG feature descriptors. It was clear that the fusion of features by contourlet transform + CNN with feature optimization showed better performance than the individual techniques. Again, after optimization with three techniques (PCA, mRMR and BDE), BDE produced the best performance. Therefore, fusion of CNN features with contourlet-transformed features and optimization with BDE were adopted in this work. For all fusion-based CNN feature extractor models, this work employed the ensemble classifier.
Classification performance: feature extraction by pre-trained CNN models
Feature extraction experiments were carried out with various pre-trained CNN models such as GoogleNet, VGG16, ResNet50, AlexNet and VGG19, and the extracted features were fused with the features obtained by the contourlet transform. It was identified that VGG19 outperformed the others in terms of all performance measures (Fig. 11). For each of the pre-trained models, various performance metrics were determined, and the optimum results were obtained by changing the learning parameters and the number of epochs. The best outcomes were achieved by selecting an appropriate learning parameter of 0.001 and 50 epochs for each of the pre-trained models.
Classification performance: ensemble method
For classification, the final feature vector with BDE optimization was used as the input in developing the suggested ensemble model, which included the LSTM, SVM and ResNet50 classifiers. Figure 12 shows the classification results of each separate classifier and of the ensemble method. Compared to the three classifiers separately, the ensemble provided a better outcome, with an accuracy of 99.98%, a specificity of 99.93%, and a sensitivity and precision of 99.87%. Figure 13 shows the different accuracy/loss curves for classification. The true positive rate (TPR) against the false positive rate (FPR) over a collection of threshold values is represented using a ROC curve. The receiver operating characteristic (ROC) curves for the individual and ensemble methods are presented in Fig. 14. The goodness of the ensemble method's classification performance was clearly noticeable: the area covered under the ROC curve was almost 100%, indicating that the model showed outstanding performance in terms of COVID-19 identification from the CT images.
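The ROC curve and its area mentioned above are computed directly from predicted scores and true labels; a short hedged example with scikit-learn follows, using random stand-in scores rather than the paper's model outputs.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=500)                    # 1 = COVID-19, 0 = normal
# Stand-in probabilities; a good model pushes positives towards 1.0
y_score = np.clip(y_true * 0.7 + rng.normal(0.2, 0.2, 500), 0, 1)

fpr, tpr, thresholds = roc_curve(y_true, y_score)        # TPR vs FPR over thresholds
print(f"AUC = {auc(fpr, tpr):.3f}")
```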
Validation performance
The performance of the proposed model was additionally assessed using the generalization and k-fold validation techniques. Cross-validation is a resampling method used in ML to ensure that a model is efficient and precise on unseen data. The k-fold cross-validation technique was employed in this study to divide the data into five folds and ensure that each fold was utilized as a testing set at least once. By doing so, the model was tested on completely unseen CT images, which provides confidence in the model's capacity to accurately recognize COVID-19 cases. Table 5 presents the accuracy and loss values obtained while testing and validating the proposed model using the fivefold cross-validation technique. It is clear that an average accuracy of 95% was obtained during testing and validation, indicating the reliability of the proposed method on unseen data. Figure 16 shows the ROC curves for the individual and average folds.
To further assess the model's performance, the generalization technique was used, in which the model trained on a given dataset predicts COVID-19 on a completely new dataset. The majority of current research encounters challenges with the generalization technique, since the models are unable to recognize the varied relationships between pixel values in X-ray or CT images from datasets of different sources 39. In this study, the performance metrics attained by the proposed model under the generalization technique were 98.10%, 96.70%, 98.79%, and 97.33%, demonstrating the model's robustness even when a new set of data was tested.
Comparative analysis
Given the increasing size and complexity of biomedical datasets, the use of ML and DL techniques in data analysis will continue to grow in the coming years. As a result, novel strategies for uncovering biological patterns, particularly in biomedical imaging data, are required. This paper provides an ensemble classification technique for detecting COVID-19 cases from CT images. Table 6 compares the performance of the proposed strategy with previous methods available in the literature that used various classifiers and pre-processing techniques. It is obvious from the table that the proposed method outperformed all the previous state-of-the-art (SOTA) models by achieving a high classification accuracy of 99.98%. Ensemble learning is a simple machine learning approach that seeks better predictive performance by combining the predictions of multiple models. In the proposed system, the dataset was first pre-processed, and the COVID-19-affected regions were segmented using the proposed segmentation technique. Relevant features were extracted by two different feature extractors (VGG-19 and the contourlet transform) and fused into one vector. For classification purposes, the voting technique of the ensemble method was employed. It should be noted that the ensemble method was only used for the classification of the features, not for fusing them together. Hence, in this proposed system, the modified region-based segmentation, the fused features, the BDE feature selection method, and the ensemble classification all play a significant role in obtaining significantly improved accuracy.
Most studies published in the literature did not use a segmentation technique to pre-process the CT images [40][41][42][43][44]. However, the proposed method used a modified region-based clustering technique for segmenting the COVID-19 CT images. Hasan et al. 45 and Zain et al.
46 used an LSTM network as a classifier to achieve a classification accuracy of above 98% on 321 and 1322 CT images, respectively. However, LSTM networks might pose problems when trained on small numbers of images, since they are susceptible to overfitting. Also, an LSTM network requires additional memory and training time. Most previous research used a single pre-trained DL model as a classifier [41][42][43][44][45][46][47][48][49][50][51][52], whereas in this work LSTM, ResNet50 and SVM were combined into an ensemble classifier to achieve better classification performance. Most research used only a single dataset for their experiments 40,[42][43][44][45][46][47][48][49][50][51]53, making their models unreliable in predicting COVID-19 from a different dataset. Some work produced lower accuracies even when small datasets were used 40,43,44,49. In contrast, the current method, employing three distinct datasets with a large number of images to develop the model, enhances its reliability.
Complexity analysis of the proposed method
The processing time of a system plays a significant role in assessing the image retrieval process. For this purpose, the entire operation of this study was performed in MATLAB 2019b on a high-performance computer specified in the "Experimental settings" section. The estimated processing times in this study are shown in Table 7. The entire operational time for each image is a combination of the processing, training, and testing times. The processing time consists of the preprocessing, feature extraction and classification times, where the process begins with reading the image and finishes with feature extraction. The training time is the amount of time required to train each classifier on the complete dataset. The testing time merely consists of the prediction and voting of each classifier. Based on the processing times, it can be concluded that the proposed method is not computationally complex.
Limitations and future work
This feature fusion ensemble method for detecting COVID-19 was developed based on three publicly accessible datasets. Despite the huge success of the proposed method in identifying COVID-19 cases correctly, some drawbacks need to be highlighted for further improvement. One of the key challenges faced by researchers in ML-based automated detection of COVID-19 cases is the requirement for a substantial annotated image dataset collected by a qualified physician or radiologist in order to develop a robust model. To the best of our knowledge, the majority of contemporary ML tools for medical imaging have this same constraint. Researchers are currently making their datasets available to the public in an effort to address this problem. However, the difficulty of gathering accurate data is compounded by the absence of accurate annotation of the data that has already been collected.
Adopting zero-shot, few-shot, and deep reinforcement learning (DRL) techniques could help to address this problem in the near future 54,55. Zero-shot learning has the capacity to build a recognition model for unseen test samples that have not been labelled for training; therefore, zero-shot learning can address the issue of the lack of training data for the COVID-19 classes. Additionally, a deep model can learn from a small number of labeled instances per class using the few-shot learning technique. On the other hand, DRL can reduce the need for precise annotations and high-quality images. Another limitation is that only CT images were used in this study. In the future, the same strategy can be applied to X-ray images to detect COVID-19 cases, which would enable an assessment of the effectiveness of the model on a variety of image datasets. Although the proposed method achieved outstanding performance on three publicly available datasets, the work has not yet been validated in an actual clinical study. Therefore, efforts are required to test the model in clinical conditions and to gather feedback from doctors and radiologists for further improvement of the model. In addition, fine-tuning of the proposed strategy could be carried out to address the issue of the lengthy training time resulting from the hybrid feature fusion technique.
Conclusion
The proposed research has developed a high-accuracy, low-complexity intelligent ML model for COVID-19 identification using CT scan images. For the detection of COVID-19, the system combined the strength of the contourlet transform with the power of the CNN for feature fusion, optimized by BDE, together with the bagging-based ensemble classifier. The analysis of the results was performed considering the evaluation metrics including accuracy, sensitivity, specificity, and precision obtained from the confusion matrix. The proposed method attained a superior result of 99.98% accuracy compared to the individual classifiers, including LSTM, ResNet50, and SVM, and to the existing approaches reported in the literature. Furthermore, the proposed system, tested using fivefold cross-validation and with an unknown dataset for generalization purposes, produced accuracies of 95.68% and 98.10%, respectively.
Figure 2 demonstrates sample CT scan images from each dataset; the training and testing phases included images of COVID-19 and non-COVID-19.
(Figure captions: Figure 2. Sample CT scan images from three datasets. Figure 4. Applied modified region-based clustering method for COVID-19 and non-COVID-19 image segmentation. Figure 5. Overall structure of the contourlet transform feature extraction method. Figure 6. Architecture of VGG19 for feature extraction from CT scan images. Figure 7. Block diagram of the optimised feature selection process. Figure 8. An illustration of picking neighboring pixels for noise reduction in the hybrid selective mean filter (HSMF) method. Figure 9. Filtered CT scan images using the hybrid selective mean filter method.)
Figure 15 presents the confusion matrix, in which two colors have been used based on the labels, representing negative and positive prediction values. The yellow color (true predictions: TP and TN) represents how many COVID-19 and normal images were detected accurately, whereas the blue color (false predictions: FP and FN) indicates the number of COVID-19 and normal images that were misclassified. According to the confusion matrix, the proposed ensemble model missed 1 COVID-19 image (false negative) out of 788 COVID-19 images in this testing experiment, while it misidentified 1 non-COVID-19 image as a COVID-19 image (false positive) out of 1462 non-COVID-19 images.
(Figure captions: Figure 11. Comparisons of the classification performance results achieved by different CNN models for feature extraction and fusion combined with the contourlet transform (feature selection by BDE optimisation and classification with the ensemble technique). Figure 17. Training accuracy and training loss curves, confusion matrix, and ROC curves obtained by applying the generalization method to the COVID-19 Radiography database.)
(Table 1 excerpt: He et al. 16 - 2D image patches + feature embedding + classifier (M2UNet); detects positivity and severity of COVID-19; used 666 CT scan images; accuracy 98.50%; limitations: high computational time, more complex classification. Li et al. 17 - preprocessing + feature extractor + modified CheXNet; classifies COVID-19 or normal image; used 1212 X-ray images; accuracy 87%; limitation: small-sized training datasets.)
(a) ...CoV-2 from men (32) and females (28), and 1230 CT scan images of 60 patients who were not infected with SARS-CoV-2 but had other pulmonary disorders. The CT scan image data were gathered from hospitals in Sao Paulo, Brazil. The CT scan images in this dataset are digital scans of printed CT tests, and there is no fixed image size: the smallest CT scan images in the dataset are 324 × 412 pixels, while the largest are 484 × 456 pixels. In this dataset, the numbers of training and testing images are 1842 and 640, respectively. (b) The original CT scan images of 377 people are included in this COVID-19 CT image dataset 20. There are 1558 and 4826 CT scan images, respectively, belonging to 95 COVID-19-affected people and 282 normal people. The Negin Medical Center in Sari, Iran, provided this dataset. All the CT images are of size 256 × 256 × 3. In this dataset, the numbers of training and testing images are 5594 and 790, respectively. (c) This publicly available dataset was collected from an authentic website 21. It contains a total of 2541 CT scan images, with 1200 COVID-19 and 1341 non-COVID-19 images. In this dataset, a total of 1726 and 815 images are considered for training and validation, respectively.
(Table captions: Table 2. Data distribution for training and testing. Table 3. Comparison of several segmentation techniques; best values are in bold. Table 4. COVID-19 detection performance using features from several techniques; best values are in bold. Table 5. The values of accuracy and loss for training and test data during cross-validation. Table 6. Performance comparison between the proposed method and existing methods; best values are in bold. Table 7. The entire processing time of the proposed method.)
8,146.6
2023-11-16T00:00:00.000
[ "Computer Science", "Medicine" ]
In situ IR spectroscopy data and effect of the operational parameters on the photocatalytic activity of N-doped TiO2
The TiO2 photocatalyst doped with nitrogen was synthesized via a precipitation method and investigated in the oxidation of acetone vapor under UV (371 nm) and visible light (450 nm). The data were collected in a continuous-flow set-up equipped with a long-path IR gas cell for in situ analysis of oxidation products and evaluation of the photocatalytic activity. The IR spectra for the inlet and outlet reaction mixtures and their change during the process are presented. A technique for quantitative analysis of the initial substrate and the oxidation product using the collected IR spectra is described. The effects of the main operational parameters, namely, the outlet concentration of the oxidizing substrate in the range of 0–25 μmol/L, the humidity in the range of 10–85%, and the surface density of the photocatalyst in the range of 0.6–5.7 mg/cm2, were investigated, and the data received are presented. The data show the influence of these parameters on the UV and visible light photocatalytic activity of N-doped TiO2. The data are publicly available on GitHub at the link: https://github.com/1kovalevskiy/Effect-of-the-operational-parameters.
Data
The TiO2 photocatalyst doped with nitrogen was tested in a continuous-flow set-up during the oxidation of acetone vapor under UV (371 nm) and visible light (450 nm) to obtain data on the effects of the operational parameters on the steady-state photocatalytic activity. The schematic diagram of the experimental set-up is shown in Fig. 1. Qualitative and quantitative analysis during the photocatalytic oxidation (PCO) process was performed using an in situ IR spectroscopy technique. Fig. 2 shows the typical IR spectra for the inlet and outlet reaction mixtures during the photocatalytic oxidation of acetone vapor. The IR spectra were collected periodically every 30 s to monitor the concentrations of acetone and CO2 during the PCO experiment. To illustrate this point, Fig. 3 shows the evolution of the IR spectra during an acetone PCO experiment with two switches between monitoring of the inlet and outlet mixtures (10 minutes each).
The IR spectra collected were analyzed, and the concentrations of acetone and CO2 were estimated according to the Beer-Lambert law. Fig. 4 shows the typical acetone and CO2 concentration profiles during the PCO experiment. The effects of the main operational parameters, namely, the outlet concentration of the oxidizing substrate in the range of 0–25 μmol/L, the humidity in the range of 10–85%, and the surface density of the photocatalyst in the range of 0.6–5.7 mg/cm2, were investigated, and the data received are presented. Fig. 5 shows the dependence of the steady-state PCO rate for N-doped TiO2 under UV and visible light on the concentration of acetone in the outlet reaction mixture. Fig. 6 shows the effect of relative humidity on the photocatalytic activity of N-doped TiO2 under UV and visible light. Fig. 7 shows the dependence of the steady-state PCO rate under UV and visible light on the photocatalyst surface density.
Specifications table
Subject area: Chemistry
More specific subject area: Photocatalysis
Type of data: Figure
How data was acquired: A continuous-flow set-up equipped with a special valve system for analysis of the inlet and outlet reaction mixtures using IR spectroscopy. IR spectroscopy: an FTIR spectrometer FT-801 from Simex LLC (Russia) equipped with a long-path IR gas cell (Infrared Analysis Inc., USA)
Data format: Raw and analyzed
Experimental factors: The N-doped TiO2 photocatalyst was prepared via a precipitation method using titanyl sulfate as the titanium precursor and ammonium hydroxide as the precipitating agent, as well as the source of nitrogen. Before the photocatalytic experiments, the synthesized photocatalyst was deposited on a 9 cm2 glass plate from an aqueous suspension, followed by drying in air at 110 °C
Experimental features: The synthesized photocatalyst was tested in the oxidation of acetone vapor under UV (371 nm) and visible light (450 nm) in the continuous-flow set-up under steady-state conditions. Acetone was selected as the test organic substrate because it does not cause deactivation of the photocatalyst and is completely oxidized to CO2 and water without gaseous intermediates.
Value of the data
- Data allow for comparing the efficiency of the photocatalytic oxidation using N-doped TiO2 under UV and visible light
- Data are useful for the selection of optimal parameters to compare different photocatalytic materials
- In situ IR spectroscopy has great promise for the investigation of photocatalytic activity in the oxidation of volatile organic compounds
- Data show the great promise of continuous-flow set-ups for the investigation of the kinetic characteristics and stability of photocatalysts
2. Experimental design, materials, and methods
Experimental set-up
The TiO2 photocatalyst doped with nitrogen was tested in the oxidation of acetone vapor under UV and visible light in the continuous-flow set-up to determine the steady-state photocatalytic activity and to investigate the effect of the operational parameters on the activity. This continuous-flow set-up was previously successfully employed for the investigation of various photocatalytic materials and target pollutants [1–4]. The set-up had a gas purification unit to remove particles, CO2, water vapor, and traces of volatile organic compounds from the air. The purified air flow was divided into three flows. Two flows were saturated with water and acetone vapor, respectively. Then, all the flows were mixed. The total volume rate, humidity, and concentration of acetone vapor in the reaction mixture were adjusted via the rate of each flow.
The N-doped TiO2 photocatalyst was prepared via a precipitation method using titanyl sulfate as a titanium precursor and ammonium hydroxide as a precipitating agent, as well as a source of nitrogen. The synthesized photocatalyst was deposited on a 9 cm² glass plate from an aqueous suspension and placed into the photoreactor. The surface density of the photocatalyst on the glass plate was varied from 0.6 to 5.7 mg/cm². The set-up had a special valve system that allows for analyzing the inlet and outlet reaction mixtures alternately using an FTIR spectrometer FT-801 from Simex LLC (Russia) equipped with a long-path IR gas cell (Infrared Analysis Inc., USA). During the analysis of the inlet mixture (10 min), the gas from the mixing chamber flows first through the IR cell and then goes to the photoreactor. In the case of the outlet analysis (10 min), the gas flows through the photoreactor and then through the IR cell. The other experimental parameters were as follows: the reactor temperature was 40.0 ± 0.1 °C, and the volume flow rate was 0.069 ± 0.001 L/min.
IR spectroscopy analysis
As stated above, the special valve system allows for analyzing the inlet and outlet reaction mixtures alternately using IR spectroscopy. According to the NIST database [5], the bands at 1092, 1217, 1365, 1435, 1735, and 2970 cm⁻¹ can be attributed to the acetone molecule, i.e., the initial oxidizing substrate. In addition to these bands, a band at 2349 cm⁻¹ appeared in the IR spectra corresponding to the outlet mixture. This band can be attributed to the CO2 molecule [5]. No other carbon-containing compounds were detected using IR spectroscopy. This result indicates that CO2 is the major product of the acetone PCO over N-doped TiO2 both under UV and visible light. The IR spectra were collected every 30 s to monitor the concentrations of acetone and CO2 during the experiment. The quantitative analysis was performed by integration of the collected IR spectra using the Beer-Lambert law as follows:
∫_{ν1}^{ν2} A(ν) dν = ε · l · C,
where A(ν) = lg(I₀(ν)/I(ν)) is the absorbance, ν₁ and ν₂ are the limits of the corresponding absorption band (cm⁻¹), ε is the attenuation coefficient (L/(μmol·cm²)), l is the optical path length (cm), and C is the concentration of a substance in the gas phase (μmol/L). The regions for the integration were selected as follows: 1160–1263 cm⁻¹ for acetone and 2230–2450 cm⁻¹ for CO2. The attenuation coefficients for each substance were calculated from the calibration data. The regions for other compounds, which may be detected as intermediates during the PCO process, can be found elsewhere [6-9].
Photocatalytic activity
Before the photocatalytic test, the adsorption-desorption equilibrium of acetone on the photocatalyst was established, i.e., until no difference between the inlet and outlet acetone concentrations was observed. After that, the UV or visible light source was turned on and the photocatalytic activity was evaluated. A high-power UV-LED with a maximum at 371 nm and a Vis-LED with a maximum at 450 nm were used for the photocatalyst irradiation. The total irradiance was 9.7 mW/cm² for the UV-LED and 145 mW/cm² for the Vis-LED. The photocatalytic activity was estimated as the steady-state PCO rate of the acetone oxidation. The PCO rate can be expressed as follows:
PCO rate = ΔC_CO2 · U,
where PCO rate is the steady-state photocatalytic oxidation rate (μmol/min), ΔC_CO2 is the difference between the outlet and inlet CO2 concentrations (μmol/L), and U is the volume flow rate (L/min).
Typically, the CO2 concentration in the outlet increases with irradiation time until it reaches a constant value that corresponds to the achievement of a steady state. The time required to reach the steady state depended on the activity of the catalyst and its adsorption capacity. The CO2 concentration data from the region corresponding to the steady state were used for the calculation of the PCO rate. Based on the statistics of many experiments, the total error in measuring the PCO rate using the set-up does not exceed 10%.
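As an illustration of the quantification procedure described above, the following Python sketch integrates an absorbance band, converts it to a gas-phase concentration via the Beer-Lambert relation, and estimates the steady-state PCO rate from the CO2 concentration difference. This is our own sketch under stated assumptions: the attenuation coefficient, path length, and synthetic spectrum are placeholders, not the calibration values of the original work.

```python
import numpy as np

def band_concentration(wavenumbers, absorbance, nu_lo, nu_hi, eps, path_cm):
    """Integrate A(nu) over [nu_lo, nu_hi] (cm^-1) and apply the Beer-Lambert
    relation  integral A dnu = eps * l * C  to return C in umol/L.
    eps is the integrated attenuation coefficient, L/(umol*cm^2)."""
    mask = (wavenumbers >= nu_lo) & (wavenumbers <= nu_hi)
    integral = np.trapz(absorbance[mask], wavenumbers[mask])  # absorbance * cm^-1
    return integral / (eps * path_cm)

def pco_rate(c_co2_outlet, c_co2_inlet, flow_l_per_min):
    """Steady-state PCO rate (umol/min) from the CO2 concentration difference
    (umol/L) and the volume flow rate (L/min)."""
    return (c_co2_outlet - c_co2_inlet) * flow_l_per_min

# Example with placeholder numbers (eps and the spectrum are illustrative only):
nu = np.linspace(2200, 2500, 601)
A = 0.05 * np.exp(-((nu - 2349) / 20.0) ** 2)          # synthetic CO2 band
c_out = band_concentration(nu, A, 2230, 2450, eps=8.0e-4, path_cm=1000.0)
print(f"CO2 ~ {c_out:.2f} umol/L, rate ~ {pco_rate(c_out, 0.0, 0.069):.3f} umol/min")
```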
2,440.2
2019-04-28T00:00:00.000
[ "Chemistry" ]
Improved YOLOv5: Efficient Object Detection Using Drone Images under Various Conditions : With the recent development of drone technology, object detection technology is emerging, and these technologies can also be applied to detecting illegal immigrants, responding to industrial and natural disasters, and finding missing people and objects. In this paper, we explore ways to increase object detection performance in these situations. Photography was conducted in environments where it is difficult to detect objects. The experimental data were based on photographs taken under various environmental conditions, such as changes in the altitude of the drone and scenes with little or no light. All the data used in the experiment were taken with an F11 4K PRO drone and the VisDrone dataset. In this study, we propose an improvement of the original YOLOv5 model. We applied the obtained data to each model, the original YOLOv5 model and the improved YOLOv5_Ours model, to calculate the key indicators. The main indicators are precision, recall, F-1 score, and mAP (0.5); the mAP (0.5) and function loss of YOLOv5_Ours were improved compared with the original YOLOv5 model. Finally, the conclusion was drawn based on the data comparing the original YOLOv5 model and the improved YOLOv5_Ours model. As a result of the analysis, we were able to arrive at a conclusion on the best model for object detection under various conditions. Introduction Recently, drones have been developing rapidly, and they are likely to be combined with various fields in the future to create high value. In particular, low-budget drone photography technology can boost the local economy or help scientists research cultural heritage areas on the coast [1,2]. In this paper, we study the performance improvement of an object detection model using drone photography. There are also many cases of searching for objects using drones at accident or disaster sites. However, it is difficult to detect missing persons or objects in situations where visibility is not secured due to heavy rain and snow. On 10 December 2021, at least 40 tornadoes occurred across six states, including Kentucky, Arkansas, Illinois, Missouri, Tennessee, and Mississippi, and it was confirmed that at least 84 people were killed [3]. In such cases, the number of missing persons can be much higher than the number of deaths. In this situation of lifesaving, a detection technique using a drone [4] could be a solution. Drones and UAVs (unmanned aerial vehicles) have carried out many missions recently. For example, they have been studied in fields such as automatic license plate recognition [5]; detection of diseased plants [6]; traffic light detection for self-driving vehicles [7,8]; violent individual identification [9]; and ship detection in SAR images [10]. They can also be used for searching for missing objects in disaster situations, in operational missions in war situations, and where medical staff must quickly find injured people at an accident site [11][12][13]. However, detection using such drones is greatly affected by the surrounding conditions [14]. To solve this problem, object detection using drones has been researched and developed [15], but related research is still scarce. Additionally, it can be used in numerous situations beyond those mentioned above. In the future, object detection using drones will be further developed and will be necessary in various situations.
To solve these problems, this paper discusses how to detect objects well in environments where they are difficult to recognize. We were able to efficiently improve the performance of the model by modifying the Conv layer, the main layer of the original YOLOv5. In this work, we demonstrate the association of the activation function with mAP (0.5) and the loss function. In this paper, we can summarize our main contributions as follows: • Firstly, we improved the performance of a model that can detect objects under various environmental and weather conditions, such as Clear, Cloudy, Rainy, Snowy day, Evening, Night, Low altitude, and High altitude. • Secondly, the Precision and mAP (0.5) were increased by modifying the Conv layer, the main layer of the original YOLOv5 model. We replaced the SiLU activation function of the Conv layer with the ELU activation function. We applied the modified ConvELU layer to the original C3, SPPF, and Conv layers of the Backbone and Head parts, and we used CIoU in both models, the original YOLOv5 and YOLOv5_Ours, to find the association with the ELU activation function. As a result, we were able to reduce the convergence speed of the loss function during the training process. YOLOv5_Ours Network Currently, there are two types of detection methods based on deep learning: the 1-stage detector and the 2-stage detector. The first is the 2-stage detector, in which region proposal and classification are performed sequentially. Faster R-CNN [16] and Mask R-CNN [17] are detectors of this kind. In contrast, in the 1-stage detector, region proposal and classification are performed simultaneously. In other words, it is a method of solving the classification and localization problems at the same time. YOLO [18], TPH-YOLOv5 [19], SSD [20], SSD MobileNet [21], Focal Loss [22], and RefineDet [23] are representative 1-stage detectors. While it was popular in the past, Fast R-CNN is inefficient in learning and execution speed because the candidate area generation module is performed in a separate module, independently of the CNN [24]. YOLO is a famous object detection algorithm with several versions. It is easy to implement and can train on the entire image immediately. For this reason, YOLO has developed gradually [25]. In 2020, the fifth version of YOLO was released. Compared to Fast R-CNN, its speed and accuracy have increased. Since YOLO does not apply a separate network for extracting candidate regions, it shows better performance in terms of processing time than Fast R-CNN [26]. Because Fast R-CNN combines hand-crafted and deep convolutional features, there are limitations in detecting objects or humans [27]. The basic structure of the previous YOLOv5 [28] is largely divided into the backbone network part, the neck part, and the head part, as shown in Figure 1 [29]. The Backbone is a convolutional neural network formed by aggregating image features at various granularities. The Neck is a series of layers that mix and combine image features to deliver them prior to prediction, and the Head consumes features from the Neck (PANet) and takes the box and class prediction steps. The biggest feature of YOLOv5 is that it has the Focus and CSP (cross-stage partial connections) [30] layers. The Focus layer was created to reduce layers, parameters, FLOPS, and CUDA memory and to improve forward and backward speed while minimizing the impact on mAP. Three layers were used in YOLOv3 [31], but in the previous YOLOv5, this was changed to one layer [32].
The CSP layer extends the shallow information from the Focus layer to maximize functionality, while the feature extraction module is iterated to extract detailed information and features more thoroughly [33]. The basic principle of YOLOv5 is similar to YOLOv4 [34]. YOLOv5 is an improvement based on YOLOv4, and YOLOv5 has the best performance in precision, recall, and average precision compared to Faster R-CNN, YOLOv3, and YOLOv4 [35,36]. In addition, YOLOv5 consists of four versions of its own, which are YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x. These are classified according to the memory storage size, but the principle is the same. YOLOv5x has the largest storage size, and YOLOv5s has the smallest. We improved the model based on the most basic YOLOv5s in this experiment. There are two major differences between the previous and current YOLOv5. Firstly, the Focus layer was replaced with a 6 × 6 Conv2d layer [37]. It is equivalent to a simple 2d-convolutional layer without the need for the space-to-depth operation. For example, a Focus layer with kernel size 3 can be expressed as a Conv layer with kernel size 6 and stride 2. Secondly, the SPP layer was replaced by the SPPF layer. These operations increase the computational speed by more than double. This replacement is consequently efficient and faster in terms of speed.
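To make the Focus-to-Conv replacement above concrete, here is a minimal PyTorch-style sketch (our own illustration, not the authors' code or the official YOLOv5 implementation) contrasting a bare Focus block, which slices the input into four pixel sub-grids before a 3 × 3 convolution, with the single 6 × 6 convolution of stride 2 used in current YOLOv5; batch normalization and activation are omitted here for clarity.

```python
import torch
import torch.nn as nn

class Focus(nn.Module):
    """Space-to-depth slicing followed by a k=3 convolution (older YOLOv5 style)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(4 * c_in, c_out, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        # Stack the four pixel sub-grids along the channel axis, halving H and W.
        x = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                       x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.conv(x)

# Current YOLOv5 replaces Focus with one plain convolution of the same effect:
conv6 = nn.Conv2d(3, 64, kernel_size=6, stride=2, padding=2)

x = torch.randn(1, 3, 640, 640)
print(Focus(3, 64)(x).shape, conv6(x).shape)  # both give (1, 64, 320, 320)
```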
We noted the main layer of the current original YOLOv5 structure, the Conv layer, and we modified it. In the original Conv layer, SiLU (Sigmoid-Weighted Linear Units) was used as the activation function. Usually, a Conv layer uses ReLU (Rectified Linear Unit) as the activation function, because learning is fast and implementation is very simple due to the low amount of computation. However, the disadvantage of the ReLU activation function is that if it outputs a value less than zero, the gradient is likely to remain at zero, and the weight is likely to remain at zero forever until learning is completed. As a result, there is also a disadvantage in that learning is not conducted properly. The ELU activation function is a variant of the ReLU activation function. It reduces training time and improves the test set performance of neural networks. When x < 0, the function is connected smoothly, without a break, by using the exponential function. If a broken function such as the step function is used, the loss function can become uneven, resulting in local optima, as shown in Figure 2. The value of α is usually specified as 1. (If α is not 1, it is called SeLU.) In other words, the exponential linear unit includes all the advantages of ReLU and solves the dying ReLU problem. The output value is almost zero-centered, and the exp function is calculated differently from the general ReLU. The SiLU (Swish) activation function can also solve these problems, but it is only available in the hidden layers of deep neural networks and has the disadvantage that it can only be used in reinforcement learning-based systems. To solve this comprehensive problem, we used ELU (Exponential Linear Unit) as the activation function. The SiLU activation function, which was previously used in the Conv layer, was replaced by the ELU activation function, as shown in Figure 3. Both the SiLU and ELU activation functions can solve the dying ReLU problem, but the SiLU activation function has the problem of limited use, so we replaced it with the ELU activation function. We created a Conv layer with the ELU activation function applied, and we applied this ConvELU layer throughout the YOLOv5_Ours structure, as shown in Figures 3-5.
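As a rough sketch of the ConvELU block described above (Conv2d followed by BatchNorm2d and an ELU activation in place of SiLU), the following PyTorch-style code illustrates the idea; the channel counts and kernel size are arbitrary examples, not the exact configuration of YOLOv5_Ours.

```python
import torch
import torch.nn as nn

class ConvELU(nn.Module):
    """Conv2d -> BatchNorm2d -> ELU, i.e. the usual YOLOv5 Conv block with
    the SiLU activation swapped for ELU (alpha = 1 by default)."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        p = k // 2  # 'same'-style padding for odd kernels
        self.conv = nn.Conv2d(c_in, c_out, k, s, p, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.ELU(alpha=1.0, inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

x = torch.randn(1, 32, 160, 160)
y = ConvELU(32, 64, k=3, s=2)(x)
print(y.shape)  # spatial size halves with stride 2: (1, 64, 80, 80)
```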
The formula for calculating the output size of the Conv2d layer is Equation (7). In the equation, W is the size of the input data, F is the kernel size, P is the padding size, and S is the stride.
• Output size of Conv2d: Output = (W - F + 2P)/S + 1 (7)
The flowchart of the ConvELU layer is shown in Figure 6. The BatchNorm2d layer normalizes using the mean and variance, even if the data have various distributions for each batch unit in the training process. Figure 6 shows that the distribution of input values varies by batch unit or layer, but normalization makes the distribution Gaussian. This adjusts the distribution of the data to a mean of zero and a standard deviation of 1. Finally, the normalization and the modified activation function are applied, and the final structure incorporating all the measures is shown in Figure 7. Data Preparation and Processing Class selection and data collection are important to increase the accuracy of object search by training the model. The F11 4K PRO was used as the drone for filming. It has an adjustment distance of 10 m and a Wi-Fi image distance of 100 m. It is also suitable for object detection because it supports 4K camera image quality. According to the purpose of the study, the classes were designated as objects that are easy to confuse. Therefore, person, car, and notice were set as classes, and the distance from the object was divided into less than 10 m (Low altitude) and more than 10 m (High altitude). In addition, we took photos in various environments by changing the altitude of the drone, the surrounding background, and the weather. The shooting was conducted in the mountains and in a downtown area, in low light (Evening and Night). In addition, it was filmed while changing the altitude of the drone. This was done to create an environment in which objects are easily confused. Additionally, drone photographs were added from VisDrone (http://aiskyeye.com, accessed on 5 June 2021) [38] to collect more diverse data. VisDrone is a dataset used annually for object detection using drones and is very reliable [39]. This was done to increase the accuracy of the experiment through reliable data combinations.
From the VisDrone dataset, only data photographed above 10 m (High altitude) were added, to match the existing data and standard. Figure 8 shows the samples used in the experiment. The final dataset used in the experiment comprised 2080 training images, 960 validation images, and 320 testing images, for a total of 3360 images covering Clear, Cloudy, Rainy, Snowy day, Evening, Night, Low altitude, and High altitude conditions. Details are summarized in Table 1. The collected data were then labeled on the online platform makesense (http://www.makesense.ai/, accessed on 14 July 2019) [40]. As shown in Figure 9, labels were created for three objects, person, car, and notice, and annotated, and the annotated images were converted to txt format according to the YOLO format. Experimental Setup and Flowchart For the experiment, the basic environment was Google Colab. Colab is well organized with a GPU environment, so we used it. We also trained and compared with the same data acquired by drone shooting. The difference between the original YOLOv5 model and the YOLOv5_Ours model is as follows. The weight trained by the original YOLOv5 model is applied to the image dataset as the pre-training weight of the configured dataset [41]. That is, the original YOLOv5 model uses its own weight obtained by pre-training on the COCO (Common Objects in Context) dataset. However, in this study, both the original YOLOv5 model and the YOLOv5_Ours model conducted experiments based on the same data. This is to compare the performance of the models under the same conditions. We labeled three classes, person, car, and notice, to be annotated according to the purpose of the study.
This is because we considered these the objects most easily confused, based on the photos taken. All data taken by the drone were labeled with the three objects, person, car, and notice, in this way. Through training, the loss function is calculated, and the best weight is updated in the two models, the original YOLOv5 model and the YOLOv5_Ours model. After that, we proceed with the validation and testing process with the best weight obtained through training. Then, the test data are predicted with the obtained weight. To make an accurate comparison, the original YOLOv5 model and the YOLOv5_Ours model conduct the experiment completely separately. After the experiment, the following indicators were used to evaluate the performance of the models. In short, the research is conducted in the process shown in Figure 10. Experimental Key Indicators In this paper, the performance of the original YOLOv5 model and the YOLOv5_Ours model is evaluated based on Precision, Recall, F1-score, AP (average precision), and mAP (mean average precision). Precision refers to the percentage of all detection results that are correctly detected. Recall indicates how well a positive prediction is made when a positive input is given; simply put, it means how well the model detects. TP (True Positive) is the number of detections that correctly match an object. FP (False Positive) means that something is detected as an object of another class; in other words, it is a false detection. FN (False Negative) means an object that should have been detected but was not, and TN (True Negative) means that nothing that should not be detected was detected.
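A minimal sketch (our own illustration, not the authors' evaluation code) of how these counts translate into the indicators used below, assuming TP, FP, and FN have already been counted for a class:

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Precision = TP/(TP+FP), Recall = TP/(TP+FN),
    F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Example with made-up counts for one class:
p, r, f1 = precision_recall_f1(tp=97, fp=3, fn=18)
print(f"precision={p:.3f}, recall={r:.3f}, F1={f1:.3f}")
```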
• F1-score: It is calculated as the harmonic mean of precision and recall, not the arithmetic mean. The F1-score has a value between zero and 1; the higher the value, the higher the accuracy of detecting an object. mAP (mean average precision) is the average value of the AP (average precision), indicating how accurate the predicted result is. Experimental Loss Function IoU (Intersection over Union) [42] is produced by the interaction between the predicted box and the ground truth box. That is, it is a value between zero and 1 representing the overlap of the predicted bounding box and the ground truth in the field of object detection. The formulas are as follows. A is the predicted box, and B is the ground truth box. C is the smallest box including both A and B, and C\(A ∪ B) is the area obtained by subtracting the union of A and B from the area of C. The GIoU (Generalized IoU) is the value obtained by subtracting from the IoU the ratio of the area of C that does not overlap with either A or B. The larger the GIoU, the better the performance.
• IoU: IoU = |A ∩ B| / |A ∪ B|
• GIoU: GIoU = IoU - |C\(A ∪ B)| / |C|
• L_GIoU: L_GIoU = 1 - GIoU
When 1 - GIoU is used as the loss in object detection (the range of the loss value is zero to 2), the bounding box prediction of the GIoU loss over iterations proceeds by first expanding the predicted box area to overlap with the ground truth and then reducing it to increase the IoU. This can improve the gradient vanishing problem for non-overlapping boxes, but the convergence rate is slow and boxes may be predicted incorrectly. To solve this problem, we use CIoU (Complete-IoU) in this paper to compare the loss function of the original YOLOv5 model with that of YOLOv5_Ours. In other words, the experiment is conducted under the condition that CIoU is applied equally to the two models.
v = (4/π²)(arctan(w_gt/h_gt) - arctan(w/h))² (18)
As can be seen from Equation (18), w is the width and h is the height of the prediction box, and w_gt and h_gt are the width and height of the ground truth box. v measures the consistency of the aspect ratio of the two boxes, and α is a positive trade-off parameter to adjust the balance between the non-overlapping case and the overlapping case. In particular, in the non-overlapping case, the overlap area factor gives a higher priority to the regression loss. Results The original YOLOv5 model and the YOLOv5_Ours model were trained for 100 epochs with the 3360 images (training, validation, and testing images). As a result of training all models, the average time spent training was about 2 h per model. The model that took the most time was the original YOLOv5 model, which took 2 h and 10 min. The object detection comparison results of the two models (the original YOLOv5 model and the YOLOv5_Ours model) are shown in Table 2 and Figure 11. Additionally, this table shows the Precision, Recall, F-1 score, and mAP of the original YOLOv5 model and YOLOv5_Ours. We compared based on the best of the 100-epoch result values. In order to objectively evaluate the performance of the models, the values of mAP (mean average precision) were compared. The mAP value of the original YOLOv5 model is 94.6%, and that of YOLOv5_Ours is 95.5%. Overall, it may be seen that the YOLOv5_Ours model is higher than the original YOLOv5 model. As a result of the training and validation process, we found that the YOLOv5_Ours model was the best. Thus, the final prediction was made based on the weight obtained from the trained YOLOv5_Ours model, which was considered to have the best performance.
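To make the IoU-based quantities from the loss-function section above concrete, here is a small, self-contained Python sketch (an illustration under our own assumptions, not the training code) for axis-aligned boxes given as (x1, y1, x2, y2):

```python
def box_area(b):
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def iou_giou(a, b):
    """Return (IoU, GIoU, GIoU loss) for two (x1, y1, x2, y2) boxes."""
    # Intersection rectangle
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = box_area(a) + box_area(b) - inter
    iou = inter / union if union else 0.0
    # Smallest enclosing box C
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    giou = iou - (c_area - union) / c_area if c_area else iou
    return iou, giou, 1.0 - giou

print(iou_giou((0, 0, 10, 10), (5, 5, 15, 15)))  # partially overlapping boxes
```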
The left part of Figure 12 shows the graphs of the metric curves as training progresses. It demonstrates the detection accuracy of the YOLOv5_Ours model [43]. After evaluation, the YOLOv5_Ours model had a validation precision score of 90.7%, a recall score of 87.4%, an F1-score of 88.8%, and an mAP score of 95.5%. This result confirms the effectiveness of our approach in correctly predicting the experiments performed in several environments. The first three columns are the YOLOv5_Ours model loss components, box loss, objectness loss, and classification loss, with the training results in the first row and the validation results in the second row [44]. The box loss, objectness loss, and classification loss are indicators of how well an algorithm predicts an object [45]. These results mean that the three classes, person, car, and notice, which we use for detection, are accurately recognized during the training process. The Precision-Recall curve is a method of evaluating the performance of an object detector as the threshold value for the confidence level changes. The confidence level is a value that tells the user how confident the algorithm is about the detection. In other words, the closer the number is to 1, the more confident the model is in detecting the target object. The right part of Figure 12 is the Precision-Recall curve graph of the YOLOv5_Ours model. It can be seen that the value for person is 97.3%, which is quite high. The results are shown in Figure 13 by experimental condition: Clear, Cloudy, Rainy, Snowy day, Evening, Night, Low altitude, and High altitude. For clear day and evening, object detection showed high accuracy, above about 87.0%.
Rainy day is relatively low, at about 57.0%, but overall, object detection is excellent. The object detection results of the YOLOv5_Ours model can be seen in Table 3. Among the detected objects, the value for person was the highest. The person detection was calculated as 97.1% for Precision, 84.3% for Recall, 90.2% for F1-Score, and finally 97.3% for mAP. This means that the person detection rate is quite high. The function loss difference between the two models results in a large gap at the beginning of training. Therefore, the experiment was conducted by setting the epoch count to 100. It can be seen that the function loss of YOLOv5 drops rapidly at the beginning of training. On the other hand, YOLOv5_Ours decreased the function loss slowly. The gap appears to narrow until the epoch reaches 60. After that, the function loss of the two models, the original YOLOv5 and YOLOv5_Ours, differs only a little. Figure 14 shows a graph comparing the function loss values of the two models. That is, YOLOv5_Ours is an efficient model with a low convergence speed. Comparison with Previous YOLO Models For accurate verification of the study, it is necessary to compare performance with previous YOLO models. Therefore, we decided to experiment by applying the dataset to the YOLOv3 and YOLOv4 models. The value of mAP was compared with the previous models, YOLOv3 and YOLOv4, and all the experiments were conducted independently. The comparison of the resulting values is summarized in Table 4 and Figure 15. As a result of comparing the final values, it was found that the performance of YOLOv5_Ours was the best.
Conclusions In this paper, we studied a model for detecting objects under conditions in which object detection is difficult. To create this environment, images were acquired using a drone in situations where it was confusing to detect objects, such as various altitudes, weather conditions, and backgrounds. In addition, we aimed to detect objects in these environments and to increase detection performance. The experimental method is based on the YOLOv5 structure. We compared the results of the original YOLOv5 model and the improved YOLOv5_Ours model, and through training, the YOLOv5_Ours model with the best performance was selected. Then, the best weight obtained through validation was applied to the YOLOv5_Ours model and tested. As a result, we found that the mAP increased by 0.9% in the improved YOLOv5_Ours model compared with the original YOLOv5 model. Finally, for a more accurate comparison, the key indicators were calculated with the previous versions of YOLO, namely YOLOv3 and YOLOv4. The differences in mAP from YOLOv3 and YOLOv4 were 1.6% and 4.5%, respectively, which were greater than for the original YOLOv5 model. In addition, it was confirmed that the convergence speed of the loss function of the YOLOv5_Ours model was reduced compared to the original YOLOv5 model at the beginning of training. Object detection using drones is greatly influenced by the surrounding environment. We conducted research to improve the performance of the model under bad conditions, and we were able to obtain improved results. It may be applied to object recognition studies using drones that have been previously conducted [46,47]. In the future, the results of this study will help in using drones to detect objects under various conditions. Data Availability Statement: The data used in this paper were directly produced and processed.
8,654.8
2022-07-19T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Processing of Desalination Reject Brine for Optimization of Process Efficiency, Cost Effectiveness and Environmental Safety The currently acceptable norm of recovery of desalted water in projects of brackish water reverse osmosis (BWRO) usually ranges between 65 and 85% according to the raw water quality, the level of chemical pretreatment, and the concept of plant design/operation, i.e., whether it is intended to be a sophisticated facility of low operation cost or vice versa. The balance of 15% or more, the desalination reject stream in which the RO-rejected components are concentrated, is disposed of as a wastewater (WW). Among the disposal options selected to get rid of the desalination reject stream are: 1) the sewer stream, 2) land application including percolation, 3) deep well injection, and 4) evaporation ponds. The last option is the most common in the Middle East. Introduction Reverse osmosis (RO) is currently confirmed and generally approved as the most feasible technology for desalination of brackish groundwater, being the most economic for its range of salinity over a wide range of production capacities, in view of its low energy requirements and its ease of application. The currently acceptable norm of recovery of desalted water in projects of brackish water reverse osmosis (BWRO) usually ranges between 65 and 85% according to the raw water quality, the level of chemical pretreatment, and the concept of plant design/operation, i.e., whether it is intended to be a sophisticated facility of low operation cost or vice versa. The balance of 15% or more, the desalination reject stream in which the RO-rejected components are concentrated, is disposed of as a wastewater (WW). Among the disposal options selected to get rid of the desalination reject stream are: 1) the sewer stream, 2) land application including percolation, 3) deep well injection, and 4) evaporation ponds. The last option is the most common in the Middle East in view of: • The rather common high temperature • The low ambient humidity • The relatively low cost of land in desert areas Disposal of RO reject water aims, in most of the alternatives, to just get rid of that stream without further water recovery, which wastes the cost of initial pumping and chemical treatment. It is, therefore, evident that the increase of desalted water recovery is a main factor in determining the process cost effectiveness. On the other hand, a too high recovery would lead to most, if not all, of the membrane fouling problems and the subsequent decline of performance and eventually membrane damage [1]. The present work investigates the promotion of the RO desalination efficiency and cost effectiveness. The desalination reject stream (DRS) represents, in fact, a WW disposal problem. It includes, in addition to increased salinity, higher concentrations of polyvalent ionic species [2] due to the preferential high rejection of, e.g., hardness components, heavy metal cations (HMCs) [3], radioactive isotopes [4], and organics [5]. DRS also includes the residual pretreatment chemicals of the primary desalination step, i.e., coagulants such as iron or aluminum salts or polyelectrolytes, disinfection by-products, and antiscalants [6]. In big RO desalination facilities, however, the surface area of evaporation ponds may attain several millions of square meters and represents, therefore, one of the main cost factors of the desalination projects [7] due to the cost of land and of the installation of ponds: digging, lining, and construction of dykes [8].
Besides the considerable cost of installation of evaporation ponds and their annual maintenance, they may cause a considerable environmental threat through: 1. Possible leakage of concentrated brine, and possibly contaminated water, polluting the groundwater reserves. 2. Flooding of ponds, which has been reported for many existing desalination plants in view of inadequate initial design or operation problems; flooding of contaminated reject would contaminate the neighboring habitat. In view of the increasingly stringent environmental regulations related to the disposal of WWs and the high cost of evaporation ponds, the present laboratory and pilot investigation aims to promote RO desalted water recovery and to reduce the disposed brine stream to a minimum value so as to realize: c. Optimized hydrophilic/hydrophobic characters and fouling resistance 3. The recent introduction of sensitive energy recovery systems capable of recovering the residual pressure from the BWRO reject stream. Our previous results of desalination reject processing through laboratory and pilot investigation [7] showed remarkable optimization of the recovery of desalted water, which increased the total RO process recovery up to 95%. A comparative evaluation of the performance and cost of several alternatives of brine processing was conducted, e.g., application of high rejection, low energy, secondary RO together with the use of specific antiscalants, or partial softening NF of the reject stream prior to secondary RO [7]. A primary cost analysis showed that the studied reject processing is quite cost effective even without consideration of the reduced surface area of evaporation ponds and, consequently, their cost. Superior rejection of polyvalent cations from the reject stream was observed with NF as compared to hot lime softening (HLS), together with the absence of chemical dosing stoichiometric to the deposition of hardness components and, consequently, the absence of sludge formation, which itself represents a daily disposal problem. NF, contrary to HLS, does not require subsequent filtration. NF also leads to partial desalination of the brine stream, while HLS results in an increase of the concentration of some components, like sodium and carbonate ions, and does not modify other components not included in the softening reactions, such as SO₄²⁻ and Cl⁻ ions, and, therefore, results in an increase of total dissolved solids (TDS). As for the reject streams where radioactive isotopes and/or HMCs were concentrated upon primary BWRO, treatment by NF and low energy RO proved, under adequate application conditions, more efficient than conventional methods of WW treatment [4]. In fact, several technical challenges remain with regard to the efficiency and cost of conventional methods for the rejection of these contaminants. NF and low energy reverse osmosis (LERO) were evaluated in this respect in comparison with methods of chemical precipitation, chelating ion exchange resins (IERs), hot lime softening, and coagulation/settling/precipitation. Membrane methods gave higher rejection of radionuclides and HMCs; levels ranging from zero to 20 pCi/L could match the maximum contaminant level (MCL) of the US Environmental Protection Agency (US-EPA) for drinking water. NF and LERO, contrary to the other methods, are continuous processes which are not shut down for regeneration, do not suffer from interference of similar-valence ions with contaminant separation, and are not limited by a high pH dependence.
Investigation of desalination reject processing (DRP) is of economic and strategic interest in view of the huge daily production rate of such streams. In the Riyadh region alone, according to data from the National Water Company [9], if the main desalination facilities, Wasiea, Buwaib, Salboukh, Manfouha, and Malaz, are operated at the original design rate, the yearly rate of the reject stream amounts to 30 million m³/year, which is expected to increase to more than 45 million m³/year upon installation of the new Wasia project. It is, therefore, expected that the total BWRO reject in KSA, in view of the planned giant projects in Ha'il, Tabuk, etc., would amount to more than one hundred million m³/year. Literature survey Processing of the brine concentrate of water desalination has been conducted for various purposes. For salt extraction, Sommariva et al. [10] and Smith and Humphreys [11] considered the processing of the desalination reject up to zero discharge using concentrate disposal processes, among which solar/evaporation ponds until crystallisation. They stated that evaporation ponds are preferred in the presence of strong solar radiation, low precipitation, and low-cost desert land. The produced salts were proposed for use in agriculture, forestry, fauna and algae production, and energy production. Ahmed et al. [12] investigated salt production from reject brine by the SAL-PROC technology, which consists in multiple evaporation and/or cooling steps. For the purpose of environmental protection, Shahalam [13] evaluated the removal of nitrogen and phosphorus from the RO reject of the refining effluent of biological processes treating municipal WW. While RO is proven to be effective in producing high-quality effluent water for non-potable uses, e.g., for irrigation purposes, its reject contains too high amounts of P and N compounds, harmful for the environment, if the feed to the RO units is the effluent stream from municipal and industrial WW treatment plants. Brine treatment included activated sludge treatment and then granular medium filtration. Heijman et al. [14] considered the pretreatment of RO and NF reject so as to attain a recovery as high as 99%, aiming to overcome the problem of reject disposal. A complicated and expensive sequence of steps is proposed and pilot tested, consisting of precipitation of hardness components at high pH, sedimentation, cation exchange resin, and then NF. As for desalination by NF or RO of surface water, a more complex processing was tested, i.e., cation exchange resin, then ultrafiltration (UF), NF, followed by treatment by granular activated carbon (GAC). A recovery of 97% was achieved. For a still higher recovery of up to 99%, SiO₂ removal was conducted by co-precipitation with Mg hydroxide at high pH. The total treatment scheme included a double barrier against pathogens (UF and NF) and against micropollutants (NF and GAC). Furthermore, the resulting suspended particle concentration is low and the biological stability is expected to be excellent. According to Jeppesen et al. [15], disposal of highly concentrated brines poses significant environmental risks. Extraction of some metals from this stream can yield multiple environmental and economic benefits. Removal of P has little economic benefit but may become interesting in view of environmental restrictions. This study showed that recovery of NaCl from brine can significantly lower the cost of potable water production if employed in conjunction with thermal processing systems. The high ammonia, sulphate, TDS and HMC contents render the RO brine hazardous if dumped untreated [16].
Denitrification of RO brine concentrate was conducted by anaerobic fluidized bed biofilm reactors with GAC media. The main purpose of most of the research work related to reject processing was to promote the desalted water recovery by various techniques. Queen et al. [17] treated the RO brine by NF for removal of polyvalent cations; the brine then goes to the concentrate compartment of an electrodeionization (EDI) unit, while the initial RO permeate goes to the diluate compartment. The overall consumption of feed water was, therefore, reduced. However, on-site reject treatment by EDI was reported to be effective only for small RO treatment units [18], while for large reject stream rates the cost can be very high. A modified evaporation system was used that consists of forced-air thermal evaporation using turbine technology, so as to create a high wind speed and generate a very high air temperature. This system is approved by the US-EPA. It can operate in high-humidity, low-temperature conditions, and can evaporate up to 126 GPM at the cost of just discharging to the sanitary sewer. Evaporation of RO reject was also investigated in an underground rock salt mining operation [19]. In the case of inland communities which have no ready sink for RO brine, the disposal cost will significantly increase the cost of RO treatment, especially with the recovery limited to avoid scale deposition. Coral et al. [20] studied the minimization of RO reject through the vibratory shear enhanced process (VSEP) without softening. They stated that strategies to minimize brine volume include 1) pre-RO softening to remove hardness components and achieve higher recovery, 2) two-stage RO interrupted by brine softening, and 3) innovative technologies for extraction of water from RO brine without softening, e.g., VSEP. The same technology was used by Arnold [21] for optimization of water recovery from RO brine issued by the Central Arizona Project and by Cates et al. [22]; in both cases, however, no cost analysis was conducted and no justification was given for the selection of such an expensive technique. Electrodialysis (ED) [23] was also applied for treatment of brine resulting from RO treatment of textile effluent for the purpose of reduction of TDS with the recovery of acids and bases. The WW of textile dyeing was first treated by coagulation/precipitation for color removal, followed by RO. The RO reject was then treated by ED. This treatment was qualitatively reported to enable the protection of the environment from contamination by dyes and the related additives and to promote the reject water recovery. Capacitive deionization (CDI) was used for SWRO reject treatment [24] instead of blending the brine with secondary effluent and discharging it to the sea. The objective of the project was to increase water recovery to more than 95% at the required water quality. Correspondingly, the volume of the brine will be reduced to less than 5%. For inland RO facilities where disposal of untreated RO brine has adverse environmental impacts, this approach would represent a cost-effective alternative for the management of the brine stream. Results of pilot testing have met expectations. Lee et al. investigated the treatment of RO brine towards sustainable water reclamation practice [25]. RO brine generated from water reclamation contains high concentrations of organic and inorganic compounds. These authors concluded that cost-effective technologies for treatment of RO brine are still relatively unexplored.
The proposed treatment consists of a biological activated carbon (BAC) column followed by CDI for organic and inorganic removal. 20% of the TOC was removed by BAC, while a 90% conductivity reduction was realized by CDI. Ozonation was used to improve the biodegradability of RO brine. The laboratory-scale O₃ + BAC was able to achieve three times higher TOC removal compared to using BAC alone. Further processing with CDI was able to generate product water with better quality than the RO feed water. The O₃ + BAC also better reduced the fouling in the successive CDI step [26]. The Duraflow Company [27] employed a three-step approach to define a pretreatment process compatible with the recovery of RO brine using a secondary RO. The objective was to remove all components detrimental to the secondary RO and realize suitable values of the Silt Density Index (SDI). The three-step approach includes:
I. RO brine analysis to determine the components
II. Chemistry development: based on the type and concentration of the fouling substances identified in the RO brine, a chemical treatment process is developed to counteract each of the fouling factors
III. Microfiltration to the adequate SDI, then secondary RO
Kepke et al. [28] considered the options of RO brine concentrate treatment, among them landfill. They defined high efficiency RO as a combination of hardness removal pretreatment, which includes lime soda softening followed by filtration and weak cation exchange resin. This type of RO treatment is relatively new. It has not been used for water reuse applications but has been applied in power stations and the mining industry. The advantages of this process over conventional RO include a reduction in scaling and elimination/reduction of biological and organic fouling due to the high pH, at which SiO₂ solubility is high. The expected recovery would attain 95%. IERs were also applied in desalination brine reclaim. This not only optimized system efficiency through additional permeate recovery but also reduced the amount of water and salt required for softener resin regeneration. Some of the salt in the last part of the brine cycle is used for the next regeneration of the exhausted resin. According to the survey report "Managing Water In The West" [29] by the Southern California Regional Brine-Concentrate Management, the concentrate disposal technologies include 1) the volume reducing, 2) the zero liquid discharge, and 3) the final disposal technologies.
The available volume reducing technologies include:
• Electrodialysis/Electrodialysis reversal
• Vibratory Shear-Enhanced Processing
• Precipitative Softening and Reverse Osmosis
Technologies which may be useful in this application but are still under development include:
• Two-pass Nanofiltration
The zero liquid discharge technologies, on the other hand, include:
• Thermal processes
Wiseman [29] underlined the criteria for evaluation of the desalination reject processing technology and the related pilot testing as follows:
1. Does the technology/pilot have regional applicability? Is the pilot implementable from regulatory, environmental, and funding perspectives?
5. Does the project improve water quality or provide environmental benefits?
6. Can the technology be implemented for a full-scale project?
7. Are there barriers to full-scale project implementation (regulatory, environmental, or funding)?
Objectives, Aim and Scope of the Present Work The main objective of the present research program is the optimization of RO process efficiency and the reduction of the consumption of the limited groundwater reserves, through upgrading the recovery of desalted water by adequate application of the most developed desalination membranes and chemicals. This chapter focuses on assessing the feasibility of increasing the total RO recovery from the brine concentrate stream generated by RO, either by secondary RO treatment of the reject stream or by back-recycling of the reject to the initial RO feed, without significant sacrifice of permeate quality or excessive increase of the product unit cost. An increase of total RO recovery means a parallel decrease of the surface area of the evaporation ponds required for disposal of the final reject stream, which will enable a considerable saving in the cost of plant installation. In the case of highly concentrated reject streams, the work includes an evaluation of the pretreatment processes required for attaining the highest possible recovery, e.g. removal or reduction of scale-forming and gel-forming ionic species and other membrane foulant components. Besides promoting process cost effectiveness, this investigation is also directed at promoting environmental safety in relation to final reject disposal, particularly in evaporation ponds, the commonly used approach for disposal of the BWRO reject stream in Saudi Arabia. Reduction of the reject rate is expected to reduce the possibility of leakage from these ponds and the pollution of groundwater reserves, and also to control the frequent flooding of evaporation ponds that pollutes the neighbouring habitat. Experimental Both laboratory and pilot NF testing were conducted. The laboratory experimental system consists of six test cells with circular turbulent agitation at the level of the surface of the membrane coupons, installed in a test circuit consisting of a low-pressure pump, pressure gauge, cartridge filter, flowmeter and thermostated feed tank. Membrane samples were stored dry and thoroughly rinsed with deionized water before use. They were compacted in distilled water at 120 psi, prior to testing, until a steady flux was obtained, then conditioned by soaking in the testing solution for one hour. The testing feed pressures ranged from 80 to 100 psi, the tangential cross-flow velocity from 0.005 to 1 m/s, and the feed flux from 120 to 720 L/m²·d. The pilot testing unit: Fig (1) shows a schematic representation of the mobile pilot unit, designed so as to enable NF and RO runs over a wide range of operating conditions, feed pressures, flow rates and pretreatment steps, and to allow reject recycling. The percent recovery was 85% except where otherwise stated. Both permeate and reject streams were recirculated back to the feed tank in order to keep the feed water composition and concentration steady. Ionic concentrations were determined by ICP-AES (Perkin-Elmer, Boston, USA). The feed water temperature was thermostated at 25 °C and the pH was adjusted to the range 7.5 to 8. Pilot site testing should enable:
• Direct connection to the reject header or collection tank of existing desalination facilities for continuous treatment.
• Conduction of RO/NF pilot testing using:
• Conduction of desalination runs with different pretreatments for determination of the optimum recovery, i.e. the highest possible recovery attained under safe and steady operation.
• Optimization of operating conditions towards lower production cost and lower power and chemicals consumption.
• Investigation of reject treatment in different production sites in order to determine the effect of reject characteristics and the validity of the selected technologies.
• Chemical precipitation of radionuclides according to:
• Chemical precipitation of hardness components of the reject stream by coagulation/settling.
• Conduction of NF runs for comparison of the radionuclide and hardness rejection by NF with that obtained by chemical precipitation.
• Recycling of the reject stream to the feed stream of the primary RO process.
General BWRO Reject Characteristics
• In addition to concentrated TDS, the RO reject stream usually becomes concentrated in hardness components and other polyvalent ionic species which are efficiently rejected in the initial RO step, such as HMCs and radioactive isotopes.
• This stream is already sterilized and has already passed coarse and cartridge filtration.
• The unreacted pretreatment additives, such as antiscalants, already concentrated in the reject stream will lower the dosing required for scaling inhibition.
• pH and temperature values lie in a reasonable range for RO operation.
• Treatment of this stream, either totally or partially, would solve the problem of deficiency of evaporation ponds.
• Care should be taken with components which are harmful to the RO process or membranes, such as Al, Fe and Mn, which become concentrated in the RO reject.
A typical reject stream analysis investigated in the present work is given by
According to Fig (2), if we consider the rate of the feed stream to the initial RO treatment, e.g. raw well water, as 100%, treated in the primary RO at a percent recovery of e.g. 85%, the reject stream of 15% of the original feed will go for further processing in a secondary RO unit at a lower percent recovery of e.g. 70%; the secondary permeate will then be 10.5% and the final reject will be reduced to 4.5%, referred to the original feed. Upon blending the primary and secondary RO permeate streams, the total RO recovery is upgraded to a much higher value (95.5% in the described case), and the final reject rate is reduced to less than a third of the previous reject rate, with a corresponding reduction of the required surface area of the evaporation pond. The blending ratio is about 8:1. The question is how much higher the cost per m³ of reject processing is, and what its effect on the total process cost per m³ is, in view of the problems related to the treated reject, i.e.: 1. higher TDS; 2. the required higher feed pressure; 3. the possible higher cost of additives such as a specific antiscalant. In order to answer those questions, the various alternatives of reject treatment were investigated in detail. The higher the RO reject water TDS, the higher the required RO feed pressure, particularly with SWRO. SWRO is shown to be the optimum selection in the case of high-salinity reject waters: it enables the highest recovery and lowest permeate TDS but requires the highest operating pressure. SWRO is also useful in processing reject water with high concentrations of undesired species such as NO₃⁻ or HMCs. Blending of the secondary RO permeate with the primary one is shown to realize the increase of total RO recovery with only a slight increase in final TDS, in view of the low blending ratio. On the other hand, such reject processing in a secondary RO enabled the final reject rate to be remarkably reduced, with a consequent reduction of disposal cost.
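To make the recovery bookkeeping in the example above concrete, the short Python sketch below reproduces the quoted figures; the 85% primary and 70% secondary recoveries are the example values from the text, and all flows are expressed as fractions of the original raw feed.

```python
# Recovery bookkeeping for primary RO followed by secondary RO on the reject,
# with final blending of the two permeate streams (values from the text).

def two_stage_recovery(primary_recovery=0.85, secondary_recovery=0.70):
    primary_permeate = primary_recovery                       # 85% of raw feed
    primary_reject = 1.0 - primary_recovery                   # 15% of raw feed
    secondary_permeate = primary_reject * secondary_recovery  # 10.5% of raw feed
    final_reject = primary_reject - secondary_permeate        # 4.5% of raw feed
    total_recovery = primary_permeate + secondary_permeate    # 95.5% of raw feed
    blend_ratio = primary_permeate / secondary_permeate       # ~8:1 blending ratio
    return total_recovery, final_reject, blend_ratio

total, reject, ratio = two_stage_recovery()
print(f"total recovery = {total:.1%}, final reject = {reject:.1%}, blend ratio = {ratio:.1f}:1")
# total recovery = 95.5%, final reject = 4.5%, blend ratio = 8.1:1
```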
Processing of Desalination Reject by Secondary RO: The extent of RO reject processing and the reduction of the final brine rate are determined by the initial reject TDS and by the higher applied pressure, and consequently the higher recovery, realized upon use of sea water RO membrane elements. Comparison of the Performance of RO and NF in Processing of the Desalination Reject Stream For this comparative investigation, the pilot testing unified the main test conditions so that the different results reflect essentially the process behavior. A reject stream of 32,711 mg/l was treated by RO and NF systems having the same array, adjusted to produce 1,000 m³/d (operated, of course, at different feed pressures) at the maximum attainable steady recovery. Final blending of the primary permeate (that of the initial desalination unit) with the secondary permeate (that of the reject processing unit) was conducted to determine the total system recovery and the final product water quality. The comparison also included the extent of reduction of the final reject rate. Table (3) shows that NF is operated at a much higher recovery and a much lower pressure than RO, so that it could be driven by the residual pressure of the reject stream. It is suitable, in fact, for intermediate reject treatment prior to a secondary RO desalination step or to recycling into the feed of the primary RO unit. While NF has only a moderate TDS rejection, it efficiently rejects divalent and polyvalent species, organics and colloids [8]. A high-hardness reject stream, after NF, will therefore allow a subsequent RO treatment at a much higher percent recovery and a lower operating pressure. On the other hand, NF reject treatment upon blending would help to raise the primary RO permeate to a required TDS, e.g. to the drinking water level. The higher recoveries investigated with NF did not lead to higher TDS rejection. Case Study of a 10,000 m³/d BWRO Plant In this plant the raw feed water has a radioactive contamination of 207.2 ± 5.4 pCi/l of combined radium 226+228. It was requested to increase the product rate to the maximum possible value by blending with a conditioned feed stream, while lowering the radioactivity to < 5 pCi/l, the MCL for drinking water of the US Environmental Protection Agency (EPA), and keeping a final TDS higher than 300 ppm as a regional norm for drinking water TDS. The present plant design failed to realize the required performance. In addition, the same site suffered flooding of the evaporation ponds, which was reported to be due to an over-estimated evaporation rate. In fact, according to our previous results [6], the raw well water of this plant, with a TDS of 720.5 mg/l, would be ideal for treatment by NF to produce the requested salinity, since NF is characterized by only a modest rejection of TDS but a rather high rejection of polyvalent ionic species such as HMCs and radioactive isotopes [8]. However, in view of the important radioactive contamination, the concerned Water Authority selected RO, of much higher rejection than NF, to be conducted after a partial radionuclide separation by adsorption on the surface of hydrous manganese oxide (HMnO) according to:
Results showed efficient rejection of both radionuclides and TDS to the level of drinking water; however, the product TDS was quite lower than 300 ppm. In order to realize the required final product TDS, increase the final product rate, and simultaneously decrease the reject rate sent to the insufficient evaporation ponds, partial treatment of the (already pressurized) reject stream by NF was investigated.
Results of pilot testing of the reject treatment confirmed the realization of a higher product rate at TDS > 300 mg/l and a Ra activity lower than the MCL. Recycling of the treated reject stream to the initial RO feed stream For the case of already existing desalination facilities and the unavailability of space for an additional reject processing unit, partial recycling of the reject stream to the main feed stream, aiming to upgrade the total recovery rate and to reduce the final reject rate, is evaluated. The recycling circuit [9], Fig (3), consists of a low-pressure pump, a control valve, and a flowmeter; it returns the required fraction of the reject stream ahead of the high-pressure pump of the initial feed. The pilot plant was operated at various recycling rates. Upon recycling the reject, the total system working recovery remains at the previous value (85%) but from a higher feed TDS. A state of equilibrium is rapidly attained with a higher permeate TDS. However, in order to make the calculated percent recovery expressive of the saving in feed water from the wells and of the decrease in the final reject stream, i.e. representative of the promoted process efficiency, we adopted [9] the convention of referring the permeate rate to the lowered raw water feed rate in the calculation of recovery. Table (5) describes a pilot test of BWRO dealing with a groundwater of 2,520.0 mg/l, which results in a permeate of 82.5 mg/l and a reject of 16,790.0 mg/l at 85% recovery. The first three columns give the analysis of each of these streams. Column no. 4 shows the analysis of the increased RO feed upon recycling of 33.3% of the reject stream to the initial RO feed; the corresponding permeate analysis is given in column no. 5; columns no. 6 and 7 give the corresponding results for the recycling of 66.6% of the reject stream to the RO feed. Results revealed that partial recycling of the reject stream introduced only a moderate increase of the individual ion concentrations in the RO feed stream (despite the high reject TDS), in view of the dilution of the recycled fraction of reject upon mixing with the whole feed stream. In already existing BWRO facilities, therefore, partial reject recycling is shown to raise the percent recovery, lower the consumption of raw feed water, and remarkably lower the final reject rate and consequently the required land area and cost of installation of the evaporation ponds, without significant sacrifice of product water quality. Fig (4) shows the variation of the concentration of the RO feed component species with percent reject recycling. These values correspond to an increase of the feed TDS from 2,506.8 mg/l to 3,209.9 mg/l by recycling 33.3% of the reject stream, and then to 4,062.3 mg/l by increasing the recycling to 66.6%. Fig (5), on the other hand, shows the corresponding variation of the concentration of the water species in the permeate stream upon recycling of the reject at the mentioned rates. According to these results, the increase of permeate TDS parallel to the increase of feed TDS upon recycling of the reject to the original feed stream is limited and did not compromise the drinking water quality: the recycling of 66.6% of the reject raised the permeate TDS only from 44.3 to 76.8 ppm. As for antiscalant dosing during RO reject processing, in principle the antiscalant which is concentrated in the reject stream is useful for the subsequent reject processing.
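As a quick numerical check of the feed-TDS values quoted for Fig (4), the sketch below gives a rough one-pass mixing estimate of the feed TDS after partial reject recycling. It assumes the recycled fraction simply blends with the raw well water ahead of the high-pressure pump, and it neglects the further rise of the reject TDS once the recycle loop reaches steady state, so it slightly underestimates the reported value at the higher recycle rate.

```python
# One-pass blending estimate of the RO feed TDS after partial reject recycling.
# Basis: 100 units of combined RO feed; reject = (1 - recovery) of that feed.

def feed_tds_with_recycle(raw_tds, reject_tds, recovery, recycle_fraction):
    reject_flow = (1.0 - recovery) * 100.0
    recycled_flow = recycle_fraction * reject_flow
    raw_flow = 100.0 - recycled_flow
    return (raw_flow * raw_tds + recycled_flow * reject_tds) / 100.0

print(feed_tds_with_recycle(2506.8, 16790.0, 0.85, 0.333))  # ~3220 mg/l vs 3209.9 reported
print(feed_tds_with_recycle(2506.8, 16790.0, 0.85, 0.666))  # ~3934 mg/l vs 4062.3 reported
```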
However, with the higher concentration of certain scale-forming components like SiO₂ in the reject, a different type of antiscalant became required to cover the saturation during the reject processing. As an example, the general-validity antiscalant (Genesys LF) was used in the primary BWRO step on the raw well water of 2,506.8 mg/l (1,000 m³/d), operated at 85% recovery, at a dose of 3.03 mg/l. For the reject processing, on the other hand (150 m³/d at a TDS of 16,230.3 mg/l and with higher concentrations of different components, particularly SiO₂), a SiO₂-specific antiscalant was required at a rate of 11.42 mg/l. The difference in price between the different dosed antiscalants did not add much to the overall cost/m³ (< 1% increase). Desalination Reject Processing by Chemical Softening Prior to Recycling or Secondary RO Treatment After RO of high-salinity groundwaters, processing of the reject stream by chemical softening or NF aims to remove or reduce the scale-forming components accumulated during RO, so as to enable the promotion of the total process recovery through a subsequent secondary RO step or partial recycling of the treated reject to the initial RO feed. Reject water rather high in Ca, Mg and SiO₂ can be softened by the addition of hydrated lime, Ca(OH)₂, and sodium carbonate, which precipitates CaCO₃ out of the water; after all of the HCO₃⁻ is consumed, the remaining OH⁻ combines with Mg²⁺ to deposit Mg(OH)₂, on whose surface SiO₂ is removed as an adsorption complex [10]. Results have shown that for high-SiO₂ reject streams additional Mg may have to be added in order to attain the required SiO₂ removal. Partial Cold Lime Softening (CLS) Fig (6) shows typical results of partial CLS, which consists in dosing only hydrated lime to the RO reject water. For each species, the first column to the left represents the concentration in the reject water and the second shows the effect of dosing Ca(OH)₂; concentrations are represented as ppm CaCO₃. When the reject pH was raised from 8.3 to 10.0 by lime dosing, the precipitation which took place resulted in a reduction of the Ca²⁺ content by 56.5% and of the M alkalinity by 70%, the remainder being present as CO₃²⁻, with complete consumption of HCO₃⁻. On the other hand, the P alkalinity increased by 140 ppm, while the other reject water components, including Mg and SiO₂, remained unchanged. TDS was lowered by 26.35%, depending on the extent of the conducted lime dosing. According to these results, the advantages of partial CLS are: 1. ease of operation with only one dosing and coagulation step; 2. lower cost of chemical dosing than lime-soda ash CLS; 3. the parallel lowering of TDS by precipitation lowers the desalination load on the subsequent reject processing; 4. it is particularly interesting in the case of reject streams where Mg, and consequently SiO₂, removal do not represent a problem for processing. Softening Fig (7), on the other hand, shows the effect of the addition of Na₂CO₃ after the initial partial CLS. For each species, the first column to the left represents the concentration in the RO reject water and the second shows the effect of dosing Na₂CO₃ at a concentration of 45% of the lime concentration previously added during the partial CLS. This lowered the Ca concentration by 75.3%. Mg was practically not removed at this level of alkalinity, in view of the absence of additional free OH⁻ for deposition of Mg(OH)₂.
The third bar of Fig (7) corresponds to dosing an excess of Na₂CO₃ to attain double the lime concentration of the initial partial CLS, in order to raise the alkalinity to a much higher level. Such an increase of alkalinity did not lead to any further deposition of Ca: in fact, our CLS results showed a minimum Ca concentration (22 ppm) at which higher lime-soda ash dosing had no effect. Fig (7) shows in parallel a considerable increase of Na, a lower increase of CO₃, and a decrease of Mg by 67% in view of the additional free OH⁻. SiO₂ is lowered by 22% by adsorption on the deposited Mg(OH)₂. Complete CLS resulted in a decrease of the reject water TDS by 7.6% with respect to the original reject water TDS. It is worth noticing that concentrations of the co-ions CO₃²⁻ and OH⁻ stoichiometrically equivalent to those of Ca²⁺ and Mg²⁺, or higher, are required for precipitation of CaCO₃ and Mg(OH)₂. As precipitation advances, however, the alkalinity as well as the supersaturation are reduced. In order to achieve a steady rate of precipitation within a residence time typically between 60 and 90 minutes, we had to keep a supersaturation factor (SSF) of at least three. In parallel with the chemical softening, and in the same reactor, components like HMCs which may be concentrated in the RO reject were shown to be better precipitated through dosing of sulphide, since their sulphides are less soluble than their hydroxides or carbonates; similarly, chlorine (hypochlorite) added during softening improved the removal of Fe²⁺ by oxidation to Fe³⁺, and sulphite improved the precipitation of soluble Cr⁶⁺ by reduction to Cr³⁺. Reject Processing after CLS or NF After partial or conventional CLS, or NF, of the RO reject stream, further processing was conducted through either partial recycling to the feed stream of the primary RO unit or feeding of an independent secondary RO unit. Fig (8) shows the results of recycling the softened reject stream (partial CLS) in the range of 25 to 75 percent to the feed stream of the primary RO unit. Recycling increased the feed TDS, which was shown to have only a limited influence on the permeate TDS. While at 75% recycling the feed TDS was practically doubled, attaining 5,461.7 mg/l, when treated under mainly similar conditions by the same pilot RO unit operated with high-rejection, low-energy RO membranes, the permeate TDS showed only a limited increase, from 60.9 to 139.3 mg/l, which does not compromise its quality for subsequent application. According to these results, already existing RO facilities, without the need for additional equipment or space, can promote the total system recovery and reduce the final reject rate, and consequently the cost of waste disposal, through a simple system modification without significant sacrifice of RO permeate quality. On the other hand, if the reject processing aims to increase the final product rate, the softened stream can be treated in an independent secondary RO unit. The comparison between the RO performance of the softened and the unsoftened reject streams shows that pre-softening is particularly interesting in the case of high-TDS, high-hardness brines. *Pressure vessels of 6 RO elements each, arrayed in three stages. On the other hand, for NF of the reject prior to secondary RO treatment, the efficient dehardening by NF, in addition to its partial TDS rejection, enabled recoveries as high as 85% in the secondary RO. This resulted in a total process recovery as high as 97.75%.
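Referring back to the stoichiometric requirement for the lime-soda softening step discussed above, the sketch below gives a textbook-style estimate of lime and soda-ash demand from a water analysis expressed as CaCO₃ equivalents. It only illustrates the stoichiometric point; the actual pilot dosing was tuned to the target rejection and to a supersaturation factor of about three, and the input numbers are placeholders rather than the reject analysis of this study.

```python
# Textbook lime-soda softening demand, all quantities in mg/l as CaCO3.
# Lime neutralizes CO2 and carbonate hardness and supplies OH- for Mg removal;
# soda ash supplies CO3-- for the non-carbonate (permanent) hardness.

def lime_soda_demand(ca, mg, hco3, co2=0.0, excess=35.0):
    carbonate_hardness = min(ca + mg, hco3)
    noncarbonate_hardness = max(ca + mg - hco3, 0.0)
    lime = co2 + hco3 + mg + excess   # excess lime drives pH high enough for Mg(OH)2
    soda_ash = noncarbonate_hardness
    return lime, soda_ash, carbonate_hardness, noncarbonate_hardness

lime, soda, ch, nch = lime_soda_demand(ca=900.0, mg=350.0, hco3=400.0, co2=20.0)
print(f"lime ~ {lime:.0f} mg/l as CaCO3, soda ash ~ {soda:.0f} mg/l as CaCO3")
```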
Results showed that treatment of the RO reject by NF prior to recycling or to treatment in a secondary RO unit is particularly interesting in the case of medium-salinity, medium-hardness reject streams, while for highly concentrated reject streams CLS is more effective and has a lower cost than NF. Comparison between Removal of Scale-Forming Components from RO Reject by NF and by CLS The removal of hardness components concentrated in the RO reject, such as Ca, Mg and SO₄, together with SiO₂, Fe and Mn, as well as other possible components like HMCs, was investigated by NF in comparison with precipitation by conventional CLS. In order to conduct the comparison of the two methods under similar conditions, the extent of rejection recorded by NF was the basis for selecting the dosing rates of lime and soda ash that realize the same Ca rejection. In Fig (9), for each species, the first column to the left represents the initial concentration in the reject stream, and the second and third represent the results of rejection by NF and by softening, respectively. While NF lowered the concentration of all the species to various extents and consequently lowered the TDS, softening lowered only the concentration of the components included in the softening reactions, such as HCO₃⁻, P alkalinity and SiO₂ [11]. Softening raised, on the other hand, the concentrations of Na⁺, CO₃²⁻ and M alkalinity. As for SiO₂, which is directly rejected by NF, it is removed upon softening by adsorption on the Mg(OH)₂ surface deposited at high pH values, but at a lower efficiency than NF rejection. While chemical softening is usually stated to have a lower cost [10], [12], the detailed consideration of all the related cost factors and of the additional process steps that are not included in NF, and which should be added to the cost of softening in order to realize the same performance as NF, revealed the cost advantage of NF reject treatment. Chemical softening, but not NF, requires stoichiometric or higher dosage of lime and soda ash to reduce hardness; disposal of large amounts of sludge, which may include dosage of polyelectrolytes and/or sludge conditioning before delivery to settling ponds and landfill disposal; raising of the pH of the reject stream above 9.5 for indirect removal of SiO₂ after deposition of Mg(OH)₂; and sophisticated installations for chemical dosing, together with settling tanks. Our results have shown that removal of Ca by CLS is not as complete as by IER or NF, and that CLS does not effectively remove organics or reduce TDS. The above considerations extend to the treatment of different types of industrial wastewaters which contain hardness components and HMCs, possibly together with organics and suspended solids, where NF application will be optimum, particularly if complete desalination is not required. Conclusions
• Processing of the desalination reject stream, instead of just getting rid of it, was conducted in laboratory and pilot testing in order to promote the desalted water recovery and reduce the final reject disposal problems and costs, which will increase the total desalination process efficiency, cost effectiveness and environmental safety.
• Among the investigated processing alternatives, the most efficient ones in the case of medium-concentration brine streams (up to 10,000 mg/l) are (a) high-rejection, low-energy RO together with the use of a specific antiscalant, and (b) partial recycling of the reject to the feed stream of the initial RO unit. In already present RO facilities, reject recycling does not require extra footprint.
Results showed that percent reject recycling as high as 75% did not significantly increase the final permeate salinity. For new projects, on the other hand, an increase of the total product rate was realized through a secondary RO treatment of the reject.
• In the case of high-TDS reject streams up to 33,000 mg/l, reject processing by the partial cold lime method, the conventional cold lime method or nanofiltration was conducted prior to recirculation to the initial RO feed or treatment in a secondary RO unit. Results confirmed the promotion of the total percent recovery without significant sacrifice of the total permeate quality.
• Partial CLS is particularly interesting in the case of reject streams where Mg and SiO₂ removal do not represent a problem for processing. Besides ease of operation and lower cost than conventional CLS, partial CLS leads to a higher decrease of the reject TDS and does not increase the Na concentration.
9,531.2
2012-09-14T00:00:00.000
[ "Engineering", "Environmental Science" ]
Search for resonant top plus jet production in tt̄ + jets events with the ATLAS detector in pp collisions at √s = 7 TeV This paper presents a search for a new heavy particle produced in association with a top or antitop quark. Two models in which the new heavy particle is a color singlet or a color triplet are considered, decaying respectively to t̄q or tq, leading to a resonance within the tt̄ + jets signature. The full 2011 ATLAS pp collision dataset from the LHC (4.7 fb⁻¹) is used to search for tt̄ events produced in association with jets, in which one of the W bosons from the top quarks decays leptonically and the other decays hadronically. The data are consistent with the Standard Model expectation, and a new particle with mass below 430 GeV for both the W′ boson and color triplet models is excluded at 95% confidence level, assuming unit right-handed coupling. PACS numbers: 14.80.-j In the past few decades, remarkable agreement has been shown between measurements in particle physics and the predictions of the Standard Model (SM). The top quark sector is one important place to look for deviations from the SM, as the large top quark mass suggests that it may play a special role in electroweak symmetry breaking. The recent top quark forward-backward asymmetry measurements from the Tevatron experiments [1,2] are in marginal agreement with SM expectations. A non-SM explanation could come from a possible top-flavor-violating process [3-5]. In these models, a new heavy particle R would be produced at the LHC in association with a top or anti-top quark. Figure 1 shows representative production diagrams for these new particles, for the cases of R = W′ or R = φ (see below). As shown in Ref. [6], the production mechanism in pp collisions mainly involves quarks rather than anti-quarks at √s = 7 TeV, even for relatively low mass particles.
The larger number of quarks relative to anti-quarks produced in the initial state at the LHC leads to a resonance R that decays predominantly to either the t+jet or t̄+jet final state, where baryon number conservation restricts the models that are available. Two models that can give rise to these final states are a color singlet resonance (W′) mostly in the t̄q system, and a di-quark color triplet model with a resonance (φ) in the tq system. In both cases a tt̄+jet final state is produced, but a peak will be present in only one of the t+jet or t̄+jet invariant mass distributions. The new resonances are assumed not to be self-conjugate, which makes searches for same-sign top quarks insensitive to them [7-9], and to have only right-handed couplings. The t or t̄ then decays to W⁺b or W⁻b̄, respectively. This paper considers the decay signature of events in which one W boson decays leptonically (to an electron or muon, plus neutrino final state) and the other W boson decays hadronically. The first direct search for such particles was performed at CDF [10], which excluded color triplet resonances with masses below 200 GeV and W′ resonances with masses below 300 GeV, for particles with unit right-handed coupling (g_R) to tq. As is done in this paper, CDF used the formalism in Ref. [3] to define g_R. CMS recently performed a search that excluded a new W′ with a mass less than 840 GeV [11] for particles with g_R = 2 [12]. The analysis presented here uses the full ATLAS 7 TeV pp collision dataset collected in 2011, corresponding to 4.7 ± 0.2 fb⁻¹ of integrated luminosity [13,14] delivered by the LHC. ATLAS [15] is a multi-purpose particle physics detector with cylindrical geometry [16]. The inner detector (ID) system consists of a high-granularity silicon pixel detector and a silicon micro-strip detector, as well as a transition radiation straw-tube tracker. The ID is immersed in a 2 T axial magnetic field, and provides charged particle tracking in the range |η| < 2.5. Surrounding the ID, electromagnetic calorimetry is provided by barrel and endcap liquid-argon (LAr)/lead accordion calorimeters and LAr/copper sampling calorimeters in the forward region. Hadronic calorimetry is provided in the barrel by a steel/scintillator tile sampling calorimeter, and in the endcaps and forward region by LAr/copper and LAr/tungsten sampling calorimeters, respectively. The muon spectrometer (MS) comprises separate trigger and high-precision tracking chambers measuring the deflection of muons in a magnetic field with a bending power of 2-8 Tm, generated by three superconducting air-core toroid systems. A three-level trigger system is used to select interesting events. The level-1 trigger is implemented in hardware and uses a subset of detector information to reduce the event rate to a design value of at most 75 kHz. This is followed by two software-based trigger levels, level-2 and the event filter, which together reduce the event rate to ~300 Hz.
Events with an electron (muon) are required to have passed an electron (muon) trigger with a threshold of transverse energy E_T > 20 GeV (transverse momentum p_T > 18 GeV), ensuring that the trigger is fully efficient for the off-line selection discussed below. Electrons reconstructed offline are required to have a shower shape in the electromagnetic calorimeter consistent with expectation, as well as a good quality track pointing to the cluster in the calorimeter. Candidate electrons with E_T > 25 GeV are required to pass the "tight" electron quality criteria [17], to fall inside a well-instrumented region of the detector (|η| < 2.47, excluding 1.37 < |η| < 1.52), and to be isolated from other objects in the event. Muons with transverse momentum p_T > 20 GeV are required to pass muon quality criteria [18], to be well measured in both the ID and the muon spectrometer, to fall within |η| < 2.5, and to be isolated from other objects in the event. Jets are reconstructed in the calorimeter using the anti-kt [19] algorithm with a radius parameter of 0.4. Jets are required to satisfy p_T > 25 GeV and |η| < 2.5. Events with jets arising from electronic noise bursts and beam backgrounds are rejected [20]. Jets are calibrated to the hadronic energy scale using p_T- and η-dependent corrections derived from simulation, as well as from test-beam and collision data [21]. Jets from the decay of heavy flavor hadrons are selected by a multivariate b-tagging algorithm [22] at an operating point with 70% efficiency for b-jets and a mistag rate for light quark jets of less than 1% in simulated tt̄ events. Neutrinos are inferred from the magnitude of the missing transverse momentum (E_T^miss) in the event [23]. The signal region for this analysis is defined by requiring exactly one charged lepton and five or more jets, including at least one b-tagged jet. To select events with a leptonically decaying W boson, events are required to have E_T^miss > 30 GeV (E_T^miss > 20 GeV) in the electron (muon) channel. Additionally, the event must have a transverse mass of the leptonically decaying W boson m_T^W > 30 GeV in the electron channel, or scalar sum E_T^miss + m_T^W > 60 GeV in the muon channel [24]. Here m_T^W = sqrt(2 E_T^lep E_T^miss (1 - cos Δφ)), where E_T^lep is the magnitude of the transverse momentum of the lepton, and Δφ is the angle between the lepton and the missing transverse momentum in the event. A variety of Monte Carlo generators are used to study and estimate backgrounds. The generated events are processed through full detector simulation [25], based on GEANT4 [26], and include the effect of multiple pp interactions per bunch crossing. To predict the event yield, the simulation is given an event-by-event weight such that the distribution of the number of pp collisions matches that in data. The tt̄ background is modeled with MC@NLO v4.01 [27] interfaced to HERWIG v6.520 [28] and JIMMY v4.31 [29]. An additional tt̄ sample modeled with MC@NLO interfaced to PYTHIA v6.425 [30] is used to study potential systematic uncertainties. Other tt̄ samples use POWHEG [31] interfaced either to PYTHIA or HERWIG, as well as AcerMC v3.8 [32]. The background from the production of single W bosons in association with extra jets is modeled by the ALPGEN v2.13 [33] generator interfaced to HERWIG. The MLM matching scheme [34] is used to form inclusive W boson + jets samples such that overlapping events produced in both the hard scatter and parton showering are removed. In addition, the heavy flavor contributions are reweighted using the data-driven procedures of Ref.
[35] using the full 2011 LHC dataset. Diboson events are generated using HERWIG. Single-top-quark events are modeled by MC@NLO, interfaced with HERWIG for the parton showering, in the s-channel and Wt channel, and by AcerMC v3.8 in the t-channel. The small background in which multi-jet processes are misidentified as prompt leptons is modeled from a data-driven matrix method [36]. In determining the expected event yields, the tt̄ cross section is normalized to approximate next-to-next-to-leading-order QCD calculations of 167 +17/-18 pb for a top quark mass of 172.5 GeV [37,38], and the total W+jets background is normalized to inclusive next-to-next-to-leading-order predictions [39]. Signal events are produced, for a range of W′ and φ masses, with MadGraph v5.1.3.16 [40] and interfaced to PYTHIA v6.425. Next-to-leading-order (NLO) cross sections are used for the predicted W′ boson signal normalization [6], and leading-order (LO) cross sections using MSTW2008 are used for the φ-resonance normalization [3]. Events are reconstructed with a kinematic fitting algorithm that utilizes knowledge of the over-constrained tt̄ system to assign jets to partons. In the fit, the two top quark masses are each constrained at the particle level to 172.5 GeV by a penalty in the likelihood, computed from variations from this nominal value and the natural top quark width of 1.5 GeV. The two W boson masses are similarly constrained to 80.4 GeV within a width of 2.1 GeV. This allows the z-component of the momentum of the neutrino from the leptonically decaying W boson to be computed. Both solutions from the quadratic ambiguity of this computation are tested when computing the likelihood. Charged lepton, neutrino and jet four-momenta are constrained in the fit by resolution transfer functions derived from simulated tt̄ events that relate the measured momenta in the detector to true particle momenta. The full shapes of these transfer functions are used in the likelihood computation. All assignments of any four jets to partons from the tt̄ decay are tested and the assignment with the largest likelihood output for the tt̄ hypothesis is selected. After the assignment is selected, the originally measured jet and lepton momenta and E_T^miss are used. The remaining jets not associated with the tt̄ partons are included to form m_tj and m_t̄j masses, where the charge of the lepton is used to infer which is the top candidate and which is the anti-top candidate. All combinations of extra jets with the top and anti-top quark candidates are considered, and the pairings that give the largest m_tj and m_t̄j masses are used. In this way, the same extra jet can (but does not necessarily have to) be used to form m_tj and m_t̄j. These two masses are used as observables for the search.
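The on-shell W mass constraint mentioned above fixes the neutrino longitudinal momentum up to a quadratic ambiguity. The sketch below is the standard textbook solution of that quadratic for a massless lepton and neutrino; it is not the ATLAS kinematic likelihood fit itself (which folds in transfer functions and the top-mass constraints), and the function name and the handling of a negative discriminant are choices of this illustration.

```python
import math

M_W = 80.4  # GeV, the W boson mass constraint quoted in the text

def neutrino_pz(lep_px, lep_py, lep_pz, lep_e, met_x, met_y):
    """Two solutions for the neutrino longitudinal momentum from m(lep, nu) = M_W,
    assuming a massless lepton and neutrino. If the discriminant is negative,
    the real part is returned twice (one common convention)."""
    pt_nu2 = met_x ** 2 + met_y ** 2
    pt_lep2 = lep_px ** 2 + lep_py ** 2
    mu = 0.5 * M_W ** 2 + lep_px * met_x + lep_py * met_y
    a = mu * lep_pz / pt_lep2
    disc = a ** 2 - (lep_e ** 2 * pt_nu2 - mu ** 2) / pt_lep2
    if disc < 0.0:
        return a, a
    root = math.sqrt(disc)
    return a - root, a + root
```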
Several control regions are used to ensure good modeling and understanding of the backgrounds before the signal region is examined. The preselection control region requires at least four jets, but does not require a b-tag. The dominant tt̄ background is tested in a control region with exactly four jets (including at least one b-tagged jet). The rejection of events with more than four jets reduces signal contamination. A second tt̄ control region is defined by events with exactly four jets with p_T above 25 GeV, one of which must be b-tagged, and exactly one additional jet with p_T between 20 GeV and 25 GeV. Signal contamination is further reduced by requiring that the ΔR ≡ sqrt((Δη)² + (Δφ)²) between the fifth jet and both the reconstructed top and anti-top quarks is greater than π/2. Figure 2 shows distributions in the two tt̄ control regions, where good agreement is observed between data and the prediction. The second major background, production of single W bosons in association with extra jets, is tested in a control region with five or more jets, vetoing events with b-tagged jets. The requirement of zero b-tagged jets reduces both signal and tt̄ contamination. The distribution in Figure 3 shows good agreement between data and the prediction within uncertainties. Table I summarizes the expected and observed yields in the control regions. Figure 4 shows the expected and observed m_tj and m_t̄j distributions in the signal region. The data are found to be consistent with the SM expectation. A variety of potential systematic effects are evaluated for the predicted signal and the background rates and shapes. The dominant systematic effects of the jet energy scale [21] and resolution [41] lead to uncertainties of up to 10% on the total background rate and up to 21% on the total signal expectation, depending on the mass of the new particle. The other dominant systematic uncertainty, from the difference in b-tagging efficiency between simulation and data, leads to uncertainties of roughly 16% on both the signal and background rates. Effects due to lepton trigger uncertainties and ID efficiency as well as the energy scale and resolution are assessed using Z → ee and Z → µµ data, which lead to systematic uncertainties of a few percent. Other potential systematic effects considered are the size of the small multi-jet background (assigned 100% uncertainty); tt̄ generator uncertainties (evaluated by comparing different results using the MC@NLO and POWHEG generators, 1-10%); tt̄ showering and fragmentation uncertainties (evaluated by comparing samples using both PYTHIA and HERWIG, 1-6%); an uncertainty on the total integrated luminosity (3.9%) [13,14]; and the amount of QCD radiation for the signal and the tt̄ background (approximately 10%, evaluated using AcerMC). Total cross section uncertainties of 10% (55%) are used for the tt̄ (W+jets) backgrounds. Expected and observed upper limits on the signal cross section are computed at discrete mass points as follows.
For each benchmark signal mass point under consideration, a signal region is defined in the m_tj-m_t̄j plane. When setting limits for the W′ (φ) model, the m_tj (m_t̄j) window is significantly wider than the m_t̄j (m_tj) window, to account for the fact that the resonance is predominantly in the m_t̄j (m_tj) system. The windows are optimized to maximize sensitivity, accounting for the full effect of systematic uncertainties. Typical mass windows are shown in Table II. For each mass window, 95% confidence level (C.L.) upper limits on the signal cross section (times the branching ratio to t or t̄) are computed using a single-bin frequentist CLs method [42]. No shape information is used within the mass windows. Table II shows the expected and observed event yields in several of the signal region windows. Expected and observed 95% C.L. lower limits on the signal mass are derived, assuming a coupling of g_R = 1 and g_R = 2, and are shown in Figure 5. Assuming that the cross section scales as g_R², the exclusion in the mass-coupling plane is shown in Figure 6. As shown, most of the parameter space in this model, which was favored by the Tevatron forward-backward asymmetry and cross section measurements [43], has been excluded. In conclusion, this paper presents a search for a new heavy particle R in the tj or t̄j system of tt̄ plus extra jet events with the ATLAS detector. Such new particles have been proposed as a potential explanation of the difference from the SM values of the forward-backward asymmetries measured in top quark pair production at the Tevatron. The full 2011 ATLAS pp dataset (4.7 fb⁻¹) is used in the search. Assuming unit coupling, the expected 95% C.L. lower limit on the mass of the new particle is 500 (700) GeV in the W′ (φ) model. No significant excess of data above SM expectation is observed, and 95% C.L. lower limits of 430 GeV for both the W′ and φ models are set. At g_R = 2, the limits are 1.10 (1.45) TeV for the W′ (φ) model, with expected limits of 0.93 (1.30) TeV. These are the most stringent limits to date on such models. Most of the regions of parameter space for these models that are more consistent with the Tevatron forward-backward asymmetry and tt̄ cross section measurements than the SM are excluded at 95% C.L. by these results. We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently.
FIG. 2: The leading jet p_T in the four-jet tt̄ control region (a), and m_tj in the five-jet tt̄ control region (b). The example signal-only distributions are overlaid for comparison, where unit coupling for the new physics process is assumed. The total uncertainty shown on the ratio includes both statistical and systematic effects. The "other" background category includes single top production, diboson production and multi-jet events.
TABLE I: Expected and observed yields in the four control regions (CR). Total refers to the total expected background, including tt̄, W+jets and the other smaller backgrounds: single top production, diboson production and multi-jet events. The last two lines show the expected number of events for two benchmark signal samples in each of these control regions. The errors include all systematic uncertainties.
The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN and the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA), and in the Tier-2 facilities worldwide.
FIG. 1: Example production and decay Feynman diagrams for the (a) W′ and (b) φ models.
FIG. 3: Expected and observed distribution of m_t̄j in the W+jets control region. The example signal-only distributions are overlaid for comparison, where unit coupling for the new physics process is assumed. The total uncertainty shown on the ratio includes both statistical and systematic effects. The "other" background category includes single top production, diboson production and multi-jet events.
FIG. 4: Expected and observed distributions of (a) m_tj and (b) m_t̄j in the signal region. The example signal distributions assume unit coupling for the new physics process. The total uncertainty shown on the ratio includes both statistical and systematic effects. The "other" background category includes single top production, diboson production and multi-jet events.
FIG. 6: The hatched area shows the region of parameter space excluded by this search at 95% C.L. The CDF result is documented in Ref. [10]. The W′ cross sections are NLO calculations, and the φ cross sections are LO calculations. The region favored by the Tevatron A_FB and σ_tt̄ measurements is shown as the dark band [43].
TABLE II: Expected and observed yields in different signal regions. The errors include all systematic uncertainties. Total refers to the total expected background, including tt̄, W+jets and the other smaller backgrounds: single top production, diboson production and multi-jet events, which are not tabulated separately here. Signal window eff. refers to the efficiency for the signal to fall inside the optimized two-dimensional mass window. The signal region yield is calculated in the mass window at each benchmark signal point. Signal σ refers to the total expected signal cross section, not taking into account the t (or t̄) plus jet branching fraction.
4,781
2012-09-28T00:00:00.000
[ "Physics" ]
UNCERTAINTY IN LANDSLIDES VOLUME ESTIMATION USING DEMs GENERATED BY AIRBORNE LASER SCANNER AND PHOTOGRAMMETRY DATA The purpose of this paper is to identify an approach able to estimate the uncertainty related to the measurement of the terrain volume generated after a landslide. The survey of the area affected by the landslide can be performed by Photogrammetry & Remote Sensing (PaRS) techniques. Indeed, depending on the method and technology used for the survey, a different level of accuracy is achievable. The estimate of the quantity of terrain involved in the landslide influences the type of geological and geotechnical approach, the civil engineering project on the area and, consequently, the costs borne by a community. According to the experiences and recommendations reported in the ASPRS guidelines, an example of the approach used to estimate the volumetric accuracy concerning one of the most important landslides in Europe is shown in this paper. In this case study, the dataset consists of a Digital Elevation Model (DEM) obtained by the photogrammetric (stereo-image) method (pre-landslide) and another obtained by Airborne Laser Scanner (post-landslide). By comparing the Airborne Laser Scanner (ALS) and photogrammetry DEMs obtained from successive surveys, it has been possible to produce maps of differences and, consequently, to calculate the volume of terrain eroded or accumulated. In order to calculate the volume uncertainty, a procedure that also takes into account the different accuracy achievable in vegetated areas is explained and discussed. INTRODUCTION Volumetric change analysis plays an important role in the observation of some phenomena, such as the estimation of the earthflow quantity generated after a landslide. A common method used to analyse landslide phenomena is the difference of DEMs (Digital Elevation Models), the so-called DEM of Difference (DoD). This technique consists of comparing several DEMs acquired over time (James et al, 2012; William, 2012). For this reason, the DoD map makes it possible to interpret the evolution of geomorphological processes and to quantitatively assess morphological changes due to erosion and/or accumulation on a landslide (Bossi et al, 2016). In order to describe the topography of the Area Of Interest (AOI), the survey can be carried out by several geomatics techniques: digital photogrammetric aerial images (Hapke, 2005), Airborne Laser Scanner (ALS) (Jaboyedoff et al, 2012; Garnero & Godone, 2013), space-borne Synthetic Aperture Radar (SAR) interferometry (Colesanti & Wasowski, 2006; Osmanoğlu et al, 2016), Terrestrial Laser Scanner (Bitelli et al, 2004) and satellite images (Seker et al, 2004; Nichol et al, 2006). In this paper, the attention is focused on the DEMs generated from stereo-images acquired by an airborne sensor and from ALS technology. The monitoring of landslide processes by stereoscopic aerial photography has been applied and known for a long time (Fookes et al, 1991; Lee & Min, 2001; Karsli et al, 2004). One advantage of the photogrammetric method is that it allows easy identification of the area affected by landslide activity. The accuracy of photogrammetric measurements mainly depends on the flying height. Therefore, performing the aerial survey at low height, it was possible to achieve centimetre accuracy for points on the ground (Brückl et al, 2006). Passini & Jacobsen (2008) showed, by a test with different aerial digital sensors (Vexcel Imaging UltraCamD, UltraCamX, Z/I Imaging DMC digital frame and 3D-CCD-line scanner camera Leica ADS40), that the accuracy achieved in terms
of RMSE (Root Mean Square Error), using a Ground Sample Distance (GSD) of about 5 cm, was of a few centimetres. However, this method does not allow terrain information to be obtained if the area of study is covered by high and dense vegetation; of course, the accuracy in vegetated areas decreases considerably. Accordingly, the Italian Technical Requirements for the Production of DTM suggest, for vegetated areas, a tolerance value of a quarter of the tree height. As concerns the ALS survey, over the last few years this technique has become increasingly widespread thanks to its ability to produce dense point clouds in short times. In the case of landslide surveys, using ALS data it is possible to produce landslide morphology maps and to obtain spatial information even in areas that are partly or completely covered by dense vegetation (Razak et al, 2011). Indeed, using an ALS sensor and carrying out suitable data post-processing, it is possible to create accurate and precise High Resolution Digital Elevation Models (HRDEM) as raster grids or Triangulated Irregular Networks, so-called TINs (Maglione et al, 2014), which are 2.5D representations of the topography, or as true 3D point clouds with a high density of information (Jaboyedoff et al, 2012). Vosselman and Maas (2001) showed that the planimetric errors are often much larger than the height errors, and that the accuracy ranges from 10 to 20 cm. Csanyi and Toth (2007), in order to analyse both the planimetric and the height accuracy of laser point clouds, designed special ground targets: the comparison between the coordinates of the targets obtained by the ALS sensor and by GPS (Global Positioning System) showed a planimetric accuracy of 5-10 cm and a height accuracy of 2-3 cm. Vosselman (2008; 2012), by means of specific case studies and suitable methods, showed that the horizontal accuracy can be kept within 5 cm. However, the accuracy values achieved in these papers were obtained in optimal conditions and open environments. Other studies, which consider the vegetation, show a higher error. Reutebuch et al (2003), in a survey of forested lands in western Washington, obtained a mean and standard deviation of the vertical errors between the Digital Terrain Models (DTM) and 347 ground checkpoints of 0.22 m and 0.24 m, respectively (RMSE = 0.32 m). This order of error is similar to, but lower than, that obtained by Kraus and Pfeifer (1998) for a wooded area in Austria (RMSE = 0.57 m). A further study was conducted over the Utikuma Uplands boreal wetland located north of Utikuma Lake (Canada) by Hopkinson et al (2005). Other accuracy parameters can be obtained from the datasheet of the manufacturer of the ALS system. For example, for the ALS50 II sensor, after suitable data post-processing, the lateral placement accuracy is 7-64 cm and the vertical placement accuracy is 8-24 cm (one standard deviation). These values are valid under acquisition conditions of full-field-filling targets of 10 percent diffuse reflectivity or greater, with an atmospheric visibility of 23.5 km or better, for flying heights up to 6000 m AGL (height above ground level) and a nominal field of view of 40 degrees (Leica Geosystems, 2017). For both geomatics techniques taken into consideration, it is also necessary to consider the influence of the different interpolation methods used for DEM generation. Indeed, as shown in the paper of Ismail et al.
(2016), concerning a survey by ALS systems over a vegetated area, the IDW (Inverse Distance Weighting) and spline methods give a lower DEM accuracy than the kriging method. Indeed, for an ALS survey of a quite flat area (average point density of 2.2 points per m²) and adopting a DEM with a geometric resolution of 1 m, the RMSE value achieved, depending on the interpolation technique, varies between 0.78 and 1.06 m. In terrain with a slope greater than 15°, Su et al (2006) showed that the error range varies between 1.458 and 1.788 m. Error prediction in the calculation of volume change In the case where it is possible to measure directly the sides of a rectangular prism of length a, width b and height h, the volume V is the product of these three components. The variance associated with the measurement of this volume, assuming uncorrelated errors on the three sides, is (Bevington, 1969):
σ²_V = (b·h)²·σ²_a + (a·h)²·σ²_b + (a·b)²·σ²_h
The previous formula is valid for a direct measurement. In the case of indirect measurement, such as direct georeferencing in the photogrammetric and ALS methods, it is necessary to take into account further factors generated by the indirect measurement. An efficient approach to the estimation of the volume uncertainty has been proposed by Hapke (2005) for monitoring a landslide through the photogrammetric method. In this approach, the total model error is transformed into an uncertainty in volume. In addition, because the estimation error decreases as landslide areas become larger (Tseng et al, 2013), the total volume uncertainty (σ²_Vt) is a function of the following parameters:
σ²_Vt total volume uncertainty;
Et total error (3D) of the model;
A area over which the volume is calculated;
Vt volume.
Et is the sum of the error components related to the remote sensing technique. Indeed, the total error for photogrammetric purposes can be written as a function of several aspects: errors in direct georeferencing, aerial triangulation, determination of Ground Control Points, flying height and image distortions. As concerns the ALS survey, the error is related to: positioning of the carrying platform; orientation determination; offsets between the laser sensor, the position and orientation (POS/INS) system and the aircraft platform; errors in the electro-optical parts of the laser sensor; wrong laser and POS/INS data processing; careless integration and interpolation of the INS and GNSS data (pre-processing); erroneous data from the reference ground GNSS base stations; and wrong data/coordinate transformations. In addition, in order to obtain an estimate of the error as close as possible to reality, the Et value should take into account the presence of vegetation on the AOI. In other words, within the AOI it is possible to separate the vegetated area (VA) from the non-vegetated area (eroded or accumulated). In this way, if a different and greater error is not considered in areas covered by high vegetation, then the estimate of the volumes could be too optimistic. Therefore, the total volume uncertainty is the sum of two components, one for the vegetated and one for the non-vegetated area. Monitoring 4D phenomena, i.e. in terms of spatial and temporal dimensions, the variation of the volume over time can be expressed by a corresponding formula, and the total volume uncertainty in the 4D analysis assumes an analogous form. The procedure that allows the calculation of the total uncertainty related to the eroded or accumulated volume can be sketched as shown in the following workflow (figure 1).
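The workflow above is easy to mirror in a few lines of code. The sketch below shows, first, the Bevington-style propagation for a directly measured prism and, second, a zone-based volume uncertainty in the spirit of the Hapke (2005) approach described in the text: the 3D model error Et is applied over the area of each zone, with vegetated and non-vegetated zones carrying different Et values, and the two variance contributions summed. The exact equations of the paper are not reproduced here, and treating the DEM error as fully correlated within a zone (sigma_zone = Et * A) is an assumption of this sketch.

```python
import math

def prism_volume_variance(a, b, h, sigma_a, sigma_b, sigma_h):
    """Variance of V = a*b*h for uncorrelated errors on the three sides."""
    return (b * h * sigma_a) ** 2 + (a * h * sigma_b) ** 2 + (a * b * sigma_h) ** 2

def volume_uncertainty_by_zone(area_no_veg, et_no_veg, area_veg, et_veg):
    """One-sigma volume uncertainty from zone areas and their total model errors,
    summing the vegetated and non-vegetated variance contributions."""
    var_total = (area_no_veg * et_no_veg) ** 2 + (area_veg * et_veg) ** 2
    return math.sqrt(var_total)
```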
Study area and datasets The study area is located in southern Italy, in the Campania Region (φ = 41°15'05" N; λ = 15°12'59" E), and concerns an important landslide. Indeed, the Montaguto landslide had a length of more than 3 km and extended over an area of about 45 hectares. This landslide led to the closure of an important road link and of a railway line for a period of about three months. In order to estimate the morphology of the terrain after the earthflow, two DEM datasets have been considered: the first dataset (before the landslide) derives from a photogrammetric survey, while the second (after the landslide) derives from an ALS survey. Photogrammetry dataset The photogrammetric dataset consists of numerical cartography at the scale 1:5000, obtained by a stereoscopic approach on digital images. The images were acquired by the digital photogrammetric camera Z/I DMC (Zeitler et al, 2002; Pepe, 2017a). The most important features of this sensor are: sensor size 13824 x 7680 pixels, pixel size 12 µm, focal length 120 mm; the sensor is able to acquire in panchromatic and multispectral mode. The spatial accuracy values reported for the cartography are 0.75 m in elevation and 1.0 m in plan. So, in order to build the DEM, the vector information contained in the cartography (contour lines, elevation points, etc.) was transformed into a TIN and, subsequently, this latter model was transformed into a raster. These operations were carried out in the ArcGIS environment. The first task, i.e. the transformation from vector information to TIN, was realized by the tool called "Create TIN". In order not to consider the triangles outside the AOI, the tool called "Delineate TIN Data Area" was used; this tool modifies the input TIN by reclassifying its edges to be either included in or excluded from the interpolation zone. Subsequently, using the tool called "TIN to Raster", it was possible to interpolate cell z-values from the input TIN at a 0.5 m resolution. The interpolation method used to create the raster was "linear": in this way, the tool calculates cell values by applying linear interpolation to the TIN triangles.
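For readers without access to ArcGIS, an open-source sketch of the equivalent TIN-to-raster step is given below: a Delaunay triangulation of the elevation points is interpolated linearly onto a regular 0.5 m grid with SciPy. The point arrays, extents and no-data handling are placeholders, and the clipping of the TIN to the area of interest performed in the original workflow is not reproduced.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def points_to_dem(x, y, z, cell=0.5):
    """Linear (TIN-like) interpolation of scattered elevation points onto a grid."""
    interp = LinearNDInterpolator(np.column_stack([x, y]), z)
    xi = np.arange(x.min(), x.max(), cell)
    yi = np.arange(y.min(), y.max(), cell)
    gx, gy = np.meshgrid(xi, yi)
    dem = interp(gx, gy)  # NaN outside the convex hull (roughly the data area)
    return xi, yi, dem
```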
Lidar dataset. The survey after the earthflow was carried out with an ALS sensor in order to obtain a minimum point density on the terrain of 3 points/m². The flight plan was designed and realized with 6 flight lines (FLs), two of which along the axis of the landslide and the other four parallel to the central ones. The ALS survey was carried out at a relative height of 500 m using a Partenavia P68 aircraft. Because the point density is a function of the acquisition speed (Pepe, 2017b), during the flight particular attention was paid to respecting the planned velocity. In this way, it was possible to preserve the designed point density. Once the flight was completed, to obtain the point clouds in a specific mapping frame it is necessary to assemble three datasets: the calibration data and mounting parameters, the laser distance measurements with their respective scanning angles, and the Position and Orientation System data (Wehr & Lohr, 1999). The calibration was performed using a combination of standard and in-house software, and the factory boresight calibration was set using the Leica Geosystems Attune software. The post-processing of the Lidar data was carried out with several software packages: Waypoint GrafNav for Differential Global Positioning System (DGPS) processing and Leica Geosystems IPAS Pro to manage the combination of GNSS/IMU data (Roth & Thompson, 2008). The GNSS station used for DGPS is the permanent station called "Accadia", belonging to the Puglia Region GNSS network. The distance between the survey area and the master station was about 20 km. The trajectory flown by the aircraft during the acquisition is shown in Figure 2. As shown in Figure 2, before and after the execution of the flight lines the aircraft flew a figure-of-eight. This manoeuvre improves the performance of the inertial sensor and thus avoids drift effects over time. Subsequently, using the Leica Geosystems ALS Post Processor software, the point clouds were generated in the ASPRS LASer (LAS) file format (Samberg, 2007; Pepe et al., 2017a; Pepe et al., 2017b). Because the point clouds were generated in geographic coordinates, in order to simplify the geomatics operations a projection into a plane coordinate system was performed.

In particular, because the Italian government has adopted the reference system called ETRF2000 (2008.0), the coordinates of the survey were expressed in RDN2008/TM33. The elevation coordinates produced by direct georeferencing were generated as ellipsoidal heights (h). In order to obtain the orthometric height (H), it was necessary to convert these heights: knowing the geoid undulation (N) and using a suitable tool developed in the Matlab environment, the orthometric point clouds were obtained through H = h − N. The geoid undulation model adopted for the transformation was Italgeo05 (Barzaghi et al., 2005), because it is the most accurate geoid model available on Italian territory. Subsequently, using specific classification software, all the point clouds were classified into the classes reported in Table 1.

Analyses of the volume change: DEM comparison. The comparison between the two rasters (referring to two different epochs) was carried out with the ArcGIS software. Using the Raster Calculator tool, the difference in elevation between the two DEMs was calculated as ΔH = DEM_after − DEM_before, where DEM_after is the ALS-derived model and DEM_before the photogrammetry-derived one. The elevation difference (ΔH) over the landslide area is shown in Figure 4a. The boundary of the landslide area was determined using the orthophoto acquired after the landslide. In this way, it was possible to draw the boundary (polygon feature) in the GIS environment. A schematic re-implementation of this processing chain is given below.
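The processing chain just described (ellipsoidal-to-orthometric conversion, DEM differencing, and conversion of ΔH into accumulated and eroded volumes) can be summarised with a few lines of NumPy. This is only a schematic re-implementation on toy grids; the actual processing was carried out in Matlab and ArcGIS, and the geoid undulation here is a single invented value rather than the Italgeo05 grid.

```python
import numpy as np

def orthometric_height(h_ellipsoidal, geoid_undulation):
    """H = h - N (here N would come from the Italgeo05 model)."""
    return h_ellipsoidal - geoid_undulation

def dem_difference(dem_after, dem_before):
    """Cell-by-cell elevation change; positive values mean accumulation."""
    return dem_after - dem_before

def volumes(delta_h, cell_size):
    """Accumulated and eroded volumes (erosion is returned negative)."""
    cell_area = cell_size ** 2
    accumulation = np.nansum(np.where(delta_h > 0, delta_h, 0.0)) * cell_area
    erosion = np.nansum(np.where(delta_h < 0, delta_h, 0.0)) * cell_area
    return accumulation, erosion

# Toy 3x3 grids standing in for the 0.5 m rasters of the case study.
before = np.array([[100.0, 101.0, 102.0],
                   [100.5, 101.5, 102.5],
                   [101.0, 102.0, 103.0]])
after_ellipsoidal = before + 45.0 + np.array([[0.5, 1.0, -0.3],
                                              [0.0, 2.0, -1.0],
                                              [-0.2, 0.8, 0.0]])
after = orthometric_height(after_ellipsoidal, geoid_undulation=45.0)
delta_h = dem_difference(after, before)
print(volumes(delta_h, cell_size=0.5))
```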
Based on its morphological characteristics, the Montaguto landslide can be divided into three main sectors (Ferrigno et al., 2017): the upper sector represents the source area, the middle sector is the propagation area, and the lower sector is the accumulation area (landslide toe). In each sector, the eroded and accumulated volumes have been calculated. A widespread method to evaluate the terrain evolution is the construction of profiles. An example of a profile (transversal to the earthflow direction) before and after the landslide in the accumulation area is shown in Figure 4b, where it is easy to note that the difference in elevation reaches values of 20 metres. This gives a measure of the importance of the landslide that affected this area. The terrain volume changes due to accumulation and erosion are 349,991 m³ and 869,920 m³, respectively. Therefore, the total volume balance achieved is about 5.2·10⁵ m³. The uncertainty of the volume has been calculated with relation [2], suitably adapted to raster information. The high vegetation in the AOI covers about 14% of the area affected by the landslide. The heights of the trees were obtained as a post-processing task, i.e. after the classification of the point clouds into several classes. In this way, it was possible to measure the heights of the trees in several zones within the landslide area (Figure 5). Lastly, a terrestrial survey, using a Hi-Target V8 GNSS RTK (Real Time Kinematic) system and a total station (Pentax NX-325), was carried out in order to obtain check points. In this way, it was possible to compare the spatial coordinates of the check points with the corresponding points of the ALS and photogrammetry models. The different total errors obtained for vegetated and non-vegetated areas with the photogrammetric and ALS techniques are shown in Table 2.

Table 2. Total errors of the model in areas with and without high vegetation.
                      Et (photogrammetry) [m]   Et (ALS) [m]
Vegetated area        1.600                     0.300
Non-vegetated area    1.050                     0.125

Lastly, the terrain volume values and their uncertainties in the area affected by the landslide activity are: accumulation +349,991 ± 30,760 m³; erosion −869,920 ± 64,720 m³.

CONCLUSIONS. Due to the numerous parameters that contribute to the uncertainty of the volume estimation, the error-budget approach for determining the terrain volume change after an earthflow is rather complex, as shown in the presented case study. The choice of a specific survey technique can significantly affect the accuracy of the volume estimation. Indeed, especially in densely vegetated areas, a photogrammetric survey does not permit the exact determination of the terrain coordinates (ground level) and, as a consequence, of the volume estimate. Therefore, in the analysis of landslides, especially in densely vegetated areas, ALS technology makes an important contribution to the measurement of the terrain. In other words, because it achieves high accuracy, the ALS sensor is a suitable survey technology for monitoring landslide areas, especially when the area under investigation is densely vegetated.

Figure 1. Algorithm to calculate the volume uncertainty.
Table 1. Classification of the point clouds according to the *.LAS format.
4,215.4
2018-03-06T00:00:00.000
[ "Environmental Science", "Geology", "Engineering" ]
Toward Decision-Making Support: Valuation and Mapping of New Management Scenarios for Tunisian Cork Oak Forests Forest ecosystems are an important pillar of human wellbeing, providing a multitude of ecosystem services. In Mediterranean countries, where climate change effects are increasing sharply, the value of forest ecosystem services is even higher and their preservation is more crucial. However, the biophysical and economic value of such services is usually not observable due to their non-marketable characteristics, leading to their underestimation by decision-makers. This paper aims to guide decision-making through a set of new management scenarios based on ecosystem services' values and their spatial distribution. It is a cumulative multidisciplinary study based on biophysical model results, which are economically valued and processed using a geographic information system (GIS) to analyze spatial data. The investigation was based on a biophysical and economic valuation of cork, grazing, carbon sequestration and sediment retention as a selection of ecosystem services provided by a cork oak forest (Ain Snoussi, Tunisia). The valuation was made for the actual situation and for two management scenarios (density decrease and afforestation of the shrub land), with emphasis on their spatial distribution as a basis for new management. The total economic value (TEV) of the investigated services provided by the Ain Snoussi forest (3787 ha) was €0.55 million/year, corresponding to €194/ha/year. The assessment of two different scenarios based on land cover changes showed that the afforestation scenario provided the highest TEV with €0.68 million/year and an average of €217/ha, while the density decrease scenario provided €0.54 million/year and an average of €191/ha. Such results may orient decision-makers about the impact that new management may have; however, they should be applied with caution due to the importance of the spatial dimension in this study.

Introduction Considered an evolving concept, the ecosystem services (ES) approach appeared in the 1970s with a visionary objective based on increasing concern about the vulnerability of nature. The concept was intended to attract policymakers' attention and enhance their awareness of the future risks to be faced [1]. More recently, however, the objectives of ES studies have evolved from warning policymakers about extinction risks and the need to conserve biodiversity [2] to the integration of ES into decision-making processes, considering conservation only as one option and aiming for more relevant and sustainable management [3,4]. The fact that ecosystem services have indirect effects on human well-being, and that their value is usually unobservable, leads to their underestimation.

Study Area The study area is a transition zone between intense forest and wooded area, covering 3787 ha comprising 50% cork oak, 24% shrubs, 13% cropland and olive trees and 10% bare land, as shown in Figure 1. The region is characterized by a humid climate with a temperate winter and an average annual rainfall of 1000 mm. The forest is also characterized by the existence of a local population living inside the forest, estimated at 1700 inhabitants in 2014 (according to the National Institute of Statistics). Despite the fact that cork oak forests are publicly owned, the local population benefits from free usage rights.
The importance of the local population has been highlighted in several studies showing that local households are the main beneficiaries of the forest's economic value. According to Daly-Hassen and Ben Mansoura, 58% of the net benefits in 2005 corresponded to the private economic value benefiting local users [25]. The forest is located upstream of the Sidi Barrak watershed. The water reservoir was created in 1999 and is used for three main purposes: drinking water supply, improvement of the water quality of the Medjerda Canal, and irrigation; at present, however, the water collected in the watershed is actually used for drinking water. According to the literature, the study site provides a multitude of goods and services [23,26,27]. In the present work, the classifications and definitions used were based on the Common International Classification of Ecosystem Services (CICES) explicit classification, for consistency with the commonly used nomenclature. Differences between the land cover types in terms of management were considered; forests and shrubs are publicly managed, while olive plantations and crop lands are managed by the local population (making access to information more complicated). Only publicly managed land covers were studied.
Biophysical Valuation Methods A multidisciplinary study was established to estimate the biophysical values of the ecosystem services provided in the area based on the most appropriate methods. Table 1 shows the average production per ha and per year in 2016 and the corresponding biophysical method. The following ecosystem services were chosen to be studied for their importance, based on the literature review and on local experts' advices. The cork production cycle is 12 years in Tunisia, resulting in "virgin cork" (cork bark harvested at the first bark stripping at age 30), "reproductive cork" (cork bark harvested in the following cycles) and "miscellaneous cork" (wasted cork after stripping). Forest administration is responsible for the harvest and sale of the product, allowing the availability of relevant quantities (in quintal) related to the product. Grazing average quantities per hectare were obtained from the pastoral inventory, determining grazing averages in FU per ha depending on cork oak forest density and recovery rate [28]. Stored carbon data was obtained from the estimation of annual tree and shrubs production giving average quantities per type of land cover and tree density in T/ha [24,29]. Sediment retention data were obtained via the revised universal soil loss equation (RUSLE) model results [30], based on rainfall, soil characteristics, topography, land use and land cover estimated at the watershed level. Total Economic Value and Economics Valuation Methods In North Tunisia, forest ecosystems are multifunctional and are known for being significant providers of goods and services. The focus on the assessment of the ecosystem services began in the early 21st century with the publication of the book "Valuing Mediterranean Forests: Towards Total Economic Value" in 2005, which was followed with an increasing interest up to the present day [23][24][25][26][27][28]. In this study, the methods used for economic valuation of the investigated ecosystem services were the market price method for carbon sequestration and cork, residual valuation method for grazing and the production function approach to determine the economic price for sediment retention. The market price method is the most commonly used method for valuation of ecosystem services when they are exchanged in a real market with observable prices assuming that there are no distortions [31,32]. The residual valuation method, also called residual imputation, is a technique used to determine the shadow price for non-market goods, obtained by subtracting all inputs costs from the total income. In the present case, the technique was applied to assess grazing value as an intermediate to local household income. The production function approach also called "environmental function" in the literature, focuses on measuring physical changes in output due to environmental change [12]. In specific cases, ecosystem services may be non-market such as the protection from damage and sediment retention. In such cases, the literature offers a suitable adaptation of the production function methodology in the form of the "expected damage function". In certain cases, when the ES is considered as a protection service limiting damages and providing individuals with benefits, the expected damage function (EDF) can be applied. 
According to Hanley, the approach is based on "the estimation of how changes in the asset affect the probability of the damaging event occurring", valuing the changes that might occur in the quantities or qualities of a non-market service using the consumer surplus approach [12]. The water demand function can be written as a constant-elasticity function P = (Q/C)^(1/ε) (equivalently Q = C·P^ε), where P is the water price, Q the water quantity, ε the price elasticity, and C a constant. Figure 2 presents the actual water demand curve. Assuming that sedimentation leads to a loss in quantity, bringing the available water supply from Q to Q*, P* is considered as the water economic price, defined as the amount a rational user is willing to pay for it (a numerical sketch of this computation is given below).

Mapping and Spatial Distribution Interest in spatial ecosystem assessment has increased in recent years, with applications to the integration of spatial ES assessment data into planning, decision-making, or even management contexts. Such techniques reinforce the trend toward multidisciplinarity, aiming to combine the assessment of the biophysical, economic, and social context while taking into consideration the spatial distribution of ES. Considered an effective communication tool, spatial assessment is based on the illustration of ecosystem services' provision, and sometimes of trade-offs, at different scales [33]. The mapping technique has been described as a powerful tool for integrating complex information related to ecosystem services into decision-making and for combining results to identify priority action regions [34]. The need for spatially explicit mapping and assessment can be explained by the need to understand the level at which the natural processes occur [35]. However, besides the accuracy of the information provided by the initial mapping technique, a higher integration of the results is needed in order to reach the ultimate objective, "decision-making guidance" [33]. More recently, decision-making support tools have been based on a combination of mapping and modeling tools, which vary depending on the valuation approach used, the geographical scale, and the range of the considered ES (individual or portfolios) [36].
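To illustrate how the water economic price P* is read off the demand curve described above, the Python sketch below uses the constant-elasticity form. The calibration values (initial tariff, consumption, elasticity and supply loss) are purely illustrative and are not the study's data; only the general mechanics follow the text.

```python
def water_price(quantity, elasticity, constant):
    """Constant-elasticity demand Q = C * P**elasticity, inverted to
    P = (Q / C)**(1 / elasticity); elasticity is negative for demand."""
    return (quantity / constant) ** (1.0 / elasticity)

# Purely illustrative calibration: choose C so that the observed tariff P0
# corresponds to the observed consumption Q0.
P0, Q0, elasticity = 0.20, 1_000_000.0, -0.4
C = Q0 / (P0 ** elasticity)

# A sediment-induced loss of supply moves Q0 to Q*, and the price read off the
# demand curve at Q* is taken as the water economic price P*.
Q_star = 0.9 * Q0
print(f"P* = {water_price(Q_star, elasticity, C):.3f} EUR/m^3")
```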
Management Scenarios In the framework of the INFORMED project, the social component was considered as important as the economic and biophysical components. The study area is a particular case of a situation of conflict between different stakeholders, since property rights over the forest and shrubs are held by the state and managed by the forest administration, while usage rights are held by the local population. Agricultural lands (crop land and olive plantations) are privately owned. In the present study, besides the estimation of the total economic value to highlight the economic importance of the cork oak forest in North Tunisia with emphasis on the different land covers, an evaluation of new management scenarios was made to better orient decision-making. Only shrubs and cork oak stands were considered as land cover types, while carbon sequestration, sediment retention, cork and grazing were studied as the ecosystem services provided. A distinction between two types of cork oak stands was made based on density. A forest was assumed to be dense when the density was higher than 317 trees/ha (considered the maximum limit for optimal cork production [24]) and/or the forest recovery rate was higher than 50%. Two scenarios were suggested besides the business-as-usual scenario (BAU). The first was proposed by researchers based on the water scarcity situation and the productivity of cork oak stands, while the second scenario, more conservative towards the cork oak forest, was proposed by the public authorities aiming to increase the forest surface area. Scenario 1: Forest Density Decrease. Scenario 1 was based on a decrease in forest density. In areas where the forest was considered dense (density > 317 trees/ha), the stands were cleared to reach an average optimal density of 300 trees/ha. Scenario 2: Afforestation of the Shrub Area. Scenario 2 was based on the replacement of the shrub area with new plantations of cork oak. The new plantation took the average total economic value provided by the clear forest. Economic Valuation The prices of the provisioning and regulating services were either calculated or estimated as follows. Cork is a market good. The 2016 cork production value, estimated from the forest administration data, was €61.5/Q for reproductive cork, €13/Q for virgin cork, and €22.95/Q for miscellaneous cork. The average operating cost, estimated at €26.1/Q, was deducted to obtain the net profit of cork production.
Grazing has no observable price, and its value was derived by applying the residual value of the forage unit, estimated at €0.19/FU. For carbon sequestration, the economic valuation was carried out by applying the 2016 economic price for emissions reduction to the annual average carbon flux. Regarding sediment retention, the valuation was carried out by applying the economic price of water, based on data from the Tunisian national water distribution utility [37] and on the water price elasticity [38], assuming a constant price elasticity of demand along the curve to solve the water demand function (Equation (1)). Considering potable water as the main use of the water from the watershed, the sediment retention service was valued by applying the water economic price of €0.274/m³ resulting from the demand function model. The value of the sediments retained, in terms of water capacity, was then estimated by applying a discount rate of 2% over a period of 30 years [9,23,39]. Total Economic Value of the Investigated Services The results showed that the total economic value of the whole area of Ain Snoussi (cork oak forest, clear and dense, and shrubs) is estimated at €0.55 million/year, obtained as follows: 40% from regulating services (21% from sediment retention and 19% from carbon sequestration) and 60% from provisioning services (50% from grazing and 11% from cork). Figure 3 shows the distribution of the TEV across the land cover types. Grazing provides the highest value for all the land cover types, with 79% for shrubs, 48% for clear forest and 33% for dense forest, followed by sediment retention, which provides 24% of the TEV of clear forests and 19.3% and 19.2% for shrubs and dense forests, respectively. Carbon sequestration contributes up to 30% of the TEV of the dense forests, 17% for the clear forests, and 2% for shrubs. Cork gives the lowest percentage of the TEV, with 17% for dense forests and 11% for clear forests. Table 2 summarizes the biophysical flows and the economic value of the selected ecosystem services provided by the cork oak forest (dense and clear) and the shrubs in Ain Snoussi, together with the average economic value in euro per ha per year for each land cover type. The provisioning services accounted for around €0.33 million/year, provided mainly by the estimated grazing and the average annual production of cork (considered by the public authorities as the main product of the area, as it is a market product). Regulating services, in terms of sediment retention and carbon flow, accounted for €0.21 million/year. The total economic value (TEV), the sum of the estimated ecosystem services in the cork oak forest and shrubs of Ain Snoussi, was evaluated at €0.55 million/year, giving an average TEV of €194/ha/year. With €0.41 million/year of total economic value at the cork oak forest level, the importance of the ecosystem services provided in the area, and more precisely of the non-market services that are commonly underestimated, is highlighted. Knowing the TEV of the cork oak forest hints at the importance of the ecosystem, can be used as a solid argument for management and conservation, and mainly highlights the important differences between clear forest and dense forest. However, this knowledge is not enough to create targeted action.
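The aggregation logic behind the TEV and the two management scenarios can be reproduced with a few lines of Python. The per-hectare values and areas below are rough illustrative figures of the same order as the averages quoted in the text (about €210, €217 and €157 per ha for dense forest, clear forest and shrubs); they are not the study's Table 2 data, so the printed totals only approximate the reported results.

```python
# Average economic value per land cover (EUR/ha/year) and areas (ha);
# illustrative values only, not the study's Table 2 data.
value_per_ha = {"dense_forest": 210, "clear_forest": 217, "shrubs": 157}
bau_areas = {"dense_forest": 900, "clear_forest": 1000, "shrubs": 900}

def tev(areas, values):
    """Total economic value = sum over land covers of area * average value."""
    return sum(areas[lc] * values[lc] for lc in areas)

# Scenario 1: dense stands are cleared to ~300 trees/ha and re-valued with a
# hypothetical post-clearing average of EUR 207/ha.
scenario1_values = dict(value_per_ha, dense_forest=207)

# Scenario 2: the shrub area is afforested and takes the average value of the
# clear forest, as assumed in the study.
scenario2_areas = {"dense_forest": 900, "clear_forest": 1900, "shrubs": 0}

for name, areas, values in [("BAU", bau_areas, value_per_ha),
                            ("Density decrease", bau_areas, scenario1_values),
                            ("Afforestation", scenario2_areas, value_per_ha)]:
    total = tev(areas, values)
    print(f"{name:>16}: {total / 1e6:.2f} MEUR/year, "
          f"{total / sum(bau_areas.values()):.0f} EUR/ha/year")
```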
A spatial distribution of the average TEV of the cork oak forest was obtained by computing unitary ecosystem service values per ha. The results showed a variation between €122 and €267 per ha per year for the dense forest, with an average TEV of €210, and a variation between €166 and €292 per ha per year for the clear forest, with an average of €217 (Figure 4). The same figure shows the spatial distribution of the average economic value of the shrubs, with a variation between €128 and €278 per ha per year and an average of €157, for all the parcels larger than 1 ha. Management Scenarios Results Cork oak clear and dense forest and shrubland were considered for the evaluation of a new management plan. For decision-making support purposes, two scenarios were evaluated to check which was the most valuable (Table 3). The estimation of the two previously described scenarios showed that the second scenario, based on the afforestation of the shrub area, provided the highest TEV (€0.68 million/year) compared with both the BAU (€0.55 million/year) and the first scenario based on density decrease (€0.54 million/year) (Table 3). The first scenario had a slight effect on the ecosystem services, with a change over 1095 ha of cleared cork oak surface bringing the density to 300 trees/ha. The change led to a decrease of 2% in the total economic value.
The scenario negatively affected both carbon sequestration and cork, an effect that can be explained by the decrease in the number of trees, while for grazing and sediment retention a remarkable increase was noticed, of 14% and 40%, respectively. This increase can be explained by the negative correlation of grazing with forest density (the higher the density, the lower the grazing value per forage unit). For sediment retention, the results could be interpreted on the basis of the initial biophysical results, which showed that the sediment retention of the clear forest was higher than that of the dense forest. These results can be explained by two major biophysical factors: the variation of the elevation and the recovery rate of the impacted area. The second scenario led to a larger impact, with an increase of 24% in the TEV. The afforestation scenario had a considerable positive impact on cork (37%), carbon sequestration (30%), and sediment retention (83%). Only the grazing service was negatively impacted, with a decrease of 6%. The spatial distribution (Figure 5) showed the exact geolocation for each scenario and both the remaining and the new economic value for each area affected by the transition. Under the first scenario, the homogeneous forest area with a density lower than 317 trees/ha presented an average TEV of €207/ha, with no change for the shrub area. The second scenario showed the appearance of a new plantation replacing the shrub area, with an average TEV of €218/ha (the average TEV of the clear forest under the BAU). Discussion This study investigated both the biophysical flows and the economic value of cork oak ecosystem services, with emphasis on their spatial distribution. Tunisian forests are particularly vulnerable to change and degradation risks due, on the one hand, to their location on the southern side of the Mediterranean basin and their exposure to climate change [40] and, on the other hand, to the presence of the local population and the pressure it exerts [26].
The present study answered several questions and addressed doubts about the current situation of cork oak forests in Tunisia and about potential management scenarios. However, the restricted number of ecosystem services investigated, due to data limitations, was a serious constraint. Remaining faithful to the original concept behind the valuation of ES, monetary valuation was applied in this study as a tool to capture the benefits humans may receive from nature, which can be underestimated due to their non-market characteristics. The assessment showed that the average total economic value of €194/ha was higher than the average TEV of Tunisian cork oak forests given by the most recent study, from 2010, which reported €160/ha (305.4 TND/ha in 2010) [41], suggesting that, at least during the past five years, no degradation has occurred in the study area and indicating an increase in the total value. To preserve this resource and to support decision-making toward sustainable management, the valuation of two management scenarios was carried out, based on the suggestion of the forest administration to increase the forest area by replacing the shrub lands, and on the suggestion of researchers to decrease the forest density. Scenario 1 showed that decreasing the forest density would lead to a decrease of 2% in the total value of the forest, while the second scenario, based on afforestation, would allow a significant increase of 24% in the TEV. The process of distinguishing two types of forest makes the management suggestions more precise and clear. Considering that management is usually based on biophysical objectives and that the forest is divided into parcels, a land cover change can be suggested based on values and geographic distribution. Based on the latest Tunisian Sustainable Development Strategy 2015-2024, the national policy is oriented towards sustainable management, taking into consideration the existence of the local population and aiming to solve a long-standing conflict and an institutional issue [42][43][44]. To align with this policy, the results of this study should be discussed among the three stakeholders initially involved in the elaboration of the studied scenarios. Such results can be used to make accurate and precise management plans to face and prevent anthropogenic and non-anthropogenic hazards, and to warn decision-makers. The spatial distribution of economic value by land cover type and ecosystem service is a new tool that may facilitate future planning and management of such stands, suggesting changes in certain areas.
However, two constraints should be taken into consideration at this level: first, such results are sensitive to the chosen methods; second, these results are specific to this case study, so excessive generalization may be harmful to biodiversity. The impacts of such scenarios on local populations and their social acceptance also need to be investigated. Conclusions The present work showed the total economic value of a set of investigated ecosystem services provided by the cork oak forest in Ain Snoussi and highlighted the importance of the supplied services in terms of market and non-market products. Mapping ecosystem services allows a deeper and more explicit understanding and a clearer visualization of the spatial distribution of the total economic value, and of the individual services in terms of biophysical and economic values. The results also showed that spatially distributed value can be used as a strong tool with which to assess management scenarios and guide decision-makers. The chosen example of management scenarios showed that, for the Ain Snoussi forest area, increasing the forest stand by planting the shrub area is the most valuable scenario; however, a cost-benefit analysis needs to be carried out in order to return more relevant results. Author Contributions: M.K. and H.D.-H. were responsible for the overall design of the study, the methodology, the interpretation of the results, and the writing of the paper. B.S. and S.J. were responsible for the biophysical models and made significant contributions to the scenario elaboration and map design. All authors have read and agreed to the published version of the manuscript.
7,049
2020-02-11T00:00:00.000
[ "Environmental Science", "Economics" ]
Subgame-perfect Equilibria in Mean-payoff Games In this paper, we provide an effective characterization of all the subgame-perfect equilibria in infinite duration games played on finite graphs with mean-payoff objectives. To this end, we introduce the notion of requirement, and the notion of negotiation function. We establish that the plays that are supported by SPEs are exactly those that are consistent with the least fixed point of the negotiation function. Finally, we show that the negotiation function is piecewise linear, and can be analyzed using the linear algebraic tool box. As a corollary, we prove the decidability of the SPE constrained existence problem, whose status was left open in the literature. Introduction The notion of Nash equilibrium (NE) is one of the most important and most studied solution concepts in game theory.A profile of strategies is an NE when no rational player has an incentive to change their strategy unilaterally, i.e. while the other players keep their strategies.Thus an NE models a stable situation.Unfortunately, it is well known that, in sequential games, NEs suffer from the problem of non-credible threats, see e.g.[Osb04].In those games, some NEs only exist when some players do not play rationally in subgames and so use non-credible threats to force the NE.This is why, in sequential games, the stronger notion of subgame-perfect equilibrium is used instead: a profile of strategies is a subgame-perfect equilibrium (SPE) if it is an NE in all the subgames of the sequential game.Thus SPEs impose rationality even after a deviation has occured. In this paper, we study sequential games that are infinite-duration games played on graphs with mean-payoff objectives, and focus on SPEs.While NEs are guaranteed to exist in infinite duration games played on graphs with mean-payoff objectives, it is known that it is not the case for SPEs, see e.g.[SV03,BBMR15].We provide in this paper a constructive characterization of the entire set of SPEs, which allows us to decide, among others, the SPE threshold problem.This problem was left open in previous contributions on the subject.More precisely, our contributions are described in the next paragraphs. 1.1.Contributions.First, we introduce two important new notions that allow us to capture NEs, and more importantly SPEs, in infinite duration games played on graphs with meanpayoff objectives 1 : the notion of requirement and the notion of negotiation function. A requirement λ is a function that assigns to each vertex v ∈ V of a game graph a value in R ∪ {−∞, +∞}.The value λ(v) represents a requirement on any play ρ = ρ 0 ρ 1 . . .ρ n . . .that traverses this vertex: if one wants the player who controls the vertex v to follow ρ and to give up deviating from ρ, then the play must offer to that player a payoff that is at least λ(v).An infinite play ρ is λ-consistent if, for each player i, the payoff of ρ for player i is larger than or equal to the largest value of λ on vertices occurring along ρ and controlled by player i. We first use these notions to rephrase a classical result about NEs: if λ maps each vertex v to the largest value that the player who controls v can secure against a fully adversarial coalition of the other players, i.e. if λ(v) is the zero-sum worst-case value, then the set of plays that are λ-consistent is exactly the set of plays that are supported by an NE (Theorem 3.8). 
As SPEs are forcing players to play rationally in all subgames, we cannot rely on the zero-sum worst-case value to characterize them.Indeed, when considering the worst-case value, we allow adversaries to play fully adversarially after a deviation and so potentially in an irrational way w.r.t.their own objective.In fact, in an SPE, a player is refrained to deviate when opposed by a coalition of rational adversaries.To characterize this relaxation of the notion of worst-case value, we rely on our notion of negotiation function. The negotiation function nego operates from the set of requirements into itself.To understand the purpose of the negotiation function, let us consider its application on the requirement λ that maps every vertex v to the worst-case value as above.Now, we can naturally formulate the following question: given v and λ, can the player who controls v improve the value that they can ensure against all the other players, if only plays that are consistent with λ are proposed by the other players?In other words, can this player enforce a better value when playing against the other players if those players are not willing to give away their own worst-case value?Clearly, securing this worst-case value can be seen as a minimal goal for any rational adversary.So nego(λ)(v) returns this value; and this reasoning can be iterated.One of the contributions of this paper is to show that the least fixed point λ * of the negotiation function is exactly characterizing the set of plays supported by SPEs (Theorem 4.4). To turn this fixed point characterization of SPEs into algorithms, we additionally draw links between the negotiation function and two classes of zero-sum games, that are called abstract and concrete (see Theorem 6.3) negotiation games.The abstract negotiation game is conceptually simple but is played on an uncountably infinite graph, and therefore cannot be turned into an effective algorithm.However, it captures the intuition behind the concrete negotiation game, which is played on a finite graph.We show that in the concrete negotiation game, one of the players has a memoryless optimal strategy (Lemma 6.4), which can be used to solve it effectively.Thus, the negotiation function is computable.However, that is not 1 A large part of our results apply to the larger class of games with prefix-independent objectives.For the sake of readability of this introduction, we focus here on mean-payoff games but the technical results in the paper usually cover broader classes of games. sufficient to compute its least fixed point, because the sequence of Kleene-Tarski's iterations may require a transfinite number of steps to reach a fixed point (Theorem 5.3). Nevertheless, we prove that the concrete negotiation game can be used to construct a geometrical representation of the fixed points of the negotiation function, from which one can extract its least fixed point (Theorem 7.3).Thus, the SPE threshold problem is decidable (Theorem 8.3). All the previous results do also apply to ε-SPEs, a classical quantitative relaxation of SPEs -see for example [FP16].In particular, Theorem 8.3 does also apply to the ε-SPE threshold problem. 1.2.Related works.Non-zero sum infinite duration games have attracted a large attention in recent years, with applications targeting reactive synthesis problems.We refer the interested reader to the following survey papers [BCH + 16, Bru17] and their references for the relevant literature.We detail below contributions more closely related to the work presented here. 
In [BDS13], Brihaye et al. offer a characterization of NEs in quantitative games for cost-prefix-linear reward functions based on the worst-case value.The mean-payoff is costprefix-linear.In their paper, the authors do not consider the stronger notion of SPE, which is the central solution concept studied in our paper.In [BMR14], Bruyère et al. study secure equilibria that are a refinement of NEs.Secure equilibria are not subgame-perfect and are, as classical NEs, subject to non-credible threats in sequential games. In [Umm06], Ummels proves that there always exists an SPE in games with ω-regular objectives and defines algorithms based on tree automata to decide constrained SPE problems.Strategy logics, see e.g.[CHP10], can be used to encode the concept of SPE in the case of ω-regular objectives with application to the rational synthesis problem [KPV16] for instance.In [FKM + 10], Flesch et al. show that the existence of ε-SPEs is guaranteed when the reward function is lower-semicontinuous.The mean-payoff reward function is neither ω-regular, nor lower-semicontinuous, and so the techniques defined in the papers cited above cannot be used in our setting.Furthermore, as already recalled above, see e.g.[VS03,BBMR15], contrary to the ω-regular case, SPEs in games with mean-payoff objectives may fail to exist. In [BBMR15], Brihaye et al. introduce and study the notion of weak subgame-perfect equilibria, which is a weakening of the classical notion of SPE.This weakening is equivalent to the original SPE concept on reward functions that are continuous.This is the case for example for the quantitative reachability reward function, on which Brihaye et al. solve the SPE threshold problem in [BBG + 19].On the contrary, the mean-payoff cost function is not continuous and the techniques used in [BBMR15], and generalized in [BRPR17], cannot be used to characterize SPEs for the mean-payoff reward function. In [Meu16], Meunier develops a method based on Prover-Challenger games to solve the problem of the existence of SPEs on games with a finite number of possible payoffs.This method is not applicable to the mean-payoff reward function, as the number of payoffs in this case is uncountably infinite. In [FP17], Flesch and Predtetchinski present another characterization of SPEs on games with finitely many possible payoffs, based on a game structure that we will present here under the name of abstract negotiation game.Our contributions differ from this paper in two fundamental aspects.First, it lifts the restriction to finitely many possible payoffs.This is crucial as mean-payoff games violate this restriction.Instead, we identify a class of games, that we call with steady negotiation, that encompasses mean-payoff games and for which some of the conceptual tools introduced in that paper can be generalized.Second, the procedure developed by Flesch and Predtetchinski is not an algorithm in computer science acceptation: it needs to solve infinitely many games that are not represented effectively, and furthermore it needs a transfinite number of iterations.On the contrary, our procedure is effective and leads to a complete algorithm in the classical sense: with guarantee of termination in finite time and applied on effective representations of games. In [CDE + 10], Chaterjee et al. 
study mean-payoff automata, and give a result that can be translated into an expression of all the possible payoff vectors in a mean-payoff game.In [BR15], Brenguier and Raskin give an algorithm to build the Pareto curve of a multi-dimensional two-player zero-sum mean-payoff game.Techniques defined in these papers are used in several technical steps of our algorithm.1.3.Structure of the paper.In Section 2, we introduce the necessary background.Section 3 defines the notion of requirement and the negotiation function.Section 4 shows that the set of plays that are supported by an SPE are those that are λ-consistent, where λ is a fixed point of the negotiation function.Section 5 draws a link between the negotiation function and the abstract negotiation game.Section 6 shows that the abstract negotiation game can be transformed into a game on a finite graph, the concrete negotiation game, which can be solved to compute effectively the negotiation function.Section 7 uses the concrete negotiation game to prove that the negotiation function is a piecewise affine function, of which one can compute an effective representation.Finally, Section 8 applies these results to prove that the SPE threshold problem in mean-payoff games is decidable, 2ExpTime-easy and NP-hard. Background 2.1.Games, strategies, equilibria.In all what follows, we will use the word game for the infinite duration turn-based quantitative games on finite graphs with complete information.Definition 2.1 (Non-initialized game).A non-initialized game -or game for short -is a tuple G = (Π, V, (V i ) i∈Π , E, µ), where: • Π is a finite set of players; • (V, E) is a directed graph, called the underlying graph of G, whose vertices are sometimes called states and whose edges are sometimes called transitions, and in which every state has at least one outgoing transition.For the simplicity of writing, a transition (v, w) ∈ E will often be written vw. Definition 2.2 (Initialized game).An initialized game -or game for short -is a tuple (G, v 0 ), often written G ↾v 0 , where G is a non-initialized game and v 0 ∈ V is a state called initial state. When the context is clear, we often use the word game for both initialized and noninitialized games. Definition 2.3 (Play, history).A play (resp.history) in the game G is an infinite (resp.finite) path in the graph (V, E).It is also a play (resp.history) in the initialized game G ↾v 0 , where v 0 is its first vertex.The set of plays (resp.histories) in the game G (resp. the initialized game G ↾v 0 ) is denoted by PlaysG (resp.PlaysG ↾v 0 , HistG, HistG ↾v 0 ).We write Hist i G (resp.Hist i G ↾v 0 ) for the set of histories in G (resp.G ↾v 0 ) of the form hv, where v is a vertex controlled by player i. Given a play ρ (resp.a history h), we write Occ(ρ) (resp.Occ(h)) the set of vertices that appear in ρ (resp.h), and Inf(ρ) the set of vertices that appear infinitely often in ρ.For a given index k, we write ρ ≤k (resp.h ≤k ), or ρ <k+1 (resp.h <k+1 ), the finite prefix ρ 0 . . .ρ k (resp.h 0 . . .h k ), and ρ ≥k (resp.h ≥k ), or ρ >k−1 (resp.h >k−1 ), the infinite (resp.finite) suffix ρ k ρ k+1 . . .(resp.h k h k+1 . . .h |h|−1 ).Finally, we write first(ρ) (resp.first(h)) the first vertex of ρ, and last(h) the last vertex of h.Definition 2.4 (Strategy, strategy profile).A strategy for player i in the initialized game G ↾v 0 is a function σ i : Hist i G ↾v 0 → V , such that vσ i (hv) is an edge of (V, E) for every hv.A history h is compatible with a strategy σ i if and only if h k+1 = σ i (h 0 . . 
. h k ) for all k such that h k ∈ V i . A play ρ is compatible with σ i if all its prefixes are. A strategy profile for P ⊆ Π is a tuple σ P = (σ i ) i∈P , where each σ i is a strategy for player i in G ↾v 0 . A play or a history is compatible with σ P if it is compatible with every σ i for i ∈ P . A complete strategy profile, usually written σ, is a strategy profile for Π. Exactly one play is compatible with the strategy profile σ: we call it its outcome and write it ⟨σ⟩. When i is a player and when the context is clear, we will often write −i for the set Π \ {i}. We will often refer to Π \ {i} as the environment against player i. When τ P and τ ′ Q are two strategy profiles with P ∩ Q = ∅, (τ P , τ ′ Q ) denotes the strategy profile σ P∪Q such that σ i = τ i for i ∈ P , and σ i = τ ′ i for i ∈ Q. In a strategy profile σ P , the σ i 's domains are pairwise disjoint. Therefore, we can consider σ P as one function: for hv ∈ HistG ↾v 0 such that v is controlled by some player of P , we liberally write σ P (hv) for σ i (hv) with i such that v ∈ V i .

Before moving on to SPEs, let us recall the notion of Nash equilibrium.

Definition 2.5 (Nash equilibrium). Let G ↾v 0 be a game. A strategy profile σ is a Nash equilibrium - or NE for short - in G ↾v 0 if and only if for each player i and for every strategy σ ′ i , called a deviation of σ i , we have the inequality µ i (⟨σ ′ i , σ −i ⟩) ≤ µ i (⟨σ⟩). To define SPEs, we need the notion of subgame.

Definition 2.6 (Subgame, substrategy). Let hv be a history in the game G. The subgame of G after hv is the game (Π, V, (V i ) i , E, µ ↾hv ) ↾v , where µ ↾hv maps each play to its payoff in G, assuming that the history hv has already been played: formally, for every ρ ∈ PlaysG ↾hv , we have µ ↾hv (ρ) = µ(hρ). If σ i is a strategy in G ↾v 0 , its substrategy after hv is the strategy σ i↾hv in G ↾hv , defined by σ i↾hv (h ′ ) = σ i (hh ′ ) for every h ′ ∈ Hist i G ↾hv .

Remark 2.7. The initialized game G ↾v 0 is also the subgame of G after the one-state history v 0 .

Definition 2.8 (Subgame-perfect equilibrium). Let G ↾v 0 be a game. The strategy profile σ is a subgame-perfect equilibrium - or SPE for short - in G ↾v 0 if and only if for every history h in G ↾v 0 , the strategy profile σ ↾h is a Nash equilibrium in the subgame G ↾h . The notion of subgame-perfect equilibrium can be seen as a refinement of Nash equilibrium: it is a stronger equilibrium which excludes players resorting to non-credible threats.

Example 2.9. In the game represented in Figure 1a, where the square state is controlled by player □ and the round states by player ○, if both players get the payoff 1 when the state d is reached and 0 in the other cases, there are actually two NEs: one, in blue, where player □ goes to the state b and then player ○ goes to d, and both win; and one, in red, where player □ goes to c because player ○ was planning to go to e. However, only the blue one is an SPE, as moving from b to e is irrational for player ○ in the subgame G ↾ab . A brute-force check of these four strategy profiles is sketched at the end of this subsection. An ε-SPE is a strategy profile which is almost an SPE: if a player deviates after some history, they will not be able to improve their payoff by more than a quantity ε ≥ 0. Hereafter, we focus on prefix-independent games, and in particular mean-payoff games.
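The following Python sketch makes Example 2.9 concrete by enumerating the four pure strategy profiles of the game of Figure 1a and testing which of them are NEs and which are subgame-perfect. The encoding of the game (two choices per player, payoff 1 to both players exactly when d is reached) is a reconstruction of the example, not code from the paper.

```python
from itertools import product

# Toy encoding of the game of Figure 1a: the square player chooses at a
# between b and c, the round player chooses at b between d and e.
def payoff(square_choice, round_choice):
    if square_choice == "c":
        return (0, 0)
    return (1, 1) if round_choice == "d" else (0, 0)

def is_nash(sq, rd):
    p = payoff(sq, rd)
    no_square_dev = all(payoff(dev, rd)[0] <= p[0] for dev in ("b", "c"))
    no_round_dev = all(payoff(sq, dev)[1] <= p[1] for dev in ("d", "e"))
    return no_square_dev and no_round_dev

def is_spe(sq, rd):
    # In the subgame after the history ab, the round player's choice must also
    # be a best response, even if the move a -> c makes b unreachable.
    round_best_in_subgame = all(payoff("b", rd)[1] >= payoff("b", dev)[1]
                                for dev in ("d", "e"))
    return is_nash(sq, rd) and round_best_in_subgame

for sq, rd in product(("b", "c"), ("d", "e")):
    labels = [name for name, ok in (("NE", is_nash(sq, rd)),
                                    ("SPE", is_spe(sq, rd))) if ok]
    print(f"a->{sq}, b->{rd}: {' '.join(labels) or '-'}")
```

Running it reports the blue profile (a to b, b to d) as the only SPE and the red profile (a to c, b to e) as an NE that is not subgame-perfect, matching the example.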
2.2. Mean-payoff games. Definition 2.12 (Mean-payoff, mean-payoff game). In a graph (V, E), we associate to each reward function r : E → Q the mean-payoff function MP r , defined for every play ρ = ρ 0 ρ 1 . . . by MP r (ρ) = lim inf n→∞ (1/n) Σ k=0,...,n−1 r(ρ k ρ k+1 ). A game G = (Π, V, (V i ) i , E, µ) is a mean-payoff game if its underlying graph is finite, and if there exists a tuple (r i ) i∈Π of reward functions such that for each player i and every play ρ, we have µ i (ρ) = MP r i (ρ).

In a mean-payoff game, the quantities given by the function r i represent the immediate reward that each action gives to player i. The final payoff of player i is their average payoff along the play, classically defined as the limit inferior over n (since the limit may not be defined) of the average payoff after n steps. When the context is clear, we liberally write MP i (h) for MP r i (h), and MP(h) for (MP i (h)) i , as well as r(uv) for (r i (uv)) i .

Definition 2.13 (Prefix-independent game). A game G is prefix-independent if, for every history h and for every play ρ, we have µ(hρ) = µ(ρ). We also say, in that case, that the payoff function µ is prefix-independent.

Mean-payoff games are prefix-independent. A first important result that we need is the characterization of the set of possible payoffs in a mean-payoff game, which has been introduced in [CDE + 10]. Given a graph (V, E), we write SC(V, E) for the set of simple cycles it contains. Given a finite set D of dimensions and a set X ⊆ R D , we write ConvX for the convex hull of X. We will often use the subscript notation Conv x∈X f (x) for the set Convf (X). That characterization expresses the set of possible payoff vectors in terms of the sets Conv c∈SC(V,E) MP(c), i.e. the convex hulls of the mean-payoffs of the simple cycles; it is illustrated by the second example below.

2.3. The ε-SPE threshold problem. In the sequel, we prove the decidability of the ε-SPE threshold problem, which is a generalization of the SPE threshold problem (since SPEs are 0-SPEs and conversely, by Remark 2.11): given a game G ↾v 0 , a value ε ≥ 0 and two threshold vectors x, y ∈ R Π , it asks whether there exists an ε-SPE σ in G ↾v 0 whose outcome payoff µ(⟨σ⟩) lies componentwise between x and y. That problem is illustrated by the two following examples.

Example 2.18 (A game without SPEs). Let G be the mean-payoff game of Figure 1b, where each edge is labelled by the rewards of the two players. No reward is given for the edges ac and bd since they can be used only once, and therefore do not influence the final payoff. For now, the reader should not pay attention to the red labels below the states. As shown in [BRPR16], this game does not have any SPE, neither from the state a nor from the state b. Indeed, the only NE outcomes from the state b are the plays where the player who controls b eventually leaves the cycle ab and goes to d: if he stays in the cycle ab forever, then the player who controls a would be better off leaving it, and if she does, he would be better off leaving it before her. From the state a, if the player who controls a knows that the other player will leave, she has no incentive to do it before him: there is no NE where she leaves the cycle; at best she plans to do it only if he never does. Therefore, there is no SPE in which she leaves the cycle. But then, after a history that terminates in b, the player who controls b actually has no incentive to leave if the other player never plans to do it afterwards: contradiction.

The second example is played on a graph with two states a and b, each carrying a self-loop and an edge to the other state; its possible payoffs correspond to the gray and blue areas in Figure 2b. Indeed, following exclusively one of the three simple cycles a, ab and b of the game graph during a play yields the payoffs 01, 10 and 22, respectively. By combining those cycles with well chosen frequencies, one can obtain any payoff in the convex hull of those three points; a small numerical illustration of this frequency argument is given below. Now, it is also possible to obtain the point 00 by using the properties of the limit inferior: it is for instance the payoff of the play a 2 b 4 a 16 b 256 . . . a 2 2 n b 2 2 n+1 . . . . In fact, one can construct a play that yields any payoff in the convex hull of the four points 00, 10, 01, and 22.
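The frequency argument above can be checked numerically. The sketch below assumes, for illustration only, concrete edge rewards that reproduce the cycle payoffs 01, 10 and 22 of the two-state game, and computes the running average of a long periodic play mixing the self-loop on a with the cycle ab; the mean-payoff is the limit inferior of this running average.

```python
import numpy as np

def running_average(reward_sequence):
    """Average reward of every prefix of a (finite prefix of a) play; the
    mean-payoff is the liminf of this sequence."""
    rewards = np.asarray(reward_sequence, dtype=float)
    return np.cumsum(rewards, axis=0) / np.arange(1, len(rewards) + 1)[:, None]

# Assumed edge rewards (one pair per player) reproducing the cycle payoffs:
# self-loop on a -> (0, 1), self-loop on b -> (1, 0), cycle ab -> (2, 2).
loop_a, loop_b, cycle_ab = [(0, 1)], [(1, 0)], [(1, 1), (3, 3)]

# Spending 1/3 of the time on the loop of a and 2/3 on the cycle ab
# approximates the convex combination (1/3)*(0,1) + (2/3)*(2,2) = (4/3, 5/3).
prefix = (loop_a + cycle_ab) * 20000
print(running_average(prefix)[-1])   # ~ [1.3333, 1.6667]
```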
We claim that the payoffs of SPEs plays correspond to the red-circled area in Figure 2b: there exists an SPE σ in G ↾a with ⟨σ⟩ = ρ if and only if µ (ρ), µ (ρ) ≥ 1.That statement will be a direct consequence of the results we show in the remaining sections, but let us give a first intuition: a play with such a payoff necessarily uses infinitely often both states.It is an NE outcome because none of the players can get a better payoff by looping forever on their state, and they can both force each other to follow that play, by threatening them to loop for ever on their state whenever they can.But such a strategy profile is clearly not an SPE. It can be transformed into an SPE as follows: when a player deviates, say player , then player can punish him by looping on a, not forever, but a large number of times, until player 's mean-payoff gets very close to 1. Afterwards, both players follow again the play that was initially planned.Since that threat is temporary, it does not affect player 's payoff on the long term, but it really punishes player if that one tries to deviate infinitely often. Not that such an SPE requires infinite memory. 2.4.Two-player zero-sum games.The concept of SPEs has been designed for non-zerosum games with arbitrarily many players, but the methods we will present in the sequel will bring us back to the more classical framework of two-player zero-sum games, with more complex payoff functions.We will therefore need the following notions and results. Definition 2.21 (Borel game).A game G is Borel if the function µ, from the set V ω equipped with the product topology to the Euclidian space R Π , is Borel, i.e. if, for every Borel set B ⊆ R Π , the set µ −1 (B) is Borel. Definition 2.24 (Optimal strategy).Let G ↾v 0 be a zero-sum Borel game, with Now, let us define memoryless strategies, and state a condition under which they can be optimal. Definition 2.25 (Memoryless strategy).A strategy σ i in a game G ↾v 0 is memoryless if for all vertices v ∈ V i and for all histories h and h ′ , we have For every game G ↾v 0 and each player i, we write ML i (G ↾v 0 ), or ML (G ↾v 0 ) when the context is clear, for the set of memoryless strategies for player i in G ↾v 0 . Lemma 2.29.In a two-player zero-sum game played on a finite graph, every player whose payoff function is concave has an optimal strategy that is memoryless. Proof.According to [Kop06], this result is true for qualitative objectives, i.e. when µ can only take the values 0 and 1.It follows that for every α ∈ R, if a player i, whose payoff function is concave, has a strategy that ensures µ i (ρ) ≥ α (understood as a qualitative objective), then they have a memoryless one.Hence the equality: Since the underlying graph (V, E) is assumed to be finite, there exists a finite number of memoryless strategies, hence the infimum above is realized by a memoryless strategy σ 1 that is, therefore, finite. Requirements and negotiation We will now see that SPEs are strategy profiles that respect some requirements about the payoffs, depending on the states they traverse.In this part, we develop the notions of requirement and negotiation. 3.1.Requirement.In the method we will develop further, we will need to analyze the players' behaviours when they have some requirement to satisfy.Intuitively, one can see requirements as rationality constraints for the players, that is, a threshold payoff value under which a player will not accept to follow a play.In all what follows, R denotes the set R ∪ {±∞}. 
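Before the formal definitions, here is a minimal sketch (ours, not the paper's) of how requirements will be manipulated: a requirement is simply a mapping from vertices to R ∪ {±∞}, and for a prefix-independent game a play is consistent with it exactly when every player receives at least the threshold attached to each vertex they control that occurs in the play (this is how consistency is restated in Remark 6.2 below). The ownership map and the numeric values used in the example calls are illustrative assumptions only.

    from math import inf

    def is_consistent(lam, owner, payoff, visited):
        """lam: requirement, vertex -> value in R ∪ {±inf};
        owner: vertex -> player;  payoff: player -> mean payoff of the play;
        visited: set of vertices occurring in the play."""
        return all(payoff[owner[v]] >= lam[v] for v in visited)

    # Flavour of Example 3.7 / Figure 1b, with lam(a) = 1 and lam(b) = 2 as computed
    # there; which player controls which state is an assumption for illustration only.
    lam = {'a': 1, 'b': 2, 'c': -inf, 'd': -inf}
    owner = {'a': 1, 'b': 2, 'c': 1, 'd': 2}
    print(is_consistent(lam, owner, payoff={1: 2, 2: 3}, visited={'a', 'b'}))   # True
    print(is_consistent(lam, owner, payoff={1: 0, 2: 3}, visited={'a', 'b'}))   # False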
Definition 3.1 (Requirement).A requirement on the game G is a mapping λ : V → R. For a given state v, the quantity λ(v) represents the minimal payoff that the player controlling v will require in a play beginning in v. The set of the λ-consistent plays from a state v is denoted by λCons(v).Definition 3.3 (λ-rationality).Let λ be a requirement on a mean-payoff game G. Let i ∈ Π.A strategy profile σ−i is λ-rational if and only if there exists a strategy σ i such that, for every history hv compatible with σ−i , the play ⟨σ ↾hv ⟩ is λ-consistent.We then say that the strategy profile σ−i is λ-rational assuming σ i .The set of λ-rational strategy profiles in G ↾v is denoted by λRat(v). Note that λ-rationality is a property of a strategy profile for all the players but one, player i. Intuitively, their rationality is justified by the fact that they collectively assume that player i will, eventually, play according to the strategy σ i : if it is the case, then everyone gets their payoff satisfied. Finally, let us define a particular requirement: the vacuous requirement, that requires nothing, and with which every play is consistent.Definition 3.4 (Vacuous requirement).In any game, the vacuous requirement, denoted by λ 0 , is the requirement constantly equal to −∞. 3.2.Negotiation.We will show that SPEs in prefix-independent games are characterized by the fixed points of a function on requirements.That function captures a negotiation process: when a player has a requirement to satisfy, another player can hope a better payoff than what they can secure in general, and therefore update their own requirement.Note that we always use the convention inf ∅ = +∞.Definition 3.5 (Negotiation function, steady negotiation).Let G be a game.The negotiation function is the function that transforms any requirement λ on G into a requirement nego(λ) on G, such that for each i ∈ Π and v ∈ V i , we have: If that infimum is realized for every λ, i and v ∈ V i such that λRat(v) ̸ = ∅, then the game G is called a game with steady negotiation2 . Remark 3.6.The negotiation function satisfies the following properties. • There exists a λ-rational strategy profile from v against the player controlling v if and only if nego(λ)(v) ̸ = +∞. In the general case, the quantity nego(λ)(v) represents the worst case value that the player controlling v can ensure, assuming that the other players play λ-rationally. Example 3.7.Let us consider the game of Example 2.18: in Figure 1b, on the two first lines below the states, we present the requirements λ 0 and λ 1 = nego(λ 0 ), which is easy to compute since any strategy profile is λ 0 -rational: for each v, λ 1 (v) is the classical worst-case value or antagonistic value of v, i.e. the best value the player controlling v can enforce against a fully hostile environment.Let us now compute the requirement λ 2 = nego(λ 1 ). From c, there exists exactly one λ 1 -rational strategy profile σ− = σ , which is the empty strategy since player has never to choose anything.Against that strategy, the best and the only payoff player can get is 1, hence λ 2 (c) = 1.For the same reasons, λ 2 (d) = 2. From b, player can force to get the payoff 2 or less, with the strategy profile σ : h → c.Such a strategy is λ 1 -rational, assuming the strategy σ : h → d.Therefore, we have λ 2 (b) = 2. 
Finally, from a, player can force to get the payoff 2 or less, with the strategy profile σ : h → d.Such a strategy is λ 1 -rational, assuming the strategy σ : h → c.But, he cannot force her to get less than the payoff 2, because she can force the access to the state b, and the only λ 1 -consistent plays from b are the plays with the form (ba) k bd ω .Therefore, λ 2 (a) = 2. It will be proved in Section 6 that mean-payoff games are with steady negotiation. 3.3.Link with Nash equilibria.Requirements and the negotiation function are able to capture Nash equilibria.Indeed, if λ 0 is the vacuous requirement, then nego(λ 0 ) characterizes the NE outcomes, in the following formal sense: Theorem 3.8.Let G be a game with steady negotiation.Then, a play ρ in G is an NE outcome if and only if ρ is nego(λ 0 )-consistent. Proof. • Let σ be a Nash equilibrium in G ↾v 0 , for some state v 0 , and let ρ = ⟨σ⟩ : let us prove that the play • Conversely, let ρ be a nego(λ 0 )-consistent play from a state v 0 .Let us define a strategy profile σ such that ⟨σ⟩ = ρ, by: -⟨σ⟩ = ρ; for each history of the form ρ 0 . . .ρ k v with v ̸ = ρ k+1 , let i be the player controlling ρ k .Since the game G is with steady negotiation, the infimum: −i be λ 0 -rational strategy profile from ρ k realizing that minimum, and let τ k i be some strategy from ρ k such that τ k i (ρ k ) = v.Then, we define: for every other history h, the state σ(h) is defined arbitrarily.Let us prove that σ is an NE: let σ ′ i be a deviation of σ i , let ρ ′ = ⟨σ −i , σ ′ i ⟩ and let ρ 0 . . .ρ k be the longest common prefix of ρ and ρ ′ .Let v = ρ ′ k+1 .Then, we have: Example 3.9.Let us consider again the game of Example 2.18, with the requirement λ 1 given in Figure 1b.The only λ 1 -consistent plays in this game, starting from the state a, are ac ω , and (ab) k d ω with k ≥ 1.One can check that those plays are exactly the NE outcomes in that game. In the following section, we will prove that as well as nego(λ 0 ) characterizes the NEs, the requirement that is the least fixed point of the negotiation function characterizes the SPEs. Link between negotiation and SPEs The notion of negotiation will enable us to find the SPEs, but also more generally the ε-SPEs, in a game.For that purpose, we need the notion of ε-fixed points of a function. Remark 4.2.A 0-fixed point is a fixed point, and conversely.By Tarski's fixed point theorem, the negotiation function, which is a monotone function from a complete lattice to itself, has a least fixed point.That result can be generalized to ε-fixed points. Proof.The following proof is a generalization of a classical proof of Tarski's fixed point theorem.Let Λ be the set of the ε-fixed points of the negotiation function.The set Λ is not empty, since it contains at least the requirement v → +∞.Let λ * be the requirement defined by: For every ε-fixed point λ of the negotiation function, we have then λ * (v) ≤ λ(v) for each v, and then nego(λ * )(v) ≤ nego(λ)(v) since nego is monotone; and therefore, we have nego(λ * )(v) ≤ λ(v) + ε.As a consequence, we have: The requirement λ * is an ε-fixed point of the negotiation function, and is therefore the least of them. 
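In simple cases the least fixed point whose existence is established above is reached by iterating the negotiation function from the vacuous requirement λ0; this is the negotiation sequence studied in Section 5.2. The generic sketch below (ours, not code from the paper) shows that iteration for an arbitrary monotone operator, reading "ε-fixed point" as nego(λ)(v) ≤ λ(v) + ε on every vertex, as in the proof of Theorem 4.4. Theorem 5.3 shows that for the actual negotiation function the iteration need not stabilise, hence the explicit step bound; the toy operator in the usage lines is a stand-in, not the real negotiation function.

    from math import inf

    def negotiation_sequence(states, nego, eps=0.0, max_steps=1000):
        """Iterate a monotone operator on requirements, starting from the vacuous
        requirement lambda_0 (Definition 3.4), until an eps-fixed point is found."""
        lam = {v: -inf for v in states}
        for _ in range(max_steps):
            new = nego(lam)
            if all(new[v] <= lam[v] + eps for v in states):
                return lam                      # lam is an eps-fixed point
            lam = new
        return None                             # did not stabilise (cf. Theorem 5.3)

    # Toy usage with a contrived monotone operator on two states:
    toy = lambda lam: {'a': max(lam['a'], 1.0),
                       'b': max(lam['b'], min(lam['a'], 2.0))}
    print(negotiation_sequence(['a', 'b'], toy))    # {'a': 1.0, 'b': 1.0}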
In all what follows, for a given game G and a given ε > 0, we will write λ * for the least ε-fixed point of the negotiation function.Intuitively, the requirement λ * is such that, from every vertex v, the player i controlling v cannot enforce a payoff greater than λ * (v) + ε against a λ * -rational behaviour.Therefore, the λ * -consistent plays are such that if one player tries to deviate, it is possible for the other players to prevent them improving their payoff by more than ε, while still playing rationally -which defines ε-SPE outcomes.Formally: Theorem 4.4.Let G ↾v 0 be a prefix-independent game played on a finite graph, and let ε ≥ 0. Let θ be a play starting in v 0 .If there exists an ε-SPE σ such that ⟨σ⟩ = θ, then θ is λ * -consistent.If G is also a game with steady negotiation, then conversely, if θ is λ * -consistent, then it is an ε-SPE outcome. Proof. Let us define a requirement λ by, for each i ∈ Π and v ∈ V i : Then, for every history hv starting in v 0 , the play ⟨σ ↾hv ⟩ is λ-consistent.In particular, the play θ is.Let us now prove that λ is an ε-fixed point of nego.We will then have λ ≥ λ * , which implies that the play θ is λ * -consistent.Let i ∈ Π, let v ∈ V i , and let us assume toward contradiction (since the negotiation function is non-decreasing) that nego(λ)(v) > λ(v) + ε, that is to say: Then, since all the plays generated by the strategy profile σ are λ-consistent, and therefore since any strategy profile of the form σ−i↾hv is λ-rational, we have: Therefore, there exists a history hv such that: which is impossible if the strategy profile σ is an ε-SPE.Therefore, there is no such v, and the requirement λ is an ε-fixed point of the negotiation function. • If G is a game with steady negotiation and θ is λ * -consistent, then θ is an ε-SPE outcome. -A particular case: if there exists v accessible from v 0 such that λ * (v) = +∞. In that case, for each u such that uv ∈ E, if the player controlling u chooses to go to v, no λ * -consistent play can be proposed to them from there, hence there is no λ * -rational strategy profile against that player from u, and nego(λ * )(u) = +∞.Since ε is finite and since λ * is an ε-fixed point of the negotiation function, it follows that λ * (u) = +∞.Since v is accessible from v 0 , we can repeat this argument and show that λ * (v 0 ) = +∞; in that case, there is no λ * -consistent play θ from u, and then the proof is done.Therefore, for the rest of the proof, we assume that for all v, we have λ * (v) ̸ = +∞.As a consequence, since λ * is an ε-fixed point of the function nego, for each v accessible from v 0 , we have nego(λ * )(v) ̸ = +∞; which implies that for each such v, there exists a λ * -consistent strategy profile against the player controlling v, starting from v. The rest of the proof constructs the strategy profile σ and proves that it is an SPE.That construction is illustrated by Figure 3. -Spare parts: the strategy profiles τ v * . Recall that since G is a game with steady negotiation, for every requirement λ * , for every player i and for every state v ∈ V i , since by the previous point we assume Figure 3.The construction of σ we know that there exists a strategy profile τ v −i from v that is λ * -rational assuming a strategy τ v i and that satisfies the inequality: i.e. 
there exists a worst λ * -rational strategy profile against player i from the state v, with regards to player i's payoff.Our goal in this part of the proof is to construct a strategy profile τ v * −i , that is λ * -rational assuming a strategy τ v * i , and that will be used to punish player i when they deviate from σ until another player deviates.The strategy profile τ v −i and the strategy τ v i are not sufficient for that purpose, because if some history h compatible with τ v −i is such that µ i (⟨τ v ↾h ⟩) < µ i (⟨τ v ⟩), then in the corresponding subgame, it may be possible for player i to deviate and get a payoff that would be smaller than or equal to µ i (⟨τ v ⟩), but greater than µ i (⟨τ v ↾h ⟩).On the other hand, the construction of τ v * −i will ensure that each time player i deviates, the other players punish them at least as harshly as they were planning to do before the deviation.Let us construct inductively the strategy profile τ v * .We define it only on histories that are compatible with τ v * −i , since it can be defined arbitrarily on other histories.We proceed by assembling the strategy profiles of the form τ w for various w ∈ V i , and the histories after which we follow a new τ w will be called the resets of τ v * : they will be histories of the form hw ′ , where h is empty or last(h) = w.First, we set ⟨τ v * ⟩ = ⟨τ v ⟩: the one-state history v is then the first reset of τ v * −i .Then, for every history hww ′ from v such that h is compatible with τ v * −i , that w ∈ V i , and that w ′ ̸ = τ v * i (hw): let us decompose hww ′ = h 1 h 2 , so that the history h 1 first(h 2 ) is the longest reset of τ v * −i among the prefixes of hw.Or, in other words, so that the strategy profile τ v * ↾h 1 first(h 2 ) has been defined as equal to τ u over the prefixes of h 2 until w, where u = v if h 1 is empty, or u = last(h 1 ) otherwise.By prefix-independence of G and by definition of τ u and τ w , we have: Let us now separate two cases.* Suppose first that there is equality: Then, we choose ⟨τ v * ↾hww ′ ⟩ = ⟨τ u ↾uh 2 ⟩: the coalition of players against player i keeps following the same strategy profile.* Suppose now that the inequality is strict: Then, we choose ⟨τ v * ↾hww ′ ⟩ = ⟨τ w ↾ww ′ ⟩: player i has done something that lowers the payoff they can ensure, and therefore the other players have to update their strategy profile in order to punish them more.The history hw is a reset of τ v * −i .Since there are finitely many histories of each length, this process completely defines τ v * .Moreover, all the plays constructed are λ * -consistent, hence the strategy profile τ v * −i is λ * -rational assuming τ v * i , as desired.-Construction of σ. Let us now construct inductively the strategy profile σ: we will prove in the next part of the proof that it is an ε-SPE.We proceed inductively, by defining all the plays ⟨σ ↾hv ⟩, for hv ∈ Hist(G v 0 ) with v ̸ = σ(h).We maintain the induction hypothesis that such a play is always λ * -consistent. * Let now huv be a history such that the strategy profile σ has been defined on all the prefixes of hu, which we now assume to be nonempty, but not on huv itself, and such that v ̸ = σ(hu).Let i be the player controlling the state u. 
Then, we define ⟨σ ↾huv ⟩ = ⟨τ u * ↾uv ⟩, and inductively, for every history h ′ w starting from v and compatible with σ−i↾huv , we define ⟨σ ↾huh ′ w ⟩ = ⟨τ u * ↾uh ′ w ⟩.The strategy profile σ↾huv is then equal to τ v * ↾uv on any history compatible with τ v * −i .Since there are finitely many histories of each length, this process completely defines σ. Consider a history h 0 w ∈ HistG ↾v 0 , a player i ∈ Π, and a deviation σ ′ i of σ i .Let ρ = h 0 ⟨σ ↾h 0 w ⟩, and let ρ ′ = h 0 ⟨σ −i↾h 0 w , σ ′ i↾h 0 w ⟩.We wish to prove that µ i (ρ ′ ) ≤ µ i (ρ) + ε.First, if the play ρ ′ is compatible with σ i , then ρ ′ = ρ and the proof is immediate.Now, if it is not, we let ρ ′ ≤n denote the shortest prefix of ρ ′ such that ρ ′ n−1 ∈ V i and ρ ′ n ̸ = σ i (ρ ′ <n ), and such that ρ ′ ≥n is compatible with σ−i↾ρ ′ ≤n .Thus, the transition ρ ′ n−1 ρ ′ n marks the time when player i begins to deviate unilaterally from σ i .However, note that ρ ′ ≤n can be both longer or shorter than h 0 w: player i may have already deviated in h 0 w, or may wait afterwards to effectively deviate.Be that as it may, the history ρ ′ <n is a common prefix of the plays ρ and ρ ′ , and the substrategy profile σ↾ρ ′ ≤n has been defined during the construction of σ as equal to , where v = ρ ′ n−1 , on any history compatible with σ−i↾ρ ′ ≤n .By construction of τ v * , the sequence (nego(ρ ′ k )) k≥n−1,ρ ′ k ∈V i is non-increasing.It is therefore stationary (or finite), because it can take only a finite number of values.Consequently, there is a finite number of resets along the play ρ ′ ≥n−1 .Let ρ ′ n−1 . . .ρ ′ m be the last (longest) one.Afterwards, the play ρ ′ ≥m is compatible with the strategy profile . By definition of that strategy profile, we have the inequality µ i (ρ ′ ) ≤ nego(λ * )(ρ ′ m−1 ).We need now to prove nego(λ * )(ρ ′ m−1 ) ≤ µ i (ρ) + ε.Let ρ ≤p = ρ ′ ≤p denote the longest common prefix of ρ and ρ ′ such that ρ p ∈ V i .Since player i does not control any vertex between ρ p and ρ n−1 , and therefore cannot deviate, we have ρ ≥p = ⟨σ ↾ρ ≤p ⟩, which is λ * -consistent.As a consequence, we have µ i (ρ) ≥ λ * (ρ p ). Finally, since the sequence of the quantities nego(ρ . Consequently, we have: The strategy profile σ is an ε-SPE. 5. A first way to handle negotiation: the abstract negotiation game 5.1.Informal definition.We have now proved that SPEs are characterized by the requirements that are fixed points of the negotiation function; but we need to know how to compute, in practice, the quantity nego(λ) for a given requirement λ.We first define an abstract negotiation game, that is conceptually simple but not directly usable for an algorithmic purpose, because it is defined on an uncountably infinite state space. A similar definition was given in [FP17], as a tool in a general method to compute SPE outcomes in games whose payoff functions have finite range, which is not the case of mean-payoff games.Here, linking that game with our concepts of requirements, negotiation function and steady negotiation enables us to present an effective algorithm in the case of mean-payoff games, by constructing a finite version of the abstract negotiation game, the concrete negotiation game. The abstract negotiation game from a state v 0 ∈ V i , with regards to a requirement λ, is denoted by Abs λi (G) ↾v 0 and opposes two players, Prover and Challenger, with the following rules: • first, Prover proposes a λ-consistent play ρ from v 0 (or loses, if she has no play to propose). 
• Then, either Challenger accepts the play and the game terminates; or, he chooses an edge ρ k ρ k+1 , with ρ k ∈ V i , from which he can make player i deviate, using another edge ρ k v with v ̸ = ρ k+1 : then, the game starts again from v instead of v 0 .• In the resulting play (either eventually accepted by Challenger, or constructed by an infinity of deviations), Prover wants player i's payoff to be low, and Challenger wants it to be high.That game gives us the basis of a method to compute nego(λ) from λ: the maximal payoff that Challenger -or C for short -can ensure in Abs λi (G) ↾[v 0 ] , with v 0 ∈ V i , is also the maximal payoff that player i can ensure in G ↾v 0 , against a λ-rational environment; hence the equality val C Abs λi (G) ↾[v 0 ] = nego(λ)(v 0 ).A proof of that statement, with a complete formalization of the abstract negotiation game, is presented in Appendix A. Example 5.1.Let us consider again the game of Example 2.18: the requirement λ 2 = nego(λ 1 ), computed in Section 3.2, is also presented on the third line below the states in Figure 1b.Let us use the abstract negotiation game to compute the requirement λ 3 = nego(λ 2 ). From a, Prover can propose the play abd ω , and the only deviation Challenger can do is going to c; he has of course no incentive to do it.Therefore, λ 3 (a) = 2. From b, whatever Prover proposes at first, Challenger can deviate and go to a.Then, from a, Prover cannot propose the play ac ω , which is not λ 2 -consistent: she has to propose a play beginning by ab, and to let Challenger deviate once more.He can then deviate infinitely often that way, and generate the play (ba) ω : therefore, λ 3 (b) = 3.The other states keep the same values.Note Iterations of the negotiation function that there exists no λ 3 -consistent play from a or b, hence nego(λ 3 )(a) = nego(λ 3 )(b) = +∞.This proves that there is no SPE in that game. 5. 2. An imperfect method: the negotiation sequence.A classical way to compute the least fixed point of a function is, as in the example above, to compute its iterations on the least element of the set we are considering until reaching a fixed point -which is, then, the least one.We call this sequence the negotiation sequence, and write it (λ n ) n∈N = (nego n (λ 0 )) n .In many simple examples, in practice, computing the negotiation sequence, using the abstract negotiation game, is the way we will find the least fixed point of the negotiation function and solve SPE problems. Example 5.2.Let G be the game of Figure 4, where each edge is labelled by the rewards r and r .Below the states, we present the requirements λ 0 : v → −∞, λ 1 = nego(λ 0 ), λ 2 = nego(λ 1 ), λ 3 = nego(λ 2 ), and λ 4 = nego(λ 3 ).Let us explicate those computations, using the abstract negotiation game.From λ 0 to λ 1 : since every play is λ 0 -consistent, Prover can always propose whatever she wants.From the state a, whatever she (trying to minimize player 's payoff) proposes, Challenger can always make player deviate in order to loop on the state a.Then, in the game G, player gets the payoff 1, hence λ 1 (a) = 1.From the state b, Prover (trying to minimize player 's payoff) can propose the play (bc) ω .If Challenger makes player deviate to go to the state a, then Prover can propose the play a(bc) ω .Even if Challenger makes player deviate infinitely often, he cannot give him more than the payoff 0, hence λ 1 (b) = 0. 
Similar situations happen from the states c and d, hence From λ 1 to λ 2 : now, from the state b, whatever Prover proposes at first, Challenger can make player deviate and go to the state a.From there, since we have λ 1 (a) = 1, Prover has to propose a play in which player gets the payoff 1.The only such plays do also give the payoff 1 to player , hence λ 2 (b) = 1.Similar situations explain λ 3 (c) = 1, and λ 4 (c) = 1.Finally, plays ending with the loop a ω are all λ 4 -consistent, hence Prover can always propose them, hence the requirement λ 4 is a fixed point of the negotiation function -and therefore the least. The interested reader will find other such examples in Appendix B. However, this cannot be turned into an effective algorithm: the negotiation sequence is not always stationary. Theorem 5.3.There exists a mean-payoff game on which the negotiation sequence is not stationary.From a, the worst play that player could propose to player would be a combination of the cycles cd and d giving her exactly 1.But then, player will deviate to go to b, from which if player proposes plays in the strongly connected component containing c and d, then player will always deviate and generate the play (ab) ω , and then get the payoff 2. Then, in order to give her a payoff lower than 2, player has to go to the state e.Since player does not control any state in that strongly connected component, the play he will propose will be accepted: he will, then, propose the worst possible combination of the cycles ef and f for player , such that he gets at least his requirement λ n (b).The payoff λ n+1 (a) is then the minimal solution of the system: 2 , and by induction, for all n > 0: which converges to 2 but does never reach it.6.A tool to compute negotiation: the concrete negotiation game 6.1.Definition.In the abstract negotiation game, Prover has to propose complete plays, on which we can make the hypothesis that they are λ-consistent.In practice, there will often be an infinity of such plays, and therefore it cannot be used directly for an algorithmic purpose.Instead, those plays can be given edge by edge, in a finite state game.Its definition is more technical, but it can be shown that it is equivalent to the abstract one.Definition 6.1 (Concrete negotiation game).Let G be a prefix-independent game played on a finite graph, let i ∈ Π and v 0 ∈ V i , and let λ be a requirement on G.The concrete negotiation game of G ↾v 0 is the two-player zero-sum game Conc λi (G) ↾s 0 = ({P, C}, S, (S P , S C ), ∆, ν) ↾s 0 , defined as follows: • player P is called Prover, and player C is called Challenger. • The set of states controlled by Prover is S P = V × 2 V , where the state s = (v, M ) contains the information of the current state v on which Prover has to define the strategy profile, and the memory M of the states that have been traversed so far since the last deviation, and that define the requirements Prover has to satisfy.The initial state is s 0 = (v 0 , {v 0 }).• The set of states controlled by Challenger is S C = E × 2 V , where in the state s = (uv, M ), the edge uv is the edge proposed by Prover.• The set ∆ contains three types of transitions: proposals, acceptations and deviations. 
-The proposals are transitions in which Prover proposes an edge of the game G: the acceptations are transitions in which Challenger accepts to follow the edge proposed by Prover (it is in particular his only possibility when that edge begins on a state that is not controlled by player i) -note that the memory is updated: the deviations are transitions in which Challenger refuses to follow the edge proposed by Prover, as he can if that edge begins in a state controlled by player i -the memory is erased, and only the new state the deviating edge leads to is memorized: : the projection of the history H is the history Ḣ = h 0 . . .h n in the game G.That definition is naturally extended to plays. • The payoff function ν C = −ν P measures player i's payoff, with a winning condition if the constructed strategy profile is not λ-rational, that is to say if after finitely many player i's deviations, it can generate a play which is not λ-consistent: ν C (π) = +∞ if after some index n ∈ N, the play π ≥2n contains no deviation, and if the play Like in the abstract negotiation game, the goal of Challenger is to find a λ-rational strategy profile that forces the worst possible payoff for player i, and the goal of Prover is to find a possibly deviating strategy for player i that gives them the highest possible payoff.Remark 6.2.The concrete negotiation game has the following properties. • When π ≥2n contains no deviation, the memory of its states is increasing, and therefore eventually equal to the memory M = Occ( π≥n ).If it is the longest such suffix of π, it means that the play π≥n is λ-consistent if and only if for each player j and each vertex v ∈ V j , we have µ j ( π) ≥ λ(v). 3 When we combine the notations π and π ≥n , the notation π is applied first; that is, the play π≥n is the projection of the play π ≥2n , not π ≥n . • When G is a mean-payoff game and when λ has finite values, the concrete negotiation game can be seen as a multidimensional two-player zero-sum mean-payoff game, with one dimension for each player, meant to control that each player gets the payoff they require, plus a special dimension ⋆, meant to measure player i's actual payoff.The rewards of the proposals are all equal to 0, and the rewards of acceptations and deviations are r⋆ ((uv, M )(v ′ , N )) = 2r i (uv ′ ), and rj ((uv, M )(v ′ , N )) = 2r j (uv ′ ) − 2 max{λ(w) | w ∈ M ∩V j }.The payoff ν C (π) equals then +∞ for every π that contains finitely many deviations and such that for some j ∈ Π, the mean-payoff μj (π) is negative, and ν C (π) = μ⋆ (π) otherwise. 6.2.Link with the negotiation function.The concrete negotiation game is equivalent to the abstract one: the only differences are that the plays proposed by Prover are proposed edge by edge, and that their λ-consistency is not written in the rules of the game but in its payoff function. Theorem 6.3.Let G be a Borel prefix-independent game played on a finite graph.Let λ be a requirement, let i be a player and let v 0 ∈ V i .Then, we have val C (Conc λi (G) ↾s 0 ) = nego(λ)(v 0 ).Moreover, if for each player i and every state v 0 ∈ V i , Prover has an optimal strategy in Conc λi (G) ↾(v 0 ,{v 0 }) , then G is a game with steady negotiation. Proof. 
Let τ P be a strategy such that sup τ C ν C (⟨τ ⟩) ̸ = +∞, and let σ be the strategy profile defined by σ( Ḣ) = w for every history H compatible with τ P (by induction, the projection is injective on the histories compatible with τ P ) with τ P (H) = (vw, •), and arbitrarily defined on any other histories.We prove that the strategy profile σ−i is λ-rational assuming the strategy σ i , and that sup -The strategy profile σ−i is λ-rational, assuming the strategy σ i .Indeed, let us assume it is not.Then, there exists a history h = h 0 . . .h n in G ↾v 0 compatible with σ−i such that the play ⟨σ ↾h ⟩ is not λ-consistent.Then, let: be the only history in Conc λi (G) ↾s 0 compatible with τ P such that Ḣ = h.Let τ C be a strategy constructing the history h, defined by: for every k, and: τ C H ′ (vw, M ) = (w, M ∪ {w}) for any other history H ′ (vw, M ).Then, the play π = ⟨τ ⟩ contains finitely many deviations (Challenger stops the deviations after having constructed the history h), and the play π≥n is not λ-consistent.Therefore, we have ν C (π) = +∞, which is false by hypothesis. -Now, let us prove the inequality sup Let σ ′ i be a strategy for player i, and let η = ⟨σ −i , σ ′ i ⟩.Let τ C be a strategy such that for every k: i.e. a strategy forcing η against τ P .Then, since ν C (⟨τ ⟩) ̸ = +∞ by hypothesis on τ P , we have Moreover, if τ P is optimal, then the λ-rational strategy profile σ−i realizes the infimum: hence if there exists such an optimal strategy for every vertex v 0 , then the game G is with steady negotiation. Let σ−i be a λ-rational strategy profile from v 0 , assuming the strategy σ i ; let us define a strategy τ P , by τ P (H(v, •)) = vσ( Ḣv), • for every history H and for every v ∈ V .Let us prove the inequality . Let τ C be a strategy for Challenger, and let π = ⟨τ ⟩.If ν C (π) = +∞, then there exists n such that the play π ≥2n contains no deviation, i.e. π≥n = ⟨σ ↾ π≤n ⟩, and that play is not λ-consistent, which is impossible.Therefore, we have ν C (π) ̸ = +∞, and as a consequence , hence the desired inequality.The dotted arrows indicate the deviations, and the transitions have been labelled with the immediate rewards defined as in the remark above.The transitions that are not labelled are either zero for the three coordinates, or meaningless since they cannot be used more than once.The red arrows indicate a (memoryless) optimal strategy for Challenger.Against that strategy, the lowest payoff Prover can ensure is 2. Therefore, we have nego(λ 1 )(v 0 ) = 2, in line with the abstract game in Example 5.1.6.4.Resolution.We now know that nego(λ)(v), for a given requirement λ, a given player i and a given state v ∈ V i , is the value of the concrete negotiation game Conc λi (G) ↾(v,{v}) .But we still do not know how to compute that value.We present here an important result for that purpose. For any game G ↾v 0 and any memoryless strategy σ i , we write G ↾v 0 [σ i ] the graph induced by σ i , defined as the underlying graph of G where all the transitions that are not compatible with σ i , and all the vertices that are then no longer accessible from v 0 , have been omitted.Lemma 6.4.Let G ↾v 0 be a mean-payoff game, let i be a player, let λ be a requirement and let Conc λi (G) ↾s 0 be the corresponding concrete negotiation game.There exists a memoryless strategy τ C that is optimal for Challenger. Proof.The structure of that proof is inspired from the proof of Lemma 14 in [VCD + 15]. 
Let ν ′ C be the payoff function defined by: • ν ′ C (π) = +∞ if there exists n such that π ≥2n contains no deviation, and such that the play π≥n is not λ-consistent. The payoff function ν ′ C is then defined as ν C , but with a limit superior instead of inferior.The payoff function ν ′ C is concave.Indeed, let π and χ be two plays in Conc λi (G) ↾v 0 , and let ξ be a shuffling of them.Let us check that ν Otherwise, we also have ν ′ C (ξ) ̸ = +∞: if either π or χ contains infinitely many deviations, then so does ξ.If both contain finitely many deviations, then so does ξ: the states of ξ have therefore eventually the same memory M , which is also the memory of, eventually, the states of both π and χ.Now, since ν ′ C (π), ν ′ C (χ) ̸ = +∞, we have µ j ( π), µ j ( χ) ≥ λ(v) for each player j and every v ∈ M ∩ V j .Since mean-payoff functions are convex, it is also the case for the play ξ, which is a shuffling of π and χ.Hence ν C (ξ) ̸ = +∞. Therefore, by Lemma 2.29 Challenger has a memoryless strategy that is optimal with regards to the payoff function ν ′ C : let us write it τ C .Now, we want to prove that the memoryless strategy τ C is also optimal with regards to ν C .Note that for every play π, we have ν C (π) ≤ ν ′ C (π), and therefore val C (Conc λi (G) ↾s 0 ) ≤ α, where α is the value of the game Conc λi (G) ↾s 0 with the payoff function ν ′ C instead of ν C .Therefore, we have proven that τ C is optimal with regards to ν C if we prove that inf τ P ν C (⟨τ ⟩) ≥ α. Let π be a play compatible with τ C , i.e. an infinite path from s 0 in the graph and by Lemma 2.16, we have: where C is the set of the simple cycles of the graph Conc λi (G) ↾s 0 [τ C ]. Now, for each such cycle, there exists a history H such that the play Hc ω is compatible with the strategy τ C , and therefore satisfies ν ′ C (Hc ω ) ≥ α, and consequently MP i ( ċ) ≥ α.Therefore, we have µ i ( π) ≥ α, and the strategy τ C is optimal with regards to the payoff function ν C .Using this lemma, computing nego(λ) for any given λ amounts to looking for an optimal path for Prover in each graph Conc λi (G) ↾s 0 [τ C ].When (V, E) is a graph, we write SConn(V, E) the set of its strongly connected components accessible from the vertex v. Let K be a strongly connected subgraph of a concrete negotiation game.If K contains no deviation, then the states of K share all the same memory: let us us write it Mem(K).If K contains at least one deviation, we define Mem(K) = ∅.Then, we write: The set in which the variable x evolves is the set of payoffs of plays that Prover can construct against Challenger, when she chooses to go in the strongly connected component K, and observes the requirements she has to observe -those stored in the common memory of the states of K if K contains no deviations, and none otherwise.Hence the following formal result.Lemma 6.5.Let G be a mean-payoff game, let λ be a requirement, let i be a player, and let v ∈ V i .Then, we have: Moreover, mean-payoff games are games with steady negotiation. 
Proof.By Lemma 6.4, in the game Conc λi (G) ↾s , where s = (v, {v}), there exists a memoryless strategy τ C which is optimal for Challenger.Therefore, the best payoff that Prover can ensure against every strategy of Challenger is the best payoff she can ensure against τ C .It follows from Theorem 6.3 that the highest value player i can enforce against a hostile λ-rational environment is the minimal payoff of Challenger in a path in the graph Conc λi (G) ↾s [τ C ] starting from s.For any such path π, there exists a strongly connected component K of Conc λi (G) ↾s [τ C ] such that after a finite number of steps, the path π is a path in K. Let us now prove that the least payoff of Challenger in such a play is given by opt(K).Let us distinguish two cases. • If there is at least one deviation in K. Then, for every play π in K, it is possible to transform π into a play π ′ with µ( π′ ) = µ( π), which contains infinitely many deviations: it suffices to add round trips to a deviation, endlessly, but less and less often.Therefore, the outcomes ν C (π) of plays in K are exactly the mean-payoffs µ i ( π) of plays in K (plus possibly +∞); and in particular, the lowest payoff Challenger can get in K is the quantity: By Lemma 2.16, the set of possible values of µ( π) for all plays π in K is exactly the set: Since all the plays in K contain finitely many deviations (actually none), for every path π in K, we have ν C (π) = +∞ if and only if there exists j ∈ Π and u ∈ V j ∩ Mem(K) such that µ j ( π) < λ(u).Then, the lowest outcome Prover can get in K is: that is to say opt(K). Theorem 6.3 enables to conclude to the desired formula.Moreover, let us notice that in all cases, Prover can choose one optimal play against each memoryless strategy of Challenger.By determinacy of Borel games, it comes that Prover has an optimal strategy, hence by Theorem 6.3, mean-payoff games are games with steady negotiation. We are now able to compute nego(λ) for a given λ.However, because of Theorem 5.3, that is not sufficient to compute the least ε-fixed point λ * , and then to decide the SPE threshold problem.Nevertheless, we will prove that λ * can also be extracted from the concrete negotiation game.where every P ∈ Φ d is a polyhedron, such that for each such P there exists āP ∈ R D \ { 0} and b P ∈ R such that for every x ∈ P , the vector ȳ = f (x) satisfies: Analysis of the negotiation function where • is the canonical scalar product. From now, we can therefore drop the downward sealing.Moreover, let us note that the projection x → x i , as an affine mapping over the polytope Q ∩ R λ , finds its minimum on a vertex of that polytope.Each vertex {x} of Q ∩ R λ is the intersection between a face F of the polytope Q, and a face F ′ of the polyhedron R λ .Such a face F is of the form Conv c∈C MP( ċ), where C is a subset of SC(K); and such a face F ′ is of the form H λw , where W is a subset of Mem(K), and where H λw is the hyperplane {x | x j = λ(w)} for each j and w ∈ V j .Thus, if we define the set: it is included in the polytope Q ∩ R λ , and contains all its vertices; hence the equality opt(K) = inf{x i | x ∈ X}.We can therefore write: where f CW is, for every C and W , the function defined by: Let us now study each of those functions f CW .Given C, W and λ, we have f CW (λ) ̸ = +∞ if and only if the three following conditions are satisfied. 
• First, the intersection I is a singleton.The elements of I are the points of the form x = c∈C α c MP j ( ċ) j with ᾱ ∈ R C , c α c = 1, and x j = λ(w) for each j and w ∈ W ∩V j .The set I is therefore a singleton if and only if the matrix: is such that there exists exactly one vector ᾱ ∈ R C satisfying: That condition is satisfied if and only if A is invertible, which can be decided in a time polynomial in the size of A, and does actually not depend on λ: either it is not satisfied, and the function f CW is constantly equal to +∞, or it is, and only the following conditions must be considered. • Second, the unique element of I belongs to Conv c∈C MP( ċ).That is the case if and only if the vector: Thus, the vector ᾱ = A −1 (Bλ ¯+ β) has non-negative coordinates if and only if λ ¯belongs to the set: which, as a pre-image of a polyhedron by an affine function, is itself a polyhedron, which can be constructed in a time polynomial in the size of A, B and β. • Third, the vector: x = (MP j ( ċ)) j∈Π,c∈C ᾱ is such that for each j and each w ∈ Mem(K) (not only in W ), we have x j ≥ λ(w).The set P 1 of requirements λ ¯satisfying that condition can itself be written as the pre-image of a polyhedron by an affine function, and is therefore itself a polyhedron, which we can construct in a time polynomial in the size of A, B, β and (MP j ( ċ)) j∈Π,c∈C . Therefore, the function f CW is equal to +∞ outside of the polyhedron P 0 ∩ P 1 , and satisfies: inside it.It is therefore an affine function of which a representation can be constructed in a time polynomial in the size of K. Therefore, a representation of opt(K) = inf C,W f CW (λ ¯) as an affine function of λ ¯can be constructed in a time exponential in the size of K, and the negotiation function, expressed by: nego : λ ¯ → sup • If such an SPE exists: let us write it σ, and let ρ = ⟨σ⟩.Since µ S (ρ) = 1, the sink state ⊥ is never visited.Let us define a valuation ν on X as follows: for each variable x, we have ν(x) = 1 if and only if µ x (ρ) < 1.Now, let C be a clause of φ: since C, as a state, is necessarily visited infinitely often and with a fixed frequence in the play ρ (because no player ever go to the sink state ⊥), one of its successors, say (C, L), is visited with a non-negligible frequence (more formally, the time between two occurrences of (C, L) is bounded).If L is a positive litteral, say x, then by definition of ν, we have ν(x) = 1 and the clause C is satisfied. If L has the form ¬x, then each time the state (C, ¬x) is traversed, player x has the possibility to deviate and to go to the sink state ⊥, where he is sure to get the payoff 1.Since σ is an SPE, it means that he already gets the payoff 1 in the play ρ.By definition of ν, we then have ν(x) = 0, hence the litteral ¬x is satisfied, hence so is the clause C. The valuation ν satisfies all the clauses of φ, and therefore satisfies the formula φ itself.• If φ is satisfied by some valuation ν: let us define a strategy profile σ by: σ S (hC) = (C, L) for each history hC where C is a clause of φ, where L is a litteral of C that is satisfied in the valuation ν; and σ x (h(C, ¬x)) = ⊥ if and only if ν(x) = 1 for each history h(C, ¬x) where C is a clause of φ and x is a variable.Any other state has only one successor, hence we now have completely defined a strategy profile.Now, let us prove it is an SPE, in which Solver gets the payoff 1. 
Let hC be a history, where C is a clause of φ. We want to prove that σ↾hC is a Nash equilibrium, in which Solver gets the payoff 1. Let ρ = ⟨σ↾hC⟩. If µ_S(ρ) < 1, i.e. if ρ is of the form h′D(D, ¬x)⊥^ω, then by definition of σ we have ν(x) = 0. But then, again by definition of σ, player x does not move from (D, ¬x) to the sink ⊥ (he does so only when ν(x) = 1): a contradiction. Hence µ_S(ρ) = 1.

But then, at the third step, from the state b: whatever Prover proposes at first, Challenger can deviate to reach the state a. Then, Prover has to propose a λ2-consistent play from a, i.e. a play in which the player controlling a gets at least the payoff 2: such a play necessarily ends in the state d, i.e. after possibly some prefix, Prover proposes the play abd^ω. But then, Challenger can always deviate to go back to the state a; and the play which is thus created is (ab)^ω, which gives the deviating player the payoff 3. Finally, from the states a and b, there exists no λ3-consistent play, and therefore no λ-rational strategy profile; and for all n ≥ 4, λ_n = λ_4.

Example B.2. In all the previous examples, all the games whose underlying graphs were strongly connected contained SPEs. Here is an example of a game with a strongly connected underlying graph that does not contain SPEs. This game is similar to the game of Example 2.18, hence we do not give the details of the computation of the negotiation sequence. At the first step, the requirement λ1 captures the antagonistic values. Then, from the state c, if the player controlling c forces the access to the state b, then the player controlling b must get at least 1: the worst play that can then be proposed to the deviating player is (babc)^ω, which gives them the payoff 3/2. From the state f, if the player controlling f forces the access to the state g, then the worst play that can be proposed to them is g^ω. Then, from the state d, if the player controlling d forces the access to the state c, then the player controlling c must get at least 3/2: the worst play that can then be proposed to the deviating player is (cccd)^ω, which gives them the payoff 1/2. At the same time, from the state e, the player controlling e can now force the access to the state f: the worst play that can then be proposed to them is fg^ω. But then, from the state c, the player controlling c can now force the access to the state e: the worst play that can then be proposed to them is efg^ω. And finally, from that point, if from the state d the player controlling d forces the access to the state c, then the player controlling c must now have at least the payoff 2; and therefore, the worst play that can be proposed is now (ccd)^ω.

Definition 2.14 (Downward sealing). Given a set Y ⊆ R^D, the downward sealing of Y is the set ⌞Y = { (min_{z∈Z} z_d)_{d∈D} | Z is a finite subset of Y }.

Figure 1a. Two NEs and one SPE.
Figure 2. A game with an infinity of SPEs.

6.3. Example. Let us consider again the game from Example 2.18. Figure 6 represents the game Conc_λ1(G) (with λ1(a) = 1 and λ1(b) = 2), where the dashed states are controlled by Challenger, and the other ones by Prover.

Definition 7.1 (Piecewise affine function). Let D be a finite set of dimensions. A function f : R^D → R^D is piecewise affine if for each d ∈ D, there exists a finite partition Φ_d of R^D,

Figure 7. The negotiation function on the games of Examples 2.19 and 2.18.
Figure 8. The game G_φ.

Example B.3. This example shows how a new requirement can emerge from the combination of several cycles. Let G be the following game. The requirement λ5 is a fixed point of the negotiation function.
Figure 1. Two examples of games.

Example 2.15. In R^2, if Y is the blue area in Figure 2b, then ⌞Y is obtained by adding the gray area.

Lemma 2.16 [CDE+10]. Let G be a mean-payoff game whose underlying graph is strongly connected. The set of the payoffs µ(ρ), where ρ is a play in G, is exactly the set ⌞Conv_{c∈SC(V,E)} MP(c).

Proof. Let i ∈ Π, let v ∈ V_i, and let λ be a requirement. Let τ_C be a memoryless strategy of Challenger in the game Conc_λi(G), and let K be a strongly connected component of the graph Conc_λi(G)↾s, where s = (v, {v}). The result will follow from the fact that the quantity opt(K) is, itself, a piecewise affine function of λ̄. Note that the underlying graph of the game Conc_λi(G) does not depend on λ. For a given λ, let us consider the polytope Q.

Remark 7.2. The function f is fully represented by the family (Φ_d, (ā_P, b_P)_{P∈Φ_d})_{d∈D}. That representation is finite if each polyhedron P ∈ Φ_d, for each dimension d, is defined by rational equations, and if each ā_P and each b_P has rational or infinite values.

Theorem 7.3. Let us assimilate every requirement λ to the vector λ̄ = (λ(v))_{v∈V}. Then, the negotiation function is piecewise affine, and a finite representation of it can be constructed in a time doubly exponential in the size of G.
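As a closing illustration of the optimisation behind opt(K) in Section 6.4, the sketch below (an assumed helper, not code from the paper) minimises player i's payoff over convex combinations of the simple-cycle mean payoffs of a strongly connected component, subject to per-player lower bounds derived from the requirement and Mem(K); the downward sealing of Lemma 2.16 is ignored for simplicity.

    import numpy as np
    from scipy.optimize import linprog

    def opt_K(cycle_payoffs, i, lower_bounds):
        """cycle_payoffs: mean-payoff vectors MP(c) of the simple cycles of K;
        i: index of the player whose payoff Prover minimises;
        lower_bounds: per-player thresholds read off lambda on Mem(K)
        (use a very negative number for players without constrained vertices)."""
        P = np.array(cycle_payoffs, dtype=float)            # shape (#cycles, #players)
        n_cycles = P.shape[0]
        c = P[:, i]                                         # objective: player i's payoff
        A_ub, b_ub = -P.T, -np.array(lower_bounds, float)   # sum_c a_c MP_j(c) >= lb_j
        A_eq, b_eq = np.ones((1, n_cycles)), [1.0]          # the a_c form a convex combination
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)  # a_c >= 0 by default
        return res.fun if res.success else float('inf')     # +inf: no consistent play inside K

    # With the cycle payoffs (0,1), (1,0), (2,2) of Example 2.19 and a threshold of 1
    # for both players, the optimum is 1 (mix the three cycles equally); compare the
    # threshold 1 appearing in the description of the SPE payoffs of that game.
    print(opt_K([(0, 1), (1, 0), (2, 2)], i=0, lower_bounds=[1.0, 1.0]))   # ~1.0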
18,485.6
2021-01-26T00:00:00.000
[ "Mathematics", "Economics" ]
Inflation in Supergravity with a Single Chiral Superfield We propose new supergravity models describing chaotic Linde- and Starobinsky-like inflation in terms of a single chiral superfield. The key ideas to obtain a positive vacuum energy during large field inflation are (i) stabilization of the real or imaginary partner of the inflaton by modifying a Kahler potential, and (ii) use of the crossing terms in the scalar potential originating from a polynomial superpotential. Our inflationary models are constructed by starting from the minimal Kahler potential with a shift symmetry, and are extended to the no-scale case. Our methods can be applied to more general inflationary models in supergravity with only one chiral superfield. Introduction In the single-field inflationary models the availability of arbitrary choice of the inflaton scalar potential V (φ) allows one to theoretically describe any value of the Cosmic Microwave Background (CMB) observables (n s , r). It is one of the reasons for popularity of the much more restrictive one-parameter single-field inflationary models with the quadratic (V = 1 2 m 2 φ 2 ), quartic (V = 1 4! λφ 4 ), or the Starobinsky (V (φ) = 3 4 M 2 (1 − e − √ 2/3φ ) 2 ) scalar potentials of the canonically normalized inflaton scalar field φ. For instance, the Starobinsky model [1] has only one mass parameter M that is fixed by the observational (CMB) data as M = (3.0 × 10 −6 )(50/N e ) where N e is the e-foldings number. The predictions of the Starobinsky model for the spectral index n s ≈ 1 − 2/N e ≈ 0.96 (for N e = 50), the tensor-to-scalar ratio r ≈ 12/N 2 e ≈ 0.0048 and low non-Gaussianity are in agreement with the WMAP and Planck data (r < 0.13 and r < 0.11, respectively, at 95% CL) [2,3], but are in disagreement with the BICEP2 measurements (r = 0.2 +0.07 −0.05 , or r = 0.16 +0.06 −0.05 when dust subtracted) [4]. The competitive model of chaotic inflation with a quadratic scalar potential, proposed by Linde [5], predicts r ≈ 8/N e = 0.16 (50/N e ) in good agreement with the BICEP2 data, though in apparent disagreement with the Planck data (when running of the spectral index is ignored). It is desirable to realize those single-field inflationary models (and their known extensions, in order to accommodate both the Planck and BICEP2 data) in supergravity because supersymmetry (SUSY) is one of the leading candidates for new physics beyond the Standard Model. At the same time, we would like such model building to be minimalistic in the sense that a number of fields and parameters should be limited or constrained as much as possible. It is not straightforward to extend the single-field inflationary models to 4D, N = 1 supergravity. In the context of the old-minimal supergravity, it requires a complexification of inflaton and its embedding into a scalar supermultiplet whose generic action is parameterized by a Kähler potential K(Φ, Φ) and a superpotential W (Φ). The F -type scalar potential of supergravity in the Einstein frame is [6] where the subscripts denote the differentiation, and the superscripts stand for the inverse matrix. There are two obstacles to realize inflation with this potential. First, the exponential factor generically prevents a flat inflationary potential. Second, the scalar potential in terms of a single chiral superfield tends to become negative in a large field region (Sec. 2). A way to overcome those obstacles was proposed in Refs. 
[7,8,9] by the use of two chiral superfields Φ and S with a shift-symmetric Kähler potential and the following superpotential: respectively. This choice leads to the very simple scalar potential in terms of the leading (and canonically normalized) complex scalar field component Φ 1 , provided that the Kähler potential is quadratic in (Φ + Φ) and the superfield S is stabilized at S = 0. 2 The supergravity extension given above describes a large (parameterized by the function f ) class of non-negative scalar potentials suitable for inflation. For instance, the quadratic single-field scalar potential is easily generated by choosing f (Φ) = mΦ/ √ 2, with Im Φ playing the role of inflaton along the inflationary trajectory S = Re Φ = 0. Similarly, the Starobinsky inflation can be described by identifying inflaton (scalaron) with ) after stabilization of the other fields at The minimal Kähler potential, having the shift symmetry and used in Refs. [7,8,9], leads to free kinetic terms. Another (no-scale) Kähler potential with the shift symmetry is well motivated by (perturbative) superstring compactification [10] and supersymmetric particle phenomenology beyond the Standard Model [11,12], where the two complex superfields (Φ, S) parameterize the non-compact homogeneous space SU (2, 1)/U (2). In the context of superstrings, Φ has physical interpretation as a Kähler (or compactified volume) modulus. The no-scale supergravity in terms of two chiral superfields (Φ, S) with the Kähler potential (5) and a superpotential W (Φ, S) was used to embed the Starobinsky inflation in Refs. [13,14] and to embed the quadratic (Linde) inflation in Ref. [15]. The superfield S above is introduced to obtain the suitable potential of Φ, so that the S plays the auxiliary role. Our purpose in this Letter is to get rid of the superfield S. There are models with a single chiral (or linear) superfield and a real (vector) superfield [16,17,18], as well as models with a dynamical complex scalar field based on the non-linear realization of supergravity that requires additional constrained or auxiliary superfields [19]. Our construction does not require non-linear constraints, auxiliary superfields, or vector superfields. As a matter of fact, there are the negative statements against such construction in the literature. As regards a quadratic inflation in the SU (1, 1)/U (1) no-scale supergravity, 1 The leading field component of a superfield and the superfield itself are denoted by the same letters. 2 It is not difficult to properly choose the SS-dependence of the Kähler potential (2) for that purpose. it was noticed in Ref. [15] under Eq. (24) that "there are no polynomial forms of W (Φ) that lead to a quadratic potential for a canonically normalized field, and we are led to consider N ≥ 2 models with additional matter fields" (like S). Also, an extension of the original (R + R 2 ) Starobinsky model to higher-derivative supergravity does require the second superfield S -see, e.g., Refs. [20,21] for details. It has also been shown that it is impossible to embed the Starobinsky model into standard supergravity with the SU (1, 1)/U (1) no-scale Kähler potential when using a monomial superpotential [14]. However, it is still unclear (a) whether the extra (matter) superfield S is truly necessary in order to embed a quadratic inflation into supergravity, and (b) whether the Starobinsky scalar potential can be embedded into standard supergravity with a single chiral matter superfield with more general Kähler-and super-potentials. 
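The following sympy check (a sketch of ours, not taken from the paper) makes the last point explicit for the minimal shift-symmetric Kähler potential and a cubic monomial superpotential: along Re Φ = 0 the factor K_Φ = Φ + Φ̄ vanishes and the negative −3|W|² part of the standard F-term potential described in the Introduction eventually dominates. The choice K = (Φ + Φ̄)²/2, a real coupling c, and the power n = 3 are assumptions made only for the illustration (they are consistent with K_Φ = Φ + Φ̄ quoted above); the reduced Planck mass is set to 1.

    import sympy as sp

    chi = sp.symbols('chi', real=True)
    c = sp.symbols('c', positive=True)
    Phi, Phib = sp.symbols('Phi Phibar')

    K = sp.Rational(1, 2) * (Phi + Phib)**2      # shift-symmetric Kahler potential, K_Phi = Phi + Phibar
    W, Wb = c * Phi**3, c * Phib**3              # monomial superpotential with n = 3, real coupling

    DW  = sp.diff(W, Phi) + sp.diff(K, Phi) * W          # Kahler-covariant derivatives
    DWb = sp.diff(Wb, Phib) + sp.diff(K, Phib) * Wb
    Kinv = 1 / sp.diff(K, Phi, Phib)                     # inverse Kahler metric (here = 1)

    V = sp.exp(K) * (Kinv * DW * DWb - 3 * W * Wb)       # standard F-term scalar potential
    V_on_axis = sp.expand(V.subs({Phi: sp.I * chi, Phib: -sp.I * chi}))
    print(V_on_axis)    # 9*c**2*chi**4 - 3*c**2*chi**6: negative as soon as chi**2 > 3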
Needless to say, both issues are highly actual in light of the BICEP2 and Planck data. In this Letter we are going to show by explicit construction that it is possible to realize supergravity models of the quadratic (Linde) inflation and of the Starobinsky-like inflation by using a Kähler potential K(Φ,Φ) and a superpotential W (Φ), in terms of a single chiral superfield Φ. Our paper is organized as follows. In Sec. 2 we formulate a new class of the supergravity models with a generic Wess-Zumino superpotential in terms of a single chiral superfield Φ, and demonstrate how to get a quadratic potential of the imaginary part of the Φ-scalar (identified with axion, similarly to the natural inflation [22]). We also explain how to stabilize the second scalar given by the real part of Φ. In Sec. 3 we apply the same strategy for realizing the Starobinsky-like inflation in supergravity. Our conclusion is Sec. 4. Chaotic inflation with a single chiral superfield The scalar potential of supergravity (1) is a sum of the positively definite (first) term and the negatively definite (second) term. To obtain a positive potential in the large field region, the first term should dominate over the second one. Due to a presence of the exponential factor, having a shift symmetry is the key to realize a flat potential in supergravity. Let us begin with a minimal Kähler potential having a shift symmetry, and a monomial superpotential Then the scalar potential is given by If inflation is driven by the imaginary part of Φ, the real part of Φ vanishes because of the Z 2 symmetry. But then the factor K Φ = Φ +Φ drops out, and the negative term dominates the potential. Note that the shift symmetry of the Kähler potential eliminates the inflaton (imaginary part) from the Kähler potential and its derivatives, so that the large field behavior of the inflaton gets worse. Even if one takes a polynomial superpotential breaking the Z 2 symmetry, as long as the value of real part of Φ is of the order of the inflationary scale, which is much lower than the reduced Planck scale, the situation does not significantly change. To improve the situation, there are two possibilities: either to enhance the positively definite term, or to suppress the negatively definite term. Most of the literature uses the second opportunity. In particular, the sGoldstino field S, whose value is fixed to zero, is introduced to eliminate the negatively definite term. Here we employ the alternative (first) option. We stabilize the real part of Φ by a higher dimensional operator in the Kähler potential, so that the factor |K Φ | becomes larger than √ 3. We assume a sufficiently large value of ζ so that the expectation value of the real part of Φ is well approximated by Φ 0 . The inflaton scalar potential reads With n = 1 we obtain a quadratic potential of the inflaton ImΦ. The cosmological constant can be adjusted to zero (Minkowski vacuum) by adding a constant c 0 to the superpotential without affecting the inflationary potential, The real constant Φ 0 must be larger than √ 3/2, avoiding a tachyonic inflaton mass. The same strategy can be applied to the no-scale Kähler potential. Let us consider the no-scale Kähler potential with a generic superpotential, leading to the following kinetic term and the scalar potential: The no-scale property means that V = 0 for W = c 3 Φ 3 with any value of the coupling constant c 3 . It is worth mentioning that the standard matter-coupled supergravity defined by Eq. 
(13) is dual (equivalent) to the so-called F (R) higher-derivative supergravity defined by the chiral superspace action [23] whose holomorphic function F of the chiral scalar curvature superfield R is related to the chiral superpotential W (Φ) via the Legendre transform [24,25,26]. In the case of a generic monomial superpotential (7), the scalar potential reads and is positive only when n > 3 or n < 0. To gain more insight, let us take a polynomial superpotential having two terms, and natural powers n > m. Then the scalar potential is Since the Re Φ will be eventually stabilized (see below), we can regard the functions of Re Φ as the coefficients of the polynomial of Im Φ in Eq. (19). We want a quadratic potential for ImΦ. The largest power of ImΦ in Eq. (19) is 2n − 2, while m + n − 1 is of the same value provided that m = n − 1. We want these to be quadratic. When demanding 2n − 2 = 2, we need m + n − 1 = 2, i.e. n = 2 and m = 1, in order to get a positive overall coefficient. Alternatively, one may take n = 3 and generate a quadratic term by using the cross terms of the (m + n − 1)th power. In this case m = 0. Next, let us take the most general renormalizable (Wess-Zumino) superpotential, with arbitrary (complex) coupling constants (c 0 , c 1 , c 2 , c 3 ). It yields the scalar potential having many mixing terms between the couplings of Eq. (20) due to the nonlinearity of the equation (15) with respect to W . Then, after a stabilization of the real part of Φ, the canonically normalized imaginary part, Φ I , is given by where the bracket denotes the vacuum expectation value. For instance, setting c 3 = 0 eliminates both cubic and quartic terms with respect to Im Φ, so that the scalar potential in terms of the canonically normalized field Φ I is given by In particular, the mass squared of the inflaton Φ I is It is positive for some c 1 and c 2 , and should be matched with the amplitude of curvature perturbations, P R (k) = (2.196 +0.051 −0.060 )×10 −9 (k/k 0 ) ns−1 with pivot scale k 0 = 0.05Mpc −1 [3], so that m Φ I = 1.8 × 10 13 GeV. The cosmological constant can be adjusted to zero by the choice of c 0 . It is also possible to preserve SUSY at the vacuum by tuning of parameters, Of course, the whole approach makes sense only when the real part of Φ is stabilized by modifying the Kähler potential of Eq. (13) by some extra terms breaking its no-scale structure. A particularly simple example of such modification was proposed the long time ago in Ref. [27] (and was used more recently in Refs. [15,28] for the stabilization purposes with two chiral superfields), with two real parameters ζ and Φ 0 . It is straightforward to compute the scalar potential with the modified Kähler potential. We find The real part of Φ is stabilized by a SUSY breaking mass, Taking a large ζ, it becomes m 2 ReΦ ≃ 648ζ − 81/4Φ 3 0 |W | 2 at the vacuum. It is larger during inflation. As in the minimal Kähler case, the factor (Φ +Φ − 2Φ 0 ) is suppressed due to the stabilization effects itself. Therefore, many extra contributions, including those in the canonical normalization, are small compared to the original contribution above, and merely perturb the coefficients of the polynomial in Φ I . The only non-trivial change 3 in the potential of Φ I comes from the term proportional to |W | 2 (it represents the gravitational corrections). In contrast to the case of (9), that term arises because the stabilization term makes a cancellation of the terms characteristic to the no-scale type models incomplete. 
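To summarize the mechanism of Sec. 2 in a minimal worked form before discussing the size of this correction (this is a sketch, not the paper's own computation; the purely minimal Kähler potential and the omission of the stabilizing operator are simplifying assumptions): using the standard N = 1 supergravity scalar potential V = e^K (K^{ΦΦ̄} |W_Φ + K_Φ W|² − 3|W|²) with the shift-symmetric choice K = (Φ + Φ̄)²/2 and the monomial superpotential W = cΦ (the n = 1 case), and freezing ReΦ = Φ_0, one finds

V = e^{2Φ_0²} |c|² [ (4Φ_0² − 3)(ImΦ)² + (1 + 2Φ_0²)² − 3Φ_0² ],

i.e. a quadratic potential for ImΦ that is non-tachyonic precisely when Φ_0 > √3/2, as stated above; the constant piece can be removed via the constant c_0 in the superpotential, and since the kinetic term is |∂Φ|², the canonically normalized inflaton is √2 ImΦ. As an order-of-magnitude cross-check of the quoted inflaton mass (using the textbook slow-roll result for a quadratic potential, not the paper's own derivation), P_R ≃ m² N_e²/(6π² M_P²) gives m ≃ √(6π² P_R) M_P/N_e ≈ (1.5-1.8) × 10^13 GeV for N_e ≈ 50-60 and P_R ≈ 2.2 × 10^−9, consistent with the value m_ΦI = 1.8 × 10^13 GeV quoted above.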
The extra term should be small enough, when compared to the original terms, in order to ensure the quadratically generated chaotic inflationary dynamics. For example, the term responsible for the inflation, 1 2 m 2 Φ I Φ 2 I , should be dominant over the quartic term contained in the |W | 2 term, which implies during inflation. We have assumed here for simplicity that the first term in the mass formula of Φ I in Eq. (24) dominates over the second term, which is negatively definite. Starobinsky-like inflation with a single chiral superfield In this Section we revisit the issue of minimal realization of the Starobinsky-like inflation in supergravity with a single chiral superfield (or 2 B + 2 F extra d.o.f. only). The nogo arguments in the literature against it refer to a supergravity extension of (R + R 2 ) gravity [20,21] in the original (higher-derivative) picture, and a monomial Ansatz for a superpotential (in the dual picture) [14]. Since the stabilization (25) leads to breaking of the no-scale structure of the Kähler potential, it also breaks its correspondence to the F (R) higher-derivative supergravity (16), so that the first no-go obstruction is dismissed. As regards the second obstruction, in order to achieve a scalar potential which behaves like a constant in the asymptotically large field region, the power n of the superpotential (7) must be 3/2. Substituting that monomial superpotential into the supergravity scalar potential (15) results in a negative constant (i.e. AdS instead of dS) -see (17) and Ref. [14]. However, that can be improved by using a polynomial superpotential. Let us consider Eqs. (18) and (19) with ReΦ as inflaton, and ImΦ stabilized as in Eq. (25) though with −i(Φ −Φ) in the ζ-term. Since we are interested in the large field behavior of ReΦ, its largest power is most relevant. Demanding it to be zero yields either 2n − 3 = 0 that is inappropriate, or n + m − 3 = 0, while a larger power due to the (2n − 3)-term is eliminated by choosing n = 3 that, in turn, implies m = 0. According to those arguments, we should take a superpotential as our starting point. After stabilization of the imaginary part (ImΦ = 0), this leads to a constant potential V = −27Re(c 0 c 3 )/2, whose sign can be chosen to be positive. At this stage, the good news is that we have a positive inflationary energy, and the bad news is that our scalar potential is just a constant. As the next stage, we consider two modifications of the superpotential (29), in order to obtain meaningful Starobinsky-like potentials. The first proposal is to introduce a constant shift of the field Φ in the superpotential as where the constant a is assumed to be real, so that a cancellation between the contributions of (ReΦ − a) 2 and (ReΦ) −2 becomes incomplete. After a field redefinition ReΦ = exp( 2/3φ), the scalar potential of the canonical field φ becomes The first term has the form of the Starobinsky potential, and the second term blows up in the large field region, with |φ| of the order ≤ O(10) (see the pink solid line in Fig. 1). However, if |c 3 | is sufficiently small, the latter term can be ignored. In the parameter space, where the second term is small but non-negligible, the predictions of the Starobinsky model for r (Sec. 1) are modified. As an illustration, we plot the prediction of our model in the (n s , r)-plane in Fig. 2. We set a = 1, because increasing a has the same effect as increasing the coefficient of the correction term, and assume the ideal stabilization with ImΦ = Φ 0 . 
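For reference, the comparison above is against the textbook Starobinsky potential (the standard form, not the model's own expression, whose coefficients are not reproduced here): in terms of the canonical field φ defined through ReΦ = exp(√(2/3) φ),

V_Star(φ) = V_0 (1 − e^{−√(2/3) φ})²,

which is exponentially flat at large φ and gives the familiar slow-roll predictions n_s ≃ 1 − 2/N_e and r ≃ 12/N_e². The correction term discussed above grows with φ, so increasing its coefficient moves the model's prediction away from the pure Starobinsky point (the leftmost, double-circled point in Fig. 2).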
The e-foldings number N_e is defined between the point where (n_s, r) are evaluated and the point where slow-roll inflation ends, ε ≡ (V′/V)²/2 = 1. A second way of modifying the Ansatz (29) is to add terms of smaller power to the superpotential, while keeping its large-field behavior. As an example, let us consider the superpotential (32). After stabilization of the imaginary part, it results in a scalar potential (the cyan dotted line in Fig. 1) whose coefficients are a = −27Re(c_0 c_3)/2, b = −18Re(c_3 c_{−1}), c = 27Re(c_0 c_{−1})/2, and d = 18|c_{−1}|². SUSY is preserved at the vacuum. See Fig. 2 for the prediction of this model in the (n_s, r)-plane. Finally, we point out that the Starobinsky-like potential of ImΦ can also be obtained from the non-logarithmic Kähler potential (9) with a superpotential involving a mass parameter m, two real constants, a and Φ_0 (the vacuum expectation value of ReΦ), and a complex constant b. After stabilization of ReΦ, the resulting scalar potential of ImΦ can be a very good approximation to the Starobinsky scalar potential in the large-field region. When a = 2/3 and Φ_0 > √3/2, there are always two solutions for Re b and Im b realizing the Starobinsky scalar potential. Figure 2: The spectral index and the tensor-to-scalar ratio of the models (30) and (32). The prediction of (30) corresponds to blue points (N_e = 50) and red points (N_e = 60). The leftmost point (double circle) in each case is the prediction of the Starobinsky model. The relative coefficient −c_3/3c_0 of the correction term is set to 10^−8, 10^−7.8, 10^−7.6, . . . from left to right. We take real parameters and a = 1. The prediction of (32) corresponds to the yellow (N_e = 50) and green (N_e = 60) star, with the same parameters as in Fig. 1. Conclusion We demonstrated that it is possible to realize both the Linde (quadratic) and the Starobinsky-type scalar potentials in supergravity with a single chiral superfield, without other chiral superfields or any tensor (vector) matter. It requires certain modifications of the Kähler potential as well as fine-tuning of the parameters, which are common features of all realizations of chaotic inflation in supergravity. Other stabilization mechanisms are certainly possible; we used that of Eqs. (9) and (25) as an example. The no-scale property can be sacrificed because it does not survive quantum corrections. Some of our models favor a SUSY-breaking scale much higher than the electroweak scale, because the inflaton breaks SUSY even after inflation. Our realizations lead to a super-Planckian excursion of the inflaton field [30], though this may not be a problem [31]. A derivation of the proposed supergravity models from extended supergravity, extra dimensions or string theory is beyond the scope of this paper (see, however, Refs. [32,33]). The Linde and Starobinsky models have different shapes of the scalar potential and, hence, lead to different predictions for inflationary observables. Our method may be used to develop other inflationary models in supergravity with the help of a single chiral superfield, contributing to inflationary model building consistent with the Planck, BICEP2, and future data.
5,091.8
2014-06-02T00:00:00.000
[ "Physics" ]
Blended learning: A study on Tamil primary schools This research was conducted to study the interest of blended learning among the students of Tamil primary schools and to identify the performance on the verse and figurative language (ceiyulum moziyaniyum). Primary schools including Tamil primary schools of Malaysia have implemented blended learning in their teaching and learning process. Both traditional and online learning are blended. Due to the implementation of blended learning in schools, the attitude of students changed and it became a powerful and effective tool for the students to learn. Many types of research on blended learning among primary school students and teachers have been carried out. According to these studies' findings, blended learning in primary schools both in traditional and online learning is essential. The impacts, effects, and environment are focused. Compared with these previous studies, the present study discusses and provides information about the interest and performance of blended learning in primary schools. Moreover, Tamil primary school students are focused on in this study. Further, the findings of this study show that students’ participation in blended learning and their performance are high even during pandemic days. In this study, 4 different Tamil primary schools in Kedah State, Malaysia involved in blended learning were selected. Based on the objectives, the data were analyzed. The findings show that students’ participation in blended learning and their performance are high even during pandemic days. Introduction *Education plays a vital role in society. Teaching and learning is a process of it. Of this, the learning includes different skills, acquisition of subject knowledge, societal values, and societal beliefs. Various learning methods are used widely in a classroom to educate the students like training, discussion, narration, test, quiz, dictation, etc. And teaching mostly goes with classroom lectures because it takes place in a classroom situation. Classroom teaching and learning is a traditional method. Due to the pandemic situation prevailing throughout the globe, the teaching and learning process is being shifted from traditional to online teaching. This situation prevails even in Malaysia. Malaysia Tamil schools implemented blended learning in their education system (Letchmanan and Saad, 2021). This article aims to study the interest of blended learning among the students of Tamil primary schools and to identify the performance on the verse and figurative language (ceiyulum moziyaniyum) through blended learning. Schools all around the world are increasingly using blended approaches that combine online and face-to-face teaching. Blended learning is a combination of traditional classroom teaching and online teaching. Though this concept was introduced earlier, the application was from the beginning of the 21 st century (Shanmugam and Balakrishnan, 2019;Ismail and Khalib, 2020). Many educational institutions came forward to introduce blended learning in the schools, during the pandemic, Covid19. There are various definitions by different scholars for blended learning. Blended learning is said to it 'a blend in education' (Dziuban et al., 2018). Blended learning is 'one of the contemporary trends of education' (Hubackova and Semradova, 2016). This method came to exist after the introduction of multimedia in education where multimedia is a collection of various tools including, texts, images, audios, videos, and designs. 
In blended learning, one can find the blend of technology with face-to-face and online communication. Online learning can be called distance learning. In face-to-face learning, blended learning materials like lecture content, lecture, conference, handphone apps, YouTube, blogs, videos, audios, and so on are used. Whereas in online teaching, e-learning, online lecture, and sessions, social media like Facebook, e-mail, blogs, Twitter, YouTube, Skype, live chat, etc. are used. Moreover, blended learning is used in the two methods of teaching such as synchronous and asynchronous. At present, online teaching and learning which takes place, use both synchronous and asynchronous methods of teaching and learning. Synchronous method of teaching is almost similar to face-to-face class takes place through online platforms like, google meet, zoom meet, WhatsApp, and so on. But, the asynchronous method of teaching takes place through social media without a face-toface learning environment. Social media includes YouTube, v channels, Facebook, recorded audio or video content, and digital texts. Education in primary schools is for children from 4 to 11 years old. But, it varies from country to country. Even, the name 'primary school' changes according to the countries' educational systems. In some countries, it is named elementary school and in others as grade school. In Malaysia, primary school education comes after preschool. Tamil education in Malaysia was started in the year 1816 by establishing the first Tamil primary school in Penang. At present, Tamil primary schools are government schools and government-aided schools which come under the Ministry of Education. Moreover, the Ministry of Education has transformed education through the Malaysia Education Blueprint 2013-2025 which plans various initiatives to empower education (Ponniah et al., 2019). In Tamil primary schools, students from Tamil language backgrounds study Tamil. Tamil is one of the major languages in the Dravidian language family and it has a rich literature background starting from 300 BC. Tholkappiyam is considered as the ancient grammar book in the Tamil language. In Tamil primary schools of Malaysia, emphasis is given to listening, speaking, reading, and writing skills to an extent, arithmetic skills, and other skills-related subjects are taught. Moreover, the Tamil literary concepts such as proverbs, short stories, rhymes, poetic forms, verse, and figurative language (ceiyulum moziyaniyum), and many more are taught in Tamil primary schools (Ponniah et al., 2019). Tamil primary schools have applied blended learning in their education process. It blended both traditional and online in teaching and learning. Implementing blended learning in schools changes the attitude of students and becomes a powerful and effective tool for the students to learn. Many changes occurred after applying blended learning in Tamil primary schools (Shanmugam and Balakrishnan, 2020). Some of them are, the role of the teachers became essential, the deliverance of subject knowledge to the students improved, the physical environment in which the learning takes place is shifted, and the flexibility in time and place of learning came to exist. Moreover, the benefits after applying it in primary schools are enumerable. 
They are, ease in the interaction between teachers and students, teachers to keep all the teaching contents or instruction's at their fingertips, students learn of their own interest choosing own time, place and environment, teaching can be done by online, students never feel shy to raise questions, teachers use any type of technology while teaching, opportunities are provided for group and individual interactions, huge access to online resources, easy to understand, and use the digital content available and finally, the learners' assessment tools are easy to use. The results of the research work are based on the objectives. This study has two objectives such as; to study the interest of blended learning among the students of Tamil primary schools and to identify the performance on the verse and figurative language (ceiyulum moziyaniyum) through blended learning. Tamil primary schools are located in all the states of East Malaysia. The present study is limited to the Kedah state of Malaysia. Four schools of Kedah state were selected due to the time constraint. Of these, two schools with online classes with synchronous teaching and the remaining two schools with online asynchronous teaching classes. In online synchronous classes, various learning apps are used in schools. Moreover, the data from school classes are limited to 10 weeks of study. Methodology A descriptive research method is used for this study. Observational method, case-study method, and survey method are the three types of descriptive method. Of these, the survey method is used for this study. In this method, a questionnaire was used as a research tool to collect data. The questionnaire contains a set of questions that were used to collect information from the respondents. The elements in the questionnaire were determined by the comments and criticism from the field experts of education. The interest in blended learning was identified using the attendance of the students. Data regarding the students' performance and assessment were analyzed to fulfill the second objective. Online Collaborative Learning Theory (OCLT) proposed by Harasim (2017) is selected for the study. Online teaching and learning have their own characteristic features like displaying digital texts, videos, mass learning, users' own time, and so on. Online classes are termed virtual classes. This theory explains the indicators of learning in order to assess the quality of learning and its effectiveness (Harasim, 2017). Active participation, satisfaction, experience, and role of instructors are the main aspects of OCLT. It has three phases such as idea-generating, idea organizing, and intellectual convergence. Of these, intellectual convergence is used to assess the performance of the students in this study. Intellectual convergence is any assessment based on individual thinking and logical reasoning. Moreover, active participation is identified using the students' attendance in their schools in this study. Hence, data were collected based on the attendance and performance of the students in the selected Tamil primary schools. Data collection Data were collected from four different schools. As blended learning is the main element of this study, all the four schools involved in online classes were chosen. Of these, two schools are with the synchronous mode of teaching and the other two schools are with the asynchronous mode of teaching. The framed questionnaire was used to collect data from the classrooms. 
Moreover, the data were collected from the 10 weeks of regular teaching. Allwright et al. (1991) argued that classroom conditions, namely what and how learners learn, what teachers actually do, and what kinds of events take place, should be researched. Tamil primary school teachers were selected for this study to collect data. The content validity of the research tool was determined by comments and criticism from experts in the field. Findings and discussions The data collected were analyzed and the percentages were calculated using Microsoft Excel software. For this analysis, the data came from 4 different schools with synchronous and asynchronous teaching through blended learning. The analysis is based on the students' interest in attending the classes and their performance on the verse and figurative language (ceiyulum moziyaniyum) through blended learning. Moreover, the data collected from the schools with the two modes of teaching and learning were analyzed separately. The blended learning schools are referred to as school1, school2, school3, and school4 respectively in Tables 1 and 2. The percentage (%) and mean value are calculated, and discussions are provided. Attendance Attendance is the number of students attending class. Taking attendance is a standard and compulsory practice in school education in many countries, including Malaysia. School attendance is the foundation of a student's ability to receive education and the benefits that such education provides (Dalton and Beacon, 2018). Data for daily attendance were collected from the staff of the selected Tamil primary schools for 10 weeks. In the schools with the synchronous mode of teaching, there are 22 and 28 students respectively. Table 1 shows the details. The analysis of these data shows that the average attendance in school1 is 82.8%, whereas in school2 it is 85.8%. The mean attendance over the 10 weeks across both schools is 84.3%. This shows the students' interest in learning through the online synchronous classes. Further, data were collected from the schools with the asynchronous mode of teaching, for a period of 10 weeks. In these 2 schools, there are 13 and 20 students respectively. The data were analyzed, and Table 2 shows the details of attendance in the asynchronous mode of teaching in the two schools referred to as school3 and school4. The average attendance in school3 and school4 is 46.1% and 76% respectively. The mean attendance over the 10 weeks across both schools is 61%. This shows the students' interest in learning in the online asynchronous classes. When attendance in the synchronous and asynchronous online classes of the four schools is compared, the synchronous classes have an advantage over the asynchronous ones: average attendance is 82.8% and 85.8% in the synchronous classes versus 46.1% and 76% in the asynchronous classes. Based on the 10-week classes, the mean value for the synchronous classes is 84.3% and for the asynchronous classes 61.0%. This shows that students have more interest in learning through the online synchronous classes of blended learning. Fig. 1 shows a clear picture of the attendance in the synchronous and asynchronous classes of blended learning. Fig. 1: Attendance in blended learning. Overall, attendance in the blended learning classes reflects the students' interest in attending them.
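For transparency, the descriptive statistics reported in this section (weekly attendance percentages and their means) can be reproduced with a short script; the sketch below is illustrative only, with hypothetical column names and made-up weekly counts standing in for the Excel worksheet described above.

```python
import pandas as pd

# Hypothetical layout: one row per school and week, with the number of
# enrolled pupils and the number who attended the blended-learning class.
records = pd.DataFrame({
    "school":   ["school1", "school1", "school2", "school2"],
    "week":     [1, 2, 1, 2],
    "enrolled": [22, 22, 28, 28],
    "attended": [18, 19, 24, 25],
})

# Weekly attendance percentage, then the per-school and overall means.
records["attendance_pct"] = 100 * records["attended"] / records["enrolled"]
per_school = records.groupby("school")["attendance_pct"].mean()
overall_mean = per_school.mean()
print(per_school.round(1))
print(f"mean over both schools: {overall_mean:.1f}%")
```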
Performance on the verse and figurative language One of the Tamil literary concepts, verse and figurative language (ceiyulum moziyaniyum), is taught in the Tamil primary schools through blended learning. Both verse (ceiyul) and figurative language (moziyani) are poetic forms of Tamil literature. The moral poetic forms in the verses are provided with descriptions. This is taught in both the synchronous and asynchronous classes of the four selected schools through blended learning. In the online classes with the synchronous mode of teaching, the assessments on the verse and figurative language (ceiyulum moziyaniyum) are conducted using various mobile applications (apps) such as Kahoot, Quizizz, Quizlet, and Wordwall. The performance of the students is analyzed using the data collected from the two schools, with the assessments conducted through the above-mentioned apps. Table 3 provides the details. The analysis of these data shows that the students performed well in the verse and figurative language (ceiyulum moziyaniyum) assessments given through the apps in the synchronous mode of teaching. The total mark varies according to the mode of assessment: the total mark through Kahoot is 18; through Quizizz it is 15, 16, 18, and 20 respectively; through Wordwall it is 19 and 20; and through Quizlet it is 20. These modes of assessment are used across the 10 weeks of the synchronous classes. In certain weeks, many students scored full marks (centum) in both schools. The mean performance across the various modes of learning is 97.4% and 96.2% in the two schools conducted through blended learning, and the combined mean for both schools is 96.8%. Further, in the asynchronous mode of teaching, verse and figurative language (ceiyulum moziyaniyum) are also taught in the Tamil primary schools. Here the performance is based on the offline exercises given through the asynchronous mode. The exercises provided are fill-in-the-blanks, framing sentences, writing verses, writing according to the given instructions, and review. After writing the answers, students are instructed to photograph them and send them to their respective teachers through social media such as Telegram or WhatsApp. The performance is analyzed using the data collected from the asynchronous classes. The analysis of these data shows the performance of the students in the verse and figurative language (ceiyulum moziyaniyum) assessments; the percentages are also calculated and reported. The mean performance across the various types of assessment is 76.3%. Table 4 shows the performance in the offline assessments. Performance in the verse and figurative language (ceiyulum moziyaniyum) assessments in the online classes is 96.8% with the synchronous mode and 76.3% with the asynchronous mode. The synchronous mode of blended learning is therefore better, with a high average of 96.8%. Fig. 2 provides the information. Fig. 2: Performance of blended learning. The performance of students in the verse and figurative language (ceiyulum moziyaniyum) assessments through the blended learning classes shows their intellectual convergence. The interest of students in attending the blended learning classes and their performance are thus provided with statistical information. According to Harasim (2017), active participation is an essential element in the OCL theory.
In the selected schools, attendance in the online classes is as high as 84.3%, which indicates active participation. Moreover, intellectual convergence is assessed through the students' performance in class, which is as high as 96.8%. Conclusion In this article, the attendance of the students who attended the online classes of blended learning was presented, together with their performance in the assessments on the verse and figurative language (ceiyulum moziyaniyum) delivered through blended learning. When the attendance data for blended learning are analyzed, it is found that the mean value for the synchronous classes is 84.3% and for the asynchronous classes 61.0%. Performance in the verse and figurative language (ceiyulum moziyaniyum) assessments conducted through the synchronous and asynchronous methods of blended learning was likewise identified: the mean value is 96.8% in the synchronous mode and 76.3% in the asynchronous mode. The findings of the study show that students actively participated in the blended learning classes and performed well in the assessments provided, which demonstrates their intellectual convergence.
3,915.8
2022-03-01T00:00:00.000
[ "Education", "Computer Science" ]
Hadronization Scheme Dependence of Long-Range Azimuthal Harmonics in High Energy p+A Reactions We compare the distortion effects of three popular final-state hadronization schemes. We show how hadronization modifies the initial-state gluon correlations in high energy p+A collisions. The three models considered are (1) LPH: local parton-hadron duality, (2) CPR: collinear parton-hadron resonance independent fragmentation, and (3) LUND: color string hadronization. The strong initial-state azimuthal asymmetries are generated using the GLVB model for non-abelian gluon bremsstrahlung, assuming a saturation scale Q_sat = 2 GeV. Long-range elliptic and triangular harmonics for the final hadron pairs are compared based on the three hadronization schemes. Our analysis shows that the process of hadronization causes major distortions of the partonic azimuthal harmonics for transverse momenta at least up to p_T = 3 GeV. In particular, they appear to be greatly reduced for p_T < 1-2 GeV. Introduction and motivation Multi-particle azimuthal correlations that are long-range in pseudo-rapidity are widely studied in high-energy nuclear collisions at RHIC Au+Au [1,2] and LHC Pb+Pb [3][4][5]. In particular, they are considered a signature of the "perfect fluid" behavior of the strongly coupled Quark-Gluon Plasma (sQGP) produced in such reactions. The recent discovery of long-range p+A azimuthal harmonics (see Fig. 1(a)), with magnitudes and p_T dependence comparable to the ones found in A+A collisions [6][7][8][9], and the near energy independence of these A+A moments observed in the Beam Energy Scan (BES) at RHIC [10] together with LHC data, have challenged the uniqueness of the sQGP interpretation of v_n in A+A. Prior to the recent p+A and BES data, it was assumed that in smaller transverse size p+A systems or lower energy A+A reactions, the perfect fluid sQGP "core" would gradually change into a highly dissipative hadron resonance gas "corona" and lead to a substantially smaller magnitude of the azimuthal harmonic moments. The observed near independence of v_n(p_T) with respect to system size and initial energy density has motivated the search for possible alternative non-hydrodynamic sources of azimuthal harmonics. Figure 1: Left panel: Experimental v_n harmonics for p+Pb collisions and |∆η| > 2 taken from ATLAS [12]; the magnitudes of the moments are comparable to the ones observed in A+A reactions. Right panel: Azimuthal Fourier moments arising from the GB distribution (Eq. (1)) as a function of the gluon transverse momentum and for different choices of the exchanged momentum, as computed in [22]. While it has been shown that hydrodynamic equations, with particular assumptions about the initial and freeze-out conditions [11], are sufficient to describe the data, the uniqueness of that description is not obvious, especially given the unexpected features of the previously mentioned data. One important class of non-hydrodynamic models proposed to explain this puzzle is based on the Color Glass Condensate (CGC) and Glasma paradigm, involving initial-state nonperturbative classical field correlations controlled by a gluon saturation scale, Q_sat(x, A) [13][14][15][16][17][18][19][20][21]. A simpler perturbative QCD source of multi-gluon azimuthal correlations, due to a classical non-abelian field interference effect, was recently presented in GLVB [22] based on the well-known Gunion-Bertsch (GB) LO analytic formula [23]. This non-abelian gluon radiation generates nonzero v_n moments with a shape that closely resembles the experimental one; see Fig. 1(b).
The magnitudes of those parton level harmonics are however too large and some kind of damping mechanism is therefore required. A natural candidate for this job is hadronization. In fact, the primary advantage of GLVB, for our purpose of exploring systematic theoretical uncertainties associated with hadronization scheme dependence of azimuthal harmonics, is its ease of adaptability to Monte Carlo (MC) multi-particle production in p+p, p+A and A+A collisions via the HIJING [24] type of models as emphasized in [22]. Thus far CGC/Glasma phenomenology of azimuthal harmonics moments in p+A and A+A reactions has been limited to the idealized local parton-hadron hadronization scheme [25] that by assumption preserves the parton level correlations into the final-state hadrons. This guarantees minimal distortion of the initial-state multi-parton correlations predicted by QCD. More realistically, it is well-known from decades of hadronic phenomenology that phenomena like the production and dacay of intermediate hadron resonances and final-state correlations can introduce non-trivial complications and, therefore, some non-perturbative hadronization scheme is required. Hadronization phenomenology cannot be rigorously predicted from QCD, but two generic approaches, carefully tuned to e + e − , ep, and pp data, have been developed during the 2 years. These models are the independent fragmentation scheme [26] and the LUND string model [27] implemented in the JETSET algorithm [28]. They can be used to estimate the distortions of initial-state partonic correlations due to a more realistic hadronization process. In particular, in the nuclear event generator HIJING, JETSET is used to convert the multiple diquarkquark beam jets with gluon mini-jet kinks into the multi-hadron resonances with particle data book properties and decay branchings. In this paper we study the hadronization scheme dependence of final-state correlations using three different hadronization models that can be conveniently selected within the JETSET code. We deliberately neglect any intermediate parton or hadron level transport or hydrodynamic effect to concentrate exclusively on final-state hadronization modification of initial multi-gluon correlations. We compare: (1) local parton-hadron duality (LPH), (2) collinear (independent) partonto-hadron resonance fragmentation/decay (CPR) and (3) LUND string hadronization models. We treat Q sat as an input parameter controlling the p T range of azimuthal asymmetry of Gunion-Bertsch multi-beam gluon bremsstrahlung. Our results strongly support the analysis of Skokov et al. [21], indicating that the CGC/Glasma predictions (without detailed hadronization modelling) should be limited to the kinematic range p T > 3 GeV to avoid complications and theoretical uncertainty of non-perturbative hadronization physics. At this point, it should be stressed that the configurations that we will consider in this work are quite simple and hence cannot be considered as a realistic representation of the ridge effect observed experimentally in p+A reactions. However, as we will see, they are already enough to draw strong conclusions about the hadronization mechanism dependence of the final azimuthal harmonics. Elliptic interference harmonics of gluons from two recoiling beam jets For our simulations, we used the MC hadronization algorithm JETSET 7.4 [28] with 30 × 10 6 simulated GLVB "events" with two recoiling beam jets. Each beam jet is represented by a high invariant mass qq pair along a "beam axis",ẑ. 
For our simulations the invariant mass of each beam jet is taken to be 100 GeV. The two beam jets are assumed to scatter with equal but opposite net momentum transfer Q_1 = −Q_2 = (q, ψ), with the magnitude distributed according to a simple Rutherford form Q_sat²/(q² + Q_sat²)², and the azimuthal direction, ψ, distributed uniformly in [−π, π]. For our numerical simulations, we further assumed Poisson fluctuations of the number, N_g, of bremsstrahlung gluons, with ⟨N_g⟩ = 8 and the gluons distributed uniformly in rapidity between kinematic bounds. In the LUND color string model [27], all gluons are represented by "kinks" of the qq̄ string that deform it in the transverse plane. In contrast, from an independent fragmentation point of view [26], gluons are assumed to be just isolated partons that hadronize independently from each other. We generate the bremsstrahlung gluons' transverse momenta and azimuthal angles, k = (k, φ), from the perturbative regularized GB distribution of Eq. (1) (the Gunion-Bertsch kernel, proportional to q²/[k²(k − q)²], with its infrared singularities regulated by µ and Λ_QCD), with Q_sat = 2 GeV being the typical momentum scale expected from the CGC model for p+A collisions. We take µ = 300 MeV and Λ_QCD = 200 MeV as infrared regulating scales. This setup produces an initial azimuthal anisotropy at the parton level that, with the simplest LPH scheme, leads to harmonic moments similar to experiment, as seen in Fig. 1. In particular, the pseudo-rapidity independence and the preference of the radiated gluons' transverse momenta k to be aligned along ±Q give rise to a "ridge-like" structure with an azimuthal asymmetry resembling the one observed experimentally. After the hadronization, an analysis is performed over pairs of final pions with criteria as close as possible to [8]. In particular, our pseudo-rapidity cuts on the final pions are |η| < 2.5 and |∆η| > 2 (the so-called long range), ∆η = η_a − η_b being the relative pseudo-rapidity of the pair. Both pions are taken in the same p_T bin, and the correlation functions we computed are of the form C(∆φ; p_T) ∝ S(∆φ)/B(∆φ), where ∆φ = φ_a − φ_b is the relative azimuthal angle between the two pions and S and B represent the same- and mixed-event pair distributions, respectively [8]. The analysis is repeated for the three hadronization schemes to reveal the hadronization scheme dependence of the final azimuthal harmonics shown in Figs. 2 and 3. From these distributions we extracted the n ≤ 6 single-particle moments, v_n(p_T), by fitting the correlation functions with a truncated Fourier series in cos(n∆φ). The results of this fit for n ≤ 4 are shown in Fig. 4. Numerical Results The computation of the two-particle ∆φ distributions reported in Figs. 2 and 3 shows that the process of hadronization has some deeply non-trivial consequences for the correlations between final-state particles. The gluon curves clearly present the behavior expected from the discussion around Eq. (1). The long-range near-side (∆φ ≈ 0) peak is simply due to the fact that the gluons are produced with transverse momentum preferentially close to the exchanged momentum Q, while the away-side (∆φ ≈ π) peak is produced by the recoil of the two strings. These features extend over the full p_T range. The distributions of the final pairs of hadrons, instead, have some very peculiar properties, depending on which p_T region we are considering. For small values of the pion transverse momentum (smaller than a scale, λ, to be determined in Sec. 3), the initial anisotropy is extremely reduced and the two-pion distributions become more and more uniform for decreasing p_T.
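Before continuing with the results, here is a sketch of the pair analysis just described. It is not the authors' code: the binning, the normalization, and the factorization ansatz v_n = sign(V_n)√|V_n| used to turn pair coefficients into single-particle moments are assumptions.

```python
import numpy as np

def delta_phi(phi_a, phi_b):
    """Pair angle difference wrapped into [-pi, pi)."""
    d = phi_a - phi_b
    return (d + np.pi) % (2.0 * np.pi) - np.pi

def correlation(same_pairs_dphi, mixed_pairs_dphi, nbins=36):
    """C(dphi) ~ S(dphi)/B(dphi) from same- and mixed-event pair samples."""
    edges = np.linspace(-np.pi, np.pi, nbins + 1)
    S, _ = np.histogram(same_pairs_dphi, bins=edges, density=True)
    B, _ = np.histogram(mixed_pairs_dphi, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, S / np.where(B > 0, B, np.nan)

def single_particle_vn(centers, C, nmax=6):
    """Fourier moments of C(dphi); assumes both particles in the same pT bin,
    so that the pair coefficient factorizes as V_n ~ v_n^2."""
    w = C / np.nansum(C)                               # normalized correlation
    vn = []
    for n in range(1, nmax + 1):
        Vn = np.nansum(w * np.cos(n * centers))        # pair Fourier coefficient
        vn.append(np.sign(Vn) * np.sqrt(abs(Vn)))      # factorization ansatz
    return np.array(vn)
```

Applying single_particle_vn to the pion and gluon correlations in the same p_T bin and taking their ratio gives numbers analogous to those quoted in Table 1.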
This behavior is common to both the LUND model and the independent fragmentation model, even though the shape of the ∆φ distributions already appears to be different for p_T > 1 GeV. The reasons for this strong dilution of the initial anisotropy are intrinsic to the process of hadronization itself and will be analyzed again in Sec. 3. If a simple system like ours already has such a "decoherence power", one can reasonably expect the same to hold for a more realistic situation (see for example Sec. 4). It should also be stressed that, since the low-p_T pions seem to carry no information about the initial gluon distribution, this feature appears to be true in general, no matter what causes the partonic correlation in the first place, and hence can be applied to other initial-state models as well. The second result refers to the higher transverse momentum regime (p_T ≳ λ). In both models this region presents strong two-particle correlations among the final pions, since the gluon anisotropy is transmitted more efficiently. However, one can immediately appreciate an essential difference between the independent fragmentation and the QCD string models. In the former, by definition, each parton fragments independently from the others and hence the pion correlation function closely resembles the gluon one since, when higher p_T are considered, no other relevant sources of angular correlation come into play. For the LUND model, instead, while the near-side peak is generally lower for the pions than for the gluons, an extremely pronounced away-side peak is present. This large ∆φ ≈ π signal (several times bigger than the ∆φ ≈ 0 one) is a direct consequence of the transverse momentum conservation taking place at each breaking of the color string [28]. Each qq̄ system, in fact, has zero total p_T, and the transverse fluctuations are governed by a roughly Gaussian probability distribution. This means that when the string breaks and a new pair of partons, say q and q̄, is created from the vacuum, the p_T of these partons has a probability distribution of the Lund tunneling form, dP ∝ exp[−π(p_T² + m²)/κ] (Eq. (4)), with m being the mass of the produced quark and κ ≈ 1 GeV/fm being the string constant. By conservation of total momentum they are always produced back-to-back. It then follows that, for every hadron of momentum p_T there will always be another with momentum −p_T, i.e. such that ∆φ = π. This essentially means that the differential distribution C(∆φ; p_T) contains a contribution proportional to δ(∆φ − π), which gets broader when finite p_T bins are taken into account, as in any realistic situation. We conclude that the QCD string hadronization scheme, which conserves event energy-momentum (unlike independent fragmentation), can introduce large away-side correlations in the spectrum of the final hadrons that can lead to negative odd harmonics absent at the purely partonic level. Table 1: Ratios between the pion even moments (v_n^π) and the gluon ones (v_n^g) for p_T < 2 GeV. One immediately notices that there is a consistent damping of the initial magnitudes. In particular, the final-state v_n are essentially zero (less than 30% of the initial ones) for p_T < 1 GeV. The suppression for p_T > 2 GeV appears to be negligible. All these properties lead to the azimuthal moments, v_n(p_T), shown in Fig. 4, which therefore differ dramatically between the two hadronization schemes.
In particular, as a consequence of the almost complete flatness of the ∆φ distributions, both models present even harmonics for low-p T which are much smaller than the gluonic ones. To be more quantitative, in Tab. 1 we compare the magnitudes of the even azimuthal harmonics at the partonic and hadronic level. As one can see, the damping effect is almost total for p T < 1 GeV and persists up to p T 2 GeV. Moreover, the prominent away-side peak present in the distributions of Fig. 3 causes large negative odd harmonics due to assumed local transverse momentum conservation. It should also be noted that these v n arise solely from initial GB gluon bremsstrahlung and final hadronization effects. No final state interactions have been included in our simulations. Hadronization mechanisms that damp gluon harmonics At this point one needs to ask: what is the scale which determines the quenching of anisotropy for small values of the transverse momenta? Our picture has two natural scales: the typical energy of the hadronization process, λ 1 GeV, and the typical momentum exchanged, Q sat = 2 GeV. Which of the two plays the role of discriminant between the uniform and the anisotropic regions? To answer this question we repeated the previous simulation using the independent fragmentation scheme but with a different value for the average momentum exchanged, Q sat = 4 GeV. The results are reported in Fig. 5. If compared to Fig. 2 one can appreciate that the final hadronic anisotropies now follow the gluonic ones much more closely than in the Q sat = 2 GeV case, as one might expect, for almost all values of the transverse momentum except for p T ≤ 1 GeV, where the two-pion distributions are again completely flattened out. This shows that the energy scale relevant in this decoherence effect is indeed the scale of hadronization, λ 1 GeV. As a further check, one can study the average ratio between the pion and the gluon momenta both for the transverse and longitudinal components. Our simulation indicates that for the two models This again is a hint for the role of hadronization in damping the anisotropies at low p T . The reasons for this "quenching power" of the hadronization process can be found in essentially two features: (i) non-collinearity of the gluon radiation and (ii) isotropic decays of resonances. In particular, the fact that gluon fragmentation is not perfectly collinear is already manifest in the first order pQCD Altarelli-Parisi distribution, where k is again the gluon transverse momentum, z is the Bjorken variable and P(z) the splitting function. Although this distribution is strongly peaked around k 0 it also has very long tails, clearly showing that a totally collinear picture is a too naïve approximation (for an interesting discussion on the role of fragmentation functions on the azimuthal harmonics see [29]). Morever, in the LUND string model we also have the transverse fluctuations of the flux tube, whose typical scale is √ κ 0.45 GeV (see Eq. (4)), which provide another source of dilution of the initial anisotropy. Lastly, the intermediate steps between the initial gluons and the final pions are populated by resonances. These particles decay without any preferential direction and hence strongly contribute to the flattening of the final ∆φ distributions. In particular, this is the reason why the observed decoherence happens for p T 1 GeV, this being the typical mass of the most common resonances. 
To better illustrate this point we performed an overly simplified simulation whose description and results are shown in Tab. 2. We learn from the above that the strong damping of the parton-level v_n at p_T ≲ λ is not a consequence of the particular initial-state model chosen, but is rather an intrinsic property of the hadronization mechanism. This is one of the striking results of our analysis. Table 2: Output of two events composed of a qq̄ pair and a gluon for both the independent fragmentation and the LUND model. The quark and antiquark are moving in opposite directions along the z-axis, each of them with energy E_q = 10 GeV. The gluon, instead, flies away along the x-axis (η = 0, φ = 0) with energy E_g = 3 GeV. We only report those particles with pseudo-rapidity |η| < 1, i.e. close to the gluon in phase space. One immediately notices that even though the initial gluon has zero azimuthal angle, the fragmentation produces particles with φ ≠ 0. Every event always contains at least one resonance (particles in parentheses) which then decays isotropically. Moreover, the fact that even the resonances (which are produced in the very first step of hadronization) have a non-zero azimuthal angle shows that the gluon fragmentation is non-collinear. The final pions are therefore widely spread in ∆φ. They also have p_T < 1 GeV, as expected. Hadronization damping of triangular and higher gluon harmonics As one can see from Fig. 4, in contrast to the LUND model, the independent fragmentation leads to very small odd harmonics, albeit with fluctuations due to small-statistics effects. The reason is that for two color antennas the system is back-to-back symmetric, with just two qq̄ pairs recoiling from each other. Since at high p_T no other correlations are introduced, the final pions inherit the symmetries of the initial partons. To check the consequences of hadronization on the purely geometrical odd moments, and to reproduce a slightly more realistic configuration, we performed a simulation involving three color antennas, simulating three recoiling beam jets conserving transverse momentum (see Fig. 6). Figure 6: Two-particle ∆φ distributions obtained from three initial qq̄ pairs in a triangular configuration for both the initial-state gluons (blue) and the final pions (red). The hadronization has been performed using the independent fragmentation scheme. Both the simulation and the analysis closely follow what we explained in Sec. 2, but now we implemented a third quark-antiquark pair such that the whole system conserves the total momentum, i.e. if Q_1, Q_2 and Q_3 are the momenta exchanged by each of the three pairs then Q_1 + Q_2 + Q_3 = 0. In this case we only used the independent fragmentation scheme, which lacked the odd v_n in the first place. The results for the ∆φ distributions and for the final Fourier moments are shown in Figs. 6 and 7. Results As one immediately notices, the initial gluons' ∆φ distributions now have a contribution from odd Fourier components, v_{2n+1}. Once again, in the p_T ≲ λ region none of the initial information is preserved and the pion correlation functions are totally flattened. Even though we suffer from a lack of statistics, it is also clear that in the higher transverse momentum regime the final particles' distributions keep following the gluonic ones. In terms of Fourier harmonics, even though the initial gluons have clearly sizeable v_n, the final pion moments in the low-p_T region are essentially zero (less than 10%); see again Fig. 7.
Hadronization is once again very effective in quenching the azimuthal asymmetry of the system. For p T λ the gluonic and hadronic v n have, instead, similar values. Conclusions Our analysis shows that the process of hadronization can lead to major distortions of the azimuthal harmonics up to p T = 3 GeV, which are of interest in the search for signatures of perfect fluidity in p+A systems. In particular, one important aspect seems to be general and modelindependent: at small values of the transverse momentum of the hadrons the complexity of the hadronization itself -namely non-collinearity of fragmentation and isotropic resonance decays -greatly reduces the information contained in the initial-state partons, causing an almost total smearing of the final hadronic spectrum. As shown in Sec. 3, this is intrinsic to the hadronization itself and hence should be taken into account by every model with initial-state anisotropy at least with a theoretical error band estimated by testing several hadronization schemes. Many models of initial-state anisotropy seem to claim that the only scale relevant for the generation of non-zero v n is the typical momentum exchanged, Q sat . Our analysis shows that this is only true above a second hadronization scale, λ 1 GeV. Hadronization scheme choice does matter, and those schemes that assume local collinear parton-hadron duality or pure collinear fragmentation without resonance production may over-estimate final-state hadron harmonics. It is interesting to note that assuming that fragmentation is perfectly collinear and described by any fragmentation function f (z) should actually lead to an enhancement of the initial gluon anisotropy since low-p T pions come from gluons with higher transverse momentum that naturally have greater azimuthal asymmetry. This is in striking contrast with the results of our JETSET simulations, where collinearity of hadronization is broken. A second conclusion is that the two-particle correlations in the p T λ 1 GeV region can strongly depend on the chosen hadronization model. Specifically, while in the independent fragmentation scheme the initial parton anisotropy almost completely transmits anisotropy to the final hadrons, in the LUND string model a new source of correlation due to transverse momentum conservation is introduced, leading to a large away-side peak in the pion ∆φ distributions that is not due to back-to-back mini jets, which are not taken into account in the present simulations. If these features also survive to a system more complicated than ours, then this leads to two important remarks: first of all, whenever performing a Monte Carlo simulation to study multi-particle correlations one should always pay particular attention to the degree of model-dependence of the simulation itself since this might introduce important biases on the final conclusions. Secondly, if one assumes the description of hadronization in terms of a QCD color string as a fairly accurate one, then we might have an unexpected source of non-flow away side correlation for particles with p T > 1 GeV which would be difficult to deconvolute from pQCD di-jets by any experimental analysis. We emphasize again that the properties of the final two-hadron correlations computed in this study, for the most part, are due neither to a particular initial-state mechanism nor to any collective transport or hydrodynamic effects: they are genuine consequences of the hadronization process that are scheme dependent. 
There is unfortunately no guarantee of universality of the hadronization scheme. Lastly, given the observed mass dependence of final-state correlations [30], it would be interesting to repeat the previous analysis for different flavors and to study how hadronization effects depend on the particle masses. We leave this analysis to a future study.
5,552.4
2015-05-14T00:00:00.000
[ "Physics" ]
Multi-stakeholder News Recommendation Using Hypergraph Learning Recommender systems are meant to fulfil user preferences. Nevertheless, there are multiple examples where users are not the only stakeholder in a recommendation platform. For instance, in news aggregator websites, apart from readers, one can consider magazines (news agencies) or authors as other stakeholders. A multi-stakeholder recommender system generates a ranked list of items taking into account the preferences of multiple stakeholders. In this study, news recommendation is handled as a hypergraph ranking task, where relations between multiple types of objects and stakeholders are modeled in a unified hypergraph. The obtained results indicate that ranking on hypergraphs can be utilized as a natural multi-stakeholder recommender system that is able to adapt recommendations based on the importance of stakeholders. Introduction Classic news recommender systems try to model user preferences based on users' previous interactions with articles. Such systems typically consist of only two types of objects, i.e. users and articles, taking into account only the interactions between them [4]. Nevertheless, in many applications there are multiple types of objects and stakeholders. For example, Airbnb should take into account the preferences of both hosts and guests [1]. The same holds in news aggregator platforms, where recommender systems should take into account the preferences of their corresponding stakeholders (e.g., readers, journalists, magazines, etc.). It is therefore crucial that multi-stakeholder news recommender systems include these different sets of objects and model the complex relations between them when generating lists of recommendations. Here, we show that the use of hypergraph ranking is a natural way to address multi-stakeholder news recommendation. A hypergraph is a generalization of a graph that consists of multiple node sets and hyperedges modeling high-order relations between them. There have been some studies that have applied hypergraph learning to recommendation, especially in the field of multimedia. Indicatively, in [2] the task of music recommendation was addressed as a hypergraph ranking problem, while in [7] users, images, tags, and geo-tags were modeled in a unified hypergraph for tag recommendation. Moreover, the authors in [6] also modeled news recommendation as a hypergraph learning problem by defining hyperedges between users, news articles, news topics and named entities. Despite its capability in modeling relations between several types of objects/stakeholders, to the best of our knowledge, hypergraph learning has not been used in the context of multi-stakeholder recommender systems. Methodology Let H be the hypergraph incidence matrix of size |N| × |E|, where N corresponds to nodes (users, articles, authors, topics, sources) and E to hyperedges; H(n, e) = 1 if n ∈ e and 0 otherwise. Let A be a symmetric matrix, with each entry A_ij indicating the relatedness between nodes i and j. D_n, D_e, and W are the node degree, hyperedge degree, and hyperedge weight matrices (here W = I), respectively. Although there are approaches to adjust or optimize the hyperedge weights (e.g. [8]), for the sake of simplicity here we assign equal weights to all the hyperedges. We try to find a ranking vector f ∈ R^|N| that minimizes the cost f^T L f, where L is the hypergraph Laplacian matrix and R represents the real numbers.
This problem is extended with an ℓ2 regularization norm between the ranking vector f and the query vector y ∈ R^{|N|}, resulting in the objective f^T L f + ϑ ||f − y||², where ϑ is a regularization parameter. The optimal ranking vector is f* = (ϑ/(1+ϑ)) (I − (1/(1+ϑ)) A)^{-1} y [2]. To generate the recommendation list for user u in a regular recommendation task, one sets the corresponding value in the query vector to one (y_u = 1) and all the other values to zero. We used a dataset from a Flemish news content aggregator website (Roularta Media Group). It consists of 3194 users, 4685 news articles and 108 authors. Each article is accompanied by title, text, tags, topics, authors and source (publisher). We defined five types of hyperedges in the unified hypergraph that are presented in Table 1. E1 connects an author to the articles he/she has written. E2 and E3 model high-order relations between the articles and their metadata. E4 represents hyperedges that connect each article to its k (we used k = 6) most content-wise similar articles based on article embeddings. E5 models user-article interactions, connecting a user with the articles he/she has seen. To generate the article embeddings we used the CNN-based deep neural network proposed by [3], based on the title, text and tags of news articles. In this dataset the majority of articles are related to a limited number of authors. This imbalanced distribution causes recommendations biased toward high-frequency authors and high coverage of them in the recommendation lists. For instance, during the COVID-19 outbreak, the platform may be interested in covering more articles written by a specific author (e.g., a science journalist) or magazine in recommendation lists. This can be handled by considering a higher weight for that specific author or magazine in the query vectors. The main aim here is to demonstrate the potential and flexibility of hypergraph learning in considering different stakeholders in the model. In real cases, the platform owners should consider a trade-off between short-term utility and long-term utility. The exact weighting and strategy depend on the context and the policies of the platform owners. Results Three scenarios are assessed in this section and their results are shown in Table 2. The obtained results are based on per-user queries, hiding/testing for each user 25 related article interactions. Therefore, only training interactions are used to form E5. The accuracy measures (nDCG@10 and Precision@10) are calculated based on test interactions. In the first scenario (baseline), the hypergraph consists of E1, E2, E3 and E5. In the second scenario, the article embeddings (E4) are added to the baseline. This way we investigate whether employing article content embeddings boosts recommendation performance. In the third scenario, we consider a higher weight for a specific author (α) in the query vector (y). The obtained results show that exploiting article embeddings (E4) boosts the hypergraph ranking performance compared to the baseline. In the third scenario, adding a weight (β) for the specific author (α) to the queries (y_a = β) triples the coverage of the selected author in the recommendation lists. Here, coverage is defined as the percentage of recommendation lists that contain articles of the selected author. This strategy can be used to adapt the ranked lists based on the importance of stakeholders in a specific context. Furthermore, it is vital for avoiding bias and maximizing coverage in news recommendation.
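To make the ranking computation and stakeholder weighting above concrete, the following is a minimal NumPy sketch, not the authors' implementation: it assumes the standard relatedness matrix A = D_n^{-1/2} H W D_e^{-1} H^T D_n^{-1/2} used in [2], replaces the Roularta data with a toy incidence matrix, and the node labels and boost value are illustrative only.

```python
import numpy as np

def hypergraph_ranking(H, y, theta=1.0, edge_weights=None):
    """Closed-form hypergraph ranking f* = theta/(1+theta) * (I - A/(1+theta))^{-1} y.

    H : (|N| x |E|) incidence matrix, H[n, e] = 1 if node n belongs to hyperedge e.
    y : (|N|,) query vector (1 for the target user, optionally beta for a boosted stakeholder).
    """
    n_nodes, n_edges = H.shape
    w = np.ones(n_edges) if edge_weights is None else np.asarray(edge_weights, float)
    node_deg = H @ w                    # d(n) = sum_e w(e) * H(n, e)
    edge_deg = H.sum(axis=0)            # delta(e) = number of nodes in hyperedge e
    Dn_inv_sqrt = np.diag(1.0 / np.sqrt(node_deg))
    De_inv = np.diag(1.0 / edge_deg)
    A = Dn_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dn_inv_sqrt
    I = np.eye(n_nodes)
    return theta / (1.0 + theta) * np.linalg.solve(I - A / (1.0 + theta), y)

# Toy hypergraph: nodes = [user1, user2, article1, article2, author1]; two hyperedges:
# e1 = {user1, article1, author1} (reading + authorship), e2 = {user2, article1, article2}.
H = np.array([[1, 0],
              [0, 1],
              [1, 1],
              [0, 1],
              [1, 0]], dtype=float)
y = np.zeros(5)
y[0] = 1.0    # query for user1
y[4] = 0.5    # hypothetical stakeholder boost for author1
scores = hypergraph_ranking(H, y)
print(np.argsort(-scores))  # nodes ranked by relevance to the weighted query
```

In a real deployment only the article nodes would be kept in the final ranked list; the full node ranking is printed here for brevity.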
The recommender can provide fairer coverage over authors by adapting their weights. It is worth mentioning that this adaptation comes at the cost of reduced precision. The weight of selected stakeholders in the query vectors can be adjusted to balance precision and coverage based on the context. Conclusion In this study we showed that hypergraph ranking can naturally address multiple stakeholders in recommendations. We also showed that one can achieve better recommendation performance by adding article content embeddings. An interesting future research topic would be to dynamically adapt the queries to include more stakeholders while balancing precision and coverage. Moreover, the trade-off between precision and coverage could be handled using the Pareto frontier [9]. Another interesting topic would be to diversify the recommendation lists based on the content of news articles [5]. Finally, a comprehensive evaluation study should be conducted to compare the proposed approach with other multi-stakeholder recommendation methods.
1,729
2020-12-09T00:00:00.000
[ "Computer Science" ]
Hybridizing neural network with multi-verse, black hole, and shuffled complex evolution optimizer algorithms predicting the dissolved oxygen The great importance of estimating dissolved oxygen (DO) dictates utilizing proper evaluative models. In this work, a multi-layer perceptron (MLP) network is trained by three capable metaheuristic algorithms, namely multi-verse optimizer (MVO), black hole algorithm (BHA), and shuffled complex evolution (SCE), for predicting the DO using the data of the Klamath River Station, Oregon, US. The records (DO, water temperature, pH, and specific conductance) belonging to the water years 2015–2018 (USGS) are used for pattern analysis. The results of this process showed that all three hybrid models could properly infer the DO behavior. However, the BHA and SCE accomplished this task with simpler configurations. Next, the generalization ability of the developed patterns is tested using the data of the 2019 water year. Referring to the calculated mean absolute errors of 1.0161, 1.1997, and 1.0122, as well as Pearson correlation coefficients of 0.8741, 0.8453, and 0.8775, the MLPs trained by the MVO and SCE perform better than the BHA. Therefore, these two hybrids (i.e., the MLP-MVO and MLP-SCE) can be satisfactorily used for future applications. Introduction As is known, acquiring an appropriate forecast of water quality parameters like dissolved oxygen (DO) is an important task due to their effects on aquatic health maintenance and reservoir management [1]. Constraints like the influence of various environmental factors on the DO concentration [2] have driven many scholars to replace the conventional models with sophisticated artificial intelligence techniques [3-6]. As discussed by many scholars, intelligence techniques have a high capability to undertake non-linear and complicated calculations [7][8][9][10][11][12][13][14]. A large number of artificial intelligence-based practices have been studied, for example, in the subjects of environmental concerns [15][16][17][18][19][20][21] [23,[45][46][47][48][49], image classification and processing [50,51], target tracking and computer vision [41,[52][53][54][55][56][57], structural health monitoring [58,59], building and structural design analysis [58,[60][61][62], structural material (e.g., steel and concrete) behaviors [8,21,61,[63][64][65][66][67], soil-pile analysis and landslide assessment [12, [67][68][69][70], seismic analysis [70][71][72], measurement techniques [41, 59,60,73], or very complex problems such as signal processing [50,52,74,75] as well as feature selection and extraction problems [23,50,74,[76][77][78]. Similar to deep learning-based applications [53,73,[79][80][81][82][83], many decision-making applications address complicated engineering problems as well [60,[84][85][86][87][88][89]. A neural network is known as a series of complex algorithms that recognize underlying connections in a set of data inputs and outputs through a process that mimics the way the human brain operates [45,46,[90][91][92][93]. In another sense, the technique of the artificial neural network (ANN) is a sophisticated nonlinear processor that has attracted massive attention for sensitive engineering modeling [94]. Different notions represent this model.
Most importantly, a multi-layer perceptron (MLP) [81,95] is composed of a minimum of three layers, each of which contains a number of neurons for handling the computations. A more complicated ANN-based solution is known as deep learning [96,97], which refers to a wider family of machine learning techniques based on ANNs with representation learning [79,80,82,83,98]. For instance, Chen, et al. [99], Hu, et al. [100], Wang, et al. [47], and Xia, et al. [101] employed extreme learning machine techniques in the field of medical sciences. As some new advanced prediction techniques, hybrid searching algorithms are widely developed to obtain more accurate prediction outputs; namely, Harris hawks optimization [67], enhanced grey wolf optimization [102], multiobjective large-scale optimization [40,90,103,104], fruit fly optimization [105], multi-swarm whale optimizer [13], ant colony optimization [106], as well as conventional and extreme learning machine-based solutions [107][108][109][110][111]. Through applying a support vector regression (SVR), Li, et al. [112] showed the efficiency of the maximal information coefficient technique used for feature selection in the estimation of the DO concentration. The results of the optimized dataset were much more reliable (28.65% in terms of root mean square error, RMSE) than the original input configuration. Csábrági, et al. [113] showed the appropriate efficiency of three conventional notions of artificial neural networks (ANNs), namely the multilayer perceptron (MLP), radial basis function (RBF), and general regression neural network (GRNN), for this purpose. Similar efforts can be found in [114,115]. Heddam [116] introduced a new ANN-based model, namely the evolving fuzzy neural network, as a capable approach for the DO simulation in the river ecosystem. The suitability of fuzzy-based models has been investigated in many studies [117]. The adaptive neuro-fuzzy inference system (ANFIS) is another potent data mining technique that has been discussed in many studies [118][119][120]. More attempts regarding the employment of machine learning tools can be found in [121][122][123][124]. Ouma, et al. [125] compared the performance of a feed-forward ANN with multiple linear regression (MLR) in simulating the DO in the Nyando River, Kenya. It was shown that the correlation of the ANN is considerably greater than that of the MLR (i.e., 0.8546 vs. 0.6199). Zhang and Wang [58] combined a recurrent neural network (RNN) with kernel principal component analysis (kPCA) to predict the hourly DO concentration. Their suggested model was found to be more accurate than traditional data mining techniques, including the feedforward ANN, SVR, and GRNN, by around 8, 17, and 12%. The most considerable accuracy (coefficient of determination (R 2 ) = 0.908) was obtained for the DO in the upcoming one hour. Lu and Ma [126] combined a so-called denoising method, "complete ensemble empirical mode decomposition with adaptive noise", with two popular machine learning models, namely random forest (RF) and extreme gradient boosting (XGBoost), to analyze various water quality parameters. It was shown that the RF-based ensemble is a more accurate approach for the simulation of DO, temperature, and specific conductance. They also proved the viability of the proposed approaches by comparing them with some benchmark tools. Likewise, Ahmed [127] showed the superiority of the RF over the MLR for DO modeling.
He also revealed that water temperature as well as pH play the most significant roles in this process. Ay and Kişi [128] conducted a comparison among the MLP, RBF, ANFIS (sub-clustering), and ANFIS (grid partitioning). Respective R 2 values of 0.98, 0.96, 0.95, and 0.86 for one station (Number: 02156500) revealed that the outcomes of the MLP are better correlated with the observed DOs. Synthesizing conventional approaches with auxiliary techniques has led to novel hybrid tools for various hydrological parameters [129][130][131]. Ravansalar, et al. [132] showed that linking the ANN with a discrete wavelet transform improves the accuracy (i.e., the Nash-Sutcliffe coefficient) from 0.740 to 0.998. A similar improvement was achieved for the SVR applied to estimate biochemical oxygen demand in the Karun River, Western Iran. Antanasijević, et al. [133] presented a combination of Ward neural networks and a local similarity index for predicting the DO in the Danube River. They stated the better performance of the proposed model compared to multisite DO evaluative approaches presented in the literature. Metaheuristic search methods, like teaching-learning-based optimization [134], have provided suitable approaches for intricate problems. Ahmed and Shah [118] suggested three optimized versions of ANFIS using differential evolution, the genetic algorithm (GA), and ant colony optimization for predicting water quality parameters, including electrical conductivity, sodium absorption ratio, and total hardness. In similar research, Mahmoudi, et al. [135] coupled the SVR with the shuffled frog leaping algorithm (SFLA) for the same objective. Zhu, et al. [136] compared the efficiency of the fruit fly optimization algorithm (FOA) with the GA and particle swarm optimization (PSO) for optimizing a least-squares SVR for forecasting the trend of DO. Referring to the obtained mean absolute percentage errors of 0.35, 1.3, 2.03, and 1.33%, the proposed model (i.e., FOA-LSSVR) surpassed the benchmark techniques. In this work, three stochastic search techniques, the multi-verse optimizer (MVO), black hole algorithm (BHA), and shuffled complex evolution (SCE), are used to optimize an MLP neural network for predicting the DO using recent data collected from the Klamath River Station. To the best of the authors' knowledge, up to now, only a few metaheuristic algorithms have been used for training the ANN in the field of DO modeling (e.g., the firefly algorithm [137] and PSO [138]). Therefore, the models suggested in this study are deemed innovative hybrids for this purpose. Data As a matter of fact, intelligent models should first learn the pattern of the intended parameter to predict it. This learning process is carried out by analyzing the dependence of the target parameter on some independent factors. In this work, the DO is the target parameter, with water temperature (WT), pH, and specific conductance (SC) as the independent factors. This study uses the data belonging to a US Geological Survey (USGS) station, namely the Klamath River (station number: 11509370). As Figure 1 illustrates, this station is located in Klamath County, Oregon State. Pattern recognition is fulfilled by means of the data between October 01, 2014, and September 30, 2018. After training the models, they predict the DO for the subsequent year (i.e., from October 01, 2018, to September 30, 2019). Since the models have not seen these data, the accuracy of this process will reflect their capability for predicting the DO in unseen conditions.
Hereafter, these two groups are categorized as training data and testing data, respectively. Figure 2 depicts the DO versus WT, pH, and SC for the (a, c, and e) training and (b, d, and f) testing data. Based on the available data for the mentioned periods, the training and testing groups contain 1430 and 352 records, respectively. Moreover, the statistical description of these datasets is presented in Table 1. Methodology The steps of this research are shown in Figure 3. After providing the appropriate dataset, the MLP is submitted to the MVO, BHA, and SCE algorithms for adjusting its parameters through metaheuristic schemes. During an iterative process, the MLP is optimized to present the best possible prediction of the DO. The quality of the results is lastly evaluated using the Pearson correlation coefficient (RP) along with the mean absolute error (MAE) and RMSE. They analyze the agreement and the difference between the observed and predicted values of a target parameter. In the present work, given DO_pred,i and DO_obs,i as the predicted and observed DOs, the RP, MAE, and RMSE are expressed by the standard equations RP = Σ_i (DO_obs,i − mean(DO_obs))(DO_pred,i − mean(DO_pred)) / sqrt(Σ_i (DO_obs,i − mean(DO_obs))² Σ_i (DO_pred,i − mean(DO_pred))²), MAE = (1/K) Σ_i |DO_pred,i − DO_obs,i|, and RMSE = sqrt((1/K) Σ_i (DO_pred,i − DO_obs,i)²), where K signifies the number of compared pairs. The MVO As is implied by its name, the MVO is derived from the multi-verse theory in physics [139]. According to this theory, there is more than one big bang event, each of which has initiated a separate universe. The algorithm was introduced by Mirjalili, et al. [140]. The main components of the MVO are wormholes, black holes, and white holes. The concepts of black and white holes run the exploration phase, while the wormhole concept is dedicated to the exploitation procedure. In the MVO, a so-called parameter "rate of inflation (ROI)" is defined for each universe. The objects are transferred from the universes with larger ROIs to those with lower values for improving the whole cosmos' average ROI. During an iteration, the universes are sorted with respect to their ROIs, and after a roulette wheel selection (RWS), one of them is deemed as the white hole. In this relation, a set of universes can be defined as U = [x_i^j], i = 1, 2, …, k, j = 1, 2, …, g, where g symbolizes the number of objects and k stands for the number of universes. The j-th object in the i-th solution is generated according to x_i^j = lb_j + rand() × (ub_j − lb_j), where ub_j and lb_j denote the upper and lower bounds and the function rand() produces a randomly distributed number. In each repetition, there are two options for x_i^j: (i) it is selected from earlier solutions using RWS (e.g., x_i^j ∈ (x_1^j, x_2^j, ..., x_{i−1}^j)) or (ii) it does not change. It can be written as x_i^j = x_m^j if r1 < NI(U_i), and x_i^j otherwise, where U_i stands for the i-th universe, NI(U_i) gives the corresponding normalized ROI, x_m^j is the object chosen by the RWS, and r1 is a random value in [0, 1]. Equation 7 expresses the measures considered to deliver the variations of the whole universe. In this sense, the wormholes are supposed to enhance the ROI: x_i^j = X_j + TDR × ((ub_j − lb_j) × r4 + lb_j) if r3 < 0.5 and r2 < WEP, x_i^j = X_j − TDR × ((ub_j − lb_j) × r4 + lb_j) if r3 ≥ 0.5 and r2 < WEP, and x_i^j is left unchanged if r2 ≥ WEP, where X_j signifies the j-th component of the best-fitted universe obtained so far and r2, r3, and r4 are random values in [0, 1]. Moreover, the two parameters WEP and TDR stand for the wormhole existence probability and traveling distance rate, respectively. Given Iter as the running iteration and MaxIter as the maximum number of iterations, these parameters can be calculated as WEP = a + Iter × (b − a)/MaxIter and TDR = 1 − Iter^{1/q}/MaxIter^{1/q}, where q is the accuracy of exploitation and a and b are constant pre-defined values [141,142]. The BHA Inspired by black hole incidents in space, Hatamlou [143] proposed the BHA in 2013. Emerging after the collapse of massive stars, a black hole is distinguished by a huge gravitational power.
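Before continuing with the optimizers, the three evaluation criteria just defined can be sketched directly in code. The snippet below is a generic implementation of their standard definitions (the extracted text omits the original equations), with dummy values standing in for the Klamath River records.

```python
import numpy as np

def evaluate(observed, predicted):
    """Return (RP, MAE, RMSE) for K observed/predicted DO pairs."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    k = obs.size
    rp = np.corrcoef(obs, pred)[0, 1]                 # Pearson correlation coefficient
    mae = np.abs(pred - obs).sum() / k                # mean absolute error
    rmse = np.sqrt(((pred - obs) ** 2).sum() / k)     # root mean square error
    return rp, mae, rmse

# Example with dummy DO values (mg/L)
rp, mae, rmse = evaluate([8.1, 7.5, 9.0, 6.8], [7.9, 7.8, 8.6, 7.1])
print(f"RP={rp:.4f}  MAE={mae:.4f}  RMSE={rmse:.4f}")
```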
The stars move toward this mass, and this explains the pivotal strategy of the BHA for achieving an optimum response. A randomly generated constellation of stars represents the initial population. Based on the fitness of these stars, the most powerful one is deemed as the black hole and absorbs the surrounding ones. In this procedure, the positions change according to the relationship x_i(Iter + 1) = x_i(Iter) + rand × (x_BH − x_i(Iter)), i = 1, 2, …, Z, where rand is a random number in [0, 1], x_BH is the black hole's position, Z is the total number of stars, and Iter symbolizes the iteration number. Once the fitness of a star surpasses that of the black hole, they exchange their positions. In this regard, Equation 11 calculates the radius of the event horizon for the black hole as R = f_BH / Σ_{i=1}^{Z} f_i, where f_i is the fitness of the i-th star and f_BH gives this value for the black hole [144]. The SCE Originally proposed by Duan, et al. [145], the SCE has been efficiently used for dealing with optimization problems with high dimensions. The SCE can be defined as a hybrid of complex shuffling and competitive evolution concepts along with the strengths of the controlled random search strategy. This algorithm (i.e., the SCE) benefits from a deterministic strategy to guide the search. Also, utilizing random elements has resulted in a flexible and robust algorithm. The SCE is implemented in seven steps. Assuming NC as the number of complexes and NP as the number of points existing in one complex, the sample size of the algorithm is generated as S = NC × NP. In this sense, NC ≥ 1 and NP ≥ 1 + the number of design variables. Next, the samples x1, x2, …, xS are created in the feasible space (i.e., within the bounds of the design variables). In the fifth step, each complex is evolved by the competitive complex evolution algorithm. Later, in a process named shuffling of the complexes, all complexes are combined back into the array D. This array is then sorted based on the fitness values. Lastly, the algorithm checks the stopping criteria to terminate the process [146]. Results and discussion 4.1 Optimization and weight adjustment As explained, the proposed hybrid models are designed in such a way that the MVO, BHA, and SCE algorithms are responsible for adjusting the weights and biases of the MLP. Each algorithm first suggests a stochastic response to re-build the MLP. In the next iterations, the algorithms improve this response to build a more accurate MLP. In this relation, the overall formulation of the MLP that is applied to the training data can be expressed as R_N = f(Σ (w_N I_N) + b_N), where f is the activation function used by the neurons in a layer, and R_N and I_N denote the response and the input of neuron N, respectively. The created hybrids are implemented with different population sizes (NPops) for achieving the best results. Figure 4 shows the values of the objective function obtained for NPops of 10, 25, 50, 75, 100, 200, 300, 400, and 500. In the case of this study, the objective function is the RMSE criterion. Figure 4 shows that, unlike the SCE, which gives higher-quality training with small NPops, the MVO performs better with the three largest NPops. The BHA, however, did not show any specific behavior. Overall, the MVO, BHA, and SCE with NPops of 300, 50, and 10, respectively, could adjust the MLP parameters with the lowest error. Figure 6 shows a comparison between the observed DOs and those predicted by the MLP-MVO, MLP-BHA, and MLP-SCE for the whole five years. At a glance, all three models could properly capture the DO behavior.
This indicates that the algorithms have designated appropriate weights for each input parameter (WT, pH, and SC). The results of the training and testing datasets are presented in detail in the following. As stated previously, the quality of the testing results shows how successful a trained model can be in confronting new conditions. The data of the fifth year were considered as these conditions in this study. Figure 8 depicts the histogram of the testing errors. In these charts, µ stands for the mean error and σ represents the standard error. In this phase, the RMSEs of 1.3187, 1.4647, and 1.3085, along with the MAEs of 1.0161, 1.1997, and 1.0122, implied the power of the used models for dealing with unseen data. It means that the weights (and biases) determined in the previous section have successfully mapped the relationship between the DO and WT, pH, and SC for the second phase. From the comparison point of view, unlike the training phase, the SCE-based hybrid outperformed the MLP-MVO. The MLP-BHA, however, presented the poorest prediction of the DO again. Conclusions This research pointed out the suitability of metaheuristic strategies for analyzing the relationship between the DO and three influential factors (WT, pH, and SC) through the principles of a multi-layer perceptron network. The used algorithms were the multi-verse optimizer, black hole algorithm, and shuffled complex evolution, which have shown high applicability for optimization objectives. A finding of this study was that while the MVO needs NPop = 300 to give proper training of the MLP, the two other algorithms can do this with smaller populations (NPops of 50 and 10). According to the findings of the training phase, the MVO can achieve a more profound understanding of the mentioned relationship. The RMSE of this model was 1.3148, which was found to be smaller than those of the MLP-BHA (1.4426) and MLP-SCE (1.3304). But different results were observed in the testing phase. The SCE-based model achieved the highest accuracy (the RPs were 0.8741, 0.8453, and 0.8775 for the MLP-MVO, MLP-BHA, and MLP-SCE, respectively). All in all, the authors believe that the tested models can serve as promising ways for predicting the DO.
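To make the hybrid scheme concrete (a metaheuristic adjusting the flattened weights and biases of an MLP against an RMSE objective), the following self-contained sketch trains a tiny MLP with a simplified black hole algorithm. It is not the authors' code: the data are synthetic stand-ins for the WT/pH/SC records, the network size, bounds and iteration counts are arbitrary, and the BHA follows the textbook update (stars drift toward the best solution; stars inside the event horizon are re-seeded).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the station records: columns = WT, pH, SC; target = DO.
X = rng.uniform([5.0, 6.5, 100.0], [25.0, 9.0, 400.0], size=(200, 3))
y = 12.0 - 0.25 * X[:, 0] + 0.8 * (X[:, 1] - 7.0) + 0.002 * X[:, 2] + rng.normal(0.0, 0.3, 200)

HIDDEN = 5
DIM = 3 * HIDDEN + HIDDEN + HIDDEN + 1        # weights and biases of a 3-HIDDEN-1 MLP

def mlp_rmse(theta):
    """Objective: RMSE of an MLP whose parameters are packed in the flat vector theta."""
    w1 = theta[:3 * HIDDEN].reshape(3, HIDDEN)
    b1 = theta[3 * HIDDEN:4 * HIDDEN]
    w2 = theta[4 * HIDDEN:5 * HIDDEN]
    b2 = theta[-1]
    pred = np.tanh(X @ w1 + b1) @ w2 + b2
    return np.sqrt(np.mean((pred - y) ** 2))

def black_hole(obj, dim, n_stars=50, iters=300, lb=-2.0, ub=2.0):
    """Simplified BHA: stars move toward the best star; absorbed stars are re-initialized."""
    stars = rng.uniform(lb, ub, size=(n_stars, dim))
    fit = np.array([obj(s) for s in stars])
    for _ in range(iters):
        bh = np.argmin(fit)                               # lowest RMSE acts as the black hole
        stars += rng.random((n_stars, 1)) * (stars[bh] - stars)
        fit = np.array([obj(s) for s in stars])
        bh = np.argmin(fit)                               # a better star replaces the black hole
        radius = fit[bh] / fit.sum()                      # event horizon R = f_BH / sum_i f_i
        too_close = np.linalg.norm(stars - stars[bh], axis=1) < radius
        too_close[bh] = False
        stars[too_close] = rng.uniform(lb, ub, size=(too_close.sum(), dim))
        fit[too_close] = [obj(s) for s in stars[too_close]]
    best = np.argmin(fit)
    return stars[best], fit[best]

theta_best, rmse_best = black_hole(mlp_rmse, DIM)
print(f"training RMSE of the BHA-tuned MLP: {rmse_best:.3f}")
```

The MVO and SCE hybrids would reuse the same mlp_rmse objective and differ only in how candidate weight vectors are generated and updated.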
4,242
2021-01-25T00:00:00.000
[ "Computer Science" ]
Development of collagenous scaffolds for wound healing: characterization and in vivo analysis The development of wound dressings from biomaterials has been the subject of research due to their unique structural and functional characteristics. Biopolymers of animal origin, such as collagen and chitosan, act as promising materials for applications in injuries and chronic wounds, functioning as a repairing agent. This study aims to evaluate in vitro effects of scaffolds with different formulations containing bioactive compounds such as collagen, chitosan, N-acetylcysteine (NAC) and ε-poly-lysine (ε-PL). We manufactured a scaffold made of a collagen hydrogel bioconjugated with chitosan by crosslinking and addition of NAC and ε-PL. Cell viability was verified by resazurin and live/dead assays and the ultrastructure of the biomaterials was evaluated by SEM. Antimicrobial sensitivity was assessed by antibiogram. The healing potential of the biomaterial was evaluated in vivo, in a model of healing of excisional wounds in mice. On the 7th day after the injury, the wounds and surrounding skin were processed for evaluation of biochemical and histological parameters associated with the inflammatory process. The results showed great cell viability and an increase in porosity after crosslinking, while antimicrobial action was observed in scaffolds containing NAC and ε-PL. Chitosan scaffolds bioconjugated with NAC/ε-PL showed improvement in tissue healing, with reduced lesion size and reduced inflammation. It is concluded that scaffolds crosslinked with chitosan-NAC-ε-PL have the desirable characteristics for tissue repair at low cost and could be considered promising biomaterials in the practice of regenerative medicine. Introduction The skin is the human body's main protective barrier from pathogens. In cases of dermis injury, this barrier is impaired. To prevent possible contamination, dehydration, heat loss and other damage, it is necessary to promote rapid wound closure and regeneration of damaged skin to restore the barrier function. Effective repair requires communication and interaction between different types of cells, a process that is regulated at various levels [1][2][3]. This study is dedicated to Luiz Ricardo Goulart. The development of wound dressings from biomaterials has been the subject of research due to their unique structural and functional characteristics. Animal-derived hydrogels made of collagen and chitosan act as promising materials for applications in injuries and chronic wounds, working as a repair agent that guides the patient's own skin cells to a compatible and organized matrix. Hydrogel-based dressings and matrices have the potential to satisfy the requirements of an ideal dressing as they provide a consistent and conducive environment for healing wounds with an acceptable cosmetic appearance [4,5]. Collagen-derived sponge dressings provide a unique set of properties necessary for the wound healing process, such as high water affinity, chemotactic property, absorption capacity, platelet activation and biocompatibility [4][5][6]. Furthermore, collagen can also be combined with other materials and additives to obtain synergistic effects, enhancing its therapeutic role in the treatment of chronic or infected wounds [5,6].
Another compound, chitosan, is a natural linear biopolymer with a hydrophilic surface that promotes cell adhesion, proliferation and differentiation, which gives it properties including biocompatibility, bioactivity, biodegradability and high viability in different shapes and structures [7,8]. Furthermore, due to the positively charged amino groups in the structure of this carbohydrate, it is mucoadhesive, which promotes the ability to bind cell membranes. Not only does chitosan have adequate porosity, but several studies have proven that its combination with collagen results in better performance in terms of regenerative capacity [7,9]. In addition, ε-poly-L-lysine (ε-PL) is a cationic peptide consisting of 25 to 35 L-lysine residues [10]. ε-PL was discovered as an extracellular material produced by filamentous actinomycetes such as Streptomyces albulus and Lysinopolymerus, and has been used as a natural antimicrobial peptide that acts by inhibiting various microorganisms such as bacteria, yeasts and viruses [11][12][13]. Due to its biodegradation and biocompatibility properties, ε-PL has been widely used as a food preservative in industry [14]. Furthermore, N-acetylcysteine (NAC), the acetylated variant of the amino acid L-cysteine, is a source of sulfhydryl (SH) groups and is converted in the body into metabolites capable of stimulating glutathione (GSH) synthesis, promoting detoxification and acting directly as a free radical scavenger. NAC has been used in clinical practice as a mucolytic agent in respiratory diseases; however, it also appears to have beneficial effects in conditions of oxidative stress, such as HIV infection, cancer, heart disease and smoking [15]. There is growing interest in the effects of NAC against oxidative stress associated with its antioxidant properties, due to its rapid reaction with free radicals and restitution of reduced glutathione, as demonstrated by Douhib et al. [16] and Adil et al. [17]. Due to the various features of the abovementioned compounds, our group believes that a synergy between these components could potentiate a wound dressing made of them. Therefore, this study aimed to evaluate the in vitro and in vivo effects of scaffolds derived from collagen-rich biomaterial in different formulations containing bioactive compounds such as chitosan, NAC and ε-PL. Ethics statement All of the experimental procedures involving animals were approved by the Institutional Research Board for Ethics on Animal Use (Comitê de Ética no Uso de Animais, CEUA) of the Federal University of Uberlândia (UFU-Brazil) under approval protocol number 094/19 (CEUA/UFU). All experiments were conducted according to the guidelines for the care and use of laboratory animals of the CEUA/UFU. After the crosslinking of collagen and chitosan, 1 mg/mL of NAC and ε-PL was added to the solutions for preparing the CChNE scaffolds. For molding the scaffolds, each formulation and its respective control were plated at a volume of 200 to 300 μL per well (96-well plate) and frozen at −80 °C for 1 h. Then they were lyophilized overnight.
Cell viability assay To assess cytotoxicity, resazurin was used. Due to the characteristics of the scaffold, which becomes solubilized in the culture medium, the indirect contact method by component extraction was chosen. Human keratinocyte cells (HaCaT) were maintained in Dulbecco's Modified Eagle's Medium (Gibco), supplemented with 10% fetal bovine serum (Cultilab) and 1% antibiotic/antimycotic solution (Gibco), under standard culture conditions (37 °C, 95% humidified air, and 5% CO2) until confluence. For this assay, cells were plated in a 96-well plate in the amount of 1 × 10^4 per well and kept in a CO2 incubator for 24 h at 37 °C. The scaffolds were prepared at different concentrations (1, 1.5, 2 and 3 mg/mL) in triplicate, incubated in 200 μL of culture medium per scaffold and kept at 37 °C for 24 h. After incubation, 100 μL of the medium containing scaffold extract was transferred to a 96-well plate containing 1 × 10^4 cells per well. For the positive control, 5% dimethylsulfoxide (DMSO, Sigma-Aldrich) was added. After 72 h, cells were incubated with 20 μL/well of 3 nM resazurin solution (Sigma-Aldrich) and maintained for 3 h at 37 °C. Cell viability was calculated from the mean fluorescence (Fl) of treated wells relative to the untreated control. According to the results of this screening, the non-cytotoxic dose for the cells was chosen to proceed with the other assays. Live-dead assay The scaffolds were prepared at the chosen concentration, incubated in 200 μL of culture medium per well and kept in an incubator at 37 °C for 24 h. After incubation, 100 μL of scaffold extract was collected and transferred to a 96-well plate containing 1 × 10^4 cells per well. For the positive control, 5% DMSO was added. After 72 h, the medium was removed and, to each well, 100 μL of 3 μM calcein AM (Invitrogen - Thermo Fisher Scientific) and 2.5 μM propidium iodide (Sigma-Aldrich) were added. The plate was maintained for 30 min at 37 °C. Images were analyzed in an EVOS microscope. The presence of living cells was evaluated by staining with calcein in comparison with the control and positive control. Scanning electron microscopy (SEM) A scanning electron microscope (Zeiss EVO MA10) from the Scanning Electron Microscopy Laboratory was utilized to determine the pore structures of the scaffolds. Samples were sputter-coated with a layer of gold (Au) for observation at 10 kV at various levels of magnification (30x, 100x, 400x, 800x, 1600x). The surfaces of the scaffolds were examined to identify any differences in pore size. Antibiogram To verify the antimicrobial potential of the different scaffold formulations at a concentration of 1 mg/mL, a sensitivity test was performed against multiresistant bacteria of clinical relevance, including methicillin-resistant Staphylococcus aureus (MRSA), Acinetobacter baumannii and carbapenemase-producing Klebsiella pneumoniae (KPC). For the positive control, 10 μL of gentamicin (50 mg/mL) was used, and 10 μL of PBS for the negative control. First, the bacteria were inoculated in Brain Heart Infusion (BHI, Kasvi) medium and kept at 37 °C for 24 h. The inocula were transferred to Tryptone Soy Agar (TSA, Kasvi) plates and kept at 37 °C for 24 h. The colonies were incubated in Tryptic Soy Broth (TSB, Kasvi) at 37 °C until they reached a turbidity of 0.5 on the McFarland scale, and seeding was performed on the plate. Next, scaffolds and controls were placed and kept at 37 °C for 24 h and analyzed. The formation of a halo was observed around the scaffolds containing the bioactive compounds with antibiotic properties. In vivo tests In vivo tests were performed according to Galiano et al.
(2004), with modifications. Rubberized and flexible polymer disks were used to suture the edges of the excisional wounds in order to avoid their closure by the contraction process. Contraction is largely responsible for the closure of wounds in rodents; the model thus simulates the healing of injuries by secondary intention, in which primary approximation of the edges is not possible and healing occurs by re-epithelialization. Three formulations of interest for the treatment of wounds were selected: CCh, CChNE and ChNE, only at a concentration of 1 mg/mL. C57BL/6 mice aged between 7 and 8 weeks were grouped as the control group (excisional wounds with silicone disks), treated group 1 (CCh), treated group 2 (CChNE), and treated group 3 (ChNE). Each group contained 16 animals. The evolution of the healing process was evaluated macroscopically by measuring the area of the wounds with the aid of a digital caliper, starting immediately after wound induction and at predetermined intervals (1, 3 and 7 days) [20,21]. The measurements were used to calculate the percentage of wound closure and to describe the healing kinetics. Histological analysis On the 7th day after wound induction, the animals were euthanized. For histological and biochemical analyses, the wound region, along with the surrounding skin, was collected with the aid of an 8 mm circular biopsy punch. To assess the formation of granulation tissue, a histological analysis was performed on the 7th day using a score to evaluate inflammation and epithelialization ranging from 0 to 3, with 0 indicating no inflammation/intact skin, 1 signifying discrete inflammation with the presence of few inflammatory cells and 1/3 of the epithelium present, 2 moderate inflammation with many inflammatory cells and more than 1/3 of the epithelium formed, and 3 severe inflammation with an exaggerated presence of inflammatory cells and a complete epithelium. Activity of pro-inflammatory enzymes The remaining wound fragments were stored at −80 °C and subsequently weighed and processed to quantify the activity of the enzymes N-acetyl-β-glucosaminidase (NAG) and myeloperoxidase (MPO), for indirect evaluation of the infiltrate of macrophages and neutrophils. To quantify the NAG activity, the samples were homogenized in 1 mL of 0.9% NaCl solution containing 0.1% Triton X-100 (Promega) and centrifuged at 960 × g for 10 min at 4 °C. Subsequently, 150 μL of the supernatant of each sample was added to previously identified microtubes containing 150 μL of citrate/phosphate buffer pH 4.5. In a 96-well plate, 100 μL volumes of the samples were added in duplicate, to which 100 μL of the substrate (p-nitrophenyl-N-acetyl-β-D-glucosaminide, Sigma) at 2.43 nM, also diluted in citrate/phosphate buffer pH 4.5, was added. Samples were incubated for 30 min at 37 °C. At the end of the incubation period, 100 μL of 0.2 M glycine buffer pH 10.6 was added. Samples were measured at an absorbance of 400 nm. The results were analyzed to verify whether there was a difference in the inflammatory process between the samples.
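The wound-closure metric described above (closure relative to the day-0 area measured by caliper) is not given as an explicit formula in the text; the following minimal sketch assumes the common definition, closure (%) = 100 × (A_day0 − A_dayT)/A_day0, and all numbers are illustrative.

```python
def wound_closure_percent(area_day0, area_day_t):
    """Percentage wound closure relative to the initial wound area (areas in mm^2).

    Assumes closure (%) = 100 * (A_day0 - A_dayT) / A_day0.
    """
    return 100.0 * (area_day0 - area_day_t) / area_day0

# Illustrative healing kinetics for one animal (days 0, 1, 3 and 7)
areas = {0: 28.0, 1: 24.5, 3: 16.0, 7: 8.4}
kinetics = {day: wound_closure_percent(areas[0], a) for day, a in areas.items()}
print(kinetics)   # closure of 12.5 %, ~42.9 % and 70 % on days 1, 3 and 7
```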
Samples for quantification of MPO activity were homogenized in 1 mL of sodium phosphate buffer pH 5.4. The obtained supernatant (300 μL) was mixed with 600 μL of 0.5% w/v hexadecyltrimethylammonium bromide (HTAB, Sigma) diluted in phosphate buffer pH 5.4. Next, the samples were sonicated for 20 s and subjected to 3 cycles of rapid freezing in liquid nitrogen and heating in a water bath. Subsequently, they were centrifuged for 10 min at 10,000 × g. For the reaction, 200 μL of sample and 100 μL of 3,3′,5,5′-tetramethylbenzidine (TMB) solution (Sigma-Aldrich, St. Louis) at 6.4 mM, dissolved in dimethylsulfoxide (DMSO), were used with 100 μL of 2.4 mM H2O2 diluted in phosphate buffer. The reaction was stopped with the addition of 100 μL of 4 M sulfuric acid. The activity of the MPO enzyme was determined by a spectrophotometer at an absorbance of 450 nm. Statistical analysis Statistical analysis was performed using the statistical package GraphPad Prism Version 9.0 (GraphPad Software, Inc., USA). The data were submitted to the Kolmogorov-Smirnov normality test. The variables cell viability, wound closure, quantification of the MPO and NAG enzymes, and the average number of vessels/mm² were analyzed using the one-way ANOVA multiple comparison test, followed by Dunnett's post-test. For the pore size variable, the Kruskal-Wallis test was used. Data were considered significant for p < 0.05. Fabrication and characterization of scaffolds In this research six different formulations were tested for comparison (C, CNE, CCh, CChNE, Ch and ChNE). The detailed process of preparing the solutions and the crosslinking, molding and lyophilization steps, as well as the appearance of the final product on the animal, are illustrated in Fig. 1. The formulations C, CNE and Ch were considered the respective controls for the formulations CCh, ChNE and CChNE. Effect of scaffold extracts on HaCaT viability First, a screening assay was carried out applying the different formulation extracts of the collagenous sponge at concentrations of 1, 1.5, 2 and 3 mg/mL to the human keratinocyte cell line (HaCaT), which resulted in high viability (over 80%) for all formulations containing collagen and chitosan (Fig. 2), except for the CCh 1.5 mg/mL formulation extract (62%). Based on this screening result, the 1 mg/mL solution scaffold was chosen for the following assays. Live-dead assay Cell viability results were also assessed by the live-dead assay using calcein (a viable cell marker) and PI (a cell death marker that permeates cells with impaired cell membrane integrity) in HaCaT cells treated with 1 mg/mL formulation extracts. Calcein staining showed that cells treated with C, CCh and Ch were viable and presented total confluence, whereas the formulations CNE, Ch and CChNE presented lower confluence, but were not labeled as dead cells. In the positive control for cell death, only cells in an intermediate state (not yet released from the plastic) were stained, and the remaining cells were released and washed away during the medium removal process for staining (Fig. 3). Antibacterial action Scaffolds containing NAC and ε-PL (CNE, CChNE and ChNE) demonstrated antibacterial action against multiresistant bacteria of clinical relevance such as KPC, MRSA and Acinetobacter baumannii. This result can be observed in Fig. 6 by the presence of a halo around the scaffolds with this property, while in the other formulations (C, CCh and Ch) the effect was not detected.
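The statistical workflow described above (normality check, one-way ANOVA, then Dunnett's post-test against the control group) can also be reproduced outside GraphPad. The sketch below uses SciPy (scipy.stats.dunnett requires SciPy ≥ 1.11) with made-up wound-closure values, so group names and numbers are purely illustrative and the normality check is a plain KS test rather than GraphPad's exact variant.

```python
import numpy as np
from scipy import stats

# Hypothetical day-7 wound-closure percentages (one value per animal).
control = np.array([42.0, 47.5, 39.8, 44.1, 41.2, 45.0])
cch     = np.array([48.3, 51.0, 46.7, 49.9, 47.1, 50.2])
cchne   = np.array([50.1, 53.4, 49.0, 52.2, 48.8, 51.6])
chne    = np.array([66.5, 71.2, 68.9, 72.4, 69.7, 70.3])

# Normality check (KS test on standardized values)
for name, g in [("control", control), ("CCh", cch), ("CChNE", cchne), ("ChNE", chne)]:
    print(name, stats.kstest((g - g.mean()) / g.std(ddof=1), "norm").pvalue)

# One-way ANOVA across the four groups
print("ANOVA p =", stats.f_oneway(control, cch, cchne, chne).pvalue)

# Dunnett's post-test: each treated group vs. the control
res = stats.dunnett(cch, cchne, chne, control=control)
print("Dunnett p-values (CCh, CChNE, ChNE vs. control):", res.pvalue)
```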
Wound closure analysis The experimental model of excisional wounds used a stabilizing ring to avoid the muscle contraction characteristic of mouse skin. The scaffolds for treatment were chosen after the in vitro analyses according to the interest of this research; therefore, CCh, CChNE and ChNE were selected. In the analysis of wound closure, the ChNE formulation showed a statistically significant difference when compared to the control group on days 1, 3 and 7. At the end of the treatment, the group treated with ChNE presented 70% wound closure. The other groups, of the CCh and CChNE formulations, did not differ significantly, as shown in Fig. 7. The scaffolds were progressively absorbed, revealing fragments in the wound bed within 24 h and complete absorption in 72 h. The presence of low-secretion granulation tissue was observed throughout the treatment. No macroscopic signs of hemorrhage, tissue necrosis or wound infection were observed at the end of the experiment (Fig. 8). The animals displayed behavioral patterns compatible with good general health. Assessment of inflammatory infiltrate and epithelialization A moderate inflammatory infiltrate was observed in the treatments with the CCh and ChNE formulations (score 2), with a presence of inflammatory cells that was discrete at the border of the wound and moderate in the central region, indicating a healing process from the edges to the center, evidenced by a loosely organized extracellular matrix. The control group and the other treatment showed severe inflammation, evidenced by the exacerbated presence of inflammatory cells throughout the whole wound extension (Fig. 9). Epithelialization was complete (score 3) for CCh and ChNE and partial (<1/3, score 1) for CChNE. The presence of the scaffold was not detected at the histological level in any of the treated groups, indicating complete absorption 7 days after treatment. Evaluation of the activity of pro-inflammatory enzymes The activities of the NAG and MPO enzymes were evaluated to determine the respective activities of macrophages and neutrophils in the wounds. A significant reduction of NAG and MPO was observed in the lesions treated with the ChNE formulation (24.6 NAG nmol/mg; 1.3 MPO OD/g) when compared to the control group (36.5 NAG nmol/mg; 3.7 MPO OD/g). The other treatments did not show statistically significant results (Fig. 10). Discussion The wound healing process is complex, requiring the presence of several inflammatory cells, chemokines, cytokines and nutrients at the wound site. It consists of three stages: the inflammatory, the proliferative and the remodeling stage, which may suffer environmental or pathological modifications, resulting in undesirable effects such as infection and chronic inflammation [22]. To avoid such effects, wound care is a crucial step that is directly linked to the associated morbidity and the effective resolution of healing. The ideal dressing provides adequate supplies of oxygen, moisture, angiogenesis and nutrition, in addition to protecting against pathogens and trauma [22,23]. In this context, the development of wound dressings that provide the appropriate conditions to accelerate the healing process or prevent chronicity has been the subject of research. Traditional dressings with dry gauze can delay the process or traumatize the area when removed. Biopolymers are being researched as targets for regenerative medicine, assisting in the development of scaffolds that can contain bioactive substances, nutrients or growth factors [24][25][26].
Given the higher rate of collagen degradation over synthesis in chronic wounds, exogenous collagen supplementation at the wound site emerges as a viable option [25,27]. Therefore, animal collagen-based biomaterials have been adapted for the manufacture of scaffolds to create an environment that mimics the extracellular matrix in chronic wounds [1,5]. Also, chitosan exerts a hemostatic action in contact with traumatic wounds through platelet adhesion and promotes erythrocyte agglutination. It also induces macrophage activation and has antibacterial and fungicidal action [7,8]. Thus, chitosan scaffolds bioconjugated with collagen have been studied for application in tissue regeneration, due to their biocompatibility and porosity that can promote cell proliferation [25,28]. In light of all the evidence suggesting the benefits of collagen-chitosan bioconjugates, we proposed the development of a collagen scaffold bioconjugated with chitosan, NAC and ε-PL. In the resazurin and live/dead assays there was an increase of cell viability in the scaffold solutions. Cell proliferation was greater when treated with collagen associated with chitosan, and cell morphology was preserved. Corroborating these results, a study with a hydrogel-conjugated collagen and chitosan formulation showed positive effects on the proliferation of L929 cells and human mesenchymal stem cells [18]. In this study the proportion of dead cells in total was extremely low, and the cells presented typical morphology. There were more living cells attached to the chitosan-collagen and chitosan-NAC/ε-PL than to the other formulations. Another research study using marine collagen-chitosan cryogels has also reported non-cytotoxic (80%) behavior by L929 cells, confirmed by a live/dead assay showing predominantly live cells [8]. While the resazurin and live/dead assays provided sufficient evidence of the non-toxicity of the formulations, our assessment of the scaffolds' ultrastructure aimed at selecting non-cytotoxic and porous materials for further in vivo assays. We demonstrated that the collagen scaffold bioconjugated with chitosan has greater porosity, being suitable for tissue repair [28]. As expected, the results found in the SEM demonstrate the ultrastructure of the different formulations, showing that samples of pure collagen and chitosan (C, Ch) presented higher density. Scaffolds containing collagen and chitosan (CCh, CChNE) displayed complete and larger pore structures. Corroborating these results, Kafi et al. [29] demonstrated that collagen and chitosan scaffolds presented higher porosity and a more homogeneous structure when compared to controls. The authors report that scaffolds with greater porosity provide higher cell proliferation, demonstrating that bioconjugated matrices have physical properties superior to pure collagen. Other research studies [8,19] found the same result, in which structures containing collagen-chitosan crosslinking have larger pores than the pure materials. However, one study by Chao Deng et al.
[18] observed larger pores in collagen materials and smaller, more homogeneous pores with the addition of chitosan, producing a denser structure, in contrast to what was found not only in the current study but also in prior ones. Despite our consistent aim to enhance porosity in our formulations, purportedly to achieve increased cell proliferation and viability as suggested by the literature, our study revealed an unexpected outcome. It demonstrated that, despite lower porosity, scaffolds with bioconjugation of chitosan with NAC and ε-PL supported higher cell viability in both in vitro and in vivo assays. This suggests that the addition of NAC and ε-PL to the scaffold may confer greater benefits to cells than solely increasing porosity. In addition to non-cytotoxic and porous structures, we were also looking for a biomaterial with bactericidal properties, so we added NAC and ε-PL to the formulations and verified their action. Bacterial infection is a serious complication in the treatment of chronic wounds, which can form antibiotic-resistant biofilms. To prevent this complication, dressings with antibacterial substances are a desirable option [30]. The antibacterial action of the NAC and ε-PL additives was confirmed by the bacterial sensitivity test, but a more complete antibiogram assay is needed for quantitative evaluation. Shivakumar et al. [9] report that the use of nanocomposite collagen dressings with chitosan inhibits not only microbial infections, but also the activity of wound matrix metalloproteinases produced by inflammatory cells that are present in excessive amounts in chronic wounds. Furthermore, the authors revealed that the proposed collagen-chitosan dermal substrate aids both vascularization and cell proliferation of the tissue. In a study conducted by Mayandi et al. [31], ε-PL was used as a bioactive compound in wound dressings and demonstrated efficacy in reducing bacterial load and promoted tissue healing with better results than conventional dressings. In another study, hydrogels with a porous structure based on chitosan bioconjugated with polyvinyl alcohol and ε-PL showed excellent results in healing and antibacterial action against E. coli and S. aureus [30]. After confirming the bactericidal action, we also evaluated the anti-inflammatory action of the scaffolds to verify the potential of the bioactive compounds. In the inflammatory phase of healing, neutrophils act to decontaminate the lesion. However, in chronic wounds they can cause damage by producing free radicals and oxygen species, resulting in oxidative stress, thus slowing the healing process due to the excessive amounts of reactive oxygen species (ROS) found in chronic wounds [32]. Our results showed a reduction of inflammatory cells in the wounds treated with the collagen hydrogel bioconjugated with chitosan, NAC and ε-PL when compared to the control group. In this context, N-acetylcysteine acts as an important bioactive antioxidant in chronic wounds, as observed by Li et al. [33], as well as Ozkaya et al. [34], who carried out dermal regeneration experiments in mice and also demonstrated a reduction in tissue oxidative stress during the healing process. The collagen/chitosan compound gel developed by Li et al. [33] showed good results in wound healing, with an increased healing rate and shorter duration than other treatments, being almost complete after 14 days. The compound increased granulation tissue, collagen deposition and wound vascularity, and was also able to inhibit the growth of S. aureus. In another study conducted by Tsai et al.
[35], the use of NAC for treating burns in an experimental rat model demonstrated that NAC promotes wound healing and accelerates re-epithelialization. In addition, NAC induced the collagenous expression of MMP-1, which is important in the process of tissue repair and remodeling. The current research has verified that the expected effects of the chitosan + NAC/ε-PL combinations were similar to those reported in the literature. The formulation of chitosan and NAC/ε-PL scaffolds surprisingly exhibited greater antibacterial properties and accelerated the wound healing process compared to treatments involving collagen bioconjugation [36]. Despite our initial presumption that collagenous scaffolds combined with chitosan would perform better, our findings revealed that chitosan combined with NAC and ε-PL yielded superior results in terms of both cell viability and wound closure in the model used. The observed wound closure and decrease in inflammatory cells within the ChNE group indicate the promising potential of this scaffold for tissue regeneration. Conclusion In this study, we sought to obtain a new scaffold formulation with biomimetic, antioxidant, antibacterial and porous properties for application in the dermal regeneration of chronic wounds. The desired properties were verified by characterization methods and in vivo biological analysis, and showed desirable characteristics for the adequate treatment of chronic wounds, by promoting biocompatibility, antibacterial action, antioxidant properties and a porous framework for cell proliferation. In our future studies, we aim to conduct research on diabetic and infected wounds to assess the effectiveness of the scaffolds in vivo using these models. Fig. 2 Cell viability of scaffold extracts: (A) 1 mg/mL, (B) 1.5 mg/mL, (C) 2 mg/mL, and (D) 3 mg/mL solution scaffolds. Controls: untreated cells and cells treated with 5% DMSO. X-axis values represent concentrations in μg/mL. * Statistically significant difference compared to the negative control. Ordinary one-way ANOVA with multiple comparisons and Dunnett's post-test were used. Data were considered significant when the p value was less than 0.05.
5,889.8
2024-02-05T00:00:00.000
[ "Medicine", "Materials Science", "Engineering" ]
Fast smooth second-order sliding mode control for systems with additive colored noises In this paper, a fast smooth second-order sliding mode control is presented for a class of stochastic systems with enumerable Ornstein-Uhlenbeck colored noises. The finite-time mean-square practical stability and finite-time mean-square practical reachability are first introduced. Instead of treating the noise as a bounded disturbance, stochastic control techniques are incorporated into the design of the controller. The finite-time convergence of the prescribed sliding variable dynamics system is proved by using stochastic Lyapunov-like techniques. Then the proposed sliding mode controller is applied to a second-order nonlinear stochastic system. Simulation results are presented in comparison with the smooth second-order sliding mode control to validate the analysis. Introduction Sliding mode control (SMC) is well known for its robustness to system parameter variations and external disturbances [1,2]. SMC has extensive applications in practice, such as robots, aircraft, DC and AC motors, power systems, process control and so on. Recently, using the SMC strategy for nonlinear stochastic systems modeled by Itô stochastic differential equations with multiplicative noise has been gaining much investigation; see [3][4][5][6] and references therein. The existing research findings applying SMC to stochastic systems always treat the stochastic noise as a bounded uncertainty. These methods need to know the upper bound of the noise and constitute a comparatively more conservative control strategy, which ensures robustness at the cost of losing control accuracy. Some literature derived SMC for stochastic systems described in Itô's form applying stability in probability [3], which was proved to be unstable under the second-moment stability concept [7]. By comparison, mean-square stability is more practical for engineering application. Wu et al. [8] designed SMC guaranteeing mean-square exponential stability for continuous-time switched stochastic systems with multiplicative noise. However, the control signal in [8] switches frequently and the results cannot be extended to stochastic systems with additive noise. One disadvantage of classical SMC is that the sliding variable cannot converge to the sliding surface in finite time. Finite-time convergence has been widely investigated in control systems. Shang discussed the finite-time state consensus problems for multi-agent systems [9,10], and further investigated the finite-time cluster average consensus in bidirectional networks and the fixed-time group consensus problem for a leader-follower network [11,12]. It is urgent to deduce a finite-time convergence sliding mode method for stochastic systems. In addition, traditional SMC has restrictions such as the relative degree constraint and the high-frequency control switching that may easily cause the chattering effect [13]. Rahmani designed an adaptive neural network to approximate the system uncertainties and unknown disturbances to reduce chattering phenomena, and proposed controllers combining adaptive neural networks with sliding mode control methods [14,15]. Ref. [16] designed a fractional-order PID controller for a bio-inspired robot manipulator using the bat algorithm. Higher-order sliding mode control (HOSM) also mitigates the problems associated with SMC [17][18][19][20][21].
In the past decades, HOSM has found a variety of applications in the robust control of uncertain systems [22,23]. But HOSM for stochastic systems remains poorly investigated. Aiming at the defects of the above-mentioned research, a smooth control law for a class of nonlinear stochastic systems with Ornstein-Uhlenbeck colored noise is developed in this paper. By using stochastic Lyapunov-like techniques, a sufficient condition for finite-time convergence is derived under the mean-square practical stability concept. Finally, some experimental results are presented to validate the proposed controller. Problem statement Let α > 0 and σ = const.; the following Itô stochastic differential equation is called the Langevin equation: dη(t) = −α η(t) dt + σ dz(t), where z(t) is a standard scalar Gaussian white noise. The solution η(t) (t ≥ 0) is called an Ornstein-Uhlenbeck process, which is a colored noise [24]. Consider the single-input single-output (SISO) dynamics (2) with denumerable Ornstein-Uhlenbeck colored noises, where the h_i are constants; f(t) and g(t) are given sufficiently smooth functions with g(x) ≠ 0; d(t) represents unmodeled dynamics, parametric uncertainties and external disturbances, and is assumed to be sufficiently smooth; the η̄_i are mutually independent Ornstein-Uhlenbeck colored noises with parameters α_i and σ̄_i. Equation (2) can be interpreted as the dynamics of the sliding variable s ∈ R¹ calculated along the system trajectory, and s = 0 expresses the sliding manifold; u ∈ R¹ is the control input. In order to prevent chattering and exploit the benefits of a sliding mode controller in a real-life system, a smooth control, which can provide finite-time convergence s, ṡ → 0, is urgently needed. The concept of practical stability was proposed by LaSalle and Lefschetz [28] and was developed by Martynyuk, Lakshmikantham, Leela et al. [29,30]. As a natural extension of the traditional concepts of practical stability, mean-square stability, and finite-time reachability, we shall introduce the concepts of finite-time mean-square practical stability and finite-time mean-square practical reachability. These concepts are concerned with bringing the system trajectory into a bounded neighborhood of a given point or manifold. Consider the stochastic dynamical system (3), with x(t, t₀, x₀) as its solution under the initial condition (t₀, x₀). Let s = s(t, x) = 0 be the chosen sliding manifold of the system. Remark 1: Unlike the definitions in [28,29], which emphasize the boundedness of the system trajectory, the definition we take here focuses far more on the convergence of the system trajectory. Definition 2 (FTMSR): The sliding manifold s = 0 is said to be (R1) finite-time mean-square practically reached, if given a pair of positive numbers (λ, ε), λ = λ1 + λ2 and ε = ε1 + ε2, there exists a finite settling time T = T(t₀, ε) such that ‖s(x₀, t₀)‖² ≤ λ implies E‖s(x, t)‖² ≤ ε, ∀t > t₀ + T, for some t₀ ∈ R₊; (R2) finite-time mean-square uniformly practically reached, if (R1) holds for all t₀ ∈ R₊; (R3) second-order finite-time mean-square practically reached, if given a pair of positive numbers (λ, ε), λ = λ1 + λ2 and ε = ε1 + ε2, there exists a finite settling time T = T(t₀, ε) such that ‖s(x₀, t₀)‖² + ‖ṡ(x₀, t₀)‖² ≤ λ implies E‖s(x, t)‖² + E‖ṡ(x, t)‖² ≤ ε, ∀t > t₀ + T. Stochastic fast smooth second-order sliding mode control.
Consider system Eq (2), meaning that η i is a Ornstein-Uhlenbeck noise with parameters α i and σ i , so the coefficient h i can be merged by substitute η i into (2) to get Consider system Eq (5), the dynamics of the sliding variable is designed as the following form: where μ 1 = s; m and k i are positive constants and m > 2; η i are Ornstein-Uhlenbeck colored noises expressed in (4). Let μ = [μ 1 , μ 2 , η 1 , η 2 , Á Á Á, η 1 ] T , the following Itô stochastic differential equation can be got by combining (5) and (6) together: z ð7Þ then a stochastic system with respect to the state vector μ can be represented as Let the sliding variable dynamics be of the form (6) and in accordance with the sliding variable system (5), the SFS-SOSM controller is selected as wheredðtÞ is the estimation of uncertain function by means of high-order sliding-mode observer presented in [22]. Hereafter, FTMSP and FTMSR are employed to analyze the reachability of the sliding manifold. Finite time convergence analysis. Based on the definition proposed above, we give the following theorem: Theorem 1: Consider the stochastic nonlinear system (6) with respect to the sliding variable s, let where m > 2, α i > 0 (i = 1,2,Á Á Á,l), k j > 0 (j = 1,Á Á Á,5). Constructing the following matrix and assuming that (i) ε ¼ ½1 þ ðk 1 þ k 2 þ k 3 þ lÞ 2 " ε and the following inequality holds where Then the prescribed sliding variable dynamics system (6) is finite-time mean-square practically stable, and the proposed control (10) is an SFS-SOSM control. The sliding manifold s = 0 can be second-order mean-square practically reached in finite time. We denote the infinitesimal generator by L. Appling infinitesimal generator along with system (8), we have Let . . ; LV 1 can be expanded and the following inequality holds Fast smooth second-order sliding mode control for systems with additive colored noises Notice that Then the following inequality can be deduced Furthermore, then we have The inequality about LV 2 can be deduced according to the properties of the matrix trace as Substitute (23), (24) into (18) to get According to Itô's formula, it follows that Since η i are mutually independent, utilizing E[η 2 (t)] σ 2 /2α and Rao inequality[31] to obtain: Then inequality (26) can be further represented as where It is obvious that γ 1 ,γ 2 > 0. Since the solution of the differential equation is given by it follows from the comparison principle[32] that EV(t) φ(t) when EV(t 0 ) φ 0 . From (31) we can claim that the following inequality holds. From the initial conditions, we have jsðx 0 ; t 0 Þj 2 þ j_ sðx 0 ; t 0 Þj 2 l. 
So the initial condition of the constructed vector ξ can be got as For convenient, we denote and synthesize the results we have got in (17), (32), (33), the following inequality can be Fast smooth second-order sliding mode control for systems with additive colored noises The following inequality can be obtained by the Minkowski inequality Then, by the Lyapunov inequality, we have Fast smooth second-order sliding mode control for systems with additive colored noises It follows that Substituting (41), (43) into (40) yields ffiffiffiffiffiffiffiffiffi Ej_ sj From the whole proving process, we notice that the parameter " ε can be interpreted as the control precision index, so we can reasonably assume that " ε is much less than 1 to meet the needs of engineering practice, and note that m > 2, we have ffiffiffiffiffiffiffiffiffi Ej_ sj So the following inequalities hold Fast smooth second-order sliding mode control for systems with additive colored noises Let ε ¼ ½1 þ ðk 1 þ k 2 þ k 3 þ lÞ 2 " ε, by Definition 2, we can claim that the sliding manifold s = 0 is second-order finite-time mean-square practically reachable with respect to (λ,ε). So the proof is completed. The control approach block-diagram of proposed SFS-SOSM method is shown in Fig 1. The design process of the controller is: first, the sliding variable dynamics _ sðxÞ, where x represent the system states, is obtained according to the expected system properties; Then the control law u is got by combining _ sðxÞ and the prescribed s-dynamics Eq(6); So the smooth control law u can steer the system state reach the desired value in finite time. Remark 3: ε can be treated as the convergence precision. It can be seen from condition (i) that ε depends on the parameters of the colored noise and the designed parameters of the controller. Results In this section, a second-order nonlinear stochastic system is taken into consideration to illustrate the necessity and effectiveness of the proposed control law. Fast smooth second-order sliding mode control for systems with additive colored noises Consider the following second-order SISO nonlinear stochastic system with colored noise and z is a zero-mean scalar Gaussian process with covariance 1. The initial state is (x 1 ,x 2 ) = (2,5). In order to achieve finite time convergence, the following auxiliary integral sliding variable Fast smooth second-order sliding mode control for systems with additive colored noises is introduced. This sliding surface can guarantee a finite-time convergence of the system state due to its nonlinear switching manifold characteristic. The prescribed compensated s-dynamics providing finite-time mean-square convergence are selected in a format (6). In accordance with (10) the smooth control input is selected to be where the parameters are taken as m = 3, k 1 = 20, k 2 = 20, k 3 = 1, k 4 = 6, k 5 = 6. The effectiveness of the SFS-SOSM control is investigated by comparing the SFS-SOSM control with the smooth second-order sliding mode (SSOSM) control, which is designed to Fast smooth second-order sliding mode control for systems with additive colored noises deal with deterministic systems. The SSOSM control is taken as [22] u ¼ 2½À a 1 m 1 m À 1 m sgnðm 1 Þ þ m 2 À x 2 2 À x 1 À 1:5x 2 À 0:5dðtÞ where the parameters are taken as m = 3, α 1 = 20, α 2 = 6. In (49) and (50),dðtÞ is the estimation of uncertain function d(t) by means of observer presented in [22]. The phase plots of two kinds of control are shown in Figs 2 and 3. 
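A comparison of this kind can be set up along the following lines. The sketch below is not the controller of Eq. (10) applied to the example system above; it is a generic illustration, on an assumed double-integrator sliding-variable model with additive Ornstein–Uhlenbeck noise, of how a smooth power-law feedback (with m = 3 and gains patterned on the values quoted above) avoids the discontinuous sign term of classical SMC. The disturbance, the noise parameters, and the omission of the disturbance observer d̂(t) are all simplifications.

import numpy as np

def smooth_term(x, p):
    # Continuous power-law feedback |x|^p * sign(x): a smooth substitute for sign(x).
    return np.abs(x) ** p * np.sign(x)

def simulate(m=3, k1=20.0, k2=6.0, alpha=2.0, sigma=1.0, dt=1e-4, T=10.0, seed=1):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    s1, s2, eta = 2.0, 5.0, 0.0          # illustrative initial sliding variables and noise state
    hist = np.empty((n, 2))
    for k in range(n):
        u = -k1 * smooth_term(s1, (m - 2) / m) - k2 * smooth_term(s2, (m - 1) / m)
        d = 0.5 * np.sin(2.0 * k * dt)   # illustrative smooth disturbance
        s1 += s2 * dt
        s2 += (u + d + eta) * dt
        eta += -alpha * eta * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        hist[k] = (s1, s2)
    return hist

hist = simulate()
print(np.abs(hist[-1000:]).mean(axis=0))  # s and its derivative settle near zero, without chattering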
It is obvious that both controllers can steer the system state from the initial position to the sliding surface; the sliding mode then starts, with the state trajectories remaining on this surface thereafter. At the same time, these figures show that the chattering of the sliding mode is eliminated. From the partial enlargements of Figs 2 and 3, we can see that the SFS-SOSM controller steers the system trajectory closer to the sliding surface than the SSOSM controller. This result demonstrates that the SFS-SOSM method can significantly improve the control precision, since stochastic control techniques are employed to handle the noise. By contrast, the SSOSM controller adopts a more conservative control strategy, treating the stochastic noise as a bounded uncertainty, which ensures robustness at the cost of accuracy. The trajectory tracking error is shown in Figs 4 and 5. It is obvious that the error convergence rate of SFS-SOSM is faster than that of SSOSM. The overshoot of the SSOSM controller is larger than that of SFS-SOSM, which demonstrates that the SSOSM control is more conservative, since it overestimates the bound of the uncertainties. The control signals of the SFS-SOSM controller and the SSOSM controller are presented in Figs 6 and 7. It is evident that neither controller exhibits high-frequency switching, thanks to the smooth controller design, but the overshoot of the SSOSM controller is greater than that of SFS-SOSM. Figs 8 and 9 show the simulated results of the sliding variable s and its derivative ṡ under the SFS-SOSM control. From these figures, we can see that the proposed smooth control law stabilizes the sliding variable and its derivative in a sufficiently small neighborhood of zero in finite time, which means that the proposed control achieves second-order sliding modes. For comparison, the results for s and ṡ under the SSOSM control are presented in Figs 10 and 11. It is obvious that the convergence rates of s and ṡ with the SFS-SOSM control are faster than with the SSOSM control. Conclusions In this paper, an SFS-SOSM controller for stochastic systems with additive Ornstein-Uhlenbeck colored noise has been proposed. The time to achieve second-order reachability of the sliding manifold from the initial system states has been proved to be finite. The newly proposed sliding mode controller has the following advantages: first, it eliminates the chattering associated with traditional sliding mode control; second, it requires no high-frequency switching to achieve smoothness, and smoothness is not obtained at the price of robustness; third, it achieves higher control accuracy, since stochastic techniques are employed in the controller design instead of treating the noise as a bounded uncertainty. Simulation results are presented to validate the analysis. Future work includes optimizing the controller parameters to achieve better control performance and applying the proposed control to practical engineering problems. We will also consider designing an improved disturbance observer to replace the observer presented in [22], in order to further improve the control precision.
3,711.4
2017-05-31T00:00:00.000
[ "Mathematics", "Engineering" ]
A Coupled Phase-Temperature Model for Dynamics of Transient Neuronal Signal in Mammals Cold Receptor We propose a theoretical model consisting of coupled differential equation of membrane potential phase and temperature for describing the neuronal signal in mammals cold receptor. Based on the results from previous work by Roper et al., we modified a nonstochastic phase model for cold receptor neuronal signaling dynamics in mammals. We introduce a new set of temperature adjusted functional parameters which allow saturation characteristic at high and low steady temperatures. The modified model also accommodates the transient neuronal signaling process from high to low temperature by introducing a nonlinear differential equation for the “effective temperature” changes which is coupled to the phase differential equation. This simple model can be considered as a candidate for describing qualitatively the physical mechanism of the corresponding transient process. Introduction Mammals complex thermoreceptor systems consisting of free nerve ending fibers are located in the dermis, muscle, skeleton, liver, and hypothalamus [1]. It is a phasic receptor which is active when there is a change in environmental temperature and rapidly becomes steady when reaching the stable temperature. Based on its characteristics with respect to the temperature level, it can be classified into warm or cold receptor [2,3], which is, respectively, sensitive to high or low temperature relative to the normal body temperature, characterized by its way in delivering the neuronal signals. The corresponding neuronal signals are delivered in the form of bursting, that is, rhythmic of action potential consisting of spikes and punctuated by periods of inactivity [4,5]. Their characteristics depend strongly on the associated temperature levels. In this report, we focus our discussion on the dynamics of mammals cold receptor. In a low temperature condition, the corresponding neuronal signals produce periodic bursts with uniform duration and slow oscillation characteristic, but with nonuniform spike frequencies for each burst. When the temperature is raised up by a quasistatic process, the amount of spikes per burst tends to decrease forming a periodic single spike or beating. At a relatively higher temperature, the spike pattern becomes aperiodic; namely, it can also exhibit either double spike or stochastically phase-locked spike (skipping) phenomenon [3]. An experimental study on the static and dynamic discharge of a specific mammals cold receptor, that is, cat's lingual nerve, has been comprehensively conducted by Braun et al. [4]. In particular, they showed that the dynamical response of the associated cold receptor is different for various temperature transitions between 10 ∘ C and 40 ∘ C. Nowadays, many models have been proposed to explain the dynamical characteristics of mammals cold receptor. One of the most profound models is the conductance-based model which relies on the conductance voltage-dependent phenomenon due to the existence of Na + and K − ions. For example, Braun et al. [6] in their report had discussed 2 Journal of Biophysics a Hodgkin-Huxley voltage-conductance type equation in their attempt to understand the role of nonlinearity and noise on the dynamics of nerve cell membrane through mammals cold receptor data. Another conductance-based cold receptor model was also discussed in different reports [7,8]. 
In the meantime, there is a certain type of ion channel called transient receptor potential melastatin 8 (TRPM8) that plays an important role in delivering the cold receptor neuronal signal (see [9] for review). The role of TRPM8 has been shown experimentally in thermosensation mechanism in mice as discussed in [10][11][12]. Very recently, a conductancebased model which includes the role of TRPM8 ion channel has been proposed by Olivares et al. and showed a good agreement with the experimental data found from the cold receptor of mice [13]. The corresponding Olivares model successfully resembled the experimental data of increasing the firing rate for quasistatically increasing exposed temperature protocol. Apart from those conductance-based models, a fully ionic model has been proposed by Longtin and Hinzer [5], which discussed the stochastic action potential phase model specifically for a cat's lingual cold receptor. This model was further simplified by Roper et al. [14], namely, by introducing a simplified phase differential equation. It was demonstrated that the corresponding model was able to approximate the Longtin-Hinzer model for temperature interval 17.8 ∘ C to 40 ∘ C. Compared to the conductance-based model, Roper's model [14] offered a relatively simple mathematical description. However, we discovered that this model did not lead to a realistic description on phenomena that occurred in higher or lower temperature conditions. Based on this fact, in the present report we discuss a possible modification on the corresponding Roper's model for the nonstochastic limit by introducing a new functional form of parameters that appeared in the corresponding model. Furthermore, we also discuss an extension of the corresponding modified model to accommodate the dynamical response of neuronal signals during a transition process from high to low temperature condition. This dynamical model is able to explain the phenomenon of sudden increasing amount of spikes per burst due to decreasing temperature, which is followed by a gradual decreasing of the corresponding amount of spikes per burst until the receptor reaches a steady condition at the lower temperature [4,15]. We explain this phenomenon by considering an additional differential equation to describe the temperature dynamics, which is coupled to the associated phase differential equation. We organize the report as follows: Section 2 discusses the phase model for the case of the steady temperature condition. The modified models for static temperature and dynamic transient process from high to low temperature are given in Section 3, namely, by defining a new set of functional parameters in the corresponding phase differential equation and introducing a new differential equation of temperature coupled to the phase differential equation and we focus our discussion on the characteristics of spike per burst, burst period, and interspike interval. We end this report with a conclusion in Section 4. Comprehensive discussions regarding the biological and chemical related properties of the corresponding cold receptor have been given in detail previously [4,5,14], such that in this report we only focus on the modified mathematical model. Model and Method The corresponding nonstochastic phase differential equation for steady condition of neuronal signaling at a specific temperature developed previously by Roper et al. 
[14] is given as follows: with Here, the symbol represents the phase of membrane potential in the trigger region, in which its full rotation describes the generation of an action potential [14]. The parameter is related to the modulation of the mean potential of the cell, while the term cos(Ω ) is a zero mean periodic term that oscillates with the frequency Ω, with as the corresponding magnitude. The function ( , ), with an inverse time unit, describes the dynamics of the corresponding neuronal signal bursting [14]. It is seen that there are two important terms in (2), namely, 1 and 2 functions, as given by (3) and (4), respectively. The burst occurs when 1 > 2 , where the average amount of spikes in each burst is proportional to the maximum width of the overlap area of both curves, denoted by Δ, as exemplified in Figure 1(a) along with the corresponding phase of membrane potential ( ), which is found by solving (1), and neuronal signal bursting ( ) functions as shown in Figures 1(b) and 1(c), respectively. It is obvious that the amount of spikes per burst can be controlled by changing the value of and as well as Ω which also determine the period between two consecutive bursts. It was assumed previously [14] that these parameters are of linear functional forms of temperature as follows: with 0 , 1 , 0 , 1 , Ω 0 , and Ω 1 being constants to be determined. This assumption was aimed at yielding a decreasing period between two consecutive bursts when the temperature increases through a quasistatic process. In their work, Roper used all these parameters at = 35 ∘ C to depict the example shown in Figure 1. Based on the above formulation, it is clearly seen that these linear assumptions will lead to an unrealistic scenario at the high and low temperature conditions, since all those parameters are not saturated at these limits. Therefore, it is reasonable to assume that phenomenologically at those temperatures the neuronal signals become saturated since in that range the receptor becomes less sensitive [16]. It is interesting to note that this model can also be further developed to describe the transient transition from high to low temperatures as previously reported by Ring and de Dear [15]. For this, we propose assuming that the corresponding temperature should be considered as a function of time with Morselike characteristic described by a differential equation which is coupled to the corresponding phase differential equation given by (5). To study the dynamical characteristics of this model, we numerically solve the related coupled differential equations by means of standard Runge-Kutta method. Modified Phase Model for Steady Temperature Condition. To develop a more realistic model, we consider the 4 Journal of Biophysics modification of , , and Ω parameters by introducing the following tanh functional forms: which exhibit sigmoidal saturation characteristic at relatively high and low temperatures. Here, and are parameters to be adjusted. We denote the parameter eff as an "effective temperature" for describing the dynamics of neuronal signal during the transient process from high to low environmental temperature. The meaning of this parameter will become clear in later discussion (see Section 3). Obviously, the functional parameter forms given by (8) will lead to saturated behavior of neuronal signal at high and low temperatures. 
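Because the modified parameters of Eq. (8) are of tanh type, one plausible saturating form is p(T_eff) = p0 + p1·tanh(c·(T_eff − T_c)); the snippet below adopts this form as an assumption, interpreting the two constants quoted in the next paragraph (0.055/°C and 33.75 °C) as the slope and the center of the sigmoid, and the quoted 0.4475 ms⁻¹ and 0.1575 ms⁻¹ as the offset and amplitude, to show the saturation at low and high effective temperatures.

import numpy as np

# Assumed saturating form replacing the linear temperature dependence of the parameters:
#   p(T_eff) = p0 + p1 * tanh(c * (T_eff - Tc))
# The constants follow values quoted in the text; the exact expression of Eq. (8) is an assumption.
c, Tc = 0.055, 33.75            # 1/degC and degC
a0, a1 = 0.4475, 0.1575         # ms^-1

def a_param(T_eff):
    return a0 + a1 * np.tanh(c * (T_eff - Tc))

for T in (5.0, 15.0, 25.0, 35.0, 45.0):
    print(T, round(float(a_param(T)), 4))   # saturates toward a0 - a1 and a0 + a1 at the extremes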
By considering the same values with that used previously [14] for a temperature interval of 40 ∘ C to 15 ∘ C and Ω → 0 at eff → −∞ we found = 0.055/ ∘ C and = 33.75 ∘ C, while the other parameters are set to 0 = 0.4475 ms −1 , 1 = 0.1575 ms −1 , Demonstrated in Figure 2 is the comparison between the previous set of functional parameter forms and the new ones. It should be realized that, for the actual cases, those functional forms should be adjusted using experimental data of the associated spiking and bursting neuronal signaling phenomena in low and high temperature conditions. Indeed, it is also important to note that one can choose different type of functional forms. A nonsigmoidal functional form was previously proposed in [17] which was aimed at mimicing the bifurcation characteristics of the conductance-based Huber-Braun model [7]. The corresponding neuronal signals at steady temperatures for eff = 40 ∘ C to 15 ∘ C, which are compared to the previous reported results [14], are depicted in Figure 3. Here the function of (2) is calculated by solving first function in (1) and then the solution is inserted into the corresponding equation. The average amount of spikes per burst (SB) for both functional forms is given in Figure 4. It is observed that, at low temperature condition, the tanh functional forms exhibit a more reasonable amount of spike per burst than the linear functional forms of Roper's model [14] that exhibits higher amount as demonstrated in Figure 4. Indeed, there are some discrepancies between both linear and tanh functional forms since both assumptions do not perfectly coincide as clearly shown in Figure 2. But indeed, both forms share qualitatively similar bursting and spiking characteristics. As shown by the figures, it is important to realize that, for decreasing eff , the amount of spikes per burst is increasing. The other important characteristic, namely, the interspike interval histogram (ISIH) of the neuronal signal at the corresponding different temperatures for the tanh functional forms, is given in Figure 5(a) for both Roper and the present modified model, along with its plot as a multivalued function of eff in Figure 5(b). The ISIH shows the existence of beating and skipping at high temperature condition which are indicated by the presence of large intervals. Clearly, the modified model exhibits a bit different characteristics than the original Roper's model. Model for Transient Transition Process. During a transient transition from high to low temperature, the existence of a peak response with relatively large amount of spikes per burst at a certain time was shown experimentally as the transition process begins as reported in [15,18]. Based on the burst characteristic at steady temperatures as exemplified by Figure 3, we suspect that this condition might be perceived by the brain to occur at a temperature lower than the final temperature. In the meantime, another experimental result demonstrated that the period between two consecutive bursts during the transient transition process is higher than the period at final temperature [4]. To model the corresponding dynamical response, we consider first a conjecture that the effective temperature eff in our formulation is a function of time in the following Morse-like function [19]: where eff and 0,eff are effective temperature at time and its lowest value, respectively, whereas denotes the time when eff = 0,eff . 
Graphically, the parameter determines the depth of Morse-like curve as illustrated in Figure 6, while denotes its effective width. The corresponding Morse-like function is chosen because it is a mathematically well-defined function with no singularity. Phenomenologically, it is likely to be the best geometrical shape to describe the corresponding transient response characteristics among other similar functional forms such as the Lennard-Jones [20], the Buckingham Exponential-6 [21], and the Mie potential functions [22]. All these functions are commonly used to describe the molecular interactions [22]. In contrast to the Morse function, the other three functions contain a singularity. It should be emphasized that the existence of the abovementioned peak response with large amount of spike per burst during the transient transition is the reason to define the term "effective temperature" as a tuning factor in our formulation based on the following argument: as shown in Figure 3, the amount of SB for low temperature is larger than the higher one. At the same time, a sudden increasing amount of SB occurs due to decreasing temperature, which is followed by a gradual decrease of SB until the receptor reaches a steady condition at the lower temperature [4,15]. From all these facts, we therefore propose that the eff functional parameter, with its curve given in Figure 6, should be considered as a dynamical tuning factor that is needed to describe the dynamics of the related neuronal signal propagation. It is easy to prove that the function given by (9) satisfies the following differential equation: In the ensuing discussion, we choose to solve (10) numerically rather than using (9) in order to explain the corresponding peak response phenomenon. It should be noted that, to ensure the corresponding numerical solution of (10) is the Morse-like function as given by (9), one should consider a negative initial condition for ; that is, (0) = −√ eff (0) − 0,eff . We expect the parameters and can be determined experimentally. However, in our calculation, we assume that the parameter is fixed to where ,eff and ,eff denote the final and initial effective temperature, respectively, such that 0,eff in (9) and (11) Indeed, one can assume different values for this parameter and it is clear that different 0,eff leads to a different in (9). On the other hand, it is reasonable to assume that the parameter in (10) should be expressed as a function of , that is, ≡ ( ), because it is natural to think that the shape of the associated Morse-like function is different in different transition process. To formulate the corresponding expression, first we plot functions 1 and 2 given by (3) and (4) and adjust the value of in (10) to meet a matching condition, which is indicated by the coincidence between the first overlaps width of both functions (denoted by Δ in Figure 1) and the lowest effective temperature 0,eff . Exemplified in Figure 7 is the associated matching condition for the dynamical response from ,eff = 40 ∘ C to ,eff = 15 ∘ C. For this transition, we found that the matching condition occurs at = 0.002 ms −1 . The calculation result for this parameter from ,eff = 40 ∘ C to various ,eff is given in Figure 8. Using a standard fitting procedure, it is found that all those values can be approximated by the following function: with 0 = 4.5 × 10 −4 ms −1 and = 0.1/ ∘ C, while 0,eff is given by (12). 
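As a concrete picture of the effective-temperature profile, the sketch below uses the standard Morse form T_eff(t) = T_f,eff + D[(1 − e^{−b(t − t_m)})² − 1] with depth D = T_f,eff − T_0,eff. This particular parameterization is an assumption that merely reproduces the qualitative behavior described above (a high initial value, a minimum T_0,eff at time t_m, and relaxation toward the final temperature); b = 0.002 per ms is borrowed from the matching-condition value quoted earlier, although whether it plays exactly this role is also an assumption, and t_m is chosen so that T_eff(0) equals the initial temperature.

import numpy as np

def t_eff(t, T_i=40.0, T_f=15.0, T_0=10.0, b=0.002):
    # Morse-like effective temperature (assumed concrete form):
    #   T_eff(t) = T_f + D*[(1 - exp(-b*(t - t_m)))**2 - 1],  with depth D = T_f - T_0.
    # t_m is chosen so that T_eff(0) equals the initial temperature T_i.
    D = T_f - T_0
    t_m = np.log(1.0 + np.sqrt(1.0 + (T_i - T_f) / D)) / b
    return T_f + D * ((1.0 - np.exp(-b * (t - t_m))) ** 2 - 1.0)

t = np.linspace(0.0, 5000.0, 6)      # time in ms (illustrative scale)
print(np.round(t_eff(t), 2))         # starts at 40, undershoots below 15, then relaxes to 15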
Therefore, it is clear that the differential equation (10) should be rewritten as follows: which is coupled to (1) through parameters given by (8). It is important to note that the parameter and ( ) function phenomenologically correspond to the characteristics of the burst period and amount of SB around the matching condition, respectively. Therefore, as mentioned previously, it is obvious that these two parameters should be determined experimentally by observing the spiking and bursting characteristics similar to what was done in [4]. Using this new model, the simulation results of transition processes from ,eff = 40 ∘ C to ,eff = 35 ∘ C, 30 ∘ C, 25 ∘ C, 20 ∘ C, and 15 ∘ C are depicted in Figure 9. Given in Figure 10 are the SB for transition to ,eff = 35 ∘ C and 15 ∘ C, along with the corresponding burst period (BP) as defined in Figure 1(c), which is qualitatively similar to that given in [4]. The parametric plot in phase-plane of eff and ( , ) to give a more clear description is also given in Figure 10. It is shown that the SB and BP characteristics exhibit pronounced transient feature for transition to ,eff = 15 ∘ C. In the meantime, the occurrence of dense patterns of bursting process when eff < ,eff is demonstrated, justifying the existence of previously discussed peak response phenomenon as shown experimentally in [4] during a transient time. In contrast, it is interesting to note that a monotonous response characteristic is exhibited during eff > ,eff . An example of the approximate Morse-like function found from (15) is given in Figure 7, which shows the same position of the related matching condition compared to Morse-like function found by solving (10) numerically. It is clearly shown that, right after the transition process begins, namely, at the first burst, the amount of the corresponding spikes is larger than the next bursts. As a consequence of choosing the saturated tanh functional forms in the model parameters as given by (8), we found that the change of the SB at matching condition is at a reasonable level, especially in the case of ,eff = 15 ∘ C, where 0,eff < 15 ∘ C. To validate this modified model with experimental data, we focus on comparing qualitatively the SB and BP characteristics with the results reported by Braun et al. [4]. Given in Figure 11 are the spiking and bursting phenomena for the transition similar to what was discussed in [4] along with the associated ,eff function. We calculate the SB of four different ,eff transition intervals, as well as the BP parameter. The results are shown in Figure 12. It is interesting to note that qualitatively the corresponding SB characteristic exhibits fairly similar trend with the experimentally found SB figure in Figure 5 in [4], while it is seen that graphically that BP exhibits similar characteristics with SB. Note that the transient characteristics are demonstrated significantly in the cases of low temperature transitions, that is, ,eff = 25 ∘ C to ,eff = 20 ∘ C and ,eff = 20 ∘ C to ,eff = 15 ∘ C. This feature can be explained as a consequence of eff function with wider profile due to larger value as shown in Figure 11. 
On the other hand, in comparison with Olivares's model [13], taking into account the role of TRPM8 ion channel transient current to explain the phenomenon of increasing firing rate (which indicates the increasing of SB) as exposed temperature decreases, this modified model offers a different perspective to the transition mechanism from high to low temperature, namely, by introducing the "effective temperature" ( eff ) functional parameter as a dynamic tuning factor that coupled to the phase of membrane potential. As discussed previously, although the physical meaning of this dynamical parameter is not clearly understood at this moment, we proposed that this parameter might be interpreted as an actual temperature being perceived by the mammal brain and it is likely reasonable to assume that the corresponding transient eff function is related to the complex role of TRPM8 ion transient channel. Indeed, this hypothesis should be separately investigated. Furthermore, although our model is able to describe the existence of peak response at matching condition, however, it should be noted that the model leads to the increasing pause duration or the time distance between two consecutive bursts at the corresponding condition, while in reality this is not the case as reported previously [4]. This problem is a bit complicated to be solved and we suggest it can be overcome by defining new functional forms for parameters given by (8). The other problem with our model is related to the value of in (11) where in our calculation it was considered to be fixed. We expect this parameter can be determined experimentally, and this is beyond the scope of our study. To this end, apart from the above mentioned problems, it is also realized that this modified model should be improved further, since the related effective temperature differential equation given by (15) does not take into account the influence of phase of membrane potential. We suggest that fully coupled differential equations that accommodate this feature will likely be able to give a good quantitative explanation of the dynamics and characteristics of neuronal signals of the corresponding cold receptors. This issue could be a challenging topic for future investigation. Conclusion We have discussed a modified Roper's model for describing the characteristics of neuronal signaling in mammals cold receptor, especially for the temperature transition processes. The model consists of coupled phase-temperature nonlinear differential equations equipped with a set of functional parameters that saturate at low and high temperature. It was shown that our modified model is able to describe the experimental fact that the characteristics of neuronal signal in a transient transition process from high to low temperature exhibit the existence of large amounts of spikes per burst right after the process initiated, namely, by introducing the new functional parameter "effective temperature," which plays a role as a dynamical tuning factor to explain the corresponding phenomenon. We propose that this dynamical tuning factor might be interpreted as a perceived temperature by the mammal brain in which its perception of temperature at has the lowest value, while ,eff and ,eff are coincides with the environmental temperatures. Certainly, it is intriguing to further examine experimentally whether this interpretation is correct or not. For instance, by observing the related mammal brain activity that corresponds to the temperature perception. 
Further studies should be conducted to overcome the problems that remain. Nevertheless, this modified model can be regarded as a simple dynamical alternative to complex ionic models for qualitatively describing the transient transition from high to low temperature in the mammalian cold receptor. Disclosure Firman Ahmad Kirana and Husin Alatas are co-first authors.
5,419.4
2016-09-28T00:00:00.000
[ "Physics" ]
Energy Cooperation for Sustainable Base Station Deployments: Principles and Algorithms Energy self-sufficiency is of prime importance for future mobile networks. The design of energy efficient and possibly self-sustainable base stations is key to reduce their impact on the environment, and diminish their operating expense. As a solution to this, we advocate base station deployments featuring energy harvesting and storage capabilities. Each base station can acquire energy from the environment, promptly use it to serve the local traffic or keep it in its storage for later use. In addition, a power packet grid (DC power lines and switches) is utilized to enable energy transfer (energy routing) across base stations, compensating for imbalance in the harvested energy or in the load. Most of the base stations are offgrid, i.e., they can only use the locally harvested energy and that transferred from other network elements, whereas some of them are ongrid, i.e., they can also purchase energy from the electrical grid. We formulate the optimal energy allocation and routing as a convex optimization problem with the goals of improving the energy self-sustainability of the network, while achieving high energy transfer efficiencies under dynamic load and energy harvesting processes. An optimal assignment based on the Hungarian method is also presented. Our numerical results reveal that the proposed convex policy: (i) substantially improves the energy self-sustainability of the system, (ii) decreases its outage probability to nearly zero, even when a small number of base stations are connected to the electrical grid, and (iii) the amount of energy purchased from the electrical grid per served user is respectively decreased of three and eight times with respect to using the Hungarian policy and a scenario where the energy exchange among base stations is not permitted. I. INTRODUCTION Mobile Internet services have become ubiquitous. ITU estimated that 750 millions households are currently online and that there exist almost as many mobile subscribers as people in the world (around 6.8 billions) [1]. The trend is of a further increase in the traffic demand, in the number of offered and connected devices, especially mobile. However, this massive use of Information and Communications Technologies (ICT) is increasing the amount of energy drained by the telecommunication infrastructure and its footprint on the environment. Forecast values for 2030 are that 51% of the global electricity consumption and 23% of the carbon footprint by human activity will be due to ICT [2]. Besides, energy bills are also becoming a major problem for network operators, whose revenues are decreasing due to an ever increasing OPerating EXpense (OPEX): for example, it has been calculated that the energy bill matches the cost of the personnel needed to run and maintain the network, for a western Europe company in 2007 [3]. Hence, energy efficiency and, possibly, self-sufficiency is becoming a priority for any future development in the ICT sector. In this paper, we advocate future networks where small base stations will be densely deployed to offer coverage and high data rates, and energy harvesting hardware (e.g., solar panels and energy storage units) will be installed to power network elements [4]. Within this scenario, base stations will be capable of acquiring energy from the environment, use it to serve their local traffic and transfer it to other base stations to compensate for imbalance in the harvested energy or in the load. 
Energy transfer is thus a prime feature of these networks and can be accomplished in two ways: (i) through Wireless Power Transfer (WPT) or (ii) using Power Packet Grid (PPG) [5]. For (i), previous studies [6] have shown that its transfer efficiency is too low for WPT to be a viable solution, but (ii) looks promising. In analogy with communications networks, in a PPG a number of power sources and power consumers exchange (Direct Current, DC) power in the form of "packets", which flow from sources to consumers thanks to power lines and electronic switches. The energy routing process is controlled by a special entity called energy router [7]. Following this architecture, a local area packetized-power network consisting of a group of energy subscribers and a core energy router is presented in [8]. In this paper, the authors devise a strategy to match energy suppliers and consumers, seeking to minimize the mismatch between the energy generation and demand. An energy sharing framework is presented in [9], where the harvested energy is modeled as a packet arrival process, the storage as a packet queue and the energy consumption as a queue of loads, i.e., one or multiple servers. These three components of the PPG are interconnected through power switches. In [10], the packets take the form of current pulses with fixed voltage and duration. Each energy packet is equipped with an encoded header, containing the information about the destination identity (i.e., its address), which is used to route it through the PPG. Along the same lines, energy sharing among Base Stations (BSs) is investigated in [11] through the analysis of several basic multiuser network structures. A two-dimensional and directional water-filling-based and offline algorithm is put forward to control the harvested energy flows in time and space (among users), with the objective of maximizing the system throughput for all the considered network configurations. In [12], the authors introduce a new entity called aggregator, which mediates between the grid operator and a group of BSs to redistribute the energy flows, reusing the existing power grid infrastructure: one BS injects power into the aggregator and, simultaneously, another one draws power from it. This solution does not consider the presence of energy storage devices, and for this reason some of the harvested energy can be lost if none of the base stations needs it at a certain time instant. The proposed algorithm tries to jointly optimize the transmit power allocations and the transferred energy, maximizing the sum-rate throughput for all the users. In this paper, we consider the aforementioned scenario where all the BSs are equipped with solar harvesting hardware and energy storage units. They are all connected through a PPG. Moreover, some of them are connected to the electrical grid (referred to as ongrid), whereas the remaining ones are offgrid and, in turn, rely on either the locally harvested energy or on the energy transfer from other BSs. Since the BSs have a local energy storage, they can accumulate energy when the harvested inflow is abundant. Some of the surplus energy can also be transferred to other BSs to ensure the self-sustainability of the cellular system. The energy distribution is performed using the PPG infrastructure, where a centralized energy router is responsible for deciding the power transfer/allocation among BSs over time. For the harvested energy, we use real solar data from [13], whereas the BS load is modeled according to [14]. 
We formulate the energy routing problem through convex optimization, proposing an optimal power allocation strategy with the main goal of promoting the self-sufficiency of the BS system. This is accomplished by draining energy from energy rich BSs, while maximizing the transfer efficiency of the energy routing process. An approach based on the Hungarian method [15] is also presented and the two algorithms are numerically evaluated and compared with a scenario where BSs have energy harvesting and storage capabilities, but energy routing is not permitted. Numerical results, obtained with real-world harvested energy traces, show that the convex optimization approach keeps the outage probability to nearly zero for a wide range of traffic loads. Also, in the best cases, the amount of energy purchased per served user is reduced of one third with respect to the Hungarian-based allocation method and of almost eight times as compared to the case where the transfer of energy among base stations is not allowed. The paper is organized as follows. In Section II, we describe the network scenario. The energy allocation problem is described in Section III, where the proposed solutions are also presented. Routing and scheduling policies are addressed in Section IV. The numerical results are presented in section V, and final remarks are given in section VI. II. SYSTEM MODEL We consider a mobile network comprising BSs, where each of them has energy harvesting capabilities, i.e., a solar panel, an energy conversion module and an energy storage device. Some of the BSs are ongrid and, in turn, can also obtain energy from the electrical grid. These are termed ongrid BSs (set on ). The remaining BSs are offgrid (set off ). The proposed optimization process evolves in slotted time = 1, 2, . . . , where the slot duration is one hour and corresponds to the time granularity of the control. Note that the slot duration can be changed without requiring any modifications to the following algorithms. A. Power Packet Grids A PPG is utilized to distribute energy among the BSs. The grid architecture is similar to that of a multi-hop network, as shown in Fig. 1, where circles are BSs and the square is the energy router, which is in charge of making energy routing and power allocation decisions. According to [8], BSs are connected through Direct Current (DC) power links (electric wires) and the transmission of energy over them is operated in a Time Division Multiplexing (TDM) fashion. Energy transfer occurs by first establishing an energy route, which corresponds the sequence of power links between the energy source and the destination. Each power link can only be used for a single energy trading operation at a time. Power distribution losses along the power links follow a linear function of the distance between the source and the destination [8]. They depend on the resistance of the considered transmission medium and are defined by [16]: where is the resistivity of the wire in Ωmm 2 /m, ℓ is the length of the power link in meters, and is the cross-sectional area of the cable in mm 2 . Finally, in this paper we consider PPGs with a single energy router in the center of the topology. A number of trees originate from the router and each hop is assumed to have the same length ℓ, i.e., the same power loss. B. Harvested Energy Profile Solar energy generation traces have been obtained for the city of Chicago using SolarStat [13]. For the solar modules, the commercially available Panasonic N235B photovoltaic technology is considered. 
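To put the link-loss model of Eq. (1) to work, the snippet below computes the DC resistance of a power link and converts it into a per-hop attenuation coefficient. The copper resistivity, the operating voltage, and the I²R dissipation model are assumptions of this sketch, although the resulting loss is, as stated in the text, linear in the link length.

RHO_CU = 0.0171   # copper resistivity in ohm*mm^2/m (assumed material)

def link_resistance(length_m, area_mm2, rho=RHO_CU):
    # Eq. (1): R = rho * l / A
    return rho * length_m / area_mm2

def hop_attenuation(power_w, length_m, area_mm2, voltage=400.0):
    # Assumed loss model: I^2 * R dissipation on a DC link operated at `voltage`.
    r = link_resistance(length_m, area_mm2)
    current = power_w / voltage
    loss = current ** 2 * r
    return max(0.0, 1.0 - loss / power_w)   # fraction of power surviving one hop

gamma = hop_attenuation(power_w=2000.0, length_m=500.0, area_mm2=10.0)
print(round(gamma, 4), round(gamma ** 3, 4))   # single-hop vs a 3-hop route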
Each solar module features a total of 25 solar cells leading to a panel area of 0.44 m 2 , which is deemed practical for installation in a urban environment, e.g., on light-poles. As discussed in [13] and [4], the energy harvesting inflow is generally bell-shaped with a peak around mid-day, whereas the energy harvested during the night is negligible. However, there may be a high variability in the harvested energy among the BSs due to the orientation of their solar panel and to differences in the surrounding environment (trees, buildings, etc.). The amount of energy harvested ( ) by the generic BS = 1, . . . , in time slot is obtained as: where (0, ) is sampled from the uniform probability distribution function (pdf) (0, ), defined in the open interval (0, ). Here, (0, ) is referred to as shading factor, and is used to model the (possibly differing) harvested energy amounts across BSs. Instead, ℎ( ) returns the (hourly) harvested energy income, which is computed as in [13] and is the same for all BSs. A random experiment is executed for each BS at the beginning of each time slot , using Eq. (2). The resulting harvested energy trace is referred to as ( ). C. Traffic Load and Power Consumption It is commonly accepted and confirmed by empirical measurements that the energy consumption of base stations is time-correlated and daily-periodic [4]. In this paper, we use the typical daily load profile in Europe from [14], which allows tracking the number of mobile users that are to be served by a BS across a day. Given the load ∈ [0, 1], intended as the percentage of the total bandwidth that the BS allocates to serve the users in its cell, the BS energy consumption is obtained using the linear model in [4]. In addition, as we describe shortly below, starting from a common load pattern, we differentiate the load experienced by each BS introducing some randomness. This creates some imbalance in the load distribution, which is key to assess the effectiveness of the proposed optimization strategies. Specifically, the hourly traffic load ( ) of BS = 1, . . . , represents the number of users that are served by the BS in time slot , and is computed as follows. First, for each BS and time , the load is generated according to the following equation: where round(⋅) returns the integer that is nearest to the argument, ( , ) is a random value sampled from ( , ), the uniform pdf in the open interval ( , ), and are the minimum and maximum number of users in a BS cell, and ( ) ∈ [0, 1] represents the daily load profile, which is defined as the percentage of active users in one BS cell in hour and is taken from [14]. Note that ( ) is common to all the BSs, and ( ) is obtained for each BS performing a random experiment for each time slot , using Eq. (3). At this point, the load of BS in slot is approximated as ( ) = ( )/ max , where max represents the maximum number of users that can be served (serving capacity). The energy consumption (energy outflow) of this BS, referred to as ( ), is finally obtained using ( ) with the BS energy consumption model in [4] (see Eq. (1) in that paper). D. Energy Storage Units Energy storage units are referred to in what follows as energy buffers. The energy buffer level for BS = 1, . . . , is denoted by ( ) and two thresholds are defined, up and low respectively termed the upper and lower energy threshold, with 0 < low < up < max , where max is the energy buffer capacity. 
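A synthetic version of the energy and traffic traces of Eqs. (2) and (3) can be generated as follows. The bell-shaped hourly profile below is a rough stand-in for the SolarStat trace and the daily activity curve is a placeholder for the profile of [14], so both shapes are assumptions, while the uniform shading factor and user counts follow the description above.

import numpy as np

rng = np.random.default_rng(0)
HOURS = np.arange(24)

# Placeholder profiles (assumptions): bell-shaped solar income and a daily activity curve.
h = np.exp(-((HOURS - 12) ** 2) / (2 * 3.0 ** 2)) * 50.0          # Wh harvested per hour
act = 0.5 + 0.4 * np.sin((HOURS - 8) / 24 * 2 * np.pi)            # fraction of active users

def harvested_energy(b=1.0):
    # Eq. (2): per-BS trace obtained by scaling h(t) with a shading factor ~ U(0, b).
    beta = rng.uniform(0.0, b, size=HOURS.size)
    return beta * h

def traffic_load(n_min=10, n_max=50, n_serve_max=60):
    # Eq. (3): hourly number of users, then normalized load rho = L / N_max.
    users = np.round(rng.uniform(n_min, n_max, size=HOURS.size) * act).astype(int)
    return users, users / n_serve_max

E = harvested_energy()
users, rho = traffic_load()
print(E.round(1))
print(users, rho.round(2))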
For the results in this paper we used max = 360 kJ, which corresponds to a battery capacity of 100 W h (small size Li-Ion battery). For an offgrid BS, i.e., ∈ off , ( ) is updated every time slot (hour) as: where ( ) is the amount of energy that is transferred, which can either be positive (BS is a consumer) or negative (BS acts as a source). Note that the objective of our optimization in Section III is to find ( ) for all as a function of all the other system parameters. Also, ( ) is the energy buffer level at the beginning of time slot , whereas ( ), ( ) and ( ) are the energy harvested, the energy that is locally drained and the energy transferred in the time slot (from to +1), respectively. The energy level of an ongrid BS ∈ on is updated as: where the new term ( ) ≥ 0 represents the energy purchased by BS from the electrical grid during hour . The behavior of a BS is determined by the energy level in its energy buffer. Specifically, if ( ) ≥ up , the BS behaves as an energy source, and is thus eligible for transferring a certain amount of energy ( ) to other BSs. In this work, we assume that if the total energy in the buffer at the end of the current time slot is ( ) < up and the BS is ongrid, then the difference ( ) = up − ( ) is purchased from the electrical grid in slot , as an ongrid BS must always be a source, i.e., in the position of transferring energy to other BSs. If instead ( ) ≤ low , the BS behaves as an energy consumer. From the above assumptions, it descends that in this case the BS can only be offgrid and its energy demand amounts to ( ) = low − ( ), so that its energy buffer would ideally become equal to the lower threshold low by the end of the current time slot. Note that, this can only be strictly guaranteed if ( ) − ( ) ≥ 0: however, since ( ) is measured at the beginning of time slot , whereas ( ) and ( ) are only known at the end of it, the amount of energy that the BS demands ( ) (still at the beginning of the time slot) cannot explicitly depend on them. Although we may use estimates of [ ( ) − ( )] to get more accurate results, this approach is left for future work. In Fig. 2, we plot a typical load pattern (blue curve), the harvested energy by an offgrid BS (black curve) and ( ) for an ongrid BS (red curve). Note that an ongrid BS maintains the buffer level equal to up when no energy is harvested, whereas its buffer ( ) grows beyond up around mid-day where there is an energy surplus due to the harvesting process. III. PROBLEM FORMULATION In this section, we propose an optimal energy allocation strategy whose objective is to make the offgrid BSs as self-sustainable as possible. This is achieved by transferring some amount of energy from rich energy BSs to those offgrid base stations that are energy consumers (whose buffer level is below low ). Note that maximizing energy transfer efficiency in the energy routing process is also important, and will be explicitly modeled in the objective functions of Section III-B. A. Notation We use the indices and to respectively denote an arbitrary energy source and a consumer. = {1, . . . , . . . , } and = {1, . . . , . . . , } are the set of BS acting as sources and consumers, respectively. With we mean the total available energy from the source ∈ to the consumer ∈ , in matrix notation we have = [ ]. Note that is the energy that would be available at the consumer BS and, in turns, it depends on , and on the distribution losses between them, i.e., on the total distance that the energy has to travel (see Section II-A). 
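The buffer book-keeping of Eqs. (4) and (5) and the source/consumer rules can be summarized in a few lines of code. Clamping the buffer to its physical limits and the exact amount offered by an ongrid station after topping up to E_up are assumptions of this sketch.

B_MAX, E_UP, E_LOW = 360.0, 300.0, 120.0   # kJ; B_MAX from the text, the thresholds are illustrative

def update_buffer(B, harvested, drained, transferred, purchased=0.0):
    # Eqs. (4)-(5): B(t+1) = B(t) + H(t) - E(t) + I(t) [+ P(t) for an ongrid BS].
    # Clamping to [0, B_MAX] is an assumption of this sketch.
    return min(B_MAX, max(0.0, B + harvested - drained + transferred + purchased))

def classify(B, ongrid):
    # A BS with B >= E_up acts as a source; an offgrid BS with B <= E_low is a consumer.
    if B >= E_UP:
        return "source", B - E_LOW, 0.0            # (role, energy it can offer, grid purchase)
    if ongrid:
        return "source", E_UP - E_LOW, E_UP - B    # assumed: buys E_up - B, then offers E_up - E_low
    if B <= E_LOW:
        return "consumer", E_LOW - B, 0.0          # second entry is the demand d_j
    return "idle", 0.0, 0.0

print(classify(80.0, ongrid=False))    # ('consumer', 40.0, 0.0)
print(classify(250.0, ongrid=True))    # ('source', 180.0, 50.0)
print(update_buffer(200.0, harvested=30.0, drained=45.0, transferred=-20.0))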
Vector , with elements , represents the energy demand from each consumer ∈ . represents the number of hops in the energy routing topology between source ∈ and consumer ∈ , in matrix notation we have = [ ]. Also, we assume that all hops have the same physical length. Finally, with ∈ [0, 1] we mean the fraction of that is allocated from source ∈ to consumer ∈ , in matrix notation = [ ]. B. Objective Functions As a first objective, we seek to minimize the difference between the amount of energy offered by the BS sources ∈ and that transferred to the BS consumers ∈ . This amounts to fulfill, as much as possible, the consumers' energy demand . At time , the energy that can be drained from a source is ( ) − low . Now, if we consider the generic consumer , the maximum amount of energy that can provide to is = ( ( ) − low ) ( ), where ( ) ∈ [0, 1] is the attenuation coefficient between and , due to the power loss. We thus write a first cost function as: where ∈ and ∈ . Due to the existence of a single path between any source and consumer pair, and the fact that each power link can only be used for a single transfer operation at a time, a desirable solution: (i) picks source and consumer pairs ( , ) in such a way that the physical distance ( ) between them is minimized and (ii) achieves the best possible match between sources and consumers, i.e., uses source , whose available energy is the closest to that required by consumer . That is, we would like to be as close as possible to 1. If this is infeasible, several sources will supply the consumer. Through the minimization of the following cost function, the number of hops between sources and consumers is minimized and we favor solutions with → 1: in other words, with this cost function we are looking for a sparse solution (i.e., a small number of sources with close to 1. Note that when → 1 and is minimized, the argument / is maximized and the negative exponential is minimized. Also, the exponential function was picked as it is convex, but any increasing and convex function would do. C. Optimization Problem At each time slot , every BS updates its buffer level ( ), using either Eq. (4) or Eq. (5) (note that ( − 1), ( − 1), ( − 1), ( − 1) and ( − 1) are all known in slot , see Section II). It then decides whether to act as a source or as a consumer in the current time slot. Each source evaluates for all ∈ through = ( ( )− low ) ( ) and each consumer evaluates the amount of energy required as = low − ( ). (power losses) is fixed as it only depends on the topology. At time , armed with the cost functions in Eq. (6) and Eq. (7), the following optimization problem is formulated: where ∈ [0, 1] is a weight used to balance the relative importance of the two cost functions and are the decision variables. The first constraint represents the fact that is a fraction of the available energy from source , and the second constraint means that the total amount of energy that a certain source transfers to consumers = 1, . . . , cannot exceed the total amount of available energy at this source. For any fixed value of , Eq. (8) is a convex minimization problem which can be solved through standard techniques. In this paper, we have used the CVX tool [17] to obtain the optimal solution * = [ * ], which dictates the optimal energy fraction to be allocated from any source to any consumer . D. 
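The paper solves Eq. (8) with CVX; an equivalent prototype can be written with cvxpy as below. Since the cost functions (6) and (7) are stated above only in words, the concrete choices here (an absolute mismatch term for the demand, a sum of exp(−x_ij/N_ij) terms to favor few sources and short routes, and the per-source constraint that the allocated fractions sum to at most one) are one plausible reading and should be treated as assumptions.

import numpy as np
import cvxpy as cp

def allocate(E, d, N, theta=0.5):
    """E[i, j]: energy that source i can deliver to consumer j (path losses already applied);
    d[j]: demand of consumer j; N[i, j]: number of hops on the (unique) route i -> j."""
    S, C = E.shape
    x = cp.Variable((S, C), nonneg=True)
    delivered = cp.sum(cp.multiply(E, x), axis=0)        # energy reaching each consumer
    j1 = cp.sum(cp.abs(d - delivered))                   # assumed form of J1: demand mismatch
    j2 = cp.sum(cp.exp(-cp.multiply(x, 1.0 / N)))        # assumed form of J2: few sources, short routes
    prob = cp.Problem(cp.Minimize(theta * j1 + (1 - theta) * j2),
                      [x <= 1, cp.sum(x, axis=1) <= 1])
    prob.solve()
    return x.value

E = np.array([[3.0, 1.0], [0.5, 2.0]])    # toy instance: 2 sources, 2 consumers
d = np.array([2.5, 1.5])
N = np.array([[1, 2], [2, 1]])
print(np.round(allocate(E, d, N), 3))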
Solution through Optimal Assignment Alternatively, the energy distribution problem from sources to consumers can be modeled as an assignment problem, where each source ∈ has to be matched with a consumer ∈ . This approach can be solved applying the Hungarian method [15], an algorithm capable of finding an optimal assignment for a given square × cost matrix, where = max( , ). An assignment is a set of entry positions in the cost matrix, no two of which lie in the same row or column. The sum of the entries of an assignment is its cost. An assignment with the smallest possible cost is referred to as optimal. Let = [ ] be the cost matrix, where rows and columns respectively correspond to sources and consumers . Hence, is the cost of assigning the -th source to the -th consumer and is obtained as follows: where the first term weighs the quality of the match (the demand should be as close as possible to ) and the second the quality of the route. To ensure the cost matrix is square, additional rows or columns are to be added when the number of sources and consumers is not equal. As typically assumed, each element in the added row or column is set equal to the largest number in the matrix. The main difference between the optimal solution found by solving the convex optimization problem (Eq. (8)) and that found by the Hungarian method is that the latter returns a one-to-one match between sources and consumers, i.e., each consumer can only be served by a single source. On the other hand, for any given consumer the convex solution also allows the energy transfer from multiple sources. IV. ENERGY ROUTING Next, we describe how the energy allocation is implemented over time. The algorithm that follows is executed at the beginning of each time slot, when a new allocation matrix * is returned by the solver of Section III-C. Each hour is further split into a number of mini slots. Given a certain maximum transmission energy capacity max for a power link in a mini slot, the required number of mini slots to deliver a certain amount of power between source and consumer is obtained as = ⌈ / max ⌉. Since each power link can be used for a single energy transfer operation at a time, we propose an algorithm that seeks to minimize the number of mini slots that are used. First of all, an energy route for the source-consumer pair ( , ) is defined as the collection of intermediate nodes to visit when transferring energy from to . The algorithm proceeds as follows: 1) a route is identified for each source and consumer (note that for the given network topology this route is unique), 2) the disjoint routes, with no power links in common, are found and are allocated to as many ( , ) pairs as possible, 3) for each of these pairs ( , ), the energy transfer is accomplished using route , for a number of mini slots , 4) when the transfer for a pair ( , ) is complete, we check whether a new route is released (i.e., no longer used and available for subsequent transfers). If that is the case, and if this route can be used to transfer energy for any of the remaining pairs ( ′ , ′ ) (not yet considered), this route is allocated to any of the eligible pairs ( ′ , ′ ) for ′ , ′ further mini slots. This process is repeated until all source-consumer pairs have completed their transfer. V. NUMERICAL RESULTS In this section we show some selected numerical results for the scenario of Section II. The parameters that were used for the simulations are listed in Table I. The results in Fig. 
V. NUMERICAL RESULTS In this section we show some selected numerical results for the scenario of Section II. The parameters that were used for the simulations are listed in Table I. The results in Fig. 3 are obtained considering |S_on| = 3 ongrid BSs, a total number N = 18 of base stations and 30 users on average per BS. This graph shows the average outage probability P_out(t) over an entire day, which for each time t is computed as the ratio between the number of BSs whose battery level is completely depleted and the total number of BSs in the system, N. The performance of three methods is shown in the plot: (i) no energy exchange (NOEE), i.e., the BSs that are offgrid only have to rely on the locally harvested energy, (ii) convex solution (CONV), found solving Eq. (8) and (iii) Hungarian solution (HUNG), found through the Hungarian method of Section III-D. As expected, the probability that a BS runs out of service due to energy scarcity is higher when energy cannot be transferred among BSs (NOEE) and is in general very high across the whole day for NOEE and HUNG. Moreover, we see that for the latter schemes P_out increases when the amount of energy harvested is very little (i.e., night-time). The problem of the Hungarian method is that it returns a matching of source-consumer pairs where each source is allocated to a single consumer and, in turn, some of the BSs will not be allocated in some time slots (due to the imbalance between the number of sources and the number of consumers). This leads to high outage probabilities for the considered scenario. The convex optimization problem offers a more flexible solution, as it allows the energy transfer from multiple BSs and in different amounts. This translates into a zero outage probability. In Fig. 4, the evolution of the outage probability as the number of ongrid BSs increases is presented, considering an average of 30 active users per BS. Again, the convex solution performs better than the Hungarian one, but nevertheless we see that the same results are attained when the number of ongrid BSs becomes equal to the number of offgrid ones. In fact, N = 18 and |S_off| = N − |S_on|, which means that when |S_on| = 9 there is the same number of ongrid and offgrid base stations and, in fact, in this case the Hungarian method also leads to a zero outage. The same occurs as |S_on| keeps increasing, as then |S_on| > |S_off|. The outage probability as a function of the traffic load is shown in Fig. 5 for |S_on| = 3 and N = 18. As expected, P_out is an increasing function of the load. However, CONV starts increasing later than NOEE and HUNG, and does so at a slower pace. Some considerations about the energy use are in order. All the algorithms purchase some energy from the electrical grid, although the way in which they use it differs. With NOEE, the energy purchased is solely used to power the base stations that are ongrid, whereas those being offgrid have to rely on the harvested energy. CONV and HUNG allow some energy redistribution among the base stations. With these algorithms, an energy rich BS may transfer energy to other base stations whose energy buffer is depleted. Note that an energy rich base station may belong either to the ongrid set or to the offgrid one. The latter case occurs when, for instance, one base station experiences no traffic during the day and all the energy it harvests is stored locally. In this case, this base station is energy rich, although it is offgrid, and both CONV and HUNG consider it as an energy source for other BSs. We now see the whole base station network as a closed system that gathers energy in two ways: 1) harvesting it from the environment and 2) purchasing it from the electrical grid.
The harvested energy is basically free (the CAPEX is not considered here) and shall be utilized to the best extent: energy transfer among base stations makes this possible. The energy bought by the ongrid BSs is costly and shall also be utilized as efficiently as possible. Next, we try to assess how efficiently the energy that is purchased by the BS system is utilized to serve the mobile users. To do so, for each hour t, we compute an efficiency metric G(t), which represents the amount of energy purchased from the electrical system per successfully served user in hour t. For its computation, we use the following quantities: E_s(t) is the total amount of energy purchased by base station s ∈ S_on in hour t, and P_out,s(t) is the outage probability for BS s ∈ S_on ∪ S_off in hour t, defined as the fraction of this time slot in which its energy buffer is completely depleted. The resulting efficiency G(t) is shown in Fig. 6 across an entire day for |S_on| = 3, N = 18 and 30 active users (on average) per BS. Clearly, good energy allocation schemes have a small G(t) (the smaller the better), as this indicates that the energy purchased is put to good use. From this plot, we see that CONV provides a much better (smaller) G(t) than the other two schemes. We also see that the highest advantages are obtained in the periods of the day during which the energy harvested is scarce. In fact, from t = 12 (midday) to 15 (3 pm) the energy harvested is generally abundant and any solution would do. During these hours ("energy peak hours") the offgrid base stations have no problem (if properly dimensioned) to handle the respective traffic load. The problems arise when the harvested energy is not so abundant, scarce or even absent. In these cases, a good allocation strategy should take advantage of the energy reserve in the energy buffers (accumulated during the energy peak hours) and distribute energy from ongrid or energy rich BSs in a way to keep the power losses small. CONV does a very good job in these respects: in the best cases, the amount of energy purchased per served user is reduced by one third with respect to HUNG and by a factor of almost eight when the transfer of energy among base stations is not allowed (NOEE). VI. CONCLUSIONS In this paper, we have considered future small cell deployments where energy harvesting and packet power networks are combined to provide energy self-sustainability through the use of own-generated energy and carefully planned energy transfers among network elements. In the considered scenario, besides possessing energy harvesting and storage units, some of the base stations (BSs) are offgrid and receive power from energy rich BSs. Two energy allocation schemes have been proposed, one based on convex optimization and one on the Hungarian method. The objective of the proposed solutions is to (i) maximize the energy transfer efficiency among base stations, while (ii) reducing as much as possible the outage probability, i.e., the probability that offgrid base stations are unable to serve their load due to energy scarcity. Numerical results, obtained with real-world harvested energy traces, reveal that the convex optimization approach is capable of keeping the outage probability to nearly zero for a wide range of traffic loads. In the best cases, the amount of energy purchased per served user is reduced by one third with respect to the Hungarian allocation method and by a factor of almost eight when the transfer of energy among base stations is not allowed.
7,802.2
2017-12-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Crystal structure of the uranyl arsenate mineral hügelite, Pb2(UO2)3O2(AsO4)2(H2O)5, revisited: a correct unit cell, twinning and hydrogen bonding The crystal structure of the uranyl arsenate mineral hügelite is affected by twinning due to reticular merohedry (diffraction type II). This study documents, apart from the correct description of the unit cell and the nature of the twinning, the possibilities of the JANA2006 program in revealing the real nature of twinning even if using published structural data without the original reflection files. The new find of the rare mineral hügelite from the small uranium deposit Labská, Krkonoše Mts. (Czechia), prompted us to perform a new diffraction experiment, which revealed that the actual twinning in hügelite is different from the description presented by Locock & Burns (2003). Here we report on the results of our analysis that might help to understand the nature of the twinning in this mineral, as well as helping in future analyses of similarly twinned crystal structures. Sample The hügelite crystal used in this study was extracted from a specimen collected by Pavel Škácha from the Labská uranium deposit. This small uranium deposit is located approximately 5 km to the south of the town of Špindlerův Mlýn in the Krkonoše Mountains (Eastern Bohemia, Czechia). Hügelite forms elongated prismatic crystals apparently flattened on one of the prismatic faces. Crystals reach maximally up to 1 mm across (Fig. 1). They are dark orange in colour. Hügelite was found on a few specimens only, associated with more abundant dumontite. Phosphuranylite and saléeite were also identified in the association. Single-crystal X-ray diffraction We studied two tiny crystals of hügelite from the Labská deposit.
[Table 1: Details of the data collection and refinement for the two different crystals of hügelite from the Labská deposit.]
[Figure 1: An aggregate of the tabular yellow crystals of hügelite from the Labská deposit. Field of view 7 mm (photo by P. Škácha).]
While the first crystal (hereafter denoted Labská I), of approximate dimensions 0.038 × 0.013 × 0.007 mm, was later found to be affected by twinning, the second crystal (Labská II), having similar, but somewhat larger, dimensions (0.067 × 0.018 × 0.005 mm), was found later on to be untwinned. Diffraction experiments were carried out at room temperature with a Rigaku SuperNova single-crystal diffractometer. The diffraction experiment was carried out using Mo Kα radiation (λ = 0.71073 Å) from a micro-focus X-ray tube collimated and monochromated by mirror optics and detected by an Atlas S2 CCD detector. For both experiments, ω-rotational scans (of frame width 1°) were adopted and a full sphere of the diffraction data was collected. For the larger crystal, Labská II, an increased counting time per frame, equal to 840 s (compared to 400 s for Labská I), a high-sensitivity mode of the CCD detector (binning of pixels 2×2, with a high-gain option) and high redundancy of the data set (~7) were used to reveal even weak reflections. The diffraction experiment for Labská I, as expected due to previous results given by Locock & Burns (2003), provided a particularly complicated diffraction pattern, caused by twinning due to reticular merohedry (Petříček et al., 2016). The studied crystal was found to be monoclinic, but with different unit-cell parameters than given by Locock & Burns (2003).
Actually, the unit-cell parameters found are a = 7.0189 (7) Å, b = 17.1374 (10) Å, c = 8.1310 (10) Å and β = 108.904 (10)°, with V = 925.29 (15) Å3 and Z = 2 (Table 1). The reticular twin (diffraction type II) was found by the routine implemented in the JANA2006 program (Petříček et al., 2014). The twin operation is a mirror in the [100] direction; the second twin domain can be obtained by the matrix (−1 0 0 | 0 1 0 | 0.75 0 1), simulating a pseudo-orthorhombic supercell, which is eight times larger than the real (sub)cell (a = 7.019 Å, b = 17.137 Å, c = 61.539 Å and β = 90.02°). Therefore, during the data reduction, each twin domain was integrated alone and later imported into the JANA2006 program utilizing the already known twin matrix that helped to define the orientation of each unit cell, thus resolving fully overlapped reflections. Those reflections, present in both data blocks, were included only one time to avoid their multiple occurrences in the refinement. The refinement in JANA2006 including the twin model led to reasonable values of the twin fractions, i.e. 0.877 (1):0.123 (1) (Table 1); noticeably, the second twin domain is rather weak. The refinement, which took into account twinning, improved/smoothed slightly the difference Fourier; nevertheless, there are still false maxima due to poorly fitted absorption (apparent in the vicinity of the U atoms). The fact that intensities (and namely those of the contributing second domain) are quite weak resulted in the refinement converging to a higher R = 7.58% for 2110 reflections with I > 3σ(I) (GOF = 1.63); noticeably, at first sight, we also have to take into account a different approach to the weighting scheme of the current refinement and a criterion for observed intensities. Structure solution for the Labská I crystal was carried out using the intrinsic phasing algorithm of the SHELXT program (Sheldrick, 2015); refinement of Labská II was carried out using the model obtained for Labská I. Details of the data collection and refinement for both crystals are given in Table 1, final atomic coordinates and displacement parameters in Table 2, selected interatomic distances and hydrogen-bond parameters in Table 3, and a bond-valence analysis in Table 4. The bond-valence analysis was made following the procedure of Brown (2002, 2009) using bond-valence parameters provided by Gagné & Hawthorne (2015). The formula of the crystal studied, based on refined occupancies and bond-valence calculations, is Pb2(UO2)3O2[(As0.583P0.417)O4]2(H2O)5 (Z = 2 and Dcalc = 5.669 Mg m−3). Twin contributions were evaluated visually using the reciprocal layers (Fig. 2) reconstructed from the CCD frames (UNWARP tool within the CrysAlis software; Rigaku OD, 2019) and by computer methods using the program JANA2006 (Fig. 3). Twinning and the extra reflections due to twinning are easily visible for the Labská I crystal at the h0l and hk2 layers, for instance, while Labská II provided unbiased frames (Fig. 3). The presence of additional reflections can bias the indexing algorithms, because it simulates the larger unit-cell parameter.
[Table 4: Bond-valence analysis (all values given in valence units, v.u.) for hügelite; bond-valence parameters taken from Gagné & Hawthorne (2015).]
[Figure 2: Twinning in hügelite, showing the reciprocal space reconstruction for the twinned (Labská I) and untwinned (Labská II) crystals. The twin contribution is easily visible in the case of the h0l layer. The biased intensities are apparent for the hk2 layer.]
While the refinement of the Labská I crystal returned refined twin fractions of 0.877 (1):0.123 (1) (Table 1), the second crystal showed a negligible contribution of twinning only when a mirror operation was taken into account, Tw_vol1/Tw_vol2 = 0.9994 (5)/0.0006 (5).
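As an aside, the effect of the reported twin law on the diffraction pattern can be illustrated numerically: reflections of the second domain whose indices remain integral after the twin transformation overlap exactly with reflections of the first domain, while the rest appear as extra spots. The sketch below assumes the twin matrix acts on (h k l) row vectors; that convention, and the index range, are illustrative assumptions only.

```python
# Illustrative check of the twin law (-1 0 0 | 0 1 0 | 0.75 0 1): count which
# twin-related reflections overlap exactly with the first-domain lattice.
import numpy as np
from itertools import product

T = np.array([[-1.0, 0.0, 0.0],
              [ 0.0, 1.0, 0.0],
              [ 0.75, 0.0, 1.0]])

overlapped, split = 0, 0
for hkl in product(range(-4, 5), repeat=3):
    hkl2 = np.array(hkl) @ T                  # indices of the twin-related reflection
    if np.allclose(hkl2, np.round(hkl2), atol=1e-6):
        overlapped += 1                       # exact overlap with the first domain
    else:
        split += 1                            # extra, twin-only spot
print(f"overlapping reflections: {overlapped}, extra twin-only reflections: {split}")
```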
Results Our description of the twinning in hügelite leaves the structure model proposed by Locock & Burns (2003) unchanged. The structure possesses uranyl-arsenate sheets with a phosphuranylite topology (Burns, 2005; Lussier et al., 2016), with Pb2+ cations located in the interlayer space between the infinite sheets (Fig. 4). Twinning in hügelite The twin operation, i.e. a mirror in [100], leads to a rather large supercell, with V ≈ 7400 Å3. There is a clear relationship between the unit cell derived by Locock & Burns (2003) and the supercell found in our study. The cell of Locock & Burns (2003) is half the volume of the supercell of our choice: our cell thus represents a real cell of hügelite, while the cell of Locock & Burns is a result of twinning (Fig. 3); the unit cell of Locock & Burns (2003) (a = 30.993 Å, b = 17.159 Å, c = 7.022 Å and β = 96.44°), when a mirror in (001) is applied, leads to the same supercell as in the current study. The correct description of twinning in hügelite confirmed the originally reported unit cell (Walenta, 1979), having a unit-cell volume of ~930 Å3. The correct unit cell of hügelite (V ≈ 930 Å3), when compared to dumontite (V ≈ 920 Å3), confirms that these two minerals are isotypic As- and P-dominant analogs, respectively. The increase of the unit-cell volume towards the As end member (hügelite), due to the larger effective ionic radius of As5+ compared to P5+, is apparent. It should be noted that the currently investigated crystal of hügelite is not an end member of the solid-solution series, based on the site-scattering refinement (Tables 1 and 2). Hydrogen bonding in hügelite Although hügelite is a highly absorbing substance, even for Mo X-rays (μ = 46.30 mm−1), final difference-Fourier calculations revealed few maxima assignable to H atoms around those O atoms that belong, according to the bond-valence analysis, to H2O groups. Because it was impossible to freely refine all the parameters of the H atoms, they were refined using restrictions available in JANA2006 for the hydrogen-bond geometry. Therefore, the scheme presented should be considered as an approximation at best. We also have to emphasize that the higher BV sums for both H atoms and the donor O atoms resulted from the restraint used on the A–H bond length, taken as 0.82 Å as a conservative value for the hydrogen-bond length from X-ray analysis. The hydrogen-bonding scheme can be deduced from the results of the bond-valence analysis (Table 4). There are three symmetrically independent H2O sites (corresponding to O5, O10 and O12), indicating five H2O molecules per unit cell for Z = 2. While atom O5 seems to be three-coordinated (one bond from Pb1 and two bonds to H1O5 and H2O5), atoms O10 and O12 are five- and four-coordinated, respectively. According to the terminology introduced by Schindler & Hawthorne (see, for example, Schindler & Hawthorne, 2008), O5 represents the transformer H2O group, while O10 represents an inverse transformer and O12 represents a non-transformer H2O group.
Therefore, the current results are in contrast to the theoretical predictions made by Schindler & Hawthorne (2008), who concluded that hügelite should contain five inverse transformer (H2O) groups, based on the bond-valence approach. The above-mentioned scheme should be taken as a best-available model only. Due to underbonding of the O1(Ueq) and O2(Ueq) atoms (with corresponding low BV sums; Table 4), we can speculate about a somewhat different configuration, involving also these two equatorial O atoms. Nevertheless, for such a task, employment of advanced theoretical approaches, as used recently for the uranyl phosphate mineral phurcalite (Plášil et al., 2020), would be necessary. 4. Implications - processing the data using JANA2006 to reveal the nature of twinning Despite the fact that we did not have an original reflection file for the refinement of Locock & Burns (2003), the software we used for the structure analysis, JANA2006, enables us to perform a check for twinning in their structure, based on the available crystallographic information file (CIF). We have to emphasize here that the warning for the hidden translation symmetry in the CIF file of Locock & Burns (2003) is also raised by checkCIF, returning the B Alert: 'PLAT113_ALERT_2_B ADDSYM Suggests Possible Pseudo/New Space Group P21/m Check Note: (Pseudo) Lattice Translation Implemented'.
[Figure 4: The crystal structure of hügelite projected down the monoclinic b axis. UO7 bipyramids are shown in transparent yellow, UO8 bipyramids are opaque yellow, (As/P)O4 tetrahedra are green, Pb atoms are dark gray (shown as displacement ellipsoids at the 75% probability level), H atoms are light gray and O atoms are red. The unit-cell edges are outlined with solid black lines. H···A bonds have been omitted for clarity.]
The entire procedure we followed in JANA2006 is displayed in Fig. 5. The first step involves a calculation of the theoretical reflection file (Mo Kα, full sphere) based on the atom positions in the CIF of Locock & Burns (2003). The next step involves the calculation of the Patterson map. As the autoconvolution of the electron density itself, it provides important information on the real metrics and can thus reveal the real periodicity features underlying the data. This analysis showed three pronounced Patterson maxima; from them, assuming the omnipresent inversion in the Patterson map, we obtained the translation vectors (0, 0, 0), (−1/4, 0, 1/4), (1/2, 0, 1/2) and (1/4, 0, −1/4). Those were used for the unit-cell transformation by the matrix |1/4 0 −1/4 | 0 1 0 | 0 0 1|. After axis transformation (a → c), we obtained the following cell: a = 7.043 Å, b = 17.30 Å, c = 8.1554 Å, α = 90°, β = 108.879° and γ = 90°. During the next step, the creation of the refinement reflection file (even if from the simulated data), there were 24 reflections found that violated the translation symmetry. Nevertheless, they were weak. The structure after the transformation into the smaller cell shows a few atoms projected into very close positions (<0.5 Å). Merging the 24 atoms and refinement of the simulated structure then led to reasonably low R values (~4.2% for 2771 reflections). The test for twinning by reticular merohedry/pseudomerohedry (Petříček et al., 2016) readily revealed an orthorhombic supercell (7.043 Å, 17.302 Å, 61.733 Å, 90°, 89.98°, 90°), with a unit-cell volume eight times larger than the real cell.
This supercell is a result of the twinning that can be described as a mirror in (100) of the cell a = 7.043 Å, b = 17.302 Å, c = 8.1554 Å, α = 90°, β = 108.879° and γ = 90°. Fig. 3(a) displays a pattern of the eight times larger cell; from this point of view, the cell choice of Locock & Burns (2003) is reasonable and is due to twinning, which had been present in their crystal beyond any shadow of a doubt.
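As a numerical cross-check of the cell transformation described above, the short sketch below applies the matrix |1/4 0 −1/4 | 0 1 0 | 0 0 1| to the Locock & Burns (2003) cell through the metric tensor G' = M G Mᵀ. The metric-tensor route is a standard crystallographic identity rather than the procedure JANA2006 actually performs internally; after the a → c relabeling mentioned in the text, the output should be close to the smaller cell quoted above, with a volume one quarter of the original.

```python
# Transform the Locock & Burns (2003) monoclinic cell (b unique) to the smaller cell.
import numpy as np

a, b, c, beta = 30.993, 17.159, 7.022, np.radians(96.44)
G = np.array([[a*a,               0.0, a*c*np.cos(beta)],
              [0.0,               b*b, 0.0             ],
              [a*c*np.cos(beta),  0.0, c*c             ]])   # metric tensor of the old cell

M = np.array([[0.25, 0.0, -0.25],
              [0.0,  1.0,  0.0 ],
              [0.0,  0.0,  1.0 ]])                            # transformation matrix
Gp = M @ G @ M.T                                              # metric tensor of the new cell

ap, bp, cp = np.sqrt(np.diag(Gp))
betap = np.degrees(np.arccos(Gp[0, 2] / (ap * cp)))
print(f"a'={ap:.3f} A, b'={bp:.3f} A, c'={cp:.3f} A, beta'={betap:.2f} deg")
print(f"volume ratio V'/V = {np.linalg.det(M):.3f}")          # expected: 0.25
```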
3,378.2
2021-05-14T00:00:00.000
[ "Materials Science", "Geology" ]
Control of Chaos Using the Controller Identification Technique Modeling and simulation of chaotic systems with dynamic control have been extensively presented in the past decades. Several control techniques have been proposed for the control of chaos. One technique that has not been sufficiently explored for the control of nonlinear systems is the controller identification technique. This technique is based on the evaluation of controllers even if they are not online. This technique does not use a priori knowledge of the plant parameters. In this work, we propose a class of candidate controllers to follow desired trajectories. Simulation results are presented for the control of chaotic systems. Introduction Phenomena which exhibit chaotic behavior appear in several areas, attracting researchers who try to describe this behavior through mathematical equations and to control its dynamics. The search for efficient control techniques for nonlinear systems continues to be a challenge to researchers. Among the variety of controllers used in dynamical systems, the proportional-integral-derivative (PID) controller is the most common and practical one adopted by control engineers. Since PID controllers are commonly used in industrial control systems, they are usually adjusted by empirical methods. On the other hand, there are more sophisticated control techniques involving complex theoretical developments, which impose very restrictive hypotheses on the systems to be controlled, such as nonlinear control techniques. The theoretical success of robust control led many researchers to say that control should focus on plant parameter estimation and its bounds. However, other researchers in the control area, such as Safonov and Tsao [1], became unsatisfied with the robust control paradigm, which required the control of a family of plants instead of the control of one, which would lead to conservative results. They stated that it was time to reformulate the control problem. A first formulation was the unfalsified control [1,2]. This control has two major characteristics: it advances from plant parameter estimation to controller parameter estimation [3] and it considers model falsification in the sense of Popper instead of model validation [4,5]. This was a first formulation of the controller identification problem, which in general considers a list of candidate controllers and a criterion to judge the performance of these controllers without needing to put them online [6]. In fact, there are few control techniques which require minimal information on the systems to be controlled. Among these techniques we can cite the ones based on the unfalsified control paradigm and the controller identification technique [2,6]. The unfalsified control paradigm allows us to formulate the control problem based on experimental data [7]. The advantage of this technique, when compared to others, is that it does not require a priori knowledge of the state or physical properties of the plant. This fact illustrates a potential to be explored through the use of the controller identification technique, which can also be used for the control of nonlinear dynamical systems. The main goal of this work is to use the controller identification technique to control the trajectories of dynamical systems with chaotic behavior. The novelty of this approach is that the plant is treated as a model-free plant which accounts for any model mismatches and that we try to identify a low-order controller for a complex system.
The proposed technique will be applied to control a Rössler [8], a Lotka-Volterra three-species system [9][10][11], and a Rössler hyperchaos model [12].In the literature we can find several works dealing with control of chaos.We highlight those related to the control of the Rössler system [13,14] and the predator-prey systems [15][16][17].In general, one can observe in these applications that the controllers used are of proportional kind.In the same way, in this work we use the controller identification technique applied to the proportional case.More precisely, given a class of candidate controllers, we identify the ones that present the best performance.The proportional control parameter is periodically modified.This update is made in order to put a better controller online.There are several ways to update the control parameters, the one chosen in this work is the step; that is, after a certain time, the control parameters are updated. Numerical simulation is presented to illustrate the effectiveness of the proposed technique.These simulations regard the systems previously cited with the requirement of following a desired trajectory.The controllability of each system is calculated, which allows us to show that the proposed control technique preserves the locally complete state controllability of the nonlinear controlled systems. This paper is organized as follows.In Section 2, the technique of controller identification is presented.In Section 3, a brief explanation of local controllability is presented and the proposed technique is applied to control the chaotic Rössler, predator-prey, and Rössler hyperchaos systems with simulations.In Section 4, some concluding remarks are given. The Controller Identification Technique A general overview of the controller identification technique can be found in [6].According to this technique, the only plant information used is the plant experimental data.For the cases presented in this work, we need only the reference functions and the data (from the dynamical system) to obtain the control .Given a desired behavior, a class of candidate controllers is proposed.Then, a controller is selected through the use of a performance index and the fictitious reference concept.In this work, we apply the controller identification to a class of proportional controllers and the respective mathematical development follows below. The control law for a proportional controller is given by and, consequently, the fictitious reference is given by where is a constant.The performance index is given by where () is a transfer function of desired behavior and is its inverse Laplace transform Theorem 1.Among the controllers of class (1), the one that minimizes the performance index ( 3) is K given by where with " * " denoting the convolution operation. Proof.From (1), the fictitious reference is given by ( 2), where is a constant.Using ( 8) and ( 9) we have that where The minimization of the performance index means finding that satisfies the equation: which leads to our estimator for the proportional constant One can note that K = / is the multiplicative constant of control functions .In order to obtain the simulation results, K will be periodically updated.The objective is to minimize the difference between the measured and the desired .We used a continuous time formulation for the dynamical systems. 
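Since the closed-form estimator of the proportional constant (Eqs. (5)–(13)) is not fully legible in this copy, the following Python sketch illustrates the same fictitious-reference idea in a virtual-reference (VRFT-style) least-squares form: from recorded plant data (u, y) and an assumed desired first-order reference model, a fictitious reference is built and the gain K is chosen to minimize the squared mismatch between u and K times the fictitious error. The reference model, the toy data and the least-squares form are assumptions, not the paper's exact convolution-based estimator.

```python
# VRFT-style sketch of identifying a proportional gain from data only.
import numpy as np

rng = np.random.default_rng(0)
n, dt = 500, 0.01
u = rng.normal(size=n)                                   # recorded control input
# toy plant output, only so the sketch has data to work with
y = 5 * dt * np.convolve(u, np.exp(-0.2 * np.arange(50)), mode="full")[:n]

# assumed desired closed-loop behavior: discretized first-order lag with time constant tau
tau = 0.1
a = np.exp(-dt / tau)

# fictitious reference r_f such that the desired model maps r_f to the measured y
r_f = np.empty_like(y)
r_f[0] = y[0]
r_f[1:] = (y[1:] - a * y[:-1]) / (1.0 - a)

e_f = r_f - y                                            # fictitious tracking error
K = float(e_f @ u) / float(e_f @ e_f)                    # least-squares proportional gain
print("identified proportional gain K =", round(K, 4))
```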
From ( 1)-( 4) and Theorem presented above, we obtained the constant K that determines the proportionality constant of the optimal control function , given by ( 1).This development can be extended to any odd power of the control function .Thus, if is an odd number, we replace (1) and ( 2): Isolating produces Then ( 8) and ( 10) are rewritten as respectively.The other equations are not modified.The controller identification technique can be translated to the following procedure. Step 1. Define the space-state from the dynamical system. Step 2. Obtain the experimental data of the system (); that is, define (0) = 0 and choose the desired trajectories and initial . Step 5. Update the input of the system () with results obtained in the previous step.Go to Step 3. Step 6.After reaching the number of iterations previously stipulated to update K, update it and go to Step 2. Step 7. The simulation ends when the number of iterations has been reached. Figure 1 shows a block diagram of the proposed control. Control of Chaos Using Controller Identification Simulation results are considered for three nonlinear dynamical systems: Rössler, Lotka-Volterra predator-prey for two preys and one predator, and Rössler hyperchaos, all systems being dimensionless.We consider independent control laws for each equation of the system.In controlled dynamic systems, the most common way to write the control function u = K(r − y) is given by where is equal to the number of system equations.The references are = x and the entries are = .In [13], the control laws present proportional coefficients, but the equation is not independent because depends on all variables. The software Matlab and Runge-Kutta of fourth order were used in the simulations.For the two systems, the values obtained from ( 6)-( 12) and the state were computed concomitantly.A vector was created to be used in the integration process of the Runge-Kutta function, defined as The initial state is given by and the initial values are The transfer function () is In each case below, the reference to the system, containing the desired trajectory, will be represented by a vector x as follows: . (24) 3.1.Local Controllability.Nonlinear systems can be written using direct parameterization to bring the nonlinear dynamics to the state-dependent coefficient (SDC) form where A() is a state-dependent matrix.The design procedure consists of using direct parameterization of A() to bring the nonlinear system to a linear structure having statedependent coefficients (SDC) () = A()x.In general, A() is unique only if is scalar.Then, many possibilities exist for SDC parameterizations if is not scalar. The proposal is to show that the pair [A(), B] is a controllable parameterization of the nonlinear system (25) in a region where is updated, such that [A(), B] is pointwise controllable in the linear sense. For each update of , we consider the local controllability of the system.This idea is similar to locally optimal control from the state-dependent Riccati equation control [18,19].In the two cases that follow, we will show that the control technique, proposed here for the nonlinear dynamical systems, produces systems that are locally completely state controllable. Case 1: Application to the Control of the Rössler System. 
Rössler system [8] is given by (26) Differential equations (26) define a continuous time system with chaotic behavior for = 0.2 and = 5.7.Notice that the third equation presents a nonlinearity 3 1 .An interesting aspect of this system is its complex dynamical behavior in contrast with the simplicity of the description of its vector field.The phase portrait of Rössler system is shown in [8]. The system with control can be written as [13] Here, the objective of control strategy is to drive system (27) from any initial state 0 to desired trajectories x .The desired trajectories are given by x = [5 + cos () 1 + sin () 1 + sin ()] , (28) which represents a limit cycle in the phase diagram.The initial values for 1 , 2 , and 3 were In order to obtain the simulation results, the values of 1 , 2 , and 3 were periodically updated on each 0.5 from 0 to 100. Figure 2 presents the desired trajectories 1 , 2 , 3 and the trajectories of the controlled system (27). One can observe convergence with a residual error in the third component 3 .Figure 3 presents the phase portrait for the controlled Rössler system. From Figure 3 we can notice convergence to the limit cycle given by x . The local controllability for the Rössler system (27) can be obtained by rewriting the system in the state-dependent coefficients form (25).One possible parameterization for ( 27) is for 1 ̸ = 0.The rank of the controllability matrix is computed for each update of .It has value of 3, which means that the nonlinear system (27) is locally completely state controllable. Case 2: Application to the Control of the Biological Lotka-Volterra System.The biological Lotka-Volterra model describes populations in competition [11].The general form of this model is given by where is density of the species at time , is the number of species interacting in the system, is the reproduction or mortality rate, and are predation, competition, or conversion rates.For two preys and one predator, or the two hosts and one parasitoid, the model becomes According to Gilpin [9] and Vance [10], system (32) is nonlinear and exhibits chaotic behavior for the following model coefficients: 1 = 2 = − 3 = 1, 11 = 12 = 0.001, 22 = 0.001, 21 = 0.0015, 13 = 0.01, 23 = 0.001, 31 = −0.005, 32 = −0.0005,and 33 = 0.The trajectories and the strange attractor of system (31) are shown in [16]. The controlled Lotka-Volterra system proposed is described by The objective of control strategy is to drive system (33) from its initial state to the desired fixed point [20], or constant trajectories while minimizing performance criterion (11).The initial values for 1 , 2 , and 3 were the same as in Case 1.The total simulation time was of 100 with 1 , 2 , and 3 update on each 0.5. Figure 4 shows the time series of the controlled Lotka-Volterra system with initial densities (2, 5, 2). From Figure 4, one can observe that the trajectories are practically controlled when = 20.Figure 5 shows the phase portrait behavior of the controlled Lotka-Volterra system (33). The performance index for the controllers change of the controlled Lotka-Volterra system is shown in Figure 6. 
One can observe from Figure 6 that performance index has minimal variation after 100 (or constant trajectory), which leads us to conclude that it stabilizes at this value.Step change controller u 3 The local controllability for the Lotka-Volterra system (33) can be checked by rewriting the system in the statedependent coefficients form (25), parameterized by The rank of the controllability matrix is computed for each update of .It has value 3, which means that the nonlinear system (33) is locally completely state controllable. Case 3: Application to the Control of the Rössler Hyperchaos System.Rössler hyperchaos system [12] is given by The 4D differential equations (37) define a continuous time system with chaotic behavior for 1 = 0.27857, 2 = 3.0, 3 = 0.3, and 4 = 0.05.As in the Rössler 3D system, the third equation presents a nonlinearity 1 3 .The phase portraits of system (37) are shown in Figure 7, in four different combinations of coordinate axes. The 4D system (37) with control can be written as Analogously to the previous cases, the control strategy drives system (38) from any initial state 0 to desired trajectories x .The desired trajectories are given by x = [5 + cos () sin () sin () 3 + sin ()] . The initial values for 1 , 2 , 3 , and 4 were In order to obtain the simulation results, the values of were periodically updated on each 0.1 from 0 to 80. One can observe in Figure 8 that there is convergence of the trajectories to the desired references. The local controllability for the 4D Rössler system (38) can be obtained by rewriting the system in the statedependent coefficients form (25).As in the previous cases, the rank of the controllability matrix was computed for each update of as having value 4, which means that the nonlinear system (38) is also locally completely state controllable. Using the controller identification technique in the systems with chaotic behavior requires particular care in the choices of the initial conditions and the initial values of .Although not shown in this work, several tests were performed with different choices of initial conditions and initial .Each change in choices generates different control performances.Therefore, it is up to the control designer to analyze these initial data in order to better control the system. Conclusions In this paper we presented the controller identification technique applied to find proportional controllers for nonlinear systems with chaotic behavior.Simulation results were presented to illustrate the effectiveness of the technique.We observe convergence to the desired trajectories. From the simulation results obtained, one can note that the controller identification technique is useful for the control of chaotic systems.Nevertheless, the results are dependent on the initial parameters and on the periodicity of the updating of , which may indicate that the technique is not so robust in spite of its simplicity.In addition, we Mathematical Problems in Engineering noted also that the controller for each system has its own peculiarities, which needs to be taken into account in each simulation.It is possible that one cannot find a general proof that the controller identification technique works for any nonlinear systems.However by the simplicity of application of the technique, we expect that it can be successfully used in other cases.It was shown in the simulations that the proposed control technique controls nonlinear systems that were locally completely state controllable. 
Trying to improve the performance of the systems presented, higher-order candidate controllers were used, but the performance was not improved. This leads us to believe that the results for the linear case represent a good illustration of the technique. Moreover, through the use of a simple control law, we could present the use of the technique in an illustrative way.
Figure 1: Block diagram for the proposed control system.
Figure 4: Temporal evolution of the controlled Lotka-Volterra system.
Figure 5: Phase portrait behavior of the controlled Lotka-Volterra system.
3,868.2
2017-01-01T00:00:00.000
[ "Mathematics" ]
Risk Measurement and Performance Evaluation of Equity Funds Based on ARMA-GARCH Family Model There are few comprehensive studies on risk measurement and performance evaluation of stock funds in China. This paper uses the ARMA-GARCH family model to analyze the volatility characteristics of equity funds under the t-distribution and Generalized error distribution (GED), and combines CVaR, PM (Second revised sharp ratio) and CVaR-RAROC (Revised RAROC) to comprehensively evaluate equity funds risk and performance. The empirical analysis of five equity funds in China from October 28, 2010 to May 17, 2019 shows that: Comprehensive evaluation of the risk and performance of equity funds can comprehensively and effectively examine the risks and returns of equity funds, helping investors, financial institutions and regulatory agencies to more fully understand the risks and performance of equity funds. Introduction Securities investment fund is a kind of collective investment tool which gathers small-scale funds and uses a variety of investment methods to carry out professional operation. It has been favored by investors since its birth and plays an important role in economic development. Equity funds are a kind of open-end fund. It is an investment fund with stocks as its investment object. The main functions of equity funds are to concentrate the small amount of investment of public investors into large-scale funds and invest in different stock combinations. Equity funds are the main institutional investor in the stock market. There are risks associated with investment returns. Equity funds have the greatest fluctua-tions in risk and return among all fund types. Studying its risks and performance is of great significance to investors, financial institutions and regulators. At present, there have been many studies on the GARCH family model and the risk and performance measurement of China's open-end fund. Song Guanghui et al. [1] used the VaR and CVaR of GARCH family model to study the risk of Shibor. The research shows that the GED is better than the normal distribution and t-distribution, and the t-distribution hypothesis is not suitable to describe the dynamic characteristics of the logarithm yield of Shibor's weekly interest rate; CVaR model can effectively make up for the deficiency of VAR model and effectively measure the risk of actual loss. Wei Zhengyuan et al. [2] established a new realized GARCH-GED model. The empirical research results show that compared with the realized GARCH model under the assumption of normal distribution, the realized GARCH-GED model can better describe the leverage effect of volatility and improve the precision of tail risk to some extent. Zhao Zhenquan and Li Xiaozhou [3] use GARCH model to study the volatility of open-end fund' return rate, and use absolute VaR and RAROC index to comprehensively study the risk and return of open-end fund. Huang Chongzhen and Cao Qi [4] used the China Asset Shanghai-Shenzhen 300 ETF connection as an example to use the GARCH family model to study the risk of open-end funds. The research shows that the GARCH-M model under the t-distribution can better measure risk. Qi Yue and Sun Xinming [5] applied the method of copying the investment strategy of the fund for the first time to create a new investment portfolio and calculate its investment income. As a benchmark for fund performance evaluation, dynamically evaluate the fund's investment performance and investment behavior. 
Liu Junshan [6] pointed out that CVaR is better than VaR in nature, and proved that the CVaR index is consistent with the stochastic dominance theory. Zhu Fuyun, Zhou Ying, and Chen Yuan [7] found that the VaR method based on the EGARCH-GED model effectively characterizes the market risk of securities investment funds, and that the Sharpe ratio and RAROC (risk-adjusted return on capital) effectively evaluate the performance of securities investment funds. Tang Zhenpeng and Peng Wei [8] introduced CVaR into the RAROC model, which more accurately measures risk, improves accuracy, and provides a good performance reference index for fund investors. Huang Jinbo, Li Zhongfei, and Ding Jie [9] introduced CVaR into the Sharpe ratio model. This indicator overcomes the shortcomings of traditional Sharpe ratios, which do not meet the monotonicity of stochastic dominance and do not consider high-order moment information; it can more accurately characterize the risk-adjusted return on assets. Previous studies have shown that: 1) CVaR is more reasonable in measuring risk than VaR; 2) PM and CVaR-RAROC are more effective in measuring the performance of financial products; 3) the t-distribution and GED are more effective than the normal distribution model. At present, there are few comprehensive studies on the risk measurement and performance evaluation of equity funds. This paper uses the ARMA-GARCH family model to analyze the volatility characteristics of equity funds under the conditions of the t-distribution and GED. The research in this paper mainly studies the volatility characteristics and risk performance of equity funds in the context of the t-distribution and GED. ARMA Model In the 1970s, the American statistician Box G. E. P. and the British statistician Jenkins G. M. [10] proposed the autoregressive moving average model (ARMA model). The general ARMA model expression is given by Eq. (1), in which ε is a random sequence, μ_t is the value of the current period, and p and q are the lag orders of the autoregressive term (AR) and moving average term (MA), respectively. GARCH (p, q) Model GARCH (p, q) was proposed by Bollerslev [11] in 1985, and its expression is given by Eq. (2), in which ω is a constant term, p and q are the maximum lag orders of the GARCH term and the ARCH term, μ_t is a residual term, and σ_t is the conditional standard deviation of μ_t. GARCH-M Model In 1987, Engle, Lilien, and Robbins [12] proposed the GARCH-M model, whose expression is given by Eq. (3), in which δ is a risk premium coefficient, so that the effect of the predicted risk fluctuation on y_t can be observed. When δ > 0, the risk compensation term f(σ_t²) > 0, and a high return means high risk. TGARCH Model The threshold GARCH model (TGARCH model) was proposed by Zakoian [13] and Glosten et al. [14], respectively, and its expression is given by Eq. (4). When γ > 0, there is a leverage effect, and a shock will increase the fluctuation, and vice versa. EGARCH Family Model In 1991, Nelson [15] proposed an EGARCH model with guaranteed positive variance; its expression is given by Eq. (5), in which σ_t² in logarithmic form guarantees that the value of σ_t² is non-negative and does not require that the coefficients on the right side of the equation be non-negative, so the solution process is simpler. If γ_k ≠ 0, the impact of information is asymmetric; if γ_k > 0, the impact of good news is greater than the impact of negative news, and vice versa.
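The paper fits these models with Eviews, R and MATLAB; as a point of reference, the same family of specifications can be estimated with the Python "arch" package. The AR(1) mean, the (1,1) orders and the synthetic return series below are illustrative assumptions, not the paper's actual data or final model orders.

```python
# Sketch of fitting GARCH-type models with t or GED innovations.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=2000)        # toy daily returns (in percent)

specs = {
    "GARCH-t":    dict(vol="GARCH",  p=1, q=1, dist="t"),
    "GARCH-GED":  dict(vol="GARCH",  p=1, q=1, dist="ged"),
    "EGARCH-GED": dict(vol="EGARCH", p=1, o=1, q=1, dist="ged"),
}
for name, kw in specs.items():
    res = arch_model(returns, mean="AR", lags=1, **kw).fit(disp="off")
    print(f"{name}: AIC={res.aic:.1f}, BIC={res.bic:.1f}")   # compare by information criteria
```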
The Concept of VaR and CVaR The VaR method (Value at Risk, referred to as VaR), known as the value-at-risk model, is often used in the risk management of financial institutions. It was proposed in 1993. The VaR model has been widely adopted by many financial institutions and has become the mainstream method for financial market risk measurement. However, many empirical studies show that the VaR method has its own insurmountable defects. Rockafellar and Uryasev [16] proposed Conditional Value at Risk (CVaR) to address them. Calculation of VaR and CVaR The calculation formula of VaR is expressed as VaR = P_{t−1} Z_α σ_t, in which P_{t−1} is the value of the asset on day t − 1, Z_α is the quantile α at a given confidence level, and σ_t is the conditional standard deviation. According to the definition, the CVaR expression based on the GARCH family model is given by Eq. (8), where c is the given significance level and the function f(q) is the probability density of the yield series. In the case of a normal distribution, the specific calculation of CVaR is given by Eq. (9); under the t-distribution, the specific calculation of CVaR is given by Eq. (10), in which Γ is the gamma function and d is the degree of freedom; under the GED, the specific calculation of CVaR is given by Eq. (11). After the value of CVaR is obtained, it is tested for validity, and the DLC statistic is used to measure the actual loss over VaR. Concepts and Calculations of Sharpe Ratio, Revised Sharpe Ratio, PM The Sharpe Ratio, also known as the Sharpe Index, is one of three classic indicators that consider both returns and risks. The Sharpe ratio uses the standard deviation to measure the risk of the returns of funds. The Sharpe ratio can be used as an important basis for fund performance evaluation only when considering the purchase of a certain fund among many funds. Therefore, the Sharpe ratio can be used as a standard fund performance evaluation index. The calculation formula is sp = (R_P − R_f)/σ_p (12), in which sp is the Sharpe value of the equity fund, R_P is the average return rate of the equity fund, and R_f is the risk-free interest rate. Under the current conditions of China's stock market, there is actually no uniform standard for the selection of the risk-free rate of return. Internationally, short-term government bond yields are generally used as the market risk-free return. Therefore, this paper uses the one-year Treasury bond rate (3.6661%) as the risk-free rate. σ_p is the standard deviation of the return on equity funds. The larger the Sharpe ratio, the greater the return per unit of risk and the better the fund's performance. There are certain limitations to using the standard deviation as a risk indicator. Revising the Sharpe Index solves this limitation by introducing VaR instead of the standard deviation. Its expression is sp_VaR = (R_P − R_f)/VaR (13), in which VaR is the value at risk calculated based on the GARCH family model. PM is the second revision of the Sharpe Index, which introduces CVaR instead of VaR (Huang Jinbo et al. [9]): PM = (R_P − R_f)/CVaR (14). RAROC = ROC/VaR (15), in which ROC is the expectation of return on assets; the larger the RAROC, the larger the ratio of benefits to risks, and the better the performance. Selection and Processing of Sample Data This paper studies five equity funds in China over the period from October 28, 2010 to May 17, 2019. The calculation formula of the daily rate of return is as follows: R_t is the daily rate of return of the equity fund, and P_t and P_{t−1} are the unit net values of day t and day t − 1, respectively. The data analysis of this paper is realized by R 3.5.1, Eviews 9 and MATLAB 2015b.
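Before turning to the empirical analysis, the conditional CVaR under the three innovation distributions can be sketched numerically. The paper's closed-form expressions (Eqs. (9)–(11)) are not fully reproduced here; the sketch below instead standardizes each distribution to unit variance and averages its left-tail quantile function, which is an assumed but equivalent route to the expected shortfall.

```python
# Sketch of 95% CVaR (expected shortfall) for normal, Student-t and GED innovations,
# given a conditional standard deviation sigma_t from a fitted GARCH-type model.
import numpy as np
from scipy import stats
from scipy.special import gamma as gfun

def cvar(dist, sigma, alpha=0.05, mu=0.0):
    """CVaR of mu + sigma*Z (loss reported as a positive number), computed by
    numerically averaging the standardized quantile function over the left tail."""
    u = np.linspace(1e-6, alpha, 20_000)
    es_std = dist.ppf(u).mean()              # E[Z | Z <= z_alpha]
    return -(mu + sigma * es_std)

sigma_t = 1.8                                # conditional std dev (illustrative)
nu, beta = 5.0, 1.3                          # t dof and GED shape (illustrative)
ged_scale = np.sqrt(gfun(1 / beta) / gfun(3 / beta))   # rescale GED to unit variance

print("CVaR normal:", cvar(stats.norm, sigma_t))
print("CVaR t     :", cvar(stats.t(df=nu, scale=np.sqrt((nu - 2) / nu)), sigma_t))
print("CVaR GED   :", cvar(stats.gennorm(beta, scale=ged_scale), sigma_t))
```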
Basic Description Analysis The calculation formulas of standard deviation, skewness and kurtosis are as follows: in which, std is the standard deviation, sk is the skewness, and kt is the kurtosis, t R is the average rate of return. The basic statistics of the daily rate of return of the sample are shown in Table 1 below. tive. In addition, the kurtosis of the rate of return of the samples are all greater than 3, indicating that the samples are all fat tail distribution. In addition, the Jarque-Bera are all above 1000, and the adjoint probabilities are all close to 0. Therefore, the sample's rate of return has a phenomenon of peaks and fat tails, and the samples do not obey the normal distribution. An non stationary series does not have convergence. If the time series is not stable, applying it to the model will reduce the reliability of the model. This paper uses ADF statistics to test the stationarity of the rate of return. As shown in Table 2. From Heteroskedasticity Test The timing diagram of each sample is shown in Figure 1 below. In which, the horizontal axis is the time axis, and the vertical axis is the rate of return. As can be seen from Figure 1 Autocorrelation test is carried out for the residual of mean model. The results show that there is no autocorrelation in the sample and the mean model is effective. Perform a Heteroskedasticity test (Heteroskedasticity ARCH test) on the mean model, as shown in Table 3. According to the ARCH test results, the Prob are all less than 0.05, indicating that the assumption of "there is no ARCH effect" is rejected at the significance level of 0.05. In addition, the residual sequence is known from the residual autocorrelation graph and partial autocorrelation graph. There is a high-order truncation phenomenon, and it is believed that there is a high-order ARCH effect in the rate of return sequence. Selection of GARCH Family Models In determining the GARCH family model, this paper considers the t-distribution and the GED respectively, and through the AIC and SC criteria, after conti- Use formulas (9) and (10) to calculate CVaR as shown in Table 4 below. From the back test results, it is known that the GARCH model has a better effect of describing risks when GED is distributed. According to the principle that the smaller the DLC value is, the more accurate the model estimates the risk, the The model parameters are shown in Table 5. The results in Table 5 show that the GARCH model can well fit the data of LM Test is used to test the ARCH effect of the above models. The results in Table 6 show that there is no ARCH effect in the residuals of the five models, indicating that the ARCH effect is well eliminated by each model. Table 5. ARMA-GARCH family model corresponding parameters. Calculation of Risk and Performance Based on GARCH Family Models In the case of 95% confidence level, quantile and conditional standard deviation of the model are calculated by Eviews, and CVaR value is calculated by MATLAB according to Formula (9) and (10) as shown in Table 7. As can be seen from Use Formulas (11) and (13) to calculate PM and CVaR-RAROC, the results are as in Table 8. It can be seen from Table 8 top risk ranking, and the bottom performance ranking is also the bottom risk ranking. Therefore, the greater the risk of the equity funds, the greater the return, the better the performance. 
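For concreteness, a minimal sketch of how the three indicators compared above can be computed from a return series and a CVaR estimate is given below. The annualisation convention and the use of the annualised mean return as ROC are illustrative assumptions, not the paper's exact definitions.

```python
# Sketch of the Sharpe ratio, the CVaR-revised Sharpe ratio (PM) and CVaR-RAROC.
import numpy as np

def performance(returns, cvar, rf_annual=0.036661, periods=250):
    rp = returns.mean() * periods                   # annualised mean return
    sigma = returns.std(ddof=1) * np.sqrt(periods)  # annualised volatility
    sharpe = (rp - rf_annual) / sigma
    pm = (rp - rf_annual) / cvar                    # CVaR replaces the std deviation
    raroc = rp / cvar                               # CVaR-RAROC: return over CVaR
    return sharpe, pm, raroc

daily = np.random.default_rng(2).normal(0.0005, 0.012, size=2000)  # toy daily returns
print(performance(daily, cvar=0.03))
```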
Among these 5 funds, investors who can accept high risks can choose E Fund Consumption Industry (110022) to obtain greater returns, and more conservative investors can choose funds that are more stable like the Y Yinhua-Dow Jones 88 Index A (180003). Comparing risk ranking and performance ranking, it is found that the results of risk ranking and sharp ratio method are consistent. The ranking of comprehensive risk and performance found that the better the performance of equity funds, the higher the returns, the greater the risks. Equity funds have the characteristics of investment products "high return, high risk". Conclusions In this paper, the ARMA-GARCH model of the t-distribution and GED is established. Through CVaR back testing, it is found that under the GED, the model has a better effect of measuring risk. Under the 95% confidence level, the risk and performance indicators CVaR, PM, and CVaR-RAROC are calculated. The results show that the performance rankings of the Sharpe ratio method and the RAROC method are different. The performance ranking and risk ranking calculated by the CVaR-RAROC method Consistent. Comprehensive risk and performance ranking can be found that the higher the return, the greater the risk, there is a corresponding relationship of "high risk, high return" for equity funds. To sum up, investment products have risks as well as returns. If investors have low financial literacy, they cannot fully collect and accurately identify the risks of
3,557.4
2020-04-02T00:00:00.000
[ "Economics", "Business" ]
Stability in higher-derivative matter fields theories We discuss possible instabilities in higher-derivative matter field theories. These theories have two free parameters, β1 and β4. By using a dynamical system approach we explicitly demonstrate that for the stability of Minkowski space in an expanding universe we need the condition β4 < 0. By using the quantum field theory approach we also find an additional restriction for the parameters, β1 > −(1/3)β4, which is needed to avoid a tachyon-like instability. Introduction The unsolved problems of General Relativity such as dark energy [1,2] and dark matter [3] force us to investigate different alternatives. One of the most popular alternatives is the well-known f(R)-gravity [4,5] in its different forms [6][7][8]. A new type of modified gravity was recently proposed [14]. Higher-derivative matter fields are implied in such a kind of theory, and they can be interpreted as non-dynamical auxiliary fields. The most general form of the term S_μν, if we take into account terms up to fourth order in derivatives, is given by Eq. (3). Some of the terms from Eq. (3) also appear in f(R, T)-theory [15]. In this sense there may be further generalizations of such a kind of theories, for instance, in the light of f(R, T, R_μν T^μν)-theory [16] or maybe f(R, T, R_μν T^μν, T_μν T^μν)-theory. Note that for a special choice of parameters some of the well-known theories are contained in the representation (3) as a limit. For instance, the case β1 = 0, β4 = −κ/2 corresponds to EiBI gravity in the small coupling limit, and β4 = 0 corresponds to generic Palatini f(R) gravity [14]. In this sense the theory of (1) may also be interpreted as some kind of phenomenological theory of modified gravity. Cosmology in such a kind of theories was investigated in [17,18], where an effective dark energy sector was obtained and accelerating, de Sitter, and non-accelerating solutions were found. It was also demonstrated that in such a kind of theories the effective dark energy e.o.s. may be quintessence-like, cosmological constant-like or even phantom-like. Nevertheless we need to note that the problem of multiple de Sitter solutions (which was successfully solved in f(R)-gravity [19]) is unsolved yet, and it seems that it cannot be solved without some additional terms in the equation. Generalizations for brane theories were studied in [20][21][22]. Some interesting remarks about this theory may also be found in [23].
For a standard perfect fluid T_μν = (p + ρ)U_μ U_ν + p g_μν and the FLRW metric ds² = −dt² + a²(t) δ_ij dx^i dx^j, the Friedmann-like equation takes the form of Eq. (4) [17], and we also have the usual energy conservation law T^μν_;μ = 0, which now reads as Eq. (5), where H = ȧ/a is the usual Hubble parameter. Stability conditions It is well known that incorporating higher-derivative terms can affect the stability of different solutions, including the simplest cosmologically relevant ones. One of the most important is the Minkowski solution. Let us study the stability of the Minkowski solution in this kind of gravity. To discuss the stability problem we need to transform the equations into the form of a dynamical system. First of all we discuss the simplest (and for cosmological applications the most convenient) equation of state, p = wρ. We can then express the Hubble parameter from Eq. (5) and insert it into (4). In this way we obtain a second-order differential equation for the function ρ, Eq. (6), which also depends on the parameters β1, β4, Λ, and w; its coefficient functions f1 and f2 are given by Eqs. (7) and (8). We can see that f1 and f2 are non-dynamical functions, depending only on the parameters of the theory. For further investigation let us rewrite Eq. (6) as the dynamical system (9), with a function F on its right-hand side. It is clear that the stationary points of system (9) are governed by the simple equation (11). Now let us discuss the physical meaning of its solutions. First of all, note that we are interested in solutions with a true vacuum, Λ = 0, but we need to keep the parameter Λ while investigating perturbations because it is coupled with the other parameters of our theory, β1 and β4. So we can let it vanish only at the end of our investigation. Therefore, we may interpret Λ as a parameter which allows us to avoid degeneration of the solutions, and we can put Λ → 0 at the end. A similar approach was successfully applied to the stability investigation of the Minkowski solution in different modified gravity theories in our previous papers [24,25]. The first solution of Eq. (11), in the case Λ → 0, reads ρ = −4Λ/f2 = 0; moreover, (5) then tells us that H = 0. So this solution corresponds to the static Einstein universe and is not interesting for our further investigations; but if we want to keep it, we must put f2 < 0 to satisfy the weak energy condition (wec), and some additional restrictions on the parameters will follow from this inequality. The second solution has a limit proportional to −Λ for Λ → 0, so we need to take Λ → −0 to satisfy the wec. Note that this solution exists for any value of H, including the case H = 0. In the latter case this is a Minkowski solution, and its stability is very important for us. Let us study the Minkowski stability conditions. It is well known that, to first order, stability is governed by the linearized equation for small perturbations (here F_ρ ≡ ∂F/∂ρ, and so on), and it is easy to see that the stability conditions take the form (16); recall that Re(μ1,2) < 0 is needed for stability. Let us calculate the values of these functions explicitly. For the function F_π we have Eq. (17). First of all, note that at the point of interest this function takes a singular value, but we need only its sign. Second, we can put 1 + w > 0 without loss of generality, because the case w = −1 corresponds to the cosmological term, which is already incorporated in our equations. Now we can see from (5) that −2ρ̇/((1 + w)ρ) = 6H > 0 in an expanding universe.
The value of the function f1 is always finite, see (7), whereas the value of the expression 1/(3(1 + w)ρ) is infinite at the point we study (ρ = 0), and positive if we impose the quite natural conditions 1 + w > 0 and ρ > 0. So we find that the overall sign of Eq. (17) at the point (0, 0) is governed by the sign of the parameter β4, and for Minkowski stability we must have β4 < 0. Now let us discuss the second condition from (16). First of all, note that near the point of interest (0, 0) the second term may be neglected, because f2 always has a finite value, see Eq. (8), and ρ → 0, whereas the first term is equal to a non-zero constant. According to (5), ρ̇²/ρ² = 9H²(1 + w)² → 0 for the Minkowski solution, so the third term may also be neglected. For the fourth term we have ρ̇²/(ρ³(1 + w)²) = 9H²/ρ = 3Λ near the point ρ = 0; see (4). So finally we find that for stability we need β4 < 0 as well. Now let us discuss possible instabilities from another point of view. The trace of Eq. (3) gives an equation for T and R. Let us study a small perturbation δT around some solution of this equation with background values T0 and R0. So we put T = T0 + δT, and the equation for δT takes the form (β4 + 3β1)□δT + (4Λβ1 + 2β1²T0)δT − δT = 0; (21) we look for solutions of (21) in the form of a standard plane-wave decomposition, where ω ≡ (k² + μ²)^{1/2}, k ≡ |k|, and μ is the mass of the effective scalar field (the scalaron). After substituting this representation into (21) we obtain an equation for μ², and if we turn back to the Minkowski limit Λ → 0, this relation gives us an additional restriction needed to avoid the tachyon-like instability (μ² < 0), namely β1 > −β4/3. Conclusion In this paper we discuss instabilities in higher-derivative matter field theories. We found the condition for Minkowski stability and another condition to avoid the tachyon-like instability. Of course, these are only necessary, not sufficient, conditions for the stability of the theory. For instance, we study Minkowski stability only with respect to the simplest class of isotropic perturbations, and taking into account more complicated perturbations may provide additional restrictions on the parameters. Nevertheless, the conditions found must be satisfied and may be very helpful for the further construction of the theory.
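As a compact summary of the two necessary conditions derived above, the following minimal Python sketch (not from the paper; the parameter values in the example are arbitrary illustrations) encodes β4 < 0 together with β1 > −β4/3 and checks a few points of the parameter plane.

```python
# Minimal sketch: encode the two necessary stability conditions derived above
# (Minkowski stability requires beta4 < 0; absence of a tachyon-like scalaron
# requires beta1 > -beta4/3). These are necessary, not sufficient, conditions.
def necessary_conditions(beta1: float, beta4: float) -> dict:
    """Check the two necessary conditions on the (beta1, beta4) parameter plane."""
    minkowski_stable = beta4 < 0.0
    tachyon_free = beta1 > -beta4 / 3.0
    return {
        "minkowski_stable": minkowski_stable,
        "tachyon_free": tachyon_free,
        "allowed": minkowski_stable and tachyon_free,
    }

if __name__ == "__main__":
    # A few illustrative points in the parameter plane (arbitrary units).
    for b1, b4 in [(1.0, -1.0), (-0.5, -1.0), (1.0, 1.0)]:
        print((b1, b4), necessary_conditions(b1, b4))
```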
2,213.4
2016-09-01T00:00:00.000
[ "Physics" ]
Valley-selective energy transfer between quantum dots in atomically thin semiconductors Anvar S. Baimuratov & Alexander Högele In monolayers of transition metal dichalcogenides the nonlocal nature of the effective dielectric screening leads to large binding energies of excitons. Additional lateral confinement gives rise to exciton localization in quantum dots. By assuming parabolic confinement for both the electron and the hole, we derive model wave functions for the relative and the center-of-mass motions of electron–hole pairs, and investigate theoretically resonant energy transfer among excitons localized in two neighboring quantum dots. We quantify the probability of energy transfer for a direct-gap transition by assuming that the interaction between two quantum dots is described by a Coulomb potential, which allows us to include all relevant multipole terms of the interaction. We demonstrate the structural control of the valley-selective energy transfer between quantum dots. The unique properties of two-dimensional (2D) materials provide versatile opportunities for nanomaterial physics 1 . Within this realm, monolayers of transition metal dichalcogenides (TMD) represent 2D crystalline semiconductors with unique spin and valley physics for opto-valleytronic applications [2][3][4] . Coulomb electron-hole attraction and the nonlocal nature of the effective dielectric screening lead to a large binding energy of excitons, which dominate both light absorption and emission [5][6][7][8] . The combination of exceptional brightness and spin-valley coupling opens up novel opportunities for tunable quantum light emitters for quantum information processing and sensing 9 realized on the basis of excitons confined in TMD-based systems. There are various approaches to realize exciton confinement in TMD monolayers. Impurities, vacancies, or strain in monolayer TMD crystals, as well as local modulations of the immediate environment, modify the energy gap of 2D materials 10 . By providing additional lateral confinement, local disorder is known to confine excitons in a relatively small area of TMD monolayers and give rise to quantum dot (QD) excitons. Spectral signatures of quantum dot exciton localization with bright and stable single-photon emission were observed from unintentional defects in monolayer tungsten diselenide [11][12][13][14][15] .
Subsequently, strain engineering has proven as a viable deterministic approach to obtain spatially and spectrally isolated quantum emitters in monolayer and bilayer TMDs [16][17][18][19] , and controlled positioning has been achieved by irradiating monolayer crystals with a sub-nm focused helium ion beam 20 . Alternatively, QDs have been realized by electrostatic confinement [21][22][23] , or by creating lateral TMD heterostructures forming a potential well 24 . In vertical TMD heterostructures, moiré superlattices give rise to periodic QD arrays hosting localized excitons 25,26 . By preserving strong spin-valley coupling, TMD QDs inherit optovalleytronic properties from their 2D host system 27 , as the intervalley coupling is weak due to the vanishing amplitude of the electron wave function at the QD boundary and hence valley hybridization is quenched by the much stronger spin-valley coupling 28 . As in conventional QDs, the oscillator strength and radiative lifetime of confined excitons are strongly size-dependent, which results in oscillator strength enhancement and ultrafast radiative annihilation of excitons, varying from a few tens of femtoseconds to a few picoseconds 29 . In the presence of an external magnetic field bound states in TMD QDs can be considered as quantum bits for potential applications in quantum technologies [30][31][32] . In this work, we study nonradiative resonance energy transfer between two adjacent QDs in TMD monolayers 33 . Building on theories initially developed for molecular systems by Förster in the framework of dipole-dipole interaction 34 and generalized by Dexter for quadrupole and exchange interactions 35 , we derive the theory of nonradiative resonance energy transfer for atomically thin QDs hosted by 2D crystals. For conventional QDs with sizes in order of tens of nm, the multipole nature of Coulomb interactions and energy transfer through dipole-forbidden states must be taken into account 36,37 www.nature.com/scientificreports/ here, we account for multipole terms of the transfer process and analyze the effect of the donor-acceptor system geometry on transfer efficiency. Model for confined excitons We start our analysis from delocalized excitons in TMD monolayer, by focusing on excitons formed by states at the bottom of the conduction band and the topmost valence band at K ± points of the first Brillouin zone. Using the two-band effective mass model the wave function of the exciton can be written as a factorization of the relative motion of charge carriers and their center-of-mass motion 38,39 : where α = ±1 is the valley index, r e(h) is the radius vector of the electron (hole), σ is the normalization area, R = (m e r e + m h r h )/M is the center-of-mass vector, m e(h) is the effective mass of the electron (hole), M = m e + m h , Q is the wave vector of the center-of-mass motion, ψ N (ρ) is the wave function of the relative motion with coordinate ρ = r e − r h , and u α (r e ) and v α (r h ) are the Bloch functions of the electron and hole in valley α . For very small QDs (nanoflakes) the effective mass approach is not applicable, but it is possible to use ab-initio calculations to study their properties 40 . We solve the Schrödinger equation for the relative motion of states with circular symmetry, namely S-states with zero angular momentum, where ρ = |ρ| , µ = m e m h /M is the reduced mass, ǫ N is the eigenenergy of the S-state with the principal quantum number N. 
The nonlocally screened electron-hole interaction is described by the Rytova-Keldysh potential [41][42][43] where e is the electron charge, ρ 0 is the screening length, ε is the effective dielectric constant, and H 0 (x) and Y 0 (x) are Struve and Neumann functions. Without considering the details of lateral confinement we focus on TMD QDs with in-plane localization of charge carriers and make the approximation of a harmonic confinement. If the coordinates of the relative motion, ρ , and the center-of-mass motion, R , of the confined exciton are not separated, one can use a variational procedure without the separation of coordinates to find the wave function 44 . Here we assume for simplicity that both the electron and hole are confined by parabolic potentials of the form which are characterized by the confinement frequency ω . With this potential we separate the coordinates of the relative motion and the center-of-mass motion 45 . Therefore, the energies and wave functions of the excitons in QDs are written as: respectively. The total energy of the exciton confined in the QD takes discrete values and is dependent on the band gap of the TMD monolayer, E g , and on E nl and ǫ N , which are the energies of the confined center-of-mass and relative motions. We solve the Schrödinger equation for the center-of-mass motion in polar coordinates R ≡ (R, θ) . The exact eigenenergies and eigenstates are those of the 2D harmonic oscillator 46 , namely A nl = 2 · n!/(|l| + n)!, www.nature.com/scientificreports/ where n = 0, 1, 2, . . . and l = 0, ±1, ±2, . . . are the principal and angular momentum quantum numbers, respectively, L = √ /(Mω) is the QD size, and L |l| n (x) are the associated Laguerre polynomials. The radial part of the relative motion with zero angular momentum is determined not only by the nonlocally screened potential from Eq. (3) as in the case of free excitons, but also by the parabolic potential µω 2 ρ 2 /2 . The Schrödinger equation for the relative motion is written as To find an approximate solution of this equation one can use the 2D hydrogen-like wave functions with the Bohr radius as variational parameter 5 . Using the material parameters from Table 1, we solve this equation numerically for the first three exciton S-states in four specific TMDs, namely MoS 2 , MoSe 2 , WSe 2 , and MoTe 2 . The energies ǫ N and wave function overlaps of the electron and hole at the same spatial position |ψ N (0)| 2 are shown in Fig. 1 as functions of the QD size L. In the top panel of Fig. 1 we observe for all materials the same trends for the energies ǫ N , they decrease with the size of the QD. The 1S state (N = 1) is less dependent on the QD size as it is the most localized state. The wave function overlaps |ψ 1 (0)| 2 are shown in Figs. 1e-h. Due to the confinement effect these overlaps are larger for smaller QDs and decrease monotonically with size. QDs in WSe 2 exhibit larger size effect than in MoS 2 , MoSe 2 , and MoTe 2 due to smaller reduced mass. For large QDs all three states converge to those of delocalized excitons shown by the dashed lines in Fig. 1. Resonant energy transfer In the following, we calculate the nonradiative resonant energy transfer between two 2D QDs coupled by a Coulomb potential. Due to the finiteness of QD dimensions in the xy-plane, the point dipole model developed by Förster leads to large errors when the distance between the QDs is of the order of their sizes. 
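As a rough numerical companion to the confined-exciton model above, before continuing with the transfer calculation, the sketch below diagonalizes the l = 0 relative-motion problem with a Rytova-Keldysh attraction plus parabolic confinement on a radial grid. It is not the authors' code: it works in effective exciton atomic units, assumes the common −(πe²/2ερ0)[H0 − Y0] prefactor convention for the screened potential, and the values of r0 and ω are placeholders rather than the Table 1 material parameters.

```python
# Rough numerical sketch (not the authors' code): lowest S-states of the relative
# motion of an exciton in a harmonically confined TMD quantum dot, i.e.
#   [-(1/2)(d^2/drho^2 + (1/rho) d/drho) + V_RK(rho) + (1/2) omega^2 rho^2] psi = eps psi
# in effective exciton atomic units (hbar = mu = e^2/eps = 1). Prefactor convention
# of the Rytova-Keldysh potential and all numbers below are illustrative assumptions.
import numpy as np
from scipy.special import struve, y0
from scipy.linalg import eigh_tridiagonal

def rytova_keldysh(rho, r0):
    """Nonlocally screened 2D potential, -(pi/(2 r0)) [H0(rho/r0) - Y0(rho/r0)]."""
    x = rho / r0
    return -(np.pi / (2.0 * r0)) * (struve(0, x) - y0(x))

def s_state_energies(r0=1.0, omega=0.1, n_states=3, rho_max=60.0, n_grid=4000):
    """Diagonalize the l = 0 radial problem on a uniform grid via u = sqrt(rho) * psi."""
    rho = np.linspace(rho_max / n_grid, rho_max, n_grid)
    h = rho[1] - rho[0]
    # Effective potential after the substitution u(rho) = sqrt(rho) * psi(rho):
    v_eff = rytova_keldysh(rho, r0) + 0.5 * omega**2 * rho**2 - 1.0 / (8.0 * rho**2)
    diag = 1.0 / h**2 + v_eff                    # central-difference kinetic term -(1/2) u''
    off = -0.5 / h**2 * np.ones(n_grid - 1)
    return eigh_tridiagonal(diag, off, eigvals_only=True,
                            select="i", select_range=(0, n_states - 1))

if __name__ == "__main__":
    # Tighter confinement (larger omega) should push the S-state energies up.
    for omega in (0.05, 0.1, 0.2):
        print(f"omega = {omega}:", np.round(s_state_energies(omega=omega), 4))
```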
Therefore, the multipole nature of Coulomb interaction must be taken into account as in the case of conventional 3D QDs 36 . For conventional colloidal QD and molecular systems the orientations of QDs and molecules are random, whereas in layered systems the positions of the QDs are fixed and the localized excitons with lowest energies have an in-plane www.nature.com/scientificreports/ circular polarization with sign reversal for K + and K − . This in-plane arrangement of the dipole moments leads to characteristic valley-selective orientation effects, which are considered in detail below. Let us consider two 2D QDs, namely donor and acceptor with sizes L D and L A , located in the same or in two different layers of the same TMD material. We assume the centers of QDs to be separated by a 2D-vector d in the xy-plane as shown in Fig. 2a. In the z-direction they are separated by a distance h as illustrated in the xz-plane projection. In our analysis, it is useful to distinguish two limiting cases. The first case corresponds to the situation when the QDs lie in different layers on top of each other and d = 0 . The second case is realized when the QDs are located in one monolayer and h = 0 . These two limiting cases allows us to analyze the orientation effects, e.g., for h = 0 the system is a truly 2D-object, in which the QDs and their dipole moments are in one plane. For d = 0 the problem is quasi-3D, because in principle all dimensions matter. Without loss of generality we assume that only the 1S states (N = 1) contribute to the energy transfer between the donor and acceptor. Then, energy transfer is related to the annihilation of an exciton F (α) 1,nl in the donor and the creation of an exciton F (β) 1,mk in the acceptor (see Fig. 2b), where α and nl (β and mk) are the valley index and two quantum numbers of the exciton in the donor (acceptor). Here we consider the range of distances between the donor and acceptor, s = (d 2 + h 2 ) 1/2 , much smaller that the de Broglie wavelength of the annihilated exciton and neglect effects of exchange and radiation transfer 47,48 , but take into the account the multipole nature of the Coulomb interaction 36 . With these assumptions, the energy transfer depends on the matrix elements of the Coulomb potential where r 1 and r 2 are the 2D center-of-mass vectors originating at the centers of the donor and acceptor. By using the Fourier expansion we find the matrix elements where If we use the long wave approximation qa ≪ 1 with the lattice constant of TMD a and express the integration as a sum of integrals over elementary cells, we simplify F D (q) = dre iqr � nl (r)u α (r)v α (r). and is the area of the unit cell. The corresponding expression for F A (q) can be found by replacements D → A , α → β , and nl → mk in Eq. (17). By carrying out the integration over the angular variable of q in Eq. (15) we obtain with two integrals I 1 and I 2 given by where J η (x) is the Bessel function of the first kind, η = 1, 2 , and n is the unit vector co-directional with d. Within the two-band approximation of the band structure in TMD monolayer, the interband matrix elements of the dipole moment operator in the donor and acceptor are written in Cartesian coordinates as where D = eat/E g and t is the nearest-neighbor hopping integral 49,50 , and angles ϕ and ϑ determine the crystal coordination axes of the donor and acceptor. 
Using these formulas for dipole moments and choosing n = e x , we find where δ αβ is the Kronecker symbol and p = e i(βϑ−αϕ) is the phase factor, which is determined by the alignment of the crystal coordination axes. Further, for the sake of simplicity, we assume ϕ = ϑ = 0 and p = 1 . According to the result, the intravalley matrix element (α = β) corresponds to the annihilation and creation of excitons in the same valley and is proportional to the difference I 1 − I 2 /2 . The intervalley matrix element (α = β) is proportional to −I 2 /2 , because the dipole moments of the excitons are orthogonal to each other and the first scalar product in Eq. (19) is zero. It is noteworthy that if we consider the interlayer excitons in TMD homo-or heterobilayers instead of intralayer ones, we must take into account the z-component of dipole moments in Eqs. (21) and (22). Finally, using the Fermi's golden rule we obtain the rate of the resonant energy transfer from the donor state α, nl to all final states of acceptor β, mk as: where Ŵ is the sum of the total dephasing rates of interband transitions in the donor and acceptor, and = E nl − E mk is the energy detuning between the exciton levels in the donor and acceptor involved in the energy transfer process. Notably, if the magnitudes of the Coulomb matrix elements in Eq. (24) are much larger than and Ŵ , the formation of the entangled states in QDs must be considered 51 . Assuming the simplified case when the energy transfer occurs for equal QDs, L D = L A = L , from the state nl = 00 in valley α to the states with mk = 00 in valleys β = ±α , we find where the rate is dependent on the intravalley and the intervalley matrix elements. To quantify these matrix elements for h > 0 and d > 0 we evaluate the integrals in Eq. (20) numerically. It should be noted that all other S-states of the relative motion and all other mk states of the center-of-mass motion in acceptor contribute to the energy transfer, but their contributions are negligibly small. Before proceeding with the evaluation of the integrals, it is instructive to introduce the dipole-dipole approximation (DDA) of the Coulomb interaction developed by Förster 34 . This approximation of point dipoles is obtained by increasing the distance between the donor and acceptor s to infinity, in particular d → ∞ for h = 0 or h → ∞ for d = 0 . Then we find the DDA limit for the matrix element from Eq. (23) and the energy transfer rate from Eq. (25) www.nature.com/scientificreports/ where the matrix element indexes were omitted for simplicity. As illustrative examples of our theory, we consider two limiting cases shown in Fig. 2a, as they allow us to analyze the matrix elements and rates, and find their asymptotics. For h = 0 , when both QDs are located in the same monolayer, we find exact expressions where C 0 = 2πD 2 |ψ 1 (0)| 2 /(εL) , ξ = d 2 /(8L 2 ) , and I 0 (ξ ) and I 1 (ξ ) are the modified Bessel functions of the first kind. In Fig. 3 we show the matrix elements for MoS 2 monolayer and ϕ = 0 starting from the close-contact distances between QDs, d = 2L . For the shown range of distances the intravalley matrix element C 1 first starts with a positive value and decreases with distance d. After crossing the zero at d ≈ 2.51L it reaches a minimum C 1 ≈ −0.03C 0 at d ≈ 3.41L and further increases monotonically. 
On the other hand, the intervalley matrix element C 2 starts from a negative value, decreases monotonically with distance, and after reaching a minimum C 2 ≈ −0.15C 0 at d ≈ 2.26L it increases monotonically. The analysis shows that for ξ → ∞ , which is the DDA in Eq. (26) for h = 0 , we obtain For small distances in the monolayer limit ( ξ ≪ 1 ) exchange effects must be taken into account, because they substantially change the intravalley matrix element. By substitution of Eqs. (28) and (29) into (25) we find the rate of the energy transfer for h = 0 as where γ 0 = 2C 2 0 /( 2 Ŵ) . By using the material parameters of monolayer MoS 2 and assuming L = 3 nm, ε = 4.5 and Ŵ = 5 meV 20 , we estimate the absolute values C 0 = 17 meV and γ 0 = 176 ps −1 . Again for ξ → ∞ we obtain the DDA in Eq. (27) γ (dd) d = 10γ 0 (L/d) 6 . This result clearly shows that for this limit, the system can be considered as two interacting point dipoles in the Förster model. We plot the rate of the energy transfer γ d in Fig. 4a and show explicitly the intravalley and intervalley contributions to the transfer. Evidently, the intervalley transfer is larger than the intravalley, particularly for large distances it is nearly one order of magnitude larger. It should be noted, that for the distance d ≈ 2.51L only the intervalley transfer occurs, this position is marked by the red arrow in Fig. 4a. To compare the exact result for the energy transfer in Eq. (31) with the DDA we calculate the ratio between the energy transfer rates γ /γ (dd) d in Fig. 4b. The gray dashed line corresponds to the DDA asymptotic. For the monolayer limit with h = 0 we use Eq. (31), whereas for QDs separated in the z-direction with distances www.nature.com/scientificreports/ h = 0.1L, 0.5L, L we substitute Eqs. (23) and (20) into (25). Evidently, for small distances between the QDs the multipole contribution becomes sizable and the distance dependence deviates from the DDA. It is a result of the finite sizes of the QDs in the xy-plane. With increasing distance d the ratios γ /γ (dd) d become larger than unity, and also have maxima, e.g. for the monolayer limit we observe the maximum γ d ≈ 2.35γ (dd) d at d ≈ 4.14L . With increasing separation in the z-direction, h, we observe a decrease in the energy transfer and a shift of the ratio maximum to larger distances d. Summarizing the above, the monolayer limit exhibits the largest energy transfer rate, that is up to 2.35 times larger than the energy transfer for the DDA. Another limiting case with d = 0 is shown in Fig. 2a and corresponds to the geometry, where the centers of the two QDs lie on the z-axis. For this symmetry only the intravalley energy transfer occurs, and its matrix element is given by where ζ = h/(2L) and Erfc(ζ ) is the complementary error function. Using Eq. (25) we calculate the transfer rate as From the asymptotic behavior of these functions for ζ → ∞ , which corresponds the DDA in Eqs. (26) and (27) for d = 0 , we obtain 2C 0 (L/h) 3 for the intravalley matrix element and γ (dd) h = 4γ 0 (L/h) 6 for the energy transfer rate. In Fig. 4c we show that with increasing distance h the transfer rate γ h decreases subexponentially. For all distances this ratio is smaller than unity and the DDA overestimates the energy transfer (see Fig. 4d). This holds generally if all multipole contributions are taken into account, which is necessary when the distance s is in the order of the QD sizes and the energy transfer deviates from the Förster model. 
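To give a feel for the numbers quoted above, the following sketch evaluates the Förster (dipole-dipole) limits of the transfer rate using the MoS2 estimates from the text (γ0 = 176 ps⁻¹, L = 3 nm). The separations in the example are arbitrary, and at distances comparable to L the exact multipole result deviates from these asymptotics, as discussed above.

```python
# Sketch: Forster (dipole-dipole) limits of the transfer rate quoted above,
#   gamma_d ~ 10 * gamma0 * (L/d)^6  for two dots in one monolayer (h = 0), and
#   gamma_h ~  4 * gamma0 * (L/h)^6  for dots stacked along z (d = 0).
# gamma0 = 176 1/ps and L = 3 nm are the MoS2 estimates given in the text;
# the separations below are illustrative only.
GAMMA0_PER_PS = 176.0   # quoted estimate for monolayer MoS2 (L = 3 nm, eps = 4.5, Gamma = 5 meV)
L_NM = 3.0              # quoted quantum-dot size

def gamma_dd_lateral(d_nm: float) -> float:
    """DDA transfer rate (1/ps) for two dots a lateral distance d apart (h = 0)."""
    return 10.0 * GAMMA0_PER_PS * (L_NM / d_nm) ** 6

def gamma_dd_vertical(h_nm: float) -> float:
    """DDA transfer rate (1/ps) for two stacked dots separated by h (d = 0)."""
    return 4.0 * GAMMA0_PER_PS * (L_NM / h_nm) ** 6

if __name__ == "__main__":
    for s in (6.0, 9.0, 12.0, 24.0):   # separations in nm
        print(f"s = {s:4.1f} nm   lateral: {gamma_dd_lateral(s):8.3f} 1/ps"
              f"   vertical: {gamma_dd_vertical(s):8.3f} 1/ps")
```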
Summary and conclusions In summary, we developed a simple theory for excitons confined in QDs of atomically thin TMD semiconductors. Using the approximation of harmonic confinement we derived the energy spectrum and the wave functions of QD excitons. By calculating the intravalley and intervalley Coulomb matrix elements and taking into account not only dipolar contributions but also multipole corrections, we determined resonant energy transfer rates between two adjacent QDs. We derived exact expressions for two simplified cases of possible donor-acceptor geometries, and found that for small distances all multipoles must be taken into account. At large distances, the energy transfer was found to converge asymptotically towards the Förster DDA. The largest energy transfer rate was found in the monolayer limit, where the major contribution stems from the intervalley matrix element of the Coulomb interaction and the intravalley matrix element is small. Moreover, the intravalley matrix element can be drastically suppressed by geometry, rendering the energy transfer valley selective. This aspect is important when considering collective light-matter effects such as superradiance for QD ensembles, and might prove useful for spin-valley selective transfer of quantum information.
5,303.2
2020-08-05T00:00:00.000
[ "Physics" ]
Protocol for enhanced proliferation of human pluripotent stem cells in tryptophan-fortified media Summary We describe a protocol for the efficient culture of human pluripotent stem cells (hPSCs) by supplementing conventional culture medium with L-tryptophan (TRP). TRP is an essential amino acid that is widely available at an affordable cost, thereby allowing cost-effective proliferation of hPSCs compared to using a conventional medium alone. Here, we describe the steps for enhanced proliferation of hPSCs from dermal fibroblasts or peripheral blood cells, but the protocol can be applied to any hPSCs. For complete details on the use and execution of this protocol, please refer to Someya et al. (2021). BEFORE YOU BEGIN This protocol describes the specific steps for hPSC culture, which have been validated in several cell lines. We applied this protocol to the human induced pluripotent stem cell (hiPSC) lines 253G4 and FfI14s04. hPSCs were obtained from the Center for iPS Cell Research and Application, Kyoto University (Kyoto, Japan) and the WiCell Research Institute (Wisconsin, USA). Their utilization and distribution met the relevant institutional and government guidelines and regulations. We used humidified 5% CO2 incubators at 37°C for cell culture. All cell lines were regularly tested for contamination, including mycoplasma, and all culture procedures were performed using sterile techniques to prevent contamination. All processes within the protocol were performed in a biosafety cabinet (Class II, Type A1), according to the relevant guidelines and regulations. Regular karyotyping is highly recommended for ruling out gross chromosomal instability. We typically passaged hPSCs weekly, with the medium changed every 48 h, unless specified. Preparation of culture medium with TRP supplementation Timing: 30 min 1. TRP (53 mg) should be added directly to an aliquot of approximately 50 mL drawn from an original 500 mL mTeSR1 bottle containing 400 mL of mTeSR1 basal medium (Solution A, i.e., without the 5× Supplement). a. Measure TRP carefully using a clean laboratory spatula and a calibrated analytical balance. b. Fill the tube or container with the mTeSR1 aliquot while keeping the original bottle covered with a shading sheet (aluminum foil can be used for practical purposes) to avoid light exposure. 4. Carefully remove the shading sheet and inspect the solution for any remaining solute residues. If residues are observed, store the aliquot in a refrigerator for a longer period until dissolution. Troubleshooting 1. 5. Once the TRP is in solution, the mixture should be added back to the original mTeSR1 bottle through a 0.22-µm sterile filter. 6. Add the thawed mTeSR1 5× supplement to the solution to complete the preparation of the TRP-fortified mTeSR1 solution. Alternatives: For fortifying StemFit AK02N medium, the thawed solution B should be added to the StemFit AK02N medium. 7.
Store the bottle at 4 C until use. Note: Ensure that all resources used in this protocol are within their expiry dates. Once opened, StemFit AK02N or mTeSR1 medium should be used within two weeks. CRITICAL: We strongly suggest avoiding UV light and heat exposure to prevent TRP degradation, as well as FGF2 destabilization, in StemFit AK02N or mTeSR1 medium (Bellmaine et al., 2020;Igarashi et al., 2007). Preparation of Matrigel-coated dishes or plates Timing: 30 min 8. Dispense a mixture of thawed Matrigel and DMEM/F-12 (1:60 dilution) into dishes or plates. The detailed method follows the manufacturer's instructions. Troubleshooting 2. a. Thaw an aliquot containing Matrigel on ice. b. Add the required amount of Matrigel from an aliquot to cold DMEM/F-12. c. Dispense 10 mL of the solution into a 10 cm dish, or 2 mL per well for a 6-well plate, and gently swirl to cover the entire surface with Matrigel before incubation. 9. Incubate dishes or plates in a sterile hood or refrigerator. Pause point: Matrigel-coated dishes or plates are ready to be used after incubation at room temperature (approximately 21 C) for 1 h, or overnight incubation in a refrigerator at 4 C. They can be stored at 4 C for up to one week. For incubation or storage in a refrigerator, they should be covered with clean plastic wrap (or equivalent) to prevent spillage. Avoid tilting to prevent uneven surface distribution of Matrigel. Alternatives: iMatrix-511 is a potential alternative to Matrigel, and, if used, should be diluted in D-PBS(-) at a mixing ratio of 50 mL:10 mL before performing step 8c. MATERIALS AND EQUIPMENT Note: TRP-fortified mTeSR1 medium should be thoroughly covered with a material that has a UV light-shielding properties and can be stored at 4 C for up to 2 weeks. TRP can be stored at room temperature and used before its expiry date. Alternatives: StemFit AK02N medium can be used as an alternative to mTeSR1. Note: A Matrigel bottle should be thawed on ice, divided into aliquots (e.g., 1.5 mL) using microtubes, and stored at À30 C for up to 2 years to prevent thaw-freeze cycles. Note: Aliquots should be thawed on ice. Dilution with cold DMEM/F-12 should be performed promptly to avoid gel aggregation. Note: Y-27632 stock solution should be divided into small aliquots using microtubes and stored at À30 C for up to 1 year. Avoid multiple freeze-thaw cycles. DMSO can be stored at room temperature and used before expiry date. CRITICAL: DMSO readily penetrates the skin and may cause irritation to the eyes, skin, and respiratory tract. Appropriate protection must be provided during handling. Note: Key resources table for phycoerythrin (PE)-conjugated antibodies used for flow cytometry. Prepare 2% FBS solution in each experiment. Immunocytochemistry Note: Prepare each solution in each experiment. Note: The donkey anti-rabbit IgG secondary antibody corresponds to the NANOG primary antibody. Note: The goat anti-mouse IgG secondary antibody was paired with the OCT-4 and SSEA4 primary antibodies listed above. STEP-BY-STEP METHOD DETAILS Initial culture of hPSCs Timing: 45-60 min In this step, hPSCs are thawed from cryotube containing frozen hPSCs to commence culture of hPSCs. 1. Place the required amount of refrigerated mTeSR1 solution in a 50 mL conical tube covered by a shading sheet at room temperature (approximately 21 C) for approximately 30-60 min to warm. Note: 10 mL of medium is required for a 10 cm dish, or 2 mL of medium per well in a 6-well plate. 
Moreover, 10 mL of medium is required for pelleting thawed cells and 10 mL is required for the initial culture of iPSCs. 2. Thaw an aliquot of the 10 mM Y-27632 stock (dissolved in DMSO). (Reagent / Final concentration / Amount: Oct-3/4 Antibody (C-10), host mouse, isotype IgG2b / n/a / 0.01 mL; ImmunoBlock / n/a / 1.990 mL.) 3. Add the 10 mM Y-27632 stock to the required amount of normal mTeSR1 solution (1:1,000 dilution) to prepare an mTeSR1 solution containing 10 µM Y-27632. 4. When the incubation and warming steps are completed, tilt the dish or plate to aspirate and remove the excess Matrigel solution from the bottom corner. 5. Once the Matrigel solution has been removed, dispense mTeSR1 solution with Y-27632 to prevent desiccation. 6. Thaw the hPSCs stored in cryotubes at −150°C in a 37°C water bath. 7. After thawing, move the cryotube to a hood and transfer the solution into a conical tube containing 10 mL of normal mTeSR1 with Y-27632, using a sterile 1,000-µL tip attached to a P1000 pipette. 8. Centrifuge the tube for 3 min at 300 × g. 9. Aspirate the supernatant carefully to prevent suction of the cell pellet, add 10 mL of normal mTeSR1 with 10 µM Y-27632, and gently resuspend the cells, preferably by multiple bouts of suction and discharge using an electronic pipette controller with a sterile serological pipette. CRITICAL: Do not repeat pipetting more than ten times, as this may cause cell death. 10. Transfer an aliquot of the solution onto dishes or plates at the planned cell density. In this experiment, cells were seeded at approximately 3.4 × 10³ cells per cm². Note: The volume to be transferred depends on the specified split ratio of the cells, which should be selected based on the desired passaging day. If you wish to passage cells sooner, the split ratio should be higher, so that the cells reach sufficient confluence on the planned day of passage. As a suggestion, if you plan to passage in 2 or 4 days, seed cells at a density of 1.7 × 10⁴ or 3.4 × 10³ per cm², respectively (a short calculation sketch for converting a target density into a seeding volume follows at the end of this section). To avoid direct seeding from a high-density suspension, which may cause more variation in the initial distribution, with 6-well plates add 500 µL of medium per well first, and then dispense 1.5 mL of the diluted suspension containing the designated number of cells per well. 11. Place the dishes or plates in an incubator at 37°C and gently move them side-to-side to spread the cells evenly without contamination. Note: We recommend initial seeding of hPSCs in a conventional setting without TRP fortification, to allow the cells to grow in an accustomed medium environment before fortifying with TRP. Alternatives: If cells are already being cultured on a dish or plate containing mTeSR1, change the medium to TRP-fortified mTeSR1 on the next medium-change day. If cells were grown in different media, they should first be switched to mTeSR1 medium on the day after the medium change, followed by TRP-fortified mTeSR1. If the cells cultured in non-fortified mTeSR1 are already confluent and ready for passage, they should not be passaged in TRP-fortified mTeSR1 with Y-27632, as TRP fortification should commence at the time of medium change. In that case, the cells should first be passaged using conventional mTeSR1 with Y-27632, and the medium should then be changed to TRP-fortified mTeSR1 (Figure 2).
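The seeding arithmetic referenced in step 10 can be written out explicitly. The hedged sketch below converts a viable-cell concentration from the counter and a target density into a dispensing volume; the nominal growth areas and the example count are assumptions for illustration, not values from the protocol.

```python
# Sketch of the seeding arithmetic in step 10: given a viable-cell concentration from
# the counter and a target density (e.g., 3.4e3 cells/cm^2 for a 4-day passage or
# 1.7e4 cells/cm^2 for a 2-day passage), compute how much suspension to dispense.
# Growth areas below are typical nominal values and are assumptions, not protocol data.
GROWTH_AREA_CM2 = {"6-well (per well)": 9.6, "10-cm dish": 56.7}

def seeding_volume_ml(target_density_per_cm2: float,
                      viable_cells_per_ml: float,
                      vessel: str = "6-well (per well)") -> float:
    """Volume (mL) of cell suspension needed to hit the target seeding density."""
    cells_needed = target_density_per_cm2 * GROWTH_AREA_CM2[vessel]
    return cells_needed / viable_cells_per_ml

if __name__ == "__main__":
    counted = 5.0e5      # example viable-cell concentration (cells/mL) from the counter
    for label, density in [("4-day passage", 3.4e3), ("2-day passage", 1.7e4)]:
        v = seeding_volume_ml(density, counted)
        print(f"{label}: dispense {v * 1000:.0f} uL of suspension per well, "
              f"then top up with medium to the final working volume")
```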
Medium change of hPSCs Timing: 45 min In this step, the hPSC culture medium is changed to Y-27632-depleted, TRP-fortified medium. Note: As with the initial culture, the following steps should be undertaken under sterile conditions, except for assessments of the cells under the microscope. 12. After 1-2 days of incubation, place the required amount of TRP-fortified mTeSR1 solution in a conical tube covered by a shading sheet at room temperature (approximately 21°C) for approximately 30-60 min. 13. Observe the cells in the dish or plate microscopically. Note: Observation of cells should be performed under an inverted microscope regularly, prior to the timing of medium change, passage, or cryopreservation, particularly to assess morphology indicative of differentiation and the presence or absence of contamination. The microscope should be thoroughly cleaned using 70% ethanol before and after use. CRITICAL: If there are any signs suggestive of contamination, discontinue the culture and discard all concurrently cultured dishes and plates to prevent the dissemination of contamination in the laboratory. 14. Tilt the dish or plate, aspirate the culture medium, and change the medium to TRP-fortified mTeSR1 solution (without Y-27632). Troubleshooting 3. Note: During medium change or passage, if possible, minimize the time of direct light exposure, which may be detrimental to cells, and reduce the light level in the sterile hood. The switch to TRP-fortified medium from conventional TRP-concentration medium was performed after hPSCs had been cultured in conventional TRP-concentration medium for at least 12 h. Pause point: Change the medium every other day with TRP-fortified mTeSR1 solution. Once cells are 60%-80% confluent, they should be passaged to maintain pluripotency and to prevent spontaneous differentiation. The cells were passaged on day 7. Passage of hPSCs Timing: 45-75 min In this step, hPSCs in TRP-fortified medium are passaged and seeded at the planned densities. 15. Prepare resources for passaging when the cells are ready. a. Transfer the required volume of TRP-fortified mTeSR1 with 10 µM Y-27632 to a conical tube and keep it in a dark place at room temperature (approximately 21°C) to warm. b. Prepare Matrigel-coated dishes or plates as described previously, and replace the coating solution with TRP-fortified mTeSR1 containing 10 µM Y-27632. 16. Observe the cells in the dish or plate microscopically. 17. Tilt the dish or plate and aspirate the medium. 18. Wash the dish or plate gently with PBS and aspirate the supernatant. 19. Add TrypLE Select for dissociation and place the dish or plate in an incubator at 37°C for approximately 3 min. Note: The amount of TrypLE Select depends on the surface area of the dish or plate and may need to be adjusted based on confluence. We typically added 2 mL to a 10-cm dish and 500 µL per well in a 6-well plate. Alternatives: Accutase can be used as an alternative to TrypLE Select. We typically added 2 mL to a 10-cm dish and 500 µL per well in a 6-well plate. Note: Observe the dish or plate to ensure that the cells are well dissociated before proceeding to the next step. Incubate for longer if the cells do not dissociate. 20. Add 10 mL or 2 mL per well of TRP-fortified mTeSR1 with Y-27632 to a 10-cm dish or 6-well plate, respectively. Then, gently detach the cells and transfer all of the cell suspension to a conical tube using an electronic pipette controller. Alternatives: Cells can be collected directly using a 1,000-µL pipette tip after detachment by gentle pipetting.
The collected cells were promptly transferred to a tube containing 10 mL TRP-fortified StemFit AK02N with Y-27632. 21. Centrifuge the tube for 3 min at 300 3 g. 22. Aspirate the supernatant carefully to prevent suction of the cell pellet; mix 10 mL of normal mTeSR1 with 10 mM Y-27632 and gently stir cells, preferably using multiple bouts of suction; discharge contents using an electronic pipette controller with a sterile serological pipette. 23. Perform a cell count using Vi-CELL XR or equivalent, and assess cell density within the sampled solution and viability. 24. Seed cells onto prepared dishes or plates based on the planned cell density or split-cell ratio. Note: The preferred cell density depends on multiple variables, including the cell line, passage number, and adaptation to TRP-fortified medium. Several attempts are required to determine the optimal cell density or split ratio. We suggest initial seeding and culture with different cell densities using multiple dishes or plates and choose one of them that is approximately 60%-80% confluent at the time of passaging. In this experiment, 1.7 3 10 3 cells per cm 2 of cells were seeded. 25. Place the dishes or plates in an incubator at 37 C and gently move them side-to-side to evenly spread the cells without contamination. Pause point: Continue culture using TRP-fortified mTeSR1 for at least two weeks to allow hPSCs to adapt to their growth environment. Troubleshooting 4. Cryopreservation of hPSCs is recommended when the cells are adapted to TRP-fortified medium after a few passages, allowing easier access to cells that have an increased capacity to proliferate and to efficiently commence cell culture when another culture encounters problems, such as contamination or undesirable differentiation. 26. Prepare 10 mL of TRP-fortified mTeSR1 with 10 mM Y-27632 in a conical tube and keep it in the dark (the medium does not need to be warmed). 27. Prepare cryotubes and record data, such as date, cell line, passage number, TRP-fortification status, cell count, and medium used before freezing. 28. Repeat steps 16-21. 29. Centrifuge the tube for 3 min at 300 3 g. 30. Aspirate the supernatant. 31. Add 500 mL of STEM-CELLBANKER per 1-5 3 10 6 cells. Note: Cell density that is either too high or too low may cause decreased survival after thawing. 32. Gently disperse the cells by pipetting with a sterile 1,000 mL tip attached to a P1000 pipette. Note: The cells are very fragile at this point. Only three pipetting strokes are required to disperse the pellets. More than five strokes can decrease the viability. 33. Immediately after pipetting, 500 mL of cryopreservation solution should be transferred to each of the prepared cryotubes. 34. Place the cryotubes in BICELL freezing vessels. 35. Store the vessels in a À80 C freezer. Pause point: Cryotubes should be removed from the vessels overnight (8-24 h) and stored in a freezer at À150 C for long-term preservation. Note: Frozen hPSCs can be recovered in steps 1-11, except using TRP-fortified mTeSR1 with Y-27632 as the culture medium from the start. Evaluation of hPSCs proliferation: Live-cell imaging Timing: 30-60 min In this step, hPSCs are evaluated in terms of proliferation. IncuCyte ZOOM is used to determine proliferation rates by serially measuring the confluence for specified periods. 
This system enables maintenance of the ongoing culture of cells while comparing ll OPEN ACCESS proliferation rates between TRP-fortified medium and conventional medium in an installed humidified 5% CO 2 incubator at 37 C. 36. Turn on IncuCyte and establish a protocol for imaging, that includes setting up the type of plates and cells, scan mode, imaging intervals, and parameters for image processing. 37. Repeat steps 15-24, with the exception that plates that are compatible with the imaging system must be used instead of passaging onto 10-cm dishes. 38. After seeding, the plates should be gently moved side-to-side to evenly spread the cells without contamination, and gently but firmly placed in the instrument tray specific to the culture vessel. 39. Start imaging and change medium as required (Methods Video S1). Note: Avoid opening the drawer of IncuCyte during the first 24 h of culture, as cells are more susceptible to mechanical force shortly after passage. 40. When the acquisition protocol established during step 36 finishes, the data should be analyzed using MS-Excel. Troubleshooting 5. Evaluation of hPSCs pluripotency: Flow cytometry Timing: 60-90 min In this step, hPSCs are evaluated in terms of pluripotency using cytometry. Flow cytometry of hPSCs cultured in TRP-fortified medium should be performed after multiple passages to assess their pluripotency using hPSCs grown in conventional medium as a control. Three antibodies (anti-REA, anti-SSEA4, and anti-TRA-1-60) were used for this assay. The Gallios flow cytometer and its software, Kaluza and FlowJo, were used for measurement and analysis. Note: When performing flow cytometry to compare pluripotency between hPSCs in TRP-fortified medium and hPSCs in conventional medium, the same number of cells should be sampled from each condition. 44. Centrifuge the tube for 3 min at 300 3 g. 45. Aspirate The supernatant and add 300 mL of a solution containing 2% FBS in PBS. 46. Divide the solution into three aliquots of 100 mL each in a 1.5 mL microtube. 47. Add 10 mL of each antibody to a separate microtube. Pause point: Place the microtubes in the dark on ice for 30 min. 48. Add 1 mL of 2% FBS to each microtube and centrifuge for 3 min at 1,250 3 g (or 5 min at 300 3 g). 49. Using a pipette tip, aspirate supernatant. 50. Add 500 mL of 2% FBS in PBS to each of the microtubes. 51. Dispense the solutions to a separate test tube by filtering them through a cell strainer cap. 52. Run the flow cytometer. Immunocytochemistry of hPSCs cultured in TRP-fortified medium is recommended to assess their pluripotency, and should be performed together with hPSCs grown in conventional medium as a control. Three pluripotency markers were used in this assay (OCT-4, NANOG, and SSEA4). Consult the manufacturer's instructions for details regarding the appropriate dilution ratio for each antibody, and ensure that the secondary antibodies correspond to the primary antibodies used. The fluorescence microscope BZ-X710 and its accompanying software, BZ-X Viewer and BZ-X Analyzer, were used for analysis. 53. Prepare hPSCs at 40%-60% confluency on 6-well plates. Note: If three pluripotency markers are tested, a minimum of three wells are required for immunocytochemistry per cell line. 54. Aspirate medium. 55. Wash with 2 mL of PBS per well and aspirate. 56. Fix Cells by adding 1-1.5 mL of 4% paraformaldehyde per well and refrigerated at 4 C. Pause point: Blocking can be performed at room temperature (approximately 21 C) for 30 min to 1 h or at 4 C overnight (8-24 h). 63. 
Dilute the primary antibodies with ImmunoBlock in separate conical tubes (1 mL per well). 64. Aspirate ImmunoBlock from the plates and add diluted primary antibodies. Note: If imaging is to be performed later, the plates can be stored in the dark place at 4 C for up to one week. Wrapping with a shade sheet to minimize light exposure is recommended. EXPECTED OUTCOMES By applying this method, hPSCs were able to grow efficiently through multiple passages, with an increase in the 10-week cumulative cell counts ranging from 5-to 17.5-fold after the cells had adapted to the new medium. hPSCs proliferate for a long period without karyotypic changes (Someya et al., 2021). TRP fortification showed a consistent improvement in proliferation across multiple cell lines, including 253G4 and FfI14s04 for hiPSCs, suggesting that this phenomenon is not cell-specific (Figure 3, Methods video S1). For basement membrane matrices other than Matrigel, we tested iMatrix-511, a chemically defined, xeno-free recombinant human laminin substrate, which also showed enhanced proliferation after TRP fortification, implying that the increased proliferation was not dependent on the components of the basement membrane matrix. Pluripotency was assessed by multimodal analysis involving flow cytometry detection of SSEA4 and TRA-1-60 (Figure 4), as well as immunocytochemistry screening for OCT-4, NANOG, and SSEA4 ( Figure 5), all of which showed a high level of pluripotency, demonstrating that hPSCs grown in TRP-fortified medium retained pluripotency after a long period. LIMITATIONS Although TRP-fortified mTeSR1 medium was effective in multiple hPSC lines, including 253G4 and FfI14s04, whether enhanced proliferation is universal for all hPSC lines remains unclear. We validated that increased cell growth is observed in mTeSR1 and StemFit AK02N when supplemented with TRP; however, whether this applies to all available hPSC growth media remains unclear. The capacity of cells to adapt to TRP-fortified media and the number of passages required for cell adaptation may vary among different cell lines. Owing to the photodegradation of TRP, the degree of proliferation may be influenced by environmental factors, such as lighting in the laboratory and sterile hood, and time of exposure, which in turn may depend on how efficiently each step is performed. TROUBLESHOOTING Problem 1 TRP does not dissolve in mTeSR1. Potential solution Ensure that the TRP used is of cell culture grade, handled with caution, and not contaminated with other substances. If a large mass of TRP remained at the bottom of the container after the addition of mTeSR1 basal medium, gently shake the container multiple times to help dissipate the mass (firmly close the lid beforehand to prevent spillage). If the dissolution process is slow, the amount of basal medium increases. We do not recommend leaving the solution at room temperature (approximately 21 C) for long periods or warming the solution to a 37 C water bath, as this may lead to TRP and medium deterioration. We also suggest not using solvents other than the basal medium (such as Milli-Q water or DMSO) to dissolve TRP, as they cause overall dilution of the culture medium and may have potentially deleterious effects in long-term culture (step 4). Problem 2 Detachment of colonies from the dish is observed. Potential solution Colony detachment is often observed when the basal medium is changed. Re-validation of the Matrigel dilution ratio is required. 
If detachment is still observed, testing other extracellular matrices is required. In this case, validation of the use of iMatrix-511 is our first recommendation (step 8). Problem 3 Significant cell death is seen, or cells do not survive after initial culture. Potential solution If frozen cells had been previously cultured in a different medium, thaw the cells initially in the same medium supplemented with Y-27632. After 1-2 days, change the medium to mTeSR1 without TRPfortification until the cells proliferated and reached 60%-80% confluence. Cells were passaged as previously described for mTeSR1 (without TRP-fortification) with Y-27632, and after 1-2 days, the medium was changed to TRP-fortified mTeSR1. A few passages without TRP fortification may be required until the cells are fully adapted to mTeSR1, as the cells likely do not survive well with TRP fortification unless they are adapted to the medium. If cells still do not survive, check for signs of contamination, differentiation, or karyotype abnormality; alternatively, choose cells with fewer passages if available (step 14). Problem 4 Cell growth is slow after TRP-fortification. Potential solution Slower growth is commonly observed in the first few passages, especially when passaging with TRPfortified medium for the first time. Adaptation to TRP-fortified medium may vary from cell line to cell line, and some cell lines may require longer time for cell adaptation. Adjust The split ratio so that the confluence was 60%-80% at the time of passage, as a too small or too large cell density is detrimental to cell growth. Overconfluent cells may lose pluripotency. When compared to the control, the same cell line and passage number were used. In addition, we ensured that the TRP-fortified basal medium was the same as that used before using the TRP-fortified basal medium. Pluripotent stem cells must be acclimatized to any new basal medium for at least two passages (step 25). Problem 5 Results obtained using IncuCyte are inaccurate. Potential solution The IncuCyte system was calibrated. Review the configurations of the system and software, including the setup for scan mode and job analysis. Consider a whole-well scan mode instead of a standard scan mode and refine the job analysis to allow better recognition of phase-contrast images. Multiple attempts may be required to obtain an optimal configuration before a formal experiment can be conducted. For accurate results, avoiding the use of high-number plates (for example, 24-well or more plates) decreases the accuracy. The cells were evenly distributed during seeding. For a 6-well plate, we suggest replacing Matrigel with a small volume of growth medium with Y-27632, initially (such as 500 mL per well), and then dispensing the diluted solution with a larger volume (such as 1.5 mL per well) containing a designated number of cells to avoid seeding directly from a high-density suspension, which may cause more variation in the initial distribution (step 40). RESOURCE AVAILABILITY Lead contact Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact Shugo Tohyama (shugotohyama@keio.ac.jp). Materials availability This study did not generate new unique reagents. Data and code availability There is no dataset and/or code associated with the article.
6,221.6
2022-04-22T00:00:00.000
[ "Biology" ]
Sustainable Masonry Made from Recycled Aggregates: LCA Case Study : For a sustainable building industry, reusable construction with a low demand for primary resources is needed. Moreover, if we want to reduce the amount of construction and demolition waste, construction with recycled aggregate should be considered. To investigate the environmental impacts of such concrete construction, life cycle assessment (LCA) was used to compare the following types of concrete construction: Reusable blocks with recycled brick aggregate, reusable blocks with recycled concrete, reusable blocks with natural aggregate, and a regular concrete wall. Firstly, the properties of the new concrete with recycled aggregate were measured, such as physical, mechanical, and thermal properties. Then, different constructions were designed and assessed using the method of the Institute of Environmental Sciences (CML2001) and the method of the National Institute for Public Health and the Environment (ReCiPe 2016) as characterization methods. Unsurprisingly, the regular concrete wall had a higher impact in most of the impact categories, e.g., 113 kg CO2 eq. (in the first scenario, using CML2001). In accordance with the circular principles, the reusability of blocks and recycling of aggregate are the main factors that affect the environmental impact of the constructions. Thus, the global warming potential (GWP) of construction with reusable recycled concrete blocks was only 53 kg CO2 eq. (in the second scenario). Moreover, we show differences between the results of CML2001 and ReCiPe 2016, e.g., in the Photochemical Oxidant Creation category. Introduction The main sustainability hierarchy goes through three 'Rs': first reduction, then reuse, and finally recycling. The utilization of reusable blocks with recycled aggregate meets all three 'Rs'. These reusable blocks, which have been used in previous construction, can be reused in new construction. Furthermore, it is possible to reduce primary resources and construction and demolition waste through recycling. Research on the utilization of recycled aggregate for concrete started in the 1940s [1]. The utilization of recycled aggregate (RA) from construction and demolition waste (CDW) as a partial or full replacement of natural aggregate (NA) in concrete mixtures mostly influences its mechanical and durability properties negatively. However, RA has a positive impact on thermal properties [2,3]. RA from waste concrete (RCA) is mostly used for constructing base layers in road structures, and RA from waste masonry (RMA) for backfill layers or as layers to make landfills safe. The properties of recycled aggregate concrete (RAC) are influenced by the replacement rate of RA, as well as by its quality and composition [4]. The properties of RA are mostly influenced by unwanted impurities, which are materials usually contained in the CDW, such as soil, dust, plastics, paper, textile, etc. However, these impurities can be separated during selective demolition and the two-phase recycling process, which leads to a higher quality of RA. In previous studies [5][6][7], the maximal possible replacement rate of NA by RCA without a decline in properties has been found to be 30% or 50%, which is also defined in the European Standard [8]. In the case of recycled masonry aggregate concrete (RMAC), it was found that the coarse natural aggregate (NA) can be replaced by up to 15% without a decline in properties.
Further, the decrease in compressive strength of concrete with the full replacement of natural gravel by coarse RMA is 35% [9]. The conclusions from studies on the replacement of natural sand by fine RA remain unclear. Some studies have found that natural sand can be replaced by fine RA up to a level of 30% without significant effects on the mechanical properties of concrete [10]. On the contrary, the results of another study showed an increase in the compressive strength of concrete with the full replacement of natural sand by fine RA [11]. RMA containing high amounts of waste masonry, as well as fine RA, has not yet found adequate utilization, because these types of RA cannot be used as an aggregate for concrete according to European standards [8]. Many studies have been published that compare the environmental impacts of RAC and NAC for structural use [44][45][46][47][48][49][50][51][52][53][54][55][56][57][58][59][60][61][62]. The environmental impacts of concrete structural elements can be assessed, for instance, by the life cycle assessment (LCA) method [63]. The environmental impacts of lower-grade concrete with RA were assessed and compared with NA [47]. The life cycle assessment of different types of aggregate (e.g., river NA, crushed NA, and coarse RA) used for structural concrete, and their comparison, has been published in a few studies. Furthermore, the importance of different types of transportation and transport distances has been assessed in these studies [47,48,51,54,60,61]. It has also been reported that the decline of RAC properties can be compensated for with additional cement, which slightly influences the environmental impacts of this material [44,62]. The environmental impacts of CDW recycling can also be affected by special types of recycling, such as separating the attached cement mortar from the aggregate surface of waste concrete by heat treatment and abrasion, or separation by microwave heating [64,65]. In the abovementioned LCA studies, the environmental impacts are assessed using different characterization methods. One of the methods used is the characterization method of the Institute of Environmental Sciences (CML2001). The characterization factors of this method are also used for the environmental assessment of construction products according to EN 15804+A1. In this study, the results were calculated using CML2001-January 2016. Nevertheless, this method uses characterization models which insufficiently describe the environmental impacts in some categories, such as the Photochemical Ozone Creation category. Therefore, this study also used the method of the National Institute for Public Health and the Environment (ReCiPe 2016 v1.1) as a characterization method, and both sets of results were interpreted together. This paper presents a comparison of four wall systems with the same utility characteristics, as well as an environmental evaluation of the systems using the LCA method. The following constructions were compared: (1) Reusable masonry blocks made of RMAC with full replacement of NA by RMA; (2) reusable masonry blocks made of RCAC with full replacement of natural gravel and partial replacement of natural sand by RCA; (3) reusable masonry blocks made of NAC with only NA; (4) a conventional concrete wall system with only NA. These constructions were compared in four scenarios with different time scales of use and other assumptions.
All of the compared solutions have the same utility properties, such as concrete strength class, thermal properties, and methods of use. The different thermal conductivities of the concretes are compensated for by various thicknesses of thermal insulation to reach the same heat transfer coefficient. The functional unit of comparison was 1 square meter of a wall system. Materials Three different concrete mixtures with different aggregate types and ordinary Portland cement (OPC) were designed for the same utility properties and structural use. The RA originated from waste concrete and masonry and was produced by a recycling center in the Czech Republic. The concrete or masonry fragments were crushed in a mobile recycling plant to the fraction 0/128 mm and sieved to fractions 0/4, 4/16, and 16/128 mm. In the next step, the fraction 16/128 mm was crushed again and sieved to fractions 0/4, 4/8, and 8/16 mm. The aggregate was kept at natural air humidity. In the case of the concrete mixtures containing RA, the higher water absorption of RA was compensated for by an additional amount of water, which was calculated on the basis of the water absorption of RA after 10 min. It is generally known that RA has higher water absorption and lower density than NA, which has been verified in previous research. It has been established that the water absorption of coarse RMA ranges from 10% to 19% [9,[66][67][68], and that of fine RMA from 12% to 15% [9,69,70]. The dry density of coarse RMA ranges between 1800 and 2700 kg/m³ and that of fine RMA between 2000 and 2500 kg/m³ [9,[66][67][68][69]71]. The water absorption of coarse RCA ranges between 0.5% and 14.75%, and the dry density of coarse RCA ranges from 1900 to 2700 kg/m³ [6]. The water absorption of fine RCA ranges between 4.3% and 13.1%, and the dry density of fine RCA ranges from 1900 to 2360 kg/m³ [72]. The pycnometric method according to the CSN EN 1097-6 standard was used for the verification of the water absorption and density of the aggregate. The water absorption of RMA after 24 h ranged from 11.5% to 12.5% for the 4/8 mm fraction and from 10.0% to 11.0% for the 8/16 mm fraction. The oven-dried density was 2000 kg/m³ for the 4/8 mm fraction and 2100 kg/m³ for the 8/16 mm fraction. The water absorption of RCA after 24 h was 7.0% for the 4/8 mm fraction and 6.0% for the 8/16 mm fraction. The oven-dried density was 2400 kg/m³ for both fractions. The measured RCA and RMA properties correlate with the results of previous studies. Concrete Mixes The composition of the RAC mixtures was designed according to the Czech standard [8], and the granular skeleton was established according to the Bolomey particle size distribution curve. The parameter A was equal to 16 for the design of the RAC mixtures, due to particle shape and roughness [20]. The aim of the mixture design was to reach the compressive strength class of concrete (C20/25) by optimizing the amount of cement (CEM I 42.5 R) and the effective water/cement ratio. The cement content varied according to the mechanical properties of the concretes: it was 320 kg/m³ in the RAC mixtures and 300 kg/m³ in the NAC mixture. The effective water/cement (w/c) ratio refers to the free water content, excluding the amount of additional water, which was added due to the high water absorption of RA. The additional water quantity was calculated on the basis of the RA water absorption after 10 min and added to saturate the RA before or during mixing to obtain the desired RAC workability.
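To illustrate how the higher water absorption of RA translates into additional mixing water, the short Python sketch below sums the water absorbed by each aggregate fraction after 10 min. The aggregate masses, absorption values, and w/c ratio in the example are illustrative assumptions, not the study's mix-design data.

```python
# Minimal sketch: additional mixing water to compensate for RA water absorption.
# Aggregate masses per m^3 and absorption values are illustrative assumptions.

def additional_water(aggregate_fractions):
    """Sum the water (kg/m^3) absorbed by each aggregate fraction.

    aggregate_fractions: list of (mass_kg_per_m3, absorption_after_10min) tuples,
    where absorption is given as a fraction (e.g. 0.10 for 10 %).
    """
    return sum(mass * absorption for mass, absorption in aggregate_fractions)

# Hypothetical RMA blend: fine fraction 0/4, coarse fractions 4/8 and 8/16 mm.
rma_fractions = [
    (650.0, 0.090),   # 0/4 mm, assumed 10-min absorption 9.0 %
    (420.0, 0.100),   # 4/8 mm, assumed 10-min absorption 10.0 %
    (530.0, 0.085),   # 8/16 mm, assumed 10-min absorption 8.5 %
]

extra_water = additional_water(rma_fractions)
effective_water = 0.55 * 320.0          # assumed effective w/c ratio * cement content
total_water = effective_water + extra_water

print(f"additional water:    {extra_water:.1f} kg/m^3")
print(f"total mixing water:  {total_water:.1f} kg/m^3")
```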
The measured aggregate properties were used for the mixture design. No water-reducing admixtures were used. The mixture proportions, given per cubic meter, are shown in Table 1. The strength classes were determined according to ISO 12491, in which the number of tested samples is taken into consideration for strength class determination to exclude its impact. The average values of the concrete properties and the target strength classes are shown in Table 1. According to the study [52], the decline in concrete compressive strength with the full replacement of coarse NA by RCA ranges up to 25%. However, the largest decline was mostly observed for the modulus of elasticity, which was up to 45% [53]. It was found that the decline in mechanical properties for a full replacement of NA by RA could be compensated for by 8.3% of additional cement and a lower effective water/cement ratio [73]. In this case study, the decline in mechanical properties was compensated for by the addition of 6% of cement and a lower effective water/cement ratio in the RAC mixtures. The durability of recycled aggregate concrete, which is key for its use in structural applications, was not notably affected when the replacement rate of aggregate by RCA was less than 30% [42]. Nevertheless, in this study, the replacement rate of NA by RCA was almost 100%. For this reason, it was assumed that the durability of the reusable concrete blocks could be unsatisfactory. Therefore, the analysis was limited to a structural element for which non-aggressive environmental conditions with protection by thermal insulation apply. Description of Assessed Systems In this study, the four construction types were designated Plan 1, Plan 2, Plan 3, and Plan 4. The first construction (Plan 1) combined regular insulation and reusable blocks, which were made of the concrete mixture with recycled brick aggregate. Similarly, the Plan 2 construction consisted of reusable blocks made with recycled concrete aggregate. Reusable blocks with thermal insulation were also used in Plan 3, but for this construction, a concrete from primary resources was used. Plan 4 was a regular reinforced concrete wall system with thermal insulation and adhesive. The constructions with reusable blocks included timber frames. All of the constructions had similar thermal insulation properties; the heat transfer coefficient for all plans was 0.29 W/(m²·K). An overview of the plans is given in Table 2. The compared constructions were designed to answer questions from the field of civil engineering in the Czech Republic. In the very first step of this study, these questions were used as a background for the choice of the compared concrete wall systems. The first plan was a regular concrete wall system, used as a reference and to represent an ordinary system. To compare the influence of reusable construction, the other systems were made of reusable blocks. Moreover, next to a reusable block with natural aggregate, blocks with brick and concrete recycled aggregates were also suggested to test the influence of recycled aggregates. After this choice of wall systems, the constructions were designed. Their detailed description is in Figure 1.
Goal and Scope of the Study The constructions were compared using a methodology in accordance with the LCA standards (ISO 14040:2006, EN 16757:2017) [74]. The LCA was performed in four steps: goal and scope definition, life cycle inventory analysis, life cycle impact assessment, and interpretation. The goal of this study was to compare the environmental impacts of four concrete constructions in four scenarios to see the influence of block reusability, end-of-life processes, and aggregate recycling. The functional unit was 1 m² of wall construction with similar thermal conductivity (the heat transfer coefficient was 0.29 W/(m²·K)). In the first scenario, the assumed service life was 50 years. It is assumed that 50 years is a typical time scale for one building, after which the building is refurbished or demolished. On the other hand, the typical life span of a concrete block is 100 years. Thus, in the second scenario, it is assumed that while the reusable concrete blocks can be reused, the reinforced concrete wall must be removed and rebuilt. Therefore, in the second scenario, different life spans were assumed: 100 years for the constructions made from reusable blocks and 50 years for the reference reinforced concrete wall system. The same assumption was also made for the third and fourth scenarios, which differ only in the end-of-life (EoL) phase. In Scenario 3, the second built wall is demolished, the CDW is recycled to produce new concrete aggregates, and the separated wood is incinerated in a waste wood incineration plant to produce electricity and thermal energy. The end-of-life phase for CDW from the second built wall is different in Scenario 4, where the CDW, including the waste wood, is simply removed to landfill. In each scenario, the entire life cycle of the construction was considered (a cradle-to-grave scale). The following phases were considered: production (primary resource extraction, secondary raw material production, transport, manufacturing), use phase (manipulation on the construction site, deconstruction), and end of life (demolition, transport, waste removal). A more detailed description of the scenarios is given in Table 3.
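The effect of the different assumed service lives can be expressed as simple bookkeeping per functional unit: over a 100-year reference period, the reusable-block walls are built once, while the reference wall must be built twice. The Python sketch below illustrates that accounting; the per-build impact values and the manipulation term are placeholders, not results from the GaBi model.

```python
# Sketch of the scenario bookkeeping: impacts per functional unit (1 m^2 of wall)
# over a 100-year reference period. All numbers are illustrative placeholders.

REFERENCE_PERIOD = 100  # years

walls = {
    # name: (impact per single build [kg CO2 eq], service life [years], reusable blocks?)
    "reusable-block wall": (55.0, 100, True),
    "regular concrete wall": (110.0, 50, False),
}

for name, (impact_per_build, service_life, reusable) in walls.items():
    builds = REFERENCE_PERIOD / service_life   # 1 for reusable blocks, 2 for the regular wall
    # Reused blocks still need deconstruction/rebuilding (manipulation); assume a small extra term.
    manipulation = 2.0 if reusable else 0.0    # kg CO2 eq, assumed
    total = builds * impact_per_build + manipulation
    print(f"{name}: {total:.0f} kg CO2 eq per m^2 over {REFERENCE_PERIOD} years")
```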
The system boundaries of the considered scenarios included the following processes: electricity production (Czech grid mix), recycling of construction and demolition waste (production of recycled aggregates), mineral wool production, cement production (CEM I 42.5), lime plaster production, lightweight plaster production, water drawing, construction timber (softwood) production, diesel production (EU-28 mix), transport of materials by truck (Euro 3, up to 28 t gross weight/12.4 t payload capacity), manipulation on the site, construction assembly and disassembly, separation of the materials from the construction, removal of unspecific construction waste to landfill, waste incineration of wood products, credit for energy from wood incineration, and transport of waste materials by truck (Euro 3, up to 28 t gross weight/12.4 t payload capacity). Life Cycle Inventory The total amount of materials for each construction is shown in Table 4. Every construction was designed with a different type of concrete: concrete containing recycled brick aggregate was used in Plan 1, concrete containing recycled concrete aggregate was used in Plan 2, and Plan 3 and Plan 4 were designed using regular concrete with natural aggregate. GaBi 9 software (thinkstep, Leinfelden-Echterdingen, Germany) was used to obtain data about the production systems. As a priority, specific data for the Czech Republic were used, but generic data from the Ecoinvent database were also used. Life Cycle Impact Assessment The impact assessment was carried out using two characterization methods: CML2001-January 2016 and ReCiPe 2016 v1.1. The impact category indicators used in this study are shown in Table 5. CML2001 is very often used as the characterization method for environmental assessment in the building industry.
For example, the CML2001 midpoint indicators are prescribed for the assessment of construction products according to EN 15804+A1 and for concrete products according to EN 16757 [75,76]. ReCiPe 2016 combines CML 2001 and Eco-indicator 99 [77]. In ReCiPe, it is possible to determine 18 midpoint indicators and 3 endpoint indicators. These indicators contain factors according to three cultural perspectives: Individualist, Hierarchist, and Egalitarian. In this study, the Hierarchist set of characterization factors was used; this consensus model is used as a default in scientific models and is based on a medium timeframe of impacts. By combining ReCiPe and CML, we are able to describe the environmental impacts of the assessed forms of construction more fully. Different types of methodologies have been reviewed by Bogacka [78]. In this study, seven ReCiPe midpoint indicators were used for the impact assessment. These seven indicators were chosen because they describe environmental impacts in the same categories that are assessed according to EN 15804+A1 and EN 16757 [75,76] using CML2001. The results of the environmental assessment for these scenarios are presented in Section 3, Results. To show the typical contributions of processes and phases, the results are described more fully in Section 4, Discussion. Scenarios 3 and 4 were modelled to describe the influence of the end of life in the production system for the construction; therefore, the results of Scenarios 3 and 4 are considered in Section 4, Discussion. Results In this study, we present the results of four constructions in four scenarios. The results are marked with a combination of a plan number and a scenario number; accordingly, the results marked as Plan 1 s1 belong to the Plan 1 construction and were obtained in Scenario 1. Table 6 shows the demand for selected resources in the case of the basic scenarios. There were no important differences in the amount of non-renewable energy resources used over the life cycle of the considered constructions. On the other hand, the constructions with recycled aggregates in the concrete mixtures (Plan 1, Plan 2) had smaller primary consumption in most of the flows, and moreover, they were beneficial in the flow of natural aggregate. This benefit was caused by the recycling processes of construction and demolition waste (CDW), which are tied up with the production of other aggregate fractions. The Plan 1 and Plan 2 constructions also had smaller water consumption, mainly due to the amount of water saved by the recycling of CDW. Plans in Scenario 1 All of the elementary flows were characterized, and the results of the impact indicators are shown in Tables 7 and 8. The results of Plan 2 were affected by the recycling of concrete CDW, which also included the recycling of iron scrap. Recycling of this admixture under the conditions of the Czech Republic provided an impact benefit in the ADP impact category, but also had a negative influence on the ODP impact category. Nevertheless, Plan 2 had the smallest environmental impact, as shown in Figure 2. Each of the plans had a beneficial impact in the POCP category. The main effect on this category was caused by the transport processes, which were modeled as diesel trucks (Euro 3). Combustion in the diesel engine generates NOx emissions, which can be split into two components, NO and NO2. Nitrogen monoxide, which in many cases dominates, can under certain conditions react with O3, converting it back to O2 while forming NO2.
Therefore, the flow of NO had a potentially beneficial impact in the POCP category. A similar effect can be caused by benzaldehyde, which is also a product of combustion [79]. The results of this indicator are based on potential impacts, and the actual reactions depend on local aspects such as the abundance of O3 and NO or the intensity of daylight; it cannot be concluded that more transport leads to fresher air. In the comparison of the results according to CML2001 (Table 7) and ReCiPe 2016 (Table 8), there were some differences. For example, according to CML2001, Plan 4 (the regular wall) had the smallest impact in the ozone depletion category, whereas according to ReCiPe 2016, Plan 1 caused the smallest impact in Stratospheric Ozone Depletion. Similarly, the impact in the Photochemical Ozone Potential category can be interpreted with different conclusions. Nevertheless, the Global Warming indicators gave almost the same results, and the Acidification Potential and the categories describing resource scarcity also produced similar results. Other indicators described the impacts differently. To compare the overall impacts of the constructions in Scenario 1, normalization and weighting were performed (Figure 2). The normalization was carried out with data from CML2001-January 2016, World, year 2000, including biogenic carbon (global equivalents), and the weighting was carried out according to the thinkstep LCIA Survey 2012, Global, CML 2016, including biogenic carbon (global equivalents weighted). No data for normalization and weighting according to ReCiPe 2016 were available. Scenario with a Longer Service Life of Reusable Blocks In the second scenario, the possibility of reusing the blocks was considered, hence a longer service life was assumed (100 years for the blocks). In contrast, Plan 4 (the regular concrete wall system) had the same assumed moral life expectancy as in the first scenario (50 years), and so, for a comparable function, twice as much material was needed as in the first case. Even a well-functioning construction is demolished after some time due to outdated design or other inappropriate properties; this time period is called the moral life expectancy. The recycled brick aggregates in Plan 1 came from the recycling of brick CDW, which did not contain as large an amount of steel scrap as in Plan 2. Therefore, these recycling processes did not significantly affect the environmental impact of Plan 1. In these basic scenarios, Plan 3 and Plan 4 were almost identical, hence their environmental impacts were almost equal, too. The only exception was in the ODP category; this difference was caused by the incineration of the construction wood, which was used only in the plans with reusable blocks (Plans 1-3), not in Plan 4.
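Characterization, normalization, and weighting are all linear operations: elementary flows are multiplied by characterization factors and summed into category indicators, each indicator is divided by a normalization reference, and the normalized values are multiplied by weights and summed into a single score. The Python sketch below strings these three steps together with a toy inventory; the factors, references, and weights are placeholders (approximate literature values), not the CML2001-January 2016 or thinkstep LCIA Survey 2012 data used in GaBi.

```python
# Minimal LCIA pipeline sketch: characterization -> normalization -> weighting.
# All numbers are illustrative placeholders, not the study's data sets.

inventory = {            # elementary flows per functional unit (kg)
    "CO2, fossil": 95.0,
    "CH4, fossil": 0.12,
    "SO2": 0.25,
}

char_factors = {         # characterization factors per category (approximate values)
    "GWP [kg CO2 eq]": {"CO2, fossil": 1.0, "CH4, fossil": 28.0},
    "AP [kg SO2 eq]":  {"SO2": 1.0},
}

references = {"GWP [kg CO2 eq]": 4.2e13, "AP [kg SO2 eq]": 2.4e11}  # assumed world totals
weights = {"GWP [kg CO2 eq]": 0.6, "AP [kg SO2 eq]": 0.4}            # assumed weighting set

# 1) Characterization: category indicator = sum(flow amount * factor).
indicators = {
    cat: sum(inventory.get(flow, 0.0) * cf for flow, cf in factors.items())
    for cat, factors in char_factors.items()
}

# 2) Normalization and 3) weighting into a single score.
normalized = {cat: indicators[cat] / references[cat] for cat in indicators}
single_score = sum(normalized[cat] * weights[cat] for cat in normalized)

for cat, value in indicators.items():
    print(f"{cat}: {value:.2f} (normalized {normalized[cat]:.2e})")
print(f"weighted single score: {single_score:.2e}")
```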
Tables 9 and 10 show that Plan 2 s2 was still the most favorable variant; compared with the other plans, it performed worse only in the ODP category. The worst construction was Plan 4 s2, which potentially caused almost double the impact compared with the first scenario. There were some differences between the results according to CML2001 and ReCiPe 2016. For example, in the photochemical ozone category, Plan 4 s2 was the most beneficial construction according to CML2001, whereas Plan 2 s2 had the smallest impact according to ReCiPe 2016. According to ReCiPe 2016, Plan 1 s2 caused the smallest total impact; on the other hand, the smallest total impact according to CML2001 was caused by Plan 4 s2. Service Life and Influence of Blocks' Reusability To explain the environmental advantages of the reusable blocks, it is possible to compare the relative results for all categories in Scenario 1 and Scenario 2 (Figures 3 and 4). Typically, the influence of the life span as an assumption can be considered in different scenarios [80]. Some authors consider only the cradle-to-grave approach for new building elements [81,82].
Figures 3 and 4 show all of the results except ODP, because there is a disproportionate difference in the results of this category according to CML2001. The results in these figures confirm that designing for reuse is a way to reduce the environmental impact of construction. The impact of Plan 4 s1 is the biggest in most of the categories, and the differences from the other plans are even greater in Scenario 2. The almost doubled impact of Plan 4 s2 is mainly caused by higher resource production, the higher amount of transported materials, and also the higher amount of landfilled waste. We used only one type of insulation, and the constructions were designed with almost the same thermal insulating properties; another approach would be to assess different types of insulation materials [83]. On the other hand, the results in the ODP category were significantly higher for the other plans than they were for Plan 4 s2. This impact is caused by the recycling of the iron scrap and the incineration of wood. According to ReCiPe 2016, the assessment of the impact in the Stratospheric Ozone Depletion category leads to different conclusions: Plan 4 s2 has the biggest impact on this indicator. The result in the POCP category is influenced mainly by iron scrap recycling in Plan 2 s2. The influence of iron scrap recycling was also investigated in a case study of earth-retaining walls [84]. In the other plans, the results in this category are mainly influenced by transport, as described in the Results. Overview of Contributions Caused by Phases and Processes To investigate the influence of phases and processes, the relative contributions are shown in Figure 5 in relation to the results of all of the plans in Scenario 1. The figure shows the relative contributions in the GWP category. The main contributors to GWP are cement production and the processes in the EoL phase. On the other side of the figure, the aggregate production in Plan 2 causes a beneficial impact; the impact in this category is influenced by the iron scrap recycling. Another beneficial impact is caused by the wood production. Another point of view on using aggregates in concretes for beam-floor systems was investigated by Dossche [85], and the optimization of concrete mix and block design has also been investigated [86].
Manipulation consists of the deconstruction and rebuilding of the blocks, and hence it is reported only for Plans 1-3. Transport and EoL are significant contributors in Plan 2 but rather minor contributors in the other plans. The relative contributions of the phases in the case of the block with recycled aggregate are shown in Figure 6. The GWP and ADP (elements and fossil) categories are influenced significantly by the production phase, while the ODP category is affected mainly by the EoL. Plan 2 represents the most beneficial type of the considered constructions, and in Scenario 1 it is mostly influenced by production (cement, recycled concrete aggregate). By focusing on the same plan in Scenario 2, it is possible to see that the relative influence of production is reduced by the influence of the other phases, as shown in Figure 7. For example, the use of recycled aggregates is a beneficial process, but its relative benefit is reduced by the processes in the EoL and use phases. According to Figures 6 and 7, the use phase is a minor contributor in all categories; hence, changes in the calculations and estimations of these processes (deconstruction, rebuilding, manipulation on site, etc.) would not play a significant role in the conclusions of the study. On the contrary, the main contributors are the EoL and production phases. The influence of the contribution of the initial manufacturing phase was investigated in the case of designing bridges [87].
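The contribution analysis behind Figures 5-7 is essentially a decomposition of each plan's category total into phase shares, with credits (e.g., for recycling or energy recovery) entering as negative contributions. The sketch below computes such relative contributions for a single category from placeholder phase values.

```python
# Sketch: relative contribution of life cycle phases to one impact category.
# Phase values are placeholders; negative numbers represent credits (avoided burdens).

phases = {
    "production": 70.0,
    "transport": 15.0,
    "use (manipulation)": 3.0,
    "end of life": 20.0,
    "credits (recycling, energy recovery)": -8.0,
}

total = sum(phases.values())
for phase, value in phases.items():
    share = value / total
    print(f"{phase:<38} {value:+7.1f}  ({share:+7.1%} of net total)")
print(f"{'net total':<38} {total:+7.1f}")
```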
Figure 5 also shows the influence of transport for the plans in the scenario with the shorter time scale (Scenario 1). The transport effect on the GWP indicator depends on the plan and accounts for approximately 13-26% of the plan's impact in this category. Therefore, a change in the transport parameters can affect the results; the main parameter is distance. To investigate the influence of distance on the results of the study, variants with local transport were modeled with the distance parameters set at 50 km instead of 100 km. Figure 8 shows that the differences between the plans and these local variants are 7-29%. These differences are too small to change the conclusions of the study. Transport is thus a minor contributor to GWP in our scenarios, but in other cases transport can make an important contribution, for example in the manufacturing of a bridge [88].
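Because transport impacts scale roughly linearly with distance, the sensitivity of the total GWP to the 100 km to 50 km change can be estimated directly from the transport share. The sketch below is a back-of-the-envelope check using the 13-26% share reported above; it is not the GaBi sensitivity run.

```python
# Back-of-the-envelope sensitivity: halving the transport distance scales the
# transport contribution by 0.5 while the rest of the life cycle is unchanged.

def total_gwp_with_distance(total_gwp, transport_share, distance_factor):
    """Scale only the transport part of the total GWP by distance_factor."""
    transport = total_gwp * transport_share
    rest = total_gwp - transport
    return rest + transport * distance_factor

baseline = 100.0  # arbitrary units; only the relative change matters
for share in (0.13, 0.26):  # reported range of the transport share in GWP
    local = total_gwp_with_distance(baseline, share, 50.0 / 100.0)
    change = (baseline - local) / baseline * 100.0
    print(f"transport share {share:.0%}: total GWP drops by about {change:.1f} %")
```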
End of Life Phase Figure 5 also shows the influence of the impacts caused by the EoL, which depends on the plan and amounts to 22-45% in the GWP category. Therefore, the method of waste removal was changed in Scenario 3. In this scenario, the constructions are deconstructed, and the concrete is then crushed and sorted; the recycled aggregates are used as a replacement for natural gravel. The functional unit was 1 m² of construction with a service life of 100 years. A different approach to the assessment of the end-of-life phase was investigated by Penades-Pla [89]. In comparison with Scenario 2, the plans in Scenario 3 have a smaller environmental impact, except in the ODP category, which is not influenced by this change, as shown in Table 11. Plan 2 s3 is still the most favorable construction and is even beneficial in the ADP category. Similarly, Table 12 shows that Plan 2 s3 caused the smallest impact in most categories except Stratospheric Ozone Depletion. Nevertheless, the results according to ReCiPe 2016 do not confirm the beneficial impact in the Fossil Depletion category, although they confirm that this plan is the best variant for this category. Figure 9 shows the relative impacts of Scenarios 2 and 3. The most significant relative difference is in the ADP fossil category, where Plan 2 s3 reaches a positive impact on the environment. However, the difference between the results of the two scenarios for a given plan is smaller than 24% for most of the impact indicators. This decrease in impacts confirms the positive effect of the deconstruction and aggregate recycling processes. According to CML2001, the only increase in impacts is in the POCP category; in contrast, the impacts of Scenario 3 in the Photochemical Ozone Formation category are lower than those of Scenario 2 according to ReCiPe 2016. Another possible change to the EoL was modelled in Scenario 4, which represents a situation in which the waste wood is not incinerated but is removed to the landfill as an admixture of the CDW; hence, in comparison with Scenario 2, the only change is in the EoL of the wood. Plan 4 is not included in Scenario 4 because this plan does not contain any construction wood. Dossche considered different waste scenarios for wood [85], and the environmental impacts of the disposal of building materials have also been assessed [90]. The results of the other plans are shown in Tables 13 and 14. Changes in this process have only a minor effect on the results and therefore no influence on the assessment of the construction types. The only noticeable difference between Scenarios 2 and 4 is in the ODP indicator, where Plans 1 and 3 have a smaller impact.
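The end-of-life variants in Scenarios 3 and 4 can be viewed as swapping the EoL module: landfilling adds only burdens, whereas recycling the crushed concrete and incinerating the wood with energy recovery add processing burdens but subtract credits for avoided natural aggregate and avoided grid energy. The sketch below shows that accounting with placeholder numbers; the study's actual EoL datasets come from GaBi and Ecoinvent.

```python
# Sketch of end-of-life accounting with avoided burdens (all values illustrative).

def eol_impact(processing, credits):
    """Net EoL impact = processing burdens minus credits for avoided production."""
    return sum(processing.values()) - sum(credits.values())

# Scenario 3: deconstruction + crushing/sorting + wood incineration with energy recovery.
scenario3 = eol_impact(
    processing={"deconstruction": 2.0, "crushing_sorting": 3.5, "incineration": 1.5},
    credits={"avoided_natural_aggregate": 2.5, "avoided_grid_energy": 3.0},
)

# Scenario 4: everything (including waste wood) landfilled, no credits.
scenario4 = eol_impact(
    processing={"demolition": 2.0, "transport_to_landfill": 1.0, "landfilling": 4.0},
    credits={},
)

print(f"scenario 3 net EoL impact: {scenario3:+.1f} (units arbitrary)")
print(f"scenario 4 net EoL impact: {scenario4:+.1f} (units arbitrary)")
```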
Conclusions Life Cycle Assessment has been used to compare the environmental impacts of four concrete constructions containing different types of aggregate in four scenarios over two time scales. Our results strongly suggest that the key factors in reducing the environmental impacts of concrete constructions are the use of recycled concrete aggregate in the concrete mixture and the reusability of the blocks used. At the shorter time scale, the construction containing recycled concrete aggregate had a lower impact than the other constructions, due to the positive impact of steel scrap recycling. Conversely, block reusability appears to become more important over time. Although we have shown that the environmental impacts of such constructions are also influenced by cement production, transportation, and disposal, their influence was not as significant as those of block reusability and aggregate type. Because the CML2001 and ReCiPe 2016 results can be interpreted differently in categories such as ozone depletion and photochemical oxidant formation, we used both characterization methods in order to obtain a fuller and more accurate picture. While CML2001 is the industry standard, it has limitations in the assessment of the impact in the Photochemical Oxidant Creation category. This leads us to recommend the use of ReCiPe for assessing the sustainability of concrete constructions.
10,362.8
2020-02-20T00:00:00.000
[ "Engineering" ]
Effect of heat and mass transfer on the nanofluid of peristaltic flow in a ciliated tube The current work focuses on the peristaltic flow of a Rabinowitsch nanofluid through a ciliated tube. This study analyzes heat and mass transfer effects on the peristaltic flow of an incompressible nanofluid in a ciliated tube. The governing non-linear partial differential equations representing the flow model are transformed into linear ones by employing appropriate non-dimensional parameters under the assumptions of long wavelength and low Reynolds number. The flow is examined in a wave frame of reference moving with the velocity c. The governing equations have been solved to determine the velocity, temperature, concentration, pressure gradient, pressure rise, and friction force. Using MATLAB R2023a software, a parametric analysis is performed, and the resulting data are represented graphically. The results indicate that the various emerging parameters of interest significantly affect the nanofluid properties within the tube. The present study enhances the comprehension of nanofluid dynamics in tubes and offers valuable insights into the influence of heat and mass transfer in such setups. Convective heat transfer is found to be greater at the boundaries, resulting in decreased temperature there. Nomenclature: R - radius of the tube; ρ_s, (ρβ)_s, (ρc_p)_s - density, thermal expansion coefficient, and effective heat capacitance of the solid particle; K_s - thermal conductivity of the solid particle; θ - temperature. Advances in fluids engineering and industrial sectors have aroused the interest of researchers in analysing mathematical models for non-Newtonian fluids. Non-Newtonian materials include slurries, coolants, lubricants, blood at low shear rates, ketchup, some paints, dirt, hygienic products, and many more. Several researchers have extensively studied the flow characteristics of such fluids due to their critical importance in diverse fields of science and technology, including polymer solutions, viscoelastic suspensions, metal spinning, lubricants, plastics manufacturing, molten metal distillation, crystals, and food processing. Given this diversity of fluid characteristics, a single constitutive model cannot encompass all non-Newtonian fluid behaviour. Peristaltic flow is a common method of fluid transport in which flexible chambers propel fluids through progressive waves of contraction and expansion. Shit and Majee 5 examined the unsteady MHD flow of blood and the thermal effect in an aneurysmatic artery using a finite difference approach; they documented that with an increasing magnitude of the Prandtl number, the Nusselt number is enhanced. Thermal radiation therapy is of significant importance in some medical procedures related to the cure of muscle spasm, myalgia (muscle pain), fibromyalgia, and contracture. Maqbool et al. 6 studied the effects of a magnetic field and copper nanoparticles on the flow of a tangent hyperbolic fluid through a ciliated tube. Tariq and Khan 7 explained the behavior of a second-grade dusty fluid flowing through a flexible tube whose walls are induced by peristaltic movement. Ellahi et al.
8 discussed the effects of heat and mass transfer on peristaltic flow in a non-uniform rectangular duct under the assumptions of long wavelength and low Reynolds number. Nadeem et al. 9 investigated a mathematical model for the peristaltic flow of a nanofluid through eccentric tubes comprising a porous medium. Shaheen and Nadeem 10 analyzed a mathematical model of ciliary motion in an annulus, taking the effects of convective heat transfer and nanoparticles into account. Iqbal et al. 11 investigated heat and mass transfer through a curved channel with bi-convection; the main findings of the analysis disclose that the temperature of the nanofluid decreases as the radiation and viscosity vary. Abd-Alla et al. 12 explained the effects of heat transfer and an endoscope on the peristaltic flow of a Jeffrey fluid in tubes. Akhtar et al. 13 constructed a polynomial scheme to find an exact solution for the peristaltic flow of a Casson fluid through an elliptic duct, focusing on the behavior of the streamlines; the assumption of constant density forces the streamlines to be close enough together that the velocity increases. The peristaltic flow of a micropolar fluid through an asymmetric channel with a new type of interfacial thin film layer at the boundaries was studied by Mahmood et al. 14. Tanveer et al. 15 presented an analysis of the peristaltic flow of Walter's B fluid with internal heat generation; they applied a numerical technique based on the shooting method, which expresses the results in terms of an interpolating function. Mansour and Abou-zeid 16 studied the influence of heat and mass transfer on the peristaltic flow of a non-Newtonian Williamson fluid in the gap between concentric tubes. Awan et al. 17 explained the numerical treatment of the dynamics of second-law analysis and magnetic induction effects on ciliary-induced peristaltic transport of a hybrid nanomaterial. Akbar and Butt 18 investigated the mechanical properties of a Rabinowitsch fluid model and the effects of thermal conductivity on it. In recent years, researchers have extensively focused on the peristaltic flow of Newtonian and non-Newtonian fluids (see, for example, [19][20][21][22][23][24][25][26][27][28][29][30][31] and several references therein). In Refs. [34][35][36][37][38][39][40], peristaltic flow with new parameters, with or without an endoscope, has been discussed. Recently, significant interest has developed in studying the peristaltic phenomenon due to its significant implications in the biological sciences and biomedical engineering. In the current study, an investigation is carried out to inspect the peristaltic pattern of the nanofluid model in the presence of mass and heat transfer. The flow is modelled under the small Reynolds number hypothesis and the long wavelength approximation. The influence of various emerging physical parameters on the obtained solutions is observed, and the obtained expressions are utilized to discuss the role of the emerging parameters on the flow quantities. Numerical computations have been used to evaluate the expressions for the velocity, temperature, concentration, pressure gradient, pressure rise, and friction force for various parameters of interest. Finally, the detailed computational results are compiled and discussed, together with their physical interpretation, and the graphical results for the velocity, temperature, concentration, pressure gradient, pressure rise, and friction force are examined for the influential parameters.
Flow description The current analysis is performed to study the two-dimensional peristaltic flow of a Rabinowitsch nanofluid in an infinite tube with a convective boundary condition. In addition, we also analyze the impact of the nanofluid, the thermal conductivity of the fluid, the viscosity at constant concentration, the thermal coefficient of the nanofluid, and the heat source. For the problem under consideration, the geometry is described in a cylindrical coordinate system (r, θ, z), representing the radial, azimuthal, and axial coordinates, respectively. Furthermore, the fluid flow is caused by the metachronal wave resulting from uniform cilia beating, whereas the temperature and concentration at the wall are T_0 and C_0, respectively (see Fig. 1). Formulation of the problem We consider the peristaltic flow of an incompressible nanofluid with iron oxide nanoparticles in a uniform tube, using cylindrical coordinates (R, Z), where Z is directed along the axis of the tube and R is the radial coordinate (see Fig. 1). The flow is described in two coordinate systems, one fixed and the other moving with speed c. The equations of motion for the Rabinowitsch nanofluid model comprise the r- and z-components of the momentum equation, the energy equation, and the concentration equation 18,32,33. The constitutive equation for the extra stress tensor S is defined as follows: The flow is driven by a sinusoidal wave train which moves with a constant speed c along the wall of the tube; the wall equation of the tube in the fixed system is given by: The equation of the cilia tips is given mathematically in the form: The transformations between the two coordinate systems are: where (u, w) and (U, W) represent the velocity components in the moving and fixed frames, respectively. The boundary conditions are given by: The thermo-physical properties can be written as: The dimensionless parameters are given as follows: Solution of the problem In view of the above transformations (9) and the non-dimensional variables (11), Eqs. (1)-(5) are reduced to the following forms: Applying the long-wavelength approximation, the small Reynolds number limit Re ≪ 1, and neglecting the wave number δ, the analysis can be completed by treating the flow as inertia-free, since Reynolds numbers are typically very low (Re ≪ 1) for flow in small-diameter tubules. Additionally, if the tube length is finite and equal to an integral number of wavelengths, along with a constant pressure difference at the ends of the tube, the flow may be steady in the wave frame of reference. The governing Eqs.
(12), (13), (14), and (15) in this situation, which make use of the long wavelength approximation, can be reduced to the following: The solutions for the velocity, temperature, and concentration subject to the boundary conditions (16) are given by: The gradient of the pressure: The volumetric flow rate: The rise of pressure: where Results and discussion This article discusses the peristaltic flow of a Rabinowitsch nanofluid with heat and mass transfer in a ciliated tube. The primary purpose of this section is to analyse the impact of the pertinent parameters, including the thermal conductivity of the nanofluid K_nf, the thermal conductivity of the fluid k_f, the viscosity at constant concentration α_c, the thermal coefficient of the nanofluid (ρβ)_nf, and the heat source Q, on the velocity (w), temperature (θ), concentration (C), pressure gradient (dp/dz), pressure rise (Δp), friction force (F), and shear stress (S_rz). Graphs are drawn to analyse the effects of the relevant parameters mentioned above using the MATLAB R2023a programming language. For this purpose, Figs. 2-9 are sketched to examine the features of all parameters; in particular, the variations of the parameters are examined. We first disclose the influence of the various parameters on the temperature and the heat transfer mechanism. Figure 2 shows the effect of the thermal conductivity of the nanofluid K_nf, the thermal conductivity of the fluid k_f, and the heat source Q. It is clear from Fig. 2 that the temperature surges as we increase the value of the parameter K_nf. This is because K_nf > 0 indicates heat generation, which means the temperature rises due to the internal friction of the nanofluid. The same figure elucidates the impact of convective heat transfer at the boundaries. It can be noticed from the temperature profiles that for higher values of Q, the temperature decreases near the wall; the reason behind this is the increased convective heat transfer at the boundaries, which in turn results in a decreased temperature there, while the temperature increases in the center of the tube. This result is in good agreement with the results obtained by Ellahi et al. 8. Mass transfer cannot be ignored when dealing with heat transfer in many industrial and physiological fluid-transport processes. Therefore, the current subsection deals with the influence of the different parameters on the concentration profile. Figure 3 depicts the impact of the thermal conductivity of the nanofluid K_nf, the thermal conductivity of the fluid k_f, and the heat source Q on the concentration C. The mass concentration decreases with increasing values of k_f and Q, while it increases with increasing K_nf. Thus, an increase in their values causes a decrease in mass diffusion, resulting in mass accumulation. The behavior can be justified by the fact that an increase in the value of K_nf results in a larger concentration gradient, which in turn moves more mass, thus causing an increase in the mass concentration. All of these concentration graphs reveal a parabolic outcome: the concentration is minimal in the centre of the tube while it increases toward the tube walls. This is in good agreement with what is observed in clinical practice, because nutrients diffuse out of the blood vessels to neighbouring tissues 32.
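The qualitative temperature behaviour described above (a heat source raising the temperature in the core while convective heat transfer lowers it at the boundary) can be reproduced with a strongly simplified, assumed energy balance. The Python sketch below evaluates the closed-form solution of (1/r) d/dr(r dθ/dr) + Q = 0 with a convective (Robin) condition at the wall; this reduced form and the parameter values are illustrative assumptions and do not reproduce the paper's full nanofluid energy equation.

```python
import numpy as np

# Sketch: temperature profile from an assumed reduced energy balance
#   (1/r) d/dr ( r dtheta/dr ) + Q = 0,
# with symmetry at r = 0 and a convective condition -dtheta/dr = Bi * theta at r = h.
# Q, Bi, and h are illustrative values; theta is measured above the ambient temperature.

def theta(r, q, bi, h=1.0):
    """Closed-form solution of the assumed balance: theta = Q(h^2 - r^2)/4 + Qh/(2 Bi)."""
    return q * (h**2 - r**2) / 4.0 + q * h / (2.0 * bi)

r = np.linspace(0.0, 1.0, 5)
for q in (1.0, 2.0):            # stronger heat source -> hotter core
    for bi in (1.0, 10.0):      # stronger convective transfer -> cooler wall
        profile = theta(r, q, bi)
        print(f"Q={q}, Bi={bi}: centre theta={profile[0]:.3f}, wall theta={profile[-1]:.3f}")
```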
Figure 4 addresses the impact of the related parameters on the velocity profile w of the Rabinowitsch nanofluid flow through the ciliated tube. It provides the graphical plot of the velocity for the thermal conductivity of the nanofluid K_nf, the coefficient of viscosity at constant concentration α_c, the thermal coefficient of the nanofluid (ρβ)_nf, and the heat source Q. Increasing K_nf and α_c reveals a decline in the velocity profile, while the velocity increases with increasing (ρβ)_nf and Q. Since the flow rate is directly related to the velocity, this causes the upsurge in velocity for both types of fluids. From the above discussion we can say that the velocity exhibits parabolic behavior for a dilatant fluid: the velocity profile has a maximum in the centre and eventually decreases towards the tube walls. This result is in good agreement with the results obtained by Rafiq et al. 32. Figure 5 shows the axial pressure gradient, expressed in terms of the independent variable z, plotted for the various corresponding parameters. A fluctuating behavior is shown by the pressure gradient, attaining its maximum at z = 0 and z = 1 and approaching its minimum at z = 0.5. Figure 5 also shows that dp/dz increases with increasing values of the heat source Q and the Brinkman number Br, while for increasing values of the Grashof number Gr and the ratio of relaxation to retardation times λ1 it declines in the interval [0, 0.2] and increases in the intervals [0.2, 0.8] and [0.8, 1]. This shows the existence of a high flow rate through the tube without the need for a large pressure gradient. Figure 6 is developed to illustrate how the embedded parameters affect the pressure rise Δp versus the mean flow rate. The non-Newtonian fluid characteristics can easily be described by the nonlinear nature of these curves. Figure 6 presents the graphical behaviour of Δp for an increasing ratio of relaxation to retardation times λ1 and thermal coefficient of the nanofluid (ρβ)_nf: for increasing λ1, an increase in the value of Δp is noted in the region Δp < 0, while it declines in the region Δp > 0; for increasing (ρβ)_nf, an increase in the value of Δp is observed in both regions Δp < 0 and Δp > 0. Peristalsis, which occurs as a result of the pressure difference, causes the flow rate to be positive in the zone of peristaltic pumping, whereas peristalsis of the tube boundaries produces a free-pumping region. This result is in good agreement with the results obtained by Ellahi et al. 8. Figure 7 indicates the variation of the friction force F on the tube with respect to the volume flow rate for different values of the ratio of relaxation to retardation times λ1 and the thermal coefficient of the nanofluid (ρβ)_nf in the peristaltic flow in a ciliated tube. It is clear that the friction force in a ciliated tube has a non-zero value only in a bounded region of space. It is observed that the friction force increases with an increasing ratio of relaxation to retardation times in the interval [−300, 0], while it decreases in the interval [0, 300]; it also decreases with an increasing thermal coefficient of the nanofluid. On the other hand, these figures show that F has the opposite behavior compared with the pressure rise Δp versus the physical parameters.
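The parabolic, centre-peaked velocity profile discussed above can also be illustrated with a minimal numerical sketch of the long-wavelength momentum balance. The code below assumes the commonly used non-dimensional Rabinowitsch constitutive relation S_rz + β S_rz³ = dw/dr, a constant pressure gradient, and no slip at the wall; these are assumptions for illustration, not the paper's exact equations, and the nanoparticle and thermal couplings are omitted.

```python
import numpy as np

# Sketch: axial velocity profile of a Rabinowitsch fluid in a tube under the
# long-wavelength / low-Reynolds-number approximation.
# Assumed (illustrative) closure: S_rz + beta * S_rz**3 = dw/dr, dp/dz constant,
# no slip at the wall r = h; nanofluid couplings are not included.

beta = 0.1      # Rabinowitsch fluid parameter (assumed)
dpdz = -2.0     # constant axial pressure gradient (assumed)
h = 1.0         # non-dimensional tube radius

r = np.linspace(0.0, h, 401)
S = 0.5 * r * dpdz                 # from (1/r) d(r S_rz)/dr = dp/dz with S_rz finite at r = 0
dwdr = S + beta * S**3             # assumed Rabinowitsch constitutive relation

# Cumulative trapezoidal integration of dw/dr, then shift so that w(h) = 0 (no slip).
dr = r[1] - r[0]
cumint = np.concatenate(([0.0], np.cumsum(0.5 * (dwdr[1:] + dwdr[:-1]) * dr)))
w = cumint - cumint[-1]

print(f"centreline velocity w(0) = {w[0]:.3f}")   # maximum speed at the tube centre
print(f"wall shear stress |S_rz(h)| = {abs(S[-1]):.3f}")
```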
Figure 8 displays the variation of the axial shear stress S_rz along the z-axis for different values of the Brinkman number Br, the heat source Q, the thermal expansion coefficient β_f and the thermal conductivity of the nanofluid K_nf; the stress shows an oscillatory behaviour over the whole range of the z-axis, which may be attributed to the peristalsis. In both panels it is clear that the shear stress has a non-zero value only in a bounded region of space. It is observed that the shear stress decreases with increasing Br and Q, while it increases with increasing β_f and K_nf. The shear stress at the tube wall varies with and without nanoparticles in the fluid and with the value of the non-Newtonian fluid parameter; it is important to note that the shear stress exerted at the tube wall by the streaming flow carrying nanoparticles is higher in magnitude than in the absence of nanoparticles.

Figure 9 shows 3D plots of the concentration C, the axial velocity w and the axial pressure gradient dp/dz against the r and z axes in the presence of the heat source Q, the Grashof number Gr and the cilia length ε. It is observed that the concentration decreases with increasing Q and increases with increasing ε, the axial velocity increases with increasing Gr and decreases with increasing ε, and the pressure gradient increases with increasing Gr. For all physical quantities, the 3D plots show overlapping and damping of the peristaltic flow as r and z increase, approaching a state of particle equilibrium; the vertical spread of the curves becomes more significant, and most of the physical fields follow the peristaltic motion.

Conclusion

The peristaltic flow of a Rabinowitsch nanofluid with heat and mass transfer in a ciliated tube has been investigated. The present investigation used a nanofluid model in a tube to analyse the peristaltic flow of a non-Newtonian fluid in a ciliated tube with heat and mass transfer. A thorough understanding of peristaltic flow regulation and malfunction is possible by incorporating numerous effects that mirror natural events, especially in small arteries. These findings shed new light on the properties of peristalsis during blood flow in the human circulatory system. The knowledge gathered from this study could help diagnose and treat vascular diseases and promote further scientific research. The main conclusions drawn from the model are summarised as follows:

1. The results obtained from the study indicate different flow behaviour for the dilatant fluid and the nanofluid.
2. Thermal conductivity: this phenomenon causes the temperature to fall during peristalsis, which indicates that the nature of heat transfer within the system influences the temperature fluctuations.

Figure 1. Schematic diagram of the wall positions of the tube when a peristaltic wave of slightly dilating amplitude propagates along it with velocity c.
Figure 2. Variation of the temperature θ with the radial coordinate r for different values of Q, K_nf and k_f.
Figure 3. Variation of the concentration C with the radial coordinate r for different values of Q, K_nf and k_f.
Figure 4. Variation of the velocity w with the radial coordinate r for different values of Q, K_nf, α_c and (ρβ)_nf.
Figure 5. Variation of the pressure gradient dp/dz with z for different values of Gr, Q, Br and λ1; the pressure gradient simply decreases along the boundary of the infinite tube, in good agreement with the results obtained by Rafiq et al.32.
Figure 6. Variation of the pressure rise Δp with the flow rate F for different values of λ1 and (ρβ)_nf.
Figure 7. Variation of the friction force F with the flow rate for different values of λ1 and (ρβ)_nf.
Figure 8. Variation of the shear stress S_rz with the z-axis for different values of Q, K_nf, β_f and Br.
Figure 9. 3D plots of the concentration against the r and z axes under the influence of Q and ε, of the velocity w under the influence of Gr and Br, and of the pressure gradient dp/dz under the influence of Gr.

Akbar et al.1 studied the magnetohydrodynamics and convective heat transfer of nanofluids synthesized with silver nanoparticles of three different shapes (brick, platelet and cylinder) in water. Narla et al.2 investigated entropy generation in biomimetic electroosmotic nanofluid pumping through a curved channel with Joule dissipation. Agoor et al.5 examined the peristaltic flow of a binary Powell-Eyring nanofluid with heat transfer in a ciliated tube. Shit and Majee4 discussed a pulsatile MHD flow of blood, treating blood as a Newtonian fluid with temperature-dependent variable viscosity, in an overlapping constricted tube under the influence of whole-body vibration. In a separate study, Shit and Majee5
4,648
2023-09-25T00:00:00.000
[ "Physics" ]
WOOD AND GENERATIVE ALGORITHMS FOR THE COMPARISON BETWEEN MODELS AND REALITY This study examines the emblematic case of a test room and its relation to digital modelling. This space is the result of a multi-optimization process and has been physically built to verify the initial hypotheses. As a result, it is actually a Physical Twin, designed to be transformable by removing a wall. The same space, on the other hand, has become useful for testing the Digital Twin logic by associating a BIM model with a dynamic representation of the data captured by the sensors. The representation is thus placed at the core of this cyclic phase between reality and representation, with the goal of validating the proposed theories through empirical practice, improving digital computational ability, and identifying pathways for monitoring the space's interactions with the environment and those who live in it.

INTRODUCTION

The modern definition of Digital Twin is a conceptual and substantive extension of the importance of digital models, which are "mirror images" (Batty, 2018) of experiences occurring in physical space that are transferred to the virtual world to enable new management systems. In the ever-expanding theme of digitization (Mitchell, 1995; Lenka et al., 2016), which is inherent in the fourth industrial revolution (Kamarul Bahrin et al., 2016; Schwab, 2016) and thanks to IoT sensors (Gubbi et al., 2013; Xu et al., 2014), anything that can be monitored in physical space can be translated into a virtual, multifaceted space (Iansiti and Lakhani, 2015) where data can be converted into information and computational power can be used to simulate the many aspects of form. This topic, which began with NASA (Glaessgen and Stargel, 2012), has a wide range of applications in the construction industry, especially in energy analysis and monitoring (Marszal et al., 2011; Shi and Yang, 2013). In order to minimize energy usage and increase overall building energy quality, new buildings in Europe must meet nZEB standards (Rodriguez-Ubinas et al., 2014), as buildings are responsible for 40% of CO2 emissions and energy consumption (Tian et al., 2018). The efficiency of a building is a significant requirement of today's market, which is rediscovering the importance of wooden structures for these reasons (Hoadley, 2000; O'Connor and Dangerfield, 2004; Wood handbook - Wood as an engineering material, 2010; Cabeza et al., 2013; Holstov et al., 2017). Thanks to manufacturing processes (Oxman and Oxman, 2010; Gramazio et al., 2014; Wood et al., 2016a, 2016b; Falamarzi and Correa Zuluaga, 2019) that are linked to mass customization (Pine and Slessor, 1999; Anderson, 2002; Kolarevic, 2015; Paoletti, 2017; Bianconi et al., 2019), wood design is enhanced by the digital (Bianconi and Filippucci, 2019a). Wood is a natural and smart material (Ugolev, 2014) that can be transformed by digital processes (Menges, 2009; Menges et al., 2017; Willmann et al., 2017). The use of generative logics (Schumacher, 2011; Bianconi and Filippucci, 2017; Chen and Sass, 2017) in conjunction with Artificial Intelligence (Bianconi and Filippucci, 2019b) to create form-finding processes (Bergmann and Hildebrand, 2015; Weinand, 2016; Hemmerling and Cocchiarella, 2018) that define the best solutions is particularly interesting. These concepts are then coupled with the BIM approach, which is a further transformation in design logic aimed at managing knowledge in a single environment.
This is essentially a cultural as well as a technological revolution (Eastman, 2011). The requirement to use BIM was only recently introduced into Italian law (Ministerial Decree 560/2017). A key date on this road is 2002, when Autodesk released a White Paper in which the term "Building Information Modeling" was used for the first time. Even though there are still many gaps to fill (Dainty et al., 2015), the obvious benefits of BIM (Khanzode et al., 2008; Barlish and Sullivan, 2012; Bryde et al., 2013) are persuading the various figures involved in building processes to adopt it (Smith, 2014). As a result, BIM is a Virtual Environment Platform (VMP) capable of storing, processing, and mapping various types of data (Zheng et al., 2019). The research is being conducted in this field as part of a collaboration between the Department of Civil and Environmental Engineering and Abitare+, a local innovative wood construction start-up, with the aim of triggering product and service innovation. The research begins with the development of generative models aimed at multi-optimizing the form, energy consumption, structure, and cost of wooden houses, and it ends with the integration of BIM models.

BACKGROUND

The analysis of generative models is followed by a proposal for an integrated mass customization-based design and manufacturing process, aimed primarily at wood construction technicians and specialists but also useful as a dissemination tool for students and researchers. First of all, the research aims to provide personalized housing designs (Bianconi et al., 2019), identifying a range of design solutions in which genetic algorithms are used to adapt and optimize the architectural model. The design concept is focused on the analysis of local codes and of the X-Lam and Platform-Frame building systems, with the goal of reducing waste and optimizing the construction process. Energy consumption, thermal and visual comfort, as well as price, were evaluated with the construction company and through iterative processes. The results of this first study, which began with the selection of solutions available to the company, prompted a closer examination of each element that makes up the building envelope. The focus of the investigation then shifted to improving the energy efficiency of wooden structures that had previously been customized to meet the location's specific requirements. The aim in this case is to use generative design tools to optimize the preliminary cost and efficiency of wood walls for X-Lam and Platform-Frame structures, with the goal of comparing them with the actual performance of the built solutions (Seccaroni and Pelliccia, 2019). The described workflow therefore begins with the implementation of generative algorithms that, while varying the wall materials and thicknesses at each iteration, return the thermal transmittance, decrement factor, time shift and costs, and verify the absence of interstitial condensation (Rossi and Rocco, 2014; Aste et al., 2015) (Fig. 1). The selected parameters can then be processed in a multi-objective optimization path based on evolutionary algorithms (Diakaki et al., 2008), in which more than 5000 possible material and thickness combinations have been automatically analyzed.
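As an illustration of how such an exhaustive evaluation of wall build-ups can be organised, the sketch below enumerates combinations of two insulation layers, computes a steady-state U-value and a cost for each, and keeps only the non-dominated (Pareto-optimal) ones, as discussed next. All material properties, prices, layer options and surface resistances here are invented placeholders, not the values used in the actual study, and the brute-force enumeration stands in for the evolutionary search.

```python
from itertools import product

# Hypothetical material catalogue: thermal conductivity [W/mK] and cost [EUR/m3].
MATERIALS = {
    "rockwool":  {"lam": 0.035, "cost": 180.0},
    "glasswool": {"lam": 0.037, "cost": 120.0},
    "woodfibre": {"lam": 0.040, "cost": 260.0},
}
THICKNESSES = [0.06, 0.08, 0.10, 0.12, 0.14]       # metres, per insulation layer
R_SI, R_SE = 0.13, 0.04                            # assumed surface resistances [m2K/W]
R_FIXED = 0.015 / 0.13 + 0.0125 / 0.25             # assumed plywood + plasterboard layers

def evaluate(layers):
    """Return (U-value, cost per m2) for a wall made of (material, thickness) layers."""
    r = R_SI + R_FIXED + sum(t / MATERIALS[m]["lam"] for m, t in layers) + R_SE
    cost = sum(t * MATERIALS[m]["cost"] for m, t in layers)
    return 1.0 / r, cost

def pareto_front(solutions):
    """Keep solutions that no other solution beats on both U-value and cost."""
    return [s for s in solutions
            if not any(o["U"] <= s["U"] and o["cost"] <= s["cost"] and o != s
                       for o in solutions)]

solutions = []
for (m1, m2), (t1, t2) in product(product(MATERIALS, repeat=2), product(THICKNESSES, repeat=2)):
    u, cost = evaluate([(m1, t1), (m2, t2)])
    solutions.append({"layers": ((m1, t1), (m2, t2)), "U": u, "cost": cost})

for s in sorted(pareto_front(solutions), key=lambda s: s["U"]):
    print(f"U = {s['U']:.3f} W/m2K, cost = {s['cost']:.1f} EUR/m2, layers = {s['layers']}")
```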
The best solutions can thus be selected, identifying the Pareto front (Wright et al., 2002; Wang et al., 2005) in which the combinations simultaneously present optimal values of the various parameters that determine the wall's behaviour in summer and winter, as well as the overall cost (Fig. 2). Through the construction of a test room, the study then shifted from virtual to physical: this is an abstracted representation of a wooden house reduced to the size of a paradigmatic space.

RESEARCH OBJECTIVES

The research's background demonstrates how the model's basis is digital representation. In this case, the logics inherent in the Digital Twin find an interesting inversion, where reality derives from representation, rather than the opposite. As a result, the process can be divided into three phases: simulation and multi-parametric optimization of building performance, construction of the test room and data collection, and implementation of the Digital Twin for real-time data exchange. Therefore, the aim of this study is to examine the relationship between digital and physical in an experimental process whose phases are linked to specific issues. First, it is interesting to investigate the aspects that arise in the implementation of the Physical Twin, a generic dwelling abstraction. Second, the study focuses on the development of the Digital Twin, which extends beyond the generative model to include data collected in the real world and represented in the model. As a result, evaluating data collection in combination with the monitoring tools installed in the test room becomes a key element, and their analysis and management within the BIM environment is the final step of the experimental process. This path is therefore proposed as a useful technique for validating simulation results and evaluating the model's reliability. It also establishes a cyclical framework of digital and interactive knowledge exchange to refine models and introduce new facility management logic.

Construction of the test room (Physical Twin)

The test room is a 20-square-meter single-story temporary pavilion with a 25-centimeter thick base slab and a 2.4-meter average height. The test room has two fully opaque walls and two glass openings facing east and south to optimize the climatic impact. The north wall was built to be removable and replaceable so that various stratigraphies, as well as the type of structure, could be tested (Fig. 3). Its specific conformation, in fact, allows it to be replaced with X-Lam panels instead of the prefabricated panels that constitute the Platform-Frame structure. In this first phase of the research, all four walls were built using Platform-Frame. Starting with the evolutionary algorithms' optimized solutions, a first wall was built in the test room, consisting of 11 cm of rockwool external insulation, 1.5 cm of marine plywood, 12 cm of glasswool between the structural elements and 5 cm of glasswool behind the plasterboard. The algorithm calculated a transmittance of 0.131 W/(m²K) for this wall. The single-pitch ventilated roof has an inclination of 10° and is made of glulam wood beams and 2.3 cm thick planking. Around 15 square meters of thin-film photovoltaic panels with storage batteries supply the heat pump, which is needed for cooling and heating in order to simulate typical indoor winter and summer thermo-hygrometric conditions (Fig. 4).
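The simulated transmittance of 0.131 W/(m²K) can be sanity-checked with a simple series-resistance calculation over the described layers. The conductivities, surface resistances and plasterboard thickness below are assumed typical values, not the ones actually used by the study's algorithm, so the result is only indicative.

```python
# Series-resistance estimate of the built wall's U-value (assumed conductivities in W/mK).
layers = [
    ("rockwool external insulation",  0.110,  0.035),
    ("marine plywood",                0.015,  0.13),
    ("glasswool between studs",       0.120,  0.037),
    ("glasswool behind plasterboard", 0.050,  0.037),
    ("plasterboard (assumed)",        0.0125, 0.25),
]
R_si, R_se = 0.13, 0.04  # assumed internal/external surface resistances [m2K/W]

R_total = R_si + sum(thickness / conductivity for _, thickness, conductivity in layers) + R_se
U = 1.0 / R_total
print(f"R_total = {R_total:.2f} m2K/W, U = {U:.3f} W/m2K")  # roughly comparable to 0.131 W/m2K
```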
Data collection

Several sensors and instruments have been installed in the test room to monitor the parameters required to characterize the wall's efficiency. Thermocouples are used to measure the air and wall temperatures both internally and externally. The thermal transmittance U of the walls is measured using heat flux meters. Temperature and humidity probes measure the temperature and relative humidity of the indoor and outdoor air. The S.A.L.E. monitoring system allows for on-site monitoring of the wood's moisture content, identifying any irregularities that may lead to biodegradation. Wireless sensors and tools for collecting, storing, and managing alert messages are used in the monitoring. All sensors are visible and removable (Fig. 5). The north-facing wall is tested because it is not directly exposed to solar radiation, which would influence the results, whereas the east-facing wall was also tracked as a reference. Monitoring was conducted through the previously described sensors during the summer period. The acquired data were used to determine the in-situ transmittance, which was compared with the one simulated by the algorithm, according to UNI ISO 9869 ('ISO 9869-1:2014 Thermal insulation - Building elements - In-situ measurement of thermal resistance and thermal transmittance - Part 1: Heat flow meter method', 2014). The standard states that the thermal resistance can be calculated as the ratio between the summation of the internal-external surface temperature differences and the summation of the thermal fluxes (Eq. 1):

R = Σj (Tsi,j − Tse,j) / Σj qj

where
Tsi,j = internal surface temperature obtained from the j-th measurement
Tse,j = external surface temperature obtained from the j-th measurement
qj = heat flux obtained from the j-th measurement

from which the conductance L can be found (Eq. 2):

L = 1 / R

Thermal transmittance is then calculated by taking into account the air temperatures, which incorporate the internal and external surface resistances (Eq. 3):

U = Σj qj / Σj (Ti,j − Te,j)

where
Ti,j = internal air temperature obtained from the j-th measurement
Te,j = external air temperature obtained from the j-th measurement

The measurements were taken over four days in July, with reasonably significant daily variations in the outdoor temperature (between 44 and 20 °C) and night-time values never dropping below the indoor temperature (kept constant at 20 °C), resulting in a sufficiently high temperature gradient. From these acquisitions the in-situ parameters of the wall were obtained. The transmittance measurement should be corrected for a 10% error associated with direct measurement. Furthermore, the specified thermal conductivity values, which were used in the algorithm's calculation, are valid for test conditions at 10 °C. On the basis of these considerations and the required corrections, it can be assumed that the real behaviour closely matches that obtained from the simulations.
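A minimal sketch of this heat-flow-meter post-processing is given below. The file name and column names are assumptions for illustration; the function simply accumulates fluxes and temperature differences in the averaging manner described above.

```python
import csv

def insitu_transmittance(csv_path):
    """ISO 9869-style averaging of a heat-flow-meter log.
    Assumed CSV columns: q, T_int_air, T_ext_air, T_int_surf, T_ext_surf."""
    sum_q = sum_air = sum_surf = 0.0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            sum_q += float(row["q"])
            sum_air += float(row["T_int_air"]) - float(row["T_ext_air"])
            sum_surf += float(row["T_int_surf"]) - float(row["T_ext_surf"])
    R = sum_surf / sum_q   # Eq. 1: surface-to-surface thermal resistance
    L = 1.0 / R            # Eq. 2: conductance
    U = sum_q / sum_air    # Eq. 3: transmittance based on air temperatures
    return R, L, U

# Hypothetical usage on the July acquisition of the north-facing wall:
# R, L, U = insitu_transmittance("north_wall_july.csv")
# print(f"U = {U:.3f} W/m2K (subject to the ~10% direct-measurement correction)")
```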
Realization of the Digital Twin

In the final section of the study, the information is returned from the built to the digital, in order to build a real-time monitoring system of what is happening within the test room and integrate those data into the model, creating a Digital Twin in the BIM environment. In fact, the test room has been remodelled as-built in Revit, with symbolic elements placed to indicate the location of the specific sensors in the test room that perform real-time monitoring. Special families have been used for this purpose, through the "Generic Models" family, whose instances have been modelled as spheres. Different colored spheres are correlated with parameters and readings that can be viewed remotely, in space and in real time. Depending on the type of sensor, four different sphere types have been created: air temperature (blue spheres), wall temperature (red spheres), thermal flux (orange spheres), and wood moisture (brown spheres). The different instances were then connected to the .csv files that are continuously updated at the set measurement interval (generally every 60 seconds). Each instance has been placed near the wall where the sensor it represents is actually located, and the relation between the different instances and the corresponding .csv files can be seen through the sensor schedule (Fig. 6). The benefit of automatically collecting the real data measured by the sensors is combined with the ability to track temperature, flux, and humidity values punctually in space and time through their real-time visualization, which can be accessed remotely. Another benefit is the ability to send the data obtained in the Digital Twin back into the initial algorithm used to simulate the energy performance of the various walls: this exchange is useful for refining the algorithm's calculation method and making the preliminary simulation step even more precise (Fig. 7).
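A minimal sketch of the kind of glue that keeps such a BIM-linked dashboard alive is shown below: it re-reads the continuously updated sensor .csv, extracts the latest reading per sensor, and raises a simple alarm flag that a home-automation rule could act on. The file name, column names and thresholds are hypothetical.

```python
import csv

# Hypothetical thresholds a facility-management rule might use.
ALARM_LIMITS = {"wood_moisture": 20.0, "air_temperature": 28.0}  # % and °C

def latest_readings(csv_path):
    """Return the most recent value per sensor id from a continuously appended log.
    Assumed columns: timestamp, sensor_id, sensor_type, value."""
    latest = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            latest[row["sensor_id"]] = (row["timestamp"], row["sensor_type"], float(row["value"]))
    return latest

def check_alarms(readings):
    alarms = []
    for sensor_id, (ts, kind, value) in readings.items():
        limit = ALARM_LIMITS.get(kind)
        if limit is not None and value > limit:
            alarms.append(f"{ts}: {kind} on {sensor_id} = {value} exceeds {limit}")
    return alarms

# Example usage with a hypothetical log updated every 60 s:
# for msg in check_alarms(latest_readings("test_room_sensors.csv")):
#     print(msg)
```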
CONCLUSIONS

The developed research emphasizes the importance of representation in digital modelling, with particular attention to the digitization process and the convergence of various aspects of the form into virtual computational tools. Models gather and analyze data and information through interconnected and interdisciplinary routes in order to transform them into knowledge. Because of its transdisciplinary nature, representation becomes the language of knowledge incorporation, introducing its own field of experimental and heuristic intervention, with paths that must be validated. The relationships between virtual and physical space provide a complex view of procedures that become cyclical. In fact, the approach starts from the algorithms of parametric multi-optimization of the walls to determine combinations of materials and thicknesses in relation to costs and performance. These simulated features must then be validated using sensors, whose data are sent back into the models to verify the initial assumptions. This approach enhances the very role of the model in its relationship with information: data are managed and visualized in the Digital Twin, which allows the initial algorithm to be refined, but they can also be used to dynamically and accurately explain what is happening in reality. It is then simple to link the data to alarms and home automation systems, which can be used, for example, to turn cooling or heating on and off. Even at this early stage, the Digital Twin appears to be useful for simulating and predicting future behaviour, optimizing resource and time usage, and generally improving management efficiency. In this way, an information ecosystem is generated, which creates knowledge and provides data for different purposes, depending on particular interests. The resulting process aids in structuring resilient processes to anticipate, respond, and react to what, due to its complexity, can only be controlled digitally and understood through design.
3,651.2
2021-01-01T00:00:00.000
[ "Computer Science" ]
Mobile Learning Practice in Higher Education in Nepal

During the first 15 years of this century, mobile technology has become a leading technology in the support of educational outcomes. This study investigated the mobile learning practices among undergraduates in higher education in the semi-urban and rural areas of the Gorkha district of Nepal. The objectives were to explore the availability of mobile technology for learning; its costs; learning trends, institutional policies, and attitudes towards mobile learning. These factors were explored to identify implications for pedagogical practice. The study adopted a mixed methods design, in which the quantitative data were collected by using a questionnaire with a sample of 161 undergraduates from six campuses. The qualitative data were collected from 19 purposively selected respondents by way of semi-structured interviews. The results indicated that virtually all undergraduates possessed mobile phones and used them informally for learning both inside and outside of their classes. The majority of the students had positive attitudes towards mobile learning. However, many were not satisfied with the effectiveness of their practices or with the level of institutional support for using mobile devices to support their learning. Although comprehensive mobile learning is not widespread in Nepal, enriching conventional learning through the incremental use of mobile devices is possible in Nepalese institutes of higher education. I conclude that teachers and institutions should provide guidance to students about the effective uses of mobile technology because the successful use of technology in learning largely depends on appropriate pedagogy and teacher support.

Mobile learning has become a distinctive area of modern digital learning. The United Nations Educational, Scientific and Cultural Organization (UNESCO) celebrated a Mobile Learning Week in February 2015, with the theme "leveraging technology to empower women and girls" (UNESCO, 2015). Its mission was to close the gender gap by promoting women's learning in developing countries with affordable handheld technology. The conference was an example of the impact of mobile learning across the world. Increasing numbers of conferences, workshops, seminars, and journal publications on mobile learning provide testimony to the influence of mobile learning across the world (Ally, 2009; Traxler, 2007). This paper discusses mobile learning practices among undergraduate students in rural areas of Nepal and considers Nepalese readiness to adopt mobile devices for learning in higher education.

Theoretical framework for mobile learning

Mobile learning is an emerging phenomenon and its effective use is presently unclear (Traxler, 2007; Mehdipour & Zerehkafi, 2013). Kukulska-Hulme (2009, cited in Shrestha, 2011, p. 108) argued that "mobile learning is a tricky term as mobility refers to mobility of technology, content and learners in the context of learning". There appears to be no consensus about how to conceptualise or define it (Ally, 2009). Mobile learning is often viewed as an updated version of e-learning, which incorporates learning experiences with electronic devices. Mehdipour and Zerehkafi (2013) claim that there is a whole-part relationship between e-learning and mobile learning in the wider context of digital learning. This view considers mobile learning as a part of e-learning. Their relationship can be illustrated as in Figure 1.
Figure 1 shows that mobile learning is a part of e-learning. Similarly, e-learning is a part of modern digital learning. Mobile learning can be viewed as a paradigm shift within the framework of e-learning. E-learning is often equated with learning experiences based on Internet-connected desktop computers. Mehdipour and Zerehkafi (2013) drew a distinction between e-learning and mobile learning:

E-Learning can be real-time or self-paced, also known as "synchronous" or "asynchronous" learning. Additionally, E-Learning is considered to be tethered (connected to something) and presented in a formal and structured manner. In contrast, mobile learning is often self-paced, un-tethered, and informal in its presentation (p. 9).

Traxler (2007) claims that the distinction between e-learning and mobile learning is blurred because mobile technology has largely overcome previous barriers to effective mobile learning. For example, mobile devices have considerable connectivity, screen size, storage, and processing power. A consideration of the definitions of mobile learning is necessary for a detailed discussion of it.

In essence, mobile learning is learning with a mobile handheld electronic device, at any time, anywhere (Kukulska-Hulme & Shield, 2008, as cited in Shohel & Power, 2010). In a broad sense, mobile learning refers to any learning that occurs when the learner is not at a fixed, predetermined location, or learning that occurs when the learner takes advantage of opportunities offered by mobile technology. In a narrow sense, O'Malley et al. (2005) stated that mobile learning refers to any kind of learning experience with handheld mobile devices that takes place both inside and outside of classes. This article views mobile learning as any kind of learning experience gained with portable digital devices both inside and outside the class.

The concept of mobile learning is consistent with the modern concept of lifelong learning. Sharples, Taylor and Vavoula (2006) observed a convergence between modern forms of learning and technology. They stated that new technology (personal, user-centered, mobile, networked, ubiquitous, and durable) is suitable for new learning (personalized, learner-centered, situated, collaborative, ubiquitous, and lifelong). They outlined some assumptions of mobile learning, which reflect the views of twelve MLearn project research leaders from various countries of the world. They viewed the learner as mobile and learning as interwoven with other daily activities. The control and management of learning can be shared between teachers and students. Learners create the learning context through interactions. Learning can fulfill goals and set new goals. Mobile learning can both complement and conflict with formal education. Ownership of learning and privacy are ethical issues in mobile learning. The rapid advancement and diffusion of technology in rural areas provide learners with opportunities to connect with larger learning communities outside their classes. This study was based on these assumptions of mobile learning.
Advantages of mobile learning

The discussion about the advantages of mobile learning began at the start of this century. The first MLearn conference was organized in Birmingham in 2002 (Traxler, 2007). Initially, mobile learning was seen as an innovative use of the latest information and communication technology in education, when voice calls and text messaging were the main features available on mobile devices. Currently, the development of portable, handheld mobile devices with Internet connectivity has offered greater access and possibilities for interaction and collaboration among teachers and students. Media-rich capabilities, such as decreasing weight, wider screen sizes with high resolution, high storage and processing speed, and extended battery backup, are fueling a transition to a 'Mobile Age' (Lee & Chan, 2007).

Mobile learning has a number of benefits. Mobile learning provides learning opportunities inexpensively because the cost of mobile devices is significantly lower than that of PCs and laptops. It also reduces the burden of buying several gadgets, since a mobile device has the capacity to create and deliver multimedia content. This can be used for both continuous and situated learning support. The user-friendly design of mobile devices reduces training costs for learners and teachers. It might also provide rewarding learning experiences. Mobile devices have the potential to improve levels of literacy, numeracy, and participation in education among young adults (Mehdipour & Zerehkafi, 2013). Similarly, they can be beneficial for both formal and informal learning because they offer an additional platform for interaction among teachers and learners on the one hand, and for sharing content knowledge on the other. They can promote learners' active participation in the learning process. Research projects have confirmed positive outcomes for mobile learning in both formal and informal learning situations (Kumar et al., 2010; Hayati, Jalilifar & Mashhadi, 2013). In the Nepalese context, mobile devices can provide opportunities to access the Internet from remote locations.

Nepalese context for mobile learning

Nepalese educational institutions are primarily structured at the school and university levels. Schools run from the pre-primary level to grade 12. Grades 11 and 12 are referred to as the higher secondary level. Universities and other tertiary institutions offer undergraduate to PhD level programs. Schools and undergraduate colleges tend to be located in remote areas.

The advent of mobile phones in Nepalese schools has posed a major threat to the ecology of the school, and school administrators have attempted to restrict their use because it is thought to be disruptive in classrooms (Bishowkarma, 2007). Policy-makers appeared to be unaware of the positive uses of digital technology inside and outside the classrooms. However, students from grades 6 to 10 have been using mobile phones secretly in their classes. The mobile penetration rate was 51.1 percent (Nepal Telecommunications Authority, 2011) when Bishowkarma prepared the report. He reported cases of both mobile use and misuse in schools. His article indicated the emergence of unsupervised mobile learning in Nepalese education.
The Higher Secondary Education Board (HSEB) decided to ban mobile phones in grades 11 and 12 to prevent distractions from study in school (HSEB, 2013). This issue generated heated debate among educationists, teachers, and students. Though some teachers and guardians welcomed the decision, the students clearly showed their dismay and argued that banning mobile phones is not a solution because a lot can be learned by using information and communication technologies. The HSEB decision has not been strictly implemented because of ineffective monitoring. Most parents and teachers still have reservations about mobile use in schools. However, mobile learning is increasing in informal learning, and this has received very little attention.

Nepal has better infrastructure for mobile communication technology than for other forms of information and communication technology. The Nepal Telecommunications Authority (May, 2015) reported that the mobile penetration rate in Nepal was 86% at the end of 2014. In line with other countries, young people, including university students, comprise a large portion of mobile subscriptions. Wider accessibility of the technology has increased the possibilities of mobile learning among Nepalese students. Whereas the Central Bureau of Statistics (CBS) reported that only 7% of households in Nepal had a computer by 2011 (CBS, 2012), the latest data show that the mobile and Internet penetration rates were approximately 101.17% and 44.37% respectively by 17 July 2015 (Nepal Telecommunications Authority, December 2015). Broadband Internet is available in a small number of cities. Ninety-five percent of users access the Internet by mobile phone. Nepal is making rapid progress in adopting mobile technology, which is a prerequisite for mobile learning. The pace of technological advancement is much faster than its educational application and evaluation (Terras & Ramsay, 2012).

Review of policies and plans for mobile learning

Discussion of information and communication technology integration in Nepalese school and university level education started only recently. The government of Nepal has formulated a master plan for ICT integration in education. The vision of the master plan is to "ensure extensive use of ICT in education sector and contribute for access to and quality education for all" (Ministry of Education, 2013, p. 4). It has a policy to bridge the existing digital divide by providing ICT-integrated teaching and learning environments. Some pilot programs are assessing the use of information and communication technology in schools. Tribhuvan University (TU) and Kathmandu University (KU) have policies that are designed to support the use of computer technologies for open and distance education students.
TU has taken some initiatives to integrate information and communication technology (ICT) in higher education. The TU Faculty of Education offers a teacher preparation course (B.Ed. in ICT) and the Faculty of Management offers a Bachelor of Information Management (BIM). The Institute of Engineering has a Centre for Information Technology (CIT) and an Information & Communication Technology Centre (ICTC). TU established the Open and Distance Learning Center in 2015, which aims to "provide access of quality higher education to mass people in Nepal through open and distance mode". The center will also support other institutions to integrate e-learning by hybridizing traditional education programs as a gradual transition to virtual learning (ODEC-TU, 2015). The Center will develop an Android application to assist learning (Adhikari, 2015). This initiative formally endorses mobile learning in the Nepalese higher education sector. The Center will develop resources and train faculties to promote information and communication technology in open and distance education. In this context, mobile technology can bridge the digital divide by offering an alternative technology for learning. Mobile learning based on new developments in mobile technology is an emerging trend in Nepalese higher education. Mobile devices might be an alternative technology for integrating information and communication technologies in Nepalese education. However, it is too early to predict how these initiatives will change Nepalese education. It is important for universities to examine students' current mobile learning practice before implementing new modes of learning.

Research on Mobile Learning in Nepal

There is a scarcity of research studies on mobile learning in Nepal. No formal research reports appear to have been published. However, there are a few magazine, journal and blog articles. Bishowkarma (2007) stated that he did not find any formal research on mobile learning in Nepal. His article published in Sikshak magazine shed some light on the issue. He pointed out threats and prospects for mobile learning in Nepalese schools. Based on a field survey carried out by Sikshak magazine inside the Kathmandu valley and in other districts (Tanahu, Biratnagar and Jhapa), he reported that a large number of school students beyond grade 6 had carried mobile phones in class. Schools had attempted to ban mobile phones, but these attempts have been mostly unsuccessful. The survey revealed both uses and misuses of mobile phones. A few students used mobile phones for learning word meanings, discussing homework, or solving arithmetic problems using the calculator features. They used the alarm feature to wake up in the morning. They listened to music and the radio, watched videos, browsed the Internet, and updated Facebook. On the darker side, the survey also revealed that some students watched pornographic videos and photos. Some students reported that they were bullied using mobile phones. Shrestha (2012) conducted research in some schools in the Chitwan district of Nepal with low-cost open-source mobile devices, specifically the Ben NanoNote and WikiReader, to access offline sources. His study showed that learning with mobile devices promoted student-centered learning. He noticed the scarcity of appropriate content customized for Nepali learners. No previous research appears to exist on mobile learning practices among university students in Nepal.

Framework of the study

Mobile learning practices depend on several variables. This study investigated some major determiners of mobile learning practices. The key variables of the study are presented in the diagram in Figure 2.
Figure 2 presents the factors that affect mobile learning. Students' mobile learning practices depend on personal factors like age, gender and interest. The type of device, network availability, battery backup, screen size and resolution, and the apps and other features available on the devices are some of the technological considerations which influence the actual use of mobiles for learning. Similarly, institutional policy, the nature of the curriculum and the assessment system also influence teaching methodology. Other important factors include teachers', parents', and peers' support for mobile learning. Theoretically, mobile learning takes place at any time, anywhere (Kukulska-Hulme & Shield, 2008, cited in Shohel & Power, 2010). However, students' time for other activities, such as part-time employment and family commitments, might contribute to variation in mobile learning practices. The cost of devices, call rates, mobile data charges, and the availability of Wi-Fi for Internet connections are some of the financial considerations which might also limit the use of devices.

Significance of the study

It can be seen from the discussion above that mobile learning has received mixed attention in Nepal. Mobile learning research has attracted few researchers. No one has carried out comprehensive research in Nepalese higher education yet. In this context, investigating students' current mobile learning practices will be significant since it will provide some descriptive data for educational policy makers, planners, administrators, and teachers.

Objectives of the study and research questions

The main objective of this study was to explore the mobile learning practices of university level students in Nepal. Its other objectives were to explore the availability of technology, the financial considerations of mobile learning gadgets, data charges, students' affordability, institutional policy and practices, and teachers' and parental support for students in mobile learning. It also aimed to suggest some implications for teaching, learning and research. This study was designed to answer the following research questions:

1. What is the technological and financial readiness for mobile learning among undergraduate students in rural areas of Nepal?
2. How do students use their mobile devices for learning?
3. What are their views on mobile learning?

Research design

This research used a mixed methods design employing both quantitative and qualitative techniques in a two-phase sequential data collection process, as an exploratory and descriptive research project. A student survey was conducted with a questionnaire containing both closed and open-ended questions. After the initial analysis of the survey data, semi-structured interviews were carried out with students selected by judgmental sampling, in order to obtain more comprehensive data on mobile learning across the diverse backgrounds of participants within the district. The interviews helped to clarify and interpret trends and issues that emerged from the open-ended section of the survey.
This research was carried out in the Gorkha district of Nepal. It is located in a mountainous area about 140 kilometers west of Kathmandu, the capital city of Nepal. There are six campuses of Tribhuvan University (TU) in the district. Two of the campuses have limited access to the Internet, mostly for administrators and faculty. Two are in the district headquarters, which is in a semi-urban area. The others are in rural areas. All the campuses run classes in the morning. Most of the students are from rural areas. Most come from farming families and help their parents with farm duties in the afternoon. They spend a considerable amount of time walking to and from campus because of limited public transport. Female students outnumber male students on all the campuses. The population of the study was all of the undergraduate students at the six campuses of Tribhuvan University (TU). One of them was a constituent campus of TU, and the others were affiliated campuses. The sample comprised 161 randomly selected undergraduate students (40 men and 121 women), with ages ranging from 17 to 28 years. Nineteen students (3 men and 16 women) were selected through judgmental sampling for in-depth interviews; these students included those with a disability, those with different types of mobile devices, and students from different geographical and cultural backgrounds.

Data collection and analysis

To survey students' demographic data, mobile learning practices, and attitudes towards mobile learning, a questionnaire was constructed which contained both open-ended and closed questions. The initial version of the questionnaire was based on the author's ideas on important issues in mobile learning and on informal discussion among colleagues. To ensure content validity, it was first sent for feedback to three experts, including one who was working in open and distance learning. After reviewing their feedback, the modified questionnaire was piloted with 10 students at Drabya Shah Multiple Campus. Following the information received from the pilot study, some questions were removed, for example, questions on mobile brand, the number of years respondents had been using mobile phones, and how often they changed their mobile phones. The final version of the questionnaire consisted of five sections. The first was to obtain demographic data, the second focused on access to technology, the third was on general uses of mobiles, the fourth section was on academic uses, and the final section was on students' perceptions of other issues relating to mobile learning.

The revised questionnaires were administered to the target students in their classes on the target campuses after gaining consent from the campus authorities. The students were assured of the privacy of their responses. The respondents were given the freedom to maintain their anonymity. They participated in the survey voluntarily. After the initial analysis of the survey, 19 students were interviewed for in-depth information on their practices and their expectations regarding mobile learning, using an interview guide. Six questionnaires were discarded because they were not completed. It was also found that some of the respondents missed some of the open-ended questions, which did not affect the analysis of the questionnaire.
The researcher administered the questionnaire in person to ensure that participants could seek clarification of the questions on the spot. Most of the questions were answered with relevant information. The researcher interviewed the respondents individually. They could freely share their experiences.

The closed-ended questions were coded and analyzed using Microsoft Excel. Simple descriptive statistical procedures were carried out on the quantitative data. Similarly, content analysis of general mobile uses, mobile learning activities, challenges, attitudes towards mobile learning, teachers' behavior, and parental support was performed on the qualitative data to describe the data trends.

Quantitative data

Demographic profile of the respondents: The first section of the questionnaire had questions on the background information of the respondents. An analysis of this section generated a demographic profile of the respondents, which showed that 75% of the respondents were women and only 25% were men. Their ages ranged from 18 to 28 years, with a mean age of 20.22 years and a standard deviation (SD) of 1.93. Each campus in the district was represented in the study, with 28% of the sample from Drabya Shah Multiple Campus, 34% from Gorkha Campus, 10% from Bhimodaya Campus, 13% from Bhawani Multiple Campus, 9% from Dullav Multiple Campus, and 6% from Paropakar Multiple Campus. The sample comprised 25% of the respondents from the Faculty of Management and 75% from the Faculty of Education. The majority of the respondents (77%) were full-time students.

Accessibility and cost: The second section of the questionnaire explored students' access to mobile technology and its financial costs. All students had a mobile phone. Almost half of them (45%) had smartphones and a little more than half (55%) had basic mobile phones. However, only 24% of them had computers (17% laptops and 7% desktops). Similarly, 32% had a digital camera, 5% had an iPad and 24% had MP3 players. Mobile Internet use is popular in Nepal: seventy-nine percent of respondents had Internet connections on their mobile phones. The average price of their mobile phones was Rs 7,440 ($US 74.40). Seventy-seven percent of the respondents paid less than Rs 10,000 ($US 100) for their mobile phones. The average monthly expense of the respondents was Rs 288 ($US 2.88). Almost half of them spent around Rs 200 ($US 2) per month on making calls and on data use.

General uses of mobile technology: The third section of the questionnaire sought to find out how the respondents used their mobile phones in day-to-day life. All of the respondents used their mobile phones for making phone calls and sending text messages. Email was used by 40% of the respondents, 68% of them used their mobile devices for entertainment, 50% used them for browsing the Internet, 81% for playing games, 58% for social networking, 65% for reading online news, and 90% used them for taking photos.
Mobile learning practices: The fourth section investigated students' mobile learning environments and practices. The majority of the respondents (82%) used their mobile devices for learning outside the classroom. Only 18% of the respondents stated that they wanted to use their mobile devices in class. Home was their favorite place for mobile use (80%). Nobody reported that the classroom was their favorite place for mobile learning. Thirty percent of the respondents were not sure whether they could use a mobile device for learning in class or not. One-third (33%) reported that they were not allowed to use mobile phones in class, whereas 37% reported that they were allowed to use mobile phones for learning in class. The majority of the respondents (55%) reported that they did not get any support from their teachers for mobile learning. Figure 3 shows how different functions of mobile phones were used for learning purposes. The majority of the respondents (74%) used their mobile devices for accessing offline (downloaded) content. On the other hand, only 59 (36%) of the respondents used their mobiles for accessing online content. Similarly, 54% of the respondents used their mobiles to listen to media broadcasts. Figure 3 also reveals that a large number of respondents (60%) used their mobiles for recording their English speaking practice. Likewise, 46% of respondents used an online dictionary and 58% used offline dictionaries. Figure 3 also indicates that a small number of respondents (22%) recorded class discussions.

The fifth section was developed to explore students' perceptions of mobile learning practices. This section contained 12 Likert-type items. Respondents were asked to respond to statements on a five-point scale ranging from strongly agree (5), agree (4), undecided (3), and disagree (2) to strongly disagree (1). Note: the Likert scale was designed with five points; strongly agree and agree have been grouped under agree, and strongly disagree and disagree have been grouped under disagree in the table.
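A small sketch of the descriptive treatment used for these items (an item mean on the 1–5 scale and the grouped agree/disagree percentages reported alongside Table 1) is given below; the response counts are invented for illustration.

```python
# Likert items coded 5 = strongly agree ... 1 = strongly disagree.
# Hypothetical response counts for one item: {score: count}.
responses = {5: 70, 4: 66, 3: 8, 2: 7, 1: 4}

n = sum(responses.values())
mean = sum(score * count for score, count in responses.items()) / n
agree = (responses[5] + responses[4]) / n * 100      # strongly agree + agree
disagree = (responses[2] + responses[1]) / n * 100   # disagree + strongly disagree

print(f"n = {n}, mean = {mean:.2f}, agree = {agree:.0f}%, disagree = {disagree:.0f}%")
# An item with mean ≈ 4.2 and ≈ 88% agreement matches the pattern reported for
# "mobile devices play a positive role in learning".
```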
Table 1 shows that the majority of respondents clearly agreed with the positive role of mobile devices in learning (Mean = 4.2; 88% agreed), and a similar number expressed the view that students should be allowed to take mobile devices to class (Mean = 4.02; 78%). The table also shows that the majority of the respondents were against banning mobile phones in class (Mean = 2.29; 71% disagreed). A little more than half of the respondents agreed that they would use their mobile devices appropriately if they were allowed to use them in class (Mean = 3.47; 57%). The table also indicates that they wanted some sort of orientation for mobile learning (Mean = 4.37; 86%), and nearly the same percentage agreed that teachers should provide guidance for mobile learning (Mean = 4.15; 84%). The respondents had mixed views about the negative potential of mobile devices on learning: thirty-six percent agreed that mobile devices had a negative role, 37% were undecided, and 27% disagreed. Relatively large numbers of respondents disagreed that mobile learning can replace face-to-face learning (Mean = 2.47; 64%). The data showed that they favored a blended learning mode. Similarly, they agreed that mobile phones could narrow the existing digital divide in the country (Mean = 3.95; 76%). Furthermore, almost equal numbers of students agreed that mobile learning should be integrated into the formal education system (Mean = 3.85; 76%). However, only about half of the respondents agreed that their parents and teachers had positive attitudes towards mobile learning.

Results from interviews

The following themes emerged from the analysis of the interview data.

Listening to audio books: Although it is not a very popular activity, some students learn by listening to downloaded material on their mobile devices. Students download audio novels or stories and listen to them. Some of them listen to course-related content. A visually impaired participant reported that she regularly listened to course-related content on her mobile: "We do not have books in Braille. Some audio books are available. I have downloaded them on my memory card and I listen to them whenever I want. They are blessing for us. We can study whenever we want" (Participant 1). She had also planned to record her English textbook, which was not available in audio format, by asking her teacher to read it for her.

Recording class lectures: Most of the students record their spoken activities and listen to the recordings either for learning or for entertainment. Some students secretly record their teachers' class lectures. One participant admitted that she recorded class lectures: "I have secretly recorded class lectures several times but I fear a lot. I think if the teacher knows it, he will be angry with me" (Participant 7).

Dictionary use: Dictionary use is a popular function of mobile devices. The students use both online and downloaded dictionaries on their mobile phones. One participant said, "Mobile dictionaries are easier to carry and faster to find words" (Participant 6).

Web searches: The study also shows that the respondents use the Internet function on their mobile devices. One of the participants said, "I remember I had searched various reasons for learning English on the Internet" (Participant 4). Another participant said, "I get confused with large amount of information after Google search. I cannot choose appropriate content" (Participant 5).
Phone calls: Some participants reported that they made phone calls to their teachers for learning. However, they reported that they phoned their friends and discussed their course while preparing for exams: "I had a problem with one question. Then I called one of my friend she helped me to find answer" (Participant 4).

Chat: Some of the participants admitted that they did not discuss course content on Facebook and other messenger sites. However, they kept track of the course while chatting if they could not go to campus.

Photographs: Participants used the camera function to learn. They reported that they took photographs of different books or of the board and readings. "My friend and I have different writers' books. When I find a useful text in her book, I capture the text with my mobile phone. It saves my time to copy. I can easily collect text from different sources" (Participant 7).

Calculators: Many students used their mobiles for calculating. "As it is readily available, I use it for simple calculation" (Participant 11).

Discussion, Conclusion and Recommendations

The present study gathered and analyzed data to understand current trends in mobile learning practices among undergraduates in the Gorkha district of Nepal. The results confirmed that students generally have a sound technological understanding of, and a positive attitude towards, mobile learning. Almost all of the students have a mobile phone, with a good number having smartphones. The cost of technology is an important issue. Students can buy low-priced smartphones; however, using the phones for learning is expensive. No higher education institution provides free Wi-Fi facilities for students in the district. It is expensive for students to download audio and visual learning material over a mobile data service, and poor speed and connectivity are further issues with Internet access. Although it is somewhat limited, some students are using their mobile devices for informal learning. They are using their mobile devices mainly for checking word meanings, browsing the web, and accessing multimedia. This shows that students need to learn and practice several other ways of making optimal use of their mobile devices for both formal and informal learning.

Recently, some universities have initiated a few programs in open and distance modes of learning using information and communication technology in Nepal. Mobile phones are readily available in Nepal. Therefore, the success of new open and distance learning programs will largely depend on the use of mobile devices as the basic technology of learning. It is necessary for parents and teachers to play a supportive role if mobile learning is to fulfill its potential. Although the present research was based on students in semi-urban and rural areas of Nepal, the findings might be useful for other developing countries where issues of technological and pedagogical development are similar.
Challenges of mobile learning in Nepalese higher education

There are numerous challenges to implementing successful mobile learning practices among university level students in Nepal. These are financial, technological, policy-related, pedagogical, and ethical. The cost of appropriate mobile devices and their operating costs are beyond the affordability of students in rural areas. More than half of the students use ordinary cell phones. Smartphones and tablets, which are more comfortable devices for on-screen reading, cost more than ordinary cell phones. Expensive data charges are another financial barrier to Internet-based mobile learning. Battery backup, charging mobiles during long hours of power outages, poor network connectivity, the small screen size of cell phones, and the availability of suitable software and hardware are technological challenges. The open and distance learning mode has just started with a few programs, and the support system is at an initial stage. Almost all higher education programs are delivered face to face and do not recognize the role of information and communication technology in education. Most teachers use chalk-and-talk instructional approaches in class. Unless teachers change their pedagogy, mobile learning will not be successful.

Contribution of mobile learning to Nepalese higher education

Mobile learning can play a key role in Nepalese higher education by offering an additional platform for learning both inside and outside of the classroom. Students can be members of a global learning community and get opportunities to use the vast resources that are available on the Internet. Students and teachers do not always need to be in the same class at the same time for discussions. Learning is integrated with other day-to-day activities. High dropout rates and absenteeism are common in classes in rural areas. If teachers deliver some lessons for use on mobile devices, this may keep irregular students on track with their learning journey. They can interact with teachers and other students.

Recommendations for the implementation of mobile learning in Nepal

Policy: Although some universities have introduced open and distance learning courses in a few disciplines that use information and communication technology, there appear to be no written policies to guide mobile learning practices in higher education. In this regard, universities should formulate policies to recognize mobile learning as a supplementary mode of learning as part of blended learning in higher education. They should introduce hybrid courses at the undergraduate level. Current face-to-face learning can be enriched with the introduction of mobile learning. Every institute of higher education should be supported to develop as a resource and support center for mobile learning practices. This will help to reach more students who live in isolated rural areas of Nepal. The universities and tertiary institutions should provide support, training, orientation, and research for ICT integration and mobile learning practices on a wider scale.
Operational: Policy should help people to develop an understanding of mobile learning; however, it is practice that brings about real changes. Each tertiary institution should create an appropriate environment for effective mobile learning practices. Each campus should formulate a code of conduct for mobile learning practices on the campus. Teachers and students should set ground rules for judicious mobile learning practices inside and outside the class for safe learning experiences. Faculties should conduct surveys on the available technology and students' mobile learning practices to develop a support system. They should organize seminars and workshops for effective mobile learning practices. They should develop a culture of information and resource sharing. They should develop mobile learning resources and share them with the students. Campus administration should keep the faculties updated with the available mobile learning technology. Each campus should provide Wi-Fi facilities so that students can use the Internet without worrying about mobile data charges. Pedagogical: Using technology in teaching is not an end in itself. How and what students learn with their mobile devices largely depends on how the technology is integrated to support the teaching and learning process. Faculties should devise appropriate teaching methods that demand the use of mobile learning in addition to face-to-face learning. Faculty members should develop flexible learning and assessment methods. Faculty members should send assignments, feedback, etc. to mobile phones. Faculties should record class lectures and share them with students, so that students can focus on information processing rather than information possessing. Limitations and further research Mobile learning is a complex phenomenon. This descriptive exploratory study assessed current mobile learning trends among undergraduate students in Nepal. The study excluded more mature Masters-level students. The study was conducted in public institutions with students from rural and semi-urban areas. Therefore, the results should be interpreted with caution, as the situation in urban and private institutions may be different. This study has not included teachers' and parents' viewpoints. Therefore, future research should include other stakeholders, for example, teachers, principals, and parents. Longitudinal qualitative research and experimental research could examine the effectiveness of supervised and unsupervised mobile learning in this context in the future. Figure 1: Relationship between mobile learning and e-learning. An earlier study conducted research in some schools in the Chitwan district of Nepal with low-cost, open-source mobile devices, specifically the Ben Nanonote and Wikireader, to access offline sources. That study showed that learning with mobile devices promoted student-centered learning, and it noticed the scarcity of appropriate content customized for Nepali learners. No previous research appears to exist on mobile learning practices among university students in Nepal. 
Figure 2: Factors affecting mobile learning practice. Figure 2 presents the factors that affect mobile learning. Students' mobile learning practices depend on personal factors like age, gender, and interest. The type of device, network availability, battery backup, screen size and resolution, and the apps and other features available on the device are some of the device-related factors. Figure 3: Uses of Mobile Phones for Learning. Photographs: Participants used their camera function to learn. They reported that they took photographs of different books, boards, and readings. "My friend and I have different writers' books. When I find a useful text in her book, I capture the text with my mobile phone. It saves my time to copy. I can easily collect text from different sources" (Participant 7).
8,106
2016-03-03T00:00:00.000
[ "Education", "Computer Science" ]
Characterization and Differentiation of Petroleum-Derived Products by E-Nose Fingerprints Characterization of petroleum-derived products is an area of continuing importance in environmental science, mainly related to fuel spills. In this study, a non-separative analytical method based on an E-Nose (Electronic Nose) is presented as a rapid alternative for the characterization of several different petroleum-derived products, including gasoline, diesel, aromatic solvents, and ethanol samples, which were poured onto different surfaces (wood, cork, and cotton). The working conditions for headspace generation were 145 °C and 10 min. Mass spectrometric data (45–200 m/z) combined with chemometric tools such as hierarchical cluster analysis (HCA), followed by principal component analysis (PCA) and finally linear discriminant analysis (LDA), allowed for full discrimination of the samples. A characteristic fingerprint for each product can be used for discrimination or identification. The E-Nose can be considered a green technique, and it is rapid and easy to use in routine analysis, thus providing a good alternative to currently used methods. Introduction Characterization of petroleum-derived products (PDPs) is an area of continuing importance in environmental science, for example for the identification of fuel spills [1]. Oil and fuel spills (accidental or illegal intentional operational discharges) are commonplace, but a full determination of the source is not always straightforward [2][3][4][5]. It is important to discriminate the type of spill to identify the source, evaluate the level of hazard, and employ the appropriate cleanup treatment [6,7]. Furthermore, very rapid identification is advantageous in that the appropriate clean-up procedure can be started as soon as possible. In most cases, the PDP is not found in isolation but adheres to different surfaces (wood, rocks, cloth, flooring, etc.). The absorption characteristics of the substrates may affect the evaporation rate of the PDP and consequently the analysis and the identification [8]. GC-MS-based methods, including some sample preparation steps, are usually applied to characterize fuel-related products [9]. Using GC-MS, both chromatographic and spectroscopic information can be used to characterize the samples. In either case, both chromatograms and spectra must be treated using a chemometric approach, in some cases using the total ion chromatogram (TIC), in other cases using extracted ion chromatograms [10], in combination with target compound analysis [11][12][13]. When the liquids are weathered, this method can be time-consuming and potentially subjective, since the interpretation of the results varies with the analyst's experience. Furthermore, this approach does not allow for full automation. Additionally, some of the information within the analytical results is not used. One option to overcome the problems associated with the interpretation of GC-MS data is the application of chemometric tools. Chemometrics allows for the classification of data and the extraction of useful information, including discrimination between different groups of samples, in an almost automatic procedure [10,14]. 
Such procedures have included covariance mapping [15,16], principal components analysis (PCA) [5,11,[17][18][19], linear discriminant analysis (LDA) [19], quadratic discriminant analysis (QDA) [20], artificial neural networks [5], soft independent modeling of class analogy (SIMCA) [21], cluster analysis [22], self-organizing feature maps [21], and fuzzy rule-building expert system classification [23]. In some cases, the quantification of compounds in the chromatogram is not needed, as the chromatographic pattern itself can be used to characterize samples. This procedure is very simple and also very useful, as it can produce fingerprints that enable very fast sample identification. However, it cannot be applied in laboratories or on equipment different from those used for method development, because small differences in the retention times would produce unreliable results. One strategy to overcome the challenges related to changing retention times is the analysis of the total ion spectrum (TIS), which is obtained by summing the intensities of each m/z signal over all chromatographic results. The TIS is time-independent and has already been applied as a method for the rapid classification of petroleum-derived products once several reference samples have been included in a database [24]. GC-MS has been demonstrated to be helpful in this field, but relatively long analysis times are needed. Therefore, the development of methods that do not require chromatographic separation for the solution of several analytical problems is currently of interest in many fields. A fingerprint of the sample obtained by integrating all the components is sufficient in some cases [25]. The application of different spectroscopic techniques, such as NIR, FT-IR, or Raman, combined with chemometric tools has also been described in the literature as an alternative to chromatographic techniques [26][27][28]. These spectroscopic techniques have several advantages since they are non-destructive, easy to use, and cheap, can be applied in situ, and require limited or no sample preparation. However, these methods also have certain drawbacks, as they provide limited information about specific components and they do not have as high a sensitivity as a mass detector. In the study described here, we propose a non-separative analytical method based on an E-Nose using a mass spectrometer as the detection system. This system can be considered an E-Nose because it detects volatile compounds, records data, and uses data treatment to develop models and fingerprints [29]. In recent years, E-Noses have been heavily developed for several applications, and some portable systems have even been used for the classification of gases [30,31]. This system was optimized in a previous study, and the resulting method validated, for the analysis of gasoline samples with different research octane numbers (RONs) [32,33]. In the response pattern obtained by the E-Nose, the final mass spectrum is equivalent to the summed ion spectra, and it is obtained in only a few minutes because there is no chromatographic separation. The resulting total ion spectrum is characteristic of the sample being analyzed and can therefore be utilized as a fingerprint of each sample [34]. This technique has previously been applied to studies related to PDPs. In this sense, Feldhoff et al. 
presented a study in which they compared the performance of different systems for the characterization of diesels and concluded that the MS data were obtained easily and were more reproducible [35]. In the same way, Pavón et al. used a headspace mass spectrometer to analyze methyl tert-butyl ether (MTBE) in gasoline samples. In all cases, the chemometric treatment of the spectroscopic results is essential in order to extract the information contained within the signal profile [36]. The aim of the work described here was to develop a new method to determine the presence/absence of petroleum-derived products using fingerprints. Such an approach would enable the fast and easy identification of different PDPs (gasoline, diesel, ethanol, and aromatic solvents) adhered to different surfaces (wood, cork, and cotton sheet) using an E-Nose technique combined with chemometric tools. Each type of liquid (80 µL) was added to pieces of the different materials (5 cm × 5 cm) inside the E-Nose glass vials, and they were analyzed in these vials. All samples were prepared in duplicate, and the average mass spectrum for each case was used for the study. Acquisition of E-Nose Spectra The substrate samples were analyzed on an Alpha Moss (Toulouse, France) E-Nose system composed of an HS 100 static headspace autosampler and a Kronos quadrupole mass spectrometer (MS). The samples were contained in 10 mL sealed vials (Agilent Crosslab), and these were then placed in the autosampler oven to be heated and agitated in order to generate the headspace. The headspace was finally taken from the vial using a gas syringe and injected into the mass spectrometer. The gas syringe was heated above the sample temperature (+5 °C) to avoid condensation phenomena. Between each sample injection, the gas syringe was flushed with carrier gas (nitrogen) to avoid cross-contamination. The experimental conditions for the headspace sampler were as follows: incubation temperature: 145 °C; incubation time: 10 min; agitation speed: 500 rpm; syringe type: 5 mL; syringe temperature: 150 °C; flushing time: 120 s; fill speed: 100 µL/s; injection volume: 4.5 mL; injection speed: 75 µL/s. The total time per sample was approximately 12 min, i.e., approximately 10 times faster than regular methods using sample preparation plus GC-MS analysis. The components in the headspace of the vials were injected directly into the mass detector without any chromatographic separation or sample pre-treatment. Using this process, for any given measurement, the resulting mass spectrum gave a signal typical of the product. Ion electron impact spectra were recorded in the range m/z 45-200. Instrument control was achieved using the residual gas analysis and the Alpha Soft 7.01 software package. Data Analysis and Software The mass spectra were normalized to the maximum intensity for each sample. Multivariate analysis of the data, which included hierarchical cluster analysis (HCA), PCA, and LDA, was performed using the statistical computer package SPSS 22.0 (SPSS Inc., Chicago, IL, USA). 
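The multivariate workflow just described (max-intensity normalization followed by HCA, PCA, and LDA) was carried out in SPSS; purely as an illustration, a minimal sketch of the same normalization, clustering, and variance-based factor selection with open-source Python tools is given below. The array names, the random placeholder data, and the Ward linkage choice are assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

# Hypothetical input: 39 samples x 156 m/z channels (m/z 45-200), as in the text.
rng = np.random.default_rng(0)
spectra = rng.random((39, 156))          # placeholder data only

# Normalize each spectrum to its maximum intensity (as stated in Data Analysis and Software).
spectra_norm = spectra / spectra.max(axis=1, keepdims=True)

# Hierarchical cluster analysis (exploratory grouping; the linkage method is an assumption,
# since the paper does not state which criterion was used in SPSS).
Z = linkage(spectra_norm, method="ward")
groups = fcluster(Z, t=5, criterion="maxclust")

# PCA retaining enough factors to explain at least 99% of the variance.
pca = PCA(n_components=0.99)
scores = pca.fit_transform(spectra_norm)
print("factors retained:", pca.n_components_)
print("explained variance ratio:", pca.explained_variance_ratio_)
```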
Results and Discussion In a previous study, an E-Nose-based method was optimized for the analysis of gasoline samples with different research octane numbers [29]. The optimized HS conditions for the discrimination of the different gasoline samples were as follows: a 145 °C incubation temperature, a 10 min incubation time, and an 80 µL sample volume. The present work focuses on the applicability of this optimized method to the discrimination of different petroleum-derived products (PDPs) supported on different solid materials. In this case, 80 µL of each PDP was added to the different substrates, and these samples were analyzed by the E-Nose using the optimized method for HS generation. The PDP was placed on the surface of the material in order to evaluate whether the E-Nose is able to detect the PDP even when it is adsorbed on a surface, and to ascertain whether signals from the PDP can be differentiated from the signals due to the supporting material. All of the spectra were recorded in the range m/z 45-200, and the resulting spectra were normalized to a total intensity of one. The spectra obtained for all the liquids upon heating the samples for 10 min at 145 °C are shown in Figure 1. Visual inspection of the spectra (Figure 1) shows differences between some of the PDP samples. For instance, ethanol samples are the only ones that present a signal at m/z 45, and one of the aromatic solvents is the only liquid that presents signals at m/z 156 and m/z 158. However, even gasoline and diesel samples have several common signals in the E-Nose system. It can be seen that drawing any conclusion on the presence/absence of a PDP based on visual pattern recognition of the total mass spectrum is time-consuming. Besides, this procedure is highly dependent on the skill and experience of the analyst and does not allow automation. Therefore, the possibility of developing fingerprints that would enable automatic data interpretation for fast PDP identification is of great interest in this field. In order to develop the PDP fingerprints, it is necessary to identify the signals in the spectra that allow for the discrimination of the different liquids. It is therefore necessary to apply chemometric tools in order to extract the information contained in the signal profiles (i.e., to identify the appropriate m/z signals) that create a characteristic pattern or fingerprint for each type of sample. This work is the first time that the E-Nose has been used with this aim, and it was therefore necessary to check the applicability of the system for the characterization of different PDPs (gasoline, diesel, ethanol, aromatic solvents) adsorbed on different surfaces. For this reason, an exploratory tool, namely HCA, was applied using all of the m/z values (45-200 m/z) as variables to form groups. A total of 39 samples (9 substrates containing gasoline, 9 diesel, 9 ethanol, 9 aromatic solvents, and the 3 materials without any liquid) were analyzed; two samples were analyzed for each PDP, and the average spectrum was employed in each case. 
The results of the HCA are represented in the dendrogram in Figure 2, in which a clear differentiation of some groups of samples can be observed. A primary trend for grouping can be seen in Figure 2 according to the presence or absence of liquid on the substrates, since all of the substrates that were free of PDPs (Nco, Nsh, Npw) are separated from the rest of the samples in a different cluster, namely Cluster 2 (dark blue). Based on these results, the E-Nose allows the detection of the studied PDPs on any of the materials on which they are supported. Cluster 1 contains all of the substrates with the different PDPs, and all of the samples containing the same type of liquid were grouped together in different sub-clusters: samples with ethanol in Cluster 1.1, those with aromatic solvents in Cluster 1.2, those with diesel in Cluster 1.3, and those with gasoline in Cluster 1.4. From the HCA it can also be observed that samples containing diesel and gasoline are grouped closely, and a similar trend is observed with samples containing ethanol and aromatic solvents. This tendency to cluster according not only to the presence/absence of the liquids but also to the type of PDP indicates that the data from the E-Nose analyses used to perform the HCA are related to the compounds that are responsible for the characterization of the different liquids, as well as of the supporting materials without PDP. Based on this result, a principal component analysis (PCA) was performed on the data set. 
PCA provides information about the specific m/z signals related to the maximum variability between samples. Additionally, the contributions of the m/z values to the different factors help in the construction of the discrimination model. According to this PCA, seven factors were required to explain at least 99% of the information contained in the body of mass spectra for the samples. After analyzing the loadings for the resulting factors, it was seen that Factor 1 showed a clear contribution from specific m/z signals. The loadings for the first five factors are depicted in Figure 3. It can be seen that the first factor explains 79.62% of the variance and that its signals (m/z), except for the signal at m/z 105, are below m/z 100. Factor 2 accounts for 5.37% of the information and only shows a signal at m/z 45. Factor 3 accounts for 5.08% of the information and, as in the case of Factor 1, all of its signals except for m/z 106 are below m/z 100. Factor 4 and Factor 5 each account for less than 5% of the variance. A biplot of the scores on the PC2-PC1 plane for all the samples with PDP is shown in Figure 4. It can be observed that four different groups of samples (which correspond to the four types of PDP) are clearly distinguished using only the first two factors (PC1 and PC2). Besides, it can also be seen that both factors are needed for a full separation of the samples with regard to the type of PDP added to the substrate. 
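To make the loading analysis above concrete, the following sketch shows how the dominant m/z channels and the explained variance of each factor could be read off a fitted PCA model. It continues the hypothetical `pca`, `scores`, and normalized-spectra objects from the earlier sketch and is not the authors' SPSS output.

```python
import numpy as np

# Continues the hypothetical `pca` and `scores` from the previous sketch.
mz_axis = np.arange(45, 201)          # m/z 45-200, i.e. 156 channels

# Loadings: one row per factor, one column per m/z channel.
loadings = pca.components_

# Report the m/z values contributing most strongly to each of the first five factors.
for i in range(min(5, loadings.shape[0])):
    top = mz_axis[np.argsort(np.abs(loadings[i]))[::-1][:5]]
    var = 100 * pca.explained_variance_ratio_[i]
    print(f"Factor {i + 1} ({var:.2f}% variance), dominant m/z: {sorted(top)}")

# Scores on the PC1-PC2 plane, the basis of a biplot such as Figure 4.
pc1_pc2 = scores[:, :2]
```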
Based on the tendency to grouping shown by the HCA and PCA, and since the objective of this study was to develop a methodology for the characterization of the samples, a supervised technique, namely LDA, was applied to the whole body of mass spectra (m/z 45-200). In an effort to achieve a more robust discrimination, prior to running the LDA around 70% (n = 28) of the samples were randomly selected as a training set to obtain the discriminant functions, and the remaining 30% (n = 11) of the samples were used as a validation set. 
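A minimal sketch of the 70/30 split and the supervised LDA step described above is given below. The label vector, the use of scikit-learn, and the absence of stepwise variable selection (the sketch fits on all m/z channels) are assumptions for illustration only; they do not reproduce the authors' SPSS procedure.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical labels for the 39 samples: gasoline, diesel, ethanol, aromatic, none.
labels = np.array(["gasoline"] * 9 + ["diesel"] * 9 + ["ethanol"] * 9
                  + ["aromatic"] * 9 + ["none"] * 3)

# ~70/30 random split into training (28) and validation (11) sets, as in the text.
X_train, X_val, y_train, y_val = train_test_split(
    spectra_norm, labels, test_size=11, random_state=0, stratify=labels)

# Linear discriminant analysis on the full m/z 45-200 profile.
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)
print("validation accuracy:", lda.score(X_val, y_val))
```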
In order to identify whether there are specific m/z values in the mass spectra that are more significant than others when classifying the PDPs, and with the aim of developing a fingerprint for each PDP that allows for automatic classification of the samples, a stepwise discriminant analysis was performed. Five groups were defined, specifically samples with gasoline, diesel, ethanol, or aromatic liquids, and samples without PDPs. The resulting linear discriminant functions enabled full discrimination between the five groups of PDP samples. Samples from both sets (calibration and validation) were unambiguously assigned to their corresponding group. The m/z values selected to develop the discrimination function were as follows: m/z 45, m/z 47, m/z 56, m/z 57, m/z 59, m/z 64, m/z 65, m/z 70, m/z 71, m/z 85, m/z 92, m/z 120, m/z 126, m/z 138, m/z 145, and m/z 193. This means that there are different spectroscopic areas within the m/z range studied that are related to the different PDPs, and these are required to obtain a full discrimination of the samples. When only the abundance values of these m/z are represented, the different fingerprint obtained for each type of PDP can be observed (Figure 5). All of the values were normalized to the base peak at 100%. A limited number of m/z values are above 50% of the maximum intensity for each fingerprint. The scenario is completely different for substrates without PDPs (none): for these samples, most of the m/z values of the fingerprint are above 0.5 (50% of the maximum intensity). In contrast, samples that contain PDPs show very few important signals (m/z values above 0.5) and, in the case of substrates with ethanol, not even a single signal of this magnitude. In samples with ethanol, the characteristic signal is m/z 45; in samples with aromatic solvents there are two such signals, m/z 92 and m/z 65, and m/z 45 also seems to be important even though its intensity is below 0.5. In samples with gasoline or diesel, the maximum signal is m/z 57, but the other signals are not the same or do not have the same intensity. Gasoline samples show high-intensity signals at m/z 59, but this signal is virtually absent from samples with diesel. The same behavior can be seen for m/z 56 and m/z 59, which are also intense in gasoline samples, whereas in diesel samples m/z 56 is below 0.5 and m/z 59 is almost absent. In addition, the intensities and the ratios of the rest of the signals are also different for each PDP, thus giving different fingerprints that can be used to discriminate the different PDPs. This finding is illustrated in Figure 5 for samples with gasoline and diesel. Both types of sample present signals at m/z 70 and m/z 71, but not above 0.5. However, the ratio between these m/z values (70/71) is above 1 for gasoline samples but below 1 for diesel samples. As a consequence, these signals are also useful for the characterization although they are not particularly intense. 
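As a rough illustration of how such fingerprints could be screened automatically, the sketch below extracts the sixteen selected m/z values from a spectrum and applies the m/z 70/71 ratio cue mentioned above for gasoline versus diesel. The `spectrum` and `mz_axis` arrays are hypothetical inputs, and the rule is only the single hint described in the text, not the full discriminant function.

```python
import numpy as np

# Selected m/z channels reported in the text for the discriminant fingerprints.
SELECTED_MZ = [45, 47, 56, 57, 59, 64, 65, 70, 71, 85, 92, 120, 126, 138, 145, 193]

def fingerprint(spectrum, mz_axis, mz_list=SELECTED_MZ):
    """Return the selected-m/z intensities normalized to the base peak (100%)."""
    idx = [int(np.where(mz_axis == m)[0][0]) for m in mz_list]
    values = spectrum[idx]
    return values / values.max()

def gasoline_vs_diesel_hint(fp, mz_list=SELECTED_MZ):
    """Illustrative cue from the text: the m/z 70 / m/z 71 ratio is >1 for gasoline
    and <1 for diesel (a hint only, not a validated classifier)."""
    ratio = fp[mz_list.index(70)] / fp[mz_list.index(71)]
    return "gasoline-like" if ratio > 1 else "diesel-like"
```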
Conclusions Based on the results described above, it can be concluded that the E-Nose system is able to identify not only the presence/absence of the studied PDPs but also the type of PDP adhered to different surfaces. Therefore, if a database containing PDP fingerprints is developed for a real situation, then the identification of PDPs by matching unknown fingerprints with those in the database, or with a suspect sample, can easily be achieved.
6,699
2017-11-01T00:00:00.000
[ "Chemistry" ]
Porous Venturi-Orifice Microbubble Generator for Oxygen Dissolution in Water Microbubbles with slow rising speed, higher specific area, and greater oxygen dissolution are desired to enhance gas/liquid mass transfer rates. Such attributes are very important for tackling the low efficiency of gas/liquid mass transfer that occurs in aerobic wastewater treatment systems and in the aquaculture industry. Many reports focus on the formation mechanisms of microbubbles, with less emphasis on system optimization and assessment of aeration efficiency. This work assesses the performance and evaluates the aeration efficiency of a porous venturi-orifice microbubble generator (MBG). The increment of stream velocity along the venturi pathway and orifice ring leads to a pressure drop (Patm > Pabs) and subsequently to increased cavitation. The experiments were run under three conditions: various liquid velocities (QL) of 2.35–2.60 m/s at a fixed gas velocity (Qg) of 3 L/min; various Qg of 1–5 L/min at a fixed QL of 2.46 m/s; and free-flowing air at variable QL. Results show that increasing the liquid velocity from 2.35 to 2.60 m/s imposes a higher vacuum pressure of 0.84 to 2.27 kPa, corresponding to free-flowing air at rates of 3.2–5.6 L/min. When the system was tested at a constant air velocity of 3 L/min and under variable liquid velocities, the oxygen dissolution rate peaked at a liquid velocity of 2.46 m/s, which also provided the highest volumetric mass transfer coefficient (KLa) of 0.041 min−1 and the highest aeration efficiency of 0.287 kgO2/kWh. Under free-flowing air, the impact of QL is significant in the range of 2.35 to 2.46 m/s until reaching a plateau KLa value of 0.0416 min−1. The pattern of the KLa trend is mirrored by the aeration efficiency, which reached a maximum value of 0.424 kgO2/kWh. The findings on the aeration efficiency reveal that the venturi-orifice MBG can be further optimized by focusing on the trade-off between air bubble size and air volumetric velocity, balancing the amount of available oxygen to be transferred against the rate of the oxygen transfer. Introduction Microbubble-based processes have emerged as a promising option for enhancing interphase mass transfer in industrial applications [1]. The application of microbubbles in the aquaculture industry helps to enhance productivity, water quality, hydroponic plant growth, and soil fermentation [2]. For example, microbubble generators (MBG) have been used in the farming of oysters [3,4] for promoting growth, shell opening, and the increase of the oysters' blood flow rate, ascertaining the beneficial effect on bioactivity [3]. In the intensive aquaculture of tilapia fish, the application of an MBG as an aerator also promoted the growth rate of the fish (both their length and weight) [5]. A special type of MBG in the form of bubble-jet-type air-lift pumps has also been applied for purifying fishery wastewater [6,7]. Recently, a membrane-based bubble generator has also been applied for the cultivation of microalgae and aerobic wastewater treatment [8][9][10] and can potentially be used to enhance the efficiency of CO2 dissolution for microalgae cultivation [11]. Microbubbles are generated through three fundamental methods: pressurization-dissolution (decompression), rotating flow (spiral flow), and cavitation in ejector and/or venturi devices [10,12,13]. These basic methods form the basis of most of the recent modifications and optimizations [2]. 
Some of the recent developments include systems based on porous media, constant-flow nozzles, and membranes or gas spargers coupled with a mixer (i.e., an impeller) [12]. For the pressurized type of MBG, the highly saturated gas is injected into the tank through a nozzle, together with the pressurized water, to enhance gas solubility. The liquid-gas mixture then forms the microbubbles due to a sudden pressure drop when flashed by a reducing valve at lower pressure [12,13]. The spiral-flow liquid MBG is commonly designed in a conical shape to enhance the gas-water circulation. Water is introduced tangentially into the cylindrical tank to form a spiral flow pattern with a maelstrom-like cavity [4,6]. Meanwhile, spiral- or swirl-based flow MBGs can also work based on a self-suction mechanism for the gas supply, like orifice or venturi type MBGs [14]. The gas is sucked in from the opening at the bottom of the tank towards the reduced-pressure central core of the whirlpool. Then, the gas-liquid mixture is reduced into microbubbles due to the shear effect of the centrifugation formed by the rapidly rotating liquid flow [12,15]. The venturi effect has also been exploited to generate microbubbles, and the factors affecting microbubble formation have been widely discussed. A venturi consists of a converging-diverging nozzle with a throat at the middle [16,17]. When liquid enters the throat at a greater velocity, it lowers the static pressure, and this effect can be used for air suction and the subsequent formation of microbubbles (the static pressure falls below the atmospheric pressure) [18]. Orifice-type MBGs work under similar principles to the venturi-type MBGs, in which the velocity change is also used as a decompressor [14,19]. Fujiwara et al. (2003) [20] investigated the phenomena of microbubble generation in a venturi tube with the use of 3-pentanol as a surfactant. They found an inversely proportional relationship between pressure and velocity changes, and a directly proportional relationship between bubble diameter and velocity along the venturi tube. The low local pressure within the venturi tube promotes cavitation conditions, but soon the formed void collapses and the pressure is recovered further downstream. However, Kaushik and Chel (2014) [18] reported an issue of immediate coalescence of microbubbles into bigger bubbles at the venturi discharge point, which can be limited by the application of surfactant dosing. Fujiwara et al. (2003) [20] observed the bubble formation and breakdown process under a low liquid velocity (QL) of 4.2 L/min and a high liquid velocity of 6.7 L/min. At lower liquid velocity, the bubble collapses gradually along the flow, while at higher liquid velocity, bubble fission occurs at the front/top surface of a single large bubble, at a further section of the venturi tube [20]. The observations suggest that microbubble formation could be based on two mechanisms: the shearing motion [7] of the liquid at lower liquid velocity, and the sudden recovery of the pressure at higher liquid velocity [20]. Both the gas (naturally forming bubbles after being forced or sucked into the liquid) and the cavitation effect contribute to microbubble formation. Sadatomi and Kawahara (2008) proposed a concept of automated gas suction under negative pressure in the throat [21]. Ejector-type MBGs that work based on cavitation also fall under this category [22]. According to Terasaka et al. 
(2011), a typical ejector-type MBG refers to a liquid flow channel that involves shrinking and stepwise enlargement, creating its own complex pressure profile [15]. The ejector-type MBG also generates vacuum pressure by implementing converging-diverging nozzles [23]. The pressure energy of the flowing liquid is altered by the velocity change; as such, it creates a pressure below the atmospheric one to draw in and entrain the suction gas. The turbulent liquid flow then induces shear on the entering gas and sweeps it away to form microbubbles. The ejector forms microbubbles with diameters of about 40 to 50 µm. On the other hand, a recent study reported that the diameters of microbubbles formed by the venturi type of MBG are in the range of 100-300 µm [24]. Most of the previous studies focus on examining the underlying mechanisms of microbubble formation and their dynamics. However, only a few studies address the operation of venturi/orifice type MBGs, especially with respect to energy input. Therefore, this study addresses these research gaps by investigating the operational parameters of a porous venturi-orifice MBG for oxygen dissolution in water. The study focused on the effect of liquid velocity and gas velocity (Qg) on the generated vacuum pressure and the oxygen mass transfer rate, as well as on the aeration efficiency associated with them. The aeration efficiency parameter is very important to gauge the current state of MBG technology in comparison with other established oxygenators. The novelty of this study is in the design of the MBG itself, as a combined venturi and orifice structure aimed at minimizing energy loss due to its friction-reduction capabilities. Assessment of such an MBG system in terms of energy efficiency is still not well explored in the literature. Previous reports addressed different types of MBG with respect to their effectiveness for oxygen dissolution, the mechanism of microbubble formation, and the dynamics of the bubble size and size distribution. In contrast, this study addresses the knowledge gap on the impact of the operational parameters (gas velocity and liquid velocity) on the rate of oxygen dissolution using the venturi-orifice type MBG. The aim is to understand the behavior of oxygen transport from the gas to the liquid phase before conducting further operational optimization, or even optimizing the MBG design. The liquid velocity range was set from 2.36 to 2.60 m/s (35-40 L/min), which corresponds to the range that is sensitive to the bubble size (please see reference [24]). The study also includes an assessment of the aeration efficiency (kgO2/kWh), which allows for a better comparison with other MBGs and other established aeration systems. Materials The experiments were conducted at a room temperature of 20 °C, using tap water as the medium for oxygen dissolution. Before each experiment, the initial dissolved oxygen (DO) of the water was measured, and 7.9 ppm of sodium sulfite (Na2SO3, R&M Chemicals, London, UK) per ppm of oxygen was added into the water for deoxygenation [21,24]. A total of 1.5 ppm of cobalt chloride (Sigma-Aldrich, St. Louis, MO, USA) was also added as the catalyst for the oxidation reaction. The initial DO concentrations in the raw water were about 4.0-4.5 ppm. The deoxygenation reaction is the cobalt-catalyzed oxidation of sulfite to sulfate: 2 Na2SO3 + O2 → 2 Na2SO4. 
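For orientation, a back-of-the-envelope dosing calculation based on the proportions given above (7.9 ppm Na2SO3 per ppm of O2 and 1.5 ppm CoCl2), applied to the 700 L test tank described in the Experimental Setup below, might look as follows; the specific initial and target DO values used here are illustrative, not the measured ones.

```python
# Rough dosing estimate for chemical deoxygenation of the test tank.
tank_volume_L = 700.0
initial_do_ppm = 4.5          # upper end of the reported 4.0-4.5 ppm range
target_do_ppm = 2.0           # approximate DO level before starting each test

do_to_remove_ppm = initial_do_ppm - target_do_ppm
na2so3_ppm = 7.9 * do_to_remove_ppm                  # required sulfite concentration
na2so3_grams = na2so3_ppm * tank_volume_L / 1000.0   # 1 ppm ~= 1 mg/L in water
cocl2_grams = 1.5 * tank_volume_L / 1000.0           # catalyst dose

print(f"Na2SO3 needed: ~{na2so3_grams:.1f} g, CoCl2: ~{cocl2_grams:.2f} g")
```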
Experimental Setup The setup used in the experiment is illustrated in Figure 1. The experiment was conducted in a 1.5 m high cylindrical tank with an effective water content of 700 L. The large volume of water was applied to allow a slow development of DO during the test, which can thus be used to accurately calculate the volumetric mass transfer coefficient (KLa). Two DO meter probes were placed at locations 0.3 and 1.45 m from the water surface level. The average of the DO concentration readings was taken for the data analysis. A submersible water pump (HJ5500, 100 Watt, Sunsun, AKD5500, Chennai, India) was placed near the bottom of the tank, with the inlet coming from the side and the outlet facing upwards (Figure 1). The porous venturi-orifice MBG was placed at the discharge line of the pump. The design of the custom-made porous venturi-orifice MBG structure installed in the setup is shown in Figure 2. The MBG was installed on the discharge line atop the submersible pump. The structure consists of a venturi pathway for the liquid inlet and a 10 mm orifice ring. The combination of the venturi and orifice was applied to reduce friction loss and to enhance the pressure drop. The case material of the pipe was polyethylene. The porosity was formed by winding the case with a polypropylene net with an estimated porosity of 0.3. The air suction chamber had a 7 mm opening connected with a tube for air flow. The air was sucked through a 7 mm pipe from the open atmosphere. A T-connector was used to link the tube to a water manometer to measure the pressure. An air flow meter incorporating a flow regulator was installed near the air inlet. Assessment of the System The venturi-orifice MBG system was first evaluated by assessing the effect of the liquid velocity on the vacuum pressure and on the velocity of the free-flowing air. When evaluating the impact on pressure, the air flow was blocked, while when measuring the air velocity, the vacuum pressure was also recorded. 
In this way, the pressure drop due to the air flow could also be measured. The liquid velocity was varied over the range of 2.35 to 2.60 m/s. The selection of the velocity values was dictated by the ten possible settings of the velocity provided by the applied liquid pump. This test was conducted to explore the operational range of the system as well as to define the range of testing parameters for the oxygen dissolution tests. The liquid velocity was measured at every setting, and the obtained data are reported as the liquid velocity values set for the experiments. Oxygen Dissolution Tests Before starting each experiment, the DO concentration was lowered to approximately 2 mg/L by dosing an appropriate amount of sodium sulfite. There were three types of test conducted for the assessment of oxygen dissolution: (1) the effect of liquid velocity at constant gas velocity, (2) the effect of gas velocity at constant liquid velocity, and (3) the effect of liquid velocity under free-flowing air. For the first type, the liquid velocity was varied from 2.35 to 2.60 m/s under a fixed gas velocity of 3 L/min. For the second type, the liquid velocity was fixed at 2.46 m/s, while the gas velocities were varied at 1, 2, 3, 4 and 5 L/min. In this case, the air velocity was controlled at the air inlet pipe. For the third test, the air was left to flow freely, in which case increasing liquid velocity was accompanied by increasing gas velocity. Each test was performed for one hour, and measurements of DO were taken every minute. The data of DO concentration against time were used to calculate the KLa. This term is the combination of the liquid film coefficient (KL) and the interfacial area per unit volume (a). It has a linear relationship to the oxygen transfer rate as in Equation (1) [25,26]: dC/dt = KLa (C* − Ct), where C* is the saturation concentration of the DO (mg/L) and Ct refers to the concentration of DO (mg/L) at time t. Equation (1) can be linearized into Equation (2), which can be used to estimate the KLa: −ln[(C* − Ct)/(C* − C0)] = KLa · t. The KLa is thus the gradient of a linear plot of −ln[(C* − Ct)/(C* − C0)] vs. time. Aeration Efficiency The data of DO against time were also used to estimate the aeration efficiency. The aeration efficiency is one of the performance standards for oxygen dissolution devices, including the MBG. The estimation was done for the oxygen transfer over a total concentration increment of 4 mg/L. This value is above the typical required DO concentration in aerobic wastewater treatment of >2 mg/L [27]. Equation (3), for the pump work, was derived from Bernoulli's equation, and the aeration efficiency was calculated using Equation (4), where P is the pump work (J), ∆P is the net pressure of the liquid pump (40,000 Pa), ρ is the water density (1000 kg/m3), v is the linear velocity of the water (m/s), QL is the liquid flow rate (m3/s), AE is the aeration efficiency (kgO2/kWh), V is the water volume (700 L), ∆C is the difference in the dissolved oxygen concentration within the applied range (4 × 10−6 kg/L), and t is the time to reach ∆C. 
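The KLa regression of Equation (2) and a mass-per-energy aeration-efficiency estimate can be sketched as below. This is an illustrative reconstruction: the DO time series is synthetic, the pump power is simply the 100 W rating of the submersible pump, and Equations (3) and (4) themselves are not reproduced here, so the efficiency function is a generic estimate rather than the paper's exact formula.

```python
import numpy as np

def estimate_kla(time_min, do_mg_L, c_sat):
    """Estimate KLa (1/min) as the slope of -ln((C* - Ct)/(C* - C0)) versus time,
    i.e. the linearized form of dC/dt = KLa (C* - Ct) described in the text."""
    c0 = do_mg_L[0]
    y = -np.log((c_sat - do_mg_L) / (c_sat - c0))
    slope, _ = np.polyfit(time_min, y, 1)
    return slope

def aeration_efficiency(pump_power_W, time_to_target_h, volume_L=700.0, delta_c_mg_L=4.0):
    """Aeration efficiency in kgO2/kWh for a given DO increment (default 4 mg/L),
    assuming the pump power is the only energy input (a generic estimate)."""
    o2_kg = volume_L * delta_c_mg_L * 1e-6            # mg/L * L -> mg -> kg
    energy_kWh = pump_power_W * time_to_target_h / 1000.0
    return o2_kg / energy_kWh

# Hypothetical usage with a synthetic DO curve (C* assumed to be 8 mg/L here).
t = np.arange(0, 61, 1.0)                              # minutes
do = 8.0 - (8.0 - 2.0) * np.exp(-0.04 * t)             # illustrative DO profile
print("KLa ~", round(estimate_kla(t, do, c_sat=8.0), 3), "1/min")
print("AE  ~", round(aeration_efficiency(100.0, 0.75), 3), "kgO2/kWh")
```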
Figure 3 shows the vacuum pressure created (Patm − Pabs) with respect to liquid velocity, demonstrating that the applied range of liquid velocity was sufficient to generate vacuum pressure to allow permeation of air through the porous pipe. During this test, the air flow was fully closed and thus no bubbles were formed through the MBG. As the pump was switched on, the water circulated at the set velocities through the MBG. The liquid velocity increased from the inlet pipe to the venturi-orifice tube, hence causing the pressure to fall below Patm. This creates a negative pressure (P < Patm), which allows the air from the surroundings to be sucked automatically into the MBG. The presence of the porous pipe allowed the formation of microbubbles as the air entered the system. This can be explained by the Bernoulli principle, in which an increase in fluid velocity within the tube is accompanied by a decrease in the pressure. Effect of Liquid Velocity on the Vacuum Pressure and Gas Velocity The MBG works based on the Bernoulli principle, which epitomizes the energy balance principle whereby the increase in liquid velocity through the throat leads to a lower pressure, reaching a vacuum condition. The design of the porous venturi-orifice MBG applied in this study was inspired by the design of the Sadatomi and Kawahara type of MBG [6,21], for which no positive pressure is required to force in the air needed for generating bubbles. The bubble size and distributions were not analyzed in detail, and they will be the subject of a future study. However, visual observation of the rising bubbles at the top of the tank showed that the bubbles were of millimeter size. The large size of the bubbles is expected, since the bubbles were depressurized as they rose to the top of the liquid. Analysis of a similar type of MBG has been reported earlier, but at much lower aeration rates, whereby the bubble sizes near the discharge point were around 100-300 µm [24]. Figure 3 also shows that a higher liquid velocity leads to a higher vacuum pressure (pressure drop). The pressure difference increases sharply with the liquid velocity increment at lower liquid velocities (from 2.35 to 2.41 m/s). 
However, as the liquid velocity further increases, the rate of increment is lower, until reaching a condition where the effect of liquid velocity on the pressure is minimal, as indicated by the plateau value of air velocity beyond a liquid velocity of 2.5 m/s. The square-root curve of pressure drop against liquid velocity in Figure 3 was derived according to Equation (5). This equation is originally used for calculating the dimensionless discharge coefficient (CD) of stream flow in an orifice meter, in which β is the ratio of orifice diameter to pipe diameter (-), u0 is the linear velocity (m/s), ρ is the density of the liquid (kg/m3), and ∆P is the pressure drop (kPa) [28]. The graph also shows that the water inlet velocity u0 (m/s) is linearly proportional to the square root of the pressure drop. This relationship was proven by Shah et al. (2012), using both CFD predictions and experimental data [28]. Nevertheless, it is worth mentioning that the linear relationship suggested by Equation (5) does not fit the experimental data really well, corresponding to an R2 of 0.7497. The deviation from linearity originates largely from the first three data points, with liquid velocities of 2.35, 2.38, and 2.41 m/s, at which a prominent impact of the liquid velocity on the vacuum pressure was observed; this requires further detailed analysis. 
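The square-root relationship implied by Equation (5) can be checked with a simple least-squares fit, as sketched below. Only the end-point pressures (0.84 and 2.27 kPa) come from the reported data; the intermediate values are placeholders, so the computed R² is illustrative and is not the paper's reported value of 0.7497.

```python
import numpy as np

# Liquid velocities tested (m/s) and vacuum pressures (kPa); only the end points
# are taken from the paper, the values in between are illustrative placeholders.
u0 = np.array([2.35, 2.38, 2.41, 2.46, 2.50, 2.55, 2.60])
dp = np.array([0.84, 1.20, 1.60, 1.90, 2.05, 2.15, 2.27])

# Equation (5) implies u0 is proportional to sqrt(dP); fit a line and compute R^2.
x = np.sqrt(dp)
slope, intercept = np.polyfit(x, u0, 1)
pred = slope * x + intercept
ss_res = np.sum((u0 - pred) ** 2)
ss_tot = np.sum((u0 - u0.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"u0 ~ {slope:.3f}*sqrt(dP) + {intercept:.3f},  R^2 = {r2:.3f}")
```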
Figure 4 shows the relationship between the liquid velocity and the gas velocity. Increasing the liquid velocity leads to a higher gas velocity because of the higher vacuum pressure generated inside the porous tube, as shown in Figure 3. The flow of air into the MBG is driven by the vacuum pressure inside the porous tube. When the liquid velocity increases beyond 2.46 m/s, its influence on the air velocity is not significant, which also correlates well with the pressure-difference pattern presented in Figure 3. Since a high liquid velocity corresponds to a high pumping energy (as described by Equation (3)) but has only a small impact on the vacuum pressure and gas velocity, operating an MBG at a high cross-flow velocity would result in a low aeration efficiency. Therefore, for the further study of the impact of gas flowrate (varying the gas velocity), the liquid velocity of 2.46 m/s was used as a fixed variable.

Figure 5 shows the relationship between the vacuum pressure generated by the MBG and the resulting air velocity under the free-flowing mode, in which the air was allowed to enter the MBG freely without any restriction; the valve at the air flow meter connected to the air tube was fully opened. The flow of air was driven by the vacuum pressure and was thus indirectly dictated by the liquid velocity (see Figure 3), where the liquid velocity is proportional to the square root of the pressure drop and to the gas velocity. Moreover, it is worth noting that a significant increase in the vacuum pressure was observed from the first to the second and third data points, corresponding to only minor increments of liquid velocity.

Figure 6 shows the profile of DO in the water as a function of time at various liquid velocities. The test was conducted to explore whether there is an optimum liquid velocity for oxygen dissolution through microbubble formation. The DO increment is much higher at the initial stage of the test, where the DO concentration is far from the saturation value. The rate of oxygen dissolution peaks in the middle of the tested velocity range, at 2.43 m/s. Since the air velocity was fixed (3 L/min), the total supply of oxygen to the system was equal for all tests. Therefore, the difference in the DO increment rate as a function of time can be attributed to the role of the liquid velocity in affecting the mixing and the distribution of bubble sizes. As the velocity increases, the sweeping flow of the liquid leads to smaller bubble sizes, which correlates well with previously reported findings [24]. This explains the increase in the DO dissolution rate from the lowest linear velocity of 2.35 m/s up to 2.46 m/s, beyond which the rate of DO increment decreases. Other reports have also pointed out that the liquid flow-rate range of 30-40 L/min [19,28,29] plays a significant role in reducing bubble sizes. This is exactly the range applied in this study, in which bubble formation is strongly affected by the shear stress until a point is reached where the shear stress has minimal impact.
Beyond the liquid velocity of 2.46 m/s, the DO increment decreases, most likely because of over-mixing that promotes intensive bubble contacts and bubble coalescence.

Effect of Liquid and Gas Velocity on the Oxygen Dissolution Rate

The trend in the DO increment can be explained by the formation of bubbles of different diameters as a function of the liquid velocity. A low liquid velocity produces a low shear stress from the drag force that sweeps the air off to form the bubbles. The surface tension force that inhibits the release of the bubbles is constant; therefore, increasing the shear stress leads to the formation of smaller bubbles. In addition, Juwana et al. (2019) reported that the low-shear condition ends up causing bubble coalescence around the MBG, increasing the probability of generating bigger bubbles [24]. The formation of large bubbles decreases the interfacial area, which leads to a lower oxygen dissolution rate, mainly because the applied gas velocity of 3 L/min is high compared with 1 L/min, which can dampen the effect of the liquid velocity on the oxygen dissolution rate. At higher liquid velocities, the inertial force acting on the bubble increases, causing the bubbles to have a shorter attachment period on the porous structure while at the same time preventing the bubbles from merging together. Thus, the bubbles generated are smaller because of the shorter growth time and the greater driving force to leave the MBG.
This eventually increases the total surface-to-volume ratio of the microbubbles and directly improves their oxygen dissolution in the water. Meanwhile, a microbubble with a smaller diameter rises slowly (based on Stokes' law), allowing enough time for the oxygen gas to dissolve in the water. In this case, 2.43 m/s can be considered the optimum liquid velocity for achieving this target. A detailed analysis of the different forces acting on bubble formation can be found elsewhere [29]. Liquid velocities above 2.43 m/s also show lower oxygen dissolution rates, which in this case can be linked to the coalescence of bubbles. The vigorous flow of water leaves the microbubbles too little time to travel towards the discharge outlet before colliding and combining with one another. The merging of the bubbles leads to an uneven bubble distribution, which is common but unfavorable for aeration purposes and, most importantly, results in a lower interfacial area for mass transfer.
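The slow rise of small bubbles invoked here can be quantified with Stokes' law; the sketch below compares terminal rise velocities for a few example diameters. The diameters are illustrative, and Stokes' law is strictly valid only for small bubbles at low Reynolds number.

```python
# Minimal sketch: terminal rise velocity of an air bubble in water from
# Stokes' law, u = g * d^2 * (rho_l - rho_g) / (18 * mu).
# Valid only for small bubbles (creeping flow); diameters below are examples.

G = 9.81            # m/s^2
RHO_L = 998.0       # kg/m^3, water
RHO_G = 1.2         # kg/m^3, air
MU = 1.0e-3         # Pa.s, water at ~20 degrees C

def stokes_rise_velocity(d_bubble):
    """Terminal rise velocity (m/s) of a bubble of diameter d_bubble (m)."""
    return G * d_bubble ** 2 * (RHO_L - RHO_G) / (18.0 * MU)

for d_um in (100, 300, 1000):                       # micrometres
    u = stokes_rise_velocity(d_um * 1e-6)
    print(f"d = {d_um:4d} um -> rise velocity ~ {u*1000:.2f} mm/s")
# A 100 um bubble rises at roughly 5 mm/s, whereas a 1 mm bubble (already
# outside the strict Stokes regime) would rise about a hundred times faster,
# so small bubbles stay in the water much longer and have more time to
# dissolve oxygen.
```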
Figure 7 shows the effect of gas velocity on the oxygen dissolution rate at a fixed liquid velocity of 2.43 m/s, in which higher gas velocities lead to a higher oxygen dissolution rate. The increment was significant from 1 to 2 L/min and much less so for air velocities of 2-5 L/min. This suggests that there is a threshold air velocity, below 2 L/min, that offers the maximum oxygen dissolution rate. The findings can be explained as follows. At low gas velocities, and with bubbles of about equal size, a lower volume of air is available, resulting in a lower interfacial area for oxygen mass transfer [12]. This is also due to the possibility of microbubbles being trapped in the porous pipe, leading to a reduction in its permeability. It seems that, below 2 L/min, the momentum force of the moving air plays an important role in determining the formation and the size of the bubbles. A higher liquid velocity enhances the drag, momentum, and pressure forces, as detailed elsewhere [29]. Therefore, under very high cross-flow velocities, at which these forces dictate the bubble formation mechanism, the air velocity does not greatly affect the bubble size or the mass transfer area. It has been stated that the effect of air velocity on the bubble size is significant only in the range of 0.1-1 L/min [19,30]. According to Al-Ahmady (2005), a greater volume of air supply directly increases the oxygen dissolution capacity [25]. This means that the total air volume certainly affects the oxygen dissolution rate, despite the fact that a smaller microbubble dissolves more readily. Sadatomi et al. (2012) stated that, for gas velocities of <10 L/min, the oxygen absorption efficiency is roughly independent of the gas velocity and of the type of MBG employed [21]. Since this study falls within this gas velocity range (<10 L/min), the conclusion is similar, with only a minor DO increment for gas velocities of 2-5 L/min.
Within this range of gas velocities, an increase in gas velocity leads to the production of larger bubbles, resulting in only a slight increment in the oxygen dissolution rate. This finding suggests that flooding the system with air bubbles does not necessarily lead to an effective dissolution process if the bubble size is too large (poor interfacial mass-transfer area). It also means that the system can operate at a relatively low cross-flow velocity, leading to a lower energy input. Nevertheless, a rigorous analysis of the aeration efficiency must be performed to decide the optimum operating condition.

Figure 8 shows the DO increment under the free-flowing air condition, where a higher cross-flow velocity leads to a greater oxygen dissolution rate. Since there was no restriction in the air tube, the air velocities were the maximum corresponding to each liquid velocity shown in Figure 4. There is a clear trend in which increasing the liquid velocity leads to a higher oxygen dissolution rate. This can be explained as follows: a higher air velocity means a higher volume of air being introduced into the system, coupled with the formation of bubbles of about similar size (Figure 8). In this condition, the oxygen dissolution rate correlates well with the air/liquid interfacial area, which promotes the mass transfer of oxygen.
Interestingly, the oxygen dissolution rates under maximum air flow (Figure 8) are lower than those with restricted air velocity (Figure 6). As already seen in Figure 7, flooding the system with air bubbles does not guarantee a greater oxygenation rate. This demonstrates the importance of bubble size in determining the oxygen dissolution rate: despite the lower air flow rate, the high rate of oxygen dissolution is enhanced by the formation of smaller bubbles, leading to a higher gas/liquid interfacial area. This finding points to the need for operational optimization of the venturi-orifice type of MBG to yield maximum dissolution rates; simply allowing free-flowing air at maximum velocity does not lead to the maximum oxygen dissolution rate. It is worth noting that the ranges of bubble size formed in the tests reported in Figures 6 and 8 appear to be significantly different, judging from the rates of oxygen dissolution. As reported earlier for a similar venturi-orifice MBG system operated at 30-40 L/min, the resulting bubble sizes were in the range of 450-1000 µm when the air velocities were set to 0.1-1 L/min [24]. However, since no measurement of bubble size was conducted here, this remains a conjecture.

Relationship of Liquid and Gas Velocity with the Volumetric Mass Transfer Coefficient

The impact of air velocity on the oxygen dissolution rate is not conclusive and is somewhat counterintuitive. To understand the behavior of the oxygen transfer, it can be further analyzed using the KLa values reported in Figure 9. The KLa reflects the combined impact of the bubble velocity (the contact time of the bubble with the liquid), the bubble diameter (the gas/liquid interfacial area, or effective mass transfer area), the dynamic viscosity of the liquid (mixing), and the mass diffusivity [31], of which the first two are considered the most influential in this study.
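A common way to extract KLa from a DO-versus-time curve of the kind shown in Figure 6, assumed here purely as an illustration rather than as the exact fitting procedure of this study, is the first-order re-aeration model dC/dt = KLa (Cs − C), for which ln[(Cs − C0)/(Cs − C)] is linear in time with slope KLa:

```python
# Minimal sketch: estimate the volumetric mass transfer coefficient KLa from
# a DO-vs-time curve using the first-order re-aeration model
#   dC/dt = KLa * (Cs - C)  ->  ln((Cs - C0)/(Cs - C)) = KLa * t
# The DO trace below is synthetic/illustrative, not measured data.
import numpy as np

Cs = 8.5                                    # mg/L, assumed saturation DO
t_min = np.array([0, 5, 10, 15, 20, 30, 40, 60], dtype=float)   # minutes
C = np.array([2.0, 3.2, 4.2, 5.0, 5.6, 6.6, 7.2, 7.9])          # mg/L, illustrative

C0 = C[0]
y = np.log((Cs - C0) / (Cs - C))            # dimensionless deficit ratio
kla, _ = np.polyfit(t_min, y, 1)            # slope = KLa in 1/min

print(f"Estimated KLa = {kla:.4f} 1/min")
```

With the synthetic trace above the fit returns a value of the order of 0.04 min−1, i.e. the same order of magnitude as the values discussed below.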
The impact of liquid velocity on the KLa under free-flowing air shows an increasing trend from 2.35 up to 2.46 m/s (Figure 9A), after which the KLa decreases slightly until the liquid velocity reaches 2.57 m/s. The KLa then jumps suddenly at the highest liquid velocity of 2.6 m/s. The steady increase of the KLa can be ascribed to the increasing air velocity, which forms a higher number of bubbles and hence a higher interfacial area for oxygen mass transfer. For liquid velocities beyond 2.46 m/s, both the liquid and air flows promote bubble coalescence, which eventually reduces the effective mass transfer area. The spike in the KLa at the liquid velocity of 2.6 m/s is presumably due to the smaller bubble sizes produced. Figure 9B shows that increasing the air velocity at constant liquid velocity leads to a higher KLa. This finding suggests that a higher air velocity produces an increasing number of bubbles, which enhances the area for oxygen mass transfer. The significant increment from 1 to 2 L/min of air velocity suggests that the pressure and momentum forces dictate the formation of the bubbles. For air velocities higher than 2 L/min, the increment is less significant, indicating that the additional volume of air forms slightly larger bubbles and only modestly affects the overall effective mass transfer area. It is worth noting that the KLa value is system specific and is affected by the experimental set-up; the KLa values obtained in this study therefore cannot be compared directly with those in the references. Nonetheless, the trend of the KLa obtained in this study is in line with an earlier report [24]: the increasing trend of KLa under the free air flow condition, and the increase of KLa with air velocity at constant liquid velocity, have also been reported elsewhere [24].
Aeration Efficiency

Since the KLa value is system specific and not directly comparable between data obtained from different experimental set-ups, a universal parameter in the form of the specific aeration efficiency is used to assess the system (presented in Figure 10). The trend of the aeration efficiency is similar to that of the KLa. The energy efficiency of the venturi-orifice system peaks at 0.424 kgO2/kWh under free-flowing air at a liquid velocity of 2.54 m/s, corresponding to a KLa value of 0.0404 min−1.
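For orientation, a specific aeration efficiency of this kind can be estimated as the oxygen transfer rate divided by the power input; the sketch below shows that form of calculation with an assumed tank volume and pump power. Both are placeholders rather than the parameters of this rig, and the sketch only illustrates the shape of the calculation, not the study's actual energy accounting (Equation (3)).

```python
# Minimal sketch: specific aeration efficiency (SAE) estimate,
#   SOTR = KLa * (Cs - C0) * V      (oxygen transfer rate, kg O2 / h)
#   SAE  = SOTR / P                 (kg O2 per kWh)
# Tank volume and pump power below are assumed values, not the rig's.

KLA_PER_MIN = 0.0404          # 1/min, as reported for the free-flowing case
CS = 8.5e-3                   # kg O2 / m^3 (8.5 mg/L), assumed saturation DO
C0 = 0.0                      # kg O2 / m^3, assume fully deoxygenated start
VOLUME = 1.0                  # m^3 of water, assumed
PUMP_POWER_KW = 0.05          # kW drawn by the circulation pump, assumed

kla_per_hour = KLA_PER_MIN * 60.0
sotr = kla_per_hour * (CS - C0) * VOLUME          # kg O2 / h
sae = sotr / PUMP_POWER_KW                        # kg O2 / kWh

print(f"SOTR = {sotr*1000:.2f} g O2/h, SAE = {sae:.3f} kg O2/kWh")
```

With these assumed values the result lands near the reported 0.4 kgO2/kWh scale, but the actual figure depends entirely on the measured tank volume and pumping power.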
The maximum aeration efficiency obtained in this study is well below that of the established aerators used in large-scale industry. The typical energy efficiencies of surface aeration and fine-bubble aeration systems are 1.1-2.0 and 2.0-5.5 kgO2/kWh, respectively, for dissolution of oxygen from air into clean water, and 0.9-1.7 and 1.3-2.6 kgO2/kWh, respectively, for dissolution of oxygen from air into wastewater [32]. The fact that the energy efficiency follows the same trend as the KLa suggests that it is strongly affected by the size of the formed bubbles. Nonetheless, it is worth noting that the venturi-orifice MBG system tested in this study has not yet been optimized and could be improved further to enhance its energy efficiency. Sections 3.2 and 3.3 discuss the impact of the operational parameters on the oxygen transfer rate and the KLa. The findings reveal the importance of optimizing the operational parameters to obtain the highest KLa (small bubbles). Most previous studies on MBGs have emphasized the mechanics of microbubble formation and the bubble size dynamics [27,29,30,33]. Such information should be used as input for designing an energy-efficient oxygenator that improves on the performance of the currently established systems. According to an earlier report, a very low ratio of gas to liquid velocity (<0.033) needs to be implemented to form micron-sized bubbles [24]. This means that a high pumping energy is required to create small volumes of air bubbles by applying a high liquid cross-flow velocity. The formation of microbubbles offers a maximum effective surface area and a longer retention time in the water, allowing prolonged mass transfer to occur; in this way the dissolved oxygen can also be enhanced. However, since the ratio of gas to liquid velocity is very small, the bubbles carry only a limited amount of oxygen, which becomes the limiting factor for the energy efficiency. To supply an ample amount of oxygen, multiple MBGs would be required, inflating the energy input to the system. Conversely, the formation of large air bubbles reduces the specific mass transfer area. Despite the over-supply of oxygen at higher air velocities, with large bubble sizes the oxygen dissolution yield remains low because the fast bubble rise velocity shortens the bubble/liquid contact. This trade-off necessitates process optimization, as well as a redesign of the venturi-orifice MBG, to achieve a high oxygen dissolution energy efficiency. Another option is to develop porous tube materials, such as hydrophobic membranes, that allow the formation of smaller air bubbles [34].

Conclusions

The performance of a porous venturi-orifice MBG was evaluated. The applied range of operational parameters enabled the system to operate under vacuum, and increasing the liquid velocity (from 2.35 to 2.60 m/s) resulted in a higher vacuum pressure (from 0.84 to 2.27 kPa). These correspond to air velocities under free-flowing air of 3.2-5.6 L/min. For operation under a constant air velocity of 3 L/min with variable liquid velocities, the KLa peaks at 0.041 min−1 at a liquid velocity of 2.46 m/s, which corresponds to an aeration efficiency of 0.287 kgO2/kWh. Only slight increments in both KLa and aeration efficiency were achieved when the system was operated under variable liquid velocities with free-flowing air, with the aeration efficiency reaching a maximum of 0.424 kgO2/kWh. This value unfortunately still falls far below that of the established aerators used in large-scale industry.
The analysis of the energy efficiency revealed that the venturi-orifice MBG could be further optimized by focusing on the trade-off between the air bubble size and the air volumetric flow rate, in order to establish a better balance between the amount of oxygen available to be transferred and the rate of the oxygen transfer.
12,821.2
2020-10-09T00:00:00.000
[ "Engineering" ]
Tree-Ring Based Chronology of Landslides in the Shirakami Mountains, Japan : The N-Ohkawa landslide, and the southern section of the Ohkawa landslide, occurred during the snow-melt seasons of 1999 and 2006, respectively, in the Shirakami Mountains, Japan. This paper examines the response of trees in the Shirakami Mountains to landslides, and also investigates the spatio-temporal occurrence patterns of landslide events in the area. Dendrogeomorphological analysis was used to identify growth suppression and growth increase (GD) markers in tilted deciduous broadleaved trees and also to reveal the timing of the establishment of shade-intolerant tree species. Analysis of the GD markers detected in tree-ring width series revealed confirmatory evidence of landslide events that occurred in 1999 and 2006 and were observed by eyewitnesses, as well as signals from eight additional (previously unrecorded) landslide events during 1986–2005. Furthermore, shade-intolerant species were found to have become established on the N-Ohkawa and southern Ohkawa landslides, but with a lag of up to seven years following the landslide events causing the canopy opening. Introduction Landslides are common in mountainous regions, and can be driven by tectonic, climatic, and/or human activities [1,2]. Landslides can create permanently unstable sites, and as a result, can drastically alter landscape morphology, damage forest environments, and even endanger life. Identifying the spatial and temporal patterns of landslide occurrence is vital for environmental management and minimizing the losses associated with landslides. However, information regarding past landslide events is scarce and almost always incomplete. Dendrogeomorphology can be used as a proxy indicator of past landslide activity at the scale of years [3][4][5]. This dating technique is based on the analysis of annual growth rings in trees, with the mixed signals being filtered to isolate the signal indicative of landslide events from non-landslide disturbances, such as climate variations, insect epidemics, and human activity, encoded within the tree-ring chronologies [6,7]. Landslides cause disturbances in tree growth that are preserved as variations within the tree-ring width series. These growth disturbances (hereafter GD) can take several forms, namely, abrupt growth release (wider annual rings), suppression (narrower annual rings), and the formation of compression wood that results from the elimination of neighboring trees, damage to the root, crown or stems, and stem tilting [5,8]. Dating of landslide reactivation by interpretation of these GD markers preserved within annual-ring-width series has been performed using a moving-window approach to smooth out non-landslide fluctuations [9] or evaluating the change rate of the annual ring width if it exceeds a certain threshold value [10]. Additionally, other studies have dated landslides using different thresholds (e.g., the event-response (I t ) index and number of GD markers) [10,11]. Although the amount of research has increased in recent years, no systematic standard approach has yet been proposed and the choice of an appropriate definition and threshold appears to be site-specific. Dendrogeomorphological studies of landslides have been performed using conifers in the European Alps and Americas [5,8,12]. In North America, Carrara [13] identified synchronous abrupt reductions in annual ring width in tree samples. 
He suggested that these tree responses were the result of damage during a landslide and was thus able to date the landslide event to 1693 or 1694 and infer that the trigger was an earthquake. With a focus on abrupt reductions in annual ring width and the formation of compression wood on the tilted side of the stem in the French Alps, Lopez-Saez et al. [10] assessed eight different stages of landslide reactivation over the past 130 years and found that landslide reactivation was associated with seasonal rainstorms. Recently, Lopez-Saez et al. [14] added abrupt increases in annual ring width as another type of growth disturbance, and this enabled reconstruction of 26 reactivation phases of landslides between 1859 and 2010 in the Swiss Alps. In the Orlické hory Mountains (Czech Republic), Šilhán [11] found that landslide activity is particularly associated with slide and creep effects, and the consequent growth disturbance can be identified in trees growing on the scarp and the landslide block. In contrast, there have been few such studies in Asia [3,12,15]. Recent studies have demonstrated that broadleaved trees are also useful for dating landslides and have shown the need for additional case studies that consider, for example, an adequate variety of species and age classes [5,12]. Coherent landslides, which often move slowly (Jisuberi in Japanese), dominate in the Shirakami Mountains [16], but historical records relevant to landslide activity are scarce. In this study, we investigate the spatio-temporal patterns of landslide occurrence through analysis of the dendrogeomorphological record of 90 deciduous broadleaved trees from 12 species growing on landslide scarps and landslide moving bodies, which we refer to as the displaced blocks, on the right flank of the Ohkawa River, a tributary of the Iwaki River, within the Shirakami Mountains. Our main aims are: (i) to identify and interpret the GD markers (i.e., abrupt growth increase and growth suppression) preserved in the tree-ring series of trees growing on the landslide slopes; (ii) to investigate how these trees responded to landslides known to have occurred in the area; and (iii) to reconstruct the spatial and temporal patterns of landslide occurrence over the past 70 years using our GD data, as well as the timing of the establishment of shade-intolerant trees, and compare this with the limited eyewitness reports of landslides.

Study Area

The coherent landslides studied here were located on the right bank, and on an outside bend, of the meandering Ohkawa River, which originates from the eastern side of the Shirakami Mountains, northern Honshu Island, Japan (Figure 1). These landslides are covered by deciduous broadleaved trees dominated by Siebold's beech (Fagus crenata). The forest is a naturally regenerated, unmanaged secondary forest that developed after the original forest was selectively felled until 1967 [17]. The study area has a cool-temperate climate, with an average temperature of 8.1 °C and average annual rainfall of 2589 mm [18]. Each year, from November to the following April, the area is covered by snow to a maximum depth of about 2.2 m [18].
Our study area contains two neighboring landslide slopes, the N-Ohkawa and Ohkawa landslides, which are located along a 40-m-high terraced scarp, with the river terrace top at elevations of 285 to 295 m (Figures 1a and 2). Terrace gravels were exposed at the edge of the terrace after the landslides. The bedrock is formed from the mid-Miocene Hayaguchigawa Formation, which consists primarily of acidic pyroclastic deposits, but also contains andesitic pyroclastic deposits, sandstones, and conglomerates [19] (Figure 2). The N-Ohkawa landslide comprises a single displaced block. In contrast, distinctive stair-like features are evident on the displaced block of the Ohkawa landslide, which also comprises two secondary scarps that separate the individual blocks within the larger block at its northern and southern ends (Figure 2). Minor gully features are present in the landslide slope. Based on its slope geometry, we divided the Ohkawa landslide into three sections, i.e., the eastern, northern, and southern sections, for the following discussion. The timing of these movements is not well constrained. However, limited information obtained from several eyewitness accounts recorded during site visits suggests that the major movements of the N-Ohkawa landslide and the southern section of the Ohkawa landslide occurred in April 1999 and May 2006, respectively [20]; other slope movements of the Ohkawa landslide occurred recently, as described in Section 4.3. In addition, the lower slope of the N-Ohkawa landslide seems to have failed beforehand, as indicated by the bare area seen on the aerial photograph from 1975 (Figure 1b). The N-Ohkawa landslide and the northern and southern sections of the Ohkawa landslide are visible on the aerial photograph from 2015 (Figure 1c).

Sampling and Cross-Matching of Ring-Width Series

Increment cores were extracted from the upper side of the tilted stems of 90 living broadleaved trees using a Pressler increment borer (maximum length of 40 cm and diameter of 5.15 mm) between June and November 2019, on the main and secondary landslide scarps and on the landslide-displaced blocks (Figure 3). The trees were sampled at trunk heights of 20-120 cm. According to the standard methods of dendrochronological research, increment cores should be taken parallel to the contour to avoid the development of reaction wood in tilted trees [21]. Tension wood develops on the upper side of leaning hardwood trees and typically has wider annual rings than the lower side [3]. However, in the present study, we obtained cores oriented in the slope direction, because the formation of tension wood is, in itself, a good indicator of landslide movement [3]. Indeed, tension wood may not form in all tilted trees; therefore, wider annual rings may also be the result of growth release owing to, for example, the formation of a canopy opening after a landslide [5,8]. As such, responses resulting from both tilting and gap formation after landslides are included in our results.
The cores were prepared and analyzed using standard procedures following Stokes and Smiley [22] and Speer [21]. The sample cores were prepared using a razor blade to maximize the visual resolution of the ring widths and were measured to the nearest 0.01 mm under a binocular zoom microscope (Olympus SZ61) using a precision measurement stage (Chuo Seiki LTD. LS-252D) attached to a digital output unit (Mitsutoyo Digimatic). After measurement, all cores were visually cross-dated by matching well-defined wide or narrow rings. In addition, longer chronologies (>60 years) of shade-tolerant species and several intermediate species were cross-dated using a simple list method [23].

Identification of Growth Disturbance by Landslides in Tree-Ring Width Series and Age Determination of Shade-Intolerant Species

In this study, we considered two types of GD markers in the tree-ring width series: abrupt growth increase and abrupt growth suppression. GD markers were identified using the method described by Ishikawa et al. [7], in which a five-year moving average of ring width is used to identify periods of abrupt growth increase or suppression, as follows. A growth increase is defined as a doubling of the five-year moving average of the ring width compared with that of the previous five-year period, with the defined growth rate fluctuating continuously above 1 for at least 10 consecutive years. Conversely, a growth suppression is defined as a halving of the five-year average ring width, with the growth rate fluctuating continuously below 1 for at least 10 consecutive years. In addition, because of spatial irregularities in tree growth, the duration of the GD also depends on the sampling position [4,13]. Therefore, we also took into account moderate levels of GD, in which the defined growth rate persisted for fewer than 10, but more than five, consecutive years.
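A minimal sketch of this screening rule is given below. It flags a candidate growth increase when the mean ring width of the next five years is at least double the mean of the previous five years (half for a suppression), and then checks how long the year-by-year ratio to the pre-event mean persists, labelling runs of ten or more years as full GDs and runs of five to nine years as moderate GDs. The persistence test and the synthetic ring-width series are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of the moving-average screening for growth disturbances (GD).
# The ring-width series below is synthetic, not a measured core.

def detect_gd(rw, full_years=10, moderate_years=5):
    """Return a list of (year_index, 'increase'|'suppression', 'full'|'moderate')."""
    events = []
    for i in range(5, len(rw) - moderate_years):
        pre = sum(rw[i - 5:i]) / 5.0                       # previous 5-year mean
        post = sum(rw[i:i + 5]) / 5.0 if i + 5 <= len(rw) else None
        if pre == 0 or post is None:
            continue
        if post >= 2.0 * pre:                              # abrupt growth increase
            direction = "increase"
            persists = lambda ratio: ratio > 1.0
        elif post <= 0.5 * pre:                            # abrupt growth suppression
            direction = "suppression"
            persists = lambda ratio: ratio < 1.0
        else:
            continue
        # count how many consecutive years the ratio to the pre-event mean persists
        run = 0
        for width in rw[i:]:
            if persists(width / pre):
                run += 1
            else:
                break
        if run >= full_years:
            events.append((i, direction, "full"))
        elif run >= moderate_years:
            events.append((i, direction, "moderate"))
    return events

# Synthetic example: stable growth, a sustained release after index 20,
# and a partial return to narrower rings near the end of the series.
ring_widths = [1.0] * 20 + [2.4] * 12 + [1.2] * 8
print(detect_gd(ring_widths))
```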
In some cases, there is a slight time lag between the causal disturbance event and the GD markers extracted using the moving-average method, because of growth variation prior to and/or after the event. To avoid this inaccuracy, we carefully checked the ring-width pattern around the timing of the GD markers and used this information to decide on the GD marker years in the tree-ring width series. Figure 4a-d shows representative examples of how the GD marker years were identified in the tree-ring series using the above method. No significant changes in annual ring width were found in 30 of the cores from the sampled trees (33%), and these cores were not considered for further analysis. The chronology of each of the previous landslides was expressed using the event-response index (It), following Shroder [24], as It = (GDt/Nt) × 100 (%), where GDt is the number of trees showing a GD in their tree-ring record in year t, and Nt is the number of sampled trees for each landslide alive in year t. Owing to the limited number of samples and detected GD markers available to identify landslide reactivation years, thresholds of GDt ≥ 2 and It ≥ 15% were used.
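The sketch below applies the event-response index and the two thresholds just described to a per-year tally of disturbed trees; the yearly counts are invented placeholders rather than data from this study.

```python
# Minimal sketch: Shroder's event-response index, It = (GDt / Nt) * 100 (%),
# with the thresholds used above (GDt >= 2 and It >= 15%) to flag candidate
# landslide years. The yearly counts below are invented placeholders.

def event_response_index(gd_per_year, n_alive_per_year,
                         min_gd=2, min_it_percent=15.0):
    """Return {year: It} for years that satisfy both thresholds."""
    flagged = {}
    for year, gd in gd_per_year.items():
        n_alive = n_alive_per_year.get(year, 0)
        if n_alive == 0:
            continue
        it = 100.0 * gd / n_alive
        if gd >= min_gd and it >= min_it_percent:
            flagged[year] = round(it, 1)
    return flagged

gd_per_year = {1998: 2, 1999: 4, 2000: 3, 2005: 2, 2006: 5}   # placeholder counts
n_alive_per_year = {year: 20 for year in gd_per_year}          # placeholder sample sizes

# Prints the years that pass both thresholds together with their It values.
print(event_response_index(gd_per_year, n_alive_per_year))
```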
Additionally, the reported years of landslide events and the years of establishment of shade-intolerant tree species were also used to assist our interpretation of the dendrochronological effects of landslide activity. The establishment of shade-intolerant species is indicative of the development of large gaps in the canopy at some point in the past, and these gaps were most probably caused by landslides [25,26]. Consequently, the ages of individual younger trees of shade-intolerant species were determined (Figure 4e) based on the number of rings counted in the cores and the number of years required for seedlings to reach coring height, estimated using an age-height regression relationship. The age-height regression (age (years) = 0.025 × height (cm), R2 = 0.27) was established from 15 specimens of Ah (<2.5 m in height) sampled at the Shirakami Natural Science Park of Hirosaki University, 4 km from the study area, where the growth conditions are similar to those in our study area because of the similar elevations. However, the ages required for trees of shade-tolerant and intermediate shade-tolerant species to reach coring height were not estimated, and these trees were used only to identify GD markers in the tree-ring width series, as described above. The tree-ring records of these samples were inspected between 1950 and 2019.
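As a small worked example of this age correction (with placeholder values, not trees from this study), the establishment year of a shade-intolerant tree follows from the ring count at coring height plus the regression estimate of the years needed to reach that height:

```python
# Minimal sketch: estimate the establishment (germination) year of a
# shade-intolerant tree from the ring count at coring height plus the
# age-height correction, age (years) ~= 0.025 * height (cm), as given above.
# Sample values are placeholders, not trees from this study.

def establishment_year(sampling_year, rings_at_core, coring_height_cm,
                       slope=0.025):
    """Germination year = sampling year - rings in core - years to reach coring height."""
    years_to_coring_height = round(slope * coring_height_cm)
    return sampling_year - rings_at_core - years_to_coring_height

# e.g. a tree cored in 2019 at 80 cm height with 14 rings in the core:
print(establishment_year(2019, rings_at_core=14, coring_height_cm=80))
# -> 2003, i.e. establishment about four years after a landslide in 1999
```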
Spatial Distribution of Tree Ages and GD in Tree-Ring Width Series

The age of the trees sampled around the N-Ohkawa and Ohkawa landslides was 48.2 ± 22.4 years (average ± 1 SD), with a median of 56 years. The youngest tree was 6 years old and the oldest was 101 years old. Figure 5a shows the spatial distribution of the tree ages of the 60 trees used for landslide dating. Older ages tend to be concentrated near the scarps, where the majority of trees were 51-70 years old. Trees of 51-90 years in age were also sparsely distributed on the displaced block, and many of these were back-tilted, which suggests that the trees were moved down the hillside during landslide transport by rotation along a circular sliding surface [11,27]. The younger trees (<20 years old) on the scarps and displaced blocks were the shade-intolerant species Ah, Sb, and Ss. GD markers were not detected in these younger trees (Figure 5). In total, 64 GD events (including 39 moderate GDs) were identified from 47 trees (Figure 5b). Growth suppression (34 GDs, 53%) occurred in slightly more trees than growth increase (30 GDs, 47%). This higher frequency of growth suppression has also been reported in other similar studies [14,28]. The highest frequency (45%) of first-detected GD within the tree-ring width series occurred for trees aged between 16 and 30 years. Individual trees with two GDs (e.g., growth suppression and increase, or multiple growth-suppression events) were detected mainly on the landslide scarps. Nevertheless, in a few cases, two GD markers were also detected in trees on the landslide blocks.

Summary of GD in Tree-Ring Width Series

The GD markers associated with the N-Ohkawa landslide occurred mainly between 1998 and 2001 (Figure 6a). Samples Nu. 9 and 20, on the landslide scarp and the displaced landslide block, respectively, showed wider annual rings in 1998, one year before the landslide event that eyewitnesses reported as occurring in 1999.

Dendrochronological Investigations of Spatial and Temporal Patterns of Landslide Reactivation

The analysis of GD markers enabled the identification of landslide events on the studied slopes (Figure 7). These previous slides are summarized in Figure 8 with reference to the locations of trees with GD markers and field observations. For the N-Ohkawa landslide, apart from the landslide reported in 1999, additional landslide activity was detected in 1998 (Figure 7a). In addition, an event took place in 2000 that was detected using samples from the landslide scarp, implying an enlargement of the scarp (Figures 7a and 8). In the eastern section of the Ohkawa landslide, for which there are no reported landslides, we detected two landslide events that took place in 1993 and 2007 (Figure 7b). The landslide events in 1993 and 2007 may suggest an episode of regressive enlargement of the landslide scarp along the terrace scarp (Figure 8). This is supported by field observations showing terrain below the landslide scarp with collapsed debris deposited on a pre-existing landslide mass. The observations suggest that the present landslide unit may have grown by gradual accumulation of landslide debris from repeated landslides, in combination with retrogressive enlargement. For the northern section of the Ohkawa landslide, two landslide events were identified, in 2000 and 2005 (Figure 7c). These landslide events have not been previously reported; however, the landslide aftermath can be observed on the aerial photograph from 2015 (Figure 1c). Our analysis suggests that a large landslide might have been initiated in 2000 (as three of the four GDs were identified on the scarp; Figure 6c) and experienced further downward movement in 2005 (as GDs were detected on the landslide block; Figures 6c and 8). Furthermore, ongoing movement is evident on the downslope section, which is bounded by a secondary scarp up to 7 m in height in the lower section. This section is cut by a minor gully, in which surface water is concentrated and which affected the area before and after a local failure in 2017 (Figure 9a) [29]. The foot of the downslope section is undergoing river toe erosion, which may steepen the slope and facilitate further movement [29,30]. The landslide activity in 1998, as indicated by the GD markers, might also have progressed to become the major event of 1999 on the N-Ohkawa landslide block. Our dendrochronological study using 60 deciduous broadleaved trees from 12 species for landslide analysis is unique on a global scale [12]. In this contribution, we have shown that the obtained chronology of landslide activity is in agreement with eyewitness reports of the major landslide events in 1999 and 2006, which suggests that the GD markers and index values (where the thresholds GDt ≥ 2 and It ≥ 15% are adjusted based on the number of disturbed trees available for analysis) employed in this study may provide a critical assessment of past landslide occurrence in the study area and in areas with similar environmental conditions.
Shade-intolerant tree species were typically established 2-7 years after the landslides. However, this lag may reflect the severe erosion that can continue for several years after a landslide, thus limiting tree establishment [26].

Conclusions

The spatial and temporal development of the coherent N-Ohkawa and Ohkawa (consisting of the eastern, northern, and southern sections) landslides was investigated using tree-ring chronologies from tilted deciduous broadleaved trees in the Shirakami Mountains, northern Honshu Island, Japan. In total, we identified 64 GD markers (i.e., periods of growth suppression or growth increase) from 47 trees, as well as the year of establishment of 13 trees of shade-intolerant species, over a period of about 70 years. Our dendrogeomorphological analysis allowed us to identify the GD markers related to two major eye-witnessed landslide events, i.e., the N-Ohkawa landslide in 1999 and the southern section of the Ohkawa landslide in 2006. Shade-intolerant tree species became established with a lag of 2-7 years after these events, in response to canopy opening by the landslides. The other GDs were used to reconstruct previously unknown events within the local landslide chronology. The reconstruction of the N-Ohkawa landslide added precursory landslide activity in 1998 and a local enlargement of the landslide scarp in 2000. In addition, the reconstruction of the Ohkawa landslide indicated episodes of regressive enlargement of the landslide scarp from 1993 to 2007 in the eastern section. In the northern section of this landslide, the slope might have been undergoing sliding to form the current landslide scarp observed in 2000; the slope may have moved progressively downwards in 2005, and its secondary scarp on the downslope section locally expanded in 2017. In addition, the reconstruction of the southern section of the Ohkawa landslide suggested that progressive movements may have developed in 1986 and 1987, i.e., before the landslide event in 2006.
6,006
2021-04-25T00:00:00.000
[ "Environmental Science", "Geology" ]
Algorithm for Enhancing the QoS of Video Traffic over Wireless Mesh Networks One of the major issues in wireless mesh networks (WMNs) which needs to be solved is the lack of a viable protocol for medium access control (MAC). In fact, the main concern is to expand the application of limited wireless resources while simultaneously retaining the quality of service (QoS) of all types of traffic, in particular the video service with real-time variable bit rate (rt-VBR). As such, this study attempts to enhance QoS with regard to packet loss, average delay, and throughput by controlling the transmitted video packets. The packet loss and average delay of QoS for video traffic can thereby be controlled. Results of the simulation show that Optimum Dynamic Reservation-Time Division Multiplexing Access (ODR-TDMA) achieved excellent resource utilization, which improved the QoS of video packets. This study has also proven the adequacy of the proposed algorithm to minimize packet delay and packet loss, in addition to enhancing throughput, in comparison to those reported in previous studies. Keywords—Wireless Mesh Networks (WMNs); Medium Access Control (MAC); Quality of Service (QoS); video traffic I. INTRODUCTION The importance of the medium access control (MAC) protocol for wireless mesh networks (WMNs) rests on several reasons, such as enabling statistical multiplexing for the wireless access interface, optimizing the use of limited wireless resources, and ensuring that the required quality of service (QoS) for multimedia packets is fulfilled, particularly for the video service with real-time variable bit rate (rt-VBR) [1]. The assignment of dynamic slots is important to ensure that the achievement of statistical multiplexing in various service categories and features is adequate for the variable bit rate (VBR) class that coordinates the vast traffic needs of both spatially dispersed and independent wireless terminals [2][3][4]. There are several weaknesses in the present protocols for MAC, for instance residual lifetime, excessive overhead for transmission of buffer data, instant queue length, and punctual arrival of packets as an essential threshold. Two types of access schemes can be employed for the transmission of data from individually distributed wireless terminals to access points: contention-free and contention-based. In the effort to diminish packet loss as a result of collision, the contention-based system [5][6][7][8] requires smaller control packets and a lower collision probability. As such, several MAC protocols were developed for application in wireless networks. For instance, a comparative analysis has been carried out which involved two MAC protocols, namely RI-MAC (a contention-based, receiver-initiated asynchronous duty-cycle MAC protocol) and ATMA (an advertisement-based Time Division Multiplexing Access (TDMA) MAC protocol) [9]. A system that is free from any contention [10-13] was described as being based on either a polling mechanism that assigns an uplink slot to transfer the parameters or a piggyback approach where the parameters are piggybacked on the uplink data burst. The polling period must be altered to reduce the loss of packets, while the overhead of the piggyback has to be reduced. A MAC protocol based on TDMA was developed to prevent collision during data transfer in order to maintain QoS in networks with an ad-hoc feature [14]. Other researchers have developed procedures for adaptive registration of machine-to-machine (M2M) networks with massive Internet of Things (IoT) devices. This
was followed by the development of a MAC protocol with hybrid-slotted Carrier Sense Multiple Access/TDMA (CSMA/TDMA) (HSCT), in which the logical frame comprises two parts: the first part is a contention-based CSMA with collision avoidance (CSMA/CA) period (SCP) that is segregated into many access windows (i.e., C-slots). The second part is a contention-free slotted TDMA period (STP) segregated into many T-slots [15]. Additionally, a hybrid CSMA/TDMA or queue-MAC has been developed to adjust to the varied traffic [16], where the contention-based CSMA was applied for a light load to address any delay due to scattered data transfer. Another MAC protocol based on TDMA was developed for gateway multi-hop Wi-Fi-based long distance (WiLD) networks in the attempt to enhance throughput and deal with the delay in performance [17]. This method was able to substantially reduce the possibility of collision for a token-based MAC protocol through the use of a synchronized inherent nodes approach. The suggested protocol enhanced the performance of the network by using the available bandwidth to maximize the overlapping TDMA slots. The improvement of MAC protocols was reflected in the average end-to-end packet delay over multiple hops, as well as the saturated throughput. To the best of our knowledge, Dynamic Reservation TDMA (DR-TDMA) is one of the most efficient and comprehensive algorithms used in resource allocation for multimedia traffic in wireless networks [18]. In particular, this approach includes an allocation algorithm that is based on an efficient rate of VBR video traffic and integrates a rate-control algorithm in order to keep the video flow within the related traffic thresholds. Hence, a 6-bit piggybacking overhead was used to indicate the requirements of video connections in the previous protocols. At the end of each frame, DR-TDMA estimates the connection buffer status (with regard to the number of packets in the guaranteed and best-effort groups) based on data and connection parameters received either directly in a control packet or piggybacked on a data packet. Packets that conform to the video traffic threshold are referred to as guaranteed packets with high QoS, whereas packets that fail to conform are served on a best-effort basis without QoS. In addition, the algorithm for slot allocation comprises two units, namely best-effort and guaranteed allocations. The available slots were initially allocated by the scheduler to video connections awaiting guaranteed packets via a time-to-expiry algorithm. The remaining slots were then allocated to video connections with best-effort packets in the virtual queue based on the algorithm for fair bandwidth allocation.
This study proposes the utilization of ODR-TDMA for video traffic. Hence, the primary aim of the suggested algorithm is to offer a fair delay for the delayed video packets by reducing the variance in the delay between the transferred video packets. Since η is modified, the corresponding value is met for each varied number of video users, which ODR-TDMA attempts to achieve through the creation of each frame. The allocated resources or bandwidth are controlled in an adaptive manner for video traffic corresponding to the average bit rate, together with the ability to manage QoS with regard to packet loss, average delay, and throughput. This paper is organized as follows. Section II presents the system and traffic model description. Section III explains and discusses the proposed ODR-TDMA mechanism. The performance evaluation and simulation results are reported in Section IV. The conclusion is presented in the final Section V. II. SYSTEM AND TRAFFIC MODEL DESCRIPTION The developed resource allocation algorithm can be utilized for slotted TDMA Wi-Fi, which functions by complying with the standards set in IEEE 802.11. The protocol was developed using Time Division Duplex (TDD). Fig. 1 shows the structure of the frame used in the developed resource allocation algorithm. The size of an air-interface frame was fixed at 2 ms, and the frame was divided into equal-sized slots to moderate data scheduling, with each slot having a 48-byte frame body. III. ODR-TDMA MECHANISM High usage of the wireless channel and excellent QoS can be achieved through an efficient resource allocation algorithm. In fact, the algorithm for transmission at the uplink can also be easily used for the downlink scheduled by TDM. The upper and lower Delay_Thresholds were broadcast by the base station to all video users in each frame created, and their delay time in transferring packets is indicated by overhead piggybacking based on three categories: first, the higher state for a packet delay that is higher than the upper Delay_Threshold; second, the lower state for a packet delay that is smaller than the lower Delay_Threshold; and finally, the in-between state for a packet delay that is lower than the upper Delay_Threshold but higher than the lower Delay_Threshold. The latest updated status for packet delay is used by the allocation algorithm to manage slots for users at the end of each frame. This algorithm maximizes the number of users in the in-between delay state by converting the lower and higher delay states into the in-between state to obtain a fair delay. The allocation algorithm depends on three steps, as sketched below. In the first step, it serves users in the higher delay state and increases their number of allocated slots by 1. In the second step, it keeps the number of allocated slots unchanged. Finally, in the third step, it serves users with a delay lower than the lower delay threshold and reduces their number of allocated slots by 1.
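As a rough illustration of the three-step adjustment just described, the following sketch (assumed data structures and names, not the authors' implementation) increments the allocation of users in the higher delay state, leaves in-between users unchanged, decrements users in the lower delay state, and serves users that have waited for three frames.

```python
# Minimal per-frame slot adjustment sketch. 'state' is one of "higher", "in_between",
# "lower", derived from the piggybacked packet-delay report of each video user.

def adjust_slots(users, free_slots):
    """users: dict user_id -> {"slots": int, "state": str, "frames_waiting": int}"""
    # Step 1: users above the upper delay threshold get one extra slot (if available).
    for u in users.values():
        if u["state"] == "higher" and free_slots > 0:
            u["slots"] += 1
            free_slots -= 1
    # Step 2: users between the thresholds keep their current allocation (no change).
    # Step 3: users below the lower delay threshold release one slot.
    for u in users.values():
        if u["state"] == "lower" and u["slots"] > 0:
            u["slots"] -= 1
            free_slots += 1
    # A user that has waited three frames without a slot is served in the present frame.
    for u in users.values():
        if u["slots"] == 0:
            u["frames_waiting"] += 1
            if u["frames_waiting"] >= 3 and free_slots > 0:
                u["slots"] = 1
                free_slots -= 1
                u["frames_waiting"] = 0
        else:
            u["frames_waiting"] = 0
    return free_slots
```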
Since the lower delay state was the least affected by any decrease in the free available slots, it appeared in the final step. When a slot is absent, the user has to wait until an available slot is found. If the waiting lasts 3 frames, a slot is allocated by the base station in the present frame to deliver a packet and to update its delay state. Step 3 shows that users were distributed based on their wait state, where users with a higher waiting time were served with fair allocation efficiently. Both the lower and upper Delay_Threshold were modified by the base station using the following relations [18]: the Upper_Delay_Threshold is the sum of the Lower_Delay_Threshold and the Variance_Threshold, the latter being fixed at 4 msec. The quantities involved in these equations are the bit rate per slot of the uplink channel for 24 slots/frame; the duration of the frame, which is fixed at 2 ms; the number of additional slots for video traffic that exceeds the product of the number of video users and the average bit rate per video user; the number of video users; the average bit rate per video user; the weighting factor η of the algorithm proposed in this study; the number of slots that is equal to the total mean bit rate for all video users; and the number of slots allocated to each user. These equations suggest that ODR-TDMA is able to modify the average number of slots allocated for video traffic by regulating the values of the lower and upper delay thresholds. The upper and lower delay thresholds based on these equations were controlled more strictly by the base station when the EXCESS slots increased. Hence, the lower delay state is used more often in comparison to the higher delay state, thereby reducing the number of allocated slots. The value of the Variance_Threshold, which is the average packet delay variance among the transferred packets, must be small, which is the reason for choosing 4 ms (two frames). The upper and lower Delay_Threshold values are bounded by the Maximum Transfer Delay (MTD) of video packets, i.e., 50 ms (maximum) and zero (minimum), respectively. The random value chosen by ODR-TDMA in the initial reading for η was fixed as one of the following values: {1.00, 1.01, 1.02, 1.03, 1.04}. It measured both packet delay and loss for each frame, and subsequently compared them with the QoS requirements, which were stored in the buffer upon adherence. With regard to the remaining frames, the MPP was employed to scan the area and measure η. The value of η was then compared with the values derived from prior processing to determine the suitable Lower and Upper Delay_Thresholds. With repetition, ODR-TDMA began to learn to identify the optimal value of η which meets the requirements for QoS and suits the number of video users.
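Only part of the original equations survives in the text above, so the sketch below merely encodes the recoverable relationships between the thresholds; the variable names and the clipping to the 0-50 ms MTD range are assumptions rather than the formulas of [18].

```python
# Sketch of the threshold bookkeeping recoverable from the description above: the upper
# threshold equals the lower threshold plus a fixed 4 ms variance margin, and both are
# kept within 0 ms and the 50 ms maximum transfer delay of video packets.

FRAME_MS = 2          # frame duration
VARIANCE_MS = 4       # variance threshold (two frames)
MTD_MS = 50           # maximum transfer delay for video packets
ETA_CANDIDATES = [1.00, 1.01, 1.02, 1.03, 1.04]  # initial values of the weighting factor

def delay_thresholds(lower_ms):
    """Return the (lower, upper) delay thresholds, clipped to the valid range."""
    lower = min(max(lower_ms, 0.0), MTD_MS)
    upper = min(lower + VARIANCE_MS, MTD_MS)
    return lower, upper

print(delay_thresholds(10.0))  # -> (10.0, 14.0)
```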
Fig. 2 shows the flowchart of the developed ODR-TDMA for video traffic. ODR-TDMA is able to obtain data from other wireless networks, which can cause delay for users. To be precise, when the MN fails to gain access to a WLAN network due to a limited area of coverage, it can request data, including the IP address of the connected Mesh Access Point (MAP), to be sent to another network interface card. This MAP then sends the MN a reply information message listing neighbouring access networks (e.g., network ID and channel numbers) that are within coverage. A list of available channels that operate in the available WLAN network was generated by the MN. If there was no WLAN network channel available within the coverage of the other network, the MN stayed on the other network interface card, thereby allowing the MPP access to the number of delayed users. All channels were scanned, either passively or actively, when the MN entered the interface card of the WLAN. If no MN was found, the interface card is switched off, but if an MN was found, the MPP will connect it with a new WLAN interface card to communicate data. IV. PERFORMANCE EVALUATION AND SIMULATION RESULTS Three primary metrics of performance were employed to evaluate the ODR-TDMA algorithm, based on the alteration in the number of MNs for the first case and on a comparison with a previous protocol for the second case. The outcomes for packet loss, packet delay, and throughput are presented and compared with the findings made in previous studies in two parts; the first part presents packet loss, average delay, and throughput when the MNs were increased, whereas the second part shows a comparison of the outcomes for performance metrics between ODR-TDMA and Fair Dynamic Reservation-Time Division Multiple Access (FDR-TDMA) when the MNs were increased. A. Effect of Increasing Nv when η is within a Range of {1:1.04} This section discusses the effect of ODR-TDMA when there was an increase in MNs, with the value of η fixed within {1:1.04}. The results for packet loss ratio, average delay, and throughput show the impact of ODR-TDMA in fulfilling the requirements of QoS. Fig. 3 shows that a small increase in η could reduce the packet loss that is proportional to the increase in both Nv and η. The expansion reflects the stability of η as Nv was gradually increased. It is interesting to note that the small increase in η, as shown in Fig. 4, resulted in a substantial reduction in average packet delay. Efficient use of the allocated bandwidth by ODR-TDMA resulted in a substantial decrease in average packet delay and packet loss due to the change in the value of the Delay_Threshold. An increase in the MNs enhanced the dominance of η, which affects the increase in multiplexing gain as well as a decrease in the total variation rate for video traffic. High values of η have more impact on a small group of users than on a large group in attaining a similar QoS. The Delay_Threshold is frequently influenced by the allocated slots and η. A reduction in allocated slots and an increase in η tend to reduce the availability of free bandwidth, thereby generating the higher delay state and reducing the average packet delay due to the conversion to the in-between state by ODR-TDMA. Throughput is the average ratio of slots used for successful data packet transmission to the total number of slots per frame. The similarity of the resulting throughput across values of η is shown in Fig. 5. The second section presents a comparison of ODR-TDMA and FDR-TDMA. The video traffic channel parameters were fixed as recommended by [18]. B.
Comparison of ODR-TDMA with Previous Protocols Channel parameters and users were set as recommended by [18] to compare the video traffic simulations. The outcomes of the simulation of the packet loss ratio are shown in Figs. 6 to 9. They show the results obtained with η = 1.04 and an average bit rate of 250 Kbps per video user. The first factor used to compare ODR-TDMA with other protocols was the packet loss ratio. Fig. 6 shows a comparison with FDR-TDMA, which illustrates that ODR-TDMA generated better outcomes by virtue of the reduced packet loss and the achieved consistency, thereby giving exceptional QoS. At the targeted 0.06 packet loss ratio, ODR-TDMA supported 29 video users at a packet delay lower than 4 ms, whereas only 26 users were supported by FDR-TDMA. Fig. 7 illustrates that ODR-TDMA resulted in a lower packet delay in comparison to FDR-TDMA, since it supported 20-23 users with a delay of less than 2 msec whereas the delay in FDR-TDMA was 5 msec. By taking into account throughput and delay, ODR-TDMA served 29 users with higher throughput, as can be seen in Fig. 8, in which the delay increased by only 3.25 msec in comparison to FDR-TDMA, which served a similar number of users with a delay of up to 19 msec. Fig. 8 shows that ODR-TDMA exhibited slightly higher throughput when the system was overloaded. Both throughputs started at the same point and expanded as Nv increased within a similar range up to Nv = 26, after which ODR-TDMA began to achieve higher throughput in comparison to FDR-TDMA. V. CONCLUSION This study presents a unique resource allocation algorithm based on fair delay optimization for video traffic over a WMN system. As such, the recommended allocation algorithm was able to achieve the required QoS by reducing the variance in delay between the transmitted video packets, in addition to flexibly controlling the allocated resource (bandwidth) for video traffic around the corresponding average bit rate, which enhanced the efficiency of its usage. The simulation results show that exceptional resource utilization was achieved and an almost fair delay was offered for the video packets. Moreover, the algorithm has the ability to manage the QoS for video traffic in terms of average delay and packet loss in order to generate exceptional QoS. In conclusion, the algorithm proposed in this study is successful in terms of improving packet loss, packet delay, and throughput in comparison to FDR-TDMA. For future work, since ODR-TDMA only concentrates on and works with users who have a delay higher than the upper delay threshold, a new algorithm could be developed to concentrate on users who have a delay lower than the lower delay threshold and keep them in sleep mode during the waiting time, so as to save the total energy consumption. Fig. 3. Packet Loss Ratio as a Function of Nv.
3,974.6
2019-04-01T00:00:00.000
[ "Computer Science", "Business" ]
Dual Multi-Scale Dehazing Network Single-image haze removal is a challenging ill-posed problem. Recently, methods based on training on synthetic data have achieved good dehazing results. However, we note that these methods can be further improved. A novel deep learning-based method is proposed in this paper to obtain a better dehazed result for single-image dehazing. Specifically, we propose a dual multi-scale network to learn the dehazing knowledge from synthetic data. The coarse multi-scale network is designed to capture a large variety of objects, and then fine multi-scale blocks are designed to capture a small variety of objects at each scale. To show the effectiveness of the proposed method, we perform experiments on a synthetic dataset and real hazy images. Extensive experimental results show that the proposed method outperforms the state-of-the-art methods. I. INTRODUCTION The turbid medium in the atmosphere often degrades the image quality. Outdoor images taken in bad weather tend to show a hazy and blurry appearance. Atmospheric absorption and scattering cause haze, which reduces the contrast and fades the color of outdoor images. The light that reaches the camera from the scene objects is attenuated along the line of sight and blended with the atmospheric light. The absorption and scattering processes are commonly modeled by a linear combination of the direct attenuation and the air-light [1]: I(x) = J(x)t(x) + A(1 − t(x)), where I is the input hazy image, J is the corresponding clean image, t represents how much of the light reflected from objects is received by the camera, and A is the air-light. Single-image dehazing, which aims at removing haze from a single input image as much as possible, has a wide variety of applications, such as auto driving, semantic segmentation, image recognition, etc. Due to its wide applications, dehazing has attracted much attention. There are two key steps in the dehazing process: 1) estimation of the transmission map and atmospheric light, and 2) computation of the final dehazed result. Prior-based methods [2], [3] have been proposed to remove haze based on these two steps of dehazing. However, these priors are based on simple statistical laws, which cannot always be satisfied in real cases. For example, the dark channel prior (DCP) [2] cannot handle white objects well. Inspired by the success of data-driven methods, many researchers proposed end-to-end CNN models [4], [5], [6], [7], [8] for single-image dehazing. Although these methods have shown effectiveness on synthetic datasets, they have limitations due to the large-scale arbitrariness caused by haze. Furthermore, the distribution of haze depends on depth, which requires different receptive field sizes to estimate the depth for each pixel. To overcome these two issues jointly, we propose a dual multi-scale dehazing network. The formation of haze can be affected by various factors, such as temperature, altitude, and humidity, making the distribution of haze at individual spatial locations space-variant and non-homogeneous. To capture the distribution of haze, we propose a dual multi-scale dehazing network, which has different receptive fields and captures objects with different sizes. We compare our method with traditional and learning-based methods [2], [9] in Fig. 1.
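For reference, the scattering model written out above can be exercised directly. The following NumPy sketch is illustrative only (not the paper's code): it synthesises a hazy image from a clean one and inverts the same relation given estimates of the transmission map and air-light.

```python
# Haze model: I = J * t + A * (1 - t); dehazing inverts the relation J = (I - A) / t + A.
import numpy as np

def add_haze(J, t, A):
    """J: clean image in [0,1], t: transmission map in (0,1], A: scalar air-light."""
    return J * t[..., None] + A * (1.0 - t[..., None])

def dehaze(I, t, A, t_min=0.1):
    """Invert the model; t is clamped to avoid amplifying noise where haze is dense."""
    t = np.clip(t, t_min, 1.0)[..., None]
    return np.clip((I - A) / t + A, 0.0, 1.0)

# Hypothetical usage:
J = np.random.rand(4, 4, 3); t = np.full((4, 4), 0.6); A = 0.9
I = add_haze(J, t, A)
assert np.allclose(dehaze(I, t, A), J, atol=1e-6)
```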
In Fig. 1, the dehazed result of DCP tends to show a dark appearance and the tree area cannot be recovered well compared with (d). The dehazed result of PhysicsGan looks better than that of DCP, but there is still room for improvement. Compared with DCP and PhysicsGan, our method often generates a visually favorable result. The main contributions of this work are listed as follows: • We propose a dual multi-scale dehazing network, which can capture the large and small variety of objects and understand the distribution of haze. Because the variation in the distribution of haze is very large, we employ a coarse multi-scale network to capture the global haze distribution. We then capture the small variation in the distribution of haze via fine multi-scale blocks. The proposed model can capture the global and local distribution of haze well and effectively improves the dehazing performance. • We propose a fine multi-scale block, which can capture small varieties of objects. The distribution of haze depends on the depth, which is different for different objects. However, the distribution of haze within one object tends to be homogeneous. It is critical to design a network that can capture small varieties of object sizes at each scale, which motivates us to design a fine multi-scale block. • We conduct extensive experiments to quantitatively and qualitatively compare the proposed method with the state-of-the-art single-image dehazing methods and demonstrate the effectiveness of the proposed model. II. RELATED WORK Single-image dehazing methods can be mainly grouped into two approaches: physical-model-based recovering methods and color-information-based enhancement approaches. A. SINGLE IMAGE DEHAZING METHODS Physical-model-based methods [10], [11] assume that hazy images can be modeled by Eq. (1), which models hazy images as the linear sum of the clean image and atmospheric light [12]. The clean image means the scene information that is not affected by medium particles. Based on this model, most existing algorithms focus on recovering the scene radiance that does not reach the camera sensors, i.e., estimating the transmission map t(x) for each hazy image. For example, an improved image formation model is proposed by [13]. This model is designed for the estimation of the transmission map and surface shading. The hazy image can be treated as regions of constant albedo, and the scene transmission can be inferred from the hazy image. The dark channel prior (DCP) is inferred from the features of non-sky haze-free images. The DCP assumes that at least one pixel contains a channel whose intensity is close to zero. [10] extends the DCP and proposes a more general boundary constraint. Four haze-relevant priors are studied in [14], and a multi-scale dehazing method is designed to improve the dehazing performance. Reference [15] finds a relation between brightness and saturation in a clear image patch, and proposes a color attenuation prior to compute transmission maps. Reference [3] finds that a clean image can be represented by hundreds of color clusters. However, a hazy image cannot be represented by hundreds of color clusters. Based on this observation, [3] designs a non-local method to compute the transmission map. However, these hand-crafted priors are statistical properties over a large number of images and thus cannot always hold in practical scenarios. For example, when the scene objects are close to the airlight, the dark channel has bright values near such objects, which means that the dark channel prior does not hold, and as a result the haze layer will be overestimated [2].
To avoid designing statistical features, several algorithms employ deep convolutional neural networks (CNNs) to improve image dehazing. Both DehazeNet [11] and MSCNN [16] use a deep neural network for transmission estimation and then follow the conventional method to estimate the atmospheric light and haze-free image. Instead of computing the transmission map and the atmospheric light separately, AOD-Net [17] incorporates the transmission and the airlight into a new variable and designs a lightweight dehazing method. However, this method tends to retain haze in the dehazed result. DCPDN [18] and DDN [19] are two methods which incorporate the scattering model into a deep network. These methods need two networks to compute transmission maps and atmospheric lights first, and then restore the final dehazed images by inverting model (1). An end-to-end fusion-based dehazing network [20] is proposed to predict weight maps that combine three derived inputs into a single one by choosing the most important features from them. However, GFN also computes the three inputs using traditional methods, and intermediate confidence maps need to be computed. Qin et al. design a novel pixel and channel attention [5] to improve the dehazing performance. Pan et al. design a physics-based generative network [9] for image restoration problems, which can incorporate the physics model to boost the dehazing performance. Dong et al. employ the boosting strategy to design a multi-scale dehazing network [21]. Zheng et al. study ultra-high-definition image dehazing [22] based on the physical model. Although promising results have been obtained, the assumption that a hazy image is the sum of the clean image and airlight does not hold in real complex scenes, especially when the haze is heavy and contains noise. To improve the dehazing performance on natural hazy images, Shao et al. propose a domain adaptation dehazing method [23]. Different from these methods, our method builds multi-scale ability into the proposed network for dehazing and achieves fast dehazing performance. Prior-based dehazing methods can restore sharp dehazed results at the expense of low quantitative results for synthetic images. Data-driven dehazing methods obtain high quantitative results for synthetic images but cannot remove haze from real hazy images completely. To address the disadvantages of prior-based and data-driven dehazing methods, neural augmentation based dehazing methods [24], [25], [26] have been proposed. Neural augmentation based dehazing methods estimate the atmospheric light and transmission map first, and then data-driven methods are used to refine the atmospheric light and transmission map. The dehazed results are obtained via the physical model with the estimated atmospheric light and transmission map. III. PROPOSED METHOD The proposed model is a dual multi-scale dehazing network; the overall framework is shown in Fig. 2. The dual multi-scale ability comes from the coarse multi-scale network and the fine multi-scale blocks. We first introduce the motivation, and then the dual multi-scale dehazing network, which learns the dehazing ability from synthetic images. A. MOTIVATION Objects often have different sizes, which makes dehazing difficult. As shown in Fig. 3, the persons in red rectangles have different sizes due to the different depths, which results in different densities of haze for these areas. We note that the trees in black rectangles also have different sizes.
We can see that the objects in near areas have large sizes, while the objects in far areas have small sizes. To capture such a dramatic variation of object sizes, we propose a coarse multi-scale network that increases the receptive field via down-sampling. As shown in Fig. 3(c) and (d), we note that the sheep and cranes have similar object sizes and show small variations of object sizes; it is important to capture such a variation for image dehazing. To capture such small variations of object sizes, we propose a fine multi-scale block, which employs different dilation rates to understand the variations in local areas. In order to capture the large and small variations of objects, we combine the coarse multi-scale network with fine multi-scale blocks. B. DUAL MULTI-SCALE DEHAZING NETWORK Based on the analysis in Section III-A, we propose a dual multi-scale dehazing network (DMSDN); the network details can be found in Fig. 2. The dual multi-scale dehazing network consists of a coarse multi-scale network and fine multi-scale blocks. The coarse multi-scale network contains three scales. The first scale (coarse scale) contains six fine multi-scale blocks, the second (median scale) contains six fine multi-scale blocks, and the third scale contains six fine multi-scale blocks. To capture the global and local features, the model employs three scales of information to explore useful features for dehazing. The learned feature maps contain redundant information, which is one reason why deep learning models cannot learn effective features for dehazing. In order to boost the learning efficiency, we propose a fine multi-scale block. The fine multi-scale block (FMB) contains multi-scale information extraction and an attention module, which is shown in Fig. 4. Based on the observation that the feature map contains redundant information, applying a convolution on it cannot learn as much information as possible. We split the feature into four sub-features, which contain sub-information of the original feature. We apply a convolution on one sub-feature and obtain a new feature (O1). We concatenate O1 with another sub-feature and obtain a concatenated feature (C1); then we apply a convolution on C1 and obtain a new feature (O2). We repeat this process and obtain O3 and O4. We concatenate O1, O2, O3, and O4, and then apply channel attention on the concatenated feature to obtain the active feature for dehazing. The proposed module reduces the computation time and model complexity. To further improve the information flow, we propose an adaptive fusion module (AFM), which fuses the features from each scale of the coarse multi-scale network adaptively. As shown in Fig. 5, we first concatenate the high-level, middle-level, and low-level features, and then a convolution operation with a 1×1 kernel is applied to obtain the fused feature.
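A rough PyTorch sketch of the fine multi-scale block described above is given below. The exact layer widths, the mapping of the dilation rates 1, 2, 4, and 8 to the four branches, the residual connection, and the channel-attention design are assumptions for illustration rather than the paper's specification.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention (assumed design)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.fc(x)

class FineMultiScaleBlock(nn.Module):
    """Split into four sub-features, convolve sequentially, concatenate, apply attention."""
    def __init__(self, channels=64, dilations=(1, 2, 4, 8)):
        super().__init__()
        assert channels % 4 == 0
        c = channels // 4
        # Branch 1 sees only its own sub-feature; branches 2-4 also see the previous output.
        self.convs = nn.ModuleList([
            nn.Conv2d(c if i == 0 else 2 * c, c, 3,
                      padding=dilations[i], dilation=dilations[i])
            for i in range(4)])
        self.attn = ChannelAttention(channels)
    def forward(self, x):
        xs = torch.chunk(x, 4, dim=1)                 # four sub-features
        outs = [self.convs[0](xs[0])]                 # O1
        for i in range(1, 4):                         # O2, O3, O4
            outs.append(self.convs[i](torch.cat([outs[-1], xs[i]], dim=1)))
        return self.attn(torch.cat(outs, dim=1)) + x  # residual connection (assumed)

# Hypothetical shape check:
y = FineMultiScaleBlock(64)(torch.randn(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```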
C. TRAINING LOSS Let F denote the mapping function which is learned by the network, and let θ represent the parameters of the network. Let {Ii, i = 1, 2, ..., N} and {Ji, i = 1, 2, ..., N} denote the hazy input images and the corresponding clean ones, respectively. It has been widely acknowledged that the L2 loss tends to produce blurry dehazed results [18]. To solve this issue efficiently, we introduce a novel edge-preserving loss, which is composed of two different parts: an L1 loss and a perceptual loss. The L1 loss is defined as L1 = (1/N) Σi ||F(Ii) − Ji||1, where N is the number of training data pairs. To eliminate the visual artifacts of dehazed images, we employ a perceptual loss to train the model. The perceptual loss consists of a Feature Reconstruction Loss and a Style Reconstruction Loss. Instead of encouraging the pixels of the dehazed image to exactly match the pixels of the ground truth image, the feature reconstruction loss encourages them to have similar feature representations. The perceptual loss is computed from the differences between the VGG-19 feature maps of the dehazed image and those of the ground truth, where φ denotes the VGG-19 network, which is trained on ImageNet, N denotes the number of training samples, and j denotes the layer number. We select the layers 'conv1-2', 'conv2-2', 'conv3-2', 'conv4-2', and 'conv5-2' in the VGG-19 network to compute the feature reconstruction loss. Our overall loss function combines the L1 loss and the perceptual loss, where λ2 controls the contribution of the perceptual loss.
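The edge-preserving objective of Section III-C can be sketched as follows. The VGG-19 slice indices only approximate the conv1-2 to conv5-2 activations, the style-reconstruction term is omitted, and a single perceptual weight stands in for the paper's λ1/λ2 weighting, so this is an illustrative approximation rather than the authors' loss.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class DehazeLoss(nn.Module):
    """L1 term plus a VGG-19 feature-reconstruction term (sketch; inputs assumed in [0,1])."""
    def __init__(self, perceptual_weight=0.01):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
        for p in vgg.parameters():
            p.requires_grad = False
        # Consecutive slices approximating conv1_2 ... conv5_2 in torchvision's VGG-19.
        self.slices = nn.ModuleList([vgg[:3], vgg[3:8], vgg[8:13], vgg[13:22], vgg[22:31]])
        self.l1 = nn.L1Loss()
        self.w = perceptual_weight
    def forward(self, pred, target):
        loss = self.l1(pred, target)          # pixel-level L1 loss
        fp, ft = pred, target
        for s in self.slices:                 # accumulate feature-space differences
            fp, ft = s(fp), s(ft)
            loss = loss + self.w * self.l1(fp, ft)
        return loss
```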
D. IMPLEMENTATION DETAILS AND DATASET In the proposed model, we set 3 × 3 as the kernel size for all convolution layers except the ones in the AFM. In our experiments, we set the scale number to 3. For each scale-aware attention module, we set the dilation rates to 1, 2, 4, and 8. All dilated layers are initialized using an identity initializer [27]. We set λ1 = 0.01 and λ2 = 0.01 in all the experiments. We use a leaky rectified linear unit (LReLU) as our activation function. We use the Adam optimizer with β1 = 0.9 and β2 = 0.9999 to train the network. The batch size and the learning rate are 1 and 0.0005, respectively. During training, we halve the learning rate every 30 epochs. The network was trained for a total of 100 epochs in PyTorch with an Nvidia GTX 2018Ti GPU. We train the proposed network on the SOTS dataset from RESIDE [28], as do the state-of-the-art dehazing methods [5], [29]. Fig. 9. The best result is marked in red, while the second best is marked in blue. A. QUANTITATIVE COMPARISON 1) RESIDE DATASET REalistic Single-Image DEhazing (RESIDE) [28] is the first large-scale simulated haze dataset, which provides indoor and outdoor hazy images. The hazy images in this dataset have ground truth, so we can evaluate the dehazing performance using PSNR and SSIM. The indoor part of the RESIDE dataset simulates hazy images using the NYU indoor dataset [33]. The indoor part of the RESIDE dataset contains 500 hazy images for testing. We then evaluate the performance of our proposed network on the SOTS dataset from RESIDE [28]. The comparison results on SOTS are shown in Table 1. From the experimental comparisons, it has been demonstrated that the proposed method outperforms the current state-of-the-art methods [21], [29], and achieves superior performance with great improvements. It should be pointed out that FFA-Net achieves the best scores for the RESIDE dataset. However, its performance on real hazy images is poor. We term GridDehazeNet [29] as GDN. B. QUALITATIVE COMPARISON To further evaluate the proposed method, we use real images to compare with different state-of-the-art methods. Fig. 6 shows the qualitative comparison of results with several state-of-the-art dehazing algorithms [2], [9], [11], [17], [18], [20], [21], [31], [32] on challenging real-world images. As shown in Fig. 6(b), most of the haze is removed by DCP, and the details of the scenes and objects are well restored. However, the results significantly suffer from over-enhancement (for instance, the building regions of the first and second images are much darker than they should be). The results of DehazeNet, AOD-Net, FFA-Net, MSBDN, Dehamer, GridDehazeNet, and DCPDN do not have the overestimation problem and maintain the original colors of the objects, as shown in Fig. 6. But these methods leave some remaining haze in the dehazed results. The methods of AirNet and GFN tend to non-uniformly estimate the haze concentration, resulting in inhomogeneous dehazed images in Fig. 6(k). PhysicsGan and EPDN generate relatively clear results, but the images show some color distortions. In contrast, the dehazed results of our method are clear and the details of the scenes are enhanced moderately, as shown in Fig. 6(n). We note that some works employ hazy images to train the dehazing network [6], [31]. PSD [6] employs hazy images to fine-tune the dehazing network. AirNet [6] designs an encoder-decoder network, which can handle images with unknown corruptions. As shown in Fig. 7, we can see that the dehazed results of AirNet and PSD show a hazy appearance. We also note some color distortion in the dehazed result of PSD in the second row of Fig. 7. The dehazed result of AirNet looks darker than the dehazed results of the proposed method and PSD. In contrast, the proposed method can restore the image details and recover a reasonable global appearance. AirNet assumes that the degradations in the same image should be similar, which is not true for image dehazing. We show an example in Fig. 8, which shows that the degradations in the same image are not similar. PSD employs several well-grounded physical priors to fine-tune the dehazing model. However, the physical priors are not true for all hazy images. The proposed method employs a haze-aware model to fuse the dehazed result, which helps the model restore a high-quality dehazed result. We further compare the proposed method with some recent end-to-end dehazing methods [5], [6], [21], [30], [31], [34], [35], [36]. We show an example in Fig. 9. The dehazed results of FAMED, FFA-Net, MSBDN, AECR, and AirNet tend to retain haze. EPDN can remove haze. However, the dehazed result of EPDN tends to lose image details. SGID-PFF can remove haze. However, some areas of its dehazed result are completely dark. PSD can enhance the hazy image. However, the dehazed result of PSD tends to retain haze and show color distortion. In contrast, the proposed method can restore the image details and recover a reasonable global appearance and a colorful dehazed result. To show the effectiveness of the proposed method, we compare it with other dehazing methods. First, we show the dehazing performance of dehazing methods on real hazy images. Second, we show the haze densities of the dehazed results obtained by different dehazing methods. As shown in Table 2, we can see that the proposed method achieves the second-best dehazing performance with the DHQI metric [37]. The proposed method is a data-driven dehazing method, which may not perform well for real-world images. However, the proposed method is better than other data-driven dehazing methods. To further show the performance of the proposed method, we show the haze density of the results obtained by the dehazing methods. As shown in Table 3, we can see that the proposed method can remove haze better than other dehazing methods. C. ABLATION STUDY To better show the effectiveness of the proposed modules, we design an ablation study that includes the coarse multi-scale network, fine multi-scale blocks, and adaptive fusion modules.
We construct a series of variants with the different proposed modules: 1) To show the effectiveness of the coarse multi-scale network, we design a single-scale model termed Base; 2) We add the coarse multi-scale ability by removing the AFM and replacing the FMBs with traditional dense blocks, which is termed BaseNet. We show the architecture of the BaseFMB in Fig. 10; 3) We add the AFM to the BaseNet, and we term it BaseAFM; 4) We replace the traditional dense blocks with FMBs, and we term it BaseFMB; 5) The architecture proposed in Section III, which is termed Full. All models are trained in the same way and tested on the indoor part of RESIDE. As shown in Table 4, each proposed module shows its contribution to image dehazing. To show the influence of the loss function, we add an experiment which only uses the L1 norm to train the proposed model. As shown in Table 4, we can see that the model trained with only the L1 norm obtains lower-quality dehazing results. To show the efficiency of the proposed model, we show the run time of the variants of the proposed model. As shown in Table 5, we can see that the proposed model runs faster than the other models, such as Base, BaseNet, and BaseAFM. The models are tested on a computer equipped with an Nvidia GeForce 1060. D. RUN TIME We note that the dehazing performance has been greatly improved. However, the dehazing speed is slow. In this subsection, we compare the proposed model with some dehazing methods which achieve high dehazing performance. We test the dehazing speed on a server platform, which is equipped with eight TITAN V GPUs. The CPU of the platform is an Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz and the memory is 512 GB. We resize the hazy images to a fixed size of 512 × 512. We show the dehazing speed of the state-of-the-art methods in Table 6. As we can see, the proposed method is almost two times faster than MSBDN. E. LIMITATION Although the proposed model is effective for most hazy images, it may fail for some dense hazy images. We address this problem by using a DCP loss. We show an example in Fig. 11, which is from the prior work [38]. As shown, we can see that the DCP loss may result in artifacts around depth-jump areas. In order to further improve the dehazing quality, we design a novel method to improve the accuracy of the transmission map. We show the difference between the DCP and the proposed method in Fig. 11. Although the proposed model is trained with the transmission maps estimated by DCP, it is also trained with a synthetic dataset, which improves the accuracy of the estimated transmission maps. As shown in Fig. 11, we can see that the transmission maps estimated by DCP contain more details. In contrast, the visual result of the proposed method is much smoother. We can use the new transmission-map prediction network and real hazy images to boost the dehazing performance on real hazy images. As proved by [39], [40], DNN-based methods often learn low-frequency functions while ignoring high-frequency information. The neural augmentation framework [24] has been proposed to address such a problem. In the future, we will also adopt the neural augmentation framework to improve the dehazing quality of the proposed method. V. CONCLUSION In this paper, we design a dual multi-scale dehazing network for single-image dehazing. The model contains a coarse multi-scale network and fine multi-scale blocks.
The coarse multi-scale network is designed to capture large variations of object sizes, while the fine multi-scale blocks are designed to capture small variations of object sizes. The coarse multi-scale network contains three scales, which extract pyramidal features from the input image. To further explore the multi-scale information, we develop a fine multi-scale block, which extracts multi-scale information using dilated convolutions with different dilation rates and channel-wise attention. The adaptive fusion module is designed to boost information flow. Extensive experiments are conducted on public synthetic indoor images and natural hazy images to show the effectiveness of the proposed method.
5,619.6
2023-01-01T00:00:00.000
[ "Physics" ]
Human-to-Bovine Jump of Staphylococcus aureus CC8 Is Associated with the Loss of a β-Hemolysin Converting Prophage and the Acquisition of a New Staphylococcal Cassette Chromosome Staphylococcus aureus can colonize and infect both humans and animals, but isolates from both hosts tend to belong to different lineages. Our recent finding of bovine-adapted S. aureus showing a close genetic relationship to the human S. aureus clonal complex 8 (CC8) allowed us to examine the genetic basis of host adaptation in this particular CC. Using total chromosome microarrays, we compared the genetic makeup of 14 CC8 isolates obtained from cows suffering subclinical mastitis with nine CC8 isolates from colonized or infected human patients, and nine S. aureus isolates belonging to typical bovine CCs. CC8 isolates were found to segregate in a unique group, different from the typical bovine CCs. Within this CC8 group, human and bovine isolates further segregated into three subgroups, among which two contained a mix of human and bovine isolates, and one contained only bovine isolates. This distribution into specific clusters and subclusters reflected major differences in the content of mobile genetic elements (MGEs). Indeed, while the mixed human-bovine clusters carried commonly human-associated β-hemolysin converting prophages, the bovine-only isolates were devoid of such prophages but harbored an additional new non-mec staphylococcal cassette chromosome (SCC) unique to bovine CC8 isolates. This composite cassette carried a gene coding for a new LPXTG surface protein sharing homologies with a protein found in the environmental bacterium Geobacillus thermoglucosidans. Thus, in contrast to human CC8 isolates, the bovine-only CC8 group was associated with the combined loss of β-hemolysin converting prophages and the gain of a new SCC probably acquired in the animal environment. Remaining questions are whether the new LPXTG protein plays a role in bovine colonization or infection, and whether the new SCC could further acquire antibiotic-resistance genes and carry them back to humans. Introduction Staphylococcus aureus is a major human and animal pathogen that can produce a variety of diseases, from relatively mild skin and soft tissue infections to life-threatening bloodstream bacteremia and endocarditis [1,2]. In addition, this bacterium is a mastermind at developing antibiotic resistance, and some strains have become resistant to virtually all non-experimental drugs, including the whole family of β-lactam molecules in the case of methicillin-resistant S. aureus (MRSA) [3], as well as the last-resort drugs vancomycin and daptomycin [4,5]. In humans, the major reservoir of S. aureus is represented by healthy carriers, who account for up to 30% of the population and harbor the organism in their anterior nares and sometimes other anatomic sites [6]. Besides, S. aureus carriage has also been reported in numerous animal species including dog, cat, horse, pig, poultry, and cattle [7,8,9]. However, while S. aureus is quite ubiquitous in terms of host species, different animals tend to harbor different lineages (i.e., clonal complexes, or CCs for short), as recognized in pioneering work by Devriese and Oeding [10] and amply confirmed thereafter [11,12,13,14,15,16,17]. Several studies suggested that critical modulators of this host specificity might be mobile genetic elements (MGEs), gene decay, or adaptive evolution of surface proteins [11,12,14,15,18,19,20].
For instance, it has been suggested that the presence of the immune evasion cluster (IEC), a gene cluster carried by β-hemolysin converting bacteriophages, was strongly correlated with human isolates [21]. Such host-specific genes were suggested to be useful as epidemiologic markers [20]. We recently observed a close genetic relationship between S. aureus strains isolated from bovines suffering from subclinical mastitis and strains of the prominent human CC8, suggesting a recent human-to-bovine jump [17]. Here, we further compared the genetic makeup of human and bovine CC8 S. aureus strains, using a collection of epidemiologically independent isolates collected in Switzerland [17]. We observed evidence for a human-to-bovine jump rather than the contrary. Notably, the jump was associated with the loss of a β-hemolysin converting prophage typical of human strains [15,22,23], plus the acquisition of a new bovine-specific SCC element, which lacked the methicillin-resistance mecA gene but carried a new LPXTG protein. S. aureus Strains Selection Nine epidemiologically unrelated human CC8 strains and 14 epidemiologically independent CC8 strains recovered from bovine subclinical mastitis (labeled "M") were included in the study (Table 1). All strains were isolated from humans or animals in Western Switzerland. Concerning the human CC8 strains, three were recovered from healthy carriers and were labeled "Laus", four were isolated from patients with bloodstream infections and were labeled "I", and two corresponded to the reference strains USA300_FPR3757 (USA300) [24] and COL [25]. The bovine CC8 strains were chosen to represent all spa types found among 400 isolates previously collected [17,26]. In addition, nine isolates from four typical bovine lineages (CC20, CC97, CC151, and CC479) were included. Preparation of Labeled Nucleic Acids for Microarray Probing Purified genomic DNAs from the reference sequenced strains used for the design of the microarray chip were labeled with Cy5-dCTP [27] and used in microarray normalization [32]. Mixtures of Cy5-labeled pooled DNAs and Cy3-labeled DNA of the test strains [33] were hybridized and scanned as previously described [34].
Figure 1. Clustering analysis, using Spearman correlation, of patterns of genome hybridization to probes matching 2,609 genes carried by the chromosome of strain USA300. Each probe set (i.e., the collection of all probes hybridizing to USA300 genes) is represented by a single row of colored boxes. The blue areas correspond to genes showing a significant fluorescent signal (i.e., present in the corresponding genome), whereas yellow bars indicate genes poorly or not fluorescent (i.e., absent from the corresponding genome). The dendrogram on the right of the figure (black lines) represents the similarity matrix of the strain set. Clonal clusters (CCs) are indicated on the left. Clusters and sub-clusters are indicated by roman letters on the right. doi:10.1371/journal.pone.0058187.g001
Data were analyzed using GeneSpring 8.0 (Silicon Genetics, Redwood City, CA, USA) as previously described [34], and lists of probes over-represented either in human or cow strains were further investigated manually using an Excel spreadsheet. For this manual step, genomes of S. aureus strains showing a hybridization signal value ≥ 50% of the lowest value obtained with the genome of a reference strain known to carry the corresponding gene were considered as carrying a corresponding gene homolog. This 50% threshold was validated by PCR amplification of several genes (data not shown). The complete microarray dataset (accession number GPL7137) is posted on the Gene Expression Omnibus database (http://www.ncbi.nlm.nih.gov/geo/). GCTTTGAAATCAGCCTGTAG-3′), GoTaq 2.5 U in 25 mL 1X white buffer. PCR reactions were performed in a T Professional PCR thermocycler (Biometra, Goettingen, Germany). GoTaq, white buffer, and dNTPs were from Promega (Madison, WI, USA). Primers were purchased from Microsynth AG (Balgach, Switzerland) and were described previously [19]. All other chemicals were from Sigma-Aldrich (Saint Louis, MO, USA). Genome Sequencing and Assembly Total genomic DNA was isolated from the bovine S. aureus strain M186 using a protocol adapted from reference [35]. Bacterial cells from an overnight culture in Tryptic Soy Broth (TSB) were pelleted and resuspended in Tris-EDTA (10 mM Tris-Cl, 1 mM EDTA; pH 7.5) containing 400 mg/mL of lysostaphin (Sigma-Aldrich). After 45 min of incubation at 37 °C, six volumes of Nuclei lysis solution (Promega) were added and the mixture was transferred to 80 °C for 10 min. After cooling the sample to room temperature, 50 mg/mL RNAse A (Sigma-Aldrich) was added and a new incubation step of 30 min at 37 °C was performed. 1/3.5 (vol/vol) of protein precipitation solution (Promega) was added and the sample was left on ice for 5 min, before it was centrifuged for 10 min at 4 °C. The supernatant was transferred to 1 volume of isopropanol, thoroughly mixed, and centrifuged at 4 °C for 10 min. The DNA pellet was washed with 1 volume of 70% ethanol and resuspended in 20 mL of ultrapure H2O. In order to solubilize the genomic DNA, overnight incubation at 4 °C and a further incubation at 65 °C for 1 h were performed. The genomic DNA was finally stored at −20 °C. Genome sequencing was performed with a Genome Analyzer IIx (Illumina Inc., San Diego, CA, USA) at the Genomic Technologies Facility of the University of Lausanne. A paired-end library with an approximately 600 bp insert was constructed from 5 mg of genomic DNA, and 28 million paired-end 36 bp reads were obtained following the manufacturer's instructions. In these conditions, the theoretical coverage based on the average published genome size for S. aureus (ca. 2.8×10^6 bp) was 720×. The quality of the data obtained from the sequencing was verified using FastQC (http://www.bioinformatics.bbsrc.ac.uk/projects/fastqc/). Since most of the reads were of excellent quality (data not shown), no trimming was required. Reads of insufficient quality or contaminant sequences (less than 1%) were removed using locally developed scripts (available upon request). The assembly was performed first using SOAPdenovo [36], with k-mers ranging from 19 to 35, and then GapCloser (http://soap.genomics.org.cn/about.html#resource2). ORFs were detected using ORF finder, and potential functions were assigned using blastp and blastn (software available on the National Center for Biotechnology Information (NCBI) website, http://www.ncbi.nlm.nih.gov/).
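As a toy illustration of the ORF detection step mentioned above (the study used NCBI ORF finder; the snippet below is not that tool and scans only the forward strand), a minimal start/stop-codon scan over an assembled contig could look like this:

```python
# Minimal forward-strand ORF scan: report open reading frames longer than a minimum
# length, using the standard start (ATG) and stop (TAA/TAG/TGA) codons.

STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_len_nt=300):
    seq = seq.upper()
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i
            elif start is not None and codon in STOPS:
                if i + 3 - start >= min_len_nt:
                    orfs.append((start, i + 3))   # 0-based, end-exclusive coordinates
                start = None
    return orfs

# Hypothetical usage on an assembled contig string:
print(find_orfs("ATGAAATTTGGGCCCTAA", min_len_nt=9))  # [(0, 18)]
```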
Minimum Inhibitory Concentrations (MICs) of Sodium Arsenite The MICs of sodium arsenite were determined in TSB for S. aureus isolates carrying or not carrying the new SCC element, using a standard broth macro-dilution method [37]. The MIC was defined as the lowest concentration of sodium arsenite that inhibited visible bacterial growth following incubation for 24 h at 37 °C. A minimum of three independent experiments were performed. Sodium arsenite (NaAsO2) solution was purchased from Sigma-Aldrich. Clustering of Strains According to the Presence or Absence of USA300-specific Genes To evaluate the relatedness between the various isolates, the genomes of the 32 tested organisms (Table 1) were evaluated for the presence or absence of 2,609 genes carried by USA300, and the obtained patterns were clustered by Spearman correlation (Figure 1). Clusters and sub-clusters were very similar to those recently reported for the same isolates by amplified fragment-length polymorphism (AFLP) and multi-locus sequence typing (MLST) [17]. Two major clusters were delineated; the first, called cluster I, regrouped only CC8 strains, and the second, called cluster II, contained all the non-CC8 isolates. Cluster I further segregated into three sub-clusters, among which sub-clusters Ia and Ib consisted of a mix of human and bovine CC8 strains that were relatively close to USA300, and sub-cluster Ic contained only CC8 isolates of bovine origin. Cluster II contained only bovine strains, but segregated into sub-clusters as well (Figure 1). Indeed, CC479, CC20, CC97, and CC151 isolates regrouped separately into four sub-clusters, named IIa, IIb, IIc, and IId, respectively (Figure 1). Thus, while clusters I and II broadly segregated between rather human types and typical bovine types of isolates, sub-clustering within CC8 strains further delineated differences between human and bovine CC8 isolates. Comparing Human and Bovine CC8 Isolates by Microarray 1,816 genes were found to be present in the genomes of all tested CC8 strains and corresponded to the so-called CC8 core genome (data not shown). Amongst the 8,877 60-mer DNA probes represented on the microarray chips, 198 (2.2%), corresponding to 127 genes, were found to have a higher prevalence in human than in bovine CC8 isolates. Moreover, out of these 127 genes, 95 (74.8%) were related to bacteriophage genes, 19 (15%) to S. aureus pathogenicity island (SaPI) genes, and 11 (8.7%) to staphylococcal cassette chromosome mec (SCCmec) genes. Thus, >99% of the genes associated with human specificity were carried by MGEs. In symmetry, 43 probes (0.48%), corresponding to 29 genes, were over-represented in bovine CC8 isolates. Out of these 29 genes, 14 (48.3%) were homologous to genes carried by diverse SCCmec elements, eight (27.6%) corresponded to genes carried by the S. aureus pathogenicity island 5 (SaPI5), and seven (24.1%) were related to transposon genes. Thus, all genes associated with bovine specificity were also carried by MGEs. Altogether, the human and bovine CC8 genomes differed only by a total of 156 genes, of which 154 (98.7%) were carried by MGEs. Below, we attempted to sort out which of these genes, or sets of genes, might be the most likely candidates to promote specificity of S. aureus CC8 strains for either the human or the bovine host. Comparing MGE Gene Content Since close to 99% of the genetic differences between human and bovine CC8 isolates were related to MGEs, we concentrated on these elements for further analyses.
More precisely, we evaluated our whole strain collection for the presence or absence of homologs of every single gene carried by the major MGEs found in the two human CC8 reference strains USA300 and COL, as well as the bovine CC151 reference strain RF122. With Respect to the Genomic Islands νSaα and νSaβ The non-phage and non-SCC νSaα and νSaβ genomic islands are well conserved in all sequenced S. aureus strains [38]. Therefore, they were expected to be present in most of the studied isolates. Accordingly, both the CC8 and typical bovine clusters uniformly carried several genes that were homologous to those of the νSaα and νSaβ of USA300 (Tables S1 and S2, respectively). Nevertheless, while the entire group of CC8 strains presented quite uniform patterns for both νSaα and νSaβ, they were clearly different from the patterns found in the typical bovine clusters, in which even inter-cluster differences were observed. Thus, the CC8 strains were clearly different from the typical bovine clusters in this respect. Moreover, this segregation was further confirmed when the strain collection was compared to the νSaα and νSaβ of COL and the reference bovine strain RF122 (Tables S8-S9 and S10-S11, respectively). With Respect to the USA300 Prophages ΦSa2 and ΦSa3 USA300 is lysogenized by two bacteriophages: ΦSa2, which carries the Panton-Valentine leukocidin (PVL) [39], and ΦSa3, which is a member of a family of β-hemolysin converting bacteriophages that share a very similar integrase (int) gene. Of note, ΦSa3 and related prophages may harbor determinants implicated in immune evasion [40], including a staphylokinase (SAK), a chemotaxis inhibitory protein (CHIPS), and the staphylococcal complement inhibitor SCIN. Homologs of the USA300 ΦSa2 prophage, devoid of the PVL lukF-PV and lukS-PV genes, were only found in the CC8 sub-cluster Ia, which contained USA300 and a few human and bovine CC8 strains, as well as in two typical bovine strains of sub-clusters IIc and IId (Table S3). Thus, ΦSa2 did not discriminate between human and bovine isolates. In sharp contrast, ΦSa3-related β-hemolysin converting prophages were present in the two mixed human-bovine CC8 sub-clusters Ia and Ib (except for COL), but notably absent from the bovine-only CC8 sub-cluster Ic, as well as from all the typical bovine clusters (Table S4). This observation was in agreement with the fact that such prophages are typically associated with human S. aureus isolates, but tend to be absent from animal strains [11,20,23]. Thus, the presence or absence of β-hemolysin converting prophages made a further distinction between sub-clusters Ia and Ib, which contained mixed human-bovine CC8 isolates, and sub-cluster Ic, which contained bovine-only CC8 isolates. Indeed, strains of sub-cluster Ic, lacking β-hemolysin converting prophages, were closer to typical bovine strains in this regard. To further determine the chromosomal insertion site of β-hemolysin converting prophages, we performed multiplex PCR reactions on genomic DNA from all strains using specific primers for the β-hemolysin converting prophage ΦN315 int gene and the S. aureus β-hemolysin (hlb) gene [19]. The presence of amplicons of the expected size confirmed the presence of ΦN315 int homologs in the genomes of the isolates harboring β-hemolysin converting prophages (not shown). Moreover, no amplification was obtained for the chromosomal hlb gene, supporting the fact that this gene was interrupted by the integration of the prophage, as described elsewhere [19].
Of note, while all the identified β-hemolysin converting prophages carried homologs of the typical ΦSa3 sak and scin genes, only 6/18 of them carried homologs of the ΦSa3 chips gene. With Respect to Other Non-SCC MGEs Other non-SCC MGEs examined herein included the USA300 SaPI5 (Table S5); a USA300 transposon-related region (Table S6); the COL prophage ΦSaCOL, which is closely related to ΦSa2 (Table S12); the COL SaPI3 (Table S13); as well as the bovine RF122 SaPIbov [41], SaPIbov3 [42], νSabov, and prophage ΦRF122 (Tables S14, S15, S16, and S17, respectively). None of these elements were discriminatory between human and animal isolates except for the bovine genomic island SaPIbov3, which was only present in the typical bovine clusters but not in CC8 strains. This further supported the fact that bovine CC8 strains were more closely related to human CC8 than to typical bovine strains (Table S15). With Respect to the USA300 SCCmec Cassette SCCmec is a genomic island conferring methicillin resistance [43]. It is found in MRSA USA300, but not systematically in other S. aureus isolates. Table S7 shows that only two strains (i.e. MRSA I2 and COL) contained relatively numerous gene homologs, including mecA/mecR1, of the USA300 SCCmec, which was consistent with the fact that they were MRSA. Strikingly, a different and restricted stretch of gene homologs was uniquely present in all bovine CC8 isolates, but never found in human CC8 strains or in isolates of the typical bovine clusters. This region appeared as a truncated SCC, which carried homologs of the ccrA and ccrB recombinase genes, as well as a few other determinants present on the SCCmec of USA300. However, it lacked the methicillin resistance determinants mecA/mecR1 and the surrounding genes (i.e. from sausa300_0027 to sausa300_0035) (Table S7). This mecA-negative SCC element discriminated the bovine CC8 strains from all other strains of the present collection, be they CC8 or typical bovine CCs, and this observation was confirmed by comparison with the COL SCCmec (Table S18). Genetic Organization of the Representative Non-mecA SCC Cassette from Bovine CC8 Strain M186 (SCC M186) Bovine CC8 strains were specifically associated with the presence of a truncated SCC cassette, which was devoid of the mecA gene. Thus, the nucleotide sequence of this cassette was further extracted and annotated from the preliminary draft chromosomal sequence of strain M186, and named SCC M186. After assembly of the reads generated by Illumina with SOAPdenovo and GapCloser, we obtained 129 contigs ranging from 1,000 to 674,164 bp in length. To map SCC M186, we searched for the orfX gene, which precedes the insertion site upstream of SCC cassettes [28]. orfX was localized on a single contig of 277,076 bp in length. A ca. 40,000 bp fragment, starting with the first nucleotide of orfX (i.e. designated as position one), was extracted from this contig, in which we further localized the chromosomal 15 bp direct repeats attL and attR that typically flank SCC cassettes [44]. These were found at nucleotide positions 462-476 (AGAAGCTTATCATAA) and 30,741-30,755 (AGAGGCGTATCATAA). Thus, the deduced length of SCC M186 was 30,279 bp, and the cassette contained 26 potential ORFs (Figure 2 and Table 2). Based on its ORF sequences, SCC M186 appeared as a composite cassette formed by three distinct regions. From the 5′ to 3′ ends, the first region was composed of six ORFs, of which one (orf1) encoded a potential new LPXTG-protein harboring an LPDTG signature, which is described below.
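As a brief aside, the boundary mapping just described (locating orfX and then the flanking attL/attR direct repeats) can be mimicked in a few lines of code. This is only a hedged sketch: the contig sequence is a placeholder and the helper function is hypothetical, but it reproduces the length arithmetic reported above.

```python
# Hypothetical sketch of delimiting an SCC cassette by its 15-bp direct
# repeats; the contig string would be a real sequence, not the M186 draft.
ATT_L = "AGAAGCTTATCATAA"   # reported at positions 462-476
ATT_R = "AGAGGCGTATCATAA"   # reported at positions 30,741-30,755

def cassette_bounds(contig: str):
    """Return 1-based attL start, attR start, and repeat-to-repeat length."""
    l = contig.find(ATT_L)
    r = contig.find(ATT_R, l + 1)
    if l == -1 or r == -1:
        return None
    return l + 1, r + 1, r - l

# With attL starting at position 462 and attR at 30,741 (1-based),
# the spanned length is 30,741 - 462 = 30,279 bp, as reported in the text.
print(30_741 - 462)  # 30279
```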
The five other ORFs, encoded by orf2 to orf6, showed high degrees of amino acid identity (i.e. from 86 to 98%) with ORFs regrouped in a unique region encompassing SE0030 to SE0035 on the genome of S. epidermidis strain ATCC 12228 (Table 2). orf2, orf3 and orf4 coded for three hypothetical proteins which were also found in USA300 (SAUSA300_0056, 0057, and 0059, respectively). orf5 encoded a carboxypeptidase and orf6 a putative penicillin-binding protein 4. The central region was composed of six genes showing a conserved organization with the sausa300_0037 to _0042 genes of the USA300 SCCmec. Within this region, orf8 and orf7 encoded the recombinases CcrA and CcrB, respectively. The ccrA and ccrB genes were members of the ccr allotype II, and both proteins showed 90 and 92% identity, respectively, at the amino acid level with the corresponding proteins in USA300. The gene products of orf9, orf10, and orf11 were annotated as hypothetical proteins with very high (i.e. ≥96%) amino acid identity to the USA300 proteins SAUSA300_0039, 0040, and 0041, respectively. Finally, orf12 of SCC M186 showed 99% identity to SAUSA300_0042, which could act as a transcriptional regulator. Since resistance to chemicals such as arsenic may be pertinent in the agricultural environment, we tested the susceptibility to sodium arsenite of bovine CC8 isolates carrying the new SCC cassette, as compared to all other strains of the collection, which did not carry the new SCC. The MIC of arsenite was 25 mM for all the bovine CC8 isolates, including M186. In contrast, it ranged from 0.4 to 3 mM in all other strains, i.e. up to 8 times lower than in SCC-positive strains. New SCC M186-related LPXTG Protein The deduced amino acid sequence of the orf1-encoded LPXTG-protein of SCC M186 was composed of 1,151 amino acids and had a theoretical molecular weight of ca. 124 kDa and a pI of 4.47, as calculated with the Compute pI/Mw tool (http://web.expasy.org/compute_pi/). A search for conserved domains [50] identified a YSIRK-type signal peptide (YSIRKxxxGxxSIA, pfam04650) at position 23-35 and two G5 domains (pfam07501). Interestingly, the LPXTG-protein harbored by SCC M186 showed significant homology (74% over the 400 C-terminal amino acids) to SE0175, a putative accumulation-associated protein (AAP) found in S. epidermidis ATCC 12228. Moreover, an LPDTG signature of S. aureus adhesins was manually found at position 1112-1116. Finally, the LPDTG motif was preceded by 25 proline-rich PE/GQPGN repeats, which showed 95% homology with a domain harbored by a potential surface LPXTG-protein (GT20_0444) of hypothetical function found in the environmental bacterium G. thermoglucosidasius TNO-09.020. Discussion The present results indicate a clear segregation between S. aureus strains from the CC8 cluster and typical bovine CCs. In addition, they also show that some isolates of the supposedly human-only CC8 cluster had permeated the bovine environment, as bovine CC8 isolates resembled isolates of the human CC8 much more closely than isolates of the typical bovine clusters. These observations may provide clues for the speculated jump of CC8 strains from human to cattle [17]. Having assessed that 99% of the genetic differences observed between the tested isolates resided in MGEs, we found that several of them were not discriminative at all, because they were not systematically represented in particular clusters.
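The pI/Mw computation mentioned in the LPXTG-protein section above can also be reproduced offline, for example with Biopython's ProtParam module. The snippet below is only a hedged sketch: the sequence shown is a short placeholder rather than the actual 1,151-residue orf1 product, so the printed values will not match the reported ~124 kDa and pI 4.47.

```python
# Hedged sketch of a Compute pI/Mw-style calculation using Biopython.
# The sequence is a placeholder; substituting the deduced orf1 protein
# sequence would be needed to reproduce the ~124 kDa / pI 4.47 values.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

placeholder_seq = "MKKIASIILTTTLAVSPLAANAEEQSKDVKVEELK"  # not the real orf1 sequence
analysis = ProteinAnalysis(placeholder_seq)

print(f"length: {len(placeholder_seq)} aa")
print(f"MW:     {analysis.molecular_weight() / 1000:.1f} kDa")
print(f"pI:     {analysis.isoelectric_point():.2f}")
```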
On the other hand, a group of MGEs appeared to be present in all strains, but demonstrated discrete differences in gene content between CC8 isolates and isolates from typical bovine CCs. These included the genomic islands νSaα and νSaβ, which are believed to have evolved with S. aureus over a long period of time and are present in all the strains sequenced so far [38]. In our study, both islands adopted clear patterns that differentiated the CC8 group (including human and bovine strains) from typical bovine CCs. This suggested that both νSaα and νSaβ emerged from a common ancestor and further evolved divergently in either the human or the bovine environment. Hence, the fact that bovine CC8 isolates shared very similar νSaα and νSaβ with human CC8 isolates supported the hypothesis that they were originally human and had jumped into cattle more recently. Moreover, this hypothesis was further supported by the fact that typical SaPIbov3 homologs were strikingly absent in CC8 isolates. Indeed, typical genes of this island were recently reported to discriminate S. aureus isolated from cattle with mastitis from human clinical strains [42]. Additional MGEs helped determine even more specific differences within the human and bovine CC8 isolates. These were exemplified by β-hemolysin converting prophages and a new composite SCC cassette. β-hemolysin converting prophages were present in the two CC8 sub-clusters Ia and Ib, which contained a mixture of human and bovine strains, but were absent from the bovine-only CC8 sub-cluster Ic, as well as from all the typical bovine CCs. This was highly reminiscent of recent studies on S. aureus jumps between humans and small ruminants, poultry, and pigs [15,22,23]. In all cases, the postulated human-to-animal jump was associated with the loss of β-hemolysin converting prophages from the human strains, along with their establishment in animals. Since such prophages disrupt the S. aureus chromosomal hlb gene, encoding β-hemolysin, it was proposed that this toxin was either unnecessary for persistence of S. aureus in humans, or even detrimental for it. On the other hand, it could be advantageous in animals [19]. Accordingly, the finding that human-derived CC8 isolates lose β-hemolysin converting prophages upon becoming bovine-adapted is strong evidence of a significant role of β-hemolysin in the process of host adaptation in cows. Likewise, adaptation of S. aureus to a new host is frequently associated with the acquisition of new genetic determinants such as pathogenicity islands, additional prophage(s), or new SCC islands [15,22,23]. In the present observation, the bovine CC8 isolates have acquired additional features that possibly helped them settle in their new environment. This was substantiated by the new mec-negative SCC, which was present only in bovine CC8 isolates, but never in human CC8 or typical bovine CCs. This SCC was reminiscent of the SCCmec acquired by porcine S. aureus CC398 [15,22,23], as it also carried genes conferring resistance to toxic agents (e.g. arsenic and copper). In the strains described herein, the MIC of sodium arsenite was uniformly 25 mM for all strains harboring the new SCC element, as compared to ≤3 mM for all strains that were devoid of it. This observation indicates that all SCC+ strains carried an SCC equipped with a functional arsenic-resistance operon that could represent an asset for survival in the agricultural milieu.
Although the new SCC shared the same ccrAB allotype II with the SCCmec of USA300 and S. epidermidis ATCC 12228, it was a composite element composed of homologs of regions found in S. aureus, S. epidermidis, and environmental bacteria. This chimeric construction indicates that it was not just the descendant of an existing human SCCmec parent, but rather was (re)constructed de novo from parts of different genomes, most likely in the rural environment. Of highest interest was the fact that it carried a gene encoding a new LPXTG protein of unknown function, which was partly homologous to a protein found in the environmental bacterium G. thermoglucosidasius TNO-09.020 [51]. The presence of this LPXTG-protein may well be explained by horizontal gene transfer from a Geobacillus sp., a genus known as a potential milk contaminant [52]. S. aureus LPXTG proteins are involved in various functions, including host colonization, in which they play crucial roles in bacterial adhesion to host tissues, and are therefore termed adhesins [53]. The presence of a signal peptide, which is found in many staphylococcal surface proteins, and of two G5 domains, to which an N-acetylglucosamine-binding function has been attributed [54], strongly suggests an adhesin function for this protein. This possibility is reinforced by the significant homology with an S. epidermidis AAP. Indeed, such proteins have been shown to play major roles in the accumulation of S. epidermidis on polymer surfaces, and thus in biofilm formation [55,56]. Taken together, the present work is an additional illustration of the adaptability of S. aureus to various hosts and of the subtlety of the biological tools underlying it. We obtained convincing evidence supporting the human-to-bovine jump scenario of S. aureus CC8 rather than the contrary. We therefore propose that bovine CC8 strains originated from human CC8 strains following the scenario depicted in Figure 3. This raises several academic and public health issues. One is the contribution of the new SCC to bovine colonization and/or infection, and whether it may definitively hold the bovine CC8 strain in the bovine milieu. Another is whether this new island could acquire a mecA/mecR1 complex and further spread methicillin resistance both in cattle and humans. Such a precedent recently occurred in the swine-related MRSA CC398, which first jumped from human to pig and then jumped back equipped with SCCmec. In view of this case, bovine CC8 strains might well be a new threat to human and veterinary medicine, which deserves concern and preventive control.
6,546.4
2013-03-11T00:00:00.000
[ "Biology", "Medicine" ]
Rh-relaxin-2 attenuates degranulation of mast cells by inhibiting NF-κB through PI3K-AKT/TNFAIP3 pathway in an experimental germinal matrix hemorrhage rat model Mast cells play an important role in early immune reactions in the brain by degranulation and the consequent inflammatory response. Our aim of the study is to investigate the effects of rh-relaxin-2 on mast cells and the underlying mechanisms in a germinal matrix hemorrhage (GMH) rat model. One hundred seventy-three P7 rat pups were subjected to GMH by an intraparenchymal injection of bacterial collagenase. Clodronate liposome was administered through intracerebroventricular (i.c.v.) injections 24 h prior to GMH to inhibit microglia. Rh-relaxin-2 was administered intraperitoneally at 1 h and 13 h after GMH. Small interfering RNA of RXFP1 and PI3K inhibitor LY294002 were given by i.c.v. injection. Post-GMH evaluation included neurobehavioral function, Western blot analysis, immunofluorescence, Nissl staining, and toluidine blue staining. Our results demonstrated that endogenous relaxin-2 was downregulated and that RXFP1 level peaked on the first day after GMH. Administration of rh-relaxin-2 improved neurological functions, attenuated degranulation of mast cells and neuroinflammation, and ameliorated post-hemorrhagic hydrocephalus (PHH) after GMH. These effects were associated with RXFP1 activation, increased expression of PI3K, phosphorylated AKT and TNFAIP3, and decreased levels of phosphorylated NF-κB, tryptase, chymase, IL-6, and TNF-α. However, knockdown of RXFP1 and PI3K inhibition abolished the protective effects of rh-relaxin-2. Our findings showed that rh-relaxin-2 attenuated degranulation of mast cells and neuroinflammation, improved neurological outcomes, and ameliorated hydrocephalus after GMH through RXFP1/PI3K-AKT/TNFAIP3/NF-κB signaling pathway. Background GMH is the most common neurological disorder of newborns. It is defined as the rupture of immature blood vessels in the subependymal brain tissue of the premature infant [1]. The complications after GMH include neuroinflammation, hydrocephalus, primary and secondary brain injury, and developmental delay [2,3]. Among all of those, the activation of inflammatory cascades could be the main factor leading to post-hemorrhagic consequences, such as long-term morphological and functional impairment [4]. Mast cells are considered the first responders and are able to initiate and magnify immune responses in the brain. Therefore, inhibition of the inflammatory response of mast cells is critically important at the early stage after GMH. Mast cells are present in various areas of the brain and in the meninges [5]. Brain mast cells are mainly of a tryptase-chymase positive phenotype [6]. However, their number and distribution can quickly change in response to a number of environmental stimuli, such as trauma and stress. They release histamine, serotonim, tryptase, chymase, and TNF-α after activation. Furthermore, they can crosstalk indirectly with microglia in the release of cytokines. Therefore, treatments focused on reducing proinflammatory cytokines via inhibiting mast cells could be potentially important in attenuating mast cell degranulation and inflammation after GMH [7]. Relaxin-2 is a member of the insulin-like peptide family, which can bind to its receptor RXFP1 with high affinity. Several recent studies have reported that relaxin and RXFP1 are expressed in the local arteries of mice and rats [8][9][10]. 
In addition to a role in the reproductive system during pregnancy, a growing number of literature suggests that relaxin has extensive cardiovascular effects, such as protecting against fibrosis and early inflammation and promoting vasodilation and angiogenesis [11,12]. Currently, a number of studies showed that PI3K-AKT is one of the downstream pathway of the interaction between relaxin and RXFP1 [13]. Moreover, tumor necrosis factor-alpha-induced protein 3 (TNFAIP3) plays an inhibitory role in terminating NF-κB signaling. However, it is unknown whether TNFAIP3 is a downstream mediator of relaxin-2 in exerting its stabilizing effect on mast cells after GMH. Based on the abovementioned evidence, we hypothesized that rh-relaxin-2 treatment could suppress mast cell activation, consequently reduce the secretion of proinflammotory cytokines (Tryptase, chymase, IL-6, and IL-1β), improve neurological function in the short and long term and ameliorate PHH, and that these beneficial effects may be mediated by PI3K-AKT/TNFAIP3/ NF-κB signaling ( Supplementary Fig. 1). Animals All experimental procedures were approved by the Institutional Animal Care and Use Committee at Loma Linda University. All studies were conducted in accordance with the United States Public Health Service's Policy on Humane Care and Use of Laboratory Animals and reported according to the ARRIVE guidelines. One hundred seventy-three P7 Sprague-Dawley neonatal pups (weight = 12-14 g, Harlan, Livermore, CA) were randomly divided into Sham (n = 37) and GMH (n = 136) groups (Supplementary Table 1). All pups were housed with controlled temperature and 12-h-light/dark cycle and given ad libitum access to food and water. All rats (up to 21 days old) were returned to their home cages with mothers after doing surgery, drug administration, and behavior tests. Neither collagenase-induced GMH nor administration of rh-relaxin-2 caused mortality in this study. Investigators were blinded to the experimental groups when performing neurological tests, immunofluorescence, toluidine blue staining, and quantitation Western blot density. GMH model and experimental protocol The procedure for the GMH model using collagenase infusion was performed as previously described [14]. Briefly, pups were anesthetized with isoflurane (3.0% induction, 1-1.5% maintenance) on a stereotaxic frame. After the skin was incised on the longitudinal plane and the bregma was exposed, a 27-gauge needle with 0.3 U clostridial collagenase (0.3 units of clostridial collagenase VII-S, Sigma-Aldrich, MO) was inserted at coordinates of 1.6 mmL, 1.5 mmA, and 2.7 mmV, and infused (1 μl/ min) using a 10-μl Hamilton syringe (Hamilton Co, Reno, NV, USA) guided by a microinfusion pump (Harvard Apparatus, Holliston, MA). The needle was kept in place for an extra 10 min to avoid leakage and withdrawn at a speed of 0.5 mm/min. The pups were placed back onto a heated blanket after infusion and before being returned to their mothers, and euthanized at different time points according to the experimental design. Intracerebroventricular drug administration was performed as previously described [15] as GMH induction, but the coordinates were at 1 mmA, 1 mmL, and 1.7 mmV. Experiment 1 The time course of endogenous relaxin-2, its receptor RXFP1, and the mast cell marker chymase and tryptase in the whole brain at 0.5, 1, 3, 5, and 7 days after GMH was analyzed by Western blot. 
The cellular localization of receptor RXFP1 and tumor necrosis factor-α-induced protein 3 (TNFAIP3) was detected at 1 day after GMH by double immunofluorescence staining on mast cells. Thirty-six rat whole brains were collected after perfusion with cold PBS at 0 (naive), 0.5, 1, 3, 5, and 7 days after GMH for Western blot ( Supplementary Fig. 2). Experiment 3 To evaluate the mast cell activation, the number of mast cells was quantified in the perihematoma area and thalamus on the first day after GMH by toluidine blue staining. Eighteen pups were divided into groups: Sham (n = 6), GMH + vehicle (n = 6), and GMH + rh-relaxin-2 (60 μg/kg, n = 6) ( Supplementary Fig. 2). Experiment 4 To evaluate the effect of RXFP1 in vivo on mast cell degranulation after administration of rh-relaxin-2 post-GMH, clodronate liposome was administered i.c.v. on the left side of the brain to inhibit the microglial activation at 24 h prior to GMH induction. Meanwhile, RXFP1 small interfering RNA (RXFP1 siRNA) and scramble siRNA (Scr siRNA) were also infused by i.c.v. injection on the right side of the brain. The whole brain samples were collected to conduct Western blot analysis on the first day post-GMH and after being perfused with cold PBS. The pups were randomly divided into six groups: Sham, GMH + vehicle, GMH + vehicle + clodronate liposome, GMH + clodronate liposome + rh-relaxin-2 (i.p. 60 μg/kg), GMH + clodronate liposome + rhrelaxin-2 (i.p. 60 μg/kg) + Scr siRNA, and GMH + clodronate liposome + rh-relaxin-2 (i.p. 60 μg/kg) + RXFP1 siRNA ( Supplementary Fig. 2). Experiment 5 To assess the role of PI3K-AKT pathway in vivo on mast cell degranulation after administration of rh-relaxin-2 post-GMH, clodronate liposome was administered i.c.v. on the left side of the brain to inhibit the microglial activation at 24 h prior to GMH induction. At the same time, LY294002 was administered by i.c.v. injection at 1 h on the left side of the brain prior to GMH induction. The whole brains were collected for Western blot on the first day post-GMH after being perfused with cold PBS. The pups were divided randomly into the following groups: Sham, GMH + vehicle, GMH + vehicle + clodronate liposome, GMH + clodronate liposome + rh-relaxin-2 (i.p. 60 μg/kg), GMH + clodronate liposome + rh-relaxin-2 (i.p. 60 μg/kg) + DMSO, and GMH + clodronate liposome + rh-relaxin-2 (i.p. 60 μg/kg) + LY294002 ( Supplementary Fig. 2). Neurological examinations Neurological tests were performed in a random and blinded setup as previously reported [17]. Short-term neurological tests, namely negative geotaxis and righting reflex, were conducted from day 1 to day 3 after GMH. Long-term neurological tests, including rotarod, foot fault, and water maze tests, were performed from day 21 to day 28 after GMH. In detail, negative geotaxis was tested to record the duration of the pups to turn 90°and 180°when positioned head downward on a 45°inclined plane. The maximum recording time was 60 s (three trials/pup/ day). For the righting reflex, the pups were placed on their backs on a horizontal plane, and the time needed for the pup to right itself in a prone position on its four paws was recorded. The maximum recording time was 20 s (three trials/pup/day). A foot fault test was conducted and the total numbers of missteps were recorded. When the animal's forelimb or hind limb fell into one of the grid openings, a foot fault was recorded. The maximum recording time was 60 s. 
In a rotarod test, the animals were placed on a rotating wheel (Columbus Instruments) and tested at a starting speed of 5 rpm and 10 rpm with acceleration at 2 rpm per 5 s. The time latency for the animals to remain on the rotating wheel and the speed at which animals fell down from the rotarod were measured and averaged from 3 repeated trials. The water maze test used a circular pool (diameter: 110 cm) filled with water at 24 ± 1 o C. A transparent escape platform (diameter: 11 cm) was submerged 1 cm beneath the water and placed at a fixed position at the center of one of the quadrants. On day 6, a probe trial was performed to assess spatial memory retention. During this trial, animals were allowed to swim freely for 60 s, but no platform was present. Swim distance, escape latency, velocity, and the percentage of time in the target quadrant were digitally recorded and analyzed by a tracking software (Noldus Ethovision). Toluidine blue staining Frozen sections were stained in toluidine blue working solution for 2-3 min. Sections were dehydrated quickly through 95% and 2 changes of 100% alcohol (10 dips in each since stain fades quickly in alcohol) after being washed in distilled water for three times. Finally, sections were cleared in xylene and covered with a resinous mounting medium. Mast cells were counted at perihematoma and thalamus areas for 3 sections per sample (n = 6/group). Nissl staining Nissl staining was conducted and analyzed as previously described [18]. Brain sections were dehydrated in 95% and 70% ethanol for 2 min and then washed in distilled water for 2 min. Sections were stained with 0.5% cresyl violet (Sigma-Aldrich, USA) for 2 min and washed in distilled water for 10 s followed by dehydration with 100% ethanol and xylene for 2 min twice, respectively, before a coverslip with Permount was placed. Volumes were calculated as the average delineated area from 10 μm sections taken at ≈ 2.5 mm, 1.2 mm, 0.7 mm rostral, and 2.9 mm caudal of the bregma multiplied by the depth of the cerebroventricular system. ImageJ software was used to measure cortical thickness and white matter area in Nissl stained histological brain sections. These indexes were relative to the control group [18,19]. Calculations were performed in a blinded fashion. Statistical analysis All data were presented as a mean ± SD. All analyses were performed using GraphPad Prism 6 (GraphPad software). Normal distribution was first confirmed using the Shapiro-Wilk normality test. For the data that passed the normality test, the statistical differences among groups were further analyzed using one-way ANOVA followed by Tukey's multiple comparison post hoc analysis. For the data that failed the normality test, Kruskal-Wallis one-way ANOVA on ranks was used, followed by Tukey's multiple comparison post hoc analysis. P < 0.05 was considered statistically significant. Results Endogenous relaxin-2 was downregulated and RXFP1 level peaked on the first day after GMH Western blot results showed that there was a significant decrease in the expression of endogenous relaxin-2 at 12 h after GMH (Fig. 1a, b). The expression of RXFP1 increased at 12 h after GMH, peaked on the first day, and declined significantly on the third, fifth, and seventh day after GMH (Fig. 1a, c). Mast cell marker chymase expression increased and peaked on the first day and diminished on the third day after GMH (Fig. 1a, d). 
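A minimal sketch of the statistical workflow described in the Statistical analysis section above (Shapiro-Wilk normality check, then one-way ANOVA with Tukey's post hoc test, or Kruskal-Wallis when normality fails) is given below. The group values are hypothetical placeholders, not data from this study.

```python
# Illustrative sketch of the analysis workflow described above; the group
# values are made-up placeholders, not the study's measurements.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "sham":     np.array([10.2, 11.1, 9.8, 10.5, 10.9, 10.4]),
    "vehicle":  np.array([14.8, 15.6, 16.1, 15.0, 14.5, 15.9]),
    "relaxin2": np.array([11.9, 12.4, 11.5, 12.8, 12.1, 11.7]),
}

# Shapiro-Wilk on each group; fall back to Kruskal-Wallis if any fails.
normal = all(stats.shapiro(v).pvalue > 0.05 for v in groups.values())

if normal:
    f, p = stats.f_oneway(*groups.values())
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(values, labels))   # Tukey's multiple comparisons
else:
    h, p = stats.kruskal(*groups.values())

print(f"omnibus p = {p:.4f}")
```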
Additionally, tryptase, another marker of mast cells, increased dramatically at 12 h, peaked on the first day, and decreased on the third day after GMH (Fig. 1a, e). Double immunofluorescence staining demonstrated that the receptor RXFP1 was expressed abundantly on mast cells marked with tryptase (Fig. 2B2, B4, C2, C4) and chymase (Fig. 2E2, E4, F2, F4) on the first day after GMH. (Fig. 2 legend: cellular localization of RXFP1 in the perihematoma area of the brains; representative double immunofluorescence images showed that RXFP1 (A1, B1, and C1) was expressed on mast cells marked with tryptase (A2, B2, and C2) and chymase (D2, E2, and F2) on the first day after GMH; n = 6, scale bar = 50 μm.) Furthermore, TNFAIP3 (Supplementary Fig. 3B and F) was also co-localized in mast cells marked by tryptase (Supplementary Fig. 3A and D) and chymase (Supplementary Fig. 3E and H) on the first day after GMH. rh-relaxin-2 treatment inhibited mast cell response after GMH In order to explore whether rh-relaxin-2 inhibits mast cell degranulation after GMH, we used toluidine blue as a specific stain for mast cells in the perihematoma and thalamic areas and quantified the numbers of mast cells on the first day after GMH. The results showed that the total numbers of violet mast cells with rh-relaxin-2 treatment were decreased compared to the vehicle group in the perihematoma (Fig. 3b-d) and thalamic (Fig. 3f-h) areas. There were scarcely any violet mast cells in the sham groups (Fig. 3a, d, e, h). Intraperitoneal administration of rh-relaxin-2 improved short-term neurological function at 72 h post-GMH Three dosages (30 μg/kg, 60 μg/kg, and 90 μg/kg) of rh-relaxin-2 were administered via intraperitoneal injections at 1 h and 13 h after GMH. Pups in the vehicle group took a significantly longer time to turn from the head-downward position to the prone 90° (Fig. 4a) and 180° (Fig. 4b) positions compared to the sham group in the first 3 days after GMH. There were significant differences in negative geotaxis between the three treatment groups and vehicle on the first, second, and third day after GMH. Among these treatment groups, both the medium and high dosages of rh-relaxin-2 improved short-term neurological function in negative geotaxis and body righting reflex (Fig. 4c). Considering the drug safety profile, we chose the medium dose of rh-relaxin-2 (60 μg/kg) for the following studies. rh-relaxin-2 treatment ameliorated long-term neurological deficits post-GMH In the rotarod test, rh-relaxin-2 (60 μg/kg) treatment significantly increased the falling speed and falling latency in both the 5 rpm (Fig. 4d, e) and 10 rpm (Fig. 4d, e) acceleration groups compared to the vehicle group. In the foot fault test, animals in the vehicle group had significantly more total foot slips compared to the rh-relaxin-2 (60 μg/kg)-treated group (Fig. 4f). Moreover, the water maze test showed that animals from the vehicle group swam a significantly longer distance (Fig. 4g), took more time to find the platform (Fig. 4h), and spent less time in the target quadrant during the probe trial (Fig. 4j, k) compared to sham animals. In contrast, rh-relaxin-2-treated animals performed better (Fig. 4j, k) than vehicle-treated animals. These findings indicated that rh-relaxin-2 treatment improved memory function at 28 days after GMH. There was no significant difference in swimming velocity among these 3 groups (Fig. 4i), meaning that the differences in finding the platform were related to memory recovery rather than to swimming ability.
rh-relaxin-2 treatment attenuated ventricular dilation and gray matter loss and restored cortical thickness and white matter area after GMH Ventricular dilation is a major manifestation of PHH. We evaluated whether this could be attenuated by rh-relaxin-2 treatment. Significant ventricular dilation (Fig. 4l) was observed in vehicle-treated animals, but the ventricular volume was reduced significantly with rh-relaxin-2 treatment (Fig. 4l, m). Gray matter loss was significant in vehicle-treated animals, while it was also significantly attenuated with rh-relaxin-2 treatment (Fig. 4l, n). Loss of cortical tissue was significantly attenuated with rh-relaxin-2 treatment (Fig. 4l, o) compared to vehicle-treated animals. Relative white matter area was significantly less in the vehicle group than in the sham group and the rh-relaxin-2-treated pups (Fig. 4l, p). (Fig. 4 legend: negative geotaxis (a, b) and righting reflex (c) demonstrated that medium (60 μg/kg) and high (90 μg/kg) dosages of rh-relaxin-2 significantly improved neurological function compared to vehicle-treated pups in the first 3 days; *P < 0.05 vs Sham, #P < 0.05 vs GMH + vehicle, $P < 0.05 vs low dosage (30 μg/kg) of rh-relaxin-2, one-way ANOVA, Tukey's test, n = 7/group. rh-relaxin-2 (60 μg/kg) treatment significantly increased the falling speed and falling latency in both the 5 rpm and 10 rpm (d, e) acceleration groups. In the foot fault test, animals in the vehicle group had significantly more total foot slips compared to the rh-relaxin-2 (60 μg/kg) treatment group (f). The water maze test showed that animals from the vehicle group swam significantly longer in 1 min (g), took more time to find the platform (h), and spent less time in the target quadrant during the probe trial (j, k) compared to the sham animals; in contrast, rh-relaxin-2-treated animals performed better (j, k) than vehicle; *P < 0.05 vs Sham, #P < 0.05 vs GMH + vehicle, one-way ANOVA, Tukey's test, n = 10/group. In addition, rh-relaxin-2 administration significantly reduced ventricular volume (l, m) and gray matter loss (n), and increased relative cortical thickness (o) and relative white matter area (p); *P < 0.05 vs Sham, #P < 0.05 vs GMH + vehicle, one-way ANOVA, Tukey's test, n = 6.) Discussion GMH is the most common neurological disorder of newborns, and the major neurological complications following intraventricular hemorrhage are neuroinflammation, cerebral palsy, PHH, and cognitive deficits [21]. Neuroinflammation is a trigger of secondary injury after GMH. Mast cells are the first responders in neuroinflammation after intracranial hemorrhage; they can release inflammatory mediators, such as cytokines, proteinases, and reactive oxygen species, to initiate and magnify the immune response in the brain. Therefore, inhibition of neuroinflammation focused on attenuating mast cell degranulation was our primary strategy to treat GMH in this study. Thus, we explored the therapeutic effects of rh-relaxin-2 in inhibiting the degranulation of mast cells and uncovered the potential mechanisms after GMH. Firstly, we observed that the expression of endogenous relaxin-2 decreased continuously and that its receptor RXFP1 increased on the first day but decreased in the late phase after GMH. The receptor RXFP1 was expressed abundantly after GMH on mast cells, which were marked by tryptase and chymase.
Additionally, administration of rh-relaxin-2 at the dosage of 60 μg/kg improved short-term neurological functions in the first 3 days and inhibited mast cell degranulation on the first day in the perihematoma and thalamus areas. It also attenuated the motor and memory dysfunction and reduced the ventricular dilation in the long-term studies. Moreover, knockdown of RXFP1 using RXFP1 siRNA aggravated mast cell degranulation and neuroinflammation, as shown by the decreased levels of PI3K, phosphorylated AKT, and TNFAIP3 and the increase in chymase, tryptase, IL-6, and TNF-α. Furthermore, degranulation of mast cells and neuroinflammation were exacerbated with the inhibition of PI3K, which was concomitant with downregulation of phosphorylated AKT and TNFAIP3 and upregulation of chymase, tryptase, IL-6, and TNF-α. The naturally circulating hormone relaxin-2 is a member of the insulin-like peptide family. It is well known in basic and clinical research for its use in cervical ripening, scleroderma or systemic sclerosis, and heart failure, due to its vasodilatory and organ-protective actions. In this study, rh-relaxin-2 was shown to be a beneficial factor involved in attenuating the degranulation of mast cells. We observed that endogenous relaxin-2 decreased but its receptor RXFP1 increased as early as 12 h post-GMH, indicating a fast protective reaction to attenuate mast cell degranulation. This finding differed from other research in which RXFP1 mRNA was significantly downregulated on day 3 in a rabbit subarachnoid hemorrhage model [22]. It might be because we chose earlier time points, at 12 h and 1 day post-GMH, which were concomitant with mast cell activation. Our further study showed that rh-relaxin-2 attenuated neurological deficits significantly in the short and long term after GMH. Therefore, improved outcomes were attributed to suppressed degranulation of mast cells by the systemic administration of rh-relaxin-2. (Fig. 5 legend fragment: (a, f) on the first day after intracerebroventricular injections; the expression of phosphorylated NF-κB (a, g) and the inflammatory factors chymase (a, h), tryptase (a, i), IL-6 (a, j), and TNF-α (a, k) increased on the first day after GMH; *P < 0.05, Sham vs. RXFP1 siRNA, #P < 0.05, GMH + vehicle vs. rh-relaxin-2, $P < 0.05, Scr siRNA vs. RXFP1 siRNA, one-way ANOVA, Tukey's test, n = 6.) The exact mechanism by which rh-relaxin-2 exerts its protective effect in GMH remains unclear. The 72-h intravenous administration of rh-relaxin-2 in acute myocardial infarction resulted in early beneficial effects, including reduced inflammation [10]. In our results, after knockdown of RXFP1 by specific RXFP1 siRNA, the mast cell markers chymase and tryptase, the inflammatory cytokines IL-6 and TNF-α, and classic phosphorylated NF-κB all increased, which was consistent with the acute myocardial infarction outcomes. Hence, RXFP1 could be an important therapeutic target to reduce the degranulation of mast cells after GMH. Our current results indicated that mast cell degranulation was exacerbated with inhibition of PI3K, which also led to the decrease of phosphorylated AKT on the first day after GMH. Some research evidence has demonstrated the PI3K-AKT axis to be an important therapeutic target in attenuating degranulation of mast cells via suppressing immune responses, and it has been validated as a major downstream pathway of RXFP1 activation [13]. RXFP1, as the receptor of relaxin-2, is expressed abundantly on mast cells after GMH.
All of the abovementioned evidence supports our results observed in GMH. It is known that TNFAIP3 is an endogenous anti-inflammatory factor that can reduce the expression of IL-6 and TNF-α by inhibiting NF-κB activation [23]. As shown in our results, TNFAIP3 expression was significantly reduced after knockdown of RXFP1 and inhibition of PI3K in GMH animals. Thus, we hypothesized that TNFAIP3 may be a downstream factor of RXFP1 and the PI3K-AKT axis in the context of GMH. Meanwhile, the decrease of TNFAIP3 after inhibition by specific siRNA promoted the expression of phosphorylated NF-κB and inflammatory cytokines on the first day post-GMH. In this process, after activation of RXFP1 by rh-relaxin-2, an increase of TNFAIP3 mediated the reduction of NF-κB and functioned as a negative regulator of NF-κB. Previous reports also showed that IL-6 and TNF-α levels increased in TNFAIP3−/− sham groups of an intracerebral hemorrhage mouse model, suggesting that TNFAIP3 deficiency caused spontaneous inflammation in the mouse brain, which was consistent with our results in GMH pups [24,25]. Thus, our results, coupled with the previous research, suggested that rh-relaxin-2 attenuated GMH-induced inflammation through the PI3K-AKT/TNFAIP3/NF-κB pathway in mast cells. (Fig. 6 legend: LY294002 significantly decreased PI3K (a, d, l), phosphorylated Akt (a, e), and TNFAIP3 (a, f) expression, which was accompanied by an increase of phosphorylated NF-κB (a, g), chymase (a, h), tryptase (a, i), IL-6 (a, j), and TNF-α (a, k) on the first day after GMH; *P < 0.05, Sham vs. LY294002; #P > 0.05, GMH + vehicle vs. LY294002; $P < 0.05, DMSO vs. LY294002, one-way ANOVA, Tukey's test, n = 6.) There are some limitations in our current research. We only studied mast cell activation rather than the detailed interaction between microglia and mast cells after GMH. In addition, we did not further explore the potential protective effects of rh-relaxin-2 on neurons or the reduction of glial scarring in GMH tissues. Conclusions The activation of RXFP1 by rh-relaxin-2 could attenuate degranulation of mast cells and improve neurological function by inhibiting NF-κB through the PI3K-AKT/TNFAIP3 signaling pathway after GMH in a rat model. Therefore, rh-relaxin-2 may serve as a promising therapeutic agent to reduce neuroinflammation and secondary brain injury in GMH patients. Additional file 1: Figure S1. Proposed mechanism. Figure S2. Experimental design. Figure S3. The cellular localization of TNFAIP3 in the perihematoma area of the brains. Representative images of double immunofluorescence staining showed that TNFAIP3 (B and F) was expressed on mast cells marked with tryptase (A) and chymase (E) on the first day after GMH. n = 6. Scale bar = 50 μm. Figure S4. Clodronate liposome inhibited the response of microglia in a GMH rat model. Representative images of immunofluorescence staining showed that the expression of Iba1 in clodronate liposome + GMH animals (B, H) was inhibited compared to PBS + GMH animals (C, I) on the first day after GMH. n = 3. Scale bar = 50 μm. Table S1. Animal use in each experimental group.
5,940.2
2020-08-28T00:00:00.000
[ "Biology" ]
Linking Real-Life Situation with Content of School Mathematics at Secondary Level: Exploring Current Practice and Challenge in Bangladesh Worldwide mathematics is considered as a subject in school curriculum which is intimately connected to students’ daily life. However, it is disheartening that there are some level of disconnection between students’ mathematics learning and the way it is applied to real-world. This study is conducted to explore the current practice and challenges in linking real-life situation with mathematical content at secondary level in Bangladesh, a country in transition from least-developed to developing status. Here multiple case study approach was followed under qualitative framework. Mathematics teachers of Grade 8 and their lessons were selected conveniently to explore how teachers were responding to the issues of practice and challenges in linking real-life situation and eventually promoted mathematical literacy in their lessons. Each teacher was considered as a case. Lesson observations, participant teachers’ interview, FGD with students and document analysis had been used as instruments in the study. Thematic analysis was followed as the data analysis technique. In this paper, we briefly summarized our work in exploring current practice and challenges in Bangladesh in connecting real-world with the world of mathematics. This study finds that teachers emphasize on explaining mathematical concept in ways that make sense to students but practice in specific cases. Occasionally, students struggle with mathematical terminologies while teachers are unaware of it. Moreover, teachers find difficulties to connect some mathematical content with real-life situations because of the nature of content and during the lessons they rely on “teaching textbooks” due to time constraint. Both teachers and students in this study point out that textbook examples and problems are not interesting. Finally, this study finds that Grade 8 mathematics textbook, in limited cases, deals with “real” problems: problems that are typically placed in a “situation”. The present study will thus provide insights to the teachers to improve their practice in order to make mathematics more significant and interesting to the students. This study will also notify the textbook writers to bridge between content and context to increase the effectiveness and quality of mathematics textbook by including more real-life oriented mathematical problems. Along with improved teachers’ practice and quality textbook, this study will eventually provide opportunities for students to be engaged with mathematics in a more meaningful way and to finally develop their mathematical literacy at an expected level. Introduction Mathematics is a critical tool for youth as they confront issues and challenges in personal, occupational, societal and scientific aspects of their lives (Organisation for Economic Co-operation and Development [OECD], 2017). However, in learning mathematics students usually do not realize why they have to learn it and that impacts their motivation and retention (Diggs, 2009). On the other hand, students' interest in mathematics increases when they understand the skills and how that skill is developed and connected to required mathematics competencies for performance (Mensah, Okyere & Kuranchie, 2013, as cited in Arther, Owusu, Asiedu-Addo & Arhin, 2018. 
It is emphasized that mathematical problems should be realistic so that learners can build their own mathematical concepts (Royal Ministry of Education, Research and Church Affairs [RMERC], 1999). Increased stress on tasks that promote connections and applications of mathematics to the real world has been called for by researchers and by reforms in curriculum frameworks in several countries since the 1990s (for example, National Council of Teachers of Mathematics [NCTM], 1989, 2000, cited in Cheng, 2013). Statement of the problem In Bangladesh, one of the aims of the school mathematics curriculum is to acquaint learners with mathematical logic, methods and skills, and to increase their abilities to apply them in solving problems concerning day-to-day activities and global affairs (National Curriculum and Textbook Board [NCTB], 2012, p. 8, translated by authors). In reality, 57.56% of mathematics teachers in Bangladesh who have served for more than 10 years are not well conversant with the existing curriculum (Bangladesh Bureau of Educational Information and Statistics [BANBEIS], 2008). Moreover, within the last consecutive years there have been higher failure rates in the mathematics subject in the Secondary School Certificate (SSC) examination, taken after the completion of secondary education in Bangladesh (BANBEIS, 2008). Significance of the study There has been a variety of research on mathematics education in Bangladesh. Most of these studies focus on the assessment system, teachers' training programs or textbook implementation. However, there is a void in extensive research in Bangladesh on developing the mathematical literacy of secondary students through tasks that promote connections and applications of mathematics to the real world. This study thus seeks to explore the current practice and the challenges that teachers face in Bangladesh in connecting mathematical content with real-life situations. Research questions The following research questions (RQ) are used for this study: RQ1: What are the current practices in Bangladesh in linking real-life situations with mathematical content? RQ2: What are the challenges that teachers face in Bangladesh in linking real-life situations with mathematical content? Limitations This study has been conducted on a small scale and only in the capital city of the country. Further research can be done on a larger scale, including rural and other urban areas, for a deeper understanding of current practice and challenges in bridging content and context for developing the mathematical literacy of secondary students in Bangladesh. Literature Review Real-World Mathematical Problems Palm (2008) indicates some level of disconnection between students' mathematics learning and the way it is applied to the real world. Research reveals that this disconnection causes students to struggle in understanding mathematics, where some blame the mathematical contents these students are required to learn for this disconnection, while others blame the educational approach taken by teachers (White-Clark, DiCarlo & Gilchriest, 2008). Education standards in many countries emphasize the vital importance of bridging school mathematics with real-life situations (Larina, 2016). In consequence, the teaching of mathematics is made interesting to students when teachers are ready to connect mathematical concepts to real-world problems (Arthur et al., 2018).
Real-world problems are also referred to as everyday problems (OECD, 2013), realistic problems (Cooper & Harries, 2005;Gainsburg, 2008;Pais, 2013), applied tasks (Palm, 2006) and modeling tasks (Blum & Ferri, 2009;Frejd, 2012). Accordingly, there are multiple definitions of real-world mathematical problems which, in general, suggest that different characteristics may be used in theoretical models of such problems. In reviewing literatures for the present study, three different aspects were investigated initially in connecting real-world with the world of mathematics: (1) components of the connection (2) outcome of the connection and (3) classroom tools for the connection. Various themes with indicators were then identified in this study as theme locates something significant about the information in relation to the research question, and represents some level of comparable response (Braun & Clarke, 2006). Language in Mathematics In several studies (for example, Meiers, 2010), we observe that using everyday language is the first indicator of real-world mathematical problem in relation to the theme of language in mathematics. Students at all levels need opportunities to express mathematical terms in ways that make sense to them (Adams, Thangata & King, 2005). It is argued that everyday language plays a vital role in learning mathematics and mathematical problems should describe situations with the help of words, symbols and events that people come across every day (Meiers, 2010). The use of everyday language as the first characteristic of real-world mathematical problems implies two unavoidable processes: mathematical modeling and interpretation (Larina, 2016). Likewise, teachers need to encourage students to communicate about what the teacher is trying to teach and what they are trying to learn (Gough, 2007). Context and Situational Relevance The second component of a real-world mathematical problem is the context and situational relevance. The incorporation of context in problems has been highly recommended by recent reform documents and mathematics curricula around the world (for example, OECD, 2013). The PISA (Program for International Student Assessment) framework in OECD (2003) for developing real-world mathematical problems suggests four different contexts: personal, educational & occupational, public and scientific which are described as follows:  Personal Context: It is assumed that such a context would be of immediate and direct personal relevance to many young kids.  Educational & Occupational Context: These contexts include problem situations that students might confront while at school (artificially designed for teaching and practice purposes), or problems that would be met in a work situation.  Public Context: Public contexts are those situations experienced in one's day-to-day interactions with the outside world.  Scientific Context: The stimulus for this context usually presents scientific data, for example, data on level of carbon dioxide emissions for several countries, and the problem might ask students to interpret and make use of the data presented. 'Mathematisation': The Problem-Solving Process The third and final component of real-world mathematical problem involves problem-solving process or the process of 'mathematisation' as it is called in the PISA framework in OECD (2003) of mathematical literacy. OECD (2009) describes 'mathematisation' as a circular process with the subsequent five main features of the problem: 1. Real-world setting of the problem. 2. 
Consistent organization of the problem with mathematical concepts and identification of relevant mathematics. 3. Transformation of the real-world problem into a mathematical problem. 4. Solving the mathematical problem. 5. Making sense of the mathematical solution in terms of the real-world situation. These five aspects were then clustered into three phases, as follows, according to the general features of mathematical problem-solving approaches in OECD (2009). Phase 1: Understanding the question (e.g. dealing with extraneous data), which is called "horizontal mathematisation". Phase 2: Sophistication of problem-solving approaches, which is called "vertical mathematisation". OECD (2003) defines mathematical literacy as "an individual's capacity" for identifying and understanding "the role that mathematics plays in the world" and, in particular, emphasizes making "well-founded judgments" in using mathematics "in ways that meet the needs of that individual's life" as a concerned citizen. Briefly, mathematical literacy is the capability of dealing with "real" problems: problems that are typically placed in a "situation" (Burkhardt, 2008). While developing mathematical literacy, students have to "solve" a real-world problem, requiring them to use the skills and competencies they have acquired through schooling and life experiences (Cohen, 2001). Later, Burkhardt (2008) investigated a number of PISA tasks, of which the task labeled Primary Teachers, provided in Table 1, was one. Table 1. An Example of a PISA Task (Burkhardt, 2008) Burkhardt (2008), finally, identifies that mathematical literacy involves (a) "complex reasoning" (as logical and systematic thinking) and (b) linking models of the situation to data, which Steen (Ed.) (2002) describes as the "sophisticated use" of elementary mathematics. 'Making Better Use' of Mathematics Textbook in Linking Real-Life Situation Textbooks are designed to help teachers organize their teaching (Johansson, 2006) and to specify how classroom lessons can be structured with appropriate exercises and activities (Lockheed, Vail & Fuller, 1986). Conversely, if mathematics textbooks are monotonous and uninteresting, they become an obstacle to students' learning (Johansson, 2006). Educational reform has urged transforming the function of textbooks from 'controlling teaching' to 'serving for teaching' (Guo, 2001). The use of textbooks by teachers in mathematics lessons is thus divided into three levels according to the framework of Nicol and Crespo (2006): "adhering", "elaborating" and "creating". Literally, "adhering" usage indicates considering textbooks as "an authority" that decides what to teach and how to teach. At this level of usage, teachers make no adjustments or modifications to textbooks. In "elaborating" usage of textbooks, teachers take advantage of other sources, in addition to considering the textbook as "a guide", to amplify the questions, tasks and exercises given in textbooks. In "creating" usage of textbooks, teachers keep a critical eye on textbooks, consequently locate the intentions and limitations of the textbooks, and finally optimize teaching structures and pedagogical sequences, including the teaching system, by setting up appropriate problems.
It is important to note that, with the advancement of mathematics curriculum reform, teachers are strongly urged to reduce their dependence on the textbook and to change their mind-set from "teaching textbooks" to "making better use of textbooks" (Qi, Zang & Huang, 2014). This leaves teachers with the scope to create a favorable mathematics learning environment that links real-life situations and promotes the mathematical literacy of secondary students.
Mathematics Teaching in Linking Real-Life Situations
Students have to be actively engaged in learning rather than just being listeners and observers in their classes (Jameel & Ali, 2016). Teachers' attempts to simply pour information into children's minds, as in the traditional way of teaching, are not what current educational reforms anticipate. Rather, children ought to be given the confidence to discover their world, find out knowledge, and consider and think critically under the vigilant supervision and significant guidance of the teacher (Eby, Herrel & Jordan, 2005). Research strongly suggests that students have to build their own understanding of every concept of mathematics; thus the main responsibility of teaching is not explaining, lecturing or attempting to convey mathematical knowledge, but creating situations for students to promote their mental structures (Lessani, Yunus & Bakar, 2017). The teacher's role in making mathematics meaningful and in engaging students to form their own interpretation of evidence and to submit it for review is emphasized, since effective teaching methods help students to develop a wide range of complex mathematical structures and to gain the capability of solving a variety of real-life problems (Tarmizi & Bayat, 2012). The six themes, with several indicators, identified in this study for linking the real world with the world of mathematics are briefly summarized in Table 2. Among the themes identified in the present study, three were related to the components of the connection between the real world and the world of mathematics, two were related to the classroom tools for the connection and one was related to the output of the connection. Indicators summarized in Table 2 include:
- Teaching methods make mathematics meaningful to students
- Students are given confidence to discover their world
- Creating situations for students to promote their mental structures
- Students think critically with vigilant supervision and significant guidance of the teacher
- Students form their own interpretation of evidence and submit it for review
- Students gain the capability of solving a variety of real-life problems
3. Methodology
Study area and period of time
The study area was located in Dhaka, the capital city of Bangladesh. The selected cases were from four neighboring schools of the University of Dhaka. The study required about 4 months.
Nature of the study
Qualitative research is best suited to address a research problem in which the researcher does not know the variables and needs to explore more from participants (Creswell, 2012). While exploring the current practices and challenges in linking real-life situations with mathematical content in Bangladesh, a qualitative approach was considered an appropriate option for the study. Case study was used as the research design. Four mathematics teachers of Grade 8 from four different schools were chosen conveniently as the cases. Three lessons of each of the cases were observed. According to Bell (2005), observation can be useful in discovering whether people do what they say they do and behave in the way they claim.
In each case, a total of six Grade 8 students (3 girls and 3 boys) whose lessons were observed were selected purposively for a Focus Group Discussion (FGD).
Table 3. Sample and sampling of the present study
Data collection methods
In this study, data were collected through lesson observation, interviews, FGDs and document analysis. A total of twelve Grade 8 lessons were observed. Four FGDs with students of the observed lessons and interviews with the four participant teachers were conducted, along with an analysis of the class work and homework notebooks of the participant students as documents and records. The themes and corresponding indicators of Table 2 were used to develop the observation checklist and the questionnaires for the interviews and FGDs. Semi-structured interviews (Note 1) with the participant teachers were conducted after observing all three of their lessons, addressing both RQ1 and RQ2. In addition to the interviews, lesson observations, FGDs with students and document analysis were conducted for RQ1. In the FGDs, students discussed the teachers' current practice during lessons in linking the real world with the world of mathematics as well as their own experiences and expectations of mathematics lessons.
Data analysis
Thematic analysis was used for this study. According to Braun & Clarke (2006), thematic analysis is a method used for identifying, analyzing and reporting patterns (themes) within data. The collected data were analyzed on the basis of the six themes, with their indicators, identified in the present study.
Ethical consideration
To collect data, request letters were first sent to the principals (head teachers) of the concerned schools. Consent from the participating teachers and students was obtained before conducting the interviews, observations and FGDs. The objective and goal of the study were shared with the participants, as it is necessary to inform participants in order to gain their support (Creswell, 2012). Participant teachers and students willingly cooperated with the study.
Results and Discussion
In this section, we briefly summarize our work in exploring the current practice and challenges in Bangladesh in linking real-life situations with mathematical content. We identified a number of issues and learned many lessons while conducting this research. Although some issues and lessons may be unique to the context of Bangladesh, we believe many are applicable across countries. On the basis of a qualitative review of the lesson observations, participant teachers' interviews, FGDs with students and analysis of documents, the following findings are shared in this paper.
Teachers emphasize "making language meaningful to students" but practice it only in specific cases
All four teachers, in their interviews, emphasized that mathematical terms should be expressed in ways that make sense to students. They also claimed to regularly explain difficult or unfamiliar words in mathematics lessons. However, it was observed that the teachers explained mathematical terms such as "cost price" and "retail selling price" in ways that make sense to students only in specific arithmetic (profit, loss and interest related) lessons. In particular, Teacher C gave an example of "khuchra-bikreta" ('retail seller' in textbook language) as "mudi-dokander" (a word from everyday language in Bangladesh), thereby using easy language for students' understanding.
Document analysis in all four cases also suggests that generally, there were no explanations of unfamiliar words during mathematics lessons. For instance, students in case 1 wrote "Fibonacci" in their notebook from previous lesson but did not know that it was 'name' of a mathematician. Students struggle with mathematical terminologies while teachers are unaware of it Teachers, usually, were not concerned in explaining unfamiliar terms used during the lessons of algebra and geometry. Teacher A mentioned in interview that there was no difficult or unfamiliar term in all three of the lessons observed which included two on algebra (use of algebraic formula and middle term factorization) and one on geometry (diagonals of a parallelogram bisect each other). However, FGD with students of these lessons revealed that they expected simplification of the unfamiliar words used during those lessons such as 'middle term factorization', 'diagonal' and 'bisection'. In all the four cases, students in FGD expressed their view that if difficult words were explained, it would had been easier for them to solve the mathematical problem. While claiming this, in Case 4 students' provided an example as an evidence from their class work notebook which had the explanations of the unfamiliar word, 'Pie chart', as "it is round shaped, it looks like a food called 'PIE', so it is called pie chart." Teachers face difficulties to relate mathematical content with real-life situation due to the nature of content All the teachers in their interviews agreed that real-life situation is important for students understanding and motivation, however, from class observation, FGD and document analysis it was evident that teachers used reallife related examples only in particular arithmetic lessons. When being asked about their practice in connecting real-life situation with mathematical contents in their lessons, teachers identified that most of the time they faced difficulties in connecting real-life situation with algebra and geometry because of the "nature of content". Moreover, Teacher B viewed that it was not necessary to link real-life situation with every mathematical contents, whereas others disagreed with this opinion in their interviews. On the other hand, connection of real-life situation with every mathematical content was expected by students in all FGDs. 4.4 Textbook problems were not typically placed in a "situation" and hence were not "real" to students While observing the lessons, it was identified that in Grade 8 textbook in Bangladesh, generally, problems were not related to application of mathematics to the situation of natural world faced by many young kids at school, work station or in family, for example, as seen in PISA Task: PRIMARY TEACHERS in Table 1. Students were not engaged in transforming real-world problem into a mathematical problem and thus had limited opportunity for 'sophisticated use' of mathematics. A particular arithmetic topic (on profit, loss & interest) and concept on 'Data & Information' were based on real-life settings. Generally, in mathematics lessons the opportunity provided by the textbook for students to make sense of the mathematical situations in terms of the real situation were not observed and consequently, there were limited scopes for immediate and direct experiences in students' interaction with the outside world. For instance, we have described earlier that Teacher A in a geometry lesson was proving the theorem the diagonals of a parallelogram bisect each other. 
In the textbook, this problem, like most of the others, was not typically placed in a "situation" and hence was not "real" to students.
Teachers rely on "teaching textbooks" due to time constraints
Lesson observation revealed that, generally, teachers rely on the mathematics textbook at the "adhering" level of usage, with no adjustment or modification of the textbook. In no case did teachers take advantage of other sources (such as technology) to amplify the questions, tasks and exercises given in the textbook. In rare cases (only once, in Case 4) did teachers keep a critical eye on the textbook to identify limitations and act accordingly in order to optimize learning, as in the "creating" level of textbook usage. In Case 4, students in the FGD identified that Teacher D used examples beyond the textbook only on the topic 'Profit', where examples related to the banking system were provided by the teacher. Students expressed their preference for these types of examples. All teachers except Teacher B agreed that textbook examples are not enough for students' learning, but that the pressure to complete the syllabus within a short time prevented them from providing problems and examples beyond the textbook. Teacher B, however, strongly argued that textbook examples were enough for students' learning.
Both teachers and students found textbook examples and problems uninteresting
Both teachers and students expressed the opinion that textbook examples and problems were 'not interesting'. In addition, Teacher D described textbook problems and examples as 'simple' and 'traditional'. Students found very few examples and word problems in the textbook interesting. They specifically mentioned that problems in the 'Data & Information' topic were interesting, since these were related to 'sports'.
Non-observance of teaching that makes mathematics meaningful to students
It was observed that, without exception, students were listeners and observers in their mathematics lessons. Generally, in every lesson the teacher solved mathematical problems, students copied them down from the board, and later students practiced the process of solution. In no case did teachers ask students for a verbal interpretation of their own understanding of a mathematical concept. Neither teachers' encouragement of students' critical thinking nor teachers' efforts in creating situations for students to develop mental structures around mathematical concepts were observed in lessons. Unfortunately, teaching was observed as explaining, lecturing and attempting to convey mathematical knowledge to students, not as making mathematics meaningful to them.
Limited scope for developing mathematical literacy
Finally, it was identified that neither the textbook nor the teaching involved students in logical and systematic thinking (i.e., complex reasoning) as seen in the PISA task in Table 1. Consequently, limited scope for students to develop mathematical literacy was identified in this study. Overall findings for RQ1 and RQ2 are summarized in Table 4 and Table 5, respectively.
Table 4. Overall Findings for RQ1
Making Language Meaningful to Students:
- Teachers agreed that it is important to explain difficult words in mathematics lessons, but practice this only in specific cases.
- Teachers only explained difficult mathematical terms (such as 'cost price', 'retail selling price') in specific arithmetic lessons.
- Teachers usually were not concerned with explaining unfamiliar terms or words (such as 'middle term factorization', 'diagonal', 'bisection', 'Fibonacci') in algebra and geometry lessons.
- Occasionally, teachers used everyday examples in explaining terms like 'pie chart'.
Contextualization and Situational Relevancy of Mathematical Problems:
- Immediate and direct experiences in students' day-to-day interaction with the outside world were not observed in general.
- Textbook problems related to the application of mathematics to situations of the natural world faced by many young kids at school, at a work station or in the family, as seen in the PISA task in Table 1, were not found.
'Mathematisation':
- Students were not engaged in transforming a real-world problem into a mathematical problem.
- Neither the lessons nor the textbook provided the opportunity for students to make sense of mathematical situations in terms of the real situation.
'Making Better Use' of Mathematics Textbook:
- Teachers rely on the textbook at the "adhering" level of usage, with no adjustment or modification.
- Most of the teachers viewed textbook examples as not enough for students' learning.
- In no case did teachers take advantage of other sources (such as technology) to amplify the questions, tasks and exercises given in the textbook.
- Only in rare cases did teachers keep a critical eye on the textbook to identify limitations in order to optimize learning.
Teaching in Making Mathematics Meaningful:
- Without any exception, students were listeners and observers in their mathematics lessons.
- In no case did teachers ask students for a verbal interpretation of their own understanding of a mathematical concept.
- Neither teachers' encouragement of students' critical thinking nor teachers' efforts in creating situations for students to develop mental structures on any mathematical concept were observed in lessons.
- Teaching was observed as explaining, lecturing and attempting to convey mathematical knowledge to students, not as making mathematics meaningful to them.
Developing Mathematical Literacy:
- Neither the textbook nor the teaching involved students in logical and systematic reasoning.
Table 5. Overall Findings for RQ2
Nature of the content:
- It is difficult because of the nature of the content.
Uninteresting textbook examples:
- Textbook examples are simple and uninteresting for Grade 8 students.
- Textbook examples do not motivate students.
Completing the syllabus within a short period of time:
- Teachers need to complete the syllabus within a short period of time.
- Teachers cannot provide examples beyond the textbook within the short period of time.
Conclusion
This study finds that teachers emphasize explaining mathematical concepts in ways that make sense to students but practice this only in specific cases. Occasionally, students struggle with mathematical terminology while teachers are unaware of it. Besides, teachers find it difficult to connect some mathematical content with real-life situations because of the nature of the content, and during lessons they rely on "teaching textbooks" due to time constraints. Both teachers and students in this study identify textbook examples and problems as not interesting. Finally, this study also finds that the Grade 8 mathematics textbook deals only in limited cases with "real" problems: problems that are typically placed in a "situation".
Recommendation
The finding that textbook examples and problems are uninteresting to both students and teachers needs attention and further investigation, since the literature reviewed in this study indicates that monotonous and uninteresting mathematics textbooks are an obstacle to students' learning (Johansson, 2006). Ultimately, in order to link real-life situations with the content of school mathematics at the secondary level in Bangladesh, textbooks need to be developed that deal with "real" problems placed in a "situation".
6,529.6
2021-01-01T00:00:00.000
[ "Mathematics", "Education" ]
Improved strength properties of LVL glued using PVAc adhesives with physical treatment-based Rubberwood (Hevea brasiliensis) The properties of the laminated veneer lumber (LVL) composed of the boiled veneer of Rubberwood (Hevea brasiliensis) using polyvinyl acetate (PVAc) adhesives in various cold-pressing time and various conditioned time with loaded and unloaded were studied. Five-ply LVL was produced by boiling veneer at 100°C for 90min as pretreatment and cold-pressing time at 12 kgf cm for 1.5, 6, 18, and 24 h then conditioned at 20°C and 65% relative humidity (RH) with loaded (12 kgf cm) and unloaded for 7 days as physical treatment. Especially for the delamination test, the specimens were immersed at 70 ± 3°C for 2 h and dried in the oven at 60 ± 3°C for 24 h; then, the specimens were solidified at room temperature (20°C and 65% RH) with loaded (12 kgf cm) and unloaded for 7, 10, 12, and 14 days. To determine the performance of LVL, the density, moisture content (MC), delamination, modulus of elasticity (MOE), modulus of rupture (MOR), horizontal shear strength, and formaldehyde emission tests were conducted according to the Japanese Agricultural Standard (JAS 2008) for structural LVL. The MOE and MOR values were significantly influenced by the physical treatment, however, neither to horizontal shear strength nor to formaldehyde emission. The best performance of LVL has resulted from unloaded LVL with cold-pressed time for 18 h; the MOE and MOR values were 9,345.05 ± 141.61 Nmm and 80.67 ± 1.77 Nmm, respectively. The best value of the horizontal shear strength was obtained from the LVL with 18 h coldpressing time and conditioned with loaded (13.10 ± 1.47 Nmm) and unloaded (12.23 ± 1.36 Nmm). The percentage of delamination values decreased with an increase in the cold-pressing time and conditioning time. The lowest value of delamination (19.06%) was obtained from the LVL with 24 h cold-pressing time and conditioned with loaded for 14 days. Except the delamination test, all other properties fulfilled the JAS. Introduction In Indonesia, the use of small-diameter logs produced from industrial forest estates has been studied since 1998 due to the decrease of large diameter logs from the natural forest as raw materials for composite wood industries (Kliwon and Iskandar 1998). Rubberwood (Hevea brasilienis) produced from community rubber plantation can be used as a raw material for wood-based panel products because, in comparison with some commercial wood species, it has been shown to exhibit equal or better physical and mechanical properties (Boerhendhy et al. 2003). Additionally, in 2018, Indonesia has the largest rubber plantation with the area ±3,671,387 ha or about 30% of the total area in the world (Ministry of Agriculture of Indonesia 2019). Rubber plantation in Indonesia is still dominated by community rubber plantation (smallholders), which cover an area about 3,235,761 ha (88.13%) followed by government estate 189,576 ha (6.70%) and private estate 2,46,050 ha (5.16%) (Ministry of Agriculture of Indonesia 2019). By assuming that the rubber areas were rejuvenated every year about 2% of smallholders and 3% of government estate and private estate, respectively, and the average wood potential was about 50 m 3 /year (Boerhendhy et al. 2003), the potential wood produced from un-productive of rubber areas reached about 3.9 million m 3 every year. 
The production of LVL from rubberwood using a hot-setting adhesive, especially phenol-formaldehyde (PF), melamine urea-formaldehyde (MUF), and urea-formaldehyde (UF), has been reported (Ding et al. 1996; Khoo et al. 2019). However, research to improve the properties of LVL manufactured from rubberwood using a cold-setting adhesive, namely PVAc, with physical treatment is very limited. To improve the adhesion between the wood and adhesive, many pretreatments were applied to wood surfaces such as chemical pretreatments (Gardner and Elder 1988;Belfas et al. 1993), mechanical pretreatments using sanding and planning (Aydin 2004a;Follrich et al. 2010) or densification by rolling (Bekhta et al. 2009) or prepressing (Bekhta 2003;Bekhta and Marutzky 2007), and thermomechanical pretreatments using a hydraulic press with electric resistance heating (Arruda and Del Menezzi 2016). Additionally, bonding quality and strength properties of LVL or plywood could be improved by using adhesive filled with the filler (Sutrisno et al. 2018; Sutrisno et al. 2020) or nanoclay (Moya et al. 2015). However, the influence of using hot water at 100°C on veneers as a pretreatment method is yet to be reported. Although boiled logs can be applied to modify the surface characteristics of veneers, to the best of our authors' knowledge, this method is unsuitable for woods that were produced from community forest or industrial plantation forest because the value of wood density is around low (0.24-0.56 g cm −3 ) and medium (0.56-0.72 g cm −3 ) classes (Abdurachman and Hadjib 2009). In addition, thermal treatment on wood at high temperature in the range of 180-250°C could retard and even impair penetration of glues because of the decrease in wettability and surface inactivation by oxidation of wood bonding sites (Costa et al. 2014), which is due to the compounds formed in the chemical transformations above 180°C (Esteves and Pereira 2009). UF resins are the most widely used adhesives in the manufacture of wood-based panels, especially for interior use. However, environmental requirements, such as stringent formaldehyde emission regulations (Carvalho et al. 2012), have forced producers to use an alternative adhesive, and one of them is polyvinyl acetate (PVAc) adhesive. PVAc adhesives are widely used and chosen (Král et al. 2015) because they are ready-to-use, short setting time, flexible joint, easy to clean, long storage life (Costa et al. 2014), odorless, and nonflammable (Uysal 2005). The growing interest in the use of PVAc adhesives in the wood-based industry is due to the health hazards associated with formaldehyde-based adhesives (Kim and Kim 2006), which are inexpensive when compared with formaldehyde-based adhesives (Paris 2000), various materials can be bonded, and nontoxic (Kim et al. 2007). The use of PVAc in making LVL has been studied by Aydin et al. (2004b), Shukla and Kamdem (2008), and Hashim et al. (2011). Currently, PVAc adhesives also can be applied to the structural design of wood furniture and wooden constructions (Hu and Guan 2019). Some researches have been conducted to evaluate the bonding performance of PVAc adhesives without any treatment on the wood surface. Kral et al. (2015) reported that the bonding properties were affected by different hydrothermal exposures, and the shear strength was affected by the wood species. Additionally, Uysal (2005) reported that the bonding strength of PVAc adhesives was lower than that of UF in the application for LVL after the steam test. 
There are several treatments for improving the adhesive strength of PVAc, including the addition of PVAc to silica (SiO 2 ) and titanium dioxide (TiO 2 ) nanoparticles (Petkovic et al. 2019) and thermal treatment to the wood at 180°C and 200°C for 3, 5, and 7 h (Andromachi and Ekaterini 2018). The bonding strength of PVAc adhesives on the fir wood is decreased in line with the temperature, and the duration of the treatment is increased. On the other hand, Ordu et al. (2013) reported that the thermal treatment on pinewood at 100-150°C for 4 h increased the performance of PVAc adhesives by 31.51%. All these treatments had led to an increase in the price of PVAc adhesives. However, to the best of our authors' knowledge, the use of physical treatments such as boiling veneers, various cold-pressing time and conditioning time, and type of the conditioning method (with loaded and unloaded) on the bonding properties of PVAc adhesives has not been reported. The purpose of this research was to improve the performance of PVAc adhesives by using the above-mentioned physical treatments on the production of LVL made from rubberwood, especially for interior use. The physical and mechanical properties of LVL were evaluated, including density, moisture content (MC), delamination, modulus of elasticity (MOE), modulus of rupture (MOR), horizontal shear strength, and formaldehyde emission. Materials The wood species used was rubberwood (Hevea brasiliensis) from the Community Rubber Plantation at South Sumatera Province, Indonesia. Rubberwood used in this experiment was harvested after cessation of latex with the age of tree 26 years and the solid wood density is 0.67 g cm −3 . PVAc emulsion was used in this study. The PVAc specification used in this study is listed in Table 1. Preliminary study The preliminary study aims to study the effects of pretreatment of veneer and to select better pretreatment by boiling the veneer at a temperature of 100°C for 2, 4, 6, 8, 10, 15, 30, 45, 60, 75, and 90 min or soaking the veneer in acetic acid at a concentration of 10% for 2, 4, 6, 8, 10, 15, 30, 45, 60, 75, and 90 min. Three specimens of LVL were produced using two various pretreatments of veneer, cold-pressing at 12 kgf cm −2 for 24 h and conditioned at 20°C and 65% RH for 3, 5, and 7 days before evaluating their delamination values according to the Japanese Standard (Japanese Agricultural Standard 2008) for structural laminated veneer lumber. Based on this standard, the percentage of delamination values are categorized as follows: passed (<10%), retest (10-30%), and failed (>30%). Differential scanning calorimetry (DSC) analysis The thermal properties of PVAc adhesives were studied by DSC analysis following the procedure by Jelinska et al. (2010) and Cowan et al. (1978). The samples were prepared by casting wet films of emulsion on Teflon container, dried at room temperature for 24 h, and then dried at 60°C for 24 h in a vacuum oven. Approximately 2 mg of the sample was heated at a heating rate of 10°C/min from −30 to 150°C in a nitrogen atmosphere. Thermal transitions were measured using a differential scanning calorimeter (DSC-60 Calorimeter, Shimadzu). LVL manufacturing Five-ply LVL was made from rubberwood veneer with 1.7 mm thick, 35 cm length, and 35 cm width boiled at 100°C for 90 min as pretreatment and dried until MC of veneer reached about 8%. 
Then, the glue was applied on both sides of the core veneer at a spread rate of 32 g f−2, followed immediately by covering with the face and back veneers and cold-pressing at 12 kgf cm−2 for 1.5, 6, 18, and 24 h. The LVLs were thereafter conditioned at room temperature (20°C and 65% relative humidity (RH)) for 7 days, loaded (12 kgf cm−2) or unloaded, before testing. For the delamination test, the specimens were immersed at 70 ± 3°C for 2 h and dried at 60 ± 3°C for 24 h, then solidified for 7, 10, 12, and 14 days at room temperature (20°C and 65% RH), loaded (12 kgf cm−2) or unloaded, before testing. The specimens were cold-pressed at a pressure of 12 kgf cm−2, which was used as the loaded conditioning method.
Density and moisture content (MC) test
Four 50 × 50 mm specimens were used to determine the density and MC using the gravimetric method. In the density test, the air-dried weight and air-dried volume of specimens taken from the panel were measured, and the density was calculated using equation (1):
Density (g cm−3) = air-dried weight / air-dried volume. (1)
MC was calculated using equation (2):
MC (%) = (air-dried weight − oven-dried weight) / oven-dried weight × 100. (2)
Modulus of elasticity (MOE) and modulus of rupture (MOR) test
The bending sample configurations are shown in Figure 1. The load was applied perpendicular and parallel to the veneer faces for flatwise and edgewise bending, respectively. The size of the flat sample is: length (23 × LVL thickness) and width (90 mm); the size of the edge sample is: length (23 × LVL thickness) and width (LVL thickness). LVL thickness is the dimension of a cross section that is perpendicular to the plane of the veneers and measured in the Y-direction, while LVL width is the dimension of a cross section that is perpendicular to the thickness (or parallel to the plane of the veneers) and measured in the X-direction (Figure 1). Four specimens from each panel were used. MOE and MOR were calculated using equations (3) and (4), respectively:
MOE (N mm−2) = ΔP L³ / (4 b h³ Δy), (3)
MOR (N mm−2) = 3 P L / (2 b h²), (4)
where P is the maximum load (N), b is the width of the test specimens (mm), L is the span (mm), h is the thickness of the test specimens (mm), ΔP is the difference between the upper and lower limits of load within the proportional range (N), and Δy is the deflection at the center of the span corresponding to ΔP (mm).
Horizontal shear strength test
Horizontal shear strength was evaluated according to JAS (Japanese Agricultural Standard 2008) for Structural Laminated Veneer Lumber. Two types of specimens, flat samples and edge samples, were used for the horizontal shear strength test, and the sample configurations are shown in Figure 1. The size of the flat sample is: length (6 × LVL thickness) and width (40 mm); the size of the edge sample is: length (6 × LVL thickness) and width (LVL thickness). Four specimens of each configuration (flat and edge) were used, and the adhesive strength test was applied to the test specimens under dry conditions to obtain the horizontal shear strength. The bonding strength was calculated using equation (5):
Horizontal shear strength (N mm−2) = maximum load at failure / glued area. (5)
Percentage of delamination test
The percentage of delamination was evaluated according to JAS (Japanese Agricultural Standard 2008) for Structural Laminated Veneer Lumber. Four specimens from each panel, with a size of 75 × 75 mm, were used for the delamination test. Specimens were immersed in hot water at 70 ± 3°C for 2 h and then dried at 60 ± 3°C for 24 h, until the MC was 8% or below. After that, the samples were solidified for 7, 10, 12, and 14 days at room temperature (20°C and 65% RH), loaded (12 kgf cm−2) or unloaded, before testing.
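For concreteness, the short Python sketch below evaluates the property formulas above (Equations (1)-(5)) for a single specimen. The function names and sample numbers are illustrative placeholders rather than values or code from the study, and the bending expressions assume the standard centre-point loading forms consistent with the symbols defined for Equations (3) and (4).

```python
def density(air_dry_weight_g: float, air_dry_volume_cm3: float) -> float:
    """Equation (1): density in g/cm^3."""
    return air_dry_weight_g / air_dry_volume_cm3

def moisture_content(air_dry_weight_g: float, oven_dry_weight_g: float) -> float:
    """Equation (2): gravimetric moisture content in percent."""
    return (air_dry_weight_g - oven_dry_weight_g) / oven_dry_weight_g * 100.0

def moe(delta_p_n: float, span_mm: float, width_mm: float,
        thickness_mm: float, delta_y_mm: float) -> float:
    """Equation (3): modulus of elasticity (N/mm^2), centre-point bending."""
    return (delta_p_n * span_mm ** 3) / (4.0 * width_mm * thickness_mm ** 3 * delta_y_mm)

def mor(p_max_n: float, span_mm: float, width_mm: float, thickness_mm: float) -> float:
    """Equation (4): modulus of rupture (N/mm^2), centre-point bending."""
    return (3.0 * p_max_n * span_mm) / (2.0 * width_mm * thickness_mm ** 2)

def horizontal_shear(max_load_n: float, glued_area_mm2: float) -> float:
    """Equation (5): horizontal shear strength (N/mm^2)."""
    return max_load_n / glued_area_mm2

# Illustrative numbers only (not measured values from the study).
print(round(density(28.5, 42.0), 2))             # ~0.68 g/cm^3
print(round(moisture_content(28.5, 26.1), 1))    # ~9.2 %
print(round(mor(1500.0, 200.0, 90.0, 8.5), 1))   # ~69 N/mm^2 for this made-up flat sample
```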
The percentage of delamination was calculated using equation (6):
Percentage of delamination (%) = (total length of delamination on the 4 sides / total length of glue layer on the 4 sides) × 100. (6)
Formaldehyde emission test
Formaldehyde emission was evaluated according to JAS (Japanese Agricultural Standard 2008) for Structural Laminated Veneer Lumber. Four pieces of LVL sample of 15 × 5 cm were cut from each panel, wrapped in a plastic bag, and then conditioned at room temperature (20 ± 1°C) for more than 1 day. The samples were arranged on top of a crystallizing dish with 300 mL of distilled water and placed in a glass desiccator. They were then conditioned at room temperature (20 ± 1°C) and 65% RH for 24 h. Later, a 25 mL liquid sample of the distilled water was placed into an Erlenmeyer flask, 25 mL of acetylacetone–ammonium acetate was added, and the mixture was warmed in a water bath at 60 ± 2°C for 10 min and then conditioned to room temperature in a darkroom. To detect formaldehyde emission, the absorbance of the samples was measured at a wavelength of 412 nm using spectrophotometry. The concentration of the emission was calculated based on the standard curve for LVL.
Statistical analysis
The results of MOE, MOR, horizontal shear strength, and formaldehyde emission were first subjected to an analysis of variance (ANOVA) at p < 0.05, and significant differences between mean values were determined using the least significant difference (LSD) test.
Preliminary study
LVL delamination test results from the preliminary study are listed in Tables 2-4. The preliminary study showed that the LVL produced from veneer boiled at a temperature of 100°C for 90 min, cold-pressed at 12 kgf cm−2 for 24 h, and conditioned at 20°C and 65% RH for 7 days before testing gave the best result, with a delamination percentage of 9% (Table 3). This delamination test passed according to JAS (Japanese Agricultural Standard 2008) (less than 10%); conversely, the percentage of delamination of the LVL without pretreatment (control) was higher (45%), as listed in Table 4. These values indicated that, with an increase in the boiling time of the veneer and the conditioning time of the LVL before testing, the bonding strength of the PVAc adhesive increased significantly. For a boiled-veneer pretreatment temperature of 100°C, the best delamination performance was attained after 90 min of boiling and 7 days of conditioning of the LVL (20°C and 65% RH). This research showed that the bonding properties of LVL are significantly altered by the type of veneer pretreatment, the boiling time of the veneer, and the conditioning of the LVL. These preliminary results are expected to lead to new gluing approaches, which will result in significant performance improvements of veneer-based products.
Differential scanning calorimetry (DSC) analysis
The DSC analysis was conducted on the PVAc adhesive to determine its glass transition temperature. The glass transition temperature (Tg) is the temperature at which the transition between the glassy and rubbery state of amorphous solids occurs; it is a direct measurement of molecular mobility and depends on the degree of cure (Carbas 2014). Based on the DSC data, the first heating cycle shows a glass transition in the temperature range of 38.29-42.88°C.
Density and moisture content (MC)
The density values for both conditions, unloaded and loaded LVL, were around 0.67-0.68 and 0.68-0.71 g cm−3, respectively (Table 5).
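Returning to the statistical analysis described above (one-way ANOVA at p < 0.05 followed by the least significant difference test), the Python fragment below sketches one way such a comparison could be run. The group labels mirror the cold-pressing times, but the replicate values are made-up placeholders, not data from this study, and Fisher's LSD is implemented directly as a pooled-variance pairwise test.

```python
import numpy as np
from scipy import stats

# Made-up MOE replicates (N/mm^2) for four cold-pressing times, n = 4 per group.
groups = {
    "1.5 h": [8100, 8250, 8010, 8180],
    "6 h":   [8500, 8620, 8440, 8580],
    "18 h":  [9300, 9410, 9250, 9380],
    "24 h":  [9150, 9050, 9220, 9120],
}

data = [np.asarray(v, dtype=float) for v in groups.values()]
f_stat, p_value = stats.f_oneway(*data)          # one-way ANOVA
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Fisher's LSD uses the pooled within-group mean square from the ANOVA.
k = len(data)
n_total = sum(len(g) for g in data)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in data)
ms_within = ss_within / (n_total - k)
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n_total - k)

names = list(groups)
for i in range(k):
    for j in range(i + 1, k):
        diff = abs(data[i].mean() - data[j].mean())
        lsd = t_crit * np.sqrt(ms_within * (1 / len(data[i]) + 1 / len(data[j])))
        verdict = "different" if diff > lsd else "not different"
        print(f"{names[i]} vs {names[j]}: |diff| = {diff:.1f}, LSD = {lsd:.1f} -> {verdict}")
```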
Based on the respective veneer density of 0.67 g cm −3 , LVLs densities ranging from 0 to 1.5% (unloaded) and 3 to 6% (loaded) were recorded. These results showed that the LVLs densities were increased under loading conditions. The MC of both conditions, with unloaded and loaded LVL, were in around 8.12-10.24% and 9.98-11.03%, respectively ( Table 6). These values showed that the MC of all LVL was lower than 14% and had met the JAS (Japanese Agricultural Standard 2008) requirement. Modulus of elasticity (MOE) The average values of MOE for both conditions, unloaded and loaded LVL, in control and various cold-pressing time from 1.5 up to 24 h are listed in Tables 7 and 8. The data showed that the MOE of the LVLs produced using cold-pressing time from 1.5 up to 24 h was significantly increased, ranging from 3.14 to 19.69% (unloaded) and 4.38 to 20.89% (loaded) when compared with the control. The highest MOE value of 9345.05 ± 141.61 N mm −2 was obtained from unloaded LVL with cold-pressed for 18 h ( Table 7), and this value was increased to 19.69% when compared with the control. This MOE value met the 90 E grade requirement based on the Japanese Agricultural Standard (2008) as given in Table 11. Horizontal shear strength The average values of horizontal shear strength that represents bonding strength, for both conditions, unloaded Percentage of delamination The values of percentage of delamination for both conditions, unloaded and loaded LVL in control, and various cold-pressing time from 1.5 up to 24 h are listed in Tables 15 and 16. These values had not met the Japanese Agricultural Standard (2008) requirement because the values were more than 10%. These values indicated that the percentage of delamination decreased by increasing the cold-pressing time and solidification time both with loaded and unloaded, whereas with the loaded treatment of LVL showed better performance (19.06%) than that of the unloaded LVL (20.63%). Formaldehyde emission Average values of formaldehyde emission for both conditions, unloaded and loaded LVL in control, and various cold-pressing time from 1.5 to 24 h were around 0.1 mg L −1 , which are listed in Table 17, and formaldehyde emission met F4S (or F****) classification of the Japanese Agricultural Standard (2008), showing the emission values lower than 0.3 mg L −1 ( Table 18). It means that all of the LVLs resulted in this research had the best value of formaldehyde emission. Discussion In general, LVL produced from boiled veneer at the temperature of 100°C for 90 min as pretreatment then conditioned (20°C and 65% RH) for 7 days provides performance, meeting the physical and mechanical properties according to JAS (Japanese Agricultural Standard 2008), except the delamination test. It was predicted that the surface of the boiled veneer is already clean from the extractives, which improved the contact between the adhesive and the veneer. As stated by Vick (1999), the condition of the wood surface is extremely important to satisfactory joint performance; hence, the surface should be free of burnishes, exudates, oils, dirt, and other debris. Over-drying and overheating deteriorate the physical condition of the wood surfaces by forcing extractives to diffuse to the surface, by reorienting surface molecules, and by irreversibly closing the larger micropores of cell walls. Wood surfaces can be chemically inactivated with respect to adhesion by airborne chemical contaminants, hydrophobic, and chemically active extractives from the wood. 
Similar to the results of this research, Rohumaa (2016) reported that heating logs before peeling improves the bonding strength of plywood by minimizing the initiation of glue-line failure. Furthermore, higher soaking and peeling temperatures, owing to the lower lathe-check depth, improve the integrity and the surface physicochemical properties, such as wettability and roughness (Rohumaa 2016). Increasing the cold-pressing time from 1.5 to 24 h significantly increased the MOE of both the unloaded and the loaded LVL (Tables 7 and 8). The MOR of both the unloaded and the loaded LVL was also significantly influenced by the cold-pressing time from 1.5 up to 24 h (Tables 9 and 10); the MOR values of the unloaded flat and edge samples at the different cold-pressing times can be compared against the grade requirements given in Table 11. In line with this result, Ding et al. (1996) reported that LVL produced from rubberwood glued using MUF adhesive had the 80 E - Special grade. The application of LVL in building construction or for other purposes is not mentioned in JAS (2008) for Structural Laminated Veneer Lumber. However, based on the strength class and durability class of rubberwood, III-II and V, respectively (Abdurachman and Hadjib 2009), to obtain adequate results the rubberwood LVL should be used in building constructions only after being preserved with a wood preservative material. However, it can be used for interiors only, because the rubberwood LVL was produced using PVAc adhesives. Although the average horizontal shear strength values of the LVL were not significantly different, the values increased slightly with an increase in the cold-pressing time from 1.5 up to 24 h (Tables 12 and 13). However, the horizontal shear strength of the loaded flat samples cold-pressed for 24 h was lower than that of the control. Generally, the shear retention of the LVL was higher for cold-pressing times of 1.5 up to 18 h, which recorded 17.77-20.68% and 5.39-18.67% shear retention for the unloaded and loaded conditions, respectively, compared with 4.99 and 2.67% for a cold-pressing time of 24 h. The occurrence of shear retention indicates an improvement in bonding. As given in Table 14, the horizontal shear strength value met the 65 V - 55 H grade requirement. This means that all of the LVL produced in this research had first-class performance, because its edgewise and flatwise strengths met the 65 V and 55 H requirements, respectively, and it can be used in building constructions in either the vertical or the horizontal direction of use. In the direction of vertical use, the load is applied parallel to the veneer faces, while in the direction of horizontal use, the load is applied perpendicular to the veneer faces. The LVL obtained in this research is better than that in the study by Ding et al. (1996), who reported that LVL produced from rubberwood glued using MUF adhesive had the 55 V - 47 H grade. Conversely, the delamination obtained for the tested LVL did not meet the JAS (Japanese Agricultural Standard 2008) requirement, with the minimum percentage of delamination obtained in this research being 19.06%, which is more than 10%. However, the values of the percentage of delamination decrease with an increase in the cold-pressing time and conditioning time of the LVL (Tables 15 and 16). It is clear that at room temperature the PVAc adhesive had not yet cured completely because of its Tg, which was approximately 40°C in this research (Figure 2).
This is because, below the minimum film-forming temperature, the molecules are confined in place with very limited freedom of group or branch movement, and the free volume is relatively small; hence, whole molecules cannot move away from each other (Li 2000). In addition, there are many factors that affect Tg, such as curing time and curing temperature (Carbas et al. 2014). The process of adhesion is essentially completed after the transition of the adhesive from liquid to solid form. Especially for thermoplastic adhesives, the physical change to solid form may occur either by loss of solvent from the adhesive through evaporation and diffusion into the wood, or by cooling of molten adhesive on a cooler surface (Vick 1999). It was predicted that a conditioning time of up to 14 days, with loaded or unloaded treatment, was not sufficient to improve the delamination performance of the PVAc adhesive. On the other hand, rubberwood is one of the resinous species that contain extractives on the wood surface, which are principal physical and chemical contributors to surface inactivation and hence to poor wettability by adhesives (Vick 1999). Komarayati et al. (1995) reported that rubberwood from 26-year-old trees contained high levels of extractives (more than 4%), being a resinous species, with solubilities as follows: in cold water 4.48%, in hot water 5.93%, in alcohol-benzene (1:2) 2.37%, and in 1% sodium hydroxide 20.72%. The acidity of extractives can interfere with the chemical cure of adhesives. Additionally, low-molecular-weight wood extractives migrating to the surface result in a natural surface inactivation process, which causes poor adhesion between the wood and the adhesive (Back 1998). The bondability of wood is affected not only by the surface properties of the wood adherends but also by the wood's physical properties, especially density. The rubberwood used in this research, with a density of 0.67 g cm−3, is categorized as a medium-density wood (Abdurachman and Hadjib 2009). Hence, much greater pressure, or a longer conditioning time, is required to bring the wood surface and the adhesive into intimate contact. This is in agreement with Alamsyah (2008), who reported that the bonding performance of Enterolobium cyclocarpum, Paraserianthes falcataria, and Toona sinensis, with densities ranging from 0.30 to 0.49 g cm−3, met the requirement of JAS (Japanese Agricultural Standard 2008) in the delamination test for both aqueous polymer isocyanate (API) and resorcinol formaldehyde (RF) adhesives, whereas, with densities ranging from 0.51 to 0.64 g cm−3, Gmelina arborea bonded with API and RF, and Pinus merkusii and Acacia mangium bonded with RF, did not meet the requirement of JAS (Japanese Agricultural Standard 2008) in the delamination test. The reason appears to be the great swelling pressure resulting from the high level of wood density acting on the glue layer (Vick 1999). For wood adhesive bonds, the most critical bond stress usually comes not from temperature but from swelling and shrinking of the wood in response to moisture changes, especially full soaking and drying cycles (Vick 1999). This is in line with Konnerth et al. (2016): in general, high-density wood species needed much more time than lower-density wood to reach the required initial mass during re-drying after the impregnation steps with water.
Furthermore, higher concentrations of extractives that may interfere with the cure of adhesives are common in higher density species than that in a lower density of wood, especially in tropical hardwoods. The severe stresses produced by high-density species as the change dimension with changes in MC also contribute heavily to bonding difficulties (Vick 1999). Formaldehyde emission of rubberwood LVL glued using PVAc adhesives was not influenced by the coldpressing time from 1.5 to 24 h and the method of conditioned (unloaded and loaded). The average values of formaldehyde emission obtained in this research were 0.1 mg L −1 (Table 17), and it met F4S (or F****) classification according to the Japanese Agricultural Standard (2008), showing the emission values lower than 0.3 mg L −1 ( Table 18) Conclusion Based on this study, PVAc adhesives have a good prospect for the production of LVL-based rubberwood, especially for interior use. Rubberwood LVL produced using PVAc adhesives has a good performance of physical and mechanical properties according to the JAS (Japanese Agricultural Standard 2008) requirement, except for the delamination test. MOE and MOR values of rubberwood LVLs were significantly influenced by the physical treatment, however, neither to horizontal shear strength nor formaldehyde emission. The best performance of LVL resulted from unloaded LVL with the cold-pressed time for 18 h, which upgraded the LVL to 90 E -Special grade, and the values of MOE and MOR were 9,345.05 ± 141.61 N mm −2 and 80.67 ± 1.77 N mm −2 , respectively. Although the values were not significantly different, the best value of horizontal shear strength that represents LVL bonding strength was obtained from LVL with 18 h cold-pressing time and conditioned with loaded (13.10 ± 1.47 N mm −2 ) and unloaded (12.23 ± 1.36 N mm −2 ) for 7 days. The percentage of delamination values decreased with an increase in the cold-pressing time from 1.5 to 24 h and conditioning time from 7 to 14 days. The lowest value of delamination (19.06%) was obtained from LVL with 24 h cold-pressing time and conditioned with loaded for 14 days. These results indicate that the better performance of LVL seemed in line with an increase in the pressing time and solidification time. However, further studies using cold-pressing time for at least 24 h and conditioned time with loaded treatment more than 14 days could be predicted, which will contribute to improve the delamination performance of the PVAc adhesives. Formaldehyde emission obtained in this research (0.1 mg L −1 ) met F4S (or F****) classification according to the Japanese Agricultural Standard (2008).
6,939.8
2020-01-01T00:00:00.000
[ "Materials Science" ]
A REVIEW OF DIGITAL WATERMARKING AND COPYRIGHT CONTROL TECHNOLOGY FOR CULTURAL RELICS
With the rapid growth in the application and sharing of 3-d model data in the protection of cultural relics, the problems of shared-data security and copyright control for three-dimensional models of cultural relics are becoming increasingly prominent. Accordingly, digital watermarking for copyright control has become a frontier technology and an effective means of protecting 3-d models of cultural relics, and related research and applications have developed further in recent years. Focusing on digital watermarking and copyright control technology for 3-d models of cultural relics, this paper introduces the research background and requirements, describes the unique characteristics of such models, discusses the development and application of the algorithms, and outlines future development trends together with some open problems and possible solutions.
In recent years, the problem of damage to cultural relics has become more serious. On the one hand, the pace of cultural relics protection has increased only slowly, while the number of cultural relics to be protected is huge. On the other hand, unprotected cultural relics are damaged faster and protection is less effective. The three-dimensional data model has been playing an increasingly important role in the protection of cultural relics since it was proposed. However, data security and copyright protection of 3-d models are important issues that restrict the further development and application of 3-d models in the protection of cultural relics (Ohbuchi, R., 1998). As a new method of protecting the copyright of multimedia data, digital watermarking technology has received constant attention. Evidence relating to a work and its author can be embedded into the product in a hidden form, so that data usage is not affected while the author's original rights are protected. Therefore, it is necessary and efficient to use digital watermarking as a data security and copyright protection measure for three-dimensional models of cultural relics. In research on digital watermarking technology for 3-d models, watermarks can be classified by visibility into visible and invisible watermarks; by resistance to attacks into robust and fragile watermarks; by whether the original data is required during extraction into private (non-blind) and blind watermarks; and by the embedding domain into spatial-domain and frequency-domain watermarks, as shown in figure 1. This paper summarizes and discusses the current research status and applications of digital watermarking technology for 3-d models. With the development of digital watermarking in recent years, the scope of its application has also expanded (Zhang X, 2003):
1.1.1 Copyright protection: For different products, digital watermarking technology can be adapted to the characteristics of the product so as to protect the copyright of the author. Digital watermarking technology has been widely used for video, audio, 2-d images, and so on, but work on 3-d models is still lacking and needs further research.
1.1.2 Operation tracking: Since data are relatively easy to copy, when copyright issues arise the root cause can be traced by tracking operations recorded in the watermark, safeguarding the copyright interests of both creators and users.
1.1.3 Content authentication: During data use, problems such as data destruction or damage are hard to avoid, so certification of the completeness and correctness of the data content is necessary and is advantageous to the copyright protection of the creator.
1.1.4 Data control: During data copying, a watermark can be added as a counter; by limiting the use of the data through this count, copyright protection and data-usage control are achieved and bulk copying of the data is prevented.
DIGITAL WATERMARKING OPERATION PROCESS
Digital watermarking for 3-d models is a branch of digital watermarking technology. Its principle is to embed an invisible watermark into the 3-d model to protect ownership of the model or to test the authenticity of the model, or to embed visible information to claim ownership of the model. The 3-d model watermarking process can be divided into three steps: watermark generation, watermark embedding and watermark detection; within these three steps, digital watermarking algorithm research concentrates on the embedding and detection methods. From the point of view of point-cloud model data, watermark embedding can be understood as adding a small amount of point-cloud data to the point-cloud model without affecting its visualization.
Watermark generation
The watermark can be meaningless data, such as a random number sequence generated from a key by a pseudo-random generator, or meaningful data, such as the copyright owner's information or user information. Before the watermark is embedded, the watermark data is usually converted into a binary sequence. To further strengthen the security of the watermark data, the binarized watermark sequence can also be randomized or encrypted, so that even if an attacker manages to extract the watermark data, the attacker cannot learn its content because of this protection.
Watermark embedding
Watermark embedding embeds the watermark data into the three-dimensional model by means of an embedding algorithm, as shown in figure 2. This process can be controlled by a key to prevent illegal extraction of the watermark and to avoid removal or modification of the watermark by unauthorized users. The selection of the primitive in which the watermark is embedded directly determines the performance of the watermarking algorithm. Among the various kinds of information in a 3-d model, geometric information and topological information are suitable for watermark embedding. At present, multi-resolution analysis and other techniques are applied, and the spectral information of the 3-d model is also used for watermark embedding.
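To illustrate the generation and embedding steps just described, the minimal Python sketch below derives a key-seeded pseudo-random binary watermark and hides it in selected vertex coordinates of a point-cloud model using a simple quantization-based spatial-domain rule. This is a generic toy scheme for illustration only, not any specific algorithm from the cited literature, and all names and parameter values are assumptions.

```python
import numpy as np

def generate_watermark(key: int, length: int) -> np.ndarray:
    """Generate a key-seeded pseudo-random binary watermark sequence."""
    rng = np.random.default_rng(key)
    return rng.integers(0, 2, size=length)            # bits in {0, 1}

def embed_watermark(vertices: np.ndarray, bits: np.ndarray,
                    key: int, step: float = 1e-4) -> np.ndarray:
    """Embed bits into the z-coordinates of key-selected vertices by
    quantization index modulation: each marked coordinate is snapped to a
    grid of width `step`, offset by step/4 for bit 0 and 3*step/4 for bit 1."""
    marked = vertices.copy()
    rng = np.random.default_rng(key)
    idx = rng.choice(len(vertices), size=len(bits), replace=False)  # secret positions
    z = marked[idx, 2]
    base = np.floor(z / step) * step
    marked[idx, 2] = base + np.where(bits == 1, 0.75, 0.25) * step
    return marked

# Toy usage: a 10,000-point cloud and a 64-bit watermark.
verts = np.random.rand(10_000, 3)
wm = generate_watermark(key=2023, length=64)
verts_marked = embed_watermark(verts, wm, key=2023)
print(np.max(np.abs(verts_marked - verts)))   # distortion stays below the step size
```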
Watermark detection
Watermark detection is the process of extracting watermark information from the 3-d model under test by using a watermark extraction algorithm and judging whether it is the original watermark information, as shown in figure 3. Whether the original 3-d model is required in this process depends on the particular watermarking algorithm. The watermark extraction algorithm usually mirrors the watermark embedding algorithm: the embedding method is used to find the locations where the watermark was embedded so that the watermark can be extracted. Because the 3-d model will inevitably undergo processing, intentional or unintentional, the integrity of the watermark data is likely to be affected to varying degrees; therefore, the extracted watermark data must be compared, by correlation, with the original watermark data to determine whether the original watermark is present. A digital watermarking scheme for 3-d models of cultural relics should, in principle, also meet the requirements of the following aspects (Huang X., 2003).
Precision characteristics
The data of a three-dimensional point-cloud model are the most accurate three-dimensional data of the physical entity, particularly in the protection of cultural relics and architectural heritage. Cultural relics are the treasures of human civilization, containing the unique spiritual values, thinking patterns and imagination of an era; they are the crystallization of human civilization, culture and wisdom. However, with the passage of time, most relics have become incomplete, broken and damaged. High-precision original point-cloud data are the true virtual representation of cultural relics and, besides serving as basic 3-d information storage, are often the premise of follow-up collaborative research: the detection and labeling of deterioration (disease) from small changes in the point cloud at different time points; the formulation, adjustment, implementation and evaluation of conservation projects on the virtual 3-d model; and the provision of trial material and reference bases for virtual restoration prior to the actual operation. Therefore, ensuring the accuracy of the 3-d model data of cultural relics is a problem that must be considered when embedding the watermark, so that later use of the data (especially disease detection, virtual repair, deformation analysis, etc.) is not affected.
High efficiency
The data volume of a 3-d model is generally large, usually on the order of several GB to dozens of GB. Therefore, compared with conventional digital watermarking, a digital watermarking scheme for 3-d models should be more efficient in order to meet practical application requirements.
Detectable
Digital watermarks should be able to be detected or extracted by the author or other notaries. When a work is in dispute over copyright, the copyright of the work can be confirmed by extracting the watermark information in the work. If the watermark is not detectable, it loses its original meaning.
Without ambiguity
The watermark, or the result of the watermark decision, should unambiguously indicate ownership and should not yield inaccurate information.
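Continuing the toy scheme from the previous sketch, detection re-derives the secret vertex positions from the key, reads back the embedded bits, and compares them with the original watermark. The bit-agreement rate stands in here for the correlation comparison described above, and the 0.9 threshold is an arbitrary illustrative choice.

```python
import numpy as np

def extract_watermark(vertices: np.ndarray, key: int, length: int,
                      step: float = 1e-4) -> np.ndarray:
    """Recover the embedded bits from key-selected z-coordinates by checking
    which quantization offset (step/4 vs 3*step/4) each coordinate carries."""
    rng = np.random.default_rng(key)
    idx = rng.choice(len(vertices), size=length, replace=False)  # same secret positions
    frac = np.mod(vertices[idx, 2], step) / step                 # position inside the cell
    return (frac >= 0.5).astype(int)                             # closer to 3*step/4 -> bit 1

def detect(original_bits: np.ndarray, extracted_bits: np.ndarray,
           threshold: float = 0.9) -> bool:
    """Decide watermark presence from the bit-agreement rate, a simple
    stand-in for the correlation comparison with the original watermark."""
    agreement = np.mean(original_bits == extracted_bits)
    return agreement >= threshold

# Usage, reusing `verts_marked` and `wm` from the embedding sketch above:
# wm_back = extract_watermark(verts_marked, key=2023, length=64)
# print(detect(wm, wm_back))   # True for the undistorted marked model
```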
Imperceptibility. This requires that embedding the watermark information does not significantly degrade the information quality or the visual appearance of the protected data. For invisible watermarks, this is a basic requirement. Robustness and imperceptibility are two contradictory properties that interact with and constrain each other. Because the human perception system is not perfectly accurate, imperceptibility is in fact relative rather than absolute: as long as the change is not obvious enough for people to notice, the watermark can be called imperceptible. Current techniques usually improve the robustness of the watermark as far as possible under the premise of this "relative" imperceptibility (Li Y. C., 2011).

Figure 4. Watermark imperceptibility

However, the technical requirements of 3-d model digital watermarking often influence one another and can even be contradictory. For example, the smaller the embedded data volume, the worse the robustness; the larger the data volume, the weaker the fidelity and the more the accuracy is affected. In practice, a digital watermarking scheme should therefore be designed around the characteristics of the 3-d model data, weighing the technical indicators against the specific goals and requirements at hand, since different kinds of watermark come with their own characteristics. For example, a fragile watermark does not require high robustness, whereas a robust watermark must, in addition to the requirements above, resist multiple common attacks so that the embedded watermark can still be extracted correctly.

THE RESEARCH STATUS

At present, considerable research progress has been made on spatial-domain and frequency-domain digital watermarking. Spatial-domain methods embed the watermark by modifying the geometric information, topological information, or other properties of the model, so watermark embedding and extraction are fast; the main approaches include methods based on model vertices, methods based on geometric elements such as polygon area, and methods based on extending two-dimensional watermarking. However, spatial-domain methods operate directly on the geometric invariants of the model, which strongly affects the invisibility of the watermark information, and spatial watermarks are generally sensitive to noise. Frequency-domain methods first transform the model and then modify the resulting frequency-domain coefficients to embed the watermark; they differ mainly in the mathematical transform used. For example, an embedding method for 3-d data models based on the three-dimensional discrete cosine transform (3-D DCT) and bipolar quantization of DCT coefficients achieves blind extraction and can cope with common attacks such as cropping and added noise, but cannot cope with strong geometric attacks. Another approach combines a quantization modulation strategy with the dual-tree complex wavelet transform and embeds the watermark into key entropy regions, which enhances the watermark's ability to withstand compression attacks, but its ability to resist geometric attacks is limited. Frequency-domain methods operate on frequency-domain coefficients; compared with spatial-domain watermarks, frequency-domain watermarks can resist a wider variety of attacks and have good robustness,
high fidelity, and a larger watermark information capacity.

Figure 5. 3-d watermarking method

Whether the wavelet transform or another mathematical transform is used, most methods embed the watermark by changing certain properties of the model, and applying the mathematical transform directly to the triangle mesh in an orthogonal system defined on the triangular domain is clearly much more convenient. One approach parameterizes the triangular mesh model with the V-system to obtain a frequency-domain spectrum and embeds the watermark by adjusting the spectrum; this algorithm has good robustness against similarity transformations and against attacks such as noise and cropping. Another approach introduces the Laplacian into the three-dimensional mesh algorithm and performs the mathematical transform with Laplace basis functions; this method depends only on the mesh topology and is independent of the geometric information, so the visual quality of the watermarked mesh, especially of irregular meshes, is considerably affected. In 2002, Adrian et al. proposed a blind watermarking algorithm for 3-d models and objects: a string generated from a key is embedded in the geometry of the object by changing the positions of some vertices, and the algorithm has good invisibility. In 2004, Cotting et al. (Cotting D., 2004) put forward a robust watermarking scheme for point-sampled geometry based on spectral analysis: a fast hierarchical clustering algorithm first divides the point cloud model into a series of small patches, which are mapped into an approximate Laplacian eigenfunction space, and the watermark is finally embedded in the low-frequency part of the spectrum; the algorithm is robust against smoothing and against rotation, translation, and scaling attacks. In 2004, Ohbuchi et al. (Ohbuchi R., 2004) applied the spectral mesh analysis of (Karni, Z., 2000): the point model is converted into a mesh model, spectral analysis of the mesh exploits the correlation between vertices, and the watermark data are then embedded. Strictly speaking, this is not a direct digital watermarking algorithm for point cloud models; it has good robustness against cropping, noise, resampling, and similar attacks, but under similarity transformation attacks a symmetric model leads to poor robustness. In 2011, Wang Xinyu et al.
(Wang Xinyu, 2011) proposed a zero-watermarking scheme for 3-d point cloud models that avoids weakening the 3-d model data at the expense of precision: instead of making even routine minor adjustments to the model data to embed watermark information, the watermark is constructed from the data themselves. Taking the characteristics of the 3-d point cloud model into account, the scheme starts from the global geometric features of the point cloud, which are highly stable, and constructs the watermark by analyzing the number of vertices that satisfy certain conditions. The watermark information is then registered in the information database of a third-party intellectual property authority, which finally realizes copyright protection of the 3-d point cloud model data. The algorithm has good invisibility and good robustness against common watermark attacks such as translation, rotation, and noise, but its robustness decreases as the strength of cropping attacks increases. In 2013, Wang Rui et al. (Wang rui, 2017) broke through the limitation of watermarking schemes that depend on a specific coordinate system by introducing Clifford geometric algebra and proposed a digital watermarking algorithm for 3-d point cloud models based on the Clifford-Fourier transform. The watermark of the 3-d point cloud model data is computed without dependence on a specific coordinate system; the scheme has good invisibility and good robustness against common attacks such as translation, rotation, and simplification. In 2017, Shiqun Li et al. (Shiqun Li, 2017) proposed a semi-blind watermarking algorithm based on the Laplacian eigenvectors of a 3-d mesh model: in the embedding phase, the Tutte Laplacian is calculated, its eigenvalue decomposition yields the eigenvectors, and the watermark is embedded by perturbing the eigenvectors of the Laplacian matrix. This algorithm can resist common attacks such as affine transformation, random noise, smoothing, uniform quantization, and cropping, shows strong robustness, and greatly improves the watermark payload capacity. In 2018, G. N. Pham et al. (GN Pham, 2018) proposed a watermark embedding algorithm based on the feature points of a 3-d printing model: feature points are calculated and determined from 3-d cross-sections along the Z axis of the 3-d print model, and the watermark data are embedded by changing the spatial vector length of each feature point according to a reference length. The x and y coordinates of the feature points change depending on the length of the vector modified by the embedded watermark. This algorithm has good robustness and invisibility against rotation, scaling, translation, and random noise, and has good precision.

SUMMARY AND PROSPECT

At present, digital watermarking of 3-d models has achieved substantial results, but the technology is not yet fully mature, and many problems remain to be solved and perfected.
First, in terms of theory, research worldwide is still at an early or intermediate stage, and key problems remain to be broken through, such as the relationships among robustness, embedding strength, embedded data volume, and imperceptibility; coordinating these influences to improve the overall effect of the watermark is particularly important. Second, attacks on 3-d models are varied; because of the need for copyright protection, the watermark must be robust enough to withstand all kinds of attacks, so that the embedded watermark information is not destroyed and the legitimate rights and interests of the author are protected. Third, research on the digital watermarking of 3-d models has concentrated on mesh models, while research on watermarking techniques for point cloud models is relatively scarce. In addition, the study of blind watermarking algorithms is relatively weak: although non-blind watermarking algorithms achieve better robustness, they perform poorly in practical applications, and the robustness of blind watermarking algorithms still needs improvement. Fourth, the application of deep learning to the digital watermarking of cultural-relic 3-d models is only just starting; if models with better robustness can be built, good results can be expected. Stanford University first proposed using deep learning to generate digital watermarks for point cloud models and achieved good results.

CONCLUSION

With the increasingly prominent application value of 3-d model data in the protection of cultural relics and the growing demand for sharing 3-d model data, the security of 3-d models is becoming more and more important. As a cutting-edge technology in data security, digital watermarking plays an important role in the security protection, copyright protection, and sharing of cultural-relic 3-d model data. Only when data security is guaranteed and the copyright interests of creators are protected can cultural-relic 3-d model data be shared and widely applied, thereby promoting the development of cultural relics protection. Laser scanning technology can acquire point cloud data of an object or a real scene and quickly reconstruct the 3-d model, and the development of multimedia and network technology has made 3-d data models widely used in virtual reality, medical imaging, industrial design, and cultural heritage protection.

Figure 1. Classification of digital watermark
Figure 3. Watermark detection process
4,380.6
2018-04-30T00:00:00.000
[ "Computer Science" ]
Glucuronidation of deoxynivalenol (DON) by different animal species: identification of iso-DON glucuronides and iso-deepoxy-DON glucuronides as novel DON metabolites in pigs, rats, mice, and cows The Fusarium mycotoxin deoxynivalenol (DON) is a frequent contaminant of cereal-based food and feed. Mammals metabolize DON by conjugation to glucuronic acid (GlcAc), the extent and regioselectivity of which is species-dependent. So far, only DON-3-glucuronide (DON-3-GlcAc) and DON-15-GlcAc have been unequivocally identified as mammalian DON glucuronides, and DON-7-GlcAc has been proposed as further DON metabolite. In the present work, qualitative HPLC–MS/MS analysis of urine samples of animals treated with DON (rats: 2 mg/kg bw, single bolus, gavage; mice: 1 mg/kg bw, single i.p. injection; pigs: 74 µg/kg bw, single bolus, gavage; cows: 5.2 mg DON/kg dry mass, oral for 13 weeks) revealed additional DON and deepoxy-DON (DOM) glucuronides. To elucidate their structures, DON and DOM were incubated with human (HLM) and rat liver microsomes (RLM). Besides the expected DON/DOM-3- and 15-GlcAc, minor amounts of four DON- and four DOM glucuronides were formed. Isolation and enzymatic hydrolysis of four of these compounds yielded iso-DON and iso-DOM, the identities of which were eventually confirmed by NMR. Incubation of iso-DON and iso-DOM with RLM and HLM yielded two main glucuronides for each parent compound, which were isolated and identified as iso-DON/DOM-3-GlcAc and iso-DON/DOM-8-GlcAc by NMR. Iso-DON-3-GlcAc, most likely misidentified as DON-7-GlcAc in the literature, proved to be a major DON metabolite in rats and a minor metabolite in pigs. In addition, iso-DON-8-GlcAc turned out to be one of the major DON metabolites in mice. DOM-3-GlcAc was the dominant DON metabolite in urine of cows and an important DON metabolite in rat urine. Iso-DOM-3-GlcAc was detected in urine of DON-treated rats and cows. Finally, DON-8,15-hemiketal-8-glucuronide, a previously described by-product of DON-3-GlcAc production by RLM, was identified in urine of DON-exposed mice and rats. The discovery of several novel DON-derived glucuronides in animal urine requires adaptation of the currently used methods for DON-biomarker analysis. Electronic supplementary material The online version of this article (doi:10.1007/s00204-017-2012-z) contains supplementary material, which is available to authorized users. Introduction Formed pre-harvest by Fusarium species, the mycotoxin deoxynivalenol (DON) is one of the most frequent fungal contaminants of food and feed worldwide. DON affects eukaryotic cells by inhibition of protein-, DNA-, and RNA synthesis, resulting in toxic effects in animals and plants (Rocha et al. 2005). These effects include feed refusal and emesis, growth retardation, and modulation of immune response in animals (Pestka 2010). However, plants and animals are capable of mitigating DON by conjugation. By attaching glycoside residues, plants can produce masked DON compounds like DON-3-glucoside (Berthiller et al. 2005(Berthiller et al. , 2009. Depending on the animal species, animals conjugate DON to glucuronic acid (GlcAc) (reviewed by Payros et al. 2016) or sulfate (Schwartz-Zimmermann et al. 2015;Wan et al. 2014), which are both phase II metabolism reactions. Glucuronidation is the major conjugation reaction of DON in mammals, whereas sulfation is the dominant metabolization in poultry. Sulfonation has additionally been described for rats Wan et al. 2014), but its mechanism has not yet been elucidated. 
In addition to animal-innate phase II conjugation, DON can also be metabolized by gut microbes. The most prominent microbial metabolite of DON is deepoxy-DON (DOM) (Fuchs et al. 2002), which can, in turn, be subject to glucuronidation, sulfation (Schwartz-Zimmermann et al. 2015), or sulfonation. The glucuronidation activities and the regiospecificity of glucuronidation towards DON are species-dependent (Maul et al. 2012, 2015; Uhlig et al. 2013). Pioneering work on in vitro DON glucuronidation by human liver microsomes (HLM) and liver microsomes of different animal species revealed DON-3-glucuronide (DON-3-GlcAc) as the major DON metabolite upon incubation of DON with animal liver microsomes. DON-15-GlcAc was the prevailing conjugate upon incubation of DON with HLM (Maul et al. 2012, 2015) and was also readily formed by porcine liver microsomes (Maul et al. 2015). In addition, a third DON-GlcAc was detected, which was formed in considerable amounts by rat, bovine, trout, and carp liver microsomes and tentatively identified as DON-7-glucuronide (DON-7-GlcAc). Finally, a cyclic DON-8,15-hemiketal-8-GlcAc could be isolated as a side product formed upon incubation of DON with Wistar rat liver microsomes and was structurally elucidated by nuclear magnetic resonance spectroscopy (NMR) (Uhlig et al. 2013, 2016). Chemical structures of DON-3-GlcAc, DON-15-GlcAc, and DON-8,15-hemiketal-8-GlcAc are given in Fig. 1. In humans, DON-15-GlcAc is formed preferentially over DON-3-GlcAc (Warth et al. 2012, 2013). A third DON-GlcAc that was tentatively identified as DON-7-GlcAc was additionally detected in some highly contaminated human urine samples (Warth et al. 2013). In pigs orally treated with DON, formation of DON-3-GlcAc and DON-15-GlcAc at a similar ratio was observed, albeit with notable differences between individual animals. In rats, DON-3-GlcAc prevailed, but a minor second DON-GlcAc peak that was not DON-15-GlcAc was detected (Nagl et al. 2012; Veršilovskis et al. 2012). In addition, enzymatic hydrolysis of urine samples collected from pigs treated with DON-3-glucoside indicated formation of DOM glucuronides. Summarizing the literature data, only DON-3-GlcAc and DON-15-GlcAc have been unequivocally identified in urine samples of humans and animals so far. Formation of a third DON-GlcAc has been reported in human and rat urine, but confirmation of the suggested structure (DON-7-GlcAc) is still outstanding. A fourth glucuronide produced by rat liver microsomes has been structurally elucidated as DON-8,15-hemiketal-8-GlcAc, but has never been detected in animal urine. Finally, formation of DOM glucuronides has been suggested, but not confirmed. The main aim of our work was to investigate in vivo DON glucuronidation by different animal species. To this end, we analyzed urine samples of DON-treated rats, mice, pigs, and cows by a flat gradient HPLC-MS/MS method. We then converted DON and DOM with rat and human liver microsomes, collected the main and minor glucuronides, produced greater amounts of the relevant novel glucuronides, and elucidated their structures. The discovery and identification of several novel DON- and DOM glucuronide metabolites formed by different animal species are of great importance with respect to interspecies variation in DON metabolism, toxicology, and the analytical determination of DON metabolites as biomarkers of DON exposure. Solid deoxynivalenol (DON, purity >95%) as well as liquid calibrant solutions of DON and DOM (100 and 50 mg/L, respectively, in ACN) were supplied by Romer Labs GmbH (Tulln, Austria).
DOM as the starting material for glucuronide production was obtained by conversion of DON with the bacterial strain BBSH 797 as described in Schwartz-Zimmermann et al. (2014) and purified by preparative chromatography as outlined in Schwartz-Zimmermann et al. (2015). DON-3-GlcAc produced by chemical synthesis ) and dissolved in MeOH to a concentration of 10 mg/L served as reference standard. DON-8,15-hemiketal-8-GlcAc was provided by Silvio Uhlig (Norwegian Veterinary Institute, Oslo) and used for compound identification. In brackets compound numbers as in Fig. 2 Microsome assay for glucuronidation of DON and DOM The assay for glucuronidation of DON and DOM was based on the protocol published by Uhlig et al. (2013). For small-scale production of DON-and DOM glucuronides, pre-mixes containing all components required for the reaction except the microsomes were prepared. The premixes (described for 5 replicates) were composed of 100 µL each of UDPGA (100 mM), UDPAG (5 mM), alamethicin (250 µg/mL in ethanol/water 5/100, v/v), MgCl 2 (50 mM), Tris-HCl (1 M), and aqueous mycotoxin solution (4000 mg/L for DON and DOM) as well as 350 µL of water. Prior to the addition of microsomes, 190 µL aliquots of the pre-mixes were pipetted into Eppendorf reaction vials and pre-incubated at 37 °C for 10 min. Subsequently, 10 µL of RLM or HLM was added and the tubes were incubated under slight shaking at 37 °C overnight. The reactions were stopped by the addition of 800 µL of cold MeOH and proteins were removed by centrifugation at 14,000×g for 10 min. In total, five replicates were prepared for each toxin/microsome combination and the combined supernatants after centrifugation were concentrated to 0.3 mL by evaporation. Prior to semi-preparative chromatography, aliquots of the reaction mixtures were checked by HPLC-MS/MS (see below). The relative abundances of the formed glucuronides were estimated based on the peak areas of the glucuronide specific selected reaction monitoring transitions 471.1->113.0 (DON-GlcAc) and 455.1->113.0 (DOM-GlcAc). Semi-preparative isolation of glucuronides produced in the microsome assays Isolation of the reaction products formed upon incubation of DON and DOM with rat and human liver microsomes was carried out by semi-preparative chromatography on an Agilent 1100 series preparative HPLC system (Agilent Technologies, Waldbronn, Germany). Compounds were separated by gradient elution on a Kinetex C18 column (150 × 10 mm, 5 μm, Phenomenex, Aschaffenburg, Germany) with a pre-column of the same material using water and ACN, both containing 0.1% acetic acid, as mobile phases A and B. The flow rate was 6 mL/min, the column temperature 25 °C, and the injection volume was 300 µL. Gradient elution started with an isocratic period of 0.5 min at 6% B and continued with a linear increase to 13.8% B within further 7.5 min. Subsequently, the proportion of B was increased to 90% within 0.5 min and the column was flushed until 10.5 min. Finally, the column was re-equilibrated at 6% B for 2 min. Forty fractions were collected at equal time intervals (12 s/fraction) between 2.5 and 10.5 min. Compounds were detected by UV-detection at 200, 254, and 280 nm. All collected fractions were analyzed by HPLC-MS/MS (see below) for the presence of DON-and DOM glucuronides. Enzymatic hydrolysis 5 µL aliquots of stock solutions containing ca. 5 mg/L of each isolated (iso-)DON/DOM glucuronide were evaporated and incubated in 50 µL of 40 mM PBS containing 650 U of β-glucuronidase at 37 °C over night. 
Prior to LC-MS/MS analysis, proteins were removed by the addition of 150 µL of MeOH and centrifugation. Production and purification of iso-DON and iso-DOM For production of iso-DON, 30 mg of solid DON was dissolved in 10 mL of absolute methanol, and 20 µL of sodium methoxide solution (25% w/v in methanol) was added. The solution was shaken at ambient temperature for 20 h. The progress of the reaction was monitored by analyzing diluted aliquots by LC-MS/MS and LC-UV. The reaction was stopped by the addition of 10 mL water/formic acid (99/1, v/v). Prior to purification by semi-preparative chromatography, the volume of the solution was reduced to 4 mL on a rotary evaporator. As the yield of iso-DON was only between 5 and 10%, the procedure was repeated twice with the DON regained upon semi-preparative isolation. Iso-DOM was discovered to be a side product of the DOM production by conversion of DON by the anaerobic bacterial strain BBSH 797. Upon preparative isolation of DOM according to Schwartz-Zimmermann et al. (2015), both a pure fraction of DOM (5.66-6.03 min) and a mixed fraction containing DOM and iso-DOM (6.05-6.40 min) were collected. The mixed fraction was subjected to semipreparative chromatography for separation of DOM and iso-DOM. Purification of iso-DON and iso-DOM was carried out by semi-preparative chromatography using the same conditions as described above for isolation of glucuronides produced in the microsome assay. The gradient for purification of iso-DON was the same as described for isolation of glucuronides (see above). Residual DON was collected between 6.40 and 6.95 min, pure iso-DON was collected between 7.15 and 7.45 min, and two fractions containing iso-DON and either one earlier or one later eluting side product were collected between 6.96 and 7.14 min and between 7.46 and 7.65 min. Separation of DOM and iso-DOM started at 8% B for 0.5 min, continued with a linear increase to 17% B until 9 min and a steep increase to 90% which was reached at 10 min. The column was flushed at 90% B for 1 min and re-equilibrated at 8% B until 13 min. DOM was collected between 7.60 and 8.10 min and iso-DOM between 8.20 and 8.60 min. 3 The two fractions containing iso-DON and one of the two side products mentioned above were subjected to HPLC separation on an Agilent 1290 series UHPLC system equipped with a programmable switching valve (VICI Valco Instruments, Houston, Texas, USA). The column was the same as used for HPLC-MS/MS analysis (see below). A flat gradient (0-0.5 min: 5% B, 0.5-7.5 min: linear increase to 13% B, 7.5-8 min: linear increase to 95% B, 8-9 min: 95% B, 9.1-12 min: 5% B) was used for separation, the injection volume was 10 µL, and iso-DON was collected between 6.6 and 6.9 min. Production and purification of DOM-, iso-DON-, and iso-DOM glucuronides Comparison of glucuronides formed in the microsome assays with the glucuronide pattern in animal urine revealed compounds no. 1,5,6,7,9,12,13, and 14 (see Fig. 2) as the most relevant glucuronides. Of these, 5, 6, and 7 were identified as DON-8,15-hemiketal-8-GlcAc, DON-3-GlcAc, and DON-15-GlcAc. Therefore, our aim was to produce milligram-amounts of 1, 9, 12, 13, and 14 for consecutive structure elucidation by NMR. The parent compound of 1 and 9 was iso-DON, the aglycone of 12 and 13 was DOM, and the parent compound of 14 was iso-DOM. Production of DOM-, iso-DON-, and iso-DOM glucuronides was carried out by microsome assays as described above. 
Iso-DON and iso-DOM were incubated with RLM, whereas DOM was incubated with RLM for production of 12 and with HLM for production of 13. In total, between 10 and 30 replicates were prepared for each toxin/microsome combination to gain sufficient amounts. In addition, iso-DON and iso-DOM were also incubated with HLM at small scale to obtain the full glucuronidation pattern. Purification of the formed glucuronides was carried out by semi-preparative chromatography as described above for the products of the first microsome assays. Four compounds were isolated from the reaction mixture of iso-DON with RLM: compound 1 (3.00-3.80 min), a novel compound eluting between 6.05 and 6.75 min, compound 9 (6.80-7.20 min), and residual iso-DON (7.21-7.50 min). Likewise, four compounds were obtained from the reaction mixture of iso-DOM with RLM: compound 3 (3.80-4.50 min), a novel compound eluting from 8.00-8.40 min, compound 14 (8.40-8.90 min), and non-converted iso-DOM (9.50-9.80 min). Incubation of DOM with RLM yielded compound 12 as the major product, compound 13 as a side product, and residual DOM. Compound 12 was isolated between 7.50 and 8.00 min, and a mixed fraction containing 12 and 13 was collected between 8.00 and 8.3 min. Vice versa, incubation of DOM with HLM yielded compound 13 as the main and compound 12 as the side product. In that case, a mixed fraction was collected from 7.50-7.85 min and compound 13 was collected between 7.86 and 8.45 min. The mixed fractions were again subjected to semi-preparative chromatography. In both DOM reaction mixtures, non-converted DOM eluted between 8.60 and 9.10 min. In the case of compounds 1 and 9, the resolution achievable by semi-preparative chromatography was not sufficient: compound 1 partly co-eluted with a large reagent peak of slightly earlier retention time, and compound 9 partly co-eluted with iso-DON. Hence, the respective pooled fractions obtained by semi-preparative chromatography were up-concentrated and the analytes were isolated by HPLC on the same system as described for HPLC isolation of iso-DON. The column, mobile phases, and gradient used were the same as for HPLC-MS/MS analysis. The injection volume was increased to 10 µL for isolation. Compound 1 was collected between 2.1 and 2.4 min and compound 9 between 5.6 and 5.9 min.
Fig. 2 (caption) The concentration of the aglycones and of the DON- and DOM glucuronides was ca. 100 ng/mL, and the concentration of the iso-DON- and iso-DOM glucuronides was ca. 30 ng/mL. Annotation was based on NMR measurements (1, 3, 9, 11, 12, 13, 14, and 16), comparison with reference standards (5, 6, 8, and 15), and literature (7). 1 iso-DON-8-GlcAc, 2 unknown DON-GlcAc.
NMR NMR spectra were obtained in methanol-d4 at 293 K on a Bruker Avance III HD spectrometer (Bruker BioSpin GmbH, Rheinstetten, Germany) equipped with a 5-mm Cryoprobe™ Prodigy BBO, operating at 600.15 MHz for 1H and 150.90 MHz for 13C. NMR data were recorded and processed using TopSpin 3.2 (Bruker BioSpin GmbH). Chemical shifts were established based on residual solvent signals (3.31 ppm for 1H and 49.15 ppm for 13C) and reported relative to TMS. HPLC-HR-MS and HPLC-HR-MS/MS analysis High-performance liquid chromatography-high-resolution (tandem) mass spectrometry (HPLC-HR-MS/MS) analysis was performed on a 6550 iFunnel Q-TOF instrument coupled to a 1290 Infinity UHPLC system (both Agilent Technologies, Waldbronn, Germany).
Chromatographic separation was achieved on a Kinetex C18 column (150 × 2.1 mm, 2.6 μm, Phenomenex) at a flow rate of 0.25 mL/min using a linear gradient (0-0.5 min: 5% B, 7-8 min: 100% B, 8.1-11 min: 5% B). Mobile phase A was water/formic acid (99.9/0.1, v/v) and mobile phase B MeOH/formic acid (99.9/0.1, v/v). Compounds were deprotonated by electrospray ionization (ESI) in the negative mode and measured first in full scan (m/z 100-500) and then in targeted MS/ MS mode (m/z 40-500) at different collision energies (CEs) between −20 and −40 eV. ESI was carried out at a gas temperature of 130 °C, drying gas flow of 16 L/min, nebulizer pressure of 30 psig, sheath gas temperature of 300 °C, and sheath gas flow of 11 L/min. The capillary voltage was 4000 V and the nozzle voltage 500 V. Data acquisition was achieved in the 2 GHz extended dynamic range mode and the acquisition rate was set to 333 ms/spectrum. HPLC-MS/MS analysis High-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) analyses were performed on an Agilent 1290 series UHPLC system coupled to a 6500+ QTrap mass spectrometer equipped with an Ion-Drive Turbo V ® ESI source (both Sciex, Foster City, CA, USA). Analytes were separated on a Kinetex C18 column (150 × 2.1 mm, 2.6 μm) protected by a SecurityGuard ULTRA pre-column of the same stationary phase (both Phenomenex, Aschaffenburg, Germany) at 30 °C and at a flow rate of 0.25 mL/min. Mobile phases A and B consisted of water/acetic acid and ACN/acetic acid (both 99.9/0.1, v/v), respectively. The gradient started with an isocratic period at 5% B for 0.5 min and continued with a linear increase to 15% B until 7 min, followed by a further linear increase to 30% B between 7 and 8.5 min and a steep increase to 100% B until 9 min. Finally, the column was washed at 100% B for 1.5 min and re-equilibrated at 5% B until 13.5 min. The injection volume was 3 μL and the LC eluent was diverted to the MS between 2.0 and 9.5 min. Tandem mass spectrometric detection was performed in selected reaction monitoring (SRM) mode after ESI in negative polarity. The ion source settings were: temperature 400 °C, ion spray voltage −4500 V, curtain gas 35 psi, ion source gas 1 60 psi, ion source gas 2 40 psi, and collision gas (N 2 ) high. First measurements of animal urine samples and of fractions obtained by semi-preparative chromatography after incubation of DON and DOM with RLM and HLM were carried out with four SRM transitions optimized for DON-3-GlcAc and four transitions calculated for DOM glucuronides. For detection of DON glucuronides, the deprotonated precursor ion (m/z 471.1) was fragmented to two glucuronide-derived fragments (m/z 113.0, collision energy (CE) −35 eV; m/z 175.1, CE −40 eV), to one DON-specific fragment (m/z 265.1, CE −38 eV), and to one fragment formed by loss of the CH 2 O group attached at C-6 (see Fig. 1) which distinguishes DON-3-GlcAc from DON-15-GlcAc (m/z 441.1, CE −30 eV). For detection of DOM glucuronides, the deprotonated precursor (m/z 455.1) was fragmented to the corresponding fragment ions (m/z 113, 175, 249, 425), using the same CEs as for the respective DON-GlcAc fragment ions. SRM transitions for DON and DOM, optimized by software controlled compound optimization, are listed in Table 1. The final SRM method was established by combining the SRM transitions optimized for the individual isolated glucuronides (see Table 1). Analyst ® software version 1.6.3 (Sciex) was used for instrument control and data evaluation. 
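The mass-accuracy criterion used later for compound confirmation (the measured accurate mass must lie within a few ppm of the exact mass) is a one-line calculation; the sketch below illustrates it. The exact mass is the value reported in the Results for the DON and iso-DON glucuronides, while the measured value is a hypothetical instrument reading for illustration only.

```python
def ppm_error(measured_mass: float, exact_mass: float) -> float:
    """Relative deviation of a measured accurate mass from the exact
    (theoretical monoisotopic) mass, expressed in parts per million."""
    return (measured_mass - exact_mass) / exact_mass * 1e6

# Exact masses of the neutral molecules (from the Results section):
# DON/iso-DON glucuronides: 472.1581 Da; DOM/iso-DOM glucuronides: 456.1632 Da.
EXACT_DON_GLCAC = 472.1581
measured = 472.1598          # hypothetical reading, for illustration only
err = ppm_error(measured, EXACT_DON_GLCAC)
print(f"mass error: {err:.1f} ppm, within 6 ppm tolerance: {abs(err) < 6}")
```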
Animal trials For studying the glucuronidation of DON in different animal species, urine samples collected in the course of several previous animal experiments (n = 4 per trial) were diluted to 0.5 mM creatinine and re-analyzed by the HPLC-MS/MS method described above. A brief overview of the animal trials is provided in Table 2, and the detailed descriptions are given elsewhere (Nagl et al. 2012; Pestka et al. 2017; Winkler et al. 2015). Chronology and preliminary analysis of animal urine The starting point of our work was the analysis of urine samples from various previous animal trials, in which DON had been administered to rats, mice, pigs, and cows, by a flat gradient LC-MS/MS method, using selected reaction monitoring (SRM) transitions (a) optimized for DON-3-GlcAc and (b) calculated for DOM glucuronides. These preliminary measurements confirmed the presence of DON-3-GlcAc in all samples and of DON-15-GlcAc in pig urine samples, and hinted at the presence of DOM-3-GlcAc in urine of cows and rats. In addition, they also yielded several new peaks at the selected SRM transitions, which showed different SRM intensity ratios and appeared at different (earlier and later) retention times. Rat urine contained the greatest spectrum of metabolites. Three novel DON-GlcAc (compounds 1, 5, and 9 in Fig. 3) as well as two major novel DOM-GlcAc peaks (compounds 12 and 14) and two minor novel DOM-GlcAc (compounds 10 and 13) were detected in addition to DON-3-GlcAc (compound 6) in rat urine. Detailed results are shown in Fig. 3 and Table 7 and discussed in "LC-MS/MS analysis of animal urine samples". As isolation of several hundred microgram amounts of these compounds, which is required for structure elucidation by NMR, is not practicable, we decided to incubate DON and DOM with rat and human liver microsomes, to isolate the formed glucuronides, and to compare them with the unknown glucuronides detected in animal urine samples. Microsome assay for glucuronidation of DON and DOM We incubated DON and DOM with commercially available RLM or HLM and the other reagents required for glucuronidation according to Uhlig et al. (2013). A preliminary time course experiment (0.5-48 h) yielded the greatest peak areas of DON-3-GlcAc after 24 h of incubation. The compounds formed upon incubation of DON and DOM with RLM and HLM are summarized in Table 3, where the compound identification achieved later is anticipated for reasons of clarity. HPLC-MS/MS analysis of diluted aliquots of the four reaction solutions (see Figs. S1-S4 in the electronic supplementary material) indicated formation of six DON glucuronides when DON was incubated with RLM, albeit to largely different extents and at different SRM transition intensity ratios. Consistent with previously published data (Maul et al. 2012, 2015; Uhlig et al. 2013), DON-3-GlcAc was the major product and could be identified by comparison with an authentic reference standard. The second most intense product (compound 5 in Fig. S1, 5.2% of DON-3-GlcAc based on the peak area of the transition 471.1->113.0) was at first speculated to be either DON-7-GlcAc (Maul et al. 2012, 2015) or DON-8,15-hemiketal-8-glucuronide (Uhlig et al. 2016). Eventually, comparison of the retention time and SRM transition intensity ratios of compound 5 with those of a DON-8,15-hemiketal-8-glucuronide reference standard identified compound 5 as the latter. DON-15-GlcAc, the third most intense peak (2.0% of DON-3-GlcAc), was identified based on literature reports (Sarkanj et al. 2013; Uhlig et al. 2013; Warth et al.
2012) and the absence of the transition 471.1->441.1. The peak areas of the other three formed DON glucuronides (compounds 1, 2, and 9) were minute compared to DON-3-GlcAc (0.2-0.3% based on the peak area of the transition 471.1->113.0). Similar to the incubation of DON with HLM, DOM-15-GlcAc was the major glucuronide formed upon incubation of DOM with HLM, followed by DOM-3-GlcAc (7.7% of DOM-15-GlcAc). However, the glucuronidation rate of DOM with HLM was much higher than that of DON (see Figs. S2 and S4). The relatively fast formation of DOM glucuronides upon incubation of the far less toxic DOM with HLM indicated that the used preparation of HLM was in principle active, but unsuited for quantitative glucuronidation of DON. Low activity of HLM for glucuronidation of DON, but good activity for glucuronidation of 4-trifluoromethylumbelliferone as a reference substance, had already been reported by Maul et al. (2012). In addition, greater glucuronidation of DOM compared to DON had been observed in pigs, where DON glucuronides made up approximately 40% of total urinary DON, whereas enzymatic hydrolysis increased the urinary DOM concentrations by a factor of 3.5 on average. Semi-preparative isolation of glucuronides produced in the microsome assay In line with the HPLC-MS/MS chromatograms of the diluted incubation solutions of DON and DOM incubated with RLM and HLM, six DON-GlcAcs were collected when DON was incubated with RLM and six DOM-GlcAcs were obtained from the reaction mixture of DOM with RLM. As fractions of equal volume were collected at short intervals, compounds eluted in more than one fraction. For most compounds, both pure and mixed fractions were collected. To give an overview of the obtained glucuronides, a chromatogram containing all glucuronides isolated in the course of the first microsome assays for production of DON glucuronides (compounds 1, 2, 5, 6, 7, and 9) and DOM glucuronides (compounds 3, 4, 10, 12, 13, and 14) is shown in Fig. 2. Solutions of the individual glucuronides were pooled in such a way as to obtain similar signal intensities for all compounds. In Table 4, the collection intervals of the fractions containing pure compounds and, if pure fractions could not be collected for a compound, of the fractions of the highest purity are given. For reasons of clarity, the compound identification that was later achieved by NMR measurements is already mentioned at this stage. Interestingly, in addition to the parent compounds DON and DOM, minute amounts of one later eluting compound, detected at the same SRM transitions but with different intensity ratios, were collected for each substance (compounds 11 and 16). To assess the relevance of the six glucuronides formed for DON and DOM, respectively, we compared the retention times and SRM transition intensity ratios of the isolated compounds with the glucuronides detected in animal urine samples (see Figs. 2, 3). In addition to DON-3-GlcAc, which was detected in all samples, and DON-15-GlcAc in pig urine, compounds 9, 12, 13, and 14 occurred in animal urine samples. As the amounts of glucuronides obtained in the first assays were not sufficient for structure elucidation by NMR, our next step was to verify whether DON and DOM were the aglycones of the isolated glucuronides. The finding that some glucuronides present in animal urine (the iso-DON-based compound 1 and DON-8,15-hemiketal-8-GlcAc) are at best partially cleaved upon enzymatic hydrolysis highlights the importance of proper biomarker method development.
Most likely, compound 1 and DON-8,15-hemiketal-8-GlcAc have escaped detection whenever enzymatic hydrolysis had been employed. Likewise, iso-DON-and iso-DOM-glucuronides have never been quantified in hydrolysis methods unless iso-DON and iso-DOM accidentally co-eluted in the employed LC-MS/MS method. Still, in the case of co-elution, quantification was falsified by different ionization and fragmentation intensities of the isomeric compounds. Production and purification of iso-DON and iso-DOM Occurrence of compounds 1 and 9 in urine of mice, rats, and/or pigs, and presence of compound 14 in rat-and cow urine underlined the relevance of these assumed iso-DONand iso-DOM glucuronides. Therefore, the next step was to produce iso-DON and iso-DOM to (a) confirm the structure of the unconjugated DON-and DOM-metabolites and (b) generate the substrates for microsome assays which were required to elucidate the structures of the iso-DON-and iso-DOM glucuronides. First, we attempted to produce iso-DON and iso-DOM by heating solid DON and DOM, respectively, at 160 °C for 1 and 2 h as described by Greenhalgh et al. (1984). However, after heating for 1 h, formation of iso-DON and iso-DOM was <1% based on UV-detection at 220 nm. Several side products were produced, too (Bretz et al. 2006). Heating for 2 h slightly increased the proportion of formed iso-DON and iso-DOM to <2%, but also increased the number and concentration of side products. An alternative way of producing iso-DON is by use of sodium methoxide, a chemical often used in organic chemistry for achieving isomerization of compounds. Here, the conversion rate of DON to iso-DON was approximately 8% in the used protocol. Prolonged incubation or increase in sodium methoxide concentration did not enhance iso-DON formation, but increased the number and concentration of undesired side products (data not shown). Still, formation of at least two unidentified side products that were displayed at the SRM transition 355.1->59.0 and eluted slightly before and after iso-DON in preparative chromatography could not be avoided. Therefore, one pure iso-DON fraction and two impure fractions containing iso-DON and one of each side products were obtained upon semi-preparative chromatography. Analytical HPLC separation and collection of the iso-DON peak were required to isolate iso-DON from these mixed fractions. To obtain at least 5 mg of iso-DON, the amount of DON regained upon preparative chromatography of the reaction mixture was again subjected to sodium methoxide treatment. In sum, three cycles of iso-DON production were carried out and 6.2 mg of iso-DON was produced. Iso-DOM formation upon incubation of DOM with sodium methoxide was <2% based on UV detection at 220 nm. However, under the conditions employed for conversion of DON to DOM by means of the anaerobic bacterium BBSH 797 (Schwartz-Zimmermann et al. 2014), one side product was formed which had the same retention time and SRM transition intensity ratios as the aglycone of compounds 3 and 14. Hence, we purified this compound from a mixed DOM/iso-DOM fraction collected upon preparative isolation of DOM according to Schwartz-Zimmermann et al. (2015). In sum, 4.5 mg of iso-DOM were obtained. Production and purification of DOM-, iso-DON-, and iso-DOM glucuronides To enable structure elucidation or confirmation by NMR, compounds 1, 3, 9, 12, 13, and 14 had to be produced at larger scale. 
As already known from the first microsome assays, compounds 12 and 13 (supposed to be DOM-3-GlcAc and DOM-15-GlcAc) could be produced by incubation of DOM with RLM and HLM, respectively. As expected, conversion of DOM with RLM and semi-preparative isolation yielded mainly DOM-3-GlcAc (2.1 mg), but also a mixed fraction containing DOM-3-GlcAc and DOM-15-GlcAc. Pure DOM-15-GlcAc was obtained as the major metabolite when DOM was incubated with HLM. Again, a mixed fraction containing DOM-3-GlcAc and DOM-15-GlcAc was collected, and both mixed fractions were purified again. Finally, 1.3 mg of DOM-15-GlcAc was obtained. For production of the iso-DON-based compounds 1 and 9 and of the iso-DOM-derived glucuronides 3 and 14, iso-DON and iso-DOM were incubated with RLM. The results are summarized in Table 3. Interestingly, in both cases a third glucuronide was formed in addition to the expected compounds. These novel glucuronides eluted in front of compound 9 in the iso-DON assay and in front of compound 14 in the iso-DOM reaction batch. LC-MS/MS analysis and LC-HR-MS/MS spectra (see below and supplementary material) clearly showed the absence of the fragment m/z 441.1 for the novel iso-DON-GlcAc and the absence of the fragment m/z 425.1 for the novel iso-DOM-GlcAc, which, in both cases, suggested conjugation at the CH2O group attached at C-6, i.e., at C-15. Judging from the transition 471.1->113.1 in HPLC-MS/MS analysis, compound 1 was the major product of incubating iso-DON with RLM, followed by compound 9 (13%) and the suspected iso-DON-15-GlcAc (8%). Conversion of iso-DOM with RLM yielded compound 3 as the major product and similar amounts of compound 14 and the assumed iso-DOM-15-GlcAc (both 54% based on the transition m/z 455.1->113.1). To confirm the proposed structures of the iso-DON/DOM-15-GlcAc, iso-DON and iso-DOM were incubated with HLM at small scale. As observed for DON and DOM incubated with HLM, the assumed 15-glucuronides were the major products, followed by compound 9 (29% based on the transition m/z 471.1->113.1) in the case of iso-DON and compound 14 (3% based on the transition m/z 455.1->113.1) in the case of iso-DOM. Similarly, the conversion rate with HLM was much greater for iso-DOM (ca. 70%) than for iso-DON (ca. 10%). Considering the analogy to glucuronidation of DON and DOM by HLM, which yielded the DON/DOM-15- and -3-glucuronides, the formation of the later eluting iso-DON and iso-DOM glucuronides (compounds 9 and 14), but not of the earlier eluting compounds 1 and 3, strongly suggested compounds 9 and 14 to be the iso-DON/iso-DOM-3-glucuronides and compounds 1 and 3 to be the iso-DON/iso-DOM-8-glucuronides. For unequivocal identification by NMR spectroscopy, compound 1, which partly co-eluted with reagent compounds, and compound 9, which eluted close to iso-DON, had to be isolated from their mixed fractions by analytical HPLC. In sum, 2.1 mg of compound 1, 0.5 mg of iso-DON-15-GlcAc, and 0.76 mg of compound 9 were obtained. Semi-preparative chromatography of the incubation mixture of iso-DOM with RLM yielded 2.8 mg of compound 3, 0.7 mg of iso-DOM-15-GlcAc, and 1.0 mg of compound 14. Compounds 1, 3, 9, 11, 12, 13, 14, and 16 were analyzed by 1D (1H and 13C) and 2D (H,H-COSY, H,C-HSQC, and H,C-HMBC) NMR measurements, and complete assignments for all signals were achieved. Tables 5 and 6 give the 1H and 13C chemical shifts and multiplicities. All NMR spectra are given in the supplementary material. In Table 4, the compound names are assigned to the compound numbers; Fig. 1 shows the numbering of the skeletons.
Compound 11 was identified as iso-DON by comparison of the 1H and 13C NMR data with those given by Greenhalgh et al. (1984); analysis of its 2D spectra confirmed the assignments given there, except for the erroneous shifts for C-8 and C-9 (144.8 and 125.2 ppm, resp.). For compound 16, the spectral features of the A ring (C-6 to C-11; identified by their long-range C-H correlations) as well as C-15 and C-16 were found to be very similar to iso-DON. Another HMBC correlation from C-6 (59.3 ppm) identified the CH3 group in position 14, the protons of which also show long-range correlations to 155.3 (C-12) and 45.8 ppm (C-4). Starting from the latter position, the remainder of the C ring (C-2 to C-4) was identified mainly by means of the COSY, which connects H-4a, H-4b, H-3, and H-2 in a pattern very characteristic of DON-derived trichothecenes. HMBC correlations from H-2 (3.96 ppm) to 107.5 (C-13) and 155.3 ppm (C-12) define the deepoxy substructure, and another one to 73.1 ppm (C-11) finally shows the intact B ring. Thus, compound 16 was positively identified as iso-DOM. In all analyzed glucuronides, the attachment point of the glucuronic acid unit to the respective parent structure (iso-DON, DOM, or iso-DOM) could be unambiguously proven by the HMBC long-range correlation of the carbohydrate's H-1′ peak (anomeric proton) to the signal of the trichothecene carbon carrying the glucuronic acid moiety: C-8 (146.7 ppm for compound 1 and 148.1 ppm for compound 3); C-3 (75.9 ppm for compound 9, 76.7 ppm for compound 12, and 76.0 ppm for compound 14); or C-15 (70.1 ppm for compound 13). In addition, a characteristic downfield shift of 6-8 ppm was observed where the attachment carbon is aliphatic (C-3 or C-15); in the case of the C-8 glucuronides of iso-DON and iso-DOM, the most prominent effect is a 20 ppm downfield shift of the neighbouring C-9 signal due to the severely reduced electron-donating effect of the C-8-attached oxygen atom. Apart from these differences, the spectral features of all glucuronides closely resemble those of their parent compounds 11, 15, or 16. HPLC-HR-MS and HPLC-HR-MS/MS analysis The exact mass of all isolated DON and iso-DON glucuronides is 472.1581 Da, and the exact mass of all obtained DOM and iso-DOM glucuronides is 456.1632 Da. For all 16 glucuronides, the determined accurate mass deviated by less than 6 ppm from the exact mass. HR-MS/MS spectra of all isolated compounds are given in the electronic supplementary material. For most compounds, a CE of −30 eV was best suited in terms of the relative intensity of precursor and product ions. However, DON-3-GlcAc and DON-15-GlcAc required a CE of −35 eV for proper fragmentation of the precursor ions. In contrast, the iso-DON and iso-DOM 3- and 8-glucuronides fragmented very easily, resulting in complete disappearance of the deprotonated ion at −30 eV. Hence, for these compounds, MS/MS spectra at −25 eV are shown. In general, the intensity of the fragments is highly dependent on the CE. Hence, we focused on the presence or absence of fragments rather than on similar fragment intensity patterns when we compared our mass spectra with literature spectra. LC-MS/MS analysis of animal urine samples In mouse urine samples, DON-3-GlcAc was the major glucuronide, followed by DON-8,15-hemiketal-8-GlcAc and iso-DON-8-GlcAc. DON-15-GlcAc and iso-DON-3-GlcAc occurred in traces. Only traces of DOM-3-GlcAc were detected, which is likely a result of using i.p. administration rather than oral gavage. All pig urine samples contained DON-15-GlcAc as the main and DON-3-GlcAc as the second most important glucuronide.
Iso-DON-3-GlcAc occurred in three of the four analyzed urine samples. However, as previously published for DON-15-GlcAc and DON-3-GlcAc , there were inter-individual differences in the relative peak intensities of the three glucuronides. Small peaks of DOM-15-GlcAc could be detected in two samples. The high microbial activity in the ruminant gastro-intestinal system leads to exhaustive conversion of DON to DOM (Winkler et al. 2015). Consequently, the dominant metabolite in cow urine was DOM-3-GlcAc, followed by iso-DOM-3-GlcAc. DON-3-GlcAc, DOM-15-GlcAc, and iso-DON-3-GlcAc were minor metabolites. The extent of in vivo glucuronidation corresponded well with the reported relative glucuronidation activities in microsome assays (Maul et al. , 2015. The glucuronidation rate was greatest in cows, where only traces of DOM were detected. In rat urine, peak areas of the major glucuronides were similar or slightly greater than peak areas of DON. In line with greater glucuronidation of DOM compared to DON , peak areas of DOM-3-GlcAc were 3-4 times greater than those of DOM. In pig and mouse urine, the major part of DON remained unconjugated. As already discussed by Maul et al. (2012) on the basis of in vitro data, the ready glucuronidation in cows and rats might contribute to the lower sensitivity of these species to DON compared to pigs, even if the main reasons are most likely a high degree of deepoxidation (cows and rats) and a relatively low bioavailability of DON (rats). To gain further insight into the reasons for species specific differences in DON sensitivity, toxicity assessment of iso-DON and iso-DOM is warranted. To sum up, the novel DON-and DOM-derived glucuronides DOM-3-GlcAc, iso-DON-3-GlcAc, and iso-DON-8-GlcAc are major metabolites in animal urine. Iso-DON-3-GlcAc, most likely previously tentatively identified as DON-7-GlcAc, is an important metabolite in urine of rats and occurred in traces in urine of mice, pigs, and cows. In addition, DON-8,15-hemiketal-8-GlcAc could be detected in animal urine for the first time. Conclusion Prior to this work, the only DON glucuronides identified beyond doubt in human and animal urine samples had been DON-3-GlcAc and DON-15-GlcAc. Analysis of urine samples collected from DON-treated rats, mice, pigs, and cows by a generic LC-MS/MS method revealed the presence of seven additional DON-and DOM glucuronides, of which four seemed to be major DON metabolites in at least one of the investigated animal species. By incubating DON, DOM and their newly produced isomers iso-DON and iso-DOM with rat and human liver microsomes, by performing semi-preparative isolation, HPLC-HR-MS/MS, and NMR characterization of the reaction products, and by comparing with one previously characterized reference standard, we eventually identified six of the novel DON-and DOMbased glucuronides in animal urine. One major novel compound detected in rat, mouse, and pig urine was iso-DON-3-GlcAc, which had most likely previously been misidentified as DON-7-GlcAc. The presence of iso-DON glucuronides as important DON metabolites in urine of mice and rats, the detection of iso-DOM glucuronides in urine of rats and cows and the occurrence of DON-8,15-hemiketal-8-GlcAc in urine of mice have implications for DONbiomarker analysis methods. All of these glucuronides escape detection in the conventional methods based on enzymatic hydrolysis and detection of released DON and DOM. 
Inclusion of iso-DON and iso-DOM in the analytical method would solve the problem for iso-DON-3-GlcAc and iso-DOM-3-GlcAc which are quantitatively hydrolyzed. Detection of iso-DON-8-GlcAc and DON-8,15-hemiketal-8-GlcAc, important DON metabolites in mouse urine that are at best partially hydrolyzed with β-glucuronidase, requires inclusion of the glucuronide SRM transitions into the LC-MS/MS method. To conclude, by discovering, producing, and structurally elucidating several novel DON-and DOM glucuronides, we enhanced the current knowledge on DON metabolism by different animal species and paved the way for analyzing these compounds in animal urine. Future quantitative analysis of the novel glucuronides in animal urine will show their biological relevance. In addition, studies on the toxicity of iso-DON and its derivatives are warranted.
9,310
2017-06-21T00:00:00.000
[ "Agricultural and Food Sciences", "Biology", "Environmental Science" ]
Ensemble Learning-based Algorithms for Traffic Flow Prediction in Smart Traffic Systems
Due to the tremendous growth of road traffic accidents
INTRODUCTION
Road traffic accidents increase dramatically each year due to the rapidly growing number of vehicles on the roads. This problem is considered a serious risk, a major source of trouble for individuals worldwide, and a significant global concern [1]. Collecting and analyzing comprehensive data is essential for any initiative aiming to improve traffic safety [2]. With the rising number of vehicles on the roads and the resulting congestion issues, optimizing traffic flow has become a pressing challenge in modern cities. Intelligent Transportation Systems (ITSs) have emerged as a promising solution to alleviate traffic congestion and enhance overall transportation efficiency [3][4]. The Vehicle Ad-Hoc Network (VANET) serves as a fundamental infrastructure for ITSs, enabling wireless connectivity among vehicles [5][6]. Additionally, intelligent transport systems are increasingly focused on addressing traffic congestion. Researchers have employed machine learning algorithms to predict traffic flow and reduce congestion at intersections. These models were evaluated using the national road traffic dataset for the UK. An adaptive traffic light system was implemented, which adjusts green and red lights based on road width, traffic density, and vehicle categories; simulations demonstrated a 30.8% decrease in traffic congestion [7].
Accurate traffic prediction is crucial for improving the effectiveness of traffic systems and reducing energy consumption. Machine learning-based methods have become commonplace, but they often rely on historical data [8][9]. Furthermore, ML-based models are gaining popularity due to their ability to accurately forecast traffic conditions, thereby improving safety and infotainment applications. However, the efficacy of these models in predicting real-time traffic remains a subject of investigation [10].
Several research studies have focused on developing methods and models for traffic flow prediction and management. In [11], a framework is presented that utilizes Vector Auto Regression (VAR) and a CNN-LSTM hybrid neural network to predict short-term traffic flow. The CNN-LSTM model outperforms other models in forecasting short-term traffic flow and demonstrates predictive accuracy associated with spatial correlation in traffic flow. In [12], three solutions are proposed to address the issue of missing data in traffic management: a live-traffic simulation, a neural network traffic prediction and rerouting system based on pheromone principles, and a Weighted Missing Data Imputation (WEMDI) approach. The integration of WEMDI into these systems yields notable improvements in various traffic factors and demonstrates efficient routing to alternative destinations. ML and neural networks play a significant role in solving traffic congestion issues. In this context, the authors in [13] propose ML and DL algorithms for predicting intersection traffic flow. The models were trained, validated, and tested using public datasets, and the Multilayer Perceptron Neural Network (MLP-NN) produced the best results. Gradient Boosting, Recurrent Neural Networks, RF, LR, and Stochastic also showed promising performance.
ITSs require traffic flow monitoring for effective management and optimization. Conventional methods of data collection and analysis are being augmented with AI techniques, such as ensemble learning [14]. The IAROEL-TFMS methodology utilizes feature subset selection and optimal ensemble learning to predict traffic flow, outperforming other approaches with its low RMSE. The authors in [14] evaluated the Hybrid-LSSVM, AST2FP-OHDBN, and IAROEL-TFMS models using their respective performance indicators. Among the models evaluated, IAROEL-TFMS showed the best precision and accuracy in forecasting the target variable, closely followed by the AST2FP-OHDBN model, whereas the Hybrid-LSSVM model demonstrated somewhat lower prediction accuracy. This paper utilizes four Machine Learning (ML) and Deep Learning (DL) models: Random Forest (RF), Linear Regression (LR), Long Short-Term Memory (LSTM), and ensemble bagging (RF). The objective is to utilize these predictions to enhance the efficiency of traffic light controllers in the context of traffic flow prediction at intersections. Experimental results demonstrate that all models exhibit a strong predictive capacity for estimating vehicular flow, highlighting their potential utility in smart traffic systems. II. THE PROPOSED MODEL This study developed a model for monitoring traffic flow. The primary objective of this model is to predict traffic movement. To achieve this objective, the model operates in three distinct stages. Firstly, data are collected using cameras or sensors. Secondly, ML and DL techniques are applied. Thirdly, the outcomes are evaluated using MAE, RMSE, and the coefficient of determination (R-squared). The workflow of the suggested approach is illustrated in Figure 1. A. Data Collection The dataset used for traffic prediction was obtained from various traffic sensors provided by the Huawei Munich Research Center. The dataset plays a crucial role in predicting traffic patterns and making necessary adjustments to stop-light control settings, including cycle length, offset, and split timings. The dataset consists of recorded data from six intersections located within an urban area, collected over a period of 56 days (Table I). The data are presented as a flow time series, which indicates the number of vehicles passing through each intersection every 5 minutes, spanning 24 hours. This results in 12 readings per hour, 288 readings per day, and a total of 16,128 readings over the course of the 56 days. For this study, 4 out of the 6 intersections were selected to replicate a 4-lane intersection scenario [15].
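To make the structure of such a flow time series concrete, the short pandas sketch below builds a 5-minute vehicle-count series for one intersection and checks the counts quoted above (288 readings per day, 16,128 over 56 days). The file name and column names are hypothetical placeholders; the actual layout of the Huawei dataset is not shown in the paper.

import pandas as pd

# Hypothetical layout: one CSV per intersection with a timestamp and a vehicle count.
df = pd.read_csv("intersection_1.csv", parse_dates=["timestamp"])
flow = df.set_index("timestamp")["vehicle_count"].asfreq("5min")

readings_per_day = pd.Timedelta("1D") // pd.Timedelta("5min")   # 288
total_readings = readings_per_day * 56                           # 16,128
print(readings_per_day, total_readings, len(flow))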
B. Data Preparation Data cleaning is a critical step in the preprocessing phase, where incorrect, incomplete, duplicate, or erroneous data within a dataset are rectified. Fortunately, the collected data for this study do not contain any missing values. The dataset has been divided into two parts: 70% for training the model and the remaining portion for testing. To ensure consistency and optimal performance during training, the data were scaled using the MinMaxScaler from the scikit-learn library. This scaler transforms the data so that they range between zero and one [16]. C. Proposed Techniques In this study, four regression models from the scikit-learn module in the Python programming language are employed. The scikit-learn module is a comprehensive Python library that offers a wide range of state-of-the-art machine learning algorithms designed to tackle various supervised and unsupervised learning challenges [16]. The authors applied four ML/DL techniques to the dataset: RF, LSTM, LR, and an ensemble method (bagging). The following section provides an overview of the traditional ML and ensemble methods utilized in the experiment. III. OVERVIEW OF TRADITIONAL MACHINE LEARNING AND ENSEMBLE METHODS A. Random Forest RF is a learning method that combines multiple tree predictors. Each tree in the forest is constructed based on the values of a random vector, sampled independently from the same distribution for all trees. Tree-based models form the core components of the random forest algorithm. A tree-based model involves iteratively dividing a given dataset into two distinct groups, guided by a specific criterion, until a predetermined stopping condition is met. The terminal nodes of decision trees are commonly known as leaf nodes or leaves [17]. B. Long Short-Term Memory (LSTM) LSTM networks have found extensive applications in various domains, including image processing, speech recognition, manufacturing, autonomous systems, communication, and energy consumption, for dynamic system modelling purposes. LSTM has gained significant attention in recent years due to its effectiveness in modeling and predicting the dynamics of nonlinear time-variant systems. It incorporates the characteristics of short-term and long-term memory, the ability to make predictions several steps ahead, and the propagation of errors. Sequence-to-sequence networks with partial conditioning have been shown to outperform other techniques such as bidirectional or associative networks, making them well suited for achieving the specified objectives [18]. C. Linear Regression LR is a widely used and straightforward ML algorithm. It is a mathematical methodology employed for predictive analysis: a statistical technique that enables the prediction of continuous or numerical variables and quantifies the association between the variables under consideration [19]. D. Ensemble Method (Bagging) Bagging, short for bootstrap aggregating, is a technique that involves creating multiple iterations of a predictor and combining them to form an aggregated predictor. In the aggregation process, the mean is calculated across the iterations when predicting a numerical outcome, while a majority vote is used when predicting a class. To generate multiple versions, bootstrap copies of the original learning set are created, and these replicates are then used as new learning sets [20]. IV. EVALUATION MEASURES In model evaluation, the coefficient of determination (R-squared), RMSE, and MAE are standard metrics [21].
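A minimal sketch of the scikit-learn part of the pipeline described above — MinMax scaling, a 70/30 split, the RF, LR, and bagged-RF regressors, and the three evaluation metrics — is shown below. The LSTM would require a separate deep-learning library and is omitted; the window length, hyperparameters, file name, and variable names are illustrative assumptions rather than the paper's settings.

import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import RandomForestRegressor, BaggingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# flow: 1-D array of 5-minute vehicle counts; predict the next reading
# from the previous 12 readings (one hour), an assumed window length.
flow = np.loadtxt("flow_series.txt")
scaled = MinMaxScaler().fit_transform(flow.reshape(-1, 1)).ravel()
window = 12
X = np.array([scaled[i:i + window] for i in range(len(scaled) - window)])
y = scaled[window:]

split = int(0.7 * len(X))                      # 70% train / 30% test, kept in time order
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

models = {
    "LR": LinearRegression(),
    "RF": RandomForestRegressor(n_estimators=100, random_state=0),
    "Bagging(RF)": BaggingRegressor(RandomForestRegressor(n_estimators=10),
                                    n_estimators=10, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mae = mean_absolute_error(y_te, pred)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(name, mae, rmse, r2_score(y_te, pred))

Keeping the split in time order, rather than shuffling, avoids leaking future readings into the training set for a time-series task.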
V. EXPERIMENTAL RESULTS AND DISCUSSION Table II presents the results of the model using various ML and DL algorithms. It can be observed that RF achieved an MAE of 13.76, while LSTM and LR achieved 14.74 and 17.80, respectively. When Bagging (RF) was applied, the minimum MAE obtained was 13.69. In terms of RMSE, the models achieved values of 22.39, 23.50, and 27.04, while the Bagging model achieved a lower RMSE of 22.21. In terms of R², the experimental results for the models were 0.9341, 0.9275, and 0.9040, respectively. The best R² value was obtained by the Bagging model, which achieved a value of 0.9352. The results show that the RF model and the Bagging model (using RF as the base model) outperformed the LSTM and LR models in terms of both MAE and RMSE. Additionally, the Bagging model showed the highest R² value, suggesting a better fit to the data. Overall, these findings demonstrate the effectiveness of the RF algorithm and the potential benefits of using ensemble methods like Bagging for traffic flow prediction. Figure 2 illustrates the MAE values of the considered models. It can be observed that the LSTM model has a slightly higher MAE (14.74) compared to the RF (13.76) and Bagging (RF) (13.69) models. This suggests that the LSTM model may not perform optimally in this particular scenario. On the other hand, the LR model has the highest MAE score (17.80), indicating that it may not excel at accurately predicting the target variable. These results suggest that the RF and Bagging models (using RF as base) perform better than the LSTM and LR models in terms of MAE. It is important to consider these findings when selecting the most suitable model for traffic prediction in this context. Figure 3 illustrates the RMSE values of the considered models. It can be observed that the Bagging (RF) model has the lowest RMSE score (22.21), indicating that, on average, its predicted values deviate the least from the actual values. This suggests that the model exhibits strong predictive accuracy. The RF model also performs well, although it has a slightly higher RMSE (22.39) compared to the Bagging model. On the other hand, the LSTM model shows a larger RMSE (23.50), indicating potentially inferior performance in terms of predictive accuracy. The LR model has the largest RMSE value (27.04), suggesting a potentially lower level of accuracy in predicting the target variable. These findings again suggest that both the Bagging (RF) and RF models perform well in terms of RMSE, indicating their ability to provide accurate predictions. However, the LSTM and LR models may have limitations in accurately predicting the target variable based on their higher RMSE values.
Figure 4 shows the R² values of the considered models. R² ranges from 0 to 1, with a value of 1 indicating a perfect fit. Among the models presented, the Bagging (RF) model shows the highest R² value (0.9352), indicating its superior ability to fit the data accurately. The RF (0.9341) and LSTM (0.9275) models also demonstrate high R² values, suggesting their effectiveness in explaining a significant proportion of the observed variability in the dependent variable. While LR performs satisfactorily (0.9040), its R² value is slightly lower than those of the alternative models. Overall, all of the models exhibit strong performance in explaining the variability in the dependent variable; however, Bagging (RF) emerges as the most prominent performer among them. Compared to [13], this research has improved the results by more than 0.5%. In [13], the researchers used the same dataset and applied 5 ML methods; gradient boosting was the most successful, with 93.05%. The proposed model achieves 93.41% by utilizing an RF model, and Bagging (RF) attains the highest result of 93.52%. In the future, researchers in this field could use a combination of ML and DL models to improve model performance [22]. VI. CONCLUSION In this article, we presented a new model to enhance intelligent traffic systems. The main purpose of this method is to predict traffic flow, or vehicle movement, at intersections by applying an ensemble learning technique. The proposed framework consisted of three primary phases: data collection through cameras and sensors, implementation of ML and DL techniques, and evaluation of the outcomes using standard error metrics.
2,958.2
2024-04-02T00:00:00.000
[ "Engineering", "Computer Science" ]
Quantum antenna as an open system: strong antenna coupling with photonic reservoir We propose a general concept of the quantum antenna in the strong coupling regime, based on the theory of open quantum systems. The antenna emission into free space is treated as an interaction with a thermal photonic reservoir. To model the antenna dynamics, a master equation is formulated with the corresponding Lindblad super-operators as the radiation terms. It is shown that strong coupling dramatically changes the radiation pattern of the antenna: the total power pattern splits into three partial components, each corresponding to a spectral line of the Mollow triplet. We analyze the dependence of the splitting on the antenna length, the phase shift, and the Rabi frequency. The predicted effect opens a way for the implementation of multi-beam, electrically tunable antennas, potentially useful in different nano-devices. 2. The model of quantum antenna as an open system The general model of the antenna is shown in Figure 1. We consider the antenna as a quantum dot placed inside a quantum wire, which acts as a waveguide (Figure 1a). Devices of this type have been implemented experimentally in the form of a tapered InP nanowire waveguide containing a single InAsP quantum dot [34][35][36]. The quantum dot is positioned on-axis in the InP nanowire waveguide, where the emission is efficiently coupled to the fundamental waveguide mode. From a theoretical point of view, this system may be considered as a defect of a crystal lattice. Therefore, the general model of bulk crystal defects, which is based on Wannier functions [37], may be used for the antenna analysis. Similarly to [37], we define Wannier functions. The Wannier and Bloch states satisfy an uncertainty principle [37] relating the spatial and quasi-momentum spreads. This means that, for a rather long antenna (large spatial extent), the main contribution to the wave function comes from a small region near the band minimum at the zone center. As a result, we obtain an approximate representation, Equation (4), which shows that the quantum properties of the antenna may be modeled by a two-level artificial atom with fermionic quantum states |a⟩, |b⟩ separated by the energy ħω0 and sharing the same spatial envelope f(x) (Figure 1b). We assume the length of the antenna to be comparable with the wavelength; the area of quantum confinement is therefore also comparable with the wavelength, which makes the retardation of the EM field essential in its interaction with the quantum emitter. The antenna is driven by a classical external field in the regime of arbitrary coupling. In particular, an essential role in the formation of the radiation is played by the Rabi oscillations produced by the antenna feeding. Therefore, its interaction with the EM field cannot be treated by perturbation theory of any kind. The total Hamiltonian of the antenna in the EM field within this model is the sum of the Hamiltonian of the free antenna and the interaction Hamiltonian with the external field. We limit our consideration to the strong coupling regime and do not touch the ultra-strong one [17]; thus, Hamiltonian (2) is written in the rotating-wave approximation [1]. The states |a⟩, |b⟩ are the excited and ground states of the antenna without the field, ω0 and ω are the frequencies of the optical transition and the driving field, respectively, and ΩR is the Rabi frequency.
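Equation (2) itself is not reproduced above. For orientation only, a standard rotating-wave-approximation Hamiltonian for a classically driven two-level emitter — consistent with the states |a⟩, |b⟩, transition frequency ω0, driving frequency ω, and Rabi frequency ΩR named in the text, but not necessarily identical to the authors' expression — reads

\[
\hat H = \hbar\omega_0\,\hat\sigma^{+}\hat\sigma^{-}
+ \frac{\hbar\Omega_R}{2}\left(\hat\sigma^{+}e^{-i\omega t} + \hat\sigma^{-}e^{+i\omega t}\right),
\qquad
\hat\sigma^{-}=\lvert b\rangle\langle a\rvert,\quad
\hat\sigma^{+}=\lvert a\rangle\langle b\rvert ,
\]

where the counter-rotating terms have been dropped. In a frame rotating at the driving frequency this reduces to the usual detuning-plus-Rabi form, which is what gives rise to the Mollow-triplet structure discussed below.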
The radiation of the quantum antenna into the photonic reservoir is described in the rotating-wave approximation [1], where σ⁻ is the fermionic-type lowering operator of the antenna excitation, b†k is a creation operator of a photon in the k-th mode, e is the unit vector along the antenna axis, ωk and k are the frequency and wave vector of the k-th mode, and V is the normalization volume. The last term of the master equation is the Lindblad super-operator, which models the antenna emission as its coupling with the photonic thermal reservoir and is written in its conventional form. The next step of simplification consists in the standard transformation from summation over k to frequency integration [1], in which the azimuthal integration is carried out and a new variable of integration proportional to ωk/c is used. The main contribution to the time integral is given by the narrow vicinity of the frequency ωk, which allows the time integral to be approximated in the standard way. The resulting relaxation parameter is a spontaneous emission frequency. Its difference from the Weisskopf-Wigner result for an individual atom [1] consists in the special interference dictated by the relative phase shift of the different EM modes along the antenna axis; as a result, its frequency dependence does not reduce to that of a single atom, and the relaxation coefficients form the elements of a matrix. The relaxation parameters are obtained from rather long but trivial calculations; the element with index 22 may be found from the probability conservation law [1]. 3. Dynamics of quantum antenna: qualitative analysis The strong coupling regime corresponds to the condition that the Rabi frequency greatly exceeds the relaxation rate, which is equivalent to g >> 1. For simplicity, we analyze the antenna dynamics in the regime of exact resonance (zero detuning). As follows from (20), the antenna emission does not vanish, while the emission properties are independent of its initial state. 4. Radiation properties of quantum antenna In this section we consider the power radiation pattern of the quantum antenna in the strong coupling regime. The far field emitted by the antenna [1] is presented as a superposition of the partial contributions produced by elementary dipoles induced at the antenna surface. The positive-frequency part of the field operator produced by a single dipole quantum emitter is given in [1] (Equation (25)), and the normally ordered operator of intensity in the far-field zone follows from it (Equation (26)). The observable value of the intensity is expressed through the two-time correlation function of the polarization [1] (Equation (27)). We consider the steady state of the antenna given by Equations (23) and (24). It is a stationary process, for which the correlation function is time-independent; thus, Equation (28) may be rewritten accordingly. The correlation function (29) is equal to the correlation function of resonance fluorescence considered in detail in [1]. The time shift in the antenna is stipulated by the phase shift of the radiation from different points. To simplify the integration in (27), we use the assumptions conventional in macroscopic antenna theory [14] for the amplitude and phase factors, respectively, where the angle and the radius are coordinates of the spherical system with origin at the antenna center (exponentially attenuated factors in (30) are related to the amplitude ones and approximated according to (31a)). As a result, the intensity of the radiation may be presented in terms of a radiation pattern, conventional in antenna theory [14].
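The master equation and the "conventional form" of the Lindblad radiation term invoked in Section 2 are not reproduced above. Assuming the dissipator has the standard single-emitter structure with a relaxation rate Γ — which the text's comparison with the Weisskopf-Wigner result suggests but does not display — the usual zero-temperature form is

\[
\frac{d\hat\rho}{dt} = -\frac{i}{\hbar}\,[\hat H,\hat\rho]
+ \Gamma\left(\hat\sigma^{-}\hat\rho\,\hat\sigma^{+}
- \tfrac{1}{2}\{\hat\sigma^{+}\hat\sigma^{-},\hat\rho\}\right),
\]

where a thermal reservoir would add a corresponding absorption term weighted by the mean photon number. In the strong coupling (Mollow) regime, with the Rabi frequency much larger than Γ, the steady state of such an equation yields the three-peaked resonance-fluorescence spectrum whose components the authors map onto three partial radiation patterns.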
To illustrate the qualitative properties of the radiation pattern, we consider the simplest model of a perfect linear antenna [14], whose quantum envelope has a constant spatial amplitude and a linear phase distribution characterized by a phase shift per unit length. The integration in (33) leads to Equation (37), which allows the qualitative behavior of the radiation pattern to be analyzed; some of its aspects are seen to be in agreement with the theory of resonance fluorescence [1]. The total radiation pattern represents the sum of three elementary patterns of the form typical of a perfect wire antenna [14]. The patterns become more and more jagged as the length increases (Figures 3, 4). Simultaneously, the effect of pattern separation increases, and even absolute separation becomes reachable (the angle of maximum for the main lobe of one pattern coincides with the minimum for another one). The typical radiation patterns of wire antennas are highly dependent on the value of the phase shift. The critical value of the phase shift defines the regime of axial radiation. For a phase shift exceeding the critical value, the main lobe moves to the invisible region [14] and is no longer related to the observable angles. Therefore, the antenna emission is formed entirely as a superposition of side lobes, which are incoherent. As a result, the emitted power decreases, which makes this regime unsuitable for many applications. For classical wire antennas the critical value equals 1 [14]; in our case there is a separate critical shift for every partial diagram. 5. Conclusion In summary, we have developed a model of the quantum antenna in the strong coupling regime based on the general theory of open quantum systems [1,33]. The far-field zone of the antenna radiation is treated as a thermal photonic reservoir. We formulated and solved the master equation with Lindblad terms related to the energy losses via antenna emission. The general concept was applied to a wire antenna with a fermionic type of excitation, and the spectral density of power in the far-field zone was calculated. It is shown that the strong coupling regime dramatically changes the radiation pattern compared with macroscopic antennas and optical nanoantennas of different well-known types. The calculated radiation pattern consists of three components, each corresponding to a resonance line of the Mollow triplet and turned with respect to one another. The value of the turn strongly depends on the geometric parameters of the antenna, the energy spectra of the materials used, and the value of the coupling (Rabi frequency). This opens new ways for highly effective electrical control of antenna characteristics for use in different nano-photonic applications. On the other hand, the influence of the EM field on the pattern should be accounted for from the point of view of electromagnetic compatibility at the nanoscale [37,38]. Author Contributions: Development of the physical models, derivation of the basic equations, interpretation of the physical results, and writing of the paper were done by Alexei Komarov and Gregory Slepyan jointly. Conflicts of Interest: The authors declare no conflict of interest.
2,235.8
2018-05-02T00:00:00.000
[ "Physics" ]
An Empirical Study of Deep Web based on Graph Analysis The internet can broadly be divided into three parts: surface, deep, and dark, among which the latter offers anonymity to its users and hosts. The Deep Web refers to an encrypted network that is not indexed by search engines such as Google. Users must use Tor to visit sites on the dark web. Ninety-six percent of the web is considered deep web because it is hidden. It is like an iceberg, in that people can see only a small portion above the surface, while the largest part is hidden under the sea. Basic methods of graph theory and data mining that deal with social network analysis can be used comprehensively to understand and study the Deep Web and detect cyber threats. Since the internet is rapidly evolving and it is nearly impossible to censor the deep web, there is a need to develop standard mechanisms and tools to monitor it. In this proposed study, our focus is to develop a standard research mechanism for understanding the Deep Web, which will support researchers, academicians, and law enforcement agencies in strengthening social stability and ensuring peace locally and globally. Introduction The Dark Web, a conglomerate of services hidden from search engines and regular users, is used by cyber criminals to offer all kinds of illegal services and goods [35]. Cybercriminal activities in the dark web can be considered one of the critical problems for societies around the world [5]. Web mining techniques such as content analysis and structure analysis can be useful for detecting and avoiding terrorist threats all over the world [7]. Nowadays, social network analysis (SNA) is used to study a variety of economic and organizational phenomena and processes [6, 8, 9, and 10]. SNA is used effectively to counter money laundering, identity theft, online fraud, cyber-attacks, and others. In particular, SNA methods are used in the investigation of many illegal operations with securities and investments, for the prevention of riots, and others [6, 11, and 12]. Graph theory has long been a favored tool for analyzing social relationships [13,14] as well as for quantifying engineering properties such as searchability [13,15]. For both reasons, there have been numerous graph-theoretic analyses of the World Wide Web (WWW), from the seminal [13, 16-20] to the modern [13,21]. Graph theory can thus be used as a tool for analyzing social relationships on the dark web [13]. SNA [34] is a graph-based method for analyzing social relationships and their impact on individual behavior and organizational structure. It was developed by sociologists and has been applied in many academic fields such as epidemiology and computer-mediated communication (CMC). After classifying and clustering the captured data, the characteristics of the special participants can be extracted. Through the social network analysis method, the mode of social interaction with other cybercriminals, the type of content published, and the frequency of discussion of the participating topics can be obtained [27]. The dark web can provide anonymity through the implementation of onion routers, which encrypt and bounce communication through a network of relays run by volunteers around the world [5]. The United States Naval Research Laboratory developed the onion router (Tor) to provide anonymity and protect sensitive information and networks. The Tor program was released to Internet users in 2004 [5].
It can provide privacy and encryption and direct Internet traffic through a series of virtual tunnels. It can help users reach blocked content and destinations. Tor websites end with .onion, while other web domains end with .net, .com, .edu, .org, etc., and can be opened using the Tor software [5,23]. Other programs can provide anonymity with encryption, such as ZeroNet, GNUNet, FAI (Free Anonymous Internet), and Freenet [5,23]. Research Problem One area that has not received adequate attention in the vast academic literature surrounding extremist movements and their use of the Internet is the Dark Web, whose websites are vaguely assumed to work as hubs for terrorists, drug-traffickers, and gangs [49]. With the rise of technology, cyber criminals are becoming more and more empowered. On the other hand, law enforcement agencies do not have adequate resources and technologies to fight cyber-crime and monitor activities on the dark web. One of the primary challenges posed by the Dark Web to national security professionals is segregating out the "noise" from issues of legitimate national security concern. With annual cybercrime revenue estimated at approximately $1.5 trillion and considering the existence of 7,000-30,000 TOR sites, knowing where to look requires us to bound our focus to specific subject areas [50]. To find the latest research on the dark web, an online query was executed on IEEE Xplore. The query showed that only 250 resources were available on the dark web: 111 conference papers, 96 journal articles, 23 magazines, 13 books, and 7 other resources. Therefore, we can say that the deep web needs more academic attention to fight cyber-crime in this information age. By conducting research on the Deep Web, our primary focus is to deliver a comprehensive road map to fight cyber-crime and devise new strategies to monitor the deep and dark web while developing standard software systems. Related Research Study The Latent Dirichlet Allocation (LDA) technique has been applied by [25] to discover latent topics in dark Web page contents. LDA is a generative model that detects topics in a text corpus by determining the likelihood of each document, treating words and documents as exchangeable. Finding threatening topics can assist in detecting community key members. A work by [26] extracts group key members using LDA to find terrorism-related topics by integrating LDA into dark Web portals to enhance Social Network Analysis (SNA). Using this method can help to measure the radicalness of a member and classify the member as an expert or a key member based on the selected topic. This work is limited to dark websites that use English as the communication language, and it was also based on only one forum. Zhang Xuan of the Shandong Police College and Professor Jinpei Ou, Secretary of the Department of Information Security and Cryptography (CISC) of the University of Hong Kong, co-published the Dark Net Threat Intelligence Analysis Framework, which proposes a hidden threat intelligence analysis framework to help analyze crime traces in the dark network [27,28]. In a recent work [29,32], Qin et al. performed an empirical study of different global extremist organizations on the Web and presented how sophisticatedly they propagate their ideologies. Several studies have focused on sentiment analysis, opinion mining, and affect analysis of user posts in Web forums [30], and the discovery of user roles and their ties has been appraised [31].
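As an illustration of the LDA-based topic discovery described above, the following sketch uses scikit-learn to fit a topic model to a small collection of forum posts. The corpus, the number of topics, and the variable names are hypothetical placeholders, not artifacts from the cited studies.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical forum posts standing in for crawled dark-web content.
posts = [
    "selling stolen credit card dumps on the market",
    "new exploit kit released for sale this week",
    "discussion about onion routing and hidden services",
    "tutorial on phishing pages and credential harvesting",
]

# Bag-of-words representation of the corpus.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)

# Fit an LDA model with a small, illustrative number of topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)          # per-document topic mixture

# Print the top words per topic; an analyst could inspect these to flag
# threat-related themes and, in turn, the posts (and users) most strongly
# associated with them, as in the key-member extraction work cited above.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = weights.argsort()[::-1][:5]
    print(f"topic {k}:", [terms[i] for i in top])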
In a research study, Yang et al. [33] came up with a spectral coherence based clustering approach to identify dark Web clusters, which considers the temporal coherence of user activeness rather than contents or links as the primary information. They represented a group of users as a mdimensional multivariate process which is used to derive the spectral density matrix and finally spectral coherence score is computed to identify the clusters [32]. Pastrana et al. [36] recently built a system that looks at cyber-crime outside the Dark Web. The authors discuss challenges in crawling underground forums and analyze four English-speaking communities on the Surface Web. In contrast, Nunes et al. [37] mine Dark Web and Deep Web forums and marketplaces for cyber threat intelligence. They show that it is possible to detect zero-day exploits, map user/vendor relationships and conduct topic classification on Englishlanguage forums, results that we have been able to reproduce with BlackWidow [35]. Al-Nabki et al. [40] presented a web-text-content-based classification pipeline containing TOR dark net illegal activities. They have used two well-known text representation techniques (Frequency Inverse Document Frequency and Bag-of-Words) together with three different supervised classifiers (Logistic Regression, SVM, and Naive Bayes). With the help of Uniform Resource Locators (URL), Kan et al. [41] classified the web pages by extracting features where a URL is segmented into tokens using information-theoretic measures. Noor et al. [42] proposed an automatic deep web classification technique, named "Query Probing", where they extracted the content from deep web data sources. Besides, it is commonly used for supervised learning algorithms and "Visible Form Features" [39]. Nunes et al. [43] discovered 16 zero-day exploits by monitoring forum posts in Darknet marketplaces. To reduce training data labeling requirements, their binomial classification method combined supervised with semi-supervised classifiers (eg: Label Propagation and Co-Training). Unsupervised k-means clustering was applied to character level n-gram features in [44] and partitioned Dark Web marketplace products into 34 clusters. Thomas et al. analyzed the way of cybercriminals' communications and what they exchange in forums [45]. Pastrana et al. focused on finding cybercrime actors in a large underground forum [46]. For evaluating private interactions, Overdorf et al. developed a method for automatically labelling threads that are likely to trigger private messages [47]. These studies were used to explore the market of underground forums and the social relationships of members. Masashi et al. [48] conducted a study to efficiently extract threat intelligence from the dark web by using machine learning as an "active defense" against cyber-attacks. Furthermore, focusing on the current situation that myriad forums are rampant on the dark web, they proposed a method to identify the characteristics of these forums. The experiment showed that "doc2vec", a neural network based tool, has high performance as a method of natural language processing and feature extraction in machine learning. MLP indicated high classification performance of 90% or more based on the number of datasets used in the experiment. This proved that the vectorization of doc2vec accurately represents the features of the posts. Furthermore, their experiment has shown that it is effective to use machine learning for posts on the dark web [48]. 
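The representation-plus-classifier pipeline attributed to [40] above can be sketched roughly as follows; the example texts, labels, and parameter choices are illustrative assumptions rather than details taken from that study, and TF-IDF with logistic regression is only one of the representation/classifier pairs it evaluates.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Hypothetical page texts and activity labels.
texts = ["buy counterfeit documents here", "community chess club forum",
         "weapons listing with escrow", "open source software mirror"]
labels = ["illegal", "legal", "illegal", "legal"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0, stratify=labels)

# TF-IDF features followed by a logistic-regression classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))   # mean accuracy on the held-out half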
Future Research Scope The future research scope on the Deep Web has enormous potential. With the rise of Artificial Intelligence and Web technology, research on the Deep Web could be the next game changer. Furthermore, this area of research involves emerging fields such as big data and advanced intelligent computing. More precisely, it will play a significant role in global eGovernance.
2,370.2
2020-10-28T00:00:00.000
[ "Computer Science" ]
Performance of UV and IR Sensors for Inspections of Power Equipment Electric power infrastructure, such as transmission lines or substations, is usually routinely inspected to assess its condition. The vast majority of typical defects in power transmission equipment manifests itself either through corona phenomena or through thermal effects. Therefore, an IR camera and a solar blind UV camera are sufficient for the detection of most defects in power transmission equipment. In the past, many network operators have relied mostly on manual inspections. In recent years, however, manned as well as unmanned aerial inspection methods, which are significantly more time effective, have become increasingly affordable and are therefore gaining in popularity rapidly. To obtain meaningful measurement results, many factors must be taken into account, which can even be difficult with conventional, static measurements. In the case of highly dynamic measurement practices (airborne or vehicle based), the combination of velocity and distance presents further challenges. This contribution is focused on the detection performance of UV and IR sensors under dynamic conditions. For this purpose, experiments were carried out with a typical IR and UV/corona camera at various distances to artificial defects. Additionally, a method for the automatic evaluation of UV und IR data based on machine learning is presented. Introduction To ascertain the highest possible security of supply, network operators routinely inspect their infrastructure. Many network operators use manual inspections, carried out by specialized personnel, for this purpose. Fortunately, the vast majority of typical defects in power transmission equipment manifests itself either through corona phenomena or through thermal effects. Therefore, an IR camera and a solar blind UV camera are sufficient for the detection of most defects in power transmission equipment. However, varying geographical conditions and frequent needs for follow-up inspections make those procedures rather time and staff intensive. As a result, manned as well as unmanned aerial inspection methods, which became increasingly affordable in recent years and are in general more time effective, are gaining in popularity rapidly. In order to obtain meaningful measurement results, many factors must be taken into account, which can even be difficult with conventional, static measurements. In the case of dynamic measurement practices, the combination of velocity and distance presents further challenges. So far, there are no comprehensive standards or guidelines for UV and IR which adequately cover the special conditions that dynamic measurement methods imply. In this contribution an attempt is made to determine conditions under which a qualitatively meaningful fault detection and evaluation is possible and useful. Particular attention is paid to assess the influence velocity and distance have on the detection sensitivity as well as the detection accuracy. For this purpose, results from measurements performed on selected laboratory and outdoor fault scenarios at various distances and velocities using an automated turntable are presented ( Figure 1). State-of-the-art Primarily, IR sensors are used for local testing of electrical power equipment. In order to determine the temperature as accurately as possible, some environmental aspects, in addition to a suitable camera, must be taken into account. 
This includes variables such as the surface condition and the current load of the object, but also ambient temperature, wind velocity, sky as well as solar radiation and cloudage. All of those parameters also play a significant role when IR sensors are used for dynamic measurements. Additionally, the relative velocity and distance to the test object must be taken into account. In this chapter experimental results will be presented to investigate the influence of velocity and distance on the measurement results. In order to avoid influences of weather phenomena and reflected solar radiation, the experiments were carried out indoors. IR sensor In earlier tests, the Optris PI640 was selected for the experiments on a moving device. Preliminary laboratory tests have shown that actively cooled cameras may have slightly better performances under dynamic conditions. However, the selected camera provides the best overall package in terms of its characteristics, its weight and its digital interfaces. Especially the last two attributes are very important for future use on autonomous, moveable devices. The key facts of this camera are shown in Table 1. Testing environment The test object was a conductor loop of 10 m length whose ends were jointed with a suitable cable clamp. The conductor cable has a diameter of 21 mm and a glass bead-blasted surface. The emission coefficient is given by the manufacturer as ε = 0.6. The clamp is made out of cast aluminium and consists of two parts that are fixed with one screw on each side ( Figure 2). The emission coefficient for the clamp was assumed to be ε = 0.4 according to previous measurements. The loop was passed through an AC current transformer and loaded with an initial current of 300 A which was adjusted in order to obtain the desired temperatures. To measure the temperature, five thermocouples were fitted, two on the clamp (1x front / 1x back) and three on the conductor ( Figure 2). The temperatures were recorded during the whole process, from heating up until 30 minutes after the last measurement. The temperatures of the clamp during the recording of the thermograms were obtained by linear interpolation. The clamp was loosened to attain a temperature difference between rope and clamp of about 10 K at a clamp temperature of about 60 °C. This value can be taken as a benchmark for a minimal alert temperature difference in detecting hot-spots [1,2]. After reaching stationary temperatures, measurements were taken at three different distances (10 m, 20 m and 30 m). For every distance the velocity was varied in following steps: 0 m/s (static), 2 m/s, 5 m/s, 7 m/s and 10 m/s. To ensure reproducible relative velocities between the IR camera and the test object, the camera was mounted on an automated turntable which was controlled by a stepper motor and a microcontroller ( Figure 1). For every step the IR temperature was obtained for the clamp and two spots on the conductor ( Table 2). The emission coefficient was fixed for the whole camera range. In this case, the influence of three different emission coefficients (ε = 1, 0.6, 0.4) were explored. As result, IR-sequences were recorded for every performed measurement. Out of those thermograms, temperature values were obtained from three predefined areas ( Figure 3). To compare the temperatures for every distance in a reproducible way, the maximum temperature values were recorded from every area. Also, the IR temperatures are given in relation to the temperatures measured on the surface. 
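Before the results, it may help to recall how the assumed emission coefficient distorts the reported temperature. The sketch below uses a simplified total-radiation (T⁴) model with a single reflected-ambient term; real IR cameras use band-limited calibration curves, and the ambient temperature here is an assumed value, so the numbers are only qualitative and are not meant to reproduce the measured detection efficiencies, which also include velocity and distance effects.

def apparent_temperature(t_obj_c, t_amb_c, eps_true, eps_set):
    """Apparent temperature (deg C) reported by an IR camera that assumes
    emissivity eps_set for an object whose true emissivity is eps_true.
    Simplified total-radiation (T^4) model with reflected ambient only."""
    t_obj, t_amb = t_obj_c + 273.15, t_amb_c + 273.15
    received = eps_true * t_obj**4 + (1 - eps_true) * t_amb**4
    t_disp4 = (received - (1 - eps_set) * t_amb**4) / eps_set
    return t_disp4 ** 0.25 - 273.15

# Clamp at 60 deg C with assumed true emissivity 0.4, read out with the
# three emissivity settings explored in the experiment.
for eps_set in (1.0, 0.6, 0.4):
    print(eps_set, round(apparent_temperature(60.0, 20.0, 0.4, eps_set), 1))

In this simplified model, the reported temperature only matches the true surface temperature when the set emissivity matches the true one, which is consistent with the trend reported below.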
Results When looking at the graphs in Figures 4 and 5 it is recognizable, that the detected temperatures depend on camera velocity and distance. For an estimated emission coefficient of ε = 1 the IR sensor only detects 64 % to 42 % of the origin clamp temperatures, depending on velocity and distance. The influence of the sensor velocity gets smaller with increasing distance. As far as the conductor is concerned, the influences of speed and distance on the observed temperature are almost negligible. The detection efficiency is about the same as for the clamp (Figures 4 and 5). In case of the clamp, the influencing effects become much more obvious. The recorded results show that the basic course of every curve is nearly the same. Furthermore, the comparison of the different curves indicates that the closer the emission coefficient of the clamp gets towards to the actual value, the closer the temperatures approach the values determined in situ with the thermo-couples ( Figures 5 and 6). Looking at the variations between different curves for different distances, it is recognizable, that lower emission coefficients cause a higher dependency on the distance of the sensor (Figures 5 -7). The results indicate, that the influence of the sensor velocity is getting smaller with longer distances between sensor and object. The same effect is achieved if the emission coefficient is set to a higher value (Table 3). In summary it can be stated, that in this experiment the detected temperatures were more constant when a higher emission coefficient was used. However, higher emission coefficients also led to less accurate temperature measurements. Regarding the influence of different sensor velocities on the temperature difference ΔT between the clamp and the conductor the sensor velocity appears to have less influence at higher distances, but the detection efficiency is then already lower than 50 % of the original ΔT value (Figure 8). Increasing the distance between sensor and object leads to a similar behavior. However, this behavior is desirable because the minimum distances, which must be kept between the infrastructure and inspection equipment, usually exceeds 30 m. The investigations regarding the temperature differences ΔT show that lower values for the emission coefficient deliver results that are closer to the actual temperature difference (Figure 9). So far, the experiments have shown, that measuring absolute temperatures in a dynamic environment accurately can be rather challenging. Nonetheless, it should be possible to detect potential failures or hot-spots by analyzing temperature differences which has proven to be feasible under dynamic conditions. Furthermore, the experiments have shown that results measured with a higher emission coefficient are less dependent on sensor velocity. State-of-the-art Corona discharges emit in air mainly in the 230 -405 nm range of the UV [3]. Unfortunately, corona emissions are very weak in intensity in relation to solar UV irradiance. However, there is a so called "solar blind band" between 240 -280 nm where the solar radiation is absorbed by the earth's ozone layer. Commercially available corona cameras operate within this window. Corona emission lines in this spectral band are weaker in intensity than in the 290 -400 nm range. Therefore, corona cameras usually rely on UV solar blind image intensifiers to provide high contrast images [4]. 
Additionally, a solar blind band pass filter is used to block out any leakage UV radiation which might saturate the image amplifier system [5]. Assessment of corona images The assessment of corona images is inherently difficult and has been a vast field of research for many years. The main difficulty originates from the image intensifier's high gain (typically 10 6 ph/ph) which causes all corona discharges to appear as bright white spots of similar size (blobs) against a black background (Figure 10). A single corona image does therefore not allow any assessment or classification regarding the intensity or severity of the corona inducing defect. However, in recent years, several methods for the classification of corona inducing defects have been developed. Those approaches usually rely on the extraction of features from a series of corona image frames with machine learning algorithms [6 -9]. However, while all authors conclude that correlation is feasible for stationary setups with constant distances, it remains unclear whether those algorithms are still applicable for dynamic conditions and to what extent the detection of blobs is influenced by velocity and distance in general. To gain further experience in this respect, laboratory experiments were carried out under realistic dynamic conditions. Testing environment The corona UV experiments were carried out with a UV camera manufactured by ProxiVision equipped with an image intensifier and a solar blind filter ( Table 4). The test object was a needle-plane-arrangement which produced continuous, branched discharges with an apparent charge of about Q = 100 pC. The distance between the camera and the test object was varied between 10 m and 40 m. To replicate dynamic operating conditions, the same turntable arrangement as described in chapter 2.3 was used. For every distance the velocity was varied in following steps: 5 m/s, 10 m/s and 15 m/s. The experiments were carried out with two different UV lenses with different focal lengths (25 mm and 60 mm, both F 2.8). Additionally, the influences of the camera's shutter speed on the detection sensitivity under dynamic conditions were investigated (20 ms, 40 ms and 60 ms). Results One of the main objectives of this work is to analyse the influence of various experimental variables, e.g. distance to target object, velocity of moving platform, camera exposure time, etc. onto the ability of detecting events captured by UV equipment. To this end, a set of relevant markers meant to describe this ability was defined, which were monitored across various sets of experimental conditions. Focusing on the UV case, a typical image produced by such a camera, has a resolution of 694 × 510 pixels. A blob detection algorithm, aiming to obtain the 2D location of all bright areas along with their corresponding sizes was applied to the recorded pictures. Typical blob detection algorithms are based on thresholding and filtering operations applied to the (grayscale) image. The result of this algorithm is a set of blob detections. A 2D point is assumed to be in the same image plane that corresponds to the location of the physical UV event, which acts as ground truth (gt) information for the experiment. A single recording sequence, as described in chapter 3.3, comprises a collection of such images augmented with detections and annotated gt information. 
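The blob-detection step described in the next paragraphs (thresholding plus filtering of a grayscale UV frame) can be approximated in a few lines of NumPy/SciPy; the threshold and minimum-area values here are illustrative assumptions, not the parameters used in the experiments.

import numpy as np
from scipy import ndimage

def detect_blobs(frame, threshold=200, min_area=5):
    """Return (x, y) centroids and pixel areas of bright blobs in a
    grayscale UV frame (2-D uint8 array, e.g. 510 x 694 pixels)."""
    binary = frame >= threshold                 # thresholding
    labels, n = ndimage.label(binary)           # connected components
    blobs = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if xs.size >= min_area:                 # size filtering
            blobs.append(((xs.mean(), ys.mean()), xs.size))
    return blobs

Each frame of a recording sequence is reduced in this way to a list of blob centroids, which is the input assumed by the statistical markers defined in the following section.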
In these conditions, the following statistical markers can be computed: n1, the percentage of frames from the sequence containing at least one detected blob; n2, the percentage of frames with confirmed gt, where a gt point is considered to be confirmed if there is at least one detection no further than a specified radius (r) from it; n3, the percentage of flooded frames, where a frame is labelled as flooded if the number of detected blobs exceeds a value of 10; and finally n4, the number of actual frames with confirmed gt in a session. The above markers were compiled for a total of 72 condition sets, i.e. combinations of the following four experimental variables: d ∈ {10, 20, 30, 40}, the distance (m) between the camera and the target object; f ∈ {25, 60}, the focal length (mm) of the camera optics; the camera exposure time, chosen from {20, 40, 60} ms; and the equivalent velocity, chosen from {5, 10, 15} m/s, of a point passing in front of the target on a linear trajectory at distance d and producing the same recording session (ignoring lens distortions). Throughout the experiment, the radius r for confirming a gt annotation point was set to 20 pixels. The following observations can be drawn from the results visible in Figure 11: marker n3, the percentage of flooded frames, is negligible, with few exceptions that affected experiments with the 25 mm lens at 40 m distance and the 60 mm lens at 10 m distance. These sessions (clustered in time) are likely to have been influenced by some external perturbation, or perhaps some technical problem of the UV camera. The abnormal percentage of flooded frames has a direct consequence on n1 (for the corresponding sessions), particularly visible for the 25 mm lens (row 1, columns 1-3). Otherwise, n1 seems to be influenced primarily by distance when using the 25 mm lens, which indicates a low percentage of detections as the distance to the test object increases. For the 60 mm lens, however, n1 does not change much with distance (with the sole exception of d = 30 m). Distance has a similar influence on n2, where the negative correlation seems to be much clearer than that for n1. The percentage of confirmed defects (n2) decreases on average to approximately half as the distance increases from 10 m to 40 m. This observation is consistent across different focal lengths and exposure time values. Velocity shows a clear negative correlation with n4. This correlation is to be expected, since as velocity increases, the number of frames with the defect in the field of view of the camera decreases (and this is the upper bound for n4). As will be mentioned in chapter 4, n4 has a clear significance when consolidating 2D detections, being the main parameter that decides the final form of the solution. Exposure time appears to be positively correlated with n1 and n2, although this dependency is not as clear as for the distance. In general, a higher percentage of detections and a higher percentage of confirmed annotations can be observed as the exposure increases. Finally, when deciding between different focal lengths, the experiments favour the higher f value, which gives an increased percentage of both detections and confirmed annotations. In general, a high confirmation rate (n2 and n4) for most combinations of experimental factors can be observed, which supports the feasibility of automatic detection of UV events.
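A direct transcription of the four markers defined above into code might look as follows, assuming each frame of a sequence has already been reduced to a list of detected blob centroids and an annotated ground-truth point; the helper name and data layout are ours, while the flooding threshold of 10 blobs and the confirmation radius of 20 pixels follow the definitions in the text.

import math

def sequence_markers(detections_per_frame, gt_per_frame, r=20.0, flood_limit=10):
    """detections_per_frame: list of lists of (x, y) blob centroids, one per frame.
    gt_per_frame: list of annotated (x, y) ground-truth points, same length.
    Returns (n1, n2, n3, n4) as defined in the text."""
    frames = len(detections_per_frame)
    with_detection = confirmed = flooded = 0
    for dets, gt in zip(detections_per_frame, gt_per_frame):
        if dets:
            with_detection += 1
        if len(dets) > flood_limit:
            flooded += 1
        if any(math.hypot(x - gt[0], y - gt[1]) <= r for x, y in dets):
            confirmed += 1
    n1 = 100.0 * with_detection / frames     # % frames with at least one blob
    n2 = 100.0 * confirmed / frames          # % frames with confirmed ground truth
    n3 = 100.0 * flooded / frames            # % flooded frames (> flood_limit blobs)
    n4 = confirmed                           # absolute number of confirmed frames
    return n1, n2, n3, n4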
Case study: automatic detection of UV events using Computer Vision Automatic detection of 2D blobs in UV images is very useful in localizing events such as corona defects, especially when the blobs can be linked temporally. In this chapter an example of temporal aggregation of blobs in 3D with the scope of highlighting UV events that are consistent across multiple frames is shown. To this end, an algorithm that takes as input blob detections from multiple frames and camera calibration information and produces aggregated 3D points that accumulate votes from individual frames was developed. The algorithm formulates the problem as a point search in 3D, constrained by elements of epipolar geometry and general camera geometry [10]. Blobs detected in one frame are projected in 3D as line segments bounded by a fixed depth of interest. Points from these segments are back-projected and matched (up to a matching tolerance) in consecutive frames. Consequently, vote counters are incremented for successfully confirmed blobs. Finally, the points accumulating a certain number of confirmations form the solution. While the technical details of the algorithm are beyond the scope of this chapter, the outcome of applying it will be demonstrated on a semi-realistic scenario: detection and localization of a 3D point visible in multiple consecutive images. For ease of verification, the tip of a pylon cross-arm, whose 2D location is annotated in multiple consecutive frames forming a trajectory segment, will be considered. Figure 12 shows a sample image and the location of the considered point. The data comes from an inspection flight performing a high voltage overhead line monitoring routine and includes RGB (grayscale) and LIDAR data streams. Next, a minimum number of 50 votes of a point from the solution was chosen for the algorithm. The value of this parameter is supported by Figure 11, for the case of 25 mm lens, a distance of approx. 40 m and a velocity between 5 and 10 m/s, reflecting the recording conditions. The outcome is printed in 3D in Figure 13, using 3D rendering software superimposed onto LIDAR data [11]. The number of votes each 3D point accumulates is encoded in shades of gray (with brighter shades corresponding to more votes). The red points mark the camera trajectory over time. The solution calculated with the algorithm can be further post-processed by applying a clustering algorithm exploiting the voting information. However, even in this unfiltered form the localization is fairly accurate, with neighboring points being approx. 30 cm apart from each other. In order to quantify the quality of the localization, a second experiment was conducted, where the point with the highest number of votes from the above solution was considered (reference point) and then projected back onto all the frames from the trajectory segment. Since corona discharges are recorded by an UV camera with a location uncertainty around the actual physical defect (e.g. sharp tip of a broken conductor), this uncertainty was modelled by adding Gaussian noise with increasing standard deviation to the projected locations of the reference point. As performance metric, the distance error between the reference point and the solution point with the highest number of votes was measured. Figure 14 shows the outcome of this experiment. As expected, increasing the noise level results in more validated points in the raw solution and also in an increase in the distance error. 
Overall, a distance error of this magnitude is rather encouraging and currently to be expected under realistic conditions. Conclusion The experiments with the IR camera have shown that accurately measuring absolute temperatures in a dynamic environment can be rather challenging. Nonetheless, it should be possible to detect potential failures or hot-spots by analyzing temperature differences, which has proven to be feasible under dynamic conditions. Furthermore, the experiments have shown that the emission factor is a crucial parameter for long-distance thermography. The measurements carried out with the UV camera clearly indicate that automatic optical corona detection is also possible under dynamic conditions. While different experimental conditions influence the quantity of redundant information used when consolidating 2D blob detections in 3D (e.g. see the impact of velocity on n4), a sufficiently large number of frames with confirmed ground truth (n4) appears to be ensured in most situations. This in turn makes automatic 3D localization of corona defects possible.
4,786.6
2019-08-08T00:00:00.000
[ "Physics" ]
A Modified Wallman Method of Compactification Closed xand basic closed C*D-filters are used in a process similar to Wallman method for compactifications of the topological spaces Y, of which, there is a subset of D   * C Y containing a non-constant function, where   * C Y is the set of bounded real continuous functions on Y. An arbitrary Hausdorff compactification  ,  Z h   of a Tychonoff space X can be obtained by using basic closed C*D-filters from   C Z | D f h f D       in a similar way, where is the set of real continuous functions on Z   C Z Introduction Throughout this paper,   T   will denote the collection of all finite subsets of the set .For the other notations and the terminologies in general topology which are not explicitly defined in this paper, the readers will be referred to the reference [1]. Let be the set of bounded real continuous functions on a topological space Y.For any subset of , we will show in Section 2 that there exists a unique r f in for each f in so that for any and let V be the set for any , 0  f x for all f D at some x in Y, then K, V, ℰ and Å are denoted by K x , V x , ℰ x and Å x , respectively.Let Y be a topological space, of which, there is a subset of containing a non-constant function.A compactification w  of Y is obtained by using closed  x -and basic closed C* D -filters in a process similar to the Wallman method, where   * C Y Theorem 2.1 Let ℱ be a filter on Y.For each f in there exists a r f in such that for any F in ℱ and any 0   (See Thm.2.1 in [2, p.1164]). Proof.If the conclusion is not true, then there is an f in such that for each t in there exist an Corollary 2.2 Let ℱ (or Q) be a closed (or an open) ultrafilter on Y.For each f in , there exists a unique in such that (1) for any A Closed  x -Filter and a Modified Wallman Method of Compactification Let Y be a topological space, of which, there is a subset of an open nhood filter base at x; let N x be the union of From the Def.3.4, the following Cor.3.5 can be readily proved.We omit its proofs. Corollary 3.5 For a closed set Thus, ℭ does not belong to , contradicting the assumption.For [] is obvious from (i). * Equip with the topology  induced by .For Proposition 3.9 For each f in , f* is a bounded real continuous function on . . For the continuity of f*: If ℭ is in and is t f .We show that for any Thus is well-defined and one-one.Let be a function from To show the continuity of and , for any k So, and are continuous.(ii) is obvious.(iii) For any and all  > 0. Therefore, if the K* or ℰ* defined as above is well-defined, so is K or ℰ defined as in Section 2 well-defined and vice versa. and S is in V x , thus V x is an open nhood base at x; (iii): For any * F in  such that N x is not in * F , by Cor.3.5 (i), x is not in F , and by (ii) of Lemma 3.11 above, x is in Cor. 3.5 (i), Lemmas 3.6 (ii) and 3.8 (i) imply that We claim that * : Thus is an open nhood base at .   Lemma 3.12 Let ℰ be a basic C* D -filter on Y defined as in Section 2. 
If ℰ does not converge in Y, for any For any by Cor.3.5 (iv) there exists a *,   Pick a 0 where The Hausdorff Compactification (X w ,k) of X Induced by a Subset D of C * (X) Let X be a Tychonoff space and let be a subset of separates points of X and the topology on X is the weak topology induced by .It is clear that contains a non-constant function.For each x in X, since V x is an open nhood base at x, it is clear that where X E = {ℰ x |x X} and X E = {ℰ|ℰ is a basic closed C* D -filter that does not converge in X}.Similar to what we have done in Section 3, we can get the similar definitions, lemmas, propositions and a theorem in the following: (4.15.4) (See Def.3.4) For a nonempty closed set F in (ii), (iii) and (iv) are obvious.(4.15.6) (See Lemma 3.6) For any two nonempty sets and E F in X, (i) for any 0.     (4.15.9) (See Prop.3.9) For each f in , f* is a bounded real continuous function on D w X .(4.15.10) (See Lemma 3.10) Let be defined by for any * * , 0 is an open nhood base at ℰ in and similarly for K t .Since ℰ s is not equal to ℰ t , K s is not equal to K t and that has a g such that The Homeomorphism between (X w ,k) and (Z,h) Let   , h Z be an arbitrary Hausdorff compactification of X, then X is a Tychonoff space.Let denote D    C Z which is the family of real continuous functions on Z, and let separates points of X, the topology on X is the weak topology induced by and contains a non-constant function. D D Let   and let h −1 be the function from h(X) to X defined by h −1 (h(x)) = x.Since h and h −1 are one-one, f = °f o h and h(X) is dense in Z, similar to the arguments in the paragraphs prior to Lemma 3.11, we have that and all 0   .Thus, if K or ℰ is well-defined, so is °K or and similarly for K t such that ℰ s and ℰ t are generated by K s and K t , respectively.Assume that °ℰs and °ℰt converge to z s and z t in Z, respectively.Then ℰ s is not equal to ℰ t , °ℰs is not equal to °ℰt and z s is not equal to z t are equivalent.Hence  is well-defined and one-one.For each z in Z, let °ℰz be the basic closed C* °D-filter at z, since Z is compact Hausdorff and is an open nhood base at z, thus °ℰz converges to z.Let ℰ z be the element in w X induced by °ℰz , then,  (ℰ z ) = z.Hence, is one-one and onto. , for any , 0 Since is one-one, h f f h    for all f in , so (b) iff (c): and any  > 0. Since for any °f in , D  0,   (c) iff (d): for any Since   , 0 [2] H. J. Wu and W. H. Wu, "An Arbitrary Hausdorff Compactification of a Tychonoff Space X Obtained from a C* D -Base by a Modified Wallman Method," Topology and its Applications, Vol. 155, No. 11, 2008, pp. 1163-1168. doi:10.1016/j.topol.2007 K and V are called a closed C* D -filter base and an open C* D -filter base on Y, respectively.A closed filter (or an open filter) on Y generated by a K (or a V) is called a basic closed C* D -filter (or a basic open C* Dfilter), denoted by ℰ (or Å). 
the set of all basic closed C* D -filter that does not converge in Y, is the topology induced by the base τ = {F*|F is a nonempty closed set in Y} for the closed sets of and F* is the set of all ℭ space X can be obtained by using the basic closed C* D -filters on X from D f V are called a closed and an open C* D -filter bases, respectively.If for all f in , Y, then K and V are called the closed and open C* D -filter bases at x, denoted by K x and V x , respectively.Let ℰ and ℰ x (or Å and Å x ) be the closed (or open) filters generated by K and K x (or V and V x ), respectively, then ℰ and ℰ x (or Å and Å x ) are called a basic closed C* D -filter and the basic closed C* D -filter at x (or a basic open C* D -filter and the basic open C* D -filter at x), respectively.Corollary 2.3 Let ℱ and Q be a closed and an open ultrafilters on a topological space Y, respectively.Then there exist a unique basic closed C* D -filter ℰ and a unique basic open C* D -filter Å on Y such that ℰ is contained in ℱ and Å is contained in Q. 0 be a closed C* D* -filter base on and let ℰ* be the basic closed C* D* -filter on generated by K*.Since and are one-one, the closed C* D -filter base or the basic closed C* D -filter on Y induced by K* or ℰ* and vice versa.Lemma 3.11 Let ℰ be a basic closed C* D -filter on Y defined as in Section 2. If ℰ converges to a point x in Y, then (i) r f = f(x) for all f in ; i.e.ℰ = ℰ x , (ii) V x is an open nhood base at x in Y and (iii) For any basic closed C* D* -filter ℰ* on , ℰ* converges in .w Y Proof.For given ℰ*, let K and ℰ be the closed C* D -filter base and the basic closed C* D -filter on Y induced by ℰ*.Case 1: 15.7) (See Prop.3.7)  = {F*|F is a nonempty closed set in X} is a base for the closed sets of w X .(4.15.7.1) (See the definitions for the topology  on and f* for each f in in Section 3.) topology  induced by .For each f in , define by f* the Hausdorff compactification of X obtained by the process in Section 4 and is defined as above.For each basic closed C* D -filter ℰ in °ℰ and vice versa.If K or ℰ is given, °K or °ℰ is called the closed C* °D-filter base or the basic closed C* °D-filter on Z induced by K or ℰ and vice versa.For any z in Z, X the closed C* °D-filter base at z.The closed filter °ℰz generated by °Kz is the basic closed C* °D-filter at z. Since Z is compact Hausdorff, each °ℰ on Z converges to a unique point z in Z. So, we define by (ℰ) = z, where ℰ is in : and z is the unique point in Z such that the basic closed C* °D-filter °ℰ on Z induced by °ℰ be the basic closed C* °D-filter on Z induced by ℰ.If °ℰ converges to z in Z, FF an arbitrary basic open nhood of z in Z.So, (d) iff z is in ; i.e., ℰ is in F* if  (ℰ) is equal to z .Hence, T(F*) = Cl Z (h(F))is closed in Z for all F* in .Thus, -one, onto and both Z and w X are compact Hausdorff, by Theorem 17.14 in [1, p.123],  is a homeomorphism.Finally, from the definitions of and , it is clear that Let (X, ) be the Stone-Čech compactification of a Tychonoff space X, to  as above.Then (X, ) is homeomorphic to h  , w  X k such that
2,791.6
2013-08-29T00:00:00.000
[ "Mathematics" ]
An exhaustive review of the stream ciphers and their performance analysis ABSTRACT INTRODUCTION Widely used applications like big data, cloud computing, and e-commerce have resulted in a growing demand for efficiency and security in data processing. The cryptography core and information security create many opportunities along with real-time challenges. Providing high-level security with a high-speed architecture at a low implementation cost, while respecting low-resource constraints, has become a prominent demand for most applications. Wireless networks, device authentication, and radio-frequency identification (RFID) systems have low-resource constraints with low-cost implementation requirements. Lightweight block and stream ciphers protect information from attackers and provide data integrity and confidentiality [1], [2]. Block ciphers are the primary choice in lightweight cryptography (LWC) and are easily designed with rich functionality. However, block ciphers are further used in communication protocols in ways that cannot be replicated with stream ciphers. The need for an initialization phase before communication begins is a significant drawback of stream ciphers. Stream ciphers are suited to most application requirements where the input text is continuous or of unknown length. Stream ciphers are compact, easy to design, fast, low in power utilization, and suitable for low-constrained devices [3], [4]. Stream ciphers have received more attention in recent years due to various research initiatives to develop secure stream ciphers. Research activities and competitions have been conducted in past decades to find novel architectures. Among these efforts is the ECRYPT stream cipher project (eSTREAM) competition, held by the European network of excellence for cryptology (ECRYPT) from 2004 to 2008. This competition promoted the search for compact and novel stream ciphers for a wide range of uses. Later, the international organization for standardization and the international electrotechnical commission (ISO/IEC) standardized stream ciphers for LWC in the ISO/IEC 29192-3:2012 standard. Many stream cipher proposals and concepts have been put forward [5], [6]. Authentication is one of the prime security features to be considered in most applications, apart from confidentiality and data integrity. The competition for authenticated encryption (AE): security, applicability, and robustness (CAESAR) was run by the cryptographic research community to find suitable cipher algorithms with advantages over the advanced encryption standard (AES) [7]. Hardware-based stream ciphers are well-suited to low-resource-constrained devices and use direct cryptographic functions and basic operations without additional components [8]. Stream ciphers are constructed for software and hardware acceleration using cryptographic functions, feedback shift registers, and basic operations [9], [10]. The cryptographic functions are categorized into either Boolean or vectorial functions with different cryptographic properties. The shift registers are divided into the linear feedback shift register (LFSR) and the non-linear feedback shift register (NFSR) based on the feedback mechanism. In addition, XOR and rotation operations are commonly used essential functions when constructing stream ciphers.
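Since both the LFSR/NFSR-based and the ARX-based designs reviewed below ultimately produce a keystream that is XORed with the message, a minimal sketch of this common outer structure may help; the toy keystream generator here is a placeholder for illustration only, not any specific cipher from this review.

```python
def xor_stream(data: bytes, keystream) -> bytes:
    """Generic stream-cipher outer loop: output = data XOR keystream.

    Encryption and decryption are the same operation because XOR is its
    own inverse; the keystream generator (LFSR, NFSR, ARX, ...) is the
    cipher-specific part.
    """
    return bytes(b ^ next(keystream) for b in data)


def toy_keystream(seed: int):
    """Toy keystream for illustration only -- NOT secure; a stand-in for a
    real generator such as Trivium, Grain, or ChaCha."""
    state = seed & 0xFFFFFFFF
    while True:
        state = (1103515245 * state + 12345) & 0xFFFFFFFF  # simple LCG
        yield (state >> 16) & 0xFF


msg = b"hello stream ciphers"
ct = xor_stream(msg, toy_keystream(2024))
assert xor_stream(ct, toy_keystream(2024)) == msg   # same keystream decrypts
```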
The performance characteristics of various stream ciphers are examined in this paper, both from a hardware and software perspective.The approach of the stream cipher is described in section 2, as well as an overview of its design with tabulation.In section 3, we'll go over security attacks and countermeasures.In section 4, the performance realization and its application usage are listed.The future trends of current stream ciphers are highlighted in section 5. Finally, the overall work in section 6 concludes. STREAM CIPHERS The stream ciphers are an alternative branch of the symmetric cryptosystem, which provides better speed and scalability for hardware-based approaches.The stream ciphers are classified based on functionality, represented in Figure 1.The LFSR based Stream ciphers are bit-oriented types.The key generation units are designed using a more significant number of LFSR units.An example of a combiner generator with non-linear features in E0 is represented in Figure 2. The E0 is Bluetooth encryption that supports point-to-point communications in wireless networks.The E0 mainly contains four LFSRs with 4-bit memory.The memory bits are updated using C functions.The E0 uses a 128-bit key with a 74-bit initialization vector (IV).The keystream receives the composite output with a feedback mechanism.The E0 is used mainly in Bluetooth combiner with alternative mapping correlation analysis [11], [12].With an irregular clock control mechanism, the clock controller generator introduces nonlinear properties.In Figure 3, the clock controller generator is depicted.Two LFSR sets and a feedback controller are the key components.An example of a clock-controlled generator is A5/1 based stream cipher.This encryption technique is used in most global systems for mobile communications (GSM) based phones for air transmission encryption.The A1/5 cipher uses a 54-bit or 64-bit secret key for keystream generation and avoids the reduction of output efficiency [11].Mutual irregular clocking keystream generator is also called MICKEY stream cipher, which provides low complexity and fewer resource constraints with high security on the hardware platform.The MICKEY cipher uses an irregular clocking mechanism of shift registers with an optimization mechanism against the attacks.The MICKEY cipher generally uses an 80-bit key, whereas the MICKEY 2.0 cipher uses a 128-bit secret key with an IV of 80/128-bit [13]- [15].The MICKEY cipher uses two registers (R and S) with a feedback control mechanism to generate the keystream bit. 
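The bit-oriented designs above (E0, A5/1, MICKEY) are all built around feedback shift registers. A minimal Fibonacci-style LFSR in Python may make the keystream mechanics concrete; the register length and tap positions below are arbitrary illustrations, not the parameters of any of the ciphers just mentioned.

```python
def lfsr_keystream(state: int, taps: tuple, length: int):
    """Fibonacci-style LFSR: the feedback bit is the XOR of the tapped bits.

    state  -- nonzero initial register contents (key/IV loading is cipher-specific)
    taps   -- bit positions combined by XOR to form the feedback
    length -- register length in bits

    Real bit-oriented ciphers combine several such registers through a
    nonlinear Boolean function and/or irregular clocking (cf. E0, A5/1, MICKEY).
    """
    while True:
        out = state & 1                        # output bit before shifting
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1             # XOR of the tapped bits
        state = (state >> 1) | (fb << (length - 1))
        yield out


# Illustrative 16-bit register with arbitrary taps (NOT a published cipher).
gen = lfsr_keystream(state=0xACE1, taps=(0, 2, 3, 5), length=16)
print([next(gen) for _ in range(16)])
```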
Stream Ciphers Classification The word-oriented stream ciphers work on 8-bit to 32-bit words, combining LFSRs with a finite state machine (FSM) or a non-linear filter generator. A representative example is the SNOW series (1.0, 2.0, and 3.0). The SNOW 1.0 cipher uses a 128-bit secret key with a 32-bit word size. The SNOW-based stream cipher representation is shown in Figure 4. It contains two registers, a finite field operation with a feedback mechanism, a non-linear FSM with two memory units, and an XOR operation at the output to generate the running key [16], [17]. The SNOW 2.0 cipher uses a 128/512-bit secret key with an IV of 128-bit. The ZUC is a stream cipher used as a 3rd generation partnership project (3GPP) encryption standard; it was developed in China for inclusion in the 4th generation (4G) or long term evolution (LTE) project. The ZUC cipher uses a 128-bit secret key size with an IV of 128-bit, and it is built on an LFSR-based architecture. The ZUC architecture mainly includes an LFSR layer, a bit reorganization layer, a non-linear function, and key loading. Security analysis of the ZUC cipher focuses mainly on timing attacks [18]. The SOSEMANUK is one of the software-based eSTREAM projects; it uses a 128/256-bit secret key with an IV of 128-bit for a 32-bit word length. The cipher reuses most of the features and working principles of SNOW 2.0 together with SERPENT-based transformations. Its efficiency and security are improved over the SNOW 2.0 stream cipher [19]. The GRAIN family combines an LFSR and an NFSR to enhance the cryptographic properties. The GRAIN family targets hardware-based constrained environments to improve the gate count, memory, and power consumption. The GRAIN family has three stream ciphers: GRAIN-v1, GRAIN-128, and GRAIN-128a. GRAIN-v1 uses an 80-bit key with an 80-bit IV using an NFSR and an LFSR [20]. GRAIN-128 uses a 128-bit key with an IV of 96-bit [21]. The GRAIN stream cipher mainly has two shift registers, an LFSR and an NFSR, and output functions, as represented in Figure 5. The key initialization mechanism, which mixes the IV using XOR operations, is crucial for the attack scenarios realized against the GRAIN cipher. Small-state stream ciphers, which use the key continuously during keystream generation to reduce hardware complexity, are illustrated in Figure 6. The TRIVIUM series [22] stream ciphers are hardware-oriented, with a simple architecture of three interconnected NFSRs with low-degree feedback mechanisms and quadratic filter functions, as represented in Figure 7. The TriviA cipher [23] generates the keys for ciphertext and tags and provides independent hash pairs to calculate the tag. The "encode-hash combine" or ECH hash creates distinct hash pairs. TriviA provides a 124-bit security key for authentication and a 128-bit key for privacy. TRIVIUM is one of the eSTREAM finalist hardware stream ciphers and uses an 80-bit secret key size with an IV of 80-bit [24]. The TRIVIUM cipher can generate up to 2^64 keystream bits with a 288-bit internal state. The cipher addresses bit-oriented requirements with strong security and performance efficiency. A fast and secure hardware-based AE scheme was introduced as TriviA, which uses a 128-bit secret key size with an IV of 80-bit. Fruit-2.0 is an ultra-lightweight stream cipher with a more straightforward internal state system [25]. The Fruit 2.0 cipher has an 80-bit secret key and a 70-bit IV. Fruit 2.0 strengthens resistance against related-key attacks with a modified initialization process.
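TRIVIUM's update, mentioned above as three interconnected shift registers with low-degree (quadratic) feedback, is compact enough to sketch in full. The following Python rendering follows the commonly published eSTREAM description of Trivium; the tap positions and the 4 × 288 warm-up rounds are quoted from memory of that specification and should be verified against [22], [24] before relying on them.

```python
def trivium_keystream(key80: list, iv80: list, nbits: int) -> list:
    """Bit-level Trivium sketch (values 0/1). Follows the commonly published
    eSTREAM description: 288-bit state, 4*288 warm-up rounds, then output."""
    # State s[1..288]; index 0 is unused to match the usual 1-based notation.
    s = [0] * 289
    s[1:81] = key80                      # register A (93 bits): key, rest zeros
    s[94:174] = iv80                     # register B (84 bits): IV, rest zeros
    s[286] = s[287] = s[288] = 1         # register C (111 bits) ends with 1,1,1

    out = []
    for i in range(4 * 288 + nbits):
        t1 = s[66] ^ s[93]
        t2 = s[162] ^ s[177]
        t3 = s[243] ^ s[288]
        if i >= 4 * 288:                 # warm-up rounds produce no output
            out.append(t1 ^ t2 ^ t3)
        t1 ^= (s[91] & s[92]) ^ s[171]   # quadratic feedback terms
        t2 ^= (s[175] & s[176]) ^ s[264]
        t3 ^= (s[286] & s[287]) ^ s[69]
        # Rotate each register by one position and insert the feedback bits.
        s[1:94] = [t3] + s[1:93]
        s[94:178] = [t1] + s[94:177]
        s[178:289] = [t2] + s[178:288]
    return out


# 80-bit key and IV as bit lists (all zeros here, purely illustrative).
ks = trivium_keystream([0] * 80, [0] * 80, nbits=64)
print(ks[:16])
```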
The Platelet stream cipher is well suited for low-constraint devices and does not rely on non-volatile memory (NVM) [26]. The Platelet cipher addresses a security weakness by storing the key in non-rewritable NVM and rewritable NVM. The Platelet cipher uses a 128-bit secret key size with an IV of 40-bit. The Platelet uses double-layer LFSRs combined with an NLFSR as the internal mechanism for key storage. The QUAVIUM cipher [27] is an adaptation of TRIVIUM intended to improve performance. The QUAVIUM cipher uses a 128-bit secret key size with an IV of 80-bit. The QUAVIUM uses shift registers and k-order primitive polynomials with a three-round structure for keystream generation. Kreyvium is a low-depth stream cipher like TRIVIUM and is used for homomorphic ciphertext compression and evaluation [28]. The Kreyvium cipher uses a 128-bit secret key size with an IV of 80-bit. The Kreyvium cipher adds key and IV registers to the 288-bit internal state without increasing the multiplicative depth compared with the original TRIVIUM cipher. PANAMA is a combination of fast hashing and stream cipher cryptographic modules; it achieves high performance with a low operation count and a high degree of parallelism [29]. The module reaches 4.7 bits/cycle in stream cipher mode and 5.1 bits/cycle in hashing mode. PANAMA performs high-end parallel tasks and is suitable for very long instruction word (VLIW) based processors. The PANAMA cipher uses a 256-bit secret key size without an IV process. Enocoro and MUGI are two typical examples of PANAMA-like stream ciphers suitable for software and hardware implementations. Enocoro uses an 80/128-bit secret key size with an IV of 64-bit. MUGI uses a 128-bit secret key size with an IV of 128-bit. The random-shuffled stream ciphers use random-shuffled tables to generate random permutations and achieve high efficiency in software environments. The RC4 stream cipher [30] is byte-oriented and has been analysed with respect to state recovery attacks. RC4 uses a random table containing the values 0 to 255 in permuted order, with two index pointers used to select the byte swaps. The RC4 cipher uses an 8- to 2048-bit secret key size without an IV process. The typical RC4-based keystream generation is illustrated in Figure 8. The numerical table is initialized by key mixing, followed by the keystream generation phase. The table is modified in each iteration, which generates the output keystream. However, RC4 is still weak against distinguishing attacks. RC4 has been adapted into a new version, the RC4 hardware acceleration suite (RC4-A) [31], to speed up the cipher process in ASIC environments. The RC4-A provides better flexibility, performance, and resource minimization in hardware environments. The performance of the RC4-A can be enhanced further using multiported static random access memory (RAM), loop unrolling, state replication, and splitting. The HC-128 is a simple, secure, and software-efficient stream cipher and uses a 128-bit secret key size with an IV of 128-bit [32]. It can generate up to 2^64 keystream bits from each IV/key pair. In contrast, HC-256 [33] uses a 128-bit secret key size with an IV of 256-bit. The HC-128/256 is suitable for modern superscalar microprocessors and supports a high level of parallelism.
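The table-shuffling mechanism behind RC4 described above, a 256-entry permutation updated by two index pointers, is short enough to show as the textbook KSA/PRGA pair. It is included only to illustrate the mechanism; as noted above, RC4 is weak against distinguishing attacks and should not be used in new designs.

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    """Textbook RC4: key-scheduling algorithm (KSA) followed by the
    pseudo-random generation algorithm (PRGA)."""
    # KSA: initialize and key-dependently shuffle the 256-entry table.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]

    # PRGA: two index pointers keep swapping entries and emit output bytes.
    out = bytearray()
    i = j = 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)


print(rc4_keystream(b"Key", 8).hex())
```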
The addition rotation XOR (ARX) based ciphers are one of the modern stream ciphers, and their round function contains hybrid operations like modulo addition, interworld rotation, and XOR operation.The ARX ciphers are simple, fast, easy software implementation, and run constantly.Salsa20 and Chacha ciphers use 32-bit module addition, rotation, and XOR operations with the help of the hash function.The Salsa20 is the first eSTREAM based software project, and the Chacha cipher is a modified version of Salsa20 with a new round function that creates more diffusion.Salsa20 cipher uses a 128/256-bit secret key with an IV of 64-bit.Chacha cipher uses a 256-bit secret key with an IV of 32-bit.The Salsa20 cipher is typically faster than the AES cipher.Chacha is a new variant of salsa20, designed to improve the diffusion per round and also used to improve the cryptoanalysis resistance [34], [35].The ARX-based round function for Chacha is illustrated in Figure 9.The Rabbit stream cipher was one of the fast encryption standards in 2003 and an eSTREAM-based software project finalist [36].It uses a 128-bit secret key size with an IV of 64-bit as an input to generate the 128-bit random output data in each iteration.The Rabbit examines the security for algebraic and correlation attacks by arranging the key/IV setup parameters.The MORUS is an authenticated stream cipher with 128/256 bits of secret keys and a 128-bit IV [37].MORUS v1 uses the status update function to avoid collisions during the initialization and encryption/decryption stages. The sponge structural-based stream ciphers are designed based on sponge structure with LFSR or permutations, and one of its internal state outputs is directly considered a keystream sequence.The KECCAK and ASCON are examples of sponge structural-based stream ciphers.The KECCAK is a sponge construction type cipher that uses more random permutations, allows multiple inputs, and provides any amount of data outputs [38].The KECCAK cipher uses a 128-bit secret key without an IV process.The KECCAK cipher provides better authentication features without using any additional authentication module.The ASCON is one of the CAESAR finalists' ciphers and known AE modules [39].The ASCON cipher uses a 128-bit secret key with an IV of 128-bit.The ASCON uses a substitution permutation network (SPN) structure with a fixed permutation of an iterative process.It performs both software and hardware implementations with better performance and cost.The ASCON is best known for cube and key recovery attacks.The A2U2 is one of the AE ciphers commonly used in printed electronics-based RFID tags [40].The A2U2 uses two NFSRs followed by a key-bit mixing mechanism with a shrinking filter to generate the ciphertext.A2U2 cipher uses a 56-bit secret key without an IV process.The welch gong (WG)-7 is a lightweight stream cipher used for RFID authentication and encryption [41].WG-7 cipher uses an 80-bit secret key with an IV of 81-bit.The WG-7 consists of 23-stage LFSRs for keystream generation.The WG-7 is secure against time/data/memory trade-off attacks.The WG-8 is a lightweight stream cipher used for low resource constraints smart devices [42].To generate the ciphertext, the WG-8 uses 20-stage LFSRs with feedback polynomial and transformation modules.The WG-8 cipher uses an 80-bit secret key with an IV of 80-bit.The WG-8 is capable of resisting the most common security attacks.The hummingbird (HB) is an ultra-lightweight stream cipher commonly used in high-volume consumer devices like smart cards, RFID 
tags, and wireless devices [43]. The HB cipher uses a 16-bit block size and a 64/256-bit secret key with an IV of 64-bit. The HB encryption mainly contains four 16-bit block ciphers, followed by an internal state register update unit and a 16-bit LFSR module. The 16-bit block cipher is constructed using a typical substitution permutation (SP) network. The HB-2 is a lightweight authenticated encryption module targeted at low-constrained devices [44]. The HB 2.0 cipher uses a 128-bit secret key with an IV of 64-bit. GRAIN-128a is a new version of GRAIN-128 with authentication features [45]. The GRAIN-128a cipher uses a 128-bit secret key with an IV of 96-bit. GRAIN-128a was designed to strengthen resistance against all known attacks. The Rabbit-MAC is a lightweight AE module commonly used in wireless sensor networks (WSNs) [46]. The Rabbit-MAC cipher uses a 128-bit secret key without an IV process and generates 128 bits of pseudo-random data at the output in each iteration. The pseudo-random data are XOR'ed with the plaintext/ciphertext to perform encryption/decryption in Rabbit-MAC. ACORN is a lightweight authenticated cipher and uses a 128-bit secret key with an IV of 128-bit [47]. The authentication tag length must be less than or equal to 128 bits. Six LFSRs are concatenated, followed by feedback bits, in the ACORN structure. ACORN is capable of resisting traditional and statistical attacks. The Sablier is one of the hardware-based stream ciphers built with authentication features [48]. The Sablier v1 cipher uses an 80-bit secret key with an IV of 80-bit. The Sablier performs the authentication mechanism using shift registers and accumulators in keystream generation. The BEAN is a lightweight stream cipher module designed based on the GRAIN cipher [49]. The BEAN cipher uses two FCSRs followed by an S-box and filtering. The BEAN cipher uses an 80-bit secret key with an IV of 64-bit. The BEAN cipher uses fewer hardware resources than the GRAIN cipher. The BEAN cipher can resist most traditional attacks. The new scalable stream cipher based on rule 30 is CAR30. The CAR30 cipher is constructed using cellular automata (CA) rule 30 with a maximum-length CA, followed by an XOR operation to generate the ciphertext. CAR30 has been implemented on both software and hardware platforms. In general, CAR30 can scale to any key and IV size. Most current work on CAR30 uses a 128-bit secret key with an IV of 120-bit [50]. CAR30 provides better throughput than the GRAIN and TRIVIUM ciphers. TinyStream is a new lightweight stream cipher algorithm for WSNs. The TinyStream cipher uses a 128-bit secret key without an IV process [51]. The TinyStream cipher is constructed using a tree parity machine (TPM) with a loop-system mechanism. The summary of the stream cipher types and their algorithms is tabulated in Table 1. The list of the stream ciphers with their functionality is tabulated in Table 2. The stream cipher type, secret key size, and IV size are given in the tabulation.
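The ARX construction introduced earlier for Salsa20/ChaCha reduces to a quarter-round built only from 32-bit addition, rotation, and XOR. A small Python sketch of the ChaCha quarter-round follows; the rotation constants (16, 12, 8, 7) are the commonly published ones and are quoted from memory, so they are worth checking against [34], [35].

```python
MASK32 = 0xFFFFFFFF

def rotl32(x: int, n: int) -> int:
    """32-bit left rotation."""
    return ((x << n) | (x >> (32 - n))) & MASK32

def chacha_quarter_round(a: int, b: int, c: int, d: int):
    """One ChaCha quarter-round: only add, rotate, and XOR (the ARX toolbox).
    ChaCha applies this to a 4x4 matrix of 32-bit words, column-wise and
    diagonal-wise, for 20 rounds in the usual variant."""
    a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
    return a, b, c, d


# Mix four arbitrary 32-bit words (illustrative values only).
print([hex(w) for w in chacha_quarter_round(0x11111111, 0x01020304,
                                            0x9B8D6F43, 0x01234567)])
```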
SECURITY ATTACKS AND COUNTERMEASURE METHODS This section analyzes different types of security attacks and their countermeasure methods.The attacker's main aim is to use cipher designs to find the secret key used in the encryption or decryption process.Two attacks happen: passive attacks and active attacks.Passive attacks occur in the initialization or output phases.The attacker retrieves the information, copies them, and uses it for harmful or malicious purposes.Whereas active attacks, the attackers are trying to recreate the original data in the form of an insert, replay or delete.These two attacks will modify the key information, or system resources will be damaged. Furthermore, these attacks are extensively classified based on cryptography usage.Exhaustive key search is an attack (brute force) where attackers try to find all the possible core combinations to find the primary secret key.This type of attack's computational complexity remains lower and possesses more on plaintext and ciphertexts.The exhaustive key search is analyzed in detail using the TRIVIUM cipher [22], [24] with key recovery.Correlation attacks realize the cipher's linear function and calculate the keystream based on output observation.Algebraic attacks use the algebraic equations of the main cipher and are used further to generate the key bits.Similarly, linear attacks are also correlated with the linear functions of the defined keystream bits and initialization bits. Distinguishing attacks are a type of attack in which attackers try to differentiate the keystream information from a random sequence feature.These attacks may recover the complete key details in the future.The side-channel attack is a type in which the attacker retrieves the data information from the cipher while calculating the power consumption or electromagnetic emission process.In this attack, the attacker hacks the complete information from the internal operations of the cipher technique.The related-key attack is a type of target attack happening during the re-initialization process of the cipher design operation.The attacker will generate the related keys only if the cipher technique does not use the non-linearity feature and is directly related to plain text and new-key generation.Similarly, the chosen-plain text or IV attacks use the key scheduling weakness and retrieve the useful initial state information from the main memory.The basic structure of the cipher realizes the time, memory-data trade-off attacks, and summarization of the related results in a larger table. 
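To put the exhaustive key search (brute force) discussion above into concrete numbers, the expected work is roughly half the key space; the helper below uses a made-up guess rate purely for illustration.

```python
def brute_force_years(key_bits: int, guesses_per_second: float) -> float:
    """Expected wall-clock time for exhaustive key search, assuming on
    average half the key space (2**(key_bits - 1) trials) must be covered."""
    expected_trials = 2 ** (key_bits - 1)
    seconds = expected_trials / guesses_per_second
    return seconds / (3600 * 24 * 365)


# Hypothetical attacker testing 1e12 keys per second (illustrative only).
for k in (54, 64, 80, 128):
    print(f"{k}-bit key: ~{brute_force_years(k, 1e12):.3g} years")
```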
PERFORMANCE ANALYSIS AND APPLICATIONS This section discusses the hardware realization of the stream ciphers and their performance analysis. Most of the authors implemented the stream ciphers on the field programmable gate array (FPGA) platform. The stream ciphers are constructed as macroblocks in a hardware description language (HDL) and later implemented on an FPGA. The performance metrics include area in terms of slices, maximum operating frequency (Fmax) in MHz, latency in clock cycles (CC), throughput (Mbps), and efficiency (Mbps/slice). The design module uses programmable logic blocks and programmable interconnects on the FPGA. The FPGA contains configurable logic blocks (CLBs), input-output blocks (IOBs), dedicated multipliers, a digital clock manager (DCM), and block RAMs. The CLBs are constructed from slices and lookup tables (LUTs). The definition of a slice varies with the FPGA device selected. For example, one slice contains a minimum of two 4-input LUTs, flip-flops, adder logic, and multiplexers on a Spartan-3 FPGA. The LUTs hold the design logic as Boolean equations in truth-table form. The maximum operating frequency is obtained after the synthesis operation, based on the design architecture, using the Xilinx tool. The latency is measured as the number of clock cycles the design needs to generate its first output in simulation. The throughput is computed from the input data width, frequency, and latency: throughput = (input width * Fmax)/latency. The hardware efficiency is measured as throughput per slice. The summary of the performance analysis of the other stream ciphers is listed in Table 4. FUTURE TRENDS Keystream generation is an essential part of stream ciphers and the main functional requirement for most application domains. The premise of stream ciphers remains the same: to deliver high performance and efficiency comparable to block ciphers. Recent trends towards IoT indicate that millions of embedded devices are interconnected, with constrained resources and mechanisms for interacting with their users. Social mobility and smart city applications need a distributed framework to transmit large amounts of cipher data securely. Most present industries, such as 5th-generation wireless networks, vehicular ad-hoc networks, smart-camera-based urban surveillance, and green networking, will focus more on security to protect their data from attackers.
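Returning to the hardware metrics defined in the performance-analysis section above, the throughput and efficiency definitions translate directly into a few lines of Python; the example numbers are invented placeholders, not values from Table 4.

```python
def fpga_throughput_mbps(input_width_bits: int, fmax_mhz: float, latency_cc: int) -> float:
    """Throughput = (input width * Fmax) / latency, as defined in the text.
    With Fmax in MHz and the width in bits, the result is in Mbps."""
    return input_width_bits * fmax_mhz / latency_cc


def fpga_efficiency(throughput_mbps: float, slices: int) -> float:
    """Hardware efficiency = throughput per slice (Mbps/slice)."""
    return throughput_mbps / slices


# Placeholder numbers for a hypothetical 1-bit-per-cycle cipher at 200 MHz
# occupying 50 slices (NOT taken from the paper's tables).
tp = fpga_throughput_mbps(input_width_bits=1, fmax_mhz=200.0, latency_cc=1)
print(f"throughput = {tp} Mbps, efficiency = {fpga_efficiency(tp, 50)} Mbps/slice")
```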
Stream ciphers are a better option than block ciphers for streaming applications. However, research is still improving cipher usage in a well-organized manner. Currently, parallel computing systems are widely used in most embedded system applications, so incorporating a lightweight stream cipher with a high degree of parallelism while maintaining the desired performance is challenging. Most current stream ciphers focus on basic operations within their cipher structures and can resist most existing attacks. However, these ciphers must incorporate most cryptographic properties for further security evaluation and performance analysis. Internal state architecture, resource utilization, and power consumption should remain the focus when implementing lightweight ciphers. Implementing AE methods using stream ciphers is still in demand because of current trends in IoT usage. Improving security features with stream ciphers in cloud computing applications remains an open research area. CONCLUSION As embedded or IoT gadgets increase in our daily lives, pervasive computing becomes a reality. Networked computers have undergone a significant change in their architecture, usage, and number, and the security of those resources and of the data kept on or transmitted to them must be protected. This manuscript presents an exhaustive review of stream ciphers for low-constrained devices. The design of traditional and benchmarked stream ciphers and of authenticated ciphers is analyzed. The stream ciphers that resist the corresponding attacks are highlighted. The implementation results of these stream ciphers on the FPGA platform are examined in detail. From this, the GRAIN-128, GRAIN-128a, TRIVIUM, and MICKEY stream ciphers provide better security and performance results than other ciphers. The most appropriate stream ciphers for the corresponding application requirements are highlighted based on their cryptographic functionalities. The requirements and systematic plans for future designs are highlighted. Figure 1. Classification of the stream ciphers. Table 4. Performance analysis of stream ciphers. Table 6. Applications of the stream ciphers.
5,415.2
2024-07-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Self-assisted optothermal trapping of gold nanorods under two-photon excitation We report a self-assisted optothermal trapping and patterning of gold nanorods (GNRs) on glass surfaces with a femtosecond laser. We show that GNRs are not only the trapping targets, but also can enhance the optothermal trapping of other particles. This trapping phenomenon is the net result of thermophoresis and a convective flow caused by localized heating. The heating is due to the conversion of absorbed photons into heat at GNR’s longitudinal surface plasmon resonance (LSPR) wavelength. First, we investigated the optothermal trapping of GNRs at their LSPR wavelength on the glass surface with as low as 0.5 mW laser power. The trapping range was observed to be larger than a typical field of view, e.g. 210 µm  ×  210 µm here. Second, by adjusting the distance between the laser focus and the glass surface, ring patterns of GNRs on the glass surface were obtained. These patterns could be controlled by the laser power and the numerical aperture of the microscope objective. Moreover, we examined the spectral emission of GNRs under different trapping conditions using the spectral phasor approach to reveal the temperature and association status of GNRs. Our study will help understanding manipulation of flows in solution and in biological systems that can be applied in future investigations of GNR-induced heating and flows. Introduction As imaging and potential thermal therapy agents, gold nanorods (GNRs), with plasmon-resonant absorption in the near-infrared region, have gained a significant amount of attention in the biomedical and imaging areas. For example, in vitro and in vivo twophoton luminescence imaging of GNRs have been already reported extensively [1][2][3]. This interest is mainly due to GNR's tunable excitation wavelength, i.e. longitudinal surface plasmon resonance (LSPR) wavelength. This LSPR wavelength can be easily tuned throughout the near-infrared (NIR) region for a better penetration depth in in vivo experiments, e.g. from ~600 nm-1400 nm by varying the aspect ratio during GNR synthesis [4][5][6]. Excitation at the LSPR can greatly enhance the luminescence from NRs and improve the imaging quality accordingly. GNRs can also enhance signal generated from nearby fluorescence dyes or quantum dots [7][8][9][10]. Furthermore, it has been reported that over 96% of the absorbed photons by GNRs can be converted into heat via non-radiative electron relaxation [11][12][13]. Therefore, GNRs are suitable for photothermal agents in localized NIRinduced hyperthermia [12,[14][15][16]. Consequently the handling and manipulation of GNRs becomes an interesting and important topic, which offers great opportunities for targeted nanoscale drug delivery [17][18][19][20][21][22][23]. Optical trapping of single NRs due to strong electro magnetic (EM) field enhanced trapping force have been previously reported [18,19,22,[24][25][26]. Pelton et al found that GNRs can be trapped and manipulated by a continuous-wave laser beam slightly detuned to the long wavelength side of the GNR longitudinal plasmon resonance [18]. Although most optical trappings of GNRs were done with continuous-wave lasers, Gu et al demonstrated that the trapping force can be enhanced by two-photon absorption of GNRs [22]. Moreover, they reported a snowball effect which is caused by a plasmon-mediated optothermal attracting force. 
They proposed that the large trapping range is due to the thermal force that decays slower as a function of distance than the EM force [27]. These findings confirm a previous discussion that the thermal effect at LSPR significantly influences the trapping capability [19,27]. The optothermal trapping provides a different approach for manipulation of small particles other than optical trapping. Detailed theoretical/ experimental studies and simulations can be found in references [28][29][30][31][32][33] and more. Traditionally, an IR beam in the range of ~1400 nm-~1600 nm is used to generate the thermal trapping due to its high optical absorption in water, e.g. the water absorption at 1400 nm is about 3 orders of magnitude larger than the absorption at 800 nm [34]. Due to the tightly focused laser beam, a temperature gradient inside a colloid solution can be created. As a result, thermophoresis and convection flow can work together to trap and redistribute particles [30]. Recently we reported that GNRs can undergo thermal trapping with a femtosecond laser at LSPR wavelength [35]. Given that the water absorption of photons at 840 nm is small, GNRs effectively convert absorbed photons into heat and lead to the thermal trapping without using the water absorption window. Here we explore the optothermal trapping and patterning of GNRs on glass surfaces. During this process, GNRs are not only the trapping targets, but also can enhance the trapping of other particles. We demonstrate that trapping of GNRs on the glass surface can be obtained with only 0.5 mW power at LSPR wavelength. The trapping range over about 200 µm confirms the contribution of the convective flow. By adjusting the distance between the laser focus and glass surfaces, ring patterns of GNRs can be observed on the surfaces. These ring patterns can be controlled by the trapping laser power and the numerical aperture of the objective. In addition, spectral emission shifts are detected that depend on the interaction between GNRs. Our spectral phasor analysis combined with intensity maps could help reveal the physical process of GNR trapping. The advantage of this technique is that low laser powers are used to accumulate the GNRs from a large distance in a wellcontrolled manner. Also it can create a localized trapping spot via GNRs with a positive feedback loop for trapping and heating more particles, which will help the potential applications of GNRs in thermal therapy. Instrument details As schematically shown in figure 1, the optothermal trapping of GNRs is performed on a confocal laser scanning microscope with the addition of two-photon excitation capability as described previously [35]. As shown in figure 1(A), a NIR beam from a MaiTai HP Ti:Sapphire (Newport, USA) laser is coupled into an inverted Olympus FV1000 confocal system (Olympus, USA). A DeepSea unit is used to compensate the group velocity dispersion and an acoustic optical modulator controls the laser power. The NIR laser beam is expanded in order to fulfill the back aperture of the objective lens. The Olympus FV1000 scanner is driven by a third party controller (IOTECH DAQboard 3001) and our software (SimFCS, available from www. lfd.uci.edu/) to gain the full control of the scanning pattern and timing. An Olympus60 × water objective (Olympus UPlanSApo, NA = 1.2) or a 40× water objective (Olympus LUMPlanFl, NA = 0.8) were used to focus the laser beam and to collect the emission signal. 
The signal is reflected to the internal PMT for intensity imaging or coupled into a 200 µm fiber for hyperspectral imaging. In hyperspectral imaging analysis, the phasor approach is used to identify components in the image based on spectral changes [35]. Two trapping modes have been used for this work. In the 'scanning-trapping' mode, the laser beam is coupled into the Olympus laser scanning unit via its NIR port. A RDM690 excitation dichroic mirror reflects the excitation beam into the scanning path and allows the collected luminescence signal passing through to the detectors in the descanned path. In this mode, the NIR beam has both imaging and trapping functions. The trapping is done by focusing the beam at the center of the field of view (FOV) for a certain period of time, e.g. 30 s, and then we raster scan (16 µs/pixel) the same laser beam with a lower power (0.5 mW) to acquire an image of GNRs on the glass surface after trapping. This raster scan is too fast to accumulate GNRs on the glass surface. In the other mode, i.e. the 'trapping only mode', the NIR beam is coupled into Olympus microscope IX81 body via the left port. A second beam expander is used to fill the back aperture of the objective and adjust the trapping position in the vertical direction. The NIR beam is only used for trapping because it cannot be scanned. A 488 nm laser in the confocal path is then used to generate transmission and confocal images while the NIR trappings beam is on. GNRs have a broad absorption spectrum with two peaks at ~500 nm and a tunable wavelength in IR range (e.g. 840 nm in our study), which allows an easy excitation with any wavelength in VIS and NIR ranges. We used 488 nm Ar + laser and 840 nm Ti:Sapphire for excitation with a 488 nm or a SDM690 dichroic mirror. The emission is also reported to from 400 nm to NIR range [35]. Here an emission bandpass filter of 505-540 nm was used in both trapping modes for collection of luminescence signal. Sample preparation CTAB-capped GNRs with LSPR at 840 nm was purchased from Nanopartz (Loveland, CO). The GNRs axial diameter is 10 nm and the length is 44 nm which leads to an aspect ratio of 4.4. The stock concentration is 5.7 × 10 11 nanoparticles ml −1 . During the study of trapping on or close to the bottom surface, 200 µl of the solution was sonicated for 1 h before being added into an 8-well glass bottom chamber. In the study of the trapping close to the top surface, a sandwich chamber with two parallel glass surfaces was made and filled with GNR solution. The distance between the top and the bottom glass surface was ~80 µm. Results and discussion Self-assisted long range thermal trapping of GNRs According to Hale and Querry, the optical extinction coefficient of water at 840 nm is ~1000 fold lower than at 1400 nm [34]. Therefore the direct heating of water by 0.5 mW laser power is minimal at 840 nm. However the thermal effect could be restored by the heat converted from absorbed photons via GNRs [11,12]. Since GNRs are freely diffusing in water, a GNR can be directly trapped at the laser focus for a short period of time [35], e.g. tens of milliseconds, or simply pass over the illumination region. The absorbed light is converted into heat rapidly and creates a heat spot near the laser focus. The resulting convective flow brings nearby GNRs to the laser focus. If the concentration of GNR is sufficiently high, a positive feedback loop can be established. 
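The spectral phasor approach used for the hyperspectral analysis described above maps each pixel's emission spectrum to a single point in a 2D plot. A minimal sketch of the usual first-harmonic spectral phasor transform is given below; this is the generic textbook form, not code taken from the SimFCS software used in this work.

```python
import numpy as np

def spectral_phasor(spectrum: np.ndarray) -> tuple:
    """First-harmonic spectral phasor coordinates (G, S) of one pixel's
    emission spectrum sampled in N wavelength channels.

    G = sum_k I_k * cos(2*pi*k/N) / sum_k I_k
    S = sum_k I_k * sin(2*pi*k/N) / sum_k I_k

    Narrow spectra map far from the origin, and red-shifted spectra rotate
    the phasor angle, which is how shifted GNR emission shows up in the plot.
    """
    intensities = np.asarray(spectrum, dtype=float)
    n = intensities.size
    k = np.arange(n)
    total = intensities.sum()
    g = np.sum(intensities * np.cos(2 * np.pi * k / n)) / total
    s = np.sum(intensities * np.sin(2 * np.pi * k / n)) / total
    return g, s


# Illustrative Gaussian-like spectrum over 32 channels (made-up data).
channels = np.arange(32)
fake_spectrum = np.exp(-0.5 * ((channels - 12) / 4.0) ** 2)
print(spectral_phasor(fake_spectrum))
```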
When more heat is generated by the accumulation of GNRs at the focus, the trapping range is extended due to the convective flow (figure 1(B)). It has been reported that this effect becomes observable as the concentration increases to 10^3 per nanoliter [22]. In the following measurements, we used the stock concentration of GNRs, i.e. 5.7 × 10^5 nanoparticles per nanoliter. First we show that single GNRs can be trapped in the laser focal spot region on the glass surface. The 840 nm laser beam is focused at the bottom surface of the sample chamber with as low as 0.5 mW average power (figure 2(A)). The laser beam was scanned over 256 × 256 pixels to obtain a 2D intensity image in 2 s and then parked at the center for 30 s to perform the optothermal trapping before another round of raster scanning for 2 s. This process was repeated until 20 frames were taken (movie S1 in supplementary materials (stacks.iop.org/MAF/4/035003/mmedia)). The image at 60 s shows a small accumulation of GNRs in the focal spot (figure 2(B), left panel). However, there is a clear accumulation of GNRs at 240 s (figure 2(B), right panel). We also observed more GNRs in the whole FOV after 240 s in comparison to 60 s. In order to confirm the thermal trapping effect, we increased the trapping time from 30 s to 60 s with the same imaging interval time of 2 s. In other words, we reduced the frequency of taking 2D images, during which the laser beam is moving and the heat accumulated at the center is dissipating until the laser is focused at the center again. Therefore a more obvious trapping effect can be obtained in figure 2(C) (movie S2 in supplementary materials). At 60 s, some GNRs are already accumulated at the focal spot. (Figure 1 caption: The NIR laser beam from a MaiTai HP Ti:Sapphire laser is coupled into an Olympus FV1000 confocal system with an IX81 body. A MaiTai DeepSee unit is placed in front of the MaiTai to compensate for group velocity dispersion. Laser power is controlled by an acousto-optic modulator (AOM). The beam is expanded 3× before a flip mirror which selects the trapping or imaging mode. In the first mode, termed 'scanning-trapping', the beam is coupled to the Olympus scanning box directly and serves as both trapping and imaging beam sequentially. In the second mode, termed 'trapping only', the NIR laser beam is expanded by another 3.3× before being coupled into the left port of the IX81 body and serves as the trapping beam only. In this mode of operation, a 488 nm laser is used to obtain the confocal and transmission images at the same time.) These results demonstrate that: (1) GNRs can be trapped at the center with only 0.5 mW average power, and the optothermal trapping attracts GNRs from a large area as shown in the supplementary movies (movie S1 and S2); our observation confirms that this process is assisted by the convective flow; (2) although heat was dissipating during the 2 s laser scanning, the convective flow is still strong enough to attract more GNRs to the center; (3) the trapping process can be quantified by the average intensities over time, as shown in figure 2(D) (excluding the big clusters, because those particles were aggregated before entering the FOV). A rapid increase of intensity over time implies a positive feedback loop as described earlier by Gu et al [22]. Trapping with less frequent laser scanning, i.e. every 60 s, results in a faster increase of intensities at the same time points in comparison to scanning every 30 s. This observation suggests that less heat dissipated away and a stronger convective flow was obtained.
By introducing another laser for imaging, we can trap GNRs continuously without scanning the trapping laser beam. This is the 'trapping only' mode described in the experimental section. A 488 nm laser was used for both transmission and confocal imaging. The NIR laser at 840 nm with 30 mW power was focused at the center of glass surface for trapping. Using high laser powers will accelerate the trapping process and make the visualization easier. A 5 min video was acquired (Movie S3 in supplementary materials). Figure 3(A) shows selected frames with a FOV of 210 µm × 210 µm. The collected luminescence signal (color coded in green) is from both two-photon excitation at the focal point by the 840 nm laser without scanning and one-photon excitation by the 488 nm laser with raster scanning. From this time series, the increase of green luminescence at the focal region indicates the trapping of GNRs at the focal point, which is consistent with figure 2 above. Although the movement of single GNR is not easy to distinguish in the transmission images, the movement of GNR clusters (one example is circled in red) reveals the contribution of the convective flow figure 3(A). Clearly, this cluster was moved by the flow from outside the FOV (210 µm × 210 µm) to the focal region, i.e. the trapping range is extended more than the FOV due to the convective flow. Figure 3(B) shows detailed behavior of this cluster in the focal region. It was trapped towards the center spot. But within 1 s at focal region, it was levitated and deflected from the top. This observation is consistent with traditional optothermal trapping, but in this work it was obtained with the wavelength at 840 nm. Meanwhile, some clusters are precipitated on the glass surface firmly and cannot be moved until a strong flow can be generated. In summary, the trapping process can be described as following: in the focal spot, the absorbed photons by GNRs are quickly converted into heat which leads to a rapid local temperature increase which causes thermophoresis and convectional flow, figure 1(B). Thermophoresis moves particles from hot to cold regions, whereas the convection flow transports particles downwards or upwards with respect to the focal region. As a net result, more GNRs nearby are attracted to the focal spot and convert more heat into the water to trap even more GNRs. This positive feedback loop eventually will trap GNRs from a large range far beyond the optical focal spot. Interestingly, trapped GNRs should be laterally repelled by thermophoresis and eventually form a ring pattern as discussed earlier by Braun et al, who trapped DNA in a ring pattern with 1480 nm laser [30]. Such a ring structure is only formed on the glass surface when the laser focus is away from the surfaces. Here we focused the trapping beam exactly on the glass surface, the trapping of a GNR in this small spot is highly dynamic and a ring pattern is hard to observe. Instead the ring pattern can be obtained when we shift the focal spot away from the glass surface as described below. Patterning of GNRs on top and bottom surfaces away from the trapping laser spot By focusing the laser beam inside the GNR containing suspension and away from the glass surfaces, we are able to direct the GNRs to form a ring pattern on the glass surface. In the following experiments, a chamber was made with two parallel glass surfaces about ~80 µm apart. The chamber was filled with the stock GNR solution. The trapping beam was 840 nm. 
First, we focused the laser beam 10 µm (z = −10 µm) below the top cover slip (top surface (a) in figure 4(A)) with 138 mW average power. After 2 min, a TPEF image was acquired on surface (a) with 0.5 mW power. Figure 4(B) shows a ring structure of immobilized GNRs. We analyzed the spectra at every pixel of this ring with the spectral phasor approach [35] ( figure 4(C)). The spectral image shows that most immobilized GNRs maintained their spectra as in solution with a central average wavelength of ~500 nm. However, a few red shifted pixels at ~630 nm are seen which suggests that a fraction of immobilized GNRs are damaged or higher order aggregates and the plasmon is changed [35]. A similar procedure was done at the bottom coverslip (bottom surface (b) in figure 4(D)). The trapping laser with 106 mW power was focused at 10 µm above surface (b) (z = 10 µm) for 1 min first and then at 5 µm above surface (b) (z = 5 µm) for another 1 min. The TPEF image on surface (b) shows a double-ring pattern ( figure 4(E)). Spectral phasor analysis (figure 4(F)) shows that GNRs of the double-ring pattern have the same spectra as in suspension with a central emission at ~500 nm. However, some trapped GNR clusters also show red-shifted phasors (~630 nm), which could imply plasmon coupling between GNRs within the clusters. Some GNRs on the surface rings disappeared after a couple of minutes in the absence of the trapping laser (data not shown), which implies that these GNRs are transiently attached to the surface and can diffuse away if the trapping laser is turned off. The above trapping and patterning phenomenon can be explained by the thermophoretic depletion and the convectional flow, figure 1(B). In our setup, the flow of particles can be easily visualized by the transient immobilization of the particles on the glass surfaces placed at a distance from the laser focal point. Size control via different laser powers and objectives with different numerical apertures The size of the ring patterns can be controlled by varying laser power, the numerical aperture of the objective and the distance between the focal point and the surface. In one measurement, GNRs were exposed with 15 mW for 1 min and 114 mW for 10 s respectively, at z = 5 µm above surface (b). Results are shown in figures 5(A) and (B). The ring diameter ( figure 5(A)) is ~5.6 µm at 15 mW and ~9.4 µm at 114 mW ( figure 5(B)). GNRs moved to the surface by convective flow and deposited in a region which depends on the net result of thermophoresis and the convective flow, figure 1(B). In another measurement, we used objectives with different numerical apertures and also tested different distances between the focal point and the glass surface. A 60 × water objective (NA = 1.2) was used to focus the 114 mW laser beam at z = 5 µm for 2 min and at z = 10 µm for 30 s. A double-ring pattern was then obtained in figure 5(C). The outer diameter is ~20 µm and the inner diameter is ~10 µm. In comparison, a 40 × water objective (NA = 0.8) was used to focus the 114 mW beam at z = 5 µm for 3 min and at z = 10 µm for 30 s. A double-ring pattern is shown in figure 5(D) with the outer diameter ~14 µm and the inner diameter ~9 µm. Redistribution after trapping and melting of GNRs with excessive laser heating In the previous section we proved that the formation of ring patterns can be controlled by varying trapping conditions. Interestingly we found that applying the trapping laser at different z planes on an already formed ring can easily redistribute GNRs on the ring. 
An example is shown in figures 6(A) and (B). The trapping was performed with 114 mW laser power at z = 10 µm for 2 min (figure 6(A)) followed by 0.5 mW power at z = 0 µm (i.e. glass surface) for another 5 min (figure 6(B)). Clearly, by shifting the focus from z = 10-0 µm, the trapped GNRs on the ring were driven towards the center and a new accumulation of GNRs at the focal region is observed. With this procedure, more complicated patterns can be made since patterns can be altered after formation. However during the trapping, GNRs can also be damaged with excessive heating. For example, figure 6(C) shows damage or melting of GNRs after a 5 min continuous experiment with 50 mW laser power tightly focused at z = 0 µm, i.e. on the glass surface. We found that: (1) the GNRs were trapped around the focused area and extended to the whole FOV (66 µm × 66 µm); (2) The corresponding colorcoded spectral phasor plot in figure 6(D) shows that the spectra of most GNRs were shift to red with the central emission wavelength ~630 nm. The distribution along the radial direction in the phasor plot indicates that the GNRs have different spectral widths. (In the phasor plot, the farther the phasor is from the origin, the narrower the spectral width.) GNRs with altered spectra typically means damaged GNRs [35] or tight plasmon coupling of GNRs among each other; (3) Interestingly, GNRs with different spectral-phasors also show ring patterns in figure 6(D). GNRs coded with purple and cyan (narrower width) are close to the center, whereas GNRs coded with yellow and red (wider width) are far from the center. Moreover, only a few GNRs located at the peripheral area maintain the spectra (green) found in the suspension [35]. Conclusions In summary, we report a self-assisted optothermal trapping of GNRs under two-photon excitation at their LSPR wavelength. The high optothermal conversion efficiency of GNRs allows an effective conversion of absorbed photons into heat and leads to the optothermal trapping effect, which typically is only obtained at the wavelengths with high water absorption. During the trapping process, GNRs perform both as the trapping targets and the promoters, which help the trapping of more particles. Here the optothermal trapping of GNRs on the glass surface was observed with as low as 0.5 mW laser power and the trapping range can be over 210 µm. We also found and discussed the formation of ring patterns of GNRs on glass surfaces. These ring patterns can be controlled by the laser power, the numerical aperture of objectives and the distance between the laser focal point and the glass surface. The obtained spectral phasor information with intensity maps together could help understanding the optothermal trapping of GNRs. Our results pave the way to obtain localized heating and flows in solutions, but they could also be applied to induce flows in cells or tissue where GNRs can be as photothermal agents for treatment. GNRs are biocompatible and can be internalized by certain cells with or without targeting ligands or can be delivered to tumor tissues based on the enhanced permeability and retention (EPR) effect. With NIR irradiation, the excessive heat can be generated via localized GNRs to the local environments which are different than in the solutions. But the same effect could affect cytoplasm or blood flow which may worth further studies.
5,745.6
2016-02-16T00:00:00.000
[ "Physics" ]
Relative attitude stability analysis of double satellite formation for gravity field exploration in space debris environment

Spacecraft operating in low orbit are at risk of being hit by space debris. In the debris environment, an impact is likely to cause the double satellite formation to exit science mode or even lead to divergence of the control system, thus affecting the scientific exploration mission. In this paper, the attitude stability of the double satellite formation for gravity field exploration in a near-circular polar orbit in the space debris environment is studied. Firstly, based on Lyapunov control and LQR, two control schemes for keeping the two satellites aligned with each other under stochastic collisions are developed, and the actuators are modelled and their commands allocated. Secondly, models of collision probability and momentum are developed: the distribution law of space debris is obtained from widely used international debris-environment software, and the probability density function of the time between two independent collisions is derived. Finally, through Monte Carlo simulation and statistics, the changes in relative attitude and thruster torque are simulated when the satellite acquires angular momentum over a short time because of a debris impact. Over the 10-year mission period, the number of impacts that drive the satellite attitude out of science mode and the number that cause the control system to diverge are obtained, which provides a reference for the normal operation of the double satellite formation for gravity field exploration.

GRACE, a double satellite formation for gravity field exploration, was launched by NASA and GFZ. The satellites in the formation are about 220 km apart in near-circular polar orbits. They communicate with each other through microwave links, and the attitude pointing accuracy is better than 3 mrad. The actuator suite is composed of magnetorquers and cold-gas thrusters 10,11. In 2018, NASA and GFZ jointly launched GRACE-FO, with an inter-satellite distance of 50 km; the change in inter-satellite distance was measured by laser interferometry, and the attitude pointing accuracy was higher, reaching 0.24 mrad 12. The evolution of gravity field exploration from single satellites 13,14 to double satellite formations 10–12, and from microwave ranging to laser interferometry, requires ever higher attitude stability, supporting higher-precision inversion of the Earth's gravity field. Afterwards, ESA and NASA jointly proposed the Next Generation Gravity Mission (NGGM) with two formations, one at an orbital inclination of 90° and the other at 63° 15, and China proposed the TianQin-2 test satellite.
Since the two satellites of the double satellite formation are identical, the laser beams emitted by the two satellites must be aligned with each other to ensure that the two beams can interfere. In other words, the target of the formation's attitude control is that the outgoing beam of one satellite is aligned with the receiving end of the other satellite, and vice versa. If the attitude of one satellite is misaligned, the optical power received in the interferometric signal of the other satellite is relatively low, reducing the interference efficiency. Moreover, angular jitter is coupled into the distance measurement of the formation, causing measurement bias 16. Therefore, maintaining the stability of the relative attitude of the double satellite formation is a key step in ensuring laser interferometric ranging. However, GRACE's satellite development company, Airbus Defense and Space, has only briefly described the control of its attitude 11, let alone published research on maintaining its stability in the face of complex space environments. Reference 17 developed two control algorithms, based on Lyapunov control and LQR, under the attitude control accuracy requirements of GRACE and GRACE-FO; simulation results show that the controller designed with the Lyapunov control algorithm has the better overall control performance. However, no relevant research has been found on the impact of space debris on the relative attitude stability of the double satellite formation.

Since the control of the two satellites in the double satellite formation for gravity field exploration is almost identical, this paper takes one of the two satellites as the research object and establishes a relative attitude dynamics model. The disturbance torque in this model includes the gravity gradient torque and the torque caused by the difference in the inertia tensor. The control torques of the magnetorquers are described, and the thrust model of the cold-gas thruster is established; a serial-link (daisy-chain) allocation is adopted for the two actuators. Two control algorithms, based on Lyapunov control and LQR, are applied in the space debris environment. The number of debris impacts that drive the satellite attitude out of science mode, the number that cause the control system to diverge under a given control accuracy, and the resulting probability of normal operation are analysed. This article first focuses on and analyses the normal operation of complex, high-precision low-orbit formation detector systems in debris environments. With the increasing amount of space debris, this issue will become more significant. Establishing a set of analysis methods for the ability of detector systems to maintain normal operation has important reference value for scientific measurement and in-orbit operation management of detectors. This paper uses a control algorithm that meets the mission requirements to obtain the critical space debris collision that causes the control system to diverge. A large set of samples that follows the debris distribution law is selected and a Monte Carlo simulation is conducted on it, combining the control algorithm with the Monte Carlo simulation. Divergence of the control system due to debris impacts during the mission period is thus simulated with a large amount of data.
The second part is the description of the satellite angular motion. The third part is the design of the formation attitude controller and the control allocation of the actuators. The fourth part is the probability and momentum modelling of space debris impacts, and the fifth part is the simulation and discussion.

Description of the angular motion

The double satellite formation for gravity field exploration requires that the two satellites align with each other, with one satellite always pointing accurately at the other. The formation is composed of two satellites with almost identical motion and control. Therefore, this paper takes one of the satellites as the research object and controls its body frame to coincide with the reference frame within the error range. To simplify the study, we assume here that the reference frame is known and not affected by space debris impacts. The coordinate frames used (inertial, orbital, body and reference frames) are defined at the end of this article. The reference frame of the double satellite formation is shown in Fig. 1a, and the simplified shape of the satellite is shown in Fig. 1b. i₁, i₂, i₃ and e₁, e₂, e₃ are the unit vectors of the x, y, z axes in the orbital and reference frames, respectively. From these unit vectors, the rotation matrix from the inertial frame to the reference frame and its derivative, the reference angular velocity expressed in the body frame (in the absence of collisions), and the skew-symmetric matrix of the cross product are defined.

Lyapunov attitude control

The attitude dynamics equation of the satellite moving about its center of mass is J ω̇_abs + ω_abs × J ω_abs = M_ctrl + M_ext + M_impact, where J = diag(J₁, J₂, J₃), ω_abs is the angular velocity of the satellite relative to the inertial frame, M_ctrl is the control torque applied to the satellite, M_ext is the external disturbance torque, and M_impact is the instantaneous torque acquired after the satellite is impacted. Q is the rotation matrix from the inertial frame to the body frame expressed in terms of Euler angles. The relative angular velocity and angular acceleration from the body frame to the reference frame are ω = ω_abs − Dω_ref and ω̇ = ω̇_abs − Dω̇_ref + ω × Dω_ref, respectively, where D is the rotation matrix from the reference frame to the body frame. The time derivative of the Lyapunov function involves S = (D₂₃ − D₃₂, D₃₁ − D₁₃, D₁₂ − D₂₁)ᵀ, where D_ij is the corresponding element of the rotation matrix D. Requiring this derivative to be negative semi-definite yields the control torque of Eq. (5).

Dynamic equation in the vicinity of equilibrium

It is difficult to accurately calculate all of the external disturbance torques M_ext acting on the satellite, so the external torque considered in this paper includes the gravity gradient torque and the torque caused by the change of the inertia tensor. The rotation matrix D rotates in the order 2-3-1, and α₁, α₂ and α₃ represent roll, pitch and yaw, respectively.
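A minimal sketch of one propagation-and-control step consistent with the rigid-body equation above is given below. The inertia values, gains, and the PD-type form of the Lyapunov-derived law are placeholders (the paper's exact law, Eq. (5), and its parameter values from Tables 1–2 are not reproduced in the extracted text); only the attitude-error vector S(D) and the relative-rate definition follow the expressions quoted above.

```python
import numpy as np

# Hypothetical principal moments of inertia and gains (not the paper's values).
J = np.diag([120.0, 100.0, 90.0])      # kg m^2
k_alpha, k_omega = 0.02, 2.0           # attitude / rate gains (placeholders)

def lyapunov_control(D, omega_rel):
    """PD-type Lyapunov-style law: attitude error from S(D), damping from the relative rate."""
    S = np.array([D[1, 2] - D[2, 1], D[2, 0] - D[0, 2], D[0, 1] - D[1, 0]])
    return -k_alpha * S - k_omega * omega_rel

def step(omega_abs, D, omega_ref, M_ext, M_impact, dt=0.1):
    """One Euler step of J*d(omega_abs)/dt + omega_abs x (J omega_abs) = M_ctrl + M_ext + M_impact."""
    omega_rel = omega_abs - D @ omega_ref
    M_ctrl = lyapunov_control(D, omega_rel)
    rhs = M_ctrl + M_ext + M_impact - np.cross(omega_abs, J @ omega_abs)
    omega_dot = np.linalg.solve(J, rhs)
    return omega_abs + dt * omega_dot, M_ctrl
```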
Linearized in the vicinity of equilibrium and omitting second-order small quantities, the relative angular motion equation of the satellite is obtained.

Control torques generated by magnetorquers

The magnetorquers are installed along the body frame of the satellite. Three magnetic torques in mutually perpendicular directions are generated by the action of the external magnetic field. However, the control accuracy of the magnetorquers can only reach the order of degrees, which is far from meeting the attitude control accuracy requirements; they need to be combined with other actuators to meet the requirements of the mission 19,20.

The control torque generated by the magnetorquers is M_ctrl1 = m × B, where m is the magnetic dipole vector and B is the magnetic induction intensity of the Earth. According to this equation, the control torque is always orthogonal to the direction of the magnetic induction intensity. When the satellite is near the equator, the magnetic torque can only control the pitch and yaw angles; when the satellite is near the poles, the magnetic torque can only control the roll and pitch angles. This means that at any time there is always a direction in which the magnetorquers cannot generate control torque. The dipole moment is generated by an electromagnetic coil in the magnetorquer, which is made of electromagnetic material with high permeability. Here we assume that the input current does not exceed ±110 mA and that the maximum magnetic dipole moment does not exceed ±27.5 A·m² 21.

The magnetic induction intensity of the Earth, B, is expressed in the orbital frame 22,23, where μ_B is the geomagnetic constant, i is the orbital inclination, and u is the argument of latitude, with u = ω₀t + u₀ and ω₀ the orbital angular velocity of the satellite. The control torque generated by the magnetorquer is then written in terms of the unit vector e_B = B/‖B‖.

Control algorithm of LQR

For optimal control with a linear quadratic regulator (LQR), the cost function and system state equation take the standard finite-horizon forms J = ½ x(T_f)ᵀ P_f x(T_f) + ½ ∫₀^{T_f} (xᵀQx + uᵀRu) dt and ẋ = Ax + B_ctrl u + B_d w, where x is the state vector, T_f is the final time, P is a positive definite symmetric constant matrix, Q and R are positive definite symmetric time-varying matrices, and u is the control torque. A is the dynamics matrix, B_ctrl is the control matrix, w is the model noise, B_d is the noise coefficient, and B is the magnetic induction intensity. The optimal control is u = −R⁻¹ B_ctrlᵀ P(t) x, and the magnetic dipole vector is m = u. The differential Riccati equation for P(t) is −Ṗ = AᵀP + PA − P B_ctrl R⁻¹ B_ctrlᵀ P + Q, with boundary condition P(T_f) = P_f.

Control distribution of the actuators

For a system composed of two or more actuators, the problem of actuator control allocation is how to reasonably assign the virtual desired commands to each actuator so as to satisfy the stability requirements of the spacecraft attitude. A common method is to incorporate the control allocation of the actuators into the design of the control law. Considering engineering applicability, this paper adopts a simple and practical serial-link (daisy-chain) allocation rule. This rule assumes that different actuators provide control torque according to different priorities: after the actuators with higher priority reach saturation, the remaining commands are completed by the next level of actuators. As shown in Fig.
2, the virtual desired control command can be converted into the control command of each actuator only after control allocation. The control allocation expression is v(t) = B_eff u(t), where v(t) is the desired virtual control command, u(t) is the input command of the actuator, and B_eff is the control efficiency matrix. The allocation uses the magnetorquer first; however, it is difficult to meet the control requirements with the magnetorquer alone. In order to control the satellite attitude, the two heterogeneous actuators must therefore cooperate to complete the control task, so the input torque u(t) of the actuators is not unique. Since the magnetic dipole vector is limited to 27.5 A·m² and the maximum thrust of the thruster is 10 mN, the actual performance of the actuators is limited by physical conditions, and the input variable u(t) satisfies the inequality u_min ≤ u(t) ≤ u_max. Different actuators also respond to commands at different rates; this characteristic is represented by the rate u̇(t) of the output command u(t), i.e. ρ_min ≤ u̇(t) ≤ ρ_max. These inequalities constitute the restrictions on the actuators.

Control torque generated by the thruster

It is difficult to achieve the required attitude control accuracy using only the torque generated by the magnetorquers. Therefore, micro-thrusters with a maximum thrust of 10 mN need to be installed on the satellite in order to achieve the required accuracy. Because a micro-thruster produces a large torque in a short time, the attitude and angular velocity of the satellite change almost simultaneously while it is firing. When the relative attitude and relative angular velocity reach the allowable boundary, this actuator starts to work; note that the thruster switching model does not consider any delay. The control torque M_ctrl2 generated by the micro-thruster depends on the gains k_α and k_ω. To determine approximate values of k_α and k_ω, Floquet theory is used: the initial condition of system (7) is Φ(0) = I₆ₓ₆, and ρ_k is a characteristic root of the characteristic equation det(Φ(T) − ρ_k I₆ₓ₆) = 0. To make the linear system with state (α, ω)ᵀ = 0 asymptotically stable in a large region near the equilibrium position, the inequality Re(ln ρ_k) < 0 must hold for every ρ_k. The smaller max_k[Re(ln ρ_k)] is, the faster the system returns to the equilibrium position. Therefore, the control parameters most suitable for system stability are obtained through max_k[Re(ln ρ_k)] → min.
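The sketch below illustrates the two steps just described: an LQR gain computed from a Riccati equation, and a serial-link (daisy-chain) allocation in which the magnetorquer acts first and the thruster absorbs the residual. The matrices A, B_ctrl, Q, R, the saturation limits other than the 27.5 A·m² dipole bound, and the thruster torque limit are placeholders, and the steady-state algebraic Riccati equation is used for brevity instead of the paper's differential Riccati equation with terminal condition P(T_f) = P_f.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative linearized model x = (alpha, omega); A and B_ctrl are placeholders.
n = 6
A = np.zeros((n, n)); A[:3, 3:] = np.eye(3)
B_ctrl = np.zeros((n, 3)); B_ctrl[3:, :] = np.eye(3) / 100.0   # ~ J^-1, placeholder
Q = np.eye(n); R = 1e3 * np.eye(3)

# Steady-state Riccati solution and feedback gain u = -K x.
P = solve_continuous_are(A, B_ctrl, Q, R)
K = np.linalg.solve(R, B_ctrl.T @ P)

def allocate(v, B_field, m_max=27.5, thruster_torque_max=1e-2):
    """Daisy-chain allocation: magnetorquer first, thruster takes the residual torque."""
    e_B = B_field / np.linalg.norm(B_field)
    # A magnetorquer torque m x B is always orthogonal to B, so only that
    # component of the desired torque can be assigned to it.
    v_mag = v - np.dot(v, e_B) * e_B
    m = np.cross(B_field, v_mag) / np.dot(B_field, B_field)   # m x B reproduces v_mag
    m = np.clip(m, -m_max, m_max)                              # dipole saturation
    torque_mag = np.cross(m, B_field)
    residual = v - torque_mag
    torque_thruster = np.clip(residual, -thruster_torque_max, thruster_torque_max)
    return m, torque_thruster

x = np.array([0.01, -0.02, 0.015, 0.0, 0.0, 0.0])   # rad, rad/s
v_desired = -K @ x
m_cmd, thruster_cmd = allocate(v_desired, B_field=np.array([0.0, 2e-5, 3e-5]))
```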
Probability and momentum of collision

In this paper, the average orbital height of the formation is h = 450 km, the eccentricity is e = 0.001, the orbital inclination is i = 89.5°, and the average effective cross-sectional area is S ≈ 3 m². The space debris environment was assessed using the widely used international software ORDEM2000. From ORDEM2000, the flux of space debris with size greater than or equal to 0.1 mm over a 10-year mission period is 1427.56393 m⁻² per 10 years, the flux of debris with size greater than or equal to 1 mm is 3.0216 m⁻² per 10 years, and the flux of debris with size greater than or equal to 10 mm is 3.0734 × 10⁻⁴ m⁻² per 10 years. Therefore, the probability of the formation being impacted by centimetre-sized debris is relatively small, while that of millimetre-sized or smaller debris is relatively high; accordingly, the impact of space debris of 0.1 mm and above on the double satellite formation for gravity field exploration is investigated and analysed. The time between two independent collisions obeys an exponential distribution, so its probability density function is f(t) = λe^(−λt) 9. The number of space debris particles with size greater than or equal to 0.1 mm colliding with the formation per unit time is λ = 4.5314 × 10⁻⁶ s⁻¹. Therefore, the probability density function of the time between two independent collisions is obtained from Eq. (20), as shown in Fig. 3. A space debris impact on the formation is a random event obeying a Poisson distribution, so the probability of the double satellite formation being impacted N times during an interval T is P(N) = ((λT)^N / N!) e^(−λT). If the probability density function f_X(x) is continuous, then the probability that the random variable X lies in the interval [a, b] is ∫_a^b f_X(x) dx. If the size of the space debris (or its speed relative to the satellite) is within the interval [a, b], the probability of collision with the formation in this interval satisfies Eq. (22); the corresponding angular distribution of impacts is described by f_{θ,φ}(x, y), the joint probability density function of the altitude and azimuth angles, where the altitude angle ranges over [−90°, 90°] and the azimuth angle over [−180°, 180°].

Before and after the double satellite formation is impacted by space debris, the momentum of the system composed of the debris and the formation is conserved. Because the actual motion of the satellite after being hit by debris is complex, involving both translation and rotation, for convenience only the translational motion of the satellite is studied when the impact line of action passes through the satellite's center of mass, and only the rotation is studied when it does not. When the extension of the debris velocity passes through the satellite's center of mass, the momentum lost by the debris equals the momentum gained by the satellite; when it does not pass through the center of mass, the momentum lost by the debris is converted into the angular momentum gained by the satellite.

Assuming that at time τ the i-th piece of space debris hits the satellite formation for a sustained impact time ε, and that the change in linear momentum of the satellite during that time is P_i and the change in angular momentum is H_i, the force or moment acquired by the satellite during the impact can be written with δ(t), the unit step function. The relationship between angular momentum and momentum is H = r × P.
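Because the inter-collision time is exponential with the rate quoted above, impact epochs over the mission can be sampled as a cumulative sum of exponential gaps, and the total count is Poisson with mean λT. The short sketch below uses only the rate and mission length stated in the text; the seed and sample buffer size are incidental.

```python
import numpy as np

rng = np.random.default_rng(0)

LAMBDA = 4.5314e-6                 # collisions per second, debris >= 0.1 mm (from the text)
MISSION = 10 * 365.25 * 86400.0    # 10-year mission, seconds

# Exponential inter-collision gaps f(t) = lambda * exp(-lambda * t) -> impact epochs.
gaps = rng.exponential(scale=1.0 / LAMBDA, size=2000)
epochs = np.cumsum(gaps)
epochs = epochs[epochs < MISSION]

# The number of impacts over the mission is Poisson with mean lambda * T (~1.4e3).
print("sampled impacts:", epochs.size, "expected:", LAMBDA * MISSION)
```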
When the satellite is hit by debris at high speed and the direction of the debris momentum does not pass through the satellite's center of mass, the moment M_impact is acquired immediately after the hit, and the attitude and angular velocity of the satellite change accordingly. When the direction of the debris momentum passes through the satellite's center of mass, the force F_impact is acquired immediately after the hit, and the position and velocity of the impacted satellite change accordingly.

This paper mainly studies the attitude stability of the formation after being impacted by debris. The angular velocity of the satellite after impact is ω_A = ω_B + dω, where ω_A is the angular velocity after impact, ω_B is the angular velocity before impact, and dω is the angular velocity increment due to the impact. The angular momentum transferred to the satellite is effectively instantaneous when the space debris collides with it. When the momentum of the debris is entirely converted into angular momentum of the satellite, J dω = r × p, where r is the vector from the satellite's center of mass to the impact point, p is the momentum of the debris, and J is the inertia tensor of the satellite. From Eqs. (25) and (26), the angular velocity of the spacecraft after impact is therefore ω_A = ω_B + J⁻¹(r × p).

Assuming that the satellite formation is impacted by the i-th piece of space debris with mass m_i, velocity v_i, altitude angle θ_i and azimuth angle φ_i, its linear momentum p_i can be expressed in the orbital frame in terms of these quantities. Space debris has various shapes and types; for convenience, this paper assumes that the debris is a sphere with density ρ, so that its mass is m_i = πρl_i³/6, where l_i is the size (diameter) of the debris drawn from the ORDEM2000 software. Owing to the complexity of a high-speed debris impact on a satellite, this paper treats the impact as an inelastic collision in which momentum is conserved but a large fraction of the energy is lost; the change in momentum after the satellite is impacted by debris follows from this assumption.

In order to accurately calculate the angular momentum of the satellite after being impacted, the vector r must be known. However, it is complicated to calculate its exact expression, so this paper assumes that the impact position is uniformly distributed over the impacted surface; the altitude and azimuth angles of the i-th debris impact on surface A_j are then θ_ji and φ_ji, respectively. The probability that the j-th plane (j ∈ {1, …, 6}) of the satellite is impacted depends on A_j, the area of the j-th plane, and ψ_j, the angle between the momentum direction of the debris before impact and the normal of the j-th plane. The vector r is expressed in the body frame in terms of the impact coordinates (x_ji, y_ji, z_ji); two of these coordinates are uniformly distributed, and the third is determined by the impacted surface A_j.
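A compact sketch of the impact relations above (spherical debris mass, momentum from size and speed, and ω_A = ω_B + J⁻¹(r × p)) is given below. The inertia tensor, the angle-to-direction convention, and the example numbers are placeholders; only the mass formula and the angular-velocity update follow the expressions in the text.

```python
import numpy as np

def post_impact_angular_velocity(omega_before, J, r, p):
    """omega_A = omega_B + J^-1 (r x p): debris momentum p applied at offset r from the CoM."""
    d_omega = np.linalg.solve(J, np.cross(r, p))
    return omega_before + d_omega

def debris_momentum(size_m, speed, theta, phi, density=2800.0):
    """Momentum (kg m/s) of spherical debris of diameter size_m; m = pi*rho*l^3/6.

    The (theta, phi) -> direction mapping below is an illustrative convention,
    not the paper's orbital-frame expression for p_i.
    """
    mass = density * np.pi * size_m**3 / 6.0
    direction = np.array([np.cos(theta) * np.cos(phi),
                          np.cos(theta) * np.sin(phi),
                          np.sin(theta)])
    return mass * speed * direction

J = np.diag([120.0, 100.0, 90.0])                    # placeholder inertia, kg m^2
p = debris_momentum(1e-3, 7500.0, 0.2, -1.0)         # 1 mm debris at 7.5 km/s
omega_after = post_impact_angular_velocity(np.zeros(3), J,
                                           r=np.array([0.3, -0.1, 0.5]), p=p)
```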
In the process of a high-speed collision, the momentum lost by the space debris is transferred to the satellite. At the same time, the sputter (ejecta) generated by the collision is ejected against the direction of the formation's motion, carrying away a small part of the momentum. The resulting change in the momentum of the system is characterized by ξ, the momentum enhancement factor, which describes the effect of the ejecta on the momentum transferred to the satellite after it is impacted by space debris.

Motion control simulation without debris impact in science mode

In order to verify the attitude controller developed in this paper, the parameters in Tables 1 and 2 are used. Figure 4 shows that the control algorithm designed with Lyapunov control has the higher control accuracy: when the only actuator is the magnetorquer and all initial values are 1°, the control accuracy is within ±1.5°, while the control algorithm designed with LQR is accurate to within ±6°. Both control algorithms show that the pitch axis has the highest control accuracy, as there is always a magnetic control torque available on this axis. Figure 5a shows that the magnetic dipole vector transitions frequently between the saturated states ±27.5 A·m², while Fig. 5b shows that the magnetic dipole vector initially reaches saturation but later stays below ±10 A·m², indicating that the Lyapunov-based control algorithm is more effective. The control results of the Lyapunov- and LQR-based algorithms when the thruster provides thrust torque are shown in Figs. 6, 7 and 8. Figure 6 shows the relative attitude control achieved by the two algorithms; all three axes meet the attitude control accuracy. Figure 7 shows the relative angular velocity control implemented by the two algorithms; all three axes also satisfy the angular velocity control requirement. Figure 8 shows the thrust torque required by the two algorithms. Clearly, the torque required by the Lyapunov-based control algorithm is smaller, and its switching (firing) frequency is also lower. Over a 24-h period, the total number of thruster firings under Lyapunov control is 26 fewer than under LQR, similar to the results of Ref. 17.

Monte Carlo simulation and critical momentum statistics

There are three possible outcomes for the control system after the double satellite formation is impacted by space debris. First, the control accuracy does not change significantly, because the control system is somewhat robust. Second, the control accuracy exceeds the maximum allowed value, causing the system to exit science mode. Third, the control system diverges and the mission fails. Science mode means that the spacecraft is in its normal operating stage; exiting science mode means that the spacecraft cannot work normally but may return to normal operation after appropriate control action. The logical relationship between these changes in the control system of the double satellite formation after a high-speed debris impact is shown in Fig. 9.

The critical momentum is the momentum at which a space debris impact just causes the satellite to exit science mode. As soon as one degree of freedom exits science mode during the collision, we consider that the momentum imparted to the satellite by this impact exceeds the critical momentum. The relationship between the Monte Carlo simulation and the control system is shown in Fig.
10. The density of the space debris is taken as ρ = 2.8 g/cm³, and a collision duration of ε = 0.1 s occurring at τ = 18,000 s is assumed. The area of each surface of the satellite is listed in Table 3. The thruster-free attitude response of the satellite is shown in Fig. 11 for an impact that increases the angular momentum by L₁ = −1.8016 × 10⁻⁴ kg·m²/s, L₂ = 1.6135 × 10⁻⁴ kg·m²/s and L₃ = 0.8253 × 10⁻⁴ kg·m²/s, respectively. Figure 11a shows that the Lyapunov-based control algorithm exhibits a large attitude change within a short period after impact but settles quickly, while Fig. 11b shows that the LQR-based control algorithm takes a long time to settle after impact, further illustrating that this control law offers high control accuracy but low robustness.

The changes in attitude and thrust torque of the satellite when it acquires angular momentum of L₁ = −1.7921 × 10⁻² kg·m²/s, L₂ = 1.1543 × 10⁻² kg·m²/s and L₃ = 0.7861 × 10⁻² kg·m²/s after impact are shown in Figs. 12 and 13, respectively. Figure 12 shows that when space debris with this momentum hits the satellite formation, the attitude exceeds the requirements under both control algorithms. Figure 13 shows that, under the impact of debris of the same momentum, the Lyapunov-based control algorithm returns the thrusters to normal thrust conditions within 25 min, while the LQR-based control algorithm takes 48 min.

The process of Monte Carlo simulation and statistics is shown in Fig. 10. In this paper, 100,000 samples satisfying the size, velocity and angle distributions of the ORDEM2000 space debris software are selected. These samples collide randomly with the double satellite formation at the average orbital altitude, and the probability density of the time between two collisions follows the law of Fig. 3. The results are shown in Fig. 14: when Lyapunov control is adopted, the number of impacts with angular momentum greater than or equal to the critical value is 31, of which 5 cause the control system to diverge; when LQR control is adopted, the number is 38, of which 7 cause the control system to diverge. The probability of the double satellite formation exiting science mode or diverging due to at least one impact is shown in Fig. 15a, and Fig. 15b shows that the Lyapunov-based control algorithm has the higher stability after being hit. Over the 10-year mission period, the formation for gravity field exploration is impacted by 1428 pieces of space debris with size greater than or equal to 0.1 mm. For both control algorithms, debris impacts can cause the system to exit science mode and can cause the control system to diverge; the probability of normal operation differs only slightly between the two algorithms.
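The counting loop below is a toy illustration of the Monte Carlo bookkeeping described above. Every distribution and both thresholds are placeholders: the actual study draws sizes, speeds and impact angles from ORDEM2000, determines the critical angular momentum from the closed-loop controller response rather than from a fixed cutoff, and distinguishes exit from science mode and divergence by simulating the attitude dynamics for each sampled impact.

```python
import numpy as np

rng = np.random.default_rng(1)

N_SAMPLES = 100_000
CRITICAL_L = 1.0e-2        # placeholder critical angular momentum, kg m^2/s
DIVERGE_L = 5.0e-2         # placeholder divergence threshold, kg m^2/s

exit_science, diverge = 0, 0
for _ in range(N_SAMPLES):
    # Placeholder sampling of one debris impact.
    size = 10 ** rng.uniform(-4, -2)                 # diameter, 0.1 mm - 10 mm
    speed = abs(rng.normal(9000.0, 2000.0))          # relative speed, m/s
    mass = 2800.0 * np.pi * size**3 / 6.0            # spherical debris, rho = 2.8 g/cm^3
    lever = rng.uniform(0.0, 0.5)                    # offset of impact line from CoM, m
    L = mass * speed * lever                         # transferred angular momentum
    if L >= CRITICAL_L:
        exit_science += 1
        if L >= DIVERGE_L:
            diverge += 1

print(f"exit science mode: {exit_science}, control divergence: {diverge}")
```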
Conclusion

In this paper, the stability of the relative attitude of the double satellite formation for gravity field exploration in the space debris environment is studied. We established the dynamical equations of the relative attitude under random collisions and adopted two control algorithms, Lyapunov control and LQR. Under the corresponding conditions, the space debris distribution function is established with the international space debris software, and the probability density function of the time between two independent collisions is obtained. The impact of debris on the attitude control system was then simulated by a Monte Carlo simulation with 100,000 samples satisfying the model. The results show that, during the 10-year mission period, with the controller designed by Lyapunov control 31 impacts caused the satellite to exit science mode, of which 5 caused its control to diverge; with LQR, 38 impacts caused the satellite to exit science mode, of which 7 caused the control system to diverge. This shows that the probability of the satellite being knocked out of normal operation varies only within a small range between control algorithms: both algorithms, although they meet the attitude control accuracy, can exit science mode and become unstable in the space debris environment. The control system therefore still carries the risk of interrupting the scientific detection mode, and it is necessary to consider satellite operation and maintenance technology and to further study countermeasures.

• O_I − X_I Y_I Z_I, inertial frame (IF). The origin is located at the Earth's center of mass, O_I Z_I is the rotation axis of the Earth, and O_I X_I points to the vernal equinox of the J2000 epoch.
• o_O − x_O y_O z_O, orbital frame (OF). The origin is located at the satellite's center of mass, o_O z_O points to the Earth's center, and o_O x_O lies in the orbit plane, perpendicular to o_O z_O and pointing in the direction of motion.
• o_B − x_B y_B z_B, body frame (BF). Its axes are the principal axes of inertia of the satellite; o_B x_B is the line of sight of the laser, and o_B z_B is perpendicular to the bottom of the satellite.
• o_R − x_R y_R z_R, reference frame (RF). The origin is the midpoint of the line from the following satellite to the main satellite, o_R x_R points from the following satellite to the main satellite, and o_R z_R is perpendicular to o_R x_R in the orbit plane.

Figure 3. Probability density function of the time between independent collisions. Figure 6. Control of satellite attitude with thruster action. Figure 7. Control of angular velocity with thruster action. Figure 10. Relationship between attitude control and Monte Carlo simulation. Figure 11. Attitude without thruster control after space debris impact. Figure 12. Satellite attitude after space debris impact. Figure 14. Number of impacts causing the formation to exit science mode and diverge under different control algorithms. Table 2. Attitude control accuracy and maximum thrust torque. Table 3. Area of each surface of the satellite.
7,294
2023-09-25T00:00:00.000
[ "Physics" ]
Unraveling substituent effects on the glass transition temperatures of biorenewable polyesters

Converting biomass-based feedstocks into polymers not only reduces our reliance on fossil fuels, but also furnishes multiple opportunities to design biorenewable polymers with targeted properties and functionalities. Here we report a series of high glass transition temperature (Tg up to 184 °C) polyesters derived from sugar-based furan derivatives as well as a joint experimental and theoretical study of substituent effects on their thermal properties. Surprisingly, we find that polymers with moderate steric hindrance exhibit the highest Tg values. Through a detailed Ramachandran-type analysis of the rotational flexibility of the polymer backbone, we find that additional steric hindrance does not necessarily increase chain stiffness in these polyesters. We attribute this interesting structure-property relationship to a complex interplay between methyl-induced steric strain and the concerted rotations along the polymer backbone. We believe that our findings provide key insight into the relationship between structure and thermal properties across a range of synthetic polymers.

As the drive for sustainability continues, petroleum-derived plastics are steadily being replaced by alternative biobased polymers 1,2. In this regard, the conversion of biorenewable feedstocks into aliphatic polyesters has become an especially desirable path forward as aliphatic polyesters are in general both biodegradable and biocompatible 3. During the past decade, there has been growing interest in developing synthetic strategies for copolymerizing epoxides and cyclic anhydrides to make such aliphatic polyesters 4,5. In addition to providing numerous opportunities for incorporating biorenewable content into polymers, these distinct monomer sets also allow us to tune a myriad of polymer properties and functionalities for different applications. For example, we recently found that bulky terpene-based anhydrides can be employed to make aliphatic polyesters with glass transition temperatures (Tg) up to 184 °C, thereby positioning them for use in a number of high-temperature applications 6,7. Inspired by this finding, we decided to explore even more abundant biorenewable feedstocks as building blocks for high-Tg materials. Polymers with high Tg values often have rigid backbones, which inhibit rotational flexibility in these polymer chains 8,9. Hence, one of the most effective ways to increase chain rigidity is by the introduction of non-flexible ring structures into the polymer backbone, thereby hindering (or even eliminating in some cases) rotational flexibility 8. As such, aromatic and aliphatic rings are often found as components of high-Tg polymers such as aromatic polycarbonates, polynorbornenes, and polyimides 8. Another common way to increase Tg is by introducing substituents along/near the polymer backbone to hinder main-chain rotations through steric interactions 8,10. Notable examples of such polymers include poly(α-methylstyrene), poly(2-methylstyrene), and poly(2,6-dimethylstyrene), all of which are characterized by Tg values that are significantly higher than their less substituted analogs 11. Following these design principles, we report herein several biorenewable polyesters with ring structures derived from furan, 2-methylfuran, and 2,5-dimethylfuran, all of which can be readily synthesized from pentoses and hexoses 12–14.
In doing so, we observe a quite unexpected structure-property relationship in this class of polymers, namely that the introduction of methyl substituents along the polymer backbone does not always increase Tg; rather, an intermediate number of methyl substituents actually yields the polymers with the highest Tg values. By performing a detailed theoretical analysis of the conformational flexibility in these polyesters, we provide key chemical and physical insight into the critical role played by methyl-induced steric strain in governing concerted rotations along the polymer chain. In particular, we demonstrate that methyl-induced strain can be leveraged to control chain flexibility in this class of polyesters by selectively destabilizing the relevant minimum-energy conformations across the rotational potential energy landscape. On the basis of these findings, we propose several simple and intuitive design principles for how methyl substitution can be used to rationalize and tune Tg trends across a range of synthetic polymers.

Results

Monomer synthesis. Acid-catalyzed dehydration of pentoses (such as xylose) to furfural, and of hexoses (such as glucose and fructose) to hydroxymethylfurfural (HMF), represents a promising pathway for the utilization of carbohydrates (Fig. 1a) 12–14. Although many biomass transformations require fermentation, this dehydration protocol proceeds without the use of enzymes, allowing it to be competitive in both cost and productivity 13,14. As a result, the transformation of furfural and HMF into other value-added products and commodities has been the subject of extensive study 13,14. For example, the hydrogenation of furfural and HMF yields 2-methylfuran and 2,5-dimethylfuran, respectively, both of which are regarded as potential biofuels (Fig. 1a) 12–14. In order to incorporate these sugar-derived biorenewable feedstocks into polymers, we investigated the Diels-Alder (D-A) reactions of a number of furan derivatives with maleic anhydride. As previously observed with furfural and maleimide, furfural and HMF likewise do not readily undergo a D-A reaction with maleic anhydride, and we attribute this low reactivity to the presence of the electron-withdrawing aldehyde group and the associated higher resonance stabilization in these compounds 15. In contrast, 2-methylfuran and 2,5-dimethylfuran readily react with maleic anhydride to yield thermally labile unsaturated D-A adducts, which can then be hydrogenated or dehydrated to make tricyclic or phthalic anhydrides, respectively, with a targeted number of methyl substituents (Fig. 1b) 16–18.

Polymer synthesis and characterization. We then systematically investigated the copolymerization of these anhydrides with propylene oxide (PO), 1-butene oxide (BO), and cyclohexene oxide (CHO) in the presence of an aluminum catalyst 19; the copolymerization of 1d with various epoxides has been extensively reported in the literature 20–24. Under appropriate ratios of monomer to catalyst, 18 polyesters (Fig. 2) were prepared with well-controlled molecular weights (Mn ≈ 20 kDa for PO and BO copolymers, Mn ≈ 10 kDa for CHO copolymers) and narrow dispersities (Supplementary Tables 1-2). No evidence of ether linkage formation was observed by NMR spectroscopy (spectra of the polymers are provided in Supplementary Figures 6-23). Differential scanning calorimetry (DSC) was used to measure the Tg values of all polymers (see Methods section), which are shown in Fig. 2b.
It is worth noting that the Tg values of the PO/anhydride polymers remain almost unchanged with increasing molecular weight (Supplementary Table 1), confirming the hypothesis that the observed differences in Tg values are due to structural variation between the polymers rather than molecular weight discrepancies between samples. Due to their bulky tricyclic nature, polymers made from 1a-1c are generally characterized by higher Tg values than those made from 1d-1f with the same number of methyl substituents (Fig. 2b). When the epoxide was varied, we observed that the BO-based polymers have lower Tg values than the PO-based polymers and attribute this to the fact that longer alkyl groups often hinder polymer chains from packing densely 25,26. Furthermore, the CHO-based polymers have substantially higher Tg values relative to both the PO- and BO-based analogs due to the decreased chain flexibility imposed by the cyclohexyl ring 8. Quite interestingly, we also found that the addition of methyl substituents to the anhydride comonomers has a non-linear influence on the observed Tg values, despite the widely accepted notion that Tg values increase upon methyl substitution due to the additional steric hindrance near the polymer backbone 8,10. In particular, we found that polyesters made from monomethyl-substituted anhydrides exhibited higher Tg values than those made from the corresponding unsubstituted as well as dimethyl-substituted anhydrides. In other words, polymers with an intermediate number of methyl substituents exhibited the highest Tg values in each series. Given the ubiquity of methyl substitution in polymer structures, we now turn to a detailed analysis of the backbone rotational flexibility in these polymers, accompanied by a discussion of the key chemical and physical insights that we have gained into this unexpected structure-property relationship. We believe that these findings will aid in the rational design of synthetic polymers with a wide range of targeted thermal properties.

Chain flexibility and glass transition temperature. In general, Tg values are governed by the overall cooperative segmental mobility 27,28 of the polymer, which in turn is determined by its intrinsic chain (or backbone) flexibility and the non-bonded intermolecular interactions present in the system 8,29. Since the polymers considered in this work do not exhibit strong secondary forces (e.g., there are no directional hydrogen-bonding motifs like one would find in polyamides, for instance) and only differ (within a given series) by the presence of one or two methyl groups along the backbone (which will not provide substantive differences in the non-bonded dispersion interactions), we can eliminate non-bonded intermolecular interactions as the primary driving forces responsible for this structure-property relationship. We therefore hypothesize that intrinsic chain flexibility is predominantly responsible for the fact that the monomethyl-substituted polyesters exhibited the highest Tg values in each series. To support this claim, we present both experimental and theoretical evidence that intrinsic chain flexibility is indeed the primary difference between these systems and that the monomethyl-substituted polymers have the lowest chain flexibility within their respective series.
To provide experimental support that the addition of a single methyl substituent leads to a substantial decrease in polymer mobility, the average molecular hole volumes (Vh) 30–32 associated with all six PO-based polyesters considered in this study were measured via positron annihilation lifetime spectroscopy (PALS) 33,34 (Supplementary Methods), as glassy polymers with lower chain flexibility are generally accompanied by larger Vh 35–37. In doing so, we found that the monomethyl-substituted polyesters in both the tricyclic and phthalic series indeed have the largest respective Vh (Supplementary Table 3). Since the monomethyl-substituted polyesters have both larger Vh and larger Tg values (when compared with their unsubstituted and dimethyl-substituted counterparts), these findings provide a direct correlation between intrinsic chain flexibility and the anomalous Tg relationship observed in these polyesters.

Theoretical analysis. To further investigate the intrinsic chain flexibility in these methyl-substituted polyesters, we employed dispersion-inclusive hybrid density functional theory, an ab initio quantum mechanical approach that simultaneously ameliorates self-interaction error 38,39 and accounts for nonlocal correlation effects such as dispersion (or van der Waals) interactions 40,41. As such, this approach is able to furnish an accurate and reliable description of the rotational barriers along the polymer backbone (see Methods section, Supplementary Discussion, and Supplementary Tables 4-5). In particular, we focused our computational efforts on determining the rotational flexibility associated with the bonds between the carbonyl carbons and their respective α-carbons, which are defined by the θ and φ angles in Fig. 3a, due to their proximity to the methyl substituents introduced on the anhydride-derived repeat units (Fig. 2a). As illustrated in Fig. 2b, the non-linear influence exhibited by methyl substitution on Tg is effectively independent of the choice of epoxide comonomer (Supplementary Discussion and Supplementary Figure 32), which further justifies our focus on these select rotational degrees of freedom as well as the computationally tractable model systems (2a-2f) employed throughout this study (Fig. 3a). Since the two ester groups are vicinally substituted on the rings of these polyesters, there should be significant coupling between the θ and φ rotational degrees of freedom. Hence, instead of obtaining one-dimensional (1D) scans 42–44 of the potential energy surfaces (PES) corresponding to independent dihedral rotations around θ and φ, we performed full two-dimensional (2D) scans of the rotational PES for each polyester model compound (see Methods section). This approach is analogous to the Ramachandran plots used to study the rotational flexibility of the peptide backbone in proteins 45,46 and allows us to investigate the collective nature of these dihedral rotations. In doing so, we can provide a more detailed and comprehensive characterization of the local backbone flexibility, which we believe to be primarily responsible for the overall backbone flexibility in this class of polymers and hence the Tg trends observed herein. Detailed Ramachandran-type plots of the 2D rotational PES for systems 2a-2f are presented in Fig. 3c (with representative conformations described in Fig. 3b).
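The driver below sketches how such a Ramachandran-type grid can be assembled (72 × 72 points at 5° spacing, as described in the Methods). The function `constrained_energy(theta, phi)` is a hypothetical stand-in for a constrained geometry optimization at fixed dihedrals; any electronic-structure backend could supply it, and the toy coupled-rotor surface at the end is only there so the snippet runs on its own.

```python
import numpy as np

def scan_rotational_pes(constrained_energy, step_deg=5.0):
    """Build a Ramachandran-type 2D PES over (theta, phi) on a step_deg grid.

    constrained_energy(theta, phi) stands in for a constrained optimization at
    fixed dihedral angles; 5 degree spacing gives 72 x 72 = 5184 points.
    """
    angles = np.arange(-180.0, 180.0, step_deg)
    pes = np.empty((angles.size, angles.size))
    for i, theta in enumerate(angles):
        for j, phi in enumerate(angles):
            pes[i, j] = constrained_energy(theta, phi)
    pes -= pes.min()            # report energies relative to the global minimum
    return angles, pes

# Toy stand-in: two hindered rotors with a "disrotatory" cross term.
toy = lambda t, p: (1.0 - np.cos(np.radians(2 * (t + p)))) \
                   + 0.3 * (1.0 - np.cos(np.radians(3 * t)))
angles, pes = scan_rotational_pes(toy)
```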
Since all of these plots are diagonally dominant (i.e., the location of the minima and most accessible transition states (TS) are located along the diagonal direction), one can first conclude that dihedral rotations around θ and φ are indeed strongly coupled. In this regard, the presence of such collective rotational degrees of freedom in these polyesters is in stark contrast to the simple 1D plots that are adequate to describe internal rotations in simple molecules such as ethane, butane, or other polyesters such as polyethylene terephthalate (vide infra). In fact, independent rotations of either θ or φ are significantly hindered in all cases (as evidenced by the largest energetic barriers located along the horizontal and vertical directions in these Ramachandran-type plots) and the primary mechanism governing rotational flexibility in this class of polymers is therefore best described as collective and disrotatory motions of the vicinal ester groups. Upon the addition of methyl substituents, the rotational landscapes of both the tricyclic and phthalic series are significantly modified, showing non-linear changes in the relative chain flexibilities in these polymers. For one, the introduction of a single methyl group breaks the left-right reflection symmetry in these systems, which lifts the degeneracies across the diagonal as well as the near-degeneracies along the diagonal. The presence of this methyl substituent also results in a reduction in the overall number of (local) minima and TS on the rotational PES corresponding to 2b and 2e. In fact, this addition can also lead to a completely different global minimum conformation, as observed in the phthalic anhydride series (see 2e in Fig. 3c and Supplementary Discussion). Of particular importance is the fact that the rotational barriers are noticeably higher in the Ramachandran-type plots corresponding to both 2b and 2e, which is clear evidence that rotational flexibility is hindered in the monomethyl-substituted compounds. Since left-right reflection symmetry is restored upon addition of the second methyl group, such deleterious effects of symmetry breaking are completely ameliorated in the rotational PES corresponding to 2c and 2f, which are characterized with rotational barriers that are only slightly increased with respect to their completely unsubstituted counterparts. Although symmetry breaking (and its subsequent restoration) is the underlying reason behind these observed trends in chain flexibility, a detailed analysis of these rotational PES allows us to procure key insight into how substituent effects influence chain flexibility in these polyesters (vide infra). The modifications to the rotational landscape upon methyl substitution also lead to substantial changes in the equilibrium (thermal) population of the relevant energy minima on these PES, as well as the connectivity and energetic barriers between such stationary points. Looking again at the Ramachandran-type plots in Fig. 3c, one immediately notices that large swaths of the rotational PES are less accessible in the monomethyl-substituted polymers (2b, 2e) when compared with their unsubstituted and dimethyl-substituted counterparts. As shown in Fig. 
4, we quantitatively assessed this measure of rotational flexibility by plotting the fractional area of each 2D rotational PES that is accessible from the corresponding global minimum conformation as a function of the thermal energy (Erel) available for traversing potential rotational barriers (Supplementary Discussion and Supplementary Figure 33). In the tricyclic series, the accessible fractional area is considerably smaller in 2b (when compared against 2a and 2c) across all values of Erel, which indicates that 2b is the least flexible among these three structures with respect to θ and φ dihedral rotations. (Figure 3 caption: a, the bonds used to specify the θ and φ dihedral angles are delineated in red and blue, respectively, with the corresponding perspectives defined above; b, graphical legend illustrating the relative orientations of the two carbonyl groups corresponding to a given set of θ and φ dihedral angles, with red and blue arrows depicting the orientation of the left and right carbonyl groups, respectively — for example, conformation A (θ = 90°, φ = 90°) corresponds to a structure with the left carbonyl group oriented upward and the right carbonyl group oriented downward; c, Ramachandran-type plots depicting the full 2D PES corresponding to dihedral rotations about θ and φ for each of the polyester model compounds (2a-2f), where select stationary points on each PES include a representative minimum (gold star) and its most accessible neighboring transition states (circled numbers), the structures and energetics of which are showcased in Fig. 5.) In the phthalic series, we also notice that the accessible fractional area is smaller in 2e in the relevant low- and intermediate-energy sectors. In this case, the slightly larger accessible fractional area of 2e for Erel values in the range of 0.0-2.0 kcal/mol is inconsequential to the overall rotational flexibility of this polymer because this area corresponds to local (frustrated) rotational motion confined to the energetic basin of the global minimum (Fig. 3c and Supplementary Discussion). Quite interestingly, the relatively wide valley surrounding the global minimum in 2e is a direct consequence of the symmetry breaking caused by the addition of a single methyl group in this polymer series (vide supra), in which the three distinct minima present in 2d (or 2f) are effectively merged into two wide and fairly shallow minima. We have also devised an alternative statistical mechanical measure of the rotational flexibility in these polymers that accounts for the thermal population of all points on the rotational PES, which we denote the thermal flexibility index (F̄; see Methods section). At 25 °C, we find F̄ values of 11.7 (2a), 2.7 (2b), 8.2 (2c) and 9.9 (2d), 3.8 (2e), 6.2 (2f), again clearly indicating that the monomethyl-substituted cases are indeed the least flexible in both series. We note in passing that the same trend is also observed at 100 °C, which is a more relevant temperature for discussing high-Tg polymer applications. Taken together, both of these measures of rotational flexibility are consistent with our hypothesis that backbone flexibility is primarily responsible for the experimental observation that the polyesters with an intermediate number of methyl substituents exhibit the highest Tg values.
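One simple way to compute such an accessible fractional area is a flood fill over the periodic (θ, φ) grid, admitting only cells that lie within Erel of the global minimum. The sketch below follows that reading of the metric; the 4-neighbour connectivity and the energy reference are assumptions on our part, and the exact treatment in the paper may differ.

```python
import numpy as np
from collections import deque

def accessible_fraction(pes, e_rel):
    """Fraction of a periodic 2D rotational PES reachable from the global minimum
    without ever exceeding e_rel above that minimum (4-neighbour flood fill)."""
    n, m = pes.shape
    allowed = (pes - pes.min()) <= e_rel
    start = np.unravel_index(np.argmin(pes), pes.shape)
    seen = np.zeros_like(allowed, dtype=bool)
    queue = deque([start])
    seen[start] = True
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = (i + di) % n, (j + dj) % m      # periodic in theta and phi
            if allowed[ni, nj] and not seen[ni, nj]:
                seen[ni, nj] = True
                queue.append((ni, nj))
    return seen.sum() / pes.size

# Usage with the `pes` grid from the scan sketch earlier (energies in kcal/mol):
# fractions = [accessible_fraction(pes, e) for e in np.arange(0.0, 8.0, 0.5)]
```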
In order to pinpoint the underlying cause of the substantially increased flexibility observed with the introduction of a second methyl group in compounds 2c and 2f, we also analyzed the structures and relative energetics associated with a representative minimum (across each series) as well as its neighboring and most accessible TS conformations (Fig. 5). The locations of these stationary points on the Ramachandran-type plots are highlighted in Fig. 3c, from which one can see that the paths connecting any one of these minima and its corresponding TS conformations involves concerted rotations along θ and φ. This again stresses the collective nature of these rotational degrees of freedom and the utility of the Ramachandran-type analysis performed herein. In both the tricyclic and phthalic series, the relative energetics corresponding to the TS structures in Fig. 5 again confirm that all rotational barriers are indeed lower in the dimethyl-substituted compounds, which provides additional support to our hypothesis that the addition of a second methyl group leads to increased rotational flexibility in these polymers. What is not immediately clear from the energetics in Fig. 5 is that this trend is not due to relative stabilization of the TS conformations in 2c (or 2f) and instead results from relative destabilization of the dimethyl-substituted minima. In this regard, we note that both the representative minimum and the TS have very similar conformations when going from 2b to 2c (and 2e to 2f), yet the addition of a second methyl group introduces significant additional strain into the system from the perspective of the minima, as depicted in Fig. 5. This observation indicates that steric interactions between the vicinal ester groups are the predominant source of strain in these systems, a key finding that is also supported by the significant coupling found between these rotational degrees of freedom even in the absence of methyl substituents (2a and 2d in Fig. 3c). In each representative minimum, we believe that the steric strain between the vicinal ester groups is ameliorated by the favorable Bürgi-Dunitz angle 47 between the left alkoxy oxygen donor and the right carbonyl carbon acceptor, which allows for favorable orbital overlap. Upon the introduction of either one or two methyl groups, the system must minimize the steric interactions between the strongly coupled vicinal ester groups, whose motions are now further constrained by the presence of methyl-induced strain. Hence, the system adopts the optimal rotational configuration for the ester groups in the minimum energy conformations in 2b and 2c (2e and 2f), despite the additional methyl-induced strain present in the dimethyl-substituted compounds. In the corresponding TS conformations, the fact that the ester groups are specifically not in their optimal rotational configuration essentially governs their relative energetic profiles and overshadows any potentially unfavorable steric interactions with the methyl groups. These findings are further supported by a detailed analysis of a series of intramolecular symmetry-adapted perturbation theory 48-50 calculations that were used to further investigate this complex interplay between ester-ester steric interactions and methylinduced strain (Supplementary Discussion, Supplementary Table 7, and Supplementary Figure 34). 
Since the additional methyl-induced strain that is present in the dimethyl-substituted compounds has a significantly larger impact on the relative stabilities of the minima rather than the TS conformations, the net effect is a relative destabilization of the minimum energy conformations in 2c and 2f. This results in the lower apparent rotational barrier heights seen in Figs. 3c and 5, which in turn correspond to significant relative increases in the flexibility of these polymer chains with the introduction of a second methyl substituent. Discussion From this work, one sees clear experimental evidence that selective methyl substitution can be leveraged to tune the observed T g values in polyesters with strongly coupled vicinal ester groups. Since none of these polyesters are subject to strong secondary forces and only differ by the presence of one or two methyl groups along the backbone within a given series, we eliminate non-bonded intermolecular interactions as being the primary driving forces responsible for this aforementioned structure-property relationship. This claim is further supported experimentally by the fact that the monomethyl-substituted polyesters in both the tricyclic and phthalic series indeed have the largest respective V h within a given series, thereby providing a direct correlation between chain flexibility and the anomalous T g relationship observed in these polyesters. These experimental findings are also accompanied by a detailed theoretical analysis of the nonlinear influence of these methyl substituents on the backbone rotational flexibility of these polyesters. In doing so, we demonstrate that the underlying quantum mechanical mechanism responsible for this trend is the compromise made by the system in optimizing the rotational configuration of the strongly coupled vicinal ester groups (which are the primary source of strain in these systems) in the presence of methyl-induced strain. A key finding uncovered here is that this results in a relative destabilization of the minimum energy conformations in the dimethyl-substituted cases, which in turn lead to lower apparent rotational barriers and (by extension) increased chain flexibility. As such, the analysis presented herein provides key chemical and physical insight into how substituent effects influence intrinsic polymer chain flexibility, and takes us one step closer to the rational (and even computational 51 ) design of high-T g polymers. Quite interestingly, these findings are not only limited to the class of polyesters with strongly coupled vicinal ester groups, and can be used rather straightforwardly to rationalize the reported 52 T g differences between poly(ethylene terephthalate) and poly (ethylene methyl-terephthalate), in which the rotational motions of the quite distant ester groups are completely independent (Supplementary Discussion and Supplementary Figure 35). This example is strongly indicative that our findings are robust and will provide key insight into the relationship between structure and thermal properties across a range of synthetic polymers. Methods General experimental details. All manipulations of air-and water-sensitive compounds were carried out under nitrogen in a MBraun Labmaster glovebox or by using standard Schlenk line techniques. Epoxides were purchased from Aldrich and distilled from calcium hydride. Bis(triphenylphosphine)iminium chloride ([PPN]Cl) was also purchased from Aldrich and recrystallized by layering a saturated methylene chloride solution with diethyl ether. 
Phthalic anhydride was purchased from Aldrich and recrystallized from hot chloroform. The synthesis of all other anhydrides and the [ F Salph]AlCl catalyst is detailed in Supplementary Methods and NMR spectra of these anhydrides are provided in Supplementary Figures 1-5. with a Teflon-lined cap, removed from the glovebox, and placed in an aluminum heating block preheated to 60°C. After the appropriate amount of time, an aliquot was taken for 1 H NMR spectroscopic analysis to determine conversion of the cyclic anhydride. The reaction mixture was then diluted with~0.5 mL methylene chloride and precipitated into 10 mL of methanol with vigorous stirring, after which the methanol was decanted. Polymers made from 1a-1c were precipitated into hexanes due to their higher solubility in methanol. Precipitation was repeated as necessary to remove excess monomer and catalyst. The polymer was then dried under vacuum at 60°C for 3 days and used for physical measurements. Synthetic and characterization details are provided in Supplementary Tables 1-2 and NMR spectra of all polymers are provided in Supplementary Figures 6-23. DSC measurements. DSC measurements of the polymer samples were performed on a Mettler-Toledo Polymer DSC instrument equipped with a chiller and an autosampler. Samples were prepared in aluminum pans. All polyesters were analyzed using the following temperature program: −70 to 200°C at 10°C/min, 200 to −70°C at 10°C/min, and then −70 to 200°C at 10°C/min. Data were processed using the StarE software. All reported T g values were observed on the second heating cycle. All DSC thermograms are provided in Supplementary Figures 24-31. Computational details. Each of the Ramachandran-type plots in Fig. 3c was constructed from the energies corresponding to 72 × 72 constrained geometry optimizations (for a total of 5184 calculations per model compound, taken in increments of 5°along the collective variable coordinates defined by θ and φ). The accompanying periodic surface plots (heat maps) were created using bilinear interpolation of the data with an in-house script written in the Matlab 2017b program. All geometry optimizations were performed (using the corresponding default convergence thresholds) with the Q-Chem (version 5.0) software package 53 . Unless otherwise specified, all calculations employed the B3LYP hybrid exchangecorrelation functional [54][55][56] in conjunction with the D3(op) dispersion correction 57,58 and the 6-311++ G(d,p) basis set. This level of theory was selectively tested against higher-level quantum chemical methods (such as CCSD) with excellent performance reported for characterizing internal bond rotations in the systems considered in this work (Supplementary Discussion and Supplementary Tables 4-5). The optimized geometries and harmonic vibrational frequencies for all stationary points reported in Fig. 5 were obtained using the tightest convergence thresholds available in Q-Chem to accurately reflect the structures and energetics of the true stationary points on the global PES. For a select set of stationary point structures, intramolecular symmetry-adapted perturbation theory 50 (SAPT) calculations were performed at the density-fitted SAPT0 (DF-SAPT0) level using the PSI4 (version 1.1) software package 59 , with the jun-cc-pVDZ basis set and the corresponding JKFIT and MP2FIT auxiliary basis sets. All core orbitals were frozen during the SAPT calculations. Thermal flexibility index. 
In order to introduce the thermal flexibility index (F̄), we first define the thermal flexibility (F), which is an ensemble average over all conformational transitions on the rotational PES in Fig. 3c, as follows: In this expression, ρ*(β) denotes the probability density associated with conformation β (given by a Boltzmann distribution at temperature T), E≠(α→β) represents the maximum barrier height along the minimum energy path connecting conformations α and β, and A is the total area of the rotational PES (that has been discretized by N points). In other words, F is the Boltzmann factor from transition state theory applied to the barriers encountered during conformational transitions, which has been thermally averaged and summed over all possible transitions. In this work, a modified version of the Floyd-Warshall algorithm [60][61][62] was employed to compute E≠(α→β) between any two conformations on a given rotational PES. In particular, instead of evaluating the length of a combined path by addition of two subpath lengths (as was done in the original Floyd-Warshall algorithm), we evaluate E≠(α→β) for the combined path by choosing the higher maximum barrier of the two subpaths. Since the thermal flexibility is maximized for a (fictitious) completely flat rotational PES, in which the probability density is uniform and E≠(α→β) = 0 for any two conformations, we use this quantity (F₀ = ∫ dα = A) as a reference for defining F̄. In doing so, the F̄ corresponding to a given PES is defined as the ratio F̄ = F/F₀ × 100 (such that F̄ is always between 0 and 100), which allows for direct comparison of thermal flexibility indices corresponding to different PES. Data availability. All data are available from the authors upon reasonable request.
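The modified Floyd-Warshall step described above, which combines two subpaths by taking the larger of their maximum barriers rather than the sum of their lengths, can be sketched as follows. The sketch assumes the rotational PES has already been discretized into conformations with a matrix of single-step barriers between neighboring grid points; the final Boltzmann-weighted sum is only indicative of the averaging described in the text and does not reproduce the paper's exact normalization.

```python
import numpy as np

def minimax_barriers(direct):
    """Modified Floyd-Warshall: returns B with B[i, j] equal to the lowest
    achievable value of the *maximum* barrier met on any path from
    conformation i to conformation j.  `direct` holds the barrier of a
    single-step transition between neighboring grid points (np.inf where
    two conformations are not neighbors on the discretized PES)."""
    barrier = direct.copy()
    n = barrier.shape[0]
    for k in range(n):
        # Joining the subpaths i -> k and k -> j costs max(B[i, k], B[k, j])
        # instead of the sum used in the original shortest-path algorithm.
        via_k = np.maximum(barrier[:, k][:, None], barrier[None, k, :])
        barrier = np.minimum(barrier, via_k)
    return barrier

def thermal_flexibility_index(energies, direct, kT=0.593):
    """Indicative Boltzmann-weighted average over all conformational
    transitions (kT ~ 0.593 kcal/mol at 298 K); assumes a uniform grid, so
    the flat-PES reference F0 is simply the number of grid points."""
    boltz = np.exp(-(energies - energies.min()) / kT)
    rho = boltz / boltz.sum()                 # probability density rho*(beta)
    barrier = minimax_barriers(direct)
    np.fill_diagonal(barrier, 0.0)
    f = np.sum(rho[None, :] * np.exp(-barrier / kT))   # sum over alpha and beta
    f0 = energies.size                        # flat PES: all barriers are zero
    return 100.0 * f / f0                     # index between 0 and 100
```

On a completely flat surface every barrier is zero and the index evaluates to 100, matching the reference behaviour described in the text.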
6,729.8
2018-07-23T00:00:00.000
[ "Materials Science" ]
Remarks on the Nonlocal Dirichlet Problem We study translation-invariant integrodifferential operators that generate Lévy processes. First, we investigate different notions of what a solution to a nonlocal Dirichlet problem is and we provide the classical representation formula for distributional solutions. Second, we study the question under which assumptions distributional solutions are twice differentiable in the classical sense. Sufficient conditions and counterexamples are provided. Introduction The aim of this article is to provide two results on translation-invariant integrodifferential operators, which are not surprising but have not been systematically covered in the literature. Let us briefly explain these results in case of the classical Laplace operator. The classical result of Weyl says the following. Assume R is an open set, , and D is a Schwartz distribution satisfying in the distributional sense, i.e. for every . Then and in . This is the starting point for our analysis. The first aim is to study distributional solutions to nonlocal boundary value problems of the form L in in where L is an integrodifferential operator generating a unimodal Lévy process. Our second aim is to provide sufficient conditions such that distributional solutions to the nonlocal Dirichlet problem are twice differentiable in the classical sense. In case of the Laplace operator, it is well known that Dini continuity of R, i.e. finiteness of the integral 1 0 d for the modulus of continuity , implies that the distributional solution to the classical Dirichlet problem satisfies 2 . On the other hand, one can construct a continuous function 1 R and a distribution D 1 such that in the distributional sense, but 2 1 . These observations have been made long time ago [26]. They have been extended to non-translation-invariant operators by several authors [12,33] and to nonlinear problems [14,31]. Note that there are many more related contributions including treatments of partial differential equations on non-smooth domains. In the present work we treat the simple linear case for a general class of nonlocal operators generating isotropic unimodal Lévy processes. Let us introduce the objects of our study and formulate our main results. Let The function induces a measure d d , which is called the Lévy measure. Note that we use the same symbol for the measure as well as for the density. We study operators of the form L lim 0 d . (1.1) This expression is well defined if is sufficiently regular in the neighbourhood of R and satisfies some integrability condition at infinity. We recall that for 0 2 and d d with some appropriate constant , the operator L equals the fractional Laplace operator 2 on 2 R . The regularity theory of such operators has been intensively studied recently. For instance, it is well known [3,18,[36][37][38] that the solution of 2 with belongs to provided that neither nor is an integer. A similar result in more general setting is derived in [2]. We adopt a common convention and identify the Lévy measure with its radial profile, i.e. , R . Our standing assumption is that is a non-zero, nonincreasing radial function and that there exists a Lévy measure resp. a density such that for 0 , for 0 and 1 0 (1.2) for some 1 and 0 0. We remark that without loss of generality we may and do assume that 0 is such that the function 0 The condition L 1 R is the integrability condition needed to ensure well-posedness in the definition of L in distributional sense. 
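For orientation, under the standard conventions for a symmetric Lévy density ν, the operator of Eq. (1.1) and its fractional Laplacian special case are usually written as below; the precise constant c_{d,α} and the function class C_c²(R^d) are the customary choices and are stated here as assumptions rather than quoted from the source.

```latex
% Hedged reconstruction of Eq. (1.1) for a symmetric Lévy density \nu:
\[
  L u(x) \;=\; \lim_{\varepsilon \to 0^{+}} \int_{|h| > \varepsilon}
      \bigl( u(x+h) - u(x) \bigr)\, \nu(h)\, \mathrm{d}h ,
  \qquad x \in \mathbb{R}^{d},
\]
% and, for the rotationally symmetric alpha-stable case with 0 < \alpha < 2,
\[
  \nu(h) \;=\; c_{d,\alpha}\, |h|^{-d-\alpha}
  \quad\Longrightarrow\quad
  L u \;=\; -(-\Delta)^{\alpha/2} u
  \qquad \text{for } u \in C^{2}_{c}(\mathbb{R}^{d}).
\]
```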
Note that the shift of 0 results in a different constant in the integral above but does not affect its finiteness. Given an open set, we denote by resp. the usual Green resp. the Poisson operator associated with L , cf. Section 2. For a definition of the mean-value property and the Kato class K and K see Definition 2.1 and Definition 2.3 below. Here is our first result. The theorem above says that the distributional solution of Eq. 1.4 is unique up to a function with the mean-value property. If, additionally, is a Lipschitz domain and we impose some regularity, then there is a unique solution among all solutions which are bounded close to the boundary. Observe that we do not assume boundedness of , so, in general, may be unbounded outside of , even close to the boundary. Here bounded close to the boundary means that the restriction of to the set is bounded close to the boundary. Boundedness of , , would suffice, of course. We highlight that in general, there are functions with the mean-value property which are unbounded near to the boundary (see e.g. [6,Lemma 6]), thus, in order to address the uniqueness problem, we need to restrict ourselves to solutions bounded close to the boundary. Observe that in the first part of the theorem we do not say anything about existence of solutions; we are able to establish it only after additional assumptions on and . A more expanded discussion on the existence of solutions for problems driven by a non-local operator can be found in [8]. Note that, in the case where L equals the fractional Laplace operator, similar results like Theorem 1.1 are proved in [7]. A result similar to Theorem 1.1 has recently been proved in [29]. The authors consider a smaller class of operators and concentrate on viscosity solutions instead of distributional ones. Variational solutions to nonlocal operators have been studied by several authors, e.g., in [17,40]. The problem to determine appropriate function spaces for the data leads to the notion of nonlocal traces spaces introduced in [15]. It is interesting that the study of Dirichlet problems for nonlocal operators leads to new questions regarding the theory of function spaces. The formulation of our second main result requires some further preparation. They are rather technical because we cover a large class of translation-invariant operators. The similar condition to the following appears in [8]. is twice continuously differentiable and there is a positive constant such that for 0 . (A) and Eq. 1.2 are essential for proving that functions with the mean-value property are twice continuously differentiable, see Lemma 2.2. We emphasize that in general this is not the case and usually such functions lack sufficient regularity if no additional assumptions are imposed. The reader is referred to [32,Example 7.5], where a function with the mean-value property is constructed for which 0 does not exist. Let be a minus fundamental solution of L on R (see Eq. 2.2 for definition). Note that in the case of the fractional Laplace operator for and some constant (see e.g. [10]). In what follows we will assume the kernel to satisfy the following growth condition: (G) 2 R 0 and there exists a non-increasing function 0 0 and 0 0 such that (i) if 1 The result uses quite involved conditions because the measure interacts with the Dinitype assumptions for the right-hand side function . Looking at examples, we see that the two cases described in the theorem appear naturally. 
In the fractional Laplacian case ( for ), finiteness of the expression 1 2 0 1 d depends on the value of 0 2 . We show in Section 6 that the conditions hold true when L is the generator of a rotationally symmetric -stable process, i.e., when L equals the fractional Laplace operator. Note that Theorem 1.2 is a new result even in this case. We also study the more general class, e.g. operators of the form , where is a Bernstein function. Note that in the theorems and remark above we do not assume that is bounded. Remark 1. 4 We emphasize that in the case of L being the fractional Laplace operator of order 0 2 and 2 1 , it is not true that the function 1 1 d belongs to 2 1 , or even to 1 1 1 as is stated in [1,Theorem 3.7]. A similar phenomenon has been mentioned in [3] and is visible here as well. Observe that in such case the integrals (1.5) and (1.6) are clearly divergent and consequently, Theorem 1.2 cannot be applied. We devote Section 5 to the construction of counterexamples for any 0 2 . On the other hand, by [39, Theorem 1.1(b)], those counterexamples are 1 1 1 for every 0 1 and consequently, they are also pointwise solutions. Thus, [1, Theorem 1.1] is also not true in general. The article is organized as follows: in Section 2 we provide the main definitions and some preliminary results. The proof of Theorem 1.1 is provided in Section 3. Section 4 contains several rather technical computations and the proof of Theorem 1.2. We discuss the necessity of the assumptions of Theorem 1.2 through examples in Section 5. Finally, in Section 6 we provide examples that show that the assumptions of Theorem 1.2 are natural. Preliminaries In this section we explain our use of notation, define several objects and collect some basic facts. We write when and are comparable, that is the quotient stays between two positive constants. To simplify the notation, for a radial function we use the same symbol to denote its radial profile. In the whole paper and denote constants which may vary from line to line. We write when the constant depends only on . The family 0 induces a strongly continuous contraction semigroup on 0 R and 2 R R d R whose generator A has the Fourier symbol . Using the Kolmogorov theorem one can construct a stochastic process with transition densities , namely P d . Here P is the probability corresponding to a process starting from , that is P 0 1. By E we denote the corresponding expectation. In fact, is a pure-jump isotropic unimodal Lévy process in R , that is a stochastic process with stationary and independent increments and càdlàg paths, whose transition function is absolutely continuous and its density is isotropic unimodal, that is radial and radially non-increasing (see for instance [41] and [44]). One of the objects of significant importance in this paper is the potential kernel defined as follows: 0 d . Clearly . The potential kernel can be defined in our setting if 1 1 . In particular, for 3 the potential kernel always exists (see [41,Theorem 37.8]). If this is not the case, one can consider the compensated potential kernel , we can set 0 0. In other cases the compensation must be taken with 0 R 0 . For details we refer the reader to [21] and to Appendix A. Slightly abusing the notation, we let 1 be Eq. 2.1 for 0 0 ... 0 1 R . Thus, we have arrived with three potential kernels: , 0 and 1 . Each one corresponds to a different type of process and an operator associated with it. 
In order to merge these cases in one object, we let The basic object in the theory of stochastic processes is the first exit time of from , inf 0 . Using we define an analogue of the generator of , namely, the characteristic operator or Dynkin operator. We say a Borel function is in a domain D U of Dynkin operator U if there exists a limit Here is understood as a limit over all sequences of open sets whose intersection is and whose diameters tend to 0 as . The characteristic operator is an extension For a wide description of characteristic operator and its relation with the generator of we refer the reader to [16,Chapter V]. Instead of the whole R , one can consider a process killed after exiting . By we denote its transition density (or, in other words, the fundamental solution of L in ). We have It follows that 0 . Proceeding as in the proof of [ Here we assume that the integral is absolutely convergent. If has the mean-value property in every bounded open set whose closure is contained in then is said to have the mean-value property inside . Clearly if has the mean-value property inside , then U 0 in . In general, functions with the mean-value property lack sufficient regularity if no additional assumptions are imposed. In our setting, however, we can show that they are, in fact, twice continuously differentiable in . The proof is similar to the proof of [8,Theorem 4.6] and is omitted. [25,45] We say that a Borel function belongs to the Kato class K if it satisfies the following condition This is one of three conditions discussed by Zhao in [45]. A detailed description of different notions of the Kato class and related conditions can be found in [25]. R , it follows that 0 8 . Since 0 was arbitrary, the claim follows by induction. A consequence of Lemma 2.6 is the following corollary. Corollary 2.7 Let be open and bounded and N. If R 0 1 loc then 1 . The following lemma is crucial in one of the proofs. Proof Observe that thanks to Eq. 1.2 we get that L 1 for any 0. Now one may proceed exactly as in the proof of convergence of convolution in the classical 1 space. Remark 2.10 Note that Lemma 2.9 fails without the assumption of doubling condition on . In fact, this condition is crucial even for the well-posedness of the problem, let alone further results. Weak Solutions The aim of this section is to prove Theorem 1.1. For the fractional Laplacian related results are known, cf. [7,Section 3]. A similar result has recently been obtained in [11] using purely analytic methods instead of probabilistic ones exploited in [7]. When the generalization of these results to more general nonlocal operators is immediate, we omit the proof. and consequently, L can be calculated pointwise. Now we proceed exactly as in the proof of [8,Lemma 4.10]. Note that we do not need the assumption A1 from [8], because it is used only to show that the considered function belongs to 2 ; furthermore, since the Lévy density is nonincreasing, it is continuous in a.e. 0 , so indeed the proof is exactly the same. Thus, the claim follows immediately. The following lemma is a generalization of [7, Theorem 3.9 and Corollary 3.10], where the fractional Laplace operator is considered. R . By the strong Markov property we may assume that 1 is a Lipschitz domain. We claim that has the mean-value property in 1 . Indeed, let 2 be an open set relatively compact in such that 1 2 . There exist functions 1 , 2 on 1 such that 1 2 , 1 is continuous and bounded on 1 and 2 0 in 2 . 
We have The first integral is clearly absolutely convergent. We claim that it is also continuous as a function of in 1 The Sufficient Condition for Twice Differentiability In this section, we provide auxiliary technical results and the proof of Theorem In particular, 4 3 for sufficiently small . It follows that , if is sufficiently small. Thus, is uniformly continuous. where the localization functions 1 and 2 are chosen in dependence of . Note that in the integral defining 2 , due to the function 2 and (G), integration w.r.t. takes place in a region where and its derivative are bounded. Hence, from (G) we see that differentiating under the integral sign is justified. We obtain 2 R Counterexamples for the Case "α + β = 2" In this section we provide several counterexamples for Theorem 1. where 0 2 . It is known (see [7] or Theorem 1. where is the (compensated) potential kernel for process whose generator is 2 . We claim that for any fixed 1 , the function E belongs to 1 . Indeed, using the explicit formula for the Poisson kernel for the unit ball [5], we get Observe that 1 , which implies that we are separated from the origin. It follows that 1 is locally bounded and since the integral converges at infinity for any form of , we obtain that for fixed 1 Case α ∈ (0, 1) We follow closely the idea from the proof of Theorem 1.2 apart from the fact that at the end we will show that the last function 3 is not continuously differentiable. From Lemma 2.6 we get then the same argument applies for 4 . Therefore, it remains to calculate the derivative of 3 . Observe that on 1 4 we have 1 . To simplify the notation we accept a mild ambiguity and by we denote, depending on the context, either a real number or a vector in R of the form 0 ... 0 . Let 0. Observe that is equal to 0 for 0 and , thus, the expression in the first term of Eq. 5.8 simplifies. Furthermore, the second term vanishes, and the remaining limit is We note that in our setting, is strictly decreasing. Lemma A.2 allows us to introduce, following [5], [10], [28], a compensated potential kernel by setting for Observe that the left-hand side of Eq. A.5 converges to so it is finite. The remaining integral on the right-hand side converges as well by the monotone convergence theorem, but since all the other terms are finite, it follows that the integral is also finite and we obtain lim 0 E 0 E 0 which ends the proof.
4,043.2
2020-02-06T00:00:00.000
[ "Mathematics" ]
MAGE-A3 regulates tumor stemness in gastric cancer through the PI3K/AKT pathway Gastric cancer remains a malignant disease of the digestive tract with high mortality and morbidity worldwide. However, due to its complex pathological mechanisms and lack of effective clinical therapies, the survival rate of patients after receiving treatment is not satisfactory. A increasing number of studies have focused on cancer stem cells and their regulatory properties. In this study, we first constructed a co-expression network based on the WGCNA algorithm to identify modules with different degrees of association with tumor stemness indices. After selecting the most positively correlated modules of the stemness index, we performed a consensus clustering analysis on gastric cancer samples and constructed the co-expression network again. We then selected the modules of interest and applied univariate COX regression analysis to the genes in this module for preliminary screening. The results of the screening were then used in LASSO regression analysis to construct a risk prognostic model and subsequently a sixteen-gene model was obtained. Finally, after verifying the accuracy of the module and screening for risk genes, we identified MAGE-A3 as the final study subject. We then performed in vivo and in vitro experiments to verify its effect on tumor stemness and tumour proliferation. Our data supports that MAGE-A3 is a tumor stemness regulator and a potent prognostic biomarker which can help the prediction and treatment of gastric cancer patients. INTRODUCTION Gastric cancer remains one of the malignant tumors with the highest morbidity and mortality rates worldwide. According to Global Cancer Statistics, as of 2020, a total of 1,089,103 new gastric cancer patients have been diagnosed worldwide, accounting for 5.6% of the total new cancer patients. There were 768,793 new deaths from gastric cancer, accounting for 7.7% of the overall mortality [1]. Despite many significant advances in therapeutic strategies over the past decade, such as immunotherapy, chemotherapy, and radiation therapy, therapeutic efficacy is not ideal and the survival rate of patients after treatment remains poor [2,3]. The conventional surgical resection is often associated with the risk of metastasis and recurrence, as most patients were diagnosed at advanced stage [4]. Hence, it is more urgent and practical to investigate the molecular mechanism of gastric cancer and further develop effective prognostic factors. AGING Recently, many studies have focused on a specific class of tumor cells, namely cancer stem cells. They are involved in most processes of disease progression and heterogeneity of tumor [5]. Cancer stem cells own some characteristics as normal stem cells, such as selfrenewal and ability to differentiate into other cells that consist of various parts of the tumor [6]. Besides, cancer stem cells also possess their own characteristics. They usually stay at a dormant state for a long time and are highly resistant to drugs and insensitive to external physical and chemical environments that are detrimental to the cells [7]. Accumulating evidence suggests that cancer stem cells take the main responsibility for post-surgical recurrence, tumor metastasis, resistance to chemotherapy and radiation therapy [8,9]. Just for this reason, focusing on cancer stem cell therapy and exploring the key molecules that regulate the properties of tumor stem cells will greatly improve the likelihood of disease cure and patient survival rate. 
To better investigate and characterize these molecules that regulate and maintain tumor stemness, Malta and his colleagues analyzed transcriptome and other profiles from the TGCA database to obtain an indices which could quantify stemness [10]. The mRNA expression-based stemness index (mRNAsi) is used to quantify the stemness of mRNA expression in samples, the epigenetic regulation based-index (EREG-mRNAsi) is utilized to characterize the effect of epigenetic modifications on stemness. By applying these tumor stemness indices, researchers can obtain molecules involved in the regulation of tumor stemness in different tumors in the TCGA database. Higher index scores represent more important in its regulation of tumor stemness. Therefore, we got these tumor stem cell indices and applied them to the present study. In this study, we first identified tumor stemness-related modules and key genes by using the WGCNA and mRNAsi indices differentially expressed genes (DEGs). After extracting the expression data of these genes, we performed consensus clustering analysis on gastric cancer samples in TCGA. We found that gastric cancer patient samples could be classified into two tumor stemness subtypes (C1 and C2groups) based on these key genes. WGCNA were again applied to construct coexpression network and screen key genes after gastric cancer samples consensus clustering analysis. Then, we implemented an initial screening of the modules we were interested in and applied the LASSO regression analysis algorithm to construct a risk model and validated it. Finally, we identified MAGE-A3 as the final study subject. The results show that MAGE-A3 is involved in the regulation of tumor proliferation and that tumor stemness regulates through PI3K/AKT signalling pathways. Thus, our study provides a new potential target for the treatment and prognosis of gastric cancer. CCK-8 assay To test the proliferative capacity of the cells, the CCK-8 (Thermo Fisher, USA) assay was performed. Inoculate 10,000 cells in wells of a ninety-six-well plate with three replicate wells per group. Continue all subsequent operations according to the kit instructions. The absorbance at 450 nm of each group was measured at 0, 24 and 72 hours after inoculation using a microplate reader. 5-ethynyl-2′-deoxyuridine (EdU) incorporation assay Cells from the experimental and control groups were inoculated on cell coverslips at the same time. After overnight incubation at 37° C in 5% CO2, subsequent manipulations were performed as follows. Briefly, Replace the complete medium with fresh medium containing 20mM EDU (Thermo Fisher, USA) and incubated at 37° C for 2 hours. DAPI was used to stain cell nuclei. Olympus confocal microscope FV3000 was used to observe and take pictures. Immunofluorescence and confocal imaging Cellular immunofluorescence is utilized to detect the expression of tumor stem cell biomarker protein levels. The experimental steps are briefly described as follows:1 Cells were inoculated on coverslips, cultured overnight and washed three times.2 Cells were fixed with 4% paraformaldehyde at room temperature and permeabilized with 0.1% Triton-100. 3 Sealing of antigens at room temperature 1 hour.4 Primary antibody (CD44, EpCAM; Abclonal, China) was incubated overnight at four degrees and CY3-labeled secondary antibody(Abclonal, China) was added. Olympus confocal microscopy (Olympus, Japan) was used for photography. 
Tumor xenograft model and animal imaging Four-week-old immunodeficient nude mice, purchased from Beijing Huafukang Experimental Animal Co., Ltd., were kept in a specific pathogen-free (SPF) environment for three days before conducting the follow-up experiments. Test and control groups of 1×10^7 cells were simultaneously injected into the mice by subcutaneous injection. The volume of the xenograft tumors was measured on the 6th, 12th and 18th days after injection, respectively. Animal imaging was used to observe tumor growth in mice in real time [11]. Differentially expressed genes (DEG) Gastric cancer RNA sequencing data were processed with the limma, pheatmap and ggplot2 R packages to screen differentially expressed genes and present the top 50 DEGs in a heat map [12]. The screening criteria were a P-value less than 0.05 and |LogFC| ≥ 2. WGCNA and identification of key module Unlike approaches that focus only on differentially expressed genes, weighted gene co-expression network analysis (WGCNA) analyzes the data based on two assumptions: (1) genes with similar expression patterns may be co-regulated, functionally related or under the same signaling pathway; (2) the genes in the network follow a scale-free network distribution [13]. After removing the abnormal samples, the Pearson correlation coefficient between any paired genes was calculated. We then built the weighted adjacency matrix using the power function a_mn = |c_mn|^β [14]. A suitable β value is determined to remove weak correlations between genes, which is more conducive to building the co-expression network. In the next step, we transform the weighted adjacency matrix into a topological overlap matrix (TOM) so that we can measure the connectivity of genes in the network. Based on the TOM measurements, average linkage hierarchical clustering is used to classify genes with similar expression profiles into the same module. A minimum size of 50 genes per group is the criterion for cutting the gene dendrograms [13]. Consensus clustering After finding key genes by WGCNA and mRNAsi, we applied consensus clustering analysis to divide the TCGA patient samples into different subtypes. The R package ConsensusClusterPlus completed the above analysis. The cumulative distribution function (CDF) and consensus matrices determine the appropriate number of subgroups [15]. Functional annotation Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were applied to characterize the biological functions of the genes of interest. These analyses were carried out with the clusterProfiler, org.Hs.eg.db, enrichplot and ggplot2 R packages [14,16]. Construction of risk score models LASSO (least absolute shrinkage and selection operator) regression analysis and Kaplan-Meier survival analysis were used to construct the risk score model [11,17,18]. Availability of supporting data The data generated during this study are included in this article and its Supplementary Information files and are available from the corresponding author on reasonable request. Detection of differences in mRNAsi and differentially expressed genes in gastric cancer The mRNAsi is a widely recognized parameter for determining the similarity between tumor cells and normal stem cells. We first explored the differences in mRNAsi between normal and tumor samples of gastric cancer. As shown in Figure 1A, mRNAsi was dramatically different between the two groups, with the tumor group samples possessing much higher mRNAsi values than the normal samples.
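The WGCNA construction described in the methods above (power-function adjacency a_mn = |c_mn|^β, a topological overlap transformation, then average-linkage clustering) can be sketched outside the WGCNA R package as follows. This is a simplified unsigned-network illustration; the expression matrix and the soft threshold β = 4 are placeholders consistent with the text, not the authors' actual pipeline.

```python
import numpy as np

def wgcna_like_tom(expr, beta=4):
    """expr: samples x genes expression matrix (placeholder data).
    Returns the topological overlap matrix (TOM) of the weighted network."""
    # Weighted adjacency from the power function a_mn = |c_mn| ** beta,
    # where c_mn is the Pearson correlation between genes m and n.
    corr = np.corrcoef(expr.T)                  # genes x genes correlations
    adj = np.abs(corr) ** beta
    np.fill_diagonal(adj, 0.0)

    # Topological overlap: how much neighborhood two genes share in the network.
    k = adj.sum(axis=1)                         # connectivity of each gene
    shared = adj @ adj                          # l_mn = sum_u a_mu * a_un
    denom = np.minimum.outer(k, k) + 1.0 - adj
    tom = (shared + adj) / denom
    np.fill_diagonal(tom, 1.0)
    return tom

# Genes would then be clustered on the dissimilarity 1 - TOM with average
# linkage hierarchical clustering, and branches cut into modules of >= 50 genes.
```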
Subsequently, we screened differentially expressed genes in the TCGA gastric cancer RNA sequencing data. The limma and pheatmap R packages were used to process the above data and extract the top 50 DEGs to plot as a heat map and volcano map (Figure 1B, 1C). In total, we obtained 6736 differential genes, of which 1139 were down-regulated and 5597 were up-regulated. Identification of mRNAsi-related key genes and their functional annotation The above findings demonstrate that among the DEGs there may be genes that play a critical role in regulating tumor stemness. Therefore, we applied WGCNA and mRNAsi to search for these genes more deeply. After the DEGs were processed by the WGCNA algorithm, we first removed the samples that did not meet the threshold because of the deflection of their gene expression (Figure 2A). We then selected β = 4 (scale-free R² = 0.9) as a soft threshold to build the scale-free network (Figure 2B). After calculating the similarity between modules, we merged the modules below the red line (Figure 2C) and plotted the gene dendrogram (Figure 2D). A total of 8 modules were obtained and named with different colors. A heat map was plotted to show the relationship between the different modules and the tumor stem cell index (Figure 2E). Finally, we chose the brown module as the subject for the subsequent study. Gene significance (GS) represents the correlation between a gene and the trait of interest. Module membership (MM) represents the correlation between the module genes and this module. In this study, we set gene significance (GS) > 0.5 and module membership (MM) > 0.75 as criteria to screen key genes in the brown module (Figure 2F). In total, 54 tumor stemness-related genes were obtained. First, we performed correlation analysis on these 54 genes to demonstrate the accuracy of the above parameter settings (Supplementary Figure 1). We subsequently extracted the expression data of these genes to map them as box plots and heat maps (Figure 3A, 3B). Functional enrichment analysis was likewise performed for these genes (Figure 4A, 4B). The results of the GO analysis showed that these genes are involved in sister chromatid segregation and nuclear division, etc. The KEGG results mainly concerned the cell cycle and mismatch repair, etc. (Figure 4C, 4D). Molecular subtypes of gastric cancer based on mRNAsi-related key genes and identification of key modules To explore novel investigation objectives and horizons, we conducted a consensus clustering analysis using the obtained tumor stemness-associated key genes. After consensus clustering analysis, the 384 gastric cancer patient samples were classified into different subtypes. Figure 5A shows the relative change of the CDF curve of the consensus score from k = 2 to 9, and Figure 5B shows the relative change in area under the CDF curve for k = 2 to 9. Consensus clustering with k = 2 proved to be the best choice for dividing the patient samples (Figure 5C). Then we performed survival curve analysis between the two groups and examined their relationship with clinical characteristics. The K-M survival analysis showed that the overall survival rate of the C1 group was higher than that of the C2 group (Figure 5D). The clinical heatmap for the two groups is shown in Figure 5E. In this part of the study, we likewise performed WGCNA analysis on the consensus clustering samples. After first filtering out the outliers, this time we selected β = 4 (scale-free R² = 0.9) as the parameter to build the network (Figure 6A, 6B).
After merging the high-similarity modules (Figure 6C, 6D), the heatmap was obtained (Figure 6E). Finally, we identified the blue module as the object to study due to its maximum positive correlation with the tumor. Establishment and validation of risk prognostic model A total of 621 genes were obtained. To investigate the prognostic role of these genes in gastric cancer, a risk prognostic model was constructed. We first performed an initial screening and obtained 51 genes (Supplementary Figure 2 and Table 1). We then applied these 51 genes to the LASSO regression algorithm to construct a risk prognostic model (Figure 7A). As a result, we obtained a 16-gene risk model. Six of these genes were positively associated with the overall survival of the sample and ten were negatively associated (Supplementary Figure 3). We applied the coefficient of each risk gene to calculate the risk score for every gastric cancer patient sample. The calculation formula is as follows: risk score = (expression of RNF43 × −0.155) + (expression of INCENP × −0.127) + (expression of KIF24 × −0.056) + (expression of PGAM5 × −0.055) + (expression of SASS6 × −0.04) + (expression of SAC3D1 × −0.037) + (expression of TTF2 × −0.034) + (expression of MASTL × −0.023) + (expression of E2F2 × −0.021) + (expression of GAD1 × −0.018) + (expression of HBB × 0.07) + (expression of UPK1B × 0.08) + (expression of MAGE-A3 × 0.09) + (expression of ADH4 × 0.1) + (expression of BST1 × 0.17) + (expression of GRB14 × 0.25). The TCGA cohort and the external validation cohort (GSE88437) could each be divided into high- and low-risk groups. The distributions of patient survival status and risk scores from the TCGA database and the external validation database are presented in Figure 7C-7F. The scatter plots show that as the patient risk score increases, the proportion of patient deaths also increases. Kaplan-Meier survival analysis showed that the overall survival of the high-risk group was lower than that of the low-risk group. Then, we applied Cox regression analyses to evaluate the risk model. The results in Figure 8A, 8B and Table 2, from the TCGA database, demonstrate that age, TNM stage, and risk score were all significantly associated with OS. The results in Figure 8C, 8D and Table 3, from the external validation dataset, again indicated that the risk score is significantly associated with OS. Clinical heat maps of risk scores and other clinical characteristics are shown in Figure 9A, 9B. Then, we evaluated this model with a time-dependent ROC curve. The AUC values for 1, 2, and 3 years in the TCGA cohort were 0.7, 0.69, and 0.693, respectively (Figure 9C). The AUC values for 1, 2, and 3 years in the external validation cohort were 0.498, 0.531, and 0.581, respectively (Figure 9D). MAGE-A3 possesses the ability to regulate tumor stemness and proliferation through the PI3K/AKT signaling pathway After considering the expression of these OS-positively-related risk genes and their prognostic roles, we selected MAGE-A3 and GRB14 as the subsequent study subjects (Figure 10). As shown in Figure 11A, MAGE-A3 was significantly highly expressed in the gastric cancer cell lines MGC803 and SGC7901, while the difference in GRB14 expression was not significant. Subsequently, we detected the expression of MAGE-A3 in cancer and normal tissues from gastric cancer samples, and the results showed that MAGE-A3 is highly expressed in tumor tissues (Figure 11B).
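The published 16-gene formula above can be applied directly to new expression profiles. The sketch below hard-codes the reported coefficients and splits patients at the cohort median risk score; the median cut-off, file name, and column layout are assumptions for illustration, since the excerpt does not state the exact threshold used.

```python
import pandas as pd

# Coefficients transcribed from the published 16-gene risk-score formula.
COEFFICIENTS = {
    "RNF43": -0.155, "INCENP": -0.127, "KIF24": -0.056, "PGAM5": -0.055,
    "SASS6": -0.04, "SAC3D1": -0.037, "TTF2": -0.034, "MASTL": -0.023,
    "E2F2": -0.021, "GAD1": -0.018, "HBB": 0.07, "UPK1B": 0.08,
    "MAGE-A3": 0.09, "ADH4": 0.1, "BST1": 0.17, "GRB14": 0.25,
}

def risk_scores(expr: pd.DataFrame) -> pd.Series:
    """expr: patients x genes expression matrix containing the 16 model genes
    (gene symbols as columns); returns the linear risk score per patient."""
    return sum(expr[gene] * coef for gene, coef in COEFFICIENTS.items())

# Example usage with a placeholder expression table:
# expr = pd.read_csv("tcga_stad_expression.csv", index_col=0)
# scores = risk_scores(expr)
# groups = (scores > scores.median()).map({True: "high risk", False: "low risk"})
```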
In order to verify the relationship between MAGE-A3 and tumor stemness regulation, we also detected the expression of tumor stem cell markers CD44 and EpCAM in these tissues. As shown in the Figure 11C, the expression of CD44 and EpCAM increased with the expression of MAGE-A3. Therefore, MAGE-A3 was selected as the final target of the study. We constructed MAGE-A3 knockdown stable transgenic cell in gastric cancer cell lines. Subsequently, cancer stem cell biomarkers were detected after MAGE-A3 knockdown. As shown in Figure 11D, 11E, knockdown of MAGE-A3 resulted in a significant decrease in protein expression of these biomarkers. We also examined the effect of MAGE-A3 on cell proliferation. The thymidine analog EDU can be incorporated into newly synthesized DNA in place of thymidine during the S phase of the cell cycle. The results of the EDU experiments were similar to those described above. Knockdown of MAGE-A3 reduced the ability of the cells to synthesize DNA ( Figure 11F). The results of CCK-8 experiments showed that the knockout of MAGE-A3 decreased the proliferation ability of SGC7901 cells by 25.6% and MGC803 by 24.1% ( Figure 11G). To verify through which signaling pathway MAGE-A3 exerts its ability to regulate tumor stemness and proliferation. We applied Western Blot technique to detect PI3K/AKT signaling pathway and applied 740Y-P, an activator of this signaling pathway, to MAGE-A3 knockdown cells. The results as shown in Figure 12A showed that the expression of PI3K and AKT decreased significantly after knockdown of MAGE-A3, but their expression rebounded significantly after treatment with activator 740Y-P. Meanwhile, the expression of CD44 and EpCAM varied with the expression of PI3K and AKT ( Figure 12B). The results of CCK-8 and EDU experiments also showed that the proliferation ability of cells significantly increased when 740Y-P was added ( Figure 12C-12F). These results demonstrate that MAGE-A3 may achieve its role in regulating tumour stemness and proliferation through the PI3K/AKT signalling pathway. Finally, we performed in vivo experiments that were used to verify the effect of MAGE-A3 on tumor growth. The constructed knockdown and control cells were injected simultaneously into different groups of nude mice subcutaneously. The volume size of the xenograft tumors in mice was measured at different time points after injection, respectively. Finally, at day 18, The mice were imaged and then sacrificed to remove the tumor and measure their weight. As shown in Figure 13A, 13B, the mean reduction in the knockdown group compared to the control group was 405 and 285 cubic millimeters, respectively. In terms of tumor weight, the knockdown group injected with MGC803 cells was reduced by 0.5 g and SGC7901 by 0.334 g ( Figure 13C). The results obtained from the animal imaging technique showed that the bioluminescence of knockdown groups significantly lower than the control groups ( Figure 13D, 13E). DISCUSSION Despite the great contribution of new therapies to the treatment of cancer, some patients still own poor outcomes, prompting us to search for potential molecular mechanisms to address this issue. Tumor heterogeneity has always been a challenge for oncology treatment. The study of tumor stem cells, a special class of stem celllike tumor cells, has been recently springing up vigorously. 
With the property of stem cell-like and resistant to chemotherapy and radiotherapy, tumor stem cells are involved in tumor multiple processes such as tumorigenesis, progression, metastasis, and recurrence after therapy [19][20][21][22]. Therefore, this study focuses on the molecular mechanisms regulating the properties of cancer stem cells. In 2018 Malta and his colleagues introduced the concept of cancer stemness index, a concept used to describe the degree of tumor differentiation. The cancer stemness index was rapidly applied to study cancer stem cells in different cancer, such as lung adenocarcinoma, breast cancer, pancreatic cancer, etc. [23][24][25][26][27]. In this study, we first applied WGCNA and mRNAsi to construct a coexpression network based on differentially expressed genes to obtain modules with different degrees of correlation with mRNAsi. We select the brown module and then determine the stemness-related key genes. The results of functional annotation of key genes show that they are mainly responsible for cell cycle, chromosome segregation, etc. [24,28]. This finding is consistent with the results of previous studies. Compared with the early research, our study performed consensus clustering analysis on gastric cancer samples based on stemnessrelated key genes and identified two molecular subtypes of gastric cancer (C1 and C2 groups). Then, we again construct the co-expression network after consensus clustering and select the blue module as the subject study target. We then construct the stemness subtype-related risk prognostic model with the LASSO regression analysis after Univariate COX regression analysis preliminarily screening the genes within the blue module. The results of Kaplan Meier analysis showed that the OS of patients in the high-risk group was significantly lower than that in the low-risk group, and this result was validated by the GES88437 dataset. COX regression analyses demonstrate that this risk-prognostic model can be used as an independent prognostic factor to predict the outcomes of gastric cancer patients, providing a new basis and possibility for precise treatment and management of patients. This risk prognostic model contains 16 risk genes, 6 of which are positively associated with OS. UPK1B is a member of the four-transmembrane superfamily, and most members of this family are characterized by four hydrophobic structural domains. This protein is found in asymmetric unit membranes and can interact with other family members to form complexes. And this complex may function in normal bladder epithelial physiology to regulate membrane permeability of superficial umbrella cells or stabilize the apical membrane through AUM/ cytoskeleton interactions [29]. High expression of UPK1B in clinical samples of bladder cancer was highly correlated with lymph node metastasis, distant metastasis and advanced stage of tumor. And in vitro experiments, UPK1B knockdown affects cell proliferation, migration and invasion through Wnt/β-catenin signaling pathway [30]. The melanoma-associated antigen A (MAGE-A) subfamily is one of the most thoroughly studied members of the cancer/testis antigens (CTA) family, whose expression is characterized by specific expression in various tumor tissues but not in normal tissues, except for germline cells [31]. Therefore, based on their expression characteristics, members of the MAGE-A subfamily have been developed as targets for immunotherapy such as vaccines and CAR-T cells [32]. 
MAGE-A3 plays a prognostic role in many cancers and promotes cancer proliferation, migration, invasion and drug resistance [33][34][35]. In hepatocellular carcinoma, MAGE-A3 is highly expressed in cancerous tissues and is associated with poor patient prognosis. Knockdown of this protein, which is regulated by LINC01234 and miR-31-5p, affects tumor proliferation, invasion and cisplatin-induced apoptosis [36]. In line with previous findings, we also found that MAGE-A3 is associated with poor prognosis in gastric cancer patients and plays an integral role in the progression of the tumor. However, it is noteworthy that our study appears to be the first to suggest that MAGE-A3 is involved in the regulation of tumour stemness, after constructing a stemness subtype-related risk prognostic model. Subsequent in vitro and in vivo experiments provided favorable evidence in support of this finding. Our experimental results also indicate that MAGE-A3 may regulate tumor stemness and proliferation through the PI3K/AKT signaling pathway. In recent years, this signaling pathway has been frequently reported to be involved in the regulation of tumor stemness in many tumors [37][38][39][40][41]. However, our study also has some limitations. First, we only conducted studies at the mRNA level, and the exploration at the protein level is relatively limited. Second, we have not yet investigated the mechanism by which MAGE-A3 specifically regulates tumor stemness. This will be the focus of our next work. Further studies on the molecular mechanisms of MAGE-A3 will deepen our understanding of the pathological mechanisms of gastric cancer and provide new directions for treatment. In summary, we have obtained a 16-gene risk-prognosis model based on WGCNA, consensus clustering analysis and LASSO analysis. This model was validated and can be used to predict disease progression or treatment progression in patients. Finally, we screened these 16 genes and found that MAGE-A3 possesses the ability to regulate tumour stemness, and these findings suggest that MAGE-A3 may be an effective potential target for the treatment of gastric cancer. CONFLICTS OF INTEREST The authors declare that they have no conflicts of interest. ETHICAL STATEMENT All mouse experimental procedures were evaluated and authorized in strict accordance with the guiding principles of the Animal Protection and Use Committee of Wuhan University of Science and Technology and in
5,685.4
2022-11-08T00:00:00.000
[ "Biology" ]
Effective information spreading based on local information in correlated networks Using network-based information to facilitate information spreading is an essential task for spreading dynamics in complex networks. Focusing on degree-correlated networks, we propose a preferential contact strategy based on the local network structure and the local informed density to promote information spreading. During the spreading process, an informed node preferentially selects a contact target among its neighbors based on their degrees or local informed densities. By extensively implementing numerical simulations in synthetic and empirical networks, we find that when only the local structure information is considered, the convergence time of information spreading is remarkably reduced if low-degree neighbors are favored as contact targets. Meanwhile, the minimum convergence time depends non-monotonically on the degree-degree correlation, and a moderate correlation coefficient results in the most efficient information spreading. Incorporating the local informed density information into the contact strategy, the convergence time of information spreading can be further reduced, and is minimized by a moderately preferential selection. In extremely disassortative networks, hubs are surrounded by small-degree nodes and thus form star-like structures. Usually the hubs are not directly connected to each other and the star-like groups are interconnected via small-degree nodes. A typical such structure is illustrated in Fig. S1(a). To quickly transmit the information from an informed star-like group to other groups, the small-degree nodes, which take the role of bridges, need to be contacted more preferentially. In extremely assortative networks, large-degree nodes form a rich club and are located in the core of the network, while small-degree nodes are located in the periphery, as shown in Fig. S1(b). In this case, the information can easily spread in the core, but small-degree nodes should be preferentially contacted to avoid redundant contacts among hubs. In conclusion, both for highly assortative and highly disassortative networks, small-degree nodes should be more strongly favored. When tuning the correlation coefficient of the network, say from assortative to disassortative, the core-periphery structures gradually break up and turn into star-like structures. During this process, α_o first increases and then decreases. This explains the non-monotonic relationship between r and α_o. S2. The comparison of structural properties for disassortative networks with γ = 2.1 and γ = 3.0 Some structural properties for γ = 2.1 and γ = 3.0 are summarized in Table S1. For γ = 3.0 and r = −0.3, the mean degree of the neighbors of the highest-degree node ⟨k⟩_Γ(h) is small and close to the minimum degree of the network k_min. In addition, we measure the degree heterogeneity of the neighboring nodes of the highest-degree node, H_Γ(h). A low value of H_Γ(h) indicates that almost all the neighbors have very small degrees, which further implies a star-like structure around the hubs. On the contrary, for γ = 2.1, ⟨k⟩_Γ(h) is much larger than k_min and H_Γ(h) also takes larger values. That is to say, the star-like structure around hubs is less significant for γ = 2.1. In this case, small-degree nodes do not require too strong a bias to achieve optimal spreading. This explains the anomaly of α_o for γ = 2.1 and r = −0.3.
For smaller values of r = −0.4 and r = −0.5, ⟨k⟩_Γ(h) and H_Γ(h) also become smaller; there is no very obvious star-like structure, and α_o thus remains unchanged. Table S1: Some statistics of network properties for different degree exponents. Structural properties include the mean minimum degree k_min of the networks, the mean maximum degree ⟨k_max⟩, and the mean neighboring degree ⟨k⟩_Γ(h) and neighboring degree heterogeneity H_Γ(h) of the largest-degree nodes. The neighboring degree heterogeneity is defined as H_Γ(h) = ⟨k²⟩_Γ(h)/⟨k⟩²_Γ(h), where ⟨k⟩_Γ(h) and ⟨k²⟩_Γ(h) are the first and second moments of the neighboring degrees, respectively. S3. Structural properties of the empirical networks We wish to investigate how the correlation coefficient r affects the optimal value of the preferential structure exponent. This is achieved by rewiring the original network while preserving the degree sequence. However, due to the abundance of degree-1 nodes in the two empirical networks, the correlation coefficients are confined to a small region. Also, with those degree-1 nodes it is difficult to adjust the correlation coefficients while preserving the connectivity of the networks. To overcome this problem, we remove all 1-shell nodes from the original networks [1]. Briefly, we first remove all the nodes with degree 1, and then re-calculate the degrees of the remaining nodes. This procedure is repeated until the degrees of all nodes are greater than one. To exhibit the structural complexity of the empirical networks, we randomize the empirical networks by a sufficient rewiring process that does not change the original degree distribution or the degree of each node. Some structural properties of the two networks (after removing 1-shell nodes) are presented in Table S2. S4. Time evolutions of information spreading for the LID and GID cases The time evolutions of ρ_G(t) [Fig. S2(a)] and n_I(t) [Fig. S2(b)] show that the spreading speed for the LID case is quicker than that for the GID case. Since the local density information can better reveal the information distribution in a local region, some small-degree nodes with a low informed density of neighbors can be informed early [see Figs. S2(c) and (d)]. As a result, the information can diffuse to the whole network more effectively as compared with the GID case. S5. Cases of α = α_o and β < 0 in empirical networks We verify the effectiveness of the informed-density-based strategies in the Router network and the CA-Hep network. All the real (Figs. S3-S5) and correlated (Figs. S6-S7) empirical networks show results similar to the artificial networks. On one hand, the convergence time can be reduced when β is slightly below zero for the local density strategy. Nevertheless, too small values of β will instead increase the convergence time. In other words, there is an optimal value β_o at which the information spreading can be effectively enhanced. On the other hand, introducing the local density information not only reduces the convergence time more significantly, but also yields a wider region of effective β as compared with the global case. Thus, the local density based contact strategy performs better in improving the speed of information diffusion. S6. The LID based PCS with general parameter combinations (α, β) We use heat maps to reveal the dependence of the convergence time on different parameter combinations (α, β) in uncorrelated networks (see Fig. S8) and assortative networks (see Fig. S9). Two source-selection strategies are compared: randomly chosen source nodes in Figs.
S8(a) and S9(a), and in Figs. S8(b) and S9(b) the largest-degree nodes are selected greedily as sources. We found that, in different network structures, different parameters (α, β) and source nodes do not qualitatively affect the results shown in the main text, i.e., the LID based PCS can effectively accelerate the spreading. It is noted that a pair of (α, β) values optimizes the spreading simultaneously, which is marked with red solid circles in Figs. S8 and S9. For the case of the largest-degree sources in assortative networks [see Fig. S9(b)], the position of the white line in the parameter space is under that of the random sources case [see Fig. S9(a)]. This indicates that small-degree nodes should be favored more strongly when the information starts from the largest-degree nodes. Interestingly, the red solid circle of the optimal parameter combination is at α > 0 in Fig. S9(a). That is to say, for the case of randomly chosen source nodes in assortative networks, large-degree nodes in the network core should be preferentially contacted in the early stage, and neighbors with low informed density in the periphery should be preferentially contacted afterwards. For the largest-degree case in Fig. S9(b), large-degree nodes do not need to be preferentially contacted in the early stage because the sources start in the core. How different network structures influence the optimal parameter combination deserves further investigation in more synthetic and real-world networks. For each realization, a 5W of nodes are selected as seeds. The white lines correspond to the optimal β with fixed α, and the red solid circles represent the points of the minimum convergence time in parameter space (α, β). We set the other parameters as N = 10^4, γ = 3.0, ⟨k⟩ = 8, λ = 0.1.
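The preferential contact step underlying these simulations (an informed node choosing one neighbor per time step, with negative α and β biasing the choice toward small-degree and poorly informed neighbors) can be sketched as follows. The power-law weight w_j ∝ k_j^α (ρ_j + ε)^β, the example graph, and the parameter values are illustrative assumptions consistent with the text, not the authors' exact implementation.

```python
import random
import networkx as nx

def choose_contact(graph, informed_node, informed, alpha=-1.0, beta=-1.0):
    """Pick one neighbor of an informed node as the contact target.
    Weights are assumed to follow w_j ~ k_j**alpha * (rho_j + eps)**beta,
    where rho_j is neighbor j's local informed density (informed fraction of
    j's own neighborhood); alpha, beta < 0 favor small-degree and poorly
    informed neighbors, respectively."""
    eps = 1e-6  # keeps weights positive when a neighborhood is fully uninformed
    neighbors = list(graph.neighbors(informed_node))
    weights = []
    for j in neighbors:
        k_j = graph.degree(j)
        nbrs_j = list(graph.neighbors(j))
        rho_j = sum(informed[u] for u in nbrs_j) / len(nbrs_j)
        weights.append((k_j ** alpha) * ((rho_j + eps) ** beta))
    return random.choices(neighbors, weights=weights, k=1)[0]

# Minimal usage on an illustrative scale-free network:
G = nx.barabasi_albert_graph(1000, 4, seed=1)
informed = {v: False for v in G}
source = max(G.degree, key=lambda kv: kv[1])[0]   # largest-degree seed
informed[source] = True
target = choose_contact(G, source, informed, alpha=-1.0, beta=-1.0)
```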
1,944.6
2016-06-17T00:00:00.000
[ "Computer Science" ]
Correlation between antibiotic consumption and the incidence of healthcare facility-onset Clostridioides difficile infection: a retrospective chart review and analysis Healthcare facility-onset Clostridioides difficile infection is the leading cause of antibiotic-associated diarrhea and is associated with morbidity and mortality. The use of antibiotics is an important risk factor for healthcare facility-onset C. difficile infection. We evaluated the correlation between the incidence of healthcare facility-onset C. difficile infection and antibiotic consumption, according to antibiotic class. Patients with healthcare facility-onset C. difficile infection from January 2017 to December 2018 at Konkuk University Medical Center (a tertiary medical center) were included. We evaluated changes in the incidence of healthcare facility-onset C. difficile infection and antibiotic consumption. The correlation between the incidence of healthcare facility-onset C. difficile infection and antibiotic consumption was evaluated in two ways: without a time interval and with 1-month interval matching. A total of 446 episodes of healthcare facility-onset C. difficile infection occurred during the study period. The incidence of healthcare facility-onset C. difficile infection was 9.3 episodes per 10,000 patient-days and increased significantly. We observed an increase in the consumption of β-lactam/β-lactamase inhibitors and a decrease in the consumption of other classes of antibiotics, with a significant decrease in the consumption of fluoroquinolones, glycopeptides, and clindamycin (P = 0.01, P < 0.001, and P = 0.001, respectively). The consumption of β-lactam/β-lactamase inhibitors was independently correlated with the incidence of healthcare facility-onset C. difficile infection in the analysis without a time interval. When the analysis was conducted with 1-month interval matching, glycopeptide consumption was independently associated with the incidence of healthcare facility-onset C. difficile infection. Despite the reduction in fluoroquinolone and clindamycin consumption, the incidence of healthcare facility-onset C. difficile infection increased during the study period and was correlated with increased consumption of β-lactam/β-lactamase inhibitors. Reduced consumption of specific antibiotics may be insufficient to reduce the incidence of healthcare facility-onset C. difficile infection. Severe C. difficile infection (CDI) can progress to complications such as shock and death [2]. A worldwide increase in the incidence of CDI has been reported and is associated with the wide use of broad-spectrum antibiotics, the emergence of hypervirulent strains, and the use of more sensitive diagnostic tools, such as nucleic acid amplification tests (NAATs) [3][4][5][6][7]. There have been many attempts to reduce the incidence of healthcare facility-onset (HO)-CDI, including antibiotic stewardship programs and infection control measures [8][9][10][11][12][13][14][15]. Recently, the United States Centers for Disease Control and Prevention reported a decrease in the incidence of CDI, predominantly in HO-CDI [16]. The consumption of antibiotics, such as ampicillin, cephalosporins, clindamycin, and fluoroquinolones, is an important risk factor for CDI and is associated with the incidence of HO-CDI [1,6,17,18]. Furthermore, reductions in antibiotic consumption due to antibiotic stewardship programs have resulted in decreases in the incidence of HO-CDI [8-10, 12, 14]. These studies were conducted in Western settings, such as the United States and Europe.
However, because of differences in the major strains and their antibiotic susceptibility, the effect of antibiotic stewardship programs on the incidence of HO-CDI may differ between nations [19]. In this study, we evaluated changes in the incidence of HO-CDI and the consumption of commonly used antibiotics, in terms of defined daily dose (DDD) and days of therapy (DOT). The correlation between the incidence of HO-CDI and antibiotic consumption was evaluated in two ways, according to the class of antibiotics: without a time interval and with 1-month interval matching. Study population and design We retrospectively reviewed the medical records of patients with HO-CDI at Konkuk University Medical Center, an 800-bed tertiary hospital, from January 2017 to December 2018. The incidence of HO-CDI was evaluated monthly. It was measured in CDI episodes per 10,000 patient-days, and its trend was evaluated over the study period. Antibiotics, including intravenous and oral antibiotics, commonly used in hospitalized patients were classified as β-lactam/β-lactamase inhibitors (BLBLIs), third-generation cephalosporins, fourth-generation cephalosporins, fluoroquinolones, carbapenems, tigecycline, and clindamycin. The following antibiotics were included in each class: BLBLIs: ampicillin/sulbactam, amoxicillin/clavulanate, and piperacillin/tazobactam; third-generation cephalosporins: cefotaxime, ceftriaxone, ceftazidime, cefpodoxime, cefixime, cefditoren, and cefdinir; fourth-generation cephalosporins: cefepime; fluoroquinolones: ciprofloxacin, levofloxacin, and moxifloxacin; and carbapenems: ertapenem, meropenem, and imipenem. Antibiotic consumption during the study period was evaluated monthly and measured in DDD per 1000 patient-days and DOT per 1000 patient-days. The incidence of HO-CDI and the consumption of antibiotics, both measured monthly, were paired without a time interval and with a 1-month interval (e.g., antibiotic consumption in January 2017 was matched with the incidence of HO-CDI in February 2017). The correlation between the incidence of HO-CDI and the consumption of each class of antibiotics was evaluated in these two ways to determine the immediate and delayed effects of antibiotic consumption on the incidence of HO-CDI. The study was approved by the Institutional Review Board of Konkuk University Medical Center. The requirement for written informed consent was waived owing to the retrospective nature of the study. Diagnosis of HO-CDI We diagnosed HO-CDI according to the United States Infectious Diseases Society guidelines [13] when patients had associated symptoms (unformed stools ≥ three times/day) and a positive test for C. difficile (a two-step glutamate dehydrogenase assay and a NAAT, or a NAAT alone) after 3 days of hospitalization. Patient stool samples were tested for the presence of the tcdB gene (which encodes toxin B) using the Xpert® C. difficile assay (Cepheid; Sunnyvale, CA, USA) according to the manufacturer's instructions [20]. Antibiotic stewardship program The antibiotic stewardship program began in our hospital in March 2005. Initially, the program covered broad-spectrum antibiotics, such as fourth-generation cephalosporins, piperacillin/tazobactam, carbapenems, tigecycline, glycopeptides, linezolid, and aminoglycosides. Third-generation cephalosporins, fluoroquinolones, ampicillin/sulbactam, amoxicillin/clavulanate, and metronidazole were additionally included beginning in 2007.
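A minimal sketch of how the monthly outcome and exposure measures described above could be computed is given below, using synthetic numbers rather than the hospital data; the column names and the assumed DDD value are illustrative only.

```python
import pandas as pd

# Synthetic monthly counts for illustration only (not the study data).
df = pd.DataFrame({
    "month": pd.period_range("2017-01", "2018-12", freq="M"),
    "cdi_episodes": [4, 6, 5, 7, 6, 8, 7, 9, 8, 7, 9, 10] * 2,
    "patient_days": [8200] * 24,
    "blbli_grams": [2900.0] * 24,          # total grams dispensed per month
    "blbli_days_of_therapy": [1400] * 24,  # days on which the drug was given
})

DDD_GRAMS = 14.0  # assumed defined daily dose in grams, for illustration only

# Incidence of HO-CDI per 10,000 patient-days.
df["ho_cdi_incidence"] = df["cdi_episodes"] / df["patient_days"] * 10_000

# Antibiotic consumption per 1,000 patient-days, in DDD and DOT.
df["blbli_ddd"] = df["blbli_grams"] / DDD_GRAMS / df["patient_days"] * 1_000
df["blbli_dot"] = df["blbli_days_of_therapy"] / df["patient_days"] * 1_000

print(df[["month", "ho_cdi_incidence", "blbli_ddd", "blbli_dot"]].head())
```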
In 2009, the prophylactic use of third-generation cephalosporins and aminoglycosides was restricted. Professors in the Department of Infectious Diseases reviewed the antibiotics ordered by physicians and determined their daily use; without agreement from the reviewer, physicians could not order antibiotics for more than one day. Antibiotic consumption was monitored by hospital pharmacists, and data on antibiotic consumption were obtained from the centralized hospital database. Statistical analyses Trends in the incidence of HO-CDI were evaluated using Poisson regression analysis. Changes in antibiotic consumption during the study period were evaluated using linear regression analysis. Correlations between the incidence of HO-CDI and antibiotic consumption were evaluated using Spearman's correlation analysis and linear regression analysis. Variables with P < 0.1 in the linear regression analysis were included for further evaluation. All statistical analyses were conducted using SPSS version 21.0 (SPSS Inc., Chicago, IL, USA). P < 0.05 was considered statistically significant. Incidence of HO-CDI During the study period, a total of 586 CDI episodes occurred. We excluded 140 episodes because they occurred within 3 days after admission, meaning they were community acquired. The remaining 446 CDI episodes were included in this study. The mean incidence of HO-CDI was 9.3 episodes per 10,000 patient-days (95% confidence interval: 8.3-10.3 episodes per 10,000 patient-days) during the study period. The incidence of HO-CDI increased significantly, by a factor of 1.8, from January 2017 to December 2018 (P = 0.001; Fig. 1). When the outlying data points (January 2017 and December 2018) were excluded, the incidence of HO-CDI showed an increasing trend without statistical significance (P = 0.054). Antibiotic consumption The consumption of all classes of intravenous antibiotics evaluated in this study, except BLBLIs, showed a decreasing trend, with significant decreases for fluoroquinolones, glycopeptides, and clindamycin. Fluoroquinolone use decreased significantly in terms of DOT (coefficient, −0.39; P = 0.01), glycopeptide use decreased significantly in terms of DDD (coefficient, −0.60; P < 0.001) and DOT (coefficient, −0.61; P < 0.001), and clindamycin use decreased significantly in terms of DDD (coefficient, −0.33; P = 0.003) and DOT (coefficient, −0.39; P = 0.001). Conversely, the consumption of BLBLIs increased significantly in terms of DDD (from 145 to 187 per 1000 patient-days: coefficient, 1.30; P < 0.001) (Fig. 2) and DOT (from 168 to 214 per 1000 patient-days: coefficient, 1.58; P < 0.001) (Fig. 3). The total consumption of antibiotics showed a decreasing trend in terms of both DDD and DOT, although this was not statistically significant. Correlation between the incidence of HO-CDI and antibiotic consumption The consumption of most antibiotics, except BLBLIs and tigecycline, showed a negative correlation with the incidence of HO-CDI. The incidence of HO-CDI was significantly associated with the consumption of BLBLIs and glycopeptides in terms of DDD, and with the consumption of BLBLIs, third-generation cephalosporins, fourth-generation cephalosporins, and glycopeptides in terms of DOT (Table 1). Total antibiotic consumption was not significantly associated with the incidence of HO-CDI in terms of either DDD or DOT. When we applied 1-month interval matching, the consumption of glycopeptides and clindamycin showed a significant association with the incidence of HO-CDI in terms of DDD and DOT (Table 2). (Fig. 1: The incidence of HO-CDI during the study period, measured monthly; from January 2017 to December 2018 the incidence of HO-CDI increased significantly (P = 0.001). HO-CDI, healthcare facility-onset C. difficile infection.)
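The correlation analysis with and without 1-month interval matching can be sketched as follows on synthetic monthly series; this illustrates the pairing scheme only and is not a reproduction of the SPSS analysis or of the study data.

```python
import numpy as np
from scipy.stats import spearmanr, linregress

rng = np.random.default_rng(42)

months = 24
# Synthetic monthly series standing in for antibiotic consumption and HO-CDI incidence.
blbli_dot = 170 + 2.0 * np.arange(months) + rng.normal(0, 5, months)
incidence = 7 + 0.08 * blbli_dot + rng.normal(0, 1, months)

# (a) No time interval: consumption and incidence paired within the same month.
rho0, p0 = spearmanr(blbli_dot, incidence)

# (b) 1-month interval matching: consumption in month t paired with incidence in month t+1.
rho1, p1 = spearmanr(blbli_dot[:-1], incidence[1:])

# Linear trend of consumption over the study period (as in the trend analysis).
trend = linregress(np.arange(months), blbli_dot)

print(f"same-month Spearman rho = {rho0:.2f} (P = {p0:.3f})")
print(f"1-month lag Spearman rho = {rho1:.2f} (P = {p1:.3f})")
print(f"consumption trend coefficient = {trend.slope:.2f} per month (P = {trend.pvalue:.3g})")
```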
When the analysis was conducted for each antibiotic, the consumption of ampicillin/sulbactam showed a significant association with the incidence of HO-CDI (P = 0. ...; Additional file 1: Tables S1 and S2). Among the antibiotics that showed a significant association with the incidence of HO-CDI, ampicillin/sulbactam and cefotaxime were the only antibiotics that correlated positively with the incidence of HO-CDI. In univariate analysis, the consumption of BLBLIs and glycopeptides correlated significantly with the incidence of HO-CDI. These variables, and other variables with P < 0.1, were included in the regression model. The multivariate analysis included the consumption of BLBLIs, glycopeptides, and carbapenems for DDD, and the consumption of BLBLIs, third-generation cephalosporins, fluoroquinolones, carbapenems, and glycopeptides for DOT. (Fig. 2: The trend of antibiotic consumption with defined daily doses during the study period. The consumption of β-lactam/β-lactamase inhibitors significantly increased, while the consumption of glycopeptides and clindamycin significantly decreased. * indicates antibiotics whose use increased significantly and † indicates antibiotics whose use decreased significantly. BLBLIs, β-lactam/β-lactamase inhibitors; 3rd cephalosporins, third-generation cephalosporins.) The analysis showed that the consumption of BLBLIs (P = 0.002 [DDD] and P < 0.001 [DOT]) was independently associated with the incidence of HO-CDI (Table 3). The consumption of BLBLIs (P = 0.01 [DOT]) and glycopeptides (P = 0.03 [DDD] and P = 0.045 [DOT]) was also correlated with the incidence of HO-CDI in the sensitivity analysis excluding the outlying data points (January 2017 and December 2018) (Additional file 1: Table S3). When the regression analysis was conducted with 1-month interval matching, the consumption of glycopeptides (P = 0.04 [DDD] and P = 0.02 [DOT]) was independently associated with the incidence of HO-CDI (Table 4). Discussion In this study, we evaluated changes in the incidence of HO-CDI and the consumption of antibiotics, and examined the association between the incidence of HO-CDI and the consumption of antibiotics. We found that the incidence of HO-CDI increased significantly during the study period, despite no increase in the total consumption of antibiotics in terms of DDD and DOT. The consumption of BLBLIs, third-generation cephalosporins, fourth-generation cephalosporins, and glycopeptides correlated significantly with the incidence of HO-CDI. (Fig. 3: The trend of antibiotic consumption with days of therapy during the study period. The consumption of β-lactam/β-lactamase inhibitors significantly increased, while the consumption of fluoroquinolones, clindamycin, and glycopeptides significantly decreased. * indicates antibiotics whose use increased significantly and † indicates antibiotics whose use decreased significantly. BLBLIs, β-lactam/β-lactamase inhibitors; 3rd cephalosporins, third-generation cephalosporins.) Among these antibiotics, only the consumption of BLBLIs was significantly correlated with the incidence of HO-CDI in the regression analysis. The mean incidence of CDI was 9.3 episodes per 10,000 patient-days in our study. Previous studies have reported various incidences of CDI, ranging from 2.8 to 9.3 per 10,000 patient-days, depending on the diagnostic instrument used [6].
In Korea, the mean incidence of HO-CDI was reported to be 7.16 episodes per 10,000 patient-days in one study [21]. Our results were in the upper range of those reported in previous studies. The incidence of HO-CDI increased during our study period despite no change in the total consumption of intravenous antibiotics. During the study period, the total number of laboratory tests to detect CDI increased from 3287 in 2017 to 4126 in 2018. The wide use of highly sensitive tests, such as the NAAT and the glutamate dehydrogenase assay, may enable more precise detection of HO-CDI cases. The pattern of antibiotic consumption changed during the study period. The major findings were a decrease in the consumption of glycopeptides, fluoroquinolones, and clindamycin, and an increase in the consumption of BLBLIs. These findings suggest that the antibiotic stewardship program worked appropriately. To reduce multidrug-resistant pathogens, such as methicillin-resistant Staphylococcus aureus, extended-spectrum β-lactamase-producing gram-negative bacilli, carbapenem-resistant gram-negative bacilli, and vancomycin-resistant enterococci, we restricted the inappropriate use of broad-spectrum antibiotics, such as carbapenems, fluoroquinolones, and glycopeptides. This intervention may have produced a "balloon effect", in which physicians used BLBLIs more frequently instead. Therefore, we could not reduce the total consumption of antibiotics significantly during the study period. (Table 1: Correlation analysis between antibiotic consumption and the incidence of HO-CDI without a time interval. Table 2: Correlation analysis between antibiotic consumption and the incidence of HO-CDI with 1-month interval matching.) The consumption of BLBLIs and the incidence of HO-CDI showed a significant relationship. Similarly, Vernaz et al. [22] reported that amoxicillin/clavulanate consumption was correlated with the incidence of HO-CDI. However, some studies reported that the use of BLBLIs was associated with a lower incidence of HO-CDI than the use of high-risk antibiotics, such as third-generation cephalosporins and fluoroquinolones [23][24][25][26][27][28]. In our study, the risk and relative risk of HO-CDI associated with specific classes of antibiotics were not evaluated. Therefore, our results do not suggest that BLBLIs were associated with a higher risk of HO-CDI than other classes of antibiotics. Restricting the use of specific classes of antibiotics, such as third-generation cephalosporins and fluoroquinolones, may have a limited effect on the incidence of HO-CDI. The consumption of ampicillin/sulbactam and amoxicillin/clavulanate, but not piperacillin/tazobactam, was significantly correlated with the incidence of HO-CDI. In previous studies, the use of piperacillin/tazobactam was associated with a lower risk of C. difficile colonization and CDI compared with the consumption of third-generation cephalosporins [23][24][25][26][27]. In studies that reported on the antibiotic susceptibility of C. difficile strains in Korea, almost all isolated C. difficile strains were susceptible to piperacillin/tazobactam, while approximately half of the C. difficile isolates were resistant to ampicillin [29,30]. The antibiotic susceptibility of C. difficile may influence the occurrence of HO-CDI. Further studies are needed to establish the impact of antibiotic susceptibility on the incidence of HO-CDI. This study had some limitations. First, we evaluated only the consumption of antibiotics and the incidence of HO-CDI.
Because of the ecological nature of this study, individual risk factors (such as age, underlying diseases, and disease severity) and the impact of changes in infection control measures could not be evaluated; only a population-level correlation between the consumption of antibiotics and the incidence of HO-CDI could be assessed. Second, the study period was relatively short. In our hospital, the NAAT and the glutamate dehydrogenase assay were introduced in September 2016; before then, toxin assays using enzyme immunoassays and toxigenic cultures for C. difficile were used to detect CDI. To minimize the impact of the changes in diagnostic tests, we used data collected after September 2016. Lastly, we did not evaluate the strains and antibiotic susceptibility of the C. difficile isolates detected in our study; therefore, evaluation of the effect of strain type and antibiotic susceptibility on the incidence of HO-CDI was limited. Despite these limitations, our study showed a significant correlation between the consumption of BLBLIs and the incidence of HO-CDI in terms of both DDD and DOT. Conclusions During the study period, the incidence of HO-CDI and the consumption of BLBLIs increased significantly, while the total antibiotic consumption did not. The consumption of BLBLIs was associated with the incidence of HO-CDI. Reducing the consumption of specific antibiotics alone may be insufficient to reduce the incidence of HO-CDI.
3,669.4
2021-08-06T00:00:00.000
[ "Medicine", "Biology" ]
THE INFLUENCE OF INNOVATIVENESS ON THE WORK PERFORMANCE OF PHYSICS TEACHERS IN THE STATE SENIOR HIGH SCHOOLS AT BENGKULU PROVINCE The objective of this research was to analyze the influence of the professional competence, work motivation, and innovativeness of physics teachers on the work performance of physics teachers in the state senior high schools at Bengkulu province. The research was conducted by surveying physics teachers in the state high schools at Bengkulu province. A sample of 90 physics teachers was selected using a simple random sampling technique. Data were analyzed using the path analysis method, with Microsoft Excel and SPSS 17 as the computing tools. The results show that professional competence, work motivation, and innovativeness have direct, positive, and significant influences on the work performance of physics teachers. Moreover, the professional competence and work motivation of physics teachers have indirect, positive, and significant influences on their work performance through innovativeness. INTRODUCTION Background Issues Education is entrusted with preparing people to face the changes that occur in society. Education is not static but dynamic, so it needs continuous improvement. A problem faced today is the low quality of education, which is attributed to the low quality of physics teachers. The results of the 2012 national Initial Competency Test (UKA) for high school physics showed an absorption (mastery) score of 47.52. This indicates that high school physics teachers' understanding of the curriculum and mastery of the subject matter are still low, and that teachers' knowledge of teaching methods is also inadequate (National Education, 2012). Judging from the learning applied by teachers in the field, the teaching and learning process tends to be conducted classically, relying on the textbook and on teaching methods that emphasize memorization rather than understanding of concepts, with the teacher as the center of learning. The development of students' process skills is infrequent, and teachers are less capable of teaching practices that foster process skills. As a result, students' learning outcomes, particularly in physics, are not maximized. Learning physics has not been a favorite for students; it is even burdensome and intimidating. This is worrying given our country's need for young people who understand physics and choose the field of physics in order to advance technology, which still lags far behind other Asian countries such as Japan, Korea, and Taiwan. Physics is the science underlying technology. Students need to develop a sense of joy in learning physics, so the ability of the teacher to manage engaging physics lessons and to apply the principles of science learning in teaching physics is desirable, and the performance of the physics teacher therefore needs to be investigated. The performance of the physics teacher is a decisive factor for the quality of physics education and learning, with implications for the quality of educational outcomes after students finish school. The quality of physics teacher performance largely determines the quality of physics education and learning, because the teacher has the most direct contact with students in the educational process at the school institution; continuously improving the quality of learning should therefore be part of a professional attitude as an educator.
Physics teachers need to develop innovations to improve the quality of physics education and learning; thus, creativeness and innovativeness are crucial (Uhar Suharsaputra, 2011). The speed of accepting innovation, called innovativeness, is the degree to which an individual or a particular adopting unit accepts a new idea or innovation relatively early compared with other members (Rogers, 1995: 264-266). Innovation in the form of methods, tools, or new ways of solving problems encountered in educational activities may result in improvement and raise the quality of education. Keith Davis (1994: 484) argues that the factors that can influence performance include (1) motivation, formed from an employee's attitude in facing the work situation, and (2) ability, consisting of potential ability (IQ) and actual ability (knowledge + skill). According to Robert Kreitner and Angelo Kinicki (2001: 205), motivation is a psychological process that generates and directs behavior toward the achievement of goals, or goal-directed behavior. Gibson (2000: 87) stated that work motivation is closely linked to behavior and performance and is directed toward achieving goals. The causal relationship between work motivation and performance is presented by Mangkunegara (2005: 67), who states that a factor influencing performance is work motivation, which is formed from an employee's attitude in facing the work situation. Work motivation is a condition that drives employees to achieve organizational goals; it is a mental attitude that encourages employees to strive for maximum performance. Employees will be able to achieve maximum performance if they are highly motivated. Performance has a causal relationship with competence: performance comprises skills, behaviors, attitudes, and actions, while competence describes the characteristics of knowledge, skills, attitudes, and experience required to do the job (Wirawan, 2009: 9-10). Johnson (1980: 12) describes the components of professional competence as containing competencies related to professional education, such as mastery of the theory, principles, strategies, and techniques of education and teaching. Based on these views, this research seeks to reveal whether physics teachers' professional competence, work motivation, and innovativeness influence the performance of physics teachers in the state high schools in Bengkulu Province. RESEARCH METHODS The method used in this research is a survey with a quantitative research approach; this type of survey research focuses on expressing causal relationships between variables. The analytical technique used to examine the causal relationships is path analysis. The research population is all physics teachers (who have worked at least three years and hold at least an S1 degree in physics education) in the state high schools in Bengkulu Province. The accessible population numbered 177 people, of whom 122 met the required characteristics. The sample size of 90 people was determined using the equation developed by Slovin (Sevilla, 1993: 161-162), and the sample was drawn by a simple random sampling technique. The reliability of the physics teacher performance, work motivation, and innovativeness instruments was calculated using the Cronbach alpha formula. The reliability coefficient of the physics teacher performance instrument was r_count = 0.920, and that of the work motivation instrument was r_count = 0.903. It can be concluded that the instruments used are reliable.
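The two calculations mentioned above, the Slovin sample-size formula and the Cronbach alpha reliability coefficient, can be sketched as follows; the assumed margin of error (about 5%) and the synthetic questionnaire scores are illustrative, not the original instrument data.

```python
import numpy as np

def slovin_sample_size(N, e):
    """Slovin's formula: n = N / (1 + N * e**2)."""
    return N / (1.0 + N * e ** 2)

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_vars.sum() / total_var)

# Sample size for an accessible population of 122 teachers, assuming ~5% margin of error.
print("Slovin n ≈", round(slovin_sample_size(122, 0.054)))

# Reliability of a synthetic 30-item performance questionnaire answered by 90 teachers.
rng = np.random.default_rng(7)
latent = rng.normal(0, 1, (90, 1))
scores = 3 + latent + rng.normal(0, 0.6, (90, 30))  # correlated items give high alpha
print("Cronbach alpha ≈", round(cronbach_alpha(scores), 3))
```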
The empirical relational model shows that there are five significant path coefficients at α = 0.05; each has Sig < α = 0.05. Thus, the path coefficients from the professional competence of physics teachers (X1), the work motivation of physics teachers (X2), and the innovativeness of physics teachers (X3) to the performance of physics teachers (X4) are significant, which means that the theoretical causal model is supported by the empirical model. It can therefore be concluded that professional competence, work motivation, and innovativeness have a direct positive influence on the performance of physics teachers. The path analysis model also showed an indirect positive influence of professional competence and work motivation on the performance of physics teachers through the innovativeness of physics teachers. Thus, innovativeness is an intermediary or intervening variable between professional competence and work motivation on the one hand and performance on the other. Discussion The first hypothesis test indicates that professional competence has a direct, positive, and significant influence on the physics teacher's performance, as indicated by the path coefficient ρ41 = 0.244. Thus, the professional competence of physics teachers is an important variable in improving the physics teacher's performance. These results indicate that professional competence has a direct positive influence on the rise and fall of physics teacher performance. This supports the opinion of Keith Davis (1994: 484) and the results of Kingkin's research (2011) that physics teachers will be able to achieve maximum performance if they possess professional competence. The research found that the professional competence of physics teachers falls in the fair-to-low category (<58.27); approximately 65.6% of state high school physics teachers in Bengkulu province are in this range. How can physics teachers in this category teach students well? This is in line with Johnson's (1980: 12) view of the professional component of teacher competence, which states that competence related to professional education includes mastery of the theory, principles, strategies, and techniques of education and teaching. If physics teachers have a low professional component, their mastery of the theory, principles, strategies, and techniques of education and teaching is also low. Naturally, physics teachers in this category have not managed to make physics a favorite subject, because the physics teacher has the most direct contact with students in the physics learning process. To improve the professional competence of physics teachers, the essential indicators in the instrument that measures professional competence should be taken into account. The facts, in line with the research findings, indicate that the professional competence of physics teachers has a direct positive influence on their performance. The professional competence of physics teachers is still low, causing their performance not to be maximized; nevertheless, there is still room for physics teachers to improve their performance through increased professional competence, thereby improving the quality of physics learning processes and outcomes.
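Path analysis of this kind can be sketched with ordinary least squares on standardized variables, where the indirect effect along a route is commonly taken as the product of the path coefficients on that route and the total effect as direct plus indirect. The data below are synthetic and the convention for combining effects is an assumption; the original study used Microsoft Excel and SPSS 17 rather than the code shown.

```python
import numpy as np

def standardize(x):
    return (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)

def path_coefficients(y, X):
    """Standardized path coefficients from an OLS regression of y on X."""
    Xs, ys = standardize(X), standardize(y)
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta

rng = np.random.default_rng(1)
n = 90
x1 = rng.normal(size=n)                                                   # professional competence
x2 = rng.normal(size=n)                                                   # work motivation
x3 = 0.22 * x1 + 0.22 * x2 + rng.normal(scale=0.9, size=n)                # innovativeness
x4 = 0.24 * x1 + 0.21 * x2 + 0.27 * x3 + rng.normal(scale=0.8, size=n)    # performance

# Structural equations of the path model: X3 ~ X1 + X2, and X4 ~ X1 + X2 + X3.
p31, p32 = path_coefficients(x3, np.column_stack([x1, x2]))
p41, p42, p43 = path_coefficients(x4, np.column_stack([x1, x2, x3]))

indirect_x1 = p31 * p43          # X1 -> X3 -> X4
total_x1 = p41 + indirect_x1     # direct plus indirect effect of X1 on X4
print(f"direct p41 = {p41:.3f}, indirect via X3 = {indirect_x1:.3f}, total = {total_x1:.3f}")
```

With real questionnaire data, the same two regressions would yield the ρ3i and ρ4i coefficients reported in the text.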
The second hypothesis test shows that work motivation has a direct, positive, and significant influence on the performance of the physics teacher, as indicated by the path coefficient ρ42 = 0.207. Thus, the work motivation of physics teachers is a crucial variable in improving the physics teacher's performance. This result supports the research of Pranawa (2011) and is in line with the opinion of James M. Higgins in Uhar (2012), which suggests that a person's performance is related to a variety of factors, both internal (inherent in the individual) and external (arising from the work environment). Performance and motivation continuously interact: performance is a manifestation of the behavioral dimension, while work motivation is the internal dimension of a person's behavior. Descriptive analysis of the work motivation data yielded an average score of 129.58. Dividing the average score by the ideal score (135) and multiplying by 100 gives 96, so work motivation can be classified in the good category. However, this work motivation has not yet been harnessed to maximize the performance of physics teachers, particularly in relation to the physics learning process. The third hypothesis test shows that innovativeness has a direct, positive, and significant influence on the performance of the physics teacher, as indicated by the path coefficient ρ43 = 0.271. Thus, innovativeness is the most critical variable in improving the performance of the physics teacher. This result is consistent with the research of Sofyan Iskandar (2012), in which teacher performance was influenced by innovativeness by 20.12%. In line with Uhar Suharsaputra's (2012) opinion, changes in society, whether in inputs or in the community environment as a whole, demand an increase in teachers' ability to perform their duties. Physics teachers need to develop new ways to improve the quality of student learning. The results showed that the path coefficient ρ43 is 0.271. Descriptive analysis of the research data shows that the average innovativeness score of physics teachers in the state high schools in Bengkulu Province is 134.65; moreover, 53.4% of physics teachers have average or above-average innovativeness, in the good category (86). In accordance with the facts in the field, physics teachers with high innovativeness make innovations and continuously adjust their lessons to the changes and dynamics in society, and this influences their performance. The fourth hypothesis test indicates that the professional competence of physics teachers has a direct, positive, and significant influence on physics teacher innovativeness, as indicated by the path coefficient ρ31 = 0.221. Thus, the professional competence of physics teachers is an essential variable in improving the innovativeness of physics teachers. Performance has a causal relationship with competence (Wirawan, 2009). Thus, the professional competence of physics teachers needs to be developed, not only because it is a determinant of performance but also because it is a determinant of innovativeness, as indicated by the path coefficients ρ41 = 0.244 and ρ31 = 0.221. The path analysis model showed that professional competence has a direct positive influence on the performance of physics teachers and an indirect positive influence through innovativeness; the innovativeness variable serves as an intermediary or intervening variable.
This means that an increase in professional competence, accompanied by increased innovativeness of physics teachers, will have an even greater impact on the performance of physics teachers, as indicated by the total path coefficient of 0.492. The research data indicate that the average professional competence of physics teachers is still low, with mastery <58.27 for approximately 70% of state high school physics teachers in the province of Bengkulu. Since professional competence positively influences physics teacher innovativeness, this low level of professional competence constrains innovativeness, especially in the physics learning process. According to Johnson (1980), the components of teachers' professional competence include understanding the structure, concepts, principles, and methodology of physics, and this understanding is still low. How can such physics teachers improvise and teach innovatively? Creating learning that fosters innovativeness can be done readily by physics teachers who have high professional competence. The professional competence of physics teachers therefore needs to be improved in order to increase their innovativeness and improve their performance. The fifth hypothesis test shows that work motivation has a direct, positive, and significant influence on physics teacher innovativeness, as indicated by the path coefficient ρ32 = 0.219. Thus, the work motivation of physics teachers is an essential variable in improving the innovativeness of physics teachers. This result is consistent with the research of Arientje Dimpundus, in which 13.03% of the variation in innovativeness was explained by work motivation. This research is also in line with the theory proposed by Koontz (1996: 15), which states that work motivation plays an essential role in encouraging teachers to innovate in managing physics learning and in adapting to environmental demands and changes. High work motivation of physics teachers will give rise to and increase creativeness and innovativeness. Based on the theoretical and empirical path analysis models, the research results show that work motivation has a direct, positive, and significant influence on the performance of physics teachers, and also an indirect, positive, and significant influence on performance through innovativeness. Thus, innovativeness is an intermediate or intervening variable between work motivation and the physics teacher's performance. Improving work motivation, accompanied by increasing innovativeness, will significantly influence the performance of physics teachers, as indicated by the total path coefficient of 0.490. In accordance with the facts in the field, physics teachers with high work motivation find it easy to make innovations in learning, which affects their innovativeness and ultimately their performance. CONCLUSIONS AND RECOMMENDATIONS Based on the research: (1) The professional competence of physics teachers has a direct, positive, and significant influence on the performance of physics teachers; high professional competence will improve the performance of physics teachers in the state high schools in Bengkulu Province. (2) Work motivation has a direct, positive, and significant influence on the performance of physics teachers; high work motivation will improve the performance of physics teachers in the state high schools in Bengkulu Province. (3)
Innovativeness has a direct, positive, and significant influence on the performance of physics teachers; high innovativeness will increase the performance of physics teachers in the state high schools in Bengkulu Province. (4) The professional competence of physics teachers has a direct, positive, and significant influence on the innovativeness of physics teachers; high professional competence will increase the innovativeness of physics teachers in carrying out their duties and responsibilities in the high schools in Bengkulu Province. Based on the research results, discussion, and conclusions described above, it is suggested to physics teachers and other stakeholders, such as principals, the local government through the Department of National Education, and other educational researchers, that the professional competence, work motivation, and innovativeness of teachers be improved continuously in order to improve performance, which in turn will affect the quality and process of education and learning.
3,864
2020-08-22T00:00:00.000
[ "Education", "Physics" ]
Interactive comment on "A correlation study regarding the AE index and ACE solar wind data for Alfvénic intervals using wavelet decomposition and reconstruction" by Fernando L. Guarnieri et al. Abstract. The purpose of this study is to present a wavelet interactive filtering and reconstruction technique and apply this to the solar wind magnetic field components detected at the L1 Lagrange point ∼ 0.01 AU upstream of the Earth. These filtered interplanetary magnetic field (IMF) data are fed into a model to calculate a time series which we call AE*. This model was adjusted assuming that magnetic reconnection associated with southward-directed IMF Bz is the main mechanism transferring energy into the magnetosphere. The calculated AE* was compared to the observed AE (auroral electrojet) index using cross-correlation analysis. The results show correlations as high as 0.90. Empirical removal of the high-frequency, short-wavelength Alfvénic component in the IMF by wavelet decomposition is shown to dramatically improve the correlation between AE* and the observed AE index. It is envisioned that this AE* can be used as the main input for a model to forecast relativistic electrons in the Earth's outer radiation belts, which are delayed by ∼ 1 to 2 days from intense AE events. Introduction Around solar maximum the major causes of geomagnetic storms and space weather disturbances on Earth are interplanetary coronal mass ejections (ICMEs), especially magnetic clouds (MCs), and their sheath or shocked fields (Echer et al., 2011). However, during the descending and minimum solar cycle phases, high-speed solar wind streams (HSSs) become the dominant interplanetary-heliospheric structure causing geomagnetic activity on Earth (Sheeley et al., 1976; Tsurutani and Gonzalez, 1987; Tsurutani et al., 1995; Guarnieri, 2006; Kozyra et al., 2006; Turner et al., 2006; Echer et al., 2011; Hajra et al., 2013). The features within the HSS causing geomagnetic activity are large-amplitude Alfvén waves (Belcher and Davis, 1971; Tsurutani et al., 1982, 1990, 2011a; Echer et al., 2011; Hajra et al., 2013), the southward component of which leads to intermittent magnetic reconnection between the wave magnetic fields and the magnetopause fields, allowing the transfer of energy and momentum from the solar wind to the magnetosphere (Dungey, 1961). One of the most widely used indices to estimate the energy input to the magnetosphere and ionosphere is the geomagnetic auroral electrojet (AE) index. The index is the maximum deviation of the horizontal components of geomagnetic field variations from a set of globally distributed ground-based magnetometers located in and near the auroral zone in the Northern Hemisphere. The AE index represents the overall disturbances in both the eastward and westward ionospheric electrojets located at ∼ 100 km altitude (Sugiura and Davis, 1966). Thus, substorms and injection events causing magnetotail plasma-sheet injections into the midnight sector of the magnetosphere, with concomitant particle precipitation into the auroral zone ionosphere, may intensify both electrojets, leading to AE index increases. The Alfvén waves causing the geomagnetic (AE index) activity have short wavelengths, ranging from ∼ 2 × 10^5 to 2 × 10^6 km (∼ 10 min to ∼ 2 h in duration when convected in a 400 km s^−1 solar wind), much smaller than ICME and MC scales.
The direct use of the IMF B_z at the L1 libration point during such structures results in poor correlation against the AE index on Earth (Tsurutani et al., 1990, 1995; Guarnieri, 2005). It is thought that part of the problem is that Alfvén waves are propagating in the solar wind, and what is detected at the L1 point is not what hits the Earth's magnetosphere. Another possibility is that the high-frequency wave power does not contribute to the solar wind energy transfer process. A second incentive for better understanding the relation between interplanetary structures and the AE indices is that intense AE activity events called HILDCAAs (Tsurutani and Gonzalez, 1987) have been shown to be indirectly related to the production of relativistic electrons in the Earth's magnetosphere (Hajra et al., 2014, 2015a). In particular, Hajra et al. (2015a) showed that HILDCAA onsets precede relativistic ∼ 0.6 MeV electron acceleration by ∼ 1 day and ∼ 4.0 MeV electron acceleration by ∼ 2 days. These HILDCAA events are well correlated with the presence of Alfvén waves within HSSs (Tsurutani and Gonzalez, 1987; Gonzalez et al., 1994), and the intensity of the geomagnetic event depends on the amplitude of the negative B_z component of the magnetic field of these waves (Guarnieri, 2005, 2006). The overall picture of relativistic electron acceleration in the magnetosphere is the following: reconnection between the southward component of the Alfvén waves and the Earth's dayside magnetopause field (Dungey, 1961; Gonzalez and Mozer, 1974; Tsurutani et al., 1995) leads to substorms and convection events and injections of energetic electrons into the nightside region of the outer magnetosphere (DeForest and McIlwain, 1971; Horne and Thorne, 1998). The energetic electron component creates electromagnetic chorus waves through the loss cone instability (Tsurutani and Smith, 1977; Meredith et al., 2001; Tsurutani et al., 2013). The chorus then accelerates the high-energy electrons to relativistic energies by resonant interactions (Inan et al., 1978; Horne and Thorne, 1998; Thorne et al., 2005, 2013; Summers et al., 2007; Reeves et al., 2013; Boyd et al., 2014; Hajra et al., 2015a). The acceleration of relativistic electrons within the Earth's outer radiation belt (Paulikas and Blake, 1979; Baker et al., 1986) is an important physical phenomenon in space weather. These electrons are also known as "killer electrons" for their hazardous effects on orbiting spacecraft (Wrenn, 1995; Horne, 2003). Recent studies by Hajra et al. (2013, 2014, 2015a) indicate the probability that magnetospheric relativistic electron acceleration may be predicted more than 1 day in advance using ground-based observations of auroral activities during HSSs. This paper describes a correlation analysis using a technique of wavelet decomposition and selective reconstruction applied to both the IMF solar wind data and the AE index. The results of this technique may allow us, in the future, to develop a more complete model to forecast the occurrence of relativistic electrons during periods with Alfvénic fluctuations in the interplanetary solar wind. Methodology Interplanetary magnetic field and solar wind parameters obtained from the ACE (Advanced Composition Explorer) spacecraft (Stone et al., 1998) were used in this work. This data set has ∼ 1 min resolution, and we have used the level 2 processed data. The IMF vector data used in this work are in the GSM coordinate system.
The ACE spacecraft is located at the L1 libration point, ∼ 1.5 million km from the Earth, orbiting a region around the Sun-Earth line. The data are available online at http://www.srl.caltech.edu/ace. The geomagnetic activity was observed through the AE index (Sugiura and Davis, 1966) and the Dst index (Rostoker, 1972). These indices are available through the World Data Center for Geomagnetism, Kyoto (http://wdc.kugi.kyoto-u.ac.jp). The AE and Dst indices have 1 min and 1 h time resolutions, respectively. The AE index was used to identify the periods of enhanced auroral electrojet activity, while the Dst index was used only to ensure that the analysed intervals did not occur during the main phases of magnetic storms. For this study, we used a set of 14 geoeffective interplanetary HSS events, previously identified by Guarnieri (2005) as the longest-lasting elevated AE index events spanning 1998-2001. Table 1 lists these events with the year, event start date and time, event end date and time, and event duration (in minutes). (Table 1: Long-lasting AE events that occurred between 1998 and 2001; Guarnieri, 2005.) The Alfvénicity of the solar wind for these intervals was verified using the classical technique proposed by Belcher and Davis (1971) (i.e. these elevated AE intervals are associated with solar wind of high-speed stream origin). A filtering process adapted from the Meyer wavelet decomposition and reconstruction was employed in this technique. This procedure decomposes the signal into bands with periods in multiples of 2^n of the data cadence (1 min), with n = 1, 2, 3, ... Each decomposed band is called a "detail" and is denoted Dn, where n is the decomposition level. Table 2 shows each decomposition level Dn, the level n, the associated period (2^n), and the period range in minutes. More details about this technique can be found in Meyer (1993) and Kumar and Foufoula-Georgiou (1997). The last level indicated in Table 2 (A10) contains all the periodicities longer than 1024 min (∼ 17.1 h), and it can be considered the residual of the decomposition process; this level also contains the average value of the data series. We chose this level to terminate the decomposition since details of higher orders are so smoothed that they are not useful for the AE evaluation. If two or more details are added to this residual time series, the result is an "approximation", which can be considered as a band-pass filter. Taking the A10 level and adding D10 to it results in approximation A9, and so on (A_(n-1) = A_n + D_n). In this way, the A0 level is exactly the same as the original signal, since it contains all the decomposition levels. The reconstruction is an interactive process that can be started and stopped at any decomposition level. A computer routine was developed to adjust the parameters of the empirical equations for the calculated AE. After the adjustments, the model was fed with the filtered IMF and solar wind data time series. The calculated AE for each event was compared to the observed geomagnetic index to check the correlation between them. Results and discussion Preliminary tests were performed using cross-correlation analyses between the AE index and several interplanetary parameters, such as |B|, B_x, B_y, B_z, V_sw, and N_p, as was done previously by Guarnieri (2005). The B_z magnetic field component was found to be the parameter most related to the auroral activity.
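A sketch of this decomposition-and-selective-reconstruction step using PyWavelets and the discrete Meyer ("dmey") wavelet is given below; the synthetic Bz-like signal, the helper function name, and the record length are assumptions, and the boundary handling may differ from whatever implementation the authors used.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.arange(45 * 1440)                      # ~45 days of 1-min samples (long enough for 10 levels)
# Synthetic Bz-like signal: a slow southward excursion plus Alfvénic high-frequency noise.
bz = -4 * np.sin(2 * np.pi * t / 4000) + rng.normal(0, 2, t.size)

level = 10
coeffs = pywt.wavedec(bz, "dmey", level=level)   # [A10, D10, D9, ..., D1]

def approximation(coeffs, keep_details_down_to):
    """Reconstruct an approximation by zeroing detail levels below `keep_details_down_to`.
    For example, keep_details_down_to=4 keeps A10 plus D10..D4, which corresponds to A3
    (only periods longer than ~8 min survive)."""
    c = [coeffs[0]] + [
        d if (level - i) >= keep_details_down_to else np.zeros_like(d)
        for i, d in enumerate(coeffs[1:])
    ]
    return pywt.waverec(c, "dmey")[: bz.size]

a3 = approximation(coeffs, keep_details_down_to=4)
print("std of removed high-frequency part:", np.std(bz - a3).round(2))
```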
For this reason, the attempts to adjust a function describing the AE index were mostly focused on this IMF component. The wavelet decomposition technique was applied to both the AE index and the IMF B_z component. Figure 1 shows an example of wavelet decomposition and reconstruction for the AE index. The top right panel shows the AE time series. The panels on the right side are the "details", identified by D1 to D10 (see Table 2 for the corresponding range of each detail). Periods longer than 1024 min, plus the average value of the data series, are in the A10 level, shown in the bottom panel on the left side. The panels on the left side are the "approximations", which can be viewed as a cumulative sum of details. Since the high-speed solar wind events are characterized by enhanced AE activity, the higher approximation levels (such as A8, A9, and A10) present increasingly averaged values. Figure 2 shows the wavelet decomposition and reconstruction for the solar wind magnetic field B_z component; the sequence of panels is the same as in Fig. 1. Similar behaviour was noted by Guarnieri (2005) through the calculation of average values of B_z during HSS intervals. Comparing Figs. 1 and 2, an anti-correlation in level A10 is clearly observable, and this same behaviour is also present in other approximation levels. This anti-correlation shows that the AE activity is driven by −B_z on these long timescales during the Alfvénic intervals. Previous work (Guarnieri, 2005) had observed that high frequencies in the signal could hide or decrease the correlation between solar wind parameters and geomagnetic indices, because of the presence of noisy, turbulent activity. Therefore, an approximation level has to be chosen that avoids these high frequencies and, at the same time, is able to represent the particularities of the signal. With a computer routine, each reconstruction level was tested, and it was found that the correlation is high up to level A3 (starting from A10). Levels A0, A1, and A2 include most of the high frequencies that reduce the correlation and do not significantly improve the signal characterization. In this work, we used reconstructions from A10 down to A3, meaning that only periods longer than 8 min were used in the model. This decision, as well as the other assumptions used to develop the following empirical equations, was based on several analyses reported in Guarnieri (2005). In that work, Guarnieri used the classical cross-correlation technique, power spectrum, and multi-taper analysis to correlate B_z and AE, and the results were only significant when periods shorter than 8 or 16 min (depending on the technique) were removed. The studies were performed on both ACE and IMP-8 data. There was no clear correlation when employing unfiltered data, and even the lag between the two time series showed inconclusive results. Progressively removing the high frequencies led to correlation increases, and the lag between the time series became more consistent. The correlation values obtained were in the range from ∼ 0.5 to ∼ 0.8; the lowest values were related to events with the presence of "patches" of different periodicities in B_z. This behaviour exposes a limitation of the classical correlation techniques in dealing with time-localized periodicities. Once we had verified the good correlation between B_z and AE, an empirical model was developed to estimate a time series (here called AE*) based on the interplanetary B_z and to compare it against the observed AE index.
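The effect of removing high frequencies on the B_z-AE correlation, and the search for the best lag, can be illustrated with the following sketch; the synthetic driver/response series and the simple moving-average stand-in for the wavelet approximation are assumptions for illustration only.

```python
import numpy as np

def best_lag_correlation(x, y, max_lag=120):
    """Pearson correlation between x(t) and y(t + lag), maximized over 0..max_lag minutes."""
    best = (0, -1.0)
    for lag in range(max_lag + 1):
        if lag == 0:
            r = np.corrcoef(x, y)[0, 1]
        else:
            r = np.corrcoef(x[:-lag], y[lag:])[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best

rng = np.random.default_rng(3)
n = 20_000
driver = np.convolve(rng.normal(size=n), np.ones(200) / 200, mode="same")  # slow "-Bz" driver
ae = np.roll(driver, 45) + rng.normal(0, 0.2, n)        # AE responds ~45 min later, with noise
bz_noisy = driver + rng.normal(0, 1.0, n)               # unfiltered Bz with Alfvénic noise
bz_filtered = np.convolve(bz_noisy, np.ones(16) / 16, mode="same")  # crude >8-min low-pass

for name, series in [("unfiltered Bz", bz_noisy), ("filtered Bz", bz_filtered)]:
    lag, r = best_lag_correlation(series, ae)
    print(f"{name}: best lag = {lag} min, correlation = {r:.2f}")
```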
The first assumption of this empirical model is that reconnection is the main physical process transporting energy from the solar wind into the magnetosphere (and later to the auroral region). In this way, when B_z is negative we would have energization of the auroral electrojets. Positive B_z intervals imply that there is no energy input from the solar wind to the magnetosphere, and thus for these intervals a decay function is used to estimate the auroral current decay. The calculation and modelling process starts with the interplanetary B_z measured at L1 (B_z_interp), shifted in time to take into account the travel time of the interplanetary structure from the L1 libration point to the Earth, using the solar wind velocity as a proxy; B_z is the shifted time series and δ is the delay applied (Eq. 1). B_z is then decomposed and reconstructed up to the desired approximation level to eliminate high frequencies, creating the B_z* time series (the approximation of the shifted B_z). The field change is then calculated (Eq. 2). The first term of ae* is assumed to be −B_z* (where ae* represents the calculated index before scale and baseline adjustments). If B_z*(t) ≤ B_z*(t−1), then B_z is becoming smaller or more negative, leading to energization (Eq. 3); if B_z*(t) > B_z*(t−1), the decay function is applied instead (Eq. 4). Finally, the scale and baseline of the calculated series are adjusted (Eq. 5), where AE* is the approximation for the AE index time series. A computational routine was developed to adjust the parameters α, β, ε, and γ. The best results were achieved with α = 70 nT, β = 150, ε = −0.3333, and γ = 1. The delay δ (in Eq. 1) depends on the solar wind velocity and the shifting method employed; it is in the range of 30-70 min. Figure 3 shows a comparison between this calculated AE* (blue line) and the observed AE index (red line). The reconstructions were created up to level A4, meaning that only periods longer than 8 min are present. There is a good correlation between the two series (calculated and observed), although there are still some scale problems. Nevertheless, some particularities of the real signal were represented very well by the calculated signal. This event was chosen due to the presence of unambiguous features, such as those occurring at day ∼ 116 and the large peak just after day 116.5. These features were used to test the accuracy of the model under unusual conditions. A comparison among all the events and the calculated time series using different approximation levels is shown in Table 3; the data shown in this table are the correlation coefficients between the calculated AE* and the observed AE index. Considering the A3 approximation level, all the events have correlation coefficients higher than 0.7. With the exclusion of events ev1_2000 and ev3_2000, all of the remaining 12 events have correlation coefficients higher than 0.85, and event 3_1999 has correlation coefficients larger than 0.958 for all the approximation levels. Regarding events ev1_2000 and ev3_2000, the low correlation coefficients observed led us to reanalyse the data plots to understand the main difference between these events and the remaining ones. The B_z data for these two events consist basically of high-frequency oscillations around a ∼ 0 nT value, without the longer-period excursions to negative B_z values typically associated with AE energization. These high frequencies are exactly those mostly removed by the wavelet filtering technique employed here, leading to a weak correlation against the AE index.
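Because Eqs. (1)-(5) are not reproduced in this excerpt, only the qualitative recipe can be illustrated: shift the filtered B_z, accumulate an energization term while B_z* decreases or is southward, apply a decay otherwise, and then rescale. The functional forms, the use of np.roll for the shift, and the normalization below are assumptions, not the authors' equations.

```python
import numpy as np

def ae_star(bz_filtered, delay_min=45, alpha=70.0, beta=150.0, eps=-0.3333, gamma=1.0):
    """Qualitative AE* recursion: energize while the (shifted, filtered) Bz turns southward,
    decay otherwise, then apply a scale/baseline adjustment.  The exact update rules of
    Eqs. (1)-(5) are not given in the text, so the forms used here are illustrative only."""
    bz = np.roll(bz_filtered, delay_min)          # stands in for Eq. (1): time shift L1 -> Earth
    ae = np.zeros_like(bz)
    for t in range(1, bz.size):
        dbz = bz[t] - bz[t - 1]                   # stands in for Eq. (2): field change
        if dbz <= 0:                              # Bz becoming smaller / more negative
            ae[t] = ae[t - 1] + max(-bz[t], 0.0)  # energization term (assumed form)
        else:
            ae[t] = ae[t - 1] * np.exp(eps)       # decay term (assumed form)
    return alpha + beta * gamma * ae / (ae.max() or 1.0)   # scale and baseline (assumed form)

rng = np.random.default_rng(5)
bz = -3 * np.sin(2 * np.pi * np.arange(5000) / 1500) + rng.normal(0, 0.5, 5000)
print("AE* range:", ae_star(bz).min().round(1), "to", ae_star(bz).max().round(1))
```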
Similar results were found by Guarnieri (2005) when analysing these same two events using different techniques, reaching the same conclusion. If one tries to apply Eqs. (1) to (5) to unfiltered B_z data, the result may be a very poor correlation coefficient between the estimated AE* and the observed AE index. The interactive filtering process using wavelet decomposition allows us to effectively remove the high-frequency components that have poor predictive value, thus obtaining higher correlation values. These observations lead to several possible scenarios, as shown in Fig. 4. Correlated activity between interplanetary structures and geomagnetic indices is usually related to large interplanetary structures, such as interplanetary coronal mass ejections or long-period Alfvén waves. These structures appear in interplanetary data as low-frequency waves. Owing to the sizes of these structures, the Earth's magnetosphere may react as a whole, so that the geomagnetic indices give us a good idea of the global magnetosphere energization. However, there are events with uncorrelated activity, usually those with high frequencies present in the interplanetary data, which are related to medium- and short-period Alfvén waves that may miss impinging on the magnetosphere. There are also long-period waves that can be uncorrelated due to internal chaotic processes inside the magnetosphere. When these high-frequency events are removed by filtering the IMF B_z and AE index data, we are able to reach high correlation values such as those shown in Table 3. Future work can implement a forecasting model for the AE index for such periods with high-amplitude Alfvénic fluctuations. One has to use the real-time data from a spacecraft around L1 and verify the Alfvénicity through the technique employed by Belcher and Davis (1971). Once the Alfvénicity is verified, the data series would be fed through the wavelet filtering and the AE* evaluation equations. This would give us a forecasted AE with a delay of only a few minutes, or almost a "nowcast". However, the main result would be the relativistic electron forecasting. Since Hajra et al. (2013, 2014, 2015a) have shown the probability that magnetospheric relativistic electron acceleration may be predicted more than 1 day in advance using ground-based observations of auroral activities during high-speed solar wind streams, the ability to obtain an equivalent auroral index may lead to a more complete model which also includes the forecasting of relativistic electrons. It is important to note that data from other spacecraft or different processing levels can be employed in this forecast; one just has to take into account the propagation delay according to the spacecraft position. A method to calculate an AE* time series based on IMF data measured at L1 in the presence of interplanetary Alfvén waves was developed. This method employs an Alfvénicity check and a wavelet decomposition technique, which is applied to the interplanetary magnetic field B_z component. This calculated AE* was shown to be highly correlated with the observed AE index.
The correlation coefficients between the calculated and observed series can reach values over 0.90, depending on the resolution and the level of detail assumed. Future work can use data from any other spacecraft located around L1 to feed the model and obtain AE*, and this AE* can in turn feed a routine to predict the occurrence of relativistic electrons, giving advance notice of more than a day. This work will hopefully be completed within the next few years. Final comments Although correlations between AE* and AE as high as 0.90 have been indicated in this paper, the causes of the lack of a high correlation in some intervals are still in question. One possibility is that there is high-frequency turbulence in the solar wind which is not geoeffective, a point mentioned previously. However, there is another, recently discussed possibility: local generation of Alfvén waves in the interplanetary medium. If Alfvén waves are generated between ACE and the Earth, this would naturally reduce the correlation between AE* and AE. This may account for intervals where AE* and AE are not well correlated. Both of these explanations are possible. In the future we will probe which of these two (or other) possibilities are factors, and which are the dominant ones. Competing interests. The authors declare that they have no conflict of interest. Special issue statement. This article is part of the special issue "Nonlinear Waves and Chaos". It is a result of the 10th International Nonlinear Wave and Chaos Workshop (NWCW17), San Diego, United States, 20-24 March 2017.
5,044.2
2017-08-04T00:00:00.000
[ "Environmental Science", "Physics" ]
Active cloaking and illusion of electric potentials in electrostatics Cloaking and illusion has been demonstrated theoretically and experimentally in several research fields. Here we present for the first time an active exterior cloaking device in electrostatics operating in a two-horizontally-layered electroconductive domain, and use the superposition principle to cloak electric potentials. The device uses an additional current source pattern introduced on the interface between two layers to cancel the total electric potential to be measured. Also, we present an active exterior illusion device allowing for detection of a signal pattern corresponding to any arbitrarily chosen current source instead of the existing current source. The performance of the cloaking/illusion devices is demonstrated by three-dimensional models and numerical experiments using synthetic measurements of the electric potential. Sensitivities of numerical results to a noise in measured data and to a size of cloaking devices are analysed. The numerical results show quite reasonable cloaking/illusion performance, which means that a current source can be hidden electrostatically. The developed active cloaking/illusion methodology can be used in subsurface geo-exploration studies, electrical engineering, live sciences, and elsewhere. Invisibility has been a subject of human fascination for millennia. The basic idea of invisibility is to generate a cloaking device and use it to hide an object. Cloaking devices employ specially designed structures that would make objects 'invisible' by detecting devices (e.g. eyes, antennas, airborne or satellite detectors/sensors). Over the last two decades, theoretical and experimental studies on cloaking have been conducted in several research fields such as electromagnetism 1,2 , thermal and electrical studies 3 , thermodynamics 4-7 , solid mechanics 8 , acoustics 9-12 , elastic 13,14 , and seismic wave propagation [16][17][18] . Cloaking devices can differ by its construction (interior and exterior cloaking) and by transforming physical properties of the material surrounding an object (passive cloaking) or adding an active source (active cloaking). An interior cloaking device surrounds an object to be cloaked, so that, the object is located in the interior of the cloaking device 10 . An exterior cloaking device hides objects from potential detections without encompassing them 19 . A passive cloaking device induces invisibility by a special choice of physical parameters of a designed artificial material (so-called metamaterial) surrounding or partly surrounding an object, so that, an incident wave on the object bypasses it without distortions. A mathematical technique used to develop metamaterials is transformation optics [20][21][22] . In the case of electrostatics, such metamaterial would be a material with an anisotropic electrical conductivity 23 . An active cloaking masks (emitting) objects using active sources 2,7,[24][25][26][27][28] . In this paper, in horizontally-layered electroconductive domain we use active exterior cloaking devices in the case of electrostatics to mask current source located in the source sub-domain (SSD), e.g. Earth's ground, so making the source nearly undetectable by measurements in the observational sub-domain (OSD), e.g., seawater (Fig. 1). 
An "invisibility" in this case is achieved by using the current source networks suitably constructed on the interface between the two sub-domains (hereinafter referred to as ISD), which cancel (cloak) or generate imaginary (illusion) electric potential in the OSD. A mathematical background for developing the active cloaking devices lies in the theory of inverse problems 29 with the use of the superposition principle in terms of active noise control or noise cancellation 30,31 . In a three-dimensional model domain comprised of two overlain electroconductive layers, the following direct and inverse problems form essential components of our numerical experiments based on an electrostatic model. www.nature.com/scientificreports/ • Direct Problem: To find the generated electrical potential in the entire model domain for a given non-zero current source density located in SSD. • Source Identification Problem: To determine this current source density from its electric potential, which can be measured or inferred from measured electromagnetic data in the OSD. As the source identification problem was analysed by Sommer et al. (ref. 32 ), here we describe briefly the results of this study. Applications of the source identification problem are numerous; for example, it is the subject of research in volcanology 33,34 and in geo-explorations 35 . • Active Cloaking Problem: To cloak the current source density so that it gets 'invisible' for measurements in the OSD. To achieve it, we introduce an additional current source density (thereafter referred to as active cloaking device) on the ISD in order to minimize the total electric potential field in the OSD. • Active Illusion Problem: To generate an illusion in the data measured in the OSD by manipulating the total electric potential field. The manipulation is set up via an additional current source density on the ISD. A similar approach was used in acoustics and electromagnetics 36,37 . Essentially, an active illusion problem is based on an active cloaking problem. In what comes next, we present results of the four interconnected problems mentioned above. Synthetic data (that is, an electric potential) are generated by solving the direct problem (hereinafter we refer to the synthetic data as "measured" data). These data are employed as the input data in the source identification problem to determine the current source density. The active cloaking and illusion devices are then introduced to mask the current source, and the effectiveness of the devices is demonstrated. Results Electric potential determination. The electric potential u (measured in V) is determined from the volumetric current source density f = 0 (measured in A m −3 ; also known as the self-potential source 38,39 ) by solving the boundary value problem for the conductivity equation with the Robin condition at the boundary of the model domain 40 Here σ is the electrical conductivity (measured in S m −1 ); x = (x 1 , x 2 , x 3 ) T are the Cartesian coordinates; = l ∪ ∪ u ⊂ R 3 is the three-dimensional model domain (its description can be found in Method, and its two-dimensional sketch in Fig. 1); l is the SSD, u is the OSD, is the ISD; n is the outward unit normal vector at a point on the boundary ∂� , which restricts R 3 to a bounded domain ; ∂u ∂n is the normal derivative of the electric potential u; and g is a non-negative function defined at the model boundary as the reciprocal distance from the boundary to the geometrical centre of the model domain . 
To solve the problem (1)-(2) numerically, the finite-element method is used 41,42 . The solution to a discrete problem corresponding to the weak formulation of the problem (1)-(2) can be presented as: where u and f are the discrete representations of the electric potential and the current source density, respectively, and A is the solver operator (see Method). The solutions u + and u for two different current source densities f + and f , respectively (see Method for description of the current source densities), are illustrated in Fig. 2. As measurements of the electric potential are restricted to a part of u (OSD), we introduce the restriction operator M , which restricts u to the measured data u d := Mu := u| Ŵ , where Ŵ ⊂ � u is a set of measurement points; (3) u = Af , . Doing so, the following solution to the regularized inverse problem for given measurements u d can be obtained: where α > 0 is the regularization parameter, D T D is the penalty term, and D is the discrete Nabla ( ∇ ) operator. As the choice of α is critical in the Tikhonov regularization method, we apply the L-curve criterion to find the optimal value of the regularization parameter 44 . The inverse problem (Eq. 5) is solved numerically using the same current source densities f + and f . In our numerical experiments, the set Ŵ consists of 300 synthetic measurement points located in the OSD along three lines at the height of 500 m (parallel to x 1 -axis) and three lines at the height of 1000 m (parallel to x 2 -axis) above the plane x 3 = 0 ( Fig. 3 a,c). When choosing the points one should ensure that they are distributed rather uniformly in the sub-domain OSD (both in horizontal and vertical dimensions) to blanket the electric current source. This allows for better reconstructing the source density from measurement data, as the source detection power decreases with increasing distance. The data determined on Ŵ are used to reconstruct f + α and f α as shown in Eq. (5), and the inversion's results are shown in Fig. 3b,d. The performance of regularization and the sensitivity of numerical results have been tested by introducing a random noise on measurements u d . It is shown that the quality of the reconstructions of the current source density decreases with the noise (see Supplementary Material; Fig. S1). Active cloaking. Here we present an active cloaking device allowing the signals emanating from the electric current source to be cancelled to a considerable extent in the OSD. The active cloaking device means physically a network of electrodes installed on the ISD (although the electrode's installation can be done everywhere), which produces a complementary electric current source density patters f c ( f + f c � = 0 ), so that, the superimposed signals from the source f and from those from the electrodes f c cancel each other. The electrodes should be distributed on the ISD such a way to blanket the current source to be hidden (the influence of the position of the cloaking device and its size on cloaking is discussed later and in Supplementary Materials). To determine the effective current source density pattern f c , we employ the superposition principle. As the inverse problem associated with the active cloaking device is linear, the superposition principle can be applied. In doing so, the total electrical potential field vanishes on Ŵ (measurement paths), reduces in the OSD significantly, and hence becomes almost undetectable by measurements. 
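Before the cloaking operator is introduced in detail below, the regularized source-identification step (Eq. 5) can be sketched directly in matrix form. Here A_d is a (dense, for simplicity) matrix mapping the discretized source density to the potential at the measurement points, D is a discrete gradient operator, and the scan over α mirrors the L-curve criterion mentioned above; all helper names are illustrative, not the authors' routines.

```python
import numpy as np

def tikhonov_inverse(A_d, u_d, D, alpha):
    """Regularized source identification:
    f_alpha = (A_d^T A_d + alpha D^T D)^{-1} A_d^T u_d, with alpha > 0."""
    lhs = A_d.T @ A_d + alpha * (D.T @ D)
    return np.linalg.solve(lhs, A_d.T @ u_d)

def l_curve(A_d, u_d, D, alphas):
    """Residual and penalty norms for a range of alphas; the 'corner' of the
    resulting curve is the usual L-curve choice of the regularization parameter."""
    points = []
    for a in alphas:
        f = tikhonov_inverse(A_d, u_d, D, a)
        points.append((np.linalg.norm(A_d @ f - u_d), np.linalg.norm(D @ f), a))
    return points
```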
Applying the operator A d to the combined electric current source density, we obtain i.e. A d f c = −u d . The cloaking procedure is described as and hence on Ŵ . Here c,α is the cloaking operator; f c,α is the current source density of the cloaking device; A d,c is the adapted operator, which maps the cloaking current source density f c,α to electrical potential u d,c on Ŵ ; and the notation means w = h (see Method for detail, where the cloaking and adapted operators are presented). In numerical experiments, we consider the current source densities f + and f and apply the cloaking operator c,α to synthetic data u + d and u d . The cloaking current pattern f + c,α and f c,α are presented in Figs. 4 and 5, respectively. Comparing the images of A d f + (Fig. 4a) and A d,c f + c,α (Fig. 4c), we see that the images are almost identical www.nature.com/scientificreports/ up to their sign, and their sum is almost vanishing (Fig. 4d). The cloaking operator c,α significantly reduces the amplitude of the total electric potential from about 10 8 to 10 2 V (Fig. 4b,e). Similarly, the operator c,α reduces the amplitude of the total electric potential in the case of the current source f (Fig. 5). Figures 4e and 5e illustrate the cancellation of the signals u + d + u + d,c and u d + u d,c , respectively, where the dashed line represents the total electric potential field. The cloak regime masks the source for measurements, and, therefore, the current source becomes invisible electrostatically, i.e. cloaked. Note that the cloaking device (i.e. electric current source density f c,α ; Figs. 4b and 5b) was designed based on data u d and not on f. In the numerical experiments presented here, the position and size of the cloaking device on the ISD have been fixed. To what extent do its position and size affect the cloaking? To answer the questions, we have performed several numerical experiments (see Supplementary Materials). It is shown that the accuracy of the devices enhances with the increasing size of the devices (Fig. S2). A shift of the cloaking devices may improve the quality of invisibility (Figs. S3 and S4). Hence, a search for the optimal size and the position of a cloaking device will assist in enhancing invisibility. When developing the cloaking device, we have considered synthetic data of the electric potential along several paths in the OSD, i.e. the cloaking device ensures that the electric potential becomes insignificant (invisible) on the paths. Meanwhile, how would the cloaking device look like and how effective would it be, if we use not only these paths to develop the cloaking device, but the entire OSD? To ensure the invisibility of the current source everywhere in the OSD (not only along the paths of measurements), a cloaking device has been developed based on the measurements in the entire OSD. It is shown that although the quality of the cloaking lowers in this case, but still reducing the signal of electric potential by an order of magnitude (Fig. S5). www.nature.com/scientificreports/ Active illusion. An illusion is generated in numerical experiments such a way that measurements in the OSD "detect" a current source artificially constructed instead of the existing current source located in the SSD. We achieve this by introducing a specially-designed illusion device, which, according to the principle of superposition, changes the total electric potential field in u into that generated by the current source density chosen for the illusion. 
For given f + in the SSD, we determine an additional current source density f i = f + c − f c on the ISD so that the inverse problem approach applied to the new data u + d + A d f i delivers a solution corresponding to f α . Namely, This means that the illusion pattern f i generates "measured" data u d corresponding to f . The current source density f has been chosen just for simplicity of the illustration of the illusion's results; any admissible current source density can be considered as an additional source. The illusion procedure can be briefly described as , where the cloaking patterns f + c,α and f c,α are determined from data u + d and u d . Assuming that the given current source density is f + (Fig. 6a), the cloaking pattern f + c,α (Fig. 6b) and f c,α (Fig. 6c) are computed. Then the operator A d (Eq. 8) is applied to the (8) www.nature.com/scientificreports/ current source density f + + f + c,α − f c,α to get the resulting "measured" data u d (Fig. 6d). Finally, applying the cloaking operator c,α the illusive current source density f α is obtained (Fig. 6e). Discussion In this work, an approach to design exterior active cloaking devices for self-potentials is presented, and it has been applied to an electrostatic problem so that an electric current source located in the SSD becomes "undetectable" by measurements in the OSD. Compared to the passive cloaking devices, active devices are more simpler as they do not need metamaterials to be constructed and employed. Using synthetic examples of electric current sources, we have obtained that a constructed camouflage on the ISD allows to reduce significantly (at least by six orders of magnitude) the signal of the electrical self-potential on given measurement paths in the OSD, which is emanated from the electric current source located in the SSD. We note that the same approach can be applied to develop interior active cloaking devices by specifying the support of f c around the source to be hidden, i.e., the cloaking device envelops the source completely. www.nature.com/scientificreports/ Although the results of the study are promising and show that the amplitude of the total electric potential is reduced by several orders of magnitude compared to the measured data, a full cloaking cannot be reached due to several reasons. An exterior cloaking device considered here does not envelop completely the source to be hidden. The smaller is the size of the cloaking device, the less effective it is (see Fig. S2 in Supplementary Material). Similarly, the effectiveness of the cloaking device will depend on the network of electrodes installed on the ISD: the denser network, the better results. However, a computational cost will increase with the denser network of electrodes associated with computational nodes. Moreover, the regularization of inverse problems as well as numerical errors degrade the quality of cloaking. In addition, we have extended the idea of cloaking in electrostatic problems to illusion by manipulating the cloaking device so that the observed field of electric self-potential contains a superposition of hidden field created by the electric source in the SSD and a completely new field, which can be generated arbitrarily. Using synthetic examples, we have demonstrated the applicability of the illusion approach to the same electrostatic problem and shown that a "cross"-type source in the SSD becomes invisible, but instead a "ring"-type source can be reconstructed from measurements in the OSD. 
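The superposition logic of the cloaking and illusion devices follows the same algebra. Below is a compact sketch, assuming the adapted operator A_dc and the "measured" data are available as a matrix and vectors; the function names are illustrative, and the commented sanity check only restates the cancellation property described above.

```python
import numpy as np

def cloaking_source(A_dc, u_d, D, alpha):
    """Cloaking device: f_c = -Lambda_{c,alpha} u_d, built so that the potential
    A_dc @ f_c approximately cancels the measured data u_d on the paths."""
    lhs = A_dc.T @ A_dc + alpha * (D.T @ D)
    return -np.linalg.solve(lhs, A_dc.T @ u_d)

def illusion_source(A_dc, u_d_real, u_d_fake, D, alpha):
    """Illusion device: f_i = f_c(real) - f_c(fake), so that the superposed data
    u_d_real + A_dc @ f_i look like u_d_fake (e.g. a 'ring' instead of a 'cross')."""
    return (cloaking_source(A_dc, u_d_real, D, alpha)
            - cloaking_source(A_dc, u_d_fake, D, alpha))

# Sanity check of the superposition idea (A_dc, u_d_real, D, alpha assumed given):
# residual = u_d_real + A_dc @ cloaking_source(A_dc, u_d_real, D, alpha)
# np.linalg.norm(residual) should be small compared with np.linalg.norm(u_d_real).
```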
Since it is more difficult to make an object completely invisible/undetectable due to the measurement inaccuracy and noise, an illusion device can help to hide a real shape of the source or object by mimicking another modelled shape. For example, a source or object could become smaller or bigger for an observer, like a transformation of the ogre into a lion and a mouse in the fairy tale Puss in Boots by Charles Perrault. Electric self-potentials are usually generated by a number of natural sources, such as electrochemical, electrokinetic, thermoelectric, and mineral sources, as well as by a conducting fluid flow through the rocks. Selfpotentials can fluctuate in the Earth with time due to different processes, e.g., alternating currents induced by effects of thunderstorms or heavy rainfalls; variations in Earth's magnetic fields 45 . As hydrocarbons in a reservoir are moving continuously because of stress and pressure differences, seismic or other vibrations, they create alterations in the electric potentials acting as an electric dipole in the geo-electromagnetic field 46,47 . Non-invasive measurements of self-potential in the subsurface does not require electric currents to be injected into the ground as in the cases of resistivity or induced polarisation tomography. The method has been used in geological explorations 48 to detect massive ore bodies, in groundwater and geothermal investigations, environmental and engineering applications, to monitor a salt plume, volcano and lava dome activities, and to reveal a borehole leak during hydraulic fracturing [49][50][51] . Airborne or seaborne geophysical surveying allows for detecting changes in physical variables of sub-surface processes in the Earth, e.g., in the electromagnetic potential and electric conductivity 52 . The surveying has been used for subsurface exploration, such as hydrocarbon exploration, groundwater management, and shallow drilling hazards. The presented methods of cloaking and illusion can be used in geo-exploration. For example, depending on a commercial confidentiality, operators may wish to cloak the subsurface objects in electrostatic sense from airborne/seaborne measurements by other operators. We note that when the OSD is filled by seawater, an electric potential can be measured by seaborne surveys. Meanwhile, during airborne prospecting, a measured value is the amplitude of the magnetic field. This amplitude can be then converted into an electric potential using an appropriate operator, such that the presented approach based on the Tikhonov regularization can be applied 32 . The airborne/seaborne surveying provides the information on aquifers for groundwater investigations, paleochannels for shallow gas investigation and drilling hazards, on soils and overburden for engineering purposes 53,54 . Cloaking and illusion can be used in these studies as well, depending on purposes and needs of subsurface explorations. The superposition principle in terms of active noise cancellation presented here can be used in other areas, e.g. in submarine engineering and marine research. The corrosion of a submarine may create an underwater electric potential that can be detected by available seabed mines with appropriated sensors 55 . The cancellation of the underwater electric potential could be improved by using the presented approach. Also, there are living creatures perceiving electric or electromagnetic signals, and this behaviour of the creatures is an important component of their survival strategy. 
For example, the Gnathonemus elephantfish, hammerhead shark and platypus rely on their electric receptors in muddy waters rather than on their optic sensory organs [56][57][58] . So, to hide objects from hammerhead sharks, a cloaking or deflecting device could be developed. We believe that an active cloaking and illusion in electrostatics will inspire new applications in geosciences, electrical engineering, live sciences, and elsewhere. Method We employ a weak formulation of the boundary value problem (Eqs. 1 and 2) transforming it into an integral equation: where the operators B and L are defined as B(u, v) : Here v is the test function, and S is the boundary element. The solution to the problem (9) for given σ and f (the electric potential u) is the weak solution to the original problem 41 42 , and the model domain is discretized by tetrahedral finite elements at n = 18 × 10 3 nodes. The electric potential u and the test function v are approximated by a combination of n linear finite elements, that is, piecewise linear polynomials, {v i } n i=1 , i.e. u(x) := n i=1 u i v i (x) and u(u 1 , u 2 , . . . , u n ) T ∈ R n . Inserting the approximation into Eq. (9), we obtain a discrete problem corresponding to the problem (9) = 1, 2, ..., n) . The vectors u and f are discrete representatives of the electric potential and the current source density, respectively. Sommer et al. 32 showed that the numerical direct problem (10) is well-posed, and the operator B is positive definite and invertible. Hence, the solution to (10) is u = B −1 L f =: Af . It is important to note that the existence of the forward problem's solver operator A = B −1 L and its positive definition 41 as well as the symmetry and the positive definition of matrix L yield the operator A to be invertible. At each node of the discrete model domain , we assume the specific electrical conductivity to be σ = 10 −1 S m −1 for x 3 ≤ 0 (in the SSD and on the ISD), and σ = 10 −6 S m −1 for x 3 > 0 (in the OSD). We consider two examples of artificial current source densities (in A m −3 ): where Note that the support K of f + is a simply connected domain (a "cross") and the support R of f is a double connected domain (a "ring"). Function g is defined in the model as . We employ the COMSOL Multiphysics FEM software (www. comsol. com) to generate the mesh. The direct and the inverse problem solvers are implemented in MATLAB (www. mathw orks. com), which is linked to COMSOL Multiphysics. In constructing the cloaking device, we assume that the complementary current source density f c has a support on the ISD, and introduce a continuation operator U extending the support as Uf c (x) = f c (x) for x ∈ and Uf c (x) = 0 elsewhere in . Thus, the domain of the cloaking device corresponds to the domain of f c and is shaped by U . We introduce the adapted operator, which maps the cloaking current source density to the electrical potential in the entire domain : A d,c := A d U . The active cloaking problem is formulated as a minimization problem with a penalty term: where L 2 (G) is the space of functions that are square integrable over domain G , equipped with the standard scalar product (u, v) = G u(x)v(x)dG and the norm �v� = (v, v) 1/2 . The solution to the minimization problem (11) can be found using the Tikhonov regularization in the following form 60 : where c,α = (A T d,c A d,c + αD T D) −1 A T d,c is the cloaking operator, and f c,α is the current source density of the cloaking device. 
We define here the electric potential data on Ŵ generated by f_c,α as u_d,c := A_d,c f_c,α (13). The cloaking procedure (7) can then be obtained using Eqs. (12) and (13). As the computational design of the cloaking device is based on a Tikhonov regularization, the quality of the numerical results depends on the choice of the regularization parameter α, and a search for a suitable parameter α is computationally expensive. The active cloaking problem has been solved for different values of the regularization parameter, and the value providing optimal cancellation of the electric potential signal on the measurement paths has then been chosen.
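The parameter search just described can be written as a small scan over candidate values of α, scoring each candidate by how well the total potential is cancelled on the measurement paths. This is a sketch of that selection criterion, not the authors' routine, and the names are illustrative.

```python
import numpy as np

def best_alpha_for_cloaking(A_dc, u_d, D, alphas):
    """Pick the regularization parameter giving the best cancellation of the
    total potential u_d + A_dc @ f_c on the measurement paths."""
    def residual(a):
        lhs = A_dc.T @ A_dc + a * (D.T @ D)
        f_c = -np.linalg.solve(lhs, A_dc.T @ u_d)
        return np.linalg.norm(u_d + A_dc @ f_c)
    return min(alphas, key=residual)
```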
5,783.4
2021-01-07T00:00:00.000
[ "Physics" ]
MHD Boundary Layer Flow of a Power-Law Nanofluid Containing Gyrotactic Microorganisms Over an Exponentially Stretching Surface: This study focuses on a numerical investigation of the boundary layer flow of a magnetohydrodynamic (MHD) power-law nanofluid containing gyrotactic microorganisms over an exponentially stretching surface with zero nanoparticle mass flux and convective heating. The nonlinear system of governing equations is transformed and solved by the Runge-Kutta-Fehlberg method. The impacts of the transverse magnetic field, bioconvection parameters, Lewis number, nanofluid parameters, Prandtl number, and power-law index on the velocity, temperature, nanoparticle volume fraction, and density of motile microorganism profiles are explored. In addition, the impacts of these parameters on the local skin-friction coefficient, local Nusselt and Sherwood numbers, and local density number of the motile microorganisms are discussed. The results reveal that the power-law index is an important factor in this study. Because the buoyancy force term is neglected, the bioconvection and nanofluid parameters have only slight effects on the velocity profiles. The resultant Lorentz force arising from an increase in the magnetic field parameter tends to decrease the velocity profiles and to increase the rescaled density of motile microorganisms, temperature, and nanoparticle volume fraction profiles. Physically, an increase in the power-law index lowers the reduced local skin-friction coefficient and the reduced Sherwood number, while it increases the reduced Nusselt number and the reduced local density number of motile microorganisms. Introduction After the introduction of nanofluids in 1995 [Choi and Eastman (1995)], the study of heat transfer enhancement by adding suitable nanoparticles has received considerable attention due to its wide engineering applications. Review articles on heat transfer enhancement using nanofluids and their applications can be found in [Daungthongsuk and Wongwises (2007); Trisaksri and Wongwises (2007); Wang and Mujumdar (2007); Kakaç and Pramuanjaroenkij (2009)]. The study of boundary-layer MHD flow, which results from the presence of magnetic fields, is important for controlling many systems that use electrically conducting fluids. Moreover, there are several applications of MHD flow, including nuclear reactors, MHD generators, and geothermal energy extraction. Sparrow et al. [Sparrow and Cess (1961)] first introduced the effects of a magnetic field on natural convection flow. Chamkha et al. [Chamkha and Aly (2010)] studied the MHD natural convection boundary layer flow of a nanofluid along a permeable plate. Moreover, Chamkha et al. [Chamkha, Mansour and Aly (2011)] investigated the presence of a transverse magnetic field and Hall current with the effects of chemical reaction and heat generation on unsteady free convection along a porous plate. Uddin et al. [Uddin, Khan and Ismail (2012)] studied numerically the MHD laminar boundary layer flow of an electrically conducting Newtonian nanofluid over a solid stationary plate. Mabood et al. [Mabood, Khan and Ismail (2015)] adopted the Runge-Kutta-Fehlberg fourth-fifth order method to study the MHD laminar boundary layer flow of a nanofluid over a nonlinearly stretching sheet. The macroscopic convective motion of a fluid caused by the density gradient created by the collective swimming of motile microorganisms is called bioconvection [Avramenko and Kuznetsov (2004); Hill and Pedley (2005); Kuznetsov (2006, 2011); Nield and Kuznetsov (2006)]. Kuznetsov et al.
[Kuznetsov (2010)] first introduced the bioconvection term for nanofluid. Siddiqa et al. [Siddiqa, Gul and Begum et al. (2016)] studied the bioconvection flow of a nanofluid and gyrotactic microorganisms along a wavy cone. Khan [Khan (2018) . Runge-Kutta-Fehlberg method will be used to study the MHD boundary layer flow of a power-law nanofluid containing gyrotatic microorganisms over an exponentially stretching surface. It is found that, the power law index is considered an important factor in this study. The rescaled density of motile microorganisms increases as the bioconvection Péclet number, bioconvection constant and magnetic field parameter are increase. The nanoparticle volume fraction increases as thermophoresis parameter, generalized Biot number and magnetic field parameter are increase. The reduced local skin-friction coefficient has the higher values at lower magnetic field parameter and power law index. The reduced Nusselt number is decreasing with an increase on the magnetic field parameter and thermophoresis parameter. The reduced Sherwood number is increasing according to an increase on the magnetic field parameter, Lewis number and Brownian motion parameter. The reduced local density number of the motile microorganisms increases as the Prandtl number, magnetic field parameter and power law index are increase. Problem formulation The two-dimensional steady MHD boundary layer flow of a power-law nanofluid containing gyrotactic microorganisms over an exponentially stretching surface is considered. The flow is originated by virtue of exponentially stretching of the sheet. At a lower surface, the sheet is heated convectively with temperature T f and a heat transfer coefficient hf. The ambient temperature and concentration are T∞ and C∞. Fig. 1 presents the initial schematic diagram of the current problem. Here, the x-axis is taken along the exponentially stretching surface and y-axis is normal to it. As shown in this figure, the transverse non-uniform magnetic field with strength is taken as parallel to the y-axis. The governing equation are: (1) (3) The imposed boundary conditions are: (6) at Introducing the following similarity transformations [Abd El-Aziz and Afify (2016); Afify and Abd El-Aziz (2017)]: Then, the velocity components are: The dimensionless forms of the governing equations are: With the boundary conditions: where, is applied magnetic field, is magnetic field parameter, where, is surface heat flux, is the surface mass flux and is the surface motile microorganisms flux. is shear stress. Numerical method In this section, the numerical procedure for solving the similar nonlinear Eqs. (9)-(12) with boundary conditions (13) (2019)] is applied to solve the nonlinear equations as followings: The nonlinear Eqs. (9)-(12) are transformed into set of first-order ordinary differential equations as: Using Eq. (18) into the system (9)-(12), hence the nonlinear equations are converted to the first order differential equations as: with the boundary conditions: Finally, the shooting technique is used to estimate disappeared initial conditions and by a stepwise process. The step size in Runge-Kutta-Fehlberg method for solving initial value problem (Eqs. (19)- (22)) is . The computed values at with boundary conditions at , are fixed by Newton-Raphson method to give a superior estimation for the required solution. The iterative process is repeated until getting the results with correction up to 10 −6 . 
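The numerical procedure just described, namely rewriting the similarity equations as a first-order system, integrating with a Runge-Kutta-Fehlberg scheme, and correcting the unknown initial slopes by shooting until the far-field conditions are met to within 10^-6, can be illustrated with a compact sketch. Because the full coupled system is not reproduced above, the example below uses the classical Blasius boundary-layer equation as a stand-in; the structure (guess the missing initial condition, integrate, correct, repeat) is the same, only with more unknown slopes in the paper's problem, and all names here are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Stand-in boundary-value problem (Blasius): f''' + 0.5 f f'' = 0,
# f(0) = f'(0) = 0, f'(eta -> infinity) = 1.
ETA_MAX = 10.0

def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def shoot(fpp0):
    """Integrate with an embedded Runge-Kutta scheme (comparable to RKF45)
    for a guessed f''(0) and return the far-field mismatch f'(ETA_MAX) - 1."""
    sol = solve_ivp(rhs, (0.0, ETA_MAX), [0.0, 0.0, fpp0],
                    method="RK45", rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

# Root-finding on the missing initial condition (Newton-Raphson in the paper;
# a bracketing solver is used here for robustness), to a tolerance of 1e-6.
fpp0 = brentq(shoot, 0.1, 1.0, xtol=1e-6)
print(f"f''(0) = {fpp0:.5f}")   # ~0.33206 for the Blasius stand-in problem
```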
Results and discussion To gain a clear insight into the current physical problem, graphical illustrations of the numerical results are presented. The computations were carried out for the governing physical parameters, including the magnetic field parameter, bioconvection Péclet number, bioconvection constant, Brownian motion parameter, thermophoresis parameter, bioconvection Lewis number, generalized Biot number, Prandtl number, Lewis number, and power-law index n. The profiles of the rescaled density of motile microorganisms under the effects of the bioconvection Péclet number and magnetic field parameter at two values of the power-law index, n=0.7 and n=1.2, are shown in Figs. 2(a) and 2(b). It is observed that the rescaled density of motile microorganisms within the boundary layer increases as both the bioconvection Péclet number and the magnetic field parameter increase. The rescaled density of motile microorganisms is lower at the higher value of the power-law index, n=1.2. Moreover, the boundary layer thickness shrinks as the power-law index n increases from 0.7 to 1.2. Fig. 3 presents the rescaled density of motile microorganisms under the impacts of the bioconvection Lewis number and bioconvection constant at two values of the power-law index, n=0.5 and n=1.2. The rescaled density of motile microorganisms decreases as the bioconvection Lewis number increases from 0.5 to 2. In addition, the rescaled density of motile microorganisms increases as the bioconvection constant increases. Moreover, the boundary layer thickness shrinks as the power-law index n increases from 0.5 to 1.2. Fig. 9 shows the velocity profiles under the effects of the magnetic field parameter at two values of the power-law index, n=0.7 and n=1.2. Due to the Lorentz force, which suppresses the motion, the velocity profiles decrease with an increase in the magnetic field parameter. Moreover, the velocity profiles decrease slightly as the power-law index increases. In addition, because the buoyancy force terms in the momentum equation are neglected, the bioconvection parameters and nanofluid parameters have only slight effects on the velocity profiles. Fig. 10 depicts the profiles of the rescaled density of motile microorganisms under the effects of the Prandtl number at two values of the power-law index, n=0.5 and n=1.2. The rescaled density of motile microorganisms decreases as the Prandtl number increases, and it has the lowest values at the higher power-law index, n=1.2. Fig. 11 shows the variation of the reduced local skin-friction coefficient with the magnetic field parameter and power-law index along with the thermophoresis parameter. It is observed that the reduced local skin-friction coefficient has higher values at a lower magnetic field parameter and a lower power-law index, n=0.7. The reduced local skin-friction coefficient changes only slightly with an increase in the thermophoresis parameter. Figure 11: Variation of the reduced local skin-friction coefficient with the magnetic field parameter and power-law index along with the thermophoresis parameter. The variations of the reduced Nusselt number under the effects of several parameters are shown in Fig. 12. In Fig. 12(a), the reduced Nusselt number decreases with an increase in the magnetic field parameter and the thermophoresis parameter. This is relevant to the fact that the magnetic field parameter reduces the velocity and increases the temperature within the boundary layer, and the reduced Nusselt number therefore decreases as the magnetic field parameter increases. In addition, the reduced Nusselt number has higher values at the higher power-law index, n=1.2. In Fig.
12(b), the reduced Nusselt number increases as the Prandtl number, generalized Biot number, and power-law index increase. The variations of the reduced Sherwood number under the effects of several parameters are shown in Fig. 13. In this figure, the reduced Sherwood number increases with an increase in the magnetic field parameter, Lewis number, and Brownian motion parameter. Moreover, the reduced Sherwood number decreases as the Prandtl number, power-law index, and generalized Biot number increase. It is important to observe that the rate of increase of the reduced Sherwood number is higher for the Brownian motion parameter (0.1-0.5) compared with the Lewis number (1-5). Conclusion This study investigated the steady natural convection MHD boundary layer flow of a power-law nanofluid containing gyrotactic microorganisms over an exponentially stretching surface. The nonlinear system of governing equations was transformed into dimensionless similarity equations, which were solved numerically using the Runge-Kutta-Fehlberg method. The effects of the governing parameters, such as the transverse magnetic field, bioconvection parameters, nanofluid parameters, Prandtl number, Lewis number, and power-law index, on the velocity, temperature, nanoparticle volume fraction, and density of motile microorganism profiles, as well as on the local skin-friction coefficient, local Nusselt and Sherwood numbers, and local density number of the motile microorganisms, were explored. The main findings of this work are as follows:
• The rescaled density of motile microorganisms increases as the bioconvection Péclet number, bioconvection constant, and magnetic field parameter increase.
• The nanoparticle volume fraction increases as the thermophoresis parameter, generalized Biot number, and magnetic field parameter increase.
• Due to the Lorentz force, which suppresses the motion, the velocity profiles decrease with an increase in the magnetic field parameter.
• The reduced local skin-friction coefficient has higher values at a lower magnetic field parameter and a lower power-law index.
• The reduced Nusselt number decreases with an increase in the magnetic field parameter and thermophoresis parameter.
• The reduced Sherwood number increases with an increase in the magnetic field parameter, Lewis number, and Brownian motion parameter.
• The reduced local density number of the motile microorganisms increases as the Prandtl number, magnetic field parameter, and power-law index increase.
2,669.8
2020-01-01T00:00:00.000
[ "Physics" ]
Graphene nanoribbon devices at high bias We present the electron transport in graphene nanoribbons (GNRs) at high electric bias conduction. When graphene is patterned into a few tens of nanometer width of a ribbon shape, the carriers are confined to a quasi-one-dimensional (1D) system. Combining with the disorders in the system, this quantum confinement can lead into a transport gap in the energy spectrum of the GNRs. Similar to CNTs, this gap depends on the width of the GNR. In this review, we examine the electronic properties of lithographically fabricated GNRs, focusing on the high bias transport characteristics of GNRs as a function of density tuned by a gate voltage. We investigate the transport behavior of devices biased up to a few volts, a regime more relevant for electronics applications. We find that the high bias transport behavior in this limit can be described by hot electron scattered by the surface phonon emission, leading to a carrier velocity saturation. We also showed an enhanced current saturation effect in the GNRs with an efficient gate coupling. This effect results from the introduction of the charge neutrality point into the channel, and is similar to pinch-off in MOSFET devices. We also observe that heating effects in graphene at high bias are significant. experimental studies of disordered graphene nanoribbons (GNRs) [14][15][16][17][18][19][20], however, suggest that this observed transport gap may not be a simple band gap. In an effort to explain these experimental results, various theoretical explanations for the transport gap formation in disordered graphene nanostructures have been proposed, including models based on Coulomb blockade in a series of quantum dots [21], Anderson localization due to edge disorder [22][23][24], and a percolation driven metal-insulator transition [25]. In order to distinguish between these different scenarios, systematic experiment including treatment of both disorder induced localization and electron-electron interaction is required. In our recent experiment [20], we carried out systematic studies of the scaling of the transport gap in GNRs of various dimensions. From this scaling of several characteristic energies with GNR width (W ) and length (L), we find evidence of a transport mechanism in disordered GNRs based on hopping through localized states whose size is close to the GNR width. We found that At the charge neutrality point, a length-independent transport gap forms whose size is inversely proportional to the GNR width. In particular, we found that in this gap, electrons are localized, and charge transport exhibits a transition between thermally activated behavior at higher temperatures and variable range hopping at lower temperatures. By varying http://www.nanoconvergencejournal.com/content/1/1/1 the geometric capacitance, we find that charging effects constitute a significant portion of the activation energy. Extending this earlier work, in this review, we examine the electronic properties of lithographically fabricated GNRs with widths in the tens of nanometers. Here we investigate the transport behavior of devices biased up to a few volts, a regime more relevant for electronics applications. We will first address characteristics of graphene at high bias which are not specific to graphene nanoribbons, then we address GNRs at high bias specifically. Graphene devices operated at high source-drain bias show a saturating I − V characteristic. 
This decrease in conductivity at high applied electric field is described by carrier velocity saturation due to optical phonon emission. This result is analogous to the high bias results obtained in CNTs. In a well-known experiment, Yao et al. [26] found that current in metallic single wall carbon nanotubes saturates at high electric field. Their result is explained in terms of zone-boundary optical phonon emission from high energy electrons. At high electric fields, a steady-state population is developed between right- and left-moving charge carriers with a maximum energy difference corresponding to the phonon energy ℏΩ = 160 meV, leading to a saturated current of (4e/h)ℏΩ ≈ 25 μA. A slightly different behavior was reported in semiconducting single wall carbon nanotubes by Chen and Fuhrer [27]. In these devices, current does not saturate completely, and the transport is described by an electric field dependent carrier velocity. The authors fit their data with a model based on a carrier velocity that saturates to a constant value at high electric field and a carrier density dependent on the local potential along the device. They find a saturation velocity of 2×10^7 cm/s in their device. These results demonstrate the feasibility of 1D GNR devices for electronic applications with proper bandgap engineering. Graphene nanoribbon fabrication GNRs used in this study were fabricated as lithographically patterned structures from mechanically exfoliated graphene. The process flow is outlined in Figure 1. Briefly, we begin with exfoliated graphene, fabricate metal electrodes using standard electron beam (e-beam) lithography procedures, pattern an etch mask using a negative e-beam resist, and etch away unprotected graphene using an oxygen plasma etch. An atomic force microscope (AFM) image of a finished device is shown in Figure 2. Once a suitable piece of graphene has been deposited and identified using the procedure described above, the next step is to electrically contact the graphene with metal electrodes using e-beam lithography. We begin by spinning on a layer of poly(methyl methacrylate) (PMMA) e-beam resist and baking on a hotplate at 180°C for 2 minutes. Then we use e-beam lithography to write a 2 mm by 2 mm grid of alignment marks at roughly the location of the graphene, and develop in a solution of methyl isobutyl ketone:isopropyl alcohol (MIBK:IPA) 1:3 for 5-10 seconds. This quick development leaves alignment mark "holes" in the PMMA, which we use for alignment in the following e-beam lithography step, eliminating the need for metal alignment mark deposition or another PMMA spin step. Electrodes are patterned in this PMMA layer with e-beam lithography, using an optical image of the sample with the alignment mark holes for design and alignment. Thermal evaporation is then used to deposit 1-2 nm of chrome and 25-50 nm of gold, and the chip is placed in acetone overnight at room temperature for lift-off (Figure 1(b)). Once the graphene has been successfully contacted with Cr/Au electrodes, we create an etch mask to define the nanoribbons. A negative tone e-beam resist, hydrogen silsesquioxane (HSQ) (2% in MIBK), is spun onto the chip (at 4000 rpm, for a typical film thickness of 14 nm). We use HSQ as the resist for this step because a negative resist is ideal for creating a small etch mask, and because with HSQ we can obtain small feature sizes.
The etch mask is written at a relatively high e-beam dose (1300 μC/cm 2 for the ribbons in our 30 keV system, with lower doses for larger features) and developed in a solution of 0.26N tetramethylammonium hydroxide (TMAH) in water for 1 minute (Figure 1(c)). After defining the etch mask, the graphene is ready to be etched. The device is exposed to oxygen plasma in a Technics reactive ion etcher (RIE) with 200 mTorr O 2 at 50 W for 5-10 seconds. These conditions etch graphene at a rate of about one layer per second, so that unprotected single layer and few-layer graphene are etched away cleanly ( Figure 1(d)). The finished device ( Figure 2) is then ready to be wirebonded and measured. The devices measured in this experiment are backgated and dual-gated etched graphene devices. Graphene devices often fail or change drastically and irreversibly when the current density per unit width exceeds a threshold of ∼ 2 mA/μm. We operate the device at currents below this threshold. Current-voltage characteristics at varying gate voltages were measured for 17 ribbon devices with a range of widths and lengths, and three "wide" devices with W = 200 nm, in order to compare to the behavior of non-ribbon devices. Figure 3 shows a plot of current vs source-drain bias for varying gate voltages in a back-gated device. We focus here on the curves taken at densities far from the charge neutrality point, such as the curve singled out in Figure 4. Saturating behavior fits a velocity saturation model Here we see that at low bias the slope of the curve is constant, and at high bias the curve turns down, approaching a linear behavior with a reduced slope. http://www.nanoconvergencejournal.com/content/1/1/1 To describe this saturating decrease in conductivity, we propose a model based on an electric field-dependent where μ 0 is the low field mobility and v sat is a phenomenologically introduced saturation velocity. The total current through the device is given by We assume that the capacitance to the back gate dominates in determining the charge density in the channel, so that where V = V (x) is the potential at position x along the channel, and we have defined Rearranging terms and integrating gives the current In its limiting forms, Equation 5 for the current qualitatively gives the behavior seen in Figure 4. At low V b , current is linear in V b with a conductivity WC g V 0 μ 0 /L, determined by the low field mobility, as expected. At high V b , current is again linear in V b , but now with a conductivity of WC g v sat /2 and an offset determined by the http://www.nanoconvergencejournal.com/content/1/1/1 gate voltage. At low fields, the variation in carrier density is small and the linear I-V results from the linear form of v d (E) ≈ μ 0 E in this regime. At high fields, v d approaches a constant value v sat , and the linear dependence of the carrier concentration on V b is responsible for an I-V characteristic approaching linear behavior. Note this is in contrast to the case of carbon nanotubes, where there are a set number of conducting channels, so that the current saturates with the drift velocity. The expression in Equation 5 for I = I(V b ) was fit to the I-V characteristics in Figure 3; the result is shown in Figure 5. For ribbon devices, the geometry is not well approximated by a parallel plate capacitor, so the gate capacitance was calculated numerically.For the device in Figure 5, the capacitance was calculated to be 47.5 nF/cm 2 using a numerical calculation based on the finite element method. 
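The saturating I-V that the drift-velocity model produces can be sketched as follows. The closed-form expression below is a reconstruction obtained by integrating I = W e n(x) v_d(E) along the channel with v_d(E) = μ0E/(1 + μ0E/v_sat) and n(x) set by the gate capacitance; since the explicit form of Eq. 5 is not shown above, this is an assumption rather than the authors' exact formula, but it reproduces the two quoted limits, a low-bias conductivity of W·C_g·V_0·μ0/L and a high-bias slope of W·C_g·v_sat/2. The example numbers are assumptions of the right order of magnitude, not fitted values.

```python
import numpy as np

def channel_current(Vb, W, L, Cg, V0, mu0, vsat):
    """Drift-velocity-saturation model of the channel current.

    Assumed reconstruction of the closed form:
        I = W * Cg * mu0 * Vb * (V0 + Vb/2) / (L + mu0 * Vb / vsat)
    Limits: I ~ (W*Cg*V0*mu0/L) * Vb at low bias and
            I ~ (W*Cg*vsat/2) * Vb + const at high bias, as stated in the text.
    SI units throughout (m, F/m^2, V, m^2/Vs, m/s) -> amperes.
    """
    return W * Cg * mu0 * Vb * (V0 + 0.5 * Vb) / (L + mu0 * Vb / vsat)

# Illustrative evaluation (parameter values assumed, of the order quoted above):
Vb = np.linspace(0.0, 3.0, 301)
I = channel_current(Vb, W=35e-9, L=500e-9,
                    Cg=47.5e-9 * 1e4,        # 47.5 nF/cm^2 converted to F/m^2
                    V0=10.0,                  # V_g - V_CNP
                    mu0=500e-4,               # 500 cm^2/Vs in m^2/Vs
                    vsat=5e5)                 # 5e7 cm/s in m/s
```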
The model fits well for curves taken at densities far at high carrier densities, and begins to break down for curves measured near the charge neutrality point, as seen in Figure 5 for V g = −10 V. This fit has two free parameters, v sat and μ 0 . For this dataset, this model gives μ 0 values between 400 and 600 cm 2 /Vs, compared to the value of 700 cm 2 /Vs from low bias sweeps of G-V g . The values of v sat obtained from this fit are plotted against V g in Figure 6(a). In Figure 6(b), we plot v sat against the inverse of the Fermi energy Converting V g to E F involves the value of V CNP , which commonly drifts throughout measurement due to changes in adsorbed molecules and positions of trapped charges. Here the black circles correspond to conversion of V g to E F using V CNP = −15 V, the same value used in Equation 5 for the original fit. Red triangles represent a conversion to E F using V CNP = −8 V so that the a linear fit of v sat vs. E −1 F intersects the origin. In order to understand the inverse relationship between v sat and E F , we seek a physical understanding of the electric field dependent carrier velocity, or drift velocity, in Equation 1. This expression corresponds to scattering by optical phonons, which would produce an electric field dependent mean free path. By Matthiessen's rule, mean free paths add as where l is the total mean free path and l sc is the mean free path for elastic impurity scattering and quasi-elastic acoustic scattering, and l op is the mean free path for optical phonon emission. If electrons are immediately scattered upon reaching the optical phonon energy, so that where E is the electric field and is the relevant optical phonon frequency, then the mobility μ is given by This form of the mobility results in the expression for the drift velocity v d = μE given in Equation 1. For electrons and holes in graphene, which have a constant carrier velocity of v F , drift velocity can be understood as the time averaged velocity of carriers when scattering is taken into account. From the above calculation we see that our phenomenological velocity saturation model can be understood in terms of a picture where electrons scatter by optical phonon emission upon reaching the phonon energy under the influence of the applied electric field. With this in mind, we derive an expression for current density using a different approach, in order to gain insight into our measured values for the saturation velocity. Current density is given by where D k = 2/(2π) 2 is the density of electronic states in k-space, v( k) = v F is the electron velocity, and g( k) is the distribution function. In the relaxation time approximation, we have where g 0 ( k) is the equilibrium distribution function, τ is the relaxation time, and f is the Fermi-Dirac distribution function. For a device with its length in the x direction, we seek j = jx, so we consider only E = Ex, and where θ is the angle between d k and E. We assume that electrons are immediately scattered upon reaching the energy threshold for phonon emission, giving So that for Equation 10 we have In polar coordinates At high fields, we assume j = nev sat and use Using this expression with v F = 10 8 cm/s [2,3], we obtain a value of = 62.0 meV from the linear fit (dashed line) in Figure 6(b). This is well below the value of the longitudinal zone-boundary phonon for graphene, which has = 200 meV [28]. 
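The conversion from gate voltage to Fermi energy and the origin-constrained linear fit of v_sat against 1/E_F can be sketched as below. The (2/π) prefactor used to turn the fitted slope into a phonon energy is an assumption (the commonly used graphene velocity-saturation form), since the exact expression is omitted above; the fit itself only extracts the slope and does not depend on that choice.

```python
import numpy as np

hbar = 1.055e-34      # J s
e    = 1.602e-19      # C
v_F  = 1.0e6          # m/s (10^8 cm/s, as in the text)

def fermi_energy(Vg, V_cnp, Cg):
    """E_F = hbar * v_F * sqrt(pi * n), with n = Cg * |Vg - V_cnp| / e (SI units)."""
    n = Cg * np.abs(Vg - V_cnp) / e
    return hbar * v_F * np.sqrt(np.pi * n)

def phonon_energy_from_fit(v_sat, E_F, prefactor=2.0 / np.pi):
    """Fit v_sat = slope / E_F through the origin and convert the slope to a
    phonon energy via the assumed form v_sat = prefactor * v_F * hbar*Omega / E_F."""
    x = 1.0 / E_F
    slope = np.sum(v_sat * x) / np.sum(x ** 2)   # least squares, no intercept
    return slope / (prefactor * v_F)             # hbar*Omega in joules; /e for eV
```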
We suggest that our measured phonon energy corresponds to the SiO 2 surface phonon energy = 55 meV [29][30][31], although we note that values measured in other ribbon devices of different geometries vary widely (from ≈ 22 meV to ≈ 120 meV), possibly due to discrepancies in determining the relevant device geometry, the corresponding capacitance, and the position of the charge neutrality point. Top-gated graphene devices show an enhanced current saturation effect In dual-gated devices, we observe a velocity saturation behavior similar to that the back-gated device behavior described above. However, we also see an enhanced current saturation at certain gate voltage combinations, as first reported in Reference [32]. Figure 7 shows currentvoltage characteristics and corresponding conductancegate voltage sweeps for a dual-gated device with W = 35 nm and L = 2 μm. At combinations of V bg and V tg near the charge neutrality point, we see a "kink" in the I-V curve, where the current first begins to flatten out, then turns upwards again. Figure 8 highlights this behavior in one I-V curve from the same device. This effect is specific to top-gated devices, where the strong capacitive coupling allows the bias voltage to dominate the carrier density in the channel. The "kink" effect in graphene is similar to pinch-off in traditional MOSFETs, where a strong bias voltage pulls the quasi-Fermi level at one end of the channel into the charge-depleted bandgap. In graphene, where there is no bandgap, this results in a transition within the channel from one carrier type (electrons or holes) to the other. In a device that is n-type, as in Figure 8(c)(I), a positive sourcedrain voltage (applied to the source) depletes the electron density in the channel near the source (II). At sufficiently strong positive bias voltage, the bias voltage begins to pull holes into the channel, so a region of the channel is at charge neutrality and contributes a large resistance (III). As bias is further increased, hole density at the source also increases, so conductivity increases again (IV). In Reference [32] we showed that wide plateaus in current could be achieved when this "kink" effect is made to coincide with velocity saturation. Heating effects can overcome transport gap at high bias The results discussed above come from graphene nanoribbons measured at high bias, but the key features of the data, saturation velocity at strong electric fields and the "kink" effect in the current for top-gated devices, are also seen in wide graphene devices [32]. This leads to the question, how are nanoribbons different from wide, non-ribbon devices when operated at high bias? Here we present the preliminary results of a comparison between dual-gated ribbons and wide devices and so far find no major differences in their performance. This result is only preliminary because the widths of the GNRs in this experiment are not well specified within the range of W ≈ 20-60 nm. The widths of nanoribbons lying underneath the dielectric and metal layers cannot be accurately measured in this device geometry. Estimates of the width can be made based on the expected width dependence of the low-temperature transport characteristics m and V b from the analysis in earlier work [20]. From these comparisons, it is estimated that the ribbons used in this experiment have W ≈ 50 nm. 
Ribbons of this width are narrow enough to behave distinctly from "wide" (W ≳ 100 nm) devices at low temperatures and low bias, but as we shall see below, they may not be narrow enough to show a difference in transport characteristics at high bias. Ribbons as narrow as W ≈ 15-20 nm are achievable by our fabrication methods, so measurements of narrower devices with larger transport gaps may still reveal distinct device behavior.

In comparing gapped graphene nanoribbons to wide graphene with no gap, there are several differences we might expect to see. First, since graphene nanoribbons have a strongly suppressed current at energies inside the gap, we may see an increased transconductance. We could also see a larger and more fully saturated current in the "kink" region, as the presence of a gap causes the "kink" to more closely resemble pinch-off in a traditional MOSFET. We may also see the effects of edge roughness: in narrow ribbons where edge roughness constitutes a significant portion of the total ribbon width, this could decrease the maximum current-carrying capability or cause the devices to degrade more quickly.

Figure 9 shows I-V characteristics for graphene devices taken at two different temperatures, 77 K and 300 K. The I-V curves do not change significantly between the two temperatures. Since we expect thermal effects in the conductivity even away from the charge neutrality point, this suggests that the effective temperature in the device is similar at both 77 K and 300 K; in other words, heating generated within the device dominates over the ambient temperature up to 300 K.

As a straightforward method to compare ribbon devices directly with wide devices, we compare the scaled current density per width j = I/W for two devices, a ribbon device with width W ≈ 50 nm and a wide device with W = 200 nm, both with length L = 500 nm, shown in Figure 10. Here we see that in the ribbon device there is no difference in the size or shape of the kink behavior (Figure 10(b)), and only a minor difference in transconductance. From these data we conclude that a ∼50 nm wide graphene nanoribbon shows no major differences in behavior from a wide device when operated at high bias.

To understand the similarity in behavior between the 50 nm and 200 nm wide devices, we compare the gap size of the ribbon device with the relevant thermal effects in the system; if the available thermal energy in the system is larger than the gap, the effect of the gap will be washed out by thermally activated charge carriers. In earlier work [20], we found three different ways to measure the size of the gap: Δ_m, from the gate voltage; Δ_Vb, from the bias voltage; and E_a, from the activation energy for nearest-neighbor hopping. Here we are concerned with current flow at high bias, so Δ_Vb is the most relevant of these scales for distinguishing the on and off states of the device, though E_a determines the leakage current in the off state. For the 500 nm long devices studied here, these values are similar. Since Δ_Vb has a strong length dependence, if we wish to increase Δ_Vb we can increase the device length L, with the trade-off of an increased resistance and therefore a decreased current.

We consider two heating effects in this experiment. First, we compare the size of the gap with the thermal energy at room temperature, k_B T ≈ 26 meV; Figure 11 compares the gap energy scales with this room-temperature thermal energy.
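To get a feeling for how large E_a must be relative to k_B T, a rough Arrhenius estimate of the off-state suppression factor exp(−E_a / k_B T) is useful. The sketch below is illustrative only, and the listed activation energies are hypothetical values chosen to bracket the thermal energy, not measured device parameters.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def leakage_suppression(E_a_meV, T=300.0):
    """Rough Arrhenius factor exp(-E_a / k_B T) for thermally activated
    off-state leakage; smaller is better for an 'off' device."""
    return np.exp(-(E_a_meV * 1.0e-3) / (K_B * T))

for E_a in (10, 26, 60, 120):  # hypothetical activation energies in meV
    print(f"E_a = {E_a:4d} meV: suppression at 300 K ~ {leakage_suppression(E_a):.2e}")
```

An activation energy comparable to k_B T suppresses the leakage by only a factor of order e, which is why gaps well in excess of 26 meV are needed for a useful on/off ratio at room temperature.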
We see that Δ_Vb can easily be made greater than 26 meV by decreasing the ribbon width to below 30 nm or by increasing the ribbon length. However, only the narrowest ribbons shown here have a large enough E_a; narrower ribbons would be needed to ensure a low thermally activated leakage current.

If heating raises the device temperature above room temperature, then heating effects will be more relevant than the ambient temperature. Several recent works [33][34][35] address the topic of heating in graphene at high bias. From the ribbon device data in Figure 9(a), we can expect dissipated electrical power P = IV of up to ≈ 350 kW/cm², though the power dissipation may be lower in the optimal operating regime for device applications. From the results in Reference [35] for temperature vs. power per area, this power dissipation corresponds to a temperature of 1350 K, or a thermal energy of 116 meV. From this it is clear that the thermal energy from heating greatly exceeds that from the ambient temperature, but this result was obtained for a back-gated device. In a dual-gated device geometry, the top-gate dielectric and electrode may act as a heat sink and decrease the effect of heating.

In Figure 12, we model the heat-sinking effects of a gate dielectric and top gate on a hot ribbon. This was done in the COMSOL Multiphysics finite element modeling package by assigning a heat flux to the ribbon such that the maximum temperature in a back-gated device is ∼1100 K, shown in Figure 12(a). The graphene ribbon and graphene leads were assigned a thermal conductivity of 5000 W/(m·K) [36] and a thickness of 3.4 Å; heat dissipation was also allowed through the 285 nm SiO2 layer to the Si substrate below. In Figure 12(b), a 30 nm SiO2 gate dielectric and a 30 nm gold top gate are added to the same model, again allowing heat dissipation into the gate dielectric and the top gate. Here, SiO2 was used in place of HSQ because the two are expected to have similar material properties. In this model, the maximum temperature decreases to 825 K. If the top-gate thickness is increased to 100 nm to allow for more heat sinking, the temperature decreases only slightly more, to 812 K.

The gate dielectric actually used in the experiment consists of HSQ/HfO2 with thicknesses of 15/15 nm. Hafnium dioxide has a much higher thermal conductivity than silicon dioxide (23 W/(m·K) for HfO2 versus 1.4 W/(m·K) for SiO2). When the model is changed to include the proper layer thicknesses of each dielectric, the maximum nanoribbon temperature decreases to 680 K, which corresponds to an energy of 59 meV. This is the behavior we can expect for the device geometry measured here, and from Figure 11 it is clear that E_a and Δ_Vb are both far below this energy, so thermally activated carriers easily wash out any gap-related effects we might have seen in the transport at high bias.

Heat sinking could be greatly improved by removing the HSQ, for example with a hydrofluoric acid etch, and depositing only ≈ 15 nm of hafnia as the dielectric. In this geometry, the dielectric would be thinner and more thermally conductive, allowing more efficient heat dissipation to the metal top gate. For the same heating conditions, this device construction would result in a maximum nanoribbon temperature of 460 K. The corresponding energy, 40 meV, is a gap size easily achievable by our nanoribbon fabrication methods.
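The trend in these finite element results can be rationalized with a much cruder one-dimensional series-thermal-resistance estimate. The sketch below is only an order-of-magnitude check, not a substitute for the COMSOL model: it ignores lateral heat spreading along the graphene and the leads, interface thermal resistance, and the gate metal itself; it treats HSQ as SiO2, as in the text; and the 350 kW/cm² power density is simply the upper end quoted above.

```python
# Crude 1-D estimate of the temperature drop across each dielectric stack,
# using dT ~ (power per area) * sum(thickness / thermal conductivity).
POWER_DENSITY = 3.5e9  # W/m^2, roughly 350 kW/cm^2

STACKS = {
    "285 nm SiO2 down to the Si substrate": [(285e-9, 1.4)],
    "30 nm SiO2 up to a metal top gate":    [(30e-9, 1.4)],
    "15 nm HSQ (as SiO2) + 15 nm HfO2":     [(15e-9, 1.4), (15e-9, 23.0)],
    "15 nm HfO2 only":                      [(15e-9, 23.0)],
}

for name, layers in STACKS.items():
    r_th = sum(t / k for t, k in layers)  # thermal resistance per area, m^2*K/W
    dT = POWER_DENSITY * r_th             # temperature drop across the stack, K
    print(f"{name:40s} dT ~ {dT:6.0f} K")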
We note that, from the results in our earlier work [20], the addition of a top gate tends to decrease E_a; but because the top-gated geometry provides very good heat sinking, and because top gates will ultimately be needed for optimized device design, we see this as the best route toward a graphene nanoribbon device that retains its gapped behavior at high bias.

Conclusions

In this review we have described a saturating I-V characteristic in graphene devices operated at high source-drain bias, and we have described this behavior using a model in which surface phonon emission results in a carrier velocity that saturates to a Fermi-energy-dependent value at high applied electric field. We showed that top-gated graphene devices exhibit an enhanced current saturation effect at certain gate voltage combinations; this effect results from the introduction of the charge neutrality point into the channel and is similar to pinch-off in MOSFET devices. We observe that heating effects in graphene at high bias are significant, and that very narrow ribbons with a strongly heat-sinking device design are required to produce a device in which confinement-induced gap effects dominate over the effects of heating.