Identification of potential angiogenic biomarkers in human follicular fluid for predicting oocyte maturity
Background Angiogenesis in folliculogenesis contributes to oocyte developmental competence in natural and in vitro fertilization (IVF) cycles. Therefore, the identification of key angiogenic factors in follicular fluid (FF) during folliculogenesis is clinically significant and important for in vitro fertilization. This study aims to identify the key angiogenic factors in FF for predicting oocyte maturity during in vitro fertilization. Materials and methods Forty participants who received ovarian stimulation using a GnRH antagonist protocol in their first in vitro fertilization treatment were recruited. From each patient, two follicular samples (one preovulatory follicle, > 18 mm; one mid-antral follicle, < 14 mm) were collected without flushing during oocyte retrieval. In total, 80 FF samples were collected from 40 patients. The expression profiles of angiogenesis-related proteins in FF were analyzed via Luminex high-performance assays. Recorded patient data included antral follicle count, anti-Müllerian hormone, age, and BMI. Serum samples were collected on menstrual cycle day 2, the trigger day, and the day of oocyte retrieval. Hormone concentrations including day 2 FSH/LH/E2/P4, trigger day E2/LH/P4, and retrieval day E2/LH/P4 were measured by chemiluminescence assay. Results Ten angiogenic factors were highly expressed in FF: eotaxin, Gro-α, IL-8, IP-10, MCP-1, MIG, PAI-1 (Serpin), VEGF-A, CXCL-6, and HGF. The concentrations of eotaxin, IL-8, MCP-1, PAI-1, and VEGF-A were significantly higher in preovulatory follicles than in mid-antral follicles, while the Gro-α and CXCL-6 expression levels were lower in preovulatory than in mid-antral follicles (p < 0.05). Logistic regression and receiver operating characteristic (ROC) analysis revealed that VEGF-A, eotaxin, and CXCL-6 were the three strongest predictors of oocyte maturity. The combination of VEGF-A and CXCL-6 predicted oocyte maturity with a higher sensitivity (91.7%) and specificity (72.7%) than other combinations. Conclusion Our findings suggest that VEGF-A, eotaxin, and CXCL-6 concentrations in FF strongly correlate with oocyte maturity from the mid-antral to preovulatory stage. The combination of VEGF-A and CXCL-6 exhibits a relatively good prediction rate of oocyte maturity during in vitro fertilization.
Introduction
Angiogenesis, the formation of new blood vessels from preexisting vessels (1), is critical to ovarian follicle development and oocyte growth. The development of new blood vessels provides cytokines, growth factors, and hormones that induce follicle growth. Healthy follicles are highly vascularized, whereas those undergoing atresia have poor vascularity (2). Thus, properly functioning follicular vasculature is critically important to the fate of the follicle (3).
In the human ovary, new blood vessels form in the medulla (interior) (4) and provide nutrients by passive diffusion to the cortex (outer layer), where they induce primordial follicle development (3). Moreover, in the marmoset, the onset of follicular vascularization begins at the early secondary stage, increases during follicular growth, and declines during follicular atresia (5). Thus, the decrease in vascularization is thought to be a cause or consequence of atresia, perhaps because dying follicles fail to produce the angiogenic factors needed to support the vasculature. The levels of vascular endothelial growth factor (VEGF), fibroblast growth factor 2 (FGF-2), growth differentiation factor-9 (GDF-9), and insulin-like growth factor (IGF) correlate with folliculogenesis and oocyte maturation (6)(7)(8). These factors are either indirectly induced or directly produced by follicular granulosa cells (GCs); among them, VEGF, FGF-2, and IGF have been shown to be associated with angiogenesis (9,10).
Ovarian follicular fluid (FF) contains a variety of molecules involved in oocyte maturation that are secreted by GCs, cumulus cells, and theca cells (TCs) and are transported via blood circulation (11). The FF includes steroid hormones, metabolites, polysaccharides, and antioxidants that provide a microenvironment for oocyte development (12,13). Clinically, oocyte maturity (one of the factors determining oocyte quality) is commonly determined according to morphological criteria under microscopy (14). However, morphological appearance does not predict oocyte quality with absolute certainty. Therefore, more accurate tests are needed to assess oocyte quality and maturity. FF may contain molecules that could serve as biomarkers for predicting oocyte maturity and quality (15).
A prudent strategy for investigating such potential biomarkers in FF is to identify angiogenic factors essential to folliculogenesis, as angiogenesis contributes to oocyte development in the natural cycle and may also play an important role during in vitro fertilization (IVF) cycles. Thus, this study aims to identify the key angiogenic factor(s) in FF that are responsible for oocyte maturation during IVF.
Patient recruitment
This study was approved by the Ethics Committee of Cathay General Hospital, Taipei, Taiwan (CGH-P107083). The study was carried out from March 2019 to March 2020, and informed consent was obtained from all patients. Patients meeting the following criteria were enrolled in the study: (1) undergoing their first IVF or intracytoplasmic sperm injection (ICSI) cycle; (2) age 20-45 years; (3) cycle day 2 or day 3 basal follicle stimulating hormone (FSH) < 15 IU/mL; (4) ovarian stimulation with a gonadotropin-releasing hormone (GnRH) antagonist protocol; and (5) no chromosomal abnormalities. Patients with ovarian pathologies, including endometrioma, cyst (> 3 cm in diameter), teratoma, and benign ovarian tumors, were excluded. A total of 40 IVF patients aged 26-44 years were enrolled. Patient clinical data collected for further evaluation included age, anti-Müllerian hormone (AMH), body mass index (BMI), antral follicle counts (AFCs), basal hormone profiles, and sex hormones on human chorionic gonadotropin (hCG) day.
Ovarian stimulation protocol and sample collection
Ovarian stimulation with gonadotropins was initiated on menstrual cycle day 2 or 3, with doses adjusted according to each patient's age, BMI, and AFC. A daily subcutaneous dose of 0.25 mg of Cetrotide (Merck Serono) was started 5 to 6 days after the initiation of gonadotropins or when the mean follicle diameter reached 14 mm. When two or more follicles were over 18 mm in diameter, ovulation was induced using a dual trigger (hCG 6,500 IU [Ovidrel, Merck Serono] plus GnRH agonist 0.2 mg [Decapeptyl, MSD]). Transvaginal oocyte retrieval was performed 35-37 hours after dual trigger administration. The FF and blood serum were obtained during oocyte retrieval. Two follicular samples, one preovulatory follicle (size > 18 mm: group A) and one mid-antral follicle (size < 14 mm: group B), were collected from each patient during oocyte retrieval. The maturation stage of all oocytes, including those from groups A and B, was recorded for each patient. The oocytes were evaluated and categorized based on their nuclear maturation status into three groups: metaphase II (MII), metaphase I (MI), and germinal vesicle (GV) stages. An oocyte was categorized as MI if it lacked a germinal vesicle (GV) and a polar body (PB), whereas an oocyte was classified as MII (mature oocyte) if it had a spherical shape, a uniform zona pellucida, a uniform translucent cytoplasm, and an extruded first polar body of appropriate size (16).
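The categorization rules above amount to a short decision procedure; the sketch below encodes them for illustration only. The class and function names are hypothetical, and the full MII morphological criteria (shape, zona pellucida, cytoplasm) are reduced to the polar-body check for brevity.

```python
# Hedged sketch: a minimal rule-based classifier mirroring the nuclear-maturity
# criteria described above (GV / MI / MII). Names are illustrative assumptions,
# not part of the study's software.
from dataclasses import dataclass

@dataclass
class OocyteObservation:
    has_germinal_vesicle: bool   # GV visible under microscopy
    has_first_polar_body: bool   # extruded first PB of appropriate size

def classify_maturity(obs: OocyteObservation) -> str:
    """Return 'GV', 'MI', or 'MII' following the morphological rules in the text."""
    if obs.has_germinal_vesicle:
        return "GV"              # germinal-vesicle stage
    if not obs.has_first_polar_body:
        return "MI"              # no GV and no polar body -> metaphase I
    return "MII"                 # polar body extruded -> mature oocyte

# Example: an oocyte lacking a GV but showing a first polar body is scored MII.
print(classify_maturity(OocyteObservation(False, True)))  # -> "MII"
```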
Preparation of human FF
The FF sample collection was performed as previously described by our laboratory (8). FF was collected immediately after isolation of the cumulus-oocyte complexes. The aspirates of FF, containing cells such as mural GCs, erythrocytes, and leukocytes, were pooled in tubes on ice. The sample collection procedure was carried out very carefully to avoid blood contamination, and the FF was obtained without flushing with culture medium so as to avoid FF dilution. If blood contamination occurred, that FF sample was discarded. Otherwise, the FF was centrifuged at 1000 × g for 3 min at 4°C to remove any contaminating blood cells or cell debris. The FF supernatant was then aliquoted into tubes and stored at −80°C for further analysis.
Preparation and analysis of human serum
Patient serum samples were collected at three time points during IVF: on day 2 or 3 of the menstrual cycle (i.e., before gonadotropin administration); on the day of hCG/GnRH-a administration; and 35-37 h after hCG/GnRH-a administration. In contrast to the other sex hormones, the serum level of AMH (AMH Gen II assay, Beckman Coulter, Brea, CA) was measured by ELISA before the IVF cycle. All serum samples were collected into tubes, centrifuged at 1300 × g at 4°C for 10 min, and stored at −80°C for further analyses. Serum estradiol (E2), luteinizing hormone (LH), progesterone (P4), and/or FSH levels were measured by chemiluminescence assay (Abbott Biologicals B.V., The Netherlands).
Granulosa cell culture
Follicular GCs were prepared as previously described (17). Briefly, GCs were obtained from patients undergoing oocyte retrieval for IVF. GCs in FF were isolated by centrifugation at 1000 × g for 3 min. The pellets containing GCs were resuspended, layered onto a 50% Percoll solution, and centrifuged at 400 × g for 30 min. After centrifugation, GCs retrieved from the middle of the Percoll layer were cultured in M199 with 10% FBS, 100 U/mL penicillin, 100 μg/mL streptomycin, and 25 μg/mL amphotericin B (Thermo Fisher Scientific, NY, USA) in tissue culture flasks at 37°C.
Total RNA extraction, first-strand cDNA synthesis, PCR, and PCR product analysis were performed as previously described (18), except that the annealing temperature for PCR was set at 60°C and amplification was run for 30 cycles.
Statistical analysis
Data are reported as mean ± standard error of the mean (SEM) and were compared using the paired t-test. Logistic regression and receiver operating characteristic (ROC) curve analysis were used to determine the correlation between angiogenic protein levels in human FF and the oocyte maturation rate. The area under the ROC curve was used to determine the probability of accurately distinguishing high-quality oocytes. Statistical significance was set at p < 0.05. All analyses were performed using SPSS version 18.0 (Chicago, IL, USA).
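For illustration, the sketch below reproduces this pipeline (paired t-test, logistic regression, ROC/AUC) with SciPy and scikit-learn rather than SPSS; the input arrays are synthetic placeholders, not study data, and the Youden-index cut-off is an assumed convention for reporting sensitivity and specificity.

```python
# Hedged sketch of the statistical pipeline described above.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
preovulatory = rng.normal(5.0, 1.0, 40)   # e.g., an FF protein level, group A (n = 40)
mid_antral   = rng.normal(4.0, 1.0, 40)   # matched follicles, group B

# Paired t-test: each patient contributes one follicle per group.
t_stat, p_val = stats.ttest_rel(preovulatory, mid_antral)

# Logistic regression of follicle class on protein level, then ROC analysis.
X = np.concatenate([preovulatory, mid_antral]).reshape(-1, 1)
y = np.array([1] * 40 + [0] * 40)         # 1 = preovulatory, 0 = mid-antral
scores = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
auc = roc_auc_score(y, scores)

# Youden's J picks the cut-off that jointly maximizes sensitivity + specificity.
fpr, tpr, thresholds = roc_curve(y, scores)
j = np.argmax(tpr - fpr)
print(f"p = {p_val:.4f}, AUC = {auc:.3f}, "
      f"sensitivity = {tpr[j]:.3f}, specificity = {1 - fpr[j]:.3f}")
```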
Patient demographics
Patient demographic data are summarized in Supplementary Table 1. A total of 80 FF samples, including preovulatory and mid-antral follicles, were collected from 40 patients with a mean age of 36.38 ± 0.79 years, mean AMH of 3.65 ± 0.4 ng/mL, mean BMI of 20.51 ± 0.44 kg/m², and mean AFC of 11.34 ± 4.53. The concentration of serum E2 on trigger day was 27.4 times higher than that on the basal day and 1.75 times higher than that on the oocyte retrieval day. The level of serum P4 on the oocyte retrieval day was 29.9 and 12.2 times higher than that on the basal day and trigger day, respectively.
Comparison of oocyte maturation rate between preovulatory and mid-antral follicles
Next, we compared the oocyte maturation rate between preovulatory and mid-antral follicles. Two approaches were adopted. First, the distribution of maturation stages among all oocytes from the 40 patients was analyzed. Second, the fraction of MII oocytes was compared between preovulatory (group A) and mid-antral (group B) follicles. Among the patients, 9 underwent egg freezing without fertilization, whereas 31 underwent the IVF procedure. The overall maturation rates were: MII, 69.57%; MI, 13.87%; GV, 14.91%; and degenerative, 1.66% (Supplementary Table 2); the MII oocyte production rate is similar to that of a previous study (19). In parallel, the fraction of MII oocytes differed significantly between groups A and B (90.0% versus 72.5%, respectively; p < 0.05). The fertilization rate was 80.6% in group A and 67.7% in group B (p > 0.05) (Table 1).
FF angiogenic protein levels differed significantly between preovulatory and mid-antral follicles
To further determine whether angiogenic protein(s) in FF from preovulatory and mid-antral follicles correlate with oocyte maturity, a Luminex assay was performed to compare protein concentrations between FF from preovulatory and mid-antral follicles. The result showed that 10 of the targeted proteins were not detected (ND) or were present only at low concentrations, and 3 of the targets did not differ in concentration between preovulatory and mid-antral follicles (Supplementary Figure 1). However, the concentrations of 7 of the angiogenic proteins in FF, including VEGF-A (p < 0.001), PAI-1 (p = 0.017), IL-8 (p = 0.001), MCP-1 (p = 0.001), eotaxin (p = 0.001), CXCL-6 (p < 0.001), and Gro-α (p = 0.002), differed significantly between these two groups (Table 2 and Figure 1). The concentrations of eotaxin, IL-8, MCP-1, PAI-1, and VEGF-A were significantly higher in preovulatory follicles than in mid-antral follicles, while the Gro-α and CXCL-6 expression levels were lower in preovulatory than in mid-antral follicles (p < 0.05). However, one may be concerned that some concentration data points in the PAI-1 scatter plot were near the upper limit of detection. To confirm the PAI-1 result, the original mean fluorescence intensity (MFI) data obtained from the Luminex analysis were reanalyzed. The reanalyzed result was similar to that shown in Figure 1, demonstrating a significantly higher level of PAI-1 expression in the FF of preovulatory follicles (p < 0.05) (Supplementary Figure 2). Therefore, the data presented in Table 2 and Figure 1 were used in the following analyses. Logistic regression analysis revealed that the concentrations of VEGF-A, eotaxin, and CXCL-6 differed significantly between the fluid of preovulatory and mid-antral follicles (p < 0.05) (Table 3).
Notably, the combination of VEGF-A and CXCL-6 displayed a strikingly high AUC of 0.900, with a sensitivity of 91.7% and specificity of 72.7% in predicting oocyte maturity (p < 0.001). Additionally, the combination of VEGF-A and eotaxin yielded an AUC of 0.883, with a sensitivity of 97.2% and specificity of 63.6% (p < 0.001). Furthermore, the combination of eotaxin and CXCL-6 achieved an AUC of 0.870, with a sensitivity of 61.1% and specificity of 100% (p < 0.001). These findings indicate that the combination of VEGF-A and CXCL-6 outperforms all other individual factors or combinations in terms of predictive power (Table 4 and Figures 3D-F).
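A minimal sketch of how such two-marker combinations can be scored follows; it assumes the combination is formed by a bivariate logistic model whose predicted probability is thresholded at the Youden-optimal point, which is a common convention rather than a detail stated here. All names are illustrative.

```python
# Hedged sketch: scoring a two-marker combination (e.g., FF VEGF-A + CXCL-6).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc

def combined_marker_performance(marker1, marker2, mature):
    """AUC, sensitivity and specificity of logit(P(mature)) ~ marker1 + marker2."""
    X = np.column_stack([marker1, marker2])
    score = LogisticRegression().fit(X, mature).predict_proba(X)[:, 1]
    fpr, tpr, _ = roc_curve(mature, score)
    j = np.argmax(tpr - fpr)                 # Youden's J = sensitivity + specificity - 1
    return auc(fpr, tpr), tpr[j], 1.0 - fpr[j]

# Usage (synthetic placeholders): combined_marker_performance(vegf_a, cxcl6, y)
```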
CXCL-6 and eotaxin mRNA are expressed in follicular GCs
Our previous study showed that VEGF is produced by follicular GCs and thus is likely to be present in FF (17). Because VEGF, CXCL-6, and eotaxin are three potential predictors of oocyte maturity, CXCL-6 and eotaxin mRNA expression in follicular GCs was examined by RT-PCR analysis. CXCL-6 and eotaxin mRNA expression was observed in GCs, although at different levels of expression in the two representative patients (Figure 4). These results suggest that follicular GCs may be a secretory source of the CXCL-6 and eotaxin found in FF.
Discussion
In this study, we identified 7 angiogenesis-related proteins that were differentially expressed between preovulatory and mid-antral FF (Figure 1). Of these, VEGF-A, eotaxin, and CXCL-6 concentrations strongly correlated with oocyte maturity. The correlation with oocyte maturation was positive for VEGF-A and eotaxin, but negative for CXCL-6 (Figures 1, 2). Two of these three angiogenic factors (eotaxin and CXCL-6) belong to the chemokine family (20). Previous studies have shown that VEGF and eotaxin play roles in angiogenesis (17,21). Additionally, VEGF is one of the most relevant angiogenic factors studied to date in orchestrating folliculogenesis (22). Elevated VEGF levels in FF have been observed in both natural and IVF cycles, and it has been established that mature follicles originate from highly vascularized follicles (23,24). Moreover, in hormone-stimulated IVF cycles involving patients with a normal ovarian response, a positive correlation was found between VEGF levels in FF and the extent of peri-follicular vascularity on the day of follicle aspiration (25). Nevertheless, our current study revealed that the area under the curve (AUC) of VEGF in predicting oocyte maturity was only 0.788 (Figure 3), indicating that FF VEGF alone is insufficient to serve as a robust predictor of oocyte maturity.
Follicles with a diameter of 16-22 mm on trigger day are most likely to contain mature oocytes in IVF (26). Therefore, follicle size appears to indicate the timing of the final follicular maturation trigger (27). In addition, healthy follicles are highly vascularized, suggesting a relationship between follicular vascularization and follicular function (28).
[Table 3: Logistic regression analysis of the significant angiogenic proteins associated with oocyte maturation between preovulatory and mid-antral follicles. Columns: Biomarkers, Odds ratio, 95% C.I., p value.]
[Figure 1: Comparison of the concentrations of seven angiogenic factors in human ovarian FF between preovulatory and mid-antral follicles. Follicles were divided into two groups according to their mean size: preovulatory follicles > 18 mm (group A) and mid-antral follicles < 14 mm (group B). The absolute concentration of each angiogenic protein in human ovarian FF was determined via Luminex assay. *p < 0.05.]
Predicting oocyte maturity using follicle size is not a perfect method, as it is subject to interpersonal and intrapersonal measurement errors. Therefore, measuring the serum concentration of estradiol is an alternative method for predicting oocyte maturity (approximately 200 pg/mL per mature oocyte). In this study, we observed that follicles larger than 18 mm in diameter (preovulatory follicles) had an oocyte maturation rate of 90%, whereas those smaller than 14 mm (mid-antral follicles) had a maturation rate of 72.5% (Table 1). Thus, follicle size can perfectly predict neither oocyte maturity nor follicular maturity. Two related limitations are encountered in clinical practice. First, identifying the stage of oocyte maturation prior to cumulus cell removal is challenging. Second, inducing oocyte maturation via artificial methods after removing the cumulus cells is time-consuming. Therefore, the identification of key angiogenic factor(s) in FF that are responsible for oocyte maturation during IVF could lead to new methods for determining oocyte maturity.
Eotaxin (also known as CCL11), an 8.3-kDa protein belonging to the CC chemokine family, interacts with eosinophils (EOS) through CC chemokine receptor 3 (CCR3) (29). EOS preferentially accumulate in the dilated microvessels of the thecal layer as it transforms into the septa of the corpus luteum. The number of extravasated EOS was observed to be low in the granulosa layer under luteinization, moderate in the thecal layer, and high in hemorrhages in the former antrum (30). Eotaxin binds to human endothelial cells to induce endothelial proliferation and migration (31,32). Elevated eotaxin expression in FF might play an important role in angiogenesis and oocyte maturation during the stage of ovarian angiogenesis between the LH surge and the completion of meiosis (33,34). This would explain the higher eotaxin level observed in preovulatory follicles during IVF.
We found that the expression level of CXCL-6 was higher in mid-antral FF than in preovulatory FF, demonstrating its negative correlation with oocyte maturity. A previous study showed that CXCL-6 is a neutrophil-activating chemokine with bactericidal properties (35). Torán et al. reported that CXCL-6 is an important paracrine factor in the pro-angiogenic human cardiac progenitor-like cell secretome (36). Further, VEGF-A, IGF-1, HGF, and IL-8 gene expression was strongly promoted by CXCL-6 in a myocardial infarction model (37). Hence, the higher level of CXCL-6 in small follicles (mid-antral follicles; group B) implies that CXCL-6 may affect oocyte maturity in an earlier phase of follicle development. By contrast, VEGF-A may influence oocyte maturity in the late (preovulatory) phase (group A). Whether the two factors work together or independently to contribute to oocyte maturation remains to be determined. However, our study, at least in part, revealed that the combination of CXCL-6 and VEGF can be a better predictor of oocyte maturation in IVF.
To our knowledge, few studies have investigated the relationship between follicular angiogenic factors and oocyte maturation in preovulatory and mid-antral FF in the human ovary. FF is a plasma filtrate with a large dynamic range of protein concentrations that renders the detection of lower-abundance proteins challenging (38). Luminex multiplex assays are designed to simultaneously detect and quantitate multiple secretory proteins (e.g., cytokines, chemokines, and growth factors) with greater efficiency and high throughput. Our results may pave the way for the selection of angiogenic factors in FF for use in predicting oocyte maturity. However, the following limitations of this study should be addressed: 1) this was a prospective, observational study, and the results should be further validated by a well-designed randomized controlled trial; 2) the relatively small sample size may bias the statistical results; 3) owing to the limited amount of FF sample, the absolute PAI-1 concentrations near the upper detection limit could not be reanalyzed by Luminex assay for some patients. Therefore, the exclusion of PAI-1 from the final predictive model does not rule out its potential; on the contrary, based on our analysis of the original MFI data (Supplementary Figure 2), PAI-1 may also be a potential factor for predicting oocyte maturation, although this needs to be further investigated; 4) the mechanisms underlying the action of FF angiogenic factors on oocyte maturation were not investigated in this study. Further studies in cultured GCs and animal models are needed to address this question.
In this study, we did not include parameters such as age, BMI, AMH, AFCs, and hormones in the statistical analyses of the two groups. The main focus of this study was to compare the differences in follicular angiogenic factors between preovulatory and mid-antral follicles from the same patient. Therefore, we collected two follicular samples, one preovulatory follicle (size > 18 mm: group A) and one mid-antral follicle (size < 14 mm: group B), during oocyte retrieval. Consequently, there was no need to adjust the results for each patient's age, BMI, AMH, AFCs, and hormones, and we used the paired t-test to compare the differences in follicular angiogenic factors between these two groups. The mean age of the study participants was 36.38 ± 0.79 years. Because each patient contributed one follicular sample to group A and one to group B, any age-related effects should be offset between the two groups. However, the impact of age on the expression of angiogenic factors could not be assessed owing to the limited sample size of only 40 patients. We intend to investigate this aspect in future studies, where a larger sample size can provide more robust insights into the relationship between age and angiogenic factor expression.
In conclusion, FF VEGF-A, eotaxin, and CXCL-6 are involved in oocyte maturation during the mid-antral to preovulatory stage. These three angiogenic factors can each be used individually as a biomarker to predict oocyte maturity. In addition, the combination of CXCL-6 and VEGF can be a better predictor of oocyte maturity in IVF, revealing a potential application of FF CXCL-6 and VEGF in judging oocyte maturity during IVF. In this regard, it may be possible to develop a protein assay kit for rapid prediction of oocyte maturity by targeting these biomarkers. However, further efforts are required to explore the actual biological roles of the three angiogenic factors and the combined impact of VEGF-A and CXCL-6 on oocyte maturity prediction in folliculogenesis during IVF cycles.
[Figure 4: Differential expression of CXCL-6 and eotaxin mRNA in follicular GCs of patients undergoing IVF. Total mRNA was extracted from the follicular GCs of two IVF patients (P1: patient 1; P2: patient 2). Differential expression of CXCL-6 and eotaxin mRNA was detected using RT-PCR.]
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
Ethics statement
The Ethics Committee of Cathay General Hospital approved this study (IRB no.: CGH-P107083), and written informed consent was obtained from all patients. The patients/participants provided their written informed consent to participate in this study.
Local identification of the stress-strain curves of metals at a high strain rate using repeated micro-impact testing
Introduction
Using efficient finite element simulations, it is now possible to gain a thorough understanding of many industrial processes based on impacts such as stamping, turning or milling [1]. Some of these processes are dedicated to the creation of new surfaces and others are used as mechanical surface treatments to improve near-surface properties. The most famous process of this kind is shot peening. During the treatment, the sample is peened by a large number of shots over a short period of time, which creates a compressive residual stress area in the sub-surface. It has been shown in numerous papers how finite element analyses may be used to explain the creation of such residual stress fields [2]. It appears that one key parameter is the stress-strain curve of the materials treated. Indeed, the choice of the stress-strain curve will directly influence the level and size of the induced compressive residual stress field. One of the main difficulties is that shot peening induces high strain rates (higher than 100 s⁻¹) and it is well known that the stress-strain relation of most metals at such strain rates is radically different from the stress-strain curve at lower strain rates. The identification of the stress-strain curves at the strain rates induced by the peening process is therefore required to correctly model the shot peening treatment.
Identifying metal behavior at high strain rates is often performed by means of Hopkinson's bar devices [3]. However, the resulting stress-strain curve corresponds to the bulk behavior of the material and thus does not take into account the modifications induced by surface preparations or surface treatments. To correctly describe or model the mechanical response of engineering surfaces submitted to peening or scratch processes [4], this local mechanical behavior has to be known. This is the reason why another kind of mechanical testing has to be used. Generally, local mechanical properties are extracted using the nano-indentation technique [5][6][7]. However, the standards of nano-indentation are not really adapted to the identification of metal behavior at high strain rates. Specific dynamic indentation devices - instrumented nano-impact testers - have been designed [8][9][10][11], which permit such measurements. However, the instrumentation of such devices is really difficult and expensive, which limits their practical use.
In this paper, a new method based on multiple impacts at the same point is developed to extract the elastoplastic behavior of metals at strain rates close to those induced by shot peening processes. It is based on the use of a standard industrial micro-percussion device, which allows the locations of the impacts and their kinetic energies to be accurately controlled [12,13]. It is to be noted that no specific additional instrumentation is required to use the method developed in this paper. Here, the main objective is not to identify the true stress-strain curve for different strain rates but to obtain a fairly accurate mechanical behavior of the surface so as to correctly describe the modifications induced by shot peening processes. The strategy developed in this paper is to determine the best stress-strain curve which reproduces the growth of the residual imprint at each impact for a given impact energy.
In the first part, the repeated impact set-up is presented. Then the FEM strategy is detailed. Finally the identification method is developed and an application of this method to an AISI1045 steel and an AISI316L stainless steel is presented.
Indentation versus impact
The impact of spherical balls under normal incidence was well described by Tabor [14] in the case of dynamic hardness measurements. When dynamic effects can be neglected, except on the indenter kinematics [15], an impact under normal incidence can be considered to be a classical indentation. The main difference is that impacts are energy-controlled whereas classical indentation loadings are load-controlled or displacement-controlled. For low impact energies, only elastic deformation takes place. In this case, the relation between impact energy, geometrical and mechanical parameters has been derived by Johnson [15] based on Hertz theory of elastic indentation. For higher impact energy, the plastic deformation of metal occurs until the kinetic energy has been consumed. Finally, there is a release of elastic stresses in the indenter and the material and a permanent impression is visible (Fig. 2). The same results are observed when elastic and plastic deformations occur under indentation loadings.
In the theory of indentation [16], it is possible to define equivalent values of the stress and strain fields. It is obvious that the stress, strain and strain rate fields are not uniform over the deformed area. In fact, these equivalent parameters correspond to an average level of the mechanical fields [17,7]. The representative stress is related to the mean pressure, which is often called hardness. According to Tabor, the representative stress and strain are linked by the stress-strain curve of the indented material. For instance, the spherical indentation representative strain [17][18][19] is often written as ε_r = 0.2 a/R (Eq. (1)), where a is the residual imprint radius after indentation under a given load and R is the ball radius. Hence, the stress-strain curve of metals can be determined very easily using different indentation loads and measuring the residual imprint radius after each indentation. Although this identification method was strongly criticized in the past [20,16,21], it points out the fact that it is possible to determine a closed form of the material stress-strain curve using simple spherical indentation experiments. Let us now consider the representative strain rate. Subhash et al. [9] proposed to define the nominal strain rate as the ratio of the indenter speed over the imprint radius. Mok and Duffy [22] and Tirupataiah and Sundararajan [23] proposed to define the nominal strain rate as the ratio of the nominal strain over the impact duration, the nominal strain being defined according to Tabor's definition [17]. It is clear that these definitions are only overall estimates, but they make it possible to qualitatively assess the strain rate level induced by an impact. Here, following Tabor, we propose to adopt the simple definition dε/dt = g v/a (Eq. (2)), where v is the impact velocity, a the residual imprint radius and g a constant (set to 1 in the present paper). Let us note that this definition is in very good agreement with previous results on the indentation of time-dependent materials [7]. Taking these different relations into consideration, impacting can be considered to be a classical spherical indentation but with a much higher level of strain rate. Because impacts are energy-controlled, it is not really possible to apply the classical methods developed for spherical indentation experiments. In this paper, we propose a new method based on repeated impacts with constant kinetic energy. Kermouche et al. [24] have shown that two impact regimes can be identified. The transient impact regime is characterized by a growth of the residual imprint at each impact [12] and an increase of the maximum impact load. This transient regime is followed by the stabilized impact regime, characterized by the shakedown of the structure to a macroscopic elastic response. During this regime there is no further increase of the contact area per impact.
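To make these representative quantities concrete, the sketch below evaluates the reconstructed Eq. (1) and Eq. (2) for device-typical values; the helper names and the sample numbers are illustrative, not taken from the paper's data tables.

```python
# Hedged sketch: representative strain (Eq. (1)) and nominal strain rate (Eq. (2)).
def representative_strain(a: float, R: float) -> float:
    """Tabor's spherical-indentation strain, 0.2*a/R (a and R in metres)."""
    return 0.2 * a / R

def nominal_strain_rate(v: float, a: float, g: float = 1.0) -> float:
    """Strain-rate estimate g*v/a with g = 1, as stated in the text."""
    return g * v / a

a, R, v = 300e-6, 1e-3, 0.2            # imprint radius, ball radius (m), speed (m/s)
print(representative_strain(a, R))     # -> 0.06, within the 0.04-0.06 range cited later
print(nominal_strain_rate(v, a))       # -> ~667 s^-1, inside the [100, 1000] s^-1 window
```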
The main idea of the method developed here is to use the transient impact regime. According to the spherical indentation theory, the increase of the residual imprint and of the maximum load per impact is related to the stress-strain curve of the impacted material. If these measurements could be performed with sufficient accuracy, then a perfect control of the impact energy would not be required. However, while the post-mortem imprint morphology can be measured with very high accuracy, the determination of the maximum impact load has to be treated with caution. Indeed, this can be a very difficult task with regard to the dynamic response of the impact tester and the capability of the load sensor. To overcome this difficulty, and because it is possible to accurately control the impact energy, the method presented in this paper is based only on the measurement of the residual imprint morphology at each impact for a given impact energy.
Experimental set-up
The principle of the repeated impact device used in this study has already been presented in previous papers [12] and is sketched in Fig. 1. A rigid indenter, ended by a hemispherical tip and electromagnetically accelerated, is pushed onto the sample surface under normal incidence. Zirconia balls (E = 200 GPa, hardness: 800 HV) with a diameter of 2 mm (grade 10) have been used as impacting tips, leading to a total indenter mass of 174 g. A constant acceleration being generated by the electromagnets, the indenter velocity and kinetic energy just before the impact may be directly determined from the indenter weight and its initial position above the sample surface. The incident kinetic energy was checked using a laser diode displacement sensor. The usual impact energy range belongs to [1, 21] mJ, which corresponds to an impact speed range of [50-500] mm/s. For most metals tested with this impact device, the imprint radius belongs to [100-400] μm. Hence, according to Eq. (2), the range of impact-induced strain rates belongs to [100, 1000] s⁻¹. For most metals, this range is sufficiently limited to consider that an equivalent time-independent stress-strain curve is accurate enough to model the mechanical response of the surface.
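Since the device is energy-controlled, the impact speed follows directly from T = (1/2) m v²; the sketch below checks the quoted speed range against the quoted energy range using the stated indenter mass. This is plain kinematics, not a quote from the paper's software.

```python
# Hedged sketch of the energy/velocity bookkeeping for the impact device.
from math import sqrt

M_INDENTER = 0.174          # kg, total indenter mass quoted in the text

def impact_speed(T_joule: float, m_kg: float = M_INDENTER) -> float:
    """Impact speed (m/s) for a given kinetic energy (J): v = sqrt(2T/m)."""
    return sqrt(2.0 * T_joule / m_kg)

for T_mJ in (1.0, 8.0, 21.0):
    v = impact_speed(T_mJ * 1e-3)
    print(f"T = {T_mJ:4.1f} mJ -> v = {v * 1e3:6.1f} mm/s")
# 8 mJ gives ~303 mm/s, consistent with the ~310 mm/s used in Appendix A;
# compare also with the [50-500] mm/s range quoted above.
```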
As explained in the previous section, the strategy of the method developed in this paper is based on the measure of the growth of the residual imprint morphology -depth and radiuswith the number of impacts. To illustrate this, Figs. 2 and 3 show the growths of the residual imprints induced by repeated impacts on an AISI1045 steel (200 Hv) for an impact energy of 8 mJ.
Because the impact tester is not perfectly rigid, a significant part of the kinetic energy may be lost in the deformation of the device, according to Eq. (3): T = W_contact + W_device, where T is the impact energy, W_contact is the work used to deform the sample and the tip, and W_device is the work lost in the device. This last component has been estimated from impacts on a tungsten carbide sample (E = 650 GPa, H = 1300 HV) using different impact energies. Indeed, impacts being perfectly elastic on such a material (no residual imprint for any of the impact conditions used), W_contact can be calculated from the measured maximum load using Hertz contact theory. W_device is then obtained from Eq. (3). Let us note that here W_device is assumed to depend on the maximum impact load only.
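A minimal sketch of this calibration step is given below, using the standard Hertz relations for a sphere on a flat (reduced modulus E*, maximum approach δ, elastic work W = (2/5) P δ); the peak-load value and the Poisson ratios are illustrative assumptions, not data from the text.

```python
# Hedged sketch of the device-loss estimation: on a purely elastic impact, the
# contact work follows from Hertz theory given the measured peak load, and
# W_device = T - W_contact.
def hertz_contact_work(P_max: float, R: float, E1, nu1, E2, nu2) -> float:
    """Elastic work (J) stored at peak load P_max (N) for a sphere of radius R (m)."""
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)            # reduced modulus
    delta = (9.0 * P_max**2 / (16.0 * R * E_star**2)) ** (1.0 / 3.0)  # max approach
    return 0.4 * P_max * delta        # W = (2/5) * P_max * delta_max under Hertz loading

# Zirconia ball (E = 200 GPa) on tungsten carbide (E = 650 GPa), R = 1 mm;
# P_max = 400 N is a hypothetical measured peak load, nu = 0.3 assumed for both.
T = 8e-3                                                  # impact energy, J
W_contact = hertz_contact_work(400.0, 1e-3, 200e9, 0.3, 650e9, 0.3)
print(f"W_contact = {W_contact*1e3:.2f} mJ, W_device = {(T - W_contact)*1e3:.2f} mJ")
```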
FEM modeling strategy
Most authors use explicit finite element analyses to model the behavior of materials under impact loadings [25][26][27]. Indeed, explicit finite element analyses are well adapted to fast non-linear dynamic problems. However, the time step in such analyses has to be small enough to maintain the stability of the solution procedure. The time step is related to the density, the elastic modulus and the mesh length: the smaller the mesh length of the smallest element, the smaller the time step. Consequently, accurate simulations are very costly and may sometimes lead to unstable results [28]. In order to reduce the computation time and improve the accuracy of the parametric study, a static analysis using an energy equivalence has been adopted to model normal impact loadings [24]. A quasi-static approach can be used to model impact loading if and only if there is no evidence of dynamic effects, except on the indenter kinematics [15]. This hypothesis is valid if the impact time is much longer than the elastic wave propagation time in the contact region [29]. This last condition can be written as t_impact >> 2a/c (Eq. (4)), where t_impact is the impact time, a is the maximum contact radius and c the longitudinal elastic wave speed in the sample. In our experiment the contact duration is about 200 μs whereas the ratio 2a/c is about 100 ns. According to this theoretical result, a quasi-static model can be used instead of a dynamic model. However, the development of a reverse analysis requires the use of the most accurate model. For that purpose, Appendix A shows that the results obtained with the quasi-static finite element model are in very good agreement with those resulting from classical dynamic explicit finite element calculations.
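The order-of-magnitude check of Eq. (4) can be redone in a few lines; the steel properties below are handbook values assumed for illustration.

```python
# Hedged sketch of the quasi-static validity check (Eq. (4)): the impact
# duration must greatly exceed the elastic-wave transit time 2a/c.
from math import sqrt

E, rho = 210e9, 7800.0        # steel: Young's modulus (Pa) and density (kg/m^3)
c = sqrt(E / rho)             # longitudinal wave speed, ~5.2 km/s
t_impact = 200e-6             # ~200 us contact duration quoted in the text
a = 250e-6                    # contact radius (m), illustrative
print(t_impact / (2 * a / c)) # ratio ~2000 >> 1 -> quasi-static model is valid
```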
Quasi-static simulation
The quasi-static simulation is based on standard indentation models. Calculations have been performed with Systus/Sysweld [30] using axisymmetric elements and a large displacement/large strain option (updated Lagrangian formulation). The mesh is particularly refined near the contact zone, but also sufficiently wide to approximate a semi-infinite solid (Fig. 4). In order to ensure plastic incompressibility, four-node quadrilateral isoparametric elements with a selective reduced integration scheme are used in the plastically deformed area. The plastic flow is described via the von Mises stress. The loading is achieved by imposing a quasi-static displacement of the indenter as explained above. For each impact, the penetration of the ball is stopped when the impact energy T equals the sum of W_contact and W_device. The contact between the indenter and the work-piece has been assumed to be frictionless; it is shown in Appendix B that the sensitivity of the residual imprint morphology to the friction conditions is very weak. The parametric study has been performed assuming a perfectly elastic indenter with the properties of zirconia (E = 200,000 MPa and ν = 0.3).
Material model
To simplify finite element investigations on indentation, many authors [31,32,16] proposed to use Hollomon's law to describe the stress-strain curve of metals. This is a two-parameter power-law description σ = k ε^n [31], where k is a strength coefficient and n the strain hardening exponent. It has been shown that, for many pure and alloyed engineering materials, it gives a good approximation of the uniaxial stress-strain curve. It is to be noted that the stress sensitivity to the strain rate is not taken into account in Hollomon's law. According to Eq. (2), the nominal strain rate induced by the impact device belongs to the range [100-1000] s⁻¹. It is considered here that this strain rate range is sufficiently limited to assume that a time-independent stress-strain curve such as Hollomon's law will accurately describe the mechanical response of materials to such impact loadings. This hypothesis is valid for a large class of metals. Let us also note that this law is only a good approximation of most metal stress-strain curves: it cannot be used to model phenomena related to kinematic hardening and, more specifically, to cyclic loading. Collin et al. [33] have shown that the use of combined isotropic and kinematic hardening allows cyclic spherical indentation curves to be better reproduced. However, it appears that kinematic hardening does not have a first-order effect on the indentation curve. Consequently, for a first approach, kinematic hardening has not been considered in the present study.
Parametric study
The strategy developed in this paper is to determine the best stress-strain curve which reproduces the growth of the residual imprint at each impact for a given impact energy. In order to cover a large range of stress-strain curves and impact conditions, an extensive parametric study has been performed. The input parameters of the finite element analysis are the strength coefficient k, the strain hardening exponent n and the impact energy E_impact. The elastic properties of the impacted materials have been fixed to E = 200 GPa and ν = 0.3. For each finite element calculation, 10 loading/unloading cycles were simulated, which corresponds to 10 repeated impacts. For each numerical simulation corresponding to a given set of [k, n, E_impact], the growths of depth and radius with the number of impacts are obtained as shown in Fig. 5. It appears that these variations can be expressed by a three-parameter relation x(N) (Eq. (5)), where N is the number of impacts and x is a geometrical parameter of the residual imprint, i.e. radius r and/or depth d. Let us note that the choice of Eq. (5) was drawn from a large number of studies on indentation testing [31,32,16]. A_x, B_x, C_x are functions of [k, n, E_impact] and have been stored in two databases: one for the growth of depth (A_d, B_d, C_d) and the other for the growth of radius (A_r, B_r, C_r). To illustrate this, the function B_d(k,n) corresponding to an energy of 8 mJ is plotted in Fig. 6.
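The exact functional form of Eq. (5) is not recoverable from this text, so the sketch below uses a saturating exponential with the same parameter count (A_x, B_x, C_x) purely as a stand-in, to show how each simulated growth curve would be condensed into three stored coefficients. The synthetic data points are illustrative.

```python
# Hedged sketch of fitting the three-parameter growth law x(N) of Eq. (5).
# The form x(N) = A - B*exp(-C*N) is an assumed placeholder, not the paper's.
import numpy as np
from scipy.optimize import curve_fit

def growth_law(N, A, B, C):
    return A - B * np.exp(-C * N)

N = np.arange(1, 11)                                   # 10 repeated impacts
radius = np.array([215, 240, 255, 264, 270,            # imprint radius per impact
                   274, 277, 279, 280, 281])           # (um, synthetic values)
(A, B, C), _ = curve_fit(growth_law, N, radius, p0=(280, 100, 0.5))
print(f"A_r = {A:.1f} um, B_r = {B:.1f} um, C_r = {C:.2f}")
```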
Identification method
Similarly to the numerical results, the experimental results can be approximated by Eq. (5). Let us denote r_e(N) and d_e(N) the experimental values of the residual imprint radius and depth after N impacts at a given energy. For the impact energy investigated, the optimal values of k and n are determined by finding the minimum of the least-squares misfit functions I_x(k,n) (Eq. (6)), where x is a geometrical parameter of the residual imprint, i.e. radius r and/or depth d. This function can be related to the general concept of variance, or standard deviation, and was drawn from the classical least squares method. The minimization of the function I_r(k,n) (resp. I_d(k,n)) makes it possible to identify the couple (k,n) which best adjusts the computed variation of the radius (resp. depth) with the number of impacts to the experimental one. The minimization of these two functions may lead to different results, thus making it difficult to determine which one is better. For that purpose, we propose to minimize the product of these two functions, I(k,n), which should lead to a solution satisfying both the experimental variations of radius and depth. For instance, the function I(k,n) is plotted in Fig. 7 for ten impacts at 8 mJ on the AISI1045. The minimization of this function leads to the couple k = 1225 MPa and n = 0.026. Fig. 8 illustrates the good agreement between the growths of radius and depth measured experimentally and those resulting from this couple (k,n).
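Conceptually, the identification reduces to a two-dimensional grid search; the sketch below assumes `predict_radius`/`predict_depth` look-up functions built from the stored (A_x, B_x, C_x) databases, which are assumptions of this illustration rather than the paper's actual implementation.

```python
# Hedged sketch of the inverse identification: evaluate the least-squares
# misfits I_r and I_d on a (k, n) grid and minimize their product I = I_r * I_d.
import numpy as np

def identify(k_grid, n_grid, N, r_exp, d_exp, predict_radius, predict_depth):
    """Return the (k, n) couple minimizing I_r(k, n) * I_d(k, n)."""
    best, best_kn = np.inf, None
    for k in k_grid:
        for n in n_grid:
            I_r = np.sum((predict_radius(k, n, N) - r_exp) ** 2)
            I_d = np.sum((predict_depth(k, n, N) - d_exp) ** 2)
            if I_r * I_d < best:
                best, best_kn = I_r * I_d, (k, n)
    return best_kn

# e.g. identify(np.linspace(800e6, 2000e6, 61), np.linspace(0.0, 0.3, 31), ...)
```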
Set of solution
The remaining difficulty of the reverse analysis is the unicity of the solution. The minimization method used here leads to a single couple (k,n) as solution. However, a set of couples (k,n) giving values close to the global minimum can be identified. It is obvious that the identified couple (k,n) will strongly depend on the experimental uncertainties related to the measurement of the imprint radius and depth, and also on the energy loss due to the stiffness of the experimental device. Hence, the couple (k,n) identified by the present inverse method is to be taken with caution. Let us have a look at the stress-strain curves resulting from different couples (k,n) of the set identified above (Fig. 9). It appears that these stress-strain curves are very close over a given strain range, whose upper limit is very close to the spherical indentation representative strain given in Eq. (1).
Therefore, it indicates that the stress-strain curve identified with the inverse method is only valid over a given strain range, in agreement with ball indentation theory. For instance, the strain interval from Fig. 2 for the AISI1045 is 0.04-0.06, referring to Eq. (1). In order to identify a wider strain range, it is thus necessary to run the identification method using different impact energies so as to assess the behavior at smaller and/or larger strains.
Materials
Repeated impact tests have been conducted on an annealed AISI1045 steel (200 HV) and an AISI316L stainless steel (150 HV). The surface was mechanically polished with abrasive papers up to 1200 grit size. Surface finishing was achieved using diamond paste down to 3 μm grit before ultrasonic cleaning in ethanol. Two impact energies (8 mJ and 21 mJ) were used for each material. According to Eq. (2), the nominal strain rate induced by these impacts belongs to the range [100, 1000] s⁻¹. According to literature data [3,34], the stress difference for a strain of 0.1 within this strain rate range is lower than 30 MPa for the AISI1045 steel and lower than 60 MPa for the AISI316L.
For each residual imprint, the depth and radius have been measured using a 3D optical profilometer and an optical microscope. Before applying the present method, it is important to discuss the problem of the temperature rise during the impact, which could induce adiabatic deformation. Indeed, such a phenomenon is not taken into account in the finite element simulations, so it could alter the results of the inverse identification. Consequently, it is important to establish the conditions under which adiabatic deformation can be neglected. For that purpose, Johnson [15] proposed a non-dimensional parameter G for characterizing the behavior regime of metals under impact, combining the impact energy T, the dynamic yield stress Y_d and the indenter radius R. Between G = 10⁻³ and 10⁻¹, the impact can be reasonably described by quasi-static indentation theory and heating effects are thus negligible. In the present study, G is always lower than 10⁻² for the two metals tested (with Y_d = 800 MPa). Therefore, the identification procedure can be applied as such.
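The grouping of T, Y_d and R into the non-dimensional G is not written out in this text; on purely dimensional grounds the sketch below takes G = T/(Y_d R³), which should be treated as an assumption rather than Johnson's exact definition.

```python
# Hedged sketch of the impact-regime check. The grouping G = T / (Y_d * R^3)
# is a dimensional-analysis reconstruction, flagged here as an assumption.
def regime_parameter(T_J: float, Y_d_Pa: float, R_m: float) -> float:
    return T_J / (Y_d_Pa * R_m**3)

for T_mJ in (8.0, 21.0):
    G = regime_parameter(T_mJ * 1e-3, 800e6, 1e-3)
    print(f"T = {T_mJ} mJ -> G = {G:.1e}")
# Both values fall inside the 1e-3 to 1e-1 window quoted for the
# quasi-static (negligible heating) regime.
```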
Application to the AISI1045 steel
The impact energies used here are 8 mJ and 21 mJ. For each impact energy level, a stress-strain curve is deduced using the identification method developed above. These curves are plotted in Fig. 10. Let us note that the stress-strain curves identified with these two impact energy levels are very close, which points out the reliability of the method and the fact that the difference in strain rate between 8 mJ impacts and 21 mJ impacts is not significant for this steel. We propose here to define the stress-strain curve of the AISI1045 steel as the one which minimizes the distance between these two stress-strain curves. Here we obtain the following solution: k = 1344 MPa and n = 0.047. This curve is also plotted in Fig. 10. To check this result, finite element calculations of ten impacts have been conducted using the quasi-static model developed in this paper for these two impact energies. The numerical results are then compared to the experimental results in Fig. 11. It shows a very good agreement, which confirms the choice of the values of k and n.
The stress-strain curve obtained with this method corresponds to a local behavior of the surface, which means that the modifications induced by the different surface preparations and treatments are taken into account. To illustrate this difference, the stress-strain curve obtained at low strain rates and the one resulting from Hopkinson's bar testing (bulk behavior [3]) are also plotted in Fig. 10. This points out the need to identify appropriate surface stress-strain curves when the surface behavior is concerned.
Application to AISI316L
The same procedure has been used with the AISI316L stainless steel. The impact energies used here are 8 and 21 mJ. Here again the stress-strain curves identified with these two impact energy levels are very close (Fig. 12). The stress-strain curve obtained at low strain rates and the one resulting from Hopkinson's bar testing (bulk behavior [34]) are also plotted in Fig. 12. Contrary to the AISI1045 steel, the stress levels resulting from Hopkinson's bar testing and those resulting from the proposed method are not very different. However, it appears that the strain hardening is significant under repeated impacts while there is barely any in the bulk high strain rate test. Several phenomena may explain such a difference; for instance, it can be due to residual stresses or microstructure gradients. It is important to note that the main aim of this paper is to present a new material characterization method. Such questions will be addressed in future works dealing with the analysis of the response of different materials to such loadings.
Conclusions
The aim of this paper is to propose a new technique based on local micro-impact testing to characterize the stress-strain curves of metals. A small volume of material is thereby loaded at strain rates ([100-1000] s⁻¹) close to those induced by several mechanical surface treatments. One strong interest of this method lies in its simplicity and its low cost compared to other kinds of mechanical characterization, such as dynamic indentation testing, where both load and displacement are measured continuously during the test. Even if the accuracy of this last technique is better, the method presented in this paper appears to be the best compromise between precision, cost and rapidity in view of understanding the effect of surface treatments on surface mechanical properties at high strain rates. This method can be used on any kind of metal, but only if the loaded volume can be considered homogeneous. Consequently, this work needs to be extended to the identification of local properties of graded materials. Future works will focus on the effect of temperature on local mechanical properties. This last point is of great interest for some manufacturing processes such as machining or finishing, where small volumes are subjected to high strain rates and high temperature rises.
Appendix A. Comparison between quasi-static implicit and dynamic explicit simulations
The purpose of this section is to compare the results obtained with a dynamic explicit FEM with those obtained with the quasi-static implicit FEM described in this paper. Non-linear dynamic explicit simulations have been performed with Abaqus Explicit [35] using axisymmetric elements. Similarly to the quasi-static model, the mesh has been specifically refined near the contact zone (Fig. 13), and the sample is also sufficiently wide to approximate a semi-infinite solid. The mesh consists of quadrilateral elements with reduced integration and hourglass control. Contrary to the quasi-static calculations, the displacement of the ball depends on its initial speed and is not monitored during the impact.
The work-piece is an elastic-plastic solid with a linear strain hardening (300 MPa at 0% plastic strain and 800 MPa at 100% plastic strain). The ball radius is 1 mm and the contact is frictionless. Five balls impact the sample at 310 mm s⁻¹, which corresponds to an impact energy of 8 mJ. The resulting growths of depth and radius of the residual imprint are plotted in Fig. 14. It is shown that during the five impacts - i.e. the five loading-unloading cycles - the depths and radii of the residual imprint obtained with the two models are almost identical. This comparison shows that it is possible to simulate repeated normal impacts using a quasi-static approach with sufficient accuracy. Let us however note that such an approach is suitable because, in the present case study, according to Eq. (4), the impact velocity is low enough to neglect elastic wave propagation [15].
Appendix B. Influence of friction
The purpose of these calculations is to assess the effect of the friction conditions on the residual imprint morphology. Finite element calculations of micro-impact tests have been performed with a rough contact condition (infinite friction coefficient) and a frictionless contact condition between the indenter and the work-piece. The incident impact energy used was 8 mJ. The work-piece is an elastic-plastic solid with a linear strain hardening (300 MPa at 0% plastic strain and 800 MPa at 100% plastic strain). These mechanical properties have been chosen in order to induce a high enough increase in the radius of the residual imprint at each impact, which should highlight the effect of friction on the results [32]. The depth and radius of the residual imprint are compared in Fig. 15. From the results of these calculations, it can be concluded that the sensitivity of the identification method to the friction conditions is very weak. This is the reason why all the calculations have been performed using a frictionless contact.
[Figure caption: Comparison of results (depth and radius of the residual imprint) obtained with a quasi-static implicit FE calculation (Systus [30]) and a dynamic explicit FE calculation (Abaqus Explicit [35]).]
A T-junction device allowing for two simultaneous orthogonal views: application to bubble formation and break-up
A novel design for the classical microfluidic device known as the T-junction is proposed with the purpose of obtaining a simultaneous measurement of the in-plane velocity components in two orthogonal planes. A crucial feature of the proposed configuration is that all three velocity components are available along the intersection of the two planes. A dedicated optical set-up is developed to convey the two simultaneous views from the orthogonal planes onto the sensor of a single camera, where a compound image is formed showing one of the two views on either half. A commercial micro-particle image velocimetry system is used to measure the velocity in the two planes. Feeding the T-junction with a liquid continuous phase and a dispersed gas phase, the velocity is measured by phase averaging along the bubble formation and break-up process, showing the potential of the new design. The accuracy analysis shows that the error is dominated by a systematic component due to the thickness of the measurement slice. This error can be reduced by applying confocal microscopy to the present system, with no further modifications, so as to reduce the thickness of the measurement slab. Moreover, by sweeping the planes across the region of interest, a full three-dimensional reconstruction of the velocity field can be readily obtained. Finally, the simultaneous views offer the possibility to extract the principal curvatures of the bubble meniscus, thereby providing access to the Laplace pressure.
Introduction
In microfluidics, different methods are used to produce microbubbles, exploiting laminar flow conditions to allow for high reproducibility and high production rates at relatively low energy and mass transfer rates (Chen et al. 2014). Among the different devices conceived to manipulate fluids at the microscale (Whitesides and Stroock 2001), the T-junction is one of the most fundamental and widely used. It consists of two orthogonally intersecting microchannels. The device is used to produce bubbles or droplets of a secondary phase into the main continuous one. Since its original development in the first few years of the present century, see, e.g., Thorsen et al. (2001), it has been repeatedly addressed by a number of investigators (Tice et al. 2003; Günther et al. 2004; Garstecki et al. 2006; Menech et al. 2008). In micro electro-mechanical systems (MEMS) and lab-on-chip (LOC) devices, such a geometry can be exploited to obtain two-phase flows (Zhao and Middelberg 2011), where two partially miscible or immiscible fluids interact in the microfluidic network under well-defined and controlled conditions (Graaf et al. 2005; Garstecki et al. 2006; Fu and Ma 2015). The possible applications are countless. In the medical/biological field, LOCs featuring the T-junction configuration have been developed to deliver drugs at a precisely controlled rate, e.g., in Okushima et al. (2004) and Stride and Edirisinghe (2008). In micro- and nano-technology, T-shaped channels have been employed to produce regular-sized polymeric particles, typically used, e.g., in liquid chromatography (Ugelstad et al. 1983) or for flow measurements (PIV and microPIV) (Melling 1997). The generation of microbubbles is also relevant for fabricating porous biomaterials (Wang et al. 2011), for mixing enhancement in chemical processes (Günther et al. 2005; Kreutzer et al. 2005) and for different biomedical applications, e.g., to form liposomes (Swaay 2013).
Since two immiscible fluids are present, surface tension is dynamically important. On the other hand, given length scales of the order of hundreds of micrometres and velocities of tens of millimetres per second, inertial effects are usually negligible and viscous forces dominate. In these conditions, the Navier-Stokes equation can be linearised to describe a "creeping flow". However, although the familiar convective non-linearity of macroscopic fluid mechanics is ineffective, instability may still set in due to the competition between surface tension and viscous forces (Landau and Lifshits 1959; Taylor 1934). As a consequence, despite the simple geometry, the dynamics leading to bubble/droplet formation is far from trivial. In particular, the complex bubble shape and the strongly three-dimensional and time-dependent velocity and pressure distributions are rather difficult to measure directly when the characteristic size falls below the millimetric range. In simple cases, important information can be inferred from numerical simulations (Menech et al. 2008; Soh et al. 2016; Steijn et al. 2010; Amaya-Bower and Lee 2011).
However, experimental access is clearly needed both for the validation of numerical models and for the investigation of more complex cases when, e.g., rheologically complex fluids are involved. The experimental analysis has mainly focused on the global characterisation of the device, addressing the influence of global parameters such as characteristic length scale, flow rates and capillary number (Yamamoto and Ogata 2013; Wehking et al. 2014; Nunes et al. 2013; Fu et al. 2010). Attention has also been given to unsteady phenomena, as in the case of bubble break-up (Garstecki et al. 2006; Fu et al. 2011; Fu and Ma 2015).
The velocity field can be obtained using micro-particle image velocimetry (μPIV), a non-invasive technique that allows the velocity to be measured on planar sections (Steijn et al. 2007; Sinibaldi and Romano 2017). The accuracy is limited by the thickness of the slab over which the field is implicitly averaged. A better resolution may be achieved by coupling μPIV with confocal microscopy, to reduce the thickness of the measurement volume, see Lima et al. (2006) and Oishi et al. (2009) for applications to the T-junction configuration. Still working with planar sections, the third velocity component can be acquired using stereo μPIV (Lindken et al. 2006) at the price of a considerably more complex system. By sweeping the measurement plane across the flow domain, a complete three-dimensional field can be reconstructed from the two-dimensional fields.
The purpose of this paper is to illustrate the concept of a novel set-up allowing the simultaneous measurement of the two-dimensional velocity field in two orthogonal planes of the T-junction. The fabrication technique used to build the proof-of-concept is described in some detail, but the reader should be aware that alternative procedures and materials can be used. The process we adopted should only be considered as an inexpensive and easy way to manufacture the microchip.
Concerning the velocity measurement, the advantage of the new configuration is that a standard μPIV system with a single camera is used. Along the intersection of the two orthogonal planes, all three velocity components are simultaneously acquired. In principle, by letting the intersection line span the measurement volume, the entire three-dimensional field can be reconstructed. After validation in a simple straight channel, the new approach is applied to a T-junction to show how the flow field around the bubble and the bubble interface can be extracted during the generation and break-up phases by phase averaging the combined planar acquisitions. The new set-up is based on the idea of looking at the field from two orthogonal planes. The two views are conveyed to the camera and each one is captured on one half of the sensor, producing a compound image of the two sights. Standard PIV processing of the image allows the velocity to be extracted. The availability of the two orthogonal views may also be used to estimate the total curvature of the bubble/droplet, providing access to the Laplace pressure.
The paper is organised as follows. The experimental setup is described in Sect. 2 where a detailed description of the new device is provided, including manufacturing, measurement technique and validation. The main results are presented in Sect. 3 where velocity field and bubble configuration are discussed. Finally, Sect. 4 is devoted to conclusions and possible perspectives. The three appendices complement the discussion with a detailed description of the fabrication procedure and a theoretical model of the measurement process exploited to assess accuracy and main sources of error.
Experimental set-up
The proposed device has been characterised by μPIV and high-speed imaging, as schematised in Fig. 1. The latter is obtained by connecting a high-speed camera (Photron mini UX100, 1280 × 1024 px CMOS sensor, 4000 fps at full frame and 800,000 fps in reduced 1D mode) to an inverted microscope (Zeiss Observer Z1) equipped with a 5× objective. For the μPIV system (LaVision), the same inverted microscope is combined with a Nd:YAG double-pulsed laser (Litron NanoPIV) at 532 nm with a maximum pulse energy of 30 mJ and a pulse duration of 8 ns. The continuous phase is seeded with fluorescent polystyrene particles with nominal diameter D = 4.47 μm. The fluorescent light (λ = 607 nm) emitted by the microparticles is sent to a dual-frame CCD camera (Imager SX-4) that captures pairs of images (2360 × 1776 px) processed by the LaVision Davis software to provide the velocity field. The thickness of the volume where the velocity is measured, the depth of correlation of the PIV system, depends mainly (Meinhart et al. 2000) on the numerical aperture of the objective, NA = 0.16, the magnification, M = 5×, the light wavelength and the particle diameter D. Under the present conditions, the correlation depth is δc = 90 μm. The interrogation window for PIV analysis is 24 × 24 px with an overlap factor of 50%, where 1 px corresponds to 1.1 μm. For the present case, the polystyrene fluorescent particles (microparticles GmbH) are provided as a suspension in water with mass ratio r_m = m_p/m_s = 2.5%, where m_p and m_s are the mass of the particles and the mass of the solution, respectively. Given the mass density of polystyrene, ρ_p = 1.050 g/cm³, r_m corresponds to the volume ratio r_v ≃ 2.5 × 10⁻². Before using the particles as PIV tracers, the suspension is diluted in 2-propanol in the ratio 1:133. This corresponds to a volume fraction of polystyrene particles in the flowing solution of 2-propanol (plus water in traces) of φ ≃ 1.9 × 10⁻⁴, resulting in a probability, the occupancy fraction, f_o = 0.21 of finding a particle in the interrogation volume. In other words, one out of (almost) five acquired images contains a velocity signal per interrogation volume. On the other hand, the probability of finding more than one particle per interrogation volume is negligible (∼ 0.04), thereby considerably simplifying the error analysis. It should be mentioned that such a low particle concentration also enhances particle visibility (Olsen and Adrian 2000), a parameter of general relevance in PIV which is even more crucial in the present application where, as explained in the following section, two views with different optical paths need to be imaged.
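As a cross-check of the seeding figures quoted above, the following sketch reproduces the dilution arithmetic under a Poisson model for the particle count. The interrogation-volume geometry (24 × 24 px window at 1.1 μm/px times the 90 μm correlation depth) is our assumption about how the occupancy fraction was estimated.

```python
import math

# Seeding arithmetic for the muPIV tracer suspension (values from the text;
# the interrogation-volume geometry is an assumption).
D = 4.47e-6                                # particle diameter [m]
phi = 2.5e-2 / 133                         # volume fraction after 1:133 dilution
V_int = (24 * 1.1e-6) ** 2 * 90e-6         # interrogation volume [m^3]
V_p = math.pi / 6 * D ** 3                 # single-particle volume [m^3]

n = phi * V_int / V_p                      # mean particles per volume
p1 = 1 - math.exp(-n)                      # P(at least one particle), Poisson
p2 = 1 - math.exp(-n) * (1 + n)            # P(more than one particle)
print(f"phi = {phi:.2e}, n = {n:.2f}, P(>=1) = {p1:.2f}, P(>=2) = {p2:.3f}")
# -> phi ~ 1.9e-4 and P(>=1) ~ 0.22, close to the quoted f_o = 0.21;
#    P(>=2) ~ 0.03, the same order as the quoted 0.04.
```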
Device description and manufacturing
The microdevice consists of a T-junction with a main microchannel for the liquid phase and a secondary channel for the dispersed gas phase. The liquid phase is supplied by a syringe pump (PHD Ultra, Harvard Apparatus) with a flow rate in the range 1.56 pl/min to 216 ml/min. The gas is fed by a pressure pump (Dolomite Mitos P-Pump) which keeps the control chamber pressure constant. The pump guarantees a steady flow over a wide pressure range (0-10 bar) with excellent response time and accuracy.
We report here a brief description of the assembly procedure of the novel microdevice; the detailed fabrication process is reported in Appendix A. The novel chip is entirely built in glass, using calibrated slides to assemble the geometry sketched in Figs. 2 and 3. The main channel runs along one of the edges of the chip and offers a double optical access through the two orthogonal bottom and side slides joining at the bottom corner. The slides are glued together with a thin film of cyanoacrylate that easily penetrates the gap between the parts. The other elements are assembled using silicone glue, which allows for precise positioning of the calibrated spacers and top slide and prevents delamination due to the pressurised fluids. The adopted gluing procedure is an inexpensive and easy way to ensure proper bonding and perfect sealing in the investigated flow regimes. Alternative techniques, like thermal fusion bonding (Stjernström and Roeraade 1998; Jia et al. 2004), can be used if desired. Microchannels with rectangular cross-sections and different aspect ratios, heights and widths can be fabricated using calibrated glass spacers of different thickness, highlighted in yellow in the sketches of Figs. 2, 3 and 4. Dimensional control of the geometry is achieved using auxiliary calibrated spacers as templates to guarantee the channel dimensions and the parallelism/orthogonality of the walls. The inlet and outlet sections of the main channel and the inlet of the secondary channel are endowed with micro-cannulas for connection through Tygon tubes to the respective pumps and waste, tight sealing being ensured by cyanoacrylate glue.
The double optical access through orthogonal planes allows the bubble formation and break-up to be visualized in the two planar views, (x-y) and (z-x), respectively (see the sketch in Fig. 2 for the definition of the coordinate system). The new chip allows both images to be captured simultaneously with a single inverted microscope. This is achieved by integrating a 45° inclined mirror on the side slide, as shown in Fig. 4. In the present arrangement the (x-y) view, hereafter called the direct view, is directly captured by the objective. The (z-x) view is instead first reflected on the mirror (reflected view). To simultaneously focus both views, the two optical paths should be identical. This requires the interposition of an optical adapter, here a simple calibrated glass spacer. From the geometry of the optical system, the refractive indices of the materials and the desired positions of the focal planes, the required thickness of the glass spacer can be easily evaluated, see Appendix A for details.
A good quality micro-mirror is fabricated by evaporating a reflective metal on the surface obtained by cutting a 150 μm-thick glass slide. A Balzers evaporator was used to deposit, under vacuum, a 300 Å aluminium layer on top of a first 150 Å chromium layer that promotes adhesion. In the practical design of the microchip in its basic configuration proposed here, a critical aspect is combining the two conflicting requirements of a field of view wide enough to include the two orthogonal planes in a single image and a magnification sufficient for a precise PIV measurement. To this purpose, it is instrumental to minimize the thickness (150 μm in the present realization) of the side slide (Fig. 4) that is imaged in the composite view capturing the two orthogonal planes, Fig. 5.
Flow calibration
The operational regime of the T-junction depends on the flow rates of the fluids, the viscosity of the two phases, the surface tension and the dimensions of the channels (Garstecki et al. 2006). Figure 6 is a compilation of data (Nunes et al. 2013) showing the phase diagram with the different observed flow regimes, where the ordinate is the ratio of dispersed, Q_d (gas in our case), to continuous, Q_c (liquid), phase flow rates. The abscissa reports the capillary number, Ca = μQ_c/(Aσ), where μ is the dynamic viscosity of the continuous phase, σ the surface tension and A = hw the area of the main channel cross-section. Bubbles are formed in two regimes. Below a lower capillary number threshold, the T-junction works in the so-called squeezing regime. Above an upper threshold the bubble starts dripping (dripping regime). Between these two critical values of the capillary number a transition region, centred around Ca ≃ 0.01, separates the two stable modes of operation. The cross reported in Fig. 6 indicates the conditions used for the data analysis to be discussed in the following, corresponding to Q_d/Q_c = 1.3 and capillary number Ca = 6.19 × 10⁻³.
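The operating point can be checked with a few lines of arithmetic. In the sketch below, the viscosity and surface tension of 2-propanol are nominal textbook values (assumptions, not taken from the paper), and the capillary-number definition is the one given above.

```python
# Back-of-the-envelope check of the working point Ca = mu*Q_c/(A*sigma).
mu = 2.4e-3        # dynamic viscosity of 2-propanol [Pa s] (assumed)
sigma = 0.022      # surface tension 2-propanol/air [N/m] (assumed)
w = h = 800e-6     # main channel width and height [m]
A = w * h          # cross-section area [m^2]

Ca = 6.19e-3                     # operating point quoted above
Q_c = Ca * A * sigma / mu        # implied continuous-phase flow rate [m^3/s]
print(f"Q_c = {Q_c * 6e7:.2f} ml/min")   # ~2.2 ml/min with these properties
```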
To be certain of the operating conditions under which velocity measurements and bubble break-up visualisations are carried out, one should verify the viscosity of the liquid stream and the surface tension at the interface between the two phases, both of which could, in principle, be altered by the fluorescent particles needed for PIV. Given the high dilution of the suspension, the viscosity can be estimated by Einstein's relation μ_s = μ(1 + (5/2)φ), where μ_s is the viscosity of the suspension and φ is the volume fraction of the suspended particles. As expected, this leads to a negligible change in the viscosity. Surface tension modifications were instead directly quantified exploiting Jurin's law (Jurin 1718), by measuring the height of the meniscus in a capillary tube partially immersed in the suspension, and were also found to be negligible at the present particle concentration, as expected.
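A minimal numerical illustration of both checks, assuming nominal property values for 2-propanol and a hypothetical capillary-tube radius:

```python
import math

# Two quick property checks showing the seeding leaves the fluid essentially
# unchanged. All property values and the tube radius are assumptions.
phi = 1.9e-4                        # particle volume fraction (from Sect. 2)
visc_increase = 2.5 * phi           # Einstein relation: mu_s/mu - 1
print(f"viscosity increase: {100 * visc_increase:.3f} %")    # ~0.05 %

# Jurin's law h = 2*sigma*cos(theta)/(rho*g*r): the capillary rise measured
# with and without particles is compared to detect surface tension changes.
sigma, rho, g = 0.022, 786.0, 9.81  # 2-propanol values (assumed)
r = 0.5e-3                          # capillary tube radius [m] (hypothetical)
theta = 0.0                         # perfect wetting assumed
h_rise = 2 * sigma * math.cos(theta) / (rho * g * r)
print(f"reference capillary rise: {h_rise * 1e3:.1f} mm")    # ~11 mm
```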
Results
As a benchmark, results concerning a simple, straight microchannel featuring the same double view, with length L = 30 mm and square cross-section with side w = h = 800 μm, are illustrated first. Ideally, the PIV tracer particles should be as small as possible to allow a fast relaxation to the local fluid velocity. In the specific case, assuming a typical flow velocity U_bulk = Q/A = 52 mm/s, the Stokes number of the fluorescent tracers is St = ρ_p D² U_bulk/(18 μ h) = 3 × 10⁻⁵, showing that inertial effects are negligible. Velocity measurements at the microscale may be affected by significant fluctuations due to the Brownian motion of the probing particles. For a colloid of mass m_c the classical theory of Brownian motion predicts the rms (root mean square) fluctuation of a velocity component, say V_x^rms, to be V_x^rms = √(k_B T/m_c). In the present case, assuming a nominal particle diameter of 4.47 μm and a density comparable with water, random fluctuations of intensity V_x^rms = 0.29 mm/s are superimposed on the average particle velocity, which is the proxy used to estimate the fluid velocity. The particle random motion can be removed from the data by averaging over a collection of samples, under the assumption that the background velocity field is purely deterministic and repeatable, as expected given the low Reynolds number of the flow, in the present case Re = Q/(νh) = 13, where Q is the flow rate and ν the kinematic viscosity of the fluid (2-propanol). The maximum number of acquired samples is N_sample = 450, which gives a confidence interval on the estimated fluid velocity σ̄_x^Brownian = V_x^rms/√N_sample. The resulting figure should be compared with the bulk velocity U_bulk = 52 mm/s to give σ̄_x^Brownian/U_bulk ≃ 2.6 × 10⁻⁴. The consequent inaccuracy due to Brownian fluctuations is one order of magnitude smaller than that due to the other error sources discussed below.
(Caption of Fig. 5: Simultaneous view of the bubble forming at the junction. Raw data correspond to a snapshot taken with the fast camera at 3200 fps. The direct/reflected view is shown in the lower/upper part of the image, respectively. The optical adapter allows both images to be in focus. The different shading is due to the lighting system: microscope lamp in the direct view and auxiliary LED in the reflected view. Caption of Fig. 6, data after Nunes et al. (2013): the cross indicates the working point used to collect the data in the novel, double-view chip.)
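The tracer-fidelity numbers quoted above (Stokes number, Brownian rms, Reynolds number and the resulting confidence interval) can be reproduced directly. In this sketch the temperature and the 2-propanol viscosity and density are assumed nominal values:

```python
import math

# Reproduces the tracer-fidelity estimates quoted in the text.
kB, T = 1.380649e-23, 293.0          # Boltzmann constant [J/K], temperature [K]
rho_p, D = 1050.0, 4.47e-6           # polystyrene density [kg/m^3], diameter [m]
mu, rho_f = 2.4e-3, 786.0            # 2-propanol viscosity, density (assumed)
h, U_bulk = 800e-6, 52e-3            # channel height [m], bulk velocity [m/s]

m_c = rho_p * math.pi / 6 * D**3             # colloid mass
V_rms = math.sqrt(kB * T / m_c)              # Brownian rms velocity
St = rho_p * D**2 * U_bulk / (18 * mu * h)   # Stokes number
Re = U_bulk * h / (mu / rho_f)               # channel Reynolds number

N_sample = 450
ci = V_rms / math.sqrt(N_sample)             # Brownian confidence interval
print(f"V_rms = {V_rms*1e3:.2f} mm/s, St = {St:.1e}, Re = {Re:.0f}, "
      f"ci/U_bulk = {ci/U_bulk:.1e}")
# -> ~0.29 mm/s, ~3e-5, ~14 (the text quotes 13), ~2.6e-4
```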
The left panel of Fig. 7 shows the axial velocity profiles as obtained from the direct view (x-y) in the channel symmetry plane, z = h/2. The three curves correspond to different flow rates, Q = 1, 2 and 3 ml/min, and are normalised by the corresponding bulk velocity. On the right panel, Fig. 7 shows the profiles in the orthogonal view (x-z, reflected image), now in the symmetry plane y = w/2. The profiles, estimated by averaging the velocity over different samples, collapse within the provided confidence interval. For each flow rate, Fig. 8 shows the behaviour of the confidence interval, σ̄_x(N_sample) = V_x^rms/√N_sample, as a function of the number of samples, where V_x^rms is the velocity rms obtained from N_sample data. The error bars reported on the velocity profiles of Fig. 7 correspond to N_sample = 450. The effective number of samples follows from the acquisition of N_frame ≃ 1500 image couples with an occupancy fraction of each interrogation window f_o = 0.21. The value of f_o was estimated as explained in Sect. 2 and is confirmed by analysing the raw images from the acquisition system. The inset of the figure shows in green an example of the confidence interval distribution in the transversal direction. The red curve is a theoretical prediction based on a model of the measurement process to be illustrated below. Figure 9 provides a visual impression of the field, showing the two measurement planes and the corresponding in-plane components of the velocity. The two measurement planes intersect along a straight line parallel to the channel axis. The position of the intersection line is determined by maximising the correlation between the axial velocities U_x^direct and U_x^reflected, where the superscripts denote the plane in which the velocity is measured. For the present case of a straight channel, both transversal velocity components, U_y and U_z, vanish along the intersection.
The positions of the two orthogonal planes can be independently varied by changing the objective focus and the thickness of the optical path adapter, with an accuracy related to the correlation depth. Since the coordinates y_0, z_0 of the intersection line can span the entire channel section, the full three-dimensional field can, in principle, be reconstructed.
The left panel of Fig. 10 directly compares the velocity profiles across the two orthogonal views (red and blue lines corresponding to the direct view, U_x(y), and the reflected view, U_x(z), respectively). In a square channel, the velocity profiles on the two orthogonal mid-planes should be identical by symmetry. The comparison confirms that the velocity is correctly reconstructed in both views within the accuracy of the measurement. The inset reports U_x vs the axial coordinate x along the intersection line of the two views, illustrating the possibility to simultaneously measure all three velocity components.
The right panel of Fig. 10 provides the comparison between the measured velocity profile and the prediction of a theoretical model of the measurement process. The model is based on the analytical solution for the flow in a straight channel with square cross-section described in detail in Appendix B. In summary, the PIV signal consists of the fluorescence light emitted by the tracer particles that happen to transit through the region imaged by the optical system. The measurement in, say, the x-y plane at nominal spanwise location z_0 involves the slab z_0 − δc/2 ≤ z ≤ z_0 + δc/2. Since tracer particles randomly reach the sensitive region (i.e., they are independently and uniformly distributed across the correlation depth), the estimated velocity is in fact the spatial average of the probe velocities across the slab. Apart from Brownian fluctuations, the probe velocity fluctuations are basically a function of the probe position and may be explicitly determined from the analytical velocity field and the statistical model reported in Appendix C, see Lima et al. (2006) for a related procedure. Figure 11 illustrates the results of the model. The left panel is the axial velocity distribution across the channel section normalised by the bulk velocity. The right panel shows three profiles at three different spanwise positions. For each case two curves are shown, the (modelled) experimental estimate of the velocity together with the corresponding statistical error bars and the exact velocity at the nominal position. As the model suggests, the main source of error in the conditions of the experiment is the averaging of the velocity across the correlation depth. This systematic error component dominates over the statistical error incurred in evaluating the average with a finite sample. As already discussed, the Brownian fluctuation is even smaller, showing that the depth of correlation is the crucial parameter determining the accuracy. As apparent in Fig. 11, the profile taken in the middle of the channel is perfectly captured by the model, see the red curves in the right panel. Both accurately correspond to the actual measurement, as shown in the right panel of Fig. 10. Moving towards the side of the channel, the systematic component of the error tends to grow, due to the steeper variation of velocity with the position of the probing particle, which makes the average across the slab significantly different from the nominal local value.
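A minimal numerical sketch of this measurement model follows. It uses the standard Fourier-series solution for pressure-driven flow in a rectangular duct (equivalent to, though written differently from, the superposition derived in Appendix B; the amplitude is left arbitrary since only relative errors matter) and compares the slab average across the correlation depth with the local velocity at the nominal plane.

```python
import numpy as np

# Slab-averaging error model: PIV estimate = average across the correlation
# depth, compared with the exact local profile.
w = h = 800e-6              # square cross-section [m]
dc = 90e-6                  # correlation depth [m]

def u_x(y, z, n_terms=50):
    """Axial velocity in a square duct (Fourier series), arbitrary amplitude."""
    s = np.zeros_like(np.broadcast_arrays(y, z)[0], dtype=float)
    for n in range(1, 2 * n_terms, 2):          # odd harmonics only
        s += (1 - np.cosh(n * np.pi * (y - w/2) / h)
              / np.cosh(n * np.pi * w / (2*h))) * np.sin(n * np.pi * z / h) / n**3
    return s

z = np.linspace(1e-6, h - 1e-6, 200)
for y0 in (0.50 * w, 0.25 * w, 0.10 * w):       # nominal measurement planes
    ys = np.linspace(y0 - dc/2, y0 + dc/2, 41)  # probe positions in the slab
    u_meas = u_x(ys[:, None], z[None, :]).mean(axis=0)   # slab average
    u_true = u_x(np.full_like(z, y0), z)                 # exact local profile
    bias = np.max(np.abs(u_meas - u_true)) / u_true.max()
    print(f"y0/w = {y0/w:.2f}: max systematic bias ~ {100 * bias:.2f} %")
# The bias grows towards the wall, where the velocity varies more steeply
# across the slab, reproducing the trend of Fig. 11.
```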
Let us now focus on the T-junction configuration described in Sect. 2. The left panel of Fig. 12 shows the velocity field in the two orthogonal views corresponding to the planes y_0 = 0.16 w and z_0 = 0.75 h. In both planes, the intersection of the plane with the bubble meniscus can be readily identified. The fields are reconstructed by phase averaging the PIV frames corresponding to the same configuration of the bubble. Along the common intersection line, all three velocity components are measured, providing a time-dependent (depending on the bubble position), three-component reconstruction. As for the case of the straight channel, by changing the objective focus and the thickness of the optical adapter the entire flow domain can, in principle, be spanned for a complete three-dimensional reconstruction.
At variance with the simpler case of the straight channel, for the T-junction configuration an a priori estimate of the confidence interval is much harder to achieve. The reason is that, due to the geometrically complex and time-dependent flow domain, it is difficult to accurately evaluate the occupancy fraction of the interrogation volume. To circumvent this difficulty, the occupancy fraction was directly evaluated from the raw data. Its typical value turned out to be f_o ≃ 0.34, which yields N_sample ≃ 100 out of 300 image couples for each bubble configuration. The error bars reported in the velocity profiles shown below are based on this information.
The right panel of Fig. 12 consists of the superposition of a raw image taken from the PIV system with three axial velocity profiles at selected stations: one before the bubble (red curve), one in the middle of the bubble (green curve) and the third (blue) behind the bubble. The top/bottom part of the image corresponds to the reflected/direct view, respectively. The axial velocity distribution along the common intersection line is plotted in the left panel of Fig. 13, where the red curve denotes data from the direct (x-y) view and the blue one corresponds to the reflected (z-x) view. As for the case of the rectilinear channel, the confidence interval is σ̄_x = V_x^rms/√N_sample, where V_x^rms is the rms of the velocity signal and N_sample ≃ 100 is the number of samples, related to the total number of acquired frame couples by N_sample = f_o N_frame. The values measured at corresponding positions from the two views are consistent with the expected statistical error, now significantly larger than in the straight channel case due to the reduced number of frames available for a given bubble configuration; the statistical error estimated by the error bars nonetheless remains smaller than the systematic error due to averaging the velocity across the finite-thickness slice. The right panel of the figure shows the transverse velocity components along the intersection, with the red line showing U_y from the direct (x-y) view and the blue line providing U_z from the reflected (z-x) view. Figure 14 provides the spatial distribution of the in-plane velocity vectors together with the trace of the bubble in the two planes. In the present conditions, the bubbles form with frequency f = 3 Hz and the adopted frame acquisition rate is 15 Hz, corresponding to five images per cycle. In particular, the three plots correspond to dimensionless time instants t = 0, t = 4.35 and t = 8.7, with the reference time scale given by T = h/U_bulk = 15.4 ms.
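As a hedged stand-in for the phase averaging, the sketch below groups frames by their phase index: with bubbles forming at 3 Hz and frames acquired at 15 Hz, every fifth frame sees the same bubble configuration, provided the acquisition stays locked to the formation cycle. The paper groups frames by bubble configuration; the frame-modulo grouping used here is the simplest illustrative approximation.

```python
import numpy as np

# Phase averaging by frame index, assuming acquisition locked to the
# 3 Hz formation cycle sampled at 15 Hz (5 frames per cycle).
def phase_average(fields: np.ndarray, frames_per_cycle: int = 5) -> np.ndarray:
    """fields: (n_frames, ny, nx) instantaneous fields -> phase-averaged set."""
    phases = np.arange(len(fields)) % frames_per_cycle
    return np.stack([fields[phases == p].mean(axis=0)
                     for p in range(frames_per_cycle)])

# e.g. 300 synthetic frames -> 5 phase-averaged fields, ~60 samples per phase
fields = np.random.default_rng(4).standard_normal((300, 32, 32))
print(phase_average(fields).shape)        # (5, 32, 32)
```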
Conclusions
An important issue in microfluidics is accessing the three-dimensional structure of the velocity field. For the specific case of a T-junction, the three-dimensional field can be reconstructed using stereo μPIV as shown in Lindken et al. (2006). Here we provided a proof of principle of a simpler approach which is able to reconstruct the three-dimensional, three-component velocity using a standard μPIV set-up. A microdevice has been conceived to allow the simultaneous visualisation of the flow in a T-junction along two orthogonal planes. As a specific feature, the novel device allows both views to be captured on a single camera sensor. Half the image corresponds to the direct view, the (x-y) plane. The other half captures the orthogonal view in the (z-x) plane, which is conveyed to the camera objective through an inclined mirror and a suitable optical adapter. The device is used together with a traditional, single-camera PIV system able to measure the planar velocity components on the two planes. Besides providing simultaneous views and the related velocity components in the two orthogonal planes, all three velocity components are measured along the common intersection line. By adjusting the objective focus and the thickness of the optical adapter, the intersection line can span the three-dimensional flow volume, allowing, in principle, the entire 3D field to be reconstructed.
The analysed data were obtained for channels with typical size of 800 μm using 4.47 μm fluorescent seeding particles. In these conditions, the thermally induced Brownian motion of the probes is negligible. The main source of error is found to be related to the PIV correlation depth. The adopted fabrication technique allows the channel size to be reduced down to 100 μm, the limit being set by the assembly precision.
For a 100 μm channel, an increased magnification should be used together with smaller tracer particles, e.g., M = 20× with 1 μm tracer particle diameter. In these conditions, the correlation depth is δc = 10 μm, such that δc/h is almost the same as in the case of the 800 μm channel explicitly studied here. Assuming the same supply pressure, the expected velocity in the small channel is of the order of 1 mm/s. On the other hand, the Brownian fluctuations of 1 μm particles at the same ambient temperature turn out to be 2.7 mm/s, which is comparable with the fluid velocity. As a consequence, the number of samples needs to be substantially increased to maintain the same statistical accuracy. In any case, as for the larger channel, the main source of error is associated with the correlation depth. In these conditions, a possible strategy to increase the measurement accuracy would be to resort to confocal microscopy (Oishi et al. 2009), which allows correlation depths of the order of δc ≃ 5 μm to be reached. One of the interesting aspects of the present device is that it can be easily used in conjunction with confocal μPIV, substantially increasing the potential of this powerful technique. A second interesting perspective emerges in the opposite limit of a (relatively) large depth of focus in the context of particle tracking. In this case, the two orthogonal views would allow the seeding particles to be tracked in their three-dimensional motion through the measurement volume (Yoon and Kim 2006). A third field of possible application is the benchmarking of other, less direct methods to obtain the third velocity component, e.g., defocusing μPIV (Barnkob et al. 2015). Finally, using image analysis techniques, the present set-up can be used to directly measure the curvature radii of the bubbles, simultaneously with the acquisition of the velocity field.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, duplication, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
Fabrication procedure
The description of the fabrication procedure is reported step by step here.
- The first step consists in obtaining an L-shaped assembly template (see Fig. 15a). It is built by gluing together two orthogonal glass slides. It is important to achieve a well-cut squaring of the glass slides. Optical microscopy is used to verify the orthogonality of the two walls.
- We place the first glass of the chip in the lower part of the L-shaped mold (bottom glass in Fig. 15a). This slide will form the bottom of the channels. We make sure that the glass is in perfect contact with the orthogonal wall of the L-shaped mold (see Fig. 15a).
- To control the widths of the main and secondary channels of the T-junction, we use a T-shaped mold (in red in Fig. 15b) with two orthogonal slides of the desired size. The T-shaped mold is moved against the vertical wall of the L-shaped template and is pressed on the bottom glass to cancel the play.
- The next step is the positioning of two slides of the required thickness (calibrated-thickness glasses in Fig. 15b) on the substrate to define the height of the channels and their position in the x-y plane. A film of silicone glue, a few microns thick, is spread over the lower side of the slides by spin coating. The calibrated glasses are moved along the substrate until they come into perfect contact with the T-shaped mold.
- The T-shaped mold is removed (see Fig. 15c) to form the main and secondary channels of the T-junction (see Fig. 15d).
- A further slide, glued to the calibrated-thickness glasses with silicone glue, is used to cover the channels (top glass in Fig. 15d). The top and bottom glasses, though their thickness is not crucial, should be sufficiently thick to allow tight gluing of the lateral side, as explained in the next step. At the end of this step the assembly consists of a glass wafer made of bottom, top and intermediate calibrated glasses (see Fig. 15e).
- A fifth glass slide is now interposed between the glass wafer and the vertical wall of the L-shaped mold to close the main channel on the x-z plane (see Fig. 15e). Cyanoacrylate gluing of the side glass is performed in two phases. First, the glue is applied along the contact edge between the top and side glasses. Then, after removing the L-shaped mold, the glue is placed along the lower contact edge. The glue spontaneously spreads on glass, filling the interstices and ensuring tight bonding of the assembly with a thin film.
- Metal micro-cannulas are inserted for fluid supply (see Fig. 15f).
- Once the chip is completed, the inclined mirror is introduced on the free side of the side glass to deflect the x-z view towards the camera sensor (see Fig. 16). For the present realization a 2 mm × 20 mm mirror was used to allow imaging of the full width of the channel over a large axial extent. Alignment between mirror and microchannel is verified under an optical microscope by spanning along the mirror-slide contact edge.
- The optical path of the reflected view is corrected by interposing a calibrated glass thickness (optical path adapter) between the mirror and the camera sensor (see Fig. 16). The alignment of this adapter is guaranteed by two support brackets.

In the paper we analyse two specific cases. The first one concerns the straight channel imaged on the two orthogonal symmetry planes. In the second one, the T-junction, both focal planes of the direct and reflected views are displaced off the symmetry planes. Each configuration needs a properly tailored glass spacer to compensate for the optical path. The procedure to determine the thickness of the glass spacer is now explained in detail.
The working distance of the direct view is given by the objective characteristics, while the working distance of the reflected view has to be properly estimated. The glass spacer, with its own refractive index, modifies the convergence angle of the light path from the original value θ (in the absence of the spacer) to the new value θ′. The left panel of Fig. 17 shows an idealized configuration. As a consequence of the interposition of the glass spacer, a light ray is displaced by ΔS parallel to the focal plane, inducing a translation ΔX of the focal plane along the optical axis. ΔS follows directly from Snell's law, sin θ/sin θ′ = n₂/n₁, with n₁ and n₂ the refractive indices of air and glass, respectively. The axial displacement of the focal plane is then expressed by Eqs. (1, 2), which can be used to determine the thickness X_s of the glass spacer able to displace the focal plane by a given amount ΔX, X_s = ΔX tan(θ)/(tan(θ) − tan(θ′)). In fact, the light ray coming from the reflected view encounters an additional glass layer of thickness S₁ (with the same refractive index), while the direct view goes through the bottom glass of thickness S₂, see the right panel of Fig. 17. Taking into account the additional shifts and the changes of propagation medium (air, glass and liquid), the expression for the glass spacer thickness Δ needed to displace the focal plane of the reflected view by ΔX relative to the direct view can be evaluated by equating the light paths of the direct view, l_dir, and of the reflected view, l_ref, Eq. (3), where WD is the objective working distance, Δy and Δz are the shifts of the imaged planes with respect to the symmetry planes of the channel, and sin θ′/sin θ″ = n₃/n₂ with n₃ the liquid refractive index. Consequently, setting l_dir = l_ref, the spacer thickness Δ follows, Eq. (4). In case the two symmetry planes (y = 0 and z = 0) of the square (w = h) channel are imaged, the spacer thickness reduces to

Δ_sym = [S₂ − S₁ + w tan(θ′)] / [tan(θ) − tan(θ′)].  (6)

Assuming n₁ = 1, n₂ = 1.55, n₃ = 1.36 (2-propanol) and θ = 0.247 rad, the glass thickness is Δ_sym = 3.3 mm. When the focal planes of the direct and reflected views are displaced off the symmetry planes by Δy = −272 μm and Δz = 200 μm, respectively, the thickness is Δ = 3.15 mm.
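The focal-plane bookkeeping can be illustrated numerically. The sketch below uses Snell's law and the displacement relation quoted above; it illustrates the single-interface relation X_s = ΔX tan(θ)/(tan(θ) − tan(θ′)) rather than re-deriving Eqs. (3)-(6), and which angle the quoted 0.247 rad denotes is our assumption.

```python
import math

# Focal-plane shift produced by a plane glass spacer, from the relation
# X_s = DX * tan(theta) / (tan(theta) - tan(theta')) quoted in the text.
n1, n2 = 1.0, 1.55                 # air, glass refractive indices
theta = 0.247                      # convergence angle [rad] (from the text)
theta_p = math.asin(math.sin(theta) * n1 / n2)   # refracted angle in glass

def focal_shift(X_s: float) -> float:
    """Axial focal-plane displacement produced by a spacer of thickness X_s."""
    return X_s * (math.tan(theta) - math.tan(theta_p)) / math.tan(theta)

for X_s in (3.3e-3, 3.15e-3):      # the two spacer thicknesses quoted above
    print(f"X_s = {X_s*1e3:.2f} mm -> focal shift = {focal_shift(X_s)*1e3:.2f} mm")
```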
Velocity field in a square duct
The Stokes equations in a rectangular duct (0 ≤ y ≤ w, 0 ≤ z ≤ h), infinitely extended in the axial direction x, reduce to solving the equation

μ∇²u_x = dp/dx|₀,  (7)

with the no-slip boundary conditions

u_x(0, z) = u_x(w, z) = u_x(y, 0) = u_x(y, h) = 0.  (8)

It is easy to show that the cross-flow velocity (u_y, u_z) is identically zero. In the above equation dp/dx|₀ is the constant pressure gradient, u_x = u_x(y, z) is the axial velocity and ∇² = ∂²/∂y² + ∂²/∂z². The solution of the above problem may be found, e.g., in Bruus (2008). Here we obtain the solution in a slightly different form. We look for the solution in the form u_x = u_p + u_h, where u_p is a particular solution satisfying Eq. (7). A possible choice for u_p is the Hagen-Poiseuille solution for a circular pipe with radius a = √(h² + w²)/2, u_p = −1/(4μ) dp/dx|₀ (a² − r²), extended to the rectangle with shorter side of length 2a circumscribed to the circle. u_h satisfies the Laplace equation with boundary conditions cancelling u_p on the four walls. By linearity, the superposition of u_p and u_h satisfies the original differential equation and the corresponding boundary conditions. We may decompose u_h as the sum of four harmonic contributions, one for each side of the rectangle; the contribution with non-vanishing data on the top side is u_t = ∑_{n=1}^∞ a_n sin(nπy/w) sinh(nπz/w), with coefficients a_n fixed by the boundary data.
The contribution from the bottom side, u_b, is obtained from u_t by the substitution z → h − z, u_b = ∑_{n=1}^∞ a_n sin(nπy/w) sinh[nπ(h − z)/w], with the same coefficients a_n as before. The solution with non-vanishing data on the right side of the rectangle is instead obtained from u_t with the substitutions z → y, y → z, w → h, h → w, u_r = ∑_{n=1}^∞ b_n sin(nπz/h) sinh(nπy/h), with coefficients b_n fixed by the corresponding boundary data. Finally, u_l follows as u_l = ∑_{n=1}^∞ b_n sin(nπz/h) sinh[nπ(w − y)/h], with the same b_n coefficients. Eventually, the solution of the original problem (7, 8) shown in the left panel of Fig. 11 is u_x = u_p + u_t + u_b + u_r + u_l, with flow rate Q = ∫₀^w ∫₀^h u_x(y, z) dy dz.

Since each tracer particle is carried along with the local fluid velocity, u_x(y, z), the probability distribution function (pdf) of the velocity of the particles contributing to the acquired signal at the nominal position y₀ follows from the uniform distribution of the particle positions across the slab y₀ − δc/2 ≤ y ≤ y₀ + δc/2. The average velocity of the particles in the slice centred at y₀ is then

⟨v_x⟩(y₀, z) = (1/δc) ∫ u_x(y, z) dy,

with the integral extended over the slab. Analogously, the particle velocity variance is

⟨v′_x²⟩(y₀, z) = (1/δc) ∫ [u_x(y, z) − ⟨v_x⟩]² dy.

The estimator of the fluid velocity at the nominal position (y₀, z) corresponds to the arithmetic average of the velocities v_x^(k), k = 1, …, N, of the N tracer particles measured by PIV, ū(y₀, z) = (1/N) ∑_{k=1}^N v_x^(k). Under the assumption of independently distributed particle positions, from the central limit theorem the estimator is seen to fluctuate with variance σ̄_x² = σ_x²/N, where σ_x = √⟨v′_x²⟩. The analytic solution reported above for the square duct allows both the mean value ⟨v_x⟩ and the variance ⟨v′_x²⟩ to be obtained, i.e., the statistical error bar e = 3σ̄_x(y₀, z) on the estimated velocity ū(y₀, z). Moreover, the estimate with N samples can be compared with the true solution u_x(y₀, z). The results in the right panel of Fig. 11 show that, with increasing number of samples, the estimated velocity statistically converges toward the average across the slice. The statistical fluctuation becomes negligible and the error in reconstructing the velocity is dominated by the systematic difference between the local fluid velocity and its average across the finite-thickness slice.

| 9,928.8 | 2018-07-30T00:00:00.000 | ["Engineering", "Physics"] |
Utilizing PLC Data for Workpiece Flaw Detection in Machine Tools
Workpiece quality is one of the essential goals of every production company, because it is strongly related to customer satisfaction and therefore to sales revenue and business success. While in single-part production every product can be checked for quality issues, this is inefficient in higher-volume series production. Hence, random sampling is a widespread method for quality control in high-volume production. The problem with this method is that it only uses samples: defective workpieces in between the samples are not recognized and are therefore processed further. In this chapter, an approach for monitoring systems in higher-volume series production on machine tools is introduced to implement 100% quality monitoring. It utilizes the sensor network already implemented in the machine tool, which delivers its data to the machine PLC, to measure quality indirectly.
Introduction
Digitization is rapidly changing our entire economy and our society. The number of connected devices, like IT-infrastructure-connected objects, sensors and programmable logic controllers (PLCs) [1], is currently increasing rapidly [2]. By 2020, the number of devices connected to the Internet is expected to rise to eight billion [3]. This applies not only to the areas of household, traffic and mobility, infrastructure and buildings, but also to industry. Data is currently considered the most valuable resource [4] and is even called the "gold of the future" [5].
Larger industrial companies have recognized the value of their own production data and are using different analytic methods to improve the production process. Despite this, small- and medium-sized enterprises still have great difficulties in collecting and utilizing production data.
Internal machine data such as PLC and bus data can be used not only for process control, as is usually the case, but also for condition and quality monitoring, as well as energy efficiency [1]. In addition, data analysis can prevent high economic losses due to late detection of workpiece flaws. A diagnosis of insufficient workpiece quality on a machine tool can be categorized into the following five groups [6]:
• Observations by the user of the machine,
• Diagnosis by measuring and testing equipment,
• Diagnosis by testing workpieces,
• Diagnosis by additional sensors,
• Model-based or signal-analytical diagnosis.
Conventional quality control systems can only be applied to randomly chosen samples and are cost-intensive and error-prone. As a consequence, monitoring systems are increasingly automated. For these monitoring systems, only the diagnosis with additional sensors as well as model-based or signal-analytical diagnosis is utilizable [6]. These types of diagnosis allow an early detection of workpiece flaws as well as an identification of their causes [7]. As scrap is reduced, the safety, reliability and profitability of products are improved as well [8].
As described in [9], monitoring systems are divided into direct and indirect measuring systems, depending on whether the parameters to be monitored are observed directly (e.g. the cutting force in machine tools) or indirectly via correlated data. For example, measurements of the spindle's power consumption can be correlated to the cutting forces [9-11]. Because environmental influences and the usage of cooling lubricant impede direct measurements in machine tools, indirect monitoring systems are usually used. As the costs for additional sensors have to be minimized [12], this chapter focuses on signal-analysis diagnostics using machine-internal sensors, such as those used within the drives.
Furthermore, there are some model-based prediction methods for surface roughness in machining processes [13, 14], but there are no known methods for identifying typical workpiece flaws from pre-processing steps like moulding or forging. Typical flaws of these pre-processes are listed in [15, 16]. Based on previous work, which showed that workpiece flaws can be detected through drive-based PLC data [1], this chapter outlines an automated method for monitoring workpiece quality using machine drive-based signals in machine tools. Because the analysed signals are sensitive to tool wear, this aspect is examined in the second part of this chapter.
Automated Quality Monitoring Using Drive-Based Data
In order to demonstrate the automated workpiece flaw detection method, a test series, which is subdivided into preliminary and main tests, is examined in this chapter. The preliminary tests investigate the face turning of a solid cylinder for various cutting parameters and are used to develop a statistical concept for automated flaw diagnosis. Within the main tests, the developed concept is applied to a real production process, in which different machining steps of a control disc for a hydraulic pump, manufactured at the ETA Factory, a model factory for energy- and resource-efficient production, are analysed. Table 10.1 shows the relevant processing steps and parameters of the control disc manufacturing process examined in the main tests.
Information Flow and Evaluation Process
To control the movements of the axes, the actual values of the machine drives are constantly measured at the frequency inverter and transmitted to the PLC via the fast automation bus Sercos. The signal flow and the evaluation workflow are shown in Fig. 10.1. The drive-based signals are read out at a sampling rate of 2 ms with a software called MTX efficiency workbench (EWB). This data is buffered until the end of the measurement. In parallel, the Data Analytic Server (DAS, formerly Generic Data Server), a workflow-based software framework by Bosch Rexroth introduced in [1], detects the end of the recording with a filewatch trigger and executes a workflow consisting of the CNC-DataProvider and a computational activity. The CNC-DataProvider imports the buffered EWB data into the database used by the DAS, called MongoDB. In the next step, the calculate activity evaluates this data using a precompiled MATLAB DLL which contains the necessary MATLAB functions for workpiece flaw diagnosis and writes the diagnostic results back into the MongoDB. This workflow-based evaluation method enables fully automated workpiece flaw analysis in parallel to production.
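The following is a generic illustration of this filewatch-trigger pattern, not the Bosch Rexroth DAS or EWB API: the buffer directory and evaluate_recording() are hypothetical stand-ins for the CNC-DataProvider import, the compiled MATLAB diagnosis and the MongoDB write-back.

```python
import time
from pathlib import Path

# Generic polling file watch: react when a new recording appears.
WATCH_DIR = Path("ewb_buffer")        # hypothetical buffer directory
WATCH_DIR.mkdir(exist_ok=True)

def evaluate_recording(path: Path) -> dict:
    """Placeholder for the flaw diagnosis executed on a finished recording."""
    return {"file": path.name, "flaws": []}

seen: set[Path] = set()
while True:                                   # run as a background watcher
    for f in WATCH_DIR.glob("*.csv"):         # a recording has been buffered
        if f not in seen:
            seen.add(f)
            result = evaluate_recording(f)    # diagnose ...
            print("stored result:", result)   # ... and persist (stand-in)
    time.sleep(1.0)
```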
Sensitivity Analysis and Signal Processing Steps
A sensitivity analysis was carried out to select appropriate drive-based signals for evaluation. Available signals include the current values of position, speed, power, force and momentum of each axis. In addition to the actual position of the spindle, which is needed to locate the flaw on the workpiece, the following five signals form a feature vector that represents the input for the analysis:
• The current position value of the x-axis
• The current position value of the y-axis
• The current position value of the z-axis
• The current spindle power value
• The process force of the x-axis, which is the only feed axis during the tests.
While the current position of the axes is measured directly at the motor encoder and the process force is derived from a model, the actual power of the axes is calculated by measuring the DC link voltage and current at the frequency inverter [17]. It was found that the interference caused by the flaw can be detected more clearly in the five signals by analysing the difference between the currently measured signal and the average of the last three signals,

calc.signal_i = signal_i − (1/3) ∑_{k=1}^{3} signal_{i−k},  (10.1)

where i is the number of the currently measured signal. The resulting calculated signal calc.signal oscillates about zero and is not affected by the scale of the signal's trend.
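A minimal sketch of Eq. 10.1 as read from the description above (pointwise difference between the current recording and the mean of the three preceding ones); the synthetic traces are invented for illustration.

```python
import numpy as np

# Eq. 10.1: residual of recording i against the mean of the three previous
# recordings. The common trend cancels, flaw-induced peaks remain.
def calc_signal(recordings: np.ndarray, i: int) -> np.ndarray:
    """recordings: 2D array, one row per cut, columns = samples over radius."""
    return recordings[i] - recordings[i-3:i].mean(axis=0)

# Synthetic usage example: four face-turning power traces, the last one
# containing a flaw-induced spike.
rng = np.random.default_rng(1)
base = np.linspace(1.0, 1.2, 400)                 # slow trend over the radius
cuts = np.array([base + 0.005 * rng.standard_normal(400) for _ in range(4)])
cuts[3, 200] += 0.08                              # flaw in the current cut
residual = calc_signal(cuts, 3)
print("peak at sample:", int(np.argmax(np.abs(residual))))   # -> 200
```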
The peaks of the signals caused by the flaws can be easily detected in the calculated signal, as depicted in Fig. 10.2. The calculated signal for the five analysed features is plotted over the radius of the workpiece for the facing process of the solid cylinder, whose bores have diameters of 2.0, 1.5 and 1.0 mm at distances of 9, 18 and 27 mm from the workpiece centre, respectively. Three empirically chosen tolerance bands, which are divided into multiple sections, mark a certain standard deviation around the mean value of the currently measured signal. The advantage of dividing the tolerance bands into equidistant sections is the ability to detect small flaws even if the noise amplitude varies over the workpiece radius; a sketch of this sectioned-band logic follows below.
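One possible reading of the sectioned tolerance bands is sketched here; the band multipliers and the section count are illustrative assumptions, since the text states only that the bands were chosen empirically.

```python
import numpy as np

# Sectioned tolerance bands: split the residual into equidistant sections
# and set each band to k times the local standard deviation around the
# local mean; flag samples beyond the narrowest band.
def tolerance_check(residual, n_sections=10, ks=(3.0, 4.0, 5.0)):
    sections = np.array_split(residual, n_sections)
    flagged, start = [], 0
    for sec in sections:
        m, s = sec.mean(), sec.std()
        idx = np.where(np.abs(sec - m) > ks[0] * s)[0] + start
        flagged.extend(idx.tolist())
        start += len(sec)
    return flagged

rng = np.random.default_rng(2)
residual = 0.005 * rng.standard_normal(400)   # synthetic zero-centred signal
residual[200] = 0.08                          # flaw-induced peak
print("samples beyond the narrowest band:", tolerance_check(residual))
```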
Workpiece Flaw Detection
A diagnosis of workpiece flaws includes an identification of the flaw, a localization on the workpiece and a quantification of the flaw.
Possible workpiece flaws are detected when the calculated signal exceeds the narrowest tolerance band. With the corresponding actual position information of the feed and spindle axes, the potential workpiece flaws are localized in the next step. In order to quantify a potential workpiece flaw, a new parameter called intensity of diagnosis (IoD) was introduced. The IoD indicates the accuracy of the flaw diagnosis, and the distribution of the IoD over the workpiece's surface can give more detailed insight into the flaw's size. According to Eq. (10.2), the IoD is the quotient of the number of features F that simultaneously manifest a trespass of the smallest tolerance band and the total number of features F_tot, multiplied by the quotient of the number of the largest tolerance band T trespassed by any of the features and the total number of tolerance bands T_tot, times one hundred. Frequent and locally concentrated measurements with a high IoD thus clearly indicate a potential workpiece flaw. After the evaluation of the drive-based data, the results of the workpiece flaw analysis, their position and quantification are combined and transferred onto a virtual image of the workpiece, which is displayed in Fig. 10.3. If a potential flaw is identified, because its corresponding data points trespass one of the tolerance bands, the information on its location and its intensity of diagnosis is mapped onto the virtual workpiece image. Areas on the workpiece that show both a locally concentrated high frequency of flaw identifications and an IoD above 75% are caused by strong variations between the currently measured and previously measured signals, which clearly indicates a potential workpiece flaw. Areas on the workpiece surface with IoDs below 20% can be correlated to noise in the signal. High IoDs on the outer borders of the workpiece are due to deviations from a perfectly circular workpiece rotation, which is explained in greater detail in the following paragraphs. The virtual image is later transmitted to the machine operator and supports the quality control and source inspection process. As shown in a close-up view of the repartition of the IoD on the virtual image of the workpiece in Fig. 10.4, it is even possible to derive the diameter of the boreholes.
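Eq. (10.2) as described above reduces to a one-liner; the two calls show how the IoD separates a strong flaw indication from likely noise (F_tot = 5 features and T_tot = 3 bands, as in the text).

```python
# Eq. (10.2): F of F_tot features trip the narrowest band simultaneously,
# T is the largest of the T_tot bands trespassed by any feature.
def intensity_of_diagnosis(F: int, T: int, F_tot: int = 5, T_tot: int = 3) -> float:
    return (F / F_tot) * (T / T_tot) * 100.0

print(intensity_of_diagnosis(F=4, T=3))   # 80.0 -> strong flaw indication
print(intensity_of_diagnosis(F=1, T=1))   # ~6.7 -> likely just signal noise
```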
Evaluation and Limits of the Presented Concept
A diagnosis is considered accurate if the displayed intensities of diagnosis at the corresponding locations of the virtual workpiece image match well with the size and position of the actual borehole. As explained in this section, the choice of the analysed process steps determines whether the diagnosis delivers accurate results or not. A high-precision diagnosis was achieved both for the examined face turning process of the solid cylinder and for the face scrubbing of the forged unmachined part of the control disc. The analysis of the face finishing process and the exterior scrubbing of the unmachined part resulted in an inaccurate diagnosis.
These differences in the accuracy of the diagnosis for the different process steps result from the strong influence of the selected cutting parameters on the accuracy of the diagnosis. The diagnosis becomes less accurate at high cutting speeds and feed rates because less data is recorded over the workpiece surface. In addition, the quality of the diagnosis deteriorates at cutting depths of less than 1 mm because the plastic deformation of previous cutting steps reduces the effective size of the boreholes.
The face scrubbing of the unmachined workpiece is characterized by moderate cutting speeds and feed rates with simultaneously high cutting depths. Therefore, the accuracy of the diagnosis proved to be better than with the finishing steps, which use high cutting speeds, low feed rates and very small cutting depths.
Flaws on the side surface of the unmachined part were not detected during the exterior scrubbing, because the flaw's influence on the signal is negligible compared to that of the initially not perfectly round rotation of the unmachined part. At each revolution of the workpiece, the signals of the five features show large oscillations, such that the flaw, which lies at a covered distance of 7.7 mm in the exterior scrubbing process, cannot be identified. This is seen in Fig. 10.5, which shows the relation between the actual position and the power of the spindle. The flaw could have been detected in the second turning step if the exterior scrubbing process of the unmachined part had been divided into two steps, each with half of the current cutting depth.
Although the tool wear influence is reduced by focusing on the calculated signals based on the average of the last three cutting processes, noise from progressive tool wear is added to the signal. This noise increases the standard deviation of the signal and thus the size of the tolerance bands, which makes it more difficult to detect flaws in the workpiece.
Influence of Tool Wear on Machine Drive-Based Signals
In order to investigate the influence of the process parameters on tool wear and to better understand the resulting surface roughness, further experimental series were executed. A face turning process was therefore conducted multiple times with different combinations of cutting parameters and analysed for their influence on tool wear, surface roughness and specific energy consumption. The investigated cylinder consists of 42CrMoS4, and GARANT CNMG120408-SG HB7035 inserts are used with a PCLNR 2525 M12 AFR231 tool holder. In these experiments, the already mentioned EMAG VLC100Y machine tool is used, and the surface roughness is measured after each test run with a mobile MarSurf Perthometer M2 measuring device. The process parameters are selected with regard to the face scrubbing process of the hydraulic control disc in order to examine ten different cutting parameter combinations. Based on the basic process parameters listed in Table 10.1, one parameter is modified within a certain range while the other two parameters are kept constant (Table 10.2). Considering the differences between the material of the cylinder and that of the hydraulic control disc, a cutting depth (a_p) of 1.5 mm is used as the basis.
As [18, 19] also describe, the main influence on tool wear results from an increasing cutting speed. An increase of the feed rate leads to an increase of the surface roughness but has no essential impact on tool wear, which corresponds with the results of [18, 20]. Increasing the cutting depth affects neither the surface roughness nor the tool wear significantly, as is also reported in [18, 21]. In accordance with [21, 22], an increase in each of the analysed cutting parameters leads to a reduction in specific energy consumption by increasing the material removal rate.
Furthermore, the impact of tool wear on the power consumption of the spindle is compared for two different combinations of cutting parameters. For this purpose, cutting parameter combination A with a cutting speed of 180 m/min, a cutting depth of 1.5 mm and a feed rate of 0.3 mm/rev, and cutting parameter combination B with a cutting speed of 220 m/min, a cutting depth of 1.5 mm and a feed rate of 0.3 mm/rev are chosen. For cutting parameter combination A, representative data rows of the face turning process after different numbers of cuts of an insert are shown in Fig. 10.6. It is observable that the spindle power increases with increasing material removal due to tool wear.
Figure 10.6 also displays the corresponding spindle power consumption for cutting parameter combination B. It is obvious that the power consumption varies between the first and the 56th cut, which represents the last cut for the tool before the test is completed. The latter is associated with a clearly higher power consumption, and the pattern shows irregularities which do not occur with an unworn cutting edge. Figure 10.6 further indicates that the processes with cutting parameter combination B are shorter because of the higher cutting speed, but also have a higher power consumption because of the higher required spindle speed.
Besides the signals analysed in Sect. 2.2, the power consumption of the axis drives was evaluated for tool wear analysis. A change is detected in the power consumption of the x-axis, y-axis and z-axis with increasing tool wear. The corresponding power consumption of the drives of these axes is shown in Fig. 10.7 for cutting parameter combination B. A significant deviation can be determined for the last cut on the x-axis: the curve deviates from, and lies at a higher level than, the characteristic curves, with a peak at the beginning of the process. Additionally, the power consumption curve of the y-axis changes with increasing tool wear: it shifts to the right and, when the tool wear is higher, lies at a higher level of power consumption. In contrast, the z-axis does not show higher power consumption; instead, higher fluctuations in power consumption occur with increasing tool wear.
The data is collected in one measurement and is available both for improving the flaw detection algorithm, by automatically adjusting the tolerance barriers as tool wear and with it signal noise increase, and for the automatic and proactive identification of worn inserts. A hedged sketch of such an adaptive adjustment follows below.
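This illustration of the adaptive-barrier idea is ours, not the implemented algorithm: the band half-width is re-estimated from a rolling window of recent residual traces so that it tracks the wear-induced growth of the noise floor.

```python
import numpy as np

# Adaptive tolerance barrier: band half-width from the std of the last
# `window` residual traces, so the band widens as wear-induced noise grows.
def adaptive_band(residual_history, k=3.0, window=5):
    recent = np.concatenate(residual_history[-window:])
    return k * recent.std()

rng = np.random.default_rng(3)
# noise grows cut by cut, emulating progressive tool wear
history = [0.005 * (1 + 0.1 * c) * rng.standard_normal(400) for c in range(10)]
print(f"band after  5 cuts: {adaptive_band(history[:5]):.4f}")
print(f"band after 10 cuts: {adaptive_band(history):.4f}")
```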
Conclusions
This chapter presents an automated approach to workpiece flaw diagnosis. For this, drive-based data are measured at high frequency and evaluated after completion of the recording. Workpiece flaws like shrink holes, which are represented by boreholes of different diameters, can thereby be reliably detected, located and quantified. Following the diagnosis, quality assurance is supported by purposefully prepared data in the form of a virtual image of the workpiece with the intensity of diagnosis plotted at the corresponding position. In addition, investigations of cutting parameter combinations have shown that the cutting speed has a significant impact on tool wear. The increasing tool wear is clearly identifiable in the power consumption of the axis drives and the main spindle power. This information is useful both for automated proactive tool replacement and for improved flaw detection. The limitations of the concept, like the influence of process parameters and tool wear, have been pointed out and offer starting points for future research.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

| 4,415.6 | 2019-01-01T00:00:00.000 | ["Engineering", "Computer Science"] |
Carrier-envelope-phase dependent coherence in double quantum wells
By analyzing the interaction of a few-cycle laser pulse within an asymmetric semiconductor double quantum well structure, we show that the transient coherence thus produced is strongly dependent on the carrier-envelope phase (CEP) and significantly enhanced due to Fano-type interference. A method to determine the CEP is proposed by directly mapping the CEP-dependent coherence to the quantum beat signals. © 2009 Optical Society of America

OCIS codes: (270.1670) Coherent optical effects; (320.7130) Ultrafast processes in condensed matter, including semiconductors.

References and links
1. Z. Ficek and S. Swain, Quantum Interference and Coherence (Springer, New York, 2004).
2. H. C. Liu and F. Capasso, Intersubband Transitions in Quantum Wells: Physics and Device Applications (Academic Press, San Diego, 2000).
3. H. Schmidt, K. L. Campman, A. C. Gossard, and A. Imamoğlu, "Tunneling induced transparency: Fano interference in intersubband transitions," Appl. Phys. Lett. 70, 3455-3457 (1997).
4. J. Faist, F. Capasso, C. Sirtori, K. W. West, and L. N. Pfeiffer, "Controlling the sign of quantum interference by tunnelling from quantum wells," Nature (London) 390, 589-591 (1997).
5. G. B. Serapiglia, E. Paspalakis, C. Sirtori, K. L. Vodopyanov, and C. C. Phillips, "Laser-induced quantum coherence in a semiconductor quantum well," Phys. Rev. Lett. 84, 1019-1021 (2000).
6. L. Silvestri, F. Bassani, G. Czajkowski, and B. Davoudi, "Electromagnetically induced transparency in asymmetric double quantum wells," Eur. Phys. J. B 27, 89-102 (2002).
7. T. Müller, W. Parz, G. Strasser, and K. Unterrainer, "Influence of carrier-carrier interaction on time-dependent intersubband absorption in a semiconductor quantum well," Phys. Rev. B 70, 155324 (2004).
8. S. M. Sadeghi, S. R. Leffler, and J. Meyer, "Quantum interference and nonlinear optical processes in the conduction bands of infrared-coupled quantum wells," Phys. Rev. B 59, 15388-15394 (1999).
9. M. D. Frogley, J. F. Dynes, M. Beck, J. Faist, and C. C. Phillips, "Gain without inversion in semiconductor nanostructures," Nature Materials 5, 175-178 (2006).
10. E. Paspalakis, M. Tsaousidou, and A. F. Terzis, "Coherent manipulation of a strongly driven semiconductor quantum well," Phys. Rev. B 73, 125344 (2006).
11. J. F. Dynes, M. D. Frogley, M. Beck, J. Faist, and C. C. Phillips, "ac Stark splitting and quantum interference with intersubband transitions in quantum wells," Phys. Rev. Lett. 94, 157403 (2005).
12. B. S. Williams, B. Xu, Q. Hu, and M. R. Melloch, "Narrow-linewidth terahertz intersubband emission from three-level systems," Appl. Phys. Lett. 75, 2927-2929 (1999).
13. T. M. Fortier, P. A. Roos, D. J. Jones, S. T. Cundiff, R. D. R. Bhat, and J. E. Sipe, "Carrier-envelope phase-controlled quantum interference of injected photocurrents in semiconductors," Phys. Rev. Lett. 92, 147403 (2004).
14. K. A. Pronin and A. D. Bandrauk, "Coherent control of harmonic generation in superlattices: single-mode response," Phys. Rev. Lett. 97, 020602 (2006).
15. C. Van Vlack and S. Hughes, "Carrier-envelope-offset phase control of ultrafast optical rectification in resonantly excited semiconductors," Phys. Rev. Lett. 98, 167404 (2007).
16. T. Nakajima and S. Watanabe, "Effects of the carrier-envelope phase in the multiphoton ionization regime," Phys. Rev. Lett. 96, 213001 (2006).
17. T. Nakajima and S. Watanabe, "Phase-dependent excitation and ionization in the multiphoton ionization regime," Opt. Lett. 31, 1920-1922 (2006).
18. Y. Wu and X. Yang, "Carrier-envelope phase-dependent atomic coherence and quantum beats," Phys. Rev. A 76, 013832 (2007).
19. Y. Wu and X. Yang, "Strong-coupling theory of periodically driven two-level systems," Phys. Rev. Lett. 98, 013601 (2007).
20. G. L. Kamta and A. D. Bandrauk, "Phase dependence of enhanced ionization in asymmetric molecules," Phys. Rev. Lett. 94, 203003 (2005).
21. W. Yang, X. Song, S. Gong, Y. Cheng, and Z. Xu, "Carrier-envelope phase dependence of few-cycle ultrashort laser pulse propagation in a polar molecule medium," Phys. Rev. Lett. 99, 133602 (2007).
22. C. Zhang, X. Song, W. Yang, and Z. Xu, "Carrier-envelope phase control of carrier-wave Rabi flopping in asymmetric semiparabolic quantum well," Opt. Express 16, 11487-11496 (2008).
23. H. Schmidt and A. Imamoglu, "Nonlinear optical devices based on a transparency in semiconductor intersubband transitions," Opt. Commun. 131, 333-338 (1996).
24. J. Faist, F. Capasso, C. Sirtori, A. L. Hutchinson, K. W. West, and L. N. Pfeiffer, "Intersubband emission in double-well structure with quantum interference in absorption," Appl. Phys. Lett. 71, 3477-3479 (1997).
25. I. Waldmüller, J. Förstner, S.-C. Lee, A. Knorr, M. Woerner, K. Reimann, R. A. Kaindl, T. Elsaesser, R. Hey, and K. H. Ploog, "Optical dephasing of coherent intersubband transitions in a quasi-two-dimensional electron gas," Phys. Rev. B 69, 205307 (2004).
26. E. Paspalakis, N. J. Kylstra, and P. L. Knight, "Transparency induced via decay interference," Phys. Rev. Lett. 82, 2079-2082 (1999).
27. E. Paspalakis, N. J. Kylstra, and P. L. Knight, "Transparency of a short laser pulse via decay interference in a closed V-type system," Phys. Rev. A 61, 045802 (1999).
28. J. H. Wu, J. Y. Gao, J. H. Xu, L. Silvestri, M. Artoni, G. C. La Rocca, and F. Bassani, "Ultrafast all optical switching via tunable Fano interference," Phys. Rev. Lett. 95, 057401 (2005).
29. W. X. Yang and R.-K. Lee, "Controllable entanglement and polarization phase gate in coupled double quantum-well structures," Opt. Express 16, 17161-17170 (2008).
30. W. X. Yang, J. M. Hou, and R.-K. Lee, "Ultraslow bright and dark solitons in semiconductor quantum wells," Phys. Rev. A 77, 033838 (2008).
31. D. Ahn and S. L. Chuang, "Exact calculations of quasibound states of an isolated quantum well with uniform electric field: quantum-well Stark resonance," Phys. Rev. B 34, 9034-9037 (1986).
32. M. O. Scully and M. S. Zubairy, Quantum Optics (Cambridge University Press, Cambridge, England, 1997).
There have been significant research activities on quantum coherence and interference phenomena induced by the intersubband transitions (ISBT) of semiconductor quantum wells (QWs) in the last decades [1, 2]. A number of fascinating coherence-induced effects have been discovered when lasers are applied to QW structures, such as tunneling-induced transparency [3, 4], electromagnetically induced transparency [5, 6, 7, 8], gain without inversion [9], coherent control of electron population [10], Autler-Townes splitting [11], and terahertz emission [12]. These studies have considerably modified our understanding of the nature and consequences of quantum coherence in the quantum and nonlinear optical processes of QW systems [13, 14, 15]. Recently, the effects of the carrier-envelope phase (CEP) of few-cycle pulses on quantum coherence and interference in optical media have attracted much attention [16, 17, 18, 19, 20, 21, 22], because such investigations can lead to practical methods for extracting information about an ultrashort laser pulse.

In this letter, we theoretically investigate the effects of the CEP on the transient coherence produced by an ultrashort laser pulse of a few cycles in an asymmetric double quantum well structure. We demonstrate that the coherent effect is strongly dependent on the CEP, and that the magnitude of the transient coherence can be enhanced significantly due to Fano-type interference. We also show that the coherence thus produced can be mapped into the signal of quantum beats and hence might be used to determine the CEP of few-cycle pulses.

The schematic energy-level diagram of a GaAs/AlxGa1−xAs coupled quantum well structure is shown in Fig. 1(a): an AlxGa1−xAs shallow well and a GaAs deep well separated by a thick AlyGa1−yAs tunnel barrier. This barrier couples the excited state of the deep well with the ground state of the shallow well to create the doublet states |2⟩ and |3⟩. A single external light field illuminates the system and acts on the transitions |1⟩ ↔ |3⟩ and |1⟩ ↔ |2⟩ simultaneously. Tunneling to a continuum of energies takes place from states |2⟩ and |3⟩ through the thin barrier on the right of the deep well. The probability amplitude for the absorption of a photon can be thought of as the superposition of two absorption paths, one via level |2⟩ and one via level |3⟩, both decaying by tunneling to the same continuum. Fano-type destructive interference between the two absorption paths may then occur so as to cancel the absorption altogether; nearly vanishing absorption due to the Fano effect has already been predicted [23] and observed [3, 4, 24].

As shown in Fig. 1(b), we consider an ultrashort optical pulse with electric field $E(t) = -\partial A(t)/\partial t$ and vector potential $A(t) = A_0 e^{-(t-2\tau)^2/\tau^2}\sin(\omega t + \phi)$ [16, 17, 18, 19], where $A_0$, $\tau$, $\omega$, and $\phi$ are the amplitude, pulse width, carrier frequency, and carrier-envelope phase of the vector potential, respectively. Writing the electronic wave function as $|\psi\rangle = a_1|1\rangle + a_2|2\rangle + a_3|3\rangle$, the time evolution of $|\psi\rangle$ is governed by the Schrödinger equation, from which the differential equations for the probability amplitudes $a_j$ [Eqs. (1)-(3)] follow, with $\xi(t) = \omega^{-1}\,\partial[e^{-(t-2\tau)^2/\tau^2}\sin(\omega t+\phi)]/\partial t$ and the overdot denoting the derivative with respect to time. Here $\Omega$ and $q\Omega$, with $q\Omega = q\mu_{12}\omega A_0/(2\hbar)$, are the half Rabi frequencies for the transitions $|1\rangle \leftrightarrow |j\rangle$ ($j = 2, 3$), $q$ being the ratio of the dipole matrix elements of the two upper transitions. $2\delta$ is the energy splitting due to the tunneling between the upper levels, and $\Delta = \omega - \omega_0$ is the detuning between the pulse frequency and the average transition frequency $\omega_0 = (\omega_3 + \omega_2)/2$, where $\omega_2$ and $\omega_3$ are the transition frequencies of $|2\rangle \leftrightarrow |1\rangle$ and $|3\rangle \leftrightarrow |1\rangle$, respectively. The decay rates are added phenomenologically to Eqs. (1)-(3): $\gamma_{2,3} = \gamma_{2l,3l} + \gamma_{2d,3d}$ denote the total decay rates of the upper states, including both the population scattering rates $\gamma_{2l,3l}$ due to longitudinal optical (LO) phonon emission at low temperature and the dephasing rates $\gamma_{2d,3d}$ due to a combination of quasielastic interface roughness scattering and acoustic phonon scattering. Other inhomogeneous broadening effects are neglected because of their small influence [25]. Moreover, $p\sqrt{\gamma_2\gamma_3} = \sqrt{\gamma_{2l}\gamma_{3l}}$ represents the cross-coupling of the two upper states via LO phonon decay [3, 4], which arises from the tunneling to the continuum through the thin barrier next to the deep well; the limiting values $p = 0$ and $p = 1$ correspond, respectively, to no interference and perfect interference.
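Since Eqs. (1)-(3) are not reproduced here, the following minimal Python sketch integrates one common form of the amplitude equations for this kind of driven three-level Fano system (cf. the CEP literature cited above) to scan |ρ23| at t = 4τ against the CEP φ. The explicit right-hand side, the fixed-step integrator, and the assumed two-cycle pulse length are illustrative assumptions; only the quoted parameter values are taken from the text.

```python
import numpy as np

# Parameters quoted in the text (meV; hbar = 1, so energies double as
# angular frequencies and time is measured in 1/meV).
hw = 125.0                       # carrier photon energy, hbar*omega
delta = 7.6 / 2                  # half the doublet splitting 2*delta
Delta = 0.0                      # detuning from the average transition frequency
g2l, g3l, g2d, g3d = 0.31, 0.26, 0.031, 0.026
g2, g3 = g2l + g2d, g3l + g3d
p = np.sqrt(g2l * g3l) / np.sqrt(g2 * g3)   # Fano cross-coupling (~0.9 here)
q = 1.2                          # ratio of the upper-transition dipole moments
Omega = hw / 20.0                # half Rabi frequency Omega = omega/20
tau = 2 * (2 * np.pi / hw)       # pulse width: two optical cycles (assumed)

def rhs(t, a, phi):
    """Assumed amplitude equations for (a1, a2, a3); xi(t) is the scaled field."""
    env = np.exp(-((t - 2 * tau) ** 2) / tau ** 2)
    xi = (-2 * (t - 2 * tau) / tau ** 2 * env * np.sin(hw * t + phi)
          + hw * env * np.cos(hw * t + phi)) / hw
    a1, a2, a3 = a
    cross = 0.5 * p * np.sqrt(g2 * g3)
    da1 = 1j * xi * (q * Omega * a2 + Omega * a3)
    da2 = 1j * xi * q * Omega * a1 + (1j * (Delta - delta) - g2 / 2) * a2 - cross * a3
    da3 = 1j * xi * Omega * a1 + (1j * (Delta + delta) - g3 / 2) * a3 - cross * a2
    return np.array([da1, da2, da3])

def coherence_at_4tau(phi, steps=4000):
    """Fixed-step RK4 from t = 0 to t = 4*tau; returns |rho_23| = |a2 a3*|."""
    a = np.array([1.0, 0.0, 0.0], dtype=complex)
    h = 4 * tau / steps
    t = 0.0
    for _ in range(steps):
        k1 = rhs(t, a, phi)
        k2 = rhs(t + h / 2, a + h / 2 * k1, phi)
        k3 = rhs(t + h / 2, a + h / 2 * k2, phi)
        k4 = rhs(t + h, a + h * k3, phi)
        a = a + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return abs(a[1] * np.conjugate(a[2]))

for phi in np.linspace(0.0, 2 * np.pi, 9):
    print(f"phi = {phi:4.2f} rad   |rho23| = {coherence_at_4tau(phi):.4e}")
```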
As an example for the numerical calculations, we consider the following asymmetric double quantum well design: a 68 Å thick Al0.15Ga0.85As shallow well and a 70 Å thick GaAs deep well separated by a 20 Å thick Al0.3Ga0.7As tunnel barrier. The doublet states (|2⟩ and |3⟩) are both coupled to the continuum by a 15 Å thin Al0.3Ga0.7As barrier, which produces the decay-induced coherence. Note that for temperatures up to 10 K with electron sheet densities smaller than 10¹² cm⁻², the dephasing rates γid can be estimated [3] as γ2d = 4.13 meV and γ3d = 5.35 meV. The population-decay rates can be calculated [31] by solving the effective-mass Schrödinger equation with outgoing waves at infinity, which yields a set of complex eigenvalues whose real and imaginary parts give, respectively, the quasibound-state energy levels and the resonance widths. For our asymmetric double quantum well structure, the population-decay rates turn out to be γ2l = 5.6 meV and γ3l = 7.0 meV. In such a scenario, a coupling ultrashort laser can drive an oscillation between the doublet states, and the induced oscillation depends strongly on the CEP of the few-cycle pulse, producing a CEP-dependent transient coherence.

Fig. 2. Transient coherence |ρ23| versus the CEP φ for (a, c) p = 0 and (b, d) p = 1 at time t = 4τ, for different pulse widths τ and Rabi frequencies Ω; other parameters are ħω = 125 meV, q = 1.2, Δ = 0, 2δ = 17.6 meV, γ2l = 0.31 meV, γ3l = 0.26 meV, γ2d = 0.031 meV, and γ3d = 0.026 meV.
Clearly, a CEP dependence is produced even for a low Rabi frequency, i.e., Ω = ω/20. As shown in Fig. 2, the CEP-dependent amplitude becomes more pronounced as the Rabi frequency increases and the pulse width narrows. Low Rabi frequencies induce less transient coherence and are therefore unfavorable from the viewpoint of experimental measurement; the lower limit for the Rabi frequency depends on the precision of the measurement technique. With state-of-the-art technologies for handling the weak light-QW interaction, the relevant effects of the QW system considered here can be measured at low temperature (10 K) [2]. Besides, we note that α ∼ E characterizes the electric field E(t), which has period 2π in the CEP φ. In that case, the relation ρ23 = O(α²) implies that ρ23 should approximately have period π, as illustrated in Fig. 2.
It should be noted that the interference induced by the resonant tunneling has been included in plotting Fig. 2. From the decay-rate values (γ2l = 5.6 meV, γ3l = 7.0 meV, γ2d = 4.13 meV, and γ3d = 5.35 meV), we obtain a cross-coupling strength between |2⟩ and |3⟩ of p = 0.54. In order to examine the effect of the tunneling-induced interference on the CEP-dependent coherence, we consider a similar GaAs/AlGaAs asymmetric double quantum well structure consisting of two quantum wells (a 55 Å Al0.3Ga0.7As shallow well and a 57 Å GaAs deep well) separated by a 35 Å Al0.5Ga0.5As tunneling barrier. Aluminum is added to the shallow well in order to reduce the contribution of interface roughness scattering. The energy splitting between the upper levels is calculated to be 2δ = 7.6 meV. For a sheet carrier density of 10¹² cm⁻² in the quantum wells, the LO-phonon decay rates are γ2l = 0.31 meV and γ3l = 0.26 meV, and the dephasing rates are estimated to be γ2d = 0.031 meV and γ3d = 0.026 meV. The cross-coupling strength is thus estimated as p = 0.90, close to the ideal value p = 1 and corresponding to a large tunneling efficiency, which leads to a strong Fano-type interference effect. With the parameter values of this QW structure, Fig. 3 shows the transient coherence |ρ23| versus the CEP φ at time t = 4τ under the same initial conditions as in Fig. 2; it demonstrates that the amplitude of the transient coherence is enhanced. This result stems from the nearly perfect interference induced by the resonant tunneling. The large amplitude is clearly favorable for experimental measurement in the weak-field regime. More interestingly, the parameters of the electron subbands in QW structures can be engineered to give a desired amplitude of coherence by utilizing so-called structural coherent control in the design [2].
We now study the quantum beats due to the coherence ρ23 produced by a few-cycle ultrafast pulse for times T > t, with the initial time t = t0 = 4τ. The quantum beat signal I can be written in the standard form [32] in terms of the state of the system |ψ(T)⟩. Here |n, 0⟩ and |1, 1ω_{j1}⟩ denote, respectively, the levels |n⟩ (n = 1, 2, 3) with no photon and the ground state |1⟩ with one photon in the field mode j characterizing the transition |0⟩ → |j⟩ (j = 2, 3). The operators $\hat{E}_1^{(-)}(T) = E_1 \hat{a}_1 e^{-i\omega_{21}(T-t)}$ and $\hat{E}_2^{(+)}(T) = E_2 \hat{a}_2^{\dagger} e^{i\omega_{31}(T-t)}$ involve the electric field per photon $E_j$ of mode j. Inserting the interaction Hamiltonian $H = \hbar\sum_j g_j(\cdots)$, with $g_j = \mu_{0j} E_j/(2\hbar)$, and solving Eqs. (5) and (6) under the initial conditions $b_{1,2}(t) = 0$, the quantum beat signal can be calculated [Eq. (7)], where $I_0(\phi) = C\,|\rho_{23}(t)|$, the coefficient C is determined by $E_j$, $g_j$, $\gamma_j$ and p, and η(t) is an adjustable phase shift of the ultrashort pulse at the time instant T = t. Here we have used the assumption $\gamma_j,\ p\sqrt{\gamma_2\gamma_3} \ll 2g_j$, so that the time-dependent term in the coefficient C can be neglected. From Eq. (7) we find that $I_0(\phi)$ depends on the CEP φ through the CEP-dependent coherence $|\rho_{23}(t)|$ shown in Fig. 2; thus the CEP of a few-cycle pulse might be determined by measuring the quantum beat signals. Defining the depth of modulation M of the quantum beat amplitude as $M = 2[I_0(\phi)_{\max} - I_0(\phi)_{\min}]/[I_0(\phi)_{\max} + I_0(\phi)_{\min}]$, with $I_0(\phi)_{\max,\min}$ the maximum (minimum) amplitude, one obtains for a given C a much larger modulation depth M in our system than in the previously proposed schemes [16, 17, 18], as illustrated in Fig. 2.
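A short worked example of this definition may be useful; the beat amplitudes below are hypothetical and only illustrate how M is evaluated.

```python
import numpy as np

def modulation_depth(I0):
    """M = 2 (I0_max - I0_min) / (I0_max + I0_min) for sampled beat amplitudes."""
    I0 = np.asarray(I0, dtype=float)
    return 2 * (I0.max() - I0.min()) / (I0.max() + I0.min())

# Hypothetical I0(phi) with the expected pi-periodicity in the CEP:
phi = np.linspace(0, np.pi, 50)
I0 = 1.0 + 0.6 * np.cos(2 * phi)
print(modulation_depth(I0))   # 2*(1.6-0.4)/(1.6+0.4) = 1.2 for this toy signal
```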
In conclusion, we have studied the generation of transient coherence induced by few-cycle laser pulses in an asymmetric semiconductor double QW structure and shown that the coherence thus produced strongly depends on the carrier-envelope phase of the ultrashort laser pulses. Importantly, the amplitude of the CEP-dependent transient coherence can be greatly enhanced by the Fano-type interference. We have also shown that the CEP-dependent coherence can be mapped into the signal of quantum beats, so that the CEP can be determined by measuring the quantum beat signals. We believe that the CEP-dependent coherence in our proposed QW structure will manifest itself in other quantum interference phenomena as well; our study might therefore open up an avenue for exploring and utilizing CEP-dependent coherent effects, which could be exploited in real solid-state devices such as high-speed optical modulators and switches.
Fig. 1. (a) Schematic diagram of the proposed GaAs/AlxGa1−xAs QW structure, illuminated by the ultrashort few-cycle laser pulse shown in (b); the electric field E(t) of the ultrashort pulse versus time t is plotted for φ = π/2.

| 4,718 | 2009-08-31T00:00:00.000 | ["Physics"] |
Palynofacies Analysis and Hydrocarbon Generation Potential of Dokan and Gulneri Formations (Upper Cretaceous) from selected wells in Northern Iraqi Oil Fields
Sixty-five cutting samples from the Dokan and Gulneri Formations in three subsurface sections from the Khabbaz, Jambour and Taq-Taq oil fields in northern Iraq were selected for optical and analytical palynofacies study. Four palynofacies types were determined based on the ratios of the palynomorphs, phytoclasts, amorphous and opaque organic materials present. Judging from the ratios of palynomorphs, amorphous organic matter and phytoclasts to one another, the Dokan and Gulneri Formations appear to have been deposited in a proximal to distal shelf environment. Optical examination of the organic matter revealed slightly mature to mature conditions (TAI between 3 and +3). The GC analysis supported this maturity assessment, as the Pr/Ph ratio for the studied sections was greater than 1 while the Carbon Preference Index (CPI) was less than 1. TOC values at different depths were determined, and the types of amorphous organic matter were also assessed optically for their hydrocarbon-generating ability. By linking the maturity stage with the quality and quantity of the organic matter in the Dokan and Gulneri Formations in the studied sections, a number of oil and condensate/wet-gas zones were detected at certain depths in each section.
Introduction
The Dokan Formation was formerly included in the Kometan Formation (Jassim & Buday, 2006). It was first described by Lancaster Jones in 1957 (Bellen et al., 1959). The type locality is at the site of the Dokan Dam in the High Folded Zone, NNW of Sulaimaniya, NE Iraq. It is composed of four meters of light grey or white, white-weathering oligosteginal limestones, locally rubbly, with glauconitic coatings of constituent pebble-like masses, locally worm-riddled. In the subsurface sections, however, the limestone is mostly dark grey, shaley or marly. The formation thickens to the SW, reaching 150 m in the Chamchamal wells (Buday, 1980). Further west it is 5-30 m thick in the Kirkuk, Bai Hassan, Demir Dagh and Qara Chauq areas (Jassim & Buday, 2006).

The Gulneri Formation was also first described by Lancaster Jones in 1957, from the site of the Dokan Dam in the High Folded Zone, where it consists of about 2 m of black, bituminous, finely laminated, calcareous shale with some glauconite and collophane in the lower part (Bellen et al., 1959). The high bitumen content and dwarfed fossils indicate that the formation was deposited in a euxinic environment (Jassim & Buday, 2006).

The age of the formation is Early Turonian, as recorded by Bellen et al. (1959). The formation is separated by unconformities from both the underlying Dokan and the overlying Kometan Formations (Buday, 1980).
The Study Area
The study area comprises three wells within the three oil fields of Khabbaz, Jambour and Taq-Taq. The location of the studied sections is shown in Fig. (1), and the U.T.M. coordinates of the three sections are as follows:
Sample Collection and Methodology
A total of 65 cutting samples of the Dokan and Gulneri Formations were selected from the wells Khabbaz-12 (Kz-12), Jambour-50 (Ja-50) and Taq-Taq-1 (Tq-1). The sampling interval ranged from 23 cm to 6 m, depending on the thickness of the formations and the availability of samples. The selected rock samples were treated following the common procedure for preparing palynological slides using HF and HCl acids. Generally, all the selected samples are considered poor (less than 25%) in their palynomorph richness.
Previous Studies
No detailed palynological studies have previously been carried out on the Dokan or Gulneri Formations in Iraq. Most existing studies are internal reports of the Northern Oil Company, which generally deal with either the lithological description or the reservoir characteristics of these two formations in the wells that penetrated them.
Classification of Sedimentary Organic Matter
There are many classifications of sedimentary organic matter (Staplin, 1969; Bujack et al., 1977; Combaz, 1980; Masran & Pocock, 1981; Whitaker, 1984; Hart, 1986; etc.). Pittet and Gorin (1997), in their study of the distribution of sedimentary organic matter in a mixed carbonate-siliciclastic platform environment of Oxfordian age in the Swiss Jura Mountains, proposed a sufficiently simple classification for observation in transmitted-light microscopy, in order to make palynofacies a cost-effective routine tool in paleoenvironmental and sequence-stratigraphic investigations. The classification takes into account some important variables, mainly the biological origin of the constituents, their preservation state, and any significant variation in size, morphology or density likely to affect the hydrodynamic behavior of the particles. The classification of Pittet and Gorin (1997) was adapted from that of Whitaker (1984) and simplified following Steffen and Gorin (1993a, b) to retain eight constituent categories. In this study, the classification of Pittet and Gorin (1997) is used (as it is simple and practicable) to distinguish between the different constituents of the sedimentary organic matter, although their classification did not include amorphous organic matter as a separate category because their samples were poor in it.
Palynofacies
The concept of palynofacies was first introduced by Combaz in 1964 (Tyson, 1995). His definition may be paraphrased as the palynological study of the total assemblage of particulate organic matter contained in a sediment, following removal of the sediment matrix by HCl and HF. Palynofacies analysis involves the integrated study of all aspects of the organic matter assemblage: identification of the individual particulate components and assessment of their absolute and relative proportions, sizes and preservation states. Powell et al. (1990), in Tyson (1995), define palynofacies as distinctive assemblages of HCl- and HF-insoluble particulate organic matter (palynoclasts) whose composition reflects a particular sedimentary environment. Tyson (1995) defined a palynofacies as a body of sediment containing a distinctive assemblage of palynological organic matter thought to reflect a specific set of environmental conditions, or to be associated with a characteristic range of hydrocarbon-generating potential; his definition of palynofacies analysis is the palynological study of depositional environments and hydrocarbon source-rock potential based upon the total assemblage of particulate organic matter. In this study, four types of palynofacies have been identified based on the estimated ratios of the organic matter components, following Pittet and Gorin (1997) (Figs. 2-4). The details of each palynofacies are as follows:
Palynofacies and Paleoenvironmental Interpretation
Tyson (1995) summarized a number of ternary plots that are of much use in determining the paleoenvironment of deposition from palynofacies data. For example, the microplankton-spore-pollen palynomorph ternary plot (Federova, 1977; Traverse, 1988; Duringer & Doubinger, 1985) indicates onshore-offshore depositional environments and transgressive-regressive trends. Pocock et al. (1988) used a structured phytoclast-biodegraded phytoclast-(yellow + grey AOM) plot to indicate the style of degradation, with supposed transitions from structured to biodegraded in oxidizing environments, and from structured to amorphous in reducing environments. A ternary composed of alginite + amorphous, herbaceous + pollen + spores, and woody-coaly material was proposed by Shimazaki (1986), in Omura (2004), from which fluvial, estuarine, prodeltaic, shelf, submarine-fan and basin-floor sediments can be identified and distinguished. Tyson (1985, 1989, 1993), in Tyson (1995), used an AOM-phytoclast-palynomorph (APP) plot to characterize kerogen assemblages; such a plot can pick out differences in relative proximity to terrestrial organic matter sources, kerogen transport paths, and the redox status of the depositional subenvironments that control AOM preservation. Relatively high palynomorph percentages (>10%) and high phytoclast percentages (>50%) are apparently characteristic of only some shallow shelf settings. The estimated ratios of the sedimentary organic matter components for the Dokan and Gulneri Formations in the studied sections were plotted separately on Tyson's (1995) ternary (Figures 6 and 7). From the resulting ternaries, it is clear that the paleodepositional environment of the two formations (P.F. 1 & 3) is mainly represented by a distal suboxic-anoxic basin (field IX). According to Tyson's comments on this field, it is characterized by AOM-dominated assemblages with a low abundance of palynomorphs, partly due to masking; it is frequently alginite-rich and typical of deep-basin or stratified shelf-sea deposits, especially sediment-starved basins. The field is also of low spore and prasinophyte content and of type I & II kerogen (II > I), and is highly oil prone. Variations in the paleodepositional environment of the Dokan Formation can be seen in Ja-50 (P.F. 2); this palynofacies is represented mainly by a distal dysoxic-anoxic shelf (field VII) and partly by a distal dysoxic-oxic shelf (field VIII).
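For readers who wish to reproduce such APP plots, the sketch below maps (AOM, phytoclast, palynomorph) percentages onto 2-D ternary coordinates with matplotlib; the sample compositions are hypothetical, and Tyson's field boundaries (I-IX) are not drawn.

```python
import numpy as np
import matplotlib.pyplot as plt

def ternary_xy(aom, phyto, palyno):
    """Map (AOM, phytoclast, palynomorph) percentages onto 2-D ternary
    coordinates: phytoclast apex lower left, palynomorph lower right, AOM top."""
    total = np.asarray(aom) + np.asarray(phyto) + np.asarray(palyno)
    a, c = np.asarray(aom) / total, np.asarray(palyno) / total
    x = c + 0.5 * a           # palynomorph share plus half the AOM share
    y = (np.sqrt(3) / 2) * a  # height proportional to the AOM share
    return x, y

# Hypothetical AOM-dominated samples (percent AOM, phytoclast, palynomorph):
samples = np.array([[85, 10, 5], [70, 25, 5], [55, 35, 10]])
x, y = ternary_xy(samples[:, 0], samples[:, 1], samples[:, 2])

fig, ax = plt.subplots()
ax.plot([0, 1, 0.5, 0], [0, 0, np.sqrt(3) / 2, 0], "k-")  # triangle outline
ax.scatter(x, y)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```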
Tyson's descriptions of those two environments are as follows: the distal dysoxic-anoxic shelf (VII) shows moderate to good AOM preservation and low to moderate palynomorph content; dark-colored, slightly bioturbated mudstones are typical, with low spore and moderate to common dinocyst content and type II kerogen, oil prone. The distal dysoxic-oxic shelf (VIII) shows AOM-dominated assemblages with excellent AOM preservation and low to moderate palynomorph content (partly due to masking), typical of organic-rich shales deposited under stratified shelf-sea conditions, with low spore and moderate to common dinocyst content and type II >> I kerogen, oil prone. The Gulneri Formation likewise shows a variation in its paleodepositional environment in the Tq-1 section, represented by P.F. 4. This palynofacies appears to have been deposited on a proximal suboxic-anoxic shelf (field VI). Tyson (1995) described this environment as having high AOM preservation due to reducing basin conditions; the absolute phytoclast content may be moderate to high due to turbiditic input and/or general proximity to source, with variable, low to moderate spore content, low to common dinocysts dominant, and type II kerogen (oil prone).
Hydrocarbon Generation Potential
Six samples from the Dokan and Gulneri Formations (two samples from each section) were analyzed by GC, and the results are given in Table (2). Projecting the measured Pr/n-C17 and Ph/n-C18 values onto the cross-plot proposed by Shanmugam (1985) (Fig. 8) shows clearly that both the Dokan and Gulneri Formations in the studied sections contain organic matter of marine to mixed source (kerogen II). The same plot indicates that the organic matter is within the moderately mature stage. The Pr/Ph ratios are generally less than 1 (typically 0.7), with a slight even-carbon preference (CPI < 1, typically 0.85); such a condition indicates free algal/bacterial organic detritus in the kerogen of a marine source rock deposited under less reducing conditions. Total organic carbon was measured on the selected samples as an aid in estimating the quantity of organic matter within the two formations. The average TOC content of the Dokan Formation in the studied sections is 1.06%, 0.46% and 1.645% for the Kz-12, Ja-50 and Tq-1 sections, respectively, while the average TOC content of the Gulneri Formation is 1.45%, 1.525% and 0.7% for the same sections, respectively. A quality evaluation of the organic matter content was also attempted optically, by distinguishing the different types of amorphous organic matter according to the classification of Thompson and Dembicki (1986) and their ability to generate hydrocarbons (oil and gas). The color change of the palynomorphs was also followed optically by estimating the Thermal Alteration Index (TAI) after Pearson (1990) to determine the maturity state of the organic matter. The optically examined organic matter consistently shows a mature stage, with colors ranging between yellow-brown and brown (TAI between 3 and +3) (Fig. 9). The Dokan Formation shows a more mature stage than the Gulneri Formation, a condition resulting from its greater burial depth and older age.
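As a worked illustration of these indices, the sketch below computes Pr/Ph, Pr/n-C17, Ph/n-C18 and a Bray-and-Evans-style CPI from hypothetical GC peak areas; the formula variant and all numbers are assumptions for demonstration, not data from Table (2).

```python
import numpy as np

def cpi_bray_evans(areas):
    """Carbon Preference Index from n-alkane peak areas keyed by carbon number:
    CPI = 0.5 * [ sum(odd C25-C33)/sum(even C24-C32)
                + sum(odd C25-C33)/sum(even C26-C34) ]  (Bray & Evans form)."""
    odd = sum(areas.get(n, 0.0) for n in range(25, 34, 2))
    even_low = sum(areas.get(n, 0.0) for n in range(24, 33, 2))
    even_high = sum(areas.get(n, 0.0) for n in range(26, 35, 2))
    return 0.5 * (odd / even_low + odd / even_high)

# Hypothetical GC peak areas (arbitrary units), slight even-carbon preference:
areas = {24: 9, 25: 8, 26: 10, 27: 8, 28: 11, 29: 9, 30: 10, 31: 7, 32: 9, 33: 6, 34: 8}
pr, ph, nC17, nC18 = 4.2, 6.0, 5.5, 5.8
print("Pr/Ph    =", pr / ph)                    # compared against ~1 as a redox indicator
print("Pr/n-C17 =", pr / nC17, " Ph/n-C18 =", ph / nC18)  # axes of the Shanmugam plot
print("CPI      =", cpi_bray_evans(areas))      # < 1 for an even-carbon preference
```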
Conclusions
Generally, the Dokan and Gulneri Formations can be considered rich in organic matter, and amorphous organic matter constitutes the greatest part of their organic components. The two formations are, however, generally poor in palynomorphs, which consist mainly of dinoflagellates, fungal spores and pollen, in addition to foraminiferal test linings. The preservation of the palynomorphs can be considered poor due to degradation, and the fungi may have played a role in that degradation. The components of the sedimentary organic matter within the two formations vary in their ratios along the studied sections, representing different palynofacies. The paleodepositional environment of the Dokan Formation appears to have been a distal suboxic-anoxic shelf, partly a distal dysoxic-oxic shelf, as indicated by plotting the ratios of the main organic components (palynomorphs, phytoclasts, and AOM) on the APP triangle of Tyson (1995); by the same procedure, the paleodepositional environment of the Gulneri Formation is interpreted as a distal suboxic-anoxic shelf, partly a proximal suboxic-anoxic shelf. The kerogen within the two formations appears to be largely of type II, oil prone. By linking the quantity, quality and maturity parameters at each of the studied sections (Tables 3, 4 & 5), the hydrocarbon-generating potential of both the Dokan and Gulneri Formations was determined. It is concluded that the Gulneri Formation at present is capable of generating liquid oil in the Tq-1 well, while it generates oil and gas in the Kz-12 and Ja-50 wells because of the difference in the type of organic matter content. The Dokan Formation likewise generates liquid oil in the Tq-1 well and oil and gas in the Kz-12 and Ja-50 wells (in Ja-50, only the lower part of the formation generates hydrocarbons because the upper part does not contain a sufficient quantity of organic matter).
Fig. (1): Location map of the studied sections.

Figure (2): Percentages of the organic matter components in Kz-12 well.

Figure (5): Correlation between the identified palynofacies within the studied sections.

Figure (8): Relationship between isoprenoids and n-alkanes showing source and depositional environments for the Dokan and Gulneri Formations in the studied sections (plot after Shanmugam, 1985).
Table (2): Gas chromatography analysis for selected samples from both the Dokan and Gulneri Formations in the studied sections.

| 3,096.8 | 2007-12-28T00:00:00.000 | ["Geology", "Environmental Science"] |
Madagascar corals reveal a multidecadal signature of rainfall and river runoff since 1708
Pacific Ocean sea surface temperatures (SST) influence rainfall variability on multidecadal and interdecadal timescales in concert with the Pacific Decadal Oscillation (PDO) and Interdecadal Pacific Oscillation (IPO). Rainfall variations in locations such as Australia and North America are therefore linked to phase changes in the PDO. Furthermore, studies have suggested that teleconnections exist between the western Indian Ocean and Pacific Decadal Variability (PDV), similar to those observed on interannual timescales in relation to the El Niño-Southern Oscillation (ENSO). However, as instrumental records of rainfall are too short and sparse to confidently assess multidecadal climatic teleconnections, here we present four coral climate archives from Madagascar spanning up to the past 300 yr (1708-2008) to assess such decadal variability. Using spectral luminescence scanning to reconstruct past changes in river runoff, we identify significant multidecadal and interdecadal frequencies in the coral records, which before 1900 are coherent with Asian-based PDO reconstructions. This multidecadal relationship with the Asian-based PDO reconstructions points to an unidentified teleconnection mechanism affecting Madagascar rainfall/runoff, most likely triggered by multidecadal changes in North Pacific SST influencing the Asian monsoon circulation. In the 20th century we decouple human deforestation effects from rainfall-induced soil erosion by pairing luminescence with coral geochemistry. Positive PDO phases are associated with increased Indian Ocean temperatures and runoff/rainfall in eastern Madagascar, while precipitation in southern Africa and eastern Australia declines. Consequently, the negative PDO phase that started in 1998 may contribute to reduced rainfall over eastern Madagascar and increased precipitation in southern Africa and eastern Australia. We conclude that multidecadal rainfall variability in Madagascar and the western Indian Ocean needs to be taken into account when considering water resource management under a future warming climate.
Introduction
Tropical Indian Ocean warming in the 20th century has accelerated since the late 1970s, affecting rainfall patterns and intensity across much of the western Indian Ocean and the adjacent landmasses of eastern and southern Africa (Richard et al., 2000; Funk et al., 2008). As both these regions depend heavily on regular rainfall for food production and ecosystem sustainability (Fleitmann et al., 2007), the uncertainty in the rainfall response to accelerated warming of the Indian Ocean is a serious socioeconomic issue of global importance (Funk et al., 2008). To fully assess this response it is necessary to identify the long-term natural rainfall patterns, yet we currently lack an understanding of the major drivers of natural decadal rainfall variability in the Indian Ocean and of their regional synergy with global warming (Cane, 2010). Some evidence indicates that decadal and interdecadal South African rainfall is associated with the El Niño-Southern Oscillation (ENSO), owing to shifting tropical temperature troughs in response to large-scale changes in Indo-Pacific sea surface temperature (SST) and sea level pressure (SLP) (Reason and Rouault, 2002). Since rainfall patterns are sensitive to SST change, which includes both natural internal variability and anthropogenic forcing, we investigate here the natural multidecadal and interdecadal modulation of Indian Ocean rainfall/river runoff in response to Indo-Pacific SST variability.
The Pacific Decadal Oscillation (PDO) is a major mode of climate variability (Mantua et al., 1997). Positive PDO phases are characterised by lower than average SST in the central midlatitude Pacific and warm anomalies along the northern and eastern margins and south of 30° N. The PDO is in part remotely forced from the Tropics (Schneider and Cornuelle, 2005) and is responsible for strong multidecadal (50-70 yr) (Minobe, 1997) and interdecadal (IPO; 17-28 yr) (Meehl and Hu, 2006) Pacific oscillations in SST. The PDO is considered the leading mode of North Pacific SST variability, defined by instrumental data for the past 120 yr (Mantua et al., 1997), and is recognised in extended proxy time series, e.g. tree-ring records of rainfall in NE Asia (D'Arrigo and Wilson, 2006).
The IPO is often referred to as a Pacific-wide manifestation of the PDO (Power et al., 1999). The IPO is known to modulate ENSO over eastern Australia, whereby negative phases increase the intensity and frequency of wet La Niña events (Power et al., 1999; Kiem et al., 2003; Kiem and Franks, 2004; Verdon et al., 2004). As the PDO and IPO indices are highly correlated (correlation = 0.86), Henley et al. (2011) refer to them collectively as the IPO-PDO phenomenon. However, significant differences exist between the published IPO-PDO reconstructions that extend beyond the 1900s (Biondi et al., 2001; D'Arrigo et al., 2001; Gedalof and Smith, 2001; MacDonald and Case, 2005; D'Arrigo and Wilson, 2006; Shen et al., 2006; Henley et al., 2011). Henley et al. (2011) suggest this is because of uncertainties associated with the paleo-data itself and its interpretation, nonlinearities and errors in the original physical data analysis, nonstationarity of the proxy/climate relationship, and/or the different levels of explained variance between the various proxies at various locations.
Given the significant differences between published IPO-PDO reconstructions, the persistence of the PDO pre-1900 has been debated (Di Lorenzo et al., 2008; Henley et al., 2011). It has been suggested that no well-defined coupled ocean-atmosphere "mode" of variability exists in the Pacific on decadal to interdecadal timescales, since paleoclimate records conflict and instrumental records are too short to provide a robust assessment (Biondi et al., 2001; Gedalof et al., 2002; Schneider and Cornuelle, 2005). Schneider and Cornuelle (2005) suggest that the PDO is not itself a mode of variability but a blend of three phenomena. Nevertheless, the North Pacific and the equatorial Pacific do vary on interdecadal to multidecadal timescales, and the manifestation of this Pacific Decadal Variability (PDV) and its influence on surrounding regions need to be better understood (Schneider and Cornuelle, 2005).
Mounting evidence indicates that the PDO, or PDV more generally, has teleconnections extending to the Indian Ocean (Cole et al., 2000; Crueger et al., 2009). The positive PDO phase corresponds to warm Indian Ocean SST anomalies (Deser et al., 2004), thought to exceed the SST anomalies associated with ENSO (Krishnan and Sugi, 2003), particularly in the southwestern Indian Ocean (Fig. 2a) (Meehl and Hu, 2006). While it is evident that changing rainfall patterns over Australia (Power et al., 1999; Arblaster et al., 2002; Verdon et al., 2004; Meinke et al., 2005) and North America (Mantua et al., 1997; Minobe, 1997) respond to the PDO, links to rainfall in southeastern Africa and the western Indian Ocean have only been suggested (Deser et al., 2004; Zinke et al., 2008). In this study we investigate the primary modulators of Madagascar (SW Indian Ocean) rainfall on multidecadal and interdecadal timescales using coral cores.
Massive corals such as Porites spp. offer century-long geological archives locked within their skeletons. Furthermore, corals residing close to river systems can record temporal variability in soil and sediment erosion, yielding highly resolved and continuous proxy records of changing terrestrial impacts on the coastal ocean (McCulloch et al., 2003; Fleitmann et al., 2007; Lough, 2007, 2011a) (Appendix A). Luminescence in banded corals is indicative of past humic acid runoff from river discharge (Isdale, 1984; Susic and Boto, 1989; Susic et al., 1991; Matthews et al., 1996; Isdale et al., 1998; Wild et al., 2000; Grove et al., 2010, 2012) (Appendix A), while skeletal Ba/Ca serves as an indicator of past suspended sediment runoff (Alibert and McCulloch, 1997; McCulloch et al., 2003; Sinclair and McCulloch, 2004; Maina et al., 2012). Here we present up to 300 yr of monthly resolved proxy records of soil erosion from four giant Porites spp. colonies growing in two coastal marine catchments of eastern Madagascar (Fig. 1) (Appendix A). Significant temporal correlations of Ba/Ca with luminescence have previously been observed in studies using the same coral cores analysed here (Grove et al., 2010, 2012). In addition to the runoff proxies, Sr/Ca and Mn/Ca data are presented in this study: coral skeletal Sr/Ca is an indicator of SST (Beck et al., 1992; Corrège, 2006; Alibert and McCulloch, 1997), and Mn/Ca is used as an indirect indicator of ash fallout from slash-and-burn deforestation (Abram et al., 2003).
Research area and climate setting
Coral cores were taken from Antongil Bay in NE Madagascar, which is surrounded by one of the country's largest remaining rainforests (Birkinshaw and Randrianjanahary, 2007). Air temperature and rainfall in Antongil Bay were monitored for the period 1992 to 1996 (Kremen, 2003). Antongil Bay is characterised by an August-December cold-dry season and a January-July warm-wet season. Air temperatures peak in December and January and are lowest between July and September. Rainfall is highest between January and April and lowest between September and November (Kremen, 2003; Jury et al., 1995). The annual average precipitation at Andranobe (coral site ANDRA) was 6049 mm (1 SD = 979 mm) between 1992 and 1996. River discharge is highest between January and April, reaching lows in October and November (Gerten et al., 2008).
Coral sampling and analysis
Three live corals, MAS1, MAS3 and ANDRA, were drilled in March 2007 from Antongil Bay, NE Madagascar, dating back to 1904, 1880 and 1914, respectively (Figs. 1 and S1; Table S1 in the Supplement) (Grove et al., 2010). Another live coral, MASB (15°30.566′ S; 49°45.437′ E), was drilled in October 2008, dating back to 1708 (Figs. 1 and S2; Table S1 in the Supplement). Three of the corals used for this study, MAS1, MAS3 and MASB, are directly influenced by a major river draining into the Bay, the Antainambalana (Fig. 1) (Grove et al., 2012). Its source lies 1450 m above sea level and its watershed covers an estimated 4000 km². The fourth coral, ANDRA, is located 30 km south of MAS1/3/B and 7 km from a much smaller river, the Ambanizana, which has a watershed of 160 km² (Grove et al., 2012).
All cores were sectioned into 7 mm slabs, cleaned with sodium hypochlorite (NaOCl, 10-13 % reactive chloride; Sigma-Aldrich Company) for 24 h to remove residual organics that would quench luminescence (Nagtegaal et al., 2012), and subsequently scanned under UV light to measure continuous spectral luminescence ratios (G/B; Green/Blue) (Grove et al., 2010). MASB was cleaned for a further 24 h with NaOCl to remove resistant organic contaminants that remained after a single cleaning step. Corals were dated by counting density and luminescent bands down core (Hendy et al., 2003; Grove et al., 2010). Annual growth bands were visualised by X-radiograph-positive prints, and the growth axis of the coral slab was defined as the line normal to these bands (Helmle et al., 2011). All four corals used in this study displayed wide luminescent bands (Figs. S1 and S2 in the Supplement), reflecting the broad seasonal cycle in both precipitation and river discharge characteristic of the study site (Grove et al., 2012; Maina et al., 2012). Age models for all corals are based on the seasonal cycles in luminescence (G/B) (Figs. S1 and S2 in the Supplement). All coral age models for the luminescence (G/B) data were anchored on the annual G/B minima, which correspond to October, the lowest (driest) value in any given year (Grove et al., 2010). For each high-resolution G/B data point, we assigned a date between the October anchor points using AnalySeries 2.0 (Paillard et al., 1996). The luminescence (G/B) data were then converted to a monthly timescale (12 samples per year) using a further interpolation step between the October minima (Paillard et al., 1996), October being the driest month in the Antongil Bay region (Kremen, 2003; Grove et al., 2010).
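A minimal sketch of this anchor-point age modelling, assuming evenly spaced months between consecutive October minima, is given below; it mimics the AnalySeries workflow in spirit only, and the anchor indices, data and function names are synthetic.

```python
import numpy as np

def monthly_age_model(gb, october_idx, start_year):
    """Interpolate a down-core luminescence (G/B) series onto a monthly grid.

    october_idx: sample indices of the annual G/B minima (assigned to October),
    used as anchor points between which 12 evenly spaced months per year are
    interpolated. start_year is the decimal year of the first anchor.
    """
    gb = np.asarray(gb, dtype=float)
    dates, values = [], []
    for k in range(len(october_idx) - 1):
        i0, i1 = october_idx[k], october_idx[k + 1]
        # 12 monthly sample positions between two consecutive October anchors
        target = np.linspace(i0, i1, 13)[:-1]
        dates.extend(start_year + k + np.arange(12) / 12.0)
        values.extend(np.interp(target, np.arange(len(gb)), gb))
    return np.array(dates), np.array(values)

# Toy example: three years of synthetic data with unequal growth rates.
depth_series = np.sin(np.linspace(0, 6 * np.pi, 110)) + np.random.normal(0, 0.05, 110)
anchors = [5, 40, 70, 105]     # hypothetical October minima picked down core
t, gb_monthly = monthly_age_model(depth_series, anchors, start_year=2004.75)
print(t[:5], gb_monthly[:5])
```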
The precision of the G/B ratio was obtained from the top 10.5 cm of the ANDRA coral core by taking replicate measurements (5-fold) and calculating the standard deviation as a proportion of the mean G/B value. The median error was 1.56 % (−0.12/+4.54), which translates to an absolute median error of 0.015 (−0.001/+0.043). Luminescence imaging of the MAS3 core revealed some dark stains in the older sections of the core; as a precaution, its luminescence data therefore start in 1930. A composite G/B record was created by normalising the four coral cores over a common period and averaging the records together. This reduces the local signals associated with each core, allowing us to assess the regional runoff response. The composite G/B record was used when applying record segmentation analysis (Supplement) and assessing relationships with observational climate data (Appendix A). Laser-ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) profiles were measured to analyse the trace element ratios Sr/Ca, Ba/Ca and Mn/Ca at 40 µm intervals on the coral cores MAS1 and MAS3 at ANU Canberra (Sinclair et al., 1998; Fallon et al., 2002). Data were first smoothed using a 10-point running mean to reduce the influence of outliers, followed by a 10-point stepped mean to reduce data volume. This procedure reduced the sampling resolution from 40 µm to about 200 µm per sample point. At an average growth rate of about 1 cm per year for Porites spp., this resulted in a sampling resolution of 50 samples per year. To determine accuracy, a NIST 614 glass standard and a pressed coral standard were used (Fallon et al., 1999, 2002). Daily and long-term (5-month) reproducibility was monitored by repeated measurements of the pressed coral standard and an in-house coralline sponge standard (Fallon et al., 1999); the daily and long-term reproducibility was 1.6 % and 3.3 %, respectively, and the analytical internal precision for Ba/Ca was 4.3 % RSD (Fallon et al., 1999). Further details on the methodology and standards are available in Fallon et al. (1999, 2002).
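The two-stage smoothing can be expressed compactly; the sketch below applies a 10-point running mean followed by a 10-point non-overlapping block mean. The exact windowing that yields the reported ~200 µm spacing is not specified in the text, so this is one plausible reading.

```python
import numpy as np

def running_then_stepped_mean(x, win=10):
    """10-point running mean to damp outliers, then a 10-point stepped
    (non-overlapping block) mean to thin the series, as described above."""
    x = np.asarray(x, dtype=float)
    kernel = np.ones(win) / win
    smooth = np.convolve(x, kernel, mode="valid")      # running mean
    n = (len(smooth) // win) * win
    return smooth[:n].reshape(-1, win).mean(axis=1)    # stepped (block) mean

raw = np.random.normal(8.8, 0.4, 2000)   # e.g. Ba/Ca values at 40 um spacing
print(running_then_stepped_mean(raw).shape)
```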
The LA-ICP-MS profiles cover the entire length of MAS1 and, for MAS3, the period since 1935 (1935-2006) (Table S1). Age models for Sr/Ca, Mn/Ca and Ba/Ca for all corals are based on Ba/Ca anchor points that correspond to October, the lowest (driest) value in any given year (Kremen, 2003; Grove et al., 2010, 2012). In a first interpolation step (based on the 50 samples per year resolution), we assigned a date to each data point using AnalySeries 2.0 (Paillard et al., 1996). In a second step, we interpolated the high-resolution data to a monthly timescale.
In this study we assess the coral proxy records together with the available SST and rainfall data. For SST we used the gridded ERSST (v.3) dataset (Smith et al., 2008). For rainfall we used data from the Climate Research Unit (CRU) at the University of East Anglia (CRU TS3) (Mitchell and Jones, 2005), CMAP (Xie and Arkin, 1997), the NCEP/NCAR reanalysis (Kalnay et al., 1996), CAMS OPI (Janowiak and Xie, 1999), and the longest continuous precipitation record from Madagascar (Antananarivo; WMO station 67083). Singular spectrum analysis (SSA) (Ghil et al., 2002), coherence analysis, cross-wavelet analysis (Grinsted et al., 2004) and record segmentation were the primary statistical methods applied to the time series presented here.
River runoff reconstructions using coral luminescence
We measured the G/B ratio in corals to determine seasonally resolved, soil-derived humic acid runoff resulting from hinterland rainfall (Grove et al., 2010) (Appendix A). A spatial assessment of coral cores from Antongil Bay revealed that absolute coral G/B values and their seasonality are related to the discharge rates of individual rivers (Grove et al., 2012). All coral records used in this study shared a significant amount of variance on annual timescales, with the only exception being MAS3 G/B with MASB G/B (Table 1; Fig. A1). When the four-coral MAS G/B composite record is correlated with different rainfall datasets, clear significant positive relationships are observed in the Antongil Bay region (Fig. A2). Although the different rainfall datasets show varying degrees of correlation with G/B (Fig. A2), all are significant, the strongest relationship being with the NCEP/NCAR reanalysis (Kalnay et al., 1996) (R = 0.421; P = 0.0014; N = 50). A recent hydrological model-coral proxy comparison for the Antainambalana watershed revealed that rainfall, river discharge and sediment runoff explain the variance in the coral proxies of terrestrial river runoff (Maina et al., 2012). However, correlations of MAS1 G/B with a regional rainfall dataset located 200 km from the coral site showed a low yet statistically significant relationship (Grove et al., 2010). We suggest that this relationship was dampened by the well-documented slash-and-burn deforestation NE Madagascar experienced in the mid-century during the social uprising (Green and Sussman, 1990; Harper et al., 2007). As G/B is an indicator of humic acid runoff rather than rainfall itself, G/B variability can also reflect deforestation-induced erosion.
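A simple way to build such a composite, assuming z-score normalisation over the overlap period (the text does not specify the normalisation used), is sketched below with synthetic records.

```python
import numpy as np

def composite_record(cores, common):
    """Average several coral G/B records after z-scoring each one over a
    common interval, damping core-specific (local) signals.

    cores: list of equal-length annual series (NaN where a core has no data);
    common: boolean mask selecting the overlap period used for mean/std."""
    normed = []
    for core in cores:
        core = np.asarray(core, dtype=float)
        mu, sd = np.nanmean(core[common]), np.nanstd(core[common])
        normed.append((core - mu) / sd)
    return np.nanmean(np.vstack(normed), axis=0)

# Toy example: three overlapping records sharing a 50-yr common period.
years = np.arange(1900, 2009)
common = (years >= 1940) & (years <= 1990)
cores = [np.random.normal(0.5 * i, 1.0, years.size) for i in range(3)]
print(composite_record(cores, common)[:5])
```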
Pre-anthropogenic climatic modulation of runoff
We investigated pre-anthropogenic climatic modulation of humic acid runoff by analysing the 300-yr coral core record, dating from November 1708 to November 2008 (MASB; Fig. 1). To eliminate any potential anthropogenic impact on past humic acid runoff, we first focused on the period 1709-1920 (Fig. 2). The annual mean time series of the G/B data (Fig. 2a) and its spectrum (Fig. 2c) identified frequencies at centennial, multidecadal and interdecadal timescales; in particular, the centennial and multidecadal spectral peaks were above the 95 % confidence level. We next removed the long-term centennial variability from the time series to focus on the shorter, multidecadal and interdecadal timescales that are the subject of the present study. The centennial variability was reconstructed using the first and second modes of the singular spectrum analysis (SSA), shown by the low-frequency solid line in Fig. 2a. The residual time series (Fig. 2b) showed no trend or long-term centennial variability, with its periodicity concentrated in the 50-70 yr and 20-30 yr bands (Fig. 2d), similar to the dominant periodicities associated with Pacific decadal variability (PDO and IPO), which is known to influence Indian Ocean SST (Krishnan and Sugi, 2003; Cole et al., 2000; Deser et al., 2004; Crueger et al., 2009). The multidecadal peak of the residual time series (Fig. 2b) remained above the 95 % confidence level after removal of the long-term centennial variability, whereas the interdecadal (20-30 yr) spectral peak remained below it.
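For readers unfamiliar with SSA, the sketch below shows a basic implementation (trajectory-matrix embedding, SVD, anti-diagonal averaging) used to split a synthetic series into a slow "centennial" part (leading two modes) and a residual; the window length and test signal are illustrative, not the study's settings.

```python
import numpy as np

def ssa_reconstruct(x, window, modes):
    """Basic singular spectrum analysis: embed the series in a trajectory
    matrix, take its SVD, and reconstruct the chosen eigenmodes by
    anti-diagonal averaging."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])  # window x k
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    recon = np.zeros(n)
    counts = np.zeros(n)
    for m in modes:
        comp = s[m] * np.outer(u[:, m], vt[m])   # rank-1 component
        for i in range(window):
            recon[i:i + k] += comp[i]
            counts[i:i + k] += 1
    return recon / counts

# Synthetic annual series, 1709-1920: slow cycle + 60-yr cycle + noise.
years = np.arange(1709, 1921)
series = (0.3 * np.sin(2 * np.pi * (years - 1709) / 150)
          + 0.1 * np.sin(2 * np.pi * years / 60)
          + np.random.normal(0, 0.05, years.size))
centennial = ssa_reconstruct(series, window=70, modes=[0, 1])
residual = series - centennial      # multidecadal/interdecadal part
print(residual[:5])
```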
The multidecadal and interdecadal variability in MASB G/B between 1709 and 1920 explained 7 % and 4 % (a total of 11 %) of the total variability, respectively. This was considerably lower than the long-term centennial variability of 55 %. When the centennial variability was subtracted from the record (Fig. 2), the 50-70 yr band explained 18 % of the residual time series variability (Fig. 2b) and the 20-30 yr band 9 % (a total of 27 %). As the IPO-PDO is known to influence Indian Ocean SST (Krishnan and Sugi, 2003; Cole et al., 2000; Deser et al., 2004; Crueger et al., 2009), we investigated the relationship of MASB G/B with the IPO-PDO on multidecadal and interdecadal timescales. Since the instrumental PDO index (Mantua et al., 1997) only dates back to 1880, the 1709-1920 MASB G/B time series was compared to a number of PDO reconstructions to further investigate decadal variability in runoff (Figs. 3 and S3). In agreement with Shen et al. (2006) and Henley et al. (2011), we observed a large spread between the individual PDO reconstructions. Some indicated stronger power related to the interdecadal component of the PDO (IPO), and others to the multidecadal component (Fig. S3).
For the MASB G/B time series, we observed the most coherence with the PDO reconstructions from Asia (D'Arrigo and Wilson, 2006; Shen et al., 2006), on both interdecadal and multidecadal timescales (Figs. 3, 4 and S3). The dominant spectral peak of the D'Arrigo and Wilson (2006) index was between 20-40 yr, whereas that of the Shen et al. (2006) index was between 50-70 yr (Fig. S3). The MASB G/B time series also showed similarities with the Mann et al. (2009) SST reconstruction and with the combined PDO reconstruction of Henley et al. (2011) (Figs. 3 and S3). When considering the North American based PDO reconstructions specifically (Biondi et al., 2001; D'Arrigo et al., 2001; Gedalof and Smith, 2001; MacDonald and Case, 2005), the Madagascar runoff record showed an anti-phase relationship or a delayed response between 1709 and 1850 (Figs. 3 and S3). This is particularly highlighted in the MacDonald and Case (2005) record (Figs. 3 and S3), and is also apparent, though less pronounced, in the Biondi et al. (2001) record (Figs. 3 and S3).
To investigate time-dependent frequency relationships between MASB G/B and the reconstructed PDO indices we applied a wavelet coherence analysis (Fig. 4). Similar to the visual time series comparison, clear multidecadal and interdecadal relationships were observed between MASB G/B and the Asian PDO reconstructions (Fig. 4d, f; D'Arrigo and Wilson, 2006; Shen et al., 2006), whereby an in-phase relationship existed on multidecadal timescales. On interdecadal timescales, an in-phase relationship was observed between MASB G/B and the D'Arrigo and Wilson (2006) PDO reconstruction, while the Shen et al. (2006) reconstruction indicated an anti-phase relationship in the early part of the record (1709-1800). Other PDO reconstructions showed rather patchy coherence, with the exception of the MacDonald and Case (2005) index, which again indicated a delayed response on multidecadal timescales, such that the PDO led the MASB G/B record. However, it should be noted that such records are still too short to fully resolve coherence at multidecadal timescales, and that wavelet coherence should be interpreted carefully when hypothesising exact phase relationships. Therefore, to validate the relationship between the PDO and MASB G/B we applied another approach, by means of filtering the time series.
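The analysis above uses the wavelet coherence of Grinsted et al. (2004). As a rough, stationary stand-in for readers without that toolkit, ordinary Welch magnitude-squared coherence can be computed with SciPy; unlike the wavelet method, this resolves neither time-dependence nor phase relationships:

```python
# A simple spectral-coherence sketch, NOT the wavelet coherence used in the
# paper: Welch magnitude-squared coherence between two standardized annual
# anomaly series (e.g. the coral residual and a PDO reconstruction).
import numpy as np
from scipy.signal import coherence

def band_coherence(x: np.ndarray, y: np.ndarray, fs: float = 1.0):
    """Frequencies (cycles/yr) and squared coherence, Welch-averaged."""
    f, cxy = coherence(x, y, fs=fs, nperseg=64)
    return f, cxy

# Coherence near f = 1/60 cycles/yr would correspond to the 50-70 yr band.
```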
As the clearest in-phase coherence of the coral runoff record was observed with both the NE Asia tree-ring based PDO reconstruction (D'Arrigo and Wilson, 2006) and the eastern China flood/drought PDO reconstruction (Shen et al., 2006) (Fig. 5a, grey boxes), we focus on these time series for the remainder of this study. We applied a 50-70 yr band-pass filter to the data, as this is the defined frequency of the PDO (Minobe, 1997) (Fig. 5c). The D'Arrigo and Wilson (2006) record showed near-identical changes in phase and timing with the coral runoff record for over two centuries in the multidecadal frequency band (Fig. 5c); the two then diverged from each other after the 1920s, when considering the entire 1708-2008 G/B period (Fig. 5c). Coherent temporal changes in signal timing and phase between both records showed that positive phases of the PDO corresponded to positive runoff anomalies (Fig. 5c). Comparing the filtered Shen et al. (2006) PDO reconstruction with the coral runoff record revealed similar changes in timing and phase for over two centuries in the multidecadal frequency band (Fig. 5c). However, again when considering the entire 1708-2008 G/B period, the two records diverged from each other after the 1920s, similar to the D'Arrigo and Wilson (2006) record (Fig. 5c).
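The 50-70 yr band-pass step can be sketched as a zero-phase Butterworth filter on the annual series; the filter order is our assumption, as the paper does not state its filter design:

```python
# A minimal band-pass filtering sketch with SciPy, isolating the 50-70 yr
# PDO band (Minobe, 1997) in an annual series.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_50_70yr(x: np.ndarray, fs: float = 1.0) -> np.ndarray:
    """Zero-phase Butterworth band-pass between 1/70 and 1/50 cycles/yr."""
    b, a = butter(3, [1.0 / 70.0, 1.0 / 50.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

# Usage: gb_band = bandpass_50_70yr(gb_annual - gb_annual.mean())
```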
Post-1920 climatic modulation of runoff
To further investigate post-1920 PDO modulation of eastern Madagascar soil runoff, we also analysed the G/B records of additional corals in combination with high-resolution geochemistry. Together, they allowed us to decouple the three major components influencing eastern Madagascar soil runoff: human land-use changes and natural decadal climate variability interacting with Indian Ocean warming.
Long-term changes in runoff appear in the 11 yr running mean of both G/B and Ba/Ca in each coral (MAS1, Fig. 6a, c; MAS3, Fig. 6d, f). An 11 yr running mean was applied to the data in order to remove high-frequency noise and highlight long-term trends (Fig. 6). Most pronounced is the continuous increase in humic acid runoff (G/B) since the mid-1970s and in sediment runoff (Ba/Ca) from the mid-1950s. In recent years both proxies have increased to maximum values, seemingly in concert with rising south central Indian Ocean SST (Fig. 6).
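For reference, the 11 yr running mean behind Fig. 6 is a one-liner with pandas; `series` is a placeholder for any annually resampled proxy record:

```python
# The centred 11 yr running mean used for the long-term trend comparison.
import pandas as pd

def running_mean_11yr(series: pd.Series) -> pd.Series:
    """Centred 11-year moving average; endpoints become NaN by design."""
    return series.rolling(window=11, center=True).mean()
```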
The longest continuous precipitation record from Madagascar (Antananarivo) is also in agreement with the two coral Ba/Ca records and the south central Indian Ocean SST dataset, whereby rainfall increased from the mid-1950s until the record ends in 1987 (Fig. 6b, e). Consequently, increasing Ba/Ca appears tightly coupled to rising SST and precipitation (Fig. 6). However, unlike Ba/Ca, a reduced coherence between humic acid runoff (G/B) and Indian Ocean SST is observed in the mid-20th century (Fig. 6). This suggests that factors other than rainfall may be involved in the large-scale erosion of humic acids during this period. Discrepancies between G/B and SST occur in both cores analysed for geochemistry for the periods 1945-1955 and 1966-1980, whereby G/B increases while SST decreases or remains stable. These periods are also marked by enhanced coral Mn/Ca above the seasonal background (Figs. 6 and B1; Appendix B).
Mn is an indicator of biological activity in seawater (Abram et al., 2003; Wyndham et al., 2004). Fallon et al. (2002) and Alibert et al. (2003) proposed two possible mechanisms for the seasonal cycle in Mn/Ca: an increase in the photoreductive dissolution of suspended particulate Mn oxides, which increases in spring with increasing solar radiation (Fallon et al., 2002; Alibert et al., 2003); or, alternatively, a diagenetic release of Mn at the seawater-sediment interface as a result of reducing conditions induced by decaying organic matter produced in spring and summer (Alibert et al., 2003). The latter process is what we infer from the Mn spikes observed here, which we relate to intense deforestation periods (Abram et al., 2003; Wyndham et al., 2004).
High Mn levels are associated with decaying organic matter following ash fallout from wildfires, which promote phytoplankton blooms (red tides). As the organic matter decays with time it produces reducing conditions, subsequently increasing seawater Mn concentrations (Abram et al., 2003; Wyndham et al., 2004). Indeed, the pronounced increase in Mn/Ca testifies to the well documented intense slash-and-burn deforestation for upland rice cultivation between 1950 and 1980 (Green and Sussman, 1990; Harper et al., 2007), associated with the economic collapse of Madagascar and the return to subsistence agriculture. In the MAS1 and MAS3 records, G/B increases at approximately the same time as Mn/Ca during this period, which is consistent with the massive addition of organic matter after the documented periods of high slash-and-burn deforestation (Green and Sussman, 1990; Harper et al., 2007). Segmentation analysis (Webster, 1973, 1980) of the coral composite G/B record (MAS1, MAS3, MASB and ANDRA) further highlights these mid-20th century human deforestation periods, as well as climatic shifts associated with the IPO-PDO (Fig. C1; Appendix C; Supplement).
The coupling between increasing runoff and south central Indian Ocean warming is evident after the prominent climate shift around 1976/1977, when both global mean temperatures and runoff strongly increased (Fig. 6) (Meehl et al., 2009). As Mn is also associated with seasonal soil runoff through erosion (Lewis et al., 2007), we observe similar increasing linear trends in the G/B and Mn/Ca ratios (Fig. B1; Appendix B). Because G/B is a direct indicator of soil erosion and not rainfall, we attempted to remove the deforestation effect using the available coral geochemical records (MAS1 and MAS3; MAS1/3 composite) by subtracting the normalised MAS1/3 Mn/Ca composite record from the normalised MAS1/3 G/B composite record, prior to singular spectrum analysis (Figs. 7 and B1; Appendix B). This also removed the long-term erosion trend, resulting in a MAS1/3 G/B-Mn/Ca composite record that reflects the natural rainfall variability, now increasing from the mid-1950s in agreement with the SST and Ba/Ca data (Figs. 6 and 7). Spectral analysis of the monthly instrumental PDO index (1880-present) (Mantua et al., 1997) and the coral MAS1/3 G/B-Mn/Ca composite shows strong power in the interdecadal and multidecadal bands (Fig. B2), which is in agreement with the pre-1920 frequency analysis of MASB G/B and the Asian based PDO indices (D'Arrigo and Wilson, 2006; Shen et al., 2006) (Figs. 3 and S3). The tight temporal relationship of the G/B-Mn/Ca composite time series with the PDO index shows that a positive (negative) phase is associated with wet (dry) conditions (Fig. 7). Interestingly, the G/B-Mn/Ca composite correlates with typical positive PDO-like conditions in global SSTs, coupled with a positive correlation with south central Indian Ocean SST (Figs. 7 and 8). Also, in the Sr/Ca temperature proxy record of MAS1, a positive PDO phase is associated with a warm SST anomaly (Fig. 8), pointing to a typical SST pattern found in the Indian Ocean in response to Pacific decadal forcing (Krishnan and Sugi, 2003; Cole et al., 2000; Deser et al., 2004; Crueger et al., 2009). The temporal alignment of all records (Sr/Ca, Ba/Ca, G/B-Mn/Ca) with the PDO (Fig. 8) therefore argues for Pacific modulation of Madagascar river runoff and rainfall on multidecadal timescales for at least the past 300 yr.
IPO-PDO climatic modulation of runoff
Madagascar is an iconic example of the extreme environmental impacts human deforestation and habitat destruction have on soil runoff and land degradation (Green and Sussman, 1990; Harper et al., 2007). Human activity is also reported for two 200-300 yr erosion records from Kenya (Malindi), based on coral Ba/Ca, that show a simultaneous major shift in base-level runoff at 1906 ±3 yr and 1908 ±5 yr (Fleitmann et al., 2007). This 1908 shift in soil erosion was attributed predominantly to a change from traditional subsistence agriculture to intensive European land-use practices introduced by the British settlers. The Kenya coral records also indicate accelerated soil erosion between the late 1940s and early 1950s and in the late 1970s following periods of intense drought, which occur simultaneously with shifts in the Madagascar coral records presented here. Further, these multidecadal runoff changes co-occur with the 1905, 1947 and 1976 shifts in the PDO and IO SST, suggesting a possible link to Pacific modulation of Kenyan soil erosion by rainfall. For the same (Malindi) coral, SST was reconstructed using δ18O as a proxy (Cole et al., 2000). This was shown to be strongly linked with Pacific decadal SST variability and North Pacific SLP, supporting the importance of Pacific decadal forcing on western Indian Ocean climate (Cole et al., 2000; Cobb et al., 2001; Deser et al., 2004; D'Arrigo et al., 2005).

Fig. 9. Spatial correlation of the instrumental PDO index (Mantua et al., 1997) with global annually averaged (May to April) rainfall data produced by the Climate Research Unit (CRU) at the University of East Anglia (CRU TS3) (Mitchell and Jones, 2005). Correlations are shown over the globe (a) and for central and southern Africa (b). Colour shading represents confidence of 95 % and greater. Red shading indicates positive correlations and green negative correlations. Note the positive correlation of rainfall with the PDO over Madagascar and the negative correlations over eastern Australia and the northern Rocky Mountains, North America. Correlations were computed at http://climexp.knmi.nl/.
The PDO/river runoff relationship in Great Barrier Reef corals and east Australian river gauges is opposite to that in Madagascar, as the negative PDO phase (i.e. 1947 to 1976) is linked with higher river discharge, and vice versa for the positive PDO phase (Lough, 2007; McGowan et al., 2009). Correlating precipitation with the principal component time series of the IPO (Meehl and Hu, 2006) and the PDO (Felis et al., 2010) shows a negative response over eastern Australia and southern Africa, and a positive response in eastern Madagascar and eastern Africa (Meehl and Hu, 2006; Felis et al., 2010). A spatial correlation of the PDO and global rainfall supports these results, with a negative correlation shown in southern Africa, eastern Australia (Lough, 2007; McGowan et al., 2009) and the northern Rocky Mountains (St. Jacques et al., 2010), as well as a positive correlation in Madagascar (Fig. 9). Since Indian Ocean SST is sensitive to the PDO (Krishnan and Sugi, 2003), and rainfall is linked to SST (Goddard and Graham, 1999), runoff variability is ultimately linked to Pacific Ocean decadal variability. During positive IPO-PDO phases, higher mean SST is responsible for enhanced atmospheric convection over the Indian Ocean, which in turn drives anomalous subsidence over southern Africa and eastern Australia (Lough, 2007; Goddard and Graham, 1999; Richard et al., 2000; Hoerling et al., 2006; McGowan et al., 2009). The robust Indian Ocean SST signal associated with decadal Pacific SST and SLP variability is most likely responsible for shifting the hydrological balance in the Indian Ocean, which is detected in the G/B and Ba/Ca records from eastern Madagascar. However, the specific mechanisms responsible for such teleconnections are beyond the scope of this paper and require further investigation.
Clear differences are observed between the PDO reconstructions before the 20th century. Henley et al. (2011) ascribe these differences to a number of possible reasons, including (1) uncertainties associated with the paleo-data itself and its interpretation, (2) nonlinearities and errors in the original physical data analysis, (3) nonstationarity of the proxy/climate relationship, and/or (4) the different levels of explained variance between the various proxies at various locations. Schneider and Cornuelle (2005) suggest that the PDO may not itself be a mode of variability but rather a blend of three phenomena. Given that instrumental records are too short to provide a robust assessment and that paleoclimate records conflict regarding timescales (Biondi et al., 2001; Gedalof et al., 2002), we cannot rule out the possibility that no well-defined coupled ocean-atmosphere "mode" of variability exists in the Pacific on decadal to multidecadal timescales pre-1900. What is clear, however, is that a strong teleconnected response does exist between eastern Asia and the southwest Indian Ocean on multidecadal and interdecadal timescales extending back beyond the 1900s, likely driven by SST variability in the northern Pacific and southwestern Indian Ocean. The MASB coral luminescence record highlights this through its strong continuous coherence with the Asian PDO reconstructions.
The long-term coral data presented here suggest that southwest Indian Ocean rainfall is indirectly linked to Pacific decadal variability, transmitted through the Asian monsoon circulation. Consequently, for the upcoming decades, rainfall in eastern Madagascar is expected to decrease, as the PDO is currently in a transition from a positive to a negative phase. Elsewhere, IPO-PDO teleconnected regions with weaker rains in recent decades should experience more precipitation, i.e. eastern Australia and southern Africa (Cai and van Rensch, 2012). It remains a major milestone for future research to unravel if and when the projected anthropogenic warming of the Indian Ocean (Forster et al., 2007) will dominate rainfall over the inherent multidecadal component. The data presented here illustrate this interplay as an acceleration of rainfall and erosion following the prominent 1976/1977 climate shift (Meehl et al., 2009), which is related to both anthropogenic and multidecadal forcing. However, we cannot rule out that mid-century deforestation may have also enhanced the observed post-1976 acceleration in G/B (Maina et al., 2012).
Coral luminescence and G/B
Changing luminescent intensities observed in coral cores were originally thought to be caused by the skeletal incorporation of soil-derived humic acids (Isdale, 1984; Susic and Boto, 1989; Matthews et al., 1996; Isdale et al., 1998; Wild et al., 2000). Variability in coral skeletal density and architecture was later proposed as the cause, since banded luminescence is also observed in corals far from terrestrial inputs (Barnes and Taylor, 2001, 2005). However, a combination of both humic acids and skeletal density/architecture likely explains the observed changing luminescent intensities in coral skeletons. To accurately reconstruct river runoff, deconvolution of the two fractions is required (Grove et al., 2010).
Spectral luminescence scanning (SLS) is a technique that separates the intensity of coral luminescence emissions into three spectral domains: red, green, and blue (RGB) (Grove et al., 2010). High-quality, normalised photoluminescence images are generated using SLS, composed of multiple RGB pixels with a linear resolution of 71.4 µm (Grove et al., 2010). As the luminescent emission wavelengths of humic acids (G) are slightly longer than those of aragonite (B), taking the green/blue (G/B) ratio identifies the amount of humic acids locked within the coral relative to the skeletal density (Grove et al., 2010). SLS resolves many density/architectural effects associated with luminescence intensities, such as declining trends in intensity with coral age (Jones et al., 2009; Lough, 2011a, b). For our study site of Antongil bay (NE Madagascar), relationships of G/B with runoff/precipitation have previously been formulated on the basis of comparisons with other runoff proxies (Grove et al., 2010, 2012), modelled river discharge data (Grove et al., 2012; Maina et al., 2012) and/or precipitation data from weather stations located hundreds of kilometres from the study site (Grove et al., 2010). Here, we show that G/B is significantly correlated with four separate rainfall datasets for the Antongil region (Figs. A1 and A2), verifying the precipitation link with coral records.

Fig. A2. Spatial correlations of the G/B composite record with (a) CRU TS3 (Mitchell and Jones, 2005), (b) CMAP (Xie and Arkin, 1997), (c) NCEP/NCAR reanalysis (Kalnay et al., 1996) and (d) CAMS OPI (Janowiak and Xie, 1999). The mean annual average for the optimal correlation with the G/B record has been chosen for individual rainfall datasets. The G/B composite record was composed of MAS1, MAS3, MASB and ANDRA for the common time period of 1930-2006. Only correlations > 95 % significance level are shown in colour. See legend for correlation coefficients. Study area indicated by rectangular box (stippled). Correlations computed at http://climexp.knmi.nl/.
Appendix B Coral Mn/Ca
The coral Mn/Ca record is in and out of phase with the G/B time series on seasonal timescales, as highlighted by MAS1 (Fig. B1). This implies that Mn concentrations, associated with slash-and-burn deforestation, increase in Antongil bay during both the wet and dry seasons. Both G/B and Mn/Ca also have similar linear trends (Fig. B1), indicating that a fraction of Mn is likely flushed into the bay with the soils or sediment (Lewis et al., 2007). To remove the deforestation-induced erosion signal from the G/B records, we attempted a novel approach using the two coral records MAS1 and MAS3. Records were normalised by subtracting the mean and dividing by the standard deviation. As the normalised monthly Mn/Ca and G/B records for both MAS1 and MAS3 had identical standard deviations of 1.0 for the same period, we created a composite G/B and a composite Mn/Ca record based on the two corals (MAS1/3). By subtracting the normalised Mn/Ca composite record from the normalised G/B composite record, we removed the deforestation effect, as well as the long-term runoff trend (Fig. B1), leaving a G/B-Mn/Ca record that primarily shows the natural runoff variability (Figs. 7 and B2).
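A sketch of this correction in Python follows; averaging the two normalised single-core records into the MAS1/3 composite is our assumption about the compositing step, which the text does not spell out:

```python
# Normalise the MAS1/3 composites to zero mean and unit standard deviation,
# then subtract Mn/Ca from G/B; input arrays are assumed monthly and aligned.
import numpy as np

def zscore(x: np.ndarray) -> np.ndarray:
    return (x - x.mean()) / x.std()

def gb_minus_mnca(gb_mas1, gb_mas3, mn_mas1, mn_mas3):
    gb_comp = zscore(0.5 * (zscore(gb_mas1) + zscore(gb_mas3)))
    mn_comp = zscore(0.5 * (zscore(mn_mas1) + zscore(mn_mas3)))
    # Removing the normalised Mn/Ca composite strips both the deforestation
    # signal and the shared long-term erosion trend from G/B.
    return gb_comp - mn_comp
```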
Appendix C Record segmentation analysis
Record segmentation analysis of the coral composite G/B record, which includes MASB, MAS1, MAS3 and ANDRA (Fig. 1), identifies years within the G/B time series that correspond to phase changes in the PDO and south central Indian Ocean SST (CI SST) (Fig. C1; Supplement). Two major shifts are detected in the PDO time series: between 1940 and 1951, towards a negative index, and between 1971 and 1985, towards a positive index. The timing of these major shifts is in agreement with PDO multidecadal changes as described in previous studies (Minobe, 1997; Mantua et al., 1997).
The 1940-1951 shift of the PDO is associated with a shift in the composite G/B record and the SST data (Fig. C1). The SST data, however, show a pronounced transition to cooler SST between 1942 and 1952, hence approx. 1-2 yr after the shift in the PDO (Fig. C2). This is most likely an artefact created by the sampling bias in observational data for this period (Gedalof et al., 2002). At the second major shift in the PDO, between 1971 and 1985, the south central Indian Ocean SST shows a transition from 1974 to 1988, whereas G/B does not record this transition, with the exception of a weak signal centred at 1971 (Fig. C1). This is likely explained by a perturbation created by the 1970s deforestation period. In general, all transitions in G/B between 1950 and 1980 are moderate (e.g. in 1956), most likely because of the enhanced deforestation period, which is marked by the highly pronounced Mn/Ca peaks (Figs. 4 and B1). Nevertheless, significant shifts (2 × 10-yr window) in G/B did occur in 1930, 1940, 1945, 1971, and 1994 (Fig. C1a).
The 1994 shift most likely marks the start of a transition to a negative PDO phase on multidecadal timescales (Verdon and Franks, 2006). Minor shifts in the PDO are associated with the interdecadal frequency mode (Interdecadal Pacific Oscillation/IPO) (Fig. C1).
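A simple two-window shift detector in the spirit of this segmentation analysis is sketched below; the Welch t-test over adjacent 10 yr windows is an illustrative criterion, not Webster's (1973, 1980) exact procedure:

```python
# Flag candidate regime shifts by comparing the means of adjacent 10 yr
# windows sliding along an annual series.
import numpy as np
from scipy.stats import ttest_ind

def detect_shifts(x: np.ndarray, years: np.ndarray, w: int = 10,
                  alpha: float = 0.05):
    shifts = []
    for i in range(w, len(x) - w):
        t, p = ttest_ind(x[i - w:i], x[i:i + w], equal_var=False)
        if p < alpha:
            shifts.append((int(years[i]), float(t)))
    return shifts
```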
Fig. 1. Map of the region where coral cores MAS1, MAS3, MASB and ANDRA were drilled. Coral locations (stars) and their corresponding rivers and watersheds (grey shaded areas) are marked accordingly in Antongil Bay. The largest river is the Antainambalana, influencing MAS1, MAS3 and MASB. ANDRA is also influenced by a separate river, the Ambanizana, flowing south-westward into the bay.
Fig. 6. An 11 yr running mean of MAS1 and MAS3 coral G/B (green) compared to the SST anomaly (ERSST v.3) for the southern central Indian Ocean, 5-20° S, 60-90° E (black), since 1904 (a, d), the coral Mn/Ca (red solid; µmol mol⁻¹) (b, e), and the Ba/Ca (blue) (c, f). Note that multidecadal oscillations in G/B and Ba/Ca show high coherence with SST. Higher Mn/Ca ratios identify periods of slash-and-burn deforestation that overprint the climatic control of humic acid runoff. Differences observed between Ba/Ca and G/B are linked to watershed composition. A 360-month low-pass filter of Antananarivo precipitation anomalies (18.80° S, 47.50° E, 1276 m, WMO station code: 67083 ANTANANARIVO/IVATO) is shown (black dashed; b, e), indicating increasing rainfall conditions until 1987. Note that this precipitation record ends in 1987 due to recent data gaps.
Fig. A1. Annual average G/B anomalies of MASB (red), MAS1 (blue), ANDRA (green) and MAS3 (black). Anomalies were calculated by subtracting the mean annual average G/B value for 1961-1990 (the common period) from the individual records. Correlations of the records are shown in Table 1.
Fig. B1. (a) The seasonal alignment of the MAS1 coral G/B record with Mn/Ca values for the years 1970-1980. The complete MAS1 time series of normalised (b) G/B and (c) Mn/Ca is shown in green and red, respectively. The 11 yr running means of (d) G/B and (e) Mn/Ca are shown together with linear trends of the monthly data (black line). The regression equations of each line are given in the top left-hand corner of each plot. Note that the trend lines have a similar slope.
Table 1.
Correlations of annual average G/B and Ba/Ca between coral records. Correlations are calculated using the maximum number (N) of years shared between corals. Individual G/B records are graphically shown in Fig. A1.
"Environmental Science",
"Geology"
] |
Quality Aspects of Continuous Delivery in Practice
Continuous Delivery has recently been used in software projects to facilitate product delivery in Agile software development. As an Agile practice, it is mainly used to achieve better quality of the software development process and higher customer satisfaction. However, less attention has been paid to exploring the quality factors related to Continuous Delivery, or to a corresponding quality model. The main aim of this paper is to identify the quality aspects and factors of Continuous Delivery. Initial data analysis showed that this practice is influenced by people-related factors, organizational issues, and tool- and process-related factors as well. Keywords—Continuous delivery; quality model; agile software development; agile methods; agile practice
INTRODUCTION
Agile methods have been widely used in software development projects over the last decade. These methods promote a different style of software development, which distinguishes them from traditional or disciplined methods in software engineering. Focusing on the Agile values and principles defined in the Agile manifesto [1], these methods promote early and frequent delivery, a higher quality level, better customer collaboration, embracing required changes in customers' requirements, and so on [2]. This is why many software companies are looking for the best way to adopt these methods in their software product lines [3]. However, they are faced with various challenges [4].
Agile software development includes various methods such as Scrum, Extreme Programming (XP), the Crystal family, Test Driven Development (TDD), Feature Driven Development (FDD), etc. [2], each of which defines its own particular practices, roles, and artifacts. However, Agile software teams usually use various practices that can be commonly applied across all Agile methods [4], [5]. Continuous Delivery (CD) is one of the popular practices that has recently gained special importance for Agile projects.
CD focuses on releasing a reliable software product through software development, testing, and deployment [6]. This practice was introduced in 2010 as the ability to release at every time [7]. However, the core concept of CD is not really continuous code development; it is the ability to release at any time [6], [8]. Indeed, it should be possible to extend the recently developed code with new features and functionalities as easily as possible.
Since the ultimate goal of software development is achieving customer satisfaction by increasing the quality of both the development process and the product, the quality of all development practices is important. The quality of CD also plays a great role in customer satisfaction: better execution of this practice may lead directly to higher customer trust and indirectly to satisfaction. However, the literature review shows little effort on exploring the quality-related aspects of CD, proposing a clear quality model, or even providing guidelines to increase the quality of this practice in real environments. This article tries to explain the concept of CD through the lens of quality, address the most related previous studies, and finally describe the outline of a quality model dedicated to this practice. The rest of this article is organized as follows: Section 2 describes the underlying concept of CD briefly; Section 3 addresses the most related works; Section 4 outlines a quality model for CD; and Section 5 concludes the paper.
II. CONTINUOUS DELIVERY
Agile software development defines an underpinning concept, short cycles, whose focus is on early and frequent delivery. To establish such a concept, the Agile approach defines proper practices, among which CD plays a critical role. As mentioned before, CD focuses on the ability to release software whenever the customer needs it [6], [8]. CD is really a practice to help the software stakeholders (i.e., business and technical parties) collaborate in the development and deployment of a software product in short cycles while focusing on quality factors.
Technically speaking, CD is considered an Agile practice which facilitates the delivery of product increments upon the customer's request. However, CD focuses more on a commitment to ensuring the recently developed code can be released at any time than on the delivery process itself [9]. The promised advantages of CD tempt both the business and technical sides of software development to adopt it in their product lines [8]. Accelerating time-to-value, quick user feedback, achieving clear, visible, and believable progress, reducing delivery risks, providing innovations in the release process, better quality, and data-driven decision making are the most frequently cited advantages and benefits of CD in practice [10], [11].
The above advantages have their roots in the concepts and goals of CD. For instance, frequent delivery and release provide the ability to get customer feedback faster and in a timely manner. Also, short cycles and frequent delivery increase the chance of risk discovery and help avoid those risks in the next delivery, so better quality can be expected indirectly. Recently, in the competitive software industry, many reputable companies such as Facebook, Google, IBM, and Microsoft have been trying to use CD as a compulsory development practice in their projects [6], [11].
The CD process includes a series of activities that together are known as the "Continuous Delivery pipeline". As shown in Fig. 1, this pipeline involves some automatic and some manual tasks. Although the literature shows different steps for this pipeline, all are almost the same in tasks and activities, in which Build, Staging, and Production are constant [7], [11], [12]. In the Build stage, software teams use a source repository as input and store an artifact in the artifact repository; the main goals of this stage are software development, software testing, packaging, and archiving, and unit tests are mostly used here. In the second stage, Staging, software teams install and deploy the recently built artifact in a staging environment and simultaneously perform regression, performance, integration, and functional tests. The Production stage, finally, focuses on the deployment of the recently tested software into the production environment [7].
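As an illustration only (not taken from the paper or from any specific CI tool), the stage-gated structure of such a pipeline can be modelled in a few lines:

```python
# A toy model of a three-stage delivery pipeline; the stage contents are
# simplified placeholders for build, test, and deployment tasks.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Stage:
    name: str
    tasks: List[Callable[[], bool]] = field(default_factory=list)

    def run(self) -> bool:
        # A stage succeeds only if every task in it passes.
        return all(task() for task in self.tasks)

def run_pipeline(stages: List[Stage]) -> bool:
    for stage in stages:
        if not stage.run():
            print(f"Pipeline stopped at stage: {stage.name}")
            return False
    return True

# Usage: run_pipeline([Stage("Build", [...]), Stage("Staging", [...]),
#                      Stage("Production", [...])])
```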
Despite its simple concept, employing CD in practice requires proper conditions. For instance, the software development process needs to support iterative development in advance [13]; indeed, without defining several iterations, CD cannot be considered. This would be a serious limitation for small projects where the number of iterations is limited. Furthermore, a positive team climate and a positive atmosphere between the customer and the development team are necessary [13].
III. RESEARCH BACKGROUND
Most of the previous studies paid attention only to the introduction and employment of CD. Indeed, few attempts have been made to determine and highlight the quality factors and aspects of CD in practice. However, a few studies have referred to this issue. Some studies focused on the barriers and challenges of employing CD and on its quality in real environments: "organizational challenges" were reported as a serious issue in the CD pipeline [5]. In another study [13], technical, procedural, and customer-related challenges of CD were addressed and the details of each were explained; for instance, issues with CD downtime, problems and limitations of the automatic test process, and configuration-related problems were listed as technical challenges.
Another study tried to create a trade-off between the risk of lower release quality and time-to-market when adopting CD [7]. Agile practices and their impact on employing CD were investigated in yet another study, which showed that while some Agile practices like TDD, pair testing, and customer involvement and collaboration have a positive and significant impact on CD, others like pair programming do not.
In another study, a new eco-system, Rugby, was proposed to support the CD life cycle and facilitate its pipeline [12]. The main focus of this study was to indicate the impact of the Agile approach on the CD pipeline. The proposed eco-system defined some new roles such as team leader, project leader, customer, and developer to support and facilitate CD adoption in real environments. The results of this study showed an increase in the frequency and quality of interactions between the development team and the customer party.
In another study, some adaptable quality metrics of CD were addressed. These metrics were categorized as project-level, product-level, and pipeline-level, and they are suitable for evaluating the quality of CD.
In summary, the literature review shows that only a few studies have focused on the quality aspects of CD. This indicates a research gap that can be filled by conducting proper research studies in practice. Focusing on this gap, the next section provides some quality factors that may affect the process of CD in practice.
IV. OUTLINE OF CD QUALITY
Conducting a qualitative research study led to the collection of proper data related to the topic under study, the quality aspects of CD. Data collection and analysis are ongoing at the time of this writing; however, some aspects of the results can be shown in this article. This section provides the main findings of the study; the details of each aspect and an evaluation of the findings will be provided in another article in the future. Data analysis showed that the quality of CD is influenced by four different aspects, namely People, Process, Organization, and Tools, as shown in Fig. 2. These aspects are high-level abstractions of various quality factors; indeed, each of them consists of several quality factors which together influence the quality of CD in practice.
"People" category mainly indicates that people related issues are important factors that impress quality of CD.People relationship is so critical in performing CD since this practices connect both technical and business parties.Furthermore, the relationship between development team members also is important, because it seems that collaborative teams conduct CD in a better quality.Usually the people involved the CD process having different level of experience and so this aspect can impress the quality of CD too.
"Process" of CD has a great impact on quality of this practice.Various activities included in this process such as TDD, frequent testing, mechanisms used for requirement prioritization, and daily continuous integration seriously needed to be perform in the professional manner.Therefore, any weakness in doing such activities results in low quality of CD directly.
In "organization" category the main focus in on the organizational culture and its related issues.Existence of culture of CD in organizational processes is compulsory to achieve the desired quality of CD.Also, providing mechanisms to manage the potential technical and human related risks greatly can lead to better quality of CD.Moreover, quality control and assurance and its process positively impress the quality of all the involved practices generally and CD particularly.
"Tools" category deals with tools related issues.For instance, automatic facilitates directly accelerate the process of CD and avoid the human related errors in this practice.Also, existence of mechanisms for version controlling leads to reduce configuration related defects.
In summary, it seems that the quality of CD depends on various technical and human-related activities. However, more data analysis is necessary to explore the details of the above-mentioned aspects, as noted earlier.
V. CONCLUSION AND FUTURE WORK
CD is one of the most important practices recently in wide use in software projects. This practice focuses on the ability to release software at any time, and it defines a sequential set of activities to facilitate the release process. The quality of CD directly influences the quality of the development process. To explore the quality factors and aspects of CD, a qualitative study has been conducted. Initial data analysis showed that the quality of CD is influenced by four aspects, comprising People-, Organization-, Process-, and Tool-related factors. Each of these aspects, through the quality factors it involves, may lead to better quality of CD in practice and in real environments.
For future work, the authors intend to employ the proposed model in two case studies to evaluate its usefulness and applicability in an empirical study.
Fig. 2. The outline of the quality aspects of CD.
"Computer Science"
] |
MATHEMATICAL MODEL OF A QUEUING SYSTEM WITH ARBITRARY QUANTITY OF SOURCES AND SIZE-LIMITED QUEUE
Abstract: The paper presents a mathematical model of an open multi-channel queuing system having m service facilities of identical efficiency with exponentially distributed service time. The input stream, of Poisson character, includes demands of different types arriving from an arbitrary quantity of sources h and having various size-limited queues. General mathematical formulae for the probabilistic characteristics and for the first and second moments of the numerical characteristics specifying the quality of service in a steady-state mode of work have been obtained.
Introduction
Issues of studying combined models of queuing originate from the works of Cohen (Cohen J.W.) [2], where the combination of Erlang models and the classical queuing system was considered for the first time. A number of formulae for the probabilities of queuing system (QS) steady states, the call loss probability, and the first moments of the number of demands in a queue and of the waiting time in a queue are given in that paper.
Another specific case of a combined model is a mixed system with losses and expectation, having some servers and finite memory, presented in the work of H. Takagi [6]. In this case there are two sources of demands in the system, so that demands from the first source are lost if all servers are busy at the time of their arrival. Demands from the second source are accepted into the queue only if the number of demands in it does not exceed some defined value K. The streams of demands arriving in the system also have a Poisson character. Formulae for the probabilistic characteristics of the system and for the moments of order n of the waiting time and the total delay time in the system are given in the paper. In the specific case K → ∞, this model reduces to J. Cohen's model.
A more general model of a queuing system, which is a combination of a multi-channel Erlang model (the M/M/m/E model) and a multi-channel classical model (the M/M/m model), is considered in the work of the authors [3]. A complete derivation of the formulae for the probabilistic characteristics, and also for the first and second moments of the numerical and temporal characteristics of this type of queuing system, is presented in work [7]; a general algorithm for the mathematical formalization of queuing models, taken from monographs [4] and [5], is used.
A mathematical model of an open multi-channel queuing system having m service facilities of identical efficiency with exponentially distributed service time is presented in this paper. The demand input stream in this case is a superposition of an arbitrary number h of components, each of which represents a Poisson stream of demands served in the order of arrival. For each type of demand entering the system from the j-th source there is a specific size-limited queue $\varepsilon_j$, where $\varepsilon_1 < \varepsilon_2 < \cdots < \varepsilon_h$. A zero (Erlang) component contains demands which are served only if there is at least one free service facility; they never stand in a queue. If, at the time when the next such demand arrives in the system, there is no free service facility, this demand is refused and leaves the system unserved. The model of a queuing system containing one such component in its input stream is the Erlang model; therefore we will call this component an Erlang component.
The first component includes demands which are served if there is a free service facility, or which stand in a queue if the number of demands in the queue is fewer than a particular number $\varepsilon_1$. In the case when $\varepsilon_1$ or more demands are already in the queue, a newly arrived demand from the first source is refused and leaves the system unserved.
The second component contains demands which are served if there is a free service facility, or which stand in a queue if the number of demands in the queue is fewer than a particular number $\varepsilon_2 > \varepsilon_1$. In the case when $\varepsilon_2$ or more demands are already in the queue, an arriving demand from this source is refused and leaves the system unserved, and so on.
In general, the h-th component includes demands which are served if there is a free service facility, or which stand in a queue if the number of demands in the queue is fewer than a particular number $\varepsilon_h$. In the case when there are already $\varepsilon_h$ demands in the queue, a newly arrived demand from the h-th source is refused and leaves the system unserved.
Let us accept the following designations: $\varepsilon_j$ is the size-limited queue (memory volume) for demands of the j-th component; $\lambda_j$ is the demand stream intensity of the j-th component; $\Lambda_j$ is the total intensity of the superposed streams of components $j, j+1, \ldots, h$, so that $\Lambda_0$ is the intensity of the whole input stream; and $\rho_j = \lambda_j/\mu$ is the given demand stream intensity of the j-th component. Demand streams arriving from each source are Poisson with intensity $\lambda_j$; in this case the total streams with intensities $\Lambda_j$ also have, as we know, a Poisson character. Let us designate the mean intensity of demand service by one service facility as $\mu$. In this case the intensity of the output stream of served demands before the m-th state is a multiple of $\mu$ and depends on the number of busy channels; after the m-th state the intensity of the served-demand stream is equal to $m\mu$. The served-demand stream is also Poisson.
With accepted designations and assumptions taken into account, we will obtain a continuous-time Markov chain.
Probabilistic Characteristics of a Queuing System in a Steady-State Mode
We make up a set of Kolmogorov-Chapman equations for the probabilities of the QS states in a steady-state mode of functioning. Adding the normalization condition $\sum_{i=0}^{m+\varepsilon_h} P_i = 1$ to this set of equations, we obtain a system
that has a unique solution; the resulting expression (2), using the designation introduced in [1], defines the probabilities of all possible QS states of this type in a steady-state mode of functioning.
For further calculations it is convenient to introduce the following basic probabilistic characteristics of a QS of this type, through which all other quantities are expressed: the basic probabilities 1, 2, ..., h and the congestion probability of the system. As a result, a general formula for the basic probability is written in form (4). By means of expression (4) it is possible to present the traditional probabilistic characteristics of a queuing system in the most compact form: the probability $P_W$ that a newly arrived demand must wait for service in the queue, and the probability $P_L$ that a newly arrived demand is refused service (the probability of demand loss). The probability of immediate service of a newly arrived demand then, apparently, has the form (5): $P_{IS} = 1 - P_W - P_L$.
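Since the closed-form expressions (1)-(5) do not survive in this text, the steady-state probabilities can at least be obtained numerically. The sketch below solves the birth-death balance equations implied by the model description: the arrival intensity drops as the queue limits of the components are reached, and the service intensity is $\min(i, m)\mu$. Variable names are ours:

```python
# Steady-state probabilities for the described model: a birth-death chain
# on the number of demands in the system.  lam[0] is the Erlang component;
# eps[j-1] is the queue limit of component j (eps strictly increasing).
import numpy as np

def steady_state(m, mu, lam, eps):
    lam = np.asarray(lam, float)          # [lam_0, lam_1, ..., lam_h]
    eps = np.asarray(eps, int)            # [eps_1 < ... < eps_h]
    n_states = m + eps[-1] + 1            # states i = 0 .. m + eps_h
    p = np.ones(n_states)
    for i in range(1, n_states):
        if i <= m:
            arr = lam.sum()               # a facility is free: all accepted
        else:
            n_queue = i - 1 - m           # queue length seen by an arrival
            arr = lam[1:][eps > n_queue].sum()
        p[i] = p[i - 1] * arr / (min(i, m) * mu)
    return p / p.sum()                    # normalisation: sum_i P_i = 1
```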
Numerical Characteristics of a Queuing System
By means of the probabilistic characteristics of the system found above, it is possible to express all the main features characterizing the steady-state mode of functioning of a queuing system. Thus, the throughput capacity of a queuing system is the number of demands passing through the system per unit of time, $A = \Lambda_0 q = \Lambda_0 (1 - P_L) = \Lambda_0 (P_{IS} + P_W)$. This number includes all demands from the general input stream except refused demands and those that did not get into the system. The relative throughput capacity of the system is thus the share of demands passing through the queuing system from the general input stream of demands, $q = 1 - P_L$.
The average number of demands under service at the same time (or, which is the same, the average number of busy channels) follows with formulae (2)-(5) taken into account, as does the second initial moment of the number of demands under service. The average number of demands in the queue (the average queue length) $\bar{r}$ and the second initial moment of the number of demands in the queue are obtained similarly, as is the average number of demands in the system as a whole (both in the queue and under service) $\bar{R}$. Using the covariance of the number of demands under service and the number of demands in the queue, the second initial moment of the total number of demands in the system follows. Further, in the considered queuing system the queue is possible only when all service facilities are busy. Thus, the total stream of served demands of the whole system consists of the service streams of each channel and has intensity $m\mu$. In this case, the probability that the system serves $i$ demands during time $t$ in the event of a queue is written in the form $B_i(t) = \frac{(m\mu t)^i}{i!}\, e^{-m\mu t}$. The distribution function of the service waiting time for one demand is found according to the known dependence $F_W(t) = 1 - P(t_W \ge t)$, where $P(t_W \ge t)$ is the probability that the waiting time in the queue for one demand is greater than a preset time $t$. As is easy to see, this is possible, firstly, when the queue is absent but a newly arrived demand finds all service facilities in the system busy and during time $t$ none of the facilities is released; secondly, when one demand is already in the queue and during time $t$ the system serves no more than one demand, or when there are two demands in the queue and during time $t$ no more than two demands are served, and so on. In this case, according to the formula of total probability, we obtain relation (6). After a number of intermediate calculations, it is possible to obtain expressions (7)-(10) for the sequence of finite sums in square brackets on the right-hand side of this relation. As a result, substituting the ratios (7)-(10) obtained above into the right-hand side of formula (6), we finally find $F_W(t)$. Hence the density of the distribution of a demand's waiting time for service in the queue follows, and the mean waiting time of a demand for service in the queue is $\bar{t}_W = \bar{r}/A$, in compliance with J. Little's formula. In the same way the second initial moment of a demand's waiting time in the queue is obtained. The mean sojourn time of a demand in the system as a whole (both in the queue and under service) is, apparently, $\bar{t}_T = \bar{t}_S + \bar{t}_W = \bar{R}/(\Lambda_0 q) = \bar{R}/A$, and the second initial moment of a demand's sojourn time follows in the same way. Let us note that relation (11) makes it possible to calculate moments of any order, both of a demand's waiting time in the queue for service and of a demand's sojourn time in general (in the queue and under service). The model presented in this paper is the most general in relation to the earlier studied models [2], [6] and [3].
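Continuing the numerical sketch above, the characteristics of this section follow directly from the probabilities $P_i$; these are numerical stand-ins for the analytical formulae, with Little's formula applied for the mean times:

```python
# Basic steady-state metrics derived from the probabilities P_i returned
# by steady_state(); lam_total is the total input intensity Lambda_0.
import numpy as np

def metrics(p, m, mu, lam_total):
    i = np.arange(len(p))
    busy = np.minimum(i, m)
    n_queue = np.maximum(i - m, 0)
    A = mu * (busy * p).sum()             # throughput: served-demand rate
    q = A / lam_total                     # relative throughput, q = 1 - P_L
    r_bar = (n_queue * p).sum()           # mean queue length
    R_bar = (i * p).sum()                 # mean number in the system
    t_W = r_bar / A                       # mean wait (Little's formula)
    t_T = R_bar / A                       # mean sojourn time
    return dict(A=A, q=q, r=r_bar, R=R_bar, tW=t_W, tT=t_T)
```

For example, `metrics(steady_state(2, 1.0, [0.5, 0.4, 0.3], [2, 4]), 2, 1.0, 1.2)` evaluates a two-server system with an Erlang component and two queued components.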
"Mathematics"
] |
Sparsity Preserving Discriminant Projections with Applications to Face Recognition
Dimensionality reduction is extremely important for understanding the intrinsic structure hidden in high-dimensional data. In recent years, sparse representation models have been widely used in dimensionality reduction. In this paper, a novel supervised learning method, called Sparsity Preserving Discriminant Projections (SPDP), is proposed. SPDP, which attempts to preserve the sparse representation structure of the data and maximize the between-class separability simultaneously, can be regarded as a combiner of manifold learning and sparse representation. Specifically, SPDP first creates a concatenated dictionary by classwise PCA decompositions and learns the sparse representation structure of each sample under the constructed dictionary using the least square method. Secondly, a local between-class separability function is defined to characterize the scatter of the samples in the different submanifolds. Then, SPDP integrates the learned sparse representation information with the local between-class relationship to construct a discriminant function. Finally, the proposed method is transformed into a generalized eigenvalue problem. Extensive experimental results on several popular face databases demonstrate the feasibility and effectiveness of the proposed approach.
Introduction
In many fields such as object recognition [1, 2], text categorization [3], and information retrieval [4], the data are usually provided in high-dimensional form; this makes it difficult to describe, understand, and recognize these data. As an effective method, dimensionality reduction has been widely used in practice to handle these problems [5][6][7][8]. Up to now, a variety of dimensionality reduction algorithms have been designed. Based on the data structure they utilize, these methods fall into three categories: global structure-based methods, local neighborhood-based methods, and sparse representation-based methods.
Principal Component Analysis (PCA) [9], Linear Discriminant Analysis (LDA) [10], and their kernelized versions are typical global structure-based methods [11, 12]. Owing to its simplicity and effectiveness, PCA, which aims at maximizing the variance of the projected data, has extensive applications in the fields of science and engineering. PCA is a good dimensionality reduction method; however, it does not employ the label information of the samples, leading to inefficiency in classification. Unlike PCA, LDA is a supervised method that attempts to identify an optimal projection by maximizing the between-class scatter while minimizing the within-class scatter. Because the label information is fully exploited, LDA has been proven more efficient than PCA in classification [13]. However, LDA can extract at most $c - 1$ features ($c$ is the number of categories), which is unacceptable in many situations. Moreover, both PCA and LDA are based on the hypothesis that samples from each class lie on a linear subspace [14, 15]; that is, neither of them can identify the local submanifold structure hidden in high-dimensional data.
Recently, manifold learning methods, which are especially useful for the analysis of data that lie on a submanifold of the original space, have been proposed [16][17][18][19][20][21][22][23][24][25][26]. Representative manifold learning methods include Isomap [16], Laplacian Eigenmaps (LE) [17], and Locally Linear Embedding (LLE) [18]. All these nonlinear methods are able to discover the optimal feature subspace by solving an optimization problem based on a weight graph; however, none of them can overcome the "out-of-sample" problem [19]. That is, they yield maps that are defined only on the training data points, and how to evaluate the maps on new test data points remains unclear. In order to address this problem, Cai et al. developed linear versions of the above manifold learning methods, such as Isometric Projection [20], Locality Preserving Projections (LPP) [21], and Neighborhood Preserving Embedding (NPE) [22]. However, these methods suffer from the limitation that they do not encode discriminant information, which is very important for recognition tasks. Recently, Gui et al. proposed a new supervised learning algorithm called Locality Preserving Discriminant Projections (LPDP) to improve the classification performance of LPP and applied it to face recognition [26]. Experimental results show that LPDP is more suitable for recognition tasks than LPP.
Sparse representation, as a new branch of the state-of-the-art techniques for signal representation, has attracted considerable research interest [27][28][29][30][31][32][33][34][35][36][37][38]. It attempts to preserve the sparse representation structure of the samples in a low-dimensional embedding subspace. Representative dimensionality reduction algorithms based on sparse representation include Sparsity Preserving Projections (SPP) [39], Sparsity Preserving Discriminant Analysis (SPDA) [40], Discriminative Learning by Sparse Representation Projections (DLSP) [41], Sparse Tensor Discriminant Analysis (STDA) [42], and sparse nonnegative matrix factorization [43]. It is worthwhile to note that a sparse model also depends on the subspace assumption: each sample can be linearly expressed by other samples from the same class; that is, each sample can be sparsely recovered by samples from all classes. In general, these sparse learning algorithms provide superior recognition accuracy compared with conventional methods. However, all the dimensionality reduction methods based on sparse coding mentioned above are required to solve an $\ell_1$-norm minimization problem to construct the sparse weight matrix; therefore, they are computationally prohibitive for large-scale problems. For example, SPP attempts to preserve the sparse reconstructive relationship of the data [39], which is an effective and powerful technique for dimensionality reduction. However, the computational complexity of SPP is excessively high and hence it cannot be used extensively for large-scale data processing (in fact, the time cost for constructing the sparse weight graph is $O(n^4)$, where $n$ denotes the total number of training samples). Moreover, SPP does not absorb the label information; thus, the algorithm is unsupervised.
Motivated by the above works, a novel supervised learning method, called Sparsity Preserving Discriminant Projections (SPDP), is proposed in this paper. By integrating SPP with local discriminant information for dimensionality reduction, SPDP can be viewed as a combiner of sparse representation and manifold learning. Because sparse representation can implicitly discover the local structure of the data owing to the sparsity prior, this property can be used to describe the local structure. However, differing from the existing SPP, which is time-consuming in the sparse reconstruction of each sample, SPDP first creates a concatenated dictionary using classwise PCA decompositions and learns the sparse representation structure of each sample under the constructed dictionary quickly with the least square method. Then, a local between-class separability function is defined to characterize the scatter of the samples in the different submanifolds. Subsequently, by integrating the sparse representation information with the local between-class relationship, SPDP attempts to preserve the sparse representation structure of the data and maximize the local between-class separability simultaneously. Finally, the proposed method is converted into a generalized eigenvalue problem.
It is worth emphasizing some merits of SPDP and the main contributions of this paper: (1) SPDP is a supervised dimensionality reduction method that attempts to identify a discriminating subspace in which the sparse representation structure of the data and the label information are maintained; meanwhile, the separability of different submanifolds is maximized, that is, different submanifolds can be distinguished more clearly. (2) SPDP is able to explore the local submanifold structure hidden in high-dimensional data because manifold learning is employed to characterize the local between-class separability. (3) The time required for extracting discriminant vectors in SPDP is significantly less than for many algorithms based on sparse representation; therefore, the proposed method can be widely applied to large-scale problems. (4) Label information is employed twice in SPDP. First, it is absorbed in constructing the dictionary for sparse representation and calculating the sparse coefficient vector, which may contribute to a more discriminating sparse representation structure. Further, it is utilized in computing the local between-class separability, which is more conducive to classification.
The rest of this paper is organized as follows. Section 2 briefly reviews the existing SPP algorithm. The SPDP algorithm is described in detail in Section 3. The experimental results and analysis are presented in Section 4, and the paper ends with concluding remarks in Section 5.
Brief Review of Sparsity Preserving Projections (SPP)
SPP aims to preserve the sparse reconstruction relationship of the samples [39]. Given a set of training samples $\{x_i\}_{i=1}^{n}$, where $x_i \in \mathbb{R}^m$ and $n$ is the number of training samples, let $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{m \times n}$ be the data matrix consisting of all the training samples. SPP first seeks the sparse reconstruction coefficient vector $s_i$ for each sample $x_i$ through the following modified $\ell_1$ minimization problem:

$$\min_{s_i} \|s_i\|_1 \quad \text{s.t.} \quad x_i = X s_i, \quad \mathbf{1}^\top s_i = 1, \tag{1}$$

where $s_i = [s_{i,1}, \ldots, s_{i,i-1}, 0, s_{i,i+1}, \ldots, s_{i,n}]^\top$ is an $n$-dimensional column vector in which the $i$th element is equal to zero, implying that $x_i$ is removed from $X$, and the element $s_{i,j}$, $j \neq i$, denotes the contribution of $x_j$ to reconstructing $x_i$. Then, the sparse reconstructive weight matrix $S$ is given as

$$S = [s_1, s_2, \ldots, s_n], \tag{2}$$

where $s_i$ is the optimal solution of (1). The final optimal projection vector $w$ is obtained through the following maximization problem:

$$\max_{w} \frac{w^\top X S_\beta X^\top w}{w^\top X X^\top w}, \qquad S_\beta = S + S^\top - S^\top S, \tag{3}$$

which transforms into a generalized eigenvalue problem. It follows that SPP must solve $n$ time-consuming $\ell_1$-norm minimization problems to obtain the sparse weight matrix $S$. Thus, the computational complexity of SPP is excessively high, and the method is not widely applicable to large-scale data processing. Moreover, SPP does not exploit prior class information, which is valuable for classification and recognition problems such as face recognition.
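Since the per-sample $\ell_1$ problem is the computational bottleneck discussed above, a minimal sketch of the weight-matrix construction may help fix ideas. The sketch below replaces the equality-constrained problem (1) with a standard Lasso relaxation; `alpha`, the function name, and the loop structure are illustrative assumptions, not the solver used in [39].

```python
import numpy as np
from sklearn.linear_model import Lasso

def spp_weight_matrix(X, alpha=0.05):
    """Approximate SPP's sparse weight matrix S (n x n).

    X is an (m, n) data matrix with one sample per column. Each x_i is
    regressed on all remaining samples with an l1 penalty, a common
    Lasso relaxation of the equality-constrained problem (1).
    """
    m, n = X.shape
    S = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]           # drop x_i from the dictionary
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        model.fit(X[:, idx], X[:, i])
        S[idx, i] = model.coef_                         # contribution of x_j to x_i
    return S
```

Solving $n$ such problems is precisely what drives SPP's $O(n^4)$ cost; SPDP, described next, avoids it entirely.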
Sparsity Preserving Discriminative Learning
In this section, the proposed SPDP algorithm is described in more detail. To avoid the drawback of SPP, which must solve time-consuming $\ell_1$-norm minimization problems to obtain the sparse weight matrix $S$, SPDP first constructs a concatenated dictionary through classwise PCA decompositions and learns the sparse representation structure of each sample under the constructed dictionary quickly using the least squares method. To enhance the discriminant performance, it defines a local between-class separability function to characterize the scatter of the samples in the different submanifolds. Then, by integrating the sparse representation information with the local interclass relationship, SPDP aims to maximize the separation between the submanifolds (or intrinsic clusters) without destroying localities, while preserving the sparse representation structure of the data. Hence, the proposed algorithm is expected to preserve the intrinsic geometric structure and to have superior discriminant ability.
Constructing the Concatenated Dictionary.
For convenience, we first provide some notation used in this paper. Assume that $X = \{x_1, x_2, \ldots, x_n\}$ is a set of training samples, where $x_i \in \mathbb{R}^m$. The training samples can be grouped by class as $X = [X_1, X_2, \ldots, X_c]$, where $X_i$ ($i = 1, 2, \ldots, c$) consists of the $n_i$ samples from class $i$. Suppose that samples from a single class lie on a linear subspace; thus, each sample can be sparsely linearly represented by samples from all classes. The subspace model is a powerful tool to capture the underlying information in real data sets [44]. For the convenience of the PCA decomposition and the related calculations, we first center the samples of each class at the origin, $\tilde{X}_i = [x_1 - \mu_i, x_2 - \mu_i, \ldots, x_{n_i} - \mu_i]$ ($i = 1, 2, \ldots, c$), where $\mu_i = \sum_j x_j / n_i$ denotes the mean of class $i$. The training set can therefore be recast as $\tilde{X} = [\tilde{X}_1, \tilde{X}_2, \ldots, \tilde{X}_c]$. Afterwards, a PCA decomposition is conducted for every $\tilde{X}_i$ ($i = 1, 2, \ldots, c$), whose objective function is

$$\max_{d} \ d^\top \Sigma_i d \quad \text{s.t.} \quad d^\top d = 1, \tag{4}$$

where $\Sigma_i$ is the sample covariance matrix of $\tilde{X}_i$. For every class $i$, the first $k_i$ principal components are selected to construct $D_i$ (in fact, $k_i$ is selected automatically by the value of the PCA ratio in the system). Thus, a sample $x$ from class $i$ can be simply represented as

$$x \approx D s, \quad \text{with } D = [D_1, D_2, \ldots, D_c] \text{ and } s = [0^\top, \ldots, 0^\top, s_i^\top, 0^\top, \ldots, 0^\top]^\top, \tag{5}$$

where $D_i$ is the dictionary of class $i$ obtained by the PCA decomposition above, $D$ is the concatenated dictionary composed of all $D_i$ ($i = 1, 2, \ldots, c$), $s$ is the sparse representation of the sample $x$ under the concatenated dictionary $D$, and $s_i$ is the coefficient vector under the dictionary $D_i$. In fact, $s_i$ can be computed quickly by the least squares method as

$$s_i = (D_i^\top D_i)^{-1} D_i^\top x = D_i^\top x, \tag{6}$$

where the reduction uses the orthogonality of the principal components of the PCA decomposition of the same class. The process of constructing the concatenated dictionary is presented in Figure 1.
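A minimal numpy sketch of this dictionary construction follows, assuming the per-class component count $k_i$ is picked by a simple cumulative-energy ratio in place of the system's PCA ratio; the function names are illustrative.

```python
import numpy as np

def build_concatenated_dictionary(X, y, ratio=0.95):
    """Classwise PCA dictionaries D_i stacked into D = [D_1, ..., D_c].

    X: (m, n) samples as columns; y: (n,) integer class labels.
    Returns D with orthonormal columns within each class block.
    """
    blocks = []
    for c in np.unique(y):
        Xc = X[:, y == c]
        Xc = Xc - Xc.mean(axis=1, keepdims=True)       # center class c at the origin
        U, s, _ = np.linalg.svd(Xc, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        k = int(np.searchsorted(energy, ratio) + 1)    # energy ratio stands in for the PCA ratio
        blocks.append(U[:, :k])                        # D_i: first k_i principal directions
    return np.hstack(blocks)

def sparse_code(D_i, x):
    """Least-squares coefficients under the class dictionary D_i.
    Because D_i has orthonormal columns, (D_i^T D_i)^{-1} D_i^T x = D_i^T x."""
    return D_i.T @ x
```

The closed form in `sparse_code` is the whole trick: no iterative $\ell_1$ solver is ever invoked.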
According to the preceding procedure, each training sample corresponds to a sparse representation under the concatenated dictionary $D$, and the sparse coefficient vector $s_i$ of any training sample from class $i$ can be computed quickly by the least squares method, because the computation of $s_i$ involves only $D_i$, which is column-orthogonal in view of (5) and (6). In fact, this is the primary reason that the proposed approach is significantly faster than SPP, as will be explained in detail in Section 4.4.
Preserving Sparse Representation Structure.
As can be seen in Section 3.1, the dictionary $D$ describes, to some extent, the intrinsic geometric properties of the data, and the sparse coefficient vectors explicitly encode the discriminant information of the training samples. Thus, it is hoped that this valuable property of the original high-dimensional space can be preserved in the low-dimensional embedding subspace. The objective function therefore seeks an optimal projection that best preserves the sparse representation structure:

$$\min_{w} \ J_s(w) = \sum_{i=1}^{n} \left\| w^\top x_i - w^\top D s_i \right\|^2, \tag{7}$$

where $s_i$ is the sparse reconstruction vector corresponding to $x_i$.
Characterization of the Local Interclass Separability.
To effectively discover the discriminant structure embedded in high-dimensional data and improve classification performance, we construct a local interclass weight graph in this subsection. Because data in the same class lie on one or more submanifolds and data belonging to different classes are distributed on different submanifolds, it is important for classification problems to distinguish one submanifold from another. Therefore, a local between-class separability function is defined in this section to characterize the separability of the samples in different submanifolds. The aim of SPDP is that different submanifolds can be distinguished more clearly after projection; hence, the local between-class separability of different submanifolds should be maximized. Thus, we construct a label matrix $B$ to describe the local interclass relationship of each point as follows:

$$B_{ij} = \begin{cases} \exp\left(-\|x_i - x_j\|_2^2 / t\right), & \text{if } x_j \in N_k^{-}(x_i) \text{ or } x_i \in N_k^{-}(x_j), \\ 0, & \text{otherwise}, \end{cases} \tag{10}$$

where $\|x_i - x_j\|_2^2$ denotes the distance between the points $x_i$ and $x_j$, $t$ is a parameter that is often set to the standard deviation of the samples, $N_k^{-}(x_i)$ denotes the set of the $k$ nearest neighbors of the sample $x_i$ that carry a different class label, and $B$ is called the local between-class weight matrix (or local interclass weight graph). As can be seen from the above definition, if two nearby points $x_i$ and $x_j$ belong to different submanifolds, their weight is large, and vice versa; that is, points belonging to different submanifolds should be located farther apart after projection. Therefore, the local interclass separability can be characterized by the following equation:

$$J_b(w) = \frac{1}{2} \sum_{i,j} \| y_i - y_j \|^2 B_{ij}, \tag{11}$$

where $y_i = w^\top x_i$ ($i = 1, 2, \ldots, n$) is the low-dimensional representation of the original data, obtained by projecting each $x_i$ onto the direction vector $w \in \mathbb{R}^m$. With algebraic simplification, (11) can be rewritten as

$$J_b(w) = w^\top X L X^\top w, \tag{12}$$

where $L = \hat{D} - B$ is the Laplacian matrix and $\hat{D}$ is the diagonal degree matrix with $\hat{D}_{ii} = \sum_j B_{ij}$ [45] (written with a hat here to avoid confusion with the dictionary $D$). Equation (12) characterizes the separability (or scatter) of the data set across different submanifolds. Therefore, each submanifold can be separated clearly, as long as the optimal projection $w^*$ is adopted.
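The graph construction can be sketched as follows; the neighborhood size `k` and the brute-force distance computation are illustrative choices, with `t` defaulting to the standard deviation of the samples as in the text.

```python
import numpy as np

def interclass_weight_graph(X, y, k=5, t=None):
    """Local between-class weight matrix B and Laplacian L = D_hat - B.

    B_ij = exp(-||x_i - x_j||^2 / t) when x_j is among the k nearest
    neighbours of x_i carrying a *different* label (or vice versa).
    X: (m, n) samples as columns; y: (n,) labels.
    """
    n = X.shape[1]
    d2 = np.sum((X[:, :, None] - X[:, None, :])**2, axis=0)  # pairwise squared distances
    if t is None:
        t = X.std()                                          # heuristic from the paper
    B = np.zeros((n, n))
    for i in range(n):
        other = np.flatnonzero(y != y[i])                    # other-class candidates
        nn = other[np.argsort(d2[i, other])[:k]]             # k nearest among them
        B[i, nn] = np.exp(-d2[i, nn] / t)
    B = np.maximum(B, B.T)                                   # symmetrize the "or" condition
    L = np.diag(B.sum(axis=1)) - B                           # graph Laplacian
    return B, L
```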
Sparsity Preserving Discriminant Projections.
To achieve improved recognition results, we explicitly integrate the sparsity preserving constraint in (7) with the local between-class separability in (12). The novel supervised algorithm SPDP, which not only preserves the sparse representation structure but also separates the submanifolds as far as possible, is defined as

$$\max_{w} \ J(w) = \frac{J_b(w)}{J_s(w)}, \tag{13}$$

where the denominator term $J_s(w)$ measures the quality of preserving the sparse representation structure and the numerator term $J_b(w)$ measures the separability of different submanifolds. It is well known that the criterion of LDA is to maximize the between-class scatter while minimizing the within-class scatter. Similarly, the aim of SPDP is to maximize the ratio of the local between-class separability to the sparse representation weight scatter. Letting

$$M = (X - DS)(X - DS)^\top, \qquad S = [s_1, s_2, \ldots, s_n], \tag{14}$$

the objective function can be recast as the following optimization problem:

$$\max_{w} \ \frac{w^\top X L X^\top w}{w^\top M w}. \tag{15}$$

Then, the optimal $w$'s are the eigenvectors corresponding to the largest eigenvalues of the following generalized eigenvalue problem:

$$X L X^\top w = \lambda M w. \tag{16}$$

It is worth noting that, since the training sample size is much smaller than the feature dimension for high-dimensional data, $M$ might be singular. This problem can be tackled by projecting the training set $X$ onto a PCA subspace spanned by the leading eigenvectors to obtain $\tilde{X}$ and replacing $X$ by $\tilde{X}$.
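A sketch of the final eigen-step is given below, assuming $M$ is formed from the reconstruction residual implied by (7); the small ridge term is an illustrative alternative to the PCA preprocessing the text describes, and all names are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def spdp_projections(X, L, D, S, dim, reg=1e-6):
    """Top-`dim` SPDP directions from X L X^T w = lambda * M w.

    M = (X - D S)(X - D S)^T is the sparsity-preserving scatter implied
    by (7); the ridge keeps M nonsingular in lieu of the paper's PCA step.
    """
    A = X @ L @ X.T                          # local between-class scatter, eq. (12)
    R = X - D @ S                            # residual of the sparse reconstruction
    M = R @ R.T + reg * np.eye(X.shape[0])
    vals, vecs = eigh(A, M)                  # generalized symmetric eigenproblem (16)
    order = np.argsort(vals)[::-1]           # largest eigenvalues first
    return vecs[:, order[:dim]]
```

Because both scatter matrices are symmetric, a single call to a generalized symmetric eigensolver suffices; no iterative optimization is needed.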
Based on the above discussion, the proposed SPDP is summarized in Algorithm 1.
Step 1. Construct the concatenated dictionary $D = [D_1, D_2, \ldots, D_c]$ through classwise PCA decompositions, as described in Section 3.1.
Step 2. Calculate the coefficient vector $s_i$ under the dictionary $D_i$ of each sample's class based on (6) to obtain the sparse coefficient vector, and then assemble $S$.
Step 3. Construct the local between-class weight matrix $B$ according to (10) and compute the Laplacian matrix $L = \hat{D} - B$.
Step 4. Calculate the projection vectors by solving the generalized eigenvalue problem in (16).
Experiments
In this section, the proposed SPDP algorithm is tested on three publicly available face databases (Yale [13], ORL [46], and CMU PIE [47]) and compared with six popular dimensionality reduction methods: PCA, LDA, LPP, NPE, LPDP, and SPP. For PCA, the only model parameter is the subspace dimension, and for LDA, the performance is directly influenced by the energy of the eigenvalues kept in the PCA preprocessing phase. For LPP and NPE, the supervised versions are adopted. In particular, the neighbor mode in LPP and NPE is set to "supervised," and the weight mode in LPP is set to "Cosine." The empirically determined parameter in LPDP is taken to be 1 [26], the corresponding parameter in SPP is set to 0.05 as indicated in [39], and the parameter $t$ in SPDP is set to the standard deviation of the samples. The nearest neighbor classifier (1-NN) is employed to predict the classes of the test data. All experiments are run in MATLAB R2013a on a personal computer with an Intel Core i7-4770K 3.50 GHz CPU, 16.0 GB main memory, and the Windows 7 operating system.
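The split-and-average protocol used throughout these experiments can be sketched as follows; `embed` stands for whichever dimensionality reduction method is under test, and all names are illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def evaluate(X, y, embed, l=6, splits=50, seed=0):
    """Mean/std of 1-NN accuracy over random l-per-class splits.

    X: (m, n) images as columns; y: (n,) labels. `embed` fits on the
    training columns and returns a projection matrix W of shape (m, d).
    """
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(splits):
        train = np.hstack([rng.choice(np.flatnonzero(y == c), l, replace=False)
                           for c in np.unique(y)])
        test = np.setdiff1d(np.arange(y.size), train)
        W = embed(X[:, train], y[train])                 # method under test
        clf = KNeighborsClassifier(n_neighbors=1)        # 1-NN classifier
        clf.fit((W.T @ X[:, train]).T, y[train])
        accs.append(clf.score((W.T @ X[:, test]).T, y[test]))
    return float(np.mean(accs)), float(np.std(accs))
```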
Experiment on Yale Face Database.
The Yale face database contains 165 face images of 15 individuals, with 11 images per individual. These images were collected under different facial expressions (normal, happy, sad, surprised, sleepy, and wink) and configurations (left-light, center-light, and right-light), and with or without glasses. All the images are cropped to a size of 32 × 32 and then normalized to unit norm. Some samples from this database are presented in Figure 2. For each person, $l$ ($l$ varies from 2 to 8) images are randomly selected as training samples and the remaining $11 - l$ are used for testing. For each $l$, the results are averaged over 50 random splits. Table 1 presents the best recognition rate and the associated standard deviation of the seven algorithms under the different sizes of the training set. Figure 3(a) presents the best recognition rate versus the size of the training set, and Figure 3(b) shows how the recognition rates of the seven algorithms vary with the reduced dimension when the number of training samples per class is fixed at six. Note that the upper bound on the dimensionality of LDA is $c - 1$ ($c$ is the number of categories) because there are at most $c - 1$ nonzero generalized eigenvalues [13]; similar situations occur in the other experiments in this paper. Overall, the SPDP algorithm significantly outperforms the other methods.

Experiment on ORL Face Database.

There are 400 images of 40 people in the ORL face data set, where each person has 10 different pictures. The images were collected at different time points, under different lighting conditions, and with varying facial expressions. In our experiment, each image is cropped to a resolution of 32 × 32, as shown in Figure 4. We randomly select $l$ ($l$ varies from 2 to 8) pictures of each person for training; the remainder are used for testing. We repeat these splits 50 times and report the average results. Table 2 displays the best classification accuracy of the seven algorithms under the different sizes of the training set; the number in parentheses is the corresponding standard deviation. Figure 5(a) presents the best recognition rate versus the size of the training set, and Figure 5(b) shows how the recognition rates of the seven algorithms vary with the reduced dimension when the number of training samples per class is fixed at five. It can be seen that SPDP and LPDP are superior to the other compared methods (their performances on the ORL database are quite similar), especially when the size of the training set is small. The reason may be that both SPDP and LPDP consider the discriminant information and the local structure of the data.
Experiment on CMU PIE Face Database.
In this subsection, it is verified that the proposed algorithm achieves higher classification accuracy than the other dimensionality reduction methods under varying illumination, pose, and expression. The CMU PIE face database contains 41,368 face images of 68 subjects, captured by 13 synchronized cameras and 21 flashes under varying poses, illumination, and expressions. In our experiments, we choose the five frontal poses (C05, C07, C09, C27, and C29), which leaves 170 face images per subject; all the images are cropped to 32 × 32. Figure 6 shows some pictures of one subject. A random subset with $l$ ($l$ = 5, 10, 15, 20) pictures per subject is selected with labels to form the training set; the remainder are used for testing. For each given $l$, we average the classification accuracies over 50 random splits. Table 3 presents the best recognition rate and the associated standard deviation (in brackets) of the seven algorithms under the different sizes of the training set. Figure 7(a) presents the best recognition rate versus the size of the training set, and Figure 7(b) shows how the recognition rates of the seven algorithms vary with the reduced dimension when the number of training samples per class is fixed at ten. We can observe that the proposed SPDP outperforms the other dimensionality reduction methods (PCA, LDA, LPP, NPE, LPDP, and SPP) under these pose, illumination, and expression variations.
Overall Observations and Discussions.
Several observations can be made from the above experimental results.
(1) From Tables 1, 2, and 3 and Figures 3(a), 5(a), and 7(a), it can be seen that SPDP achieves the best recognition rates among the compared algorithms under the different sizes of the training set. (2) From Figures 3(b), 5(b), and 7(b), it can be observed that the reduced dimensions at which SPDP achieves its best recognition rate are smaller than those of the other compared algorithms. This saves a considerable amount of time and storage space after the optimal embedding functions have been obtained.
(3) From Tables 4, 5, and 6, it can be seen that SPDP is considerably faster than SPP in obtaining the discriminant vectors. This is because the method SPDP uses to learn the sparse representation structure is more efficient than that of SPP, as analyzed in Section 4.4.
Conclusions
This paper proposed a new supervised learning method, called Sparsity Preserving Discriminant Projection (SPDP), which combines manifold learning and sparse representation. Specifically, SPDP first constructs a concatenated dictionary by means of classwise PCA decompositions and quickly learns the sparse representation structure of each sample under the constructed dictionary using the least squares method. Then, it defines a local between-class separability function to characterize the separability of the samples in different submanifolds. Subsequently, SPDP integrates the sparse representation information with the local between-class relationship. Thus, SPDP preserves the sparse representation structure of the data and maximizes the local between-class separability simultaneously. Finally, the proposed method is transformed into a generalized eigenvalue problem. Extensive experiments on three publicly available face data sets confirmed the promising performance of the proposed SPDP approach.
Figure 1: The process of constructing the concatenated dictionary.
Figure 2: Some face samples from the Yale database.
Figure 3: Recognition rates of the seven algorithms on the Yale database: (a) the best recognition rates versus the different sizes of the training set and (b) the average recognition rates versus the variation of dimensions when the size per class is fixed at six.
Figure 4: Some face samples from the ORL database.
4.4. Comparison of Time Cost for Acquiring the Discriminant Vectors of SPP and SPDP.

In this subsection, the time cost for acquiring the discriminant vectors of SPDP is compared with that of SPP. Tables 4, 5, and 6 list the average time costs for acquiring the discriminant vectors of SPP and SPDP versus the different sizes of the training set on the three face data sets. SPDP is significantly faster than SPP in acquiring the embedding functions in our experiments, especially on large-scale problems such as CMU PIE. The critical factor behind this phenomenon is that the approaches of SPP and SPDP for obtaining the sparse representation structure are entirely different. In SPP, time-consuming $\ell_1$-norm minimization problems must be solved to construct the sparse weight matrix, at a computational cost of $O(n^4)$ [48, 49], whereas SPDP achieves this significantly faster through only PCA decompositions and least squares steps. The PCA decompositions can be completed in $O(m^2 \sum_{i=1}^{c} n_i)$ according to the more efficient algorithm of [50], the time cost for learning the sparse coefficient vector of each sample, which involves only the least squares method, is $O(m k_i)$, and the sparse weight matrix $S$ can therefore be calculated in $O(m \sum_{i=1}^{c} k_i n_i)$; the computational complexity of SPDP for learning the sparse representation structure is thus $O(m^2 \sum_{i=1}^{c} n_i + m \sum_{i=1}^{c} k_i n_i)$. In general, $k_i \ll m$, $k_i \ll n$, and $c \ll n$; hence, SPDP performs considerably faster than SPP, as indicated in Tables 4, 5, and 6.
Figure 5: Recognition rates of the seven algorithms on the ORL database: (a) the best recognition rates versus the different sizes of the training set and (b) the average recognition rates versus the variation of dimensions when the size per class is fixed at five.
Figure 6: Some face samples from the CMU PIE database.
Figure 7: Recognition rates of the seven algorithms on the CMU PIE database: (a) the best recognition rates versus the different sizes of the training set and (b) the average recognition rates versus the variation of dimensions when the size per class is fixed at ten.
Table 1: The best recognition rate and the corresponding standard deviation of the seven algorithms under the different sizes of the training set on Yale (l is the training sample size).
Table 2: The best recognition rate and the corresponding standard deviation of the seven algorithms under the different sizes of the training set on ORL (l is the training sample size).
Table 3: The best recognition rate and the corresponding standard deviation of the seven algorithms under the different sizes of the training set on CMU PIE (l is the training sample size).
Table 4: Time (s) for acquiring the discriminant vectors of SPP and SPDP on Yale (l is the training sample size).
Table 5: Time (s) for acquiring the discriminant vectors of SPP and SPDP on ORL (l is the training sample size).
Table 6: Time (s) for acquiring the discriminant vectors of SPP and SPDP on CMU PIE (l is the training sample size).
"Computer Science"
] |
Functionally different AU- and G-rich cis-elements confer developmentally regulated mRNA stability in Trypanosoma cruzi by interaction with specific RNA-binding proteins.
Post-transcriptional regulatory mechanisms have been suggested to be the main point of control of gene expression in kinetoplastid parasites. We have previously shown that the Trypanosoma cruzi SMUG mucin mRNA steady-state level is developmentally regulated by post-transcriptional mechanisms, being stable in the epimastigote insect vector stage but unstable in the trypomastigote infective stage of the parasite. Its turnover is controlled by an AU-rich element (ARE) localized in the 3'-untranslated region, since a reporter gene lacking this sequence was stable in the trypomastigote stage (Di Noia, J. M., D'Orso, I., Sanchez, D. O., and Frasch, A. C. (2000) J. Biol. Chem. 275, 10218-10227). Here, we show by gel mobility shift assay that the 44-nt ARE sequence interacts with a set of stage-specific AU-rich element RNA-binding proteins (ARE-BPs). The epimastigote stage AU-rich element RNA-binding protein, named E-ARE-BP, and the trypomastigote stage ARE-BPs, named T-ARE-BPs, are efficiently competed by poly(U). UV cross-linking analysis showed that E-ARE-BP has an apparent molecular mass of 100 kDa and is different from the 45-50-kDa ARE-BPs present in other stages of the parasite. Transfection experiments allowed the identification of a novel cis-element that might be responsible for a positive effect on mRNA stability. It is a G-rich element, named GRE, composed of two contiguous CGGGG pentamers. The factors that recognize the GRE differ from those that bind the ARE in both molecular mass and subcellular localization. Thus, ARE and GRE are functionally different cis-elements that might regulate mucin expression throughout the parasite life cycle.
Gene expression in kinetoplastid parasites is controlled mainly at the post-transcriptional level (reviewed in Refs. 2 and 3). α-Amanitin-sensitive RNA polymerase II from trypanosomes transcribes large polycistronic units containing a number of coding sequences (4). Transcriptional start sites have been extremely difficult to detect; only two putative promoter regions have been described, as transcriptionally void regions upstream from the actin and Hsp70 genes (5, 6). The maturation of polycistronic RNA precursors to render individual mRNA molecules is achieved by cleavage in the intergenic region through coupled 5′ end trans-splicing and 3′ end polyadenylation (7). Both processes seem to depend on the recognition of polypyrimidine tracts present in the intergenic regions (8), which act as bifunctional elements affecting RNA processing both upstream and downstream of themselves (7).
In vivo treatment of parasites with protein synthesis inhibitors induces an accumulation (9) or a decrease (1) in the mRNA levels of some transcripts, and this effect is not due to an increase or a reduction in transcription, respectively. These results therefore point to the presence of labile factors, affected by protein synthesis inhibitors, that might be negative or positive regulators of mRNA maturation. However, the mechanisms involved, whether interference with pre-mRNA processing, unbalanced nucleo-cytoplasmic transport, or unusual mRNA stability control processes, remain to be identified. It is known that both 5′- and 3′-untranslated regions (UTRs) are responsible for stabilization/destabilization mechanisms, up- or down-regulating mRNA levels in a developmentally regulated manner (10, 11). In transient and stable parasite transfection experiments, the 3′-UTRs of some mRNAs were found to influence the expression of a reporter gene in a stage-specific manner (1, 10, 11). The way in which the 3′-UTR differentially influences mRNA steady-state levels is still unknown. Furthermore, few cis-elements responsible for these post-transcriptional regulatory mechanisms have been defined (12-14).
Several cis-elements and trans-acting factors controlling mRNA stability have been characterized in higher eukaryotes (15, 16). A well known example is that of the AU-rich elements, or AREs, cis-sequences localized in the 3′-UTR of short-lived mRNAs such as those of proto-oncogenes and cytokines (17). These elements are recognized by different positive or negative RNA-binding proteins, like HuR and AUF-1/heterogeneous nuclear ribonucleoprotein D, respectively (18-20), causing rapid changes in mRNA stability. Another example is the ribonucleoprotein complex associated with human α-globin mRNA (21). A cytidine-rich (C-rich) segment within the 3′-UTR of α-globin is critical for mRNA stability through interaction with the different trans-acting factors that mediate this effect (22). However, it has been shown that neither αCP1 nor αCP2 complex-forming proteins can bind the C-rich element unless they are complexed with the remaining non-poly(C)-binding proteins, such as AUF1/heterogeneous nuclear ribonucleoprotein D (23). Thus, a protein implicated in ARE-mediated mRNA decay is also an integral component of the mRNA-stabilizing α-complex.
Trypanosoma cruzi, the protozoan parasite agent of Chagas disease, is covered by a dense mucin coat (24), at least in two of its developmental stages. Mucins are highly O-glycosylated proteins having relevant roles in cell protection and in cell-cell interactions, especially in immune cell migration in vertebrate cells (25). Mucins from T. cruzi were classified into two different protein families that differ between parasite stages. The form of the parasite present in the insect vector, epimastigote, expresses a small mucin family named TcSMUG (35-50 kDa) whose core proteins are encoded in about 70 different genes (1), while the forms of the parasite present in the mammalian host, bloodstream trypomastigotes, have larger mucins encoded by 500 different genes (26). Developmentally regulated expression of these mucins in the different parasite stages is relevant because they might accomplish different functions related with parasite survival (27).
We have previously demonstrated that a 44-nt ARE sequence within the 3′-UTR of the SMUG mucin family is a destabilizing cis-element acting in a stage-specific manner (1). These results suggest that different trans-acting factors might bind mucin transcripts in vivo and selectively regulate their mRNA stability throughout parasite development. We have now identified a novel G-rich element, named GRE, which might be responsible for a stage-specific stabilization of the SMUG mRNA family in the epimastigote form of the parasite. Transfection experiments show that the GRE and ARE sequences have opposite functions in terms of mRNA stabilization in the different stages of the parasite and are specifically recognized by trans-acting factors, some of them developmentally regulated during the trypanosome life cycle.
EXPERIMENTAL PROCEDURES
Parasite Cultures and Drug Treatments-The Trypanosoma cruzi CL-Brener cloned stock (28) was used. The different forms of the parasite were obtained as described previously (29). Purity of the different parasite forms was determined by conventional microscopy and was at least 95%. Epimastigote cultures were taken in logarithmic growth phase at a cell density of 3 × 10⁷/ml and treated with actinomycin D (ActD) (Sigma) at a final concentration of 10 μg/ml, which is known to inhibit transcription in trypanosomatids (12, 30). Aliquots were taken at different times after addition of the inhibitor. Cycloheximide (Sigma) was used at a final concentration of 50 μg/ml (31). Parasite viability was confirmed by microscopy at every time point of the experiments. Culture aliquots were harvested by centrifugation, washed with phosphate-buffered saline, and frozen at −70°C until RNA extraction.
Chloramphenicol Acetyltransferase (CAT) Assay-An equal number of parasites from each transfected population was harvested and washed once with 0.25 M Tris-HCl (pH 8), and cellular extracts were prepared by four freeze-thaw cycles and heat inactivation. Cell lysates were assayed for CAT activity as described previously (32). Reactions were conducted for 1 h at 37°C with cellular extracts prepared from 10⁷ parasites; this time was previously adjusted to fit within the linear range of the assay. Conversion of [¹⁴C]chloramphenicol to acetylated forms was analyzed by thin layer chromatography and quantified by densitometry.
DNA Constructions and Parasite Transfections-The chloramphenicol acetyltransferase (cat) gene, the complete TcSMUG intergenic region, and the SMUG-L and SMUG-LΔAU constructs were amplified by PCR as described previously (1). All 3′-UTR deletions were created by PCR and fused downstream from cat into the HindIII and XhoI sites.
Each DNA fragment was cloned in the pTEX vector (33), kindly provided by Dr. J. M. Kelly (London School of Hygiene and Tropical Medicine, London, United Kingdom). Transfections were carried out as described previously (1). The neo resistance gene was used for selection and as an internal control of transfection levels, since it is transcribed polycistronically from the same promoter (33). The polyadenylation site of the cat mRNA was determined by reverse transcription-PCR using the oligonucleotide anchor-d(T) (5′-GCGAGCTCCGCGGCCGCG(T)₁₈-3′) and the Superscript II enzyme (Life Technologies, Inc.). PCR was performed on the first-strand product using CAT/se (5′-gggATGGAGAAAAAAATCACTGGATATA-3′) and an oligonucleotide with the anchor sequence of anchor-d(T). The products were cloned in pGEMT-Easy (Promega, Madison, WI) and sequenced.
In Vitro Transcription-All plasmids for in vitro transcription were constructed as follows. Complementary oligonucleotides, corresponding to the sense and antisense strands of the RNAs to be transcribed, were annealed and cloned into the EcoRI and HindIII sites of the vector pBS(−) (Stratagene, La Jolla, CA). Transcription of sense sequences was performed with 1 μg of HindIII-digested plasmid using T7 RNA polymerase (Promega) in the presence of [α-³²P]UTP (800 Ci/mmol, PerkinElmer Life Sciences) and 500 μM ATP, CTP, and GTP. Antisense transcripts were synthesized with T3 RNA polymerase. All transcripts were purified on an 8 M urea, 12% polyacrylamide gel and eluted overnight in RNA elution buffer (0.3 M NaOAc, 10 mM MgCl₂, and 1 mM EDTA). After elution, RNAs were ethanol-precipitated and resuspended in 50 μl of water. Preparative in vitro transcription was done as described previously (34), and products were detected by UV shadowing.
Protein Extract Preparation and Subcellular Fractionation-For total protein extract preparation, parasites were resuspended in lysis buffer (0.75% CHAPS detergent, 1 mM MgCl₂, 1 mM EGTA, 5 mM β-mercaptoethanol, 10 mM Tris-HCl (pH 7.6), and 10% glycerol) supplemented with the protease inhibitors 1 mM phenylmethylsulfonyl fluoride and 50 μM E-64 (Sigma). After 30 min on ice, the extract was centrifuged at 19,000 rpm (SS-34 rotor), and the supernatant was stored at −70°C. For subcellular fractionation, nuclear and cytoplasmic fractions were prepared as described previously for another kinetoplastid parasite, Crithidia fasciculata (35). Briefly, parasites were washed twice in Buffer A (10 mM Tris-HCl (pH 7.6), 1.5 mM MgCl₂, 10 mM KCl) and resuspended in Buffer B (Buffer A plus 1 mM dithiothreitol, 1 mM EDTA, and 0.5% Nonidet P-40) in the presence of protease inhibitors. After 20 min on ice with vortexing every 3 min, the preparation was centrifuged for 15 min at 5000 rpm. The supernatant containing the cytosolic fraction was mixed with an equal volume of Buffer D (10 mM Tris-HCl (pH 7.6), 10 mM KCl, 1 mM MgCl₂, 1 mM EGTA, 10% glycerol). The pellet was resuspended in an equal volume of Buffer C (Buffer D plus 20% glycerol), passed through a 21-gauge needle, and frozen several times in liquid N₂ to lyse nuclei. After centrifugation to remove debris, the supernatant was mixed with an equal volume of Buffer D (nuclear fraction). Polysomes were prepared as previously described (36). Where indicated, the polysome extract was pretreated at 25°C for 15 min with ribonuclease A (37), and the RNase was inactivated with the ribonuclease inhibitor RNasin (Promega) prior to incubation of the extract with the labeled RNA. The amount of RNase A used was determined by titration.
Analysis of RNA-Protein Interactions-Binding reactions were performed with 10 μl (3 μg/μl) of trypanosome total extract (prepared as above), 10,000 cpm of RNA probe, 10 mM Tris-HCl (pH 7.6), 5% glycerol, 100 mM KCl, 5 mM MgCl₂, 1 μg/ml bovine serum albumin, and 500 ng/μl tRNA (Sigma) in a 20-μl final volume. The incubation time was 10 min at 25°C. Heparin was added at a concentration of 1 μg/ml. Each reaction was loaded directly onto a 7% acrylamide-bisacrylamide (38:2), 0.5× TBE nondenaturing gel for an electrophoretic mobility shift assay (EMSA). The gels were dried and exposed to film at −70°C. For competition experiments, the extract was incubated simultaneously with the indicated amounts of unlabeled and labeled RNAs. All homoribopolymers (poly(A), poly(C), poly(G), and poly(U)) were from Sigma.
UV Cross-linking Analysis-³²P-labeled RNA was incubated with a trypanosome total extract as described above. The in vitro binding reaction was run on a 7% acrylamide-bisacrylamide (38:2), 0.5× TBE native gel. The RNA-protein complexes, detected by exposing the gel to film at 4°C, were cross-linked by UV irradiation (254 nm, 500 mJ/cm²), treated with RNase T1, cut from the gel, and eluted with 0.1% SDS at 37°C with vigorous shaking. The cross-linked products were resolved by electrophoresis on 10% SDS-PAGE, and the apparent molecular masses of the proteins were determined with molecular size protein standards.

FIG. 1. Half-life determinations of cat mRNA fused to the complete mucin SMUG-L 3′-UTR and deletion mutants. A, schematic representations of the complete SMUG-L and 3′-UTR deletion mutants. All constructs were made by PCR as described under "Experimental Procedures," using primers carrying restriction endonuclease sites (B, BamHI; S, SmaI; H, HindIII; E, EcoRI; X, XhoI). The 5′ and 3′ intergenic regions (IR) contain the original trans-splicing site (ag) and polypyrimidine tract (pPy) for efficient mRNA processing. Epimastigote forms of the parasite were transfected with the indicated DNA constructs cloned in the pTEX vector (33). B, epimastigotes transfected with the recombinant DNAs described in A were treated with 10 μg/ml ActD, and total RNA was prepared at the indicated times (0, 60, 120, and 180 min). Equal amounts of RNA were analyzed by Northern blot; the same filter was sequentially hybridized with cat, neo, and rRNA probes. The hybridization with the neo probe serves as an internal control of the experiment, since this gene is expressed from the same vector. C, quantitation of the bands from the Northern blots shown in B. The half-life of each transcript is indicated below the graph. D, epimastigotes transfected with the SMUG-L and SMUG-LΔGRE constructions were treated with 10 μg/ml ActD, and total RNA was prepared at the indicated times (0, 15, 30, 45, and 60 min). E, quantitation of the bands from the Northern blots shown in D. In C and E, the data are expressed as the mean relative amount of mRNA ± the standard error of the mean (n = 3) at each time point after correction for the level of rRNA. Differences between SMUG-L and each deletion mutant were significant when comparing the means by Student's t test (*, p < 0.05; **, p < 0.01).
Northern Blot-RNA was purified using TRIzol reagent following the manufacturer's instructions (Life Technologies, Inc.). Northern blots were carried out as described previously (38). Zeta-Probe nylon membranes (Bio-Rad) were used for all blottings. Probes were radioactively labeled with [α-³²P]dCTP (PerkinElmer Life Sciences) by PCR as in Ref. 39. Densitometry was done using 1D Image Analysis Software (Kodak Digital Science).
RESULTS

Both Positive and Negative cis-Elements within the 3′-UTR of the SMUG Mucin Family Regulate mRNA Stability and Translation Efficiency-The TcSMUG mucin family was previously shown to be post-transcriptionally regulated, and an ARE within its 3′-UTR was found to be responsible for a destabilizing mechanism acting in a stage-specific manner (1). However, this mucin family is very stable in the epimastigote stage, and the ARE motif is not responsible for this selective mRNA stabilization (see below). We therefore searched for other cis-elements required for mRNA stability and/or translational control in this parasite stage. Five 3′-UTR deletion mutants of the SMUG-L clone were constructed (Fig. 1A). Each mutant is deleted in one of the blocks in which the mucin 3′-UTR is organized (1). The complete construction consists of the cat gene flanked by both the 5′ and 3′ intergenic regions of the SMUG-L group, which contain sequences that ensure correct trans-splicing and polyadenylation, cloned in the pTEX vector (33).
Half-life determinations of transcripts from the complete construct (SMUG-L) and the deletion mutants were carried out in the epimastigote form of the parasite (Fig. 1), taking advantage of the presence of an ActD-sensitive promoter in the pTEX vector. The transcript from the complete construct SMUG-L had a half-life of about 70 min. Conversely, the SMUG-LΔGRE transcript had a shorter half-life (t½ = 30 min), about 42% of that of the SMUG-L clone. The GRE sequence is a G-rich element that comprises the first 27 nt of the 3′-UTR downstream of the stop codon and is composed of two contiguous CGGGG pentamers (see below). Transcripts from two other constructs, SMUG-LΔ2 and SMUG-LΔ3, had half-lives similar to that of SMUG-L (t½ = 75 min and t½ = 65 min, respectively). Finally, the SMUG-LΔ1 and SMUG-LΔSire deletion mutants were transcribed into RNAs having increased half-lives (t½ = 140 min) (Fig. 1, B and C). Since the short interspersed repeat element (SIRE) retrotransposon (40) is a large element (450 base pairs), partial deletions would be required to better define the region causing this effect. Since in the half-life determination of clone SMUG-LΔGRE (Fig. 1B) less than 50% of the mRNA remained at the first sampling time (60 min), the experiment was repeated taking samples between 0 and 60 min. Thus, the half-life of SMUG-LΔGRE was determined more accurately and shown to be 30 min, identical to that indicated in Fig. 1C (Fig. 1, D and E). These results suggest that the sequences in the 3′-UTR can be divided into several functional regions: 1) a positive G-rich element named GRE; 2) a negative element between nucleotides 28 and 62 downstream of the stop codon, named E1 for element 1; and 3) an AU-rich element between nucleotides 272 and 318 involved in selective mRNA destabilization in a stage-specific manner (see next section). The 3′-UTRs of SMUG-L and SMUG-LΔGRE were modeled using the Genequest program (Lasergene Package, DNAstar Inc.) to predict whether deletion of the GRE sequence would affect the three-dimensional structure of the RNA. Both transcripts were found to share the same modeled structure, including all loops of the 3′-UTR (data not shown). Thus, it is likely that the sequence of the G-rich element itself confers the effect on mRNA stability, rather than a modification of the whole 3′-UTR of the RNA molecule.
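Half-lives of this kind are conventionally estimated by fitting the rRNA-normalized densitometric time course to first-order decay; a minimal sketch is shown below, with placeholder numbers rather than values read from the figures.

```python
import numpy as np

def half_life(times_min, band, loading_control):
    """t1/2 from an ActD time course, assuming first-order decay.

    band / loading_control: densitometric intensities of the cat
    transcript and the rRNA control at each time point. A least-squares
    line is fit to ln(relative mRNA) vs. time; t1/2 = ln(2) / k.
    """
    rel = np.asarray(band, float) / np.asarray(loading_control, float)
    rel = rel / rel[0]                            # normalize to t = 0
    slope, _ = np.polyfit(times_min, np.log(rel), 1)
    return np.log(2) / -slope

# Illustrative numbers only (not taken from the figures):
print(half_life([0, 15, 30, 45, 60],
                [1.00, 0.71, 0.50, 0.35, 0.25],
                [1.0, 1.0, 1.0, 1.0, 1.0]))       # ~30 min decay
```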
In order to determine whether the different domains of the 3′-UTR also influence expression at the translational level, the CAT activity from control and deletion mutants was measured, and the values obtained were normalized to the cat mRNA steady-state levels from each construct (Fig. 2). Enzymatic activity was expressed as a percentage of that obtained with the complete construct SMUG-L. The value obtained with the parasite population transfected with SMUG-LΔGRE was similar (117% of SMUG-L) to that obtained from parasites transfected with the complete construct SMUG-L, suggesting that this G-rich element does not modulate translation efficiency. Conversely, the SMUG-LΔ1 deletion mutant, whose transcript has a longer half-life, also presented an increase in translation (185% of SMUG-L). This suggests that the element deleted in SMUG-LΔ1 regulates both mRNA stability and translation efficiency in a negative manner. Moreover, SMUG-LΔ2 and SMUG-LΔ3 did not show a considerable effect on translational activity (88% and 112% of SMUG-L, respectively) (Fig. 2). Finally, the retrotransposon SIRE seems to produce a positive effect on translation, since its deletion causes a decrease in the CAT activity/cat mRNA ratio (15% of the SMUG-L construct). This result is interesting, because it was suggested that SIRE exhibits another function in the process of mRNA maturation (see "Discussion"). Sites for 5′ end trans-splicing and 3′ end polyadenylation were the same in all the mRNAs derived from the constructs made, as indicated under "Experimental Procedures."

FIG. 3. A novel GRE localized in the 3′-UTR of SMUG-L confers mRNA stability in a stage-specific manner and is functionally different from the AU-rich element. A, schematic representation of SMUG-L (complete construct) and the SMUG-LΔGRE and SMUG-LΔAU deletion mutants used to transfect the epimastigote stage of the parasite. The sequences deleted in clones SMUG-LΔGRE and SMUG-LΔAU are indicated in the SMUG-L scheme. B, Northern blot of total RNA from epimastigotes transfected with the constructs shown in A. Epimastigotes were treated with 10 μg/ml ActD, and total RNA was prepared at the indicated times (0, 45, 60, 90, and 120 min). The same filter was sequentially hybridized with cat, neo, and rRNA probes. C, quantitation of cat mRNA levels from the Northern blot shown in B. The data are expressed as the mean relative amount of mRNA ± the standard error of the mean (n = 3) at each time point after correction for the level of rRNA. Differences between SMUG-LΔGRE and SMUG-L and between SMUG-LΔGRE and SMUG-LΔAU were significant (*, p < 0.05; **, p < 0.01, comparing the means by Student's t test). D, epimastigote-derived metacyclic trypomastigotes were treated as indicated in B. E, quantitation of cat mRNA levels from the Northern blot shown in D. The data are expressed as the mean relative amount of mRNA ± the standard error of the mean (n = 2) at each time point after correction for the level of rRNA. Differences between SMUG-LΔAU and SMUG-L and between SMUG-LΔAU and SMUG-LΔGRE were significant (**, p < 0.01, comparing the means by Student's t test). In C and E, the half-life of each transcript is indicated below the graph.

A Novel GRE Confers mRNA Stability in a Stage-specific Manner and Is Functionally Different from the AU-rich Instability Element-The effect of the GRE deletion (SMUG-LΔGRE construct) on mRNA stability was analyzed in different parasite stages, and the results were compared with those obtained with the constructs SMUG-L (complete 3′-UTR) and SMUG-LΔAU (lacking the 44-nt AU-rich instability element) (Fig. 3A).
Epimastigote forms were differentiated into the infective form of the parasite, metacyclic trypomastigotes, and incubated with ActD to determine the half-lives of the transcripts (Fig. 3). The probe used in the Northern blot analysis corresponds to the cat open reading frame. In the epimastigote stage, the SMUG-L and SMUG-LΔAU transcripts, both bearing the GRE sequence within their 3′-UTRs, have similar half-lives (t½ = 70 and t½ = 68 min, respectively). On the other hand, transcripts from SMUG-LΔGRE are less stable (t½ = 30 min) (Fig. 3C). It can be concluded that 1) the GRE sequence in the epimastigote stage is involved in a selective mRNA stabilization process, and 2) the ARE sequence seems not to be involved in mRNA stabilization in this parasite form, since transcripts from both the SMUG-L and SMUG-LΔAU constructs have similar half-lives (Fig. 3, B and C) (see "Discussion").
Analysis of the infective metacyclic trypomastigote stage, derived from differentiation of epimastigotes, also revealed differences in mRNA steady-state levels. Both SMUG-L and SMUG-LΔGRE RNAs were extremely short-lived (t½ < 10 min) compared with those from SMUG-LΔAU, which have a t½ > 30 min (Fig. 3E). Thus, the instability of the SMUG-L and SMUG-LΔGRE transcripts in the metacyclic trypomastigote stage could be due to the presence of the ARE sequence within their 3′-UTRs. Additionally, the same filter used to detect cat transcripts was hybridized with a neo probe. Since the neomycin gene is flanked by glyceraldehyde-3-phosphate dehydrogenase intergenic regions (33) in the same plasmid bearing the cat reporter, it serves as an internal control for half-life determinations. As seen in Fig. 3 (B and D), neomycin half-lives are similar in each parasite stage, independently of the construct tested.
The 27-nt GRE That Confers mRNA Stability Specifically Interacts with Different Nuclear and Cytoplasmic Complex-forming RNA-binding Proteins-The identification of this novel cis-element involved in an mRNA stabilization process allowed a search for trans-acting factors able to recognize G-rich sequences. The 27-nt GRE sequence was transcribed in vitro as described under "Experimental Procedures" and used to perform RNA-protein binding reactions and EMSA. The SMUG-L-GRE RNA oligonucleotide revealed the same three ribonucleoprotein complexes in all four parasite forms tested (Fig. 4A). As controls, no bands corresponding to G-complexes 1, 2, and 3 were observed after incubation of the SMUG-L-GRE RNA with RNase A or of the protein extract with proteinase K (data not shown). To determine the apparent molecular masses of the proteins that compose the GRE-ribonucleoprotein complexes, a total protein extract from the epimastigote form of the parasite was incubated with an excess of ³²P-labeled SMUG-L-GRE RNA oligonucleotide. The in vitro binding reactions were run on a native polyacrylamide gel and, after UV cross-linking, the complexes were treated as described under "Experimental Procedures" and further electrophoresed on 10% SDS-PAGE (Fig. 4B). G-complex 1 gave rise to a single band with an apparent molecular mass of 80 kDa, while G-complexes 2 and 3 are composed of several proteins with apparent molecular masses of 35, 39, and 66 kDa. The RNA-binding proteins that compose G-complex 2 are present at different abundances in the epimastigote total lysate: one low-abundance protein of about 66 kDa is detected together with two highly abundant factors of 35 and 55 kDa (Fig. 4B).
Competition experiments were conducted to further characterize the sequence specificity of the complexes formed. Each of the four homoribopolymers was used to compete with the SMUG-L-GRE RNA oligonucleotide in an in vitro binding reaction. Poly(G) selectively blocked the assembly of two ribonucleoprotein complexes, G-complex 1 (the smaller band) and G-complex 2 (Fig. 4C). This result is in agreement with the G-rich nature of the cis-element used in the in vitro binding reaction. G-complex 1 is effectively competed out at a 10-fold molar excess, whereas the formation of G-complex 2 only partially disappeared at a 1000-fold molar excess. This difference could be due to differences in the concentrations of the complex-forming proteins in the epimastigote lysate, as also suggested by the UV cross-linking analysis (Fig. 4B), where G-complex 1 is barely detectable compared with the amount of proteins forming G-complex 2. Conversely, complex 3 was not efficiently competed by any homoribopolymer and thus might be nonspecific.
The minimal size of the SMUG-L-GRE RNA element recognized by the proteins forming G-complexes 1 and 2 was then analyzed. The RNA sequence was divided into two separate sequences: (a) SMUG-L-GRE-1, with the sequence GGACGGGGCGGGGC, and (b) SMUG-L-GRE-2, which has a CG-rich content, GCGCGUGCGCCG (Fig. 5A). The SMUG-L-GRE-1 RNA is sufficient to interact with both trans-acting factors (Fig. 5B). This result suggests that the minimal sequence for G-complex 1 and 2 formation is the first half of the element, which is composed of two contiguous CGGGG pentamers. G-complex 1 is localized in the cytoplasm, whereas G-complex 2 is equally distributed between both compartments, nucleus and cytoplasm (Fig. 5B).
The 44-nt AU-rich Instability Element Interacts with Stage-specific, Developmentally Regulated RNA-binding Proteins-The 44-nt AU-rich cis-element is important in conferring mRNA instability in a stage-specific manner (1) (Fig. 3). Therefore, to determine whether the RNA-binding proteins that recognize this element in vitro, named here SMUG-L-AU, are developmentally regulated, protein extracts from the four different parasite stages were incubated with the RNA template in an in vitro binding reaction. The complexes formed were identified on a native polyacrylamide gel (Fig. 6A). A stage-specific pattern of RNA binding to this motif was observed. In the epimastigote stage, an RNA-binding protein named E-ARE-BP, for epimastigote AU-rich element binding protein, migrated much more slowly in the native polyacrylamide gel than the ribonucleoprotein complexes detected in the other three parasite stages. To determine the apparent molecular masses of these RNA-binding proteins, the total protein lysate of each parasite stage was incubated with an excess of SMUG-L-AU RNA probe, and the ribonucleoprotein complexes identified in the EMSA were UV cross-linked and further electrophoresed on SDS-PAGE. The E-ARE-BP had an apparent molecular mass of ~100 kDa; in contrast, the ARE-BPs range between 45 and 50 kDa (Fig. 6B). Both results, 1) the ARE deletion affecting SMUG mucin mRNA stability (Fig. 3) and 2) the developmentally regulated expression pattern of the RNA-binding proteins that recognize the ARE motif (Fig. 6, A and B), point to a coordinated, stage-specific process during parasite life cycle development.
Competition experiments were carried out to further confirm the specificity of the RNA-binding protein of the epimastigote form of the parasite that recognized the 44-nt SMUG-L-AU RNA template. Results with the four homoribopolymers showed that E-ARE-BP is selectively competed by poly(U) (Fig. 7A) but not by the other three homoribopolymers, as expected due to the U-rich nature of this element. Unlabeled sense and antisense RNAs were also tested in competition experiments (Fig. 7B). The addition of increasing amounts of unlabeled sense SMUG-L-AU RNA to the reaction mixture resulted in a concentration-dependent reduction in the formation of the ribonucleoprotein complex containing E-ARE-BP, whereas the addition of unlabeled antisense SMUG-L-AU RNA had little effect on the formation of this complex. Trypomastigote ARE-BPs (T-ARE-BPs) are also efficiently competed by poly(U) RNA, and not by any other homoribopolymer (Fig. 7C). Additionally, we tested the competition with unlabeled in vitro transcribed SMUG-L-AU sense and antisense RNAs. The SMUG-L-AU sense RNA, as was shown for the E-ARE-BP, abolished the binding of the ARE-BPs in a concentration-dependent manner. This result confirmed that the T-ARE-BPs selectively and specifically recognized the AU-rich sequence of SMUG mRNAs (Fig. 7D) and that the U-rich nature of the oligoribonucleotide is important for the binding.
Different Subcellular Localization of ARE RNA-binding Proteins-The presence of both AU- and G-rich binding activities was analyzed in nuclear and cytoplasmic preparations of T. cruzi epimastigotes and trypomastigotes. Subcellular fractionation was done as described under "Experimental Procedures." These experiments showed that E-ARE-BP is mainly cytosolic, or that E-ARE-BP might recognize the SMUG-L-AU RNA only in the cytoplasm and not in the nucleus (see "Discussion") (Fig. 8A). In contrast, the 45-50-kDa T-ARE-BPs are localized in similar amounts in both compartments, nucleus and cytoplasm (Fig. 8C).
ARE-binding proteins of higher eukaryotes have been shown to associate with polysomes, a localization that reflects the translational regulatory mechanisms conferred by those trans-acting factors (41-43). In a previous work, we reported that the ARE motif positively regulates translation efficiency in the epimastigote stage of the parasite (1), as is the case with the ARE sequences in TNF-α and some cytokine and proto-oncogene mRNAs (44). A polysome fraction (P) of T. cruzi epimastigotes was prepared as described previously (36) in the presence of cycloheximide to freeze ribosomes. After extract preparation and centrifugation through a sucrose cushion, the supernatant was saved as a postribosomal preparation (PS) and the pellet as polysomes (P). All the extracts were analyzed in an in vitro binding reaction with the SMUG-L-AU RNA template. The polysome extract showed some AU-rich sequence binding activity, but it was minimal compared with that observed in the postribosomal fraction (Fig. 8A). To determine whether the lack of a strong shifted band in the polysome fraction was due to the presence of some endogenous U-rich RNA competitor that might be sequestering part of E-ARE-BP, the extract was pretreated with ribonuclease A (RNase A) as described previously (37), and the nuclease was inactivated prior to performing the in vitro binding reaction with the SMUG-L-AU RNA probe. The result shown in Fig. 8B demonstrates that, in the presence of RNase, the binding of E-ARE-BP is increased 4.5-fold, suggesting that some RNA competes with the labeled AU-rich RNA in the polysome fraction. Moreover, the RNA probe remains intact after incubation with the polysome extract; thus, the absence of a strong band in this fraction was not due to the presence of a polysome-associated nuclease that could recognize the ARE sequence (Fig. 8B).
We conclude that E-ARE-BP is mainly cytoplasmic and may be partially associated with polysomes, whereas T-ARE-BPs are localized in both compartments and may be nuclear-cytoplasmic shuttling RNA-binding proteins.
DISCUSSION
In this work we have obtained evidence for the existence of novel cis-elements localized in the 3′-UTR of SMUG mucins from T. cruzi that control both mRNA stability and translation efficiency. In addition to the AU-rich element involved in the selective destabilization of mucin transcripts in the metacyclic trypomastigote stage of the parasite (Ref. 1 and this work), new negative and positive cis-elements have now been identified. First, a small GRE, comprising the first 27 nt downstream of the stop codon and containing two contiguous CGGGG pentamers, functions as a positive element only in the epimastigote stage of the parasite. Second, deletion of another element in the construction SMUG-LΔ1, named here E1 and localized between nucleotides 28 and 62 of the 3′-UTR, increases the half-life of the cat reporter mRNA (Fig. 1), suggesting that this sequence acts as a negative element. Finally, deletion of the 450-base pair retrotransposon SIRE produces the same effect as deletion of element E1, but, given the large size of the SIRE sequence, further work is required to confirm this effect. It was shown previously that SIRE is responsible for the down-regulation of expression of the TCP2 ribosomal protein gene by altering its trans-splicing efficiency (45). Thus, different functions might be assigned to sequences within this retrotransposon. Indeed, it has been reported that U-rich regions and also the length of the 3′-UTR positively regulate mRNA polyadenylation and the translation efficiency of a reporter gene (11). Although the GRE sequence is sufficient to up-regulate SMUG mRNA abundance, E1 has a dual effect on mRNA stability and translation, regulating both processes in a negative manner. It is not unprecedented for a single element to have two functions, since AU-rich sequences within the 3′-UTR of TNF-α affect both mRNA abundance and translation efficiency (46-48).
Two functionally different cis-elements, ARE and GRE, were identified. The ARE was involved in mRNA destabilization in the infective stage of the parasite, but not in the replicative epimastigote stage, because mRNAs from the SMUG-L and SMUG-LΔAU constructs have similar half-lives in the latter stage (Fig. 3). These results further support the idea that the RNA-binding protein(s) that recognize the ARE in the epimastigote stage of the parasite might provide passive resistance to endo- or exonucleolytic cleavage rather than actively protecting the mRNA. Conversely, the GRE sequence has a different effect on mRNA stability throughout parasite development: it up-regulates SMUG mRNA abundance in the epimastigote stage, since deletion of the GRE motif makes the mRNA more labile (Fig. 3, B and C). The presence of the ARE sequence within the 3′-UTR of mucin SMUG mRNA has also been shown to modulate translation efficiency in a positive manner (1). In contrast, the GRE had no considerable effect on translational levels, suggesting that both elements might cooperate in the in vivo regulation of SMUG mRNA abundance in the epimastigote stage of the parasite, but not in translation. A coordinated interaction between different negative and positive cis-elements was observed in the 3′-UTR of procyclic mRNAs of African trypanosomes, affecting both mRNA abundance and translation efficiency (12).
Cellular factors interacting with RNA motifs that regulate mRNA stability have not yet been identified in trypanosomes. Evidence showing that GRE and ARE RNA sequences interact with different cellular trans-acting factors has now been obtained (Figs. 4 and 6, summarized in the model of Fig. 9). Three GRE-forming ribonucleoprotein complexes were detected. Two of them, named G-complex 1 and G-complex 2, were specifically and efficiently competed by poly(G) homoribopolymer (Fig. 4C). G-complex 1 is formed by a single protein band whose apparent molecular mass is 80 kDa, and G-complex 2 is composed of several factors whose molecular masses are about 35, 39, and 66 kDa. This suggests that the 80-kDa protein of G-complex 1 directly recognizes the GRE sequence. In the case of G-complex 2, the three proteins might also be involved in protein-protein interactions. The presence of large complexes might regulate mRNA expression in a coordinated way, depending on the proteins that compose them or the protein-protein interactions that occur during the different stages of the parasite. Since the presence of the ARE within the SMUG-L 3′-UTR led to rapid mRNA decay, it is possible that a coordinated interaction of GRE-binding proteins with ARE-BPs and/or other, as yet unidentified, protein factors determines the final mucin SMUG mRNA stability (Fig. 9).
A model for the post-transcriptional regulatory mechanism acting on mucin SMUG mRNA and mediated by ARE and GRE RNA-binding proteins is shown in Fig. 9. E-ARE-BP, only expressed in the epimastigote stage, might be a positive trans-acting factor interacting with the ARE and protecting SMUG mRNA from degradation. E-ARE-BP binding could also prevent the association of the destabilizing factor(s) with those mRNAs, possibly through competition for binding to similar cis-elements. Indeed, E-ARE-BP might be one of the proteins involved in the modulation of the translation activity mediated by the ARE motif (1), probably through interaction with other cellular factors of the translational apparatus. On the other hand, GRE RNA-binding proteins are always present during the life cycle of T. cruzi (Fig. 9). The possibility that an ARE-GRE complex exists in vivo, and that this whole complex or some complex-forming proteins interact with a poly(A)-binding protein or other cellular factor(s) to prevent the attack of a deadenylase activity, remains to be investigated. It is well known that, in mammalian cells, a large complex is formed by several proteins having different affinities for poly(C) homoribopolymer, such as the α-globin mRNA stability complex assembled at the pyrimidine-rich region of the globin 3′-UTR (22).
The results obtained by subcellular fractionation suggest that E-ARE-BP is localized in the cytoplasm, or only recognizes the RNA in this cellular compartment, where mRNA decay and translation take place. Future Western blot experiments would permit us to determine whether E-ARE-BP is also present in the nucleus and is thus recruited by some complex-forming proteins. Conversely, the ARE-BPs, at least in the trypomastigote stage, are present in similar amounts in both nucleus and cytoplasm and might be shuttling RNA-binding proteins (Fig. 8). G-complex-forming proteins, at least those of G-complex 2, might be RNA-binding factors that shuttle between nucleus and cytoplasm. Consequently, it is possible that those GRE RNA-binding proteins protect the messenger during transport between both compartments. Several proteins in higher eukaryotes were shown to shuttle between nucleus and cytoplasm (18,20,49). In trypanosomes, a classical nuclear localization signal was identified and shown to be functional in the La and histone H2B proteins (50). A regulated nucleus-cytoplasm export pathway mediated by CRM1 might also be present in kinetoplastid parasites, since leptomycin B affects the axenic growth of the epimastigote form of the parasite.2 Leptomycin B inhibits the formation of the complex formed by nuclear export signal-containing proteins, RanGTP, and the receptor CRM1 (51).
Post-transcriptional regulatory mechanisms, such as those mediated by ARE or GRE sequences, may be required for a quick change in the expression pattern of mucin core molecules, triggering parasite adaptation to sudden changes in the environment. In this regard, expression of the correct surface mucin coat may be of central importance for parasite survival. Identification of an in vivo role for these ARE and GRE RNA-binding proteins in the mRNA stability of T. cruzi transcripts may allow a model of RNA metabolism and maturation to be proposed for parasites that lack regulation at the level of RNA polymerase II transcription.
"Biology"
] |
Findings of the WMT 2016 Bilingual Document Alignment Shared Task
This paper presents the results of the WMT16 Bilingual Document Alignment Shared Task. Given crawls of web sites, we asked participants to align documents that are translations of each other. 11 research groups submitted 19 systems, with a top performance of 95.0%.
Introduction
Parallel corpora are especially important for training statistical machine translation systems, but so far the collection of such data within the academic research community has been ad hoc and limited in scale. To promote research on this problem we organized a shared task on one of the core processing steps in acquiring parallel corpora from the web: aligning bilingual documents from crawled web sites.
The task is to identify pairs of English and French documents from a given collection of documents such that one document is the translation of the other. As possible pairs we consider all pairs of documents from the same webdomain for which the source side has been identified as (mostly) English and the target side as (mostly) French.
Lack of data in some cases has held back research. To give an example, there are significant research efforts on various Indic languages (Post et al., 2012; Joshi et al., 2013; Singh, 2013), but this work has been severely hampered, since it uses very small amounts of data. But even for the language pairs tackled in high profile evaluation campaigns, such as the ones organized around WMT, IWSLT, and even NIST, we use orders of magnitude less data than what has been reported to be used in the large-scale efforts of Google or Microsoft. This diminishes the value of research findings: reported improvements for methods may not hold up once more data is used. Work in reduced data settings may also distract from efforts to tackle problems that do not go away with more data, but are inherent limitations of current models.
Related Work
Although the idea of crawling the web indiscriminately for parallel data goes back to the 20th century (Resnik, 1999), work in the academic community on extraction of parallel corpora from the web has so far mostly focused on large stashes of multilingual content in homogeneous form, such as the Canadian Hansards, Europarl (Koehn, 2005), the United Nations (Rafalovitch and Dale, 2009;Ziemski et al., 2015), or European Patents (Täger, 2011). A nice collection of the products of these efforts is the OPUS web site 1 (Skadiņš et al., 2014).
These efforts, focused on individual web sites, allow for writing specific rules for aligning documents as well as extracting and aligning content. Scaling these manual efforts to thousands or millions of web sites is not practical.
A typical processing pipeline breaks up parallel corpus extraction into five steps:
• Identifying web sites with bilingual content
• Crawling web sites
• Document alignment
• Sentence alignment
• Sentence pair filtering
For each of these steps, there has been a varying amount of prior work, and for some, tools are readily available. Since there has been comparatively little work on document alignment, we picked this problem as the subject for the shared task this year, but other steps are valid candidates for future tasks.
Web Crawling
Web crawling is a topic that has not received much attention from a specific natural language processing perspective. There are a number of challenges, such as identification of web sites with multilingual content, avoiding crawling web pages with identical textual content, learning how often to recrawl web sites based on the frequency of newly appearing content, and avoiding crawling of large sites that have content in different languages that is not parallel, and so on.
For the preparation of this shared task we used the tool HTTrack 2, a general web crawler that can be configured in various ways. Papavassiliou et al. (2013) present the focused crawler ILSP-FC 3, which integrates crawling more closely with subsequent processing steps like text normalization and deduplication.
Document Alignment
Document alignment can be defined as a matching task that takes a pair of documents and computes a score that reflects the likelihood that they are translations of each other. Common choices include edit distance between linearized documents (Resnik and Smith, 2003), cosine distance of idf-weighted bigram vectors (Uszkoreit et al., 2010), and the probability of a probabilistic DOM-tree alignment model (Shi et al., 2006).
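To make the second of these scorers concrete, the sketch below computes cosine similarity between idf-weighted bigram count vectors over a candidate document collection. It is a minimal illustration of the idea, not the implementation used by Uszkoreit et al. (2010); the tokenization, the idf formula, and all function names are our own choices.

```python
import math
from collections import Counter

def bigrams(text):
    # Word bigrams of a whitespace-tokenized, lowercased document.
    tokens = text.lower().split()
    return [" ".join(pair) for pair in zip(tokens, tokens[1:])]

def idf_weights(documents):
    # Inverse document frequency of each bigram over the collection.
    df = Counter()
    for doc in documents:
        df.update(set(bigrams(doc)))
    n = len(documents)
    return {bg: math.log(n / count) for bg, count in df.items()}

def cosine_score(doc_a, doc_b, idf):
    # Cosine similarity between idf-weighted bigram count vectors.
    vec_a, vec_b = Counter(bigrams(doc_a)), Counter(bigrams(doc_b))
    dot = sum(vec_a[bg] * vec_b[bg] * idf.get(bg, 0.0) ** 2 for bg in vec_a)
    norm_a = math.sqrt(sum((c * idf.get(bg, 0.0)) ** 2 for bg, c in vec_a.items()))
    norm_b = math.sqrt(sum((c * idf.get(bg, 0.0)) ** 2 for bg, c in vec_b.items()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

In practice one document is machine-translated first so that both sides are compared in the same language, as several submissions below do.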
Sentence Alignment
The topic of sentence alignment has received a lot of attention, dating back to the early 1990s with the influential Gale and Church algorithm, which is language-independent and easy to implement. It relies on relative sentence lengths for alignment decisions and hence is not tolerant to noisy input.
It is not clear which of these tools fares best with the noisy parallel text that we can expect from web crawls, which may have spurious content and misleading boilerplate.
Filtering
A final stage of the processing pipeline filters out bad sentence pairs. These exist either because the original web site did not have any actual parallel data (garbage in, garbage out), or due to failures of earlier processing steps.
As Rarrick et al. (2011) point out, a key problem for parallel corpora extracted from the web is filtering out translations that have been created by machine translation. Venugopal et al. (2011) propose a method to watermark the output of machine translation systems to aid this distinction. Antonova and Misyurev (2011) report that rule-based machine translation output can be detected due to certain word choices, and statistical machine translation output due to lack of reordering.
This year, a shared task on sentence pair filtering 8 was organized, albeit in the context of cleaning translation memories, which tend to be cleaner than the data at the end of a pipeline that starts with web crawls.
Comprehensive Tools
For a few language pairs, there have been individual efforts to cast a wider net, such as the billion word French-English corpus collected by Callison-Burch et al. (2009), or a 200 million word Czech-English corpus collected by Bojar et al. (2010). Smith et al. (2013) present a set of fairly basic tools to extract parallel data from the publicly available web crawl CommonCrawl 9 .
In all these cases, the corpus collection effort reinvented the wheel and wrote dedicated scripts to download web pages, extract text, and align sentences, with hardly any description of the methods used.
Our data preparation for the shared task builds partly on Bitextor 10 , which is a comprehensive pipeline from corpus crawling to sentence pair cleaning (Esplà-Gomis, 2009).
Training and Test Data
We made available crawls of web sites (defined as pages under the same webdomain) that have translated content. We also annotated some document pairs to provide supervised training data to the participants of the shared task.
Terminology
A quick note on terminology: Unfortunately, the notion of domain is ambiguous in NLP applications, and we use an unusual meaning of the word in this report. To avoid confusion we will instead use the term webdomain to refer to content from a specific website, e.g., "This page is from the statmt.org webdomain." We distinguish between webdomains using their Fully Qualified Domain Name (FQDN). Thus, www.example.com and example.com are considered to be different webdomains.
We will use source to denote English pages and target for French ones. This does not imply that translation was performed in that direction. In fact we cannot know if translation from one side to the other was performed at all, both sides could possibly be translations of a third language document.
The task was organized as part of the First Conference on Machine Translation (WMT), and all data can be downloaded from its web page 11 .
Data Preparation
We crawled full web sites with the web site copier HTTrack, from the homepage down, restricted to HTML content. Web sites differed significantly in their size, from a few hundred pages to almost 100,000.
In the test data we removed all duplicates from the crawl 12. Duplicates are defined as web pages whose text content is identical; duplicates may differ in markup and URL. To extract the text we used a Python implementation of the HTML5 parser to extract text as a browser would see it. As the text is free of formatting, determining whitespace is important. While generally following the standard, e.g. inserting line breaks after block-level elements 13, we found that inserting spaces around <span> tags helps tokenization, as these are often visually separated using CSS.
We restricted the task to the alignment of French and English documents, so we filtered out all web pages that are not in these two languages. However, we did not expect that participants would develop language-specific approaches. To detect the language of a document we fed the extracted text into an automatic language detector 14. We note that language detection is a noisy process and many pages contain mixed-language content, for example English boilerplate but French content. We take the overall majority language per page as the document language.
We decided to have a large collection of web sites, to encourage methods that can cope with various types of web sites, such as differing in size, balance in the number of French and English pages, and so on.
Given the large number of correct document pairs, we did not even attempt to annotate all of them, but instead randomly selected a subset of pages and identified their corresponding translated page. We augmented this effort with aligned document pairs that are indicated at the web site Linguee 16 , a searchable collection of parallel corpora, in which each retrieved sentence is annotated with its source web page.
The task then is to find these document pairs. Since this is essentially a recall measure, which can be gamed by returning all possible document pairs, we enforce a 1-1 rule, so that participants may align each web page only once.
Training Data
As training data we provide a set of 1,624 EN-FR pairs from 49 webdomains. The number of annotated document pairs per webdomain varies between 4 and over 200. All pairs are from within a single webdomain, possible matches between two different webdomains, e.g. siemens.de and siemens.com, are not considered in this task.
The full list of webdomains in the training data is listed in Table 1. Webdomains range in size from 33×29 pages (schackportalen.nu) to 24,325×43,045 pages (www.nauticnews.com).
Test Data
For testing, we provide 203 additional crawls of new webdomains, distinct from the ones in the training data, in the same format. No aligned pairs are provided for any of these domains. We removed exact duplicates of pages, keeping only one instance. Otherwise, we processed the data in the same way as the training data.
Data Format
The training document pairs are specified as one pair per line:
Source URL<TAB>Target URL
For the crawled data we provide one file per webdomain in .lett format, adapted from Bitextor. This is a plain text format with one line per page. Each line consists of 6 tab-separated values:
• Language ID (e.g. en)
• Mime type (always text/html)
• Encoding (always charset=utf-8)
• URL
• HTML in Base64 encoding
• Text in Base64 encoding
To facilitate use of the .lett files we provide a simple reader class in Python (see the sketch below). We make sure that the language id is reliable, at least for the documents in the train and test pairs.
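A minimal reader for this format might look as follows. This is a sketch written from the field list above, not the reader class distributed with the task; the Page tuple and function name are illustrative.

```python
import base64
from collections import namedtuple

# One record per line: language, mime type, encoding, URL,
# Base64-encoded HTML, Base64-encoded extracted text.
Page = namedtuple("Page", "lang mime encoding url html text")

def read_lett(path):
    # Yield one decoded Page per line of a .lett file.
    with open(path, encoding="utf-8") as f:
        for line in f:
            lang, mime, enc, url, html_b64, text_b64 = line.rstrip("\n").split("\t")
            yield Page(lang, mime, enc, url,
                       base64.b64decode(html_b64).decode("utf-8"),
                       base64.b64decode(text_b64).decode("utf-8"))
```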
Text extraction was performed using an HTML5 parser. As the original HTML pages are available, participants are welcome to implement their own text extraction, for example to remove boilerplate.
Additionally, we have identified spans of French text in French documents for which we produced English translations using MT. We use a basic Moses statistical machine translation engine (Koehn et al., 2007) trained on Europarl and News Commentary with decoding settings geared towards speed (no lexicalized reordering model, no additional language model, cube pruning with pop limit 500).
These translations are not part of the .lett files but are provided separately. The format for the source segments and target segments is:
URL<TAB>Text
where the same URL might occur multiple times if several lines/spans of French text were found. The URLs can be used to identify the corresponding documents in the .lett files.
Baseline Method
We provide a baseline system that relies on the URL matching heuristic used by Smith et al. (2013). Here two URLs are considered a pair if both can be transformed into the same string through stripping of language identifiers. Strings indicating languages are found by splitting a large number of randomly sampled URLs into components and manually picking substrings that correlate with the detected language.
We further improve the approach by allowing matches where only one URL contains a strippable language identifier, e.g. we match x.com/index.htm and x.com/fr_index.htm. If a URL has several matching candidates we pick the one that requires the fewest rewrites, i.e. we prefer the pair above over pairing x.com/en/index.htm with x.com/fr_index.htm.
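The stripping heuristic can be sketched as below. The real baseline uses a manually curated list of language-identifying substrings; the tiny LANG_MARKERS set here is a placeholder, and the component-splitting strategy is one plausible reading of the description above.

```python
import re

# Illustrative markers only; the organizers manually curated a much
# larger list from randomly sampled URLs.
LANG_MARKERS = {"en", "fr", "english", "french"}

def strip_language(url):
    # Split the URL into components (keeping separators) and drop any
    # component that is a language marker, plus the separator after it.
    parts = re.split(r"([/._-])", url)
    kept, rewrites, skip = [], 0, False
    for part in parts:
        if skip:            # separator that followed a dropped marker
            skip = False
            continue
        if part.lower() in LANG_MARKERS:
            rewrites += 1
            skip = True
            continue
        kept.append(part)
    return "".join(kept), rewrites

def url_match(url_a, url_b):
    # URLs pair up if they collapse to the same string after stripping;
    # the rewrite count lets us prefer pairs needing the fewest rewrites.
    s_a, r_a = strip_language(url_a)
    s_b, r_b = strip_language(url_b)
    return s_a == s_b, r_a + r_b
```

For example, strip_language("x.com/fr_index.htm") yields ("x.com/index.htm", 1), which matches x.com/index.htm with one rewrite, so that pair is preferred over a two-rewrite alternative.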
The baseline achieves roughly 60% recall, compared to 95.0% of the best submission.
Evaluation
Our main evaluation metric is recall of the known pairs, i.e. what percentage of the aligned pages in the test set are found. We strictly enforce the rule that every page may only be aligned once, so that participants cannot just align everything. After a URL has been seen as part of a submitted pair, all later occurrences are ignored.
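The scoring procedure can be sketched as follows, assuming a submission is an ordered list of (source URL, target URL) pairs; the function name and data layout are illustrative.

```python
def score_submission(predicted_pairs, gold_pairs):
    # Enforce the 1-1 rule: once a URL has appeared in an accepted pair,
    # every later pair containing it is ignored.
    gold = set(gold_pairs)
    seen, correct = set(), 0
    for src, tgt in predicted_pairs:
        if src in seen or tgt in seen:
            continue
        seen.add(src)
        seen.add(tgt)
        if (src, tgt) in gold:
            correct += 1
    return correct / len(gold)  # recall over the known pairs
```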
After we released the gold standard alignments, a number of participants pointed out that some predicted document pairs were unfairly counted as wrong, even if their content differed only insignificantly from the gold standard.
To give an example, the web pages www.taize.fr/fr_article10921.html?chooselang=1 and www.taize.fr/fr_article10921.html are almost identical, but the first offers a checkbox to select a language, while the second does not. Since the text on the pages differs slightly, these were not detected as (exact) duplicates.
To address this problem, we also included a soft scoring metric which counts such near-matches as correct. For a page to count as a close duplicate, the edit distance between the text of the two pages, normalized by the maximum of their lengths (in characters), must not exceed 5%.
If we observe a predicted pair (s, t) that is not in the gold set, but (s, t′) is and dist(t, t′) ≤ 5%, then this pair is still counted as correct. The same applies for a close duplicate s′ of s, but not both, as we still follow the 1-1 rule.
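The near-duplicate test can be sketched as follows, using a plain Levenshtein distance over page texts; for real page lengths one would use a faster implementation, but the normalization and the 5% threshold follow the definition above.

```python
def edit_distance(a, b):
    # Standard Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def is_near_duplicate(text_a, text_b, threshold=0.05):
    # Near-duplicate if edit distance, normalized by the longer text,
    # does not exceed the threshold (5% in the shared task).
    longest = max(len(text_a), len(text_b))
    if longest == 0:
        return True
    return edit_distance(text_a, text_b) / longest <= threshold
```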
Results
11 research groups participated in the shared task, some with multiple submissions. The list of participants is shown in Table 2, with a citation of their system descriptions, which are included in these conference proceedings. Each participant submitted one or more collections of document pairs. We enforced the 1-1 rule on the collections, and scored them against the gold standard. Results are summarized in Table 3. Almost all systems outperformed the baseline by a wide margin. The best system is NOVALINCS-URL-COVERAGE with 2,281 correct pairs, 95.0% of the total.
Note that the submissions varied in the number of document pairs, but after enforcing the 1-1 rule, most submissions comprise about 200,000-300,000 document pairs. Table 4 displays the results with soft scoring. Essentially, every system improved, mostly by around 3%. The top two performers swapped places, with YODA now having the best showing with 96.0%. We also experimented with a tighter threshold of 1% which gave almost identical results.
System Descriptions
NOVALINCS (Gomes and Pereira Lopes, 2016) submitted 3 systems that use a phrase table from a phrase-based statistical machine translation system to compute coverage scores, based on the ratio of phrase pairs covered by a document pair. In addition to the purely coverage-based system, NOVALINCS-COVERAGE (88.6%), they also submitted a system that uses coverage-based matching as a preference over URL matching, NOVALINCS-COVERAGE-URL (85.8%), and the converse system that prefers URL matching over coverage-based matching, NOVALINCS-URL-COVERAGE (95.0%).
YODA (Dara and Lin, 2016) submitted one system (93.9%) that uses the machine translation of the French document and finds the corresponding English document based on bigram and 5-gram matches, assisted by a heuristic based on document length ratio.
UEDIN1 (Buck and Koehn, 2016) submitted one system (89.1%) that uses cosine similarity between tf/idf weighted vectors, extracted by collecting n-grams from the English and machine translated French text. They compare many hyperparameters such as weighting schemes and two pair selection algorithms.
DOCAL (Azpeitia and Etchegoyhen, 2016) submitted one system (88.6%) that used word translation lexicons to compute document similarity scores based on bag-of-word representations. They expand a basic translation lexicon by adding all capitalized tokens, numbers, and longest common prefixes of known vocabulary items.
UEDIN2 (Germann, 2016) submitted 2 systems based on word vector space representations of documents using latent semantic indexing and URL matching, UEDIN LSI (85.8%) and UEDIN LSI (87.6%). In addition to a global cosine similarity score, a local similarity score is computed by re-centering the vector around the mean vector for a webdomain.
Papavassiliou et al. (2016) submitted one system (84.9%), which uses boilerplate removal and carries out document alignment based on features such as links to documents in the same webdomain, URLs, digits, image filenames, and HTML structure. Their paper also describes in detail the open source ILSP Focused Crawler.
YSDA (Shchukin et al., 2016) submitted one system (84.1%) that uses n-gram matches between the machine translation of the French document and the English document. They cluster French and English words into bilingual clusters of up to 90 words, starting with word pairs with high translation probability in both directions, and then adding words that translated well into existing words in a cluster.
UA PROMPSIT (Esplà-Gomis et al., 2016) submitted 2 systems based on Bitextor and describe improvements to the Bitextor toolkit. Their submissions contrast the old version of the tool, UA PROMPSIT BITEXTOR 4.1 (31.1%), with the recent release, UA PROMPSIT BITEXTOR 5.0 (83.3%). Improved document alignment quality is based on various new features: ratio of shared links, similarity of link URLs, ratio of shared images, binary feature indicating if the documents are linked, and similarity of URLs, in addition to the old features bag of words similarity using a translation dictionary and DOM structure similarity.
MEDVED (Medved et al., 2016) submitted one system (79.4%), which determines the top 100 keywords based on tf/idf scores for each document and uses word translation dictionaries to match them.
BADLUC (Jakubina and Langlais, 2016) submitted one system (79.3%) that uses the information retrieval tool Apache Lucene to create two indexes, on URLs and text content, and retrieves the most similar documents based on variants of tf/idf scores. Both monolingual queries and bilingual queries based on a word translation dictionary are performed.
ADAPT (Lohar et al., 2016) submitted one system (and a revision) that combines similarity metrics computed on ratio of number of sentences in documents, ratio of number of words in the documents, and matched named entities.
JIS (Mahata et al., 2016) submitted one system (2.0%), which uses text matching based on sentence alignment and word dictionaries. Their paper also describes improvements over the original submission.
"Computer Science",
"Linguistics"
] |
An advanced deep learning model for maneuver prediction in real-time systems using alarming-based hunting optimization
ABSTRACT
The increasing trend of autonomous driving vehicles in smart cities emphasizes the need for safe travel. However, the presence of obstacles, potholes, and complex road environments, such as poor illumination and occlusion, can cause blurred road images that may impact the accuracy of maneuver prediction in visual perception systems. To address these challenges, a novel ensemble model, named ABHO-based deep CNN-BiLSTM, has been proposed for traffic sign detection. This model combines a hybrid convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) with the alarming-based hunting optimization (ABHO) algorithm to improve maneuver prediction accuracy. Additionally, a modified Hough-enabled lane generative adversarial network (ABHO-based HoughGAN) has been proposed, which is designed to be robust to blurred images. The ABHO algorithm, inspired by the defending and social characteristics of starling birds and Canis latrans, allows the model to efficiently search for the optimal solution from the available solutions in the search space. The proposed ensemble model has shown significantly improved accuracy, sensitivity, and specificity in maneuver prediction compared to previously utilized methods, with minimal error during lane detection. Overall, the proposed ensemble model addresses the challenges faced by autonomous driving vehicles in complex and obstructed road environments, offering a promising solution for enhancing safety and reliability in smart cities.
Introduction
In recent years, autonomous driving has emerged as one of the most desirable research areas in the artificial intelligence (AI) community. Vehicles can now operate automatically to carry out routine driving activities effectively and safely [1]. The four functional modules that make up autonomous vehicles' core components are environment sensing, decision-making, motion planning, and motion control [2]. The decision-making and motion planning modules, which link environment sensing with motion control, constitute the autonomous vehicle's "brain" and are considered to be of the utmost importance [3], [4]. The lane-changing choice is an important component of the research in this area, and the driving decision-making system is the key technology for maintaining the driving safety of AVs [5]-[7]. Making decisions in unpredictable and dynamic traffic settings is one of the difficulties in achieving full automation of driving [8]. Making decisions involves coming up with a series of motion behaviors to carry out certain tasks, like merging into a crowded lane, navigating an unguarded crossroads, and overtaking with ease on a highway [1], [9].
Traditional control methods, such as the constant spacing (CS) policy, constant time headway (CTH) policy, and sliding mode control (SMC) [17]-[19], have a poor ability to adapt to the environment and are unable to make precise and efficient decisions across a variety of complex driving situations, particularly in situations where CAVs and conventional driver-controlled vehicles coexist [14]. Graph search, probabilistic sampling, potential field, approximation curve, and mathematical optimization approaches are the five primary groups among the many algorithms that have been researched for path planning [3]. The most popular and useful path planning algorithm is graph search, which performs well in terms of avoiding collisions; however, the coarseness of the grid frequently affects its ideal course. A typical sampling approach that effectively searches for the best path while taking non-holonomic restrictions into account is rapidly exploring random trees (RRT) [20]. However, RRT still needs to strengthen its safety and the fineness of its planned course [4]. One drawback of these techniques is that some motion parameters frequently employed in path planning are non-linear and non-convex, which might lead to an NP-hard (non-deterministic polynomial-time hard) problem [11], [21].
In this research, traffic sign detection and maneuver-prediction-based vehicle control are used to control autonomous driving cars. Social behaviors, such as driving patterns and targets of the vehicles in the immediate surrounding area, are taken into account. The hybridized algorithm is utilized for both traffic sign detection and maneuver prediction, motivated by advanced algorithms for AV control and decision-making. The modified Hough-enabled Lane GAN model is used to accurately segment the surrounding driving area in the input image for maneuver prediction, while the ensembled CNN-BiLSTM classifier is then applied to predict the traffic sign and make accurate decisions for autonomous driving. In addition, the alarming-based hunting optimization (ABHO) algorithm is implemented in both the modified Hough-enabled Lane GAN and the ensembled CNN-BiLSTM classifier for lane detection and traffic sign prediction. The shared weights in the lane detection technique and the tunable parameters in the traffic sign prediction technique are controlled by the ABHO algorithm.
Related works and Challenges
The implementation of autonomous vehicles (AVs) is expected to have a considerable positive impact on traffic safety by reducing the number of accidents by up to 94%. However, AV accidents can still occur due to various unforeseen environmental obstacles, such as human-driven vehicles, bicycles, animals, and pedestrians. Even fully autonomous cars cannot guarantee being completely crash-free under these circumstances. As a result, ethical concerns arise when dealing with such challenges, particularly when human lives are at stake. This section provides an overview of traditional approaches to decision-making-based autonomous vehicles, including path selection and braking, as well as their benefits and challenges.
In this section, the autonomous vehicle decision-making process using various strategies is reviewed. An effective fuzzy CoCoSo approach built on the logarithmic method and the Power Heronian function was created by [22] to address the problem of additional benefit selection in vehicular management techniques. Three primary stages make up the suggested MCDM paradigm. The MCDM's inputs, such as criteria, options, and experts, are chosen in the first step. The logarithmic technique is used to determine the optimal parameters in the second step. The final stage ranks the options according to the Power Heronian function. The efficacy of the suggested fuzzy LM PH'CoCoSo methodology was demonstrated; however, the technical intricacy of the fuzzy WPHA and fuzzy WGPHA functions used to evaluate the computational technique can be a constraint. A unique integrated approach [4] addressed the decision-making and mobility control for traffic movements of an autonomous vehicle (AV), taking into account the human behaviors of other traffic users. When making decisions and predicting the course of an autonomous vehicle, Stackelberg game theory and Model Predictive Control (MPC) are both employed. The ability of this agile solution to handle various social interactions with other vehicle drivers demonstrates its viability and efficacy. Only velocity and acceleration behaviors are taken into account for obstacle vehicles, because the lane-change behaviors of these vehicles are not part of the high-speed driving situation. An automated, safe, and effective decision-making paradigm for AVs driving at junctions was put forth by [23]. To find the best navigation rule in terms of security and protection, the deep Q-network method was used. The suggested approach might aid in developing the decision-making component of AVs to improve travel convenience and traffic flow. One shortcoming of this study is that the larger standard deviations meant that driving comfort was reduced. In an environment of rapid change, [14] presented an autonomous braking decision-making technique that chooses the best course of action using deep reinforcement learning (DRL). Once the strategy has been trained correctly, the automobile can proactively adopt the best braking behavior in an urgent situation to increase driving safety. To execute high-level control techniques that coordinate CAVs in typical circumstances, multi-agent reinforcement learning is necessary.
A unique LC decision (LCD) model is presented by [7] that enables autonomous cars to make judgments similar to those made by humans. This approach integrates the XGBoost algorithm with a deep autoencoder (DAE) network. The presented method is currently only applicable to the traditional LC decision-making mechanism in straight or curved lanes on motorways due to the complexity and instability of regular traffic. A predictive control paradigm for moral judgments in driverless vehicles is put forth by [24] using the principles of rational ethics. The author proposes the use of powerful AI tools and reasonable procedures to develop ethical guidelines for autonomous vehicles. One such approach is the Lexicographic Optimization-based Model Predictive Controller (LO-MPC), which prioritizes barriers and restrictions to ensure the flexible application of ethical principles. To address lane change decision-making, the author of [11] proposes a risk-aware driving decision strategy using deep reinforcement learning's Risk Awareness Prioritized Replay Deep Q-Network (RA-PRDQN). This approach aims to identify a sequence of actions that minimizes risk and prevents accidents with the host car in congested environments with both static and dynamic obstacles. The sample selection probability function can be improved by considering vehicle location sets and incorporating stopping behavior for speed regulation using deep reinforcement learning. Another decision-making system [1] prioritizes the safe and effective operation of autonomous vehicles. The author presents a simulation of passing driving situations and defines standard methods such as the intelligent driver model and minimization of overall braking caused by merging traffic. For highway overtaking, the author proposes using the Dyna-H algorithm, which combines a modified Q-learning algorithm with a heuristic planning approach. Overall, these approaches aim to develop safe and ethical decision-making systems for autonomous vehicles. To develop online decision-making techniques for autonomous vehicles, deep learning and enhanced RL algorithms must also be combined. The proposed model achieved high accuracy in detecting traffic signs and predicting lanes compared to the existing techniques [25]. The research aims to enhance the decision-making capabilities of autonomous vehicles to ensure safety, energy efficiency, and mobility. The author of [23] efficiently ranked the agents according to their importance in making decisions, using a CNN network that effectively learned the features and obtained the domain knowledge. The decision-making system acts as the central nerve of driverless vehicles and is important for their safe and effective operation [26]. Considering the surrounding environment, the motion of other cars, and the state of the ego vehicle, decision-making is intended to develop reasonable and safe driving characteristics at the human level [27].
The challenges considered during the development of effective decision-making for autonomous vehicles are as follows: • In the decision-making process for motion planning, Stackelberg game-theoretic optimization and Model Predictive Control (MPC)-based optimization are used to determine the optimal course of action, which is then executed within predefined limits. However, if these limits are too narrow, the motion planner may struggle to find viable alternatives. On the other hand, setting the boundaries too broadly can significantly increase the computational complexity of position control [4].
• Therefore, it is crucial to strike a balance between setting limits that are too narrow and setting them too broadly.
This will ensure that the motion planner can find feasible solutions within a reasonable timeframe. Achieving the optimal direction of flow within the expected timeframe is a complex task that requires careful consideration and balancing of various factors [4].
• However, using a fuzzy control system on a vehicle has the drawback of requiring a level of understanding to define fuzzy rules and similarity measures. The choice of the membership function is where a fuzzy logic-based control technique becomes challenging. Bandwidth is significantly impacted by the settings for the membership function and fuzzy word set [14].
• The fact that various motion requirements employed in motion planning are frequently non-linear and non-convex poses a drawback for the risk-awareness prioritized replay deep Q-network technique [11]. This may result in the NP-hard (non-deterministic polynomial-time hard) problem.
• One major drawback of probabilistic-based techniques is that they solely use specialized information to provide rule-based action, failing to make the right decisions in disruptive environments and ignoring the learning aspects of the human drivers in navigation [11].
• The vanishing gradient experienced during training presents the biggest difficulty in using simple RNNs. The gradient signal can end up being multiplied as many times as there are time steps. When dealing with sequence data, a standard Recurrent Neural Network (RNN) may not be suitable for capturing long-term dependencies. This is because, in a deep or extended sequence analysis, the gradient of the network's output may struggle to influence the weights of the preceding layers. As a result, it becomes challenging for the network to capture long-term dependencies in the sequence data. Under vanishing gradients, the network's weights are not properly updated, leading to very low weight values [28].
Method
In this section, the decision-making system for autonomous driving cars using deep models is discussed. Autonomous vehicles require a strong decision controller to support a safe driving experience in smart cities, for which the road video dataset is acquired. On the video frames, traffic sign detection and maneuver prediction are performed using the modified CNN-BiLSTM classifier and the ABHO-Hough GAN model. The ABHO algorithm is designed for training the classifier parameters to support accurate prediction. The ABHO algorithm is developed by integrating the hunting characteristics of Canis latrans with the leadership hierarchy and alarming nature of starling birds. On the other hand, the pre-processed video data is fed forward to the ABHO-Hough GAN model, which is tuned by ABHO and has shown good image enhancement and image restoration capabilities. The ABHO-Hough GAN model can update its performance based on the optimization algorithm and provides effective maneuver detection. Fig. 1(a) and Fig. 1(b) show the illustration of intelligent transportation using maneuver prediction.
Road vehicle video database
The road vehicle video database [4] is utilized in this research for traffic sign detection and maneuver prediction as the initial step, which is expressed as $D = \{D_1, D_2, \ldots, D_d\}$, where the utilized road vehicle database is denoted as $D$ and the total number of available videos in the database is $d$, so that the video index ranges from 0 to $d$. Each video from the database is supposed to hold $F_f$ video frames, and to ensure an accurate support system, frame-wise processing is enabled.
Traffic sign detection using Optimized CNN-BiLSTM classifier
The video frames are acquired from the road video, on which traffic sign detection is performed through the designed ABHO-Ensembled model. Fig. 2 shows the ABHO-Ensembled model with three CNN layers (a convolution layer, leaky ReLU, and MaxPooling), which make up the ensemble model's initial channel. The deep CNN holds a filter size of 264. Initially, the frame is processed using the CNN structure of the first channel to extract spatial characteristics, but the depth of time features extracted from the raw high-dimensional data is insufficient. To finish the extraction of the data time-series features and extract the long-term dependencies between the data features, the BiLSTM structure is employed. While the model is being trained, the BiLSTM structure can prevent gradient disappearance and gradient explosion. After reshaping the output from the CNN, the BiLSTM applies dropout, and the result is fed to the dense layer for efficient detection of the traffic sign in each frame.
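A sketch of such a CNN-BiLSTM in Keras is shown below. The paper's exact filter, kernel, and BiLSTM dimensions are not recoverable from the text, so the input size, layer widths, and class count here are assumptions chosen only to make the channel structure (convolution, leaky ReLU, max pooling, reshape, BiLSTM, dropout, dense) concrete.

```python
from tensorflow.keras import layers, models

def build_cnn_bilstm(input_shape=(64, 64, 3), num_classes=43):
    # Hypothetical input size and class count; illustrative defaults only.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Spatial features: three convolution / LeakyReLU / pooling blocks.
        layers.Conv2D(32, 3, padding="same"),
        layers.LeakyReLU(),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same"),
        layers.LeakyReLU(),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, padding="same"),
        layers.LeakyReLU(),
        layers.MaxPooling2D(),
        # Treat the 8x8 feature map as a 64-step sequence for the BiLSTM.
        layers.Reshape((64, 128)),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```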
Algorithm for tuning the Ensemble model
Traffic sign detection is a basic need for ensuring safe driving using autonomous cars, and accurate detection depends on the ensemble fusion parameters, which are decided optimally using ABHO. ABHO is proposed by employing the exceptional behavior of Canis latrans [29] and the starling bird [30] for the observation of traffic signs in the road video. The ability of Canis latrans lies in the effective balance between the exploitation and exploration stages. The social grading system of the guiding beta and a lack of emphasis on following dominant norms are what distinguish the Coyote's algorithmic behavior. The ABHO approach places more emphasis on social interactions and opinion-sharing during the hunting process. Certain common issues, such as excessive processing time and inadequate searching potential in the Canis latrans performance, are resolved through characteristics such as the scaring and defending behaviors of the starling bird. The combined behavior enhances the resilience and power of the suggested ABH optimization, leading to good performance. The starling bird is highly intelligent and has a good memory compared to many other small birds. The technique can be applied to global optimization problems due to its simplicity, scalability, and high performance. The performance of the starling bird approach has been evaluated on optimization problems from popular engineering applications.
1) Inspiration
The population-based approach suggested here builds on the Coyote algorithm, which is inspired by the Canis latrans species (in contrast to the Canis lupus genus) and serves as both an ecological and swarm intelligence criterion. Even though the Canis latrans is used as the pack leader, the social hierarchy and dominant standards of these species are disregarded by its unique mathematical structure configuration. Furthermore, unlike grey wolf hunting, Canis latrans hunting emphasizes the social structure and shared experiences of the pack as a whole rather than only hunting prey. By considering the social organization of the Canis latrans and their environmental adaptation, the suggested method offers a unique mathematical model in comparison to other metaheuristics. It also provides novel techniques to balance the exploration and exploitation phases of the optimization process. The Canis latrans's behavior has been linked to both intrinsic (such as gender, social standing, and pack membership) and extrinsic (like snowfall height, snowpack severity, climate, and corpse weight) factors. As a result, the alarming-based hunting mechanism was proposed based on the social settings of the Canis latrans and starling birds.
2) Mathematical modeling of alarming-based hunting optimization
The three most significant phases in the ABHO algorithm are initialization, fitness evaluation of the population, and establishing a ranking depending on the measured fitness.
• Initialization: According to the social environment, the worldwide population of Canis latrans together with the starling bird population is randomly generated, which can be expressed as $P = \{P_1, P_2, \ldots, P_x\}$, where $x$ is the total population in an attained cluster and the candidate solutions are denoted as $P$. The Canis latrans are dispersed randomly across the population; therefore, they may decide to separate from the group and become solitary instead of joining it. There is a maximum capacity of Canis latrans that may be separated from the group out of the total population. • Choose follower position: The regulations must be upheld by the followers, who should also act as producers by behaving like the starling bird with the highest energy. Many hungry followers are more prone to fly to different locations in search of food to increase their energy. Followers look for food by following the producer who can offer the best food. While waiting for food, some followers may constantly watch the producers and struggle for food to increase their predation rate. Some followers keep a closer eye on the producers, as already mentioned. They quickly leave their present position to struggle for food as soon as they learn that the producer has acquired good food. If they succeed, they can instantly receive the producer's food; if not, the regulations are still followed. The follower's role-updating formula is described as follows.
where the matrices are represented as $B$ and $W$, and the velocity of a producer in approaching the food and staying away from enemies is denoted as $V$.
• Choose remaining followers: The starling bird in the center of the group randomly walks to stay close to the others, whereas the starling bird at the group's edge swiftly moves into the safe region to gain a better position when aware of danger. The mathematical model can be expressed as follows, where the worst and the global fitness values are denoted as $F_{worst}$ and $F_{glo}$, respectively. The Canis latrans share other groups' perspectives and methods for moving from one location to another, yet they lack these traits when hunting and adapting to new social conditions. Therefore, integrating the defending characteristics of the starling bird prevents the Canis latrans from falling into a local optimum; a fallback solution is needed so that the algorithm can avoid getting trapped there. Incorporating such an integrating operation to increase the algorithm's ability to avoid local optima is the most popular remedy for this problem. In this work, the optimization is improved by integrating the social characteristics of Canis latrans with those of the starling bird. By incorporating the defending behavior while renovating the social state during opinion sharing, the ABHO optimization achieves greater flexibility, quick resolution, and highly consistent findings. In this way, the effectiveness of the ABHO optimization is improved and the classifier's hyperparameters are fine-tuned for improved vehicle control. Fig. 3 shows the proposed ABHO pseudocode; this system adjusts the positions of the starling birds and reduces energy waste in random movement to reach ideal solutions within the fewest iterations.
(Fig. 3: proposed ABHO pseudocode. The listing evaluates fitness, updates each position based on equation (7), ranks the population, and updates the ranked groups, repeating until the while-loop terminates.)
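Since only the loop skeleton of the pseudocode is given above, the sketch below illustrates that generic structure: fitness evaluation, follower movement toward the best candidate (the producer), an occasional alarmed random relocation that models the starlings' defending behavior, and greedy replacement. All rates, bounds, and names are illustrative and are not the paper's update equations.

```python
import numpy as np

def abho_sketch(fitness, dim, pop_size=30, iters=100, bounds=(-1.0, 1.0)):
    # Generic population loop mirroring the ABHO pseudocode: evaluate
    # fitness, move each candidate toward the current best (the producer),
    # with a random "alarmed" jump that helps escape local optima.
    low, high = bounds
    pop = np.random.uniform(low, high, (pop_size, dim))
    fit = np.apply_along_axis(fitness, 1, pop)
    for _ in range(iters):
        best = pop[np.argmin(fit)]
        for i in range(pop_size):
            if np.random.rand() < 0.1:   # alarmed: random relocation
                cand = np.random.uniform(low, high, dim)
            else:                        # follower: step toward the producer
                cand = pop[i] + np.random.rand(dim) * (best - pop[i])
            cand = np.clip(cand, low, high)
            f = fitness(cand)
            if f < fit[i]:               # greedy replacement
                pop[i], fit[i] = cand, f
    return pop[np.argmin(fit)], fit.min()

# Example: minimize a sphere function in 5 dimensions.
best_x, best_f = abho_sketch(lambda x: float(np.sum(x ** 2)), dim=5)
```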
Once the traffic sign on the traversed road is detected, lane segmentation is processed using the ABHO-Hough GAN model for maneuver prediction. Thus, both lane segmentation and maneuver prediction are performed in order to control the autonomous vehicle.
Modified Hough-enabled optimized generative adversarial network for lane segmentation and maneuver prediction
Once traffic sign detection is accomplished, lane segmentation and maneuver prediction are guided by the ABHO-Hough GAN model, which comprises a generator as well as a discriminator. In this research, an ABHO-Hough GAN model is developed for the background subtraction of driving scenes, where the lanes are determined by a discriminator using shared weights and evaluated by a generator depending on the input road vehicle data. The ABHO-Hough GAN model is a remarkable tool for identifying shapes and curves in the road vehicle video images. To determine the particular location or obtain geometrical details of the vehicle, it is used to detect loops, ellipses, and lines. The Hough transform is a great tool for recognizing lane lines for self-driving automobiles in the target area, and the actual benefit of this model is that it predicts lanes that are precise and narrow rather than the broad, flexible boundaries that CNNs typically introduce. The Hough lane transform recognizes lanes in multiple continuous frames as opposed to only the current frame, which differs from the aforementioned deep-learning-based methods that only detect lanes in a single frame and treat it as a time-based issue. The proposed technique can provide robust performance in lane detection under difficult circumstances with more detailed information. Using the sign detection and lane segmentation outputs, maneuver detection proceeds using the optimized GAN. In Fig. 4, the Hough-lane-enabled optimized GAN model is presented, where lane segmentation is done using the optimized GAN model and the Hough lane transform supports maneuver prediction, with the ABHO algorithm guiding the segmentation model to acquire an accurate prediction. The detailed sketch of the ABHO algorithm is presented in section 3.2.1.
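For the classical Hough component alone, a minimal OpenCV sketch is given below; it performs edge detection and probabilistic Hough line extraction on the lower half of a frame. The GAN segmentation and ABHO tuning stages are not reproduced, and all thresholds are illustrative defaults.

```python
import cv2
import numpy as np

def detect_lane_lines(frame):
    # Classical Hough-transform lane detection on a single BGR video frame;
    # returns a list of (x1, y1, x2, y2) line segments.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    # Keep only the lower half of the frame, where lanes normally appear.
    mask = np.zeros_like(edges)
    mask[edges.shape[0] // 2:, :] = 255
    edges = cv2.bitwise_and(edges, mask)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [line[0] for line in lines]
```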
Results and Discussion
In this section, the reliability of the ABHO-Hough GAN for maneuver prediction and the ABHO-tuned CNN-BiLSTM for traffic sign detection is demonstrated based on performance across various epochs. A comparative analysis is implemented to show the better efficiency of the proposed model in the research area of autonomous vehicles. The implementation of both lane prediction and traffic sign detection is done in Python on a Windows 10 OS with 8 GB RAM, and the road vehicle video dataset is used for evaluation.
The road dataset comprises 1171 aerial images. Every aerial image covers 2.25 square kilometers at 1500 × 1500 pixels. The data was divided randomly into three sets: an 1108-image training set, a 14-image validation set, and a 49-image test set. The dataset contains a large number of urban, suburban, and rural districts spanning 2600 square kilometers. To assess real-time decision-making, the test data alone covers more than 110 square kilometers. The experimental validation of the approach is visualized in Fig. 5.
The performance metrics used for traffic sign detection with the ABHO-tuned CNN-BiLSTM are explained as follows.
• Accuracy: The percentage of samples that the ABHO-based CNN-BiLSTM properly identifies while determining the autonomous vehicle's decision-making system is known as accuracy, and it is given by $\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$.
• Sensitivity: The true positive outcome of the result from the ABHO-based CNN-BiLSTM when decision-making occurs on the autonomous vehicle describes the sensitivity in terms of probability, and it is given as $\mathrm{Sensitivity} = \frac{TP}{TP + FN}$.
• Specificity: The true negative outcome of the result from the ABHO-based CNN-BiLSTM when decision-making occurs on the autonomous vehicle describes the specificity in terms of probability, and it is given as $\mathrm{Specificity} = \frac{TN}{TN + FP}$.
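These three metrics can be computed directly from confusion-matrix counts, as in this small sketch (variable names are ours):

```python
def classification_metrics(tp, tn, fp, fn):
    # Accuracy, sensitivity, and specificity from confusion-matrix counts.
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity
```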
The performance metrics used for lane detection with the ABHO-Hough GAN are explained as follows.
• Mean Absolute Error: The difference between the magnitude of an individual measurement and the true value for the ABHO-based Hough GAN, when identifying the lane prediction on an autonomous vehicle, is defined as the Mean Absolute Error (MAE), given as $\mathrm{MAE} = \frac{1}{v}\sum_{j=1}^{v} \lvert q_j - \hat{q}_j \rvert$, where the total number of samples is represented as $v$ and $\lvert q_j - \hat{q}_j \rvert$ denotes the absolute error.
• Mean Square Error: The error quantity in the statistical model, i.e. the difference between the experimental and predicted values from the ABHO-based Hough GAN in the decision-making function processed for lane prediction on the autonomous vehicle, is estimated in terms of the Mean Square Error (MSE), given as $\mathrm{MSE} = \frac{1}{r}\sum_{j=1}^{r} (g_j - \hat{g}_j)^2$, where the number of available data points is denoted as $r$, $g_j$ describes the prediction, and $\hat{g}_j$ represents the observed value.
• Root Mean Squared Error: The ABHO-based Hough GAN in an autonomous vehicle is evaluated for lane prediction in terms of the square root of the mean squared error, described as the root mean squared error (RMSE) and given as $\mathrm{RMSE} = \sqrt{\frac{1}{Z}\sum_{j=1}^{Z} (R_j - Q_j)^2}$, where the observed sample is denoted as $R_j$ and the predicted sample is represented as $Q_j$, with $Z$ observations.
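The three error metrics can be computed together as in the sketch below, assuming observed and predicted lane values are given as numeric arrays; names are illustrative.

```python
import numpy as np

def regression_errors(observed, predicted):
    # MAE, MSE, and RMSE between observed and predicted lane values.
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    errors = observed - predicted
    mae = np.mean(np.abs(errors))
    mse = np.mean(errors ** 2)
    rmse = np.sqrt(mse)
    return mae, mse, rmse
```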
The performance of maneuver prediction and traffic sign detection using the ABHO-based Hough GAN and the ABHO-based deep CNN-BiLSTM is described in the following section.
Maneuver prediction and traffic sign detection analysis
The error-based values MAE, MSE, and RMSE for the ABHO-based Hough GAN for lane prediction are represented in Fig. 6; Fig. 6(a) plots the MAE against the training percentage for each epoch value. Accurate and reliable traffic sign recognition is crucial for self-driving vehicles to make informed decisions and avoid accidents. With better prediction accuracy, automated driving systems can respond more quickly and appropriately to traffic signs, such as speed limit signs, stop signs, and yield signs, leading to improved safety for passengers and other road users. Additionally, accurate traffic sign recognition can help optimize vehicle speed and reduce energy consumption, leading to improved efficiency and reduced emissions. Overall, the practical implications of improved traffic sign prediction accuracy are numerous and essential for the successful implementation of automated driving systems in real-world environments.
Comparison of maneuver prediction models
For evaluating the errors MAE, MSE, and RMSE, the proposed ABHO-based Hough GAN is compared with the other existing methods, as represented in Fig. 8. Fig. 8(a) shows the MAE for both the proposed and the existing methods depending on the percentage of trained data. When the amount of training data is 90%, the error rate of the proposed method is 2.149; the attained improvement in MAE for the ABHO-based Hough GAN is 64.39% when compared with the existing GAN with SSO model. Fig. 8(b) shows the MSE for both the proposed and the existing methods depending on the percentage of trained data. When the amount of training data is 80%, the error rate of the proposed method is 4.348; the attained improvement in MSE for the ABHO-based Hough GAN is 54.82% when compared with the existing GAN with SSO model. Fig. 8(c) shows the RMSE for both the proposed and the existing methods depending on the percentage of trained data. When the amount of training data is 90%, the error rate of the proposed method is 6.027; the attained improvement in RMSE for the ABHO-based Hough GAN is 44.03% when compared with the existing GAN with SSO model. For evaluating the performance measures accuracy, sensitivity, and specificity, the proposed ABHO-based deep CNN-BiLSTM is compared with the other existing methods, based on the percentage of trained data. When the amount of training data is 90%, the accuracy rate of the proposed method is 99.703%; the attained improvement in accuracy for the ABHO-based deep CNN-BiLSTM is 2.20% when compared with the existing deep CNN-BiLSTM with the SSA model. When the amount of training data is 90%, the sensitivity rate of the proposed method is 99.200%; the attained improvement in sensitivity for the ABHO-based deep CNN-BiLSTM is 1.47% when compared with the existing deep CNN-BiLSTM with the SSA model. When the amount of training data is 90%, the specificity rate of the proposed method is 99.839%; the attained improvement in specificity for the ABHO-based deep CNN-BiLSTM is 4.11% when compared with the existing deep CNN-BiLSTM with the SSA model.
The performance of the ABHO-based maneuver prediction and traffic sign detection approaches is presented in Table 1 and Table 2. The proposed model outperforms other methods in terms of accuracy, sensitivity, and specificity for both traffic sign detection and maneuver prediction, which is crucial for effective decision-making and control of autonomous vehicles. The proposed maneuver prediction model also exhibits lower mean absolute error, mean square error, and root mean square error compared to existing models.
Conclusion
This research proposes an efficient and precise autonomous decision-making technique for AVs to promptly exit hazardous situations. The approach involves developing traffic sign detection and maneuver prediction models using ABHO-based deep CNN-BiLSTM and ABHO-based Hough GAN techniques. The modified Hough-enabled lane GAN is responsible for accurately segmenting the driving area from the input image based on shared weights to facilitate decision-making. The ensemble CNN-BiLSTM classifier is used to anticipate traffic signs and assist the driver in making informed decisions.
"Computer Science",
"Engineering"
] |
Feature selection and risk prediction for diabetic patients with ketoacidosis based on MIMIC-IV
Background Diabetic ketoacidosis (DKA) is a frequent acute complication of diabetes mellitus (DM). It develops quickly, produces severe symptoms, and greatly affects the lives and health of individuals with DM. This article utilizes machine learning methods to examine the baseline characteristics that significantly contribute to the development of DKA. Its goal is to identify and prevent DKA in a targeted and early manner. Methods This study selected 2379 eligible diabetic patients from the MIMIC-IV dataset, including 1193 DM patients with ketoacidosis and 1186 DM patients without ketoacidosis. A total of 42 baseline characteristics were included in this research. The research process was as follows: first, important features were selected through Spearman correlation analysis and random forest to identify the physiological indicators associated with DKA. Next, logistic regression was used to individually predict DKA from each of the 42 baseline characteristics, analyzing the impact of different physiological indicators on the experimental results. Finally, the prediction of ketoacidosis was performed by combining feature selection with machine learning models including logistic regression, XGBoost, decision tree, random forest, support vector machine, and k-nearest neighbors classifier. Results Based on the importance analysis conducted using different feature selection methods, the top five features in terms of importance were identified as mean hematocrit (haematocrit_mean), mean hemoglobin (haemoglobin_mean), mean anion gap (aniongap_mean), age, and Charlson comorbidity index (charlson_comorbidity_index). These features were found to have significant relevance in predicting DKA. In the individual prediction using logistic regression, these five features proved effective, with F1 scores of 1.000 for haematocrit_mean, 0.978 for haemoglobin_mean, 0.747 for age, 0.692 for aniongap_mean, and 0.666 for charlson_comorbidity_index. These F1 scores indicate the effectiveness of each feature in predicting DKA, with the highest score achieved by mean hematocrit. In the prediction of DKA using machine learning models, logistic regression, XGBoost, decision tree, and random forest demonstrated excellent results, achieving an F1 score of 1.000. Additionally, by applying feature selection techniques, noticeable improvements were observed in the experimental performance of the support vector machine and k-nearest neighbors classifier. Conclusion The study found that hematocrit, hemoglobin, anion gap, age, and the Charlson comorbidity index are closely associated with ketoacidosis. In clinical practice, these five baseline characteristics should be given special attention to achieve early detection and treatment, thus reducing the incidence of the disease.
Introduction
Diabetic ketoacidosis (DKA) is a potentially life-threatening metabolic complication associated with diabetes mellitus (DM). DKA is characterized by a severe lack of insulin and increased levels of counter-regulatory hormones, which can cause the accumulation of ketones in the body. If not promptly diagnosed and treated, DKA can lead to serious complications and even death. Therefore, it is critical to closely monitor DM and take appropriate measures to prevent DKA from developing or to swiftly manage it (1). DKA can develop rapidly, often taking place within 24 hours (2). It can even occur earlier in patients treated with short-acting insulin, such as Humalog, with metabolic changes potentially occurring 1.5 to 2 hours sooner (3). Infection is a frequent precipitating factor for DKA worldwide and accounts for approximately 30-50% of DKA cases. Among potential infections, urinary tract infections and pneumonia are among those most commonly associated with DKA. Other factors that can trigger DKA include concurrent health conditions such as surgical procedures, trauma, myocardial ischemia, and pancreatitis. Psychological stress and medication non-compliance, particularly with insulin therapy, can also contribute to the development of DKA (4).
One of the main triggers for DKA is insufficient insulin. In the absence of adequate insulin, blood glucose levels rise, leading to increased breakdown of triglycerides in adipose tissue and the release of a large amount of free fatty acids. More free fatty acids enter the liver, increasing hepatic gluconeogenesis and releasing more glucose into the bloodstream. In an environment of high blood glucose and insufficient insulin, the liver begins to excessively produce ketone bodies, including beta-hydroxybutyric acid, acetoacetate, and acetone. The accumulation of ketone bodies in the blood increases blood acidity, ultimately leading to ketoacidosis. Ketoacidosis is one of the most significant physiological effects of DKA: excessive ketone bodies increase blood acidity, affecting the acid-base balance and potentially leading to an acidotic state. Due to the increased urine output caused by high blood glucose and ketoacidosis, patients may experience severe dehydration. This can lead to electrolyte imbalances, reduced blood volume, and blood concentration. Dehydration and hyperglycemia may result in disturbances of sodium, potassium, and other electrolytes, potentially triggering arrhythmias and other severe physiological problems. DKA can negatively impact multiple organs, including the heart, kidneys, and nervous system. Recent progress in medical technology has led to significant advances in treatment options for DM. However, despite these developments, the incidence and mortality rates associated with DKA remain high. As the global prevalence of DM continues to rise, the incidence of DKA is also increasing year by year (5). A study involving 28,770 individuals under the age of 20 with DM found that 94% of participants did not experience DKA, 5% had a single episode of DKA, and 1% had at least two episodes of DKA (6). The mortality rate for DKA varies between 1% and 5%, with the highest mortality rates typically observed among elderly individuals and those with complications related to their diabetes (7). It is worth noting that cerebral edema, a complication that can occur as a result of DKA, is the leading cause of death among individuals under the age of 24 with DM (8).
Research has shown that there are 100,000 hospitalizations for DKA in the United States every year, accounting for 4-9% of all discharge records of diabetic patients (4). The treatment of DKA requires a significant amount of healthcare resources; in adult type 1 diabetes patients in the United States, direct medical care costs account for a quarter of total expenses (9). Effective control and prevention of DKA are therefore paramount in reducing healthcare costs. The emergence of computer technology has opened up new avenues for utilizing machine learning techniques to support doctors in disease diagnosis. By leveraging these technologies, healthcare professionals can potentially enhance their diagnostic accuracy and efficiency, leading to improved patient care and cost-effectiveness. Furthermore, given the high risk and poor prognosis associated with DKA, the development of a risk prediction model specifically for this condition is of great importance. Such a model can aid in identifying patients who are at higher risk of experiencing DKA, allowing for targeted interventions and preventive measures. By implementing a risk prediction model, healthcare providers can potentially reduce the incidence of DKA episodes, improve patient outcomes, and mitigate the economic burden on both the healthcare system and patients (10).
This study combines the existing public dataset MIMIC-IV with machine learning techniques for healthcare analysis. By employing feature selection methods (random forest and Spearman correlation analysis), the baseline characteristics are screened to identify five characteristics highly correlated with DKA. Abnormalities in these five highly correlated baseline characteristics can provide early warning in the early stages of the disease, assisting clinicians in clinical diagnosis, enabling more effective treatment plans, and reducing the incidence of the disease and patients' suffering. Meanwhile, this study utilizes six machine learning methods to establish a DKA risk prediction model: logistic regression, XGBoost, decision tree, random forest, support vector machine, and k-nearest neighbors classifier. Experimental results demonstrate the effectiveness of feature selection, as the five optimized baseline characteristics can accurately predict the risk of DKA. The research process of this paper is depicted in Figure 1.
Dataset
The MIMIC dataset was established in 2003 with the support of the National Institutes of Health in the United States. It was jointly created by the MIT Laboratory for Computational Physiology, the Beth Israel Deaconess Medical Center (BIDMC) affiliated with Harvard Medical School, and Philips Healthcare (10). The dataset utilized in this study is known as the 'Medical Information Mart for Intensive Care IV' (MIMIC-IV). It encompasses a wide range of data, including demographic information, disease diagnoses, vital signs, laboratory tests, treatment details, survival status, and other comprehensive clinical records. Compared to its predecessor, MIMIC-III, the scope of the MIMIC-IV dataset has been extended to cover the period from 2008 to 2019, providing a broader range of data for analysis and research.
Participant selection criteria
In this study, a total of 2379 patients were chosen from the MIMIC-IV dataset. Among them, 1193 patients had DKA and 1186 patients had DM without ketosis. The participants were required to meet the following criteria:
Selection of indicators and data preprocessing
This study excluded baseline characteristics with more than 30% missing data in MIMIC-IV, such as C-reactive protein, procalcitonin, height, and serum albumin. Structured Query Language (SQL) was used to extract the data of DKA patients from MIMIC-IV. The baseline characteristics selected in this study included demographic features, vital signs, laboratory indicators, comorbidity indicators, and scoring system indicators. Demographic features included gender, age, weight, and ethnicity. Vital signs included heart rate (heart_rate_mean), respiratory rate (resp_rate_mean), body temperature (temperature_mean), peripheral oxygen saturation (SPO2_mean), and systolic blood pressure. All data were analyzed using IBM SPSS Statistics 25. Two-sided statistical analyses were conducted, and a significance level of p ≤ 0.05 was used to interpret statistical significance. Normality was assessed for continuous variables, which are presented as mean ± standard deviation (SD), while categorical data are summarized as counts or percentages. Group comparisons were performed using the chi-square test for categorical variables, and analysis of variance or the Kruskal-Wallis test for continuous variables. The detailed baseline characteristics are shown in Table 1.
The LODS (Logistic Organ Dysfunction System) is a medical scoring system commonly used to assess the degree of organ dysfunction in patients. This scoring system evaluates and quantifies the functional status of multiple organ systems based on clinical indicators such as blood pressure, respiratory rate, and oxygen saturation to determine the presence of organ dysfunction.
In the field of home healthcare, OASIS (Outcome and Assessment Information Set) commonly refers to an assessment tool used to collect and document clinical information and functional status data of patients in a home care setting. The OASIS assessment covers multiple domains, including activities of daily living, medical history, pain assessment, medication management, emotional status, and more.
The Charlson Comorbidity Index is a scoring system used to assess the burden of comorbidities or other chronic medical conditions in a patient. It assigns a score to various comorbidities based on their association with one-year mortality. The scores are summed to calculate a total score, which is used as an indicator of the patient's overall health status and the risk of future complications or mortality.
Before carrying out feature selection and developing the DKA risk prediction model, we used mean imputation to handle missing values in the dataset. Mean imputation is a commonly used method in which missing values are replaced with the mean (or mode) of the available data. The formula (Equation 1) for mean imputation can be represented as:

x̄ = (1/n) Σ_{i=1}^{n} x_i,    (1)

where x_i are the observed (non-missing) values of a variable, n is the number of available samples, and each missing entry is replaced by x̄. In this study, mean imputation was performed for missing values in neutrophil and lymphocyte counts.
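As a rough sketch of this preprocessing step, mean imputation can be done in pandas as follows (the lymphocyte column name is hypothetical; only abs_neutrophils_mean appears elsewhere in the text):

```python
import pandas as pd

def mean_impute(df: pd.DataFrame, columns) -> pd.DataFrame:
    """Replace missing values in the given columns with the column mean."""
    for col in columns:
        df[col] = df[col].fillna(df[col].mean())
    return df

# e.g., for the neutrophil and lymphocyte counts mentioned above
# (the second column name here is an assumption):
# df = mean_impute(df, ["abs_neutrophils_mean", "abs_lymphocytes_mean"])
```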
Feature selection
The study employed two feature selection methods to screen important baseline characteristics related to DKA: Spearman correlation analysis and random forest. Spearman correlation analysis assesses the monotonic relationship between two continuous or ordinal variables; it describes the correlation between variables that are ordinal or whose distributions cannot be described by a mean and standard deviation. The formula (Equation 2) can be represented as:

r = 1 − (6 Σ_{i=1}^{N} d_i²) / (N(N² − 1)),    (2)

where d_i is the difference between the ranks of the two variables for the i-th observation and N represents the total number of observations. r ranges from −1 to 1: [−1, 0) represents a negative correlation and (0, 1] a positive correlation. A correlation of 0.8-1.0 indicates a very strong correlation, 0.6-0.8 a strong correlation, 0.4-0.6 a moderate correlation, 0.2-0.4 a weak correlation, and 0.0-0.2 a very weak or no correlation. It is worth noting that, to better reflect the correlation strength, we took the absolute value of all correlation coefficients. The top 20 baseline characteristics in terms of correlation strength are shown in Table 2. To enhance the reliability of the experimental results, we also incorporated a feature selection method based on random forests. Random forest is an ensemble classifier composed of multiple decision trees, RF = {h(X, q_k), k = 1, 2, ..., K}, where K is the number of decision trees and q_k is a random variable that follows an independent identical distribution. Given the independent variables, all classifiers are weighted to obtain the optimal selection result. We used a total of 10,000 decision trees, with a training-to-test set ratio of 8.5:1.5. The random forest performed repeated sampling with replacement on the dataset to obtain 10,000 data subsets, and each subset generates a corresponding decision tree, ultimately forming the ensemble used to identify important DKA baseline characteristics. The importance rankings obtained with random forest are shown in Table 3.
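A minimal sketch of the two feature selection steps, assuming the baseline characteristics are in a pandas DataFrame X and the DKA label in y (scipy and scikit-learn stand in for whatever tooling the authors actually used):

```python
import pandas as pd
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def rank_features(X: pd.DataFrame, y, n_trees=10_000, seed=0):
    """Score features by |Spearman r| with the label and by RF importance."""
    # Absolute Spearman correlation of each feature with the label, as in the text
    spearman = {c: abs(spearmanr(X[c], y).correlation) for c in X.columns}

    # Random forest importance, using the 8.5:1.5 train/test split from the text
    X_tr, _, y_tr, _ = train_test_split(X, y, test_size=0.15, random_state=seed)
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed, n_jobs=-1)
    rf.fit(X_tr, y_tr)
    importance = dict(zip(X.columns, rf.feature_importances_))
    return spearman, importance
```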
After conducting correlation analysis using the two feature selection methods, it was discovered that certain baseline characteristics exhibited high levels of correlation. By combining the importance rankings of baseline characteristics from the two feature selection methods, the top five strongly correlated baseline characteristics were selected based on the smallest sum of importance rankings. These five baseline characteristics are haemoglobin_mean, haematocrit_mean, aniongap_mean, age, and charlson_comorbidity_index.
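The rank-sum combination described here amounts to the following sketch, reusing the two score dictionaries returned by the previous snippet:

```python
import pandas as pd

def combine_rankings(spearman: dict, importance: dict, top_k=5):
    """Select the features with the smallest sum of ranks across both methods."""
    s_rank = pd.Series(spearman).rank(ascending=False)    # rank 1 = strongest correlation
    i_rank = pd.Series(importance).rank(ascending=False)  # rank 1 = most important
    return (s_rank + i_rank).sort_values().head(top_k).index.tolist()
```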
Establishing a risk prediction model for DKA
The study utilized supervised machine learning models for the prediction of DKA risk. The experiments were divided into two parts: the first part focused on risk prediction using logistic regression with a single baseline characteristic, while the second part utilized XGBoost, decision trees, random forests, support vector machines, and k-nearest neighbors classifiers with multiple baseline characteristics. The complete dataset was divided into training and testing sets with a ratio of 0.85:0.15, and the experiments were conducted using five-fold cross-validation. The performance evaluation metrics included the area under the curve (AUC) of the receiver operating characteristic (ROC) curve, accuracy, and F1 score. These metrics were used to assess the predictive performance of the models, their overall accuracy, and the balance between precision and recall in predicting DKA risk.
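Under these assumptions, the evaluation protocol looks roughly like this (a sketch; the label column name dka is hypothetical):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate, train_test_split

def evaluate(model, X, y, seed=0):
    """0.85:0.15 split, then five-fold CV reporting AUC, accuracy, and F1."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15,
                                              random_state=seed)
    scores = cross_validate(model, X_tr, y_tr, cv=5,
                            scoring=("roc_auc", "accuracy", "f1"))
    return {k: v.mean() for k, v in scores.items() if k.startswith("test_")}

# single-feature prediction as in the next subsection (column names assumed):
# evaluate(LogisticRegression(max_iter=1000), df[["haematocrit_mean"]], df["dka"])
```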
Risk prediction based on logistic regression with a single baseline characteristic
The study aimed to predict DKA risk independently from each baseline characteristic using logistic regression. Based on Table 4, the experimental results were categorized into three levels according to the F1 score: F1 scores higher than 0.80, F1 scores between 0.60 and 0.80, and F1 scores lower than 0.60. Two baseline characteristics, haematocrit_mean and haemoglobin_mean, achieved an F1 score greater than 0.80. There were 20 baseline characteristics (age, weight, heart_rate_mean, resp_rate_mean, temperature_mean, aniongap_mean, dbp_mean, abs_neutrophils_mean, congestive_heart_failure, platelets_mean, glucose_mean, obesity, myocardial_infarct, peripheral_vascular_disease, chronic_pulmonary_disease, renal_disease, oasis, cad, mechvent, charlson_comorbidity_index) with F1 scores between 0.60 and 0.80. The prediction results showed substantial agreement with the feature selection results, highlighting the strong performance of haematocrit_mean and haemoglobin_mean compared with the other baseline characteristics and indicating the importance of these two features in predicting DKA risk.
Risk prediction based on multiple baseline characteristics using XGBoost, decision trees, random forests, support vector machines, and k-nearest neighbors classifiers
To predict DKA, we utilized all 42 baseline characteristics and employed various machine learning algorithms, including XGBoost, decision trees, random forests, support vector machines, and k-nearest neighbors classifiers. The specific parameter settings for XGBoost were as follows: a learning rate of 0.01, 3000 iterations, a tree depth of 4, and a minimum sum of leaf-node sample weights of 5. The decision tree classifier used the Gini coefficient as the splitting criterion and was constructed with a maximum depth of 50. The random forest classifier employed 8 decision trees, each with a maximum depth of 50. The support vector machine classifier used the radial basis function (RBF) kernel. The k-nearest neighbors classifier was configured to use 5 nearest neighbors, with the neighbor-search algorithm chosen automatically by the scikit-learn library.
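The stated hyperparameters map onto scikit-learn/XGBoost roughly as follows (a sketch under the assumption that the standard Python implementations were used):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

models = {
    # learning rate 0.01, 3000 iterations, depth 4, min leaf-sample weight 5
    "xgboost": XGBClassifier(learning_rate=0.01, n_estimators=3000,
                             max_depth=4, min_child_weight=5),
    # Gini splitting criterion, maximum depth 50
    "decision_tree": DecisionTreeClassifier(criterion="gini", max_depth=50),
    # 8 decision trees, each with maximum depth 50
    "random_forest": RandomForestClassifier(n_estimators=8, max_depth=50),
    # radial basis function kernel
    "svm": SVC(kernel="rbf"),
    # 5 nearest neighbours, automatic neighbour-search algorithm
    "knn": KNeighborsClassifier(n_neighbors=5, algorithm="auto"),
}
```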
The experimental results in Table 5 indicate that XGBoost, decision trees, and random forests achieve an AUC, accuracy, and F1 score of 1, which demonstrates their ability to accurately identify DKA patients. However, the performance of the support vector machine and k-nearest neighbors classifiers was comparatively weaker.
We believe that the reason support vector machines and k-nearest neighbors classifiers cannot accurately identify DKA is that some baseline characteristics interfere with the models' decision-making. To further predict DKA, we used feature selection to select five baseline characteristics (namely haemoglobin_mean, haematocrit_mean, aniongap_mean, age, and charlson_comorbidity_index). The experimental results in Table 6 demonstrate a significant improvement in the performance of the support vector machine and k-nearest neighbors classifiers, validating the effectiveness of the feature selection method and the five important features. We also provide accuracy change plots for the support vector machine and k-nearest neighbors classifiers based on both the full set of features and the important features; these plots, labeled as Figures 2-5, show the variation in accuracy for the different feature sets. The learning curves illustrate the impact of the number of training samples on the models' performance. The results indicate that the machine learning approaches adopted in this study exhibit neither overfitting nor underfitting. The models had essentially reached a performance bottleneck, and there was no need to supplement the data for further training.
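The accuracy-versus-training-size curves in Figures 2-5 can be reproduced in spirit with scikit-learn's learning-curve utility (a sketch, not the authors' plotting code):

```python
import numpy as np
from sklearn.model_selection import learning_curve

def accuracy_curve(model, X, y):
    """Mean CV accuracy as a function of the number of training samples."""
    sizes, train_scores, val_scores = learning_curve(
        model, X, y, cv=5, scoring="accuracy",
        train_sizes=np.linspace(0.1, 1.0, 10))
    return sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)
```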
Discussion on the importance of baseline characteristics
The occurrence of DKA is attributed to a relative or absolute deficiency of insulin, along with the presence of excessive counter-regulatory hormones such as glucagon, cortisol, catecholamines, and growth hormone. These factors lead to hyperglycemia, glucosuria, dehydration, acidosis, and varying degrees of hyperosmolarity (11). When blood glucose levels are elevated, especially in individuals with diabetes, the body is unable to effectively utilize glucose as energy and instead begins to break down fats to provide energy. One of the byproducts of this process is acetoacetic acid. Acetoacetic acid is a ketone body, and when it accumulates excessively in the body, it can lead to ketonemia, which triggers DKA (12). DKA can affect the chemical balance of the blood, including the acid-base balance. It also impacts various blood-related parameters, such as hemoglobin and hematocrit.
Haemoglobin_mean refers to the mean value of hemoglobin (Hb). Hb is a protein present in red blood cells, primarily responsible for carrying and delivering oxygen to various tissues in the body (13). In the state of DKA, due to insufficient insulin or cellular resistance to insulin, blood glucose levels rise. High blood glucose can lead to excessive urine production by the kidneys, causing a significant loss of fluids (14). Inadequate insulin prevents cells from properly utilizing glucose as an energy source; as a result, the body resorts to breaking down fats, leading to an excessive production of ketones in the liver (13). These excess ketones are excreted along with a significant amount of urine, resulting in fluid loss. Glucose is an osmotically active substance, and in a state of high blood glucose, the osmotic pressure of the blood increases, leading to further dehydration of cells. These changes can cause the blood to become concentrated, resulting in an increase in the concentration of hemoglobin per unit volume of blood (15). In the DKA state, there is a significant increase in acidic substances in the blood. The body uses the buffering agents in the blood to neutralize the excess acid, thereby maintaining the acid-base balance (16). Hemoglobin, a basic protein, can serve as a buffer and increase compensatorily in response to acidosis. Thus, changes in Hb can effectively reflect the condition of DKA. Haematocrit_mean represents the mean value of hematocrit (Hct). Hct refers to the proportion of red blood cells in the volume of blood. In clinical practice, Hct is an important indicator for assessing blood concentration and determining blood volume status (17). The hyperglycemia and ketoacidosis characteristic of DKA result in osmotic diuresis and significant depletion of fluid and electrolytes in the intracellular and extracellular fluid compartments (18). The elevated blood glucose and increased urine output caused by DKA lead to dehydration within the body (13, 14). Dehydration-induced blood concentration can cause an increase in Hct. Moreover, the elevated blood glucose concentration in DKA increases blood viscosity, also resulting in an elevated Hct. Therefore, there is a close relationship between Hct and the state changes in DKA.
Hemoglobin and hematocrit are both measured in whole blood and therefore depend on plasma volume. If a patient is severely dehydrated, the hemoglobin and hematocrit levels will be higher than at normal blood volume (18). An increase in hemoglobin and hematocrit may indicate dehydration and blood concentration (19). Hematocrit and hemoglobin can therefore play a supportive role in evaluating DKA: an increase in red blood cell volume fraction and hemoglobin concentration may be a useful indicator of inadequate extracellular fluid volume. Meanwhile, as mentioned earlier, cerebral edema is a crucial factor contributing to the increased mortality rate in DKA, and it is the most severe complication of excessive or overly rapid fluid administration. Therefore, accurately assessing the degree of dehydration before initiating fluid therapy in DKA patients is of paramount importance. However, this is not a straightforward estimation, as dehydration does not directly correlate with the severity of DKA assessed from blood gas values. In this context, hematological parameters can be employed, two examples being hematocrit (Hct) and hemoglobin (Hb) concentration (10). Although they have limitations in predicting the occurrence of DKA (20), it is physiologically reasonable to consider them useful indicators.
The term aniongap_mean refers to the mean value of the anion gap, which measures the difference between undetermined anions and undetermined cations in the blood. It is calculated from the measured concentrations of anions (such as chloride) and cations (such as sodium and potassium) in the blood (21). The formula for the anion gap is: Anion Gap = [Na+] − ([Cl−] + [HCO3−]) (22). Under normal conditions, the anion gap typically falls between 8 and 16 mmol/L. The anion gap is commonly used to evaluate acid-base balance, can be easily calculated from routine laboratory data, and has its widest application in the diagnosis of various forms of metabolic acidosis (23). DKA has unique physiological characteristics, including the generation and elimination of ketones, hyperglycemia, and fluid loss. This combination directly influences the biochemical parameters of patients with DKA, particularly the anion gap and total carbon dioxide levels (11). In the state of DKA, metabolic disturbances lead to the production and accumulation of large amounts of ketones, such as beta-hydroxybutyrate, acetoacetate, and acetone. Ketones are byproducts of fatty acid metabolism, and their breakdown generates anions, especially beta-hydroxybutyrate. These anions are not accounted for in routine electrolyte analysis and are not included in the sum of cations (such as sodium and potassium) or measured anions (such as chloride) (24). D-lactic acid is a product of methylglyoxal (MG) metabolism through the glyoxalase pathway (25). In a state of hyperglycemia, the production of MG can significantly increase (26); therefore, in hyperglycemic conditions, the blood concentration of D-lactic acid should also increase significantly. Research has shown that in DKA, the increase in D-lactic acid also contributes to the anion gap during acidosis. Thus, in DKA, the increase in ketones and D-lactic acid leads to the accumulation of unmeasured anions, resulting in an increased anion gap (24). Measuring changes in the anion gap can therefore help in diagnosing and monitoring the severity of DKA.
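As a worked example of the formula above:

```python
def anion_gap(na, cl, hco3):
    """Anion gap in mmol/L: AG = [Na+] - ([Cl-] + [HCO3-]); normal range ~8-16."""
    return na - (cl + hco3)

# e.g., Na+ 140, Cl- 100, HCO3- 10 gives an anion gap of 30 mmol/L,
# consistent with the high-anion-gap acidosis seen in DKA:
print(anion_gap(140, 100, 10))  # -> 30
```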
A significant correlation exists between an individual's age and the likelihood of developing DKA. A study analyzing 4,807 cases of DKA revealed that the incidence rate was 14% for those above 70 years old, 23% in the age group of 51 to 70 years, 27% in the age group of 30 to 50 years, and 36% for individuals under 30 years old (5). Based on these data, it is evident that younger patients have a higher incidence rate, with DKA commonly observed in children and adolescents with both type 1 and type 2 diabetes (27). This is believed to be due to several factors commonly found in patients of this age group, including a higher rate of growth and development, increased metabolic rate, and greater insulin requirements. Furthermore, children and adolescents may have less developed self-management skills for diabetes and may be more likely to neglect or inadequately control their blood glucose levels, thus increasing the risk of developing DKA. DKA can affect individuals of all age groups, with older individuals who have additional comorbidities often experiencing higher mortality rates. However, DKA is the leading cause of death among diabetes patients younger than 24 years old, with cerebral edema induced by DKA being the most common cause (8). Middle-aged and elderly patients may have coexisting chronic conditions such as hypertension, coronary heart disease, and renal failure. These conditions may increase the risk of mortality in DKA and can affect treatment options. Furthermore, elderly patients may have decreased physiological reserves and require careful monitoring of fluid balance and insulin therapy (5). Healthcare providers should develop personalized treatment plans for patients of different age groups, taking into account their physiological characteristics, medical history, and risk of complications. As a result, age plays a crucial role in guiding the management and treatment strategies for DKA. Relevant studies indicate that a mixed state of ketoacidosis and hyperosmolarity is observed in 30% of presentations of hyperglycemic emergencies in diabetes. While both age and the degree of hyperosmolarity influence the mortality rate, only age emerges as an independent predictor of mortality (12). Poor blood glucose control disproportionately affects young patients, with a detrimental impact on DKA. Hence, we emphasize the need for a better understanding of the role of age in diabetes intervention, especially in the context of DKA.
The Charlson Comorbidity Index (CCI), also known as the Charlson Index, is a frequently used instrument for evaluating the burden and risk of comorbidities in patients. It assigns scores to various diseases, depending on a patient's medical history and diagnoses, and these scores are then combined into a composite score (28). The CCI offers useful insights into a patient's overall health status and can assist healthcare professionals in assessing and anticipating the effects of comorbidities on patient outcomes. A high CCI score indicates that the patient is significantly affected by multiple diseases, implying a greater burden of comorbidities and a higher risk of illness (29). It is widely recognized that many adults with diabetes also experience concurrent chronic conditions such as chronic heart failure, chronic obstructive pulmonary disease, renal disease, and depression (30). A comprehensive study of medical insurance data found that the presence of multiple comorbidities can complicate a patient's condition; it identified congestive heart failure (CHF), chronic kidney disease (CKD), and chronic obstructive pulmonary disease (COPD) as the most frequent conditions leading to readmission within 30 days after discharge (31). As a result, the proportion of DKA patients with comorbidities such as CHF, CKD, and COPD may be higher, indicating that these conditions commonly coexist in individuals with diabetes and potentially lead to a higher readmission rate for DKA patients. Furthermore, research has suggested that a hospital admission with a CCI score of 3 or higher can serve as a predictive factor for DKA readmission. As previously mentioned, the presence of comorbidities complicates the treatment of diabetes patients, thereby increasing the risk of readmission. Thus, active monitoring and treatment of DKA patients with comorbidities can contribute to better DKA management (32).
The diagnosis of DKA itself is prone to misdiagnosis, and the indicators used are often influenced by the underlying diabetes, making early prediction challenging. The five features we have selected exhibit strong stability and contribute to a comprehensive assessment of the patient's overall physiological status, not just the diabetes-related physiological changes. In the prodromal stage of DKA, when blood glucose and ketone body values have not yet reached diagnostic thresholds, the five features can be analyzed complementarily to provide a comprehensive assessment and assist in predicting DKA. Our intention is not to replace the diagnostic indicators for DKA, but rather to provide auxiliary indicators that help doctors diagnose more quickly and accurately.
For young patients or those with multiple complications, it is crucial to provide enhanced education and guidance on insulin or medication therapy (33). During the diagnostic and treatment process, it is essential to promptly monitor indicators such as hemoglobin, hematocrit, anion gap, age, and the Charlson comorbidity index in DM patients who present with relevant symptoms; by closely monitoring these indicators and intervening early, the incidence of the disease can be reduced.
Conclusion
This study was based on the MIMIC-IV dataset and utilized feature selection and machine learning methods to construct a risk prediction model for DKA. Five potential baseline characteristics highly correlated with DKA were identified: haemoglobin_mean, haematocrit_mean, aniongap_mean, age, and charlson_comorbidity_index. Furthermore, we utilized machine learning methods to accurately predict the incidence of DKA in patients and demonstrated the effectiveness of the important baseline characteristics. This study holds the following significant values: (1) Early warning: DKA typically develops gradually rather than occurring suddenly. By continuously monitoring important baseline characteristics and utilizing a machine learning prediction model, it is possible to identify the risk of DM patients progressing to DKA at an early stage, thereby providing early warning signals. This enables doctors to intervene in a timely manner, adjust the patient's treatment plan, and prevent the occurrence of DKA. (2) Optimized resource allocation: Establishing a DKA risk prediction model can assist hospitals and healthcare institutions in better allocating resources. For instance, for high-risk patients, more attention and resources can be allocated to monitoring and treatment to reduce the risk of DKA. This targeted allocation ensures that those at higher risk receive the necessary support and intervention, optimizing the overall healthcare delivery system. (3) Reduced healthcare costs: Treatment for DKA typically requires hospitalization and is associated with high medical expenses. By utilizing important baseline characteristics and predictive models, it is possible to effectively reduce the frequency of DKA episodes, resulting in significant cost savings for patients with recurrent DKA. This cost reduction is achieved through proactive management and prevention strategies based on risk assessment, ultimately improving the overall economic efficiency of healthcare delivery.
There are some limitations associated with this study: (1) Data quality: the model's performance heavily relies on the quality of the data used. If there are errors, missing information, or biases in the input data, the model may be influenced by quality variations, impacting its predictive capabilities. (2) Sample bias: if the samples in the training data are insufficient or do not adequately represent the diversity of the real world, the model may exhibit bias in future practical applications. The representativeness of the samples is crucial for the model's generalization ability. (3) Concept drift: if the data distribution changes over time or space, the model may struggle to adapt effectively to the new distribution. This could result in a decline in the model's performance in real-world applications. (4) Uncertainty: machine learning models typically provide probabilities or scores for predictions rather than deterministic outcomes. In the medical field, for certain situations, patients and doctors may prefer to understand the uncertainty of the model rather than just binary predictive results.
FIGURE 3 Accuracy change plot of k-nearest neighbors classifier based on all features.
FIGURE 5 Accuracy change plot of k-nearest neighbors classifier based on important features.
TABLE 1 Baseline characteristics between DKA and non-DKA group.
TABLE 2 Top 20 baseline characteristics based on Spearman correlation analysis.
TABLE 3 Top 20 baseline characteristics based on the random forest feature selection method.
TABLE 4 Characteristics at baseline between DKA and non-DKA group.
TABLE 5 DKA risk prediction based on all baseline characteristics.
TABLE 6 DKA risk prediction based on feature selection.
"Medicine",
"Computer Science"
] |
Human Re-Identification with a Robot Thermal Camera Using Entropy-Based Sampling
Human re-identification is an important feature of domestic service robots, in particular for elderly monitoring and assistance, because it allows them to perform personalized tasks and human-robot interactions. However, vision-based re-identification systems are subject to limitations due to human pose and poor lighting conditions. This paper presents a new re-identification method for service robots using thermal images. In robotic applications, as the number and size of thermal datasets are limited, it is hard to use approaches that require huge amounts of training samples. We propose a re-identification system that can work using only a small amount of data. During training, we perform entropy-based sampling to obtain a thermal dictionary for each person. Then, a symbolic representation is produced by converting each video into sequences of dictionary elements. Finally, we train a classifier using this symbolic representation and the geometric distribution within the new representation domain. The experiments are performed on a new thermal dataset for human re-identification, which includes various situations of human motion, pose, and occlusion, and which is made publicly available for research purposes. The proposed approach has been tested on this dataset, and its improvements over standard approaches have been demonstrated.
Introduction
The ageing population and increased life-expectancy of people worldwide have motivated a growing number of wellbeing and health monitoring applications for personal and domestic use. Service robotics is a promising research field that contributes to the creation of new solutions for elderly care. In recent years, service robots have become very popular by accomplishing various tasks, from guiding visitors in public environments to assisting elderly people at home. For the latter, in particular, a robust human re-identification system is needed in order for the robot to distinguish between two or more users in the household and provide personalised services (e.g. medication reminders).
Considering its importance, human re-identification has not been sufficiently investigated in robotic applications. A very large amount of work has focused on recognizing people across a network of RGB cameras in surveillance systems [2,26]. In most of these applications, re-identification is performed by extracting appearance features from RGB images [5,10,16]. On the other hand, by exploiting RGB-D cameras, anthropometric features (e.g., limb lengths) extracted from skeleton data [1,21], point cloud information [20], and volumetric features extracted from depth images [8] can be used for re-identification in service robot applications.
However, for long-term applications of domestic service robots, many existing approaches have strong limitations. For instance, appearance-based approaches are not applicable because people often change their clothes. In addition, in poorly illuminated or dark environments (at night), which are typical in domestic settings, RGB images provide very little information (Fig. 1d). Skeletal data is not always available because of self-occluding body motion (e.g., a person facing away from the camera, Fig. 1b) or objects occluding parts of the body (e.g., passing behind a table, see Fig. 1c). In order to deal with the above limitations, in this paper we propose the use of a thermal camera, which provides clear images in the infrared spectrum, even in darkness (Fig. 1e). The camera is mounted on top of an interactive service robot (Fig. 1a) used in the ENRICHME project to monitor and assist elderly people with mild cognitive impairments at home.
In this kind of robotic application, it is very hard to collect large amounts of thermal data because, differently from static cameras, the robot would have to move among people for an extensive period of time to collect a multitude of views, which can be technically infeasible. Therefore, deep-learning approaches based on large amounts of thermal data are not suitable for our human re-identification system. Moreover, in domestic environments, people can move freely and can be observed by the robot from many different views and distances. Thus, it is essential to implement a robust re-identification system that can cope with these variations and with the uncertainty introduced by occlusions and human pose.
Our approach builds a dictionary of thermal features at different views. However, instead of sampling at pre-defined angles, we perform an entropy-based sampling that automatically selects the observations providing more information. We then transform each video into a sequence of dictionary elements (symbols). In this new representation, we use the geometric distribution among symbols as features and train a support vector machine (SVM) classifier. As our approach performs entropy-based sampling, it uses a small amount of data, which fits well with the requirements of service robots.
Although thermal images are widely used in computer vision, especially for face recognition, there is no benchmark dataset for re-identification. To our knowledge, there are two datasets of people walking, i) along a corridor [19] and ii) outside a building [22], both recorded by a fixed camera in a surveillance setup and not covering the case of domestic environments from a robot perspective. Thus, we have collected a thermal re-identification dataset, which is publicly available, with the camera mounted on a mobile service robot (see Fig. 1a). We have recorded thermal images of people in various domestic scenarios, such as walking, sitting, and occlusion, from different views.
The contributions of this paper are threefold:
- a new entropy-based sampling to build thermal dictionaries and symbolic representations of humans in thermal images;
- a full software pipeline, implemented in ROS, for thermal-based human re-identification with service robots;
- a new publicly available thermal dataset for human re-identification with such robots.
The remainder of this paper is as follows. Related work on re-identification approaches is presented in Section 2. Section 3 explains the details of our approach and how entropy-based sampling is used to create thermal dictionary models (TDM). The symbolic representation and classification in the TDM domain are described in Section 4. Experimental results with our new public dataset are presented in Section 6. Finally, we conclude this paper in Section 7, discussing achievements and current limitations.
Related Work
The main goal of re-identification is to establish a consistent labeling of the observed people across multiple cameras or in a single camera in non-contiguous time intervals [2].
The approach of [10] on RGB cameras focuses on an appearance-based method, which extracts the overall chromatic content, the spatial arrangement of colors, and the presence of recurrent patterns from the different body parts of the person. In [17], the authors propose a deep architecture that automatically learns features for optimal re-identification. However, the problem with these methods is the use of color, which is not discriminative for long-term applications.
In [1], re-identification is performed on soft biometric traits extracted from skeleton data and geodesic distances extracted from depth data. These features are weighted and used to extract a signature of the person, which is then matched with training data. The methods in [20,21] tackle the problem using features based on the extracted skeleton of the person. The skeleton is used not only to calculate distances between the joints and their ratios, but also to map the point clouds of the body to a standard pose of the person. This allows the use of a point cloud matching technique, typical of object recognition, in which the objects are usually rigid. However, as skeleton data is not robust to body motion and occlusion, these approaches have strong limitations. In addition, point cloud matching has a high computational cost.
In [25], a multi-modal dissimilarity representation is obtained by combining appearance and skeleton data. Similarly, in [24], an ensemble of distance functions, each one learned using a single feature, is built in order to exploit multiple appearance features. While in other works the weights of such functions are pre-defined, in the latter they are learnt by optimizing the evaluation measures. Although these ensembles of state-of-the-art approaches can improve the accuracy of human re-identification, their dependency on color and/or skeletal data poses strong limitations on the type of environment and the sensing capabilities of a mobile robot.
In [3], human recognition is performed by fusing a histogram-based human clothes classification, which takes into account the uncertainty of the human position to select relevant image regions, and a simple face recognition algorithm. The outputs of the recognizers are then integrated with multi-sensor detectors to perform simultaneous tracking and recognition. Wengefeld et al. [30] present a combined system on a mobile robot using both laser and 3D camera for detection, tracking, and visual appearance-based re-identification. Similarly, [14] presents a method for person identification and tracking with a mobile robot, where the person is recognized using height, gait, and appearance features. Tracking information is also used in [29], where the identification is based on an appearance model, using particle swarm optimization to combine a precise upper-body pose estimation and appearance. In these approaches, re-identification is mainly used to recover human IDs during people tracking. In this case, appearance-based features are enough for human re-identification in the short term, but not to identify people in the long term.
In the last decade, thermal images have become increasingly popular for solving standard computer vision problems, in particular face recognition [7,11,12,31]. In [7], local binary patterns (LBP) are extracted from thermal images; then, given a feature vector from a test sample, the authors use partial least squares discriminant analysis (PLS-DA) to perform face recognition. Wu et al. [31] presented a convolutional neural network (CNN) architecture that can automatically learn effective features from thermal data and perform face recognition with a softmax classification layer. Although many thermal face recognition approaches achieve good results, they are typically not suitable for domestic service robot applications because they require a clear frontal image of the face, which is often not possible to obtain with a mobile robot.
Entropy-Based Sampling
The proposed approach for human re-identification uses images acquired by a thermal camera, shown in Fig. 3a. It performs face segmentation, extracts thermal features, and creates thermal dictionary models from training sequences. Symbolic representations of the thermal features are then used to train an SVM classifier. The flow diagram of the system and the respective sub-modules are depicted in Fig. 2. The following subsections explain each part of our approach in detail.
Head Segmentation and Thermal Feature Extraction
The image acquired from the thermal camera provides the temperature of objects in the camera's field of view. Since human temperature lies within a specific interval, it is possible to segment people in the thermal image by thresholding the temperature data. The face and body provide important features to recognize people. However, the temperature data obtained by observing the human body largely depends on the type of clothes the person wears (Fig. 3b). Therefore, we focus on the segmentation of the head region only.
We first threshold the thermal image (Fig. 3b) in the interval [32 °C, 39 °C] and obtain a binary image (Fig. 3c). Then, we apply connected component analysis to the binary image. We filter the components based on area and width, keeping only those whose area and width are larger than pre-defined values. Among the remaining components, we select the region of the binary image with the smallest width, i.e., the head (Fig. 3d). After the head region in the thermal image is segmented, we extract features from it. The temperature data of the head region (i.e., the whole 3D head, not just the 2D face) provide important information. We therefore calculate the temperature histogram of the current head region (Fig. 3e), which is normalized to obtain the distribution of the temperatures (Fig. 3f). The concatenation of temperature histograms from different points of view provides the temperature characteristic of the person's head. In [27], it is noted that the temperature of the skin surface varies with the environmental temperature, the body temperature, and the conditions of the skin and the structures beneath it. However, the head is one of the regions of the human body where the skin temperature remains more or less constant despite temperature changes in the environment [23]. Our hypothesis is that the head temperature distribution can be used to distinguish people's identities. This is verified experimentally in Section 6.
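A minimal sketch of this segmentation-plus-histogram step (OpenCV/numpy; the area/width thresholds are illustrative, and the 120-bin histogram is an assumption that merely matches the feature-vector size reported later in the training section):

```python
import cv2
import numpy as np

MIN_AREA, MIN_WIDTH = 200, 10  # illustrative filtering thresholds

def head_histogram(temp_image, t_min=32.0, t_max=39.0, bins=120):
    """Segment the head region in a temperature image and return its
    normalized temperature histogram."""
    # Threshold the temperature image to a binary human mask
    mask = ((temp_image >= t_min) & (temp_image <= t_max)).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Keep components whose area and width exceed the thresholds (label 0 is background)
    keep = [i for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= MIN_AREA
            and stats[i, cv2.CC_STAT_WIDTH] >= MIN_WIDTH]
    if not keep:
        return None
    # Among the remaining components, select the one with the smallest width (the head)
    head = min(keep, key=lambda i: stats[i, cv2.CC_STAT_WIDTH])
    temps = temp_image[labels == head]
    hist, _ = np.histogram(temps, bins=bins, range=(t_min, t_max))
    return hist / hist.sum()  # normalize to a temperature distribution
```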
Thermal Dictionary Models
In real-world scenarios, where people freely move in the environment, service robots require a view-independent re-identification approach. Considering single-shot re-identification, most of the view-independent methods are based on full person models built from various views [6,33]. Although they can typically achieve very good results, they also have a high computational cost. A model representing the person from different points of view can instead be embedded in a dictionary of thermal features. Assuming we have thermal data of a person turning around, we can extract a sequence of features obtained at different angles (Fig. 4). However, choosing the sampling angles is not easy: the representation could be too coarse or too fine, depending on the pre-defined angle intervals. In our approach, instead, we let the data determine which features are worth keeping by performing an entropy-based sampling on the sequence of features. The features that provide sufficient information gain are included in the dictionary.
In information theory, the information gain (relative entropy) is a measure of the difference between two probability distributions, which can be measured by the Kullback-Leibler (KL) divergence [15]. We determine the information gain between features using the latter, calculated as follows:

D_KL(P ‖ Q) = Σ_i P(i) log( P(i) / Q(i) ),    (1)

where P and Q are (normalized histogram) features extracted from the thermal data. Using Eq. 1, we perform entropy-based sampling as follows: first, we calculate the KL divergence between each element in the dictionary model and a new thermal feature; then, if the information gain is bigger than a pre-defined threshold, we include the new feature in the dictionary model. The procedure is summarized in Algorithm 1. The sampled features are converted into symbols and included in a thermal dictionary model (TDM) (Fig. 5). As TDMs are individually processed and generated for each person, histograms from different people may be associated with the same symbols. In our approach, we do not assume a pre-defined number of samples or any particular (fixed) orientation of the user. Thanks to our entropy-based sampling scheme, the system automatically selects the most informative observations. During training, the user is only asked to turn around in front of the robot, so the system can measure the information gain and automatically generate the TDMs. These are used for the classification step of the re-identification, as explained in the next section.
Algorithm 1 Thermal dictionary models are obtained by entropy-based sampling based on the Kullback-Leibler (KL) divergence [15].
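A sketch of this entropy-based sampling (assuming, as the text implies, that a feature is added only when its information gain with respect to every element already in the dictionary exceeds the threshold):

```python
import numpy as np

EPS = 1e-10  # avoids log(0) for empty histogram bins

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(P || Q) between two normalized histograms (Eq. 1)."""
    p = np.asarray(p, dtype=float) + EPS
    q = np.asarray(q, dtype=float) + EPS
    return float(np.sum(p * np.log(p / q)))

def build_tdm(features, threshold):
    """Entropy-based sampling: keep only features that are sufficiently
    informative with respect to the current dictionary."""
    tdm = []
    for f in features:
        if all(kl_divergence(f, t) > threshold for t in tdm):
            tdm.append(f)  # the element's index serves as its symbol
    return tdm
```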
Symbolic Representation
For each person, we extract a TDM from a training sequence, i.e., T_c for 1 ≤ c ≤ C, where C is the number of people (classes). Then, we obtain a symbolic representation of each test sequence by converting the thermal features into symbols using the TDM of each person. For each extracted feature, we find the most similar element in the TDM and assign its symbol to the feature. Here, we use the KL divergence (1) as a measure of the similarity between features and TDM elements: we calculate the KL divergence between a feature and each TDM element and assign the symbol of the most similar element. As a result of this operation, we obtain a symbol S for each TDM:

S = s_m,  with  m = argmin_i D_KL(F ‖ T_i),    (2)

where T_i represents the i-th element of the TDM, F is the extracted feature, and m is the index of the most similar element.
An important aspect of the symbolic representation lies in taking into account how similar a feature is to a dictionary element. S represents the most similar element for each feature, but it does not contain this information. Thus, we extend the symbolic representation by including the similarity measure between the feature and the most similar dictionary element:

d = D_KL(F ‖ T_m).    (3)
In conclusion, we obtain the following combined representation for each thermal feature:

ψ_c = (S, d),    (4)

pairing, for class c, the symbol of the most similar dictionary element with the corresponding similarity value.
Classification
ψ_c provides a representation of the feature vector F in the domain of samples from class c. This representation answers the following questions about the feature vector: i) what is the most similar dictionary element in class c, and ii) how similar are they? Geometrically, the representation is shown in Fig. 6a. If we concatenate the symbols computed for each class into a single vector, we obtain a new feature vector with respect to the whole training space (Fig. 6b):

Ψ = [ψ_1, ψ_2, ..., ψ_C].    (5)

With this representation, we can encode any new feature vector in the (previously trained) feature space. We assume that features from the same class (i.e., person) will have similar representations, so we can match the representation of a test sample to the representations of training samples from the same class.
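Putting Eqs. 2-5 together, one plausible reading of the combined representation is the following sketch (reusing kl_divergence from the previous snippet; the exact layout of the concatenated vector is our assumption):

```python
import numpy as np

def encode(feature, tdms):
    """Encode a thermal feature against the TDM of every class: for each
    class, store the index of the most similar dictionary element (the
    symbol) and the corresponding KL similarity, then concatenate."""
    rep = []
    for tdm in tdms:  # one TDM per person/class
        divs = [kl_divergence(feature, t) for t in tdm]
        m = int(np.argmin(divs))       # most similar element (Eq. 2)
        rep.extend([m, divs[m]])       # symbol plus similarity (Eqs. 3-4)
    return np.asarray(rep)             # concatenation over classes (Eq. 5)
```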
As a result of the features extracted by Eq. 5, we obtain a high-dimensional data representation for the next classification stage. We train an SVM classifier [9], which is proven to work very well for high-dimensional data [13], using this similarity-based representation.
Software Pipeline and Implementation
The full software pipeline of the proposed human re-identification approach is implemented as a ROS node for applications with domestic mobile robots. This ROS node assumes that the SVM classifier is already trained and that the corresponding thermal dictionary and classifier models are present. The software pipeline is illustrated in Fig. 7.
The blue and yellow boxes present the main and auxiliary functions implemented in the software, respectively. The green box represents the ROS driver of the Optris thermal camera. The following subsections explain the details of the pipeline.
Thermal Image Acquisition
In this paper, the thermal images are acquired using the Optris PI-450 thermal camera (Fig. 3a). This camera offers a temperature range of −20 °C up to 900 °C in the spectral range of 7.5 to 15 μm. The image resolution is 382 × 288 and the maximum frame rate is 80 fps. Our model is equipped with a 15 mm lens, which provides a field of view (FOV) of 38° × 29°. The ROS driver of the Optris thermal camera provides images in two different formats: i) raw image data in unsigned short format, and ii) color thermal images in (BGR8) format. Using this driver, both raw and color thermal images are acquired and made directly available to our ROS software module for thermal re-identification.
Software Implementation
The thermal re-identification software is encapsulated into a ROS package, which can be easily installed thanks to its catkin compatibility. The ROS package developed for re-identification includes a one-click roslaunch file with a YAML file containing the parameters for re-identification shown in Table 1. This ROS node subscribes to both the raw and color thermal images published by the Optris driver and performs the following operations. First, following the PI imager library, raw image data (data) are converted to temperature values in floating point format (t) as follows: t = (data − 1000)/10.0. Then, as described in Section 3, the ROS node performs thermal feature extraction and creates the symbolic representation of each new thermal feature. Finally, the SVM classifier is used to predict the label of the new feature. The SVM classifier is implemented using the libSVM library [4]. This ROS node assumes that the model files for the thermal dictionary and the classifier are present under the "config" directory of the ROS package.
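For illustration, a minimal sketch of the raw-to-temperature conversion inside a rospy subscriber. The topic name is a hypothetical placeholder (the actual name depends on the Optris driver configuration), and cv_bridge is one common way to access the 16-bit image buffer.

```python
import rospy
import numpy as np
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def raw_callback(msg):
    raw = bridge.imgmsg_to_cv2(msg)                   # unsigned short pixel values
    temp = (raw.astype(np.float32) - 1000.0) / 10.0   # temperature in degrees Celsius
    # ... head segmentation, feature extraction and classification follow here

rospy.init_node("thermal_reid")
rospy.Subscriber("/optris/thermal_image", Image, raw_callback)  # hypothetical topic name
rospy.spin()
```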
The name of the recognized person is published as a ROS topic together with a confidence level.In addition, the result of the re-identification is visualized on a color converted thermal image and published as a separate ROS topic.
Training
The training of the SVM classifier is performed offline using a separate program. We set the parameters of libSVM [4] to perform multi-class support vector classification. Linear, polynomial, sigmoid and radial basis functions were tried as kernels. The radial basis function (RBF) was empirically selected as it provided the best classification results. The gamma of the RBF is set to 1/120, where 120 is the size of our feature vector. Different values of the cost C and the tolerance eps were also tried.
Based on these classification tests, we chose the C and eps values that gave the best results, i.e., 1000 and 0.01, respectively. To reduce the training time, we also enabled the shrinking parameter of libSVM and set the kernel cache to 4,000 MB.
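The reported hyperparameters, expressed here with scikit-learn's SVC for readability (SVC wraps libSVM); mapping `eps` to the stopping tolerance `tol` is our interpretation of the corresponding libSVM option.

```python
from sklearn.svm import SVC

clf = SVC(kernel="rbf",
          gamma=1.0 / 120,    # 1 / feature-vector size
          C=1000,             # cost
          tol=0.01,           # eps, the stopping tolerance
          shrinking=True,     # shrinking heuristic enabled
          cache_size=4000)    # kernel cache in MB
```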
Experimental Results
We evaluated the performance of our approach under various real-world conditions such as walking, sitting, and occlusion of the face. As one of the contributions of this paper, we have recorded a novel thermal dataset for human re-identification. The details of this dataset, together with the obtained results, are presented in the following subsections.
Thermal Re-identification Dataset
We have recorded a publicly available thermal dataset for human re-identification (https://lcas.lincoln.ac.uk/wp/research/data-sets-software/l-cas-rgb-d-t-re-identification-dataset/). The dataset was recorded in a laboratory environment using an Optris PI-450 thermal camera mounted on a Kompaï robot. Thermal images were recorded at a resolution of 382 × 288 and 10 fps. Our dataset covers different challenges of real-world scenarios, such as observing people from different points of view while walking and sitting. It also includes people wearing accessories such as hats, glasses, and scarves that occlude part of the face. The dataset is available as ROSbag files and can be directly used in a ROS environment.
In particular, the dataset consists of sequences of four categories: 1) person turning around on the spot, 2) person walking, 3) person sitting on a sofa, and 4) person turning around while wearing a hat, glasses, or a scarf. In total, 15 people were recorded in the dataset. Sample images from the dataset are depicted in Fig. 8.
For the turn around and occlusion sequences (categories 1 and 4), the participants were asked to turn to four different directions (frontal, left side, back, right side) and remain still for about 3 seconds. In the occlusion sequences, they were also asked to wear accessories occluding different parts of the face. We asked people to repeat the sequences 4 times and 6 times in the occlusion and turn around sequences, respectively. On average, each sequence lasted about 15 seconds, resulting in 600 and 900 total thermal images per person in the occlusion and turn around sequences, respectively. The walking sequences (category 2) lasted about 30 seconds. They contain images of non-frontal face views of people walking freely in front of the robot. The sitting sequences (category 3) contain images acquired while the participants were sitting on a sofa at three different distances (2 m, 3.5 m, and 5 m) for about 5 seconds. For the walking and sitting sequences, we asked people to repeat the same activity twice, resulting in 600 and 300 thermal images per person, respectively. In total, our dataset contains around 2,400 thermal images per person, resulting in 36K images overall (Table 2).
Thermal image datasets are relatively new in the computer vision and robotics communities. Most of the existing datasets are for face recognition and include only frontal images [7]. To our knowledge, there are only two thermal datasets for human re-identification that do not focus on faces [19,22]. However, these datasets only contain thermal images of people walking along a corridor or in front of a building. Therefore, they are not suitable to represent real-world situations typical of domestic environments (e.g., sitting). To the best of our knowledge, ours is the first thermal dataset recorded using a robot in challenging real-world scenarios.
Experimental Setup
In our experiments, we used the sequences in which people turn around on the spot for learning the thermal dictionary models, and the rest for testing. We took 2/3 of the turn around set for training; the rest was included in the testing set. Hence, on average, 600 thermal images per person were used for training.
For the thermal features, we calculated histograms (10 bins) of the head region over the same temperature interval used for thresholding, i.e., [32 °C, 39 °C]. The number of bins was selected empirically from real tests. With a higher number of bins, the histograms become sparse and the TDMs very large, decreasing the re-identification performance. With a smaller number of bins, the histograms become flat and look mostly alike, generating very small TDMs, which are not enough to distinguish people.
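A minimal sketch of this feature, assuming `head_temps` is a 1-D array of temperature values (in °C) taken from the segmented head pixels.

```python
import numpy as np

def thermal_feature(head_temps, bins=10, t_range=(32.0, 39.0)):
    """10-bin temperature histogram of the head region, normalized to sum to 1."""
    hist, _ = np.histogram(head_temps, bins=bins, range=t_range)
    total = hist.sum()
    return hist / total if total > 0 else hist
```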
Results
We evaluated the system on single frames, comparing the recognized class of each frame in a test sequence with the ground truth. We compared our approach ("Symbolic Rep.") to a standard SVM classifier using the whole training set without entropy sampling ("Whole Training Set"). In order to analyse the advantages of the symbolic representation, we also compared our approach to an SVM classifier trained on the entropy-sampled features but without the symbolic representation ("Entropy-based Samp."). We calculated recall, precision, accuracy and F1 score for every subject individually, averaging the results across all the test frames. We also computed the Cumulative Matching Characteristic (CMC) curve, which is commonly used for evaluating re-identification methods [28]. For every k = 1, ..., N_train, where N_train is the number of training subjects, the CMC expresses the average person recognition rate computed when the correct person appears among the k best classification scores (rank-k). A standard way to evaluate the CMC is to calculate the rank-1 recognition rate and the normalized Area Under Curve (nAUC), which is the integral of the CMC.
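A sketch of the CMC and nAUC computation as described, assuming `scores` is an (n_samples, n_classes) array of per-class classification scores and `labels` holds the ground-truth class indices.

```python
import numpy as np

def cmc_curve(scores, labels):
    labels = np.asarray(labels)
    order = np.argsort(-scores, axis=1)                   # classes ranked by score
    ranks = np.argmax(order == labels[:, None], axis=1)   # rank position of the true class
    cmc = np.array([(ranks <= k).mean() for k in range(scores.shape[1])])
    nauc = cmc.mean()        # normalized area under the CMC curve
    return cmc, nauc         # cmc[0] is the rank-1 recognition rate
```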
Turn Around Sequences
First, we evaluated the system on the turn around sequences. As the type of motion in the test set is similar to that in the training set, the re-identification problem is relatively easy and we would expect very good results. This is indeed confirmed by Table 3. It can be seen that our symbolic representation achieves the best results, with over 85% recall, precision and accuracy. This is much better than the 75% obtained with the other approaches. We should also note that our symbolic representation achieves 10% better performance on average compared to the SVM classifier without symbolic representation ("Entropy-based Samp."). Figure 9 shows the CMC curve of all the approaches for the turn around sequences. Again, this shows that our symbolic representation outperforms the other approaches.
Occlusion Sequences
The results for the occlusion scenario are presented in Table 4. This experiment was designed to understand the effect of the occlusion cases that usually happen in real-world scenarios when wearing accessories such as glasses, hats, and scarves. The results show a decrease in performance compared to the turn around case, which was expected due to the more challenging nature of the experiment. However, our approach still achieves the highest re-identification rates among all methods. We can also see that our symbolic representation achieves 5% better performance on average compared to the classifier without symbolic representation. This is also confirmed by the CMC curve in Fig. 10. Although the results look similar in the higher ranks, we can clearly see that our approach obtains the highest recognition rate in the lower ranks, especially at rank-1.
Fig. 9 The Cumulative Matching Characteristic (CMC) curve of all approaches for the turn around sequences
Walking Sequences
Table 5 presents the re-identification results for the walking scenario. This includes people walking freely, observed from various angles and distances very different from the training set. Thus, the complexity of this scenario is higher than in the previous cases, as can be observed from the performance drop for all the methods. Nevertheless, our approach again achieves the best performance, in particular thanks to the improvement introduced by the entropy-based sampling stage. This can also be observed from the CMC curves in Fig. 11, showing that our approach obtains the highest recognition rate in most of the lower ranks.
Sitting Sequences
Finally, in the sitting sequences, we evaluated the performance under a different human pose. This scenario also serves to understand the effect of observation distance in isolation.
The results are presented in Table 6. We can see that our approach is not much affected by distance, and it clearly outperforms the other approaches. Compared to the classifier without symbolic representation, our final classification achieves 5% better performance on average. Again, looking at the CMC curve in Fig. 12, we can see that our approach achieves the best re-identification rates in almost all ranks.
Overall Performance
For a comparison of the overall re-identification performance, we measured the accuracy of all approaches on a large testing set consisting of all the testing sequences. Table 7 presents the overall results and Fig. 13 displays the overall CMC curve. Once again, we can clearly see that our approach outperforms the others by a high margin. We can also see that our symbolic representation achieves 10% better accuracy on average compared to the SVM classifier using only plain TDMs (without symbolic representation). This shows the superiority of the symbolic approach.
Fig. 10 The Cumulative Matching Characteristic (CMC) curve of all approaches for the occlusion sequences
Discrimination and Rejection
In this subsection, we present the experiments performed to evaluate the discrimination and rejection properties of our approach.
To have a better understanding of the classification results, we calculated the confusion matrix for the turn around sequence (Fig. 14). We can see that, except for a couple of people (persons 10 and 14), our approach achieves at least 80% recognition accuracy. This also shows that our symbolic representation, even if based on simple temperature histograms, enables a powerful and discriminative re-identification of humans with a robot thermal camera.
We have also tested the rejection property of our approach by analysing the confidence level of the samples that were correctly classified (true positives). In Fig. 15, we present the histogram of all the confidence levels in the testing sequences. We can see that more than 90% of the true positives have a confidence level greater than 60%, showing that our approach has the ability to reject unknown people in most cases.
Effects of Head Segmentation Performance
The head segmentation process is the first step of our algorithm, and its failure may compromise the re-identification performance. To understand the effect on our system, we simulated several levels of failure in the head segmentation by artificially occluding the binary image from four directions (Fig. 16b-c), replicating potential problems due to poor thresholding or connected component analysis. In particular, we tested the classification performance by occluding 10% to 90% (at 10% intervals) of the binary image in the turn around sequence.
Figure 17 shows the recognition rate of our approach with various levels of occlusion. Similar to the previous occlusion experiments (Section 6.3.2), we see that there is a decrease in the recognition rate as the face gets more and more occluded. However, we can also see that our approach still works in challenging cases, achieving a 60% recognition rate with 30% occlusion. Notice that the recognition performances with the occlusions on the y-axis (top-to-bottom and bottom-to-top) are slightly worse than those on the x-axis, showing a higher dependence of our approach on the temperature at the top and bottom of the head. An error in the measurement of the temperature from thermal images can also negatively affect the head segmentation step. To evaluate the effect of this error, we tested our approach with several levels of temperature error. We simulate this by randomly removing some pixels from the binary image (Fig. 16a). In particular, we tested the classification performance on the turn around sequence by removing 10% to 90% (at 10% intervals) of the binary image.
Fig. 11 The Cumulative Matching Characteristic (CMC) curve of all approaches for the walking sequences
Fig. 13 The overall Cumulative Matching Characteristic (CMC) curve of all approaches
Fig. 14 The confusion matrix of our re-identification system for the turn around sequence
Fig. 15 The histogram of confidence levels of our approach for the true positive samples in the testing sequences. 90% of the true positives have a confidence level greater than 60% (marked by a red line)
Fig. 16 The effects of head segmentation performance are tested by applying several levels of occlusion error on the binary image, some examples of which are shown here
Figure 18 shows the recognition rate of our approach with various levels of temperature error. Our system can still achieve a recognition rate of 77.8% even with a 50% temperature error, which is higher than the recognition rate achieved by the other (non-symbolic) methods without temperature noise (Table 3). This shows that our approach is also robust to this type of error in the head segmentation step.
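A sketch of the two degradation tests, assuming `mask` is the 2-D binary head mask produced by the segmentation step; the direction names are hypothetical labels.

```python
import numpy as np

def occlude(mask, fraction, direction="top"):
    """Zero out a fraction of the binary head mask from one of four directions."""
    out = mask.copy()
    h, w = mask.shape
    if direction == "top":
        out[: int(h * fraction), :] = 0
    elif direction == "bottom":
        out[h - int(h * fraction):, :] = 0
    elif direction == "left":
        out[:, : int(w * fraction)] = 0
    else:  # "right"
        out[:, w - int(w * fraction):] = 0
    return out

def drop_pixels(mask, fraction, seed=0):
    """Randomly zero out pixels, simulating temperature measurement error."""
    rng = np.random.default_rng(seed)
    out = mask.copy()
    out[rng.random(mask.shape) < fraction] = 0
    return out
```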
Experiments on a Mobile Robot
We further evaluated our approach on a different platform, a TIAGo mobile robot (Fig. 19a), also used in the ENRICHME project. The system works in real time on the robot using our ROS software implementation (Section 5). An Optris PI-450 thermal camera was mounted on the robot's head, slightly higher than in the previous case. Figure 19 illustrates some outputs of the ROS re-identification node. The images show the segmented head with a white bounding box and the recognized person together with the confidence value. Following our analysis of the confidence level (Section 6.4), we set the rejection threshold on the confidence level to 60%. It can be seen that there are some failures, mostly because of wrongly detected head regions (Fig. 19b). We can also notice that the confidence level of the classifier drops when the person is too close to or too far from the robot (Fig. 19c). This is due to out-of-focus thermal images affecting the extracted features. Nevertheless, we see that in general the proposed approach can recognize the person with high confidence and, importantly, that it works successfully on a real service robot.
Conclusion
This paper presented a new re-identification system for service robot applications using thermal images. A view-independent approach, using entropy-based sampling and a symbolic representation, has been described. The method is suitable for mobile robots monitoring and assisting elderly people at home, in particular to distinguish the actual user in case of two or more occupants. Our solution requires a relatively small amount of training data, which is an advantage in many real-world applications. To achieve this, we extract a thermal dictionary model of the person by sampling over a single rotation sequence of the head. Then, we transform each thermal frame into a new set of dictionary elements (symbols). In this new symbolic representation, we exploit the geometric distributions of the symbols as features for classification. The proposed approach was evaluated under various real-world conditions, including people walking, sitting, and under occlusion. Both quantitative and qualitative results were presented, on a new dataset and on a real mobile robot, respectively. Despite some limitations in the case of walking or sitting people, the experimental results showed the good performance of our re-identification system in several challenging situations and proved that it can be used for companion robots assisting the elderly in daily life. Future work will consider temporal models and multi-sensor extensions of our solution to improve the robustness of the re-identification in case of different human poses and motion behaviors. We will also look into on-line learning approaches [18,32] to incrementally improve the re-identification performance over time, as the robot keeps track of people and collects more and more data about the target users.
Fig. 1 Examples of a human observed by a Kompaï service robot (a): from the back (b), under occlusion (c), with poor lighting conditions (d), and on a thermal camera (e)
Fig. 2 The flow diagram of our approach for thermal-based human re-identification
Fig. 5 Using entropy-based sampling, we create a thermal dictionary model of each person and represent it in symbols
Fig. 6 Symbolic representation of a feature vector in class 1 (i.e., person 1) (a) and in the whole training space (b). Light and dark colored stars represent the test samples and the dictionary elements of each class obtained by entropy-based sampling (e.g., T_1), respectively
Fig. 7 The software pipeline of the proposed thermal re-identification approach
Fig. 8 The dataset consists of four parts: people standing still and turning around (a-b), people walking freely (c), people sitting on a sofa recorded at 3 different views (d-f), and people occluding parts of their face while wearing a hat, glasses, or a scarf (g-i)
Fig. 19 Some examples of the experiments on a TIAGo mobile robot (a) with two subjects walking (b) and sitting (c). The red marks indicate cases of unsuccessful re-identification
Table 1 Description of the parameters for re-identification
Table 6 The re-identification accuracy rates of our approach (Symbolic Rep.), SVM with the whole training set (Whole Training Set) and SVM with entropy-based sampling (Entropy-based Samp.)
| 8,188.6 | 2019-05-15T00:00:00.000 | [ "Engineering", "Computer Science" ] |
Design of Delivery Valve for Hydraulic Pumps
After briefly recalling the main problems that arise in the study of globe valves for reciprocating pumps, a methodology has been set up in order to refine their design. The obtained method has the advantages of simplicity and independence from empirical diagrams. In summary, from the obtained equations, with suitable parameter values, all the dimensions of the valve or valves can be deduced from the assigned data (capacity Q_0 and rotational speed n). Depending on the parameter values, it is possible to identify the most suitable kind of valve: a single dish-shaped valve, a ring valve, a valve with several rings or a group of valves.
Introduction
Automatic valves for suction and delivery are among the most important components of a piston pump, since the wear and locking of a valve, in addition to greatly reducing capacity, can also cause serious deformations of the piston rod. Therefore, it is necessary to design such mechanical components with care and with the use of high-quality materials [1-4].
Regarding the type of valve which should be used, it is possible to provide only indicative data: valves with a conical seat and no return springs are suitable for low flows and low speeds. As the capacity increases, valves with multiple seats must be used, and for high rotating speeds it is necessary to use return springs, which allow a more rapid closure, improving volumetric efficiency. To prevent the leakage of the liquid through the passage hole for the rod of the plunger, stuffing boxes with flexible seals are generally used, fixed at the end part of the cylinder or attached there with a series of bolts [5-10].
In the annular cavity surrounding the rod, rings of hemp or asbestos graphite are inserted, which are then compressed by the mobile collar by properly tightening the bolts that attach it to the cylinder. The rings adhere more or less strongly to the piston rod, limiting fluid outflow; the adjustment of the bolts must be very accurate, because over-tightening may cause the heating of the rod and of the packing gland, while tightening too weakly can cause a substantial loss of liquid and thus a decrease of the volumetric efficiency [11-16].
A complete methodology for the sizing of the control valve passes through the definition of its size and its degree of opening at the design capacity. As a first, default approach, it is necessary to choose a valve size with a nominal diameter slightly smaller than the diameter of the line. In any condition, it is possible to use a "line size" valve. This choice identifies the capacity coefficient (commonly known as CV) of the valve. The full-opening CV of a control valve is the number of US gallons per minute that flows through the considered valve with a pressure drop of 1 psi. The load losses in the valve are regulated by the degree of opening that the valve has at its design capacity. The opening fraction is the ratio of the actual CV to the full-opening CV. Normally, this value is chosen (computed at design capacity) between 60% and 90% [17].
In the study of the motion of a piston pump valve [18-23], it is necessary to take into account three speeds: the speed v of the fluid, which depends on the load loss corresponding to the passage through the valve; the speed v′, which affects the water pressure acting against the dish-shaped valve; and the speed v_v of the dish-shaped valve of the considered system.
Further, it is necessary to consider also the diagrams of the valve lift depending on the crank and the piston position; in a first approximation, these diagrams can be reduced to very simple schemes.
Finally, it is interesting to show that the maximum valve opening depends on the pump capacity but not on the individual factors (stroke, area of the piston, and number of revolutions of the crank), singly considered, that determine the pump capacity.
Of all these issues, very little is reported in the current literature. Therefore, in this study, we will proceed by first establishing, in an approximate way, the relationships between the main variables that influence the considered phenomenon.
Subsequently, corrections have been performed, aimed at determining the best operating conditions of the valves considered.
Preliminary Analysis
Before getting started, an approximate analysis has been performed, and corrections have been added afterwards in order to lead to a precise representation of the considered phenomena.
As an example, we can consider the pump in the delivery phase (Figure 1) with ω = const. If the obliquity of the connecting rod is neglected, the stroke s of the piston is given by (1). Knowing that u = ωr, the velocity of the piston can be written as in (2). Furthermore, naming F the piston surface, the instantaneous capacity Q is given by (3). If h is the valve lift at the instant t, δ the angle of the seat, and l the peripheral development of the hole (Figure 2), the cross-section f of the latter at the instant t is given by (4).
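The equations referenced in this passage were lost in extraction; the following LaTeX block is a reconstruction from the surrounding definitions (slider-crank kinematics with rod obliquity neglected, φ = ωt), and should be read as our reading of Equations (1)-(4) rather than a verbatim restoration.

```latex
\begin{align}
  s   &= r\,(1 - \cos\varphi), \tag{1}\\
  v_s &= \omega r \sin\varphi = u \sin\varphi, \tag{2}\\
  Q   &= F\,v_s = F u \sin\varphi, \tag{3}\\
  f   &= h\,l\,\sin\delta. \tag{4}
\end{align}
```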
Then, if v is the theoretical speed of the water through the hole and µ the flow coefficient, the actual flow is given by (5). From (3) and (5) we immediately obtain (6) and, using (4), (7). If it is now assumed, as a first approximation, that µ·v is constant, the same holds for the coefficient of sin φ on the second member of (7); this coefficient gives the maximum height h_max, occurring for φ = π/2, so that (7) can be written as (8), h = h_max sin φ. Using Formula (8), reporting the angles φ on the x-axis and the height h on the y-axis, we obtain a sinusoid with amplitude h_max. From (1) it is possible to obtain x/r = −cos φ, where x is the distance of the piston from its mid-stroke, while from (8) we obtain h/h_max = sin φ. Thus, by squaring and summing, we obtain the relationship (x/r)^2 + (h/h_max)^2 = 1. Hence, reporting the distances travelled by the piston starting from the midpoint of its stroke on the x-axis and the risers of the valve on the y-axis, we obtain an ellipse with axes r and h_max.
It is clearly possible to see that, in this first approximation, the valve should open and close precisely at the instants in which the piston is located at the dead points. It is now easy to obtain the expression of the section f_max of the valve's maximum opening: from (6) we obtain (13) and then (14), from which it follows that, given the supposed constancy of µ·v, f_max (and therefore h_max, l being equal) depends on the product F·n·r (= 30·Q_0) and not on each of the factors F, r, n singly considered. Now, in the equilibrium relation (15), T is the spring load on the dish-shaped valve, G is the weight of the valve, γ_1 is the specific weight of the pumped liquid, and γ is that of the material of which the valve is made. The product G(1 − γ_1/γ) is the weight of the valve reduced by the hydrostatic thrust (we suppose the valve moves vertically), and P, in absolute value, is the thrust of the water on the considered valve. Since T is a function of h, the same holds for P. With f_1 indicating the area of the section of the valve in the plane of the seat, approximately equal to the surface of the valve, the ratio P/f_1 is expressed in atmospheres, while Formula (16) gives the pressure in N/m^2. The pressure b corresponds, under known conditions, to the theoretical outflow speed (17), for which the actual value through the valve hole can be written by introducing µ_p, a particular flow coefficient determined experimentally, which depends on the type of valve considered (18). Comparison with (5) then gives (19). The hypothesis of µ·v constant therefore implies that µ_p·v is constant. On the other hand, with f being the area of the outflow aperture, (20) must hold; with f and Q variable, the constancy of µ·v requires Q/f to be constant. Assuming µ_p known, from (18) we obtain (21), which allows us to deduce b if Q is known. To keep Q constant, the product b·µ_p^2 must obviously be constant. The velocity v_v of the valve is then obtained from (8) as (22), which shows that the speed of the valve, plotted on the ordinate as a function of time (or of the angle φ), gives rise to a cosinusoid. Formula (22) also indicates that, for φ = 0 and for φ = π, v_v ≠ 0, so that the valve opens and closes with finite speed, i.e., with shock.
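Similarly, a reconstruction of the sinusoidal lift law and the valve speed implied here: the maximum lift follows from equating (3) and (5), and the speed is the time derivative of the lift. This is our reading of Equations (8) and (22).

```latex
\begin{align}
  h   &= h_{\max}\sin\varphi, \qquad
         h_{\max} = \frac{F\,u}{\mu\,v\,l\,\sin\delta}, \tag{8}\\
  v_v &= \frac{dh}{dt} = h_{\max}\,\omega\,\cos\varphi, \qquad
         |v_v(0)| = |v_v(\pi)| = h_{\max}\,\omega \neq 0. \tag{22}
\end{align}
```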
Additional Effects and Their Influence
Having established the basic relationships governing the motion of the valve and of the water through it on the basis of this simple hypothesis, it is possible to perform a closer examination by introducing some important corrections, such as the piston effect of the valve, the variability of the product µ·v, the obliquity of the connecting rod, etc.
As regards the plunger effect of the valve, we examine v_s and v_v as functions of φ, given respectively by (2) and (22).
For φ < π/2, in the infinitesimal time dt, the plunger will tend to push the volume F·v_s·dt through the valve. The valve, in the phase of moving away from the seat, will generate, upstream of the outflow aperture, the volume f_1·v_v·dt, so that not all of the water pushed by the piston is able to pass through the free hole.
Conversely, for φ > π/2, the valve will be approaching the seat, forcing through the clearance hole not only the volume F·v_s·dt driven by the piston, but also the volume f_1·v_v·dt displaced by the valve itself.
Since these volumes, given the constancy of F and f_1, are proportional to v_s and v_v, it is easy to obtain the values of the instantaneous flows that pass downstream of the delivery valve. From these results, the simplifying assumptions made about the motion of the valve would lead us to admit that, for very small φ, the valve is in an opening phase while the liquid passes from downstream to upstream of it, which is impossible, since the valve moves in the opening direction only under a pressure acting from upstream.
Therefore, the delivery valve remains closed until φ has reached a certain, rather small, value. Likewise, the closure of the delivery valve cannot take place exactly at φ = π, but must occur with a certain delay. We therefore have to evaluate this delay, and hence the speed with which the collision takes place at the closure of the delivery valve. We compute the volume that passes through the free span in the unit of time as the difference between the volumes generated by the piston and by the valve, and under this condition we determine the instantaneous valve lift. We will thus have relationship (23), from which, substituting for f the value from (4), for v_s the value from (2), and for v_v the value from (22), we get (24). Now, the coefficient of sin φ in (7) is h_max, so we can write (25), from which it is possible to obtain the angular delay ψ as that value of φ at which h = 0. One should, however, proceed as follows.
Differentiating (23) with respect to time, it is possible to obtain a new, better approximated expression of the valve speed, from which we obtain the impact speed in the closure phase. Assuming that the delay is small enough, sin(π + ψ) ≈ 0 and cos(π + ψ) ≈ −1 may be assumed; with that, (24) reduces to a simple relation which, by (7), (8) and (19), becomes, with u = ω·r, (26). The kinetic energy dissipated in the impact, with M being the valve mass, then follows from (26). Referring this force to the section of the valve, with G being the weight of the valve, we get the additional relationship (28), in which the correction factor ξ_1 takes into account the water dragged by the valve.
Substituting πn/30 for ω and taking account of (12) and (19), after a few steps one obtains (29). Now, this specific force must not exceed a determined value, so that the impact in the closure phase is not so violent as to put the valve out of service in a short time; on the other hand, the same kinetic energy must not be too small, otherwise excessively large valve dimensions are obtained.
It is therefore appropriate that it have an almost constant value. Lumping all the numerical values together into this constant, it is appropriate to set (30), with C·sin δ = 1.3 ÷ 1.9, as experience conservatively suggests.
In Formula (30), Q must be expressed in L/s, f_1 in cm^2, b in m of water column, and l in cm. It is worth noting that, if a given valve has operated satisfactorily in a pump with flow Q_0 and rotational speed n, it will be able to work equally well in any other pump in which the product Q_0·n has the same value.
Meanwhile, from (6) we obtain two further relations; f_max = h_max·l·sin δ is then expressed, on the basis of (12), in terms of Q_0 and n. What was said for Q_0·n can then be repeated for the product n·h_max: the valve will work in the same way (equally well or badly) for the same value of the product n·h_max. However, since Q_0 must not change as h_max varies, h_max can be changed only within small limits, which ensure that the load on the valve does not alter simultaneously. While (30) is useful for the completely new design of a valve (a case that happens very rarely), the considerations relating to the products Q_0·n and n·h_max are extremely useful for the adaptation of valves already designed to load conditions rather different from those assigned.
Design Method
To design a valve, it is therefore necessary, given the pump capacity Q_0, to obtain f_max from relationship (14) after fixing a convenient value for µ·v based on the following data and criteria:
- pumps for exhaustion (low head): µ·v must be lower for low heads.
Having obtained the value of f_max, it is necessary to split it into the two factors h_max and l, preferably setting h_max first; this can be done directly when the type of valve is simply chosen based on available data related to already built valves of the same type. If such data are not available, it is possible to follow the indications below.
For φ = π/2, the valve has zero speed, as shown by (22); the piston effect being null, it is possible to use the formulas of the elementary theory established above with full accuracy, without the effect considered. The variable port will then actually be at its maximum aperture f_max (corresponding to the maximum value h_max of the lift), and only the water displaced by the piston will pass through it.
For φ = π, instead, the valve will still have a lift h_0 that can be calculated on the basis of the hypothesis of the constancy of the outflow speed µ·v since, the piston being stopped, water will pass through the corresponding opening of the valve (pumped by the valve itself) at the speed given by (22). The above-mentioned hypothesis then allows us to suppose that the lifts h_0 and h_max are proportional, respectively, to the instantaneous flows f_1·v_vmax and f_max·v, which therefore allows us to write, on the basis of (22), relation (32). In the case, e.g., of a ring valve with a mean diameter d_m = (d_e + d_i)/2 and a radial width a = (d_e − d_i)/2, a corresponding geometric relation holds, on the basis of which (32) becomes (35). Now, experience has shown that the values reported in handbooks [24,25] can be accepted; thus, (35) leads to a relationship in which a is expressed in m, with v expressed in m/s. Wanting to express a in cm, it is necessary to rewrite it as (38), with k ranging between 30 and 100.
Assigning n, with µ·v already fixed as discussed for (14), Formula (38) allows us to establish a/sin δ; in the case of the ring valve, if it is accepted that the speed in the constant section equals that in the variable section ((33) and (34)), its value is 2·h_max.
In fact, a corresponding equality has to hold and, from the equality of its last members, a further relation follows; replacing the latter in (38), we obtain (40). As neither a nor d_m appears in (40), the result also holds in the case of the dish-shaped valve, as it is possible to verify by placing a = d_m = d/2 in the previous formulas.
It should be noted that the method shown has the merit of simplicity and independence from empirical diagrams (such as those of µ_p, etc.), but it is not certain that the limits set for k in (40) are valid in general for all types of valves.
Summarizing, from (14) and (40), with v and k suitably chosen, all the dimensions of the valve or of the valves can be deduced on the basis of the data Q_0 and n. Depending on whether l = f_max/(h_max·sin δ) results in a small or large value, we will choose, in order, a single dish-shaped valve, a ring valve, a valve with several rings or a group of valves. This may require some testing, and it is clear that for the dish-shaped valve and for the single-ring valve we obtain, respectively, l = π·d and l = π·d_m. In the case of the ring, the knowledge of d_m is not sufficient: since it is necessary to know d_e and d_i, or a, it is possible to use the relationship obtained by equating the speed in the constant opening with that in the variable opening, namely supposing (39) valid, or to assume a value of a somewhat larger, both for reasons of manufacturing and to reduce friction losses.
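As a worked illustration of this design sequence, the following sketch chains the relations that survive in the text: f_max = π·Q_0/(µ·v), which is our derivation from f_max = F·ω·r/(µ·v) with Q_0 = F·n·r/30 and ω = πn/30, followed by l = f_max/(h_max·sin δ) and the peripheral developments l = π·d (dish) or l = π·d_m (single ring). The units and sample values are illustrative assumptions.

```python
import math

def size_valve(Q0, mu_v, h_max, delta_deg):
    """Sketch of the sizing chain: Q0 in m^3/s, mu_v in m/s, h_max in m, delta in degrees."""
    sin_d = math.sin(math.radians(delta_deg))
    f_max = math.pi * Q0 / mu_v           # maximum opening section
    l = f_max / (h_max * sin_d)           # peripheral development of the port
    d = l / math.pi                       # dish valve diameter (d_m for a single ring)
    return f_max, l, d

# Illustrative numbers only: 10 L/s, mu*v = 2 m/s, h_max = 8 mm, delta = 45 degrees.
f_max, l, d = size_valve(0.010, 2.0, 0.008, 45.0)
print(f"f_max = {f_max*1e4:.1f} cm^2, l = {l*100:.1f} cm, d = {d*100:.1f} cm")
```

With these (arbitrary) numbers l comes out large, which, per the rule above, would point towards a multi-ring valve or a group of valves rather than a single dish.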
Numerical Simulation
To better understand the working of the considered dish-shaped valve, two numerical simulations were performed for the scheme reported in Figure 2, considering two operating conditions relating to two positions of the dish-shaped valve: case 1, with the plate open at only 3 mm of lift, and case 2, with the plate open at 10 mm of lift, for a pump working with water at standard pressure.
For this application, we chose a case where the Reynolds number, based on the pipe diameter D and the maximum speed u_max inside the pipe, is 400. The simulation of the equations was carried out without the addition of filters or turbulence models; the resulting field, shown in the following figures, is therefore the result of a direct numerical integration of the Navier-Stokes equations. The numerical model meshes for the considered cases are reported in Figures 3 and 4. In Figures 5 and 6, corresponding to a valve opening of 3 mm, the horizontal and vertical velocity components for case 1 are reported. It can be noted how the valve induces a substantial change of the flow within the pipe. This condition also corresponds to the greatest pressure loss of the flow and to the maximum load condition among the considered configurations. In Figures 7 and 8, corresponding to a valve opening of 10 mm, the horizontal and vertical velocity components for case 2 are reported. In this condition, the flow lines show the spatial structure of the motion field inside the pipe.
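For orientation, the stated Reynolds number fixes the velocity scale once a pipe diameter and a fluid are assumed; the diameter and the viscosity of water at 20 °C below are illustrative assumptions, not values given in the paper.

```python
D = 0.05        # pipe diameter in m (assumed)
nu = 1.0e-6     # kinematic viscosity of water at ~20 degrees C, m^2/s
Re = 400.0      # Reynolds number stated in the text
u_max = Re * nu / D
print(f"u_max = {u_max:.4f} m/s")  # 0.0080 m/s under these assumptions
```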
Conclusions
In this paper, a methodology to refine the design of a delivery valve for hydraulic pumps has been set up, examining the problems that arise in the study of dish-shaped valves for reciprocating pumps. The obtained method has the advantage of ease of use and does not depend on empirical diagrams. On the basis of the equations obtained, with the parameter values suitably chosen, it is possible to deduce, from the assigned data (capacity Q_0 and rotational speed n), all the sizes of the considered valves. Depending on the parameter values, it is possible to choose, in order, a single dish-shaped valve, a ring valve, a valve with several rings or a group of valves.
Further, to better understand the working of the pump's dish-shaped valve, two operating conditions have been considered, with valve openings of 3 mm and 10 mm, and the resulting motion fields have been evaluated.
Figure 3. Case 1: valve operating condition with h = 3 mm; numerical model mesh for the scheme reported in Figure 2.
Figure 4. Case 2: valve operating condition with h = 10 mm; numerical model mesh for the scheme reported in Figure 2.
Figure 5. Case 1: valve operating condition with h = 3 mm, with velocity horizontal component u.
Figure 6. Case 1: valve operating condition with h = 3 mm, with velocity vertical component v.
Figure 7. Case 2: valve operating condition with h = 10 mm, with velocity horizontal component u.
Figure 8. Case 2: valve operating condition with h = 10 mm, with velocity vertical component v.
Symbol
ψ: angular delay
ω: angular speed of the crank
f_1: area of the valve section in the plane of the seat
v′: component of the speed normal to the valve plate (the theoretical outflow speed)
δ: angle of the seat
ξ_1: corrective coefficient that takes into account the water dragged by the plate
r: crank radius
µ_p: particular efflux coefficient, determined experimentally and dependent on the type of valve considered
φ: path angle
l: peripheral development of the port
v_s: plunger speed
F: plunger surface
a: radial width
s: space traveled by the plunger
γ: specific weight of the material of which the plate is made
γ_1: specific weight of the pumped liquid
u: speed component along the x-axis
T: spring load on the valve plate
f: straight section of the hole (area of the outflow port)
v: theoretical water speed through the section
v_v: valve plate speed
P: water thrust on the valve plate
G: weight of the plate
| 6,749.4 | 2018-10-01T00:00:00.000 | [ "Engineering" ] |
The essentialism of early modern psychiatric nosology
Are psychiatric disorders natural kinds? This question has received a lot of attention within present-day philosophy of psychiatry, where many authors debate the ontology and nature of mental disorders. Similarly, historians of psychiatry, dating back to Foucault, have debated whether psychiatric researchers conceived of mental disorders as natural kinds or not. However, historians of psychiatry have paid little to no attention to the influence of (a) theories within logic, and (b) theories within metaphysics on psychiatric accounts of proper method, and on accounts of the nature and classification of mental disorders. Historically, however, logic and metaphysics have extensively shaped methods and interpretations of classifications in the natural sciences. This paper corrects this lacuna in the history of psychiatry, and demonstrates that theories within logic and metaphysics, articulated by Christian Wolff (1679–1754), have significantly shaped the conception of medical method and (psychiatric) nosology of the influential nosologist Boissier De Sauvages (1706–1767). After treating Sauvages, I discuss the method of the influential nosologist William Cullen (1710–1790), and demonstrate the continuity between the classificatory methods of Sauvages and Cullen. I show that both Sauvages and Cullen were essentialists concerning medical diseases in general and psychiatric disorders in particular, contributing to the history of conceptions of the ontology and nature of mental disorders.
This paper thereby contributes to the history of medicine and psychiatry by investigating the logical and metaphysical origin, nature, and presuppositions of eighteenth-century (psychiatric) essentialism.
The work of Sauvages has been studied by King (1966), who notes the influence of Christian Wolff on Sauvages. Martin (1990) provides an account of the scientific context within which Sauvages was trained and worked. French (1990) describes Sauvages' work in relation to Stahl and Hoffman, and Williams (2003) discusses Sauvages within the context of a history of medical vitalism in enlightenment Montpellier, also discussing his nosology. Finally, Huneman (2008) discusses Sauvages within the context of an account of Montpellier vitalism and its influence on the emergence of alienism in France, whereas Foucault (2006) provides a brief discussion of Sauvages (and Cullen).
Cullen remains little studied. Risse (1974) describes Cullen's letters and consultation practice. Cullen's views on melancholia are discussed by Jackson (1983). Barfoot (1993) provides an account of philosophy and method in Cullen's medical teaching, whereas Bynum (1993) discusses Cullen and the nervous system. Kendell (1993) briefly discusses Cullen's nosology. Finally, Dyde (2015) reconstructs the meaning of neurosis in Cullen's work, and Beatty (2016) discusses Cullen in her history of the concept of nervous disease. The essentialism of Cullen has received little to no discussion to the best of my knowledge.
Although the influence of Wolff on Sauvages has been noted by King (1966), King does not discuss the content of Wolff's logic, his theory of axiomatic science, and his metaphysics. These topics, and their impact on Sauvages, will be the focus of this paper. In this way, I provide a novel account of the impact of logic and metaphysics on eighteenth-century (psychiatric) nosology. Martin (1990) briefly notes Sauvages' essentialism, but the nature of this essentialism and the logical and metaphysical Wolffian context within which this essentialism is to be interpreted are nowhere discussed by Martin. This will be the task of this paper.
Finally, it is important to describe why I focus on (i) the nosologies of Sauvages and Cullen and (ii) their views on psychiatric disorders, given that they wrote nosologies covering all diseases in general. As to (i), I focus on Sauvages because, as Foucault (2006) and, in more detail, Huneman (2008) have shown, Sauvages was an influential medical researcher who significantly influenced the rise and birth of psychiatry around 1800. Moreover, because Sauvages' work contains much explicit philosophical reflection on nosology, nosological method, and medical method in general, he is a suitable figure to demonstrate the influence of eighteenth-century theories within logic and scientific methodology on (psychiatric) nosology, which is the core contribution of this paper. I focus on Cullen because Cullen was a nosologist and medic who was influential at the end of the eighteenth century, and shaped, mainly through his teaching and textbooks, the views of many of his students and fellow medics (Doig et al., 1993). Cullen is also discussed because, even though he was based in Edinburgh and exposed to different philosophical currents than Sauvages, there is a great deal of continuity between the views of Sauvages and Cullen, including in their adoption of a causal method of classification. One aim of this paper is to demonstrate this continuity and thus to highlight the impact of certain common methods of classification in eighteenth-century medicine.
As to (ii), note that I will describe the general method of classification adopted by Sauvages and Cullen, a method adopted for both psychiatric and non-psychiatric disorders. Thus, I will discuss their views on nosology in general, which are essentialist. It is important to point out this essentialism for diseases in general because it is very rarely discussed. I subsequently focus on their views on psychiatric disorders, because Sauvages' and Cullen's views on the nature and classification of mental disorder, and especially their essentialism concerning psychiatric disorders, are also very little discussed despite the importance of these authors for the birth of psychiatry. By discussing these essentialist views on mental disorder, this paper contributes to the history of the ontological conceptions of mental disorder. Through our discussion of eighteenth-century conceptions of mental disorder, we will also see that some researchers of the period, such as Sauvages, thought that mental disorders have multi-factorial causes. However, Sauvages remained committed to essentialism. This is philosophically interesting, insofar as some contemporary philosophers of psychiatry take the fact that mental disorders are multi-factorial diseases (in contrast to Mendelian diseases) to be a reason to reject essentialism. If history is a guide, it seems possible to combine essentialism with the fact that mental disorders have many causes.
In Sect. 2, I provide a description, following Ereshefsky (2001), of core features of essentialist classifications. I argue that these features were widespread throughout history in general and influenced eighteenth-century nosologists in particular. In Sect. 3, I analyze Wolff's and Sauvages' shared conception of science, Wolff's logic, theory of division, and conception of essence, and Sauvages' views on nosology and the causes of psychiatric disorders. I argue that Sauvages was an essentialist and that he aimed, following Wolff, to give real definitions of (psychiatric) disorders or diseases. In Sect. 4, I analyze Cullen's philosophy of classification and his views on the causes of psychiatric disorders. I argue for continuity between the methods of Sauvages and Cullen and for the fact that both were essentialists.
A brief description of essentialism in the eighteenth century
In this section, I will describe some core features of essentialist classifications, building on the work of Ereshefsky. Ereshefsky takes the eighteenth-century naturalist Linnaeus to be a paradigmatic example of an essentialist. However, this view has been challenged by Müller-Wille (2007). According to Müller-Wille (2007), a very rich article whose details I cannot fully treat here, Linnaeus did not classify by logical division but used inductive and empirical methods for providing classifications. I am completely convinced by the reading that Linnaeus adopted these inductive and empirical methods, but think that Linnaeus can still be called an essentialist on metaphysical grounds. This is because Linnaeus, as Müller-Wille agrees, distinguishes between artificial and natural classifications and argues that we can provide natural definitions of some classes. This suggests, to me, that Linnaeus still thinks there are natural kinds that carve nature at its joints, even if knowledge of these kinds is difficult to obtain and requires empirical and inductive methods. It is certainly the case that the medical researchers whom I discuss also adopt many empirical methods. However, my contention is that from a metaphysical perspective, which concerns the metaphysical interpretation of classes, they are still essentialists, as I think Linnaeus is too.
Ereshefsky provides an account of essentialism that will be useful when analyzing the eighteenth-century views of Sauvages and Cullen, for these researchers conformed to this account of essentialism. Ereshefsky defines essentialism as follows: According to Essentialism, each entity has an essential feature that makes it the type of entity that it is. That feature is an entity's real essence. The real essence of an entity occurs in all and only entities of that type, and it helps us understand why entities of that type do the sorts of things they do. For the essentialist, real essences capture the fundamental structure of the world; or to use Plato's phrase, they "carve nature at its joints". (Ereshefsky, 2001, p. 17) Ereshefsky further notes that members of a kind share common necessary properties, which are caused by the real essence of an entity (2001, p. 17). Such necessary properties are required for membership in a kind. For example, the real essence of gold causes pieces of gold to have the necessary properties "of being soluble in certain types of acids, reflecting certain wavelengths of light, and having a particular range of malleability" (2001, p. 17). If someone knows the real essence of an object, she can explain why the object has the necessary properties it has. Note that this is a completely metaphysical account of essentialism, which I think is not affected by Müller-Wille's (2007) argument, which mainly concerns the empirical and inductive methods of eighteenth-century naturalists.
According to Ereshefsky, essentialist scientific classification in the eighteenth century followed the Aristotelian method of logical division (Ereshefsky, 2001, p. 201). Here, Müller-Wille's argument that Linnaeus did not follow the method of logical division is important. However, I will argue in what follows that medical researchers like Sauvages and Cullen did follow this method. Moreover, as we shall see below, the method of logical division itself largely consists of metaphysical claims or metaphysical interpretations of logical categories, which one can adopt even if one agrees with Müller-Wille (2007) that eighteenth-century naturalists adopted inductive methods.
The method of logical division, as described by Ereshefsky, postulated five predicables: a definition, a genus, a differentia, a property, and an accident (Ereshefsky, 2001, p. 201). Definitions describe which characteristics pertain to an object in virtue of which it belongs to a particular kind. Moreover, definitions provide us with the real essence of the members of a kind (2001, p. 201). Definitions are given by the traditional Aristotelian method of providing a genus and a differentia. Thus, for example, the concept of "man" is traditionally defined as a "rational" (differentia) "animal" (genus). Properties are characteristics that follow from an object's essence and are found in all the members of a particular kind (2001, p. 201). To return to our example: "animal" is part of the essence of "man". From the fact that "man" is an "animal", it also follows that "man" is a "substance", insofar as the concept of "substance" is contained in the concept of "animal" (all animals are substances). Hence, "substance" is a property of "man". Accidents are accidental properties and have no relation to the essence. Thus, for example, insofar as some men are pale, "paleness" is an accidental property of man. Species are distinguished from other species of a genus by their differentia; thus, "man" is a species of "animal". The essence of a species is given by its definition (2001, p. 202). Followers of the method of logical division also distinguish between necessary properties and accidental properties of individual objects.
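For modern readers, the structure of logical division can be made concrete in a small, deliberately anachronistic sketch (the choice of Python and all names are mine, purely for illustration; nothing like this appears in the historical sources): a kind is modeled as a genus plus a differentia, necessary properties flow from the essence, and accidents attach only to individuals.

```python
# A minimal sketch of the five predicables: a kind is defined by genus and
# differentia; properties follow from the essence; accidents attach to
# individuals and bear no relation to the essence.

class Kind:
    def __init__(self, name, genus, differentia, properties=()):
        self.name = name
        self.genus = genus                  # e.g. "animal" for "man"
        self.differentia = differentia      # e.g. "rational"
        self.properties = set(properties)   # necessary properties flowing from the essence

    def definition(self):
        # The Aristotelian definition: genus plus differentia.
        return f"{self.name} =def {self.differentia} {self.genus}"

class Individual:
    def __init__(self, kind, accidents=()):
        self.kind = kind
        self.accidents = set(accidents)     # e.g. "pale": merely accidental

man = Kind("man", genus="animal", differentia="rational",
           properties={"substance"})        # follows from being an animal
socrates = Individual(man, accidents={"pale"})

print(man.definition())                 # man =def rational animal
print("substance" in man.properties)    # True: a necessary property
print("pale" in socrates.accidents)     # True: an accident, not part of the essence
```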
In the next sections, we will see that the method of logical division (as described above) was adopted by the famous eighteenth-century philosopher Christian Wolff, who significantly influenced the classificatory practices of Sauvages.
Wolffian logic and the nosology of Sauvages
François Boissier de Sauvages (1706-1767) is famous for introducing a nosology of diseases in the spirit of Thomas Sydenham. As Martin (1990, p. 111) notes, Sauvages is known as a celebrated Montpellier medical professor who wrote an influential classification of diseases, and Martin sets out to describe the influence of Bacon, botanical method, and Newtonian physics on Sauvages' work (ibid.). It is common to describe the affinity between Linnaeus and Sauvages and the similarity between their classificatory practices. Thus, as Munsche and Whitaker (2012) explain, Sauvages published the first version of the Nosologie méthodique in 1731, when Linnaeus was a medical student. Linnaeus used this work for his own Genera Morborum of 1759, in which he classified diseases. According to Munsche and Whitaker (2012), these two authors influenced each other's subsequent works, a fact which demonstrates the influence of Sauvages on medical classifications in the eighteenth century. The interaction between Linnaeus and Sauvages also illustrates the mutual influence of natural history and medical nosology upon each other. Martin (1990) describes the influence of the botanical methods of Tournefort on Sauvages' work. Tournefort classified plants in accordance with their essential character, i.e., the plant's reproductive parts (1990, p. 119). According to Martin, Sauvages adopted a form of essentialism from Tournefort: following Tournefort, Sauvages wanted to discern the "essential characteristics of species of diseases" (1990, pp. 125-126). In a classic paper about Sauvages, King (1966) remarks that Sauvages was influenced by Christian Wolff. According to King, commenting on the Pathologia methodica (1752), Sauvages, under the influence of Wolff, "touches upon numerous aspects of logic and relies upon definitions and their implications" (King, 1966, p. 47). In addition, Sauvages copied Wolff's distinction between historical, philosophical, and mathematical knowledge, Wolff's distinction between principium and causa, and Wolff's distinction between mechanical and physical principles (1966, pp. 48-49).
In this section I want to further develop the historical study of Wolff's influence on Sauvages, with the goal of demonstrating that Wolff's logic, conception of definitions, and conception of proper science, which King does not discuss, significantly shaped Sauvages' conception of medical science and disease. In addition, Wolff, as we will see, adopted a traditional essentialist position according to which objects are a type of entity in virtue of their essence. Wolff further adopted the traditional Aristotelian theory of logical division, which we have already discussed, and argued that species and genera reflect the essences of nature. Sauvages adopted these positions as well. In order to demonstrate Wolff's influence on Sauvages, I will first consider both authors' views on proper science. Through studying this topic, we will see that there is considerable continuity between the methodological and logical views of Wolff and Sauvages. On this basis, we can subsequently argue more convincingly that there is also considerable continuity between their logical and metaphysical views concerning essentialism, namely a continuity between their views on logical division, traditionally a topic of metaphysics and logic, and on essences.
Wolff's and Sauvages' conception of proper science
As van den Berg and Demarest (2020) have argued, Wolff accepts a variety of the traditional axiomatic ideal of science, which has been modeled by de Jong and Betti's 'classical model of science' (de Jong & Betti, 2010; see also van den Berg, 2020). According to this ideal, a proper science has fundamental concepts, and non-fundamental concepts are defined in terms of these fundamental concepts. In addition, a proper science has fundamental propositions, and non-fundamental propositions are grounded by or demonstrated from these fundamental propositions (de Jong & Betti, 2010). The propositions of a proper science should also be certain, i.e., known to be true. Wolff himself argued that any proper science should follow what he called the mathematical method (for descriptions of this method, see Blok, 2016, pp. 13-45; Shabel, 2003, pp. 49-52; Dunlop, 2013; Gava, 2018, pp. 279-284). As van den Berg (2021, p. 272) explains, Wolff's mathematical method moves from definitions to axioms, and from axioms to theorems and problems. Axioms, which are either axiomata (which show that something is the case) or postulata (which show that something can be done or constructed), are derived from definitions. Theorems are then derived from axioms (axiomata and postulata) through strict syllogistic demonstrations (Wolff, 1999 [1750]). (For a quantitative study of the spread of Wolff's mathematical method, see the as yet unpublished van den Berg, Parisi, Oortwijn, and Betti, "The Spread of the Mathematical Method in Eighteenth-Century Germany: A Data-Driven Investigation".)
In addition to arguing that all sciences should have a strict axiomatic structure, Wolff held strict views on the hierarchy of the sciences (see on this topic van den Berg, 2013, on which I draw in the following). According to Wolff, the sciences are ordered from higher to lower, and the so-called higher or preceding sciences provide concepts and propositions which can be used in proofs in the lower sciences. For example, Wolff argues that the higher science of ontology grounds the lower sciences of psychology and physics by explaining the general notions these sciences presuppose:
Such general notions are the notions of essence, existence, attributes, modes, necessity, contingency, place, time, perfection, order, simplicity, composition, etc. These things are not explained properly in either psychology or physics because both of these sciences, as well as the other parts of philosophy, use these general notions and the principles derived from them. Hence, it is quite necessary that a special part of philosophy be designated to explain these notions and general principles, which are continually used in every science and art, and even in life itself, if it is to be rightly organized. Indeed, without ontology, philosophy cannot be developed according to the demonstrative method. (Wolff, 1963, p. 40)

Wolff's rationalistic conception of axiomatic science was combined with a form of empiricism in his account of the natural sciences. As van den Berg and Demarest (2020) have shown, Wolff "combines experimental research with a deductive mode of presentation" (2020, p. 385). Natural science should proceed in demonstrative fashion and should provide strict syllogistic demonstrations. The premises of such demonstrations are "definitions based on experience and propositions of experience" (p. 385). Wolff thought that such empirical premises express clear experiences and are thus certain (p. 386). Hence, Wolff harmonized the ideals of experimental science and axiomatic science.
Having discussed the basics of Wolff's conception of science, we may now turn to the views on proper science and medicine of Sauvages. In his 'Preliminary Discourse' to Methodological Nosology (2015 [1772]), Sauvages remarks that medicine should not be based on unfounded hypotheses but on certain principles drawn from experience: "You should carefully discard all theory with precarious principles based on a whim rather than on repeated experience, and supported by possibilities, rather than on certain facts and experiences" (2015 [1772], p. 481). Medicine, according to Sauvages, should be based on such certain empirical principles, which he also calls, no doubt following Wolff, Experiences incontestables (p. 483). Hence, according to Sauvages, medicine should be a certain science based on incontestable and certain principles. The problem, however, is that medicine has not been furnished with such certain principles: "[…] Medicine, which is the noblest and the most ancient of all Arts, has made little progress, and its theory is unable to initiate Candidates to practice, only providing a few real and incontestable principles" (p. 482).
Apart from stating that medicine should be based on certain empirical principles, Sauvages also clearly accepts Wolff's axiomatic or demonstrative method for medicine. As Sauvages puts the point:

One must not allow in Medicine any principles except those that are as certain as those that we acquire by the testimony of the senses. Now, following the method of the Geometrists, these principles are none other than the experiences and syllogisms deduced, one from the other. (2015 [1772], p. 483)

Here, Sauvages seems to construe medicine as an axiomatic science based on certain empirical principles from which non-fundamental propositions are deduced through syllogisms. Moreover, he explicitly states, like Wolff, that we should follow the method of the geometrists. This reading is strengthened by Sauvages' definitions of proof and demonstration, which conform to Wolff's definitions of these terms. Wolff defines a demonstration as a syllogism starting with definitions, clear experiences, or axioms (1742, p. 95). Similarly, Sauvages states:

If we use syllogisms to demonstrate a proposition by means of some others that are already known, this is called a Proof (une Preuve) and a Demonstration (une Démonstration), when we only use as premises Definitions (définitions), Unquestionable experiences (d'Expériences incontestables), Axioms (d'Axiomes), and Propositions (Propositions) already demonstrated. (2015 [1772], p. 483. Translation amended)

In addition to adopting Wolff's account of the mathematical method and demonstration, Sauvages adopts a view on the hierarchy of sciences that is similar to Wolff's. According to Sauvages, higher sciences, such as mechanics, provide concepts and principles that medicine uses in its proofs:

Finally, Medicine has to borrow from philosophy, from Mechanics, from Geometry, and from other general sciences, not only terms but also principles; it is from these fields that the Physicians borrow the propositions demonstrated, and they are not required to demonstrate these propositions by themselves. (2015 [1772], p. 483)

Sauvages illustrates the hierarchy of sciences by explaining how hydraulics can be of use to physicians:

However, I say that anyone who does not study in Hydraulics the general property of fluids, and how to understand their speed and force, will be unable to draw from Geometry and Mechanics the knowledge of the capacity of the vessels, their diameters and surfaces, as well as the knowledge of the hardness of solids, the movement and tone of fibres. Such a person, I say, will never succeed in obtaining perfect knowledge of the animal economy, and will not acquire the theory of hearing and sight without studying Acoustics and Optics. (2015 [1772], p. 484)

Finally, within his account of the hierarchy of sciences, Sauvages stresses that physicians should adopt the demonstrative method, which, as we have seen, is Wolff's axiomatic method:

In so far as the Physicians ignore the demonstrative method, there will be no principle upon which practice can be built, and which has the certainty it demands; the theory of this Art will always be uncertain, and everyone will assert his opinion in proportion to the mind and the credit that he has. (2015 [1772], p. 483)

It is thus clear that Sauvages adopted his conception of proper science from Wolff.
Wolff's logic and conception of essence
Wolff adopts the theory of logical division first developed by Aristotle, which, as we have seen, is a core feature of essentialism. Wolff defines an essence as that which is constantly present in a thing and not derived from or determined by something else (1728, p. 145; I will further explicate the concept of essence below). Attributes are those necessary properties that follow from the essence of a thing (1728, p. 146). Modi are changeable things which are not determined by or related to the essence (1728, p. 147). Definitions are given by genus and specific difference (1728, p. 208) and allow us to distinguish kinds of objects from each other (1728, p. 190). Finally, Wolff distinguishes genera and species, and, as I will explain below, argues that species pick out essences in nature.
The theory of definition adopted by Wolff is traditional. In his German Logic, Wolff describes a definition (Erklärung) as a clear, distinct, and exhaustive concept (1742, p. 44; see also van den Berg, 2014, pp. 19, 59-60, for Wolff's account of definitions). Such a concept, Wolff argues, is applicable to multiple things of a kind and enables us to differentiate particular types of objects from other objects (1742, p. 44). A concept is clear if it suffices to recognize the objects to which it applies (1742, p. 18). A concept is distinct if we can specify its marks, i.e., the partial concepts contained in it on the basis of which we know an object (1742, p. 20). Finally, a concept is exhaustive if its marks suffice to know an object and differentiate it from other objects (1742, p. 23).
Wolff distinguishes between nominal definitions and real definitions. Nominal definitions provide an account of properties on the basis of which an object can be distinguished from other objects. For example, if one defines the word clock by saying that it is a machine (genus) which specifies the hours (differentia), one provides a nominal definition of the word clock (1742, p. 18). Real definitions show how a thing is possible, or how a thing is generated, and thus explicate the essence of composite objects (1742, p. 48, pp. 52-53). We know the essence of a composite object if we know its parts and the mode of composition of the parts (Wolff, 1738, p. 29). Thus, if one specifies the parts of a clock and their mode of composition, one explicates the essence of a clock and provides a real definition of it (Wolff, 1742, p. 48). In a similar way, if one specifies the parts of the eye and their mode of composition, one provides a real definition of the eye and explicates its essence (1742, p. 53).
Wolff follows the traditional dictum that we provide a definition of a concept if we specify its genus and differentia (1728, p. 208). However, such definitions need not be real definitions. In my opinion, definitions in terms of genus and differentia would be classified as nominal definitions by Wolff. For a real definition, it is required that we explain how a thing is generated. This requires that we know what kinds of things are required for an object to be generated and what each of these things contributes to the generation of an object (1742, p. 54). In geometry, for example, real definitions are not provided by a definition in terms of genus and differentia. Thus, we can give a nominal definition of a circle as a round plane figure (genus) whose boundary is equidistant from a fixed center (differentia). However, we only provide a real definition of a circle by constructing it, i.e., by drawing a straight line around a fixed point, thus showing how it can be generated. Hence, Wolff claims that in geometry we assume points and lines and through their movement obtain real definitions of planes (1742, p. 55). In a similar fashion, real definitions in natural science show how an object is possible or generated.
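Wolff's contrast between the nominal and the real definition of a circle can likewise be rendered in a brief, anachronistic sketch (purely illustrative; the function names are my own): the nominal definition is a membership test by the mark of equidistance, whereas the real definition is a generative construction that shows how the figure is produced.

```python
import math

# Nominal definition: a predicate that picks out points of the circle by the
# mark "equidistant from a fixed center"; it identifies the figure but does
# not show how it is generated.
def on_circle(point, center=(0.0, 0.0), radius=1.0, tol=1e-9):
    return abs(math.dist(point, center) - radius) < tol

# Real definition: a construction that generates the circle by rotating a
# segment of fixed length around a fixed point, exhibiting how the figure
# is possible.
def generate_circle(center=(0.0, 0.0), radius=1.0, steps=360):
    cx, cy = center
    for k in range(steps):
        angle = 2 * math.pi * k / steps
        yield (cx + radius * math.cos(angle), cy + radius * math.sin(angle))

# Every constructed point satisfies the nominal definition.
assert all(on_circle(p) for p in generate_circle())
```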
Central to Wolff's account of real definition is the notion of essence. In his German Metaphysics, Wolff defines essence as that which contains the ground for the properties of objects (1738, p. 18; see also van den Berg, 2014, p. 61). Thus, for example, if we know the essence of the eye, we know why the eye has the capacity for sight. We understand the essence of an object if we understand how it is possible (1738, p. 19). According to Wolff, the essence of objects is necessary, eternal, and unchangeable (1738, pp. 20-21). As noted above, the essence of composite things consists in the mode of composition of their parts (1738, p. 29). It follows, according to Wolff, that composite things are similar, i.e., belong to the same kind, if their mode of composition (their essence) is similar (1738, p. 29).
Wolff explicitly links the notions of species and genus to his conception of essence. He argues that insofar as objects share an essence, they belong to the same species (Art) (1738, pp. 95-96). The differentia of a species consists in the way an object is differentiated from other objects, i.e., in the way it is determined (1738, pp. 96-97). Finally, since objects sharing an essence belong to a species, and other objects belonging to a different species have a different essence, genera consist in turn in the similarity between the essences of different objects (1738, p. 99). For example, windows and doors share the similarity that they are openings in a wall and can therefore be counted under a common genus (1738, p. 100). Insofar as Wolff argues that objects sharing an essence belong to a single species, he must have taken classifications of species and genera, if properly conducted, to reflect the essences of objects. Hence, Wolff's essentialism is evident from his account of essence, genus, and species.
Wolff's construal of essences no doubt differs from present-day accounts of essences. Contemporary philosophers writing about essences and natural kinds often have a very restricted account of essence: they point to a limited set of examples, such as chemical elements and fundamental particles, as having essences and as being natural kinds. By contrast, Wolff basically takes the structure (mode of composition) of any composite object (eyes, clocks, geometrical figures, etc.) to constitute its essence. However, although there are differences between Wolff's essentialism and contemporary essentialism, Wolff shares the basic modal characterization of an essence as something that an object must necessarily have (Robertson & Atkins, 2020). In addition, as we have seen, he is committed to the essentialist view, adopted by historical figures and contemporary philosophers and scientists alike, that essentialist kinds are "classes whose members share an essence from which their defining feature arises" (Kendler et al., 2011, p. 1143). This follows from Wolff's Aristotelian method of logical division, in which the attributes of an object follow from its essence. Finally, Wolff's account of real definition suggests that he also adopted the essentialist viewpoint that if an object has an essence, we must be able to provide an account of how this object came about, i.e., we must be able to specify its cause. In the following, we will see that Sauvages was also committed to these viewpoints.
Sauvages' nosology
In this section, I will describe Sauvages' nosology. In the first subsection, I analyze Sauvages' philosophy of medical classification, as outlined in the initial discourse to his Nosologie Méthodique (1772). I argue that Sauvages adopts the logic of Wolff to provide classifications in medicine and that he shares Wolff's essentialism. In the second subsection, I analyze Sauvages' views on psychiatric nosology. It will be argued that Sauvages again followed Wolff and tried to give real definitions of psychiatric diseases.
Sauvages' philosophy of classification
In the initial discourse to the Nosologie Méthodique (1772), a translation of the Nosologia methodica (1763), Sauvages writes on the method of nosology. This method is once again greatly influenced by Wolff. Sauvages argues that in nosology we should adopt the so-called systematic method, which he describes as follows:

The systematic method groups together the diseases that resemble each other, and separates them from those that do not have a resemblance; it reduces all the individual diseases to their species, these species to their genera, the genera to orders, and these to a small number of classes. (2015 [1772], p. 489)

According to Sauvages, who here follows Wolff, we distinguish objects and diseases in terms of signs. Through signs, we achieve the aim of nosology, which is to distinguish diseases from each other. Hence, if we want to cultivate nosology, we must know the signs of diseases; the botanists gave these signs the name of Characters (2015 [1772], p. 489). We enumerate signs through definitions, as was also argued by Wolff. Sauvages clearly follows Wolff's account of definitions:

The Definition (la Définition) is the enumeration of the signs necessary and sufficient to make the defined object known, and to distinguish it from others. Wolf. Logic 153. It provides a complete and determined notion of the respective term. Therefore, in order to have a complete and established idea of a disease, it is necessary to define it or enumerate its proper signs and characters. (2015 [1772], p. 489)

We define diseases by providing the genus and the specific difference, which allow us to distinguish diseases from each other (2015 [1772], p. 490). In his commentary on Sauvages, King notes that in his early work Sauvages aimed to classify symptoms. Hence, it is symptoms that function as signs to differentiate diseases from each other. These symptoms, according to King's reading of Sauvages, must be phenomena that are manifest, essential, and constant: manifest to the senses, essential as opposed to accidental, and constant and invariable (King, 1966, p. 46).
In order to elucidate how nosologists should classify diseases, Sauvages quoted Sydenham, the great seventeenth-century English authority on medicine. He followed Sydenham in arguing that we should give a history or description of diseases and establish a method of cure. In giving a history of diseases, we should "order them under defined and certain species, with the same care and the same exactitude as practised by the Botanists" (2015 [1772], p. 486). The goal was to discover the essence of diseases, without resorting to unfounded hypotheses concerning the disease (2015 [1772], p. 486). Thus, Sauvages quotes Sydenham as follows:

Similarly, it is not enough to observe the general symptoms of a disease that includes a variety of species. It is true that we do not notice the same variety in all the diseases, but there are several that authors arrange under the same class, without distinguishing their species, which differ between them in essence, as we shall see in what follows. There is more: in the case where one arranges the diseases according to their species, this is done relative to a hypothesis which replaced the truth of phenomena, so that this distinction is much less founded on the true character of the disease than on the hypothesis adopted by the Author. (2015 [1772], p. 486)

Hence, classifications of diseases in terms of different species reflect the true essential nature of diseases. This view, shared by Sydenham and Sauvages, suggests that Sauvages was an essentialist concerning nosology. Martin (1990), discussing Sauvages' Nouvelles Classes des Maladies, also stresses that Sauvages aimed to characterize the essence of diseases: "Yet Sauvages was not only presenting a History-the careful description of observed symptoms of diseases-but, following Tournefort, he was presenting as well the essential characteristics of species of diseases" (1990, pp. 125-126).
In classifying diseases, we should not, according to Sauvages, rely on unfounded philosophical hypotheses (2015 [1772], p. 487). We should also classify diseases in terms of the necessary symptoms that accompany them. Sauvages again quotes Sydenham:

In third place in order to describe a disease, the symptoms that necessarily accompany it, and that are its own, are to be carefully separated from those that are accidental and fortuitous, such as those that depend on the temperament and age of the sick, and on the curative method that is employed; because it often happens that the disease varies according to the method being used, and the symptoms are much less the effect of the disease than the conduct of the Physician [...]. (2015 [1772], p. 487)

Sauvages described this method as entailing that one should describe the characteristics of symptoms that the disease constantly presents (2015 [1772], p. 487). The view that we should classify diseases in terms of their necessary symptoms mirrors the theory of logical division, described earlier. Munsche and Whitaker (2012) argue that Sauvages classified diseases on the basis of symptoms, although they note that his approach hinted at dissatisfaction with a purely symptom-based approach (2012, p. 228). King (1966, pp. 47-48), commenting on the Pathologia methodica, notes that for Sauvages pathology embraces both the study of phenomena and the study of causes. According to King, the study of causes is called etiology, while the study of phenomena comprises nosology. Williams (2003, p. 92) argues that for Sauvages "medicine must proceed only on the basis of what could be known directly, a principle that by definition excluded the search for causes […]". I wish to argue that in his Nosologie Méthodique, Sauvages constructed a nosology that was based both on the study of symptoms and on the study of causes. This is already evident from the initial discourse, in which Sauvages argues that a sure theory in medicine requires knowledge of physics and geometry, which supply knowledge of causes. Reflecting once again on the hierarchy of sciences, which we have discussed above, Sauvages notes:

I argue that only the study of Anatomy, experimental Physics and Mathematics can provide a sure theory; and as most of the Physicians ignore these sciences, it is not surprising that Aetiology is full of errors; an erroneous Aetiology is not useful to the Physician, like music to an architect, Aetiology will be unable to direct him in practice, or enhance the study of symptoms, for observation and experience, although most Physicians pretend to the contrary. (2015 [1772], p. 485)

Note that Sauvages argues explicitly that etiology should direct the practice of physicians and that it can enhance the study of symptoms, on which we base our nosology. Hence, knowledge of causes is of direct relevance to nosology. The importance of studying causes for nosology has also been stressed by Martin (1990), who notes that Sauvages distinguished, like Wolff we may add (Martin is silent on the influence of Wolff on Sauvages), between historical (empirical or descriptive) knowledge, philosophical knowledge, which provides reasons for why certain phenomena occur, and mathematical knowledge, which provides knowledge of quantity. Sauvages describes the difference between historical, philosophical, and mathematical knowledge as follows:

We have only three ways to instruct ourselves and to extend our knowledge: namely, through History, Philosophy, and Mathematics.
History is the knowledge of facts: for example, it teaches us that Pleurisy is accompanied by fever, breathing difficulties, cough, and chest pain. Philosophy is the knowledge of the causes and the principles; hence there is a philosophical knowledge of Pleurisy, which tells about the causes and principles of the four symptoms that accompany it, which are, for example, that they come from the inflammation of the pleura or the lungs. Mathematical knowledge consists in knowing the quantities and to know how to measure them; for example, to determine the strength and speed of the pulse, the degree of heat, the intensity of pain, the violence of the cough, and such other symptoms. (2015 [1772], pp. 487-488)

Sauvages, Martin argues, takes nosology to combine both historical knowledge and philosophical knowledge (1990, p. 135). Hence, nosology includes knowledge of the causes of diseases. According to Martin, knowledge of causes was for Sauvages inextricably bound up with knowledge of the essence of diseases: "By knowing something of the essence of a species of disease, Sauvages believed he knew something about its cause as well […]" (Martin, 1990, p. 126). Hence, it seems clear that Sauvages was an essentialist about diseases and that he thought, like Wolff, that knowledge of the essence of an object (such as a disease) requires knowing how this object comes about. In the next section, we will see that this viewpoint also shaped Sauvages' nosological practice, insofar as he often cites causes in giving definitions of diseases and appends sections on theory, discussing the causes of diseases, to his classifications.
Sauvages and the practice of psychiatric nosology
In the psychiatric nosology of the Nosologie Méthodique (1772), Sauvages started by specifying the character, i.e., the definition and differentiating feature, of the class of vesaniae, i.e., illnesses that trouble or cloud reason (1772, p. 1). The character of the vesaniae is that they are diseases of the soul. They consist in a "depravity of the imagination, of the appetite or of the judgment, or in a hallucination, a bizarrie or a delirium" (1772, p. 1). After defining the class, Sauvages defines the order of hallucinations, which are defined causally as errors of the soul, caused or occasioned by a vice or defect of the organs situated outside the brain (1772, p. 1). He then lists different genera of diseases belonging to this order, such as vertigo, suffusio, hypochondria, etc. (see Munsche and Whitaker, 2012, for an overview). Later in the book Sauvages gives an extensive treatment of these genera, listing their respective species, such as Hypochondriasis biliosa and Hypochondriasis sanguinea (1772, pp. 169-170). After dealing with hallucinations, Sauvages defines the order of Morisitates or Bizarries as depraved desires or aversions, and again lists a number of genera of diseases such as pica, bulimia, nymphomania, etc. (1772, pp. 2-3). Species, such as Pica infantilis, are listed later (1772, p. 205). In contrast to the other orders, Sauvages does not provide a causal definition of the order of Morisitates or Bizarries, although he offers etiologies in later discussions. The order of deliria is causally defined as an alienation of the mind caused by a defect of the brain, and contains genera such as mania (madness, Folie) and melancholia (1772, p. 4). In his treatment of these genera, Sauvages then lists the species, such as Mania à pathemate, mania caused by a passion, and Mania ab hemicrania, mania caused by a migraine (1772, pp. 393-397). In this way, Sauvages gave a hierarchical classification of mental disorders, adopting the methods of the botanists.
Note that while defining psychiatric disorders, Sauvages often explicitly included the cause of these disorders. In the terminology of Wolff, we can thus say that Sauvages did not merely provide nominal definitions of psychiatric disorders but also wanted to provide real definitions of them, illuminating how these disorders come about. For Wolff, providing a real definition of an object meant explicating its essence. Given Sauvages' essentialism, which we described in the previous section, it is likely that Sauvages also saw his causal definitions as explicating the essence of disorders or diseases.
Sauvages provides further insights into the causes of mental disorders in the theory section on mental disorders. We have already seen that he attributed some mental disorders, such as the deliria, including mania and melancholy, to a defect of the brain. However, in the theory section Sauvages makes clear that the brain is not the sole cause of mental disorders. He argues, as Huneman has noted (2008, p. 624), that mental disorders also arise from a mistake and the wrong use of our faculties. According to Sauvages, "The mistake stems not only from a bodily flaw […], but also from our own contempt for our faculties, and our lack of care in searching for the truth or cultivating our judgment" (Sauvages, 1772, p. 14. Translated and quoted by Huneman, 2008, p. 624). Hence, as Huneman concludes, Sauvages relates madness to a particular kind of moral vice. According to Sauvages, the more imperfect a man is, the more he resembles a beast, the more he neglects the cultivation of reason, and the more chance there is of developing mental disorders (1772, p. 11). Our insanity comes from the fact that we do not know how to curb our passions and that we do not cultivate our faculties and judgment (1772, p. 12). Sauvages illustrates his views by noting that a peasant who suffers from cataract suffers from hallucinations, whereas a philosopher, who supposedly cultivates her judgment, recognizes the mistake and gets rid of it (1772, p. 14). He also states that although the majority of maniacs owe their disorder to a defect of the brain, there are some who owe their illness also to a vice of the soul (1772, p. 17). All of these remarks are meant to argue against materialists: if we adopt purely physiological or anatomical explanations of mental disorder, we are led to materialism and Spinozism, and such a position, Sauvages argues, would leave room for neither genuine responsibility nor moral philosophy (1772, pp. 13-14).
A core feature of Sauvages' etiology of psychiatric disorders is that, according to him, the causes of such disorders can be both mental and physical. Thus, for example, next to a damaged brain, many people fall into madness because they are excessively occupied with some object (1772, p. 19). Huneman (2008) notes that this feature of Sauvages' thought was characteristic of eighteenth-century Montpellier vitalism. As Huneman describes (2008, p. 615): "Vitalism conceived of organisms as animal economies understandable through the transformations of the various modes of their sensibility. This allowed some physicians to define a kind of anthropological program, which viewed human beings as a whole, with no distinction between le physique and le moral". According to Huneman, Sauvages argued that both physical causes and moral causes can generate mental disorders. As Huneman explains:

Here is the reason why, in mania, moral causes and physical alterations are both at work: while (1) the sympathy between brain and other centers of the "economy" explains the production of psychical symptoms, conversely (2) the moral affections clearly are possible causes of diseased organs. (Huneman, 2008, p. 625)

Hence, Sauvages had a complex conception of the etiology of mental disorders. However, he did not doubt that the causes of mental disorders could be clearly established, and he referred to such causes in his definitions of mental diseases. As I have argued, understood from a Wolffian point of view, this amounted to providing real definitions of mental disorders, which explain their essence. If I am correct, it was partly because Sauvages thought that we can establish the causes of mental disorders that he adopted an essentialist psychiatric nosology.
We can conclude this section by asking how Sauvages could argue that mental disorders have both physical and moral causes and still be an essentialist about mental disorders, for some present-day accounts of essentialism assume that mental disorders have a single cause. Although my account must necessarily be somewhat speculative, I will argue that the Wolffian conception of causation allowed for attributing a single complex cause to mental disorders while treating both physical causes (e.g., brain defects) and moral causes (e.g., being obsessed with something) as partial causes of mental disorders. We have already seen that King established that Sauvages followed Wolff's distinction between principium and causa (1966, pp. 48-49). Hence, Sauvages was aware of the Wolffian conception of cause. Importantly, Wolffians distinguished between complete causes and partial causes. Thus, for example, Baumgarten, who wrote a Wolffian textbook on metaphysics that was highly influential in the eighteenth century, argued, first analyzing the concept of a ground, that a sufficient ground is the complex of partial grounds that explain why something is the case, whereas an insufficient ground is merely a partial ground (Baumgarten, 1766, p. 8). Thus, for example, my having eaten lots of fast food in the last year is a partial ground for gaining weight, whereas this fact taken together with my metabolism, my exercise regime, my other nutritional habits, and possibly other factors is the sufficient ground of gaining weight. Causes, in turn, are defined in terms of the concept of ground. More specifically, that which contains the ground of the actuality of something, i.e., that which explains why something is actual, is a cause (ibid., p. 83). Insofar as the concept of cause is understood in terms of the concept of ground, we may expect that we can also distinguish between complete causes and partial causes of a thing. Indeed, Baumgarten appears to make this distinction when he argues that multiple causes of a caused thing are Mitursachen (concausae) which come together in order to cause a certain thing (Baumgarten, 1766, p. 85). Hence, complex phenomena can have a cause that is analyzable into multiple partial causes. In this way, we can argue that mental disorders have complex single causes, which are analyzable into, e.g., physical partial causes such as brain defects and moral partial causes.

Cullen (1710-1790) was chair of medicine at the University of Edinburgh and an internationally well-known scholar. In addition, he was an active medical practitioner himself, operating a blossoming consultation practice (Risse, 1974). Cullen devoted much of his time to writing a nosology that would allow physicians to correctly diagnose diseases (Munsche & Whitaker, 2012). He was significantly influenced by Linnaeus and Sauvages, and in turn influenced many of his successors, such as Benjamin Rush, one of the founding fathers of the United States and one of the founders of American psychiatry, and the influential American physician Thomas Parke (Bell, 1950).
Cullen's nosology
This section discusses, first, Cullen's views on the methodology of nosology (4.1) and, second, his account of the causes of psychiatric disorders (4.2). I argue that Cullen's Nosology by and large followed the method of classification discussed in the section on essentialism and that his views on the methodology of classification show great similarity with those of Sauvages. I then describe Cullen's views on the causes of psychiatric disorders and argue that Cullen adopted the essentialist viewpoint that a single essential cause is the reason for the symptoms associated with psychiatric disorders.
Cullen's philosophy of classification
A useful guide to Cullen's philosophy of classification can be found in his Lectures Introductory to the Practice of Physic, which comprise lectures first published in 1827 and printed from copies of these lectures (1827a, 1827b, I, pp. v-vi). There, Cullen distinguishes between medicine (physic) based on an empirical plan, where we are guided by experience alone, and medicine based on a dogmatic plan, where we have recourse to reasoning and try to explain medical phenomena through their causes (1827a, 1827b, I, p. 415). Cullen argues that although experience is indispensable in physic, we must always rely on a dogmatic plan to perfect medicine. He states that in medicine in particular, and humanity in general, there is a strong propensity to seek the causes of phenomena, and accordingly dogmatic reasoning in medicine is unavoidable (1827a, 1827b, I, p. 417). Hence, Cullen concludes that it "is evident that reasoning, and what is called theory in physic, is unavoidable […]" (1827a, 1827b, I, p. 419). This is also true for nosology, where we should strive to find the inner cause of external phenomena through dissection, which aims to find the proximate cause of diseases (1827a, 1827b, I, p. 429):

It is, I think, now agreed, that the dissection of morbid bodies is one of the best means of improving us in the distinction of diseases. Sauvages indeed has rejected the employment of the internal seat of diseases as a means of distinguishing them; but he has, in an hundred instances, tacitly employed it; and under the ambiguity that often occurs in external symptoms, it is evident that dissection, by showing the parts singly or jointly affected, shows the real and steady changes in the system, upon which the external symptoms depend, and therefore must lead to the proper limiting of genera and species. (1827a, 1827b, I, p. 423)

Here, Cullen argues, similarly to Sauvages, that causes can be taken to individuate and identify diseases, insofar as the external phenomena by which we classify diseases in nosology are taken to result from an inner cause. This internal cause explains these phenomena and explains why the external phenomena co-occur. In line with this reasoning, Cullen argues that nosology is intimately connected to the study of the causes of diseases in sciences such as pathology and physiology:

On the present subject, I think it must now appear evident, that the distinction of diseases must be often guided by the dissection of morbid bodies -must be constantly guided by anatomy, physiology, and pathology united together; and therefore, that the discernment and accurate distinction of external symptoms will be most effectually obtained by the cultivation of a Dogmatic system. (1827a, 1827b, I, p. 424)

The dogmatic search for causes is thus of great utility for nosology, insofar as it is through causes that we can identify diseases and establish, in Cullen's terms, the common nature of diseases (1827a, 1827b, I, p. 435). As Cullen puts the point: "and even where these [organic affections] are in the internal parts, anatomy has often explained their connexion with external symptoms, so as to establish a common nature in different diseases more certainly than any observation of the symptoms alone" (1827a, 1827b, I, p. 435). Hence, Cullen thinks that a common cause can account for all the external symptoms of a disease and is thus a primary means of identifying the nature of diseases.
Schematically, we can, drawing on Kendler and colleagues' (2011) picture of essentialism, present this view as a single underlying cause from which the various symptoms arise:

[Diagram: Cause → Symptom, Symptom]
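The structure of this scheme can also be conveyed in a small illustrative sketch (my own rendering in Python with hypothetical labels; it is not drawn from Kendler and colleagues or from Cullen): a disease kind is identified by a single underlying cause, and each observed symptom is explained by tracing it back to that cause.

```python
# A toy rendering of the essentialist scheme: one underlying cause,
# many symptoms, each explained by the same cause.

class EssentialistDisease:
    def __init__(self, name, cause, symptoms):
        self.name = name
        self.cause = cause        # the single essential cause
        self.symptoms = symptoms  # observable effects attributed to that cause

    def explain(self, symptom):
        # Every genuine symptom is traced back to the one essential cause.
        if symptom in self.symptoms:
            return f"{symptom} follows from {self.cause}"
        return f"{symptom} is accidental and not explained by {self.name}"

# Illustrative labels taken from the discussion of Cullen's mania below.
mania = EssentialistDisease(
    "mania",
    cause="an unusual excess in the excitement of the brain",
    symptoms={"false perception", "false judgment", "hurry of the mind"},
)
print(mania.explain("false judgment"))
```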
This is an essentialist picture of diseases because different symptoms are taken to be explained by a single underlying essentialist cause (although there are surely more varieties of essentialism). In the same way as the properties of gold are taken to follow from its essence, i.e., its atomic number, the symptoms of diseases are taken to follow from their cause (Kendler et al., 2011, p. 1144). Insofar as Cullen believes that the proximate cause of a disease accounts for all its symptoms, and we have seen there is evidence that he adopts this picture, he can be taken to adopt this essentialist scheme.

In line with the importance he assigns to the study of causes for medicine, Cullen describes his general method in medicine as follows (1827a, 1827b, I, pp. 440-443). First, one provides a history of a disease, i.e., an empirical description of all the symptoms that accompany it. Second, one investigates the proximate cause of the disease. Third, one moves to nosology: from "the phenomena of the disease, and with a view to the conclusion respecting the proximate cause, I am next to enter into a critical disquisition with regard to the proper character and limits of every genus, and its division into species and varieties" (1827a, 1827b, I, p. 442). Fourth, one studies the remote causes of the disease, moves to the prognostic, and finally studies the method of cure.
The focus on individuating and identifying diseases in terms of their causes is also evident in Cullen's Nosology (1800 [1769]). There, Cullen argues that diseases are of the same kind or species if they arise from the same cause:

The one is, that similitude in the cause of the disease, argues a similitude in the disease thence arising: thus, when the diseases of two different persons arise from one and the same cause; when that cause is essential to the production of the disease in both; and when the same cause appears to be of the same quality, we may safely infer that such diseases are of the same, or of a similar kind. (Cullen, 1800 [1769])

Besides individuating and identifying diseases through their cause, Cullen also argues that the similarity of diseases in different persons may be shown if there is a similarity in the remedies by which they are cured. In other words, we view diseases as being the same if they are cured by the same remedy (1800 [1769], p. xvi).
Interestingly, and not properly analyzed in the literature, Cullen wavers in his assessment of whether causes should be used as characters to define diseases. He does not reject his previously formulated account that species of diseases are identified in terms of causes; he adopts this viewpoint throughout his work. The question, however, is whether the characters we use to describe diseases in nosology should consist only in externally observable characters or symptoms, or whether we may include causes as well. In his lectures, Cullen argues that the characters used to define diseases should be observable symptoms, since there is much disagreement among medics on the causes of diseases:

The fourth rule is, that the characters should be absolutely free and independent of all theory and hypothesis. Sauvages, in his Prolegomena, mentions ten or twelve definitions of Pleurisy, all taken from some view of the proximate cause; but all of them would now be entirely rejected. By looking into the systems, however, you will perceive that physicians have gone on in the same track of defining diseases by their proximate causes, which are in many cases disputable, and may long be so. (1827a, 1827b, I, p. 457)

Cullen is quick to note that this only concerns the definitions of diseases given in nosology, and that he retains the idea that species of diseases are identified by causes. Thus he states that, where he previously said that the internal seat is used to identify the cause of diseases, he should be read as saying that the internal seat belongs to the history of a disease, and is thus still relevant for nosology, but that it should not be used as a character in the definition of a disease, which should consist of externally observable symptoms (1827a, 1827b, I, p. 458). Interestingly, Cullen was not consistent in his views. In his Nosology, he maintains that the cause can be used in the definition of a disease as a character if it is well known:

Ought the cause of a disease to make any part of the definition? To this it may be answered, that as the judgment formed by physicians of the causes of diseases, is often fallacious, and even false, and therefore not to be rashly relied on in distinguishing diseases; yet, as these causes are sometimes sufficiently certain, and easily to be observed, they may be admitted in Nosology, as legitimate characters. (1800 [1769], p. xviii)

The logic Cullen employed in his nosology can be described as the traditional logic of classification, expounded in the section on essentialism and still articulated by Wolff and other logicians in the eighteenth century. Cullen notes that "all diseases, in order to be easily and certainly discriminated, should be arranged, like systems of Botany, by genera and species, with characteristic definitions: that is, by a methodological Nosology" (1800 [1769], p. v). In the denomination of diseases, Cullen followed Linnaeus, noting that his rules for naming classes are taken from Linnaeus's Critica Botanica and Philosophia Botanica (1800, p. xxi).
Species of diseases should be characterized, according to Cullen, by essential and necessary characters, which further illustrates his essentialism. In his Nosology, Cullen chides classifications which detail symptoms that seldom attend the disease, as opposed to those that are necessarily connected with it, common, and inseparable (1800 [1769], p. iv). In his lectures, Cullen notes that we must take pains to "distinguish between what are the essential and what the accidental symptoms" (1827a, 1827b, I, p. 447). Hence, Cullen distinguished between the necessary and essential symptoms of a disease, in terms of which we must define it, and its contingent accidental symptoms.
Cullen and the cause of psychiatric disorders
Cullen constructed an order of vesaniae, or impaired judgment, within the class of neuroses. This order contained diseases such as amentia, or imbecility of the judgment, melancholia, or partial insanity, and mania, or universal insanity (1800 [1769], pp. 130-133). After specifying genera, he would treat the different species of disease. In this way, Cullen classified the vesaniae in terms of genera and species.
In his First Lines of the Practice of Physic (1784), Cullen treats the cause of the vesaniae, i.e., the disorders of the intellectual functions. He begins by discussing delirium, which consists, according to Cullen, in erroneous judgment (1827a, 1827b [1784], II, p. 510). A delirium is defined as a false or mistaken judgment of relations of things about which most men form the same judgment; in addition, delirious persons form judgments that are very different from the judgments the person had formed before (1827a, 1827b [1784], II, p. 510). This false judgment is frequently associated with a false perception of external objects, or a very unusual association of ideas, or a disproportionate emotion or passion (1827b [1784], II, pp. 511-512). This leads to the following definition of delirium:

Delirium, then, may be more shortly defined, -In a person awake, a false judgment, arising from perceptions of imagination, or from false recollection, and commonly producing disproportionate emotions. (1827a, 1827b [1784], II, p. 512)

Insanity is defined as a particular kind of delirium, one without pyrexia and comatose affection, and Cullen sets out to find the cause of delirium in general. He argues that the connection between body and mind is such that delirium must have a corporeal cause (1827a, 1827b [1784], II, pp. 512-513). The part of the body connected with the functioning of the mind is the brain (1827a, 1827b [1784], II, p. 513). According to Cullen, it is probable that the state of the intellectual functions depends on the nervous power, a subtle fluid present in every part of the medullary substance of the brain and nerves (1827b [1784], II, pp. 513-514; for further discussion of this nervous power, see Jackson, 1983, p. 311). This nervous power can be in a state of mobility and force that is sufficient for the exercise of the intellectual functions, which is called excitement, or it can be in a state that is not sufficient for the exercise of the functions, which is called collapse (1827a, 1827b [1784], II, p. 514). These states of excitement and collapse correspond to the states of waking (excitement) and sleeping (collapse) (1827a, 1827b [1784], II, p. 515). The change from collapse to excitement, as witnessed for example when moving from sleeping to waking, is one of degrees. From this Cullen concludes "that not only the different states of excitement and collapse can take place in different degrees, but that they can take place in different parts of the brain, or at least with respect to the different functions, in different degrees" (1827a, 1827b [1784], II, pp. 515-516). In the transitions from waking to sleeping and from sleeping to waking, i.e., in an "intermediate state of unequal excitement", we witness delirium, false perceptions, false associations, and so forth (1827a, 1827b [1784], II, p. 516). This shows, according to Cullen, that delirium depends "upon some inequality in the excitement of the brain" (1827a, 1827b [1784], II, p. 516). This is further proven by the fact that in dreams we witness delirium and that in cases of fever, i.e., cases of unequal excitement of the brain, patients also often suffer from delirium (1827a, 1827b [1784], II, pp. 516-517). Hence, Cullen concludes that delirium "may be, and frequently is, occasioned by an inequality in the excitement of the brain" (1827a, 1827b [1784], II, p. 517).
On this basis, Cullen concludes that insanity is the result of different states of excitement of the brain (1827a, 1827b [1784], II, p. 519). In line with his account of the cause of insanity, Cullen reduces all symptoms of psychiatric disorders to this cause. Thus, for example, mania is characterized sometimes by a false perception or imagination, a false judgment concerning a single object, and a mind that rambles from one subject to another, or hurry of the mind, among other symptoms (1827a, 1827b [1784], II, p. 521). All such symptoms are explained as follows:

It appears to me, that the whole of these circumstances and symptoms point out a considerable and unusual excess in the excitement of the brain, especially with respect to the animal functions [...]. (1827a, 1827b [1784], II, p. 522)

Here we see, once again, how Cullen adopts an essentialist account of psychiatric disorders. Such disorders are characterized by a multiplicity of symptoms, but all of these symptoms are explained in terms of a single essential cause, which is responsible for the multiple observable symptoms.
Conclusion
In present-day psychiatry and the history of psychiatry, debate about the ontology of mental disorders, and in particular about the question of whether mental disorders are natural kinds, is prevalent. However, historians of psychiatry who have dealt with questions of essentialism, natural kinds, and nosology have paid little to no attention to the impact of the sciences of logic and metaphysics on conceptions of medical and psychiatric method, the nature of mental disorders, and the classification of mental disorders. Historically, however, logic and metaphysics have significantly shaped methods and interpretations of classification in the natural sciences. This paper addresses this lacuna in the history of psychiatry by analyzing the impact of Christian Wolff's logical and metaphysical theories on the conception of medical method and (psychiatric) nosology of Boissier de Sauvages. Wolff accepted the Aristotelian method of logical division and adopted the view that species pick out the essences of objects. Wolff exerted a significant impact on Sauvages, who, as I have shown, adopted Wolff's conception of science and his views on logic, in particular his views on definitions, as well as the theory of division. In my view, we can posit considerable continuity between Wolff's views and the nosological practice of Sauvages, much more than has so far been recognized in the literature. I have argued that Sauvages attempted, in line with Wolff, to provide real definitions of (psychiatric) disorders, i.e., definitions which explicate how disorders come about and which explicate their essence. This would explain why Sauvages stressed the importance of giving causal definitions of (psychiatric) disorders, even if some commentators have interpreted Sauvages' nosology as a purely symptom-based approach to classification. There is also considerable continuity between the methods of Sauvages and those of William Cullen, much more than has so far been recognized. Cullen adopted an approach to nosology according to which we identify and individuate a species of disease by locating its proximate cause. This led him to explain the multiple observable symptoms of psychiatric disorders in terms of a single essential cause, which demonstrates his essentialism. Hence, the concept of a mental disorder adopted by influential eighteenth-century nosologists was that (i) psychiatric disorders are similar to other medical disorders in having an essence; (ii) from this essence it follows that psychiatric disorders have necessary characteristics (attributes) in terms of which we can describe them; and (iii) we can classify psychiatric disorders into species and genera and provide classifications that carve nature at its joints. Future research should determine whether this essentialist perspective influenced nineteenth- and twentieth-century nosologists and when the essentialist view of mental disorders became an object of critique.
"Psychology",
"Philosophy",
"History"
] |
The Text Type Effect on Moroccan EFL University Learners' Reading Achievement
This study inquires into the effect of text typology on reading achievement gains among Moroccan English as a foreign language (EFL) learners. It also examines whether strategy instruction can be an influencing variable on learners' reading achievement with regard to text type (i.e., narrative, expository). Incorporating two primary text genres (i.e., narrative, expository), the study is intended to substantiate any marked interrelatedness between text typology and reading achievement at the pre- and post-testing stages among EFL university learners. To ensure a thorough investigation of this postulate, two sampled Moroccan EFL groups (n=113) of first-year English majors were addressed. The data were collected by means of several research instruments, namely reading comprehension tests (i.e., pre-test, post-test), strategy training, and reading comprehension texts (i.e., narrative, expository). The findings showed that text genre is not a significant, influential variable on reading achievement scores among the control (n=50) and treatment (n=63) groups. Finally, the study puts forward some useful implications pertaining to EFL text processing/analysis and makes explicit mention of some limitations encountered in the study.
Introduction
Given the stark complexity of academic English as a foreign language (EFL) reading at the university level, it is apparent that text processing and analysis, as a cognitive enterprise in the field of academia, entails "high-level" thinking processes (e.g., Rapp & van den Broek, 2005) and flexible strategy usage on the part of EFL learners for comprehension achievement purposes. This plainly shows that learners engage in the synthesizing process and resort to diverse text-based strategies with a view to making meaningful sense of the textual content, regardless of the typology of discourse they cope with. In other words, though text type at times dictates more frequent use of some reading strategies than others (Baretta et al., 2009; Yoshida, 2012), the attainment of sufficient, efficient understanding of the written input remains the ultimate goal of any reading act among EFL learners. This is underscored by Smith (1982), who argues that reading certainly implies comprehension.
Much outstanding research on the process of EFL text reading has been done from the schema-theoretic (e.g., Johnson, 1982; Carrell, 1984), interactive (Anderson & Pearson, 1988), meta-cognitive (Mokhtari & Sheorey, 2002; Iwai, 2016), and strategic (Casanave, 1988; Mokhtari & Reichard, 2002; He, 2008) perspectives. This exceedingly rich, extensive research has shown the basal specifics and major intricacies governing the sense-making procedure in the reading behavior of EFL learners. The influence of text genre (i.e., narrative, expository) on strategic reading processes has also been the primary focus of a group of reading specialists and researchers (e.g., Afflerbach, 1990; Best et al., 2008; Yoshida, 2012). This has shed light on the interdependency and "interactivity" between the variable of strategy use and the variable of text typology. Yet further, extended research, couched within the confines of EFL reading as a basic receptive skill in any academic context, is needed to provide more illustrative, relevant, and confirmatory findings that contribute both to the enrichment of EFL reading comprehension research and to a plausible understanding of the academic EFL reading act as regards text type, namely narrative and expository.
In effect, given the constant and frequent exposure of most EFL learners to written discourse of narrative and expository type in their academic studies, many reading researchers have opted to investigate the reading process by including these two typologies of texts in their studies (e.g., Best et al., 2008; Baretta et al., 2009; Yoshida, 2012). However, most relevant literature on reading research has tended to document the influence of text genre on learners' reading processes/strategies and thinking mechanisms during text analysis and meaning synthesis, and has neglected the impact of text type on learners' reading achievement scores. Thus, the gap in current reading research lies in the lack of deep inquiry into the extent to which text type (narrative & expository) can influence learners' reading performance gains. It is on this perspective that the present research puts a spotlight, with the focal aim of confirming whether EFL learners' reading achievement is genre-dependent or remote from "genre-sensitivity".
Reading Comprehension
Reading comprehension, by definition, is the process via which the reader intends to identify the meaning included in the written text. It is "the internal thinking during which meaning is constructed through interactions between text and reader" (Harris & Hodges, 1995). In fact, accessing and making complete sense of the content of a given textual input is a clear indication of the achievement of adequate comprehension. In this regard, the concept of reading comprehension, as maintained by Smith (1982), can be referred to as "meaning identification". That is, in attempting to read the text, learners focus on the intended meaning, which "is always relative to what they know and to what they want to know" (Smith, 1982). On the basis of this principle, it is assumed that the two concepts, the reading process and textual comprehension, are intimately intertwined.
Furthermore, Snow (2002) notes that reading comprehension consists of three componential elements: the reader, the text, and the activity or purpose for reading. This demonstrates that the learner plays an active role in attempting to make sense of the meaning of the text by drawing upon various effectual reading strategies (e.g., inferring, monitoring, questioning). In addition, the content of the text, if properly processed and analyzed, can also assist learners in attaining fuller comprehension; i.e., reading comprehension, as a cognitive process, is the outcome of the interaction between the reader's previously acquired knowledge and the content of the written input. Of equal, fundamental importance is the setting of a purpose before engaging in the task of textual interpretation; it is considered another precondition for achieving effective understanding. In effect, "the reader processes the text with regard to the purpose" (Snow, 2002). Given that reading comprehension is based on these three variables (i.e., the reader, the text, the reading rationale), it plainly implies the analysis, synthesis, and interpretation of the ideas and conceptualizations reflected in the written input.
It can be claimed that effective comprehension of written texts is highly dependent on the use of specific reading strategies that are of substantive importance and great help to the meaning construction task. Obviously, in addition to the possession of wide, rich knowledge of lexical items, which facilitates efficient processing of the text, the comprehension process also requires sophisticated critical thinking on the part of student-readers in order to understand the content of the given discourse more elaborately and thoroughly.
Schema-Theoretic Approach
Based on the clear-cut view of the interactive approach that reading essentially involves interaction between the reader and the text, the schema-theoretic approach emphasizes a similar perspective and gives high importance to the prime role of background knowledge in text processing. It conceives of reading as an interactive process in which readers count on both bottom-up (data-driven) processing and top-down (conceptually-driven) processing (Carrell, 1984). This alludes to the fact that comprehending the written text (i.e., narrative, expository) mainly draws on readers' prior knowledge, predictions, and expectations. So, within the general framework of schema theory, it is assumed that efficient textual comprehension can only be attained by activating an appropriate, relevant schema that provides readers with the necessary knowledge and pertinent thoughts for interpreting the target content.
According to Rumelhart (1984), a schema theory is "a theory about how knowledge is represented and about how that representation facilitates the use of knowledge in particular ways". This evidently reflects that readers' schemata encompass broad knowledge that enables the processing, analysis, and interpretation of the information included in written discourse. This can only be achieved by readers if they activate the proper schemata against which what is stated in the printed text can be brought. Hence, considering that schemata are the foundational basis for the overall achievement of content comprehension, it is plain that they have a key role in the reading process. In this context, Anderson (1978) and Anderson & Pichert (1978) affirm that schemata have six primary functions, including:
(1) to provide ideational scaffolding for assimilating the target text;
(2) to facilitate selective allocation of attention;
(4) to allow orderly searches of memory.
Importantly, many researchers (e.g., Alderson, 2000; Carrell, 1987) recognize two broad types of schemata: content schemata and formal schemata. The first type refers to readers' existing knowledge of the subject matter of the text. It constitutes an essential precondition to undertaking efficient reading and achieving understanding. In fact, familiarity with the text content results in enhanced reading performance (Alderson & Urquhart, 1988). Moreover, this type of schemata also includes cultural knowledge, which is deemed of critical significance in enabling learners to approach a given text more effectively and properly. Knowing certain cultural ideas and particularities provides readers with an insight into the textual message intended by the author/writer.
As regards the second type, formal schemata denote readers' knowledge about the organization and structure of certain typologies of written texts (i.e., narrative, expository). For instance, in reading literary or narrative texts (e.g., short stories, novels), student-readers invariably expect a setting, characters, themes, and events, whereas in coping with expository texts, readers presuppose that they are to explore basic facts, crucial ideas, and underlying conceptualizations about certain issues. This distinction between the nature and types of textual discourse enables the learner to invoke fitting strategic steps for analyzing the content. Thus, a facilitative effect of this type of schemata on text reading and meaning identification is apparent.
In light of what has been said, it is worth reiterating that the schema-theoretic approach places a focal emphasis on the actual interaction between readers' previously acquired knowledge and the textual information. Activating a fitting schema while reading the text content allows readers to elaborate on the contained meaning and thus attain effective comprehension. Realistically, the background or schematic knowledge of the reader substantially contributes to the process of meaning-building in differing ways (Carrell, 1984).
Interaction between Text Type & Reading Process
The variable of text genre is a crucial element which typically characterizes written texts of diverse sorts. It primarily serves as a "frame of reference" directing readers to analyze and interpret the textual content in a convenient, efficient manner. In actuality, granted that EFL learners are invariably exposed to a wide plethora of printed texts in the course of their academic studies, it is assumed that text type can impact learners' mode of reading behaviour. In other terms, strategy selection and utilization can be directed, to some extent, by the genre of the written text being studied and read. In this sense, text genre is viewed by Pappas & Pettegrew (1998) as a critical feature in the reading skill. Simply put, reading, as a process involving the recruitment of many strategic moves, is correlated with genre, since the latter embodies the nature of the given text from a linguistic and content perspective.
Indeed, many researchers have come to the realization that narrative passages differ from expository ones at the content level on a large scale. This is strongly emphasized by Yoshida (2012), who argues that important differences do exist in structure and content between narrative and expository texts. Further, other researchers (e.g., Best et al., 2008) maintain that narrative texts follow a simple structure and a sequence of causally related events; thus, learners tend to process the content of narrative discourse more readily and effectively. On the other hand, expository passages put "increased processing demands on the reader due to their greater structural complexity, greater information density, and greater knowledge demands" (Best et al., 2008). In actual fact, when processing expository texts, learners preconceive that these texts include crucial facts and ideas about certain issues and phenomena, the perception of which entails critical thinking and reasoning skills. This evinces that making sense of narrative passages is easier than processing written texts of expository type (Best et al., 2008; Yoshida, 2012).
The prime difference as regards text content processing is embodied in the heavy dependence on certain strategic processes for the attainment of adequate comprehension of the writer's/author's views and conceptualizations.
In effect, during textual synthesis, some of the strategies used by EFL learners differ slightly according to the kind of written text being processed (i.e., narrative, expository). This slight variability in strategy deployment is governed by the typology of the written discourse, which presupposes effective strategic processes that facilitate the reading comprehension task among EFL readers. For instance, narrative texts entail extensive use of visualizing to ease the act of understanding the events, actions, characters' behaviours, and themes. Contrastingly, expository texts require the use of multiple strategies (e.g., rereading, synthesizing, paraphrasing, prior knowledge) in order to reach a coherent grasp of the textual content. This reveals that expository texts present a wide range of difficulties for learners at the level of comprehension construction (Zhou & Siriyothin, 2011). Yet, taking into account that strategy use is, to an extent, genre-specific, the present study highlights the impact of text type (i.e., narrative, expository) on EFL learners' reading achievement gains.
Research Objectives & Research Questions
The current study is intended to explore the likely effect of text type on reading achievement gains among Moroccan EFL first-semester university learners. It seeks to show the impact of text typology on EFL learners' reading test scores at the pre- and post-testing levels. To obtain pertinent and relevant data, several research instruments were used in an endeavour to realize the prime objective that undergirds the empirical framework of this research; these tools include the reading comprehension texts (i.e., narrative, expository) and the reading pre- and post-tests (narrative & expository). Accordingly, two major research questions, gearing the present research towards the attainment of some useful implications and conclusions, have been formulated:
a. To what extent are Moroccan EFL university learners' reading achievement gains genre-dependent?
b. To what extent does reading strategy instruction impact Moroccan EFL university learners' achievement gains on narrative and expository reading tests?
Research Hypotheses
Based on the above-cited research questions, two major research hypotheses have been formulated in light of the relevant literature on the issue. These hypotheses are put forward as follows:
a. Text type does not have an impact on Moroccan EFL university students' reading achievement gains.
b. Strategy instruction cannot be an influencing element on learners' reading achievement gains with regard to text type (i.e., narrative, expository).
Participants
The current study targets one hundred and thirteen Moroccan EFL university students belonging to the English language department at the Faculty of Letters and Human Sciences, Mohammed V-Agdal in Rabat. The EFL participants undertake their English studies at the first-semester level. Two EFL groups were randomly selected by the researcher from among multiple EFL groups, and each group is made up of mixed-ability learners. One group (n=63) served as the experimental group, whereas the other group (n=50) was assigned to the control condition. The subjects in these two groups (i.e., control, experimental) are not repeaters, and they have a similar educational background. The chief rationale behind the selection of two groups is to draw a comparison between the treatment and control groups as to reading scores at the pre- and post-test levels. This will further highlight whether text type is a strong predictor of EFL learners' reading achievement.
Procedure
This exploratory study is founded on a pre-post-test design involving a reading pre-test (narrative & expository) and a reading post-test (narrative & expository). These reading tests were administered to the control and experimental groups. After being pre-tested, the treatment group (n=63) received comprehensive instruction in strategy use in an attempt to reinforce their strategic awareness over a semester-long period (Fall Term, 2012). This procedure was effected through the delivery of a large corpus of written discourse (i.e., narrative, expository) throughout the intervention course. By contrast, the control group (n=50), not exposed to any systematic training in strategy knowledge and usage, was taught reading comprehension in the traditional manner. The same pre-test (narrative & expository) accorded to the treatment group was assigned to the control group. At the end of the semester, both the experimental group and the controls were post-tested on narrative and expository written discourse.
The reading comprehension tests (i.e., pre-test, post-test) designed by the researcher incorporated four major reading tasks: the wh-question task, the meaning-inferring task, the paraphrasing task, and the summarizing task. The fulfilment of these reading-based tasks entails a great amount of cognitive reasoning, critical thinking, and metacognitive processing on the part of learners. The control as well as the treatment group were assigned a reading test (i.e., narrative, expository) at both the pre-test and post-test levels. The assignment of each reading test (pre-test & post-test) to the EFL participants in the two groups (control & treatment) took a two-hour period.
More explicitly, the pre- and post-tests were systematically given a score out of twenty (20) after being analyzed in terms of the correctness and accuracy of the responses to the stated text-related questions. Each set question is graded according to a scoring rubric: the comprehension questions task and the meaning-inferring task are assigned 7.50 and 4.00 points respectively, while the paraphrasing and summarizing tasks are granted 3.00 and 5.50 points respectively. The total score of the expository reading comprehension test is then added to that of the narrative reading comprehension test, and the sum is divided by two to obtain the final grade of the pre-test, as illustrated below. The same procedure was applied to the post-test in an attempt to reach a reasonable, sound measurement.
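To make the grading arithmetic concrete, the following is a minimal MATLAB sketch of the final-grade computation described above; the variable names and example task marks are hypothetical and illustrative only, not taken from the study's data.

```matlab
% Minimal sketch of the test-grading arithmetic described above.
% Task marks are hypothetical examples, not the study's data.
rubricMax = [7.50 4.00 3.00 5.50];   % wh-questions, inference, paraphrase, summary
assert(sum(rubricMax) == 20);        % each test is scored out of 20

narrativeTaskMarks  = [6.00 3.00 2.00 4.00];  % hypothetical marks, narrative test
expositoryTaskMarks = [5.50 2.50 2.50 3.50];  % hypothetical marks, expository test

narrativeTotal  = sum(narrativeTaskMarks);    % out of 20
expositoryTotal = sum(expositoryTaskMarks);   % out of 20

% Final grade = mean of the two test totals, as in the scoring procedure.
finalGrade = (narrativeTotal + expositoryTotal) / 2;
fprintf('Final test grade: %.2f / 20\n', finalGrade);
```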
In order to guarantee the reliability of the scoring procedure applied to the reading comprehension tests (pre-test & post-test), many significant modifications to the scoring rubric were made to ensure balance and objectivity in distributing and assigning grades to the set questions pertaining to the reading comprehension texts included in the two tests. Hence, before starting data gathering, it was crucial to guarantee the feasibility and efficiency of the reading tests used in the current study. This was done through piloting, with the primary objective of detecting prospective inadequacies and slight inconsistencies that can stand, at times, as a real impediment to the actual conduct of the research. Further, to ensure the validity of the reading comprehension pre-test and post-test, they were presented to some university instructors to obtain constructive, informative feedback pertaining not only to the content of the reading tests and the adopted scoring method, but also to the relevance, accuracy, and sequence of the comprehension questions used.
The obtained scores on the assigned reading comprehension tests (i.e., pre-test, post-test) were submitted to statistical analysis using both the Excel software (version 2007) and SPSS (version 16.0). Both the paired-samples t-test and the independent-samples t-test were implemented to determine the means, standard deviations, and mean differences among the target EFL groups (control & experimental) at the pre-testing as well as the post-testing stage.
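As a rough illustration of the two comparisons described above (the study itself used Excel and SPSS), the following MATLAB sketch runs an independent-samples and a paired-samples t-test on hypothetical score vectors; both functions require the Statistics and Machine Learning Toolbox.

```matlab
% Illustration of the two t-tests described above (the study used SPSS).
% Score vectors are hypothetical; requires the Statistics Toolbox.
controlScores      = [3 5 4 6 5 4 3 5];   % hypothetical pre-test grades, control
experimentalScores = [5 6 4 7 6 5 6 5];   % hypothetical pre-test grades, experimental

% Independent-samples t-test: compares the two groups at one testing stage.
[hInd, pInd] = ttest2(controlScores, experimentalScores);
fprintf('Independent t-test: p = %.3f (significant at .05: %d)\n', pInd, hInd);

% Paired-samples t-test: compares pre- vs post-test within the same subjects.
experimentalPost = [9 10 8 11 10 9 10 9]; % hypothetical post-test grades
[hPair, pPair] = ttest(experimentalScores, experimentalPost);
fprintf('Paired t-test:      p = %.3f (significant at .05: %d)\n', pPair, hPair);
```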
EFL University Learners' Reading Achievement at Pre-testing
Based on the outcomes of the pre-test, it is manifest that text typology (i.e., narrative, expository), as a prototypical embodiment of discourse content, does not exert any marked influence on the target groups' reading performance gains. The pertinent findings are presented in Figure 1. As disclosed in Figure 1, there seems to be a discrepancy in reading achievement gains between the narrative and expository reading comprehension tests among both the control and experimental groups at the pre-test level. In effect, the subjects in the control group scored relatively higher on the expository reading test than on the narrative one. Contrariwise, the treatment subjects achieved higher gains on the narrative reading test compared to the expository one. This state of affairs can be accounted for by stating that processing either the narrative or the expository reading text included in the pre-test did not have any measurable effect on the learners' reading performance gains. Statistically speaking, the experimental group attained mean scores of 5.23 and 4.84 on the narrative and expository reading comprehension tests respectively. As for the control group, they reached a mean score of 3.67 on the narrative reading test and 5.96 on the expository reading test.
To back up the above-stated findings and to unveil the extent to which the typology of the written passage influences EFL learners' reading gains, independent-samples t-tests were performed. This presents a detailed account of the target subjects' reading performance for each text type (i.e., narrative, expository). The results are exhibited in Tables 1 and 2. As featured there, whereas the experimental group outscored the control group on the narrative reading test with a mean difference of −1.568, the reverse occurred when both groups were presented with the expository reading test, with a difference in means of 1.110. In fact, the treatment group did not outperform its counterpart, the comparison group, on the expository reading test. This is testified to by the significance level, which is lower than the set criterion (.05) for the narrative reading test (.004) and higher than the same criterion for the expository reading test (0.71). Therefore, text type does not seem to be an influential variable on reading comprehension gains among EFL learners.
EFL University Learners' Reading Achievement at Post-testing
At the post-test stage, it was deducible that the reading performance of both participating groups (control & experimental) was not genre-dependent.The results reached after scoring the narrative and expository reading post-tests of both EFL groups are presented in the figure that follows.
Figure 2. Experimental & control groups' mean scores on the post-test
The above-stated results seem to validate the postulate that text type is not a determining factor in reading comprehension performance, which is consistent with the findings of the pre-test. Though the control subjects did show some progress in reading achievement gains on the narrative reading test (M = 3.67 to M = 5.38), they did not advance in terms of attaining significant gains on the expository reading test (M = 5.96 to M = 5.65) from the pre-test to the post-test stage. As for the experimental participants, they achieved significantly higher grades, with means of 9.51 and 10.66 on the narrative and expository reading tests respectively. Actually, the fact that the experimental subjects attained a higher mean score on the expository reading test than on the narrative one at post-testing is in total opposition to their achievement at pre-testing, in which they scored higher on the narrative reading test than the expository one (see Figures 1 & 2).
To provide accurate, confirmatory findings, the t-test was conducted on the obtained post-test data. The two target groups (i.e., control, experimental) were compared in terms of the mean scores gained on the narrative as well as the expository reading test at post-testing; the relevant results appear in Tables 3 and 4. The outcomes of the independent-samples t-test evince that the mean difference between the target groups is significant both for the narrative reading test (.000) and the expository one (.000). This indicates that the experimental treatment conducted over the course of the semester was a differentiating factor between the two groups where reading performance is concerned. Yet, given that the control group's mean score on the expository reading test (5.65) was not highly superior to that on the narrative reading test (5.38), and granted that the treatment group obtained only slightly different means for the narrative (9.51) and the expository reading test (10.66), text type appears to have a non-significant effect on reading achievement.
All in all, the decline of the control group's mean score on the expository reading test from the pre- to the post-test level, and the treatment group's higher performance gains on the narrative reading test at pre-testing and on the expository reading test at post-testing, are clear evidence that text type is not an influencing variable on reading achievement gains. This strongly suggests that the typology of written discourse (i.e., narrative, expository) does not measurably impact EFL students' reading accomplishment scores.
Discussion
The present study examined the extent to which text genre impacts EFL learners' reading accomplishment. It also scrutinized whether strategy instruction has any marked effect on learners' attained scores on narrative and expository reading tests. These two primary objectives serve as the core foundation upon which this research is predicated.
It is important to state that the participating EFL learners' reading performance differed across the narrative and expository tests at the pre-test and post-test levels. Notably, the participants in the control group scored slightly better on the expository reading test than the narrative one at both the pre-test and post-test stages. Succinctly put, the EFL learners in the control group tackled the expository reading test somewhat more efficiently. Nonetheless, across the pre-post-test continuum, their reading achievement on the expository reading test decreased, whereas mildly significant reading gains were achieved on the narrative reading test. On the other hand, though the treatment group scored higher on the narrative than the expository reading test at the pre-test stage, their reading achievement was much greater on the expository reading test as compared to their gains on the narrative test at post-testing. From this perspective, it is apparent that text genre was not an influencing variable on the reading gains in both tests (pre-test & post-test). This is consistent with the outcomes of research studies (e.g., Wolfe & Mienko, 2007; Cervetti et al., 2009) backing the view that text type does not predict differences in reading performance, and it accords with the claim postulated in the first research hypothesis, which states that text type does not have an effect on Moroccan EFL university learners' reading achievement gains. Although most student-readers, from both the control and experimental groups, did not perform strongly on the pre-test, their reading performance did not evince any significant difference at the level of text type. Yet this plainly contradicts previous research (Kucan & Beck, 1996; Şahin, 2013) espousing that text type (i.e., narrative, expository) has an effect on reading test scores. The underlying evidence is that the impact of text genre on the learners' reading performance was not observable in the comparison drawn between the two groups before the strategy training intervention. Indeed, the scores obtained by the treatment group on the narrative reading pre-test markedly exceeded those attained by the controls, while on the expository reading pre-test, the reading gains realized by the control group were higher than those achieved by the treatment group. This suggests that learners' reading achievement is not inextricably bound up with the type of written discourse.
Actually, despite the incremental increase in the intervention group's reading gains on the narrative and expository written texts at post-testing following the strategy instruction, a genre influence was not apparent. In particular, the results of the post-test among the treatment group showed only mild differences between the narrative and expository reading tests. For the sake of clarity, this group's performance on the expository reading test was moderately higher than their performance on the narrative reading test. This is contrary to the substantive reading outcomes achieved by the same group on the narrative rather than the expository reading test at the pre-testing level. This is, again, indicative of the premise that a "genre effect" is not a crucial variable characterizing and governing the outcomes of reading comprehension among EFL learners. Thus, the view that EFL learners' reading gains are not significantly genre-dependent can be advocated.
The assumption that can be stated, in light of what has been discussed thus far, is that text genre is unlikely to impact reading test scores amongst EFL student-readers. In other terms, given that the strategy intervention is a significant contributor to augmenting learners' reading achievement across the different written texts (i.e., narrative, expository), the element of genre cannot have any facilitative role in text-processing performance (e.g., Wolfe & Mienko, 2007; Cervetti et al., 2009). The variability in scores gained on the two reading comprehension tests (i.e., narrative, expository) at post-testing cannot be attributed to the strategy-based intervention; rather, extensive instruction of EFL learners in the deployment of reading "heuristics" can enhance learners' capabilities to perform significantly better in both narrative and expository text processing. This confirms the second hypothesis of this research study, namely that strategy instruction cannot be an influencing variable on learners' reading achievement with regard to text type (i.e., narrative, expository).
Accordingly, it can be held that text genre is not a determining factor in comprehending textual information. The empirical findings yielded by this case study support previous research (e.g., Wolfe & Mienko, 2007; Cervetti et al., 2009). Nonetheless, the study's findings contradict, in a way, some former empirical studies that advocate an increasing impact of text type on learners' accomplishment in reading comprehension (Goelman, 1982; Geva & Ryan, 1985; Kucan & Beck, 1996). Clearly, the results of the current research refute, to some extent, the notion that the complexity of either the expository or the narrative reading test can be deemed an obvious factor behind learners' attainment of lower or higher scores.
Conclusion
The current study placed a particular focus on the perceived influence of text type on EFL learners' reading accomplishment gains. It also investigated the extent to which strategy instruction can impact narrative and expository reading test scores. In this vein, the findings revealed that the impact of text genre on reading comprehension scores is almost non-existent. Put differently, text genre is not influential on the reading performance gains attained by the target EFL groups. This was shown by the pre-test as well as post-test results achieved by the two participating groups. Whereas the control group showed somewhat better performance on the expository text than the narrative one, the experimental group evinced a greater increase in reading achievement gains on the narrative text than the expository one at pre-testing.
More significantly, as the results of the comparison group testify, the decline in the mean score on the expository reading test from the pre- to the post-test session and the shift from a lower to a higher mean score on the narrative reading test across the pre-post-test sessions negate the possibility of a "genre effect" on reading performance. Thus, it can be concluded that EFL learners' reading achievement is, to a great extent, "immune" to genre (i.e., narrative, expository). Indeed, the variable of strategy instruction had an appreciable impact on learners' reading performance regardless of which text typology (i.e., narrative, expository) was tackled. In actuality, the improvement of the learners' reading gains across the pre-post-test stages, especially among the experimental group, did not reveal any substantial effect of text type on the attained test scores. Though it is assumed that the learners' strategy use in processing the texts was somewhat genre-specific, their reading achievement was not genre-sensitive, as indicated by the findings. Thus, while the influence of text type on how frequently EFL learners use certain reading techniques is apparent, given the salient features that characterize each text genre, learners' reading achievement scores were not governed by "genre-sensitivity".
Implications
It is noteworthy, apropos of the obtained results, that exposing EFL student-readers to differing text genres, namely narrative and expository written texts, is an essential requirement in reading comprehension teaching at the university level. More clearly, it can be postulated that only when EFL learners are presented with various typologies of written discourse can their analytical abilities, reasoning skills, and processing techniques be further developed. Recognizing that the processing of different text types does not impact EFL learners' reading achievement, as shown by the current findings, it can be admitted that the interpretive analysis and effective synthesis of written texts of narrative as well as expository type can increase EFL students' engagement in the cognitive act of reading and improve the "higher-order" thinking processes which constitute the cornerstone of reading strategy application. Further, the assignment of somewhat challenging written texts (i.e., narrative, expository) to EFL learners, namely at the first-semester stage, is of primary importance to the development of a sturdy reading competence and the nurturing of the core analytical skills that enable an optimal synthesis/analysis of academic textual discourse. Indeed, when EFL learners are assigned written discourse (i.e., narrative, expository) characterized by easy content, they will not foster an efficiency-oriented reading that involves critical thinking skills and metacognitive processes.
Limitations & Suggestions for Future Research
It is worth noting that no research is without limitations. The first is that this case study was limited to the Faculty of Letters and Human Sciences in Rabat. Thus, it is critical that many Moroccan Faculties of Letters and Human Sciences and higher education institutions be addressed and taken as representative case studies by future research, with a view to obtaining adequacy in terms of the representativeness of Moroccan EFL student-readers. The second limitation is that a large body of significant future research, longitudinal in nature, is needed concerning the effect of text typology on EFL university learners' reading achievement. It is suggested that prospective studies in the area of reading comprehension research rely on the assignment of a wide series of pre- and post-tests (i.e., narrative, expository) across the semesters. This proposed state of affairs can be a reasonably promising way to further substantiate that the impact of text type on EFL learners' reading achievement is non-existent.
The third limitation concerns the gender variable, which can, to some extent, be viewed as an intervening factor. Given that this research study was primarily concerned with testing the impact of text type on EFL learners' reading achievement, it did not take account of the reading performance gains (i.e., narrative, expository) obtained by the male subjects as opposed to the female subjects in the two EFL groups under investigation. In clearer terms, EFL female learners could perform significantly better on narrative and expository reading tests than EFL male learners, or vice versa. Such an investigation would have imparted insightful data as to whether reading achievement on narrative and expository reading tests is somewhat "gender-specific". Thus, addressing EFL learners' reading accomplishment both before and after the instructional intervention from a gender perspective, which was beyond the scope of this research study, could be investigated by further research.
Figure 1. Experimental & control groups' mean scores on the pre-test
Table 1. EFL learners' achievement on the narrative reading test at pre-testing
Table 2. EFL learners' achievement on the expository reading test at pre-testing
Table 3. EFL learners' achievement on the narrative reading test at post-testing
Table 4. EFL learners' achievement on the expository reading test at post-testing | 8,295 | 2017-05-30T00:00:00.000 | [
"Linguistics",
"Education"
] |
Development of an algorithm for automatic classification of right ventricle deformation patterns in arrhythmogenic right ventricular cardiomyopathy
Abstract Background Different disease stages of arrhythmogenic right ventricular cardiomyopathy (ARVC) can be identified by right ventricle (RV) longitudinal deformation (strain) patterns. This requires assessment of the onset of shortening, (systolic) peak strain, and postsystolic index, which is time‐consuming and prone to inter‐ and intra‐observer variability. The aim of this study was to design and validate an algorithm to automatically classify RV deformation patterns. Methods We developed an algorithm based on specific local characteristics from the strain curves to detect the parameters required for classification. Determination of the onset of shortening by the algorithm was compared to manual determination by an experienced operator in a dataset containing 186 RV strain curves from 26 subjects carrying a pathogenic plakophilin‐2 (PKP2) mutation and 36 healthy subjects. Classification agreement between operator and algorithm was solely based on differences in onset shortening, as the remaining parameters required for classification of RV deformation patterns could be directly obtained from the strain curves. Results The median difference between the onset of shortening determined by the experienced operator and by the automatic detector was 5.3 ms [inter‐quartile range (IQR) 2.7–8.6 ms]. 96% of the differences were within 1 time frame. Both methods correlated significantly with ρ = 0.97 (P < .001). For 26 PKP2 mutation carriers, there was 100% agreement in classification between the algorithm and experienced operator. Conclusion The determination of the onset of shortening by the experienced operator was comparable to the algorithm. Our computer algorithm seems a promising method for the automatic classification of RV deformation patterns. The algorithm is publicly available at the MathWorks File Exchange.
| INTRODUCTION
Arrhythmogenic right ventricular cardiomyopathy (ARVC) is an inherited disorder characterized by progressive myocardial replacement by fibrofatty tissue; this predisposes patients to life-threatening ventricular arrhythmias and predominantly right ventricular (RV) dysfunction. 1,2 The clinical diagnosis is based on the presence of ventricular arrhythmias, electrocardiographic and structural/functional abnormalities, combined with family history and the presence of ARVC-associated mutations. 3 These mutations are found in more than 60% of ARVC patients, most frequently in the gene encoding the desmosomal protein plakophilin-2 (PKP2). 4 However, diagnosing ARVC is complex, as these mutations are known to have incomplete penetrance and ARVC has a variable disease expression. Early manifestations of ARVC can be subtle and relatively asymptomatic, with sudden cardiac death (SCD) as first presentation. 5 This emphasizes the importance of early diagnosis and detection of individuals at risk for life-threatening arrhythmias.
In general, electrical abnormalities are considered to manifest before signs of structural disease, which has led to the recognition of different clinical stages of ARVC: (a) subclinical or concealed, when no electrocardiographic (ECG) or structural abnormalities (eg, regional contractile dysfunction and increased myocardial stiffness 6 ) are present; (b) electrical, with solely ECG abnormalities; and (c) structural, with both ECG and structural abnormalities. 7,8 Interestingly, recent echocardiographic deformation (strain) imaging studies have demonstrated abnormal deformation patterns of the basal right ventricle (RV) myocardium in subclinical stages of ARVC, suggesting that sensitive assessment of mechanical function already reveals structural dysfunction in earlier stages. 9 This implies a potential role for RV strain analysis in detecting early disease and at-risk individuals. [10][11][12][13] Consequently, deformation pattern analysis has recently been suggested by the European Association of Cardiovascular Imaging as part of the clinical evaluation for ARVC. 12 Various abnormal strain parameters have been associated with ARVC pathology, including a delayed onset of mechanical shortening, (systolic) peak strain, and postsystolic shortening relative to the peak strain (postsystolic index; Figure 1). Combining these four parameters, Mast et al identified three types of deformation patterns related to the clinical ARVC stage (Figure 1). 13,14 Importantly, this deformation pattern classification method was shown to have added prognostic value for clinical disease progression during early stages of ARVC. 14 Currently, the measurement of these parameters and classification of the strain pattern is an offline, manual process potentially prone to inter- and intra-observer variability (or interpretation), which is directly related to experience level. The aim of this study was to eliminate these limitations by designing an algorithm for fast, uniform, and automatic classification of RV deformation patterns and validating its performance.
| Population
Echocardiographic data were retrospectively obtained from 26 plakophilin-2 (PKP2) mutation carriers and 36 healthy control subjects who underwent echocardiography, including deformation imaging, at the University Medical Center Utrecht between 2006 and 2015. This study was approved by the local institutional ethics review board.
| Imaging
Echocardiography was performed as previously described using a Vivid 7 or Vivid E9 ultrasound (US) machine (General Electric, Milwaukee, Wisconsin) with a broadband M3S transducer. The RV lateral free wall was visualized using the RV-focused apical 4-chamber view, after optimizing temporal resolution by reducing the sector width. 13 Pulmonary valve opening and closure were determined using RV outflow tract spectral Doppler measurements during end-expiration.
| Image processing
Right ventricle (RV) strain curves were obtained with two-dimensional strain imaging on standard B-mode images (speckle-tracking), as described before. 15 With this technique, the speckles generated by the reflected US beam form a unique speckle pattern that is followed frame by frame; displacement of the speckle pattern represents the myocardial deformation. Images were analyzed using EchoPAC (version 12, GE Vingmed Ultrasound AS). Separate strain curves of the basal, mid-ventricular, and apical segments of the RV free wall were obtained, composing a final dataset of 186 curves. 15 The onset of the QRS complex was determined manually. Classification of the RV deformation patterns was performed manually using the strain curve of the RV basal lateral segment. 13
| Preprocessing
The saved strain curves were loaded into Matlab R2015a (MathWorks, Inc). All strain curves were interpolated to 1000 samples to create a uniform database with small sample intervals. That way, a precise comparison between the analysis of an experienced operator and the algorithm can be performed. The timing of the onset of the QRS complex was used as a reference.
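To make this resampling step concrete, here is a minimal MATLAB sketch of interpolating a strain curve to 1000 uniformly spaced samples; the variable names and the toy curve are hypothetical, not taken from the published code.

```matlab
% Minimal sketch of the preprocessing step: resample a strain curve to
% 1000 samples on a uniform time grid. Variables are hypothetical.
tAcq   = linspace(0, 0.8, 60);        % acquisition times (s), e.g. 60 frames at 75 Hz
strain = -20 * sin(pi * tAcq / 0.8);  % toy strain curve (%), for illustration only

tUniform      = linspace(tAcq(1), tAcq(end), 1000); % uniform 1000-sample grid
strainUniform = interp1(tAcq, strain, tUniform);    % linear interpolation

plot(tUniform, strainUniform);
xlabel('Time since QRS onset (s)'); ylabel('Strain (%)');
```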
KEYWORDS
arrhythmogenic right ventricular cardiomyopathy, classification, computer algorithm, strain
| Algorithm
There are four parameters required to calculate the classification: onset of shortening, peak strain, systolic peak strain, and the postsystolic index (Figure 1). These parameters were determined based on the following algorithm. First, strain curves without a peak systolic strain below −10% were scored with 4 points (Figure 1) and, during classification of the basal lateral curves, automatically marked as a type III ARVC class. These curves were excluded from the analysis of the onset of shortening, as they are noninformative for validating the algorithm's performance.
| Onset of shortening
Onset of shortening is defined as the time between the onset of the QRS complex and the onset of the mechanical shortening measured from the strain curves (Figure 1). Assuming onset of the mechanical shortening always starts before pulmonary valve closure (PVC), only the segment of the strain curve between the onset of the QRS complex and PVC was analyzed. Onset of shortening was determined by comparison of the different peaks in the strain curve before PVC. A peak was defined as a local maximum, where the target data point is larger than both neighboring data points. If no peaks are detected before PVC, onset of shortening was assumed to be at the onset of the QRS complex. With one peak before PVC, the timing of this peak was stored as the onset of shortening. If two peaks are detected before PVC, the strain offset between these two peaks was compared (Figure 2). If the second peak was more than 1.5% absolute strain below the first peak, the timing of the first peak was used; otherwise, the timing of the second peak was used. This process was repeated for any subsequent peaks until one peak met the onset criteria.

Figure 1. Example of a strain curve of the basal right ventricular segment, with the corresponding ECG. The onset of shortening (blue) is the time between onset-QRS (orange circle) and the onset of mechanical shortening. Systolic peak strain (green) is the maximal negative value between pulmonary valve opening and closure. Peak strain (PS) (red) is the maximal negative strain. Postsystolic shortening (black) is the peak strain minus the systolic peak strain and is used to calculate the postsystolic index, according to Equation 1. In the lower table, the classification of the RV deformation pattern as defined by Mast et al 13 is explained. Three parameters are used to score the deformation pattern: onset of shortening, postsystolic index, and systolic peak strain. Based on the scoring, the curves can be marked with the accompanying classification score. It is important to note that a systolic peak strain of ≥−10% directly results in a classification score of 4 points and thus type III classification. In comparison, type I classification corresponds to a normal deformation pattern, and type II shows a delayed timing of the onset of shortening, increased postsystolic index, and reduced systolic peak strain. ECG = electrocardiogram; PVC = pulmonary valve closure; SPS = systolic peak strain.
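The peak-selection logic above can be sketched in MATLAB as follows. This is a minimal reimplementation under stated assumptions (a uniformly sampled curve starting at QRS onset, with PVC supplied as a sample index); it is not the published File Exchange code, and the function and variable names are hypothetical.

```matlab
% Minimal sketch of the onset-of-shortening detection described above.
% Assumptions: 'strain' is uniformly sampled from QRS onset, 'idxPVC' is
% the sample index of pulmonary valve closure. Not the published code.
function idxOnset = onsetOfShortening(strain, idxPVC)
    seg = strain(1:idxPVC);
    % Local maxima: points larger than both neighbors (as defined in the text).
    pk = find(seg(2:end-1) > seg(1:end-2) & seg(2:end-1) > seg(3:end)) + 1;

    if isempty(pk)
        idxOnset = 1;            % no peak before PVC: onset at QRS onset
        return;
    end
    idxOnset = pk(1);            % start from the first peak
    for k = 2:numel(pk)
        % If the next peak lies more than 1.5% absolute strain below the
        % current candidate, keep the current candidate; otherwise move on.
        if seg(pk(k)) < seg(idxOnset) - 1.5
            break;
        end
        idxOnset = pk(k);
    end
end
```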
| Peak strain
Peak strain is the maximal negative strain value and was determined using a MATLAB function which determines the minimal value of the curve.
| Systolic peak strain
The systolic peak strain was defined as the minimal (ie, lowest) strain value at or before PVC. It was determined using a MATLAB function which returns the minimal value of the curve between the start and the PVC.
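Both minima reduce to single calls; a sketch continuing the hypothetical variables from the onset example above:

```matlab
% Peak strain: most negative value over the whole curve.
% Systolic peak strain: most negative value up to and including PVC.
% Uses the same hypothetical inputs as in the onset sketch above.
peakStrain         = min(strain);
systolicPeakStrain = min(strain(1:idxPVC));
```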
| Postsystolic index (PSI)
The PSI was calculated from the previously determined peak strain and systolic peak strain using the following formula, which was implemented in the algorithm:

$$\text{PSI} = 100 \cdot \frac{\text{peak strain} - \text{systolic peak strain}}{\text{peak strain}} \tag{1}$$

If the peak strain is equal to the systolic peak strain, then the PSI is set to zero.
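A direct transcription of Equation 1 with the zero-guard, again using the hypothetical variables from the sketches above:

```matlab
% Postsystolic index (Equation 1), with the zero-guard described above.
if peakStrain == systolicPeakStrain
    psi = 0;                      % no postsystolic shortening
else
    psi = 100 * (peakStrain - systolicPeakStrain) / peakStrain;
end
```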
After all parameters were calculated, all curves were scored and classified as type I, type II, or type III according to the criteria defined by Mast et al 13 (Figure 1).
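A rough sketch of the scoring step follows. Only two scoring rules are explicit in this text (a systolic peak strain ≥ −10% scores 4 points, and 4-6 points correspond to type III); the onset-delay and PSI cut-offs and the type I/II point boundaries below are placeholders, not the Mast et al criteria, which should be taken from the original publication.

```matlab
% Sketch of the scoring/classification step. The cut-offs marked PLACEHOLDER
% are illustrative only and are NOT the Mast et al criteria.
ONSET_DELAY_MS = 100;   % PLACEHOLDER cut-off for delayed onset of shortening
PSI_CUTOFF     = 10;    % PLACEHOLDER cut-off for increased postsystolic index
onsetMs        = 120;   % hypothetical onset of shortening (ms) from the sketch above

if systolicPeakStrain >= -10
    score = 4;          % directly 4 points, hence type III (explicit in the text)
else
    score = (onsetMs > ONSET_DELAY_MS) + (psi > PSI_CUTOFF);
end

if score >= 4
    curveType = 3;      % type III (4-6 points, explicit in the text)
elseif score >= 1
    curveType = 2;      % type II (PLACEHOLDER boundary)
else
    curveType = 1;      % type I: normal deformation pattern
end
```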
| Validation
The determination of the onset of shortening was validated using a specific validation procedure, as shown in the flowchart (Figure 3).
| Statistical analysis
Values are presented as median and inter-quartile range or mean ± SD, as appropriate. Results are displayed using a Bland-Altman graph. The onset of shortening as determined by the algorithm is compared with the onset of shortening as determined by the first operator by calculating an (intra-class) correlation coefficient. Groups were compared by an independent-samples t test or Mann-Whitney U test. Proportions were compared between groups using Fisher's exact test. P-values of <.05 were considered significant.
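A minimal MATLAB sketch of this comparison, assuming two vectors of onset times (operator vs algorithm); the data are hypothetical, and the Spearman correlation via corr requires the Statistics and Machine Learning Toolbox.

```matlab
% Bland-Altman plot and Spearman correlation for onset-of-shortening times.
% onsetAlg/onsetOp are hypothetical vectors (ms); requires Statistics Toolbox.
onsetAlg = [40 55 62 48 90 71 35 66];   % algorithm (hypothetical)
onsetOp  = [45 58 70 50 95 74 38 70];   % first operator (hypothetical)

meanVals = (onsetAlg + onsetOp) / 2;
diffVals = onsetOp - onsetAlg;

scatter(meanVals, diffVals); hold on;
xl = [min(meanVals) max(meanVals)];
plot(xl, mean(diffVals)*[1 1], 'r-');                        % bias
plot(xl, (mean(diffVals) + 1.96*std(diffVals))*[1 1], 'k--');% upper limit of agreement
plot(xl, (mean(diffVals) - 1.96*std(diffVals))*[1 1], 'k--');% lower limit of agreement
xlabel('Mean of methods (ms)'); ylabel('Operator - algorithm (ms)');

rho = corr(onsetAlg', onsetOp', 'Type', 'Spearman');         % Spearman's rho
fprintf('Spearman rho = %.2f\n', rho);
```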
| Population
In this study, 36 healthy subjects and 26 subjects carrying the plakophilin-2 (PKP2) mutation were included. Of the PKP2 mutation carriers, 20 (77%) fulfilled definite ARVC diagnosis by the 2010 TFC. ARVC diagnosis was defined as fulfillment of ≥4 points by the revised 2010 Task Force Criteria. 3 The baseline characteristics are provided in Table 1. PKP2 carriers were older than control subjects (43.5 ± 16.1 years vs 36.0 ± 9.8, P = .028). PKP2 carriers had significantly increased RV size and decreased LV/RV function by conventional echocardiographic measurements, compared to control subjects.

Figure 2. Example of a strain curve in which two peaks were detected in the first part of the curve. The dotted red line represents the first peak, while the dotted blue line represents the second peak. The absolute difference between the two peaks is only 0.63%; therefore, the onset of the mechanical shortening is set at the second peak.
| Validation
For all 186 curves, the four parameters could successfully be determined by the algorithm within 0.4 seconds. Fourteen curves did not reach −10% peak strain and were therefore marked as type III and excluded from the validation of the onset of shortening.
| Comparison of onset of shortening
The median difference between the onset of shortening determined by the two experienced operators was 4.3 ms (IQR 1.5-9.8 ms) (Figure 4A).
The algorithm was able to detect the onset of shortening directly in 148 curves (86.0%). In 24 curves (15 cases, 9 controls), a subsequent peak was selected by the automatic process previously described. The difference in onset of shortening time between the selected peak and the peak before it was −64 ± 31 ms. For these cases, the median difference between the onset of shortening determined by the algorithm and the first experienced operator was 3.5 ms (IQR 1.4-6.4 ms).
Overall, the median difference between the onset of shortening determined by the algorithm and the first experienced operator was 5.3 ms (IQR 2.7-8.6 ms) (Figure 4B). The correlation coefficient between the algorithm and the first experienced operator for the onset of shortening was ρ = 0.97 (P < .001), and the intra-class correlation coefficient was 0.96 (0.94-0.97). Three curves (2%) were scored differently between the algorithm and the first experienced operator (Figure 4C, D). The mean framerate (which depends on the settings during US imaging) was 75 Hz, resulting in a duration of ~13 ms between two frames (one time frame). 96% of the differences across all curves were within 1 time frame.
| Comparison of the classification score
There were no differences in classification of the PKP2 mutation carriers as determined by the algorithm and the experienced operator, based on differences in timing of the onset of shortening of the basal lateral curve.
| DISCUSSION
In this study, an automatic algorithm was created for the analysis and classification of RV strain curves. The results suggest that the algorithm is capable of accurate detection of the four classification parameters and thereby capable of calculating the accompanying classification score. The results of this study show that our proposed automatic algorithm is feasible, quick, and applicable to nonexperts on RV deformation characteristics.

Figure 3. Flowchart of the calculation and validation method. For the comparison of the onset of shortening, both the three curves of the healthy subjects and the three curves of the PKP2 mutation carriers were included. Curves with a peak systolic strain >−10% were excluded from the analysis. The basal lateral strain curves of PKP2 mutation carriers were analyzed to compare the classification score between the algorithm and the experienced operator.
In this study, the operators have 13 and 3 years of experience in analyzing RV strain curves, respectively. The inter-operator differences as shown in Figure 4A underscore the need for a more reliable method to determine the required strain parameters. As shown in Figure 4D, in some cases there was a large difference between the algorithm and the experienced operator due to the presence of multiple peaks in the first phase of the curve. In these cases the inter-operator differences were equally large, because there is no strict consensus on which peak to use as the start of contraction. By using an algorithm, systematic analysis of the RV strain curves will result in uniform and reliable output.
A high (intra-class) correlation was seen between the algorithm and the experienced operator for the onset of shortening. However, overall the onset of shortening as determined by the algorithm was slightly earlier than that determined by the operator, with a median difference of 5.3 ms (IQR 2.7-8.6 ms).
An explanation for this difference might be that the operator visually picks the location on the curve at which the curve is clearly descending, whereas the algorithm determines the very first moment at which the curve is descending and is therefore earlier than the operator. It is important to keep in mind that the minimum frame duration is 5 ms. The median difference of 5.3 ms between the algorithm and the operator as found in this study is thus of the same magnitude as the temporal resolution of the US machine.
The first step of the algorithm was to exclude all curves without strain below −10% and mark these curves as a type III ARVC class. In the scoring system described by Mast et al, 13 the threshold for the systolic peak strain is −10%; every curve with a systolic peak strain ≥−10% was assigned 4 points. In the classification, 4-6 points were classified as a type III and therefore it is valid to mark these curves as type III and not to score the onset and the amount of postsystolic shortening (which are both characteristically abnormal in the type III strain pattern).
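A minimal sketch of this first gate, assuming a per-curve systolic peak strain value in percent; only the single rule quoted above is implemented, and the remaining parameters of the Mast et al score are not reproduced here:

```python
def classify_type_iii(systolic_peak_strain_pct, threshold=-10.0):
    """Scoring rule quoted from Mast et al. [13]: a curve whose systolic peak
    strain does not reach the -10% threshold receives 4 points, and 4-6
    points map to the type III pattern. For such curves, the onset of
    shortening and the amount of post-systolic shortening are not scored."""
    if systolic_peak_strain_pct >= threshold:  # strain is negative; >= -10% means weak shortening
        return "type III"
    return None  # proceed with the full four-parameter scoring
```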
Classification of the RV deformation patterns was based on the RV basal lateral segment, which is the subtricuspid region, since this region is typically first affected in ARVC. 10,16,17 No differences were seen in classification type between the algorithm and the experienced operator. The assumption was made that peak strain, systolic peak strain, and the PSI did not differ between the algorithm and the experienced operator since the estimation of the peak strain, systolic peak strain, and the postsystolic index are straightforward and could be visually verified (eg, Figure 1).
| Limitations
| Clinical relevance
Previous studies have shown that RV strain analysis is very useful for early detection and classification in ARVC. 13,14 In those studies, the clinical relevance of the classification score as used in this algorithm was demonstrated. The algorithm is vendor-independent and is publicly available at the MathWorks File Exchange. 18 Future steps to advance the clinical implementation would be to develop an easy-to-access online tool.
Moreover, to stimulate clinical use of (RV) strain analysis, this type of algorithm should be implemented in the postprocessing software of the different vendors.
In this study, we focused on the classification of deformation patterns of the RV in ARVC patients. However, this method is not limited to solely RV strain in this specific patient population, but might be useful for several other myocardial diseases which have a distinct fibrosis pattern, 19 left bundle branch block, and, for example, ischemic heart diseases. Future research could use our algorithm for the analysis of deformation patterns in these other diseases.

FIGURE 4 A, Bland-Altman plot of the difference between the two experienced operators in onset of shortening. Black dots represent the healthy subjects while red dots represent the ARVC patients carrying a PKP2 mutation. The red line represents the mean difference of 8.7 ± 15.0 ms. B, Bland-Altman plot of the difference between the first experienced operator and the algorithm in onset of shortening. The red line represents the mean difference of 6.6 ± 10.9 ms. The timing of the experienced operator is overall slightly later than the timing of the algorithm. C, Differences in onset of shortening between the algorithm and the first experienced operator. The two black dotted lines represent the threshold value for the classification as stated by Mast et al. 13 The lower-left segment (gray) represents the curves with normal onset of shortening, while the upper-right corner shows the delayed onset of shortening. Note that this difference does not distinguish between normal and abnormal strain curves, since onset of shortening is only one of the four parameters needed for classification. Both the upper-left and the lower-right corners represent the values where the algorithm and the experienced operator scored differently. The blue line represents the perfect linear relation (x = y) between the algorithm and the experienced operator. D, Example of the RV curve where the difference between the algorithm and the experienced operator was largest. In this case, two peaks were detected. The absolute difference in strain was 1.57%, and therefore, the first peak was marked as the onset of shortening by the algorithm.
| CONCLUSION
In this study, an automatic algorithm was developed and verified to automatically analyze RV deformation patterns. Using this algorithm, intra- and inter-observer differences are prevented, resulting in fast and uniform analysis of strain curves. Specialized laboratories have been working on RV strain analysis in ARVC for more than ten years; this algorithm might stimulate the implementation of their methods into real-world practice.
"Medicine",
"Engineering"
] |
Impulse and Performance Measurements of Electric Solid Propellant in a Laboratory Electrothermal Ablation-Fed Pulsed Plasma Thruster
Electric solid propellants are advanced solid chemical rocket propellants that can be controlled (ignited, throttled and extinguished) through the application and removal of an electric current. This behavior may enable the propellant to be used in multimode propulsion systems utilizing the ablative pulsed plasma thruster. The performance of an electric solid propellant operating in an electrothermal ablation-fed pulsed plasma thruster was investigated using an inverted pendulum micro-newton thrust stand. The impulse bit and specific impulse of the device using the electric solid propellant were measured for short-duration test runs of 100 pulses and longer-duration runs to end-of-life, at energy levels of 5, 10, 15 and 20 J. Also, the device was operated using the current state-of-the-art ablation-fed pulsed plasma thruster propellant, polytetrafluoroethylene (PTFE). Impulse bit measurements for PTFE indicate 100 ± 20 µN-s at an initial energy level of 5 J, increasing linearly with energy by approximately 30 µN-s/J. Within the error of the experiment, measurements of the impulse bit for the electric solid propellant are identical to PTFE. Specific impulse when operating on PTFE is calculated to be about 450 s. It is demonstrated that a surface layer in the hygroscopic electric solid propellant is rapidly ablated over the first few discharges of the device, which decreases the average specific impulse relative to the traditional polytetrafluoroethylene propellant. Correcting these data by subtracting the early discharge ablation mass loss measurements yields a corrected electric solid propellant specific impulse of approximately 300 s.
Introduction
Recent innovations in the solid rocket propellant field have led to the development of a solid propellant that is safe, throttleable, and green, with on-demand on-off capability. These electric solid propellants (ESPs) ignite and decompose when electric power is applied at sufficient current and voltage [1]. This decomposition is a highly exothermic process that generates hot gas at a burn rate that can be throttled by varying the applied current. Removal of the voltage and current extinguishes the reaction, which may be restarted by the reapplication of electric power [2]. Because this reaction is only induced by the electric current, ESPs are not susceptible to accidental ignition by spark, impact or open flame. These characteristics are extremely beneficial compared to traditional solid rocket propellants, which are neither throttleable nor toggleable and are sensitive to external ignition sources. The advent of ESPs expands the potential for the use of solid propellants in applications that were previously infeasible or dangerous.
Development of ESPs began in the 1990s with the design of an automobile air bag inflator propellant (ABIP) using materials safe for unprotected human contact (i.e., "green" materials). This ABIP was ammonium nitrate-based and was later repurposed for use in other areas, including rocket propulsion. Shortly thereafter, "ASPEN," the first digitally controlled extinguishable solid propellant, was developed [3]. This propellant featured additives with the ammonium nitrate base to lower the melting point and increase electrical conductivity [2]. This material exhibited performance metrics comparable to those of previous solid rocket propellants, but major problems existed with the repeatability of ignition. Further development for gas-generation applications led to a special family of electrically controlled energetic materials which may be mixed as solid, liquid or gel form propellants, all of which are electrically ignitable [4,5]. Some mixtures are flame-sensitive and explosive, some are insensitive to flame and combustion sustainable, and some are insensitive and extinguishable (like ESPs). One particular formula which conducts electricity and exhibits high specific impulse is known as the high-performance electric propellant (HIPEP) [1,6], which is not sensitive to open flame, spark, or impact and is extinguishable. In this solid energetic material, the ionic liquid oxidizer hydroxyl-ammonium nitrate (HAN) is dissolved and cross-linked in polyvinyl alcohol (PVA), forming a gel that is hardened by baking. The resulting rubbery solid HIPEP exhibits a pyroelectric behavior unique to energetics. When direct current electric power is applied, the proton transfer reaction between hydroxyl-ammonium and nitrate is promoted, and the level of nitric acid rapidly rises in the material, eventually triggering ignition of the propellant. This exothermic, gas-generating reaction may be harnessed in a solid rocket motor to generate thrust on demand using electric power.
HIPEP's pyroelectric behavior may facilitate a dual mode propulsion system. The first mode is a high thrust chemical mode where direct current electric power is applied to incite pyroelectric gas generation. The decomposed propellant is gas-dynamically accelerated through a nozzle to generate thrust like any typical solid rocket motor. The duration of each chemical mode firing is determined by the duration that electric power is supplied and could be ≥500 ms. The inventors of this propellant and collaborating groups have reported on this mode of operation previously, with some efforts still ongoing [7][8][9]. This solid rocket motor may be paired with a second, high specific impulse (Isp) electric mode in the same device using the same thruster and solid propellant connected to a second, pulsed electrical circuit. One promising electric configuration for a high Isp mode is a pulsed electric propulsion device known as the electrothermal coaxial pulsed plasma thruster.
Pulsed plasma thrusters [10] (PPTs) have been in use since the first orbital flight of an electric propulsion device in 1964. PPTs offer repeatable impulse bits with higher exhaust velocities than can be achieved using chemical thrusters. Ablating polytetrafluoroethylene (PTFE) in a discharge to yield a working fluid, ablation-fed PPTs (APPTs) have the added benefit of inert propellant storage with no pressure vessel requirements. PPTs typically fulfill secondary propulsion needs on spacecraft such as station-keeping and attitude control, but have recently garnered more attention as a main propulsion system for small spacecraft [11,12]. Broadly speaking, APPT geometry may be classified as either rectangular or coaxial [10]. Coaxial geometry APPTs, like that of PPT-4 [13], electrothermal PPTs [14][15][16][17][18][19], or ablative z-pinch PPTs [20], possess a central and a downstream electrode and may have a conical-shaped dielectric between the electrodes. The central or upstream electrode is typically cylindrical and positively charged (anode) while the downstream electrode is ring-shaped. Solid propellant fills the space between electrodes and may be fed from the side through the conical dielectric. Most commonly this propellant is the inert polymer PTFE which is the state-of-the-art propellant for APPTs. A capacitor or bank of capacitors is charged to a few kilovolts, with that voltage applied across the electrodes. The main arc discharge is initiated by a small ignition pulse, which is always located in or near the cathode in a PPT. The igniter generates a surface flashover discharge to produce a seed plasma that initiates the main arc discharge. Radiant heat supplied by the high temperature arc heats and ablates the surface of the solid propellant, yielding gaseous propellant that further fuels the arc. At low energy, the coaxial PPT is a device dominated by electrothermal acceleration mechanisms, with the energy of the arc heating the gas to yield high exit velocities through gas-dynamic acceleration. Ablation processes are at the core of APPT operation, with many PTFE ablation studies in the literature [21][22][23][24][25][26].
The aforementioned dual mode device combining a solid chemical rocket motor mode with an electric coaxial APPT mode remains conceptual. Research in the use of HIPEP and other ESPs for gas-generation and chemical mode applications with long (>1 ms) timescales is ongoing and separate from the present work. Current efforts by the authors are focused on understanding the behavior of the HIPEP material in the proposed APPT pulsed electric mode. Our recent work has compared ablation of HIPEP with traditional PTFE in ablation-fed arc discharge devices [27][28][29][30][31]. At high temperatures and during long (~ms) timescales, it is known that HIPEP undergoes a thermal decomposition process, while PTFE evaporates after depolymerization. However, ablation-controlled arc discharges occur on much shorter timescales, as the discharge current has a period of less than 10 µs. The specific ablation (µg/J) of HIPEP was measured to be roughly twice that of PTFE, and this was attributed to differences in the material thermal and chemical properties [27]. Plume measurements of HIPEP-fueled pulsed microthrusters [28] indicate electron temperatures (1-2 eV) and densities (10¹¹-10¹⁴ cm⁻³) of the weakly ionized plasma comparable to those of PTFE-fueled APPTs. Exhaust velocity measurements indicate similar specific impulse performance of HIPEP relative to PTFE in the microthrusters, for at least the ionized portion of the expelled mass. Furthermore, it has been shown that the fraction of late-time ablation mass is similar for both propellants. Estimates from high-speed imagery of a pulsed HIPEP microthruster suggest that up to 50% of the ablated mass may be attributed to low-speed macroparticles ejected after the main current pulse [29].
To date, HIPEP has not been used in a traditional APPT configuration, where propellant material is ablated during a high current, short duration (~10 µs) arc discharge. Another ESP, the ammonium nitrate-based ABIP, was previously tested in Aerojet's modular test unit (MTU) and produced impulse bits roughly 50%-80% of those of the PTFE solid propellant typically used in that unit [1]. No measured performance metrics (impulse bit, specific impulse) are yet published for a PPT using HIPEP as propellant. The objective of this work is to investigate the performance of the HAN-based HIPEP material relative to that of PTFE in an electrothermal APPT. The device is a coaxial geometry electrothermal APPT, and a modified version of it was used previously to quantify the propellant specific ablation [27]. Both PTFE and HIPEP are used as propellants in this work and the impulse bit and specific impulse are measured using an inverted pendulum thrust stand. For each propellant, the device was nominally operated for 100 pulses in vacuum, with the impulse bit measured throughout the test and the average propellant mass loss per pulse found by measuring the propellant mass before and after a test. These measurements are the first reported one-to-one performance comparisons between the HIPEP and PTFE materials in an ablative pulsed plasma device. Results from these experiments, when combined with previous observations on the ablation of the HIPEP material, can now be used to draw conclusions about the propulsive performance. The evolution of ablation mass with pulse number was also examined closely. Very short duration tests are conducted to quantify the early-pulse mass loss, and the mass loss measurements in long-duration tests are closely examined for both PTFE and HIPEP propellants to identify long-term trends in the calculated specific impulse. We discuss the role of moisture absorbed by the hygroscopic HIPEP in mass loss measurements and specific impulse calculations, as well as its impact on future thruster designs.
Methods and Apparatus
The performance of the electric solid propellant, HIPEP, operating in an electrothermal pulsed plasma thruster was measured using an inverted pendulum thrust stand. This section details the propellant, the devices, and methods used in the present work.
High-Performance Electric Propellant
HIPEP is a HAN-based solution solid manufactured by Digital Solid State Propulsion (DSSP) using "green" ingredients and processes free of harmful fumes. HIPEP has a chemical composition of 75% HAN oxidizer (an inorganic ionic liquid), 20% polyvinyl alcohol (PVA) fuel binder, and 5% ammonium nitrate. It is mixed in standard chemical glassware, with only gloves and safety glasses needed for protection, and cured at 35 °C/95 °F. It is initially a liquid and is poured into a mold, curing to form a rubbery solid with density ~1.8 g/cm³ and the appearance and texture of a soft pencil eraser. In a typical PPT, the PTFE is an electrical insulator between the electrodes. The conductivity of HIPEP (1-2 S/m) is comparable to highly conductive ionic liquids. However, our previous work has shown that the conductivity of the HIPEP has a negligible effect on the measured current in the arc discharge [27]. Furthermore, it has been observed that the HIPEP material ablates more readily than PTFE in an ablation-fed arc, which may be attributed to thermodynamic properties of the solid propellant. It is currently unclear how the additional ablation mass contributes to the thrust produced in an ablation-fed thruster.
The solid HIPEP material is hygroscopic and gradually absorbs moisture from a typical laboratory atmosphere (~50% relative humidity), eventually causing the propellant to become completely liquid. To mitigate absorption of moisture in this work, HIPEP samples are handled and measured only in a dry-air glovebox kept at 5% relative humidity. The material is stored only in a vacuum or dry-air environment. Furthermore, these samples undergo a vacuum drying process wherein samples were kept at <5 × 10⁻² torr for at least 24 h. After this time, the mass of the samples reaches steady state, with the measured mass within 0.26% of the dry mass [27]. A Sartorius QUINTIX125D-1S dual range semi-micro balance is used to measure the mass of propellant samples before and after testing. In the selected range, this balance has a capacity of 60 g and can be read to an increment of 0.01 mg. The factory reported repeatability of the balance is 0.02 mg. For measurements reported here the typical variation in measurement was ±0.03 mg.
Electric Propellant Thruster Experiment
The electric propellant thruster experiment (EPTX) shown schematically in Figure 1 has geometry similar to that of a coaxial electrothermal APPT. It should be noted that this device was originally used primarily to study the mass ablation of propellants and it was not designed to be an efficient thruster [27]. The device was designed to facilitate removal and replacement of small propellant tube samples and is not optimized for performance.
Geometry and Operation
A circular stainless steel rod serves as the anode (+, positive) and a stainless steel ring with a 15° conical nozzle bore serves as the cathode (-, ground). The assembly is housed in a non-conductive PEEK body. The propellant tube sample has length 12 mm and inner diameter 6.35 mm. Because HIPEP is conductive, the propellant is isolated electrically from the two electrodes by thin PTFE washers with inner diameter of 7 mm which are not shown in Figure 1. These washers have an approximate thickness of <0.5 mm which is sufficient to hold off the maximum voltage (2.23 kV) used in the present work. The washers remain during PTFE testing to keep electrode spacing consistent between propellant samples. The test article and the capacitor bank are co-located inside the vacuum test facility. It is intended that the arc discharge occurs in the cylindrical cavity (6.35 mm dia.) formed by the inner propellant tube wall and the anode end. Because the test article is at vacuum, the capacitor can be charged to a high voltage (1-5 kV) across the anode/cathode-gap without initiating a Paschen breakdown. Breakdown of the gas is initiated by a surface discharge igniter constructed of two tungsten wires cemented in a two-bore alumina tube with approximately 2 mm exposed tip lengths. The wire tips are embedded in the nozzle of the cathode as shown in Figure 1. A capacitor discharge ignition (CDI) circuit creates a low-energy surface discharge between the tungsten wire tips. Electrons from this discharge are accelerated to the positively charged anode and sputter particles from it and the nearby propellant, triggering the main arc discharge.
During the main arc discharge, current flows in the z-direction through the arc region from the anode and attaches at the cathode/nozzle electrode. This current oscillates between high positive and negative currents over a few microseconds. Because the magnetic field induced by this rapidly changing current is in the θ-direction and follows the sign of the current, the Lorentz force is always directed in the negative radial direction (pinching toward the z-axis) in the arc region labeled in Figure 1. Consequently, the current sheet does not propagate along the z-axis in the cavity. In the conical nozzle region there is a radial component of current that may give rise to a small electromagnetic axial thrust component. The high current flowing through the resistance of the arc discharge in the cavity dissipates the energy that was initially stored on the capacitors. This energy transiently heats the walls of the propellant cavity to well above the vaporization temperature and causes ablation of propellant mass at a rate of 30-300 µg/pulse. The gas generated by ablation is then further heated by the arc discharge to high temperatures on the order of a few eV. This mass of high temperature charged particles and neutrals is accelerated gas-dynamically via the nozzle and imparts an impulse per pulse or impulse bit (Ibit). The capacitor bank must be recharged after each discharge, pulsed at a repetition rate of once per 20 s in this work (0.05 Hz). This low repetition rate means the propellant has time to cool after each discharge. Further details on operation, propellant sample preparation, and the ablation mass rates of PTFE and HIPEP in the precursor to this device may be found in our previous publication [27]. The only change in the device between that work and the present work is the implementation of the conical-shaped cathode nozzle.
Thrust Mode
In this device, the electromagnetic force pinches the plasma radially inward, increasing the plasma pressure which, in turn, generates gas-dynamic thrust in the axial direction. To estimate the gas-dynamic contribution to axial thrust due to electromagnetic pinching, we use a z-pinch quasi-equilibrium analysis [32] for which the equilibrium condition is written:

$$\nabla p = \vec{j} \times \vec{B} \tag{1}$$

In our device the magnetic field is unidirectional in $\hat{\theta}$ and a function of $r$, while the current density $\vec{j}$ is equal to the curl of the magnetic field and is written:

$$\vec{j} = \frac{1}{\mu_0} \nabla \times \vec{B} \tag{2}$$

Evaluating the gradient and cross product in Equation (1) yields:

$$\frac{dp}{dr} = -\frac{B_\theta}{\mu_0 r}\frac{d}{dr}\left(r B_\theta\right) \tag{3}$$

where $B_\theta(r)$ is the azimuthal magnetic field strength, which is assumed to vary only in the $r$-direction. If we further assume that the plasma within the discharge chamber of the EPTX device is a cylindrical column spanning $r = 0$ to $R$, the differential Equation (3) has a solution of the form:

$$p(r) = p_0 - \frac{1}{\mu_0}\int_0^r \frac{B_\theta}{r'}\frac{d}{dr'}\left(r' B_\theta\right)\,dr' \tag{4}$$

where $p_0$ is the peak pressure in the plasma column, assumed to be at $r = 0$. Further assuming a uniform axial current density distribution within the plasma column of maximum radius $r = R$, and utilizing Ampère's law in Equation (3), we may write the magnetic field as:

$$B_\theta(r) = B_a \frac{r}{R}, \qquad B_a = \frac{\mu_0 I}{2\pi R} \tag{5}$$

where $I$ is the current flowing through the thruster-capacitor circuit and $B_a$ is the azimuthal magnetic field strength at the outer edge of the plasma column. While we will not report it in this work, the current was measured for each discharge and the results are practically identical to the current waveforms presented in Ref. [27]. Substituting Equation (5) into Equation (4) and performing the integral yields:

$$p(r) = \frac{B_a^2}{\mu_0}\left(1 - \frac{r^2}{R^2}\right) \tag{6}$$

after some minor rearranging of terms. Equation (6) is the "pinch condition" for the plasma. The axial force arising from the imbalance in gas pressure at the open end of the propellant cavity at any time may then be obtained by integrating Equation (6) over the solid back face of the cavity, as:

$$F_z = \int_0^R p(r)\,2\pi r\,dr = \frac{\pi R^2 B_a^2}{2\mu_0} = \frac{\mu_0 I^2}{8\pi} \tag{7}$$

Thus, the axial force arising from the electromagnetic pinching in the radial direction scales with the square of the current flowing through the arc plasma. Using the definition of impulse as the time integral of force, we obtain the contribution of that force to the measured impulse of the device as:

$$I_{EM} = \int F_z\,dt = \frac{\mu_0}{8\pi}\int I^2\,dt = \frac{\mu_0}{8\pi}\Psi \tag{8}$$

where Ψ is the integral of the current squared over the entire discharge. The quantity Ψ is obtained by numerically integrating the experimentally measured current. In electrothermal PPTs, the value of the impulse contributed by the electromagnetic force is small relative to the measured impulse. In Section 4.3, we use the results of the present work to show that this is true for the EPTX device.
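Ψ and the pinch impulse of Equation (8) can be computed from a digitized current trace as in the sketch below. The waveform here is an invented damped sinusoid standing in for the measured current (the real waveforms are in Ref. [27]); all parameter values are placeholders.

```python
import numpy as np

MU_0 = 4e-7 * np.pi  # vacuum permeability, N/A^2

def electromagnetic_impulse(t_s, current_A):
    """Eq. (8): I_EM = (mu_0 / 8 pi) * Psi, with Psi = integral of I(t)^2
    evaluated numerically by the trapezoidal rule."""
    psi = np.trapz(current_A**2, t_s)   # A^2-s
    return MU_0 / (8 * np.pi) * psi     # N-s

# Placeholder discharge: ~10 us ringing waveform with a few kA peak current.
t = np.linspace(0, 10e-6, 2000)
i = 8e3 * np.exp(-t / 3e-6) * np.sin(2 * np.pi * 250e3 * t)
print(electromagnetic_impulse(t, i))    # on the order of 1 uN-s for these numbers
```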
Compact Thrust Stand
This work was conducted in Electric Propulsion Facility 1 at the University of Illinois Urbana-Champaign (UIUC) Electric Propulsion Lab. This vacuum facility is approximately 1000 L in volume and achieves a nominal base pressure of 2 × 10⁻⁵ torr. Housed in this facility is the UIUC Compact Thrust Stand designed for accurate measurement of thrust and impulse bit in the micro- and millinewton range [33]. This stand is an inverted-pendulum design diagramed in Figure 2a. Two modes of stand operation allow for constant thrust measurement in the range of 1-10 mN and impulse bit measurement in the range of 0.1-3.0 mN-s. In this work, the stand is operated in impulsive measurement mode to determine the impulse bit of the electrothermal APPT device. The thruster and hardware are mounted on top of the long stand platform, which is mounted to the fixed frame by stainless steel arms with torsional flexures, as shown in the photograph in Figure 2b. Any motion of the stand platform in the x-direction causes deflection of the stand arms and is opposed by the spring force of the torsional flexures, yielding an oscillatory response. Calibration is performed using a method similar to that described in Polk, et al. [34] for impulsive measurement using an inverted-pendulum thrust stand. A small impact hammer constructed of an aluminum body and soft plastic head is mounted to a hinge and actuated by a solenoid. The solenoid is triggered remotely, causing the head of the hammer to strike a piezoelectric force transducer at the impact location shown in Figure 2a. This delivers an impulsive force to the stand platform and generates an oscillatory response in the x-direction. The impulse delivered to the stand may be calculated by integration of the transducer signal. In this work, a typical calibration impulse bit is 100-1400 µN-s, adjusted manually. The measurement error for each calibration impulse is ±6 µN-s due to bit noise and trapezoidal integration error. The motion of the thrust stand is monitored by a linear variable differential transformer (LVDT) affixed to the rear of the stand platform. Typical noise levels for this analog signal are on the order of 10⁻⁴ V peak-to-peak. The LVDT signal is used both for electromagnetic (eddy current) damping of stand motion and for determining the response of the stand to an impulse. Specifically, the differential between successive position measurements (i.e., the velocity of the stand platform) is examined to determine response. For each calibration impulse, a distinct peak in the differential voltage waveform is detected. The value of this peak in volts is known as the response.
In this work, calibration was performed immediately prior to and following each testing session. Typically, 20-25 impulsive strikes are delivered to the stand, and both the transducer and LVDT output signals are stored to memory for each. The response of the stand is plotted on the y-axis, the applied calibration impulse bits are plotted on the x-axis, and a linear fit to the data is established as the calibration curve. Figure 3 presents such a calibration curve for a typical pre-test calibration in the range of 100-1400 µN-s. A standard least-squares regression method as described in Polk, et al. [34] is used to determine the best linear fit to the calibration data. Also shown in Figure 3 are the standard residuals, shown relative to the average standard residual indicated by the solid black line. The distribution and mean of the standard residuals indicate that the linear fit is appropriate, with typical fit correlation values of 0.95 or greater. After each calibration, a testing session was conducted wherein the EPTX device was pulsed once every 20 s, imparting an impulse on the stand. For each pulse of the device, the thrust stand response was obtained from the LVDT measurement. The calibration curve in Figure 3 was then used to determine the impulse bit of each pulse based upon the measured thrust stand response. In the present work, the impulse bits measured are in the range of roughly 100-800 µN-s, which is fully contained in the linear region of the established calibration curve. A typical standard deviation of residuals in calibration is 1.5 mV. Using the linear fit in Figure 3, this suggests the error in a single impulse bit measurement is ±20 µN-s, equivalent to one standard deviation of response residual in either direction.
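The calibration and conversion steps can be sketched as below; this is a minimal illustration with assumed variable names, standing in for the least-squares procedure of Polk et al. [34], not the authors' actual code.

```python
import numpy as np

def fit_calibration(cal_impulses_uNs, responses_V):
    """Least-squares linear fit of stand response (V) against applied
    calibration impulse (uN-s); returns (slope, intercept)."""
    slope, intercept = np.polyfit(cal_impulses_uNs, responses_V, deg=1)
    return slope, intercept

def impulse_from_response(response_V, slope, intercept):
    """Invert the calibration curve: convert a measured LVDT velocity-peak
    response into an impulse bit in uN-s."""
    return (response_V - intercept) / slope

# Typical session: 20-25 hammer strikes spanning roughly 100-1400 uN-s are
# fitted, then each thruster pulse response is converted through the same fit.
```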
Results
The EPTX was operated in the facility described and tested using PTFE and HIPEP as propellants for comparative purposes. Using the compact thrust stand, the impulse bit of each propellant was recorded for four nominal stored energy values of 5, 10, 15, and 20 J. Initially, two test durations were conducted in this work: a short-duration test consisting of 100 pulses and a long-duration test to end-of-life. In this section, we present the results of these tests.
Short-Duration Tests
In our previous work, PTFE and HIPEP were tested in a similar device specifically designed to quantify the ablated mass per pulse [27]. The nominal test duration for that work was 100 pulses, which was initially selected as the test length for the short-duration test in this work. Each test begins with a 20-point calibration at the nominal base pressure of 2 × 10⁻⁵ torr. The high voltage power supply is then set to the voltage corresponding to the desired energy level and impulse testing begins. Each pulse is triggered remotely via a surface discharge igniter and imparts an impulse to the thrust stand, which is recorded and post-processed to yield the impulse bit by the method described in Section 2.3. Six separate 100-pulse test trials were conducted at each energy level, three for PTFE and three for HIPEP. A new propellant sample is used for each test trial, but no other parameters of the experiment are changed.
Typical impulse bit sets from one such test trial for each propellant are shown in Figure 4, normalized by the average impulse bit value over all 100 pulses. It is observed in Figure 4 that the impulse bit varies about the mean and remains roughly constant, within the error bars (±20 µN-s), over pulses 10-100 for both propellants. However, it is noted that the measured impulse bit for the first pulse of each trial is 30%-40% greater than the average. Subsequent pulses 2-10 decrease in each trial until a rough steady state is achieved near the average value. The impulse bit then varies about the mean and remains roughly constant, within the error bars, through pulse 100. Only one test trial (HIPEP propellant, 20 J) deviated from this behavior, as shown in Figure 4. In this trial, the impulse bits for pulses 1-10 were near the mean value of 608 µN-s, rather than 30% greater. An increasing trend over pulses 10-40 is observed before decreasing again to end near the mean of the other trials. While this trial deviated significantly from the typical trend observed in the other energy tests, the mean impulse bit of this trial is still similar to the other five trials at 20 J. For each energy level (5, 10, 15, 20 J), three separate test trials were performed for each propellant. This yields 24 100-pulse trials, 12 trials for each propellant. All measurements from these trials are presented in histograms in Figure 5. The width of the histogram bars is 20 µN-s in Figure 5, equivalent to the error for a single impulse measurement. We observe in general that for each propellant there are four distinct groupings corresponding to the four energy levels, which are organized into the four subfigures of Figure 5. The HIPEP results are shown in white bars but are plotted semi-transparent such that where the bars overlap with PTFE results, a light gray bar is shown. Each group of measurements is roughly centered about the mean impulse value for each energy level. Additionally, we see that the numbers of impulse measurements in each group are quite similar between the two propellants for a given energy. Because the initial impulse measurements of each trial are up to 30% greater than the mean, there also exist a number of additional bins slightly greater than the mean value at each energy. Finally, the effect of the unique trial for HIPEP at 20 J can be clearly observed in Figure 5d, where the spread of measurements is much greater for HIPEP than for PTFE. The spread of impulse measurements is noticeably wider in the 15 and 20 J energy trials when compared with the 5 and 10 J trials for both propellants. Figure 6 presents the impulse bits averaged over 300 pulses (3 propellant samples at 100 pulses each) at each energy level for each propellant, with error bars indicating two standard deviations above and below the average. Also shown in Figure 6 is a linear fit and its coefficients for both propellants. The y-intercept for this linear fit is not forced to be zero because the impulse is not linearly related to energy at low energy levels. Some finite discharge energy is required to overcome static friction and generate an impulse; thus, the y-intercept is negative. From the average impulse bit results in Figure 6, it is observed that impulse bit increases linearly with initially stored energy with a slope of about 30 µN-s/J for both propellants. Impulse bit values at each energy level are nearly identical between propellants.
At the 20 J energy level, HIPEP exhibits an average impulse bit of 590 µN-s compared to 565 µN-s for PTFE, a difference of 25 µN-s, or about 5%, which is still within the error of the experiment. This is the largest discrepancy between propellants at any energy, and 20 J is the only energy level where a larger impulse bit is measured for HIPEP. Standard deviation in impulse bit also increases with energy level for both propellants, but not at the same rate. The standard deviation for PTFE has a value of 16 µN-s at 5 J and 29 µN-s at 20 J, with a roughly linear slope between the two. At 5 J, the HIPEP impulse bit standard deviation is 17 µN-s, similar to PTFE, but it increases to 62 µN-s at the 20 J level. The standard deviation for HIPEP at 20 J was largely affected by the one anomalous short-duration trial previously discussed. As a result of this trial's unique trend, the standard deviation for HIPEP measurements at 20 J is significantly increased compared to other energy levels and to PTFE. Otherwise, the mean impulse bit at a given energy for HIPEP is typically ~95% of the mean impulse bit for PTFE, with increased variation (~10% larger standard deviation) about the mean.
Long-Duration Tests
Also of interest in the present work is the trend of impulse bit over the entire lifetime of a propellant sample. Long-duration tests were conducted using the same EPTX device operating on HIPEP and PTFE propellant samples. In these tests, the device is pulsed at the same repetition rate, and the impulse bit is measured using the compact thrust stand as in the short-duration tests, but over a greater time period (>24 h). Automated pulsing of the EPTX device is achieved by use of a battery-powered timer circuit that triggers the surface discharge igniter every 22 s. At the beginning of its life, the inner diameter of a propellant sample is at the nominal dimension (6.35 mm) and the main arc discharge is repeatedly triggered by the igniter. Each discharge ablates propellant material from the inner wall of the sample and gradually increases the diameter of the propellant cavity. As this diameter increases, ignition of the arc discharge becomes more difficult, and the time between successive pulses increases to two or more multiples of 22 s. That is, the first trigger event may not initiate arc formation, but a second or third or subsequent trigger event may initiate arc formation. The end-of-test in this work is defined as the pulse number where the time between pulses is in excess of 1 h, which means 160 trigger events do not initiate arc formation. The long-duration test trials begin with fresh samples of nominal inner diameter and end at the sample end-of-life as previously defined. Figure 7 presents the measurements of impulse bit over these long-duration tests for the four nominal energy levels and for each propellant. Error bars here show the estimated measurement error for a single impulse bit measurement (±20 µN-s). In Figure 7, it should first be noted that for each long-duration trial, comparison of pulses 1-100 shows close agreement with the trends observed in short-duration testing (Figure 4). For example, pulse 1 at 5 J using PTFE was measured to produce 130 µN-s and the impulse bit decreased to a mean value of about 115 µN-s over the first 100 pulses. Beyond pulse 100, the PTFE impulse bit measurements at 5 J in Figure 7a are largely constant, and the mean over the full lifetime is 114 µN-s. At increased discharge energy, a decreasing trend in impulse bit is observed over the duration of the test. At 10 J, PTFE impulse bit measurements average 274 µN-s through 100 pulses, but 268 µN-s at end-of-life (3083 pulses). A rough linear fit indicates the impulse bit decreases by about 1.1 µN-s per 100 pulses for PTFE at 10 J. At 15 J, this decrease is slightly greater in magnitude (1.8 µN-s per 100 pulses) but still nearly linear, and the average over the 5783 pulses is 361 µN-s. At 20 J, the average over the full 8445 pulses is 418 µN-s and a decreasing trend is still observed, but the profile deviates from a linear shape. Furthermore, it is noted in Figure 7a that the lifetime of the test trial increases with discharge energy. Lifetime for PTFE is 8445 pulses at 20 J compared to 2000 pulses at 5 J. In Figure 7b, a similar trend of increasing lifetime with discharge energy is observed for HIPEP. This increase is most apparent between the 10 and 15 J energy levels, where pulse lifetime increases from 1323 to 4974 pulses. From beginning to end-of-life, however, slightly different trends are observed for HIPEP compared to PTFE. At 5 J, the decrease in impulse bit for HIPEP is much greater than for PTFE, decreasing by 19 µN-s per 100 pulses.
Average impulse bit through pulse 100 is 120 µN-s, but a decreasing trend is observed through the final pulse, and the lifetime is much shorter than that of PTFE at 5 J (793 vs. 2000 pulses). For both propellants, the shortest lifetime in number of pulses is observed for the 5 J trial, and the longest lifetime is for the 20 J energy level, as noted in Figure 7. Average HIPEP impulse bits are typically about 90%-99% of the average value measured for PTFE at a given initial energy. Also, at each discharge energy the lifetime for the HIPEP samples is up to 60% less than that of PTFE. Finally, HIPEP impulse bit typically decreases more than PTFE over a shorter lifetime at a given energy.
Analysis and Discussion
Further details and discussion concerning the results presented in the previous section are provided here. The measured values of mass loss and impulse for each propellant are used to calculate the specific impulse for the device depending on propellant selection. Comparisons of these key metrics between the two propellants are a focus in this section.
Very Short Duration Tests
A key observation in the above results led us to conduct a third series of tests. The increased impulse bit over pulses 1-10 indicated that some form of propellant surface conditioning was occurring. Our initial hypothesis was that the ablation mass loss was also greater during these pulses, but we could not definitively test this hypothesis because no method was available to actively measure mass loss on a shot-by-shot basis. Consequently, we performed very short duration tests of only 10 pulses to better quantify the early-pulse mass loss. Testing and sample preparation procedures for the very-short-duration 10-pulse tests were identical to those of the short-duration (100-pulse) tests, including pre-test vacuum drying and mass measurement. Results for 10-pulse trials at each energy level are shown in Table 1 alongside the average mass loss measured for 100-pulse trials. Also shown in the final column of Table 1 is the 10-pulse mass loss as a percent of the 100-pulse mass loss. In Table 1 we observe that for similar conditions the early pulse mass loss for HIPEP is significantly greater than for PTFE. In 100-pulse tests, HIPEP mass loss is typically about twice that of PTFE. This is much less than in the 10-pulse tests, where the HIPEP mass loss is nearly six times that of PTFE. Second, while the mass loss of PTFE clearly increases with energy in 10-pulse trials, the same is not observed for HIPEP. Rather, the 10-pulse mass loss data for HIPEP appears to be independent of stored energy and is, on average, ~6 mg. Finally, we note that for all energy levels, the 10-pulse mass loss is 10%-11% of the 100-pulse mass loss for PTFE. This result indicates that PTFE mass loss is roughly constant over both the 10-pulse and 100-pulse intervals. For HIPEP, the mass loss during 10-pulse tests is much greater than 10% in all cases, indicating that a significant part of the mass lost during the 100-pulse tests was lost during the first 10 pulses.
Specific Impulse
One of the most reported performance metrics for in-space propulsion devices is the specific impulse, or Isp. This quantity is expressed in seconds and describes the efficiency at which the device can generate thrust per unit mass of propellant. In this work, Isp is obtained by:

$$I_{sp} = \frac{\sum I_{bit}}{g_0\,\Delta m} \tag{9}$$

where $\sum I_{bit}$ is the sum of all impulse bit measurements for a given trial, $\Delta m$ is the total propellant mass lost over that trial, and $g_0$ is the acceleration due to gravity. In a previous work, the ablation mass was investigated in a similar device [27]. The same propellant sample preparation procedures were followed in this work, and similar mass losses were measured during short-duration tests. In general, ablation mass increases in a linear fashion as a function of discharge energy. For PTFE, the ablation mass at 5 J is 35.3 µg/pulse, which yields a specific ablation of 7.0 µg/J. For the other, higher energy levels, the specific ablation is on average a constant 6.3 µg/J. HIPEP ablation exhibits similar scaling, but at a specific ablation rate that is much greater than PTFE. At 5 J, the ablation mass of HIPEP is on average 106.8 µg/pulse or 21.0 µg/J, which is about three times that of PTFE. The specific ablation of HIPEP decreases to about 12.5 µg/J at the higher discharge energy levels tested. This is roughly twice that of PTFE. Because the measured impulse bits at all energy levels are nearly identical between the two propellants, the higher mass ablated per pulse for HIPEP results in a HIPEP specific impulse that is significantly lower than that of PTFE. The Isp of both propellants was calculated using Equation (9) for the short-duration (100-pulse) test trials and the results are presented in Figure 8. The measurement error for HIPEP specific impulse is ±50 s based on a mass loss measurement error of ±35 µg/pulse [27] and an impulse measurement error of ±20 µN-s. For PTFE, the measurement error is ±30 s. These errors are shown as representative error bars in Figure 8. The specific impulse at 5 J is reduced for both propellants because of the increased ablation mass relative to stored energy. For PTFE, the average Isp at 5 J is 320 s compared to >400 s at the higher energy levels. HIPEP specific impulse at 5 J is on average 100 s, but is typically above 200 s at 10, 15, and 20 J. The reduced specific impulse at 5 J relative to a mostly constant value at other energies indicates this device may be operating in a different mode at low energy. One possibility is that a charring phenomenon observed in APPTs using PTFE as propellant at low energy is reducing the specific impulse. It has been observed in other work that excessive carbonization of the PTFE surface occurs if the local current density is below a certain threshold [35,36]. This charring leads to non-uniform ablation over the surface. It is possible that this non-uniformity may translate to non-uniform heating of the ablated material and thus a lower average exhaust velocity and specific impulse. Alternatively, at low discharge energies, the energy available for the arc discharge may be too low to sustain a breakdown across the entire gap, yielding an incomplete current channel that cannot dissipate the electrical energy efficiently. Although the EPTX device is not optimized as a thruster, its performance is near to that of other similar devices. The measured Isp for PTFE at 10 J or above in this work is comparable to other coaxial geometry APPTs using PTFE as propellant. For example, the UIUC coaxial PPT was measured to have a specific impulse of 500-600 s operating with a stored energy of 7.5 J/pulse [10,33].
Various configurations of the ablative z-pinch PPT, which possesses a geometry similar to the EPTX, exhibited a specific impulse in the range of 300-600 s [20]. On average, over the three higher energy levels, the specific impulse for PTFE is calculated to be 450 s, compared to 225 s for HIPEP. The measured impulse bits between propellants were virtually identical, but the ablation mass for HIPEP was significantly greater. This leads to the conclusion that much of the additional mass ablated when operating with HIPEP does not appreciably contribute to increasing the impulse bit, but is expelled at a low average velocity. Furthermore, the observations made in Section 4.1 indicate that much of the additional mass ablated by HIPEP is ablated during the first few discharges.
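A minimal sketch of the Equation (9) bookkeeping follows; the units and function names are our assumptions, not the authors' code.

```python
G0 = 9.80665  # standard gravity, m/s^2

def specific_impulse(impulse_bits_Ns, total_mass_loss_kg):
    """Average Isp per Eq. (9): sum of measured impulse bits divided by
    (g0 * total propellant mass lost over the trial). Returns seconds."""
    return sum(impulse_bits_Ns) / (G0 * total_mass_loss_kg)

# Sanity check against the long-duration HIPEP test quoted below: a total
# impulse of 2.31 N-s and 788 mg of mass loss give roughly 300 s.
print(round(specific_impulse([2.31], 788e-6)))  # -> 299
```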
This phenomenon of initially high and then decreasing impulse bit over the first few pulses has previously been observed in the literature for PTFE fueled ablation-fed devices [16,20]. The propellant surface is conditioned by the transient heating from the adjacent arc discharge resulting in the removal of impurities during those pulses. These impurities may be foreign particles on the surface acquired through handling or contact with a non-vacuum atmosphere. While PTFE is not porous or hygroscopic, it is expected that a small amount of moisture may reside on the surface as an impurity of the material before being subjected to the vacuum. These impurities add mass to the initial measurements, but evaporate or are expelled quickly during the first few pulses. In Table 1 we see that the first 10 pulses with PTFE exhibit a mass loss-per-pulse that is about 1% higher than the next 90 pulses. This indicates that the mass of impurities that are then expelled during propellant conditioning is quite small compared to the mass loss due to arc discharge ablation. Furthermore, a summation of the impulses in Figure 4 reveals that the sum total impulse for the first 10 pulses is about 10.8% of the sum total impulse for all 100 pulses. The additional mass (<1%) expelled due to surface impurities roughly translates to a relative increase of impulse (<1%) in the early pulses, indicating that on average the impurities are likely liberated by the arc discharge and accelerated to near the bulk plasma velocity.
As seen in Table 1 for HIPEP, the mass loss-per-pulse during pulses 1-10 is much greater than that of the 90 subsequent pulses. In the most extreme case, at 5 J, the mass lost in the first 10 pulses is more than 50% of the total mass loss over an entire 100-pulse test. However, using the data in Figure 4, the sum total impulse for the first 10 pulses is only 10%-11% of the sum total impulse from all 100 pulses. These combined observations indicate first that HIPEP has more mass loss attributed to surface impurities (and thus more mass loss in earlier pulses) relative to PTFE. They also indicate that the average gas velocity of these first few pulses is significantly reduced because the impulse bit remains unchanged. Because HIPEP is extremely hygroscopic, we attribute this phenomenon to water absorbed into the propellant. The propellant preparation procedure appears to effectively remove a considerable amount of water (typically 5%-6% of propellant mass) by allowing it to slowly evaporate when exposed to vacuum conditions. It could be that some water absorbs deeper into the material, rather than remaining just at the surface. This deeply absorbed water would typically require a greater amount of time to evaporate in vacuum. The addition of thermal energy through arc discharge heating would greatly increase the evaporation rate and the commensurate mass loss rate. However, the fraction of early mass loss is significant and the vacuum drying process is thorough, so we expect that a majority of the absorbed water is released during this preparation. Prior to drying, the absorbed water molecules may chemically react with the propellant, resulting in a surface layer of unknown chemical composition and thickness. This layer of unknown chemical composition would not revert to the original chemical composition of the propellant through a drying process. It is possible that this layer, which would be adjacent to the arc discharge for the early pulses of a test, could ablate more readily than the standard propellant composition. However, it is difficult to quantify this potential effect at present. The HIPEP material is completely soluble in water, and in fact an entire slug used in this work may be dissolved in about 10 mL of water. This makes it difficult to estimate the thickness of such a surface layer as there is no clear limit over time. An additional study investigating the penetration depth of atmospheric moisture into the material over time would need to be conducted. We have previously used a simple model to predict the mass ablated for a given ablation energy in this device [27]. This estimate (which is now seen to slightly overestimate HIPEP ablation relative to PTFE) was partially based on the chemical reaction incited in pure HIPEP unaffected by absorbed water. Both major constituents of HIPEP (HAN and PVA) are known to be soluble in water, but it is unclear if they would dissolve at the same rate. Between the two, HAN is more sensitive to temperature and present in a greater quantity, so a water-solvated surface layer is likely composed mainly of HAN. Further, water molecules are capable of bonding with both the hydroxyl-ammonium and the nitrate ions, forming HAN in a high purity aqueous form. When the water is then desorbed during vacuum drying, pure HAN crystals could remain on the surface, which are both irregular and highly heat sensitive.
Ultimately, the mass loss measurement for HIPEP is skewed artificially high because of the very high mass loss rates in the early pulses. As a result, commensurate specific impulse calculations for the duration of the test are skewed lower. In the interest of reporting a specific impulse that is ideally achievable, we develop a simple method to correct the average mass loss data. Specifically, we subtract the mass loss and total impulse measured in the first 10 pulses from the 100-pulse mass loss and total impulse measurements, and then perform all the calculations to obtain the average mass loss-per-pulse and average corrected Isp using the remaining 90 pulses of the 100-pulse test. The 100-pulse average values from Figure 8 and the corrected values for HIPEP are shown in Figure 9.

Figure 9. Average specific impulse for the short-duration tests as a function of propellant type and discharge energy, with both raw and corrected HIPEP data.
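A minimal sketch of that correction, under the same assumed units as the Equation (9) sketch above:

```python
G0 = 9.80665  # standard gravity, m/s^2

def corrected_specific_impulse(impulse_100_Ns, mass_100_kg,
                               impulse_first10_Ns, mass_first10_kg):
    """Corrected Isp over pulses 11-100: subtract the first-10-pulse impulse
    and mass from the 100-pulse totals, then apply Eq. (9)."""
    return (impulse_100_Ns - impulse_first10_Ns) / (
        G0 * (mass_100_kg - mass_first10_kg))
```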
In Figure 9 we observe that the corrected Isp for HIPEP is greater than or equal to the previously measured values at each energy level. In fact, all but one of the corrected values at 20 J are greater than all of the previous results at that energy. This is the expected result, based upon the observation that a significant fraction of the mass, sometimes a majority, is lost in the early pulses. When we ignore this poor propellant utilization in the early pulses, the overall specific impulse increases. At ≥10 J, the mean corrected Isp is about 300 s, with the data scattered relatively uniformly about that value. As before, the mean corrected Isp at 5 J is reduced when compared to the higher energy data, with an average value of 211 s.
In the long-duration testing, the thruster was operated until the trigger pulse could no longer initiate a discharge. As the number of discharges increases, the overall mass loss for an experimental data set grows, and the initial mass loss of the first 10 pulses becomes an increasingly small portion of the overall mass loss. Consequently, we expect that the average mass loss-per-pulse based on pre- and post-test mass measurements of the propellant will approach the corrected average mass loss-per-pulse obtained for pulses 11-100 of the 100-pulse tests. We can also use the same method (subtracting from the data set the mass loss and total impulse measured in the first 10 pulses) to correct the long-duration test data to quantify the effect of the increased initial mass loss on the calculated specific impulse. We observe that the raw calculated specific impulse of HIPEP does indeed appear to asymptotically approach the corrected value as the pulse number increases. In these long-duration tests (1000+ pulses), we find that the corrected specific impulse for HIPEP is very similar to the raw calculated value. As an illustration of this, the longest duration test on HIPEP involved 5474 pulses at 20 J. This resulted in an overall mass loss of 788 mg and a total impulse of 2.31 N-s, which yields a raw average specific impulse of roughly 300 s. The typical mass loss for the very short 10-pulse test conducted at 20 J was 6 mg and the total impulse was approximately 5 mN-s. These values are both less than 1% of the long-duration test totals, limiting their overall influence on the average specific impulse calculated using the long-duration test data.
Furthermore, it was noted in our original tests that the 5 J energy level specific impulse values for both propellants is significantly decreased relative to the higher energy levels. While the exact cause of the reduction at the low energy is currently unknown, it is suspected that the stored energy is too low to sustain a uniform arc discharge in the given cavity geometry. As a result, the arc would be either incomplete or non-uniform, causing non-uniform wall ablation and heating of propellant in the cavity. Therefore, many of the observations in the present paper may only be valid for the 10-20 J energy range, and may not hold for lower energy discharges.
Thrust Mode Verification
In the present discussion, it is useful to divide both sides of Equation (8) by the initially stored energy, E0, yielding:

$$\frac{I_{EM}}{E_0} = \frac{\mu_0}{8\pi}\,\frac{\Psi}{E_0} \tag{10}$$

The value of Equation (10) may then be compared with the constant slope of the results presented in Figure 6, which shows the total Ibit/E0 for each test. The right-hand side of the normalized Equation (10) depends only on the quantity Ψ/E0, calculated by integration of the measured current. This quantity ranges from a minimum of 14 A²-s/J at 5 J to 20 A²-s/J at 20 J and is almost entirely dependent on the initial discharge energy. Variation in Ψ/E0 between the PTFE and HIPEP propellants at each discharge energy is at most 1 A²-s/J. For this range of Ψ/E0 values, the electromagnetic pinching contribution to the impulse bit is calculated to be 0.7-1.0 µN-s/J, or about 3%-5% of the total impulse bit measured in this work. This very low fraction of the measured impulse supports the assumption that the performance of the EPTX device is dominated by electrothermal acceleration mechanisms. This also helps explain the observed difference in specific impulse performance between propellants, even after correcting for the effects of absorbed water. HIPEP ablates more mass in a given energy discharge than PTFE, but the bulk temperature is lower because the energy available for heating is relatively constant in both cases. As a consequence, HIPEP exhibits reduced specific impulse despite similar arc discharge circuit parameters [27]. We conclude that the specific impulse depends strongly on the amount of propellant mass ablated in the discharge and is only very weakly dependent on the electric circuit parameters.
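As a consistency check on the quoted range (our arithmetic, using only the values reported above):

$$\frac{\mu_0}{8\pi} = \frac{4\pi\times10^{-7}\ \mathrm{N/A^2}}{8\pi} = 5\times10^{-8}\ \mathrm{N/A^2}, \qquad 5\times10^{-8}\times(14\ \text{to}\ 20)\ \mathrm{A^2\text{-}s/J} \approx 0.7\ \text{to}\ 1.0\ \mathrm{\mu N\text{-}s/J}.$$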
Conclusions
A compact thrust stand of inverted pendulum design was used to measure the impulse of an electrothermal APPT. This device was operated using both PTFE and an electric solid propellant, HIPEP. The impulse bit for PTFE was roughly 100 µN-s at 5 J of initial stored energy, and it increased by about 30 µN-s per joule of additional stored energy. The impulse bit for HIPEP was typically 95%-99% of that of PTFE, exhibiting similar trends at each of the four discharge energies tested (5, 10, 15, and 20 J). The device used in this work was not designed to be an optimized APPT, so the specific impulse for PTFE was low at roughly 450 s, at the bottom of the range of more optimized coaxial APPTs tested using PTFE. The ablated mass of HIPEP for a given discharge energy is typically double that of PTFE, and, as a result, the calculated specific impulse is approximately half that of the thruster operating on PTFE. In the present work, we have found that the additional ablated mass of HIPEP does not increase the measured impulse when compared to operation with PTFE under identical testing conditions. The average specific impulse per pulse was calculated from the total mass loss and impulse bit measurements for each pulse. For PTFE, this was found to be relatively constant for a given discharge energy and not dependent on the total number of pulses, implying relatively constant surface conditions across the tests. The HIPEP propellant is significantly different in that it is a hygroscopic material, and absorbed water has been found to greatly affect the experimental results. Drying the material by exposure to vacuum allows much of the absorbed water to desorb and evaporate over about 24 h. However, there appear to be residual effects from the water. In a coaxial ablation-fed PPT, the result is increased ablation mass loss in the first several (about 10) arc discharges. The mass loss in these early pulses is up to 50% higher than in later pulses, but the total impulse during the early pulses is only 10% higher. These observations are attributed either to the desorption of deeply absorbed water remaining in the propellant after the drying process or to a chemical reaction with absorbed water that forms a surface layer that ablates more readily. As a result, the average specific impulse for 100-pulse tests on HIPEP was only 225 s. Correcting these data by removing the first 10 pulses from the data set yields an average specific impulse for HIPEP of 300 s. Increasing the test duration to thousands of pulses significantly diminishes the effect of these early, high-mass-loss pulses on the average specific impulse. In the long-duration tests, the average specific impulse is roughly the same as the value obtained from the 100-pulse tests when those data are corrected for the contribution of the first 10 pulses.
Author Contributions: M.S.G. conducted the impulse measurement experiments, analyzed the results, and wrote the initial draft. J.L.R. advised on the direction of the experiments and the organization of the results and analysis, and extensively edited the initial draft. K.A.P. also advised on the interpretation of the experimental results and led the analysis of the electromagnetic thrust contribution.
Funding: This work was funded by NASA research grant NNX15AP31H.
Acknowledgments: M.S. Glascock would like to graciously thank the NASA Space Technology Research Fellowship program for funding his graduate research. This work was a significant part of that research and would not have been possible without the support of this program. Additionally, the authors wish to thank DSSP for providing the HIPEP material in custom-made form for our research, as well as for numerous discussions on the nuances of HIPEP operation and handling. | 12,938.8 | 2020-05-30T00:00:00.000 | [
"Engineering",
"Chemistry"
] |
Factors Affecting Fines Flocculation Performance with Cofactor-Polyethylene Oxide
In the literature, neutral polyethylene oxide (PEO) flocculated fines at low shear rates, while with cofactor (CF) addition, the formed CF-PEO complex showed a larger ability to bridge fines, producing flocs. In this work, several process factors were found to have significant effects on fines flocculation. Increases in the CF to PEO ratio at constant PEO enhanced the bridging bonds, causing increases in the flocculation initial rate (efficiency), amplitude (floc size), and fastness (a decrease in characteristic time). On the other hand, an increase in the stirring rate (shear rate) in the flocculation vessel caused decreases in the initial rate and amplitude, and an increase in the fastness. All runs showed transient flocculation: the amplitude increased with time, reached a maximum at equilibrium, and then started to decrease, showing deflocculation. In brief, the CF to PEO ratio and the shear rate were found to be important parameters in mill operation, having significant effects on flocculation efficiency, fastness, and floc size.
Introduction
The flocculation process requires dual- or multi-component retention aid systems to retain colloids and fines. The conventional cationic retention aids were found to interfere with charged substances in furnishes, and neutral, high-molecular-weight polyethylene oxide (PEO) is used as an alternative [1,2]. PEO was found to work efficiently at low shear rates [3] with a cofactor (CF) bearing phenol groups [1,2,4]. In flocculation and retention processes, the following cofactors have been used in papermaking applications: modified phenolic resin (MPR), sulphonated kraft lignin (SKL), tannic acid (TA), phenol formaldehyde resin (PFR), and sodium naphthalene sulphonate (SNS) [5]. In the literature, phenol cofactors (CF) and PEO are not adsorbed on some colloids, fines, and fibers separately, but their combinations were found effective [6]. Several hypotheses explain the action of retention aids; the most dominant one is the association-induced polymer bridging mechanism of van de Ven and Alince [7], who questioned the network mechanism [6]. In association-induced polymer bridging, the negatively charged CF segments adsorbed on PEO coils expand and stiffen the PEO coils to a larger size (δ) due to repulsion among the CF segments on the coils, making the coils capable of bridging surfaces. This large coil of PEO with CF is a CF-PEO complex, working as a polyelectrolyte able to overcome the electrostatic double layer thickness (κ⁻¹) with a size δ > κ⁻¹ [8]. When PEO is used alone, the induced polymer bridging (Figure 1(a)) shows that the small PEO coil is not capable of bridging surfaces since δ < κ⁻¹, but with CF addition (Figure 1(b)), the expanded CF-PEO complex (δ > κ⁻¹) bridges the surfaces [7,9].
Flocculation processes are either heteroflocculation of dissimilar particles or homoflocculation of similar ones. The net flocculation rate (r_f) is the balance of the attachment rate (r_att) and the detachment or deflocculation rate (r_det) [3,9,10]. The re-flocculation rate (r_ref) is the rate at which detached particles form flocs again [11]. The attachment rate r_att can be assumed here to depend on several factors. One important parameter enhancing r_f is the flocculation efficiency (η), or capture efficiency (α = η) as used for polymeric retention aids. When PEO is used with CF, forming the CF-PEO complex in a suspension, the efficiency η will be a function of the complex, or mainly of factors related to PEO and CF. One factor is the PEO quantity (Γ) added to the suspension to maintain the fractional coverage θ = Γ/Γ_m on the surfaces; here Γ_m is the maximum quantity needed to maintain full coverage [9,12]. The second factor is the CF quantity added, or the CF to PEO ratio (φ) in the suspension, which mainly determines the size δ of the CF-PEO complex and the strength of the bridging bond between the complex and the surfaces [3,11]. The third factor is the shear rate (G) applied in the flocculation vessel as a stirring rate N (rpm). In previous work [9], since the attachment rate follows r_att = f(k_att, η), the shear rate G, which enhances the attachment rate constant (k_att) and affects the flocculation efficiency η, will also affect r_att. On the other hand, the shear rate G also enhances the detachment rate constant (k_det), causing r_det to increase [9]. What is more, the shear rate G effects on flocculation were reported in the literature as a breakage of flocs that enhances r_det [9,13], and as a dissociation of the polyelectrolyte retention aid that causes η to decrease [9]. Long exposure to hydrodynamic forces of similar intensity ought to be able to overcome the binding forces that hold flocs together [14]. The shear rate tends to break fiber flocs either by eroding fibers one by one from the surface of a floc [15,16] or by splitting a floc into two [17][18][19]. Macromolecules can be described as brittle; agitation of a suspension of fibers flocculated by a high-mass polyelectrolyte tends to cause breakage, reducing the mean molecular mass [20][21][22]. Addition of cationic starch solution initially caused an extensive flocculation of fibers, and with continued agitation, the induced attachments between fibers were gradually broken, resulting in a well-dispersed suspension [23,24]. The fourth factor is the set of parameters in PEO preparation, which change the PEO coil microstates prior to addition to the flocculation process and affect the flocculation efficiency η after addition. These parameters are the stirring intensity and time in the PEO dissolution unit, the time in the storage tank, the dilution in the preparation unit, and the shear rate of the pump in the transportation unit [9,25]. In the literature, the first factor was expressed as η = 2θ(1 − θ) [9,12]. The fourth-factor parameters in the preparation unit, acting prior to PEO addition, work to dissociate the PEO entanglements, decreasing their size δ and causing a drop in flocculation efficiency η [9,25]. The PEO entanglements were found to re-form after the dissolution of PEO granules in water resulted in a clear solution [26]. Two interacting coils are needed to build a primary entanglement of a size larger than a single coil, while more coils will re-form larger entanglements.
In the flocculation process, the PEO entanglement has a primary role: when the PEO dosage is added as entanglements together with CF, the size of the resulting CF-PEO complex becomes larger, which increases the complex's ability to bridge particles [9].
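As a quick numeric illustration of the coverage expression η = 2θ(1 − θ) quoted above, the sketch below evaluates it over the full coverage range; the tabulation itself is ours, not from the cited work.

```python
import numpy as np

# Minimal sketch: bridging efficiency versus fractional surface coverage.
theta = np.linspace(0.0, 1.0, 11)     # theta = Gamma / Gamma_m
eta = 2.0 * theta * (1.0 - theta)     # eta = 2 * theta * (1 - theta)
for t, e in zip(theta, eta):
    print(f"theta = {t:.1f} -> eta = {e:.2f}")
# eta peaks at 0.5 when theta = 0.5: a half-covered surface maximizes the chance
# that a polymer-bearing patch on one particle meets a bare patch on another.
```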
In this work, the second and third factors are the main subject of study, and the objectives are to determine the effects of increases in the ratio φ (at moderate values) and the shear rate G on the flocculation efficiency η, amplitude (floc size), and fastness. To this end, flocculation experiments on fines were performed using the CF-PEO retention aid at different shear rates G and values of the ratio φ. Since the detachment rate r_det is considered negligible (zero) at the initial time of flocculation, the initial rate of flocculation, which then equals the attachment rate r_att, can be taken as a measure of the flocculation efficiency η at constant shear rate G.
The flocculation amplitude can be taken as a measure of floc size, and the characteristic time of flocculation as a measure of flocculation fastness. Fines, the fiber fragments passing 200-mesh openings (sizes less than 76 µm) [27], were used to determine the flocculation characteristics of the CF-PEO system over a wide range of particle sizes and a wide range of fines retention applications in papermaking and the environment.
Materials
The fines used were separated by filtration of mixed pulps, taking the particles that passed the 200-mesh (76 µm) openings. Flocc 999, a PEO of 7 million molecular weight, was used as the flocculant. Interac 1323, a phenolic material, was used as the cofactor. Masson Maclaren Mill (Canada) provided the pulps, while I.Q.U.I.P Inc. (Canada) provided Flocc 999 and Interac 1323.
Experimental Setup
The experimental setup used in this work (Figure 2) is the one used in previous work [3,9,13]. The fines suspension (0.1% consistency) was added to the beaker, mixed at a constant stirring rate N (rpm), and then circulated by a peristaltic pump through the photocell of the Photometric Dispersion Analyzer (PDA) [28]. The PDA output signals, the direct current voltage (Dc) and the ratio (R) of the alternating to direct current voltages, were plotted versus time by a recorder. After reaching steady state, CF was added to the fines, followed by PEO. The Dc signal, the voltage of the light transmitted through the fines suspension, indicates particle concentration. The alternating voltage represents the root mean square (rms) of the transmitted light, or the rms of the particle number, while the ratio reading R indicates particle size. The change of the reading R with time indicates the flocculation rate, taken as a measure of flocculation intensity [28]. In this work, the vertical distance the pen moved, in arbitrary units (AU), was taken as the R reading. Both the Dc and R readings were plotted versus time (t) by a recorder (Figure 3). The slope of the R curve at the initial time gave the initial rate of flocculation, r_f = A_m/τ; here, A_m is the flocculation amplitude at equilibrium, and τ is the characteristic time of flocculation. The equilibrium time (τ_e) is the time the flocculation requires to reach equilibrium. The flocculation process was transient in all flocculation runs, deflocculating after reaching equilibrium. The slope at which the R reading decreases at initial deflocculation is taken as the initial rate of deflocculation (r_d). The time required to reach zero amplitude at the initial deflocculation rate is the characteristic time of deflocculation (τ_d).
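A first-order approach to equilibrium, R(t) = A_m(1 − e^(−t/τ)), is consistent with the definitions above (initial slope r_f = A_m/τ); the sketch below fits that form to a synthetic PDA-like trace. The exponential form and all numbers are our assumptions for illustration, not data from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def r_model(t, A_m, tau):
    """First-order approach to equilibrium: R(t) = A_m * (1 - exp(-t/tau))."""
    return A_m * (1.0 - np.exp(-t / tau))

# Synthetic stand-in for a digitized recorder trace (AU versus seconds).
t = np.linspace(0.0, 120.0, 60)
rng = np.random.default_rng(0)
r_obs = r_model(t, 8.0, 25.0) + rng.normal(0.0, 0.2, t.size)

(A_m, tau), _ = curve_fit(r_model, t, r_obs, p0=(5.0, 10.0))
r_f = A_m / tau  # initial rate of flocculation = initial slope of the fit
print(f"A_m = {A_m:.2f} AU, tau = {tau:.1f} s, r_f = {r_f:.3f} AU/s")
```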
Results and Discussion
In this work, two flocculation experiments were performed. The first studied the effect of the CF to PEO ratio φ at moderate values by adding different quantities of CF at constant PEO, keeping the other parameters constant. The second studied the shear rate effect on flocculation intensity at different stirring rates N (rpm), keeping the other parameters constant. One result of experiment one (Figure 4) shows that the flocculation intensity was enhanced when φ was doubled, seen as increases in the amplitude A_m and the initial rate r_f, which indicate an increase in floc size and flocculation efficiency η. The explanation of this increase is related to the increase in CF segments adsorbed on the PEO coils. Since the PEO coils are constant, doubling the CF quantity doubles the ratio φ, which doubles the CF segments adsorbed on the PEO coils. The PEO coils are then more stiffened and expanded, resulting in a larger CF-PEO complex that enhances the flocculation efficiency η and the floc size. Furthermore, the number of bridging bonds between the complex and the fines surfaces will be doubled, which strengthens the bridging between PEO and the surfaces [3,8,9], enhancing flocculation efficiency and floc size. We note that we used moderate values of φ, in the range used in papermaking to maintain the required specifications. High values of φ, which were reported [11] to maintain small complexes and produce weak flocculation, are not in the scope of this work. The second result of experiment one was the transient flocculation of fines with the CF-PEO system, recorded after equilibrium as a decrease of the ratio reading R with time and estimated as a deflocculation rate r_d. This transient behavior was also reported in previous work [3] for flocculation with PEO alone enhanced by CF addition, where it was ascribed to the unstable PEO entanglement, which dissociates with time. In this work, the deflocculation rate r_d (Figure 4) is not only enhanced by CF addition but also increases with increases in φ. This increase in φ, i.e., the increase in CF segments adsorbed on the PEO coils, causes the repulsion intensity among the PEO coils in the CF-PEO complex to increase. In early flocculation, this CF repulsion works to expand the CF-PEO complex, which enhances the flocculation efficiency η; after flocculation reaches equilibrium, the CF repulsion in the formed flocs works to separate the PEO coils, enhancing deflocculation.
Furthermore, increasing the ratio φ enhanced the flocculation fastness (Figure 5), shown as a significant decrease in the characteristic time τ. In experiment two, flocculation runs were performed at different stirring rates N (rpm), i.e., different shear rates G (s⁻¹). In each run, CF (0.25 mg/g fines) was added, followed by PEO (0.12 mg/g fines), maintaining a constant and moderate ratio φ (= 2.1). Results (Figure 6) show that the increase in stirring rate N caused decreases in the amplitude A_m and in the initial rate of flocculation r_f, indicating decreases in floc size and flocculation efficiency η. On the other hand, the increase in N caused a decrease in the characteristic time of flocculation τ, and relatively in the equilibrium time τ_e since τ < τ_e (Figure 3), showing faster flocculation. Comparing the effects of increases in the shear rate G and the ratio φ (at moderate values) on flocculation, the results show that: the flocculation efficiency η increased with the ratio φ and decreased with the shear rate G; the floc size increased with the ratio φ and decreased with the shear rate G; and the flocculation fastness increased with both the ratio φ and the shear rate G. This result indicates that a poor selection of the shear rate G in the process will break the large flocs produced by CF, increasing the operation cost. Furthermore, excess addition of CF at high φ will produce fast flocculation with small flocs [11], and when G is increased, the expected destructive effect of G on the flocs will be added. If small flocs are required, combining a high shear rate G with a high dosage ratio φ is seemingly not recommended, since the required size can be obtained by increasing the shear rate G alone.
Concluding Remarks
In this work, increases in CF addition at constant PEO, within a moderate range of the CF to PEO ratio, caused increases in the flocculation amplitude, rate, and fastness, making these moderate ratios recommended for use in flocculation and retention processes. All flocculation runs with the CF-PEO system were transient, showing deflocculation after equilibrium that was enhanced by increases in the CF to PEO ratio. Increases in the shear rate caused a decrease in the flocculation amplitude, producing small flocs, and a decrease in the characteristic time of flocculation, causing fast flocculation. Selecting the required floc size and flocculation fastness is therefore a matter of setting the CF to PEO ratio and the shear rate in operation. | 3,396.4 | 2014-01-24T00:00:00.000 | [
"Materials Science"
] |
ELiRF-UPV at SemEval-2018 Task 11: Machine Comprehension using Commonsense Knowledge
This paper describes the participation of ELiRF-UPV team at task 11, Machine Comprehension using Commonsense Knowledge, of SemEval-2018. Our approach is based on the use of word embeddings, NumberBatch Embeddings, and a Deep Learning architecture to find the best answer for the multiple-choice questions based on the narrative text. The results obtained are in line with those obtained by the other participants and they encourage us to continue working on this problem.
Introduction
In the Machine Comprehension using Commonsense Knowledge task, systems must answer multiple-choice questions given narrative texts about everyday activities. In addition to what is mentioned in the text, a substantial number of questions require inference using script knowledge about different scenarios.
In order to capture some script knowledge, we decided to use a word representation based not only on distributional semantic word models but also on a knowledge graph, ConceptNet (Speer et al., 2016). ConceptNet is a knowledge graph that connects words and phrases of natural language with labeled edges. It is designed to represent the general knowledge involved in understanding language. ConceptNet can be used in combination with sources of distributional semantics, particularly the word2vec Google News skip-gram embeddings (Mikolov et al., 2013) and GloVe 1.2 (Pennington et al., 2014), to produce new embeddings, NumberBatch embeddings, with state-of-the-art performance across many word-relatedness evaluations (Speer and Lowry-Duda, 2017).
More specifically, NumberBatch is a list of semantic word vectors that captures a richer meaning of terms, beyond the purely contextual information carried by other kinds of embeddings based on distributional semantics, e.g., Word2Vec or GloVe. These embeddings are obtained by combining Word2Vec and GloVe embeddings with knowledge extracted from ConceptNet by means of a technique known as retrofitting (Faruqui et al., 2014).
In this work, we used word representations based on NumberBatch embeddings because these representations encode semantically rich information related to commonsense. Moreover, in order to tackle this machine comprehension task, we used a Deep Learning architecture with new attention mechanisms. The inclusion of these new attention mechanisms allows us to better capture the similarities among the elements of the input. The attention mechanisms we introduce in this work are suggested by Seo et al. (2016), who obtained very competitive results in Question Answering tasks.
Resources and Preprocess
As we pointed out in Section 1, NumberBatch embeddings were used for the representation of words. These embeddings are provided by ConceptNet 5, which was compiled by the Commonsense Computing Initiative. ConceptNet 5 is freely available under the Creative Commons Attribution-ShareAlike license (CC BY SA 4.0) from http://conceptnet.io.
We explored several preprocessing techniques in the development phase. The best results were obtained with the following preprocess: conversion of all text to lowercase and elimination of the question marks "?". After this, we carried out a tokenization process.
System Description
We tested Deep Learning architectures based on similarities between d-dimensional NumberBatch embeddings of story (x), question (q) and answer (r). Specifically, our approaches learn representations of x, q and r to first compute similarities, and then make a classification decision.
These kinds of systems work well in Question Answering tasks, for instance, BiDAF (Seo et al., 2016) or QA-LSTM-Story (Pal and Sharma, 2016). For this reason, with the aim of improving the accuracy of these systems on this task, we incorporated some attention mechanisms of BiDAF into the QA-LSTM-Story system. A scheme of our system is shown in Figure 1. All Deep Learning systems tested in this work first take a story (x ∈ ℝ^{T×d}), a question (q ∈ ℝ^{J×d}), and an answer (r ∈ ℝ^{P×d}) as input. Specifically, each of these elements is a matrix with its word embeddings as rows. Note that the lengths of the representations (T, J, and P) are fixed by adding zero padding at the beginning to reach the length of the longest element.
Second, x, q, and r are processed by means of three non-shared Bidirectional Long Short Term Memory (BLSTM) networks (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997). These networks capture useful features (X, Q, and R) for making decisions based on similarities among the inputs. Moreover, we used BatchNormalization (Ioffe and Szegedy, 2015) and Dropout (Srivastava et al., 2014) with p = 0.3, after the input layer and after the BLSTM output, to improve the generalization of the model.
After that, we compute the similarities between each term of Q and each term of X (S₁ = QXᵀ ∈ ℝ^{J×T}) and the similarities between each term of R and each term of Q (in a similar way, S₂ = RQᵀ ∈ ℝ^{P×J}). Now, if we concatenate S₁ and S₂ and apply a fully-connected layer with softmax activation functions to classify, we reproduce exactly the QA-LSTM-Story system. However, in this work, we incorporated several attention mechanisms into this architecture in order to learn more complex relationships among the inputs.
One of these attention mechanisms is an adaptation of BiDAF. This adaptation uses, in addition to Query2Context (Q2X) and Context2Query (X2Q), two additional attention mechanisms, Answer2Query (R2Q) and Query2Answer (Q2R). R2Q and Q2R are identical to X2Q and Q2X but applied to the question q and the answer r.
We have also tested many other attention mechanisms, but the best system obtained in the development phase only contains X2Q. In order to obtain this attention, we first transform the similarities S₁ by applying a softmax activation function to each row, i.e., A[i,t] = e^{S₁[i,t]} / Σ_{t'=0}^{T} e^{S₁[i,t']}. After this, we compute Q = AX, to represent each row of Q as a weighted sum of the rows in X. That is, each row of Q is adapted in order to consider the most relevant rows of X.
From Q, we can consider more explicit relationships between x and r if we compute the similarities between Q and R, in the same way as S 1 and S 2 , to obtain S 3 .
With all this, we transpose the matrix S₁ and then concatenate all the similarity matrices. The result, (S₁ᵀ, S₂, S₃) ∈ ℝ^{(T+2P)×J}, is flattened by concatenating all rows as columns to obtain a vector O of dimension (T+2P)·J. Finally, we apply a softmax fully-connected layer to O to carry out the classification.
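A shape-level sketch of the similarity and X2Q attention computations described above is given below in NumPy, with toy dimensions; the BLSTM feature extraction is replaced by random matrices, so only the tensor algebra is illustrated.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

T, J, P, d = 7, 4, 3, 5  # toy lengths of story, question, answer; feature dim
rng = np.random.default_rng(1)
X = rng.normal(size=(T, d))   # stands in for BLSTM features of the story
Q = rng.normal(size=(J, d))   # ... of the question
R = rng.normal(size=(P, d))   # ... of the answer

S1 = Q @ X.T                  # (J, T) question-story similarities
S2 = R @ Q.T                  # (P, J) answer-question similarities
A = softmax(S1, axis=1)       # X2Q: row-wise softmax over story positions
Q_att = A @ X                 # each question row as a weighted sum of story rows
S3 = R @ Q_att.T              # (P, J) explicit answer-story similarities

O = np.concatenate([S1.T, S2, S3], axis=0).ravel()  # ((T + 2P) * J,) vector
print(O.shape)                # fed to the softmax fully-connected classifier
```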
To train the system, we generated a training set consisting of all the triples of the corpus (x, q, r). Thus, for each x and q, we generate two triples, (x, q, r₁) and (x, q, r₂). Then, if r₁ is correct, y(x, q, r₁) = 1, else y(x, q, r₁) = 0. At inference time, given x, q, and their two possible answers r₁ and r₂, we first build the two triples (x, q, r₁) and (x, q, r₂) and, second, obtain the network outputs y₁ and y₂, respectively. Finally, in order to decide which answer is correct, we select the one with the higher output value.
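The following minimal sketch mirrors the triple construction and the decision rule just described; the data structures and the toy scorer are illustrative only.

```python
def make_training_triples(examples):
    """Expand (story, question, [r1, r2], correct_idx) into labeled (x, q, r) triples."""
    triples = []
    for x, q, answers, correct in examples:
        for i, r in enumerate(answers):
            triples.append(((x, q, r), 1 if i == correct else 0))
    return triples

def predict(score, x, q, r1, r2):
    """Score both candidate answers and select the higher-scoring one."""
    y1, y2 = score((x, q, r1)), score((x, q, r2))
    return 1 if y1 >= y2 else 2

demo = [("story text", "question text", ["answer a", "answer b"], 0)]
print(make_training_triples(demo))
print(predict(lambda triple: len(triple[2]), "story", "q", "no", "yes maybe"))  # toy scorer
```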
Experimental Results
The results obtained during the development phase for the different systems mentioned above are shown in Table 1. It can be observed that Deep Learning systems with simpler attention mechanisms worked better than those with more complex attention mechanisms (system 2 versus systems 1 and 3). When X2Q was added to compute more explicit relations between x and r, the accuracy slightly improved from 79.21% to 80.08% (system 1 versus system 3). Moreover, we tested system 3 with word2vec (Google News skip-gram) instead of NumberBatch embeddings, obtaining an accuracy of 78.84%.
With these results, we chose the best system in the development phase (system 3 in Table 1).
The results obtained with this system on the test set are shown in Table 2.
Analysis of Results
We now analyze the results obtained with our best system. In particular, we analyze the network confidence, in intervals, when deciding which is the correct answer (r₁ or r₂) given a story x and a question q. This confidence c is defined as the absolute difference between the outputs for the correct class of each answer (y₁[1] and y₂[1]), i.e., c = |y₁[1] − y₂[1]|.
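In code, the confidence measure reduces to the one-liner below; the softmax outputs here are fabricated stand-ins for the network's two-way predictions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical network outputs for (x, q, r1) and (x, q, r2); index 1 = "correct" class.
y1 = softmax(np.array([0.2, 1.9]))
y2 = softmax(np.array([1.1, 0.4]))

answer = 1 if y1[1] >= y2[1] else 2    # decision rule from the previous section
confidence = abs(y1[1] - y2[1])        # c = |y1[1] - y2[1]|
print(answer, round(float(confidence), 3))
```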
During this analysis, we observed a maximum confidence c_max = 0.999 and a minimum confidence c_min = 0.000. Thus, there are extreme cases where the system is totally sure about the correct answer, or has total uncertainty. In the instance (i_x = 235, i_q = 1), we observe that the system is totally sure about the correct answer because the answer is explicitly found in the story. In a second instance (i_x = 131, i_q = 4), the system has total uncertainty because "$20.00" and "$15.00" do not appear in the NumberBatch embeddings. (Here, i_x and i_q refer to the indices of the story and the question in the test set.) Moreover, we have also performed a study of the system accuracy and the number of samples for each confidence level in [0, 1]. The results obtained are shown in Figures 2 and 3.
In general, as can be observed in Figure 2, the greater the confidence, the better the results obtained. However, in Figure 3, we observe that there are many samples with very low confidence values, e.g., 0.0-0.1. We think that, in order to reduce the number of samples in this confidence interval, it would be necessary to incorporate new knowledge resources.
Conclusions
In this work, we presented a Deep Learning architecture with new attention mechanisms in order to learn more complex representations and similarities among input elements (story x, question q and answer r). In order to capture some script knowledge, NumberBatch embeddings were used for the representation of words. With this approach we obtained competitive results.
As future work, we propose the study and development of new attention mechanisms to learn complex features and relationships. Moreover, we also consider it interesting to enrich the Deep Learning architectures with commonsense information beyond the use of NumberBatch embeddings, such as the script knowledge resources suggested by the competition organizers. | 2,409.4 | 2018-06-01T00:00:00.000 | [
"Computer Science"
] |
Factors shaping the COVID-19 epidemic curve: a multi-country analysis
Background Lockdown measures are the backbone of containment measures for the COVID-19 pandemic both in high-income countries (HICs) and low- and middle-income countries (LMICs). However, in view of the inevitable second and third global COVID-19 waves, assessing the success and impact of containment measures on the epidemic curve of COVID-19, and people's compliance with such measures, is crucial for more effective policies. The objective was to determine the containment measures influencing the COVID-19 epidemic curve in nine targeted countries across high-, middle-, and low-income nations. Methods Four HICs (Germany, Sweden, Italy, and South Korea) and five LMICs (Mexico, Colombia, India, Nigeria, and Nepal) were selected to assess the association using interrupted time-series analysis of daily case numbers and deaths of COVID-19, considering the following factors: the "stringency index (SI)", indicating how tightly the containment measures were implemented in each country, and the level of compliance with the prescribed measures using human mobility data. Additionally, a scoping review was conducted to contextualize the findings. Results Most countries implemented quite rigorous lockdown measures, particularly the LMICs (India, Nepal, and Colombia), following the model of HICs (Germany and Italy). Exceptions were Sweden and South Korea, which opted for different strategies. Compliance with the restrictions (measured as mobility related to home office, restraining from leisure activities, non-use of local transport, and others) was generally good, except in Sweden and South Korea, where the restrictions were limited. The epidemic curves and time-series analysis showed that the containment measures were successful in HICs but not in LMICs. Conclusion The imposed lockdown measures are alarming, particularly in resource-constrained settings, where such measures are applied independently of the population segment that drives the virus transmission. Measures that restrict people's movements regardless of the hardships caused by the COVID-related "no work, no food" situation are inequitable. Novel and context-adapted approaches to dealing with the COVID-19 crisis are therefore crucial. Supplementary Information The online version contains supplementary material available at 10.1186/s12879-021-06714-3.
Background
Soon after the start of the COVID-19 pandemic in China, many countries followed the Chinese example and put emphasis on transmission control through the quarantine of infected persons, the isolation of contacts, and then the lockdown of the entire population [1,2]. The theoretical background was based on a mathematical prediction model of the expected epidemic curve, showing that without transmission control the curve would be very high and steep, but with "non-pharmaceutical interventions (NPIs)" it would be prolonged, with lower case numbers, and thus manageable by the health services [3]. Strict lockdown measures were undertaken particularly in the high-income countries (HICs) of Europe and then carried forward by low- and middle-income countries (LMICs), often leading to economic recession, human suffering, and public unrest due to job and income loss [4,5]. However, there were some different approaches with less stringent lockdown policies, such as in Sweden and South Korea [6]. This provides an opportunity to analyse which counter-measures, and which level of compliance with such measures, had the highest impact on the shape of the epidemic curve five months after the start of the pandemic. Additionally, it enables us to investigate whether the real-life experience in different countries confirms the previously defined prediction model [3], and to show "when" and "how much" containment measures are successful in low- and high-income countries, which will help evidence-based policy decisions adapted to the local context.
Study countries
Four HICs (Germany, Sweden, Italy, and South Korea), two better-off MICs (Mexico and Colombia), and three LICs with lower economic power (India, Nigeria, and Nepal) were selected in order to reflect the measures and impact of the COVID-19 pandemic in countries with different levels of wealth, size, and population density. Academic contacts in these countries facilitated the data collection. Table 1 shows that large (India, Nigeria, and Mexico) and small countries (Sweden and Nepal) with high (South Korea and India) and low population densities (Sweden and Colombia) were included. The percentage of children within the total population was lower in countries with a high Human Development Index (the European countries and South Korea) and higher where the index was lower (Nigeria, Nepal, and India). Table 2 shows that countries with a high Gross Domestic Product (GDP) have a small informal economic sector, while the LMICs have a high proportion of "self-employed" or "vulnerable employment" who will suffer most from the lockdown measures. The only exception was South Korea, with a high GDP but a considerable informal economic sector, due to its recent history of moving from an LMIC to an HIC.
Scoping review of the literature
To complement our findings, a scoping review of the scientific literature was conducted with the following question: "What measures can influence the COVID-19 epidemic curve among the nine targeted countries?" The scoping review is an ideal approach to determine the breadth of available evidence. On May 9, 2020, a search was conducted in the online databases PubMed and the Cochrane Library with key search terms such as "COVID-19", "measures", and "factors". The inclusion criteria were scientific articles with information on influencing factors of the COVID-19 epidemic curve and public health measures adopted by the nine targeted countries. Observational and intervention studies including qualitative, quantitative, or mixed methodologies, as well as scoping reviews and full-text papers in English or Spanish, were included. Preprint scientific studies were included due to the urgency of the pandemic. Excluded articles were letters to the editor, opinions, guidelines, commentaries, and editorials. The study selection was done independently by three researchers (TRR, MAC, and RCS). The three sets of literature were then compared. Disagreements on the inclusion or exclusion of literature were solved through discussion or by including a fourth researcher (AK). The search was carried out in three stages. First, titles were evaluated according to the inclusion and exclusion criteria. Second, the same criteria were applied to the abstracts of the articles retained in the first stage. Third, full-text articles, and articles without abstract availability in the previous stages, were evaluated. However, after completion of the scoping review, new publications came up, which are presented in the "Discussion" section.
Measuring number of daily infections and deaths
Daily numbers of infections and deaths were collected from various existing data sources. Each country's national confirmed and deceased cases were collected through a data hub of COVID-19 datasets [7]. Furthermore, the COVID-19 situation at the subnational level was analyzed. The targeted regions were: Västra Götaland (Sweden), Lombardy (Italy), Baden-Württemberg (Germany), Daegu (South Korea), Kathmandu (Nepal), Nuevo León (Mexico), Abuja (Nigeria), North Santander (Colombia), and Kerala (India). For the European regions, data were obtained from the above-mentioned repository [8]; however, non-European areas were not accessible through the repository and were thus collected through each country's national or regional official health service websites [9][10][11][12][13][14]. Rates of infection and deaths per 100,000 population were estimated for all nine countries based on the study period from January to May 2020 and the corresponding midyear population size as the denominator [15].
Table 2 notes. Source: World Development Indicators (WDI), compiled by the World Bank from officially recognized international sources (https://data.worldbank.org/indicator?tab=all). "Employment in the informal economy as a percentage of total non-agricultural employment. It basically includes all jobs in unregistered and/or small-scale private unincorporated enterprises that produce goods or services meant for sale or barter. Self-employed street vendors, taxi drivers and home-base workers, regardless of size, are all considered enterprises. However, agricultural and related activities, households producing goods exclusively for their own use (e.g. subsistence farming, domestic housework, care work, and employment of paid domestic workers), and volunteer services rendered to the community are excluded." "Self-employed workers are those workers who, working on their own account or with one or a few partners or in cooperative, hold the type of jobs defined as a self-employment job (i.e., jobs where the remuneration is directly dependent upon the profits derived from the goods and services produced). Self-employed workers include four sub-categories of employers, own-account workers, members of producers' cooperatives, and contributing family workers." "Vulnerable employment is contributing family workers and own-account workers as a percentage of total employment." "Poverty headcount ratio at $3.20 a day is the percentage of the population living on less than $3.20 a day at 2011 international prices." "Proportion of employed people who live on less than $3.20 (in purchasing power parity terms) a day, expressed as a percentage of the total employed population ages 15 and older." ILO (2019). ILOSTAT database. www.ilo.org/ilostat. "Percentage of the population at risk of suffering multiple deprivations-that is, those with a deprivation score of 20-33 percent."
Measuring timing and intensity of the lockdown
To assess the intensity of the lockdown measures in the target countries, the 'Stringency Index' (SI) of the strictness of governmental policies was calculated using eight indicators: closure of schools, workplaces, public events, and/or public transportation; restrictions on gatherings, internal movements, and international travel; and quarantine requirements. Computation of the index followed the methodology described by Hale et al., which rates the intensity of governmental measures on a scale from 1 to 100, with 100 indicating the maximum application of all indicators mentioned above [16]. The mean (SD) and percentiles (25th, 50th, 75th, and 95th) of the SI were computed for each country and sub-country. Data on government responses in our target countries were collected from the Oxford COVID-19 Government Response Tracker [17].
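The sketch below shows the spirit of such an index: each of the eight indicators is rescaled to 0-100 and averaged. The ordinal maxima and the observed levels are hypothetical, and the flag adjustments of the official OxCGRT methodology are omitted.

```python
import numpy as np

# Hypothetical ordinal maxima for the eight containment indicators.
max_level = {"schools": 3, "workplaces": 3, "events": 2, "gatherings": 4,
             "transport": 2, "internal": 2, "international": 4, "quarantine": 3}
# Hypothetical observed policy levels on one day in one country.
observed = {"schools": 3, "workplaces": 2, "events": 2, "gatherings": 4,
            "transport": 1, "internal": 2, "international": 3, "quarantine": 2}

subscores = [100.0 * observed[k] / max_level[k] for k in max_level]
si = float(np.mean(subscores))  # 0 = no measures, 100 = all indicators at maximum
print(f"SI = {si:.1f}")
```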
Measuring people's compliance with the lockdown
To document people's compliance with the lockdown, data from the Google COVID-19 Community Mobility Reports were used to measure the change in human mobility [18]. In these reports, percentage changes in visits to different places (i.e., retail and recreation, grocery and pharmacy, parks, transit stations, workplaces, and residential areas) relative to the baseline level (i.e., the median value from January 3rd to February 6th, 2020) were estimated by aggregating the location data of Google account holders.
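A minimal sketch of that baseline comparison is shown below on synthetic visit counts; the lockdown date and the magnitude of the drop are invented for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
days = pd.date_range("2020-01-03", "2020-05-31", freq="D")
visits = pd.Series(100.0 + rng.normal(0.0, 5.0, len(days)), index=days)
visits.loc["2020-03-20":] *= 0.4  # hypothetical lockdown-era drop in visits

# Baseline: median of Jan 3 - Feb 6, 2020, as in the Community Mobility Reports.
baseline = visits.loc["2020-01-03":"2020-02-06"].median()
pct_change = 100.0 * (visits - baseline) / baseline
print(f"{pct_change.loc['2020-04-01']:.1f} % versus baseline")
```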
Data management and analysis Data management of the scoping review
Data from the studies included in the scoping review were extracted and recorded in an Excel spreadsheet. The following information was collected for each article: authors of the publication, country, study design, status of the publication, analysed measure (e.g., school closures or the lockdown), methodology, instruments, and results. No formal assessment of the methodological quality of the included articles was performed in this review; however, the quality of the papers was governed by the inclusion and exclusion criteria. Figure 1 shows the selection process of the papers [19]. A total of 1344 papers were initially retrieved. After the application of the inclusion and exclusion criteria, 17 publications were included in the synthesis of the review (Additional file 1). A narrative description is presented in the "Results" section.
Interrupted time-series analysis
To better understand the impact of the SI on the incidence and mortality as core outcome variables, an interrupted time-series analysis of mortality and morbidity rates was conducted independently for each country and sub-country [20]. The SI is the analysis predictor; in order to transform its continuous format into a meaningful intervention measure before the time-series analysis, the median value of the SI in each country was defined as "intervention one". This was based on observations that the SI was able to demonstrate an impact relative to the baseline values (both the minimum and the 25th percentile) at the 50th percentile. A sensitivity analysis was performed to test this assumption; an exception was made for Sweden, where the 10th percentile value was used as "intervention one" due to its relatively low stringency measures. The second and third intervention points were defined by the 75th and 95th percentiles, respectively. In this analysis, some countries implemented all three interventions, whereas others implemented one or two. When two percentiles revealed small differences (< 10%), the higher percentile was used in the regression. "Baseline trend" refers to the change in rate prior to intervention one; "change at first, second, or third intervention" refers to the change in rate immediately after each intervention; "trend after each intervention" refers to the continuous change in rate after the current intervention and until the next intervention; and the "overall trend after all interventions" refers to the change in rate due to all interventions. These four trends are presented as "rates of change per 100,000 population" together with their p-values at a 5% significance cut-off.
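A minimal single-intervention version of such a segmented regression is sketched below with synthetic data; the paper's analysis uses up to three intervention points, and the column names and simulated trend are our own.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n, t0 = 120, 60                        # days of follow-up; intervention day (hypothetical)
t = np.arange(n)
rate = 0.05 * t - 0.08 * np.clip(t - t0, 0, None) + rng.normal(0.0, 0.3, n)

df = pd.DataFrame({
    "rate": rate,                      # cases per 100,000 per day
    "time": t,                         # baseline trend
    "step": (t >= t0).astype(int),     # level change at the intervention
    "post": np.clip(t - t0, 0, None),  # slope change after the intervention
})
fit = smf.ols("rate ~ time + step + post", data=df).fit()
print(fit.params)  # baseline trend, change at intervention, post-intervention trend
```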
Dynamics of the COVID-19 pandemic according to the scoping review
Time lag between containment measures, daily slow-down effect, and deaths
A study in Italy demonstrated that the containment measures reduced the progression of the COVID-19 epidemic [21]. The time lag between implementing measures and the reduction of the COVID-19 growth rate was approximately 7-10 days. The analysis of different Italian regions showed that the earlier the measures were taken, the lower the cumulative incidence. The importance of implementing early measures was also observed in 25 European countries: the countries with the highest mortality (Italy, Spain, and France) were late to implement national restrictions. Sweden adopted fewer restrictions than the neighbouring countries and suffered a higher mortality rate [22].
Daily growth rate in a controlled and uncontrolled situation
Various modelling studies analysed different scenarios to control the spread of COVID-19. One study assessed the effectiveness of social distancing in Italy based on the level of adherence to quarantine. For a three-person household, it predicted 7, 8, 12, and 20 secondary infections over 14 days under complete, nearly complete, medium, and no quarantine, respectively [23]. For large households with 6 persons, 16, 19, 29, and 43 secondary infections in 14 days were predicted for the respective quarantine completion levels, suggesting that higher adherence to quarantine and a smaller household contribute to a lower number of secondary cases. In South Korea, during the outbreak, there was a positive correlation between compliance with lockdown measures and a decline in confirmed cases [24]. Likewise, "home office" and the delay of school opening led to a marked transmission reduction.
Choi and Ki simulated the epidemic in South Korea and predicted nearly 5 million COVID-19 cases without any measures, while the lockdown could reduce the transmission rate by 90% to 99% [25][26][27]. The combination of different mitigation measures seems to be crucial for reducing infections and deaths [26], as does increased compliance with the measures [28]. In Veneto, Italy, seventeen days after the lockdown strategy, 658 hospitalized cases (95% CI 618-698) could be prevented and the peak of the curve was delayed by 3 days [29]. In Italy, measures such as the "red zone" (lockdown of ten towns in Lombardy) effectively contained the outbreak [30].
A modelling study in India found that, for lockdown periods of 21, 42, and 60 days, the number of cases (378,036 infections without intervention) was reduced to 70,424 after 110 days in the 21-day lockdown scenario and further reduced to 42,950 in the 42-day scenario, with no additional reduction from a prolonged lockdown of more than 42 days [31]. Another modelling study in France showed that the isolation of individuals with no or mild symptoms was not sufficient to reduce the number of confirmed cases; however, both in France and in Northern Italy, a substantial case reduction could be achieved by a large-scale reduction of social interactions [32]. A study in South Korea estimated the effective reproduction number (R₀) to be 1.5 (95% CI 1.4-1.6), the intrinsic growth rate to be 0.6 (95% CI 0.6-0.7), and the "deceleration of growth" to be 0.8 (95% CI 0.7-0.8), which indicates sub-exponential growth dynamics of COVID-19 [33].
Reduction of peak prevalence, cumulative incidence, and R₀ by containment measures
A study in Italy and Spain comparing the daily percent increases of diagnosed cases, deaths, and ICU admissions before and after the national lockdowns showed that, before lockdown, the daily percent increase was high in Spain (38.5% for diagnosed cases, 59.3% for deaths, 26.5% for ICU admissions) and lower in Italy (21.6%, 32.8%, and 16.7%, respectively); however, after the first lockdown, the increase was considerably lower in both countries (11.9%, 17.6%, and 9.6% in Spain, and 2.5%, 13.7%, and 3.7% in Italy, respectively) [34]. After the second and more restrictive lockdown, all outcomes declined, particularly in Italy (−2.0%, −0.2%, and −16.8%, respectively), as they did in Spain (−2.7%, −1.8%, and −5.6%, respectively).
A modelling study in India showed that if 50% of symptomatic cases are quarantined within three days of developing symptoms, assuming a minimal basic reproduction number R₀ of 1.5 before symptoms develop, the cumulative incidence would decrease by 62% and the peak prevalence by 89%. In contrast, assuming that R₀ was 4 and that the infectiousness of asymptomatic cases was half that of symptomatic cases, the estimated cumulative incidence would decrease by only 2% and the peak prevalence by 8% [35]. In another modelling study in India, lockdown measures reduced the basic reproduction number from 2.3 before the lockdown to 0.15 after the measure [36].
Age-specific infection rates and case fatalities
A study in Germany showed that after establishing physical distancing in week 12, people aged 15-34 years played a predominant role in the spread of the disease compared to older (35-39 years) and younger age groups (10-14 years), presumably because non-adherence to social distancing was frequent in this age group [37].
In Korea's Daegu province, the outbreak generally began in the younger age groups, but case fatality was highest among people aged ≥ 80 years (12.1%), followed by those aged 70-79 years (5.6%) [38]. In 66 laboratory-confirmed fatal cases of COVID-19, the median age was 77 years (range, 35-93 years), and the female-to-male ratio was 44:56. In South Korea, the crude case fatality was higher among males (1.1%) than among females (0.4%) and increased with older age [33].
Risk of importation and airport measures
One of the first measures implemented by the Italian government was to suspend flights from China and to install checkpoints with thermoscanners at airports. However, this measure appeared not to be very effective in containing the epidemic [30]. Mandal et al. found in a modelling study in India that airport screening of symptomatic arrivals would delay the predicted "average time to epidemic (days to reach a prevalence of 1000 cases)" by 2.9 days [35]. To achieve a delay of 20 days, an additional 90% coverage in the screening of asymptomatic passengers would be needed, which is difficult to achieve; however, there are additional benefits of identifying asymptomatic arrivals rather than screening only symptomatic cases [35].
Cumulative and daily infections and deaths
Figure 2 shows the cumulative incidence of COVID-19 until the end of May 2020 (both laboratory confirmed and non-confirmed). There are mainly two types of epidemic curves: in Germany, Italy, and South Korea, a quick increase in cases can be seen, followed by a slow-down of the transmission, while there was an almost linear increase in Sweden. In the LMICs, we see an exponential increase of cases (Fig. 2; Table 3). Containment measures were implemented as a response; in some countries this happened before the first case was confirmed (minus values in Table 3), in some shortly after the first case, and in others later in time. In the five countries that had already reached the peak of the wave before May 31, the length of the critical period (from the start of the wave to its peak) was between 44 days (South Korea and Italy) and 90 days (Sweden), with Germany (61 days) in the middle. The LMICs did not reach the peak until 31 May 2020 because they had particularly long critical periods of 90 to 130 days (Colombia, Mexico, Nigeria, India, and Nepal). Figure 3 illustrates that only Germany, Italy, and South Korea show the typical epidemic curve, with a sharp increase to the peak and then a slower, right-skewed decrease of cases. The curves of India, Mexico, and Nepal have a similar shape, but only in the initial part, as they had not reached the peak at the time. Sweden, with a "hands-off" policy and relaxed strategy, has a flat and prolonged curve of new cases; likewise, Nigeria has a flat and prolonged curve, which was likely limited by the low testing capacity.
Table 3 (critical period of infection and delay to the containment measures in nine target countries), notes: (a) number of days after the first case until the first action; (b) number of days after the first case until full action, i.e., complete implementation of school closures, workplace closures, cancellation of public events, bans on public gatherings, closure of public transport, bans on internal and international travel, and public campaigns; (c) date with the peak number of newly confirmed cases; if the highest daily count occurred on May 30 or thereafter, the peak was considered "not yet reached"; (d) number of days after the first case until the peak date; if the peak was "not yet reached", the critical period was considered "ongoing"; (e) minus values in the delay to the first action mean that the country started initial containment measures before the first case was confirmed.
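Deriving the critical period from a daily case series is a small exercise in date arithmetic; the sketch below uses an invented, weekly-sampled series to illustrate the definition used in Table 3 (first confirmed case to peak).

```python
import pandas as pd

# Hypothetical confirmed-case series (weekly samples for brevity).
cases = pd.Series(
    [0, 1, 3, 8, 20, 45, 60, 52, 40, 30],
    index=pd.date_range("2020-02-01", periods=10, freq="7D"),
)

first_case = cases[cases > 0].index[0]   # date of the first confirmed case
peak_day = cases.idxmax()                # date with the highest count
critical_period = (peak_day - first_case).days
print(first_case.date(), peak_day.date(), f"{critical_period} days")
```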
The stringency of the containment measures and people's compliance level
As these findings seem to contradict the model predicted by Ferguson et al. [3], in which the curve is high and steep when there are no distancing measures but low and prolonged when containment measures are employed, we had a closer look at the stringency of the containment measures and people's compliance with these measures [16,18]. Figure 4 and Table 4 show the SI over time. Some countries started late but were then very fast with containment measures (Nepal, Mexico, and Italy), while others started early but then strengthened the measures step by step (Germany, South Korea, and India). Others opted for less stringent measures, particularly Sweden and South Korea (the latter after a month of strict measures). The LMICs were generally more stringent than the HICs.
Human movements after the introduction of containment measures in six countries are shown in Fig. 5, which shows that in Colombia (similar to Italy, India, Nigeria, and Nepal), people stayed at home and did not pursue many extra-domestic activities. Germany (similar to Mexico) illustrates a less strict restriction of mobility: people stayed at home more and limited recreational activities, but used public transport and visited public parks more frequently. In Sweden and South Korea, the less stringent containment policy allowed people to continue going to work, visiting pharmacies and groceries, and using public transport almost as usual, while increasing visits to parks rather than other recreational activities.
Containment measures shaping the epidemic curve (Interrupted time-series analysis)
Findings from the time-series analysis for the infection and death rates showed different patterns across the nine countries (Tables 5 and 6; Fig. 6).
• India: At baseline (prior to any intervention), infection rates showed a slight increase, which reversed temporarily at the time of the first intervention (−0.148; p < 0.05) before starting to increase again at the following intervention (0.14; p < 0.05). At the sub-country level, in Kerala, a fairly similar scenario was observed, with the exception of the rate at and after the first intervention. The death rate followed the scenario of the infection rate but at a slower pace.
• Nepal: The baseline trend did not indicate a statistically significant rate of change, but showed an increasing trend after the first intervention (0.004; p < 0.05). Although a slightly different rate of change was observed at sub-country level (Kathmandu), generally, the rate of reduction followed a similar trend of increasing infection rates throughout the period. Mortality rates were low and therefore no pattern of change was observed. • Nigeria: The baseline trend did not indicate a rate of change but showed an increasing trend throughout the follow-up period. In Abuja, the same trend was observed but at a higher rate; however, a decreasing trend in infection rate was observed after the second intervention. The death rates at country and sub- country levels followed the same trend but with an increasing death rate at baseline and after intervention one but decreased following to the second intervention. Nevertheless, none of these changes showed a significant trend. • Colombia: The increasing infection rates at baseline were minimal but significant and continued to increase significantly in spite of the second and third intervention. In North Santander, the rate of infection started with an increasing trend but responded to both interventions with a mild decline. The same scenario was observed for the death rates at country and sub-country levels. • Mexico: Mexico had minimal changes in the stringency index after the first intervention but was successful in reducing the infection and death rates significantly thereafter (infection: −2.676; p < 0.05, death: −0.359; p < 0.05). However, after this initial success both infection and death rates increased shortly after the intervention; this trend was similar in Nuevo Leon. • South Korea: South Korea had a continuous increase in the SI with three different intervention episodes. At baseline, the country showed an increasing infection rate until the first intervention before cases started to decline significantly (−0.181; p < 0.05), which was continued after the third intervention (−0.119; p < 0.05). Daegu Province (where more than half of the cases occurred) followed a similar trend, but the increasing stringency measures succeeded in reducing the overall infections (−0.036; p < 0.05). The impact of the interventions on death rates was even more significant both at country and sub-country level. • Italy: The rate of infection was much higher in Lombardy compared to the whole country. Nevertheless, Table 5 Interrupted time-series regression of infection rate per 100,000 in relation to countries-and sub-countries stringency Indices (Intervention) Intervention 1 measured at 50th, intervention 2 measured at 75th and intervention 3 measured at 95th percentile * Significant change (p < 0.05) a intervention 1 is measured at 10th percentile instead of 50th (only in Sweden) containment measures managed to reduce infection rates significantly at both levels (country: −1.163; p < 0.05; Lombardy: −1.520; p < 0.05). The same results were obtained for the mortality, which equally maintained a significant decline after the interventions (country: −0.174; p < 0.05; Lombardy: −0.325; p < 0.05). • Germany: Germany showed an increasing incidence before imposing containment measures, but this rate started to significantly decline after the first intervention; this decline continued after the second and third interventions so that the rates could be reduced by −0.232 (p < 0.05). Baden-Württemberg followed a fairly similar trend. 
• Sweden: Sweden started late in imposing containment measures and kept a low SI. Despite the low average SI, the country managed to reduce the rate of infection immediately after the first intervention and continued to do so through the following two interventions, although the reduction was statistically insignificant (−0.21; p > 0.05). The picture was different in Västra Götaland, where the containment measures generally failed to reduce the rate of infection significantly. The death rate followed a similar trend at both country and sub-country levels.
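The country-level estimates above come from interrupted time-series (segmented) regression. The sketch below shows the general shape of such a model for a single intervention, fitted by ordinary least squares; the variable names and synthetic data are illustrative assumptions, not the study's actual dataset or code.

```python
# Minimal sketch of a single-intervention interrupted time-series (ITS)
# regression fitted by ordinary least squares. The variable names and the
# synthetic data are illustrative assumptions, not the study's dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
t = np.arange(60, dtype=float)          # day index
t0 = 30                                 # intervention day
post = (t >= t0).astype(float)          # level-change dummy
t_since = np.where(t >= t0, t - t0, 0)  # slope-change term

# Synthetic infection rate per 100,000: rising baseline trend,
# then a post-intervention change in slope (here -0.148, as in India)
rate = 1.0 + 0.10 * t - 0.148 * t_since + rng.normal(0.0, 0.5, t.size)

X = sm.add_constant(np.column_stack([t, post, t_since]))
fit = sm.OLS(rate, X).fit()
print(fit.params)    # intercept, baseline slope, level change, slope change
print(fit.pvalues)   # significance of each segment coefficient
```

Multiple interventions, as in the study, would simply add one level-change dummy and one slope-change term per intervention date.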
Effective containment measures: scoping review
Main messages from the scoping review about the effect of containment measures on infection rates were that containment measures, particularly when initiated early, could slow the progression of disease transmission (Italy) [21], reduce the basic reproduction number R0 from 2.3 to 0.15 (India) [36], and delay the peak of the epidemic by 3 days in modelling studies [29,35]. A combination of containment measures is more effective, leading to a reduction of secondary cases by 90 to 99% in South Korea [25]. In Spain and Italy, the second intervention in particular, when added to the first one, led to a reduction of infections [34]. However, the lockdown of mild and asymptomatic cases alone did not have much effect in modelling studies [32], and prolonged lockdown beyond 42 days did not have an additional benefit (India) [31]. Furthermore, fever checking in airports needs very high coverage to be effective, particularly considering asymptomatic arrivals [35]. Findings from our review led to the same conclusion: containment measures were effective in reducing transmission dynamics, but only in HICs, and only minimally in LMICs.
Wealth and disease burden
Our participant countries represented a range of income groups, whereby Germany, Sweden, South Korea, and Italy belong to the high-income category, in contrast to the LMIC group, which comprises the rest of the countries. A country's wealth status has apparent implications, reflected in the distribution of the cumulative infection and death rates (Fig. 2). Higher-income nations displayed a sharp increase in both infection and death rates at the early stages of the pandemic, which slowed down 4 to 6 weeks later, with the exception of Sweden, which had a less pronounced decline due to its "relaxed" containment policy. Among the LMICs, the increase in infections was steeper, started earlier, and continued throughout without reaching a peak during the observation period. This difference warrants further investigation.
Assessing the containment measures in LMICs and HICs
The SI as a summary measure of the different components of the containment strategies implemented over time revealed the following (Fig. 4). High and long-lasting SIs were found particularly in LMICs (India, Nepal, and Colombia), while lower SIs and/or shorter durations were observed in HICs, particularly in Sweden, South Korea, and Germany. Mobility data (Fig. 3), reflecting people's compliance with the lockdown measures, also suggest that populations in LMICs were more compliant than those in HICs.
In South Korea, Germany, and Italy, the containment measures were successful in reducing infection rates and deaths, particularly when several restrictions were combined. In Sweden, the effect was also present but not statistically significant, which is explained by the minimal lifestyle changes during the epidemic. Conversely, the effect in LMICs was disappointing relative to the strict containment measures imposed, particularly in India, Nepal, and Colombia. Although people's compliance with the measures appears satisfactory, these countries achieved only minimal and temporary reductions in infection rates and deaths. At the sub-country level, too, no apparent or sustained success could be observed. For instance, in Kerala, India, containment measures were particularly strict, but only a minimal transient effect on transmission reduction could be observed.
Likely causes for the unsuccessful containment measures in LMICs
Data from LMICs are usually less reliable than those from HICs. Systematic testing is rarely done, even in symptomatic cases, and the numbers of infections and deaths are mainly health-service based. Nevertheless, the information on the sharp increase in new infections and deaths is worrying enough, although the real burden is most likely much higher.
We assume that the high proportion of people working in the informal economic sector, particularly those in vulnerable employment (India, 74.5% vs. Germany, 5.7%), explains why lockdown measures are impossible to comply with in poverty areas. "No work, no food" illustrates that the majority of people in these areas cannot afford to stay at home. This is not captured by our human mobility analysis, which rests on smartphone ownership and does not reflect the movements of the poor. In other words, the lockdown measures are not followed by a large population segment, which drives the virus transmission.
Comparison with similar studies from across the world
The additional literature search after completing the scoping review and the analysis of data from the earlier stages of the pandemic added the following information: a study including 41 countries showed that containment measures, including non-pharmaceutical interventions, were effective in reducing COVID-19 transmission, some measures more than others [39]. A recent study from India found that the time-varying reproduction number R(t) was reduced in several states as a result of various containment measures [40]. It was shown that the reproduction number increases with higher population density, as density facilitates the transmission of the virus. Thus, mobility restrictions could markedly bring down COVID-19 spread in densely populated regions (see also Table 1 on population density) [41,42]. However, such restrictions could not be implemented for a long period in LMICs, where the proportion of people in the informal sector is high, as discussed above.
Other studies showed the impact of containment on mobility. A study by Barbieri et al. covering ten countries found a decrease in private mobility during the first wave of the pandemic [43]: people abstained from walking (11.3% reduction in Iran), cycling, or driving a car (10.2% and 13.7% reduction, respectively, in Ghana). The use of public transportation also decreased, most notably in Iran (18.7% less use of trams), Australia (7.2% less use of trains), and Norway (decreased use of buses, −19.4%, and airplanes, −4.9%).
Another recent study summarizing guidance for low-income countries argued that authorities in African countries could learn from China to improve emergency responses to pandemics, be more proactive, and commit to designing and executing long-term plans for future pandemics. Furthermore, hygiene and public participation should be promoted as routine practice in all communities in Africa. Liaising with medically sophisticated countries would facilitate real-time information exchange, helping to narrow the gaps between advanced countries and LMICs such as those in Africa. African countries should also increase their capacity to produce their own anti-epidemic supplies, such as personal protective equipment and testing kits, to flatten the curve [44].
These study results were in line with the results of the present study (reduced mobility during the lockdown, population density as a risk factor for COVID-19 spread) but did not distinguish between rich and poor countries.
Study limitations
This study utilized existing records of national and sub-national infection and death cases as reported by established sources [7][8][9][10][11][12][13][14], including the pragmatic SI measure and Google mobility data as described elsewhere [16,18]. As with data reporting for other diseases, the authors acknowledge possible misclassification of diagnoses, particularly in LMICs, including frequent underreporting of mortality and morbidity data. However, the information sources used here have proven useful in several recent epidemiological studies. In addition, although the SI is a novel index in research, our results suggest plausible interpretations of it that follow what was expected in the corresponding countries. Since Google mobility data are prone to several limitations, as acknowledged by their providers [18], we interpreted this information cautiously and integrated these records only into the descriptive statistics to aid the discussion.
Conclusion
Compared to HICs, the transmission dynamics seem to follow different paths in LMICs, requiring different and more context-specific strategies to contain the spread of the virus and protect the most disadvantaged societies. This is certainly a novel challenge for the global health community, and experiences from local settings will likely help shape national and global policies and find new ways of dealing with the COVID-19 pandemic and its disastrous impact on people's lives.
"Medicine",
"Political Science",
"Economics"
] |
Growth of Ni-loaded CdS nanorod structures for photocatalytic and dye degradation applications under solar irradiation
Given the exponential increase in global energy consumption and the environmental degradation caused by fossil fuels, it is critical to develop inexhaustible and sustainable resources. Solar energy is a clean and environmentally friendly energy source, and harvesting it for photocatalysis is a promising route to various energy-generation applications such as hydrogen production. Herein, cadmium sulphide and nickel-doped cadmium sulphide (0.5, 1, and 5 wt% Ni) are used as photocatalysts for water splitting to produce hydrogen (H2). Cadmium sulphide was prepared through the chemical precipitation method and Ni-CdS by the hydrothermal technique. The purity and phase formation were examined by X-ray diffraction (XRD) and validated via Rietveld refinement using the FullProf software. The surface morphology and structure of the as-synthesized materials were evaluated by field emission scanning electron microscopy (FESEM) and transmission electron microscopy (TEM). The Ni-CdS nanocomposite with 1.0 wt% Ni exhibits the highest H2 evolution rate of 9 mmol g−1 in 5 h with strong photo-stability, about 50 times higher than that of CdS. The material was also tested for the photocatalytic degradation of organic dye: the newly prepared composite materials (CdS-Ni-NiO) were used for the photocatalytic degradation of methylene blue (MB) dye, and Ni (1.0 wt%)-CdS showed an optimal degradation percentage of 95.4% under artificial solar light in 90 min. The crystal growth mechanism shows that the spherical CdS structures agglomerate to form nanorod structures when doped with Ni metal, which is also verified by the TEM images of CdS and Ni-doped CdS. The XPS peaks observed at 854.88 eV and 861.07 eV for Ni2+, with an energy separation of 6.18 eV, confirmed the existence of NiO with Ni/CdS. The Raman bands of pure CdS and Ni (1.0 wt%)-CdS nanorods were observed at 300 cm−1 and 293 cm−1 for the 1LO phonon and at 601 cm−1 and 586 cm−1 for the 2LO phonon, respectively. Ni tuned the CdS band gap from 2.36 to 2.20 eV. The results pave the way for designing multi-component CdS-Ni nanocomposites for highly efficient H2 evolution and other environmental applications.
Introduction
In recent years, interest in the design and development of semiconductor materials with specific surface morphologies has grown because of their incomparable novel electronic and optical properties [1][2][3] and, consequently, their wide range of potential applications, such as bioimaging [4], chemical separation media, solar energy conversion [5], and photocatalysis [6]. Several approaches have been developed to prepare both inorganic and organic materials with specific morphologies, including the hydrothermal method [7][8][9], the solvent-relief-self-seeding (SRSS) process [10], the self-assembly method [11], template methods [12][13][14][15], and CVD growth in a confined environment [16]. Cadmium sulphide (CdS, a II-VI group semiconductor) nanocrystals and their composites are an important class of materials among semiconductors. Owing to its reasonable band gap, CdS attracts researchers and the scientific community worldwide to design it with controlled crystal phase, shape, size, and morphology. CdS is a direct band gap semiconductor with a structure-dependent band gap: the wurtzite (hexagonal) structure has a gap of 2.40 eV, whereas the zinc blende (cubic) structure has 2.38 eV [17,18]. This unique tunable band gap feature has made it one of the model materials for exploring the distinctive optical and electronic properties of quantum-confined semiconductors. It has been reported that the surface defect states of CdS promptly trap the optically excited electrons and holes, successfully quenching their recombination [19]. Thus, it is widely applied in nonlinear optical materials, optoelectronics, biological labeling, light-emitting diodes for flat panel displays, photoelectric conversion in solar cells, and photocatalytic applications such as water-splitting hydrogen production and industrial dye mineralization [20][21][22][23].
Among the many synthesis approaches developed for CdS nanocrystals, the production of good-quality CdS using a one-pot synthetic method [22], the synthesis of large quantities of single-crystal CdS with microbelt morphology via a modified thermal evaporation method [23], and the development of capping-agent-stabilized CdS nanoparticles via a simple and green route [20] have been reported.
In this specific context, the current research outlines a simple chemical technique for transforming CdS nanoparticles into CdS nanorods utilizing a Ni catalyst. The process yielded a considerable amount of uniform CdS nanorods with integrated Ni. As anticipated, this procedure facilitated the presence of both Ni and Ni2+ phases on the CdS rods.
Synthesis of CdS and Ni-loaded CdS
The production of CdS nanoparticles involved a straightforward chemical process. In this preparation, 4.02 g of cadmium chloride (CdCl2·H2O) and 0.305 g of thiourea [(NH2)2CS] were separately dissolved in 50 ml of water in beakers under magnetic stirring at room temperature. Ammonium chloride solution was gradually added to the cadmium chloride solution until the pH reached 10 to 11. Subsequently, the thiourea solution was combined with the above solution and stirred at 80 °C in a silicone oil bath until the entire solution turned yellow. The resulting yellow precipitates were washed multiple times with distilled water, followed by centrifugation to collect the powder sample. The sample was then dried in an oven at 50 °C for 24 h. This synthesized nanomaterial was designated as CdS nanoparticles. The same procedure was employed to synthesize Ni-doped CdS (referred to as Ni-CdS) nanoparticles. Different amounts of Ni (0.5, 1.0, 2.0, and 5.0 wt%) were loaded onto the CdS surface to create the nanocomposite photocatalysts.
Materials characterizations
X-ray diffraction is a standard technique for identifying the crystallographic phases of a sample using X-rays. As the XRD experiment is versatile and non-destructive to samples, structural data based on the elastic scattering of X-rays from individual atoms can be obtained. Because each substance has a unique lattice structure, the Bragg equation produces a specific diffraction pattern:

$$ n\lambda = 2d\sin\theta $$
Here, $d$ is the distance between adjacent diffracting planes, $\theta$ is the angle between the incident wave and the planes, $\lambda$ is the wavelength of the incident wave, and $n$ is an integer. The CdS and Ni-loaded CdS powders were characterized by X-ray diffraction (XRD) on a Bruker D8-Advance X-ray diffractometer with monochromatized Cu Kα radiation (λ = 1.5418 Å). The XRD analysis used an accelerating voltage of 40 kV and an applied current of 40 mA. The lattice parameters were determined using the equations below [24,25].
The relative percentage variation in the interplanar distance $d$ is calculated as [26]

$$ \Delta d\,(\%) = \frac{d_{\mathrm{obs}} - d_{\mathrm{JCPDS}}}{d_{\mathrm{JCPDS}}} \times 100 $$

where $d_{\mathrm{obs}}$ is the measured spacing and $d_{\mathrm{JCPDS}}$ is the standard value from the JCPDS card.
Taking the first-order approximation $n = 1$ for computing the lattice parameters, the lattice constant $a$ is obtained from the {100} planes:

$$ a = \frac{\lambda}{\sqrt{3}\,\sin\theta_{100}} $$
The lattice constant $c$ is determined from the {002} planes:

$$ c = \frac{\lambda}{\sin\theta_{002}} $$

In the hexagonal arrangement of CdS, the Cd and S atoms are tetrahedrally coordinated. The Cd-S bond length $L$ can be expressed as [27]

$$ L = \sqrt{\frac{a^2}{3} + \left(\frac{1}{2} - u\right)^2 c^2} $$

The position parameter $u$ in the wurtzite structure represents the displacement of each atom along the $c$ axis and is defined as [28]

$$ u = \frac{a^2}{3c^2} + \frac{1}{4} $$

The volume of the hexagonal primitive cell is calculated using

$$ V = \frac{\sqrt{3}}{2}\,a^2 c $$

As the $c/a$ ratio decreases, the value of $u$ increases, so the tetrahedral distances remain almost unchanged despite the distortion of the tetrahedral angles.
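For illustration, the sketch below turns two hypothetical peak positions into the hexagonal lattice parameters and the derived quantities defined above; the 2θ inputs are assumptions chosen only to lie near typical CdS reflections.

```python
import numpy as np

LAMBDA = 1.5418  # Cu K-alpha wavelength (angstrom), as used in the measurements

# Hypothetical peak positions (degrees 2-theta) for the (100) and (002) reflections
two_theta_100, two_theta_002 = 24.9, 26.5

theta_100 = np.radians(two_theta_100 / 2.0)
theta_002 = np.radians(two_theta_002 / 2.0)

a = LAMBDA / (np.sqrt(3.0) * np.sin(theta_100))   # lattice constant from {100}
c = LAMBDA / np.sin(theta_002)                    # lattice constant from {002}
u = a**2 / (3.0 * c**2) + 0.25                    # wurtzite position parameter
L = np.sqrt(a**2 / 3.0 + (0.5 - u)**2 * c**2)     # Cd-S bond length
V = (np.sqrt(3.0) / 2.0) * a**2 * c               # hexagonal cell volume

print(f"a = {a:.3f} A, c = {c:.3f} A, u = {u:.4f}, L = {L:.3f} A, V = {V:.2f} A^3")
```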
The $\beta_{hkl}\cos\theta_{hkl}$ values on the y-axis were plotted as a function of $4\sin\theta_{hkl}$ on the x-axis, and from the linear fit the crystallite size $D$ and the microstrain $\varepsilon$ were calculated.
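A minimal sketch of this isotropic W-H fit follows; the peak list is hypothetical and assumed to be already corrected for instrumental broadening.

```python
import numpy as np

K, LAMBDA = 0.9, 1.5418e-10  # shape factor and Cu K-alpha wavelength (m)

def wh_isotropic(two_theta_deg, beta_rad):
    """Williamson-Hall isotropic strain model:
    beta*cos(theta) = K*lambda/D + 4*eps*sin(theta), fitted as a line."""
    theta = np.radians(np.asarray(two_theta_deg) / 2.0)
    x = 4.0 * np.sin(theta)
    y = np.asarray(beta_rad) * np.cos(theta)
    strain, intercept = np.polyfit(x, y, 1)   # slope = eps, intercept = K*lambda/D
    return K * LAMBDA / intercept, strain

# Hypothetical peak list: 2-theta (deg) and FWHM beta (rad)
two_theta = [24.9, 26.5, 28.2, 43.8, 47.9, 51.9]
beta = [0.0048, 0.0046, 0.0047, 0.0055, 0.0058, 0.0060]

D, eps = wh_isotropic(two_theta, beta)
print(f"crystallite size D = {D*1e9:.1f} nm, microstrain eps = {eps:.2e}")
```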
In many situations, homogeneity and isotropy cannot be assumed. An anisotropic strategy is introduced to incorporate more realistic conditions; hence, the Williamson-Hall relation is modified with an anisotropic strain $\varepsilon$. In the uniform stress deformation model (USDM), the lattice deformation stress is assumed to be uniform in all crystallographic directions, with only a small microstrain present in the fine particles. In the USDM, Hooke's law applies, giving a linear relation between stress and strain [27][28][29][30]:

$$ \sigma = Y_{hkl}\,\varepsilon $$

where $\sigma$ is the crystal stress, $\varepsilon$ is the anisotropic microstrain, and $Y_{hkl}$ is the modulus of elasticity (Young's modulus), which depends on the crystallographic orientation. This approximation holds for relatively small strains; as the strain increases, the particles deviate from this linear relationship. To address this, the Williamson-Hall equation is modified by substituting $\varepsilon = \sigma / Y_{hkl}$ from equation (12), resulting in [26-29]

$$ \beta_{hkl}\cos\theta_{hkl} = \frac{k\lambda}{D} + \frac{4\sigma\sin\theta_{hkl}}{Y_{hkl}} $$

For a hexagonal crystal, Young's modulus is given by [29-32]

$$ Y_{hkl} = \frac{\left[h^2 + \frac{(h+2k)^2}{3} + \left(\frac{a l}{c}\right)^2\right]^2}{S_{11}\left[h^2 + \frac{(h+2k)^2}{3}\right]^2 + S_{33}\left(\frac{a l}{c}\right)^4 + \left(2S_{13} + S_{44}\right)\left[h^2 + \frac{(h+2k)^2}{3}\right]\left(\frac{a l}{c}\right)^2} $$

Here $a$ and $c$ represent the lattice parameters, while $S_{11}$, $S_{13}$, $S_{33}$, and $S_{44}$ correspond to the elastic compliances of CdS, with values of 2.069 × 10⁻¹¹, −0.581 × 10⁻¹¹, 1.697 × 10⁻¹¹, and 6.649 × 10⁻¹¹ m² N⁻¹, respectively [33].
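The direction-dependent modulus can be evaluated directly from the compliances quoted above; in the sketch below the lattice constants are hypothetical placeholder values.

```python
# Elastic compliances of CdS (m^2/N), as quoted in the text [33]
S11, S13, S33, S44 = 2.069e-11, -0.581e-11, 1.697e-11, 6.649e-11

def young_modulus_hexagonal(h, k, l, a, c):
    """Direction-dependent Young's modulus Y_hkl for a hexagonal crystal
    (equation (15)); a and c in angstrom."""
    p = h**2 + (h + 2 * k)**2 / 3.0
    q = (a * l / c)**2
    return (p + q)**2 / (S11 * p**2 + S33 * q**2 + (2 * S13 + S44) * p * q)

a, c = 4.136, 6.713  # hypothetical refined lattice constants of CdS (angstrom)
for hkl in [(1, 0, 0), (0, 0, 2), (1, 0, 1)]:
    print(f"Y_{hkl} = {young_modulus_hexagonal(*hkl, a, c) / 1e9:.1f} GPa")
```

As a quick sanity check, the (100) direction reduces to $1/S_{11}$ (about 48 GPa) and the (002) direction to $1/S_{33}$ (about 59 GPa).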
A different method, the uniform deformation energy density model (UDEDM), can be used to calculate a crystal's energy density. For a system obeying Hooke's law, the energy density $u_{ed}$ (energy per unit volume) is a function of strain:

$$ u_{ed} = \frac{\varepsilon^2\,Y_{hkl}}{2} $$
Thus, using the energy-stress relationship of equation (16), equation (12) can be rewritten as

$$ \beta_{hkl}\cos\theta_{hkl} = \frac{k\lambda}{D} + 4\sin\theta_{hkl}\left(\frac{2u_{ed}}{Y_{hkl}}\right)^{1/2} $$

Knowing the $Y_{hkl}$ value, the lattice strain can be determined. The W-H plots assume an isotropic broadening of the reflections, implying that the diffracting domains are isotropic with respect to the microstrain contribution. Extending the isotropic treatment, an average 'size-strain plot' (SSP) helps evaluate the size-strain parameters. This methodology has the advantage that data from high-angle reflections, where the accuracy is typically lower, are given less weight [34]. It assumes that the 'strain profile' is described by a Gaussian function and the 'crystallite size' profile by a Lorentzian function [28,35]:

$$ \left(d_{hkl}\,\beta_{hkl}\cos\theta_{hkl}\right)^2 = \frac{k}{V_s}\left(d_{hkl}^2\,\beta_{hkl}\cos\theta_{hkl}\right) + \left(\frac{\varepsilon_a}{2}\right)^2 $$

In this context, $d_{hkl}$ is the lattice spacing between the $\langle hkl \rangle$ planes, $V_s$ is the apparent volume-weighted average size, and $\varepsilon_a$ is an apparent strain measure consistent with the root-mean-square (RMS) strain [28]. The volume-averaged true size for spherical crystallites is $D_v = \frac{4}{3}V_s$. The Wilson equation was used to determine the upper-limit strain $\varepsilon_{hkl}$ in a given direction,

$$ \varepsilon_{hkl} = \frac{\beta_{hkl}}{4\tan\theta_{hkl}} $$

and, assuming a Gaussian microstrain distribution, the root-mean-square strain was extracted from the upper limit as

$$ \varepsilon_{RMS} = \left(\frac{2}{\pi}\right)^{1/2}\varepsilon_{hkl} $$

Substituting the $\varepsilon_{hkl}$ value in equation (21) gives

$$ \varepsilon_{RMS} = \left(\frac{2}{\pi}\right)^{1/2}\frac{d - d_0}{d_0} $$

where $d$ and $d_0$ are the observed and ideal interplanar spacing values, respectively.
A nonlinear Voigt curve-fitting function was used to estimate the full width at half maximum (FWHM) of all peaks; this approach yields the most accurate fit to the experimental data. The Voigt function is a combination of Gaussian and Lorentzian functions [25], and its FWHM is computed as

$$ \beta_0 = 0.5346\,W_L + \sqrt{0.2166\,W_L^2 + W_G^2} $$

where $\beta_0$ is the observed FWHM, $W_G$ is the Gaussian width, and $W_L$ is the Lorentzian width. To eliminate the instrumental broadening $\beta_i$, the following correction is applied:

$$ \beta_{hkl} = \sqrt{\beta_0^2 - \beta_i^2} $$

Quantitative information about the preferred orientation of the crystals can be derived from the texture coefficient TC [28]:

$$ TC_{(hkl)} = \frac{I_{hkl}/I^0_{hkl}}{\frac{1}{N}\sum_N I_{hkl}/I^0_{hkl}} $$

where $TC_{(hkl)}$ is the texture coefficient, $I_{hkl}$ is the measured XRD intensity of the sample, $N$ is the number of reflections considered, and $I^0_{hkl}$ is the reference XRD intensity from the JCPDS card. If $TC_{(hkl)} \approx 1$ for all $\langle hkl \rangle$ planes, the nanoparticles match the JCPDS reference with randomly oriented crystallites, while TC values greater than 1 indicate an abundance of grains in a given $\langle hkl \rangle$ direction; values $0 < TC_{(hkl)} < 1$ indicate a deficiency of grains in that direction. All X-ray diffraction (XRD) patterns were analyzed by the Rietveld refinement technique using the FullProf program, and the patterns of all samples could be refined in the space group P6₃mc. The Rietveld refinement process is a well-known technique for extracting structural details from powder diffraction data; it compares the observed Bragg intensities with those computed from a candidate structural model using a least-squares method. Parameters such as the background and scale factors were optimized in the first stage of refinement. Subsequently, various structural parameters were refined step by step, including lattice parameters, profile and width parameters, preferred orientation, asymmetry, isothermal parameters, atomic coordinates, and site occupancies. To assess the quality of the fit to the experimental data, parameters such as the goodness of fit χ² and the R factors (R_P = profile factor, R_B = Bragg factor, and R_F = crystallographic factor) are calculated; when these factors reach sufficiently low values, the crystal structure is deemed adequately fitted to the experimental diffraction data.
Profile factor:

$$ R_p = \frac{\sum_i \left|Y_i - Y_{c,i}\right|}{\sum_i Y_i} $$

where $Y_i$ is the observed (experimental) point, $Y_{c,i}$ is the calculated point, and $n$ is the number of data points. Weighted profile factor:

$$ R_{wp} = \left[\frac{\sum_i w_i \left(Y_i - Y_{c,i}\right)^2}{\sum_i w_i Y_i^2}\right]^{1/2}, \qquad w_i = \frac{1}{\sigma_i^2} $$

where $\sigma_i^2$ is the variance of the observation. Expected weighted profile factor:

$$ R_{exp} = \left[\frac{n - p}{\sum_i w_i Y_i^2}\right]^{1/2} $$

Here $(n - p)$ is the number of degrees of freedom, $n$ is the total number of experimental points, and $p$ is the number of refined parameters. Reduced chi-square:

$$ \chi^2 = \left(\frac{R_{wp}}{R_{exp}}\right)^2 $$

Bragg factor:

$$ R_B = \frac{\sum_h \left|I_{obs,h} - I_{calc,h}\right|}{\sum_h I_{obs,h}} $$

where $h$ is the Bragg reflection vector, and $I_{obs,h}$ and $I_{calc,h}$ are the observed and calculated integrated intensities. Crystallographic factor:

$$ R_F = \frac{\sum_h \left|F_{obs,h} - F_{calc,h}\right|}{\sum_h F_{obs,h}} $$
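For illustration, the profile agreement factors can be computed from an observed and a calculated profile as sketched below, assuming counting-statistics weights $w_i = 1/Y_i$; this is a generic implementation, not the FullProf code.

```python
import numpy as np

def rietveld_r_factors(y_obs, y_calc, n_params):
    """Generic profile agreement factors for a Rietveld fit: R_p, R_wp,
    R_exp and reduced chi-square. Weights w_i = 1/Y_i are an assumption
    (FullProf uses the measured variances)."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_calc = np.asarray(y_calc, dtype=float)
    w = 1.0 / np.clip(y_obs, 1.0, None)        # avoid division by zero counts
    r_p = np.abs(y_obs - y_calc).sum() / y_obs.sum()
    r_wp = np.sqrt((w * (y_obs - y_calc)**2).sum() / (w * y_obs**2).sum())
    r_exp = np.sqrt((y_obs.size - n_params) / (w * y_obs**2).sum())
    return r_p, r_wp, r_exp, (r_wp / r_exp)**2

# Toy profile with 4 points and 2 refined parameters (illustrative only)
print(rietveld_r_factors([120, 450, 300, 90], [118, 455, 296, 95], n_params=2))
```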
The X-ray density $\rho_x$ of the samples was estimated from the refined lattice constants according to

$$ \rho_x = \frac{Z M}{N_A V} $$

where $N_A$ is the Avogadro number (6.02 × 10²³ mol⁻¹), $M$ is the molecular weight of the CdS sample (CdS = 144.46 g mol⁻¹), $Z$ is the number of formula units in the unit cell ($Z = 2$), and $V$ is the unit-cell volume ($V = \frac{\sqrt{3}}{2}a^2 c$). A JEOL JEM TEM was used to obtain TEM images and SAED patterns. The interplanar spacings $d$ of all diffraction rings were estimated using

$$ d = \frac{\lambda L}{R} $$

where $R$ is the distance between the central bright spot and the corresponding ring, $L$ is the camera length between the sample and the photographic film, and $\lambda$ is the wavelength of the electron beam at the given accelerating voltage (0.02736 Å at 200 kV). A camera length of 50 cm was used. The MB degradation experiment was performed both in the absence of light (dark) and under artificial solar light. About 200 mg of a sample of pristine CdS or Ni (0.5-5.0 wt%)-CdS was used to degrade methylene blue dissolved in water at a molar ratio of 10⁻⁵:1. The supernatant of the methylene blue solution was sampled at periodic intervals for UV-vis spectroscopy measurements, using a Shimadzu UV-1900i spectrophotometer for the UV-vis absorption spectra.
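A short sketch of both conversions above, using the molecular weight, Z, camera length, and electron wavelength quoted in the text; the lattice constants and ring radius are hypothetical inputs.

```python
N_A = 6.02214076e23  # Avogadro's number (1/mol)
M_CDS = 144.46       # molecular weight of CdS (g/mol)
Z = 2                # formula units per hexagonal unit cell

def xray_density(a_angstrom, c_angstrom):
    """rho_x = Z*M / (N_A * V) with V = (sqrt(3)/2)*a^2*c, in g/cm^3."""
    v_cm3 = (3**0.5 / 2) * a_angstrom**2 * c_angstrom * 1e-24  # A^3 -> cm^3
    return Z * M_CDS / (N_A * v_cm3)

def saed_d_spacing(r_cm, camera_length_cm=50.0, wavelength_angstrom=0.02736):
    """d = lambda*L / R for a SAED ring of radius R at camera length L."""
    return wavelength_angstrom * camera_length_cm / r_cm

print(f"x-ray density = {xray_density(4.136, 6.713):.2f} g/cm^3")  # ~4.8 for CdS
print(f"d = {saed_d_spacing(r_cm=0.40):.2f} A")
```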
The chemical states of the species in the samples were identified using X-ray photoelectron spectroscopy (XPS), performed on an Axis Ultra system with a monochromatic Mg Kα source (hν = 1253.6 eV) and a hemispherical analyzer. The C 1s line at 284.6 eV was used as the reference to correct all XPS spectra. The Raman spectra of the materials were obtained with an iHR550 Raman spectrophotometer (Horiba Jobin Yvon) using a HeNe laser (632.8 nm) as the excitation source. Diffuse reflectance UV-visible spectra (DRUVS) were obtained in the 200-700 nm range using BaSO₄ as a reference on a Cary 500 diffuse reflectance UV-visible spectrophotometer. X-ray peak analysis using the Williamson-Hall (W-H) method was employed to assess the lattice strain and crystallite size. The Williamson-Hall method readily distinguishes between peak broadening caused by size and strain effects by considering the peak width as a function of $2\theta_{hkl}$. Local lattice distortion induces strain, leading to broadening of the peaks. Equation (12) represents the W-H equation of the isotropic strain model, which assumes uniform strain in all crystallographic directions. Scherrer plots of CdS and Ni (5 wt%)-CdS are presented in figure 1. The graph of $\beta_{hkl}\cos\theta_{hkl}$ versus $4\sin\theta_{hkl}$ represents the W-H plot of the ISM, fitted with a straight line, as seen in figure 2 for the CdS and Ni (5 wt%)-CdS nanoparticles. The crystallite size is determined from the intercept of the fitted line with the y-axis, and the strain is estimated from the slope of the fit.
Results and discussion
Anisotropic strain model (ASM). The strain is generally not fully isotropic for any substance; thus, the anisotropic strain should be used in the second term of equation (14). A straight-line graph is then drawn between $\beta_{hkl}\cos\theta_{hkl}$ and $4\sin\theta_{hkl}/Y_{hkl}$.
Young's modulus is calculated from equation (15). The slope and y-intercept of the graph give the uniform deformation stress and the crystallite size, respectively (figure 3). The average crystallite size can also be calculated using the modified Scherrer equation, whose purpose is to minimize the sources of error in estimating the crystallite size. To suppress errors mathematically, the average crystallite size is computed by the least-squares method taking all peaks into account. Figure 6 illustrates the modified Scherrer plot of ln(β_hkl) versus ln(1/cos θ_hkl) for CdS and Ni-CdS. The intercept of a least-squares regression line passing through all available peaks yields the value of D. Table 1 presents the crystallite sizes of CdS and Ni (5 wt%)-CdS obtained using the different W-H methods.
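A minimal sketch of the modified Scherrer fit follows, reusing the hypothetical peak list from the W-H example above.

```python
import numpy as np

K, LAMBDA = 0.9, 1.5418e-10  # shape factor and Cu K-alpha wavelength (m)

def modified_scherrer(two_theta_deg, beta_rad):
    """Modified Scherrer plot: ln(beta) = ln(K*lambda/D) + ln(1/cos(theta)).
    A least-squares line through all peaks gives one average size D."""
    theta = np.radians(np.asarray(two_theta_deg) / 2.0)
    x = np.log(1.0 / np.cos(theta))
    y = np.log(np.asarray(beta_rad))
    slope, intercept = np.polyfit(x, y, 1)
    return K * LAMBDA / np.exp(intercept)   # D from the intercept

two_theta = [24.9, 26.5, 28.2, 43.8, 47.9, 51.9]   # hypothetical peaks
beta = [0.0048, 0.0046, 0.0047, 0.0055, 0.0058, 0.0060]
print(f"D = {modified_scherrer(two_theta, beta) * 1e9:.1f} nm")
```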
Table 2 presents the JCPDS and observed 2θ and d-spacing values for different hkl reflections of the CdS nanoparticles.
Young's modulus calculated from equation (15) is shown in table 3 for different peak positions of CdS, Ni (1 wt%)-CdS, and Ni (5 wt%)-CdS.
Rietveld refinement of CdS and Ni (5 wt%)-CdS
The Rietveld refinement procedure was applied to the XRD patterns of all samples of pristine CdS and Ni-doped CdS in the hexagonal phase with space group P6₃mc using the FullProf software, refining structural parameters such as fractional atomic coordinates, lattice parameters, thermal parameters, and site occupancies, as well as microstructural parameters such as the average crystallite size. The Rietveld-refined data together with the X-ray diffraction patterns for pure CdS, Ni (1 wt%)-CdS, and Ni (5 wt%)-CdS are shown in figure 7. The measured and calculated profiles align, and all experimental peaks fall at Bragg 2θ positions allowed for the P6₃mc space group. The position of sulfur was treated as a free parameter during refinement, while cadmium was fixed at its fractional atomic position; the isothermal parameters and occupancies were fixed for both cadmium and sulfur. During fitting, the lattice constants, scale factors, and shape parameters were treated as free parameters. The first stage of refinement optimized global parameters such as the background and scale factors; the following stage refined structural parameters such as the lattice parameters, profile shape, width parameter, preferred orientation, asymmetry, and atomic coordinates. The background was fitted with sixth-order polynomials, while a pseudo-Voigt function was used to model the peak shapes. To evaluate the quality of the fit to the experimental results, the goodness of fit χ² and the R factors Rp (profile factor), Rwp (weighted profile factor), Rexp (expected weighted profile factor), RB (Bragg factor), and RF (crystallographic factor) were used. These reliability factors, together with the lattice parameters and crystallite size, are shown in table 7. The low values of the R factors and goodness of fit justify the consistency between the refined models and the experimental data.
The texture coefficient was calculated from equation (25). As noted above, TC(hkl) ≈ 1 for all ⟨hkl⟩ planes indicates randomly oriented crystallites matching the JCPDS reference, TC greater than 1 indicates an abundance of grains in a given ⟨hkl⟩ direction, and 0 < TC(hkl) < 1 indicates a deficiency of grains in that direction.
The preferential growth of the crystallites perpendicular to the ⟨hkl⟩ plane increases with rising TC(hkl). The texture coefficient values of CdS, Ni (1 wt%)-CdS, and Ni (5 wt%)-CdS are shown in table 8.
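Equation (25) is straightforward to evaluate; in the sketch below the measured and JCPDS reference intensities are hypothetical placeholders.

```python
import numpy as np

def texture_coefficients(i_obs, i_ref):
    """Texture coefficient (equation (25)):
    TC(hkl) = (I_hkl / I0_hkl) / [(1/N) * sum(I_hkl / I0_hkl)]."""
    ratio = np.asarray(i_obs, dtype=float) / np.asarray(i_ref, dtype=float)
    return ratio / ratio.mean()

# Hypothetical intensities for the (100), (002), (101) reflections:
# measured sample intensities vs. JCPDS reference card values
tc = texture_coefficients(i_obs=[520, 980, 610], i_ref=[620, 910, 750])
print(tc)   # TC > 1 indicates preferred orientation along that <hkl> direction
```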
Figure 8 illustrates the hexagonal CdS and Ni (5 wt%)-CdS structures in ball-and-stick style, produced with the VESTA software from the CIF file obtained from the FullProf structural refinement of the experimental results.
Table 9 represents the bond angles and bond lengths between different atoms in CdS and Ni-CdS nanocomposites.
2-D and 3-D electron density mapping
The analysis of electron density mapping was conducted using the GFourier software in the FullProf suite to simulate the distribution of electron densities within a unit cell. The representation of electron density is important for determining the locations of the atoms inside the unit cell: the electron density is the Fourier transform of the geometrical structure factors and spans the whole unit cell. The electron scattering density is viewed as a two-dimensional or three-dimensional Fourier map. The two-dimensional Fourier maps are usually drawn as contours showing the distribution of electron density between individual atoms of the compound; dense, thick electron density contours indicate the location of a comparatively heavier element in the unit cell. In contrast, the 3D Fourier maps comprise a chicken-wire-like network that shows a single level of electron density. The black color in figure 9(a) corresponds to regions of high electron density. The physical properties of the materials can clearly be influenced by a redistribution of the electron density between cations and anions; among them are bond-strength-dependent vibrational properties, including the phonon modes, which depend on the electron density and lattice distribution. Raman spectroscopy therefore provides useful information that can be directly connected to the creation of lattice defects and to changes in the electron density accompanying the doping process.
Crystal growth mechanism for CdS and Ni-CdS
The growth mechanism of the Ni-CdS nanorod heterostructure is shown in the proposed scheme 1.
When the synthesis is conducted in aqueous solution, in the initial phase the thiourea acts as a bidentate ligand and forms a relatively stable Cd-thiourea complex. During the simple chemical process, stirring of the reaction mixture leads to the slow release of Cd2+ ions and weakens the stable Cd-thiourea complex.
Photocatalytic methylene blue (MB) degradation activity
The photocatalytic performance of the pure CdS and Ni@NiO-CdS photocatalysts was evaluated on MB dye in the absence of light (dark) and under visible-light irradiation with a 300 W xenon lamp equipped with a UV cut-off filter, as presented in figures 14(a)-(e). With increasing reaction time, the characteristic methylene blue peak gradually decreases, indicating that the concentration of methylene blue gradually decreases. Aliquots of the solution were examined at 15 min intervals by UV spectrophotometry. A small change in the intensity of the MB absorption peak (∼666 nm) is observed in the dark due to adsorption-desorption of the dye onto the photocatalyst surface, while a considerable change is observed under light due to the photo-activity of the photocatalysts.
All Ni-doped CdS nanocomposite photocatalysts showed higher photocatalytic activity for MB degradation under light irradiation than pure CdS. A maximum MB degradation efficiency of ∼95% in 90 min was recorded with 1.0 wt% Ni in CdS (Ni1CdS) nano-photocatalysts (figure 14(c)); efficiencies of ∼69% for 0.5 wt% Ni in CdS (Ni05CdS), ∼72% for 2.0 wt% Ni in CdS (Ni2CdS), and ∼69% for 5.0 wt% Ni in CdS (Ni5CdS) were obtained under light irradiation (figures 14(a), (b), and (d)). Bare CdS degraded only ∼29% of the MB. The improved performance of CdS in the presence of Ni is attributed to the plasmonic effect and the formation of a Schottky barrier at the surface interface. Furthermore, the catalytic activity of the Ni-CdS samples in the dark is shown in figure 14(a): a maximum MB degradation efficiency of ∼40% was recorded for the Ni1CdS photocatalyst, compared with ∼8% for bare CdS in 45 min. One of the deciding factors for photocatalytic activity is the adsorption of the target pollutant molecules on the photocatalytic material. The adsorption activity of the synthesized photocatalysts in the dark was quantified and is shown in figure 16; the experimental findings reveal that the adsorption efficiency of the Ni-CdS composite progressively improves as the Ni content is increased.
In contrast with pure CdS particles, the larger specific surface area of the Ni-CdS composites also makes it easier to adsorb the positively charged MB molecules. A detailed comparison of the MB degradation in the dark and under light with the developed photocatalysts is tabulated in table 10.
Being a photoactive phenothiazine dye, MB interacts with the Ni-CdS photocatalysts under light irradiation, and the whole molecule possibly degrades through synergistic competing pathways, such as demethylation of the MB dye followed by decomposition of the aromatic rings in the dye molecule, as presented in scheme 2. The inset of scheme 2 shows the color change of the methylene blue, and scheme 3 depicts the mechanism of photocatalytic methylene blue degradation [36][37][38]. The obtained values of the rate constant k are 0.0039, 0.0062, 0.0088, 0.0069, and 0.0062 min⁻¹ for CdS, Ni05CdS, Ni1CdS, Ni2CdS, and Ni5CdS, respectively, as listed in table 10.
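These rate constants follow from a pseudo-first-order fit of ln(C₀/C_t) against time; a minimal sketch with hypothetical C_t/C₀ readings is shown below (the synthetic data happen to give a k close to the Ni1CdS value).

```python
import numpy as np

def pseudo_first_order_k(t_min, c_over_c0):
    """Apparent rate constant from ln(C0/Ct) = k*t, i.e. the slope of the
    linear kinetic plot."""
    y = -np.log(np.asarray(c_over_c0, dtype=float))
    k, _ = np.polyfit(np.asarray(t_min, dtype=float), y, 1)
    return k

# Hypothetical Ct/C0 readings every 15 min under light irradiation
t = [0, 15, 30, 45, 60, 75, 90]
ct_c0 = [1.00, 0.88, 0.77, 0.67, 0.59, 0.52, 0.45]
print(f"k = {pseudo_first_order_k(t, ct_c0):.4f} 1/min")   # ~0.0089
```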
Possible mechanism of methylene blue degradation
The efficient separation of charges plays a crucial role in determining the photocatalytic activity of a semiconducting photocatalyst. Thus, it was essential to ascertain the conduction band (CB) and valence band (VB) potentials of the Ni1CdS photocatalyst using the following empirical equations:

$$ E_{VB} = \chi - E_e + 0.5\,E_g, \qquad E_{CB} = E_{VB} - E_g $$

Here, $E_{VB}$ represents the valence band (VB) edge potential, $E_{CB}$ denotes the conduction band (CB) edge potential, $\chi$ stands for the electronegativity of the semiconductor, $E_e$ is the energy of free electrons on the hydrogen scale (4.5 eV), and $E_g$ represents the band gap energy of the semiconductor [39].
The calculated electronegativity value (χ) of Ni1CdS was found to be 2.026, while $E_{CB}$ and $E_{VB}$ were estimated to be −1.374 and +0.826 eV, respectively, relative to the normal hydrogen electrode (NHE). The conduction band of Ni1CdS is more negative than the O₂ reduction potential. The photo-generated carriers are spatially separated according to the Z-scheme charge transfer process, which significantly inhibits the recombination of photo-generated electrons and holes. As a result, the oxidation and reduction potentials of the photo-carriers can be improved, thereby increasing the lifetime of the photo-generated holes and electrons in Ni-CdS.
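A sketch of the band-edge arithmetic follows. Note that reproducing the reported E_CB = −1.374 eV and E_VB = +0.826 eV with the equations above requires χ ≈ 4.226 eV; the input below uses that value as an assumption of ours rather than the quoted χ = 2.026.

```python
def band_edges(chi, eg, e_free=4.5):
    """Empirical band-edge potentials vs. NHE:
    E_VB = chi - E_e + 0.5*E_g  and  E_CB = E_VB - E_g."""
    e_vb = chi - e_free + 0.5 * eg
    return e_vb - eg, e_vb   # (E_CB, E_VB)

# chi = 4.226 eV is an assumed value chosen to reproduce the reported edges
e_cb, e_vb = band_edges(chi=4.226, eg=2.20)
print(f"E_CB = {e_cb:+.3f} eV, E_VB = {e_vb:+.3f} eV vs. NHE")
```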
Surface elemental analysis
X-ray photoelectron spectroscopy (XPS) measurements were conducted to investigate the chemical composition of the various species within the architecture of the CdS nanorods loaded with Ni (1.0 wt%).
Raman analysis
The Raman spectra of pure CdS and the Ni-loaded CdS nanocomposites are illustrated in figure 17. In figure 17(a), the Raman bands of pure CdS at 300 cm⁻¹ and 601 cm⁻¹, corresponding to the 1LO and 2LO phonons, are presented. A subtle shift, to 293 cm⁻¹ and 586 cm⁻¹, is observed in the 1LO and 2LO Raman bands of the Ni-loaded CdS nanoparticles compared to pure CdS. This shift could be attributed to the smaller ionic radius of Ni²⁺ ions (0.62 Å) compared to Cd²⁺ ions (0.97 Å) and, as confirmed by XPS, to the formation of Ni²⁺ on the Ni surface.
Furthermore, an enhancement in the intensity of the CdS peaks is noted after Ni doping, with the maximum observed for 1.0 wt% Ni. This enhancement may be attributed to the plasmonic effect of the ultrafine Ni nanoparticles. The intensity ratio of the 2LO to 1LO modes (I2LO/I1LO) further supports the strengthening of the exciton-phonon coupling upon Ni doping compared with pure CdS. However, this intensity ratio decreases slightly as the Ni concentration increases, as depicted in figures 17(b) and (e).
Charge carrier separation and transport study
Figure 18 displays the diffuse reflectance UV-Vis optical absorption characteristics of both pure CdS and the Ni-loaded CdS nanocomposites within the 400-700 nm range. In the pure CdS sample, an absorption edge appears around 530 nm, with a corresponding estimated Eg of 2.36 eV, closely matching the intrinsic absorption band gap of CdS particles. Following Ni loading, there is no substantial shift in the absorption edge of CdS, suggesting that Ni is dispersed on the CdS surface rather than incorporated into the lattice.
Nevertheless, an enhancement in the absorption spectra of CdS after Ni loading implies increased charge accumulation at the CdS surface. The maximum absorption is recorded for the Ni (1.0 wt%)-loaded CdS nanocomposite. The progressive improvement in absorption indicates a heightened charge density build-up at the CdS surface, potentially attributed to the formation of heterostructures such as CdS-Ni@NiO near the surface, as evident in the XPS spectra. These combined properties merge different electronic structures, leading to an expanded light response and improved charge separation and electron transfer.
It is important to note that the presence of a heterostructure at the material surface not only results in a color change, causing a shift in peak position, but also induces a change in the refractive index at the material surface. The introduction of Ni also alters the band gap energy of CdS, as illustrated in figure 19. Tauc's method was employed to estimate the band gap energy, revealing a decrease from 2.36 eV (for CdS) to 2.21 eV for 0.5 wt% Ni and 2.20 eV for 1.0 wt% Ni, followed by an increase to 2.29 eV for the 5.0 wt% Ni-loaded CdS sample.
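A minimal sketch of a Tauc extrapolation for a direct-gap material follows; the absorption edge is synthetic, constructed so that (αhν)² is linear above 2.36 eV, and the fit window is an assumption.

```python
import numpy as np

def tauc_band_gap(energy_ev, alpha, fit_window):
    """Tauc method for a direct-gap semiconductor: (alpha*h*nu)^2 vs. h*nu
    is linear near the edge; extrapolating the linear fit to zero gives Eg.
    `fit_window` selects the linear region (eV)."""
    e = np.asarray(energy_ev, dtype=float)
    y = (np.asarray(alpha, dtype=float) * e)**2
    m = (e >= fit_window[0]) & (e <= fit_window[1])
    slope, intercept = np.polyfit(e[m], y[m], 1)
    return -intercept / slope   # x-intercept = Eg

# Synthetic absorption edge: (alpha*E)^2 = const * (E - 2.36) above the gap
e = np.linspace(2.0, 3.0, 101)
alpha = 1e4 * np.sqrt(np.clip(e - 2.36, 0.0, None)) / e
print(f"Eg = {tauc_band_gap(e, alpha, (2.5, 2.9)):.2f} eV")   # -> 2.36
```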
In addition to absorption studies, an analysis of the interface charge transfer strength can be conducted through photoluminescence (PL) emission spectra.Figure 20 presents a comparison of the PL emission spectra to elucidate the charge separation and transfer process of photo-induced charge carriers in both pure CdS and Ni-loaded CdS nanocomposites.Previously reported results support the notion that emissions in CdS nanostructures arise from band-edge emission and surface-defect emission.Band-edge emission in CdS nanostructures is influenced by the size-sensitive quantum confinement effect, typically positioned in the 420-500 nm range, while peaks in the 530-680 nm range correspond to surface defect emission caused by sulfur vacancies or sulfur dangling bonds.Electrons and holes persist in the conduction band of CdS and the valence band of NiO.
Figure 20 illustrates two emission peaks in pure CdS, at 523 nm and 540 nm, under an excitation wavelength of 340 nm, attributed to surface defect states. The emission peak at 540 nm is triggered by the formation of sulfur vacancies in CdS, possibly related to the mismatch between the ionic radii of S²⁻ and Cd²⁺. Conversely, the broader emission peak at 523 nm in CdS results from the recombination of holes from the CdS valence band with electrons trapped in sulfur vacancies. Upon Ni doping, the intensity of the CdS peak at 523 nm diminishes significantly, indicating a weakening of the emission band intensity due to the transfer of electron-hole pairs between the various components of the composites.
In this case, the reduction in PL band emission intensity in the CdS-Ni@NiO sample is attributed to the formation of a Z-scheme interface for electron-hole pair transfer.The electron-hole transfer process is specifically attributed to Ni nanoparticles at the interface of CdS-Ni@NiO, explaining the fluorescence quenching observed in CdS-Ni@NiO.
Photocatalytic hydrogen evolution
The hydrogen (H₂) production behavior of the CdS and Ni-loaded CdS samples with varying amounts of Ni loading was assessed under solar irradiation in the visible region, as illustrated in figure 21. In a control reaction without photocatalyst, no H₂ evolution was observed, whereas significant H₂ evolution was detected in the presence of the photocatalysts, indicating that photocatalytic activity occurred exclusively in their presence. Figure 21 shows that pure CdS exhibited minimal H₂ evolution, specifically 0.18 mmol. This limited evolution could be attributed to the rapid recombination of electrons and holes, or to the participation of only a small number of electrons and holes in the photocatalytic reaction; additionally, a substantial overpotential on the CdS surface promotes a swift backward reaction. The introduction of 0.5 wt% Ni produces a significant increase in H₂ evolution, specifically a 22.2-fold enhancement (4.0 mmol) compared to pure CdS. With a Ni loading of 1.0 wt%, the acceleration of H₂ production is even more pronounced, reaching 9.0 mmol, approximately 50 times greater than that observed with pure CdS (see figure 21). Consequently, the notable improvement in H₂ evolution following Ni loading underscores the impact of Ni, particularly with surface NiO, on the photocatalytic activity. The heightened H₂ evolution attributable to Ni@NiO on the CdS surface may stem from two complementary mechanisms: an outstanding capability to transport charge carriers (electrons and holes) and the effective hindrance of the recombination of photo-excited charge carriers.
Conclusion
Pure CdS and Ni-CdS samples were successfully synthesized by a facile chemical process, affording an effective photocatalyst for hydrogen production and methylene blue degradation. The crystallite size was estimated using different Williamson-Hall methods, with ISM, ASM, UDEDM, size-strain plots, Scherrer, and modified Scherrer plots obtained. Rietveld refinement of the CdS and Ni-CdS XRD data was performed using the FullProf software. FESEM and TEM data show the change in shape from spherical particles to nanorods with increasing Ni doping concentration in CdS; FESEM, TEM, and the crystal growth mechanism together show the transformation of the spherical CdS structure into Ni-CdS nanorods. Under visible light, loading 1.0 wt% Ni in CdS produced the highest H₂ evolution of 9 mmol at a rate of 1.8 mmol h⁻¹, which is 50 times higher than that of pure CdS (180 μmol at 36 μmol h⁻¹). Ni (1 wt%)-CdS shows the highest degradation percentage of ∼95% in just 90 min, together with the highest mineralization efficiency and photocatalytic degradation in comparison to pure CdS. As a result of its high visible-light photoactivity, the present Ni@NiO-CdS composite could be a promising candidate for energy renewal and conversion.
3.1. Structural analysis and Rietveld refinement of CdS and Ni-CdS
3.1.1. Measurement of strain and crystallite size using Williamson-Hall methods
The isotropic strain model (ISM) and ASM W-H plots are shown for CdS and Ni (5 wt%)-CdS. In the uniform deformation energy density model (UDEDM), a graph is drawn and fitted to a straight line between $\beta_{hkl}\cos\theta_{hkl}$ and $4\sin\theta_{hkl}(2/Y_{hkl})^{1/2}$; from the slope of the fitted line, the crystallite size and the uniform deformation energy density $u$ are measured. The W-H plot of the UDEDM is shown in figure 4 for CdS and Ni (5 wt%)-CdS. The average lattice strain is determined from the deformation energy density and the $Y_{hkl}$ value via $\varepsilon = (2u/Y_{hkl})^{1/2}$. The size-strain plot of CdS and Ni (5 wt%)-CdS is presented in figure 5, and the modified Scherrer method in figure 6.
Figure 7. Rietveld-refined XRD patterns of the CdS, Ni (3 wt%)-CdS, and Ni (5 wt%)-CdS samples. The dots are experimental data, while the solid line represents the Rietveld-refined fit. The bottom curve shows the difference between the experimental and refined data.
Figure 9(b) shows the three-dimensional Fourier electron density mapping of the Cd and S elements in the CdS unit cell at x = 0. After doping Ni into CdS, the electron density increases markedly, depending on the doping concentration. Figure 10 shows the electron density pattern of Ni (1 wt%)-CdS, which is increased in comparison with the pure CdS electron density. Figure 11 reveals the Ni (5 wt%)-CdS electron density pattern, which has the highest electron density relative to pure CdS and Ni (1 wt%)-CdS.
Figure 8. Ball-and-stick representation of the hexagonal structure generated by the VESTA program for (a) CdS and (b) Ni (5 wt%)-CdS.
Figure 9. (a) 2D electron density mapping in the CdS unit cell. (b) 3D electron density mapping in the CdS unit cell. The electron density is measured in e/Å³.
Thiourea is then attacked by the strongly nucleophilic O atoms of H₂O, weakening the S=C double bonds, which slowly break to release S²⁻ anions. The released S²⁻ anions react with the previously released Cd²⁺, growing CdS nuclei that serve as seeds for further development into CdS platelets. When Ni is doped into CdS at a concentration of 0.5 wt%, Cd²⁺ and Ni²⁺ first coordinate with the thiourea, which, as suggested in scheme 1, leads to the development of the metallic complex M²⁺-thiourea (M = Cd, Ni). Owing to their relative stability, thermolysis proceeds slowly and, upon heating, leads to the formation of some Cd and Ni sulphide nuclei. These newly formed nuclei are unstable in solution and contain many dangling bonds, defects, or traps at their surface, which can contribute to the formation of crystalline CdS and favor the incorporation of Ni²⁺ into the crystalline CdS. At a constant temperature of 180 °C for 5 h, the thermal and hydrolytic stability of the metal-sulfur bond yields Ni-S-Cd-containing metal sulphide nanoparticles, which supply assembly centers for randomly moving metal sulphide nanoparticles. These nanoparticles attach and aggregate at the liquid interface, forming compact shells; the ongoing aggregation and ensuing growth process is suggested to lead to the formation of Ni-doped CdS spheres. With further Ni doping at higher concentrations, i.e., Ni (1 wt%)-CdS and Ni (5 wt%)-CdS, Ni²⁺-Cd²⁺-thiourea complexes form. Following the initial nucleation of Ni²⁺, Cd²⁺, and thiourea, the bonds between randomly moving metal sulphide nanoparticles rearrange to form one-dimensional rod-like crystals. In this situation, the DLA model (in which the random walk of particles due to Brownian motion causes them to aggregate into clusters) describes a unified assembly in which Ni²⁺, Cd²⁺, and S²⁻ monomers diffuse from the solution and reach a region adjacent to a crystal surface. When neighboring nanoparticles have reached the position adjacent to the growing edge, further particles attach to them in the same crystallographic direction. This leads to the formation of nanorods by an oriented-attachment growth process. Reaction time also plays an essential role in the development of the CdS nanorods, with the other reaction conditions held constant.
Scheme 1. Crystal growth mechanism of CdS and Ni-doped CdS.
Figure 12. (a) FESEM image of pure CdS. (b) Selected-area magnification of the FESEM image of pure CdS. (c) TEM image of pure CdS nanoparticles. (d) Selected-area electron diffraction pattern. (e) CdS nanoparticle fringes with d-spacing. (f) Particle size distribution.
Figure 15. (a) C_t/C_0 versus time in the dark; (b) C_t/C_0 versus time under light irradiation; (c) logarithm of (C_0/C_t) versus time in the dark; (d) logarithm of (C_0/C_t) versus time under light.
Figure 16(d) presents two resolved S 2p peaks at binding energies around 161.36 eV (S 2p3/2) and 162.67 eV (S 2p1/2), with a spin-orbit splitting of 1.31 eV. The 2:1 area ratio confirms the S²⁻ chemical state of S, and a peak at 159.97 eV may indicate the presence of bridging disulfides S₂²⁻. Additionally, the XPS spectra show a small metallic Ni peak at 852.15 eV and an intense NiO peak, suggesting surface transformation of Ni into NiO upon exposure to air.
Figure 16(c) displays peaks at 854.88 eV and 861.07 eV with a 6.18 eV energy separation, corresponding to Ni 2p3/2 of Ni²⁺ on the Ni surface. Peaks at 872.54 eV and 878.14 eV are identified as satellite peaks of Ni²⁺ in NiO, confirming the surface oxidation of Ni into NiO under ambient conditions. The persistence of metallic Ni features at a photon energy of 1253.6 eV suggests that the NiO layer over the Ni surface is thinner than the estimated inelastic mean free path of the electrons (1.05 nm at 400 eV). Consequently, the observed phenomena support the formation of an ultra-thin NiO layer over the Ni surface.
Table 2. The standard and measured d_hkl interplanar spacings and the relative percentage variation for the major XRD peaks of the respective {hkl} planes.
Table 1. Crystallite size computed using the different W-H method models, the Scherrer method, and the modified Scherrer method.
Table 4. Fractional atomic coordinates and isothermal parameters of the various atoms derived from the Rietveld refinement of the XRD data for CdS.
Table 5. Fractional atomic coordinates and isothermal parameters of the various atoms derived from the Rietveld refinement of the XRD data for Ni (1.0 wt%)-CdS.
Table 6. Fractional atomic coordinates and isothermal parameters of the various atoms derived from the Rietveld refinement of the XRD data for Ni (5.0 wt%)-CdS.
Table 7. R factors, goodness of fit, lattice parameters, crystallite size, and X-ray density.
Table 9. Bond angles and bond lengths of CdS and Ni-CdS.
Table 10. Photocatalytic activity of CdS and Ni-CdS. The plausible degradation mechanism of MB by Ni-CdS under light illumination is presented as follows.
"Environmental Science",
"Chemistry",
"Materials Science"
] |
Pretreatments for enhancing sewage sludge reduction and reuse in lipid production
Background: Converting wastewater sludge to lipids is considered one of the best strategies for sludge management. The current problem with lipid production from wastewater sludge is the low yield (0.10-0.16 g lipid/g dry sludge), due to the low availability in sludge of easily assimilated materials (such as soluble monosaccharides and oligosaccharides) for oleaginous microorganisms (Rhodotorula glutinis, Trichosporon oleaginosus, Lipomyces starkeyi). Pretreatments are efficient methods to improve sludge bioavailability. This study aimed to achieve high lipid production from sludge together with high sludge reduction. Results: In this study, the soluble chemical oxygen demand (SCOD) increased significantly after the different pretreatments. The SCOD in the supernatant increased from 32.64 to 180.25 mg/L, 924.16 mg/L, 1029.89 mg/L, and 3708.31 mg/L after acidic (pH 2 for 2 h), alkaline (pH 12 for 2 h), microwave irradiation (15 min with 5 min intervals), and ultrasonication (30 min at 450 W and 20 kHz frequency in a 5 s on / 2 s off mode) pretreatment, respectively. The pretreatments also increased the release of total nitrogen (TN) and total phosphorus (TP) from the solids. The sludge after the different pretreatments was used as a medium for lipid production; the highest lipid content (36.67% g/g) was obtained in the fermentation with ultrasonication-pretreated sludge, with a sludge reduction of 63.10%. For the other pretreatments, the lipid content and sludge reduction were 18.42% and 32.63% for acid pretreatment, 21.08% and 36.44% for alkaline pretreatment, and 26.31% and 43.03% for microwave pretreatment, respectively. Conclusion: Ultrasonication pretreatment was the most efficient way to increase sludge biodegradability (SCOD) and to release TN and TP from the solid phase to the liquid phase. Using pretreated sludge for lipid production achieved significant improvements in lipid yield and sludge reduction. Lipids produced from pretreated sludge were transesterified to biodiesel, and the analysis showed that the biodiesel had a composition similar to commercial biodiesel. The study reveals that sludge pretreatment is a promising method for enhancing the efficiency of biological sludge management.
Background
With the development of society, the amount of wastewater discharged has increased sharply due to human activities [14]; for instance, in China it was 26.10 billion tonnes in 2004 and rose dramatically to 51.00 billion tonnes in 2014 [62]. To reduce its environmental and health risks, wastewater is collected and treated mainly by the activated sludge process, biofilm processes, or membrane bioreactor processes in wastewater treatment plants. Nevertheless, a large amount of sewage sludge, an unavoidable by-product of wastewater treatment, is generated (5-8 tonnes of sludge with 80% water content for every 10,000 tonnes of wastewater treated) [14]. For decades, sewage sludge was considered a waste. Treatments such as digestion (aerobic and anaerobic), landfill, and incineration are mainly used for the reduction or disposal of sewage sludge [51,56]; however, digestion and landfill require large land areas, and incineration entails high energy and cost inputs [37,38].
It is now well recognized that sewage sludge contains various useful materials, such as carbon, nitrogen, and phosphorus, which can be recovered by physical, chemical, and/or biological methods [56]. Recovery of nitrogen and phosphorus as fertilizers is one of the best choices [4]. Carbon recovery from sludge deserves the main focus, as carbon is the most abundant component of sewage sludge [5,13]. Carbon recovery from sludge is generally accomplished through direct extraction or by converting it to value-added products [13]. Biogas, lipids, extracellular polymeric substances, bioplastics, and short-chain fatty acids are common value-added products generated from sewage sludge [34,39,63]. This provides a new option for sludge management and resource recycling [1].
Lipids are good feedstocks for biodiesel production [63]. They have been found in both primary and secondary sludge; however, the lipid content of raw sludge (g lipids/g sludge) is quite low [59]. Thus, direct lipid extraction from sludge achieves limited sludge reduction and is not attractive [45]. Re-fermentation of sludge is a promising way to increase the lipid content and enhance sludge reduction [40]. Additionally, it has been found that biodiesel production from lipids accumulated by sludge fermentation is economically and energetically feasible, depending principally on the amount of sludge reduction and lipid accumulation [6,60].
Raw sewage sludge is a mixture of complex organic compounds, including proteins, carbohydrates, and lipids [52], and has very low biodegradability. Pretreatment is an efficient way of breaking down the complex materials in sludge [3,61]. Physical, chemical, and biological pretreatments have been investigated to release nutrients from sewage sludge and have been observed to greatly assist the utilization of sludge by microorganisms [3]. The impact of sludge pretreatment on methane and hydrogen production has been widely studied [10,25,27,32,47,50,64]; however, few studies have evaluated its impact on lipid production [25,27,42]. Owing to concerns about high cost, difficult management, and low stability, full-scale application of biological pretreatments is scarce [2]. Thermal pretreatment is an effective sludge disintegration method despite its limited energy efficiency [22]. Ultrasonication is an emerging and promising mechanical pretreatment, with advantages including efficient sludge disintegration (> 95%), improved biodegradability and biosolids quality, no chemical addition, short retention time, sludge reduction, and energy recovery [36]. Chemical pretreatments, including acidic, alkaline, and oxidative treatments, have also been applied to enhance sludge biodegradation [17]. Acidic and alkaline treatments are simple and easy to operate, and acid and alkali reagents effectively solubilize lignin and hemicellulose in biomass [8,41]. Microwave pretreatment has also shown good performance [12]. Moreover, pretreatment can provide a sterilization function, which is highly favorable for lipid production by pure cultures.
This study aims to recover carbon from sewage sludge by employing the oleaginous yeast Lipomyces starkeyi for lipid production while simultaneously increasing sludge reduction. Acidic, alkaline, microwave irradiation, and ultrasonication pretreatments were investigated for their ability to enhance nutrient and carbon release from sludge, increase lipid production, and improve sludge reduction. Sludge reduction and reuse for lipid production after pretreatment were compared with other sludge treatment methods, and further applications are discussed.
Pretreatment impact on soluble chemical oxygen demand and nutrient release from sludge
Soluble chemical oxygen demand (SCOD) is a reliable indicator of the soluble organics in the liquid phase; an increase in SCOD indicates the release of organic matter from the solid into the liquid phase [55].
The SCOD in the supernatant before pretreatment was 32.64 mg/L (Fig. 1). In this study, raw sludge was collected from secondary sedimentation; thus, the SCOD in the supernatant of the raw sludge should match that of the secondary sedimentation effluent. SCOD in such effluent is generally reported to be below 40 mg/L [9,44], so the measured value is consistent with reported values.
The SCOD increased significantly after each pretreatment: from 32.64 mg/L to 180.25 mg/L, 924.16 mg/L, 1029.89 mg/L, and 3708.31 mg/L after acidic, alkaline, microwave irradiation, and ultrasonication pretreatment, respectively (Fig. 1), corresponding to 4.52-, 26.31-, 30.55-, and 112.61-fold increases over the original sludge. Selvakumar and Sivashanmugam reported that thermochemical pretreatment increased SCOD by up to 27.60% [42], and other researchers found that SCOD increased 7.22 times after thermo-alkaline pretreatment [46]. This suggests that the pretreatments effectively assisted the release of organic matter from the solids.
Among all methods, ultrasonication provided the highest SCOD increase, 2.60, 3.01, and 19.57 times higher than the microwave, alkaline, and acidic pretreatments, respectively (Fig. 1). Ultrasonication was therefore considered the most efficient way to release soluble substances from sludge. During ultrasonication, microbubbles and free radicals are generated that efficiently destroy microbial cells, break down complex organic compounds, and release nutrients into the supernatant [36]. The release of organic matter and the increase of SCOD with ultrasonic density and intensity have been reported to follow first-order kinetics [15,24,30,48]. It has also been reported that a negligible amount of organic matter is oxidized during ultrasonication; soluble materials are mainly transferred from the solid phase to the liquid phase [20]. In this study, the mixed liquor suspended solids (MLSS) of the sludge before and after ultrasonication were 7.14 g/L and 6.72 g/L, respectively, indicating an MLSS reduction of 5.88% due to ultrasonication.
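To illustrate the first-order release behavior cited above, the sketch below fits an assumed first-order model S(t) = S_max(1 − e^(−kt)) to an SCOD time course; the rate constant, saturation value, and sample readings are illustrative assumptions, not values measured in this study.

```python
# Minimal sketch: fit a first-order SCOD release model S(t) = S_max * (1 - exp(-k*t))
# to SCOD measured during ultrasonication. All numbers below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def first_order_release(t, s_max, k):
    """First-order release: SCOD approaches s_max with rate constant k (1/min)."""
    return s_max * (1.0 - np.exp(-k * t))

# Hypothetical SCOD readings (mg/L) at sonication times (min)
t_min = np.array([0, 5, 10, 15, 20, 25, 30])
scod = np.array([33, 1450, 2480, 3050, 3400, 3600, 3708])

popt, _ = curve_fit(first_order_release, t_min, scod, p0=[3700, 0.1])
s_max_fit, k_fit = popt
print(f"Fitted S_max = {s_max_fit:.0f} mg/L, k = {k_fit:.3f} 1/min")
```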
The total nitrogen (TN) in the supernatant of the raw sludge was notably high (around 40.50 mg/L). The sludge used in this study was secondary sludge collected from secondary sedimentation, the last unit of wastewater treatment when disinfection is not performed; the treated water (effluent) is discharged into natural water bodies after this unit. The supernatant of the sludge therefore has the same quality as the plant effluent, since both come from the same unit, which implies an effluent TN of around 40.50 mg/L. According to the Grade I criteria of the "Standard for Discharge of Pollutants From Urban Sewage Treatment Facilities" (GB18918-2002), effluent TN should be below 15 mg/L. This indicates that an excessive amount of nitrogen would be discharged to natural waters if no additional treatment were applied after secondary treatment. In recent decades, excessive nitrogen discharge from wastewater treatment plants has caused severe eutrophication and damaged aquatic ecosystems [53]. Unlike the other pretreatments, acidic pretreatment left the TN concentration in the supernatant nearly unchanged (from 40.50 to 45.00 mg/L) (Fig. 1). It has likewise been reported that acidic treatment does not significantly affect TN in the liquid fraction (only an 8% increase after treatment) [47,49,50]. Acidic pretreatment effectively causes cell death through dehydration but is not efficient at breaking cell membranes [54], suggesting that no significant release of intracellular protein occurs during acidic pretreatment.
TN in the supernatant increased from 40.50 mg/L to 112.27 mg/L, 143.84 mg/L, and 248.94 mg/L after alkaline, microwave irradiation, and ultrasonication pretreatment, respectively (Fig. 1). TN release thus followed the order ultrasonication > microwave > alkaline > acidic. Both acidic and alkaline treatments work by adjusting solution pH, but alkaline treatment released TN more effectively. Alkali reacts with phospholipids (the main component of the cell membrane) in a saponification reaction, disrupting the cells and releasing intracellular products such as proteins [23], thereby increasing supernatant TN. Microwave and ultrasonication pretreatments both achieved efficient TN release, with ultrasonication performing better (Fig. 1). Microwave radiation raises the temperature rapidly and efficiently breaks hydrogen bonds, leading to protein disintegration and TN release [31]. As discussed above, the free radicals generated during ultrasonication break down and deconstruct cells, releasing proteins and polysaccharides into the supernatant [15,36].
Total phosphorus (TP) in the supernatant increased significantly after acidic pretreatment (from 7.02 to 57.15 mg/L) (Fig. 1). Sludge contains phosphorus precipitates that dissociate at low pH, which explains the TP increase in the liquid phase after acidic pretreatment. As noted above, cell lysis occurs after alkaline treatment; however, at high pH, PO4(3−) can form precipitates and remain in the solid phase, which explains the lower supernatant TP after alkaline treatment compared with acidic treatment. Microwave irradiation is capable of disrupting complex materials and thus also increased supernatant TP. Among all methods, ultrasonication remained the best for TP release (from 7.02 to 83.78 mg/L) (Fig. 1).
Fig. 1 SCOD, TN, and TP variation before and after pretreatments
Overall, ultrasonication was the most efficient method for releasing SCOD, TN, and TP. Similar results have been reported: SCOD, TP, and TN in sludge supernatant increased 3.3–5.4, 2.8–4.5, and 13.1–19.6 times, respectively, after ultrasonication pretreatment [21,35]. The other pretreatment methods also had merits for releasing particular nutrients, for instance acidic pretreatment for TP, and alkaline and microwave pretreatment for TN. Further studies could examine combining ultrasonication with acidic or alkaline pretreatment to enhance the release of targeted nutrients.
Effect of sludge pretreatments on lipid production and nutrient utilization
Pretreatment dissolves some suspended solids into soluble form; hence, a decrease in MLSS was observed after treatment. The MLSS obtained after sludge pretreatment was the initial MLSS of the fermentation. The results are shown in Fig. 2.
The MLSS concentration first increased and then decreased in all cases (Fig. 2). For the control, acid-treated, and alkaline-treated sludge, the increase ended after about 12 h of fermentation; for the microwave and ultrasonication cases, it lasted until 24 h and 60 h, respectively. After the increasing stage, MLSS gradually decreased in all systems. The MLSS increase was mainly due to: the inoculation of the preculture; biomass growth on substrate carried over from the preculture medium; and biomass growth on the SCOD in the medium.
In the fermentations with original, acid-treated, or alkaline-treated sludge, the available SCOD was very limited (Fig. 1) and could not support significant biomass growth; the MLSS increase was therefore mainly due to the addition of the preculture. Once the substrate carried over from the preculture medium was exhausted, the microorganisms began to consume the organic matter in the sludge itself. Microbial biomass increases while the sludge mass (decomposing organic matter) decreases, and the increase is smaller than the decrease because part of the organic matter is emitted as carbon dioxide. The observed MLSS therefore trends downward, similar to aerobic digestion, in which significant sludge reduction occurs through microbial growth [26,43].
In the fermentations with microwave- and ultrasonication-treated sludge, the initial SCOD concentrations were high (Fig. 2). During fermentation, the microorganisms consumed SCOD for growth, leading to a gradual SCOD decrease and MLSS increase. After the SCOD was exhausted, the MLSS began to drop. At the end of the fermentation, the MLSS followed the order ultrasonication (3.71 g/L) < microwave (4.70 g/L) < alkaline (4.81 g/L) < acid (5.54 g/L) < control (5.89 g/L). This suggests that a large amount of available organic matter remained undegraded in the media prepared from original, acid-, alkaline-, and microwave-pretreated sludge.
Lipid content gradually increased until reaching a maximum at 48 h (microwave and ultrasonication) or 60 h (control, acid, and alkaline) of fermentation (Fig. 2). The ultrasonication-pretreated sludge medium contained the most bioavailable material (SCOD) of all cases (Fig. 2), and correspondingly the highest lipid content (36.67% g/g) was obtained in that fermentation (Fig. 2). However, this is still far below the reported lipid accumulation potential of the strain (up to 85.10% g/g) [18]. The common explanation is that oleaginous yeasts achieve high lipid accumulation under carbon-rich, nitrogen-depleted conditions; the carbon source in the raw sludge was insufficient to support high lipid content even after pretreatment [61]. A promising solution for higher lipid production is to fortify the sludge by mixing it with other carbon-rich substrates [57]. Our previous studies showed that lipid content increased from 35.32% g/g with pretreated sludge medium alone to 50.13% g/g after adding crude glycerol to the pretreated sludge [57,61]. Owing to substrate depletion, lipid content gradually decreased toward the end of the fermentation (Fig. 2), likely because the microorganisms consumed their own lipid reserves to supply energy for cell activities.
SCOD dropped rapidly during the lipid accumulation period in the fermentation with ultrasonication-pretreated sludge, indicating fast substrate consumption by the oleaginous yeast (Fig. 2e). Our previous study found that substrate consumption in the initial stage was driven by fast cell growth and thereafter mainly by lipid production [7]. After 60 h, SCOD was depleted, causing the rapid decrease in lipid content (Fig. 2e) [7].
At the end of the fermentations, the TN concentration was reduced (Fig. 3), owing to the incorporation of nitrogen into intracellular material such as protein. Compared with carbon, much less nitrogen is needed for cell growth [16]; thus, the amount of nitrogen consumed was smaller than the amount of SCOD consumed. The highest TN consumption occurred in the fermentation with ultrasonication-pretreated sludge (Fig. 3), consistent with the better biomass growth in that case (Fig. 2e).
Sludge valorization and reduction
Ultrasonication-pretreated sludge showed the highest lipid production potential, suggesting it as a feasible route for biodiesel production from sludge. Lipid extracted from the biomass of the ultrasonication-pretreated fermentation was transesterified to biodiesel (fatty acid methyl esters, FAMEs) to evaluate its suitability as a biodiesel feedstock. The composition is shown in Fig. 4. The fractions of C16:0, C17:0, C18:0, and C18:2 increased continuously during fermentation, with C18:2 as the principal component (34.10%). The esters, with carbon chains of C14–C20, were similar to the plant seed oils currently used for commercial biodiesel production. Therefore, using ultrasonication-pretreated sludge for biodiesel production is applicable.
Sludge reduction is an important target in sludge management. In this study, the sludge reduction achieved through lipid production was calculated from the difference between the initial MLSS of the fermentation and the final solid mass after lipid extraction. The maximum sludge reduction occurred in the fermentation with ultrasonication-pretreated sludge (63.10%), followed by microwave (43.03%), alkaline (36.44%), and acidic (32.59%) pretreatment.
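As a rough consistency check, the sketch below recomputes sludge reduction and lipid yield from the MLSS and lipid-content values reported above; the assumption that the final solid mass equals the final MLSS minus the extracted lipid is ours, and the small discrepancy with the reported 63.10% reflects bookkeeping details (e.g., preculture dilution) not specified here.

```python
# Minimal sketch: sludge reduction and lipid yield from reported values.
# Assumes final solids = final MLSS minus extracted lipid (our assumption).
initial_mlss = 6.72          # g/L, ultrasonicated sludge at fermentation start
final_mlss = 3.71            # g/L, MLSS at end of fermentation
lipid_content = 0.3667       # g lipid / g biomass (36.67%)

final_solids = final_mlss * (1.0 - lipid_content)           # g/L after extraction
sludge_reduction = (initial_mlss - final_solids) / initial_mlss
lipid_yield = lipid_content * final_mlss / initial_mlss     # g lipid / g dry sludge

print(f"Sludge reduction ~ {sludge_reduction:.1%}")   # ~65%, vs. 63.10% reported
print(f"Lipid yield ~ {lipid_yield:.2f} g/g")         # ~0.20, vs. 0.21 reported
```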
Ultrasonication and its combinations with other pretreatments have also been used for methane and hydrogen production (Table 1); by comparison, using ultrasonication-pretreated sludge for lipid production achieved remarkable sludge reduction in a short time. Ultrasonication combined with other pretreatment methods has been reported to offer advantages in nutrient release and enhanced sludge reduction [11,29,33]. Further studies could investigate such combinations for improving lipid production and sludge reduction.
Conclusions
Pretreatment is essential for dissociating the complex materials in sludge. In this study, ultrasonication was more efficient for releasing SCOD, TN, and TP than acidic, alkaline, and microwave treatment. Compared with the original sludge, SCOD, TN, and TP increased 112.61, 5.22, and 11.00 times, respectively, after ultrasonication pretreatment. The highest lipid yield (0.21 g lipid/g dry sludge) and sludge reduction (63.10%) occurred in the fermentation with ultrasonication-pretreated sludge. The high SCOD release from ultrasonication underlines its promise as a pretreatment for biological sludge management. Combining ultrasonication with other treatments may provide better performance, and related studies are needed.
Sewage sludge
In this study, raw secondary wastewater sludge was collected from a municipal wastewater treatment plant in Shenzhen, China. After collection, the sewage sludge was covered and stored at 4 °C. The characterization of the sludge is given in Table 2.
Strain
The lipid-producing strain was the oleaginous yeast Lipomyces starkeyi, purchased from the China Center of Industrial Culture Collection (CICC). The highest reported lipid content of Lipomyces starkeyi is 85.1% (w/w) [18]. The strain was preserved in 20% (w/w) glycerol at −80 °C for long-term storage, and revival was achieved by streaking onto a potato dextrose agar (PDA) plate [18]. For short-term storage, the strain was maintained on malt extract agar plates, with subculturing every 7 days.
Pre-culture medium
The preculture medium was Yeast Extract Peptone Dextrose (YPD) medium (20 g/L glucose, 20 g/L peptone, and 10 g/L yeast extract). The pH of the medium was 6.6.
Fermenter
The experiments were carried out in a 5.00 L fermenter (Blbio-5G, Shanghai, China) with a working volume of 3.50 L. The pH, dissolved oxygen (DO), agitation, and temperature were automatically controlled during fermentation. DO was maintained above 30% (v/v) by controlling the agitation (200–400 rpm) and aeration rate (0.50–3.00 L/min). The temperature was kept at around 28 °C. The pH was not controlled during the fermentation, as shake-flask experiments showed only a slight difference in lipid accumulation with and without pH control. Samples (50 mL) were taken every 12 h during fermentation and stored at 4 °C.
Sludge characterization
One liter of sludge was used directly to determine MLSS and mixed liquor volatile suspended solids (MLVSS) after transport to the laboratory from the treatment plant.
To analyze MLSS, a quantitative membrane filter paper was dried at 105 °C to constant weight. Then, 50 mL of sludge was filtered through the pre-dried filter paper. After filtration, the filtrate was used to determine TN, TP, and SCOD, while the filter containing the solids was dried at 105 °C to constant weight. The MLSS was calculated according to Eq. 1:

MLSS = (M2 − M1) / V    (1)

where M1 is the weight of the membrane filter paper (g); M2 is the total weight of the dried solids and filter paper (g); V is the sample volume (50 mL in this study, expressed in L); and MLSS is in g/L.

The filter paper from the MLSS analysis was then used to determine MLVSS. The filter paper with the dry solids was transferred to a preweighed ceramic crucible, which was placed in a muffle furnace at 600 °C for 60 min. After cooling, the crucible was weighed (M4). MLVSS was calculated according to Eq. 2 (assuming the ashless quantitative filter paper burns off completely during calcination):

MLVSS = [(M2 − M1) − (M4 − M3)] / V    (2)

where M3 is the weight of the empty ceramic crucible (g) and M4 is the weight of the ceramic crucible with the sludge solids after calcination (g).
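A small sketch of these two calculations follows; the weighing values are hypothetical, and Eq. 2 assumes, as noted above, that the quantitative (ashless) filter paper burns off completely during calcination.

```python
# Minimal sketch of the MLSS (Eq. 1) and MLVSS (Eq. 2) calculations.
# All weights below are hypothetical examples (grams).
m1 = 0.1200   # dried membrane filter paper
m2 = 0.4770   # dried solids + filter paper
m3 = 25.3000  # empty ceramic crucible
m4 = 25.3950  # crucible + residue after calcination at 600 degC
v_l = 0.050   # filtered sample volume (50 mL) in litres

mlss = (m2 - m1) / v_l             # total suspended solids, g/L
ash = m4 - m3                      # fixed (inorganic) solids, g
mlvss = ((m2 - m1) - ash) / v_l    # volatile solids, g/L

print(f"MLSS  = {mlss:.2f} g/L")
print(f"MLVSS = {mlvss:.2f} g/L")
```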
The property of raw sludge utilized in the study is shown in Table 2.
Sludge pretreatment
Acid or base pretreatment [61]: A 5 mol/L H2SO4 or NaOH solution was used to adjust the pH of 4 L of sludge solution to 2 or 12, respectively. Owing to the buffering nature of sludge, several adjustments were made until the pH was stable at 2 or 12. The sludge solution was then stirred for 2 h, followed by centrifugation at 25 °C and 5000 rpm (Sigma 3K15, Germany).
Microwave irradiation [41]: In each run, a plate filled with 1 L of sludge solution was microwaved (900 W) for 15 min, with mixing at 5 min intervals. The obtained solutions were well mixed prior to use in fermentation.
Ultrasonication [41,48]: Ultrasonication was performed by placing the sonication probe in a 2 L beaker containing 0.5 L of sludge solution. Ultrasonication was conducted for 30 min at 450 W and 20 kHz, operated in a 5 s on/2 s off mode. The temperature of the sludge was not controlled during ultrasonication. The obtained solutions were employed as the medium for the oleaginous yeast fermentation. Sludge samples collected before and after pretreatment were centrifuged, and the concentrations of SCOD, TN, and TP in the supernatant were measured.

Table 1 Sludge reduction by ultrasonication-based treatments reported in the literature and in this study

| Pretreatment | Process | Time (d) | Sludge reduction (%) | Refs. |
|---|---|---|---|---|
| Alkaline-ultrasonic pretreatment | Methane production | 25 | 28.68 | [11] |
| Ultrasonic pretreatment | Anaerobic digestion | 30 | 23.7 | [28] |
| Ultrasonic and free nitrous acid pretreatment | Hydrogen production | 3 | 33.6 | [33] |
| Alkaline-ultrasonic pretreatment | Lysis-cryptic growth | 12 | 56.5 | [29] |
| Ultrasonication | Aerobic digestion | 3 | 40.2 | [19] |
| Ultrasonication | Lipid production | 2 | 63.10 | This study |
Fermentation
The preculture was obtained by inoculating a loopful of Lipomyces starkeyi into 350 mL of sterilized YPD medium and incubating at 28 °C and 150 rpm for 24 h. The 350 mL preculture was then transferred to the 5 L fermenter containing 3.15 L of sludge medium. The sludge medium was the whole sludge (solids and liquid) from the acidic, alkaline, microwave, or ultrasonication pretreatment after adjusting the pH to around 5.5 with 1 M NaOH or 1 M H2SO4. The fermentation lasted 84 h, and samples (50 mL) were withdrawn every 12 h for analysis.
Analysis
In this study, biomass refers to the MLSS in the fermentation broth and was measured as described above. To determine lipid [57], a 10 mL sample was centrifuged at 6500 rpm for 15 min. After discarding the supernatant, the remaining solids were dried at 80 °C for 24 h and transferred to 50 mL solvent-proof tubes. Then, 30 mL of chloroform/methanol mixture (2:1 v/v) and 6 mL of zirconia beads (1 mm diameter) were added to the tube, which was shaken continuously for 12 h in a wrist-action shaker (Burrell Model 75). After centrifugation (6500 rpm, 25 °C), the bottom layer was withdrawn and filtered, and the filtrate was placed in a preweighed glass tube. After evaporation at 80 °C in an oven (Hengyi DHG-9145A, Shanghai), the weight gain of the glass tube was the lipid mass in the 0.01 L sample. Lipid content was calculated according to the following equation:

Lipid content (% w/w) = lipid concentration / biomass concentration × 100

To estimate the potential of the lipid for biodiesel production, the extracted lipid was reacted with a mixture of H2SO4 and methanol (H2SO4/CH3OH = 1% v/v) at 60 °C for 12 h [57], with a methanol-to-lipid molar ratio of 6:1. The fatty acid methyl esters (FAMEs, biodiesel) obtained were extracted with hexane, and their composition was analyzed by gas chromatography coupled to mass spectrometry (GC-MS) [58]. The column dimensions were 30 m × 0.25 mm with a 0.25 μm phase thickness. The calibration curve was prepared with a mixture comprising 37 FAMEs.
All analyses were performed in duplicate, and the results presented are average values. Standard deviations and probabilities (P values) were analyzed; standard deviations were less than 5%, and P values ranged from 0.016 to 0.033 (P < 0.05 indicates a significant difference; P < 0.01 an extremely significant difference). | 6,228.6 | 2020-08-13T00:00:00.000 | [ "Environmental Science", "Chemistry" ] |
Beam Bridge Health Monitoring Algorithm Based on Gray Correlation Analysis
Bridge construction involves huge investment and long service cycles. During its service cycle, a bridge structure not only bears load effects that cause fatigue damage but is also affected by the natural environment and human damage. Beam bridges are the most common type of bridge on highways in China and see long-term service. The main beam of a beam bridge is its main load-bearing component, so real-time evaluation of the main beam's health will greatly improve the safety of highway transportation. Rapid assessment of the main beam can not only directly reflect whether its deflection exceeds the dangerous range and indicate its overall condition, but also reveal its long-term variation pattern. Current assessment algorithms only monitor whether the deflection of the main beam exceeds the dangerous range, without a complete assessment that incorporates the massive historical data. In this paper, based on the theory of Gray Correlation Analysis and combining real-time and historical bridge monitoring data, we rapidly calculate statistical and morphological indicators of the main beam and evaluate the comprehensive health indicator of the bridge according to the technical specification.
Introduction
In recent years, with the continuous development of computer technology, communication technology, embedded sensors, and other technologies, automatic health monitoring by computer systems has become the main method of bridge monitoring. The linear (deflection-profile) evaluation of the bridge main beam is an important indicator of bridge safety. Linear monitoring of the main beam can not only directly reflect whether the deflection exceeds the dangerous range and indicate the working condition of the main beam in operation, but also reveal its long-term variation pattern.
The linear evaluation of the bridge main beam is also of great significance for bridge bearing-capacity testing and earthquake disaster mitigation.
In China, the theory of linear evaluation of bridge main beams is still at an exploratory stage, without a unified evaluation method. Current practice remains at the level of monitoring whether the raw main-beam deflection data exceed a risk threshold; there is no standardized assessment of the bridge data.
Gray system theory is a discipline with strong theoretical and applied value. Gray Correlation Analysis, one of the important parts of gray system theory, plays a significant role in both theoretical and applied research. It is a method for analyzing and determining the degree of correlation between factors, or between factors and the main behavior of a system, through the calculation of the gray relational degree. The calculation of the Gray Correlation Grade is the basis and an important tool of Gray Correlation Analysis; therefore, establishing and improving the Gray Correlation Model is an important topic within the field. Gray Correlation Grade theories have attracted the attention of scholars at home and abroad and have become an important branch of future gray-system research. Applying Gray Correlation Grade theory to evaluate the morphological characteristics of the main beam can provide reliable theoretical support for bridge monitoring.
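For readers unfamiliar with the method, the sketch below computes Deng's classic gray relational coefficients and grade between a reference sequence and a comparison sequence; note this is the standard textbook formulation, whereas the ride coefficient in Step 5 below uses a slope-based variant of the correlation coefficient.

```python
# Minimal sketch: Deng's gray relational coefficient and grade.
# Standard formulation with distinguishing coefficient rho = 0.5.
import numpy as np

def gray_relational_grade(reference, candidate, rho=0.5):
    """Gray relational grade between a reference and a candidate sequence."""
    ref = np.asarray(reference, dtype=float)
    cand = np.asarray(candidate, dtype=float)
    # Normalize both sequences to remove dimensional effects
    ref = ref / ref.mean()
    cand = cand / cand.mean()
    delta = np.abs(ref - cand)                  # pointwise absolute differences
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)  # relational coefficients
    return coeff.mean()                         # grade = mean of coefficients

# Hypothetical deflection sequences (mm) from two measuring points
baseline = [2.1, 2.3, 2.2, 2.5, 2.4]
measured = [2.0, 2.4, 2.3, 2.7, 2.5]
print(f"Gray relational grade: {gray_relational_grade(baseline, measured):.3f}")
```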
In this paper, we propose a health assessment algorithm for the main beam of beam bridges based on the Gray Correlation Grade, which overcomes the inability of existing technologies to perform a comprehensive evaluation.
Health Monitoring Algorithm
The algorithm mainly includes the following steps. Step 1: Define the evaluation strategy. Define the range of outliers and the reference time range for the main-beam deflection data.
Step 2: Obtain the original deflection data. Read the original data obtained in real time by the i sensors installed on the main beam, and store them as arrays in memory. The array is defined as rawdata_i {{id0, time, value} … {idn, time, value}}.
Step 3: Filter the deflection data. Classify the rolling raw data based on the evaluation strategy, dividing the array rawdata_i into an array of reference values refdata_i and an array to be evaluated candata_i.
The value attribute in rawdata_i is filtered against the outlier range of the main-beam deflection data: entries whose values fall within the outlier range are discarded, and the remaining entries are kept in rawdata_i.
The time attribute in rawdata_i is then filtered against the reference time range of the main-beam deflection data: entries within the reference time range are placed in the reference array refdata_i, and entries outside that range are placed in the array to be evaluated, candata_i. A compact sketch of Steps 2-3 is shown below.
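The sketch mirrors the record layout and filtering described above; the outlier range, reference window, and record values are illustrative.

```python
# Minimal sketch of Steps 2-3: split raw deflection records into reference
# and candidate arrays after discarding outliers. Values are illustrative.
raw_data = [
    {"id": 0, "time": 100, "value": 2.1},
    {"id": 1, "time": 200, "value": 99.0},   # outlier
    {"id": 2, "time": 300, "value": 2.3},
    {"id": 3, "time": 400, "value": 2.2},
]
outlier_range = (50.0, float("inf"))         # values in this range are discarded
ref_window = (0, 250)                        # reference time range

kept = [r for r in raw_data
        if not (outlier_range[0] <= r["value"] <= outlier_range[1])]
ref_data = [r for r in kept if ref_window[0] <= r["time"] <= ref_window[1]]
can_data = [r for r in kept if r not in ref_data]
print(len(ref_data), len(can_data))          # 1 reference, 2 candidate records
```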
Step 4: Calculate reference values. From refdata_i, obtain the reference mean a_i0, the reference variance b_i0, the reference displacement value d_i0, and the reference numerical sequence L_0.
Calculate the reference mean a_i0 as in formula (1), the arithmetic mean of the reference data: a_i0 = (1/n) Σ refdata_in, where i is the measuring-point index and refdata_in is the n-th measured value of the i-th measuring point.
Calculate the reference variance b_i0 as in formula (2): b_i0 = (1/n) Σ (refdata_ij − a_i0)², where i is the measuring-point index and refdata_ij is the j-th measured value of the i-th measuring point.
Calculate the reference displacement value d_i0 as in formula (3), where i is the measuring-point index and refdata_in is the n-th measured value of the i-th measuring point.
Calculate the reference numerical sequence L_0 from the displacement monitoring data, as in formula (4), where a_00 is the reference mean of the first measuring point and a_i0 is the reference mean of the i-th measuring point.
Step 5: Evaluate the health characteristic indicators of the main beam. From the main-beam health data, calculate the mean change coefficient, the variance change coefficient, the maximum variance change coefficient, the maximum displacement change coefficient, the ride (smoothness) coefficient, and the curvature change coefficient.
First, calculate the mean change coefficient. Compute the mean of the deflection data at the monitoring points distributed along the main beam, and define the mean change coefficient Δa as in formula (5), where n is the number of points, candata_i is the measured value at the i-th measuring point, and a_i0 is the reference mean of the i-th measuring point.
Then calculate the variance change coefficient. Compute the variance change of the overall deflection data at the monitoring points along the main beam, and define the variance change coefficient Δb as in formula (6), where b_i is the measured variance of the i-th measuring point and b_i0 is the reference variance.
Then calculate the maximum variance change coefficient Δc, defined as in formula (7), where b_i is the measured variance of the i-th measuring point and b_i0 is the reference variance.
Then calculate the maximum displacement change coefficient Δd, defined as in formula (8), where d_i is the measured displacement of the i-th measuring point and d_i0 is the reference displacement value.
According to the theory of Gray Correlation Analysis, the ride coefficient p is calculated. Define the sequence of the j-th monitoring sample as L_j, and define the correlation coefficient p_j as in formula (10), where L_0(k+1) and L_0(k) are the (k+1)-th and k-th values of the reference numerical sequence, and L_j(k+1) and L_j(k) are the (k+1)-th and k-th values of the data sequence to be evaluated.
Then normalize the degree of correlation. Taking the average of the p_j as the ride coefficient, define the smoothness coefficient p as in formula (11), where j is the number of sampling points and p_j is the correlation coefficient obtained for the vector corresponding to the j-th sample point.
Then calculate the curvature change coefficient. From the reference data, obtain the reference deflection deformation curvature s_i0, and define the deflection deformation curvature of the data to be evaluated as s_i = (d_(i+1) − 2d_i + d_(i−1)) / Δx² (formula (12)), where d_(i+1), d_i, and d_(i−1) are the deflection deformation values at the (i+1)-th, i-th, and (i−1)-th measuring points, and Δx is the distance between two adjacent measuring points.
The mean of the differences of the curvature vectors is taken as the curvature variation coefficient Δs (formula (13)). The indicators are then scored according to the scoring rules and the weight table, and the health status score F is calculated as in formula (14): F = S(Δa)·w_a + S(Δb)·w_b + S(Δc)·w_c + S(p)·w_p + S(Δs)·w_s, where F is the total linear-evaluation score (full score 100 points); S(Δa) and w_a are the score value and weight of the mean change coefficient; S(Δb) and w_b those of the variance change coefficient; S(Δc) and w_c those of the maximum variance change coefficient; S(p) and w_p those of the ride coefficient; and S(Δs) and w_s those of the curvature change coefficient.
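A compact sketch of Step 5 follows; since the exact formulas (5)-(9) are not reproduced in the text above, the relative-change definitions, the toy scoring rule, and the example weights below are our illustrative assumptions rather than the paper's exact formulas.

```python
# Minimal sketch of main-beam health indicators and a weighted score.
# The specific coefficient definitions and weights are illustrative assumptions.
import numpy as np

def health_score(candata, a0, b0, weights, scores):
    """Weighted health score from simple change coefficients."""
    cand = np.asarray(candata, dtype=float)     # rows: samples, cols: points
    a = cand.mean(axis=0)                       # measured means per point
    b = cand.var(axis=0)                        # measured variances per point
    delta_a = np.mean(np.abs(a - a0) / a0)      # mean change coefficient
    delta_b = np.mean(np.abs(b - b0) / b0)      # variance change coefficient
    delta_c = np.max(np.abs(b - b0) / b0)       # max variance change coefficient
    indicators = {"da": delta_a, "db": delta_b, "dc": delta_c}
    # Map each indicator to a 0-100 score via caller-supplied rules, then weight
    return sum(weights[k] * scores[k](v) for k, v in indicators.items())

# Hypothetical data: 4 samples x 3 measuring points, plus reference statistics
cand = np.array([[2.0, 2.4, 2.1], [2.1, 2.5, 2.2], [2.2, 2.6, 2.1], [2.0, 2.5, 2.3]])
a0, b0 = np.array([2.0, 2.4, 2.1]), np.array([0.01, 0.01, 0.01])
step = lambda v: 100.0 if v < 0.1 else (60.0 if v < 0.3 else 20.0)  # toy scoring rule
w = {"da": 0.4, "db": 0.3, "dc": 0.3}
print(f"Health score F = {health_score(cand, a0, b0, w, {k: step for k in w}):.1f}")
```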
Experiments and Results
The experimental data, nine days in total, were randomly selected from the period of July 4 to July 18, 2016, from the Wohe Bridge in Anhui Province. The data acquisition frequency was 1 Hz. After statistical analysis, the maximum displacements of the monitoring points were calculated as shown in Table 3.1, and the displacement means as shown in Table 3.2. According to Table 3.5, the technical condition of this bridge is evaluated as Grade III, a poor state that requires maintenance work.
Conclusion
Compared with existing technology, the evaluation algorithm proposed in this paper enables bridge supervisors to quickly judge the overall health status of the bridge and the damage status of the main beams. The method gives bridge monitoring data the characteristics of standardization and high efficiency, improves the robustness and ease of use of the system, and has high engineering application value. | 2,483.6 | 2017-10-25T00:00:00.000 | [ "Engineering" ] |
Computational analysis of integrated biosensing and shear flow in a microfluidic vascular model
Fluid flow and flow-induced shear stress are critical components of the vascular microenvironment commonly studied using microfluidic cell culture models. Microfluidic vascular models mimicking the physiological microenvironment also offer great potential for incorporating on-chip biomolecular detection. In spite of this potential, however, there are few examples of such functionality. Detection of biomolecules released by cells under flow-induced shear stress is a significant challenge due to severe sample dilution caused by the fluid flow used to generate the shear stress, frequently to the extent where the analyte is no longer detectable. In this work, we developed a computational model of a vascular microfluidic cell culture model that integrates physiological shear flow and on-chip monitoring of cell-secreted factors. Applicable to multilayer device configurations, the computational model was applied to a bilayer configuration, which has been used in numerous cell culture applications including vascular models. Guidelines were established that allow cells to be subjected to a wide range of physiological shear stress while ensuring optimal rapid transport of analyte to the biosensor surface and minimized biosensor response times. These guidelines therefore enable the development of microfluidic vascular models that integrate cell-secreted factor detection while addressing flow constraints imposed by physiological shear stress. Ultimately, this work will add valuable functionality to microfluidic cell culture models, further fulfilling their potential as labs-on-chips.
INTRODUCTION
The forces exerted by flowing blood on vascular cells in vivo play critical roles in regulating vascular cell biology 1 . Traditional static in vitro cell culture models fail to replicate fluid flow-induced shear stresses, motivating the development of microfluidic platforms that better mimic the vascular microenvironment 2,3 . Indeed, microfluidic platforms have been extensively applied to study the effects of flow-induced shear stress on various cell types 4 .
Among the many cell behaviours affected by flow-induced shear stress is the secretion of biomolecules that impact cellular signaling and function. For example, nitric oxide, an important mediator of vasodilation, is released by endothelial cells in a shear-dependent manner 5 .
However, in situ microfluidic biosensing of cell-secreted biomolecules is typically performed under static fluid conditions 6 . Assay integration is an oft-cited advantage of microfluidic cell culture devices [7][8][9] , but in situ detection of secreted biomolecules under flow conditions presents significant design challenges due to physiological shear stress constraints, and the coupled nature of fluid flow and mass transport to the biosensor surface. Moreover, flow significantly dilutes cell-secreted factor concentrations, hindering or even preventing detection. Predicting mass transport of secreted factors and the biosensor response is therefore critical to inform the design of microfluidic cell culture systems integrating biosensing capabilities.
Computational modeling is a useful tool for studying physical phenomena, assessing feasibility and guiding design choices in microfluidics with integrated quantitation assays 10, 11 .
The behaviour of surface-based biosensors has been well-characterized using theoretical and computational modeling, which provide excellent insight into convection-diffusion-reaction design considerations [12][13][14] . This analysis, however, is not sufficient to address the constraints of incorporating physiological levels of shear stress in the device.
Computational modeling of biosensing considerations has also generally been limited to the scenario of a planar biosensor embedded within a single flow channel. This single-layer design requires that the biosensor be directly adjacent to the cell layer, posing practical challenges to isolating the cell culture and biosensor regions within the device. An alternative configuration is a bilayer device in which cells are cultured in the upper channel on a porous membrane support (e.g., Boyden chamber) while a surface-based biosensor is located in the lower channel underneath the membrane (Fig. 1A); the model, however, is generally applicable to other common multilayer configurations such as a hydrogel layer between two channels 15,16. The bilayer membrane configuration is widely used in microfluidic cell culture applications [17][18][19][20], and is particularly well-suited for microfluidic vascular models, where it has been used to study cancer cell 21 and monocyte 22 adhesion to an endothelial monolayer, endothelial permeability 23,24, the vascular/valvular microenvironment 25, and the blood-brain barrier 26,27. Bilayer culture devices have also been coupled with mass spectrometry for on-line monitoring of drug permeability 28. For integrated biosensing applications, the bilayer configuration is advantageous as it positions the cells and biosensor close together while isolating the biosensor from the elevated flow rates in the upper channel, allowing cell-secreted factors to accumulate in the lower channel. In this configuration, a surface-based sensor of length LS is located on the bottom surface of the lower channel between x = A and x = B (Fig. 1B).
Here we present a computational model of fluid flow, mass transport and biosensor kinetics to develop design guidelines that enable both on-chip detection of cell-secreted factors and application of physiological flow-induced shear stress to cultured cells in a bilayer microfluidic device. These guidelines detail the selection of critical design parameters of the bilayer device that will ensure optimal device functionality, which is contingent on meeting several important criteria. First, shear stress in the upper channel must be within physiological limits. Secondly, secreted biomolecule transport rates across the membrane must be maximized in order to efficiently supply analyte to the biosensor. Finally, the biosensor signal itself must also equilibrate rapidly in order to perform measurements within a practical timescale. Meeting these criteria enables the rapid and sensitive detection of secreted biomolecules from cells under a wide range of designer-specified shear stresses, adding novel and valuable functionality to microfluidic vascular models, further fulfilling their potential as lab-on-a-chip systems.
Computational Model Description
A schematic of the computationally-modeled bilayer microfluidic device is shown in Fig.1B.
Based on a model previously developed by Young et al. 23, the cell monolayer and porous membrane support were modeled as a uniform porous medium. The microchannels, porous membrane, and cell monolayer were layered in the y-direction. A small height-to-width aspect ratio was imposed on the upper and lower microchannel geometry; therefore, coupled with the low Reynolds numbers in the device, no variation of the model in the z-direction was assumed, resulting in a two-dimensional model in the xy-plane. For simplicity, the biosensor geometry was modeled as a rectangular strip on the bottom surface of the lower channel.
COMSOL Multiphysics 4.2 finite element analysis software was used to model and numerically simulate fluid flow, mass transport and reaction kinetics in the microfluidic device. A more detailed description of the implementation may be found in the ESI.
Governing Equations & Boundary Conditions
The governing equations and boundary conditions used to computationally model the bilayer microfluidic device are described below. A complete list of constant and simulation parameter values may be found in Table 1. In order to achieve the desired device functionality, namely the integration of both biosensing and shear stress stimulation in a single device, proper selection of critical design parameter values is necessary. Table 2 contains a list of the design parameters considered in the guidelines developed in this paper.

Free flow in the upper and lower channels is governed by the Navier-Stokes equations for incompressible flow:

$$\rho \left( \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} \right) = -\nabla p + \mu \nabla^2 \mathbf{u}, \qquad \nabla \cdot \mathbf{u} = 0$$

where the density ρ (kg m⁻³) and dynamic viscosity μ (Pa·s) of the fluid were those of water, and u is the velocity field. Porous media flow in the uniform porous layer representing the cell monolayer and porous membrane support is governed by the Brinkman equations:

$$\frac{\rho}{\varepsilon_p} \frac{\partial \mathbf{u}_p}{\partial t} = -\nabla p + \frac{\mu}{\varepsilon_p} \nabla^2 \mathbf{u}_p - \frac{\mu}{\kappa} \mathbf{u}_p, \qquad \nabla \cdot \mathbf{u}_p = 0$$

where εp is the porosity, u_p is the velocity field within the porous medium, and κ is the Darcy permeability of the porous medium (m²).
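Because the upper-channel flow rate is constrained by the need to apply physiological shear stress, a quick design estimate is often useful. The sketch below uses the standard parallel-plate approximation for wall shear stress in a shallow rectangular channel, τ = 6μQ/(wh²); this is a common design formula, not necessarily the exact expression derived in this paper's ESI, and the channel dimensions are illustrative.

```python
# Minimal sketch: wall shear stress in a shallow rectangular channel using the
# standard parallel-plate approximation tau = 6*mu*Q/(w*h^2).
mu = 1e-3         # dynamic viscosity of water, Pa*s
w = 1e-3          # upper channel width, m  (illustrative)
h = 1e-4          # upper channel height, m (illustrative)
tau_target = 1.0  # target physiological shear stress, Pa (~10 dyn/cm^2)

q = tau_target * w * h**2 / (6 * mu)   # required volumetric flow rate, m^3/s
print(f"Required flow rate: {q * 1e9 * 60:.2f} uL/min")  # ~100 uL/min here
```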
Cell layer secretion
No universal model of cell secretion currently exists. Therefore, secretion was modeled as a constant local concentration on the upper boundary of the uniform porous layer in order to study the evolution of the concentration profile within the device and the biosensor response.
Biomolecules were assumed to be secreted apically from the cell monolayer, passing first across the cell monolayer and then across the porous membrane before entering the lower channel.
The concentration of secreted analyte was held constant on the surface of the permeable membrane:

$$c_a = C_S \quad \text{at } y = h_b + \delta, \; 0 \le x \le L_m$$

where C_S is the apical concentration of analyte over the cell monolayer, L_m is the length of the porous membrane support, h_b is the lower channel height, and δ is the combined thickness of the cell layer and porous membrane support.
Transport of secreted analyte
Transport of secreted analyte was assumed to be equivalent to the transport of a dilute species, since cellular secretion generally results in low secreted species concentrations. The concentration profile within the device is given by the convection-diffusion equation:

$$\frac{\partial c_a}{\partial t} + \mathbf{u} \cdot \nabla c_a = D_S \nabla^2 c_a$$

where c_a is the concentration of analyte (M) and D_S is the diffusion coefficient of a typical secreted protein (10⁻¹⁰ m² s⁻¹ 29). The secreted biomolecules were assumed to be smaller than endothelial cell tight and leaky junctions (~2-20 nm 30). Cell-biomolecule interactions such as charge, steric, and surface receptor binding effects were neglected, and thus solute reflection at the upper cell monolayer surface was also neglected. Oncotic pressure-driven transport was assumed to be negligible because the protein concentration of the medium was nearly identical on both sides of the porous membrane support and cell monolayer.
No flux passes through the walls of the microfluidic device, leading to the following boundary condition on the device walls, where n is the outward unit normal vector:

$$\mathbf{n} \cdot (-D_S \nabla c_a + \mathbf{u} c_a) = 0$$
Biosensor binding kinetics
The biosensor on the bottom surface of the lower channel was modeled as a second-order reaction representing surface receptor-biomolecule binding interactions, given by the following rate equation 12,31:

$$\frac{\partial b_s}{\partial t} = k_{on} c_a (b_{max} - b_s) - k_{off} b_s$$

where b_s is the surface concentration of bound receptors (mol m⁻²), b_max is the total surface concentration of receptors, k_on is the association rate constant (M⁻¹ s⁻¹), and k_off is the dissociation rate constant (s⁻¹).
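To illustrate how this rate equation governs sensor equilibration, the sketch below integrates it for a constant local analyte concentration; the parameter values are illustrative assumptions chosen within the ranges discussed in this paper, not values from Table 1.

```python
# Minimal sketch: integrate db_s/dt = k_on*c_a*(b_max - b_s) - k_off*b_s
# for a constant analyte concentration c_a. Parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

k_on = 1e5        # association rate constant, 1/(M*s)
k_off = 1e-3      # dissociation rate constant, 1/s
b_max = 1e-8      # total surface receptor concentration, mol/m^2
c_a = 1e-8        # local analyte concentration, M (assumed constant)

def binding(t, b_s):
    return k_on * c_a * (b_max - b_s) - k_off * b_s

sol = solve_ivp(binding, t_span=(0.0, 3600.0), y0=[0.0], max_step=10.0)
b_eq = b_max * c_a / (c_a + k_off / k_on)       # analytical equilibrium level
print(f"Bound fraction after 1 h: {sol.y[0, -1] / b_eq:.1%} of equilibrium")
```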
These analyses are equally applicable to other multichannel configurations used in microfluidic cell culture, including configurations in which a hydrogel layer (with or without seeded cells) is used instead of a porous membrane support to separate the channels 15,16. Furthermore, the model assumes secretion in the apical direction, but it could be modified to study basal secretion, in which biomolecule transport is not across the entire cell monolayer thickness but mostly across the porous membrane support.
Design Consideration: Rapid analyte transport
We studied the relative effects of convective and diffusive transport rates through the porous membrane by considering the pore Péclet number Pep, which combines the critical mass transport parameters for porous media flow into a single dimensionless quantity expressing the ratio of convective to diffusive transport rates across the porous layer. For larger Pep = 0.5 and 1, representing increased convective transport to the biosensor surface, the % signal equilibration rises to completion more rapidly than when Pep < 0.1, again highlighting the important role of analyte transport to the biosensor surface.
One mechanism by which rapid response times may be achieved even at lower Pep is shown in Fig. 5B. The % signal equilibration increases rapidly even at a low Pep of 10⁻² for ca/KD = 10 relative to ca/KD = 1, where ca/KD is the ratio of the local concentration to the equilibrium dissociation constant. Thus, if the concentration of secreted analyte in the lower channel exceeds the KD of the receptor, the biosensor signal equilibration time may be significantly reduced.
Design consideration: Rapid biosensor equilibration times
The affinity-based biosensor's response depends on numerous parameters, including the properties of the surface-bound receptors, analyte transport to the surface, and the local analyte concentration. We studied the effects of convective-diffusive analyte transport and surface receptor binding rates on the biosensor response, represented respectively by the pore Péclet number (Eq. 10) and the Biot number, which expresses the ratio of the surface reaction rate to the diffusive transport rate. A Biot number << 1 indicates a reaction-limited system, in which the reaction timescale greatly exceeds the diffusive transport timescale. Conversely, a Biot number >> 1 indicates a diffusive transport-limited system, which typically occurs with surface receptors that exhibit high affinity for their ligand (simulation parameters are listed in Table S1). For each Bi, analyte transport rates to the sensor surface were also varied by examining four different Pep values between 10⁻³ and 10⁰. Furthermore, for each Biot number, simulations were performed for koff = 10⁻³ s⁻¹ (Fig. 6A) and koff = 10⁻⁴ s⁻¹ (Fig. 6B).
With koff = 10⁻³ s⁻¹, the % equilibration monotonically decreased as Bi increased for a given Pep, with significant decreases occurring at Bi ~ 10⁻¹-10⁰. At high Bi, the % equilibration was very low, implying long biosensor equilibration times. Furthermore, the % equilibration increased with Pep for a given Bi, indicating that larger analyte transport rates produced faster biosensor signal equilibration. For example, the biosensor signal equilibration reached 96% completion for Pep = 10⁰ at Bi < 10⁰, but only 40-50% for Pep = 10⁻³. At Bi > 10⁰, the % equilibration decreased rapidly as Bi increased; at high Bi, the % equilibration after 1 h was very low even at Pep = 10⁰, the highest value simulated. The same trends were observed for koff = 10⁻⁴ s⁻¹, except that the % equilibrations at a given Bi were lower. For example, at Bi = 10⁰ and Pep = 10⁰, signal equilibration reached 24% completion in Fig. 6B compared with 96% for the equivalent parameters in Fig. 6A, a ~70% difference. koff thus has a significant effect on biosensor equilibration times.
In designing the device-integrated biosensor, a balance is required between biosensor affinity, surface receptor concentration, and equilibration time. Constraints are imposed on the surface receptors and on analyte transport within the membrane microfluidic device, described by koff, Bi, and Pep. Given that a surface receptor with koff = 10⁻⁴ s⁻¹ yields much slower response times than one with koff = 10⁻³ s⁻¹, one can infer that koff values ≤ 10⁻⁵ s⁻¹ will yield prohibitively long response times even at high Pep. It should be stressed that these equilibration times are an inherent property of receptors with low koff values and are not unique to this system, as the reaction-limited equilibration time scales inversely with koff 12. Unlike a single-channel device configuration, however, it is not possible simply to increase the flow rate over the biosensor surface to increase convective transport and thereby minimize equilibration times; the flow rate is ultimately constrained by the need to apply physiological shear stress. Thus, surface-bound receptors with koff > 10⁻⁴ s⁻¹ are recommended.
Based on the significant drop in % signal equilibration after 1 h for Bi > ~10⁰ in Fig. 6, designs with Bi at or below ~10⁰ are therefore recommended.
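The sketch below captures this screening logic as a simple design check over candidate receptors; the functional form assumed for the Biot number, the length scale, and the candidate parameter values are our illustrative assumptions, while the thresholds (Bi ≲ 1, koff > 10⁻⁴ s⁻¹) follow the recommendations above.

```python
# Minimal sketch: screen candidate biosensor designs against the guidelines
# discussed above (Bi not >> 1, k_off above ~1e-4 1/s). Definitions and
# parameter values are illustrative assumptions.
D_S = 1e-10          # analyte diffusivity, m^2/s
h = 1e-4             # characteristic diffusion length (channel height), m

def biot(k_on, b_max, D=D_S, L=h):
    """Bi = reaction rate / diffusive rate (assumed form k_on*b_max*L/D).
    k_on is converted from 1/(M*s) to m^3/(mol*s) via the factor 1e-3."""
    return (k_on * 1e-3) * b_max * L / D

candidates = [
    {"name": "high-affinity Ab", "k_on": 1e6, "k_off": 1e-5, "b_max": 1e-8},
    {"name": "moderate Ab",      "k_on": 1e5, "k_off": 1e-3, "b_max": 1e-8},
]

for c in candidates:
    bi = biot(c["k_on"], c["b_max"])
    ok = bi <= 1.0 and c["k_off"] > 1e-4
    print(f"{c['name']}: Bi = {bi:.2f}, k_off = {c['k_off']:.0e} -> "
          f"{'acceptable' if ok else 'slow equilibration expected'}")
```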
CONCLUSION
We developed a computational model of a microfluidic vascular model integrating flow-induced shear stress and the monitoring of cell-secreted biomolecules using an integrated biosensor. The model was applied to a bilayer configuration that isolates the biosensor from significant analyte dilution, thereby maximizing the concentration of analyte detected, but is readily extended to other multilayer configurations. Using the computational model, we developed guidelines that address both physiological and mass transport constraints as well as enable device operation to be tailored to designer-specified physiological shear stresses while ensuring rapid transport of secreted biomolecules to the biosensor. Special consideration was also given to minimizing the biosensor response time within the mass transport limits of the device configuration. These
guidelines may ultimately be used to incorporate physiological shear flow and integrated biosensing into microfluidic vascular models, adding valuable functionality for use in fundamental cell biology and drug screening applications.
SUPPLEMENTAL MATERIAL
See the supplemental material for a detailed description of the computational model mesh, the derivation of the critical shear stress, and the biosensor reaction kinetics simulation parameters. | 3,733.8 | 2017-11-21T00:00:00.000 | [ "Engineering" ] |
fNIRS-QC: Crowd-Sourced Creation of a Dataset and Machine Learning Model for fNIRS Quality Control
Featured Application: Our dataset can be used to train novel Machine Learning and Artificial Intelligence models to automatically identify the quality of fNIRS signals. Abstract: Despite technological advancements in functional Near-Infrared Spectroscopy (fNIRS) and a rise in the application of fNIRS in neuroscience experimental designs, the processing of fNIRS data remains characterized by a high number of heterogeneous approaches, compromising the scientific reproducibility and interpretability of the results. For example, manual inspection is still necessary to assess the quality, and hence the retention for analysis, of collected fNIRS signals. Machine Learning (ML) approaches are well positioned to provide a unique contribution to fNIRS data processing by automating and standardizing methodological approaches for quality control, as ML models can produce objective and reproducible results. However, any successful ML application is grounded in a high-quality dataset of labeled training data, and unfortunately, no such dataset is currently available for fNIRS signals. In this work, we introduce fNIRS-QC, a platform designed for the crowd-sourced creation of a quality control fNIRS dataset. In particular, we (a) composed a dataset of 4385 fNIRS signals; and (b) created a web interface to allow multiple users to manually label the signal quality of 510 10-s fNIRS segments. Finally, (c) a subset of the labeled dataset was used to develop a proof-of-concept ML model to automatically assess the quality of fNIRS signals. The developed ML models can serve as a more objective and efficient quality control check that minimizes error from manual inspection and the need for expertise in signal quality control.
Introduction
Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging modality that detects cortical brain activity using light in the near-infrared spectrum (650-900 nm). Owing to the difference in light absorption between oxygenated and deoxygenated hemoglobin, fNIRS can measure the relative changes in their concentrations, which are indicative of cerebral activation and deactivation. In recent years, the use of fNIRS has grown rapidly in neuroimaging studies [1], gaining traction in fields such as infant neuroimaging [2] and cognitive neuroscience [3].
Despite the burgeoning use of fNIRS, a general consensus on, or standardization of, best pre-processing practices for the NIRS signal has not been established, unlike for other neuroimaging modalities such as functional magnetic resonance imaging (fMRI; see [4,5]). Differences in the choice and combination of pre-processing pipelines have been shown to lead to different results in fNIRS studies [6]. Hence, the absence of standardized pre-processing methods, analysis tools, and instrumentation can lead to poor reproducibility of studies and results, similar to what occurs with other neurophysiological signals (e.g., infant cry [7]).
One key pre-processing step in fNIRS data analysis is the quality check of the raw signals of each channel. The presence of a strong cardiac component is one of the main indicators of good optode-scalp coupling and hence of a high-quality fNIRS signal. Noise in the fNIRS signal is typically the result of (i) body or head movements, which cause fast spikes or baseline shifts, and (ii) physiological components such as cardiac, respiratory, and blood pressure oscillations. Usually, a manual visual inspection is conducted to assess signal quality, checking indicators such as the presence of large motion artifacts or of the heartbeat oscillations indicative of good scalp-optode coupling. The nature of manual visual inspection means that the signal quality check depends on researcher expertise and on subjective judgments of what constitutes a "good" quality signal. Hence, the development of an objective signal quality check can address the issues of experience and subjectivity in this step. Machine Learning algorithms have proven effective in supporting researchers' classification of signal quality: Li and colleagues, for example, successfully employed Machine Learning for the automatic quality assessment of pulsatile signals [8] and of multi-level ECG signals [9], while Gabrieli et al. [10] tested the efficacy of different classifiers in identifying the quality of pupillometry signals.
Currently, a number of algorithms based on morphological characteristics of the fNIRS signal have been proposed for signal quality assessment: (i) the Scalp Coupling Index (SCI [11]); (ii) placing headgear optodes efficiently before experimentation (PHOEBE [12]); and (iii) the signal quality index (SQI [13]). SCI and PHOEBE binarily assign signals to "good" or "bad" categories based on the presence of the cardiac component in the signal. The SQI algorithm provides a five-level quantitative rating of signal quality and was developed based on the visual quality-assessment markers used by fNIRS experts.
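To make the idea concrete, the following is a minimal sketch of an SCI-style check: band-pass both wavelengths to the cardiac band and correlate them. The band (0.5-2.5 Hz), filter order, and any acceptance threshold here are our assumptions, not the published algorithm's exact parameters.

```python
# Minimal SCI-style coupling check (sketch, not the published implementation).
import numpy as np
from scipy.signal import butter, filtfilt

def sci(w1, w2, fs, band=(0.5, 2.5)):
    # Band-pass both wavelength signals to the assumed cardiac band.
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    f1, f2 = filtfilt(b, a, w1), filtfilt(b, a, w2)
    return np.corrcoef(f1, f2)[0, 1]   # near 1 when the cardiac pulse dominates

fs = 10.0
t = np.arange(0, 10, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t)    # ~72 bpm synthetic cardiac component
score = sci(pulse + 0.1 * np.random.randn(t.size),
            pulse + 0.1 * np.random.randn(t.size), fs)
print(round(score, 2))                 # high values suggest good coupling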
These algorithms rely on a small number of human-defined signal quality indicators and empirical thresholds. Deep learning approaches, by contrast, automatically extract high-dimensional features and leverage non-linear decision functions. They are therefore a promising alternative, as shown in other signal-processing domains.
Machine learning involves training algorithms with known input-output pairs of the target function. Concerning fNIRS signals, several studies have recently employed different Machine Learning and Deep Learning techniques to classify signals. Ortega and Faisal [14], for example, employed a deep learning classifier to decode the strength of hand movements in order to develop more accurate Brain-Computer Interfaces (BCI). Similarly, Ma et al. [15] developed a Deep Learning classifier to classify motor imagery of three different hand gestures. Deep Learning models have also been employed to assess and classify the mental workload of demanding tasks, such as driving [16] or memory tasks [17].
Concerning signal quality estimation, a machine learning version of the SQI (MLSQI [18]) has been developed based on the training dataset described in Sappia et al. [13]. However, that training dataset was collected from only 14 participants and labeled by individuals working at the company that produces the fNIRS recording device used. The limited number of collected signals is a crucial limitation for the effective application of a Deep Learning approach, a limitation that is intrinsic to the fNIRS field for multiple reasons. First, the novelty of the field results in a reduced availability of large-scale fNIRS datasets that can be employed for secondary data analysis or for the development of novel tools and techniques. Second, the difficulty of obtaining data labeled by experts in the field, combined with the lack of a ground truth for signal quality, makes large-scale labeling of fNIRS signal quality very hard to obtain. To overcome these limitations, in this paper we introduce a crowd-sourced training dataset consisting of 510 ten-second segments of single-channel fNIRS signals. Through crowdsourcing, we are able to leverage multiple fNIRS recordings from a wider range of participants. By means of a web interface, we were able to reach more individuals with fNIRS experience and tap into their expertise in labeling the quality of the segments. The labeled dataset is used here to train and test a machine learning model that can identify the quality of a signal, and that can therefore be used to support newcomers approaching fNIRS signals for the first time, or to make pre-processing pipelines more objective by introducing an objective way to identify high-quality signal segments for further analysis.
Aim of This Study
This study aims to improve the quality-control step of fNIRS studies by introducing an Artificial Intelligence framework that can support researchers in discriminating between usable and unusable fNIRS segments. Overall, this work brings three main contributions. First, an open-source web interface that can be used to classify the quality of different signals using a crowd-science approach has been designed and developed. While here we employed it for the collection of fNIRS signal labels, a boilerplate of the platform, which can be adapted to other signals or digital objects that require labeling, has been made available, allowing other researchers to rapidly deploy citizen-science platforms. The second contribution of this study is the creation of a reference dataset of fNIRS signals that can be used by researchers within the field to develop new tools for the preprocessing and analysis of fNIRS data and to train non-experts to discriminate between usable and unusable signals. Third, a proof-of-concept ML model that can support researchers by automatically assessing the quality of fNIRS signals is presented. The latter is of special interest considering the novelty of the field, the limited number of experts in the visual examination of fNIRS signal quality, and the increasing number of young scholars with little to no experience in fNIRS signal processing who may need support to evaluate the quality of recorded data. Overall, we believe that the created dataset and developed model favor a more objective and efficient approach to fNIRS quality control.
Dataset
The complete fNIRS dataset generated for this study consists of 4385 portions of single-channel fNIRS signals with a duration of 10 s each. Short signal portions were selected in order to obtain better localization of signal quality in time and space. To avoid biases introduced by different recording devices, all the signals were collected using a NIRSport device (NIRx Medical Technologies LLC). This equipment has a scan rate of 7.81 Hz and employs LED emission with source wavelengths of 760 nm and 850 nm.
Signals included in this dataset were drawn from four different studies, and all belong to adult participants. The first study (Mother-Child Synchrony study) involved the simultaneous recording of fNIRS data from mothers and children engaging in a passive video viewing task [19]. Only data from mothers (N = 31, Mean Age = 34.9 ± 4.16 years) were selected for the dataset used in this work (for details on the experimental procedure, see [19]).
The second study (Father-Child Synchrony study) consists of fNIRS recordings of fathers and children engaging in both a passive video viewing task and an active play task. Only data from fathers (N = 29, Mean Age = 38.1 ± 3.67 years) were selected for the current dataset [20].
The third study (3-Love study) consists of the data of 69 participants (Mean Age = 21.21 ± 1.66 years) [21]. Participants were asked to watch three video clips depicting a couple interacting while baking, eating, and exercising. Before the presentation, participants were informed (experimental manipulation) about the status of the couple, as being either romantic partners, friends, or siblings.
Finally, the fourth study (Mother-Father synchrony study) consists of the recordings of both mothers and fathers while passively hearing audio stimuli of infants' and adults' vocalizations [22].
A breakdown of the number of signals per study is reported in Table 1. Of all the signals, 510 segments from the Mother-Child Synchrony study were randomly selected for the current labeling stage. All selected segments were resampled at 10 Hz, but no additional preprocessing was applied to the raw signals; a sketch of this segmentation and resampling step is given below.
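As an illustration, the following sketch shows one plausible way to cut a two-wavelength recording into 10 s segments and resample them to 10 Hz. The variable names and placeholder data are ours, not the authors' code.

```python
# Segmenting a two-wavelength recording (7.81 Hz) into 10 s pieces resampled
# to 10 Hz, as done for the labeled subset (sketch with synthetic data).
import numpy as np
from scipy.signal import resample

fs_in, seg_dur, fs_out = 7.81, 10.0, 10
raw = np.random.randn(2, 4000)                 # (wavelengths, samples), placeholder

n_in = int(seg_dur * fs_in)                    # ~78 samples per 10 s segment
n_out = int(seg_dur * fs_out)                  # 100 samples after resampling
segments = [resample(raw[:, i:i + n_in], n_out, axis=1)
            for i in range(0, raw.shape[1] - n_in + 1, n_in)]
print(len(segments), segments[0].shape)        # each segment is (2, 100)
```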
Web Interface
In order to obtain labels for our classifier, a web platform for the human labeling of signals was written in HTML5 and deployed on a shared hosting service. The platform consists of a back-end, where signals and ratings are stored, and a front-end, which allows users to rate the signals. When users register an account, they are asked to specify their level of expertise, which can be Beginner ("worked on less than 2 datasets (less than 100 fNIRS recordings processed)"), Intermediate ("worked on 2-4 datasets (100-200 fNIRS recordings processed)"), or Expert ("worked on more than 4 datasets (more than 200 fNIRS recordings processed)").
One randomly selected signal is presented each time and the user is asked to assign one out of three possible labels: Keep, Keep after correction, or Reject.
Visually, the User Interface presents, in the upper part of the screen, colored buttons used to rate the signals (the Reject button in red, the Keep after correction button in yellow, and the Keep button in green), followed by two rectangles, one above the other, in which the two waveform components (wavelengths 1 and 2) of the signal are shown. The platform allows the user to zoom in on signals in order to obtain a closer view of peaks and fluctuations. Finally, in the lower part of the screen, the interface reports how many signals the user has rated. A screenshot of the interface is shown in Figure 1. Each user can rate as many signals as are present on the server, anonymously; the only references to the user are an anonymous ID and the user's expertise level.
The web interface was used to collect ratings for a subset of 510 segments of the complete dataset, in order to develop the proof of concept of the automatic quality classification based on Deep Learning.
Collected Data and Processing
The subset of 510 segments used for the proof of concept was selected from the Mother-Child Synchrony study. A total of N = 2401 ratings were collected; a breakdown of the ratings by user is reported in Table 2.
Each rating consists of three pieces of information: the label of the signal quality, the self-reported level of expertise of the rater, and the time required for the rater to assign a label to the signal (reaction time). No identifiable or demographic data of the raters are collected.
Overall, the percent agreement between self-described Expert fNIRS users is 62.4%, between Intermediate users 14.4%, and between Beginner users 39.4%, while the average percent agreement is 30.4% between Expert and Beginner users, 29.3% between Expert and Intermediate users, and 33.9% between Intermediate and Beginner users. The time required for each rating was used to compute a confidence weight (w_c) for that rating. All rating times of each user were assigned to four confidence levels, based on thresholds corresponding to the 25th, 50th, and 75th percentiles of the distribution of the user's rating times. Ratings below the 25th percentile were associated with high confidence and assigned w_c = 1; similarly, the other levels were associated with progressively lower confidence and assigned w_c = 0.75 (25th to 50th percentile), w_c = 0.5 (50th to 75th percentile), and w_c = 0.25 (above the 75th percentile). The self-reported expertise was also used to assign an experience weight (w_e) to each user: users self-reported as "Expert" were assigned w_e = 1, "Intermediate" users w_e = 0.66, and "Beginner" users w_e = 0.33. The ratings and the weights were used to compute the class of each segment in the dataset. The three labels correspond to three different quality levels (q) of the signals:
• Keep. The presented segment has good quality and is deemed acceptable by the user. This class was assigned q = 3.
• Keep after correction. The portion is affected by noise or artifacts (e.g., spikes), but after applying appropriate signal processing methods to increase the signal-to-noise ratio and remove artifacts, it can likely be used for further analysis. This class was assigned q = 2.
• Reject. The portion is very noisy or affected by artifacts that cannot be corrected using standard signal processing techniques. This class was assigned q = 1.
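A minimal sketch of how such weights could be computed is shown below; the function and variable names are illustrative, not taken from the platform's code.

```python
# Deriving confidence weights w_c from a user's rating-time percentiles and
# experience weights w_e from self-reported expertise (illustrative sketch).
import numpy as np

def confidence_weights(rating_times):
    q25, q50, q75 = np.percentile(rating_times, [25, 50, 75])
    return [1.0 if t < q25 else 0.75 if t < q50 else 0.5 if t < q75 else 0.25
            for t in rating_times]

EXPERIENCE_WEIGHT = {"Expert": 1.0, "Intermediate": 0.66, "Beginner": 0.33}

times = [1.2, 0.8, 3.5, 2.1, 5.0, 0.9, 2.8, 4.2]   # one user's rating times (s)
print(confidence_weights(times))
print(EXPERIENCE_WEIGHT["Intermediate"])
```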
Three different methods were adopted to aggregate the ratings from different users and compute a unique quality level Q for each segment. The first method (Q_m) was simply the majority vote, in which the most-voted quality level q was assigned.
The other methods were based on the weights. First, we computed the sum q_k of the weights w_i of the ratings r_i at quality level k:

$q_k = \sum_{i : r_i = k} w_i$    (1)

Then we selected the level k corresponding to the maximum q_k. In this way we computed the experience-weighted aggregated quality level (Q_e), based on the experience weights, and the confidence-weighted aggregated quality level (Q_c), based on the confidence weights.
In case of ties, the lower class was assigned. A breakdown of the labels assigned to the segments for each aggregation method is provided in Table 3, while a visual representation of the users' response time by rating is shown in Figure 2.
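The aggregation step could be implemented as in the following sketch, which covers both the majority vote (Q_m) and the weighted variants (Q_e, Q_c), with ties resolved toward the lower class; this is our illustration, not the authors' code.

```python
# Aggregating per-user ratings into a single quality level Q for a segment.
from collections import Counter

def aggregate(ratings, weights=None):
    # ratings: quality levels q in {1: Reject, 2: Keep after correction, 3: Keep}
    weights = weights or [1.0] * len(ratings)
    totals = Counter()
    for q, w in zip(ratings, weights):
        totals[q] += w                      # q_k = sum of weights at level k
    best = max(totals.values())
    return min(k for k, v in totals.items() if v == best)  # ties -> lower class

print(aggregate([3, 1, 1, 2]))                            # Q_m, majority vote
print(aggregate([3, 1, 1, 2], [1.0, 0.33, 0.33, 0.66]))   # weighted (e.g., Q_e)
```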
For the proof-of-concept DL model, we used the Q_m quality levels. Since the classes were highly unbalanced (with only N = 19 segments for the "Keep" class), we focused on the binary classification of the "Reject" class versus the others.
Deep Learning Experiments
The architecture of the Deep Neural Network (DNN) employed here is based on the architecture described by Bizzego et al. [23] and consists of three sequential components: (i) a Convolutional Branch; (ii) a Long Short-Term Memory (LSTM) module; and (iii) a Fully Connected Head (FCH). The Convolutional Branch consists of four convolutional blocks, each comprising a convolutional layer with kernel size 3, a batch normalization layer [24], a Rectified Linear Unit [25], and a max-pooling layer with kernel size 2. Additionally, in the second and third blocks a dropout layer was added to reduce overfitting. In each block, the convolutional layer expands the number of channels: the first layer expands from 2 to 32 channels, and in subsequent layers the number of channels is doubled iteratively, reaching 256 channels. A pooling layer is then used to average the convolved signal at 10 time points, followed by an additional dropout layer. Following the Convolutional Branch is an LSTM module [26,27], a recurrent layer used to leverage the specific properties of sequential data. The network employed here contains a single-layer LSTM module, with the number of features in the hidden state set to 100.
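A minimal PyTorch sketch of the described architecture follows. Kernel sizes, channel progression, pooling, and the LSTM hidden size follow the text; the dropout rates and the size of the fully connected head are our assumptions.

```python
# Conv branch -> LSTM -> fully connected head (sketch of the described DNN).
import torch
import torch.nn as nn

class FNIRSQualityNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        blocks = []
        channels = [2, 32, 64, 128, 256]   # 2 -> 32, then doubled to 256
        for i in range(4):
            blocks += [
                nn.Conv1d(channels[i], channels[i + 1], kernel_size=3, padding=1),
                nn.BatchNorm1d(channels[i + 1]),
                nn.ReLU(),
                nn.MaxPool1d(kernel_size=2),
            ]
            if i in (1, 2):                        # dropout in 2nd and 3rd blocks
                blocks.append(nn.Dropout(p=0.3))   # rate is an assumption
        self.conv = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveAvgPool1d(10)       # average at 10 time points
        self.drop = nn.Dropout(p=0.3)
        self.lstm = nn.LSTM(input_size=256, hidden_size=100, batch_first=True)
        self.head = nn.Linear(100, n_classes)      # fully connected head

    def forward(self, x):                          # x: (batch, 2 wavelengths, time)
        z = self.drop(self.pool(self.conv(x)))     # (batch, 256, 10)
        z = z.permute(0, 2, 1)                     # (batch, 10, 256) for the LSTM
        _, (h, _) = self.lstm(z)                   # final hidden state
        return self.head(h[-1])

# e.g., a 10 s segment resampled at 10 Hz -> 100 samples per wavelength
logits = FNIRSQualityNet()(torch.randn(8, 2, 100))
print(logits.shape)                                # (8, 2)
```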
The DNN was implemented in Python (v. 3.8.10), using the Numpy [28], Pandas [29], Scikit-learn [30], and Torch [31] packages (Numpy v. 1.19.4, Scikit-learn v. 0.23.2, Pandas v. 1.1.4, Torch v. 1.9.0 + cu102). The network was trained for 1000 epochs, with a batch size of 128. The learning rate was initially set to 1 and divided by √10 every two epochs. The network's performance is evaluated in terms of accuracy, precision, and recall, as well as the F1 score and the Matthews Correlation Coefficient (MCC). While the accuracy of a model (the ratio between correctly classified segments and the total number of segments) is commonly used as the main metric to assess performance, it has been reported to be biased for datasets with an unbalanced number of labels per class, as is the case for the dataset presented here. In such cases, the accuracy score has been shown to overestimate the performance of a classifier [10,32,33]. To take this bias into account, different metrics have been introduced to assess the performance of binary and multiclass classifiers, such as the F1 score and the Matthews Correlation Coefficient. While the former has seen wider adoption in the field, the score is not class-independent, yielding different values for binary classifiers depending on which class is labeled as positive and which as negative. Additionally, the F1 score does not take into account segments correctly classified as negative (true negatives), therefore not providing a complete and objective evaluation of a model's performance. To overcome the limitations of the F1 score, a metric named the Matthews Correlation Coefficient (MCC), based on a special case of the φ correlation, has been introduced [34,35]. Compared to the F1 score, the MCC has two main advantages. First, all four categories of the confusion matrix (true positives, false negatives, true negatives, and false positives) are considered in the metric, as well as the ratio between elements of the different classes in the dataset, thereby providing a more balanced performance indicator [36]. Second, the metric is class-independent, and thus not influenced by the assignment of the positive and negative labels to the classes; when classes are swapped, the metric does not change, as opposed to the F1 score [37]. Quantitatively, the MCC is a value between −1 and +1, where −1 indicates complete discrepancy between predictions and observations, while +1 represents perfect forecasting capability. As a result, the higher the MCC score, the better a model is performing.
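For reference, all of the named metrics can be computed with scikit-learn, as in the following sketch on toy labels; these are standard scikit-learn functions, not the authors' evaluation script.

```python
# Evaluating a binary quality classifier with the metrics named in the text.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef)

y_true = [1, 1, 0, 1, 0, 0, 1, 1]   # toy labels: 1 = usable, 0 = reject
y_pred = [1, 1, 1, 1, 0, 1, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
print("MCC      :", matthews_corrcoef(y_true, y_pred))  # class-independent
```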
Results
A copy of the segments used in this study, the labeled dataset, and the pre-trained network are available online in the project's data repository [38], while the template for the web platform fNIRSQC has been released as an open-source project under the name cisciqc (Citizen Science Quality Control [39]).
The dataset was divided into two partitions: Train (80% of the segments) and Test (20%). After the training phase, the model reported an Accuracy of 0.70 on the train set (MCC = 0.18). On the test set, the network obtained an Accuracy of 0.63, a Precision of 0.61, a Recall of 0.95, an F1 score of 0.74, and an MCC of 0.25 (Table 4).
Confusion Matrices for the train and test partitions are reported, respectively, in Tables 5 and 6. Overall, the model performs better on the training partition, suggesting a possible over-fitting problem.
Focusing on the confusion matrix, the model tends to misreport signals that users labeled as rejects, and therefore unusable, as signals to be kept in subsequent steps of the analysis.
Discussion
In this work, we tested the possibility of using a Deep Neural Network model to support researchers in identifying usable and unusable fNIRS signals. First, a web platform for the collection of human labels for fNIRS signals was designed and implemented; then ratings for 510 segments of fNIRS signals were obtained from raters of different expertise levels. The collected labels were then used to train a DNN model.
The model's accuracy in the train and test partitions indicates that the model learns well on the train partition, but performance drops on the test partition, suggesting possible overfitting during the training phase. While the recall score (0.95) indicates that the majority of the relevant elements (usable segments) are correctly identified as usable, the precision score (0.61) suggests that a significant number of signals labeled by users as unusable are mislabeled by the model, as shown in the confusion matrix of the test partition (Table 6).
Imagining a possible implementation in a real research setting, the results reported here suggest that the model can successfully help researchers identify usable fNIRS segments from a pool containing both usable and unusable segments, which is the typical case in fNIRS experimental studies. In fact, segments may contain different types of artifacts: some can be corrected, while others are so extensive that the affected portions of signal must be discarded. However, the current implementation fails at excluding completely unusable segments, which the model labels for inclusion in further analysis. Even so, the model can still support researchers by reducing the number of segments requiring manual inspection. The model's ability to match users' labels for usable signals is also reflected in the F1 score (0.74), while its inability to correctly label unusable signals with high precision is highlighted by the MCC score (0.25).
The limits of our model's classification accuracy can be explained by several factors. The first is the limited number of segments included in the dataset for this work (N = 510). This small number of segments may not have been sufficient to cover all the possible combinations of artifacts and noise that can affect fNIRS signals. However, while a higher number of segments might have helped reduce the bias of the model toward the training segments (with a possible reduction of the accuracy on the train partition and a simultaneous increase of the precision on the test partition), the labeling stage would have required more extensive resources in terms of time and users. By limiting the number of segments, we were able to obtain a higher number of ratings per signal, thereby reducing the impact of a single rater on the overall label used for training, which is computed as described in Section 2.3. Future work may try to increase the quality of the predictions by involving more users and by adding more signals to the dataset, in order to obtain a larger and more balanced set of segments covering a higher number of possible cases.
Concerning the network's performance, another way to improve classification accuracy is to improve its structure. In this work, we aimed at using a simple network, adapted from a previous work whose aim was not to classify the usability of a signal but its nature [40]. The adoption of this simple network has some benefits. First, the linearity of the structure and the simplicity of the architecture allow for easy explainability (crucial especially when AI algorithms are used on medical data) and for rapid modification, to better adapt it to different scenarios (e.g., ratings from more users, a higher number of samples) and computational resources (e.g., laptops, cloud clusters, high-performance computers). Currently, the network trains on an average laptop (Intel i7-8565U, 16 GB of RAM) in less than 30 min, and it provides predictions within seconds, making it suitable for both offline and online classification of fNIRS segments.
Overall, in this work we have demonstrated a proof of concept of how a DNN model can be trained and employed to classify the usability of fNIRS signals. The developed model can help researchers estimate the quality of an fNIRS signal segment, and its usability for research purposes, in a more objective way, reducing the subjectivity introduced by a manual inspection stage.
While the model's performance is not excellent, the limits of the dataset (in terms of number of segments and of the number and expertise of raters) and of the network architecture help explain the results reported here. Future work should aim at collecting data from a higher number of raters, with different expertise levels, for a higher number of fNIRS segments, and should explore more sophisticated networks designed ad hoc for this classification task. Moreover, future studies may combine different signals (e.g., fNIRS and EEG) to increase the performance of the classification model.
Conclusions
In this work, we presented a proof of concept for a DNN classifier able to help researchers identify the quality of fNIRS signals. Moreover, as artifacts of this work, we created an open-source boilerplate for a citizen-science platform for the human labeling of digital elements, called cisciqc, and its implementation for the collection of fNIRS signal labels, called fNIRSQC, as well as a dataset of quality-labeled fNIRS signals that can be used by others to train and test different ML models. Our results demonstrate that a simple network, trained on a small number of signals labeled by users of different expertise levels, can successfully help researchers identify high-quality fNIRS signals.
"Computer Science"
] |
A note on convex programming in practical problems
In recent years, convex programming has become a sophisticated tool of central importance in engineering, finance, operations research, statistics, etc. The goal of this paper is to emphasize modeling and to present several convex programming problem formulations, especially in optimal design and location theory. We build simple models to address these problems, investigate their properties, and apply a variant of the Weierstrass theorem to prove the existence of a solution. Our results extend and improve comparable results of the author [2,8,20,21,22,23,24,25].
Introduction
The general unconstrained programming problem can be stated as the problem of finding the vector $x^*$ where the minimum occurs, i.e., minimize $f(x)$ over $x \in \mathbb{R}^n$. The function $f$ is called the objective function, and the elements $x$ are often called the decision variables. Convex programming (CP) is a subfield of mathematical programming, or simply mathematical optimization, which studies the problem of minimizing convex functions. The usual and most intuitive form of a convex program is:

minimize $f(x)$ subject to $g_i(x) \le 0$, $i = 1, 2, \ldots, n$, and $h_j(x) = 0$, $j = 1, \ldots, k$,    (2)

where $f : \mathbb{R}^n \to \mathbb{R}$ is a convex function to be minimized over the variable $x$, and each $g_i$ and $h_j$ is a real-valued function defined on $\mathbb{R}^n$. The constraints $g_i(x) \le 0$ are referred to as inequality constraints; the constraints $h_j(x) = 0$ are referred to as equality constraints. If the objective and constraint functions are differentiable, we refer to (2) as a differentiable program. The optimization problem (2) is called a linear programming problem (LPP) if the objective and constraint functions are linear, e.g.,

minimize $c^T x$ subject to $Ax \le b$, $x \ge 0$,

where $c, x \in \mathbb{R}^n$, $A$ is an $m \times n$ matrix, $b \in \mathbb{R}^m$, and the constraint $x \ge 0$ is usually referred to as the non-negativity restriction. Consider another example,

minimize $\tfrac{1}{2} x^T Q x + c^T x$ subject to $Ax \le b$,

where $Q$ is an $n \times n$ matrix and $b \in \mathbb{R}^m$. Then (2) is said to be a quadratic programming problem under linear constraints. This class of problems is as important for practical purposes as linear programming. Convex programming has applications in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design, data analysis and modeling, statistics, and finance [4,5,12,13,27]. With recent improvements in computing and optimization theory, convex programming is nearly as straightforward as linear programming. Due to its ability to solve large, practical engineering problems, it has been useful in solving some conic programming problems. The basic idea and focus of this research is that many analyses and designs arising in science and engineering can be cast, or recast, in the form of a convex programming problem, i.e., minimizing a convex objective of some decision variable subject to some convex constraints on the variable. Although such problems can appear difficult (they may have hundreds of variables and nonlinear, nondifferentiable objective functions), they can be solved numerically very efficiently by recently developed interior-point methods that exploit convexity and the particular problem structure. Thus, the original problem is efficiently solved.
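As an illustration of these two problem classes, the following sketch formulates a small LP and a convex QP with the cvxpy modeling library (assuming cvxpy is available); the data are random and chosen only so that both problems are feasible and bounded.

```python
# The LP and QP forms above, expressed with the cvxpy modeling library.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n))
b = rng.random(m) + 1.0          # positive b keeps x = 0 feasible
c = rng.random(n)                # nonnegative costs keep the LP bounded

x = cp.Variable(n)
lp = cp.Problem(cp.Minimize(c @ x), [A @ x <= b, x >= 0])   # linear program
lp.solve()

Q = np.eye(n)                    # positive semidefinite -> convex QP
qp = cp.Problem(cp.Minimize(0.5 * cp.quad_form(x, Q) + c @ x), [A @ x <= b])
qp.solve()
print(lp.value, qp.value)
```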
Given that not all problems arising in science and engineering are convex, and hence amenable to rapid numerical solution, in cases where the problem is not convex, convex approximations can yield suboptimal solutions that are very useful in practice. Generally, then, it is a useful skill to recognize convexity in engineering and science problems and to know how to exploit it.
A large body of literature is devoted to the development of algorithms for the solution of the highlighted problems [2,7,8,11,15,19,23], and it is our wish to apply simple and more general results to solve them.
Our paper is organized as follows: we specify the problem and some of its basic properties (Section 2); we present a general result in a finite-dimensional setting to resolve the problem, show the connections between them, and describe how such a result can be applied to tackle the problems (Section 3). Finally, in Section 4, we briefly state our conclusions.
Preliminaries and Problem Setup
In this section, we set up the problems and discuss some of their properties. The simplest convex programming problem is an unconstrained problem of the form

minimize $\|Ax - b\|$,    (6)

where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$ are problem data, $x \in \mathbb{R}^n$ is the variable, and $\|\cdot\|$ is a norm on $\mathbb{R}^m$.
A solution of (6) is sometimes called an approximate solution of $Ax \approx b$ in the norm $\|\cdot\|$. The vector

$r = Ax - b$    (7)

is called the residual for the problem; its components are sometimes called the individual residuals associated with $x$ [5]. An extension of (6) is the weighted convex programming problem, minimize $\|W(Ax - b)\|$, where the problem data $W \in \mathbb{R}^{m \times m}$ is called the weighting matrix. This kind of problem arises in many disciplines, as discussed in the earlier section, and some of the many names used, depending on the discipline, include least-squares problems in statistics and engineering, the Weber problem in location research, and norm approximation in optimization and operations research.
In recent times, research has been carried out on the minimum norm problem, which is also a convex programming problem, resolved using several techniques; see [5,12,14,20,21], to mention a few. Such problems have been found useful in approximation theory, statistical estimation [5], and signal and image reconstruction, as well as in other engineering applications [12]. In [12], the author showed that the minimum norm problem can be recast as a fixed-point problem $x = Tx$, and further proved the existence and uniqueness of the minimum solution of the operator equation if $T$ is nonexpansive. The research carried out in [14] finds the minimum norm solution of linear programs by a Newton-type method which was shown to be globally convergent. A recent stride, reported in [20], was directed at an estimation problem using a simple random sampling technique: the estimation problem was formulated as an equivalent minimum norm problem in Hilbert space and resolved by an appropriate application of the classical projection theorem. In [21], a critical survey was carried out on various applications of minimum norm problems, with emphasis on finance.
In this paper, we show that convex programming problems arising in practice can be recast as minimum norm problems and resolved using a more general approach.
Optimal Design
The least squares method describes a frequently used approach to solving overdetermined or inexactly specified systems of equations in an approximate sense. Instead of solving the equations exactly, we seek only to minimize the sum of squares of the residuals.
The least squares criterion has important statistical interpretations. If appropriate probabilistic assumptions about the underlying error distributions are made, least squares produces what is known as the maximum-likelihood estimate of the parameters. Even if the probabilistic assumptions are not satisfied, research in this area has shown that least squares produces useful results [1,2,5,6,7,8,9,10,11,14,15,19,21,22,23,24,25,26]. A very common source of least squares problems is curve fitting. Let $x$ be the independent variable and let $y(x)$ denote an unknown function of $x$ that we want to approximate. Assume there are $m$ observations, i.e., values of $y$ measured at specified values of $x$: $(x_i, y_i)$, $i = 1, \ldots, m$. The idea is to model $y(x)$ by a linear combination of $n$ basis functions:

$y(x) \approx \beta_1 \varphi_1(x) + \cdots + \beta_n \varphi_n(x)$.

The design matrix $A$ is a rectangular matrix of order $m \times n$ with elements $a_{i,j} = \varphi_j(x_i)$; it usually has more rows than columns. In matrix-vector notation, the model is $y \approx A\beta$. Common choices of model include polynomials and log-linear models. The residuals are the differences between the observations and the model:

$r_i = y_i - \sum_{j=1}^{n} \beta_j \varphi_j(x_i)$,    (15)

or, in matrix-vector notation, $r = y - A\beta$. We want to find the coefficients that make the residuals as small as possible. In the least squares method, we minimize the sum of the squares of the residuals,

$\sum_{i=1}^{m} r_i^2 = \|y - A\beta\|_2^2$.    (17)

If some observations are more important or more accurate than others, we might associate different weights $w_i$ with different observations and minimize $\sum_i w_i r_i^2$. Other uses of this type of problem in statistical applications occur mainly in data collection, for example, when we design an experiment or a survey so as to minimize experimental error. Here the residual is essentially an error in the fit of the model. The fitted line has predicted values at points on the line, and the residuals are the vertical deviations from the points to the line. As a result, the least squares procedure produces a line that minimizes the sum of squared vertical deviations from the points to the line.
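A minimal worked example of this formulation, fitting a quadratic polynomial by least squares with NumPy, is given below; the data are synthetic.

```python
# Fitting y ~ b1 + b2*x + b3*x^2 by least squares with NumPy.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 7.2, 13.1, 20.8])        # noisy observations

A = np.vander(x, 3, increasing=True)             # design matrix [1, x, x^2]
beta, *_ = np.linalg.lstsq(A, y, rcond=None)     # minimizes ||A beta - y||^2

residuals = y - A @ beta
print(beta, np.sum(residuals**2))                # coefficients, sum of squares
```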
In an experimental context in the physical sciences and engineering, almost all measured quantities have an error because a perfect apparatus does not exist. Least squares fitting techniques provide some guidelines for determining the values of those errors. Recent publications directed at applications of convex programming in statistical and engineering design can be found in [1,2,18,19].
Epidemic Models
Least squares fitting techniques have been employed as one approach to estimating epidemiological parameters from daily case notification time series for pandemic diseases; they are used in trajectory matching of an epidemic model to epidemic curve data. We use an epidemic model of SEIR type that classifies individuals as susceptible (S), exposed (E), infectious (I), recovered (R), and dead (D). The transmission process can be modeled by a system of nonlinear differential equations in which, denoting time derivatives by primes, infectious individuals either recover or die from the pandemic at mean rates γ and δ, respectively, β is the transmission rate, 1/k is the mean latent period, and C(t) is the cumulative number of case notifications (Chowell et al., 2007). Specifically, we fit the cumulative number of cases given by C(t) to the cumulative number of case notifications. In this case, the residual (20) is given by the difference between the model-predicted C(t) and the reported cumulative case notifications. This type of problem possesses existence and uniqueness properties for Hilbert spaces, reflexive Banach spaces, rotund spaces, and duals of smooth Banach spaces. The nice properties of least squares, as well as their physical interpretations, lead to many important applications.
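The following sketch illustrates trajectory matching of an SEIR-type model to cumulative case data by least squares; the state equations and parameter names follow the standard convention assumed above, and the data are synthetic, not from the paper.

```python
# Fitting an SEIR-type model to cumulative case notifications by least squares.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def seir(y, t, beta, k, gamma, delta, N):
    S, E, I, R, D, C = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - k * E
    dI = k * E - (gamma + delta) * I
    return [dS, dE, dI, gamma * I, delta * I, k * E]   # C' = kE (notifications)

def residual(params, t, data, y0, N):
    beta, k, gamma, delta = params
    C = odeint(seir, y0, t, args=(beta, k, gamma, delta, N))[:, 5]
    return C - data                     # residual (20): model minus observations

N, t = 1e6, np.arange(30.0)
y0 = [N - 1, 0, 1, 0, 0, 1]
true = odeint(seir, y0, t, args=(0.4, 0.2, 0.1, 0.01, N))[:, 5]
data = true + np.random.normal(0, 2, t.size)           # synthetic noisy data

fit = least_squares(residual, x0=[0.3, 0.3, 0.2, 0.02],
                    args=(t, data, y0, N), bounds=(0, np.inf))
print(fit.x)                            # recovered (beta, k, gamma, delta)
```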
Location Theory
There are numerous publications in which facility location problems have been discussed [4,5,8,9,10,13,14,15,17,19,20,21,22,23]. The problem has given rise to an extraordinary number of generalizations, extensions, and modifications; it would literally require volumes to do them justice, and space permits only a brief and somewhat arbitrarily selected summary. Comprehensive reviews of this type of problem can be found in [16,26].
This problem is sometimes called the Weber problem; it arises in many fields, such as signal processing, networking, and communications, and goes by many names, such as the Fermat-Torricelli problem, the Steiner problem, the Steiner-Weber problem, the median center problem, the minisum problem, and the spatial median problem [10]. We are to find the "minisum" point $(x_f, y_f) \in X$ which minimizes the sum of weighted Euclidean distances from itself to $n$ fixed points with coordinates $(x_i, y_i) \in P$, $i = 1, \ldots, n$. The weights associated with the fixed points are denoted by $w_i$. One simple scenario is that of locating a distribution centre, where the weights $w_i$ are the cost per unit distance of shipping the requirements to customers located at the fixed points $(x_i, y_i)$; $(x_f, y_f)$ is then the distribution centre that minimizes the transportation costs. The problem can be stated as:

minimize $\sum_{i=1}^{n} w_i \, d\big((x_f, y_f), (x_i, y_i)\big)$,    (21)

where $d\big((x, y), (x_i, y_i)\big)$ is the Euclidean distance between $(x, y) \in X$ and $(x_i, y_i) \in P$. The problem can be rewritten with squared distances,

minimize $\sum_{i=1}^{n} w_i \big[(x_f - x_i)^2 + (y_f - y_i)^2\big]$,    (22)

indicating the minimization of a sum of squared Euclidean distances that looks similar to (17) and (20). The formulation can accommodate a wide variety of distance measures: the Euclidean distance is the special case $p = 2$, and the rectilinear distance is the case $p = 1$, of the generalization of the Euclidean distance (the $l_p$ distance), given by

$d_p(u, v) = \Big(\sum_j |u_j - v_j|^p\Big)^{1/p}$.

The Weber problem has two very important properties. First, the objective function is convex, which ensures that any local optimum is also a global optimum. Second, the optimal location for the new facility must lie within the convex hull of the existing facility locations. Hence all the problems discussed are convex programs, and as a result, finding a global optimum of a norm-based objective function is often tractable.
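As an illustration, the weighted Weber problem can be solved with the classical Weiszfeld iteration, sketched below with minimal safeguards; this algorithm is not discussed in the text and is included only as a worked example.

```python
# Weiszfeld iteration for the weighted Weber (minisum) problem.
import numpy as np

def weiszfeld(points, weights, iters=200, eps=1e-9):
    x = np.average(points, axis=0, weights=weights)   # start at weighted centroid
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - x, axis=1), eps)  # avoid /0
        coef = weights / d
        x_new = (coef[:, None] * points).sum(axis=0) / coef.sum()
        if np.linalg.norm(x_new - x) < eps:
            break
        x = x_new
    return x

pts = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])   # fixed facility locations
w = np.array([1.0, 2.0, 1.0])                           # shipping cost weights
print(weiszfeld(pts, w))                                # minisum point (x_f, y_f)
```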
The Main Results
We present the results applicable to the problems discussed earlier and show the connections existing between them. In the sequel, we state the following definitions and a proposition that will be needed for further development.
Definition 3.1. Let $X$ be a normed space and $M \subseteq X$ be a non-empty convex set. A function $f : M \to \mathbb{R}$ is convex if $f(\lambda x + (1-\lambda) y) \le \lambda f(x) + (1-\lambda) f(y)$ for all $x, y \in M$ and $\lambda \in [0, 1]$; a strictly convex function is defined by requiring strict inequality whenever $x \ne y$ and $\lambda \in (0, 1)$. Definition 3.2. Let $X$ be a metric space and $M \subseteq X$ be a non-empty convex set. A function $f : \ldots$ Definition 3.3. A twice differentiable function $f : \mathbb{R}^n \to \mathbb{R}$ is said to be strongly convex if there exists a real number $\mu > 0$ such that $f(y) \ge f(x) + \nabla f(x)^T (y - x) + \tfrac{\mu}{2} \|y - x\|^2$ for all $x, y \in \mathbb{R}^n$.
Theorem 3.1 [3,27]. Consider a convex function $f : \mathbb{R}^n \to \mathbb{R}$. Then every local minimum of $f$ is a global minimum. Proposition 1.1. The norm function $f(x) = \|x\|$ is a convex function.
Proof: Let $f(x) = \|x\|$. Then for any $y, z$ and $\lambda \in [0, 1]$, we have

$f(\lambda y + (1-\lambda) z) = \|\lambda y + (1-\lambda) z\| \le \lambda \|y\| + (1-\lambda) \|z\| = \lambda f(y) + (1-\lambda) f(z)$,    (25)

by the triangle inequality and the absolute homogeneity of the norm, which completes the proof. The above proposition implies that, since every norm is a sublinear function, every norm is a convex function. As a result, finding a global optimum of a norm-based function is often tractable, as pointed out in the earlier section.
Theorem 3.2. Let $f : \mathbb{R}^n \to \mathbb{R}$ be a twice differentiable strongly convex function and let $X \subseteq \mathbb{R}^n$ be a compact set, with $\mathrm{dom}\, f$ compact. Then $f$ attains a maximum and a minimum on $X$.
Proof. Since $f : \mathbb{R}^n \to \mathbb{R}$ is strongly convex, and hence convex, it is continuous. A continuous function on a compact set attains its maximum and minimum by the Weierstrass theorem, which completes the proof. Remark. We have used the fact that every real-valued convex function on $\mathbb{R}^n$ is Lipschitzian, and a Lipschitzian function is necessarily lower semicontinuous. The uniqueness of the minimum norm solution is tractable since strongly convex functions are strictly convex. Example 1.1: This example illustrates the use of the developed technique to identify a subset satisfying the conditions for the existence of a solution.
Since $f$ is a norm function (and hence a convex function), it is continuous on its domain. By Theorem 3.2, a solution exists as long as the set $M$ is compact. To show that $M$ is compact, note that $M$ is a closed subset of the compact set $X \subseteq \mathbb{R}^n$, and a closed subset of a compact set is compact. By Theorem 3.2 the solution exists, and it is indeed a global solution by Theorem 3.1.
Conclusion
In this paper, applications of convex programming to practical problems are considered and resolved using a more general approach in finite-dimensional settings. The work demonstrates the importance of convex programming in the presence of faults, measurement errors, and statistical uncertainties. By combining the basic and desirable properties of normed spaces and convexity, we have shown how practical problems can be resolved using abstract results.
"Mathematics"
] |
Upregulation of Versican Associated with Tumor Progression, Metastasis, and Poor Prognosis in Bladder Carcinoma
Objective: This work analyzes the role of versican (VCAN) in bladder cancer (BLCA). VCAN is a chondroitin sulfate proteoglycan that is important for tumorigenesis and cancer development; however, its expression in human BLCA has rarely been reported. Methods: The clinical significance of VCAN in BLCA was determined using bioinformatics tools. We then performed immunohistochemical staining (IHC) and analyzed the correlation between VCAN expression and clinicopathological features. Results: The bioinformatics results reveal that a high VCAN mRNA level was significantly associated with stage, histological subtype, molecular subtype, and metastasis in BLCA. Furthermore, IHC revealed that VCAN expression was significantly correlated with the number of tumors, invasion depth, lymph node metastasis, distant metastasis, and histological grade. Kaplan-Meier survival analysis showed that patients with high VCAN expression have a poorer prognosis than those with low VCAN expression. According to the bioinformatics databases, VCAN in BLCA was related to FBN1 and to genes of the ECM remodeling pathway (MMP1, MMP2). Conclusion: VCAN expression may be involved in carcinogenesis and prognosis. Hence, VCAN could be a reliable biomarker of clinical prognosis in BLCA.
Introduction
Bladder cancer (BLCA) is one of the most prevalent malignancies of the urinary system [1]. Especially in China, the incidence and mortality rates of BLCA have gradually increased in recent years [2]. Although the diagnostic process has been improved by the development of cystoscopy, specific biomarkers for early diagnostic and prognostic assessment of BLCA are still lacking [3]. Moreover, the curative options and the five-year survival rate for BLCA are limited and low [4]. Hence, it is extremely important to explore indicators of BLCA in order to improve clinical treatment outcomes for BLCA patients. Versican (VCAN) is a chondroitin sulfate proteoglycan located in the extracellular matrix. Previous reports have proven that VCAN is important for the development of various diseases [5]. Through direct or indirect interactions, VCAN plays significant roles in modulating cell proliferation, differentiation, adhesion, and migration [6]. In addition, VCAN is associated with the formation of a pericellular sheath that can modulate cell attachment and motility [7]. Other studies have also revealed that VCAN promotes the local expansion, invasion, and formation of cells; moreover, it can promote distant metastasis by decreasing cell-matrix adhesion [8,9]. Hence, VCAN might function in the invasion and metastasis of tumor cells.
Recently, several studies have demonstrated abnormal expression of VCAN in various cancer types such as prostate [10], breast [11], gastric [12], colorectal [13], ovarian [14], pancreatic [15], laryngeal [16], and testicular tumors [17]. Some studies reveal that VCAN plays an important role in various cancers [18,19]. Notably, VCAN has been shown to promote cell proliferation, inhibit apoptosis, and promote metastasis in tumors [20,21]. Stylianou et al. reported a 140-fold increase of VCAN expression in laryngeal cancer tissue compared to normal controls [22]. VCAN was also reported to be expressed at significantly higher levels in severe ovarian cancer than in normal ovarian tissues [23]. Shen et al. [24] found that the VCAN expression level was higher in gastric cancer than in adjacent normal tissues and that a higher VCAN level correlated with greater tumor invasion depth and poor prognosis. This notion is also supported by the observation that increased levels of peritumoral VCAN predicted poor prognosis in patients with early-stage prostatic cancer [25]. Moreover, relapse in stage I node-negative breast cancer was associated with the level of VCAN accumulated in the peritumoral stroma [26]. These studies suggest that VCAN might have an oncogenic role in tumors.
In contrast to the oncogenic role of VCAN, de Wit et al. showed that protein expression of VCAN predicted a better clinical outcome for colon cancer patients with stage II and III disease [27]. Voutilainen et al. also reported that a higher expression of VCAN in epithelial cells was correlated with a longer survival time, while a higher level of VCAN in the tumor stroma was an indicator of poor prognosis [14]. Thus, VCAN seems to exert its function by interacting with different types of proteins in a tissue-specific manner.
However, the physiological role of VCAN in BLCA still needs to be explored. The purpose of our study was to investigate VCAN expression in BLCA, to determine its relationship with clinicopathological factors, and to focus on its prognostic significance. First, we analyzed VCAN expression in human BLCA by bioinformatics analysis of publicly available databases. Then, we confirmed the role of VCAN in 417 cases of BLCA from a Chinese population with immunohistochemical staining. Based on our study and this broad analysis, VCAN may represent a valuable candidate biomarker for BLCA prognosis and treatment.
Identification of VCAN Expression in BLCA Based on Bioinformatics Databases
UALCAN (http://ualcan.path.uab.edu) was used for analyzing the mRNA level of VCAN in BLCA. GEPIA (http://gepia.cancer-pku.cn/) was used to perform customizable analyses on data from TCGA and GTEx. The prognostic value of the VCAN mRNA level was also analyzed with these databases.
Then, the prognostic value of the VCAN mRNA level in 402 cases of BLCA was assessed by using the OncoLnc database (http://www.oncolnc.org), and the survival analysis was performed using cutoff values of the median of the VCAN expression in BLCA patients.
Furthermore, the STRING database was used to explore the interactions between VCAN and related proteins, and the Comparative Toxicogenomics Database (CTD) was used to explore gene-drug interactions; drugs or chemicals that could affect VCAN expression were searched in the CTD database.
Patient Samples and Construction of Tissue Microarray (TMA)
417 cases of paraffinized specimens were collected from BLCA patients who underwent curative resection from 1998 to 2010 at the Department of Surgery (Zhejiang Provincial People's Hospital, Hangzhou, China). These samples were used for the construction of a tissue microarray (TMA).
The patient samples consisted of 366 males and 51 females aged 35-79 years (median age 63.3 years), and there were 147 cases with a low histological grade and 270 cases with a high histological grade according to the World Health Organization pathological classification of tumors. Meanwhile, 35 cases of patients presented with distant metastasis and 382 cases without distant metastasis.
The end point was overall survival (OS), which was calculated from the date of operation to the end of the follow-up period (December 2015) or the date of death. No patient received chemotherapy or radiotherapy before the operation.
The construction of the TMA was performed by the Shanghai Outdo Biotech Company (Shanghai, China), as follows. First, the tissue slides were stained with hematoxylin and eosin to select the most representative tissue block from each case. Then, the targeted areas of each donor tissue block (0.6 mm in diameter) were punched and arranged in empty recipient paraffin blocks of 35 × 18 mm using a tissue microarray instrument (MTA-1, Beecher Instruments, Silver Spring, MD, USA). As each core was placed into the recipient block, the block identification number was noted on the array map. The TMA blocks were then fused at 50°C-52°C for 6 hours and quickly immersed into melted wax to coat the surface. After that, the TMA blocks were stored at 4°C after natural cooling. After the TMAs were constructed, 4 μm sections were cut from the TMA blocks to generate TMA slides. The slides were dried overnight at room temperature and then baked at 60°C for 20 min before being stored.
The project was approved by the ethics committee of the Zhejiang Provincial People's Hospital, and written informed consent was obtained from all participants involved.
Immunohistochemistry.
First, sections were deparaffinized with dimethylbenzene (xylene) and rehydrated in a descending ethanol series. Then, the slides were incubated with 3% H2O2 at room temperature to quench endogenous peroxidase. Antigens were retrieved by maintaining the temperature between 92°C and 98°C in a 0.01 M citrate buffer (pH 6.0) for 4 min. Subsequently, slides were incubated with 10% (vol/vol) normal goat serum for 20-30 min at room temperature in order to reduce nonspecific reactions.
Then, the slides were incubated with primary rabbit anti-VCAN polyclonal antibody (ab19345, Abcam, Massachusetts, US) at a 1:300 dilution in phosphate-buffered saline at 4°C overnight. The next day, after rinsing with phosphate-buffered saline (PBS), the slides were incubated with a biotin-labeled secondary antibody followed by horseradish peroxidase-linked antibody (Zymed, San Francisco, CA) for 30 min at room temperature. The sections were then stained with 3,3'-diaminobenzidine, and the nuclei were counterstained with hematoxylin. After that, the slides were dehydrated in an ascending ethanol series and coverslipped with neutral resin.
Evaluation of Immunostaining Intensity
The immunostaining was assessed semiquantitatively, using an immunoreactivity score, by two expert pathologists under a light microscope and blinded to the clinical data. The experts scored independently according to the intensity of cell staining and the proportion of tumor cells stained. Staining intensity was scored according to the following criteria: 0 (no staining), 1 (weak staining, light yellow), 2 (moderate staining, yellow-brown), and 3 (strong staining, brown). The positively stained area of VCAN-positive cells was expressed as a percentage of the whole cancer area and scored as follows: 0 for no cytoplasmic expression, 1 for ≤25% positive cancer cells, 2 for 26-50%, 3 for 51-75%, and 4 for ≥76%.
Based on these data, the composite score was obtained as the product of the intensity and proportion scores; the total thus ranged from 0 to 12. In the present study, we defined a score ≤ 5 as low VCAN expression and a score ≥ 6 as high VCAN expression for subsequent evaluation.
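For clarity, the scoring rule reduces to the following small computation; this is a sketch with illustrative names, not the authors' code.

```python
# Composite IHC score = intensity (0-3) x positive-area score (0-4),
# then binarized at the cutoff used in the study.
def composite_score(intensity: int, proportion: int) -> int:
    assert 0 <= intensity <= 3 and 0 <= proportion <= 4
    return intensity * proportion          # ranges over 0..12

def vcan_group(score: int) -> str:
    return "high" if score >= 6 else "low"

print(composite_score(3, 3), vcan_group(9))   # -> 9 high
```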
Statistical Analysis
The Statistical Package for the Social Sciences (version 13.0; SPSS Inc., Chicago, IL) was used to perform all statistical analyses. The χ² or Fisher's exact test was used to analyze categorical data. The Kaplan-Meier method with the log-rank test was used for survival analysis. Meanwhile, multivariate survival analysis was performed to assess predictors related to prognosis using the Cox proportional hazards regression model. All tests were two-tailed, and P values less than 0.05 were considered statistically significant.
Figure 1: The VCAN mRNA level was associated with stage, histological subtype, molecular subtype, and node metastasis of BLCA, based on the UALCAN database. N0: no regional lymph node metastasis; N1: metastases in 1 to 3 axillary lymph nodes; N2: metastases in 4 to 9 axillary lymph nodes; N3: metastases in 10 or more axillary lymph nodes. *P < 0.05.
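As an illustration of the survival methods just described, the following sketch produces Kaplan-Meier curves and a log-rank test with the lifelines Python package on toy data; this is not the authors' SPSS workflow.

```python
# Kaplan-Meier curves and log-rank test for low vs. high expression groups.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# toy data: overall survival in months, event = 1 if death observed
t_low, e_low = [60, 55, 48, 62, 70], [0, 1, 0, 0, 1]
t_high, e_high = [30, 42, 25, 50, 38], [1, 1, 1, 0, 1]

kmf = KaplanMeierFitter()
kmf.fit(t_low, event_observed=e_low, label="VCAN low")
ax = kmf.plot_survival_function()
kmf.fit(t_high, event_observed=e_high, label="VCAN high")
kmf.plot_survival_function(ax=ax)

res = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print(res.p_value)
```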
The Upregulation of VCAN mRNA Level Was Correlated with Progression of BLCA
In order to investigate the role of VCAN in human BLCA progression, we first performed bioinformatics analysis of published datasets to examine VCAN mRNA levels in BLCA using the UALCAN database. The results reveal that the VCAN mRNA level was significantly higher in patients with BLCA, and a high mRNA level of VCAN was associated with stage, histological subtype, molecular subtype, and node metastasis (Figure 1). Patients with a more advanced stage had a higher mRNA level of VCAN, and the mRNA level of VCAN was significantly higher in patients with non-papillary tumors than in patients with papillary tumors. Moreover, the mRNA level of VCAN was significantly higher in BLCA patients of the luminal-infiltrated molecular subtype than in other molecular subtypes, and node metastasis was also associated with the mRNA expression level of VCAN in BLCA.
Relationship between VCAN Expression and Clinicopathological Parameters of BLCA
Immunohistochemical staining showed that VCAN was mostly localized in the cytoplasm (Figure 2). The relationship between VCAN protein levels and clinicopathological parameters of BLCA was then investigated (Table 1). The results showed that the rate of high VCAN expression in BLCA was 70.3% (293/417), and that high expression of VCAN was correlated with the number of tumors, invasion depth, lymph node metastasis, distant metastasis, and histological grade. No association was found between VCAN expression and other parameters such as age, gender, tumor size, or lymphatic invasion.
Prognostic Significance of VCAN Expression in BLCA.
To define the clinical prognostic value of VCAN, we explored its prognostic significance with the GEPIA, OncoLnc, and UALCAN databases. Kaplan-Meier curves from the GEPIA and UALCAN databases showed that the survival rate of patients with a high mRNA level of VCAN was lower than that of patients with a low mRNA level (P = 0.03 and P = 0.072, respectively).
The Kaplan-Meier curve from the OncoLnc database showed that the difference was not significant (log-rank, P = 0.0528; Figure 3). In our study, Kaplan-Meier survival analysis showed that the mean survival time of BLCA patients with low VCAN expression was 53.79 ± 1.03 months, versus 46.99 ± 1.09 months for those with high VCAN expression. Meanwhile, the 5-year cumulative survival rate was 70.3% in BLCA patients with low VCAN expression and 53.5% in those with high expression. Clearly, BLCA patients with high VCAN expression had a poorer prognosis than those with low VCAN expression (Figure 3, log-rank test, χ² = 9.690, P = 0.002).
Mechanism of VCAN in BLCA Based on Bioinformatics Databases
Furthermore, we explored the potential mechanism by which VCAN is involved in cancer progression (Figure 4(a)). We constructed a protein-protein interaction (PPI) network from the STRING database between VCAN and related proteins and found that fibrillin 1 (FBN1) was correlated with VCAN. FBN1 is the primary component of microfibrils in the extracellular matrix and might be involved in cancer progression. We examined the correlation between VCAN and FBN1 with the UALCAN and GEPIA databases, and the results showed that VCAN correlated with FBN1 (R = 0.81 and R = 0.66, respectively; Figure 4(b)). The PPI network also revealed proteins related to cell adhesion and the ECM remodeling pathway (DCN, FN1, etc.; Figure 4(a)), suggesting that VCAN, an important ECM component, may be involved in tumor progression through the ECM remodeling pathway. Next, we extracted the mRNA levels of MMP1 and MMP2 from the OncoLnc and GEPIA databases and analyzed their correlation with VCAN. The results showed that VCAN was directly correlated with MMP1 and MMP2 (Figure 5).
In the next step, in order to explore how available chemicals or drugs could influence VCAN expression, we constructed a gene-drug interaction network based on the Comparative Toxicogenomics Database (CTD) (Figure 6). This network revealed that several drugs or chemicals could influence the expression of VCAN. For example, acetaminophen, cisplatin, and curcumin could increase the mRNA level of VCAN, whereas doxorubicin and dexamethasone resulted in decreased expression of VCAN mRNA.
Discussion
The stroma around solid tumors consists of specific extracellular matrix (ECM) components, which play important roles in the microenvironment of primary and secondary tumor sites [28]. Meanwhile, the tumor microenvironment not only responds to tumor epithelial cells and supports carcinogenesis but also actively contributes to tumor progression and metastasis [29]. Recent reports have identified a direct relationship between growth factor-mediated signaling and the modulation of ECM components [30]. VCAN is a member of the large aggregating chondroitin sulfate proteoglycan (CSPG) family; it is an important ECM component that has been implicated in tumor progression [6].
Our study reports clinical data on the prognostic power of VCAN expression in BLCA: based on the bioinformatics analysis, the VCAN mRNA level was significantly higher in patients with infiltrating BLCA than in those with superficial BLCA. The Kaplan-Meier curves showed that the survival rate of patients with a high VCAN mRNA level was lower than that of patients with a low VCAN mRNA level, although the difference was not always significant. We speculate that dysregulated VCAN expression may contribute to bladder cancer development and/or progression. Although malignant cells can synthesize VCAN, the mechanism and functional role of epithelial VCAN expression remain to be elucidated [31]; hence, more studies are required to clarify this issue. We then confirmed the role of VCAN in 417 cases of BLCA from a Chinese population with immunohistochemical staining. The results revealed that high VCAN expression correlated with the number of tumors, invasion depth, lymph node metastasis, distant metastasis, and histological grade. The overexpression of VCAN in cancer has also been reported to be associated with tumor progression [24]. These results suggest that VCAN is an important molecule in the progression of these malignant tumors.
Touab et al. [31] reported that cell-associated VCAN is involved in the progression of melanomas. However, epithelial VCAN expression is reported to be significantly higher in early-stage epithelial ovarian cancer [14], and tumor cell-associated VCAN is not significantly associated with clinicopathological factors in NSCLC [32]. This indicates that the mechanism and functional role of VCAN in tumors remain unclear and that VCAN seems to exert its function in a tissue-specific manner.
Previous studies have shown that increased VCAN levels are associated with poor prognosis in a wide range of malignant tumors [33,34]. However, one study reported the opposite effect: VCAN expression in epithelial cells correlated with a longer survival time [14]. In the present study, Kaplan-Meier analysis of 417 cases of BLCA showed that patients with high VCAN expression had a poorer prognosis than those with low VCAN expression, suggesting that VCAN upregulation may worsen the prognosis of BLCA patients. In prostate cancer, an increased concentration of stromal VCAN is an independent predictor of outcome for patients with moderately differentiated tumors [25]. Similarly, peritumoral VCAN was a strong predictor of relapse-free survival in breast cancer [26]. Increased expression of stromal VCAN has also been reported to correlate with poor prognosis in several types of cancer [32,35]. Thus, VCAN may be clinically applicable as a biomarker of malignancy and for monitoring prognosis in BLCA.
It is well known that the tumor environment is one of the major factors that determine the behavior of malignant cells. A decrease in the adhesive ability of tumor cells at invasive foci has been noted in a number of human cancers [36,37]. The three-dimensional structure of the ECM regulates cell migration, differentiation, and proliferation, which in turn govern biological development and tissue repair or, alternatively, cancer progression [38]. Remodeling of the ECM can occur through altered expression of molecules integrated in this functional network; for example, cell-to-cell and cell-to-matrix interactions are essential for local tumor cell invasion and metastasis. Modifications of ECM composition during tumor development may also be crucial for tumor initiation and development [38]. Gorter et al. revealed that VCAN expression in the stromal compartment of cervical cancers results in reduced numbers of intraepithelial CD8-positive T cells and enhanced local invasion [39]. Our present study revealed that high VCAN expression correlated with invasion depth, lymph node metastasis, and distant metastasis. We suspect that VCAN may play a role in cell-ECM adhesion interactions during cancer progression and may be used as a prognostic marker and therapeutic target for the treatment of the disease. We also found that VCAN, as an important ECM component, may be involved in tumor progression through the ECM remodeling pathway. Furthermore, we constructed a gene-drug interaction network, which suggested that drugs or chemicals such as doxorubicin and dexamethasone can decrease the VCAN mRNA level, whereas cisplatin and curcumin can increase it; treating bladder cancer with the latter drugs may therefore worsen prognosis by increasing the VCAN level. Many available chemicals thus appear capable of correcting the abnormal expression of VCAN, which could provide additional strategies for BLCA therapy.
In conclusion, the present study provides novel evidence regarding VCAN expression in BLCA and its involvement in carcinogenesis and progression. The results highlight the independent contribution of VCAN overexpression to poorer outcomes for patients with BLCA, indicating that VCAN expression may contribute to the aggressive biological behavior of BLCA and could serve as a reliable marker for clinical prognosis.
Data Availability
The data used to support the findings of this study are included within the article. | 4,930.8 | 2021-02-02T00:00:00.000 | [
"Medicine",
"Biology"
] |
The innovative EHV line and its main indicators
Single-circuit EHV lines, widely used all over the world, have a significant drawback: in the event of the most probable fault type, a sustained single-phase fault, the line is completely taken out of operation. In this paper, a single-circuit EHV line is considered in which one phase is made in the form of two parallel semi-phases, either of which can be used as a backup phase in emergency modes. To symmetrize the conditions, Series Capacitors (SC) are included in the middle of the line in the two ordinary phases. The article presents an algorithm for calculating normal modes and the main indicators of an innovative 500 kV line: its increased capacity, reliability, and economic efficiency. The main provisions are illustrated by the example of a 500 kV line with a length of 500 km.
Introduction
Single-circuit EHV lines are widely used all over the world [1][2][3][4][5][6], but they have the disadvantage that under the most probable fault, a sustained single-phase fault, the line fails completely. This fact gives superiority to DC lines, which can operate with sustained damage at one pole while transmitting 50% of the maximum power [7]. There are a number of ways to increase the capacity of single-circuit AC lines (the use of compact lines of increased natural power [8], the use of SC [9]), but they only worsen reliability.
The reliability of a single-circuit line can be improved by using a reserve phase [10], which is switched in to replace a faulted operating phase (Fig. 1). The reserve phase is used only in short-term emergency modes and remains disconnected the rest of the time, which is a significant disadvantage. The purpose of this work is to develop a line that provides increased capacity with high operational reliability while exhibiting favorable economic indicators.
General characteristics of the innovative line
In the innovative line [11], one phase is made in the form of two parallel operating semi-phases, one of which is used as a reserve phase in emergency modes.
Figure 2a shows the layout of the phases and semi-phases of the innovative line on the support, and Figure 2b shows its scheme. Structurally, the total cross-section of the semi-phases is equal or close to the cross-section of an individual phase. Fig. 3 shows the possible designs of the phases and semi-phases of the 500 kV line, for which the subsequent calculations are carried out. The proposed phase and semi-phase designs differ in geometric dimensions and in the number of wires per phase: a traditional design (option A) and a compact design (option B).
Mathematical model of an innovative line for calculating normal conditions
The innovative line has phase-by-phase longitudinal and transverse asymmetry. To symmetrize normal conditions, it is necessary to install SC in the middle part of the ordinary phases and to connect shunt reactors at the ends of one of the semi-phases, as shown in Fig. 4. Since the innovative overhead line is characterized by phase asymmetry, the universal method for calculating steady-state conditions is the matrix method, in which the line and other installations are described in phase coordinates.
The complete phase matrix of the circuit is formed according to the scheme of Fig. 4 as the product of the matrices of its elements (formula (1)): the complete matrix of the untransposed half-section of the line, built with the zero and unit matrices of the 4th order, and the matrices of the compensating devices. The other main element of the four-wire circuit is the shunt reactors (ShR) located at the ends of the line; in general, unregulated ShR (UShR) and variable ShR (VShR) can be used. (The ShRs at the ends of the line are not shown in Fig. 4, since the transmission of natural power is considered, with the reactors switched off.) In particular, the complete VShR matrix for the line under consideration is diagonal, with entries determined by the reactance X_i of the i-th phase, i = a, b, c1, c2.
The vector columns of the specified voltages at the ends of the line are formed from the rated line voltage, the angular shift between the voltages at the ends of the line, and the phase operator h = e^(j2π/3).
Next, we find the vector column of currents at the end of the line. Given that the current in phase "c" consists of the currents of the semi-phases "c1" and "c2", we find the total current of phase "c" at the end of the line and, accordingly, the column vector of the three-phase currents. To assess the level of asymmetry that occurs in the circuit, we define the vector column of symmetrical components of the currents at the beginning of the line, using the transformation matrix from phase components to symmetrical components; this yields the currents I1, I2, and I0 of the positive, negative, and zero sequences, respectively.
The asymmetry coefficient in the negative-sequence current is then determined as k2 = |I2|/|I1|. An important characteristic of the circuit that determines its capacity is the angular characteristic, i.e., the dependence of the active power transmitted along the line on the angular shift between the voltages at its ends. The total power at the end of the line is defined as the scalar product of the corresponding column vectors of voltage and current.
Accordingly, the angular characteristic of the scheme is found as P(δ); its maximum at δ = 90° determines the maximum transmitted power of the circuit. The capacity of the circuit is then reduced by the static-stability margin coefficient (10), P = P_max/k_m, where k_m = 1.2 is the margin coefficient for static stability.
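To make the chain of definitions above concrete, the following illustrative sketch (Python; the current values are placeholders, not the paper's calculated data) evaluates the transformation to symmetrical components, the negative-sequence asymmetry coefficient, and the capacity obtained from the maximum of the angular characteristic with the static-stability margin:

```python
import numpy as np

h = np.exp(2j * np.pi / 3)            # phase operator
T = np.array([[1, 1, 1],
              [1, h, h**2],
              [1, h**2, h]]) / 3      # phase -> (zero, positive, negative)

I_abc = np.array([1.00, 0.97 * h**2, 1.02 * h])   # placeholder currents, p.u.
I0, I1, I2 = T @ I_abc
print("negative-sequence asymmetry: %.1f %%" % (100 * abs(I2) / abs(I1)))

P_max = 1800.0     # MW, maximum of the angular characteristic at 90 degrees
k_m = 1.2          # static-stability margin coefficient
print("capacity per (10): %.0f MW" % (P_max / k_m))
```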
Reduction of asymmetry in normal conditions
Based on the proposed algorithm, we analyze the normal conditions of the 500 kV innovative line with a length of 500 km, with attention to the level of asymmetry that occurs in it.
In the absence of a two-phase SC in the middle of the line and a ShR at the ends of one of the semi-phases, the asymmetry coefficient reaches 14%, which significantly exceeds the permissible value for synchronous generators of no more than 6%. The decrease in the negative-sequence asymmetry coefficient occurs not only due to the SC but also as a result of the use of the ShR.
As follows from the calculations, when UShRs with a susceptance of 0.65 mS are used, the optimal reactance of the SC is 66 Ohm for the traditional design (option A) and 27 Ohm for the compact design (option B); the corresponding asymmetry coefficients are 0.3% for option A and 1.0% for option B (Fig. 5). Thus, the combined use of a two-phase SC and a single-phase ShR provides relatively small asymmetry coefficients.
Doubled capacity
The main indicator of the innovative line is, first of all, its doubled capacity compared to the traditional single-circuit line. Table 1 compares the capacities of the traditional and innovative lines calculated according to (10). The innovative line provides an almost two-fold increase in capacity compared to a traditional single-circuit line.
Increased reliability
In EHV lines, the overwhelming majority of failures are single-phase. In the case of transient faults, Single-Phase Automatic Reclosing is used in the innovative line, as in the traditional line.
A noticeable proportion of faults are sustained. In the event of sustained faults, the proposed scheme allows switching to a post-emergency mode with the possibility of transmitting at least 50% of the power of the initial maximum conditions. If one of the semi-phases is permanently damaged, it is switched off by switches 2 (Fig. 6a), and the line switches to operation in the post-emergency mode; to ensure an acceptable level of asymmetry, the SC must be shunted by switches 1. In the case of sustained damage to one of the phases, for example phase "b", it is switched off by switches 3, and the corresponding semi-phase, disconnected by switches 2 from the other semi-phase, is switched on by switches 4 instead. To ensure an acceptable level of asymmetry, the SC in phase "a" must then be shunted by switch 1, as shown in Fig. 6b.
Increased economic efficiency
The optimal condition of a traditional overhead line is the transmission of natural power, which for a 500 kV overhead line is approximately 900 MW and, for two circuits, 1800 MW. From the condition of heating losses, the total cross-section of phases and semi-phases in the single-circuit version is assumed equal to the total cross-section of the phases of the double-circuit line.
Reliable technical and economic information is available at the 2000 price level [12]; therefore, the comparative analysis is made in the prices of that period, which is quite acceptable for comparative estimates.
The cost of a single-circuit three-phase line on steel supports in the HV and EHV range is quite accurately extrapolated by a dependence (11) whose coefficients a, b are determined from the costs of the 220 and 330 kV lines.
Figure 7 shows the free-standing supports of the traditional single-circuit line [13] and of the innovative single-circuit line; an additional wire is added to the innovative line, and its cost increases by the cost of suspending this wire. According to [14], the cost of wires and insulation is approximately 40% of the cost of the line. The capital investment in the innovative line, taking into account the suspension of the additional wire, is then given by (12). The unit cost of the line depends significantly on the cross-section and can be estimated using a linear dependence of the cost on the total cross-section F of all phases:

C = 2.72 + 0.45·10⁻³ F, million rubles/km. (13)

As a result, the unit cost of an innovative line with free-standing supports and a total cross-section of 3×6×330 mm² amounts to 5.4 million rubles/km.
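A small numerical check of the unit-cost estimate, assuming the reading of (13) given above (the coefficients reproduce the quoted 5.4 million rubles/km):

```python
def unit_cost(F_mm2):
    # C = 2.72 + 0.45e-3 * F, million rubles/km, 2000 price level (Eq. (13))
    return 2.72 + 0.45e-3 * F_mm2

F = 3 * 6 * 330            # mm^2, total cross-section of all phases
print(unit_cost(F))        # ~5.4 million rubles/km
```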
The unit cost of two circuits on separate free-standing supports is, according to [12], 8.0 million rubles/km.
For the above data, Table 2 presents a technical and economic comparison of the two-circuit traditional line with the single-circuit innovative line, which shows a noticeable economic advantage of the innovative option.
Conclusions
In this article, a new type of single-circuit EHV line is justified, the capacity of which is twice that of a traditional line. In addition, the proposed line has increased reliability, allowing, in the case of sustained faults, a switch to a post-emergency mode with the possibility of transmitting at least 50% of the power of the initial maximum conditions. A technical and economic comparison of the two-circuit traditional and single-circuit innovative lines with a voltage of 500 kV, a length of 500 km, and a capacity of 1800 MW showed that the capital costs of the two-circuit version are 40% higher than those of the innovative version.
Fig. 1. A line with a reserve phase.
Fig. 2. A line with parallel operating semi-phases: a - the location of the phases on the support; b - the scheme of the innovative line.
Fig. 3. Designs of phases and semi-phases of the innovative line: a - traditional design (option A); b - compact design (option B).
Fig. 5. Dependence of the asymmetry coefficient in the negative sequence on the reactance of the SC: a - traditional design; b - compact design.
Fig. 6. Schemes of the post-emergency mode with sustained damage: a - damage to semi-phase «c1»; b - damage to phase «b».
Table 1. Comparison of capacities of traditional and innovative 500 kV lines (500 km).
Table 2. Technical and economic indicators of two-circuit traditional and single-circuit innovative lines | 2,717.4 | 2023-01-01T00:00:00.000 | [
"Physics"
] |
Fiber Optic Vibration Sensors
The sensors presented in this chapter are fiber optic intensity-modulated vibration sensors that are non-contact (extrinsic sensors) with respect to the vibrating object. The three sensors presented use a non-contact vibration measurement method with plastic optical fiber, with distinct designs; the improvement of the sensor response and the advantages of one sensor over the others for diverse applications are described. First, the dual plastic optical fiber vibration sensor design and its response are discussed. Second, the 1x2 fused coupler plastic optical fiber vibration sensor design is discussed, along with its advantages over the first. Finally, the 2x2 fused coupler plastic optical fiber vibration sensor design is discussed, along with its advantages over the other two methods. At the end, the final results are reported with a comparison.
Introduction
It has been over five decades since the idea first emerged that optical fibers could be used for sensing and measuring various physical parameters. Around 1960 the first photonic sensor patent was filed, based on a bifurcated bundle of fibers, with half of the bundle used as transmitting fibers to illuminate a reflecting surface and the other half used as receivers to collect the reflected light. The distance between the fiber bundle tip and the reflector is precisely estimated through a suitable calibration process. In non-contact vibration sensing, photonic sensors, i.e., fiber optics, continue to offer unmatched results [1]. Fiber optic sensors (FOS) provide many advantages over conventional sensors [2,3], some of which are listed in Table 1.
In general, fiber optic sensors are classified into two groups: intrinsic and extrinsic sensors. In intrinsic FOS, the sensing takes place within the optical fiber itself: the effect of an environmental parameter on the fiber is converted into a modulation of the light passing through it, in phase, intensity, or polarization. In contrast, in extrinsic FOS, the optical fiber is used strictly to carry the information, and a "black box" embeds the information onto the light propagating through the fiber to a remote receiver. This black box usually contains optical elements such as gas or liquid cells, a mechanical arm, or other mechanisms that modulate or transform the light beam. FOS can also be classified by their working principle into wavelength-coded, interferometric, and intensity-modulated sensors. Intensity-modulated FOS work by modulation of the light intensity in response to the external perturbation. Phase-modulated FOS are passive in nature, with optical elements that use the phase change of the light field caused by external perturbations; these are also called interferometric sensors. The disadvantages of optical fiber vibration sensors are their narrow frequency range of measurement and their unfamiliarity to the end user; thus, fiber optic vibration sensors require further research and development [4,5].
Interferometric based vibration sensors
There exist a few types of fiber optic interferometric vibration sensors, such as Fabry-Perot, Mach-Zehnder, Michelson, and Sagnac [31], which interrogate the phase shift caused by vibration. In these sensors, the all-fiber interferometer usually uses a single-mode optical fiber (SMF) rather than a multimode optical fiber (MMF), because the transfer function of SMF interferometers nearly reflects that of conventional interferometers, whereas the transfer function of an MMF interferometer is not well defined owing to the large number of modes of the light in the fiber. Phase variation in the interferometer can be produced either by an extrinsic or an intrinsic effect. This phase is encoded by the transfer function of the interferometer into a modulation of the light intensity at the photo-detector in a nonlinear way, via the usual interference cosine function. For most practical interferometer applications, small sensor heads with a fiber optic Fabry-Perot (FP) interferometer and a short optical cavity are especially attractive: they are simple in design, compact, and cheap, have lower cross-sensitivity to ambient temperature, and offer both high resolution and down-lead insensitivity without the polarization fading usually faced by all-fiber interferometers [6]. A system using an alternative EFPI arrangement has been reported for vibration sensing; its sensor head is shown in Figure 1. The sensor head uses a simple reflective configuration with an extrinsic FP cavity, into and out of which an adjacent dual step-index MMF couples the light.

Table 1. Comparison between conventional and fiber optic sensors.

The light is provided by a low-power laser diode as the source. A movable reflecting surface is used as the transducer, and a gradient-index cylindrical (GRIN) lens of suitable pitch is used for efficient light guiding between the input and output optical fibers. A partially reflecting coating (R1) on the output face of the GRIN lens acts as the interference reference. A highly reflective surface (R2) moves in sympathy with the vibrations of the target object and provides the interference signal. The FP cavity has a length d in air, as shown in Figure 1. These interferometric methods offer better performance but show low stability, are expensive, require critical alignment and mechanics, need complex analysis (fringe counting), and are not well suited for sensing vibration at multiple test points. These sensors also require an electronically driven element to change the interferometer conditions. As a consequence, they have limited practical use, and most recent optical fiber sensors employ intensity modulation only [7,8].
Intensity modulated vibration sensors
Intensity-modulated fiber optic sensor techniques have been studied and implemented for the last three decades. A wide range of fiber optic configurations has been reported, such as fiber optic microbending, reflected light coupling into an optical fiber, direct fiber-to-fiber coupling, fiber Bragg gratings, and modified-cladding optical fibers. All these sensors fall into two fundamental classes: physically in contact with the vibrating target or non-contact. Usually, non-contact configurations use a reflected signal for detecting the displacement or vibration, while the other configurations, e.g., microbending, use a transmissive configuration. As a general rule, in the intensity-modulated configuration the intensity of light from the source is modulated by the optical fiber, guided through the fiber to the photo-detector, translated into an equivalent electronic signal, and adequately processed. In most cases a referencing mechanism is required in order to eliminate extraneous noise and maintain a stable sensor calibration. Without a referencing signal, fluctuations of the light source power and noise induced by connectors, couplers, or other optical components in the sensing system can become significant relative errors. In this section, some of the intensity-modulated sensors are discussed; over the past few decades, many fiber optic sensors (FOS) based on intensity modulation techniques have been demonstrated [4,5].
Microbending vibration sensor
The microbend optical fiber sensor is one of the earliest reported sensors working on the intensity modulation principle. The sensing principle is the variation of the transmitted light power as a function of an applied physical variable such as pressure or stress [38]. Generally, in this configuration, the amplitude of the light intensity is reduced by the loss caused by strain-induced micro-curvatures. The structure of the microbend fiber optic vibration sensor is shown in Figure 2. The sensing element (optical fiber) is sandwiched between a pair of deformer plates with a saw-tooth microstructure, capable of bending the fiber in a regular geometric pattern with a periodicity of Λ. An appropriate physical change (ΔE), owing to an applied force (ΔF) on the bent fiber, causes the bending amplitude of the fiber (X) to vary by a quantity (ΔX). The transmission coefficient of the light propagating through the bent optical fiber (T_p) then changes by a quantity (ΔT_p), so that ΔT_p = D ΔX, where D is a constant that depends on the physical change ΔE.
The deformation triggers a coupling of the light power from the core-guided modes to higher-order radiation (cladding) modes, which are easily perturbed by the surrounding medium. Both MMF and SMF have been used for the development of these sensors. In SMF microbend sensors, the maximum sensitivity is observed when the spatial bend frequency equals the difference between the propagation constants of the fundamental mode and a discrete cladding mode [5,9]. The microbending sensor has to be placed between the deformer plates to detect the applied pressure. Denis Donlagic and Miha Zavrsnik reported a novel single-mode lead and multimode fiber (SMS) structure, based on microbending of the multimode section of the fiber, shown in Figure 3; it exhibits higher sensitivity than classical microbend sensors [10].
Regarding fiber strength, it must be remembered that the deformer plates clamp the optical fiber, so a large stress can be produced in the fiber; if the deformer plates are brought very close together, the fiber may break. An empirical design guideline is to keep the ratio of the maximum applied stress to the fiber break stress below one to four. Since the microbend saw teeth push into the buffer coating of the fiber, it is important to know how the buffer coating material and the fiber interact under the various test conditions. The principal disadvantage of microbending sensors is their low accuracy.
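A minimal sketch of the two design relations just described, the linear transfer relation ΔT_p = D ΔX and the one-to-four stress-ratio guideline; D and the stress values are placeholders, not measured constants:

```python
def delta_Tp(D, delta_X):
    # linear microbend transfer relation: change in transmission coefficient
    return D * delta_X

def stress_within_guideline(applied, breaking):
    # empirical rule: keep applied/breaking stress below 1/4
    return applied / breaking < 0.25

print(delta_Tp(D=0.8, delta_X=0.01))          # placeholder values
print(stress_within_guideline(100e6, 600e6))  # Pa, placeholder stresses
```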
Non-contact vibration sensors
Most non-contact dynamic displacement sensors can be used for vibration sensing. Here, a reflective mechanism is used to detect the vibrations: one optical fiber transmits the light from the source and another fiber collects it. The influence of the reflectivity of the target surface can be minimized with data treatment methods. Binu et al. presented a simple, rugged, and inexpensive non-contact intensity-modulated fiber optic sensor with a configuration of two PMMA optical fibers cemented together [11]. The same design was proposed by Yasin et al. [12]. An important benefit of this design is the low fabrication cost of the device. Although intensity-modulated fiber optic sensors are cheap and easy to fabricate, a significant measurement error can arise from light source power variations. Losses due to the physical design and reflective planes outside the measuring structure often affect the accuracy of the final measurement. Fortunately, source light intensity fluctuations can easily be eliminated with a referencing port.
Recently, Perrone et al. reported a low-cost, high-resolution sensor using plastic optical fibers (POF), based on the reflected intensity modulation of a dual POF. It can measure vibrations of up to several kHz using an intensity modulation technique, with simple data processing that compensates for the reflectivity of the vibrating surface. The received optical signal is incident on the photo-detector and processed for conversion. However, this process is not user friendly and requires a critical analysis process, much like an interferometric sensor. These intensity-modulated fiber optic sensors are nevertheless very cheap, easy to build, and versatile in structure [13].
Further, an intensity-modulated displacement sensor was reported that works by guiding light through an optical fiber onto a reflecting surface. Lewis et al. demonstrated a configuration in which the reflected light is collected by the same incident optical fiber [14]. The transducer itself can be a simple reflecting surface attached to the surface of a vibrating object. This fiber optic vibration sensor is a low-cost and reliable alternative for non-contact vibration detection with high-resolution frequency analysis. However, the small dimensions of the multimode fiber limit the practical application of the sensor: because the sensor has to be positioned perpendicular to the vibrating body, it is difficult to align and to keep its position constant at these dimensions.
This chapter concentrates on the design and development of plastic optical fiber vibration sensors over the last few decades.
Dual optical fibers
The sensing head consists of two fibers made of PMMA (polymethyl methacrylate) [15,16], where one fiber acts as the transmitting fiber (TF) and the other as the receiving fiber (RF); they are bundled together parallel to each other [17]. The schematic setup is shown in Figure 4(a); the displacement response of the sensor, along with the overlapping mechanism between the TF and RF cones, is shown in Figure 4(b). The sensor exhibits two linear regions, namely the front slope and the back slope. The detector output is minimal at zero distance (Z = 0) between the reflecting target and the sensor probe, because the reflected light cone of the TF does not reach the receiving cone of the RF. As the distance from the sensor probe increases (Z < Zmax), the cone of the transmitted light on the reflecting surface also grows, causing an overlap with the RF cone and a small but non-zero output voltage. A further increase in distance leads to a larger overlap and a rise in the voltage. The response reaches a maximum where complete overlap of the RF cone with the reflected TF cone occurs (Z = Zmax), and then the output starts decreasing even though the distance increases (Z > Zmax), because the reflected light cone becomes very large and the power density decreases while the overlap area remains constant [18,19].
The front slope exhibits high linearity in the range of about 350-800 μm with a sensitivity of 4.786 mV/μm. A dark region of 350 μm is observed in the characteristic curve, where the intensity of light is not linear with the displacement for small distances, due to the cladding of the optical fiber. On the other hand, the back slope shows high linearity in the range between 1600 μm and 2600 μm with a sensitivity of 1.696 mV/μm. Therefore, the front slope exhibits relatively high sensitivity but over a smaller measurable range compared to the back slope, and it is better suited for the measurement of vibration amplitudes at the micro level [20,21].
Theory
According to the light intensity distribution function, the irradiance of the light emitted from the transmitting fiber can be expressed as [20,21]

I(r, z) = (2P / π ω(z)²) exp(−2r² / ω(z)²),

where r and z represent the radial and longitudinal coordinates, respectively, and ω(z) is the beam radius, which can be expressed as a function of z as

ω(z) = ω_o √(1 + (z / Z_R)²), with Z_R = π ω_o² / λ,

where Z_R is the Rayleigh range and ω_o is the beam waist radius. The reflected light power received by the RF from the target is obtained by integrating the irradiance over the core area S_r:

P(z) = ∫_{S_r} I(r, z) ds_r.

The reflected light power collected by the RF is a function of the displacement between the probe and the target (reflecting surface); it depends on the power P_E of the light from the TF incident on the reflector, the core radii R_t and R_r of the TF and RF, respectively, and the distance R_d between the centers of the RF and TF cores [6,7]. A simple photo-detection circuit is used to convert the light intensity into an equivalent output voltage. Generally, the output voltage with respect to the intensity of light incident on the photo-detector is given by [8,9]

V_out = R_λ R_E P(z),
where R_λ = ηgλ/1.24 is the photo-detector responsivity, R_E is the feedback resistance, and η, λ, and g are the quantum efficiency, the wavelength of the incident light, and the photoconductive gain, respectively. For a given photo-detector the values of η (<1) and g are constant, so the responsivity (sensitivity) depends on the wavelength of the light source; thus the light source and photo-detector should be matched.
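The displacement response implied by this model can be sketched numerically. The following illustration integrates the Gaussian irradiance over the RF core, offset by R_d from the TF axis, with the reflection doubling the propagation distance; all geometric parameters are placeholder values for a typical ~1 mm POF pair, not the calibrated constants of the sensor:

```python
import numpy as np
from scipy.integrate import dblquad

wavelength = 650e-9   # m, red LED (placeholder)
w0 = 0.49e-3          # m, beam waist taken ~ core radius (placeholder)
zR = np.pi * w0**2 / wavelength   # Rayleigh range

def w(z):
    # beam radius at distance z
    return w0 * np.sqrt(1 + (z / zR)**2)

def irradiance(r, z, P=1.0):
    # Gaussian irradiance of the TF beam
    return 2 * P / (np.pi * w(z)**2) * np.exp(-2 * r**2 / w(z)**2)

Rr = 0.49e-3          # m, RF core radius (placeholder)
Rd = 1.0e-3           # m, TF-RF center separation (placeholder)

def received_power(z):
    # integrate over the RF core, whose center is offset by Rd from the
    # TF axis; the reflected path doubles the propagation distance (2z)
    f = lambda r, phi: r * irradiance(
        np.sqrt(Rd**2 + r**2 - 2 * Rd * r * np.cos(phi)), 2 * z)
    val, _ = dblquad(f, 0, 2 * np.pi, 0, Rr)
    return val

for z_um in (400, 800, 1600, 2600):
    print(z_um, "um ->", received_power(z_um * 1e-6))
```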
Experimental setup
The schematic of the experimental setup of the fiber optic vibration sensor is shown in Figure 5. The sensing head consists of two PMMA fibers of constant diameter bundled together in parallel. A commercial speaker or PZT can be used as the vibrator to test the response of the FOS.
A thin plastic reflector of thickness 100 μm is glued at the center of the speaker to act as the reflecting surface. An LED matched to the optical transmission window of the PMMA fiber is used as the light source. The LED is housed in a special package that holds it firmly and provides maximum coupling of light into the fiber.

The design of a dual plastic optical fiber (POF) vibration sensor using different fiber pair combinations has been reported, along with the necessary theory and experimental results. From the displacement responses of all the combinations, it is evident that the sensitivity of the sensor increases as the diameter of the fiber decreases, and vice versa. The vibration response for all the combinations reveals that when the fiber diameter of either the TF or the RF decreases, the frequency range increases and the resolution improves. Further, the dynamic range and the frequency range can be optimized by choosing suitable fiber diameters, and the dark region of the sensor can be minimized by choosing fiber diameters as small as possible. The combination with the smallest fiber diameters shows the best response, exhibiting the highest frequency response with high resolution compared with the others. However, the dark region remains one of the major drawbacks of this sensor configuration [20,21].
Fiber optic fused 1x2 coupler
The sensor system consists of a 3 dB fiber optic 1x2 coupler made of PMMA fiber with three ports: the first port is used as the sensing probe, the second port couples light from the light source, and the third port directs the reflected light onto the photo-detector [22][23][24]. The principle of the vibration measurement is intensity modulation with respect to the displacement between the reflecting surface glued on the vibrating target and the sensing fiber port [25][26][27][28]. The schematic diagram of the sensing principle is illustrated in Figure 6. Light from the sensing fiber is incident on the reflecting surface (glued on the front surface of a micro translation stage) kept at a distance x from the tip of the sensing fiber (port1), and the reflected light is coupled back into the same fiber.
Theory
If P_a, P_b, P_c, and P_d represent the power of light coupled into port2, the light incident on the reflector through port1, the light reflected from the reflector and coupled back into port1, and the light power received by the photo-detector via port3, respectively, then the light transmitted from the source through the fiber to the sensing port1 can be written as [29,30]

P_b = (1 − cr)(10^(−0.1L) − 10^(−0.1D)) P_a, (8)

where cr, L, and D are the coupling ratio, excess loss, and directivity of the optical fiber coupler, respectively.

If the reflector is kept parallel to the sensing fiber cross-section, the power of light coupled back into and received by the sensing fiber probe can be expressed as

P_c = P_i (1 − exp(−2a² / W(x)²)), (9)

where P_i = kP_b is the light power coupled to the sensing fiber at x = 0, a is the core radius of the fiber, W(x) = 2x tan(θ) + a, k = 1.15, and θ = sin⁻¹(NA) is the divergence angle of the optical fiber [30].

Substituting Eq. (8) into (9), we have

P_c = k (1 − cr)(10^(−0.1L) − 10^(−0.1D)) P_a (1 − exp(−2a² / W(x)²)). (10)

Finally, the light power detected by the photo-detector from the sensing port through port3 can be written as

P_d = cr (10^(−0.1L) − 10^(−0.1D)) P_c. (11)

Combining these relations, Eqs. (12)-(14), yields the correlation function (14) of the displacement sensor with the multimode fiber coupler. It states that the power received by the photo-detector is directly proportional to the square of the diameter of the fiber and inversely proportional to the square of the distance between the sensor head and the reflecting surface.
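A numerical sketch of the chain (8)-(11) as reconstructed above; the coupler parameters cr, L, D and the fiber geometry are placeholder values, not the measured characteristics of the device:

```python
import numpy as np

a = 0.49e-3                # m, core radius (placeholder)
theta = np.arcsin(0.5)     # divergence angle for NA = 0.5 (typical POF)
k = 1.15
cr, L, D = 0.5, 3.0, 20.0  # placeholder: 3 dB split; excess loss, directivity (dB)

def W(x):
    return 2 * x * np.tan(theta) + a

def Pd(x, Pa=1.0):
    # Eqs. (8)-(11): source -> port1 -> reflector -> port1 -> port3
    coupler = 10**(-0.1 * L) - 10**(-0.1 * D)
    Pc = k * (1 - cr) * coupler * Pa * (1 - np.exp(-2 * a**2 / W(x)**2))
    return cr * coupler * Pc

for x_um in (0, 250, 500, 1000):
    print(x_um, "um ->", Pd(x_um * 1e-6))
```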
Experiment
This simple sensor configuration eliminates the dark region and exhibits only a single slope, which enables an easier setup than other configurations [31][32][33]. Figure 7 illustrates the schematic experimental setup of the fiber optic fused 1x2 coupler as a vibration sensor. It consists of an LED source of suitable wavelength, driven by a simple circuit with a regulated power supply. A 3 dB plastic optical fiber 1x2 coupler is used to configure the sensor to detect the vibration. A photo-detector with a detection circuit converts the light intensity into an equivalent electrical signal. A synthesized function generator and a commercial speaker, with a calibrated reflector attached at its center, are used to test the sensor response to vibration. A digital storage oscilloscope (DSO) is used to record and monitor the vibration of the speaker at different frequencies and amplitudes. The whole experimental setup is installed on a vibration-free table to eliminate ground vibrations [31][32][33].
The sensor was calibrated for the measurement of vibration amplitudes. A weightless plastic reflector is glued on the surface of a rectangular block fixed to a micro translation stage positioned perpendicular to the sensing head. A digital multimeter measures the photo-detector output in terms of voltage with respect to the displacement between the reflector and the sensing head (port1). Figure 8 shows the experimental and theoretical displacement characteristic curves using Eq. (14); the linear region lies in the range 0-1000 μm. The weightless reflector is then glued onto the speaker diaphragm and placed perpendicular to the sensor probe (port1). The distance between the speaker and the sensor head is fixed within the linear region of the displacement curve shown in Figure 8. The light from the LED is coupled to port2 of the coupler and directed to port1. The light incident on the reflector through port1 is reflected back, modulated in response to the vibration, and received by the same fiber (port1). The light power received by the photo-detector is then converted into its equivalent voltage signal by a simple receiving circuit and recorded by the DSO. The FFT technique is used to convert the time domain signal into a frequency domain signal in order to analyze the vibration in terms of frequency and amplitude. The experiment is repeated for different frequencies and amplitudes of vibration to determine the maximum frequency and the amplitude resolution measurable by the designed sensor, and also to test its reliability.
The experiment is set up on a vibration-free table. The speaker is driven by a sine wave (CH1) from the signal generator, and the response of the sensor (CH2) is recorded using the DSO for different frequencies. The FFT of these signals gives the frequencies of the applied signal and of the sensor output; there is a perfect agreement between the applied signal and the response of the sensor. The displacement amplitude d_p can be computed from the peak-to-peak voltage of the output signal and the slope of the calibration curve. For a given frequency f_p, the peak velocity v_p and peak acceleration a_p of the vibrating body can be computed by [31][32][33]

v_p = 2π f_p d_p, a_p = (2π f_p)² d_p.
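A minimal sketch of this post-processing step, with placeholder values for the calibration slope and the FFT peak reading:

```python
import numpy as np

Vpp = 0.42          # V, peak-to-peak sensor output (placeholder)
slope = 4.786e-3    # V/um, calibration-curve slope (placeholder)
f_p = 1000.0        # Hz, frequency read from the FFT peak

d_p = (Vpp / 2) / slope                    # um, displacement amplitude
v_p = 2 * np.pi * f_p * d_p * 1e-6         # m/s, peak velocity
a_p = (2 * np.pi * f_p)**2 * d_p * 1e-6    # m/s^2, peak acceleration
print(d_p, v_p, a_p)
```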
Possible sources of measurement error are light source fluctuations, stray light, and dust formation on the mirrors. To reduce fluctuations in the light source, a standard regulated power supply can be used. A hollow cylindrical protection tool is arranged around the reflector to shield it from stray light interfering with the source light and to reduce dust formation on the mirror, as shown in Figure 9. The sensor is positioned very close to the vibrating target, within the linear sensing region, and does not require any special optics, enabling its use for sensing applications in embedded situations [31][32][33].
Fiber optic fused 2x2 coupler
This second part discusses the design of the fiber optic 2x2 fused coupler as a vibration sensor, studies the displacement and vibration responses of the sensor, and implements the rational output method to improve the response over that of the 1x2 coupler [34,35]. Figure 10 illustrates the schematic of the proposed plastic multimode fiber optic 2x2 fused coupler, made of polymethyl methacrylate with a split ratio of 80:20, as a vibration sensor. The sensor consists of an LED used as the light source and two high-sensitivity photo-Darlington detectors (PD) housed in connectorless packages, used to detect the light intensity at the reference and sensing ends. A simple photo-detection circuit converts the modulated light intensity into an equivalent output voltage signal, and a DAQ records the time domain signals (TDS) corresponding to the reference and sensing arms, from which the rational output (RO) is calculated.
Displacement response of the sensor
The fiber optic fused 2x2 coupler has core/cladding diameters of 980/1000 μm, a split ratio of 80:20, and four ports, all of which are used for vibration detection. The light from the LED coupled into port2 is split in the ratio 80:20: one part of the light (80%) is transmitted through port1, which acts as the sensing probe, and the other part (20%) is directed towards PD1 through port4 and used as the reference signal [36,37]. The light from port1 is projected onto a weightless plastic reflector with a reflectivity of 40%, attached to the center of the speaker diaphragm (the vibrating object). The reflected light, modulated by the vibration, is recoupled into the same fiber (port1) and directed onto PD2 through port3. To avoid the effects of power fluctuations in the light source and of bending losses in the optical fiber, the rational output (RO) of the PD1 and PD2 signals is used [38][39][40]. A stepper-motorized actuator moves the reflector attached to the micrometer stage towards and away from the sensing probe with a step size of 1 μm over a dynamic range of 4 mm. The sensor displacement characteristic curve, presented in Figure 11, follows the inverse square law given by Eq. (14), and its linear region is used for the vibration measurement. The response of PD2 with respect to the displacement of the reflector from the sensing probe (Figure 11) has a linear region of about 1 mm with a sensitivity of 2.1 mV/μm, whereas the RO response curve has a sensitivity of 0.36 a.u./mm. The linear region of both responses can be used for vibration measurement [34,35].
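The exact expression for the RO is elided in the text; the sketch below assumes the natural choice, a sample-by-sample ratio of the sensing (PD2) and reference (PD1) records, and illustrates with synthetic data how a slow source fluctuation cancels in the ratio:

```python
import numpy as np

t = np.linspace(0, 0.1, 5000)
drift = 1.0 + 0.05 * np.sin(2 * np.pi * 3 * t)   # slow LED power drift
pd1 = 0.2 * drift                                # 20% reference arm
pd2 = 0.8 * drift * (1 + 0.1 * np.sin(2 * np.pi * 200 * t))  # sensing arm

ro = pd2 / pd1   # the common drift cancels; only the vibration remains
print("relative fluctuation PD2:", pd2.std() / pd2.mean())
print("relative fluctuation RO :", ro.std() / ro.mean())
```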
Elimination of source and bending fluctuations
Prior to the vibration measurement, the sensor response was tested against source fluctuations and fiber bending at the source end. Figure 3 shows the effect of source fluctuation on the sensing and reference signals. The measured signals from PD1 and PD2 change when the LED intensity is varied by means of the driving voltage, whereas the RO of these signals is insensitive to the source fluctuations; the RO method thus minimizes the effect of source fluctuations on the response of the sensor. Similarly, to test the effect of fiber bending losses on the sensor output, the optical fiber was bent using a microbending pressure element. Figure 12 illustrates the effect of fiber microbending at the source end (port2) on the individual outputs of PD1 and PD2 as well as on their RO. Even though the outputs of PD1 and PD2 are affected by the fiber bending, the RO is not [34,35].
Vibration measurement setup
The schematic experimental setup of the fiber optic 2x2 fused coupler for the vibration measurement is shown in Figure 13. The setup is mounted on a vibration-free table. To test the sensor response to vibrations, a synthesized function generator and a commercial speaker, 25 mm deep with a 65 mm diameter diaphragm and a reflector attached at the center of the diaphragm, are used. A data acquisition system records the TDS of the sensor and monitors the sensor response for known frequencies and amplitudes of vibration of the speaker.
In general, most vibrations are sinusoidal displacements of the vibrating object about its mean position, and they can be characterized by measuring their amplitude and frequency alone. Thus, the FFT technique is used to convert the TDS response into a frequency domain response in order to analyze the object vibrations in terms of frequency and to compute the amplitude. The experiment is repeated for various frequencies to determine the highest detectable frequency and the amplitude resolution, and to test the reliability of the sensor.
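A minimal sketch of the FFT step, with placeholder sampling parameters for the DAQ record:

```python
import numpy as np

fs = 50_000                               # Hz, DAQ sampling rate (placeholder)
t = np.arange(0, 0.2, 1 / fs)
tds = 0.3 * np.sin(2 * np.pi * 1000 * t)  # placeholder 1 kHz sensor record

spec = np.abs(np.fft.rfft(tds * np.hanning(len(tds))))
freqs = np.fft.rfftfreq(len(tds), 1 / fs)
peak = spec[1:].argmax() + 1              # skip the DC bin
print("vibration frequency: %.0f Hz" % freqs[peak])
```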
Results and discussion
In general, the signal-to-noise ratio (SNR) is defined as the ratio between the power of the signal and that of the noise. It can be computed from the following formula [41]:

SNR = (power of signal) / (power of noise) = μ / σ, (18)

where σ is the standard deviation of the noise signal and μ is the expected value. The SNR is a standard figure of merit used to characterize the quality of the detected signal in a measurement system. To measure the SNR of the PD1, PD2, and RO signals, the sensing probe is kept at a constant distance from the speaker, without vibration, for a period of time, as shown in Figure 14(a); this reveals the stability of the detection signals. The SNR of the normalized RO, PD1, and PD2 signals is calculated using Eq. (18), and the SNR of the RO is found to be improved compared to PD2, as clearly shown in Figure 14(b).
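A minimal sketch of the SNR estimate of Eq. (18) on synthetic no-vibration records, illustrating why the RO improves the SNR when the dominant noise is common to both detectors:

```python
import numpy as np

def snr(x):
    # Eq. (18): expected value over standard deviation
    return np.mean(x) / np.std(x)

rng = np.random.default_rng(0)
common = 0.02 * rng.standard_normal(10_000)   # source noise seen by both PDs
pd2 = 1.00 * (1 + common)                     # sensing record, no vibration
pd1 = 0.25 * (1 + common) + 5e-4 * rng.standard_normal(10_000)
print("SNR PD2:", snr(pd2), " SNR RO:", snr(pd2 / pd1))
```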
To assess the vibration response, a sine wave is applied to the speaker and the corresponding TDS response of the sensor is recorded by the DAQ at a given frequency, as shown in Figure 15. The TDS waveform of the RO recorded by the DAQ and the corresponding FFT spectrum for a 1 kHz signal demonstrate how closely the sensor responds to a given frequency of applied vibration [34,35].
The sensor is also tested for its amplitude response by applying a gated signal of constant frequency and noting the corresponding PD1, PD2, and RO waveforms, as shown in Figure 16. The amplitude of the signal applied to the speaker is constant for a period of 1.1 s, followed by a damped decay of the signal when the signal generator is switched off, and later by a dc level, indicating the absence of the signal during this period. This figure thus depicts the amplitude response of the vibration sensor at a given frequency. The amplitude response, taken as the peak voltage of the FFT of the output signal versus the driving voltage applied to the vibrator, is observed to be linear at various frequencies, and the amplitude sensitivity of the sensor as a function of the applied frequency also exhibits a linear behavior [34,35].
Comparing the responses of the different configurations reported above, namely the dual POF, the 1x2 fused coupler POF, and the 2x2 fused coupler POF vibration sensors, it was found that the 2x2 coupler shows the best response; the results are tabulated in Table 2.
Summary
All-plastic optical fiber (POF), physically non-contact vibration sensors based on reflected light intensity modulation have been discussed, with various structures. Each successive design brought an improvement: with a proper design the dark region was eliminated, leaving a single slope and enabling easy alignment of the system. The rational output measurement eliminated some significant disturbances of the sensing, such as power fluctuations in the light source and bending losses. Comparing the response results of the dual POF, POF fused 1x2 coupler, and POF fused 2x2 coupler vibration sensors, the POF 2x2 fused coupler vibration sensor clearly exhibits the best response, with a high resolution of 0.03 μm up to a 3.5 kHz frequency range. | 7,770 | 2021-06-23T00:00:00.000 | [
"Engineering",
"Physics",
"Materials Science"
] |
Worldsheet dilatation operator for the AdS superstring
In this work we propose a systematic way to compute the logarithmic divergences of composite operators in the pure spinor description of the AdS5 × S5 superstring. The computations of these divergences can be summarized in terms of a dilatation operator acting on the local operators. We check our results with some important composite operators of the formalism.
Introduction
During the last decade there was a great improvement in the understanding of N = 4 super Yang-Mills theory due to integrability techniques, culminating in a proposal where the anomalous dimension of any operator can be computed at any coupling [1]. The crucial point of this advance was the realization that the computation of anomalous dimensions can be done systematically by studying the dilatation operator of the theory [2,3]. For a general review and an extensive list of references, we recommend [4]. An alternative to the TBA approach not covered in [4], the Quantum Spectral Curve, was developed in [5,6]. For some of its applications, including high-loop computations, see [7,8,9,10,11,12]. On the string theory side it is known that the worldsheet sigma-model is classically integrable [13,14]. However, it is not yet known how to fully quantize the theory, identifying all physical vertex operators and their correlation functions. In the case of the pure spinor string it is known that the model is conformally invariant at all orders of perturbation theory and that the non-local charges found in [14] exist in the quantum theory [15]. In a very interesting paper, [16] showed how to obtain the Y-system equations from the holonomy operator.
Another direction in which the pure spinor formalism has been used with success is quantization around classical configurations. In [17] it was shown that the semi-classical quantization of a large class of classical backgrounds agrees with the Green-Schwarz formalism; this was later generalized in [18,19]. Previously, Mazzucato and one of the authors [20] attempted to use canonical quantization around a massive string solution to calculate the anomalous dimension of a member of the Konishi multiplet at strong coupling. Although the result agrees with both the prediction from integrability and the Green-Schwarz formalism, this approach has several issues that make the results unreliable [21]. An alternative and more desirable approach is to use CFT techniques to study vertex operators and correlation functions, since scattering amplitudes are more easily calculated this way. A first step is to identify physical vertex operators. Since the pure spinor formalism is based on BRST quantization, physical vertex operators should be in the cohomology of the BRST charge. For massless states, progress has been made in [22,23,24,25]. For massive states the computation of the cohomology in a covariant way is a daunting task even in flat space [26].
A simpler requirement for physical vertices is that they should be primary operators of dimension zero in the unintegrated case and primaries of dimension (1,1) in the integrated case. Massless unintegrated vertex operators in the pure spinor formalism are local operators of ghost number (1,1) constructed from fields of zero classical conformal dimension [27]. For them to remain primary when quantum corrections are taken into account, their anomalous dimension should vanish. Massless integrated vertices have zero ghost number and classical conformal dimension (1,1); they will also be primaries when their anomalous dimension vanishes. Operators of higher mass level are constructed using fields of higher classical conformal dimension. For general mass level n (where n = 1 corresponds to the massless states) the unintegrated vertex operators have classical conformal dimension (n−1, n−1). If such a vertex has anomalous dimension γ, the condition for it to be primary is 2n − 2 + γ = 0. The case of integrated vertex operators is similar. For strings in flat space γ is always α′k²/2, which is the anomalous dimension of the plane wave e^{ik·X}; this reproduces the usual mass level formula. The task of computing γ can be made algorithmic in the same spirit as in the four-dimensional SYM case [2,3]. However, here we are interested in finding the subset of operators satisfying the requirements described above. The value of the energy of the corresponding string state should come as the solution of an algebraic equation obtained from this requirement. We do not expect the energy to be simply one of the parameters in the vertex operator; the proper way to identify it is to compute the corresponding conserved charge and apply it to the vertex operator.
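As a quick flat-space consistency check of the statements above: for an unintegrated vertex at mass level n, the primary condition 2n − 2 + γ = 0 together with γ = α′k²/2 gives

2(n − 1) + α′k²/2 = 0  ⟹  m² = −k² = 4(n − 1)/α′,

which vanishes at n = 1 (the massless level) and reproduces the standard closed-string mass spectrum.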
In this paper we intend to systematize the computation of anomalous dimensions on the worldsheet by computing all one-loop logarithmic short-distance singularities in products of operators with at most two derivatives. To find the answer for operators with more derivatives one simply has to compute the higher-order expansion in the momentum of our basic propagator. We use the method applied by Wegner in [28] for the O(n) model, modified for the background field method; this method was already used with success in [29,30] for some Z₂ super-coset sigma models. The pure spinor string is a Z₄ coset and has an interacting ghost system, which makes it more difficult to organize the dilatation operator D into a concise expression and to find solutions of the eigenvalue problem D O = γ O. We can select a set of "letters" {φ_P} among the basic fields of the sigma model, e.g. the AdS coordinates, ghosts, and derivatives of these fields; D then acts on local operators O built as functions of the letters, and the problem is to find its eigen-operators. Unlike the case of N = 4 SYM, the worldsheet derivative is not one of the elements of the set, so fields with a different number of derivatives correspond to different letters. Another important difference with the usual case is that the order of the letters does not matter, so O is not a spin chain.
The problem of finding physical vertices satisfying this condition will be postponed to a future publication. Here we will compute D and apply it to some local operators in the sigma model which should have vanishing anomalous dimension. The search for vertex operators in AdS using this approach was already discussed in [31], but without the contribution from the superspace variables; the author used the same "pairing" rules computed in [28]. This paper is organized as follows. In section 2 we describe the method used by Wegner in [28] for the simple case of the principal chiral field. This method consists of solving a Schwinger-Dyson equation in the background field expansion. In section 3 we explain how to apply the aforementioned method to the pure spinor AdS string case. The main derivation and results are presented in the Appendix B. Section 4 contains applications, where we use our results to compute the anomalous dimension of several conserved currents. Conclusions and further applications are in section 5.
2 Renormalization of operators in the principal chiral model
The purpose of this section is to review the computation of logarithmic divergences of operators in principal chiral models using the background field method. Although this is standard knowledge, the approach taken here is somewhat unorthodox, so we include it for the sake of completeness. Also, the derivation of the full propagators in the case of AdS5 × S5 is analogous to what is done in this section, so we omit their derivations.
Consider a principal chiral model for some group G, with corresponding Lie algebra g, in two dimensions. The action is given in terms of a coupling constant α and g ∈ G. Using the left-invariant currents J = g⁻¹∂g and defining √λ = 1/α², we can also write it as (2.2). The full one-loop propagator is derived from the Schwinger-Dyson equation, where δz is an arbitrary local variation of the fundamental fields and O(y) is a local operator. This equation comes from the functional integral definition of ⟨···⟩. In order to be more explicit, let us consider a parametrization of g in terms of quantum fluctuations around a classical background, g = g₀e^X, where g₀ is the classical background, X = X^a J_a, and the J_a ∈ g are the generators of the algebra. Then a variation of g is given by δg = g δX, with δX = δX^a J_a, where the δX^a are the variations of the independent fields. Also, the variation of some general operator O is δO = (δO/δX^a) δX^a. Then we can write the Schwinger-Dyson equation in terms of these variations, and it is now clear that it is a consequence of an elementary functional-integral identity. In the case O(y) = X(y) we get the Schwinger-Dyson equation for the propagator. This is a textbook way to obtain the equation for the propagator in free field theories, and our goal here is to solve this equation for the interacting case at one-loop order. The perturbative expansion of the action is done using the background field method: a fixed background g₀ is chosen and the quantum fluctuation is defined through g = g₀e^X. The expansion of the current is given by J → e^(−X) J e^X + e^(−X) ∂e^X, where J = g₀⁻¹∂g₀ is the background current (boldface fields in the equations denote background quantities). At one-loop order only quadratic terms in the quantum field expansion contribute and, as usual, linear terms cancel upon use of the background equations of motion. This means that we can separate the relevant terms of the action into two pieces, S = S⁽⁰⁾ + S⁽²⁾. Furthermore, S⁽²⁾ contains the kinetic term plus interactions with the background, so we have S = S⁽⁰⁾ + S_kin + S_int. (2.8) If we insert this into (2.6), the terms that depend purely on the background cancel and we are left with (2.9). Finally, this is the equation that we have to solve: an integral equation for ⟨X^a(z)X^b(y)⟩ = G^{ab}(z, y), the one-loop corrected propagator. The interacting part of the action is written with the boldface fields standing for the background fields. We then calculate the second variation of S_int, which is symmetric under the exchange of (a, z) and (c, w), as expected, and we define f_{ab}{}^c = f^b{}_{cd} η^{da}. So we get the following equation for the propagator
(2.13)
Performing the Fourier transform we finally obtain the momentum-space equation; the dependence on one of the coordinates remains because the presence of background fields breaks translation invariance on the worldsheet. We can solve this equation iteratively in inverse powers of k, and the first few contributions follow. With this solution we can finally perform the inverse Fourier transform to calculate G^{ac}(z, y). If we are only interested in the divergent part of the propagator we can already set z = y. Furthermore, selecting only the divergent terms in the momentum integrals, we obtain the result below, working in d = 2 + ε dimensions with standard dimensional regularization [28]. Since ∂⟨X^a ∂X^c⟩ = ⟨∂X^a ∂X^c⟩ + ⟨X^a ∂²X^c⟩, we can further compute
(2.26)
From now on ⟨···⟩ will denote only the logarithmically divergent part of the expectation value. A simple way to extract this information is by defining ⟨O⟩ for any local operator O. Furthermore, we define the pairing ⟨O₁, O₂⟩ of two local operators; following [31] we will call these the "pairing" rules. For local operators these two definitions always give two delta functions, effectively setting all fields at the same point. So the computation of ⟨···⟩ can be summarized by the action of the dilatation operator D. With the above definitions, the divergent part of any product of local operators at the same point can be computed using (2.32). Several known results can be derived using this simple set of rules; following this procedure in the case of the symmetric space SO(N+1)/SO(N) gives the same results obtained by Wegner [28] using a different method.
3 Dilatation operator for the AdS5 × S5 superstring
In this section we will apply the same technique to the case of the pure spinor AdS string. We begin with a review of the pure spinor description, pointing out the differences between this model and the principal chiral model, and then describe the main steps of the computation.
3.1 Pure spinor AdS string
The pure spinor string [26,14,15] in AdS has the same starting point as the Metsaev-Tseytlin action [32]. The maximally supersymmetric type IIB background AdS5 × S5 is described by the supercoset PSU(2,2|4)/(SO(1,4) × SO(5)). The pure spinor action is given in (3.2). There are several differences between the principal chiral model action and (3.2). First, the model is coupled to ghosts. The pure spinor action also contains a Wess-Zumino term, and the global invariant current J belongs to the psu(2,2|4) algebra, which is a graded algebra with a Z4 grading. Thus we split the current as J = A + J1 + J2 + J3, where A = J0 belongs to the algebra of the quotient group H = SO(1,4) × SO(5); the notation we use for currents of different grade follows this split. The ghost fields are defined accordingly. Note that the A and A′ indices on the ghosts stand for α and ᾱ, but we use different letters in order to make it easier to distinguish which terms come from the ghosts and which come from the algebra. The pure spinor condition can be written as in Eq. (3.6). Following the principal chiral model example, we expand g around a classical background g₀ using the g = g₀e^X parametrization. It is worth noting that X = x₀ + x₁ + x₂ + x₃ belongs to the psu(2,2|4) algebra, but we can use the coset property to fix x₀ = 0. With this information the quantum expansion of the left-invariant current is given in (3.7), where we take x₀ = 0 as mentioned before and use g₀⁻¹∂g₀ = J = A + J1 + J2 + J3. The boldface terms stand for the background quantities, both for the currents and for the ghost fields.
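For orientation, the Z4 grading means that graded brackets add grades modulo four, a standard property of this supercoset:
\[
\mathfrak{psu}(2,2|4) \;=\; \mathcal{H}_0 \oplus \mathcal{H}_1 \oplus \mathcal{H}_2 \oplus \mathcal{H}_3 ,
\qquad
[\,\mathcal{H}_i ,\, \mathcal{H}_j \,\} \;\subset\; \mathcal{H}_{(i+j)\ \mathrm{mod}\ 4} ,
\]
with H0 the algebra of the quotient group H = SO(1,4) × SO(5); the grade-1 and grade-3 currents J1, J3 are fermionic and the grade-2 current J2 is bosonic.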
Using all this information in the action, we obtain the expansion whose kinetic part is displayed in (3.8); the full expansion can be found in Appendix C. In order to compute the logarithmic divergences, we need to generalize the method explained in Section 2 to a coset model with ghosts. The following subsection is devoted to explaining this generalization.
3.2 General coset model coupled to ghosts
In this subsection we generalize the method of Section 2 to the case of a general coset G/H and then specialize to the pure spinor string case. We will denote the corresponding algebras g and h, where h is a subalgebra of g. The generators of g − h will be denoted by T_a, where a = 1, ..., dim g − dim h, and the generators of h will be denoted by T_i, where i = 1, ..., dim h. We also include a pair of first-order systems (λ^A, ω_B) and (λ̄^{A′}, ω̄_{B′}) transforming in two representations (Γ_i{}^B{}_A, Γ_i{}^{B′}{}_{A′}) of h. We will assume that the algebra g has commutation relations with f^c_{ab} ≠ 0 for a general coset and f^c_{ab} = 0 if there is a Z2 symmetry, i.e., if G/H is a symmetric space. As in the usual sigma model, g ∈ G/H and the currents J = g⁻¹∂g are invariant under left global transformations in G. We can decompose J = J^a T_a + A^i T_i, where J^a T_a ∈ g − h and A^i T_i ∈ h. With this decomposition the vielbein part K = J^a T_a transforms in the adjoint representation of h and A transforms as a connection. We will also allow a quartic interaction in the first-order sector.
In terms of the currents N^i and N̄^i built from the first-order fields, the interaction will be βN^i N̄_i, where β is a new coupling constant that in principle is not related to the sigma model coupling.
The total action is then the sum of the sigma model action and the first-order system action, where ∇ and ∇̄ are the covariant derivatives for the first-order system, ensuring gauge invariance.
The background field expansion is different for a general coset and for a symmetric space. Since we want to generalize the results to the case of AdS5 × S5, we will use a notation that keeps both types of interactions. Again, the quantum coset element is written as g = g₀e^X, where g₀ is the classical background and X = X^a T_a are the quantum fluctuations. Up to quadratic terms in the quantum fluctuations, the expansion of the action contains covariant derivatives acting on X together with the tensors (Z_{abc}, Z̄_{abc}, R_{abcd}), which are model dependent. In the case of a symmetric space Z = Z̄ = 0 and R_{abcd} = f^i_{ab} f_{icd}. In the general coset case Z_{abc} = Z̄_{abc} = ½ f_{abc}; if there is a Wess-Zumino term, the values of Z_{abc} and Z̄_{abc} can differ. Since we want to treat the general case, we will not substitute the values of these tensors until the end of the computations. In the action above, the quantum connections have an expansion with W^i_{abc} = ½ f^d_{ab} f^i_{dc} for a general coset, vanishing for a symmetric space. To proceed, we have to compute the second-order variation of the action with respect to the quantum fields. The difference this time is that there are many more couplings, so we expect a system of coupled Schwinger-Dyson equations, one for each possible corrected propagator. For example, in the free theory approximation there is no propagator between the sigma model fluctuations and the first-order system, but due to the interactions there may be corrected propagators between them. Since a propagator is not a gauge-invariant quantity, it can depend on gauge-dependent combinations of the background gauge fields (A^i, Ā^i). Furthermore, since we have chiral fields transforming in two different representations of h, it is possible that the quantum theory has anomalies. In the case of the AdS5 × S5 string sigma model it was argued by Berkovits [15] that there is no anomaly to all loops; an explicit one-loop computation was done in [33]. Therefore it is safe to assume that the background gauge fields only appear in physical quantities in gauge-invariant combinations. The simplest combination of this type is Tr[∇, ∇̄]². Since the classical conformal dimension of this combination is four, and so far we are interested in operators of classical conformal dimension 0 and 2, we can safely ignore all interactions with (A^i, Ā^i).
We will assume a linear quantum variation of the first-order system, e.g., λ^A → λ^A + δλ^A. Instead of introducing more notation and a cumbersome interaction Lagrangian, we will simply compute the variations of these fields in the action and set the remaining fields to their background values.
With all these simplifications and constraints in mind, let us start constructing the Schwinger-Dyson equations. First we compute all possible non-vanishing second variations of the action. We denote these second-order derivatives generically as I_{ΣΛ}(z, w), where Σ and Λ can be any of the indices (a, A, B, A′, B′). Also, the quantum fields will be denoted by Φ_Σ(z).
With this notation the Schwinger-Dyson equations take a compact form. Note that the only non-vanishing components of δ_Σ{}^Λ are η_{ab}, δ_A{}^B and δ_{A′}{}^{B′}. Since the type and position of the indices completely identify the field, the propagators are denoted by G_{ΣΛ}(z, y) = ⟨Φ_Σ(z)Φ_Λ(y)⟩. Since we have five different types of fields, there are fifteen coupled Schwinger-Dyson equations to solve. Again we have to make a simplification. Interpreting (λ^A, λ̄^{A′}) as left- and right-moving ghosts, and knowing that in the pure spinor superstring unintegrated vertex operators have ghost number (1,1) with respect to (G, Ĝ), we will concentrate on only four corrected propagators: ⟨X^a(z)X^b(y)⟩, ⟨X^a(z)λ^A(y)⟩, ⟨X^a(z)λ̄^{A′}(y)⟩ and ⟨λ^A(z)λ̄^{A′}(y)⟩. As in the principal chiral model case, we solve the Schwinger-Dyson equations first in momentum space. It is useful to note that, since we solve these equations in inverse powers of k, the first contributions to the corrected propagators have a definite leading form. Regarding (A, A′) as one type of index, we can arrange the whole Schwinger-Dyson equation into a matrix notation with three main blocks. Doing the same Fourier transform as before, we get a matrix equation (3.17) that can be solved iteratively. All elements of the interaction matrix F_{ΣΓ} are shown in Appendix C. As in Section 2, the solution to equation (3.17) is computed iteratively, order by order in inverse powers of k.
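As a toy numerical illustration of this iterative structure (not the actual Appendix C computation: the matrix F below is hypothetical filler, and all index and tensor structure is suppressed), the corrected propagator can be built order by order from G = G0 + G0 F G:

```python
import numpy as np

# Toy sketch: solve a Schwinger-Dyson-type matrix equation
#   G(k) = G0(k) + G0(k) F G(k)
# iteratively, each pass adding one more power of the free
# propagator G0 ~ 1/k^2. F stands in for the interaction matrix
# (in the paper it collects second variations of the action).

def corrected_propagator(k2, F, n_orders=4):
    dim = F.shape[0]
    G0 = np.eye(dim) / k2       # free propagator, schematically 1/k^2
    G = G0.copy()
    for _ in range(n_orders):   # iterate G -> G0 + G0 F G
        G = G0 + G0 @ F @ G
    return G

F = np.array([[0.0, 0.1],       # hypothetical couplings only
              [0.1, 0.0]])
print(corrected_propagator(k2=4.0, F=F))
```

Each iteration reproduces one more term of the Neumann series G0 + G0 F G0 + G0 F G0 F G0 + ..., mirroring the expansion in inverse powers of k described above.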
3.3 Pairing rules
As discussed in the introduction and in Section 2, the computation of the divergent part of any local operator can be summarized by the pairing rules of a set of letters {φ_P}. The complete set of these pairing rules can be found in Appendix C. If we choose a set of letters such that ⟨φ_P⟩ = 0, then the divergent part of the product of two letters is simply the pairing ⟨φ_P, φ_Q⟩. We computed the momentum-space Green function up to quartic inverse powers of the momentum, so we must restrict our set of letters to fundamental fields up to classical dimension one, and this fixes the convenient set of letters we will use. If we extend the computation to take into account operators with more than two derivatives, the set of letters has to be extended to include them. The matrix elements of the dilatation operator, D_{PQ} = ⟨φ_P, φ_Q⟩, are the full set of pairings described in Appendix C.4. To avoid cumbersome notation, the pairing rules are written contracted with the corresponding psu(2,2|4) generators. The computations done in the next section are a straightforward application of the differential operator (3.23).
4 Applications
In this section we use our results to prove that certain important operators in the pure spinor sigma model are not renormalized. The operators we choose are the stress-energy tensor, the conserved currents related to the global PSU(2,2|4) symmetry, and the composite b-ghost. All these operators are a fundamental part of the formalism, and it is a consistency check that they are indeed not renormalized. All the computations below are an application of the differential operator (3.23). We use the notation ⟨O⟩ = D · O.
4.1 Stress-energy tensor
The holomorphic and anti-holomorphic stress-energy tensors for (3.2) are given by supertraces of the currents and ghost contributions, T = STr(···). For the holomorphic one we find ⟨T⟩ = 0, (4.3) where we used the results in (C.102, C.127) and the identity (B.5). A similar computation applies to the antiholomorphic T̄, where now we use the results in (C.103, C.128) and the identity (B.6).
4.2 Conserved currents
The string sigma model is invariant under global left multiplications by an element of psu(2,2|4), δg = Λg. We can calculate the conserved currents j and j̄ related to this symmetry using the standard Noether method; they should be free of divergences. To see that this is the case, it is easier to compute in parts. We have defined ⟨AB, C⟩ as usual, but taking B as a classical field, thus ⟨AB, C⟩ = ⟨A, BC⟩. From (C.101) we get ⟨A⟩ = 0, and using (C.100) we obtain the divergent parts of the remaining pieces. For the currents, using the results (C.106-C.111), we obtain an expression of the form ⟨{[J_{1,3}, T_m], T_n}⟩ η^{mn} (4.9), but we already know that {[J_{1,3}, T_a], T_b} g^{ab} = 0 for a = {i, m, α, ᾱ}, see (B.7). By lowering all the indices of the structure coefficients, we can see that the first term is just (f_{iα}{}^β f_{jᾱβ̄} − f_{iᾱ}{}^{β̄} f_{jαβ}) η^{αᾱ} η^{ββ̄}, and the second term is proportional to the dual Coxeter number, see (B.5, B.6), which is 0. Thus, summing everything, we get ⟨j⟩ = 0. For the antiholomorphic current we obtain, using the same results as before together with {[J_{1,3}, T_a], T_b} g^{ab} = 0, and proceeding as for j, ⟨j̄⟩ = 0.
4.3 b-ghost
The pure spinor formalism does not have fundamental conformal ghosts. However, in a consistent string theory the stress-energy tensor must be BRST exact, T = {Q, b}, so there must exist a composite operator b of ghost number −1 and conformal weight 2. The flat-space b-ghost was first computed in [34], and a simplified expression for it in the AdS5 × S5 background was derived in [35]. In our notation, the left- and right-moving b-ghosts can be written in terms of the currents and ghost fields. Let us first compute the divergent part of the left-moving b-ghost; we will need the results from (C.143) to (C.153). The λλ̄ term is easy, using (B.7), and the ωJ1 term is also 0. The other terms cancel as a consequence of the Jacobi identity, see Appendix B. The remaining terms are computed using relations that hold due to the pure spinor condition.
For b̄ one needs to use the same relations as above.
5 Conclusions and further directions
In this paper we outlined a general method to compute the logarithmic divergences of local operators of the pure spinor string in an AdS5 × S5 background. In the text we derived in detail the case of operators up to classical dimension two, but the method extends to any classical dimension. Although the worldsheet anomalous dimension is not itself related to a physical observable, unlike the case of N = 4 SYM, physical vertex operators should not have quantum corrections to their classical dimension. The main application of our work is to systematize the search for physical vertex operators. We presented some consistency checks verifying that some conserved local operators are not renormalized.
The basic example is the radius operator discussed in [35]. It has ghost number (1,1) and zero classical dimension, and in our notation it can be written in terms of the ghost fields. If we apply the pairing rules to compute ⟨V⟩ we obtain zero, where in the last equality we replaced the structure constants and used one of the identities in Appendix A. This can be generalized to other massless and massive vertex operators; we plan to return to this problem in the future.
A more interesting direction is to try to organize the dilatation operator including the higher-derivative contributions. As we commented in the introduction, the difficulty here is that the pure spinor action is not a usual coset action as in [29,30]. However, it might still be possible to obtain the complete one-loop dilatation operator restricted to some subsector of the psu(2,2|4) algebra, in a way similar to what was done for the super Yang-Mills dilatation operator [2].
A Conventions
The only non-zero supertraces of generators are listed below. For the raising and lowering of fermionic indices in the structure constants we use the rules given here, and for f^α_{ᾱi} the rule is the same. For the bosonic case we use the standard raising/lowering procedure.
B Some identities for psu(2,2|4)
Let A, B and C be bosons and X, Y and Z fermions; then the generalized Jacobi identities hold in graded form. In this theory the dual Coxeter number is 0, which implies the vanishing of certain contractions of structure constants. The Jacobi identity yields f_{mαβ} f_{nᾱβ̄} η^{mn} η^{αᾱ} = 0 and f_{iα}{}^β f_{jᾱβ̄} g^{ij} η^{αᾱ} = 0, which imply further relations. Another useful property of this theory is the pure spinor condition, Eq. (3.6); using it, it is easy to prove identities for pairings of the type ⟨λ, λ, A⟩.

C Complete solution of the SD equation for the AdS5 × S5 pure spinor string
In this Appendix we apply the method explained in Section 2, and generalized in Section 3, to the AdS5 × S5 superstring.
Step by step, the procedure is as follows:
1. Using an expansion around a classical background, g = g₀e^X, we compute all the currents up to second order in X;
2. Expand the action (3.2) up to second order in X;
3. Write down the Schwinger-Dyson equation for the model and compute the interaction matrix;
4. Compute the Green functions in powers of 1/k;
5. Compute ⟨φ_i, φ_j⟩.
The expansion of the currents was already done in (3.7). The remaining subsections are devoted, one each, to the steps listed above.
We will drop the boldface notation for the background fields in this section. All the quantum corrections come from either an X-term or a (δω, δλ, δω̄, δλ̄)-term. Thus, every field in S_int, in the F-terms, in the Green's functions, and on the RHS of the pairing rules should be treated as classical.
C.1 Action
In (3.8) we showed the kinetic part of the expansion of (3.2) and promised to show the interaction part later; here we fulfil that promise. Up to second order in X, the interaction part is given below. The lack of covariant derivatives is, as explained previously, due to the pure spinor sigma model being anomaly free. This means that physical quantities only appear in gauge-invariant expressions, so the interchange ∂ ↔ ∇ can be done at any moment in our computation. A more detailed explanation can be found in Subsection 3.2.
C.2 Interaction matrix
The interaction matrix is given by the second variations of S_int. The directional derivative means that we compute the functional derivative of S_int with respect to Φ_Σ acting from right to left. Because we are working in momentum space, it is useful to also write F in momentum space; for that reason the equation we work with includes the Fourier kernel. Note that f(y) stands for the previous Green's function and the exponential comes from the Fourier transform. The directional derivative has the same meaning as above.
We organize the interaction matrix by the Z4 charge of its indices, and at the end we add the ghost contributions.
First we compute the F_{αΛ} terms of the matrix. The terms of the F_{mΛ} kind follow. The last contribution from the non-ghost terms is given by the F_{ᾱΛ} elements. Finally we compute the pure ghost terms, and we save some trees by not listing the symmetric terms already given, where we have defined (C.45)
C.3 Green functions
With all the previous results, we begin the computation of the Green's functions as a power series in 1/k. We follow the prescription given in (3.17). The Green functions are presented order by order, which makes the reading easier.
The only contributions of order 1/k come from the ghost propagators (C.47). For the 1/k² terms, we have a contribution from the non-ghost propagators and another from the ghost interactions (C.55). At order 1/k³ we have interactions between the non-ghost fields. We organize these terms in the same order as in the previous section; when G_{ΛΩ} = cG_{ΩΛ}, with c = ±1, we only list the first term.
Using the given prescription, we find the G^{αΛ}_3 terms, then the G^{mΛ}_3 terms, and then the G^{ᾱΛ}_3 terms. The G_3 with only ghost indices are given through (C.90). The 1/k⁴ terms are needed when we compute terms with two derivatives. Since we are not computing anything with two derivatives and at least one ghost field, we do not list those Green's functions. The G^{αΛ}_4 terms are listed first; the remaining Green's functions follow. The reason we do not compute terms such as G^{αm}_4 is that we can deduce their contribution from the relation ⟨∂X ∂X⟩ = ∂⟨X ∂X⟩ − ⟨X ∂∂X⟩, as explained in Section 2.
C.4 Pairing rules
We split the current into its gauge part J0 and the vielbein K. We also join the quantum fluctuations into a single term. The following is the list of all divergent parts up to two derivatives. The order of the results is: first terms with no derivatives, then the currents, then one X with one current, and finally two currents. Lastly, we list the pairing rules involving ghost fields. The definition of I in this appendix is I = −1/(2R²ε).
The non-vanishing terms with no derivatives are the ones given by the first term in the Schwinger-Dyson equation. Next we show the divergent part of the currents. For one X with one current, the simplest case is the one involving J0; the results for the other currents follow. Then we show the divergent part of two currents: the first group are the ⟨J0, •⟩ terms, and the ⟨J1, •⟩ terms follow.
| 7,592.2 | 2015-09-02T00:00:00.000 | [
"Physics"
] |
RP-REP Ribosomal Profiling Reports: an open-source cloud-enabled framework for reproducible ribosomal profiling data processing, analysis, and result reporting
Ribosomal profiling is an emerging experimental technology to measure protein synthesis by sequencing short mRNA fragments undergoing translation in ribosomes. Applied on the genome wide scale, this is a powerful tool to profile global protein synthesis within cell populations of interest. Such information can be utilized for biomarker discovery and detection of treatment-responsive genes. However, analysis of ribosomal profiling data requires careful preprocessing to reduce the impact of artifacts and dedicated statistical methods for visualizing and modeling the high-dimensional discrete read count data. Here we present Ribosomal Profiling Reports (RP-REP), a new open-source cloud-enabled software that allows users to execute start-to-end gene-level ribosomal profiling and RNA-Seq analysis on a pre-configured Amazon Machine Image (AMI) hosted on AWS or on the user’s own Ubuntu Linux server. The software works with FASTQ files stored locally, on AWS S3, or at the Sequence Read Archive (SRA). RP-REP automatically executes a series of customizable steps including filtering of contaminant RNA, enrichment of true ribosomal footprints, reference alignment and gene translation quantification, gene body coverage, CRAM compression, reference alignment QC, data normalization, multivariate data visualization, identification of differentially translated genes, and generation of heatmaps, co-translated gene clusters, enriched pathways, and other custom visualizations. RP-REP provides functionality to contrast RNA-Seq and ribosomal profiling results, and calculates translational efficiency per gene. The software outputs a PDF report and publication-ready table and figure files. As a use case, we provide RP-REP results for a dengue virus study that tested cytosol and endoplasmic reticulum cellular fractions of human Huh7 cells pre-infection and at 6 h, 12 h, 24 h, and 40 h post-infection. Case study results, Ubuntu installation scripts, and the most recent RP-REP source code are accessible at GitHub. The cloud-ready AMI is available at AWS (AMI ID: RPREP RSEQREP (Ribosome Profiling and RNA-Seq Reports) v2.1 (ami-00b92f52d763145d3)).
Introduction
While the principles for ribosomal profiling (RP) were invented decades ago, the application of next-generation sequencing recently set the stage for genome-wide assessments of translation at codon resolution [1][2][3]. The technique makes use of the facts that mRNAs undergoing translation can be chemically fixed to the ribosome and that the joint ribosome/mRNA complexes can be isolated using chromatography after mRNAs not protected by ribosomes have been degraded using ribonucleases. Following isolation, as for RNA-Seq, mRNA fragments are reverse transcribed and sequenced. The resulting reads may not only represent true ribosomal footprints (reads that originated from mRNA bound to a ribosome, typically ranging between 25 and 35 nt in length) but also artifacts/contaminants that were not actively translated, such as spurious mRNA or rRNA. These artifacts need to be identified and removed before or during the reference genome alignment step. The resulting clean RP data can then be used for multiple purposes, including mapping of translation sites such as initiation or elongation regions, characterization and timing of protein folding (when combined with ChIP), and quantification of genome-wide translation via counting of ribosomal footprints per gene 4 . The latter can be utilized to assess changes in mRNA translation in individual cells or different groups of cells in response to certain drugs or therapeutics, providing insights into how such treatments work on the gene and pathway level and how these effects differ across patients or patient cohorts. In an ideal scenario, such information could then be utilized to develop predictive biomarkers to personalize treatment.
Before scientists can readily analyze RP data, key challenges must be overcome 5 . These include the provisioning of adequate hardware and software resources to meet the data processing and storage requirements for this type of analysis. Depending on the size of the project, both can be substantial. In addition, setting up a suitable RP data processing and analysis workflow requires significant bioinformatics programming resources and careful workflow parameterization. Analysis and visualization of high-dimensional RP data is not trivial, requiring a thorough understanding of multivariate data analysis and statistical methods for appropriately modeling the data 6 . Even if all these challenges are addressed, ensuring fully reproducible results when all steps are re-executed is very hard to accomplish unless all components are tightly integrated and automated, and software versions, arguments, and reference data are properly controlled.
Here we present RP-REP 7 , a new software that allows scientists to address these challenges and, at the same time, facilitates full reproducibility starting from the raw data. The software is designed to run on scalable cloud resources via AWS, and a pre-built AMI is available (ami-00b92f52d763145d3). Alternatively, users can install the software on a local Ubuntu machine using our installation script (RPREP/ubuntu/installsoftware-v2.1.0.sh). The software also allows for joint data processing and analysis of both RP and RNA-Seq data, leveraging functionality of our previously published RNA-Seq software (RSEQREP) 8 . We demonstrate the joint capabilities of our RP-REP software for a published dengue virus study that collected cytosol and ER cellular fractions of human Huh7 cells pre-infection and at 6 h, 12 h, 24 h, and 40 h post-infection and performed multiple replicate RNA-Seq and RP experiments (GEO:GSE69602) 9 . Figure 1 provides an overview of the RP-REP software components. The software is organized into four main components: (1) setup, (2) pre-processing, (3) analysis, and (4) reporting (Figure 1A). The software utilizes a variety of open-source software in combination with custom shell, R, and Perl scripts to process raw sequence data, quantify gene expression, and track storage, CPU, memory, and other runtime metrics. Preprocessing steps are organized into two stages. Stage 1 executes read filtering steps (Figure 1B), while Stage 2 executes read mapping and gene-level quantification (Figure 1C). Stage 1 performs RNA artifact filtering by retaining raw FASTQ reads that fail to map to an alignment index built from known human rRNAs, rRNA pseudogenes, tRNA pseudogenes, mitochondrial rRNAs (mt-rRNAs), mitochondrial tRNAs (mt-tRNAs), and mt-rRNA pseudogenes, as well as other known rRNA sequences from Ensembl and GenBank. Additional read processing, such as adapter trimming and quality and read-length filtering to retain reads that likely represent true ribosomal footprints (read lengths 25-35 nt), can be performed (Figure 1B). Stage 2 performs splice-aware human reference genome alignment of reads that have been trimmed and/or filtered during Stage 1, followed by gene expression quantification carried out on the gene level and reference alignment QC, including the generation of gene body read coverage plots (Figure 1C). Processing of samples within each stage is parallelized using the Snakemake workflow management system 10 . Dependencies of steps within each stage are outlined in Figures 1B and 1C and are optimally prioritized based on available computing resources.
Implementation
The analysis component is based on R, using both custom R programs and existing R/Bioconductor packages (Figure 1A). The reporting component is based on R (Version 3.6.0), the knitr R package (Version 1.23), and LaTeX (Version TeX Live 2012/Debian) for reproducible and automatic PDF report and figure/table generation. All components read user-defined arguments from the respective tab in the RPREP/config/config.xlsx spreadsheet.
Operation
All four workflow components can be run in sequence via the RPREP/run-all.sh script 7 or can be run individually to update the results of the respective component. When running each individual step, the most recent version of the configuration file is reloaded to ensure that any modifications to the configuration are reflected. This is particularly useful for optimizing results by removing outliers, adjusting cut-offs, and for overall report customization such as color-coding.
Step 1. Configuration parsing and setup: The RPREP/run-setup.sh script parses the configuration .xlsx file, downloads the genome and gene models, and prepares the preprocessing and analysis/report result directories.
Step 2A. Stage 1 data preprocessing: The RPREP/source/shell/run-pre-processing.sh script initiates the preprocessing workflow, reading in all user-specified arguments provided in the config.xlsx file. Reference data, including user-specified versions of the human reference genome sequence and associated gene model information from the Ensembl database, are accessed 11 . Input for pathway enrichment analysis is handled via Gene Matrix Transposed (GMT) files. GMT files using Entrez Gene IDs, Ensembl Gene IDs, or gene symbols are supported, and identifiers are automatically mapped to the human Ensembl reference annotations. We recommend that users obtain reference pathway GMT files from the Molecular Signatures Database (MSigDB) 12 . The MSigDB import is not automated, as the download requires registration, but the location of the downloaded GMT file can be specified in the configuration file.
We do provide a script (RPREP/source/shell/download-gene-sets.sh) to automatically download Reactome, Blood Transcriptome Module, and KEGG pathway information and convert it to GMT files (note that a license may be required prior to downloading the KEGG pathway information). Contaminant sequences of known human rRNAs, rRNA pseudogenes, tRNA pseudogenes, mitochondrial rRNAs (mt-rRNAs), mitochondrial tRNAs (mt-tRNAs), and mt-rRNA pseudogenes, as well as other known rRNA sequences from Ensembl and GenBank, are downloaded using the biomaRt software (Version 2.40.0) and the Ensembl Perl API (Version 90). Following the reference data download, a Bowtie2 index 13 of the contaminant masking sequences is created to optimize reference alignment searches. Based on the FASTQ file input specifications in config.xlsx, the workflow downloads and optionally decrypts FASTQ files from AWS S3 cloud storage, a local file location, or directly from the SRA 29 via file references. Following the download, the script executes sequence data QC (FastQC). Next, 3' and 5' adapter sequences are trimmed from reads using Cutadapt (Version 2.3) 14 .
Reads with a Phred quality score of less than 20 for the majority of bases are removed using the FASTQ quality filter from the FASTX-Toolkit software package (Version 0.0.14) 15 . During processing of ribosomal profiling data, reads that fall outside the typical length range of ribosomal footprints (25 nt to 35 nt) are removed. Reads are then aligned to the index of contaminant sequences using Bowtie2 (Version 2.3.5) with its local alignment option. Reads that map to contaminant sequences are removed, and those that do not are written to a FASTQ file for alignment to the human reference genome. With the exception of read-length filtering, the RNA-Seq data are processed as described above.
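For illustration, the footprint read-length filter described above amounts to the following minimal sketch (RP-REP performs this step with the Cutadapt/FASTX toolchain rather than this script; the file names are placeholders and a plain uncompressed FASTQ is assumed):

```python
# Keep only reads whose sequence length falls in the typical
# ribosomal footprint range (25-35 nt); records are 4-line FASTQ blocks.

def filter_footprints(fastq_in, fastq_out, min_len=25, max_len=35):
    with open(fastq_in) as fin, open(fastq_out, "w") as fout:
        while True:
            record = [fin.readline() for _ in range(4)]
            if not record[0]:          # end of file
                break
            seq = record[1].strip()
            if min_len <= len(seq) <= max_len:
                fout.writelines(record)

filter_footprints("sample.trimmed.fastq", "sample.footprints.fastq")
```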
Step 2B. Stage 2 data preprocessing: The human reference genome assembly, gene models, and associated gene annotation information in the form of a gene transfer format (GTF) file are obtained from the Ensembl database. The genomic reference is built by merging all human chromosomes. Sequence reads from the Stage 1 data preprocessing that failed to map to the index of contaminant sequences are aligned to the reference genome using the HISAT2 splice-aware read aligner (Version 2.1.0) 16 on stranded, unstranded, or paired-end read data as specified in config.xlsx, followed by reference-based compression (samtools 17 ). Ensembl gene models are used to guide the alignment process. For each sample, the quality of the reference alignments is evaluated using the RSeQC software (Version 3.0.0) 18 . Gene expression quantification is carried out on the gene level using the featureCounts function as implemented in the Subread software (Version 1.6.4) 19 . Reads that overlap with multiple genes or map to multiple genomic locations on the reference genome are excluded. This is followed by assessment of gene body coverage to calculate the average read coverage over reference gene sequences using RSeQC. Additionally, for both Stage 1 and Stage 2, the workflow tracks program arguments, program return codes, input and output file names, file sizes, MD5 checksums, wall clock time, CPU time, and memory consumption using the built-in Snakemake benchmarking utility.
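As a schematic of the counting rule just described (reads overlapping multiple genes, mapping to multiple locations, or unassigned are excluded), consider the following toy sketch; the real quantification is done by featureCounts on BAM alignments, not by this code, and the read/gene names are invented:

```python
from collections import Counter

# toy alignments: (read_id, list of overlapping genes) -- hypothetical data
alignments = [
    ("r1", ["GENE_A"]),
    ("r2", ["GENE_A", "GENE_B"]),  # overlaps two genes -> excluded
    ("r3", ["GENE_B"]),
    ("r4", []),                    # unassigned -> excluded
]

# count only reads uniquely assigned to exactly one gene
counts = Counter(hits[0] for _, hits in alignments if len(hits) == 1)
print(counts)   # Counter({'GENE_A': 1, 'GENE_B': 1})
```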
Step 3. Data Analysis: The RPREP/run-analysis.sh script initializes analysis datasets for the final reporting steps including distance matrix calculations for global multivariate analysis (PCA, MDS, heatmaps), fold change calculations, and differentially expressed gene (edgeR 20 ), co-expressed gene clusters (pvclust 21 ), and enriched pathway (GoSeq 22 ) identification. Interim result files generated as part of these analyses are saved in gzipped .csv format within the analysis directory.
Minimal System Requirements
For local instance storage (storage immediately accessible by the instance's operating system), a 60 GiB Elastic Block Store (EBS) volume is sufficient for storing the Ubuntu Linux operating system, user accounts, and temporary analysis space for smaller studies like the dengue virus case study. For studies with larger sample sizes and sequence coverage, we recommend adding one or more additional EBS volumes (see the information on AWS set-up on GitHub under RPREP/aws/aws_instructions.docx). We found an m5.2xlarge Elastic Compute Cloud (EC2) instance type (8 vCPUs, 32 GiB RAM) to be sufficient for processing and analyzing the dengue virus case study data. Our benchmarks showed that the memory-limiting step is the index generation process executed by HISAT2/Bowtie2 during the preprocessing steps. For the dengue virus case study, the maximum memory requirement was 20 GB, and we expect comparable requirements for studies of similar size.
Installation
We provide a pre-configured RP-REP AMI available on AWS (AMI ID: RPREP RSEQREP (Ribosome Profiling and RNA-Seq Reports) v2.1 (ami-00b92f52d763145d3)) that combines the Ubuntu Linux operating system Version 18.04.2 (long-term support) with all additional software required for RP-REP operation (RPREP/software.xlsx). We prepared a manual that provides step-by-step instructions on how to set up the AWS instance, including mounting of EBS volumes for local storage and an optional Elastic IP for machine access (RPREP/aws/aws_instructions.docx). Alternatively, we provide installation scripts that can be executed on a local Ubuntu machine (Version 18.04.2) to install the necessary dependencies (RPREP/ubuntu/install-software.sh). In both cases (AWS or local setup), prior to workflow execution, users need to pull the latest RP-REP source code from GitHub (git clone).
Configuration
RP-REP configuration is handled via the RPREP/config/config.xlsx file. The first tab allows users to specify sample metadata. Fields include subject ID, sample ID, sampling time point, a flag (is_baseline) that indicates whether a sample was collected prior to treatment, the treatment group, specimen type (e.g., B-cells, PBMCs, etc.), FASTQ sequence file location (AWS S3, local, or remote SRA location), and assay type (ribosomal_profling or rna_seq). In addition, color-coding for time points, treatment groups, and specimen types can be defined. The second tab specifies options related to the pre-processing step. This tab uses a two-column key/value pair format to define options; for example, to specify the Ensembl version, users can set the value of the ensembl_version key to 95. Other options include the type of data (stranded: yes/no), paths to all software utilities, and options for executing certain workflow processes (read distribution, FastQC). Paired-end experiments can be accommodated for each sample by specifying two input FASTQ files. The third tab allows users to customize the analysis and reporting components. Options include the specification of cut-offs to define lowly expressed genes, differentially expressed (DE) genes, and enriched pathways, as well as the distance metric for heatmap and gene clustering analysis. For further information, see the descriptions and examples for each of these options in the configuration file (RPREP/config/config.xlsx). We implemented the framework to dynamically adjust the report presentation depending on the number of subjects, time points, specimen types, and treatment group combinations. For example, Venn diagrams are shown for comparisons between up to five sets (e.g., five time points); larger sets are accommodated via UpSet plots 23 . The configuration file also allows users to subset the data by limiting the metadata file to samples, treatment groups, and time points of interest.
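To illustrate the two-column key/value convention, such a tab could be read as follows (a sketch only: the sheet index, the absence of a header row, and the pandas/openpyxl dependency are assumptions; RP-REP uses its own parser):

```python
import pandas as pd

# Load the pre-processing tab of config.xlsx as key/value pairs
# and look up a setting, e.g. the Ensembl version.
cfg = pd.read_excel("RPREP/config/config.xlsx", sheet_name=1,
                    header=None, usecols=[0, 1], names=["key", "value"])
options = dict(zip(cfg["key"], cfg["value"]))
print(options.get("ensembl_version"))   # e.g. 95
```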
Use case
To demonstrate the functionality of the RP-REP software, we analyzed a public dengue virus (DNV) data set (GEO: GSE69602) 9 . The study assessed the impact of DNV infection on viral and host transcription (via RNA-Seq) and translation (via RP) in human Huh7 cells at 6 h, 12 h, 24 h, and 40 h post-infection. Prior to running RNA-Seq and RP, Huh7 cells were fractionated to extract RNA and ribosome-bound RNA from the cytosolic and ER compartments, in order to understand how viral replication impacts each cellular fraction on the transcriptional and translational level. The same was done for mock-infected Huh7 cells to obtain results for uninfected cells. DNV is a plus-strand virus; as such, it depends on the host to replicate and translate itself.
Here, we used RP-REP to assess how the host transcriptional and translational profile changed over time following DNV infection. The mock-infection sample time point was labeled 0 h. For RNA-Seq, 2 replicates were run per time point and cellular compartment for a total of 20 samples. For RP, 4 replicates were run for a total of 40 samples. The results (RP-REP report) and the corresponding configuration file with public SRA FASTQ file references can be found as extended data in data files 1 and 2, respectively 7 . We provide the configuration file to exemplify the use case and to allow users to reproduce the case study analysis on their own RPREP/RSEQREP AWS instance or Ubuntu Linux machine.
The RP-REP report for this study includes 182 figures and 82 tables (data file 1 7 ). Differential gene expression and translation were assessed by comparing pre- vs. post-DNV-infection read counts using negative binomial models as implemented in the edgeR R package 20 . Genes with an FDR-adjusted p-value < 0.05 and a fold change ≥ 4 were selected as differentially expressed (DE) or differentially translated (DT). The high fold-change cut-off was chosen to accommodate the strong signal post-DNV infection, which required more stringent filtering of DE/DT genes. In the following sections we highlight a subset of the key findings (referenced supplemental tables and figures refer to the corresponding results in data file 1 7 ).
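To make these cut-offs concrete, the selection rule corresponds to the following sketch on a hypothetical edgeR-style results table (column names and values are illustrative; note that a 4-fold change equals 2 on the log2 scale):

```python
import pandas as pd

# Select DE/DT genes: FDR-adjusted p < 0.05 and |fold change| >= 4.
results = pd.DataFrame({
    "gene":    ["IFIT1", "GENE_B", "HACD2"],
    "log2_fc": [2.5, 0.8, -2.1],
    "fdr":     [0.001, 0.20, 0.01],
})
de = results[(results["fdr"] < 0.05) & (results["log2_fc"].abs() >= 2.0)]
print(de)   # IFIT1 and HACD2 pass; GENE_B fails both criteria
```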
Host gene transcription following DNV infection.
A noticeable increase in differential transcript abundance in the cytosol of infected Huh7 cells was observed at 24 h (213 DE genes) and 40 h (899 DE genes) (Figure 2A). In the ER, DE gene expression increased from 10, to 24, to 82, and to 786 DE genes at 6 h, 12 h, 24 h, and 40 h following DNV infection, respectively. While most of the DE genes expressed in the cytosol were up-regulated (98% at 24 h and 85% at 40 h), up-regulation was suppressed in the ER relative to the cytosol, in particular at 40 h (77% at 24 h and 54% at 40 h) (Figure 3A). At 24 h and 40 h, 37 (14%) and 285 (20%) DE genes overlapped between the two compartments. All DE gene results are presented in data file 1 Tables 5-12 7 .
Host gene translation following DNV infection.
For the cytosol fraction, 24 h following infection, 10 differentially translated (DT) genes were identified (Figure 2B). This signal increased to 267 DT genes at 40 h post-DNV infection. Most of these DT responses were increased relative to pre-infection. In the ER compartment, 42 and 1047 DT genes were detected at 24 h and 40 h post-DNV infection, respectively. The ratio of genes with increased translation was 100% for the cytosol and 100% for the ER at 24 h. While the ratio remained similar for the cytosol at 40 h (94%), it dropped to 53% in the ER compartment, indicating that protein translation in infected Huh7 cells was strongly suppressed in the ER relative to the cytosol compartment between 24 h and 40 h (Figure 3B). The fraction of shared DT responses between compartments was 9/43 (21%) of DT genes at 24 h and 192/1122 (17%) at 40 h, indicating that, in addition to suppression, fewer genes translated in the cytosol were translated in the ER between 24 h and 40 h. All DT gene results are presented in data file 1 Tables 46-49 7 . Suppressed pathways in the ER at 40 h included REACTOME METABOLISM OF LIPIDS AND LIPOPROTEINS (59 DT genes, 51 DT decreased) and REACTOME POST TRANSLATIONAL PROTEIN MODIFICATION (26 DT genes, 25 DT genes decreased). While some immune responses were still active at 40 h post DNV infection in the ER, including REACTOME INTERFERON SIGNALING (24 DT genes, 1 DT decreased), many immune-system-related genes were deprioritized (GO IMMUNE SYSTEM PROCESS had 100 DT genes, of which 49 were decreased relative to pre-infection). All pathway enrichment results based on DT genes are provided in data file 1 Tables 54-73 7 .
Time trend plots for co-translated DT genes are provided in data file 1 Figures 127-142 7 . A selection is shown in Figure 4. The first cluster highlights translational activation of a group of known interferon-inducible anti-viral genes (Figure 4A). The trend line indicated that the antiviral response was first triggered between 12 h and 24 h post-DNV infection, with an exponential increase in translation between 12 h and 40 h in both the cytosol and the ER. In contrast, translation of several genes encoding proteins involved in lipid biosynthesis (HACD2), lipid transfer between the ER and mitochondria (VPS13A), and transport (ATP13A, SLC35F5) sharply declined between 12 h and 40 h, suggesting increased competition in the ER between viral and host translation (Figure 4B). Translation in the cytosol for this cluster increased over time, potentially to account for the loss in the ER. A similar pattern for the ER compartment was seen for a group of genes related to lipid metabolism (Figure 4C).
Discussion
RNA-Seq and RP are powerful sequencing-based tools to comprehensively assess cellular responses to treatment on the transcriptional and translational level, respectively. Extracting meaning from such data is not trivial, requiring computational resources as well as programming and biostatistical skills. While a multitude of RNA-Seq and RP software tools and R packages are available 24-26 , software that fully automates all steps, starting from the raw sequencing data and ending with publication-ready tables, figures, and reports, is rare. Here we presented RP-REP, a new cloud-enabled software that enables researchers to analyze and contrast both RP and RNA-Seq data. The benefit of this software is that it facilitates reproducible research by automating key analysis steps: RP-specific data preprocessing (including RNA contaminant filtering), reference alignment, expression/translation quantification, data QC, identification of DE/DT genes, co-expressed/co-translated gene clusters, and enriched pathways, and calculation of per-gene translational efficiency. The software can be tailored to project needs and user data via a user-friendly configuration file, and its open-source nature allows for further customization. Another benefit is that the software was designed to handle large data volumes by utilizing the Snakemake workflow system for parallel data processing. In combination with the available pre-configured AWS virtual machine image (AMI), this allows for vertical scaling of processing to 96 cores (m5.24xlarge instance, the largest single instance available at the time of writing). To track computational requirements, RP-REP monitors computational metrics such as CPU and memory utilization. We used this feature to benchmark the computational performance of the RP-REP preprocessing step using the dengue virus case study as an example. To evaluate performance, we ran the same 60 samples on increasingly powerful but also more expensive AWS EC2 instance types: m5.2xlarge (8 vCPUs; 32 GiB RAM), m5.4xlarge (16 vCPUs; 64 GiB RAM), m5.8xlarge (32 vCPUs; 128 GiB RAM), and m5.16xlarge (64 vCPUs; 256 GiB RAM) (Figure 5). Doubling the computational resources (CPU cores and RAM) reduced the overall runtime by about 50% when running on an m5.4xlarge compared to an m5.2xlarge, and on an m5.8xlarge compared to an m5.4xlarge. However, we found that the m5.8xlarge (32 vCPUs; 128 GiB RAM) machine marks the ideal convergence of processing time and cost (Figure 5). To generate the summary PDF report for the 60 samples starting from the raw FASTQ files, sample preprocessing took around 9.25 hours on an m5.8xlarge machine (32 vCPUs; 128 GiB RAM), and the analysis and reporting steps took around 9.75 hours on an m5.2xlarge machine (8 vCPUs; 32 GiB RAM). Overall, the benchmark showed that the software scaled data processing well with the available CPU resources.
We demonstrated the utility of RP-REP using published RNA-Seq and RP data by Reid et al. 9 . Consistent with the authors' findings, we found that the largest changes in transcription and translation occurred between 24 h and 40 h post-DNV infection in the cytosol and ER. Reid et al. and others showed that the virus hijacks a cell's ER to prioritize viral protein synthesis over non-viral membrane proteins 9 . Consistent with these results, we found that host translation of genes related to the ER, lipid metabolism, and components of the plasma membrane was strongly suppressed in the ER but not in the cytosol compartment at 40 h post-infection relative to pre-infection. To protect the ER from overload and avoid excess numbers of unfolded proteins, cells can activate the unfolded protein response (UPR) regulatory system 9 . Our pathway enrichment analysis confirmed gene expression activation of the UPR in the cytosol 40 h after DNV infection. In addition, cellular anti-viral defense mechanisms, such as gene signatures induced following interferon signaling, were activated on the transcriptional and translational level in both cellular compartments at 40 h post-DNV infection, and interferon-signaling-related genes showed an exponential increase of translation over time in both the ER and the cytosol.
Figure 5. Dengue virus case study: computational processing benchmarks for different AWS EC2 instances. The two ribosomal pre-processing stages were run using increasingly larger AWS instances to assess scalability and to estimate runtimes.
| 6,074.6 | 2021-02-24T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Heteroepitaxy of diamond semiconductor on iridium: a review
Abstract As one of the representatives of carbon-based semiconductors, diamond is called the "Mount Everest" of electronic materials. To maximize its properties and realize its industrial applications, the fabrication of wafer-scale high-quality diamond is critical. To date, heteroepitaxy is considered a promising method for the growth of diamond wafers, with considerable development in recent years. In this review, the fundamentals of diamond heteroepitaxy are first introduced from several perspectives, including nucleation thermodynamics and kinetics, the nucleation process at the atomic level, and the interplay between the epitaxial film and substrate. Second, the bias-enhanced nucleation (BEN) method is reviewed, including the BEN setup, the BEN process window, nucleation phenomenology (mainly on iridium), the nucleation mechanism by ion bombardment, and the realization of large-scale nucleation. Third, the subsequent textured growth process is presented, as well as grain boundary annihilation and dislocation and stress reduction technologies. Fourth, the applications of diamond in electronic devices are reviewed, showing its excellent performance in future power and electronic devices. Finally, prospects in this field are proposed from several aspects.
Research background
Silicon-based semiconductors, for example Si and Ge, are currently the most mature and widely used materials, which have triggered great changes in the information age [1][2][3]. As the development of silicon-based devices approaches physical limits (device size, device performance, device power, processing cost, etc.), exploring new wide-bandgap semiconductors becomes an urgent issue [4,5].
In the post-Moore era, carbon-based semiconductors have gradually attracted attention, including graphene, carbon nanotubes, and diamond [6][7][8][9][10]. Among them, diamond was long treated as an insulator; however, it can become conductive through doping. To date, diamond has been regarded as the ultimate semiconductor owing to its ultra-wide bandgap of 5.47 eV [11].
Beyond the wide bandgap, diamond also presents many intrinsic advantages due to its unique atomic configuration, such as high hardness, high elastic modulus, a high thermal conductivity of more than 2000 W/(m K) [12][13][14], high carrier mobility (4500 cm²/(V s) for electrons; 3800 cm²/(V s) for holes) [15,16], high breakdown electric field [17], high saturated carrier drift velocity, and a low dielectric constant of 5.7 [18]. Therefore, diamond is a quite promising candidate for next-generation electronic devices in the semiconductor industry [18][19][20].
Technology bottlenecks
Diamond was first fabricated by the high-pressure high-temperature method in 1955. Although it has been known and synthesized for almost seventy years, applications of diamond are still at an early stage. At present, the thermal and electrical applications of diamond fall far short of broad market expectations, especially since extreme operating conditions generally require devices made of materials that withstand high voltage, high temperature, and radiation [21]. Owing to the absence of grain boundaries, single-crystalline diamond is superior to polycrystalline diamond not only in electrical properties but also in thermal stability [16,[22][23][24]. Therefore, the synthesis of single-crystalline diamond is essential.
Compared with Si-based and other semiconductors, wafer size and doping techniques are the two bottlenecks restricting diamond applications. On one hand, Si wafers have already been scaled up to 12 in. with very mature processes, while 4-6 in. wafers are also realizable for other compound semiconductors (Ga2O3, SiC, GaN, etc.) [25][26][27][28]. In contrast, the maximum reported size of diamond is only 1-3.5 in. [29][30][31]. On the other hand, Si wafer conductivity can easily be tuned by n-type or p-type doping, which provides the foundation for electronic applications (detectors, field-effect transistors, diodes, nuclear batteries, etc.) [32]. For diamond, by comparison, it is still difficult to realize good conductivity by n-type doping, whereas p-type bulk doping by boron is very well controlled over a wide range of concentrations [33], and the formation of a two-dimensional hole gas (2DHG) on hydrogen-terminated (C-H) diamond provides a second method to improve the p-type surface conductivity [34][35][36][37][38]. The hydrogen termination induces a conductive channel associated with interface charge at the surface; as a result, C-H diamond with p-channel conduction is effectively obtained. Positively charged hydrogen atoms of the surface C-H dipoles attract negatively charged adsorbates from the atmosphere, which adsorb at the diamond surface. The resulting negative surface charge sheet induces a 2DHG layer with a high hole carrier density of around 10¹³ cm⁻². Once the diamond surface is terminated by oxygen, the surface conduction originating from the 2DHG disappears [36,39,40]. Currently, many electronic devices have been fabricated, including detectors, field-effect transistors, diodes, and nuclear batteries [41][42][43][44][45][46].
Synthesis of high-quality diamond wafers
Epitaxy on a single-crystal substrate provides a feasible route to precisely controlling the grain orientation of the epitaxial material and thus achieving large-area single-crystal growth. There are two basic approaches: homoepitaxy and heteroepitaxy [3,47]. Homoepitaxy [48][49][50] deposits diamond film on high-quality single-crystal diamond seeds in chemical vapor deposition (CVD) reactors, including hot-filament CVD [51], microwave-plasma CVD (MPCVD) [52][53][54], and DC arc plasma jet CVD [55]. Though the gas excitation and activation methods differ slightly, the growth processes are similar [56]. To enlarge the diamond size, three-dimensional growth [53,57] and mosaic growth [51,[58][59][60][61][62][63][64][65] are generally adopted. Nad et al. [57] demonstrated MPCVD growth of single-crystalline diamond substrates with expanding, polycrystalline-rim-free surfaces; the lateral single-crystal surface area increased by up to a factor of two relative to the initial seed surface in a single run. The lift-off method can be used to separate a freestanding single-crystal CVD diamond slice from the seed [66][67][68][69][70][71]. Yamada et al. [63,64] reported the mosaic growth of a 2-in. wafer (40 × 60 mm²), but the high cost and the unavoidable interfaces greatly hinder the development of this method.
Heteroepitaxy refers to growing an epitaxial layer on a dissimilar single-crystal substrate. It shows great potential for preparing high-quality diamond wafers, and to date diamond wafers up to 3.5 in. have been fabricated by heteroepitaxy [29]. More details on the heteroepitaxial growth of diamond by CVD methods, as well as its current status and strategies, are discussed in this review.
Thermodynamics and kinetics
Classical nucleation theory is the most mature theoretical model for understanding the nucleation of a new phase. Besides classical nucleation theory, important non-classical approaches such as density-functional theory (DFT) and diffuse-interface theory have also been proposed in recent years [72,73].
According to classical nucleation theory, nucleation on a hetero-substrate is called heterogeneous nucleation. As illustrated in Figure 1 (right), the rate of heterogeneous nucleation is given by Equation (1) [74,75], where θ is the contact angle between the nucleus and substrate, N is the number of atoms per unit volume in the gas, k_B is the Boltzmann constant, h is the Planck constant, R is the molar gas constant, P is the actual vapor pressure, P_0 is the standard vapor pressure, σ_f-s is the interface energy between the film and substrate, and T denotes the substrate temperature; the prefactor I_0 and the nucleation barrier ΔG_n follow from Equations (2)-(4). The nucleation rate thus depends on the substrate temperature T, the reactive atom number density (gas concentration) N, the gas pressure P, the contact angle θ, the volume per film atom ν_f, and the interface energy between reactive gas and film σ_f-g, where G_f and G_g are the free energies per film atom and per gas atom, respectively. Decreasing the contact angle θ, or increasing the substrate temperature T, gas concentration N, and gas pressure P, therefore greatly enhances the nucleation rate I_n. This implies a process window for the parameters mentioned above, which is confirmed in Section 3.2.
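Since Equations (1)-(4) are not reproduced above, the sketch below gives the textbook classical-nucleation-theory forms consistent with the quantities just defined; the exact prefactors used in [74,75] may differ.

```latex
% Textbook CNT forms consistent with the definitions in the text;
% exact prefactors in [74,75] may differ.
I_n = I_0 \exp\!\left(-\frac{\Delta G_n}{k_B T}\right), \qquad
\Delta G_n = \frac{16\pi\,\sigma_{f\text{-}g}^{3}\,\nu_f^{2}}
                  {3\,(G_g - G_f)^{2}}\, f(\theta), \qquad
f(\theta) = \frac{2 - 3\cos\theta + \cos^{3}\theta}{4}
```

Here the supersaturation enters through G_g − G_f = k_B T ln(P/P_0) per atom (R T ln(P/P_0) per mole), f(θ) is the heterogeneous wetting factor that vanishes as θ → 0, and I_0 is a kinetic prefactor proportional to N k_B T/h, which reproduces the parameter dependences discussed in the text.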
Atomic-scale process
Classical nucleation theory (Section 2.1) describes well the mechanisms at work in PVD methods, but it cannot fully account for the interactions between the substrate and reactive species such as radicals from the CVD plasma. At the atomic scale, heteroepitaxy using CVD (or physical vapor deposition) [76][77][78][79] generally involves diffusion, adsorption and desorption of species, species coalescence, cluster formation and growth, the creation of preferential nucleation sites on the substrate surface, the disruption of small clusters, increases in the effective adatom mobility or migration rate, and increases in the substrate temperature (Figure 1).
Chen et al. [76,77] derived the nucleation rate from the gas phase based on the atomic processes illustrated above. Their model considers the dynamic equilibrium between adsorption and desorption of species and the kinetics of atomic diffusion on the substrate surface. When cluster formation and growth from a single atom is considered, the nucleation rate from the gas can be calculated from Equation (5), where C is approximately constant over a reasonable range of P and T, J is the impinging flux rate, m is the atomic mass, ν_m is the film atom volume, E_a is the adsorption energy, and E_d is the activation energy of surface diffusion; the constants K_1 and K_2 follow from Equations (6) and (7). According to these equations, the nucleation rate can be enhanced by increasing the atom impinging rate J, the surface adsorption energy E_a, and the gas-film interface energy σ_f-g, and by decreasing the activation energy of surface diffusion E_d. Under ion bombardment in an electric field, the impinging rate and the surface adsorption energy are further enhanced, and the nucleation rate is thus improved.
Moreover, according to the Arrhenius equation, the nucleation rate is also related to the temperature and the nucleation activation energy; the energy barrier therefore has to be overcome either by increasing the temperature or by supplying the species with energy through other pathways (such as the kinetic energy provided by the ion bombardment during BEN). The analysis of how diamond nucleates epitaxially in Section 3.3 also confirms this point.
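A minimal numerical sketch of this Arrhenius picture follows. The barrier, ion-energy, and temperature values are illustrative assumptions chosen only to show how the kinetic energy delivered during BEN can substitute for a higher substrate temperature.

```python
import math

# Arrhenius-type nucleation rate: I = I0 * exp(-E_barrier / (kB * T)).
# All numbers below are illustrative assumptions, not values from the text.
kB = 8.617e-5            # Boltzmann constant, eV/K
I0 = 1.0                 # prefactor (arbitrary units)
E_barrier = 4.0          # assumed nucleation barrier, eV
E_ion = 1.5              # assumed effective barrier reduction supplied by
                         # ion bombardment during BEN, eV

def rate(T, barrier):
    return I0 * math.exp(-barrier / (kB * T))

T = 1100.0  # assumed BEN substrate temperature, K
print(f"no bias : {rate(T, E_barrier):.3e}")
print(f"with BEN: {rate(T, E_barrier - E_ion):.3e}")
# The ion kinetic energy lowers the effective barrier, boosting the rate
# by exp(E_ion / (kB T)) ~ 7e6 at 1100 K without raising the temperature.
```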
How these atomic-scale phenomena affect nucleation and growth was studied by Matthews [80]. In the proposed kinetic theory of thin-film deposition, if the deposition rate of the species and the number of active nucleation sites are assumed constant, the diffusion of adsorbed species on the substrate becomes the predominant factor. In this case, the surface temperature plays an important role in determining whether adatoms can reach the lowest-free-energy sites and arrange into the epitaxial orientation. Generally, a combination of a proper substrate temperature and an appropriate impinging flux is necessary for epitaxial nucleation.
Diamond heteroepitaxial nucleation usually occurs under an applied electric field in a CVD setup, a procedure called bias-enhanced nucleation (BEN) [81,82]. The whole stage is thus an ion-bombardment process involving high-energy atoms/ions, and the kinetic energies of the bombarding species and their flux toward the substrate are the key parameters for nucleation.
Inter-atomic interplay
Owing to the strong interaction between the epitaxial film and the substrate, chemical bonds form at their interface; this is known as conventional heteroepitaxy. In contrast, with the growth of two-dimensional materials attracting increasing attention, van der Waals epitaxy [83,84] has been proposed, in which the epitaxial layer interacts with the substrate only through weak van der Waals forces, as for graphene grown on the Cu(111) surface [19,20].
The formation of chemical bonds is attributed to the strong interaction between the dangling bonds of the substrate surface and those of the epitaxial layer, which is very common in the heteroepitaxy of II-VI, III-V, and IV-IV semiconductors [3]. The strong chemical bonding at the interface forces each atom of the epitaxial layer to match the substrate atoms: this strong bonding is responsible for the crystalline ordering of the epitaxial layer, inducing it to mimic the crystalline symmetry of the substrate [85][86][87]. Generally, these chemical bonds are strong enough to modify the lattice parameters of the epitaxial layer in the interface region. As a result, crystal deformation is frequently observed at the beginning of epitaxial growth, with strain energy stored in the film. The strain energy accumulates as the epitaxial layer thickens. According to the theory of Matthews and Blakeslee [88], once the film thickness exceeds a critical value, defects such as misfit dislocations form readily to partially relax the strain energy [89].
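For reference, the critical thickness h_c beyond which misfit dislocations become favorable is commonly written in the implicit Matthews-Blakeslee form sketched below (a generic isotropic-elasticity expression with the angle factors omitted, not a diamond-specific result from [88,89]); here f is the lattice misfit, b the Burgers vector magnitude, and ν the Poisson ratio.

```latex
% Generic Matthews--Blakeslee critical-thickness relation (isotropic
% elasticity, angle factors omitted); solved implicitly for h_c.
h_c \simeq \frac{b}{8\pi f\,(1+\nu)}
           \left[\ln\!\left(\frac{h_c}{b}\right) + 1\right]
```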
In diamond heteroepitaxy, chemical bonds form between diamond and the hetero-substrate via the dangling bonds of the clean substrate surface and the diamond layer. Arnault et al. [90] reported a diamond nanocrystal with a lateral size of 7 nm and a height of 2.7 nm formed on a single-crystalline Ir surface. The nanocrystal showed lattice spacings of d_220 = 0.136 nm and d_002 = 0.192 nm, giving a unit-cell parameter of 0.381 nm, very close to that of iridium (0.383 nm). Wang et al. [91] also observed that the diamond (111) interplanar spacing decreased from 0.222 to 0.206 nm with increasing distance from the interface, meaning that the strain is gradually relaxed as the diamond film thickens.
The distribution of the charge density, and of the charge-density difference, between Ir and carbon atoms clearly reveals the formation of strong C-Ir ionic and covalent bonds [92].
MPCVD
According to Wang et al. [101] and Chavanne et al. [114], the substrate holder is held negative and electrically insulated from the reactor walls, which are grounded; during the BEN step, a negative bias voltage is applied between the sample holder and the walls. Schreck and Stritzker [115] described a second setup in which a circular electrode inserted into the plasma, about 20 mm above the substrate, is positively biased while the substrate remains grounded. Delchevalrie et al. [116] reported a similar configuration in which a positive counter-electrode is mounted on a translator and electrically insulated from the reactor wall while the substrate holder is kept grounded. Yaita et al. [106][107][108] developed an antenna-edge CVD setup in which the microwave electric field is concentrated at the tip of an antenna (which also acts as an electrode in the BEN process), greatly increasing the plasma density. As a result, the reactive gas is decomposed more effectively, the amount of diamond nucleation precursors increases, and the diamond crystal quality and growth rate improve. These configurations are illustrated in Figure 2(a-c).
DCCVD
Besides MPCVD setups, Gsell et al. [117] described a DC-discharge setup in their work (Figure 2(d)): the Ir substrate surface was irradiated with ions produced by a DC discharge in a CH₄/H₂ gas mixture, with a copper cylinder 5 cm in diameter placed 2 mm above the substrate and a bias voltage applied. Ohtsuka et al. [118] and Sawabe et al. [105] devised a three-electrode DCCVD setup for ion-irradiation pretreatment: a ring electrode sits only 1 mm above the substrate, with a cylindrical electrode 2.5 cm away. By controlling the potentials between the ring electrode, the cylindrical electrode, and the substrate, either growth or ion irradiation can be realized.
HFCVD
In a hot-filament CVD (HFCVD) setup, Wang et al. [119] set the filament-substrate distance to about 8 mm. A negative bias relative to the filament is applied to the substrate through a graphite holder, with the resistance of the bias circuit exceeding 10 MΩ to avoid current leakage. Janischowsky et al. [120] added a tungsten grid electrode for extracting and accelerating electrons from the hot filaments, placed 6 mm behind the filaments at a potential of +60 V with respect to them; this generates a glow discharge between the filaments and the grid. Positive ions within the discharge are accelerated through the filament openings toward the substrate, which is held at a negative potential of typically -140 V with respect to the filaments. Arnault et al. [121] applied a negative bias voltage between an anode grid located above the upper filaments and a cathode grid placed between the two pairs of tungsten filaments; positive ions are then further accelerated by an extraction voltage between the cathode and the sample surface.
Bias-process window
Bias process parameters include the reactor pressure, reactive gas content, bias voltage, bias time, substrate temperature, etc. There is a parameter space in which high-density epitaxial nucleation can be attained, called the bias-process window. It should be noted, however, that the bias process is reactor-dependent: the process window differs between reactors and research teams.
Schreck et al. [109] and Thürer et al. [110] proposed this concept of a process window when studying BEN on Si (Figure 3(a)). Heteroepitaxial orientation is achieved over a wide range of parameters provided that the bias time lies within a definite interval (time window). The width of the time window, and the bias time for optimal azimuthal alignment, decrease strongly as the absolute value of the bias voltage increases. Yaita et al. [106] found that the bias current decreased at the beginning of the BEN process and then increased during diamond formation, as shown in Figure 3(b). An increase in the bias current of 10% leads to epitaxial diamond nucleation with a high nucleation density and marks the optimum condition for diamond nucleation on 3C-SiC in terms of density and epitaxy. This technique exploits the difference between the secondary-electron emission coefficient of diamond and that of the material underneath. Regmi et al. [122] presented a comprehensive study of the BEN parameter space for high-density epitaxial nucleation of diamond on Ir substrates (Figure 3(c,d)). A nucleation density exceeding 10¹¹ cm⁻² occurs only within a narrow bias voltage range of 125-175 V and a narrow CH₄ content range of 1.5-3%; at bias voltages and methane concentrations outside these windows, the epitaxial diamond nucleation density falls abruptly to near zero.
Figure 3. (a) Time window for Si substrates [109]; (b) time-current-voltage process window for 3C-SiC substrates [106]; (c) bias-voltage window for Ir substrates [122]; (d) methane-content window for Ir substrates [122]. Reprinted from Journal of Applied Physics and Diamond and Related Materials with permission from AIP Publishing and Elsevier, respectively.
Figure 4. (a) Domain formation on Ir(001) after BEN [127]. (b) Fractural domains and the modified Ir surface observed after BEN [129]. (c, d) Fractural domains and the respective diamond growth for the same region [129]. Reprinted from Diamond and Related Materials with permission from Elsevier.
Epitaxial nucleation phenomenology
Considering that the largest single-crystal diamond wafers have been obtained on Ir substrates, and given the scope of this review, the epitaxial nucleation phenomena and processes on Ir substrates are examined in detail in this part. Because large-area single-crystalline iridium is not available, heteroepitaxy of iridium on foreign substrates is necessary. Epitaxial Ir films are commonly deposited on MgO [123], SrTiO₃ [91], Al₂O₃ [87], KTaO₃ [124], Pd/Al₂O₃ [125], YSZ/Si [126], and SrTiO₃/Si [90] substrates by e-beam evaporation, pulsed laser deposition, molecular beam epitaxy, and magnetron sputtering. The synthesis of 4-in. epitaxial Ir films with good crystallinity on foreign substrates is now no longer a difficult task.
Domain formation and surface modification are the two most common phenomena. Schreck et al. [117,127] investigated domain formation on Ir(001) surfaces after BEN (Figure 4(a)). Bright regions (called domains) with lateral dimensions up to a few micrometers are observed by scanning electron microscopy (SEM). Two round domains can merge when they meet, with no obvious boundaries observed. AFM morphologies [128] show that the surface inside a domain is about 1 nm lower than that outside. In Figure 4(b), Vaissiere et al. [129] observed different nanostructures underneath domains, including Ir balls, furrows, and ridges. In Figure 4(c,d), when a growth step is applied after BEN, domains develop into islands of the same shape, composed of epitaxial diamond with a high density of oriented grains. Surface modification refers to the Ir surface being modified by BEN and covered by microstructures (furrows and ridges) distributed along preferential orientations, as seen in Figure 4(b,c). In situ characterization of domains is effective for understanding nucleation and can also be used to adjust the BEN parameters. Delchevalrie et al. [116] developed a spectroscopic ellipsometry method that may serve as a sensitive tool to monitor domain formation on Ir substrates, owing to its sensitivity to optical-index differences at interfaces.
To determine whether diamond nuclei exist within domains, and to further study domain structure and composition, a variety of characterization methods have been applied. Some methods fail to confirm the existence of diamond nuclei because of their limited resolution. For instance, high-resolution transmission electron microscopy (HRTEM) has been used to examine the interface between the Ir substrate and the domain layer, finding no diamond grains in the domain layer at the interface [130] (though it should be noted that nanodiamond has been observed by HRTEM by Arnault et al. [90]). Electron backscatter diffraction patterns inside and outside domains show no significant difference [127]. In reflection high-energy electron diffraction (RHEED) and low-energy electron diffraction, domains likewise give no sign of crystalline diamond [131]. Some methods can distinguish the inside of domains from the outside: friction force-load curves measured inside and outside a domain by lateral force microscopy both differ from the linear trend of the Ir substrate reference, a difference that may be due to different plastic deformation of the metal layer under low load [132]. Meanwhile, some methods directly confirm the existence of diamond nuclei inside domains. In Figure 5(a,b), C KLL and Ir MNN spectral lines measured by spatially resolved Auger electron spectroscopy (SR-AES) show a diamond characteristic peak at 262 eV inside domains and a weak characteristic peak of graphitic structure outside domains. SR-AES also shows that the carbon-phase density inside domains is 30-50% higher than outside; the thicknesses of the surface layer inside and outside a domain are calculated to be 1.73 ± 0.16 and 2.08 ± 0.07 nm, respectively [132]. As shown in Figure 5(c,d), X-ray photoelectron diffraction (XPD) reveals a clear C 1s pattern within domains, indicating that the carbon atoms there are arranged in the structure of crystalline diamond. Finally, Bernhard et al. [133] used X-ray absorption spectroscopy to study the composition of domains and found that most of the carbon atoms exist in the diamond structure. However, the true structure of the BEN layer is more complex than perfect diamond crystallites simply embedded in an amorphous matrix [129,131].
Figure 5. Small-spot AES spectra of the carbon KLL and Ir MNN lines taken (a-1, a-2) outside and (b-1, b-2) inside the domain areas [132]. (c-1, c-2) XPD images of an Ir reference and a diamond reference [131]. (d-1, d-2) XPD images of the surface with domains after BEN [131]. Reprinted from Diamond and Related Materials with permission from Elsevier.
Figure 6 shows SEM, AFM, RHEED, and TEM characterization of diamond grains for different growth times after BEN [128]. With increasing growth time, diamond grains grow out of the domains (Figure 6(a,b)), and the height difference between the inside and outside of a domain gradually increases from 1.5 nm (5 s) to 6 nm (60 s) (Figure 6(c,d)). This means that the diamond grains in the domain begin to grow vertically, or that some etching occurs in the area outside the domain. Once growth starts, the Ir substrate surface gradually transitions from a pure Ir structure to a mixed structure of Ir and diamond (Figure 6(e)).
The cross-sectional TEM results (Figure 6(f)) show that, as growth progresses, the amorphous carbon layer on the Ir substrate surface within domains disappears completely, while diamond grains appear within the domains and grow both laterally and vertically.
Epitaxial nucleation mechanism
When considering diamond nucleation and growth during CVD, the phase transformation of graphite into diamond is usually invoked. Since graphite is more stable than diamond, a large amount of energy must be transferred to the system to induce lattice rehybridization and shift the thermodynamic balance from sp²-graphite to sp³-diamond. Moreover, the kinetic barrier hindering the phase transformation has to be overcome; even the reverse transformation of diamond into the thermodynamically favorable graphite usually takes a very long time to occur spontaneously [134]. A fundamental understanding of the nucleation, thermodynamics, and kinetics of diamond phase transitions is therefore needed to exploit gas-phase-to-diamond transitions. Yugo et al. [135] first applied a bias voltage to enhance nucleation, and several questions have to be addressed to understand epitaxial nucleation under ion bombardment: (1) Why is diamond the more stable phase relative to graphite in the CVD atmosphere? (2) How is diamond nucleation enhanced during BEN? (3) How is the epitaxial orientation determined?
For question (1), the role of atomic hydrogen in the CVD atmosphere has been well established. A high content of atomic hydrogen is crucial for several key processes [136], with CH₃ considered the dominant reactive hydrocarbon radical. It is hydrogen that makes diamond the more stable phase relative to graphite [20,[136][137][138]. Although diamond nucleation and growth are thus thermodynamically possible, the kinetic barrier must also be overcome for nucleation to occur. Because of the large surface-energy difference between diamond and a heterogeneous substrate, the nucleation density is so low that enhancement methods must be adopted; this is the core question.
For question (2), the key lies in the role of ion bombardment, highlighted here for diamond epitaxial nucleation on Ir substrates. When an electric field is applied in the plasma, the charged species (ions and electrons) move directionally: positively charged ions bombard the substrate while negatively charged species move toward the anode. This motion of positive ions toward the negatively biased substrate is called ion bombardment [29]. Sub-plantation, preferential etching, and secondary-electron emission are its three main consequences. The widely used sub-plantation model was proposed by Lifshitz et al. [139,140], who carefully studied film growth from hyperthermal species. The model involves a shallow sub-plantation process, energy loss, preferential displacement of atoms with low displacement energy (leaving atoms with high displacement energy intact), sputtering of substrate material, and inclusion of a new phase through the incorporation of a high density of interstitials in a host matrix. More specifically [141][142][143], a dense amorphous hydrogenated carbon (a-C:H) layer forms first, and pure sp³ carbon clusters containing dozens of atoms then precipitate spontaneously in the a-C:H layer, driven by the "thermal spike" of the impinging energetic species [144,145]. By converting amorphous carbon to diamond at the amorphous-matrix-diamond interface, the diamond clusters grow to a few nanometers. This transition is driven by a "preferential displacement" mechanism caused mainly by high-energy hydrogen atoms. On this basis, Schreck et al. [29] proposed the ion-bombardment-induced buried lateral growth mechanism on Ir substrates (Figure 7). Their model comprises at least five stages: Figure 7(a-1-a-5) represents the exposed Ir surface before BEN, a-C:H layer formation, primary nucleation from the a-C:H layer at the interface (domain formation), abundant secondary nucleation from the highly defective crystalline matrix (domain spreading), and nuclei growth after BEN. By comparing the height difference across a domain (Figure 7(b-1,b-2)), the ion-bombardment-induced buried lateral growth mechanism is described in detail in Figure 7(c): the a-C:H layer forms first (region III); a highly defective matrix containing a large fraction of sp³ C-C bonds then develops (region II); and diamond nuclei with crystalline structure finally form from this defective matrix (region I). Preferential etching [146,147] was once thought to be the key mechanism of epitaxial nucleation, but it merely reflects the different abilities of epitaxial and non-epitaxial nuclei to resist ion bombardment. Secondary-electron emission [113,115] refers to diamond having a higher electron-emission ability than the substrate, so the bias current, which tracks the number of moving species, increases once diamond is deposited; this was shown in Section 3.2 but is not considered central to diamond epitaxial nucleation. Overall, as research has progressed, the sub-plantation model has become the most widely accepted.
Moreover, the ion bombardment further ionizes and dissociates the reactive gas [148][149][150] and raises the substrate temperature [91]. According to classical nucleation theory, the increased species concentration, the higher substrate temperature, and the substrate-surface roughening by furrows and ridges can all efficiently enhance the nucleation rate and hence the nucleation density.
For question (3), what determines heteroepitaxy on a foreign substrate remains an open question. In recent years, with the development of 2D materials, many studies have focused on the epitaxy of 2D materials. Dong et al. [151] proposed a general theoretical model for the epitaxial growth of a 2D material on an arbitrary substrate. DFT calculations show that the interplay between the 2D material and the substrate plays the critical role in its epitaxial growth: the binding energy between the 2D material and the substrate is strongest when the material is aligned epitaxially, and the difference between the maximum and minimum binding energies is large enough that the 2D material grows in a well-aligned configuration. These studies suggest a way to explore the epitaxial mechanism of 3D materials, including diamond: it is inferred that only when diamond grows epitaxially is the binding energy between diamond and a foreign substrate largest, making the subsequent growth the most stable.
Large-scale nucleation
There are two approaches to realizing large-scale nucleation. One is to develop high-power CVD setups with large-area plasmas; essentially every diamond research group has been trying to enlarge the plasma area (for instance, by designing 915 MHz MPCVD reactors instead of 2.45 GHz ones) [29,[152][153][154]. A second method is to adjust the plasma shape in situ by changing the electromagnetic field in the chamber. For example, Yoshikawa et al. [155] introduced a simple and effective method to extend the BEN area for heteroepitaxial diamond growth using a metal-covered half-ring Si plate placed just outside the substrate; this method is claimed to enlarge the nucleation region up to 2 in.
Figure 7. (a-1-a-5) Nucleation stages: the surface before BEN; amorphous hydrogenated carbon layer formation; primary nucleation from the a-C:H layer at the interface; highly defective crystalline matrix formation with abundant secondary nucleation; and nuclei growth after BEN [29]. (b) AFM image of a round domain (b-1) and the line scan across the domain boundary (b-2) [29]. (c) Schematic of the ion-bombardment-induced buried lateral growth [29].
Growth of individual grain
When a nucleus exceeding the critical size forms on the substrate surface, it continues to grow. The growth of any grain follows a fundamental principle: crystal planes with lower surface energy are more stable. Thus, if growth is not constrained by the substrate, grains will expose the crystal planes with the lowest surface energy [156]. For heteroepitaxy on foreign substrates, however, the growth can also be influenced by the substrate.
Growth parameters α, β, and γ, based on the Bravais-Friedel-Donnay-Harker model [157] (in which the growth rate of a crystal plane is inversely proportional to its interplanar spacing), were proposed by Silva et al. [48,158] to relate the growth velocities of the {100}, {110}, and {113} faces to that of {111}; the definitions are sketched after this paragraph. Figure 8 shows the grain shape for α, β, and γ in different regimes: Figure 8(a,b) shows that the facet-coexistence domains are bounded by topological boundaries that depend on the α/β and α/γ parameters, respectively. When β or γ is very small, the grain shape depends mainly on α: for α smaller than 1 the grain is a cube; for α larger than 3 it is an octahedron; for α between 1 and 3 the shape is a cuboctahedron. To obtain (001)-preferred diamond grains, Chavanne et al. [159] proposed that α close to 3 promotes faster growth along the <001> directions, whereas α below 1.5 leads to preferential growth along <111>. In the former case, pyramidal or octahedral grains are attained while non-epitaxial grains are also overgrown; in the latter, the epitaxial grains grow and smooth (001) surfaces are observed. CVD parameters such as methane fraction and substrate temperature can be adjusted to reach either α regime. Note, however, that the model above does not take the effect of the substrate on grain growth into account [160].
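The growth parameters referenced above are conventionally defined through normalized growth-velocity ratios, as sketched below; the α definition is the standard one, while the exact geometric prefactors of β and γ should be taken from [48,158].

```latex
% Conventional definition of the alpha growth parameter for CVD diamond;
% beta and gamma are the analogous normalized velocity ratios for the
% {110} and {113} faces (exact geometric prefactors as in [48,158]).
\alpha = \sqrt{3}\,\frac{v_{100}}{v_{111}}, \qquad
\beta \propto \frac{v_{110}}{v_{111}}, \qquad
\gamma \propto \frac{v_{113}}{v_{111}}
```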
Finally, the growth process has also been studied by in situ reflectance interferometry. Aida et al. [161] found differences in substrate temperature and reflectance profile between polycrystalline and heteroepitaxial diamond films grown on Ir/MgO substrates. In contrast with polycrystalline growth, heteroepitaxial growth produces a dynamic interference pattern in the reflectance profile. The oscillations in the reflectance provide real-time information on the grown thickness and the growth rate, so this method can be used to monitor the normal diamond growth process.
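A hedged sketch of the underlying relation: at near-normal incidence, each full reflectance oscillation corresponds to a thickness increment of λ/(2n). The wavelength and oscillation period below are assumed example values; n ≈ 2.4 is the visible-range refractive index of diamond.

```python
# Growth rate from in situ reflectance interferometry (normal incidence).
# One full oscillation of the reflectance corresponds to a thickness
# increase of lambda / (2 n). Wavelength and period are assumed examples.
wavelength_nm = 635.0      # assumed probe laser wavelength, nm
n_diamond = 2.4            # refractive index of diamond (visible range)
period_s = 600.0           # assumed measured oscillation period, s

d_per_period_nm = wavelength_nm / (2.0 * n_diamond)   # ~132 nm per fringe
growth_rate_um_h = d_per_period_nm * 1e-3 * 3600.0 / period_s

print(f"thickness per fringe: {d_per_period_nm:.0f} nm")
print(f"growth rate         : {growth_rate_um_h:.2f} um/h")
```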
Merging of neighboring grains
Once grains meet, the situation changes somewhat and the interaction between grains has to be considered. This merging process can be described from at least two perspectives [162][163][164][165][166].
The elastic energies associated with complete low-angle grain boundaries and with partial wedge disclinations were evaluated by Michler et al. [165] and Schreck et al. [162]. Equation (7) was derived to describe the relationship between the grain size and the twist angle, where b is the Burgers vector (b = a/2 <110>, a = 0.35 nm), R is the grain radius, and ω_crit is the critical grain-boundary angle.
In their view, the grain size and the twist angle play the key roles when grains meet: only if the grain and the angle are small enough can the low-angle grain boundary be replaced by a disclination upon merging. A very high epitaxial nucleation density is therefore necessary; otherwise a mosaic crystal (diamond on Si substrates) forms instead of a single crystal (diamond on Ir substrates). Figure 9 shows a schematic of grains merging and disclination formation when they meet, where the blue, red, orange, and yellow balls represent Ir, H, C, and selected atoms, respectively. Before two diamond grains meet (a), atoms are added as the grains grow laterally (b). If neither grain has tilt or twist, they merge into a larger grain (c). If both grains have a small twist or tilt angle, a disclination can form instead of a small-angle grain boundary (d, e). In contrast, if the twist or tilt angle is too large, a grain boundary is observed.
Meanwhile, DFT has also been used to describe the merging process. In contrast to diamond, the merging of 2D graphene domains has been studied systematically [164,166]. Dong et al. [164] calculated the formation energy for different grain-boundary angles when studying the seamless stitching of graphene domains; in their view, the rotation and merging of graphene domains minimizes the formation energy and contributes to the formation of a larger domain. Although that work concerns 2D graphene growth on a liquid Cu substrate, the same principle may apply to the merging of diamond grains.
Origin
When diamond is grown on foreign substrates by heteroepitaxy, two types of dislocations are introduced: misfit dislocations and threading dislocations [89,167,168]. A threading dislocation lies at a particular angle with respect to the growth interface and extends into the diamond epitaxial layer; it has two sources. One is inherited from the heterogeneous substrate and propagates through the diamond epitaxial layer, terminating at the diamond surface; the other is generated by the movement of misfit dislocations. Misfit and threading dislocations are therefore both inevitable during growth of a heteroepitaxial diamond film. Unlike misfit dislocations, threading dislocations do not reduce the strain; however, they are generally believed to be the primary factor degrading the quality of diamond epitaxial films, and their bending also changes the intrinsic stress [169].
The residual stress of non-freestanding diamond films is the sum of an external stress and an intrinsic stress [165,170]. The former is caused by the difference in coefficients of thermal expansion (CTE) between the film and the substrate and builds up upon cooling from the deposition temperature to room temperature [171][172][173]; it is therefore also called the thermal stress. The latter is the cumulative result of chemical and microstructural defects incorporated during deposition [126,[174][175][176], and correlates with process parameters such as the substrate temperature or the methane content in the gas phase. The microstructure, which involves coherency strain, surface-energy effects, and disclinations, can contribute substantially to the stress in thin layers [172]. Moreover, threading dislocations can also contribute to the stress: when dislocations move out of the glide plane spanned by the line direction and the Burgers vector, an effective climb occurs, which increases the number of lattice cells per surface area during growth and consequently generates stress [175,176].
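The thermal-stress contribution can be estimated with the standard biaxial formula σ_th ≈ [E/(1−ν)](α_film − α_sub)ΔT. The sketch below uses assumed, literature-typical values for a diamond film on a foreign substrate; none of these numbers come from the cited works.

```python
# Biaxial thermal stress of a diamond film on a foreign substrate after
# cooling from the deposition temperature. All material values are
# assumed, literature-typical numbers, not taken from the cited papers.
E = 1143e9              # Young's modulus of diamond, Pa (assumed)
nu = 0.07               # Poisson's ratio of diamond (assumed)
alpha_film = 1.0e-6     # mean CTE of diamond over the range, 1/K (assumed)
alpha_sub = 5.5e-6      # mean CTE of the substrate stack, 1/K (assumed)
dT = 700.0 - 25.0       # cooling from ~700 C to room temperature, K

sigma = E / (1.0 - nu) * (alpha_film - alpha_sub) * dT
print(f"thermal stress ~ {sigma / 1e9:.1f} GPa (negative = compressive)")
```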
Reduction technologies
Reducing dislocations and stress is an important issue in heteroepitaxial diamond growth. Wang et al. [89] summarized the dislocation-reduction technologies from several perspectives. Generally, the threading dislocation density can be reduced from above 10¹⁰ cm⁻² to below 10⁸ cm⁻², with the Raman linewidth decreasing from >10 cm⁻¹ to 1.86 cm⁻¹, as the film thickness increases to 1 mm; this is mainly due to enhanced reactions between neighboring dislocations [177]. Adopting an off-axis substrate can also reduce the dislocation density by changing the dislocation propagation direction [30,175,178]. Epitaxial lateral growth is a very useful method to control the dislocation density, including conventional epitaxial lateral growth [160,[179][180][181], pendeo-epitaxial lateral growth [180,181], and the patterned nucleation growth method [31,182,183]. Typically, Mehmel et al. [184] found that micrometric laser-pierced hole arrays reduce the dislocation density by two orders of magnitude, to 6 × 10⁵ cm⁻², in the hole regions where lateral growth occurs; this value is comparable to that typically measured for commercial type-Ib single-crystal diamonds. Kim et al. [31] adopted a similar method with microneedle formation, achieving a dislocation density of only 1.4 × 10⁷ cm⁻² for a 1-in. diamond wafer 500-600 μm thick.
Besides the classical methods above, several novel methods have been reported to effectively reduce the dislocation density. Ohmagari et al. [185,186] devised a method to control dislocation propagation by incorporating W in a hot-filament CVD setup, demonstrating a large reduction in dislocation density from 10⁶ cm⁻² to 10⁴ cm⁻². One possible explanation is that dislocation propagation is suppressed by W impurities inadvertently supplied by the heated filament wire: where a dislocation intersects a W impurity, the local compressive and tensile strain fields may cancel each other, in which case annihilation of the dislocation is energetically favorable.
Reducing the intrinsic stress enables continuous growth of thick diamond films without cracking. Kim et al. [30] proposed that an off-axis substrate allows step-flow growth of diamond films, releasing tensile stress in the diamond layer; consequently, the 2-in. diamond layer delaminates naturally from the substrate without cracking. For diamond grown on an Ir/Al₂O₃(11-20) substrate misoriented by 7° toward the [1-100] direction, the widths of the (004) and (311) X-ray rocking curves were 98.35 and 175.3 arcsec, respectively, indicating better crystallinity than the diamond wafer reported by Schreck et al. [29].
Diode
Diodes, also known as rectifiers, act as one-way switches for current: they allow current to flow easily in one direction but strongly block it in the opposite direction. Homoepitaxial diamond diodes have been widely studied over the last twenty years, whereas studies of heteroepitaxial diamond diodes have appeared mainly in the last few years [90,[187][188][189][190][191]; representative results are summarized in Table 1.
In 2015, Kawashima et al. [188] fabricated lateral Schottky barrier diodes (SBDs) using heteroepitaxial diamond films grown on Ir/SrTiO₃/Si substrates, achieving a high rectification ratio of 10¹² and a forward current density of 10 A/cm². A breakdown voltage of 52 V was observed, corresponding to a breakdown field strength of ~1 MV/cm. Murooka et al. [189] demonstrated lateral SBDs fabricated on heteroepitaxial diamond films grown on 3C-SiC/Si substrates and investigated their electrical properties through temperature-dependent characterization. The fabricated SBDs show clear diode behavior with rectification ratios above 10⁹ at ±5 V, maintained above 10⁸ even at 500 K, at current densities of around 1 A/cm². Arnault et al. [90] fabricated and characterized lateral SBDs on a heteroepitaxial diamond film on an Ir/SrTiO₃/Si substrate: I-V characteristics evidence a working-diode yield of 92%, close to that obtained for diodes on homoepitaxial films. Despite the higher expected defect density when heteroepitaxial diamond is used as a substrate, the active-layer doping and the device characteristics (series resistance, Schottky barrier, and ideality) appear highly uniform. Kwak et al. [190] fabricated lateral SBDs on heteroepitaxial diamond film grown on a-plane sapphire, measuring an ideality factor of 1.4 and a maximum breakdown field of 1.1 MV/cm with a power device analyzer. Most threading dislocations in the diamond epilayer grown on the heteroepitaxial diamond substrate are revealed to be of 45° mixed type, deteriorating the device performance through early breakdown or large leakage currents. To eliminate the impact of defects on the electrical performance, Sittimart et al. [191] used a metal-assisted termination (MAT) method to reduce the defect density of the heteroepitaxial diamond film; after insertion of the MAT buffer layer, the leakage current was suppressed and the working-diode yield was enhanced. A breakdown voltage of 375 V was measured, the highest value among SBDs on heteroepitaxial substrates. Heteroepitaxial diamond substrates, with their lower cost and larger area, are expected to be better suited to commercialization in the semiconductor industry. These Schottky-barrier-diode demonstrations show the potential of heteroepitaxial diamond for electronic devices; however, further technologies are needed to reduce the on-state resistance and enhance the breakdown voltage. Lateral SBDs usually suffer from high series resistance and should therefore be replaced by pseudo-vertical or vertical SBDs, and techniques such as MAT are required to improve the crystal quality.
Table 1. Lateral Schottky barrier diodes on heteroepitaxial diamond substrates.
Ref. | Device | Substrate | Contact (metal) | J (A/cm²) | Rectification ratio | V_BD (V)
[189] | lateral Schottky | 3C-SiC/Si (001) | 50 (Mo) | ~1 | 10⁹ | -
[90] | lateral Schottky | Ir/SrTiO₃/Si (001) | 200 (Zr/Pt/Au) | ~10⁻³ | 10⁴-10⁵ | -
[190] | lateral Schottky | Al₂O₃ (11-20) | 100 (Al) | ~6 | 10⁶ | 50 (1.1 MV/cm)
[191] | lateral Schottky | commercial product | (Mo/Au) | 5 × 10⁻⁵ | ~10⁶ | 375
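Such I-V data are usually modeled with the thermionic-emission equation, which relates the current density to the barrier height and ideality factor. In the sketch below, the ideality factor 1.4 is the value reported in [190], while the Richardson constant and barrier height are assumed illustrative numbers, not parameters extracted in [90,188-191].

```python
import math

# Thermionic-emission model of a Schottky barrier diode:
#   J = A* T^2 exp(-phi_B / kT) * (exp(V / (n kT)) - 1)
# Richardson constant and barrier height are illustrative assumptions.
kT = 0.02585            # thermal energy at 300 K, eV
T = 300.0               # temperature, K
A_star = 90.0           # assumed effective Richardson constant, A/(cm^2 K^2)
phi_B = 1.4             # assumed Schottky barrier height, eV
n = 1.4                 # ideality factor (value reported in [190])

J_s = A_star * T**2 * math.exp(-phi_B / kT)   # saturation current density

def J(V):
    return J_s * math.expm1(V / (n * kT))

for V in (0.5, 1.0, 1.5, -5.0):
    print(f"V = {V:+5.1f} V  ->  J = {J(V):+.3e} A/cm^2")
# Reverse current saturates at -J_s, so the forward/reverse asymmetry
# sets the rectification ratio quoted for such diodes.
```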
Field effect transistor
Field-effect transistors (FETs) on diamond [34,[192][193][194][195][196][197][198][199][200][201] should outperform FET structures on other wide-bandgap materials such as SiC and GaN in high-power/high-temperature applications, owing to the desirable properties of diamond; this has attracted growing interest in recent years. Syamsul et al. [192] deposited a 500 nm undoped homoepitaxial diamond layer on N-doped heteroepitaxial diamond on a 3C-SiC/Si substrate and fabricated hydrogen-terminated diamond metal-oxide-semiconductor field-effect transistors (C-H MOSFETs), achieving a maximum current density of 80 mA/mm, an I_ON/I_OFF ratio of 10⁹, and a high breakdown voltage of more than 1 kV. Kasu et al. [193] reported modulation-doped C-H MOSFETs on heteroepitaxial diamond with a high hole mobility of 2465 cm²/(V·s); the breakdown voltage reached 703 V, corresponding to a Baliga figure of merit (BFOM) of 179 MW/cm². With a surface Al₂O₃ passivation layer, they further improved the breakdown voltage of C-H MOSFETs on heteroepitaxial diamond to 2608 V, with a BFOM of 344.6 MW/cm² [194]. Saha et al. [197] fabricated diamond C-H MOSFETs on chemically-mechanically planarized heteroepitaxial diamond and achieved a high current density of -6.8 A/mm, a high breakdown voltage of -2568 V, and the highest BFOM for diamond of 875 MW/cm²; they also reported a diamond MOSFET with a -3326 V breakdown voltage [198]. Chen [196] grew a boron-doped layer on heteroepitaxial diamond and fabricated diamond metal-semiconductor FETs, obtaining a breakdown voltage of -2360 V at room temperature.
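Baliga's figure of merit quoted above is conventionally computed as BFOM = V_BD²/R_on,sp. The sketch below back-calculates the specific on-resistance implied by the numbers quoted for [197]; R_on,sp itself is inferred here, not stated in that work.

```python
# Baliga's figure of merit for a power device: BFOM = V_BD^2 / R_on_sp.
# We back-calculate the specific on-resistance implied by the numbers
# quoted in the text for [197]; R_on_sp itself is an inference.
V_BD = 2568.0            # breakdown voltage magnitude, V (quoted)
BFOM = 875e6             # Baliga figure of merit, W/cm^2 (quoted)

R_on_sp = V_BD**2 / BFOM         # specific on-resistance, ohm cm^2
print(f"implied R_on,sp ~ {R_on_sp * 1e3:.1f} mOhm cm^2")   # ~7.5
```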
Several recent works on diamond FETs using heteroepitaxial diamond substrates have shown encouraging results. MOSFETs using hydrogen-terminated diamond as the conductive channel are the most promising, exhibiting both high current density and high breakdown voltage. More in-depth research on C-H devices is needed to further improve device performance.
Detector
Studies on heteroepitaxial diamond detectors are still few [23,202,203]. The first was reported by Berdermann et al. in 2010 [202], who built diamond detectors for hadron physics research; however, the charge collection efficiency for holes was quite low (12%) compared with that of single-crystal diamond detectors (95%). More recently, Berdermann et al. [23] prepared diamond detectors for particle detection and studied their performance with several kinds of particles, including α and β sources, swift ions from a heavy-ion synchrotron, and relativistic protons. The charge collection efficiency for holes improved to 95%, while that for electrons remained low (40%). Further improvement of the crystal quality and a better understanding of the conduction mechanism of heteroepitaxial substrates are needed.
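Charge collection efficiency in such detectors is commonly analyzed with the single-carrier Hecht relation, CCE = (μτE/d)[1 − exp(−d/μτE)]. The field, thickness, and μτ values in the sketch below are assumed illustrations, not parameters from [23,202].

```python
import math

# Single-carrier Hecht relation for charge collection efficiency:
#   CCE = (lam / d) * (1 - exp(-d / lam)),  lam = mu * tau * E
# (carrier drift length over detector thickness). All values are assumed.
def cce(mu_tau_cm2_per_V, E_V_per_cm, d_cm):
    lam = mu_tau_cm2_per_V * E_V_per_cm      # drift length, cm
    return (lam / d_cm) * (1.0 - math.exp(-d_cm / lam))

d = 0.05          # detector thickness, cm (assumed 500 um)
E = 1e4           # applied field, V/cm (assumed 1 V/um)

for mu_tau in (1e-7, 1e-6, 1e-5):   # assumed mu*tau products, cm^2/V
    print(f"mu*tau = {mu_tau:.0e} cm^2/V -> CCE = {cce(mu_tau, E, d):.2f}")
# A short drift length (low mu*tau) reproduces the low hole CCE reported
# for early heteroepitaxial detectors; larger mu*tau approaches 100%.
```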
Summary
(1) The fundamentals of diamond heteroepitaxy are systematically introduced from three aspects: nucleation and growth thermodynamics, nucleation at the atomic level, and the interplay between substrate and film, which together provide the theoretical basis for diamond heteroepitaxy.
(2) BEN is the most important method for realizing high-density epitaxial nucleation. Several mainstream BEN configurations in MPCVD, DCCVD, and HFCVD setups are illustrated; keeping the substrate holder at a lower electric potential leads to directional bombardment by positively charged species.
(3) The bias process combines many parameters, including substrate temperature, reactor pressure, methane concentration, bias voltage, and bias time. The narrow process windows in bias voltage, bias time, and methane content have been widely studied for Si, 3C-SiC, and Ir substrates; outside the process window, the diamond nucleation density drops dramatically or the nuclei lose their epitaxial orientation.
(4) The diamond epitaxial nucleation mechanism under ion bombardment is examined, taking Ir substrates as the example. Ion bombardment has several consequences: ion sub-plantation, substrate-surface modification, further ionization and dissociation of the reactive gas, and an increase in substrate temperature. From the perspective of the nucleation process, two stages, a-C:H layer formation and diamond nucleation from the a-C:H layer, have been observed and simulated. The epitaxial orientation with respect to the substrate is inferred to be determined by the strongest binding energy between diamond and substrate.
(5) The textured growth mechanism and process after BEN, controlled by the growth parameters, are illustrated. When grains meet, a disclination can form instead of a low-angle grain boundary when the elastic strain energy is minimal. There is a relationship between the critical grain size and the critical grain-boundary angle: usually, the larger the grain, the smaller the critical grain-boundary angle.
(6) Since dislocations and stress in the diamond film affect device performance, efficient reduction methods include epitaxial lateral growth, off-axis substrate growth, the metal-assisted termination method, etc. The dislocation density can be reduced to 10⁴-10⁶ cm⁻².
(7) The electronic applications of heteroepitaxial diamond are introduced through three device classes: diodes, FETs, and detectors. The performance of these devices shows the potential of heteroepitaxial diamond in electronics.
Prospects
(1) Although Ir has proven to be the optimum substrate, novel and cheaper substrates still deserve study. (2) Studying the interdependence of bias process parameters (bias voltage, reactor pressure, etc.) is necessary to understand the narrow bias-process window. (3) The BEN mechanism can be further probed experimentally; for instance, the influence of bias process parameters on the thickness and composition of the a-C:H layer can be characterized by X-ray photoelectron spectroscopy, elastic recoil detection analysis, etc., which may offer new insight into the different nucleation pathways as the bias parameters change.
(4) With the development of DFT, the mechanism of diamond heteroepitaxial nucleation and growth deserves further study, now that 2D-material growth processes and mechanisms have been simulated carefully from several perspectives. In particular, molecular-dynamics simulation shows great value for understanding the specific processes by which diamond nucleates from the gas phase or from amorphous carbon. (5) Diamond wafers of 1-3.5 in. have been fabricated by several groups; developing the technology for stable, reproducible, uniform, high-density, and large-scale plasmas is the prerequisite for attaining large diamond wafers.
Hamiltonian Evolutionary Games
We introduce a class of o.d.e.'s that generalizes to polymatrix games the replicator equations of symmetric and asymmetric games. We also introduce a new class of Poisson structures on the phase space of these systems and characterize the corresponding subclass of Hamiltonian polymatrix replicator systems. This extends known results for symmetric and asymmetric replicator systems.
Introduction
State of the art. Evolutionary Game Theory (EGT) originated in the work of John Maynard Smith and George R. Price, who applied the theory of strategic games developed by John von Neumann and Oskar Morgenstern to evolutionary problems in Biology. Unlike classical Game Theory, EGT investigates the dynamical processes of biological populations.
Independently, A. Lotka and V. Volterra introduced the following class of o.d.e.'s, currently known as Lotka-Volterra (LV) systems and usually taken as models for the time evolution of ecosystems with n species:
\dot{x}_i = x_i \Big( r_i + \sum_{j=1}^n a_{ij}\, x_j \Big), \qquad i = 1, \dots, n.
Although historically this class of systems preceded EGT, they are now considered an integral part of the theory. The entries a_ij represent interactions between different species, while the coefficients r_i stand for the species' natural growth rates. In his studies [19], V. Volterra gave special attention to predator-prey systems and their generalization to food-chain systems in n species, which fall into the category of dissipative and conservative LV systems. Denoting by A = [ a_ij ] its interaction matrix, an LV system is said to be dissipative, resp. conservative, if there exists a positive diagonal matrix D such that AD + DA^t ≤ 0, resp. AD is skew-symmetric. The matrix D was interpreted by Volterra as some sort of normalization by the average weights of the different species. If the LV system admits an equilibrium point q ∈ R^n_+, the function H: int(R^n_+) → R,
H(x) = \sum_{j=1}^n x_j - q_j \log x_j,
is either a decreasing Lyapunov function, if the system is dissipative, or else a constant of motion, if the system is conservative. Volterra proved that the dynamics of any n-species conservative LV system can be embedded in a Hamiltonian system of dimension 2n. More recently, in the 1980's, Redheffer et al. developed further the theory of dissipative LV systems, introducing and studying the class of stably dissipative systems [14][15][16][17][18]. In [2] a re-interpretation was given of the Hamiltonian character of the dynamics of any conservative LV system: there is a Poisson structure on R^n_+ which makes the system Hamiltonian. Another interesting fact from [2], which stresses the importance of studying Hamiltonian LV systems, is that the limit dynamics of any stably dissipative LV system is described by a conservative LV system.
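A minimal numerical illustration of the conservative case, using an arbitrary two-species predator-prey example (all coefficients below are assumption-level choices, with D = I so that A itself is skew-symmetric): the function H is conserved along orbits up to integration error.

```python
import numpy as np

# Conservative Lotka-Volterra example: dx_i/dt = x_i (r_i + (A x)_i).
# With D = I and A skew-symmetric, H(x) = sum_j x_j - q_j log x_j is a
# constant of motion. Coefficients are arbitrary example values.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # skew-symmetric interaction matrix
r = np.array([1.0, -1.0])            # growth rates; the equilibrium q
q = np.array([1.0, 1.0])             # solves r + A q = 0

def H(x):
    return np.sum(x - q * np.log(x))

def step(x, dt=1e-4):                # one 4th-order Runge-Kutta step
    f = lambda x: x * (r + A @ x)
    k1 = f(x); k2 = f(x + dt/2*k1); k3 = f(x + dt/2*k2); k4 = f(x + dt*k3)
    return x + dt/6*(k1 + 2*k2 + 2*k3 + k4)

x = np.array([0.5, 0.7])
h0 = H(x)
for _ in range(100_000):             # integrate up to t = 10
    x = step(x)
print(f"drift of H after t=10: {abs(H(x) - h0):.2e}")   # ~0 (O(dt^4))
```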
Another class of o.d.e.'s, which plays a central role in EGT, is the replicator equation defined on the simplex ∆^{n-1} = { x ∈ R^n : x_i ≥ 0, \sum_{i=1}^n x_i = 1 }:
\dot{x}_i = x_i \big( (Ax)_i - x^t A x \big), \qquad i = 1, \dots, n.
The coefficients of this o.d.e. are stored in an n × n real matrix A = [ a_ij ], referred to as the pay-off matrix. A game-theoretical interpretation of this equation is provided in section 3; see [8] for the history of this equation. In [9], J. Hofbauer introduced a change of coordinates, mapping R^n_+ to the simplex ∆^n minus one face, which conjugates any LV system in R^n_+ to a time re-parametrization of a replicator system on ∆^n, and vice-versa. Thus, when an LV system is conservative, the corresponding replicator system is orbit equivalent to a Hamiltonian system. On the other hand, any replicator system on ∆^{n-1} with skew-symmetric pay-off matrix extends to an LV system on R^n_+ with r_i = 0, and hence can be viewed as a restriction of a Hamiltonian LV system on R^n_+. To our knowledge, these are the only known subclasses of Hamiltonian replicator systems.
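For the reader's convenience, the standard form of this change of coordinates is sketched below; see [9] for the precise statement.

```latex
% Hofbauer's change of coordinates (standard form; see [9] for details):
% it maps the part of the simplex \Delta^n where x_{n+1} > 0 onto R^n_+,
y_i = \frac{x_i}{x_{n+1}}, \qquad i = 1, \dots, n,
% and conjugates the replicator equation on \Delta^n, with an
% (n+1) x (n+1) pay-off matrix, to a Lotka--Volterra system on R^n_+,
% up to the time re-parametrization dt -> x_{n+1} dt.
```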
Asymmetric or bimatrix games lead to another fundamental class of models in EGT, the system of o.d.e.'s
\dot{x}_i = x_i \big( (Ay)_i - x^t A y \big), \quad i = 1, \dots, n, \qquad
\dot{y}_j = y_j \big( (Bx)_j - y^t B x \big), \quad j = 1, \dots, m,
whose coefficients are displayed in two pay-off matrices, A of order n × m and B of order m × n.
The phase space of this equation is the prism ∆^{n-1} × ∆^{m-1}; a game-theoretical interpretation is given in section 3. It was remarked by I. Eshel and E. Akin [5] that, up to a time re-parametrization, these systems always preserve volume. For λ-zero-sum games (λ < 0) and λ-partnership games (λ > 0) with an interior equilibrium point in the prism ∆^{n-1} × ∆^{m-1}, J. Hofbauer proved in [7] that this bimatrix system is orbit equivalent to a Hamiltonian system w.r.t. some Poisson structure in the interior of the prism. Previously, E. Akin and V. Losert [1] had noticed the Hamiltonian character of this model in the zero-sum case. Polymatrix games, like n-player games, generalize the concept of bimatrix games; the main difference between them is that interactions between players are bilateral in the former but not in the latter. The first reference we could find on the existence of equilibria for these games is the paper of J. Howson [10], who attributes the concept of polymatrix game to E. Yanovskaya (1968). More recently, the structure of Nash equilibria for polymatrix games was studied by L. Quintas in [13].
Main results. We introduce a class of o.d.e.'s, referred to as the polymatrix replicator equation, that generalizes the symmetric and asymmetric replicator equations to polymatrix games. We are not aware of any reference to this equation in the literature. The phase space of these systems is a finite product of simplexes. We introduce the concept of a conservative polymatrix game, which in the case of bimatrix games extends the λ-zero-sum games (λ < 0) and the λ-partnership games (λ > 0). In Theorem 3.13 we introduce a class of Poisson structures on finite products of simplexes (see (3.4)); we will show that these prisms are stratified Poisson spaces (see section 4). Then, in Theorem 3.20, we show that any conservative polymatrix game determines a Hamiltonian polymatrix replicator. This work extends and unifies several known facts about Hamiltonian replicator o.d.e.'s. At the end of section 3 we compare our results with the known facts mentioned in the state-of-the-art subsection.
The paper is organized as follows. In section 2 we introduce the needed concepts from Poisson geometry. In section 3 we state and prove the main results. In section 4 we discuss a method introduced in [6], called singular Poisson reduction, which gives a geometric interpretation of the Poisson structures defined in section 3. In the last section we work out a couple of examples.
Generalities on Poisson Structures
In this section we provide a short introduction to Poisson geometry, focused on some dynamical aspects; see any standard textbook on Poisson manifolds and related topics, for example [3,11].
Let M be an n-dimensional smooth manifold. We denote by C^∞(M) the space of smooth functions on M. A Poisson structure on M is an R-bilinear bracket {·,·}: C^∞(M) × C^∞(M) → C^∞(M) which is skew-symmetric, satisfies the Jacobi identity, and acts as a derivation in each entry (Leibniz rule). Given H ∈ C^∞(M), the associated Hamiltonian vector field X_H is defined by X_H(f) = {f, H} for all f ∈ C^∞(M).
1) The characteristic distribution D is spanned, at each point x ∈ M, by the Hamiltonian vector fields; it integrates to a (possibly singular) foliation of M whose leaves S_x carry symplectic forms ω_{S_x}.
2) The dimension of the linear subspace D(x) is called the rank of the Poisson structure at the point x, which equals the dimension of the leaf S_x. Since this leaf is a symplectic manifold in its own right, it has even dimension.
3) The symplectic foliation S := {(S_x, ω_{S_x}) | x ∈ M} completely determines the Poisson structure. 4) By definition, every symplectic leaf S_x is an invariant submanifold for any Hamiltonian vector field X_H; in fact, the restriction of X_H to S_x is Hamiltonian with respect to the symplectic structure ω_{S_x}.
5) Every symplectic manifold (N, ω) is a Poisson manifold with Poisson bracket defined by {f, g}_N := ω(X_f, X_g), where X_f and X_g are the Hamiltonian vector fields associated to f and g by the symplectic structure. 6) A function f is called a Casimir if {·, f} = 0. Note that Casimirs are constants of motion for any Hamiltonian vector field. Furthermore, if f_1, f_2 are two Casimirs, then {f_1, f_2} is also a Casimir, by the Jacobi identity.
In a local coordinate chart (U, x_1, ..., x_n), or equivalently when M = R^n, a Poisson bracket takes the form
\{f, g\}(x) = \sum_{i,j=1}^n \pi_{ij}(x)\, \frac{\partial f}{\partial x_i} \frac{\partial g}{\partial x_j} = (\nabla f)^t\, \pi(x)\, \nabla g,
where π(x) = [ π_ij(x) ] is a skew-symmetric matrix-valued smooth function and, for every function f, ∇f denotes its gradient. The Jacobi identity translates to
\sum_{l=1}^n \left( \pi_{il} \frac{\partial \pi_{jk}}{\partial x_l} + \pi_{jl} \frac{\partial \pi_{ki}}{\partial x_l} + \pi_{kl} \frac{\partial \pi_{ij}}{\partial x_l} \right) = 0 \quad \text{for all } i, j, k. \tag{2.1}
Clearly, every skew-symmetric matrix-valued function π: R^n → Mat_{n×n}(R) satisfying condition (2.1) defines a Poisson structure on R^n. In the next section we shall introduce our Poisson structures through their associated skew-symmetric matrix-valued functions, referred to as bivectors π: R^n → Mat_{n×n}(R). The term bivector reflects the fact that π(x) acts as a linear operator π(x): (R^n)^* → R^n.
Remark 2.2. Regarding the function π we have: 1) For any function H the associated Hamiltonian vector field is given by X_H(x) = π(x) ∇H(x). 2) The characteristic distribution D_π(x) is the one generated by the columns of the matrix π(x).
3) Under a change of variables ψ : M → N it transforms according to
π_N(ψ(m)) = d_mψ π_M(m) (d_mψ)^t, (2.3)
where π_M and π_N are the skew symmetric matrix valued functions associated to the Poisson structures of M and N, respectively, and d_mψ is the Jacobian matrix of the map ψ at the point m. A smooth map ψ between Poisson manifolds is a Poisson map exactly when it satisfies (2.3).
Polymatrix games
In this section we introduce the evolutionary polymatrix games to which our main result applies. This class of systems contains both the replicator models and the evolutionary bimatrix games.
Consider a population whose individuals interact with each other using one of n possible pure strategies. The state of the population is described by a probability vector p = (p_1, ..., p_n) containing the usage frequency of each pure strategy. This vector is a point in the (n − 1)-dimensional simplex
∆^{n−1} := { x ∈ R^n : x_i ≥ 0 for all i, x_1 + ... + x_n = 1 }.
A symmetric game is specified by an n × n pay-off matrix A = [a_{ij}], where the entry a_{ij} represents the pay-off of an individual using pure strategy i against another using pure strategy j. Given x ∈ ∆^{n−1}, the value (Ax)_i = Σ_{j=1}^n a_{ij} x_j represents the average pay-off of strategy i within a population at state x. Similarly, the value x^t A x = Σ_{i,j=1}^n a_{ij} x_i x_j stands for the overall average of a population at state x, while the difference (Ax)_i − x^t A x measures the relative fitness of strategy i in the population x. The replicator model is the following o.d.e. on ∆^{n−1}:
dx_i/dt = x_i ( (Ax)_i − x^t A x ), i = 1, ..., n, (3.1)
which says that the logarithmic growth rate of each pure strategy's frequency equals its relative fitness. The flow of this o.d.e. is complete and leaves the simplex ∆^{n−1} invariant, as well as each of its faces. Next we introduce the class of evolutionary asymmetric, or bimatrix, games, where two groups of individuals within a population (e.g. males and females), or two different populations, interact using different sets of strategies, say n strategies for the first group and m strategies for the second. The state of this model is a pair of probability vectors in the (n + m − 2)-dimensional prism Γ_{n,m} = ∆^{n−1} × ∆^{m−1}. There are no interactions within each group. The game is specified by two pay-off matrices: an n × m matrix A = [a_{ij}], where a_{ij} is the pay-off for a member of the first group using strategy i against an individual of the second group using strategy j, and an m × n matrix B = [b_{ij}] with the pay-offs for the second group members. Assuming the first and second group states are x and y, respectively, the value (Ay)_i is the average pay-off for a first group individual using strategy i, the number x^t A y is the overall average pay-off for the first group members, and the difference (Ay)_i − x^t A y measures the relative fitness of the first group strategy i. Similarly, (Bx)_j − y^t B x measures the relative fitness of the second group strategy j when the group states are x and y. The bimatrix replicator is the following o.d.e. on the prism Γ_{n,m}:
dx_i/dt = x_i ( (Ay)_i − x^t A y ), i = 1, ..., n,
dy_j/dt = y_j ( (Bx)_j − y^t B x ), j = 1, ..., m, (3.2)
which again says that the logarithmic growth rate of each strategy's frequency equals its relative fitness. The flow of this o.d.e. is complete and leaves the prism Γ_{n,m} invariant, as well as each of its faces.
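As a concrete illustration of equation (3.1), here is a minimal numerical sketch (the function names, the Euler step and the final re-projection are our own choices, not part of the model): it integrates the replicator o.d.e. and illustrates the invariance of the simplex.

```python
import numpy as np

def replicator_field(x, A):
    """Replicator vector field on the simplex: X(x)_i = x_i ((A x)_i - x^t A x)."""
    Ax = A @ x
    return x * (Ax - x @ Ax)

def evolve(x0, A, dt=1e-3, steps=10000):
    """Forward-Euler integration; the exact flow preserves the simplex, so we
    re-project after each step only to control discretization drift."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * replicator_field(x, A)
        x = np.clip(x, 1e-12, None)
        x = x / x.sum()
    return x

A = np.array([[0.0, -1.0, 1.0], [1.0, 0.0, -1.0], [-1.0, 1.0, 0.0]])  # rock-paper-scissors
print(evolve([0.5, 0.3, 0.2], A))  # stays in the simplex
```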
Finally we introduce the class of polymatrix replicators. Consider p different populations, or else a single population stratified into p groups. We shall use greek letters like α and β to denote these groups. Assume that for each group α ∈ {1, ..., p} there are n_α pure strategies for interacting with members of any group, including its own. We call the vector n = (n_1, ..., n_p) the signature of the game. The total number of strategies is therefore n = n_1 + ... + n_p. The polymatrix game is specified by a single n × n matrix A = [a_{ij}], with the pay-off a_{ij} for a user of strategy i, member of one group, against a user of strategy j, member of another group, possibly the same. The main difference between polymatrix games and the symmetric game, also specified by a single matrix A, is that in the polymatrix game competition is restricted to members of the same group. This means that the relative fitness of each strategy refers to the overall average pay-off of strategies within the same group. To be more precise we need to introduce some notation. We decompose A into blocks A^{α,β}, where each A^{α,β} is an n_α × n_β matrix. Similarly, we decompose each vector x ∈ R^n as x = (x^α)_α, where x^α ∈ R^{n_α}. We say that a strategy i belongs to a group α, and write i ∈ α, if and only if n_1 + ... + n_{α−1} < i ≤ n_1 + ... + n_α. Similarly, we write (i, j) ∈ α × β when i ∈ α and j ∈ β. With this notation we have, for i ∈ α and α = 1, ..., p,
dx_i/dt = x_i ( (Ax)_i − (x^α)^t (Ax)^α ), (3.3)
which once more says that the logarithmic growth rate of each pure strategy's frequency equals its relative fitness, now measured against the overall average pay-off (x^α)^t (Ax)^α = Σ_β (x^α)^t A^{α,β} x^β of its own group. The flow of this o.d.e. is complete and leaves the prism Γ_n = ∆^{n_1−1} × ... × ∆^{n_p−1} invariant. The underlying vector field on Γ_n will be denoted by X_A. The pair G = (n, A) will be referred to as a polymatrix game, and the dynamical system determined by X_A = X_{(n,A)} as the associated polymatrix replicator on Γ_n. Remark 3.2. When p = 1, system (3.3) is the replicator equation (3.1). When p = 2 and A^{1,1} = A^{2,2} = 0, system (3.3) becomes the bimatrix replicator equation (3.2). The proofs of the following three propositions are easy exercises.
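The group-wise averaging in (3.3) is perhaps easiest to read off from code. The sketch below (our own naming, assuming numpy) evaluates the vector field X_{(n,A)} for a signature n = (n_1, ..., n_p) and a single n × n pay-off matrix A; note that the components of the field sum to zero within each group, which is what keeps every factor simplex invariant.

```python
import numpy as np

def polymatrix_field(x, A, signature):
    """X_{(n,A)}: for i in group alpha, dx_i/dt = x_i ((A x)_i - x^alpha . (A x)^alpha)."""
    Ax = A @ x
    out = np.empty_like(x)
    start = 0
    for n_a in signature:
        block = slice(start, start + n_a)
        group_avg = x[block] @ Ax[block]          # overall average pay-off of the group
        out[block] = x[block] * (Ax[block] - group_avg)
        start += n_a
    return out

# A point of Gamma_(2,2) = Delta^1 x Delta^1 and an arbitrary 4 x 4 pay-off matrix:
x = np.array([0.3, 0.7, 0.6, 0.4])
A = np.arange(16.0).reshape(4, 4)
v = polymatrix_field(x, A, (2, 2))
print(v, v[:2].sum(), v[2:].sum())  # each group's components sum to ~0
```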
Proposition 3.3 (Identity).
The correspondence A ↦ X_{(n,A)} is linear and its kernel is formed by the matrices A ∈ Mat_{n×n}(R) such that the block A^{α,β} has equal rows for all α, β = 1, ..., p. Thus, two matrices A, B ∈ Mat_{n×n}(R) determine the same vector field X_{(n,A)} = X_{(n,B)} on Γ_n iff the block A^{α,β} − B^{α,β} has equal rows for all α, β = 1, ..., p. Definition 3.4. Matrices A and B as above are called equivalent; equivalent matrices determine the same evolutionary polymatrix game on Γ_n. In other words, (n, A) ∼ (n, B) iff X_{(n,A)} = X_{(n,B)}. Proposition 3.5 (Equilibria). A point q ∈ Γ_n is an equilibrium of X_{(n,A)} if and only if (Aq)_i = (Aq)_j for all α = 1, ..., p and every i, j ∈ α. Definition 3.6. Given a signature n = (n_1, ..., n_p), we define the set I_n of all subsets I ⊆ {1, ..., n} such that I ∩ α ≠ ∅ for every group α. Each I ∈ I_n determines the face σ_I := { x ∈ Γ_n : x_i = 0 for all i ∉ I } of Γ_n, and the correspondence between sets in I_n and faces of Γ_n is bijective. Definition 3.7. Given I ∈ I_n, the restricted game G|_I is the polymatrix game whose signature counts the strategies of I in each group and whose pay-off matrix A_I is obtained from A by deleting the rows and columns with indices outside I. The following proposition says that the restriction of a polymatrix replicator to a face is another polymatrix replicator. Proposition 3.8 (Inheritance). Consider the system (3.3) associated to the polymatrix game G = (n, A). Given I ∈ I_n, the face σ_I of Γ_n is invariant under the flow of X_{(n,A)}, and the restriction of (3.3) to σ_I is the polymatrix replicator associated to the restricted game G|_I.
We set some notation in order to produce neater formulas. In any matrix equality the vectors in R^n, or R^{n_α}, should be identified with column matrices. We set 1 = 1_n = (1, 1, ..., 1) ∈ R^n and omit the subscript n whenever the dimension of this vector is clear from the context. Similarly, we write I = I_n for the n × n identity matrix, and omit the subscript whenever its value is clear. Given x ∈ R^n, D_x = diag(x_1, ..., x_n) denotes the associated diagonal matrix, and we set T_x := I − 1 x^t. Given a polymatrix game G = (n, A), we define the matrix valued mapping π_A : R^n → Mat_{n×n}(R) by its blocks
(π_A(x))^{α,β} := D_{x^α} T_{x^α} A^{α,β} (T_{x^β})^t D_{x^β}, α, β = 1, ..., p. (3.4)
These computations reduce to the simple case p = 1, n_1 = n, where π_A(x) = D_x T_x A (T_x)^t D_x. Remark 3.9. Notice that π_A(x) is a skew symmetric matrix valued map whenever A is a skew symmetric matrix.
Remark 3.11. A formal equilibrium of G = (n, A) is an equilibrium of the natural extension of X_{(n,A)} to the affine subspace spanned by Γ_n.
The next proposition says that the existence of a formal equilibrium is a sufficient condition for the vector field X_{(n,A)} of system (3.3) to be the gradient of a simple function H with respect to π_A. We denote by Γ°_n the topological interior of Γ_n in the affine subspace of R^n spanned by Γ_n. Proposition 3.12. Given A ∈ Mat_{n×n}(R), assume there exists a formal equilibrium q ∈ R^n of G = (n, A). Then, setting H(x) := Σ_{j=1}^n ( x_j − q_j log x_j ), we have X_{(n,A)} = π_A dH on Γ°_n. Proof. Consider the vector field Z = π_A dH. For any α, and i ∈ α, denote by Z_i^α(x) the i-th component of Z(x). Using that Σ_{j∈β} q_j^β = 1, a direct computation shows that Z_i^α(x) equals the i-th component of X_{(n,A)}(x) plus a term which vanishes because q is an equilibrium point and x^α ∈ ∆^{n_α−1}. This completes the proof.
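Proposition 3.12 can be sanity-checked numerically. In the sketch below we take p = 1 with the rock-paper-scissors matrix as a hypothetical skew symmetric example; its barycenter q is a formal equilibrium since Aq = 0 and the coordinates of q sum to 1. Along a crudely integrated replicator orbit, H(x) = Σ_j (x_j − q_j log x_j) then drifts only at the level of the integration error.

```python
import numpy as np

A = np.array([[0.0, -1.0, 1.0], [1.0, 0.0, -1.0], [-1.0, 1.0, 0.0]])  # skew symmetric
q = np.full(3, 1.0 / 3.0)                                             # formal equilibrium, A q = 0

def H(x):
    """Candidate Hamiltonian of Proposition 3.12: H(x) = sum_j x_j - q_j log x_j."""
    return float(np.sum(x - q * np.log(x)))

x = np.array([0.6, 0.3, 0.1])
h0, dt = H(x), 1e-4
for _ in range(100000):
    Ax = A @ x
    x = x + dt * x * (Ax - x @ Ax)   # replicator step (x^t A x = 0 for skew A)
print(abs(H(x) - h0))                # O(dt); the exact flow conserves H
```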
Theorem 3.13. If A is skew symmetric then the mapping π_A in (3.4) defines a stratified Poisson structure on Γ_n. Moreover, the mapping φ : R^{n−p} → Γ°_n defined in the proof below is a Poisson diffeomorphism if we endow R^{n−p} with the constant Poisson structure associated to the skew symmetric matrix B defined in (3.6).
Proof. Define φ : R^{n−p} → Γ°_n group-wise: for u = (u^1, ..., u^p) with u^α ∈ R^{n_α−1}, set φ^α_i(u^α) = e^{u^α_i} / (1 + Σ_{j=1}^{n_α−1} e^{u^α_j}) for i = 1, ..., n_α − 1, and φ^α_{n_α}(u^α) = 1 / (1 + Σ_{j=1}^{n_α−1} e^{u^α_j}). This map is a diffeomorphism whose inverse is easily computed. If A is skew symmetric then so is B. Hence this matrix induces a constant Poisson structure on R^{n−p}. We want to prove that π_A determines a Poisson structure on Γ°_n which makes φ a Poisson map. By (2.3) we just need to show that for every u ∈ R^{n−p} and x = φ(u),
π_A(x) = d_uφ B (d_uφ)^t. (3.7)
The fact that π_A also determines a stratified Poisson structure on Γ_n, and on R^n, will be proved later; see Remark 3.15. In order to prove (3.7) it is enough to verify the equality block by block. Writing the components of φ^α as φ^α(u^α) = (φ^α_1(u^α), ..., φ^α_{n_α}(u^α)), we compute the partial derivatives ∂φ^α_i/∂u^α_j for every i = 1, ..., n_α and j = 1, ..., n_α − 1. Hence, if x = φ(u), the Jacobian of φ at the point u is the block diagonal matrix d_uφ = diag(J_1(x^1), ..., J_p(x^p)). A simple multiplication of matrices, using the relation x_1 + ... + x_{n_α} = 1, shows that J_α(x^α) E_α = T_{x^α} D_{x^α} for every α = 1, ..., p. Therefore both sides of (3.7) agree block by block, which completes the proof.
The next corollary gives a complete description of the symplectic foliation of (Γ°_n, π_A). Two examples will be given in section 5. Corollary 3.14 (Symplectic Foliation). The symplectic leaves of (Γ°_n, π_A) are the images of the symplectic leaves of (R^{n−p}, B) under the diffeomorphism φ. The symplectic leaf S_u of (R^{n−p}, B) is the (even dimensional) affine subspace through u parallel to the subspace generated by the columns of B.
Remark 3.15. Given a face I ∈ I_n, consider the pay-off matrix A_I; see Definition 3.7. Applying Theorem 3.13 to any face σ_I of Γ_n, we see that σ_I is a Poisson manifold on its own with the Poisson structure π_{A_I}. Moreover, (σ_I, π_{A_I}) is the restriction of (Γ_n, π_A) in the sense that the inclusion map i : σ_I → Γ_n is a Poisson map. Hence the interiors of the faces of Γ_n, regarded as Poisson manifolds, give (Γ_n, π_A) the structure of a Poisson stratified space. In addition, it will be shown that π_A defines a Poisson structure on R^n. In section 4 we provide a geometric explanation for these facts.
It follows from the previous remarks that any generic skew symmetric matrix can be taken as a model for a conservative polymatrix game. More precisely, Proposition 3.19. Given a signature n = (n_1, ..., n_p) with Σ_{α=1}^p n_α = n, the set of skew symmetric matrices A_0 ∈ Mat_{n×n}(R) such that G = (n, A_0 D) is a conservative polymatrix game for some diagonal matrix D is an open and dense subset of the space of skew symmetric matrices.
The next theorem says, basically, that the replicator system (3.3) is Hamiltonian for every conservative polymatrix game.
Theorem 3.20. Consider a conservative polymatrix game G = (n, A) with formal equilibrium q, skew symmetric model A_0 and scaling co-vector (λ_1, ..., λ_p). Then X_{(n,A)} is Hamiltonian in the interior of the Poisson stratified space (Γ_n, π_{A_0}), with Hamiltonian function
h(x) = Σ_{α=1}^p λ_α Σ_{j∈α} ( x_j − q_j log x_j ). (3.8)
Proof. In view of Definition 3.4 we can assume that A = A_0 D, where D is the diagonal matrix determined by the scaling co-vector. Writing dh block-wise, for every β one has (dh)^β = λ_β ( 1 − q^β/x^β ), where q^β/x^β stands for the componentwise division of the vectors. Adding up in β, and using Proposition 3.12, we get π_{A_0} dh = X_{(n, A_0 D)} = X_{(n,A)}.
In the next paragraphs we compare our results with previously known facts. Given a skew symmetric matrix A ∈ Mat_{n×n}(R), since x^t A x = 0 for all x ∈ R^n, the replicator equation (3.1) reduces to a Lotka-Volterra equation with growth rates r_i = 0,
dx_i/dt = x_i (Ax)_i, i = 1, ..., n. (3.9)
For any q ∈ R^n such that Aq = 0, the function H(x) = Σ_{j=1}^n x_j − q_j log x_j is a constant of motion for (3.9). A Poisson structure on R^n defined by the bivector π̃_A(x) = D_x A D_x was introduced in [2]. System (3.9) is Hamiltonian in the interior of R^n_+ w.r.t. π̃_A, having H as Hamiltonian function. Like π̃_A, the Poisson structure π_A introduced here can be extended to R^n; but unlike π_A, the structure π̃_A does not restrict to a Poisson structure on the simplex ∆^{n−1}. Using the Poisson structure π_A we can now say that, if there exists q ∈ R^n such that Aq = 0 and Σ_{j=1}^n q_j = 1, then system (3.9) is Hamiltonian in the interior of the simplex ∆^{n−1}. Furthermore, here we study the replicator equation itself and not a topologically equivalent LV system.
Consider now a bimatrix game with signature (n_1, n_2) and matrix
A = [ 0 A^{12} ; A^{21} 0 ].
If λ > 0, resp. λ < 0, the polymatrix game ((n_1, n_2), A) is conservative with scaling co-vector (1, λ) if and only if it has a formal equilibrium and the bimatrix game (A^{12}, A^{21}) is a λ-zero-sum game, resp. a λ-partnership game (see the definitions in section 11.2 of [8]). Theorem 3.20 generalizes the main result (section 5) in [7], which says that the evolutionary system (3.2) associated to a λ-zero-sum or λ-partnership game is orbit equivalent to a bipartite Lotka-Volterra system that is Hamiltonian w.r.t. some Poisson structure. This leads to the same constant of motion (3.8), but from the work [7] one only derives the existence of a Poisson structure in the interior of the prism ∆^{n_1−1} × ∆^{n_2−1} for which some time re-parametrization of system (3.2) is Hamiltonian w.r.t. that Poisson structure. Here, on the other hand, we provide a Poisson structure on the full prism that makes the original system Hamiltonian in the interior of the prism. We finish this section with an extension of the class of Hamiltonian polymatrix replicators. Given p smooth functions λ_α : Γ_n → R\{0}, α = 1, ..., p, consider the matrix valued smooth function D : Γ_n → Mat_{n×n}(R), D(x) = diag(λ_α(x) I_{n_α})_α, and the system of o.d.e.'s associated with the vector field
Y(x) = X_{(n, A D(x))}(x) on Γ_n. (3.10)
Proposition 3.21. Let A ∈ Mat_{n×n}(R) be a skew symmetric matrix, q ∈ R^n a formal equilibrium of G = (n, A), and consider the 1-form
ξ := Σ_{α=1}^p λ_α(x) Σ_{j∈α} ( 1 − q_j/x_j ) dx_j.
Then system (3.10) is the gradient of the 1-form ξ w.r.t. the Poisson structure π_A in the interior of Γ_n, i.e., Y = π_A ξ on Γ°_n. System (3.10) is Hamiltonian if the form ξ is exact, i.e., if there exists a smooth function H such that ξ = dH. But even if ξ is not exact, the dynamics of Y leaves invariant the symplectic foliation of (Γ°_n, π_A). Proof. The proof is similar to that of Theorem 3.20.
The previous model (3.10) contains the class of o.d.e.'s introduced by J. Maynard Smith as an extension of the asymmetric replicator equation (3.2), here labelled (3.11); see appendix J of [20], and system (9.1) in [7]. Taking
A = [ 0 A^{12} ; A^{21} 0 ] and D(x) = diag( m_2(x, y) I_m, m_1(x, y) I_n ),
system (3.11) reduces to (3.10). Since system (3.11) has a dissipative character for certain choices of the functions m_1(x, y) and m_2(x, y), it would be interesting to investigate analogous properties of system (3.10).
Singular Poisson Reduction
This section is devoted to elaborating on Remark 3.15. We will review the singular Poisson reduction introduced in [6] and use it to show that the phase space of an evolutionary game with a skew symmetric pay-off matrix is a Poisson stratified space.
A smooth action of a Lie group G on a manifold M is a smooth map A : G × M → M such that for every g, h ∈ G and m ∈ M one has A(gh, m) = A(g, A(h, m)) and A(e, m) = m, where e is the identity element of G. For every g ∈ G, A_g denotes the diffeomorphism defined by m ↦ A(g, m).
The action is said to be proper if the map G × M → M × M, (g, m) ↦ (A(g, m), m), is a proper map, i.e., preimages of compact sets are compact. We recall the definition of a smooth stratified space; see [4,12].
Definition 4.1. Let X be a paracompact Hausdorff topological space. A smooth stratification of X is a locally finite partition of X into locally closed connected smooth submanifolds S_i (i ∈ I), called the strata of the stratification, such that for any pair of strata S_i, S_j, if S_i ∩ S̄_j ≠ ∅ then S_i ⊂ S̄_j, where S̄_j denotes the closure of S_j. When this happens, S_i is called incident to S_j, or a boundary piece of S_j.
The following proposition is a well-known result in the theory of Lie group actions, see e.g. [4,12] for the proof.
Proposition 4.2. If the action of the Lie group G on M is proper, then the orbit space M/G is a smooth stratified space. Furthermore, if the action is also free, then M/G can be equipped with a smooth manifold structure such that the projection map π_G : M → M/G becomes a submersion.
Any G-invariant function f on M reduces to a function f̄ on M/G, characterized by f̄ ∘ π_G = f, so we define C^∞(M/G) as the space of all functions obtained in this way. Notice that if the action is Poisson, i.e. every diffeomorphism A_g : M → M is a Poisson map,
then the Poisson bracket of any two G-invariant functions is again G-invariant. Using this fact, a bracket can be defined on the algebra of smooth functions on M/G by
{ f̄, ḡ }_{M/G} ∘ π_G := { f, g }_M.
Recall that a Poisson algebra is an algebra equipped with a skew symmetric bracket satisfying Leibniz's rule and the Jacobi identity. We use Theorem 2.12 of [6] to show that the phase space of an evolutionary polymatrix game with a skew symmetric pay-off matrix is a Poisson stratified space. The following is, basically, the example presented in [6, Section 2.5]. Let M = C^{n_1}\{0} × ... × C^{n_p}\{0}, where n_1, ..., n_p are positive integers such that n = n_1 + ... + n_p. We will consider M as a real 2n-dimensional manifold with coordinates (ξ, η) ∈ R^{2n}, where z_i = ξ_i + iη_i for i = 1, ..., n. Equip M with the quadratic Poisson structure whose brackets {w_i, w_j}, for w = ξ, η and i, j = 1, ..., n, are determined by a skew symmetric matrix A. We shall denote by C^* the group of non-zero complex numbers. The group (C^*)^n acts on M by component-wise multiplication; denote this action by A_λ. Lemma 4.6. The action of (C^*)^n on M is Poisson, i.e., for any λ ∈ (C^*)^n the linear map A_λ : M → M defined by z ↦ (λ_1 z_1, ..., λ_n z_n) is a Poisson map.
Proof. In real coordinates, we write λ = (ξ^0, η^0) and z = (ξ, η). With this notation,
A_λ(ξ, η) = ( ξ^0 ξ − η^0 η, ξ^0 η + η^0 ξ ),
where ξ^0 ξ stands for the component-wise multiplication of these vectors, and similarly for ξ^0 η, η^0 ξ and η^0 η. Checking condition (2.3) for A_λ is then a direct computation. Notice that in our case the Jacobi identity (2.1) is an algebraic equality which holds on Γ°_n. This equality is invariant w.r.t. multiplication of x_i, x_j, x_k by constant numbers. Hence it must hold on the open subset R^n_+, which in turn yields that it is satisfied all over R^n, i.e., π_A is actually a Poisson structure on R^n.
Examples
It is possible to fully classify the dynamics of 2D and 3D conservative polymatrix replicator systems, but in this section we just briefly describe two examples of 3D polymatrix replicators.
First Example. Consider the signature n = (2, 2, 2), a skew symmetric matrix A_0, and a diagonal matrix D with diagonal entries 9/4, 9/4, 2, 2, … . By Remark 3.18, ((2, 2, 2), A) with A = A_0 D is a conservative polymatrix game. The phase space of the associated replicator system is the cube [0, 1]^3, obtained by identifying each factor ∆^1 of Γ_{(2,2,2)} with the interval [0, 1]. In this model, the equilibrium point q = D^{−1} p has coordinates q = (7/10, 5/9, 1/2), and hence is an interior point. The orbits of our polymatrix replicator foliate each symplectic leaf into closed curves around that equilibrium point. We can also check that C is a heteroclinic cycle of the vector field X_{((2,2,2),A)}. See Figure 1(a).
Second Example. The vector w = (−1, 1/2, 1) is orthogonal to the space spanned by the columns of B. The symplectic leaves of the constant Poisson structure on R^3 defined by the skew symmetric matrix B are the planes orthogonal to w. Thus, if we consider the Poisson diffeomorphism φ : R^3 → P°,
φ(u_1, u_2, u_3) = ( e^{u_1} / (1 + e^{u_1} + e^{u_2}), e^{u_2} / (1 + e^{u_1} + e^{u_2}), e^{u_3} / (1 + e^{u_3}) ),
the symplectic leaves on P° are the φ-images of these planes. Inverting the map φ, the symplectic leaves are given by the equations
−log( x / (1 − x − y) ) + (1/2) log( y / (1 − x − y) ) + log( z / (1 − z) ) = c,
with c ∈ R. Let U_+, resp. U_−, be the union of the faces {x + y = 1}, {y = 0}, {z = 0}, resp. {x = 0}, {z = 1}. On the interiors of these two open subsets of the prism's boundary the equation above is never satisfied. Therefore the closure of every symplectic leaf intersects the prism's boundary along the closed curve C = ∂U_+ = ∂U_− ⊂ ∂P. The points r = (1, 0, 0) and s = (0, 0, 1) on C are respectively a global repeller and a global sink of the polymatrix replicator, and every symplectic leaf is foliated into orbits flowing from the repeller r to the sink s. The closed curve C is also the union of two heteroclinic chains from r to s. See Figure 1(b). Note that this dynamical behaviour does not contradict the Hamiltonian character of the system because the area of each symplectic leaf is infinite.
"Mathematics"
] |
IL-6 facilitates cross-talk between epithelial cells and tumor-associated macrophages in Helicobacter pylori-linked gastric carcinogenesis
Purpose Helicobacter pylori (H. pylori) is a significant risk factor for development of gastric cancer (GC), one of the deadliest malignancies in the world. However, the mechanism by which H. pylori induces gastric oncogenesis remains unclear. Here, we investigated the function of IL-6 in gastric oncogenesis and macrophage-epithelial cell interactions. Methods We analyzed publicly available datasets to investigate the expression of IL-6 and infiltration of M2 macrophages in GC tissues, and to determine the inter-cellular communication in the context of IL-6. Human gastric epithelial and macrophage cell lines (GES-1 and THP-1-derived macrophages, respectively) were used in mono- and co-culture experiments to investigate autocrine and paracrine induction of IL-6 expression in response to H. pylori or IL-6 stimulation. Results We found that IL-6 is highly expressed in GC and modulates survival. M2 macrophage infiltration is predominant in GC and drives an IL-6-mediated communication with gastric epithelial cells. In vitro, IL-6 triggers its own expression in GES-1 cells and THP-1-derived macrophages. In addition, these cell lines are able to upregulate each other's IL-6 levels in a paracrine fashion, which is enhanced by H. pylori stimulation. Conclusion This study indicates that IL-6 in the tumor microenvironment is essential for intercellular communication. We show that H. pylori enhances an IL-6-driven autocrine and paracrine positive feedback loop between macrophages and gastric epithelial cells, which may contribute to gastric carcinogenesis.
Introduction
Despite a reduction in frequency over the past few decades, in the year 2020 gastric cancer (GC) remained the fifth most prevalent cancer worldwide, affecting 1,089,103 people. In addition, GC remains the fourth most common cause of cancer-related mortality, accounting for 768,793 deaths worldwide (Cancer today, https://gco.iarc.fr/today/). Infection of the gastric mucosa with the gram-negative bacterium Helicobacter pylori (H. pylori) is a key risk factor for GC development [1]. Chronic H. pylori infection can lead to atrophic gastritis, intestinal metaplasia (IM) and, ultimately, GC. The inflammatory environment created by chronic infection is thought to play a significant role in H. pylori-induced gastric carcinogenesis [2,3]. However, while H. pylori infects nearly 50% of the world's population, only a small percentage of these people develop GC [4]. The factors driving this discrepancy remain unclear.
A recent study showed that the extent of inflammation in the stomach following H. pylori eradication is greater in patients who go on to develop GC than in those who do not [5]. Multiple lines of evidence indicate that the pro-inflammatory cytokine interleukin-6 (IL-6) is a crucial factor in gastric carcinogenesis [6][7][8][9][10]. (Abbreviations: GC, gastric cancer; IL-6, interleukin-6; H. pylori, Helicobacter pylori; IM, intestinal metaplasia; NAG, non-atrophic gastritis; CAG, chronic atrophic gastritis; PBMCs, peripheral blood mononuclear cells; TMB, tetramethylbenzidine; STAT3, signal transducer and activator of transcription 3; SOCS family, the suppressor of cytokine signaling family.) Previous studies showed a substantial and relevant association between the systemic IL-6 levels
induced by H. pylori infection and the relative incidence of GC [11]. IL-6 serum levels are increased in GC patients and are associated with disease progression and worse prognosis [9,12]. In addition, polymorphisms in the genes encoding IL-6 and its cognate receptor have been identified as genetic risk factors for the development of GC [13]. Experimental evidence indicates that IL-6 enhances the proliferation [14] and invasiveness [15] of stomach cancer cell lines, and the overexpression of IL-6 in mice results in the development of multiple carcinomas [16][17][18]. In contrast, IL-6 knock-out mice presented a lower incidence of GC and reduced tumor size [19]. Taken together, these results suggest that an exaggerated IL-6 response contributes to the development of GC.
Signal transducer and activator of transcription-3 (STAT3) is phosphorylated during IL-6 signaling, allowing for its nuclear translocation and the activation of a number of downstream target genes. IL-6 itself is one of these target genes, and may thereby encourage its own synthesis, creating a positive feedback loop for IL-6 production. Examples of such a feedback loop are observed in the development of liver cancer [20] as well as in the maintenance of myofibroblast activity in breast cancer [21]. Nonetheless, it remains unclear whether such a positive feedback loop also contributes to the development of chronic inflammation after H. pylori infection and stomach carcinogenesis.
Importantly, the IL-6/STAT3 axis is also involved in the polarization of macrophages, which are believed to be mediators in H. pylori-associated gastritis [22], through release of IL-6 [22,23]. The presence of macrophages, particularly those of the M2 type associated with immune evasion, is a characteristic of stomach carcinomas [24] and has been linked to the likelihood of disease development and mortality [25][26][27]. However, little is known regarding the role of IL-6 in the interplay between gastric epithelial cells and macrophages, and how this may drive gastric carcinogenesis.
Here, we investigated the role of IL-6 in gastric oncogenesis and the interaction between macrophages and epithelial cells. We demonstrate that IL-6 is strongly expressed in GC and promotes several tumor hallmarks. In addition, we show that IL-6 functions in a positive regulatory cross-talk between macrophages and epithelial cells, which drives macrophage polarization and may contribute to gastric carcinogenesis.
Cells and cultures
Human immortalized gastric epithelium GES-1 and human monocyte THP-1 cell lines were routinely cultured in RPMI-1640 containing 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin, at 37 °C in a humidified incubator under 5% CO2. For the induction of THP-1-derived M0 macrophages, 100 ng/mL of phorbol 12-myristate 13-acetate (PMA) was added to THP-1 cells in RPMI-1640 for 24 h. Cultures were checked for mycoplasma on a regular basis. Peripheral blood mononuclear cells (PBMCs) were isolated using Ficoll (Amersham, Uppsala, Sweden) gradient density centrifugation [28]. The cells were suspended in 2 mL of IMDM medium supplemented with Ultraglutamine and then placed in a T25 flask at a density of 1×10^6 cells per square centimeter. The cells were left to adhere for 60 min at 37 °C, after which they were washed twice with PBS in order to eliminate any non-adherent cells. Cells were subsequently cultured in IMDM containing L-glutamine, 10% FCS, 1% penicillin-streptomycin, and 50 ng/ml GM-CSF (Sigma-Aldrich).
Stomach organoids cultures
Stomach organoids were cultured as described previously [29,30]. Briefly, both antrum and corpus biopsies were obtained from patients in the previously described Proregal cohort [31]. Gastric biopsies were washed, minced and digested using collagenase type IA (Sigma-Aldrich) to obtain a single cell suspension. Subsequently, Matrigel (Corning, New York, United States) was inoculated with cells. Cells were maintained in either expansion or differentiation medium containing: Wnt3A conditioned medium (expansion medium only), Noggin conditioned medium, R-Spondin conditioned medium, FGF10 (Peprotech, London, United Kingdom), EGF (Peprotech), B27 (Invitrogen), Gastrin (Sigma-Aldrich), TGF-βi (A-83-01, Bristol, United Kingdom), Nicotinamide (Sigma-Aldrich, expansion medium only) and RHOKi (Y-27632, Sigma-Aldrich, only during initiation). For stimulations, Matrigel was first disrupted mechanically, followed by gentle washing to remove as much Matrigel as possible without damaging the structure of the organoids. The organoid suspension was then divided equally across 24-well plates and stimulated with control medium or H. pylori.
IL-6 measurement
To investigate stimulation-induced IL-6 production in THP-1-derived macrophages and GES-1 cell lines, cells were plated at 10^6 cells/well in 6-well plates. Following a 24-hour incubation period, medium was refreshed and cells were stimulated with recombinant human IL-6 (50 ng/mL; InvivoGen, San Diego, CA) or 10^6 colony-forming units (CFU) of heat-killed H. pylori (strain ATCC-43504 [cagA+, vacA (s1/m1), iceA+, and babA2+]; Manassas, VA). IL-6 production in supernatants was determined at 6, 24, 48, 72, 96, and 120 h of stimulation by enzyme linked immunosorbent assay (ELISA) (eBioscience, San Diego, CA), as per the manufacturer's instructions. In short, the capture antibody was incubated overnight at 4 °C in an immunosorbent 96-well plate, after which each sample was added in duplicate, the detection antibody was added to bind IL-6 from the samples, and avidin-HRP was added to bind the biotinylated detection antibody. Tetramethylbenzidine (TMB) was added and the reaction was stopped after 30 min by addition of 2N H2SO4. Plates were read at 450 nm on a microplate reader (Infinite® M Nano, TECAN). Alternatively, for serum samples a non-competitive (sandwich) chemiluminescent immunoassay, the Roche Elecsys IL6 test, was used. 18 µL of sample was first incubated with IL6-specific antibodies, and then with IL6-specific antibodies labeled with ruthenium complexes to create a sandwich complex. Complexes are then magnetically trapped, and the magnetic charge causes a chemiluminescent emission that is proportional to the amount of IL6 present, with a measurement range of 1.5-5000 pg/mL and a limit of quantitation (LOQ) of 2.5 pg/mL.
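Quantification from an ELISA plate typically proceeds by fitting a four-parameter logistic (4PL) standard curve and inverting it for the samples. The sketch below is a generic illustration of that step in Python; the concentrations and optical densities are made-up placeholders, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL curve: a = response at zero, d = response at saturation, c = inflection, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical IL-6 standard curve (pg/mL vs OD450):
conc = np.array([7.8, 15.6, 31.25, 62.5, 125.0, 250.0, 500.0])
od = np.array([0.08, 0.15, 0.28, 0.52, 0.95, 1.60, 2.40])
popt, _ = curve_fit(four_pl, conc, od, p0=[0.05, 1.0, 100.0, 3.0], maxfev=10000)

def od_to_conc(y, a, b, c, d):
    """Invert the fitted 4PL to interpolate a sample concentration from its OD."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(od_to_conc(0.70, *popt))  # interpolated concentration for a sample OD of 0.70
```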
Quantitative real-time reverse-transcription polymerase chain reaction (qPCR)
Total cellular RNA was extracted and quantified using the Macherey-Nagel NucleoSpin RNA II kit (Bioke, Leiden, The Netherlands) and Nanodrop ND-1000, respectively. Then, mRNA was reverse-transcribed into cDNA using the Primescript RT Master Mix kit (Takara Bio, Saint-Germain-en-Laye, France) according to the manufacturer's instructions and stored at -20 °C. Real-time PCR was performed in a thermal cycler (GeneAmp PCR system 9700; Thermo Fisher) using SYBRGreen-based real-time PCR (Applied Biosystems). Table S1 contains a list of the quantitative reverse-transcription PCR primer pairs. The annealing temperature for all primer combinations was 58 °C. The ΔΔCt method was used to quantify the relative gene expression of each cytokine in each subgroup.
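The ΔΔCt quantification mentioned above reduces to a few lines of arithmetic; the sketch below uses made-up Ct values (e.g. IL-6 against a housekeeping gene) purely as an illustration.

```python
def fold_change_ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression 2^(-ddCt): dCt = Ct(target) - Ct(reference gene),
    ddCt = dCt(stimulated) - dCt(control)."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: stimulated (24.1 / 18.0) vs control (26.5 / 18.2)
print(fold_change_ddct(24.1, 18.0, 26.5, 18.2))  # ~4.6-fold induction
```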
Bioinformatic analysis
To investigate the increase in IL-6 expression in samples related to H. pylori infection or stimulation, we performed data analysis using the DESeq2 package in R (version 4.0.2). We utilized publicly available gene expression datasets, including GSE186902, GSE162056, GSE25146, GSE230869, GSE27411, and GSE231337.
Additionally, we examined TCGA-GTEx data and perused individual RNA sequencing datasets (GSE191275) to verify the upregulation of IL-6 expression in gastric cancer samples. We conducted the analysis in R to assess the differential expression of IL-6 in these datasets.
Furthermore, we utilized the TCGA dataset to analyze correlations between IL-6 expression in gastric cancer patients and survival outcomes. Survival analysis was performed using GEPIA (http://gepia.cancer-pku.cn/), leveraging the TCGA gastric cancer data.
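For readers reproducing the survival analysis outside GEPIA, the median split and log-rank comparison look roughly as follows in Python; the lifelines package is one common choice, and the data frame columns ('il6_expr', 'time', 'event') are hypothetical names for the per-patient TCGA values.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def median_split_survival(df: pd.DataFrame):
    """Kaplan-Meier curves and log-rank p-value for IL-6 high vs low (median split)."""
    high = df["il6_expr"] > df["il6_expr"].median()
    km_hi, km_lo = KaplanMeierFitter(), KaplanMeierFitter()
    km_hi.fit(df.loc[high, "time"], df.loc[high, "event"], label="IL-6 high")
    km_lo.fit(df.loc[~high, "time"], df.loc[~high, "event"], label="IL-6 low")
    res = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                       df.loc[high, "event"], df.loc[~high, "event"])
    return km_hi, km_lo, res.p_value
```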
The TCGA gastric cancer dataset was also used to explore the correlation between IL-6 expression and specific macrophage markers, namely the M1 marker CD80 and the M2 marker CD163. We conducted correlation analysis using GEPIA to determine the relationship between IL-6 expression levels and the expression of CD80 and CD163.
Tumor immune infiltration was investigated using the CIBERSORT algorithm via the "CIBERSORT" R package (CIBERSORT R script v1.03; http://cibersort.stanford.edu/) to determine the relative abundance of 22 categories of tumor-infiltrating immune cells and non-immune cells in the gastric cancer tumor microenvironment, using the TCGA and GTEx datasets.
Single cell RNAseq data analysis
We analyzed a total of 15 samples from 10 patients, including three non-atrophic gastritis (NAG), three chronic atrophic gastritis (CAG), six IM, and three GC samples. Two collections of raw scRNAseq data were used: 1) gene expression omnibus (GEO) accession number GSE134520, consisting of 3 NAG, 3 CAG, and 6 IM samples; 2) accession number phs001818.v2 of the database of Genotypes and Phenotypes (dbGaP), containing three GC samples. These samples represented the entire spectrum of disease, from gastritis to GC. The outputs from the expression count meta-matrix were converted into data objects using the R package 'Seurat'. Individually loaded Seurat data objects were merged iteratively using the 'FindIntegrationAnchors' function. Cells that expressed fewer than 200 genes, had more than 20% mitochondrial genes, or had an outlying number of UMIs were excluded. With a default scale parameter of 10,000, the 'NormalizeData' function was used to normalize the data to log scale. Cell types were identified using markers defined in the original reports from which the data derived. Each marker panel is listed in Supplementary datafile Table S2.
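The filtering steps above were done with Seurat in R; for orientation, an equivalent quality-control pass in Python/scanpy would look roughly like this (the input file name is a hypothetical placeholder).

```python
import scanpy as sc

adata = sc.read_h5ad("gastric_scrnaseq_merged.h5ad")  # hypothetical merged count matrix

# Mitochondrial content and per-cell gene counts, mirroring the thresholds above
adata.var["mt"] = adata.var_names.str.startswith("MT-")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], percent_top=None,
                           log1p=False, inplace=True)
adata = adata[(adata.obs["n_genes_by_counts"] >= 200) &
              (adata.obs["pct_counts_mt"] < 20.0)].copy()

# Log-normalization with a scale factor of 10,000, analogous to 'NormalizeData'
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
```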
The default settings of the R package 'CellChat' were used to analyze cellular connections.
Cell viability assay
A 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide (MTT) assay was performed to quantify viable cell numbers. After IL-6 stimulation of GES-1 cells for 0, 3, and 24 h, MTT was added to a final concentration of 0.5 mg/ml, and plates were incubated for 3 h at 37 °C. After that, 100 µl dimethyl sulfoxide (DMSO) was added to each test well. Prior to analyzing the absorbance at 540 nm, the plate was incubated for 10 min at 37 °C and subsequently shaken for one minute to dissolve the crystals.
Cell migration assay
To assess cellular migration, we performed wound-healing assays. To this end, GES-1 cells were plated at 10^5 cells/ml and cultured until 50% confluence in a 6-well plate. The cells were then stimulated for 24 h with recombinant human IL-6 (50 ng/mL; InvivoGen, San Diego, CA) or 10 CFU of heat-killed H. pylori (strain ATCC-43504, Manassas, VA). A straight scratch line was then created using a sterile 1000 µl pipette tip. Dishes were carefully rinsed to eliminate detached cells, and fresh culture medium with stimulator was added. At 0, 6, 12, and 24 h, cells that migrated into the scratch line were photographed. ImageJ was used to analyze the migrated distance in these images, and migration was presented as migration speed.
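The conversion from ImageJ distance measurements to a migration speed is a simple linear fit; the sketch below (with made-up wound widths) shows one way to express it as the advance of a single wound edge per hour.

```python
import numpy as np

def migration_speed(widths_um, times_h):
    """Average speed (um/h) of one wound edge: each edge closes half of the
    width reduction, so fit edge advance = (w0 - w(t)) / 2 against time."""
    widths = np.asarray(widths_um, dtype=float)
    times = np.asarray(times_h, dtype=float)
    edge_advance = (widths[0] - widths) / 2.0
    slope, _ = np.polyfit(times, edge_advance, 1)
    return slope

# Hypothetical scratch widths measured at 0, 6, 12 and 24 h:
print(migration_speed([820.0, 640.0, 470.0, 180.0], [0.0, 6.0, 12.0, 24.0]))
```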
Co-culture experiments
The GES-1 cell line and THP-1-derived macrophages were co-cultured using a cell culture insert (Corning, NY, USA) with a 0.4-μm porous membrane to separate the upper and lower chambers [32]. Inserts were coated with 5 g/cm² of bovine collagen type I. The coating was permitted to set for 2 h at 37 °C. Inserts were then washed three times with 1x PBS. THP-1 monocytes (5 × 10^5 cells/ml) were seeded onto a plate and stimulated to differentiate into macrophages by the addition of 100 ng/ml PMA (Sigma Chemical) for 48 h. The GES-1 cells were placed in the upper chamber at a density of 2.5 × 10^5 cells/ml for 24 h. Differentiated THP-1 cells were harvested by trypsin dissociation and counted. 1 × 10^5 cells were seeded in a 50 µl droplet on the bottom of a transwell insert containing a confluent GES-1 monolayer. Inserts were incubated for 2 h and then reversed. The inserts, with the THP-1-derived macrophages and GES-1 cells separated by the membrane, were then placed directly in six-well plates, and the resulting co-culture systems were incubated for 48 h with or without 10^6 colony-forming units of heat-killed H. pylori. Medium was collected from the apical (insert) and basolateral side (lower well).
Patients
48 patients with histologically proven stomach cancer at The First Affiliated Hospital of Zhengzhou University participated in this study from February 2023 to August 2023. The study was approved by the hospital review board after receiving the informed consent of every patient. All patients underwent C13 breath testing to determine their H. pylori infection status, and serum samples were taken to assess their IL-6 levels via the Roche Elecsys IL6 test. We procured both tumor and adjacent tissues from 7 of these patients, followed by sectioning and subsequent immunohistochemical staining analysis.
Immunohistochemistry
In brief, 4 µm sections were deparaffinized twice with xylene before being rehydrated using graded ethanol solutions (twice 100% ethanol, 96% ethanol, and 70% ethanol). Slides were rinsed several times with fresh deionized water and then washed once with tap water. Heat-induced epitope retrieval was achieved in sodium citrate buffer (pH 6.0) for 15 min. Slides were gently cooled for 45 min after epitope retrieval, then washed three times in PBS. A PBS/3% H2O2 solution was used to inhibit endogenous peroxidases for 10 min at room temperature (RT). Slides were washed with PBS before being blocked with 10% normal goat serum in PBS for one hour at RT. The CD163 primary antibody (CD163 (D6U1J) Rabbit mAb, #93498, Cell Signaling) was then added and incubated at 4 °C overnight. Slides were washed with PBS. As a secondary antibody, rabbit EnVision (DAKO) was added and incubated for 30 min at RT. A Tris/HCl solution (pH 7.6) containing 0.03% H2O2 and 0.5 mg/ml diaminobenzidine (DAB) was used for visualization, with a 10 min incubation. Slides were counterstained with hematoxylin, rinsed with tap water, dehydrated in 70% ethanol, then 96% ethanol, twice in 100% ethanol, and finally twice in xylene, and mounted with Pertex and a coverglass. Stainings were scored based on the intensity and proportion of positive cells using the H-score: (0 × % negative cells) + (1 × % weak positive cells) + (2 × % moderate positive cells) + (3 × % strong positive cells).
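Because the H-score formula is a plain weighted sum, it is easy to compute programmatically; a minimal sketch:

```python
def h_score(pct_negative, pct_weak, pct_moderate, pct_strong):
    """H-score = 0*%neg + 1*%weak + 2*%moderate + 3*%strong (range 0-300)."""
    total = pct_negative + pct_weak + pct_moderate + pct_strong
    assert abs(total - 100.0) < 1e-6, "cell percentages must sum to 100"
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong

print(h_score(40, 30, 20, 10))  # -> 100
```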
IL-6 is highly expressed in GC tissues and drives oncogenic characteristics
We first compared the IL-6 mRNA expression in publicly available datasets from healthy stomach samples (n=174, GTEx) and tumor samples (n=414, TCGA stomach adenocarcinoma) and showed that IL-6 expression was significantly enhanced in tumor tissues (P ≤ 0.0001) (Fig. 1A). This was further substantiated through analysis of another publicly available dataset (GSE191275), which indicates a significantly higher IL-6 expression in GC tissues (n=10) compared to non-atrophic gastritis (NAG, n=10) and intestinal metaplasia (IM, n=10) (P ≤ 0.0001) (Fig. 1B). In addition, high IL-6 expression (above the 50th percentile) was associated with significantly decreased survival rates in GC patients (TCGA data, Fig. 1C).
To investigate whether IL-6 may directly contribute to gastric carcinogenesis, we employed an in vitro model using non-transformed gastric epithelial cells (GES-1). While IL-6 did not affect their viability, as indicated by the MTT assay (Fig. 1D), stimulation of these cells with IL-6 significantly enhanced their migration capacity, as demonstrated by the wound healing assay (Fig. 1E). These results suggest that IL-6 predominantly affects cell migration rather than proliferation. In addition, the expression of genes involved in epithelial to mesenchymal transition (EMT: SNAI1, encoding Snail, and VIM, encoding Vimentin) was significantly increased after 48 h of IL-6 stimulation, as was that of genes involved in extracellular matrix remodeling (the matrix metalloproteinases MMP2 and MMP9) (Fig. 1F).
IL-6 mediates a cross-talk involving macrophages and tumor cells in the tumor microenvironment
IL-6 has been thought to play a key role in the cross-talk between different cell lineages within the tumor microenvironment. To investigate the cell types involved in GC, we utilized the CellChat package to determine the inferred intercellular communication network for IL-6 signaling (Supplementary Fig. S1). The ensuing circle plot reveals that a strong association exists through IL-6 signaling between macrophages and the gastric epithelium as well as adenocarcinoma cells. Indeed, the main source of IL-6 signaling in this setting is macrophages (see heat map, Fig. 2F), which in turn show the largest association with gastric epithelial and adenocarcinoma cells. To investigate the importance of this finding, we analyzed the distribution of inferred immune cell subsets in gastric cancer using the TCGA dataset. Our findings reveal that M2 macrophages in particular are the second largest infiltrating immune cell subset among the 22 immune cell types evaluated, and significantly increased in gastric cancer tissues (Supplementary Fig. S2A). To validate these observations, we analyzed single cell RNAseq data sets from three NAG, three CAG, six IM and three GC samples (Supplementary Fig. S2B), and observed a consistent increase in the percentage of macrophages in GC samples compared to premalignant lesions. In particular, the percentage of CD163+ M2 macrophages increased in gastric cancer samples (Supplementary Fig. S2C), showing that M2 macrophages are an integral part of the gastric microenvironment during carcinogenesis. Furthermore, immunohistochemical analysis of GC and adjacent paracancer tissues showed that almost 30% of tumor tissues exhibited high expression of CD163, while none of the adjacent paracancer tissues showed a high H-score. Conversely, 40% of adjacent paracancer tissues vs 30% of cancer tissues demonstrated a low H-score (Supplementary Fig. S2D). Supplementary Fig. S1 further indicates that IL-6 signaling displays both autocrine loops as well as reciprocal paracrine loops between the diverse cell lineages within this microenvironment.
H. pylori and IL-6-mediated macrophage polarization
To further explore the relationship between IL-6 and M2 macrophage polarization, we again queried the TCGA gastric tumor dataset. We observed a significant positive correlation between IL-6 expression and the M2 macrophage marker CD163 (R=0.36, P=1.4e-13), while the correlation between IL-6 expression and the M1 macrophage marker CD80 was less strong (R=0.16, P=0.00093) (Fig. 2A1, A2). One of the first triggers contributing to IL-6 production and macrophage polarization in gastric carcinogenesis may be infection with H. pylori, as higher IL-6 mRNA levels are present in gastric mucosa (n=6 vs n=6, P=0.0891, Fig. 2B) as well as serum from H. pylori-infected healthy individuals (n=7 vs n=4, P=0.01, Fig. 2C) and cancer patients (n=24 vs n=24, P=0.3847, Supplementary Fig. S3) compared to uninfected controls. To directly investigate whether H. pylori and/or IL-6 affect macrophage polarization, we conducted stimulation experiments on THP-1 cells, a monocytic cell line which can be induced to differentiate towards the macrophage lineage through PMA stimulation [34]. Somewhat unexpectedly, stimulation of THP-1-derived macrophages with H. pylori resulted in a significantly enhanced expression of the M1 marker CD80 (Fig. 2D1), with limited effect on CD163 expression (Fig. 2D2). Conversely, IL-6 enhanced expression of CD163 on THP-1-derived macrophages, but did not affect CD80 expression on these cells (Fig. 2D1, D2). These experiments were verified using peripheral blood-derived macrophages, which showed similar results (Fig. 2E1, E2). Thus, acute H. pylori infection may skew macrophages to an M1 phenotype, while IL-6 (potentially as a result of long-term infection) polarizes macrophages towards the M2 lineage. This dual effect highlights the intricate relationship between the duration of H. pylori infection and cytokine levels.
IL-6 enhances its own production in gastric cells and macrophages
To investigate the cross-talk between different cell populations in IL-6 production in more detail, we employed GES-1 epithelial cells as well as THP-1 cells. First, we verified that these models were able to generate IL-6. Consistent with data from publicly available RNA sequencing datasets (Fig. 3A) and primary gastric organoids (Fig. 3B, C), stimulation of GES-1 cells with H. pylori results in a significant increase in IL-6 mRNA production within 24 h (Fig. 3D1), with secretion of IL-6 showing a rapid start and subsequent plateau (Fig. 3E). Differentiated THP-1 cells also significantly upregulate their IL-6 mRNA production upon H. pylori stimulation (Fig. 3D2), with a more gradual IL-6 protein release, which overtakes GES-1-produced IL-6 levels after 96 h (Fig. 3E).
Having confirmed the IL-6 production ability of GES-1 and THP-1 cells, we examined whether they exhibit an autocrine activation pattern, as suggested earlier (Supplementary Fig. S1). Although stimulation with recombinant IL-6 did not significantly affect IL-6 mRNA levels in GES-1 and THP-1 cells (Fig. 3F1, F2), Fig. 3G demonstrates that the medium obtained from both cell lines contained IL-6 levels that exceeded the added recombinant protein levels. This suggests that IL-6 induces its own protein expression in these cells, highlighting an autocrine activation mechanism.
H. pylori-stimulated macrophages further induce IL-6 production in gastric cells
To investigate the role of the IL-6 feedback loop in the cross-talk between different cell populations, conditioned medium was collected from H. pylori-stimulated or non-stimulated THP-1-derived macrophages. Subsequently, we treated GES-1 cell lines with these conditioned media, using culture media with or without H. pylori as controls (Fig. 4A). Following stimulation of GES-1 cells with THP-1-conditioned media, we observed a continuous increase in the secretion of IL-6. Intriguingly, conditioned medium from H. pylori-stimulated macrophages (which no longer contains H. pylori) induced a more pronounced increase in IL-6 levels compared to direct stimulation of GES-1 cells with H. pylori or conditioned medium from unstimulated THP-1 cells (Fig. 4B). Consistent with these findings, IL-6 mRNA levels also showed the highest elevation after stimulation with the H. pylori-stimulated THP-1 conditioned media (Fig. 4C). Moreover, GES-1 cell lines stimulated with the conditioned medium from H. pylori-stimulated macrophages exhibited the highest increase in expression of EPCAM, SNAI1, VIM, MMP2, and MMP9 mRNA compared to control conditions (Fig. 4D). To further validate the cross-talk observed between GES-1 cells and THP-1-derived macrophages mediated by the IL-6 feedback loop, we co-cultured these two cell types in a transwell culture system, allowing subsequent harvesting of cells and culture media separately from the basolateral and apical sides of the insert (Fig. 5A). Again, co-culture of these cells resulted in an enhanced release of IL-6 from GES-1 cells (Fig. 5B) and THP-1-derived macrophages (Fig. 5C). In addition, co-culture with H. pylori resulted in a synergistic effect, increasing levels of IL-6 more than either H. pylori alone or co-culture without the presence of H. pylori, in particular for THP-1 cells (Fig. 5B and C). This was also reflected in the IL-6 mRNA levels obtained from either GES-1 cells (Fig. 5D) or THP-1-derived macrophages (Fig. 5E). These findings suggest that stomach epithelial cells and macrophages increase each other's IL-6 levels, and that this effect is synergistically enhanced when either one or both cell lineages are primed with H. pylori.
IL-6 induces a positive feedback loop between GES-1 cells and THP-1-derived macrophages
Continuing our investigation into the IL-6 feedback loop and its impact on cross-talk between different cell populations in gastric carcinogenesis, we performed additional experiments as in Fig. 5A, but priming THP-1 cells with IL-6 instead of H. pylori. Fig. 6A shows that GES-1 cells stimulated with IL-6 start increasing their IL-6 secretion (see also Fig. 3G). However, IL-6 secretion is even further enhanced when GES-1 cells are treated with conditioned medium from THP-1-derived macrophages that have previously been primed with IL-6 (Fig. 6A). Conditioned medium of IL-6-primed macrophages also induced the highest expression level of the EMT-associated genes and cancer hallmarks SNAI1, VIM, MMP2, MMP9 and EPCAM in GES-1 cells (Fig. 6B). We subsequently performed the reverse experiment, i.e. priming GES-1 cells with IL-6 and treating THP-1 macrophages with conditioned medium from GES-1 cells. As for gastric epithelial cells, IL-6 production in macrophages is significantly enhanced when treated with conditioned medium from IL-6-primed GES-1 cells (Fig. 6C). Together, these findings indicate the existence of an IL-6 feedback loop which can amplify IL-6 signaling between macrophages and gastric epithelial cells.
Discussion
The mechanism behind H. pylori-associated inflammation and GC development remains incompletely understood, and identifying factors that could contribute to these processes remains important. A previous meta-analysis identified IL-6 as a potential key factor in this process [11], making it a subject for further investigation. We aimed to explore the role of IL-6 in H. pylori-associated gastric cancer and show that macrophages and epithelial cells demonstrate both autocrine and paracrine IL-6 induction, which is enhanced by H. pylori.
Overall, our data suggest that IL-6 can enhance its own levels in the gastric microenvironment, which can contribute to the development of carcinogenic properties. This is in line with previous studies showing that IL-6 mediates cellular cross-talk in the microenvironment in other cancer types [35]. A self-stimulatory role for IL-6 has been shown, for instance, for fibroblasts in vocal fold leukoplakia [36] and for non-small cell lung carcinoma (NSCLC) [37]. The action of IL-6 is regulated by an intracellular network of effectors, including its receptor (IL-6R and gp130), engagement of which results in phosphorylation of Janus kinase (JAK)2 and its downstream effector signal transducer and activator of transcription (STAT)3. Activated STAT3 translocates to the nucleus, where it initiates the transcription of a series of genes responsible for cell growth, apoptosis inhibition, and cell cycle progression [38]. Accumulation of unphosphorylated STAT3 in response to IL-6 can subsequently result in autocrine expression of the IL-6 gene [39], while nuclear STAT3 was shown to bind to the IL6 promoter [40]. However, other intermediate signaling events may also contribute to the IL-6-induced positive feedback loop. In addition, H. pylori is also a known activator of STAT3, which in part may account for its induction of IL-6 production [41]. However, during inflammation and tissue repair, the action of IL-6 is strictly controlled by negative regulators such as the suppressor of cytokine signaling (SOCS) family members [42]. H. pylori has been found to induce hypermethylation of SOCS1 [43], which can reverse the suppression of IL-6. Additionally, IL-6 itself has been found to increase hypermethylation of SOCS3 [44], all of which may contribute to a positive feedback loop in the H. pylori-infected tumor microenvironment.
While autocrine IL6-JAK-STAT3 signaling has been shown before in GC cells [45], the role of this autocrine loop in the cross-talk between different cells of the microenvironment remained unclear. Previous studies showed that IL-6 derived from mesenchymal cells can drive polarization of M2 macrophages in GC [46,47]. Our data suggest that IL-6 produced in the local microenvironment may also contribute to M2 polarization, as well as subsequent IL-6 production by these cells. Such a mechanism appears to take place in lung cancer, where conditioned medium from Lewis lung cancer cells was shown to drive M2 polarization and IL-6 production in PBMCs [48].
Our data indicate that whilst IL-6 drives M2 polarization of macrophages, H. pylori may mainly affect M1 polarization, which is in line with previous results [49]. This would fit a hypothesis in which initial infection with H. pylori in the (then) healthy stomach induces an inflammatory phenotype associated with M1 polarization, whereas once a chronic infection has been established, ongoing auto-/paracrine IL-6 production induces a switch towards M2 polarization and might promote gastric carcinogenesis.
Conclusion
This study confirms that IL-6 within the tumor microenvironment plays a crucial role in intercellular communication. We demonstrate that IL-6 acts in a positive feedback loop in both an autocrine and paracrine fashion between macrophages and gastric epithelial cells, which can be further enhanced by H. pylori. These data provide a step forward in our understanding of the role of IL-6 in GC. Further research is required to elucidate the intracellular mechanisms involved.
Funding
Authors from the First Affiliated Hospital of Zhengzhou University were funded through Funding for Scientific Research and Innovation Team (QNCXTD2023022).
Fig. 1 .
Fig. 1. IL-6 levels are increased in gastric cancer tissue. A. Gastric cancer tumor samples (GC; TCGA-STAD) present a significantly higher IL-6 mRNA expression than normal stomach samples (Control; GTEx dataset). B. Gastric cancer (GC) samples show a significantly higher IL-6 expression than intestinal metaplasia (IM) or non-atrophic gastritis (NAG) samples (GSE191275 dataset). C. Patients with high IL-6 mRNA levels in their GC tissues show a significantly reduced overall survival time (TCGA dataset). D. MTT assay indicates that IL-6-stimulated GES-1 cells do not exhibit an increased proliferation ability. E. IL-6 treatment of GES-1 cells significantly enhances their migratory capacity in wound healing assays. F. GES-1 cells stimulated with IL-6 present an increase in relative mRNA expression of the gastric cancer hallmarks SNAI1, VIM, MMP2, MMP9 and EPCAM, respectively. Mean ± standard deviation of three independent experiments is shown. *P<0.05, **P<0.01, ***P<0.001.
Fig. 2 .
Fig. 2. IL-6 drives M2 macrophage polarization. A. In gastric cancer samples from the TCGA dataset, IL-6 is significantly correlated with the M2 macrophage gene CD163, and to a lesser extent with the M1 macrophage gene CD80. B. Analysis of a publicly available RNA sequencing dataset illustrates that IL-6 mRNA levels are elevated in antrum gastric tissue from Helicobacter pylori (H.p)-infected individuals. C. Serum levels of IL-6 protein are higher in Helicobacter pylori (H.p)-infected individuals compared to non-infected controls. D, E. Flow cytometric analysis of expression of CD163 and CD80 on THP-1-derived macrophages (D) or primary monocyte-derived macrophages (E) stimulated with IL-6 or H. pylori. Results are presented as mean fluorescence intensity of the cell populations. Mean ± standard deviation of three independent experiments is shown. *P<0.05, **P<0.01, ***P<0.001. F. Heat map generated using the CellChat function illustrating the sources and targets of IL-6 signaling and the total outgoing and incoming interactions of IL-6 scores.
Fig. 3 .
Fig. 3. H. pylori as well as IL-6 enhance IL-6 expression in macrophages and epithelial cells. A. Publicly available RNA datasets were searched for gastric epithelium cell lines stimulated with H. pylori and queried for IL-6 expression. Four datasets were found, all showing an increase in IL-6 expression upon H. pylori stimulation of gastric cell lines. B. Gastric organoids were stimulated with H. pylori and IL-6 mRNA levels were determined by qPCR. C. Gastric organoids stimulated with H. pylori demonstrate a significant increase in IL-6 release as determined by ELISA. D. IL-6 mRNA expression in GES-1 (D1) and THP-1 (D2) cell lines after stimulation with heat-killed H. pylori. E. IL-6 protein concentration in medium collected from GES-1 and THP-1 cell lines after stimulation with heat-killed H. pylori. Lower panels show the time curve of IL-6 levels; upper panels show the comparison per time point of collection. F. IL-6 expression in GES-1 (F1) and THP-1 (F2) cell lines after stimulation with human recombinant IL-6. G. IL-6 concentration in medium collected from GES-1 and THP-1 cell lines after stimulation with human recombinant IL-6. Lower panels show the time curve of IL-6 levels after subtraction of the added recombinant protein (Δ); upper panels show the comparison per time point of collection. Mean ± standard deviation of three independent experiments is shown. *P<0.05, **P<0.01, ***P<0.001.
Fig. 4 .
Fig. 4. H. pylori stimulation of THP-1-derived macrophages causes release of factors triggering IL-6 production in GES-1 cells. A. Schematic representation of the experiment. Conditioned media (CM) were collected from THP-1-derived macrophages and filtered to remove cell debris and H. pylori. GES-1 cells were subsequently stimulated with these CM, with control wells stimulated with H. pylori directly or medium control. B. IL-6 protein concentration in medium collected from GES-1 cells after stimulation with CM from THP-1 cells (treated or non-treated with H. pylori) or heat-killed H. pylori. Lower panels show the time curve of IL-6 levels; upper panels show the comparison per time point of collection. C. IL-6 mRNA expression in GES-1 cells stimulated as shown in panel A. D. Relative mRNA expression of the gastric cancer hallmarks SNAI1, VIM, MMP2, MMP9 and EPCAM, respectively, in GES-1 cells stimulated as depicted in panel A. Mean ± standard deviation of three independent experiments is shown. *P<0.05, **P<0.01, ***P<0.001.
Fig. 5. Co-culture of THP-1 and GES-1 cells in the presence of H. pylori synergistically enhances IL-6 production. A. Schematic presentation of the co-culture set-up. Medium and cells were collected from co-cultures of THP-1-derived macrophages and GES-1 cells (with and without addition of H. pylori) on the basolateral side (THP-1) and apical side (GES-1), and compared to mono-cultures of THP-1 or GES-1 in the absence or presence of H. pylori, respectively. B. IL-6 concentration in medium obtained from GES-1 cells. C. IL-6 concentration in medium obtained from THP-1 cells. D. Quantification of IL-6 mRNA levels in GES-1 cells stimulated as described in panel A. E. Quantification of IL-6 mRNA in THP-1-derived macrophages stimulated as described in panel A. Mean ± standard deviation of three independent experiments is shown. *P<0.05, **P<0.01, ***P<0.001
Fig. 6. IL-6 participates in a positive feedback loop between GES-1 cells and THP-1-derived macrophages. A. IL-6 protein concentration in medium collected from GES-1 cells after stimulation with CM from THP-1 cells (treated or non-treated with human recombinant IL-6) or with human recombinant IL-6 directly. Lower panels show the time curve of IL-6 levels after subtraction of added recombinant protein (Δ); upper panels show the comparison per time point of collection. B. Relative mRNA expression of the gastric cancer hallmarks SNAI1, VIM, MMP2, MMP9, and EPCAM, respectively, in GES-1 cells stimulated as described for A. C. IL-6 protein concentration in medium collected from THP-1 cells after stimulation with CM from GES-1 cells (treated or non-treated with human recombinant IL-6) or with human recombinant IL-6 directly. Lower panels show the time curve of IL-6 levels after subtraction of added recombinant protein (Δ); upper panels show the comparison per time point of collection. Mean ± standard deviation of three independent experiments is shown. *P<0.05, **P<0.01, ***P<0.001. | 7,949.2 | 2024-02-28T00:00:00.000 | [
"Medicine",
"Biology"
] |
Bremsstrahlung as a probe of baryon stopping in heavy-ion collisions
In collisions between heavy ions at ultra-relativistic energies the participating protons lose energy, which is converted into new particles. As the protons slow down, they emit bremsstrahlung radiation. The yield and angular distribution of the emitted radiation are sensitive probes of how much energy the incoming protons have lost. In this paper, the spectrum of bremsstrahlung radiation is calculated for different stopping scenarios, and the results are compared with the expected yield of photons from hadronic interactions.
Introduction
In collisions between heavy ions at relativistic energies there is convincing evidence that a new state of matter, a quark-gluon plasma, is formed [1]. In this state, the quarks and gluons are no longer confined to nucleons but can move freely over distances large compared with the size of a single nucleon. The energy density in the quark-gluon plasma formed in Pb+Pb collisions at the LHC has been estimated from measurements of the total transverse energy at midrapidity to be on the order of 12-14 GeV/fm³ at a time of 1 fm/c after the collision [2,3]. This is far above the densities of 0.2-0.5 GeV/fm³ that lattice QCD calculations find are required for deconfinement [4].
The energy deposited in the quark-gluon plasma comes from the energy lost by the incoming nuclei, and one of the most fundamental questions one can address in the study of high-energy heavy-ion collisions is therefore how much energy the incoming baryons lose. This is usually referred to as the amount of baryon stopping. One can have scenarios ranging from complete stopping, where the incoming baryons lose all their energy, to full transparency, where the baryons lose no or very little energy. Full stopping would imply that all baryons end up close to midrapidity, whereas full transparency would leave the baryons near beam rapidity. Since baryon number is conserved, the fate of the baryons in the colliding nuclei can be determined from the rapidity distribution of net baryons, that is, dn_B/dy − dn_B̄/dy. For experimental reasons, one is often restricted to studying the net-proton rather than the net-baryon distributions.
Results from the Relativistic Heavy-Ion Collider (RHIC) [5-7] and fixed-target experiments at the CERN SPS [8] show that the amount of stopping decreases with increasing collision energy in the range √s_NN = 7−200 GeV. There have been attempts to explain the energy loss in this energy range with hadron transport models [9] and with models based on the Color Glass Condensate [10,11].
At the CERN SPS and RHIC, identified protons and anti-protons could be measured down to low transverse momenta p_T ∼ 0 over a wide rapidity range, and the net-proton rapidity distributions could thus be determined. Such measurements were performed by the NA49 [8] and BRAHMS [6,7] experiments. At the LHC, the situation is different. The only experiment which has measured identified protons and anti-protons at low p_T is ALICE. The results have shown that in the central rapidity region |y| ≤ 0.5 there are no net protons [12-14]. Beyond that, however, there are no experimental constraints on how the net protons are distributed.
To improve this situation, we propose to use the bremsstrahlung photons emitted when the nuclei slow down. This idea was first suggested before the start of the relativistic heavy-ion programs at CERN and RHIC [15,16]. Before the start-up of RHIC, several studies were made where this process was considered [17-19]. These also included a proposal to build a dedicated detector to study this radiation [18], but those plans were never realized. Recently, the idea was brought up again in the context of the LHC [20].
The previous studies mentioned above all use a similar, semi-classical approach to calculate the bremsstrahlung spectrum, based on the description in [21]. Our calculations will follow the same path. We will, however, implement improved stopping scenarios which are either based on model calculations or phenomenological and consistent with existing data. The stopping scenarios considered in [20] are simplified and do not take into account the fact that the central region (|y| ≤ 0.5) at the LHC is baryon free. This was, however, recently followed up by a study of phenomenological stopping scenarios where the central region is almost baryon free [22].
We will also, for the first time, make a detailed estimate of the background from hadronically produced photons, primarily from the decay of π⁰ mesons. This background is obtained from simulations with PYTHIA 8.3 [23]. The goal is to determine in what regions of phase space one can expect bremsstrahlung photons to provide a realistic measure of the nuclear stopping. One can, in addition to photons from hadronic interactions, also expect a large background from secondary photons produced in the detector material. The latter is specific to a given experiment and its material budget and is thus beyond the scope of this paper.
The bremsstrahlung spectrum
The energy radiated per solid angle from a current J(r, t) is given by the expression in [21]. The vector n is a unit vector in the direction of the photon, and here and throughout we use units where ħ = c = 1. We choose coordinates where the incoming beams move along the z-axis and the photon is emitted in the xz-plane at angle θ, giving n = (sin(θ), 0, cos(θ)).
In the center of mass, the incoming nuclei are Lorentz contracted in the longitudinal direction to a size ∼ R/γ, where R is the nuclear radius and γ the Lorentz factor of the beam. For a lead nucleus at the LHC, this corresponds to a longitudinal size of about 0.003 fm. It is thus justified to ignore the longitudinal extension relative to the transverse one and to write the currents for the incoming nuclei accordingly. Here, σ is the nuclear electric charge density in the transverse plane, v_0 = tanh(y_b) the velocity, where y_b is the beam rapidity, and θ(t) the Heaviside step function. The charge density is normalized to ∫σ d²r_⊥ = Z, where Z is the number of protons in the beam nucleus. The outgoing protons will have a distribution in the transverse plane, which we assume is the same as for the incoming particles, and a distribution in velocities, which may or may not depend on the position in the transverse plane. Writing the velocity in terms of the rapidity, v(y) = tanh(y), and the corresponding density as ρ(y, r_⊥), one obtains the outgoing current. The density of the outgoing protons is normalized to 2Z, since it includes the contribution from both incoming nuclei. With the total current J = J_+ + J_- + J_f, the radiated energy follows; integrating by parts in the integral over time, one obtains Eq. (6). The time, Δt, and longitudinal distance over which the protons are slowed down can be expected to be small compared with the transverse size, Δt ≪ R. For low-energy photons (ω ≪ 1/Δt), it is therefore justified to neglect the time and longitudinal components in the phase factor. With these assumptions, the integrals over time and z in (6) can be performed. Here, α = e²/4πε_0 is the fine structure constant. This result is in agreement with [18]. To do the integral over the transverse dimensions, one has to know the function ρ(y, r_⊥). It is conceivable that the rapidity loss of the protons depends on the transverse coordinate; protons close to the center of the nuclei can be expected to lose more energy than those on the periphery. Previous studies have, however, found that the dependence of ρ(y, r_⊥) on the transverse position has only a minor effect on the spectrum of bremsstrahlung photons [18,19]. Moreover, the models we will use for the rapidity loss of the protons provide the average loss, independent of the position in the transverse plane. We will therefore ignore the dependence on r_⊥ here and assume ρ(y, r_⊥) = ρ(y). The integral over r_⊥ can then be performed, and the spectrum of emitted photons can be written in terms of F(Q), the nuclear form factor obtained from a Fourier transform of the nuclear charge distribution ρ_A (Eq. 9). We use a form factor with R_A = 6.62 fm, ρ_0 = 0.161 fm⁻³, and a = 0.70 fm for a Pb nucleus; this parameterization has been shown to reproduce the Fourier transform of a Woods-Saxon distribution in configuration space very well [24] (a numerical sketch of such a form factor is given below). One can note that for low-energy photons and small emission angles, ω sin(θ) ≪ 1/R, the form factor in Eq. 9 is approximately 1. The energy and angular dependencies then factorize, with the energy dependence given by 1/ω and the angular dependence by a function of the emission angle alone.
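For illustration, a minimal numerical sketch of a hard-sphere-folded-with-Yukawa ("Klein-Nystrand"-style) form factor is given below. Whether this is the exact parameterization of Ref. [24] is an assumption on our part; only the Pb parameters R_A = 6.62 fm and a = 0.70 fm are taken from the text.

```python
import numpy as np

def form_factor(q, R_A=6.62, a=0.70):
    """Normalized nuclear form factor F(q); q in fm^-1, with F(0) = 1."""
    q = np.atleast_1d(np.asarray(q, dtype=float))
    qR = np.where(q > 0, q * R_A, 1.0)  # placeholder at q = 0, fixed below
    # Fourier transform of a uniform sphere, normalized to 1 as q -> 0:
    sphere = 3.0 * (np.sin(qR) - qR * np.cos(qR)) / qR**3
    F = sphere / (1.0 + (a * q) ** 2)   # Yukawa factor models surface diffuseness
    return np.where(q > 0, F, 1.0)

# Check the small-argument limit quoted in the text: for omega*sin(theta) << 1/R
# the form factor is ~1 (hbar*c = 0.1973 GeV fm converts GeV to fm^-1).
HBARC = 0.1973
omega, theta = 0.1, 1e-3  # GeV, rad (illustrative values)
print(form_factor(omega * np.sin(theta) / HBARC))  # ~= 1
```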
To proceed with the calculations, one has to define the function ρ(y), the rapidity distribution of the net protons in the final state. We consider 3 scenarios for central Pb+Pb collisions at the LHC.

1. The net proton distribution as given by PYTHIA 8.3 [23]. Heavy-ion collisions have been implemented in PYTHIA through the Angantyr model [25]. The scaling from proton-proton collisions is based on the Glauber model, and the original PYTHIA framework is used to describe the individual nucleon-nucleon sub-collisions. It thus extrapolates the dynamics of pp collisions to heavy-ion collisions, without introducing any collective effects between the nucleon-nucleon collisions. It does reproduce the measured charged-particle pseudorapidity distributions in Pb+Pb collisions at the LHC.

2. The net proton distribution as given by the hadron transport model SMASH-2.2 [9,26]. The model combines a string model, where the colliding hadrons are excited to strings which fragment, with elastic and inelastic interactions between hadrons in the later stages of the collision. This approach leads to a considerably larger amount of stopping compared with PYTHIA. In fact, the model predicts a non-zero number of net protons at midrapidity. We use this result for the current calculation anyway, since it represents a valid result of the model at LHC energies [H. Elfner, private communication].

3. A phenomenological model where the central region, |y| ≤ 0.5, contains no net protons, but where the protons have a considerable shift away from beam rapidity. This is modelled by the sum of two skewed Gaussians; a minimal construction of such a distribution is sketched below.
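The sketch below builds a scenario-3-like ρ(y) from two mirrored skewed Gaussians. All shape parameters (loc, scale, skew) and the beam rapidity are illustrative assumptions, not the values used in the paper.

```python
import numpy as np
from scipy.stats import skewnorm

Y_BEAM = 8.6            # ~beam rapidity, Pb+Pb at sqrt(s_NN) = 5.02 TeV
N_NET = 2 * 82          # normalization: 2Z net protons from both nuclei

def rho_unnorm(y, loc=5.0, scale=1.5, skew=-4.0):
    """Two mirrored skewed Gaussians with the central region cut out."""
    f = (skewnorm.pdf(y, skew, loc=loc, scale=scale)
         + skewnorm.pdf(-y, skew, loc=loc, scale=scale))
    return np.where(np.abs(y) <= 0.5, 0.0, f)

# Normalize numerically so that the integral of rho over y equals N_NET.
_ygrid = np.linspace(-Y_BEAM, Y_BEAM, 4001)
_norm = np.trapz(rho_unnorm(_ygrid), _ygrid)

def rho(y):
    return N_NET * rho_unnorm(np.asarray(y, dtype=float)) / _norm

print(rho([0.0, 3.0, 5.0]))  # zero at midrapidity, peaked toward the beams
```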
The net-proton rapidity distributions, ρ(y), for these 3 scenarios are shown in Fig. 1. Of the three models, Pythia clearly shows the least amount of stopping. While SMASH-2.2 exhibits a non-zero yield of net protons at midrapidity, the protons are on average not shifted as much toward y = 0 as in scenario 3.
Results
The angular distributions of bremsstrahlung photons in the forward direction, integrated over the energy range 0.1 ≤ ω ≤ 0.5 GeV, from the 3 scenarios are shown in Fig. 2 a). The distributions are peaked close to 1/γ, as expected, and the yields increase with an increasing amount of stopping. The curves also exhibit different angular dependencies, the scenarios with more stopping having more persistent tails toward larger angles. Thus, the models are differentiated in both total photon yield and angular dependence.
To facilitate a comparison with experimental acceptances, which are usually defined in terms of pseudorapidity, η = − ln(tan(θ/2)), the angular distribution in Eq. 9 can be rewritten in terms of η (the change of variables is spelled out after this paragraph). The pseudorapidity distribution, integrated over azimuthal angle and the energy interval 0.1 ≤ ω ≤ 0.5 GeV, is shown in Fig. 2 b). The difference between the spectra is more pronounced in the pseudorapidity distribution than in the angular distribution. This is due to the factor sin²θ = 1/cosh²(η), which varies rapidly for large |η|. Covering the entire range of emission angles, the figure illustrates the difference in the total number of radiated photons between the stopping scenarios. Furthermore, while the peaks of all three spectra lie at large pseudorapidities, the scenarios give significantly differing photon yields at lower η as well, making such scenarios potentially discernible within experimental acceptances. To put these numbers in context, we compare them with the photon yield from the 5% most central Pb+Pb collisions from PYTHIA 8.3 in Fig. 3. These hadronically produced photons, most of which come from the decay π⁰ → γ+γ, constitute a background to the bremsstrahlung photons we are considering here. The background yield is shown by the black histograms in the figure. The sum of the yields of the background and bremsstrahlung photons is shown by the solid blue histograms for scenarios 1-3 in Fig. 3 a), b), and c), respectively. The number of bremsstrahlung photons is calculated from Eq. 13, integrated over azimuthal angle and photon energy 0.1 ≤ ω ≤ 0.5 GeV.
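The change of variables itself is elementary and can be checked directly (this derivation is ours; it reproduces the 1/cosh²η factor quoted above for an azimuthally symmetric dN/dΩ):

```latex
% theta = 2 arctan(e^{-eta}) inverts eta = -ln tan(theta/2), so
\begin{aligned}
\sin\theta &= \frac{1}{\cosh\eta}, \qquad
\left|\frac{d\theta}{d\eta}\right| = \frac{1}{\cosh\eta},\\[4pt]
\frac{dN_\gamma}{d\eta} &= \int_0^{2\pi} d\phi\,
\frac{dN_\gamma}{d\Omega}\,\sin\theta\,\left|\frac{d\theta}{d\eta}\right|
= \frac{2\pi}{\cosh^{2}\eta}\,\frac{dN_\gamma}{d\Omega}.
\end{aligned}
```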
The bremsstrahlung calculations above assume that all protons participate in the collision. This will not be the case for collisions within a finite impact parameter range. We therefore calculate a correction factor (N_p,part/2Z)², where N_p,part = 151 is the average number of participating protons in the 5% most central collisions in Pythia (a quick numerical check follows this paragraph). The result of applying this correction factor to the yield of bremsstrahlung photons is shown by the dashed blue histograms in the figure. From the figure one can see that at low pseudorapidities, the background completely dominates. In the very forward direction, however, the background falls off quickly while the bremsstrahlung peak emerges. The yield of bremsstrahlung photons differs widely between the different scenarios, which emphasizes that this is indeed a very sensitive probe of the amount of nuclear stopping. The shapes of the pseudorapidity distributions are also quite different between the scenarios. This means that the limit in pseudorapidity above which one can expect a significant signal over background is lower the larger the amount of stopping.
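As a worked check of this factor (our arithmetic, using only the numbers quoted above): for Pb, Z = 82, so 2Z = 164 and (N_p,part/2Z)² = (151/164)² ≈ 0.85, i.e., the finite centrality selection reduces the expected bremsstrahlung yield by roughly 15%.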
As mentioned, a detailed discussion of the experiments in which one might extract a bremsstrahlung signal is beyond the scope of this paper. We nevertheless indicate in Fig. 3 the experimental acceptances of current and future experiments where it might be possible. The existing LHCb experiment has an electromagnetic calorimeter coverage of 2.0 ≤ η ≤ 4.5 [27]. We include it here, although it might not be able to reach low enough photon energies at large pseudorapidities [28], [R. McNulty, private communication]. During the Next Long Shutdown at the LHC (2026-2029), it is foreseen to install a forward calorimeter (FoCal) in the ALICE experiment [29]. It will consist of a high-resolution electromagnetic and hadronic calorimeter covering 3.4 ≤ η ≤ 5.8. Finally, beyond LHC Run 4, there are plans to upgrade the ALICE experiment to ALICE-3 [30]. The current design of ALICE-3 includes a Forward Conversion Tracker, which should have the possibility to measure photons with energies down to or below 100 MeV in the pseudorapidity range 3.0 ≤ η ≤ 5.0. The pseudorapidity coverages of these detectors are shown by the red lines in Fig. 3.
From the figure one can see that for scenario 3, "No central charge", there is a visible excess over the hadronic background within the pseudorapidity coverage of all the detectors mentioned above. For scenario 1, "PYTHIA 8.3", the situation is less favorable, and one would have to go to the most forward regions of ALICE-3 and FoCal to find a good signal-to-background ratio.
Since the Forward Conversion Tracker in ALICE-3 aims at measuring photons with energies below 100 MeV, we also include a plot of the photon pseudorapidity distributions for the photon energy range 0.01 ≤ ω ≤ 0.1 GeV in Fig. 4. In this energy range, there is a significant excess inside the ALICE-3 acceptance for all stopping scenarios. In addition to being peaked in the forward direction, the bremsstrahlung spectrum increases rapidly with decreasing photon energy, approximately as 1/ω, as was mentioned above. This is contrary to the background from hadronically produced photons, which decreases with decreasing ω in the energy range considered here. To illustrate this, we plot the energy spectrum integrated over azimuthal angle and the pseudorapidity range 4 ≤ η ≤ 5 in Fig. 5. As in Figs. 3 and 4, the background is given by the black histograms, and the uncorrected and corrected signal plus background by the solid and dashed blue histograms, respectively. As for the pseudorapidity distributions, the energy below which one can expect a significant signal over background is highly dependent on the stopping scenario. The yield, too, is strongly dependent on the stopping scenario.
The inset in Fig. 5 a) shows the low-energy region, and it emphasizes that if one can go to low enough photon energies, a signal will be visible also in scenarios with a small amount of stopping. One should keep in mind that it might be possible to extract a bremsstrahlung signal, even with a rather low signal-to-background ratio, by subtracting the hadronic background. The hadronic background should be well constrained from measurements of charged and neutral particle spectra.
Summary
To conclude, we have shown that even with realistic stopping scenarios the bremsstrahlung spectra show a strong sensitivity to the amount of nuclear stopping. Comparisons with Pythia show that a significant signal over the hadronic background is obtained for pseudorapidities η ≳ 4−5 and photon energies ω ≲ 300−500 MeV. Again, the exact limits depend on the amount of stopping one has.
Considering the importance of determining the amount of stopping in heavy-ion collisions at the LHC, and given that no alternative methods are available, we believe the possibility to use bremsstrahlung photons should be considered seriously.Hopefully this paper can help in the design of future detectors to accomplish such a measurement.
Fig. 1. Net-proton rapidity distributions for the 3 scenarios described in the text.
Fig. 2. a) The angular distribution of bremsstrahlung photons with energies between 0.1 ≤ ω ≤ 0.5 GeV for the 3 scenarios. b) The pseudorapidity distributions, dN_γ/dη, of bremsstrahlung photons within the same energy range.
Fig. 3. The pseudorapidity distributions for photons with 0.1 ≤ ω ≤ 0.5 GeV, integrated over the azimuthal angle. The black histogram shows the background from hadronically produced photons. The solid blue histogram shows the sum of the photons from bremsstrahlung radiation and hadronic production. The correction applied to the bremsstrahlung spectrum to obtain the dashed blue histogram is described in the text.
Fig. 5. The energy distributions for photons with 4.0 ≤ η ≤ 5.0, integrated over the azimuthal angle. The black histogram shows the background from hadronically produced photons. The solid blue histogram shows the sum of the photons from bremsstrahlung radiation and hadronic production. The correction applied to the bremsstrahlung spectrum to obtain the dashed blue histogram is described in the text. The inset in a) shows the low-energy region. | 4,222.4 | 2022-10-28T00:00:00.000 | [
"Physics"
] |
On well-posedness of incompressible two-phase flows with phase transitions: the case of non-equal densities
The basic model for incompressible two-phase flows with phase transitions consistent with thermodynamics is studied. The latter means that the total energy is conserved and the total entropy is nondecreasing. We consider the case of constant but non-equal densities of the phases, complementing our previous paper (Prüss et al. in Evol Equ Control Theory 1:171–194, 2012) where the case of equal densities is analyzed. The local well-posedness of such problems is proved by means of the technique of maximal Lp-regularity, in a configuration where the interface is nearly flat and initial data are small.
Introduction
Let Ω ⊂ R^{n+1} be a bounded domain of class C^{3−}, n ≥ 1. Ω contains two phases: at time t, phase k, k = 1, 2, occupies the subdomain Ω_k(t) of Ω. Assume ∂Ω_1(t) ∩ ∂Ω = ∅; this means no boundary intersection and no contact angles. The closed compact hypersurface Γ(t) := ∂Ω_1(t) ⊂ Ω forms the interface between the phases.
Let u denote the velocity field, π the pressure field, T(u, π, θ) the stress tensor, D(u) = (∇u + [∇u]^T)/2 the rate of deformation tensor, θ the (absolute) temperature field, ν the outer normal of Ω_1, u_Γ the interface velocity, V = u_Γ · ν the normal velocity of Γ(t), H = H(Γ(t)) = −div ν the curvature of Γ(t), j the phase flux, and [[v]] the jump of a quantity v across Γ(t).
Several quantities are derived from the specific free energy ψ_k(θ) in phase k as follows. Further, d_k(θ) > 0 denotes the coefficient of heat conduction in Fourier's law, μ_k(θ) > 0 the viscosity in Newton's law, and σ > 0 the constant coefficient of surface tension.
Concerning the second equation of (1.3), we remind the reader that balance of mass across Γ(t) requires [[ρ(u − u_Γ)]] · ν = 0, so that this equation is well-defined on Γ(t). This model is explained in more detail in our previous paper [19], where we consider the case of equal densities. It has recently been proposed by Anderson et al. [1]; see also the monographs by Ishii [12] and Ishii and Takashi [13]. It is thermodynamically consistent in the sense that, in the absence of exterior forces and heat sources, the total energy is preserved and the total entropy is nondecreasing; see [19]. It is in some sense the simplest sharp-interface model for incompressible Newtonian two-phase flows taking into account phase transitions driven by temperature.
Note that in the case of equal densities, the phase flux j does not enter (1.1), and so in this case we obtain essentially a Stefan problem with surface tension, which is only weakly coupled to the standard two-phase Navier-Stokes problem via temperature-dependent viscosities. We call this case temperature dominated, and it has been studied in [19]. But in the case of different densities, the phase flux j causes a jump in the velocity field on the interface, which leads to so-called Stefan currents, that is, convections driven by phase transitions. In this situation, it turns out that the heat problem (1.2) is only weakly coupled to (1.1) and (1.3); we call this case velocity dominated. The resulting two-phase Navier-Stokes problem is non-standard, and therefore it requires a new analysis.
The analytical properties of the problem appear to be different in these two cases. The spaces for well-posedness are not the same, and in the velocity-dominated case the pressure is uniquely determined, while in the temperature-dominated case it is only unique up to a constant. In the temperature-dominated case [[ρ]] = 0, the phase flux j can be eliminated by solving the second equation in (1.2) for j. This is possible as long as l(θ) ≠ 0, which is the essential well-posedness condition in this case. Then, the equation describing the evolution of the interface follows. On the other hand, in the velocity-dominated case [[ρ]] ≠ 0, we can eliminate j by taking the inner product of the fourth equation in (1.1) with ν. In this case, the resulting equation for V does not contain the temperature, in contrast to the first case. Therefore, the analysis for these two cases is necessarily different, too. (A sketch of the two eliminations is given below.)
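The displayed formulas for these eliminations did not survive extraction. As a hedged sketch (our reconstruction from the surrounding text and the standard form of this model; signs and normalizations may differ from the authors'), the two cases read:

```latex
% Temperature-dominated case, [[rho]] = 0: the interfacial energy balance
% l(theta) j = [[ d(theta) \partial_nu theta ]] is solved for j, so
j \;=\; \frac{\llbracket\, d(\theta)\,\partial_\nu \theta \,\rrbracket}{l(\theta)},
\qquad V \;=\; u\cdot\nu \;-\; \frac{j}{\rho}.

% Velocity-dominated case, [[rho]] != 0: the inner product of the velocity
% jump condition with nu gives [[u]]\cdot\nu = j\,[[1/\rho]], hence
j \;=\; \frac{\llbracket\, u \,\rrbracket\cdot\nu}{\llbracket\, 1/\rho \,\rrbracket},
\qquad V \;=\; u\cdot\nu - \frac{j}{\rho}
\quad\text{(no dependence on $\theta$).}
```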
There is a large literature on isothermal incompressible Newtonian two-phase flows without phase transitions [2,15,22,23,25,26], and also on the two-phase Stefan problem with surface tension modeling temperature-driven phase transitions [3,8,18,21,24]. On the other hand, mathematical work on two-phase flow problems including phase transitions is rare. In this direction, we only know the papers by Hoffmann and Starovoitov [10,11], dealing with a simplified two-phase flow model, and by Kusaka and Tani [16,17], which treat a problem that is two-phase in temperature but where only one phase is moving. The papers of Di Benedetto and Friedman [4] and Di Benedetto and O'Leary [5] deal with weak solutions of conduction-convection problems with phase change. However, none of these papers considers models which are consistent with thermodynamics.
It is the purpose of this paper to present a rigorous analysis of problem (1.1), (1.2), (1.3) in the framework of L_p-theory in the case of non-equal densities and an initial interface which is nearly flat. We consider the nonlinear problem (1.1)-(1.3) for Ω = R^{n+1} and a nearly flat interface represented as a graph over R^n. We let Ω_0 = Ω_1(0) ∪ Ω_2(0) and let ν_0 be the outer normal of Ω_1(0).
Then, given any finite interval J = [0, a], there exists η > 0 such that (1.1)-(1.3) admits a unique L_p-solution on J provided the smallness conditions hold. The notion of L_p-solution is explained in more detail in Section 5. For a proof of this result, we perform a detailed analysis of the linearized problem in an L_p-setting, following the approach in [22] for the standard two-phase Navier-Stokes problem without phase transitions. This requires the detection and analysis of the underlying boundary symbol. We then show maximal regularity for the linear part of the problem and finally employ the contraction mapping principle to solve the nonlinear problem. In a forthcoming paper, we will consider problem (1.1)-(1.3) in general geometries without smallness assumptions.
The plan for this paper is as follows. In Sect. 2, we transform the problem to the configuration of a fixed flat interface. The principal part of the linearization is studied in Sect. 3, and the property of maximal L p -regularity is proved in Sect. 4. The last section contains the proof of well-posedness for the nonlinear problem.
Transformation to a flat interface
In the situation of a nearly flat interface, the nonlinear problem (1.1)-(1.3) can be transformed to a problem on Ṙ^{n+1} := R^{n+1} \ (R^n × {0}) by means of the transformations v(t, x, y) := (u_1, . . . , u_n)^T(t, x, y + h(t, x)), where t ∈ J = [0, a], x ∈ R^n, y ∈ R, y ≠ 0. Here, θ_∞ > 0 denotes the (equilibrium) temperature at infinity and π_∞ the corresponding (equilibrium) pressure at infinity, defined by the relations stated above. With a slight abuse of notation, we will denote in the sequel the transformed velocity again by u, that is, u = (v, w)^T, the transformed temperature by θ, and the transformed pressure by π. For given initial data u_0(x) and θ_0(x), we set again u_0(x, y) accordingly. With this notation, we obtain the transformed problem, where the phase flux j has already been eliminated according to Sect. 1. The nonlinear right-hand sides are defined correspondingly. The curvature of Γ(t) is given in terms of ∇²h, the Hessian of h.
The linear problem
The principal part of the linearized problem in the case of a nearly flat initial interface reads as follows. The heat problem (3.2) decouples from the remaining problem. Since it is well known that this problem has maximal L_p-regularity, we concentrate on the remaining one. It reduces to two separate problems. With u = (v, w)^T, the first one is the following non-standard Stokes problem.
Having solved the first one, the second one results from replacing g_w by g_w + σΔ_x h and solving the resulting problem. Before stating maximal regularity results for the linear problems, let us introduce the relevant function spaces. Let Ω ⊂ R^m be open and X be an arbitrary Banach space. By L_p(Ω; X) and H^s_p(Ω; X), for 1 ≤ p ≤ ∞, s ∈ R, we denote the X-valued Lebesgue and the X-valued Bessel potential spaces of order s, respectively. We will also make use of the fractional Sobolev-Slobodeckij spaces W^s_p. The spaces ₀H^s_p(J; X) are defined analogously. We remind the reader that H^k_p = W^k_p for k ∈ N and 1 < p < ∞, and that W^s_p = B^s_{pp} for s > 0, s ∉ N. For s ∈ R and 1 < p < ∞, we consider the homogeneous Bessel potential space Ḣ^s_p(R^n), where S'(R^n) denotes the space of all tempered distributions, and İ^s is the Riesz potential given by İ^s u = F^{−1}(|ξ|^{−s} û). For s ∈ R \ Z, the homogeneous Sobolev-Slobodeckij spaces Ẇ^s_p(R^n) of fractional order can be obtained by real interpolation as Ẇ^s_p(R^n) = (Ḣ^{[s]}_p(R^n), Ḣ^{[s]+1}_p(R^n))_{θ,p}, θ = s − [s], where (·, ·)_{θ,p} is the real interpolation functor. For problem (3.4), we have the following maximal regularity result.
if and only if the data (f_u, f_d, g_v, g_w, g_j, g_π, u_0) satisfy the corresponding regularity and compatibility conditions. In addition, an analogous characterization holds for the pressure traces π_k on the interface, and the solution map is continuous between the corresponding spaces.
For problems (3.1) and (3.3), we also have a maximal regularity result in the L_p-setting.
if and only if the data (f_u, f_d, g_u, g_j, g_π, g_h, u_0, h_0) satisfy the corresponding regularity and compatibility conditions. The solution map [(f_u, f_d, g_u, g_j, g_π, g_h, u_0, h_0) → (u, π, h)] is continuous between the corresponding spaces. Equation (3.2) is a two-phase heat problem with a Neumann condition, which has the property of maximal L_p-regularity (cf. [8]). Therefore, the linearized problem (3.1)-(3.3) has the property of maximal L_p-regularity as well. To state this result, we introduce appropriate function spaces. We define the solution space E(a) for (3.1)-(3.3) and denote by γπ the two one-sided traces of π on R^n. The generic elements of E(a) are functions (u, π, γπ, θ, h), and E(a) is a Banach space with its natural norm. Moreover, we define the data space F(a) for (3.1)-(3.3); F(a) is a Banach space with its natural norm, and the generic elements of F(a) are functions (f_u, f_d, g_u, g_j, f_θ, g_θ, g_π, g_h). Finally, we define the time trace space X_γ of E(a). The main result on the linearized problem (3.1)-(3.3) can now be stated as THEOREM 3.3. Let 1 < p < ∞ be fixed, p ≠ 3/2, 3, and assume that ρ_k and μ_k are positive constants for k = 1, 2, with ρ_2 ≠ ρ_1 and κ, d > 0. Suppose (u_0, θ_0, h_0) and (f_u, f_d, g_u, g_j, f_θ, g_θ, g_π, g_h) satisfy the regularity conditions
and the compatibility conditions
Then problem (3.1)-(3.3) admits a unique solution in E(a), and the solution map is continuous between the corresponding spaces.
Proofs for Theorems 3.1 and 3.2
For necessity, we employ trace arguments as in [15,22]. The more difficult part, sufficiency, also follows the lines of these papers, but has to be modified as it is more involved.
1. In order to remove u_0, which has a jump at the interface, we first solve the parabolic problem with E_{R^{n+1}} g_j := (g_j, 0)^T, to meet the necessary compatibility conditions, and set π_1 = 0. Next, to remove the divergence data, we solve the corresponding problem. According to [22], this problem has a unique solution in the maximal L_p-regularity class. Note that the compatibility conditions do not involve the normal part of the stress boundary condition. Therefore, it remains to study the reduced problem. This way, the problem for h is decoupled from the Stokes problem, where we have set g_1 = σΔ_x h and u = (v, w). Having solved this problem for given h, we insert the solution and solve the remaining equation for h.
Here, the remaining data satisfy the required regularity conditions.
2.
Assume for a moment that we have a solution of (4.5) in the proper regularity class, even on the half-line J = R₊. Then, we may employ the Laplace transform in t and the Fourier transform in the tangential variables x ∈ R^n to obtain a boundary value problem for a system of ordinary differential equations on Ṙ. This system of equations is easily solved, for y > 0 and for y < 0, in terms of coefficients a_k ∈ C^n and α_k ∈ C, which have to be determined by the interface conditions; in the frequency domain, these conditions read as follows. Inserting the representation of the transformed solution into the first two of these equations, we obtain the following system.
Using the formulas for β_k and solving the resulting system in terms of α_k, we arrive, after some elementary algebra, at explicit expressions. Here, we observe that the surface pressures π_k have transforms λα_k. Since the entries in the matrix defining λα_k are bounded and holomorphic, we may conclude that the π_k have the same regularity as the g_k, and that the pressure π belongs to L_p(J; Ḣ^1_p(Ṙ^{n+1})). Next, let us compute the boundary velocities w_k^b(x, 0). Some algebra yields for ŵ_k^b a representation which shows that ŵ_k^b is bounded by |ξ|ĝ_i/ω_1ω_2. As in [22], Section 4, we obtain that the operator with symbol |ξ|/ω_1ω_2 maps L_p(J; Ẇ^{1−1/p}_p(R^n)) into the right space for the boundary values of w. To keep this paper self-contained, we prove the mapping properties stated above. We set G := ∂_t with its natural domain. It is well known that G is closed, invertible, and sectorial with angle π/2, and that −G is the generator of a C_0-semigroup of contractions in L_p(R^n). Moreover, G admits an H^∞-calculus in X with H^∞-angle π/2 as well; see e.g. [9]. The symbol of G is λ, the time covariable.
Next, we set D_n := −Δ, the Laplacian in L_p(R^n) with domain D(D_n) = H^2_p(R^n). It is well known that D_n is closed and sectorial with angle 0, and it admits a bounded H^∞-calculus which is even R-bounded with RH^∞-angle 0; see e.g. [6]. These results also hold for the canonical extension of D_n to X, and for its fractional power D_n^{1/2}, whose symbol is |ξ|, where ξ is the covariable of x. By the Dore-Venni theorem for sums of commuting sectorial operators (cf. [7]), we see that L_k := ρ_k G + μ_k D_n, with natural domain, are closed, invertible, and sectorial with angle π/2. L_k also admits a bounded H^∞-calculus in X with H^∞-angle π/2 (cf. [20]). The same results are valid for the operators F_k; their H^∞-angle is π/4. The symbol of L_k is ρ_kλ + μ_k|ξ|², and that of F_k is (ρ_kλ + μ_k|ξ|²)^{1/2}. The operators F_k^{−1} have corresponding mapping properties; hence, inserting the expressions for α_k and β_k, we obtain after some more algebra a representation in which g_4 is determined by the data alone and has the same regularity as g_3. We set τ = |ξ|. The boundary symbol s(λ, τ) is defined by (4.14), where we employed the scaling z = λ/τ². The holomorphic function m(z) in turn is given by (4.15), with the abbreviations indicated there.
We derive the formula in the "Appendix". Note that ω_k(z) is holomorphic in the sliced plane C \ (−∞, −μ_k/ρ_k]; hence, the function ϕ_k(z) has this property as well. This function has exactly one zero z_k in this set; it is real and satisfies −μ_k/ρ_k < z_k < −8μ_k/9ρ_k < 0. It is easy to see that ϕ_k maps C̄₊ into C₊, and as ϕ_k(0) = 2 and ϕ_k(z) ∼ √(ρ_k z/μ_k) as z → ∞, we see that ϕ_k(C̄₊) is contained in a sector of angle φ_k, for some φ_k < π/2.
On the other hand, choosing |λ| ≥ C|τ|, we obtain a corresponding estimate. If λ_0 is chosen large enough, this implies the desired lower bound. In order to economize our notation, we set z = (u, π, γπ, θ, h) for (u, π, γπ, θ, h) ∈ E(a), and z_0 = (u_0, θ_0, h_0) for (u_0, θ_0, h_0) ∈ X_γ. With this notation, the nonlinear problem (2.1) can be recast as Lz = N(z), where L denotes the linear operator on the left-hand side of (2.1) and N denotes the nonlinear mapping on its right-hand side.
In the following, we say that a function space is a multiplication algebra if it is a Banach algebra under multiplication. LEMMA 5.2. (Lemma 6.1 in [22]) Suppose p > n + 3. Then, γπ(a), G_j(a), G_u(a), G_θ(a), G_π(a), and G_h(a) are multiplication algebras.
Concerning the nonlinearity N, we have the following result. Proof. This result is proved in a similar way to Proposition 6.2 in [22]. We now prove Theorem 5.1, where a > 0 is a fixed life time; a sketch of the contraction estimates used in Step 2 below precedes the proof.
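The displayed inequalities behind the fixed-point argument were lost in extraction; as a hedged sketch, their standard form (our reconstruction, with M a bound for the norm of L^{−1}) reads:

```latex
% Self-mapping and contraction of z -> L^{-1}K(z) on the closed ball of
% radius r in {}_0 E(a); standard form, reconstructed here.
\|L^{-1}K(z)\|_{E(a)} \;\le\; M\,\|K(z)\|_{F(a)} \;\le\; r,
\qquad \|z\|_{E(a)} \le r,
\\[4pt]
\|L^{-1}K(z_1)-L^{-1}K(z_2)\|_{E(a)}
\;\le\; M\,\|N(z_1+z^{*})-N(z_2+z^{*})\|_{F(a)}
\;\le\; \tfrac12\,\|z_1-z_2\|_{E(a)}.
```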
Step 1. First, we reduce the problem to initial values 0 and resolve the compatibility conditions. Thanks to Theorem 6.3 in [22], we find an extension f*_d ∈ F_d(a) which satisfies f*_d(0) = div u_0. We define the remaining extended data accordingly and set f*_u = f*_θ = 0. With these extensions, by Theorem 3.3, we may solve the linear problem (3.1)-(3.3) with initial data (u_0, θ_0, h_0) and inhomogeneities (f*_u, f*_d, g*_u, g*_j, f*_θ, g*_θ, g*_π, g*_h), which satisfy the required regularity conditions and, by construction, the required compatibility conditions, to obtain a unique solution z* = (u*, π*, γπ*, θ*, h*) ∈ E(a) with u*(0) = u_0, θ*(0) = θ_0, and h*(0) = h_0. As the solution map of Theorem 3.3 is continuous, we know that for any r > 0, there exists η > 0 such that ‖z_0‖_{X_γ} ≤ η ⇒ ‖z*‖_{E(a)} ≤ r. (5.4) Step 2. We rewrite the problem for the difference z − z* as a fixed-point equation. We may assume that M ≥ 1. Thanks to Proposition 5.3, and since K(0) = N(z*) − Lz*, we may choose first r > 0 and then η > 0 sufficiently small such that, for all z ∈ ₀E(a) with ‖z‖_{E(a)} ≤ r, the above estimates hold, which ensures that L^{−1}K : B_{₀E(a)}(0, r) → B_{₀E(a)}(0, r) is a contraction. Thus, we may employ the contraction mapping principle to obtain a unique solution on the fixed time interval [0, a]. As the map z_0 → z* is continuous, the solution map z_0 → z is continuous as well. This completes the proof of Theorem 5.1. | 4,960.2 | 2011-09-09T00:00:00.000 | [
"Physics"
] |
Waveguide-Integrated Colloidal Nanocrystal Supraparticle Lasers
Supraparticle (SP) microlasers fabricated by the self-assembly of colloidal nanocrystals have great potential as coherent optical sources for integrated photonics. However, their deterministic placement for integration with other photonic elements remains an unsolved challenge. In this work, we demonstrate the manipulation and printing of individual SP microlasers, laying the foundation for their use in more complex photonic integrated circuits. We fabricate CdSxSe1−x/ZnS colloidal quantum dot (CQD) SPs with diameters from 4 to 20 μm and Q-factors of approximately 300 via an oil-in-water self-assembly process. Under a subnanosecond-pulse optical excitation at 532 nm, the laser threshold is reached at an average number of excitons per CQD of 2.6, with modes oscillating between 625 and 655 nm. Microtransfer printing is used to pick up individual CQD SPs from an initial substrate and move them to a different one without affecting their capability for lasing. As a proof of concept, a CQD SP is printed on the side of an SU-8 waveguide, and its modes are successfully coupled to the waveguide.
■ INTRODUCTION
Colloidal semiconductor nanocrystals (NCs) are known for their size-tunable electronic and optical properties, discrete density of states, and low-temperature solution processing, 1−4 which make them very attractive as the gain medium of lasers. 5 Additionally, NCs can be used on many different material platforms and have great prospects for integrated photonics, where they could form the basis of miniature optical sources and nonlinear elements. 6 In this context, if their desired performance and scalability can be achieved, then they have the potential to enable photonic chips of future generations.
Several different NC laser geometries have been reported to date, e.g., Fabry−Pérot cavities, 7 microring resonators, 8 vertical cavities implemented with distributed Bragg reflectors, 9 distributed feedback cavities, 10 and microsphere cavities where dielectric spheres are doped or coated with NCs. 11,12 The fabrication of such lasers typically requires top-down patterning of the nanocrystals at a submicron level (e.g., using photo- or contact lithography) or a way to add them to an optical microcavity that is fabricated separately. An elegant fabrication alternative has recently been reported, where the NCs self-assemble from the bottom up in solution to form both the gain material and the laser cavity. This approach leads to supraparticles (SPs), often in the form of microspheres, with a crystalline structure. 13,14 Such supraparticles take advantage of the high refractive index of the densely packed semiconductor nanocrystals and the shape of the self-assembled structure to efficiently trap light, thus generating a whispering gallery mode (WGM) cavity. 15 In contrast to lasers that integrate NCs by coating or doping resonators made of another material, SPs do not require a separate cavity, thereby simplifying the fabrication process. 14 The high density of NCs in SPs also enhances the resonances, which can result in an increase of absorption efficiency by more than 2 orders of magnitude when compared to dispersed NCs. 15 This enhancement is a prerequisite to achieving efficient micron-scale lasers.
An early report of SP lasers made with CdSe/CdS colloidal quantum dots (CQDs), a subclass of NCs, has shown WGMs with quality factors (Q-factors) of up to 320. 13 The laser threshold fluence of these CdSe/CdS microspheres was approximately 100 μJ/cm² for 100−200 fs pulse pumping (repetition rate of 1 kHz and spot size 10 μm full width at half-maximum), and the emission exhibited a predominantly linear polarization. 13 Enhanced excitonic coupling has also been observed in CQD-based SPs, which is promoted by the degree of order and the distance between CQDs in the structure. 15 In other works, it has been shown that the NC building blocks can be chosen for SP laser oscillation at a desired wavelength, or at several wavelengths simultaneously by assembling a blend of alloyed CdSSe/ZnS CQDs having different characteristics. 16,17 The building blocks can also be made of other NCs, such as colloidal quantum wells (CQWs). In this particular case, SPs made of CdSe CQWs were reported to have a lower lasing threshold and a higher quantum yield than those made of CdSe CQDs. 18 Single-mode laser emission has also been reported for 1.5−5 μm SPs made of CdSe/ZnS core−shell CQDs, which were tested for in vitro and in vivo biological imaging. The inter-CQD distance in these SPs was shortened via ligand exchange in order to further increase the density of CQDs and in turn the optical gain. 19 SPs represent a novel family of NC lasers that retain the attractive properties of NCs, while also offering the advantage of combining a micro-sized resonating structure and straightforward fabrication. SP lasers are being researched as unbound structures in biological and medical applications, 19 but interest in their use when embedded within optoelectronics is now accelerating. 20 Nevertheless, there is a lack of techniques capable of manipulating individual SPs for their deterministic placement on a substrate or within a system, which is a current major obstacle to their implementation in integrated photonics.
In this work, microtransfer printing is proposed and demonstrated as a solution to the challenge described above.
First, SPs made of CdSxSe1−x/ZnS CQDs ranging approximately between 4 and 20 μm in diameter are fabricated and characterized individually under optical pumping. The origin of the whispering gallery mode lasing in these SPs is confirmed by comparing numerical simulations of the modes with optical pumping experiments, as well as through cathodoluminescence measurements. The SP emission intensity below the threshold is fitted and studied using a modified Poissonian function model in order to extract ⟨N⟩, the average number of excitons per CQD in the microlaser, at different excitation levels and for different sizes of lasers. ⟨N⟩ is a parameter that relates to the population inversion within the SP, and its value at the threshold is a performance benchmark for such microlasers. Microtransfer printing 21−23 is then shown to be a viable method to accurately select and transport individual SP lasers between substrates without damaging them. As a proof of concept, an SP is transfer-printed next to a waveguide and optically pumped. Specific laser modes of the SP are successfully coupled and detected at the end of the waveguide (output facet). This successful demonstration is a pioneering step toward the integration of these microlasers into more complex optoelectronic applications.
Synthesis of SPs
SPs were synthesized from CdSxSe1−x/ZnS CQDs with a nominal size of 6.0 ± 0.5 nm and an intrinsic emission peak at 630 nm (Quantum Dots Section). The synthesis followed an oil-in-water self-assembly process and used poly(vinyl alcohol) (PVA) as the surfactant (emulsifier) to stabilize the emulsion (see the Synthesis and Characterization of the SPs Section). Water-based solutions are polar and therefore immiscible in nonpolar organic solvents (e.g., chloroform). PVA adsorbs at the interface between these two phases, decreasing the surface tension and promoting stable emulsions. This synthesis used CQDs in chloroform and PVA in Milli-Q water as the oil and water phases, respectively. After mixing the two phases, and while the chloroform evaporates, the building blocks (CQDs) inside each emulsion droplet begin to nucleate and grow into SPs of tightly packed CQDs 13,14 (Figure 1a). The average size of the emulsion droplets formed upon mixing these two phases is mainly determined by the volume of the oil phase and the concentration of the surfactant in the water phase. Likewise, the size of the self-assembled SP is determined by the amount of CQDs inside the emulsion droplet, which depends on the initial concentration of CQDs in the oil phase and on the volume of the emulsion droplet. The self-assembly process finishes once all of the chloroform inside the emulsion droplets has evaporated. A final washing step was used in this procedure to remove traces of PVA from the surfaces of the SPs. The SPs formed this way were on average 2.8 ± 1.7 μm in radius (Figure S2).
Characterization of SPs
Sixteen SPs of different sizes were drop-cast on silica and characterized individually with a custom-made microphotoluminescence (μPL) setup (Figure S3). The optical pump source was a microchip laser (λ = 532 nm) with a 0.76 ns pulse width and a repetition rate of 7.1 kHz, and the beam spot area at the sample was 2.88 × 10⁻⁷ cm². The energy of the optical pump was controlled and measured with an attenuator wheel and a power meter, respectively (see the Optical Characterization Section).
The laser transfer function (emission intensity versus pump intensity) of a 9.8 ± 0.5 μm diameter SP can be seen as a typical example in Figure 1b. The lasing threshold is defined as the value of pump energy above which there is a drastic change in the slope of the intensity (Figure 1b) and narrow emission peaks dominate. For this SP, the threshold occurs at an incident pump energy of 7 nJ. Above 7 nJ, spectrally narrow laser modes develop in the 635−640 nm wavelength range on top of the broader spontaneous emission pedestal (Figure 1c). The main photoluminescence (PL) peak, around 630 nm, corresponds to the excitonic and biexcitonic transitions of the CQDs, which are predominant at low pump energies (Figure 1c, red spectrum). 2 At higher pump energies, emission from the negatively charged biexcitons or triexcitons and other multiexcitons of higher order can also be detected at around 580 nm (Figure 1c, green and blue spectra). 2,24,25 In addition to the clear threshold behavior seen in the emission intensity (Figure 1b) and the change in the emission spectrum (Figure 1c), the differences between SPs below and above the lasing regime are also observed in the microscope images (Figure 1d,e, respectively), with an evident transition in intensity and the appearance of a WGM lasing pattern characterized by the deep red corona on the SP periphery. The wavelengths of the lasing peaks observed experimentally in Figure 1c also match the resonant wavelengths for the transverse electric and magnetic modes calculated numerically using the modal equations (Figure S4 and Table S1). The measurements of the resonance frequencies of a microsphere and the analysis of an analogous CQD SP below and above the laser threshold indicate that the laser emission arises from WGMs (Figure S4 and Table S1). These results are consistent with previous reports in the literature. 13,15 The laser emission results described in Figure 1c are typical of the lasing SPs characterized in this work. In general, the different SPs display WGM laser oscillation with spectrometer-resolution-limited peaks between 625 and 655 nm. They oscillate on one or several angular modes, depending on the SP size and pumping level (Table S2).
The WGMs of a typical SP were also observed below threshold using the scanning electron microscopy (SEM) technique of cathodoluminescence (CL) while simultaneously imaging the SP (Figure 2a). The CL originates from the electron-beam excitation of the SP, which leads to the subsequent emission of photons. The CL spectrum was acquired with a spectrometer coupled to the SEM microscope (Figure 2b). Observing WGMs below threshold proved easier with CL than with the PL setup because of a higher contrast between the WGM signature and the background luminescence. The WGMs are evidenced by the spectral modulation seen on the long-wavelength side of the CL spectrum (Figure 2b); the modes are not visible at lower wavelengths because of self-absorption (see the overlap between the emission and absorption spectra of CQDs in Figure S1). The pseudo-free spectral range (pseudo-FSR), i.e., the frequency separation between WGMs of consecutive angular modes, is then obtained using a discrete Fourier transform of the CL spectrum (Figure 2c). The pseudo-FSR of spherical microresonators, Δν_{n,l}/Δl, is correlated to the radius of the sphere, r, where the indices n and l correspond to the order of the spherical harmonic that describes the radial and angular field distributions, respectively, c is the speed of light in vacuum, and N is the refractive index. 26 From the SEM image, the diameter of the SP in Figure 2a is 14.0 ± 0.5 μm. This value is consistent with the 13.7 ± 0.5 μm calculated using the pseudo-FSR correlation for consecutive modes 26 and a refractive index of N = 1.7, which is the expected refractive index of a Cd-based CQD medium. 27 The Q-factor of the modes can be calculated from Figure 2b as Q = λ/Δλ, where λ corresponds to the wavelength of the mode propagating in the cavity and Δλ to the full width at half-maximum of that mode. Here, the Q-factor is estimated to be 295 ± 15, which is consistent with the Q-factors previously reported for SPs of approximately the same size and composition self-assembled via microfluidics. 13 From the SEM characterization in this work and morphology reports in the literature, these CQD SPs are also expected to have a partially crystalline structure. 13,14 (A worked numerical check of the Q = λ/Δλ relation follows.)
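As a quick check (our arithmetic, using only the values quoted above): Q = λ/Δλ = 295 at λ ≈ 640 nm corresponds to a mode linewidth of Δλ = λ/Q ≈ 640/295 ≈ 2.2 nm.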
Average Number of Excitons per CQDs at Threshold: Modeling the Spontaneous Emission of SPs
The sublinear evolution of the emission intensity of SPs versus pump energy below the laser threshold (Figure 1b) indicates that lasing oscillation is reached in a regime where there is more than one exciton per CQD on average. Accurate estimations of the average number of excitons can be performed either via numerical gain modeling or via transient absorption measurements. 13,28 However, these require heavy computation or complex setups. Here, the average number of excitons in SPs is estimated by establishing a parallelism with CQDs.
The spontaneous emission, I_QD(k,⟨N⟩), of a CQD is proportional to the Poisson distribution, 7,29,30 where ⟨N⟩ is the average number of excitons in the CQD during the acquisition. The transition from a given excitonic state identified by k (k ≥ 0) has its own signature emission wavelength, and the emission probability from this kth state, Pois(k,⟨N⟩), can be estimated provided that the average number of excitons ⟨N⟩ in the CQD is known. The average number of excitons ⟨N⟩ can be accurately determined provided that the time-integrated intensities of the incident (I_i), transmitted (I_t), and specularly reflected (I_r) pump beams are known (e.g., by measuring them in an integrating sphere), or it can be approximated by a power law as a function of the incident energy. 29,30 In the case of SPs, ⟨N⟩ is expressed as in eq 2, where f corresponds to the frequency of the pump laser, V_r corresponds to the pumped volume (assumed for simplicity to be the volume of the whole SP), D is the density fraction of CQDs, V_QD is the mean volume of a single CQD, E_pump is the pump energy, hν is the pump photon energy, α and β are the power law constants, and r is the radius of the SP. (A back-of-the-envelope version of this estimate is sketched below.)
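Equation 2 itself was lost in extraction, so the sketch below only illustrates the physical bookkeeping it encodes: absorbed pump photons divided by the number of CQDs in the pumped volume. The absorption fraction and the example numbers are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def mean_excitons(E_pump, r, D=0.5, d_qd=6.0e-9, lam=532e-9, f_abs=0.01):
    """Rough <N>: absorbed photons per CQD for one pump pulse.

    E_pump : pump pulse energy reaching the SP, J
    r      : SP radius, m (pumped volume taken as the whole SP)
    D      : CQD volume packing fraction (assumed)
    d_qd   : CQD diameter, m (6 nm nominal, from the text)
    f_abs  : fraction of pump energy absorbed (assumed, not measured here)
    """
    photon_energy = H * C / lam                  # ~3.7e-19 J at 532 nm
    n_photons_abs = f_abs * E_pump / photon_energy
    v_sp = 4.0 / 3.0 * np.pi * r**3              # pumped volume V_r
    v_qd = np.pi / 6.0 * d_qd**3                 # volume of one CQD
    n_qds = D * v_sp / v_qd                      # number of CQDs in the SP
    return n_photons_abs / n_qds

# Example: 7 nJ pulse (the threshold quoted above) on a 4.9 um radius SP
print(mean_excitons(7e-9, 4.9e-6))
```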
While the Poisson distribution is suitable to describe the emission of single CQDs or of an ensemble of noninteracting CQDs, 31 it is less evidently so in the case of SPs, as these are made of many densely packed CQDs, each with its own ⟨N⟩. Energy transfer between CQDs is prone to occur in densely packed CQDs, and therefore their emission cannot be considered fully independent. Furthermore, even if the approximation of noninteracting CQDs were valid, it is not always possible to discriminate between all of the different excitonic transitions in the emission spectrum of such ensembles, since small changes in CQD size will lead to the same excitonic state emitting at slightly different wavelengths. In addition, different excitonic states k_i can also recombine and emit at similar wavelengths, which makes it impossible to describe the emission intensity as a single discrete Pois(k,⟨N⟩). A sum of discrete Poisson distributions, each describing the emission of a CQD in the SP, would also lead to a very large number of fitting parameters. In order to estimate ⟨N⟩ in SPs, a modified model based on the continuous analogue of the Poisson distribution is therefore proposed and applied below.
An average exciton state k̅ is defined to describe the combined emission probability distribution of multiple states. The population of excitons and biexcitons, with k = 1 and k = 2, respectively, has an emission peak that sits at approximately 630 nm and is assigned as k̅_1 (k̅_1 ≤ 2). The emission probability distribution for the population of the higher multiexcitonic states (k = 3, 4, 5, ...) is assigned as k̅_2.

Figure 3. Fitting parameter b (eqs 3−5 and Table S3) as a function of the SP radius (a). The black data points correspond to the SPs that did not achieve lasing, and the red data points to those that did. The two data sets are visibly separable (dashed line), suggesting that the capability of an SP of a given size to operate as a laser is strongly intertwined with the parameter b. The average number of excitons ⟨N⟩ at the laser threshold was then calculated for the lasing SPs (b) based on the fitting parameters (eq 2 and Table S3). The optical pump energies required to reach the laser threshold were extracted from Table S2 and included in the callouts. SPs were optically pumped at λ_pump = 532 nm (see the Optical Characterization Section). The full optical setup can be seen in Figure S3.
The continuous Poisson distribution, Pois_C(k̅, ⟨N⟩), is defined as 32 Pois_C(k̅, ⟨N⟩) = Γ(k̅, ⟨N⟩)/Γ(k̅), where Γ(k̅,⟨N⟩) is the incomplete γ function and Γ(k̅) is the Euler γ function. The emission of SPs is then proportional to their volumes and to the emission probability distribution at the given wavelengths. A multi-non-linear model fit 33 is performed on the emission intensity of the two emission bands centered at 630 and 580 nm (Data_630nm and Data_580nm) to find the best fitting parameters and uncertainties for a, b, α, k̅_1, and k̅_2. Sixteen randomly chosen SPs with sizes ranging between approximately 4 and 18 μm in diameter had their emission intensity recorded at different E_pump below the laser threshold (Figure S5 and Table S3). The set of parameters (a, b, α, k̅_1, and k̅_2) is then estimated for each SP by fitting the experimental intensity peaks at around 630 and 580 nm to eq 5. The data from the 16 sets are split according to each parameter and analyzed as a function of the size of the SPs (Figures 3 and S6). (A minimal numerical sketch of this continuous-Poisson fit is given below.)
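The fitted model (eq 5) did not survive extraction, so the snippet below is only a minimal sketch of the continuous-Poisson machinery. The mapping ⟨N⟩ = b·E^α and the single-band model are our illustrative simplification of eqs 2−5, not the paper's exact parameterization; the notation Γ(k̅,⟨N⟩)/Γ(k̅) could denote either the upper or lower incomplete gamma ratio, and we pick SciPy's regularized lower form so that the modeled intensity grows with pump energy, as the measured intensities do.

```python
import numpy as np
from scipy.special import gammainc
from scipy.optimize import curve_fit

def pois_c(kbar, n_mean):
    """Continuous-Poisson weight: regularized (lower) incomplete gamma ratio."""
    return gammainc(kbar, n_mean)

def emission(E, a, b, alpha, kbar):
    """Illustrative single-band stand-in for eq 5: I = a * Pois_C(kbar, b*E^alpha)."""
    return a * pois_c(kbar, b * np.power(E, alpha))

# Synthetic below-threshold intensities (arbitrary units) and a 4-parameter fit
E = np.linspace(0.5, 7.0, 12)  # pump energies, nJ
rng = np.random.default_rng(0)
I_meas = emission(E, 1.0e4, 0.8, 0.35, 1.6) * (1 + 0.05 * rng.standard_normal(E.size))

popt, _ = curve_fit(emission, E, I_meas, p0=[1e4, 1.0, 0.5, 2.0], maxfev=20000)
print(dict(zip(["a", "b", "alpha", "kbar"], np.round(popt, 3))))
```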
Figure 3a shows that the parameter b ranges between 0 and 10 and is independent of the radius, r, of the SP. This random distribution of b and its independence of the size of the SP is ascribed to fluctuations in the density of CQDs, D, between SPs, as well as to other factors (e.g., density of excess ligands in the SP, remnants of surfactant or debris in the SP, defects in the composition of SPs) affecting the pump absorption and the collection of the emission. In this figure, the SPs that reached the laser threshold are identified by red data points. The relationship between b and D (eq 2) and the trend of the interface between lasing and nonlasing SPs marked by the dashed line confirm that the density of CQDs plays a role in the laser threshold energy. For the current setup, the density of CQDs would actually need to decrease for SPs of large diameter to reach a threshold. This appears counterintuitive but can in fact be explained by the reduced coverage of the pump light in larger SPs due to the fixed size of the beam spot. This causes larger SPs to have inhomogeneously excited CQDs, thus favoring reabsorption over emission in their WGMs.
The parameter α stays approximately the same regardless of the size of the SP (α = 0.35 ± 0.06; Figure S6), and the fact that it is lower than 1 indicates the existence of nonlinear nonradiative processes during the recombination of electron−hole pairs. Some fluctuations attributed to the overlap between the optical pump spot size and the SP can be seen for parameter a (a = 0.06 ± 0.03; Figure S6). The overlap affects the pump light coupled into the SP and therefore the counts detected on the spectrometer. The two average exciton states k̅, corresponding to the exciton/biexciton (k̅₁) and multiexciton (k̅₂) populations in the SP, show that the exciton/biexciton states plateau at k̅₁ = 1.6 ± 0.4, and the multiexciton states plateau at k̅₂ = 3.2 ± 0.6 (Figure S6).
Five out of the eight SPs with radii between 2 and 5 μm reached a threshold. The laser threshold was reached for an average number of excitons of approximately ⟨N⟩ = 2.6 ± 0.8 (Figure 3b). This result is consistent with the average number of excitons per dot of ⟨N⟩ = 2.5 modeled numerically in the state-of-the-art SPs made of type I CQDs.13
Microtransfer Printing and Waveguide Integration of SP Lasers
Manipulation of SPs between substrates (bare glass to bare poly(dimethylsiloxane), i.e., PDMS) was achieved by microtransfer printing.[21][22][23]34 This technique has been demonstrated to print thin LEDs onto diamond and silica with submicron resolution,22 epitaxial nanowires onto polymers,34 and more recently has been applied to the deterministic integration of nanowires, the dense integration of micron-scale devices, and advanced transfer printing methods.35−37 Prior to this work, however, the technique had not been explored for self-assembled microcavities made from colloidal materials. The transfer printing setup in this study used a modified dip-pen nanolithography system with a transparent polymer stamp made of PDMS and an in-line camera that allows visualization of the samples through the stamp. A schematic of the process is shown in Figure 4a−f. The stamp is brought into contact with a single SP sitting on a donor substrate (e.g., a glass slide with SPs drop-cast on it), and when the stamp is peeled from the donor substrate, the adhesion is strong enough to lift the SP from the donor onto the surface of the PDMS stamp. Likewise, when the stamp is brought into contact with the receiving substrate and then retracted, the SP adheres to the receiving substrate. SPs can then be individually selected with the stamp, picked up, moved, and dropped off at a desired location. The PDMS stamp (length × width: 100 μm × 200 μm) used in the transfer printing process was cast from a mold using a silicone elastomer and curing agent at a ratio of 10:1. The tip in the center, used to pick up and drop off SPs, corresponds to a small extrusion of the main block of PDMS (length × width: 10 μm × 30 μm).
The process was first demonstrated by printing 15 SPs with average radii of 2.8 ± 1.7 μm from glass onto PDMS into a pattern following the shape of the University of Strathclyde logo (size distribution of SPs in Figure S2). Figure 4g,h displays the micrographs of the printed SPs under bright-field and under dark-field with ultraviolet (UV) flooding conditions, respectively. All SPs are seen to luminesce after printing. This same process was then tested as a way to couple SPs to waveguides without affecting their capability as lasers. A rendered schematic (Figure 5a) summarizes the proof-of-concept experiment for the integration of an SP with a waveguide, where an SP is placed in contact with a waveguide, on its side, to enable evanescent field coupling between the two structures.
An SP (7.7 ± 0.5 μm in diameter) was transferred onto the silica surface and placed near one of the two facets of the waveguide with its surface in contact with the waveguide,38 as shown in the rendered inset (i) of Figure 5a. The stamp was then gently translated sideways and moved upward to release the SP. The light emitted by the SP and coupled into the waveguide was measured from the output facet at the other edge of the chip (rendered inset ii, Figure 5a), approximately 8 mm away. A second charge-coupled device (CCD) camera with a long pass filter (cutoff wavelength of 550 nm) was used to image the waveguide output facet through an objective lens. Figure 5b,c shows setup images of the SP under optical pumping and the end facet of the waveguide, respectively. The microscope image in Figure 5b and the CCD camera view in Figure 5c are analogous to the rendered insets (i) and (ii). The full setup can be seen in Figure S7.
In dark-room conditions, the SP was excited at different pump energies (spot size of 4.85 × 10⁻⁷ cm²), and a micrograph of the waveguide output facet was acquired by the CCD camera for each of these energies, while the emission spectrum of the SP was simultaneously recorded via the μPL setup.
The signal-to-noise ratio (eq 6) is calculated from the pixel intensity of the images acquired by the CCD camera with the laser off (noise) and the laser on (signal) within the region of interest, i.e., the end facet of the waveguide (Figure S8).
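Equation 6 itself is not reproduced here, but the description above suggests a simple ratio of mean pixel intensities between laser-on and laser-off frames within the region of interest. A minimal sketch under that assumption, with the frames and roi slices as hypothetical inputs:

```python
import numpy as np

def snr(frame_on, frame_off, roi):
    """Pixel-intensity SNR within a region of interest (end facet of the waveguide)."""
    # roi: (row_slice, col_slice) covering the waveguide end facet
    signal = frame_on[roi].astype(float).mean()
    noise = frame_off[roi].astype(float).mean()
    return signal / noise

# e.g., roi = (slice(100, 140), slice(220, 260)) on 2D grayscale camera frames
```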
The end facet data collected by the CCD camera and the corresponding spectral data acquired by the spectrometer can be seen in Figure 6a. Figure 6b complements the measurements of Figure 6a below the laser threshold. The laser transfer function based on the spectra below and above the laser threshold can be seen in Figure 6c. From Figure 6c, the laser threshold occurs at approximately 3 nJ. This corresponds to an average number of excitons of approximately ⟨N⟩ = 1.7 (Figure S9), which is close to the values estimated prior to the transfer printing process (Figure 3b). The consistency between the average number of excitons before and after the transfer printing process is a strong indicator that this method can be reliably used to transfer SPs between substrates. The collected spectra were split into intervals of 4 nm over the range where modes oscillate (628−648 nm) to facilitate the comparison between the intensity at each interval and the signal-to-noise ratio (SNR). This comparison is complemented by a Pearson correlation test to study the relationship between these two types of measurement (Figure 6d). The test is assessed based on two test parameters, the Pearson correlation coefficient (r) and the p-value (p). The Pearson correlation coefficient measures the linear correlation between the two sets of data (−1 ≤ r ≤ 1), and the p-value gives the probability of obtaining test results that are at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct (0 ≤ p ≤ 1). If the null hypothesis of this test is established as the linear independence between the readings on the CCD camera and a given spectral range on the spectrometer, the results show that the null hypothesis is not rejected at the ≤5% level between 628 and 640 nm (Figure 6d). However, for the longest-wavelength modes (640−648 nm), the same null hypothesis is rejected at the ≤5% level. This indicates that the waveguide output at the end facet is strongly correlated (>95% confidence) with the longer-wavelength laser modes of the SP and therefore indicates that these long-wavelength modes are preferentially coupled into the waveguide (Figure 6d). This behavior can be explained by the geometry of the system and the location of the different WGMs in the SP. The diameter of the SP (7.7 ± 0.5 μm) is bigger than the cross section of the waveguide (2 μm × 2 μm). Once the SP reaches the laser threshold, the first modes are likely confined to the equatorial region of the SP. However, higher pump energies enable higher azimuthal modes to oscillate, in this case with longer wavelengths, that are more easily coupled to the waveguide.
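The per-slice test can be reproduced with SciPy; in this sketch the count values are hypothetical placeholders standing in for the integrated counts of one 4 nm slice at the four pump energies:

```python
from scipy.stats import pearsonr

# Hypothetical integrated counts for one 4 nm spectral slice,
# measured at the four pump energies above threshold
ccd_counts = [120.0, 210.0, 345.0, 480.0]
spectrometer_counts = [1.1e4, 2.0e4, 3.4e4, 4.7e4]

r, p = pearsonr(ccd_counts, spectrometer_counts)
coupled = p <= 0.05  # reject linear independence at the 5% level
print(r, p, coupled)
```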
■ SUMMARY AND CONCLUSIONS
A current barrier to the use of colloidal nanocrystal SP lasers in integrated optics is the lack of a controllable and scalable technique for the deterministic manipulation of such microlasers. In this work, microtransfer printing has been proposed and demonstrated as a solution to this problem. CdS_xSe_(1−x)/ZnS CQD SPs with diameters ranging from 4 to 20 μm and Q-factors of approximately 300 were self-assembled via an oil-in-water process. SPs with sizes below 10 μm in diameter achieved lasing on the μPL setup between 625 and 655 nm, with one or several angular modes oscillating depending on the pump level and the SP. Using the proposed model for the emission of SPs below the laser threshold, the average number of excitons ⟨N⟩ at the threshold was found to be ⟨N⟩ = 2.6 ± 0.8, a value that is consistent with the ⟨N⟩ = 2.5 simulated for SPs of CQDs.13 This model could help in the assessment of enhancements of SPs by studying the evolution of its parameters. The microtransfer printing process demonstrated the pick-up and drop-off of individual SPs onto different substrates without affecting their laser capability. As a proof of concept, an SP laser was integrated and side-coupled to a polymer waveguide. The laser threshold fluence of the transfer-printed SP (≈6.2 mJ·cm⁻²) is within the range of laser threshold fluences of the SPs studied in Figure 3 (2.7−19.1 mJ·cm⁻²). The fitted parameters give an estimated average number of excitons for the transfer-printed SP of ⟨N⟩ ≈ 1.7, which is also close to the values of the SPs studied in Figure 3, with ⟨N⟩ = 2.6 ± 0.8. Under optical pumping, specific laser modes of the SP were successfully coupled and detected at the end facet of the waveguide. This makes the transfer printing method a strong contender for future integrated photonic applications of SPs and paves the way to more complex designs.
Synthesis and Characterization of the SPs
The self-assembly of SPs followed an oil-in-water emulsion process carried out at room temperature. Two immiscible solutions were prepared: one with the CQDs dissolved in chloroform at a concentration of approximately 250 mg/mL and another with PVA dissolved in Milli-Q water at a mass ratio of 1.25%. The emulsion was prepared by vortexing 115 μL of the CQD solution with 450 μL of the water solution for 10 min and stirring the mixture for approximately 2 h at 750 rpm. Once the stirring was completed, the self-assembled SPs were diluted in water at a volume ratio of 1:50 and vortexed again to remove traces of PVA from their surfaces. Samples in this work were prepared by drop-casting 10 μL of cleaned SPs onto a glass substrate. Their size distribution and polydispersity can be seen in Figure S2.
Microfabrication and Characterization of Waveguides
Waveguides were fabricated by laser lithography. A 2 μm layer of SU-8 2 (MicroChem), with a viscosity of 45 cSt, was spin-coated (30 s at 2000 rpm) onto a pretreated glass substrate. The pretreatment of the glass included a 10 min ultrasonic bath in acetone and a 10 min ultrasonic bath in isopropanol before it was rinsed in water and dried. The SU-8-coated glass substrate was soft-baked for 1 min at 65 °C and 3 min at 95 °C. The photolithography step to pattern the waveguides was done with a custom maskless laser lithography tool (λ = 370 nm). After photolithography, the sample was baked again for 1 min at 65 °C and 1 min at 95 °C. The postexposure baking was followed by the development of the SU-8 resist for 1 min, using MicroChem's SU-8 Developer. The waveguides were cleaved on both ends. The final waveguides were 8.0 ± 0.5 mm in length with a cross section of 2 μm × 2 μm, which makes them multimode at the wavelengths of interest (visible spectrum) with an increased numerical aperture (NA) for input coupling and compatible with several optical interconnect techniques.39 The propagation losses were measured with a narrow-linewidth tunable laser (λ = 1550 nm) using the fast Fourier transform (FFT) method and estimated to be less than 3 dB·cm⁻¹ for the fundamental mode.40
Transfer Printing the SPs
A polymer μ-stamp of poly(dimethylsiloxane) (PDMS) was made from elastomer and curing agent at a 10:1 ratio (SYLGARD 184 Silicone Elastomer Kit) and cured at 60 °C. The stamp was then used in a modified dip-pen nanolithography system to pick up SPs (Figure 4). SPs were moved with submicron resolution from the substrate where they were initially drop-cast to the substrate with the waveguide.41 An auxiliary camera embedded in the system allows the user to control the transfer printing process.42
Optical Characterization
The PL and absorbance spectra of the CQDs can be found in Figure S1. The SPs were optically pumped with a 0.76 ns pulse width microchip pulsed laser (λ = 532 nm, MNG-03 × 10 −100, Teem Photonics) at a repetition rate of 7.1 kHz and with a beam spot area of approximately 2.88 × 10⁻⁷ cm² for the individual characterization and 4.85 × 10⁻⁷ cm² for the coupling. The beam was attenuated with a variable wheel attenuator and focused on the sample with an objective lens (4×, NA 0.13, Nikon). A spectrometer (AvaSpec-2048-4-DT, Avantes) with a 0.7 nm spectral resolution between 220 and 1100 nm was used to acquire the spectral data. A power meter was used to calibrate the output energy as a function of the attenuator filter before each experiment. More details on the setup can be seen in Figure S3.
SP-Waveguide Coupling
To visualize whether light was being coupled into the waveguides, an extra camera (DCC1645C, Thorlabs) was installed on the μPL setup for this experiment (Figure S7). Measurements were taken in the dark. A long pass filter (FEL0550, Thorlabs) was attached to the camera to cut off stray light from the pump. SPs were coupled and pumped at one end of the waveguide, and the camera was focused on the facet of the other end to image the light coupled to it. Images were acquired and processed at different pump fluences.
SEM Characterization
SPs were also characterized with an FEI Quanta 250 FEG scanning electron microscope (SEM). A custom-built CL setup collects light perpendicular to the beam excitation through a reflecting objective. The spectrum is measured using a 0.125 m spectrometer containing a 50 μm slit and a 600 lines/mm grating, paired with a cooled back-illuminated electron-multiplying charge-coupled device.43 Elemental analysis was performed on SPs through energy-dispersive X-ray spectroscopy (EDS) to map and visualize the elements present.
■ ASSOCIATED CONTENT
Supporting Information
Figure 1. Illustration of the nucleation process occurring inside the emulsion droplets that leads to SPs (a); emission intensity versus pump energy, with the laser threshold at approximately 7 nJ (b), and emission spectra (c) of an SP with a diameter of 9.8 ± 0.5 μm; micrographs of an SP under optical pumping (λ_pump = 532 nm; see the Optical Characterization Section) below and above the lasing threshold (d, e). The full optical setup can be seen in Figure S3.
Figure 2. SEM image of an SP of approximately 14 μm in diameter (a) and its CL spectrum (b) and discrete Fourier transform analysis (c). The Q-factor estimated from the WGMs was 295 ± 15.
Figure 3. Study of the free parameter b (from Table S3) as a function of the SP radius (a), extracted using eqs 3−5. The black data points correspond to the SPs that did not achieve lasing, and the red data points to those that did. The two data sets are visibly separable (dashed line), suggesting that the capability of an SP of a given size to operate as a laser is strongly intertwined with the parameter b. The average number of excitons ⟨N⟩ at the laser threshold was then calculated for the lasing SPs (b) based on the fitting parameters (eq 2 and Table S3). The optical pump energies required to reach the laser threshold were extracted from Table S2 and included in the callouts. SPs were optically pumped at λ_pump = 532 nm (see the Optical Characterization Section). The full optical setup can be seen in Figure S3.
Figure 4. Illustration of the transfer printing process applied to the waveguide coupling of an SP: selection of the SP (a); pick-up (b, c); selection of the target destination, e.g., a substrate with a waveguide (d); and drop-off (e, f). Proof of concept with 15 SPs transfer-printed onto a PDMS substrate to mimic the University of Strathclyde logo under white light (g) and a UV lamp, λ_lamp = 365 nm (h). Logo used with permission from University of Strathclyde, Glasgow.
Figure 5. Illustration of the SP-waveguide coupling setup (a), where the sample is simultaneously aligned with the laser pump (a-i) and the CCD camera (a-ii). The SP (diameter ≈7.7 ± 0.5 μm) is pumped at one edge of the waveguide, and the facet at the other edge is monitored by the CCD camera, which is preceded by a long pass filter (550 nm) to cut out any scattered light from the pump (λ_pump = 532 nm; see the Optical Characterization Section). The acquired microscope (b) and CCD camera (c) views correspond to the illustrations (a-(i, ii)), respectively. The spectrometer and image readings from the CCD camera were acquired simultaneously and compared to verify which modes were coupled to the waveguide. The full optical setup can be seen in Figure S7.
Figure 6. Readings of the CCD camera, with the enhanced facet pictures and corresponding data, depicted alongside the pictures of the transfer-printed SP (7.7 ± 0.5 μm in diameter) and readings on the spectrometer (a). These measurements were done under four different excitation intensities above the laser threshold. Spectrometer readings below threshold and under three different excitation intensities are also shown in panel (b). The dashed line seen in spectra (a, b) tracks one of the modes of the SP at approximately 640 nm. The data acquired at that wavelength were used to plot the emission intensity versus pump energy (c), and the laser threshold of the transfer-printed SP was found to be at approximately 3 nJ. The spectral range where the lasing peaks were located in the SP (i.e., from 628 to 648 nm) was divided into 5 equal parts of Δλ = 4 nm each. A Pearson correlation test between the counts registered on the CCD camera and the counts registered on the spectrometer was then performed on each of those 5 parts, using the data of the 4 different optical pump energies (d). The two test parameters, r and p, correspond to the Pearson correlation coefficient and p-value, respectively. SPs were optically pumped at λ_pump = 532 nm (see the Optical Characterization Section). The full optical setup can be seen in Figure S7. | 9,094.4 | 2023-11-15T00:00:00.000 | [
"Physics",
"Materials Science",
"Engineering"
] |
Stability of functional differential systems applied to the model of testosterone regulation
In this paper we propose a method for stability studies of functional differential systems. The idea of our method is to reduce the analysis of an n-dimensional system to that of an (n+m)-dimensional system, where m is a natural number, to obtain stability, and then to come back and draw conclusions on the stability of the given n-dimensional system. As an example, a model describing testosterone regulation by distributed input feedback control is considered. The aim of the regulation is to hold the testosterone concentration above an appropriate level. A feedback control with an integral term is proposed. We have to increase the testosterone level to the normal one, yet the control we propose could destroy the stability of the model. That is why we have to choose the parameters of our distributed control, namely the dosage or the intensity of assimilation of a medicine in the human body, in such a form that the stability of our system is preserved. Thus the problem of regulation of the testosterone level leads us to the stability analysis of the functional differential system describing the connection between the concentrations of the hormones GnRH, LH, and testosterone (Te). Constructing the system, we discard the connections that seem nonessential. To estimate the effect of these connections is an important problem. We construct the Cauchy matrix of the integro-differential system to estimate this influence.
Introduction
A functional differential equation of the form (1.1), where B(t) is an n × n matrix with essentially bounded coefficients and K : C^n → L^n_∞ is a linear bounded operator acting from the space of continuous functions C^n to the space of essentially bounded functions L^n_∞, with f ∈ L^n_∞ (all functions are understood as x : [0, ∞) → R^n), appears as a mathematical model describing processes in medicine, biology, and technology [18].
The operator K can be, for example, of the integral form

(KX)(t) = ∫₀ᵗ k(t, s)X(s) ds.   (1.2)

Although control with distributed inputs frequently appears as a challenging problem, only a few papers are devoted to it (see, for example, the works [2, 11-14, 16, 17, 23]). Noise in the feedback delay control is the main obstacle appearing in mathematical models, because it is impossible to base the control on the value of the process X(t) at a moment t_j only; we have to use an average value of the process X(t) = col{x₁(t), . . . , x_n(t)} over a corresponding neighborhood of the point t_j. Increasing the number of points, we actually arrive at a control of the integral form (1.2).
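For intuition, the distributed operator (1.2) can be evaluated numerically on a uniform grid. The sketch below uses the trapezoidal rule; the exponential kernel in the example mirrors the memory kernels used later in the model, and the grid and input are illustrative:

```python
import numpy as np

def apply_K(x, kernel, t_grid):
    """(KX)(t) = integral_0^t k(t, s) X(s) ds, via the trapezoidal rule on a grid."""
    out = np.zeros_like(t_grid)
    for i, t in enumerate(t_grid):
        s = t_grid[: i + 1]
        f = kernel(t, s) * x[: i + 1]
        if i > 0:
            out[i] = np.sum((f[1:] + f[:-1]) * np.diff(s)) / 2.0
    return out

# Example: exponential memory kernel k(t, s) = exp(-(t - s))
t = np.linspace(0.0, 10.0, 501)
x = np.sin(t)
y = apply_K(x, lambda t_, s_: np.exp(-(t_ - s_)), t)
```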
It was demonstrated in the works [18,21] that integro-differential systems can be used to model endocrine regulation in relation to the delay incurred by the transport of a hormone from the secretion site to the receptors. It should be noted (see, for example, [17]) that the signaling of the receptors is sensitive to the mean value of the hormone concentration over a certain period of time rather than to its instantaneous value. Integral terms also arise, for example, in describing the time required for the assimilation of a medicine; an integral term with a kernel defining a weight for every value adopts this role. It is pointed out in [23] that models with distributed inputs can appear in population dynamics, in propellant rocket motors, and in network control systems. Sufficient conditions of stability for IDE (1.1) were obtained in many well-known works (see, for example, [7,8,14,24,25,29]).
In our model, we have to increase the testosterone level to the normal one. The control we propose could destroy the stability of the model, so we have to be sure that the stability of the system is preserved. Note the positivity-based approach to stability of delay equations developed, for example, in the works [1,9,10,15,19,20,26,28]. Our method and the positivity-based approach can also be used for the analysis of nonlinear systems on the basis of the classical approach of the book [22].
Description of the model
Simplifying the process of testosterone regulation, we can depict the signal transduction pathway initiated in the brain that leads to testosterone production in the Leydig cells.
This complex pathway encompasses a number of chemical and biological events and can be divided into three stages. The first stage takes place at the brain level. The hypothalamus produces the gonadotrophin-releasing hormone (GnRH) that activates the luteinizing hormone (LH) in the pituitary gland. The second stage is reflected by the release of LH into the bloodstream. The blood flow transfers the LH hormone to the Leydig cells. The third, final, stage begins with the activation of a cascade of biochemical events in the Leydig cells that results in the production and subsequent release of testosterone [3,5,6,27]. This pathway, like many others in our body, is cyclic and includes a mechanism of negative feedback control. Thus, when the level of testosterone rises, the hypothalamus receives a signal about a sufficient level of the hormone and stops producing GnRH, which subsequently inhibits LH release and as a result leads to lowering the concentration of testosterone. After the level of testosterone drops, the brain receives a signal about the renewal of the process. Some relevant descriptions of the model can be found, for example, in [4][5][6].
The model is given by system (2.1), where Φ and F are nondecreasing and nonincreasing functions, respectively.
The model describes the interaction of the concentrations of the hormones GnRH, LH, and Te, which will be denoted as x₁, x₂, and x₃, respectively. The values b_i, 1 ≤ i ≤ 3, correspond to the respective half-life times of GnRH, LH, and Te. In the healthy male body all elements involved in the process work in concert.
We would like to propose a mechanism that allows us to hold the testosterone at a normal level T(t), even when the normative exchange of information in the biological system described by the impulses (see the formulas after equation (2.1)) fails and the signals do not enter the brain at all, or their influence is not enough to hold the corresponding level of testosterone. We will use a control of the form (2.2), setting it in the right-hand side of the third equation of system (2.1).
The idea is clear: if T(t) > x₃(t), the control has to increase the testosterone level x₃(t); if T(t) < x₃(t), it has to decrease it. Thus we come to the third equation in (2.1) in the form (2.3), where the operator K : C → L_∞ (C and L_∞ are the spaces of continuous and essentially bounded functions, respectively) is defined by the corresponding integral equality. After substituting control (2.2) into the third equation of system (2.1), we come to system (2.4). The corresponding homogeneous system is (2.5). As usual, the coefficients are supposed to be positive.
Stability of integro-differential system
Our approach is based on the fact that system (2.4) of integro-differential equations can be reduced to a system (3.1) of ordinary differential equations, whose corresponding homogeneous system is (3.2). The solution-vectors col(x₁(t), x₂(t), x₃(t)) of system (2.4) and the first three components of the solution-vector col(x₁(t), x₂(t), x₃(t), x₄(t), x₅(t)) of system (3.1) satisfying the initial conditions x₄(0) = 0, x₅(0) = 0 coincide.
Under conditions (4.1)−(4.3), the matrix of the coefficients takes the form (3.3). Proof. The characteristic polynomial P(x) of system (3.2) is such that all the roots of P(x) = 0 have negative real parts. This means, according to Lemma 3.1, the exponential stability of (2.5).
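The root condition can be checked numerically once the coefficients of P(x) are known. The sketch below uses a hypothetical polynomial, since the actual coefficients of system (3.2) depend on the model parameters:

```python
import numpy as np

# Hypothetical characteristic polynomial coefficients (highest degree first);
# the actual P(x) of system (3.2) depends on the model parameters.
coeffs = [1.0, 4.0, 5.0, 2.0]            # (x + 1)^2 (x + 2)
roots = np.roots(coeffs)
exponentially_stable = np.all(roots.real < 0)
print(roots, exponentially_stable)
```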
Solving system (3.2) in the case of (4.1)−(4.3), we obtain the columns of the Cauchy matrix; the second, third, fourth, and fifth columns are constructed in the same manner as the first. The construction of the Cauchy matrix of a system of ordinary differential equations can be found, for example, in [13].
Effects of changes in the right-hand side and of uncertain coefficient on the behavior of solutions
Constructing a system, we neglect the influences of different factors that seem to be nonessential. We also cannot know exactly the values of the coefficients describing the model. The Cauchy matrix C(t, s) allows us to estimate the influence of all these factors on the testosterone concentration. Consider the systems (5.0) and (5.1), where the (5 × 5) matrix A is described in (3.3), X(t) = col(x₁(t), . . . , x₅(t)), and F(t) describes a change of the right-hand side. We assume that F(t) and F̃(t) are 5-vectors with essentially bounded components F_i(t) and F̃_i(t). The general solution of system (5.0) has the representation

X(t) = C(t, 0)X(0) + ∫₀ᵗ C(t, s)F(s) ds,

where C(t, s) is the Cauchy matrix of system (5.0). In the following assertion, we estimate the difference between the solution-vector Y(t) of system (5.1) and the solution X(t) of system (5.0) with the same initial condition (i.e., X(0) = Y(0)).
The elements C_ij(t, s) were obtained in Section 4. The proof follows from the representation of the solution of system (5.0).
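The effect of a change in the right-hand side can also be examined numerically. The following sketch integrates a nominal and a perturbed system with a hypothetical stable matrix A and compares the solutions, in the spirit of the estimate above:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = -np.eye(5)                                   # hypothetical stable (5 x 5) matrix
F = lambda t: np.ones(5)                         # nominal right-hand side
F_pert = lambda t: np.ones(5) + 0.1 * np.sin(t)  # perturbed right-hand side

x0 = np.zeros(5)
sol = solve_ivp(lambda t, x: A @ x + F(t), (0.0, 10.0), x0, dense_output=True)
sol_p = solve_ivp(lambda t, x: A @ x + F_pert(t), (0.0, 10.0), x0, dense_output=True)

t = np.linspace(0.0, 10.0, 200)
max_dev = np.abs(sol_p.sol(t) - sol.sol(t)).max()  # sup-norm of Y(t) - X(t)
print(max_dev)
```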
Consider now the following system of equations with an uncertain coefficient:

∫₀ᵗ e^(−α₃(t−s)) x₃(s) ds = f(t).
We can write our system in the form of the auxiliary system

X′(t) − AX(t) = Z(t).   (5.5)

The general solution of the auxiliary system (5.5) can be represented in the form

X(t) = C(t, 0)X(0) + ∫₀ᵗ C(t, s)Z(s) ds. | 2,231.6 | 2019-12-01T00:00:00.000 | [
"Mathematics"
] |
Categorizing Malware via A Word2Vec-based Temporal Convolutional Network Scheme
As the edge computing paradigm has achieved great popularity in recent years, there remain some technical challenges that must be addressed to guarantee smart device security in Internet of Things (IoT) environments. Generally, smart devices transmit individual data across the IoT for various purposes nowadays, and this can cause losses and pose a huge threat to users, since malware may steal and damage these data. To improve malware detection performance on IoT smart devices, we conduct a malware categorization analysis based on the Kaggle competition dataset of the Microsoft Malware Classification Challenge (BIG 2015) in this article. Practically speaking, motivated by the temporal convolutional network (TCN) structure, we propose a malware categorization scheme mainly using a Word2Vec pre-trained model. Considering that the popular one-hot encoding converts input names from malicious files into high-dimensional vectors, since each name is represented as one dimension in the one-hot vector space, more compact vectors with fewer dimensions are obtained through the Word2Vec pre-training strategy, which leads to fewer parameters and stronger malware feature representation. Moreover, compared with long short-term memory (LSTM), TCN demonstrates better performance with longer effective memory and faster training speed in sequence modeling tasks. The experimental comparisons on this malware dataset reveal better categorization performance with less memory usage and training time. In particular, the performance comparison between our scheme and the state-of-the-art Word2Vec-based LSTM approach shows that our scheme achieves approximately 1.3% higher predicted accuracy on this malware categorization task. Additionally, it also demonstrates that our scheme reduces the parameter count by about 90 thousand and the model training time by more than 1 hour in this comparison.
Introduction
Recent developments in the field of edge computing have led to extensive attention on smart device security in the Internet of Things (IoT) environment [1]. Nowadays, smart devices interact with networks for various purposes. A mass of personal information, including health data, is transmitted across these networks, so research on malware detection and categorization on IoT remains imperative and promising. Malware detection and analysis have received extensive discussion, yet traditional approaches are not fully available on edge devices in the IoT environment. Certain traditional defense techniques applied to general desktop computing environments rely on pre-defined rule libraries. However, the portability of smart devices means that they are not always connected to fixed and trusted networks, and thus perimeter-based defenses, including firewalls and intrusion detection, are not available for edge devices [5]. Moreover, as smart devices put more emphasis on real-time interaction, the corresponding malware identification requires faster response speed than on traditional platforms. Current malware identification for edge devices mainly relies on the malware signature databases from software distributors, yet this approach cannot meet the demand of detecting the ever-growing number of malware samples in the edge computing paradigm. Research on automatic malware analysis techniques in the IoT environment is exceptionally urgent. In our previous works, to measure the stability of cyber-physical systems (CPSs) under malicious attacks, we developed a finite-time observer to estimate the state of the CPSs [6]. Then, we proposed a kernel learning algorithm to improve the malware detection performance on complex datasets with noise [7]. In addition to detection performance, memory footprint and response speed are also of enormous importance for current smart devices on IoT, and this poses higher requirements for edge malware analysis. In this article, we are committed to improving edge malware identification performance with low memory footprint and fast response speed.
As one of the most energetic technology companies, Microsoft has shown great enthusiasm for the IoT field, and Windows-based applications have been well developed via their Azure IoT platform services [8]. Focused on the Windows-based malware invasion problem on the IoT platform, this article proposes a malware categorization scheme for attributing malware to different families through a Word2Vec-based temporal convolutional network (TCN). The model performance is evaluated by comparing with several representative works, i.e., Naive Bayes Classifier, OneHot-based TCN, and Word2Vec-based long short-term memory (LSTM), on the Microsoft Malware Classification Challenge (BIG 2015) dataset.
In this research, opcode and application programming interface (API) call name sequences are extracted from the malware assembly files first. Then, in consideration of the benefits of a pre-training strategy for achieving better performance, a Word2Vec model, which encodes textual data with distributed representations by considering the context, is implemented for input name vectorization. Compared with the one-hot encoding approach, Word2Vec encodes the input names into more compact numeric vectors by training a language model, and this leads to a lower memory footprint and better representational ability. Finally, a TCN, as an advanced convolutional network structure for sequence modeling tasks, is developed to attribute the malware. Compared with other recurrent neural networks (RNNs), e.g., gated recurrent unit (GRU) and LSTM, TCN is easy to parallelize because of its convolutional structure. In addition, TCN demonstrates the significant advantage of a lower memory requirement than canonical recurrent networks due to the shared filters across the convolutional layers. The remainder of this article is organized as follows. The next section gives a summary of the background consisting of the Word2Vec model, the TCN structure, and recent works on IoT malware classification and categorization. Following that, the proposed scheme and its time complexity are elaborated and analyzed. Then, the next part describes the experimental settings and results for model evaluation. The final section includes a conclusion of the proposed scheme and a promising direction for further research.
Word2Vec model
Input name sequences from malware samples are textual data that should be encoded into numeric vectors for feature representation. Word embeddings are general approaches to map primitive representations of words into high-dimensional numeric vectors in an embedding space while maintaining word distances. Nowadays, word embeddings have gained increased research interest, and among them Word2Vec is one of the most significant text representation models [9,10]. Word2Vec assumes that the contexts in natural language are highly correlated, and hence words can be vectorized according to their contexts [11]. Then, word vectors can be obtained from a training corpus to measure the semantic similarities between words in natural language. Note that word vectors are generally derived from the weights of trained language models rather than being the direct training targets in Word2Vec. Generally, Word2Vec includes two kinds of architectures, i.e., contextual bag-of-words (CBOW) and skip-gram (SG), to learn distributed representations [12][13][14]. A simple skip-gram model architecture is shown in Fig. 1 [10]. A large and growing body of literature has studied the effectiveness of the Word2Vec model in various areas. In [15], the Word2Vec technique was applied to social relationship mining in a multimedia recommendation method. This method recommended multimedia to users based on a trust relationship, and Word2Vec here was used to encode the sentiment words in related comments into word vectors. In [16], a Word2Vec-based music modeling method adopted skip-gram to model slices of music from a large music corpus. Word2Vec was proved a useful embedding technique to capture meaningful tonal and harmonic relationships in music according to their experimental results. Word2Vec has also shown powerful representation ability for inverse virtual screening in the early stage of the drug discovery process. In [17], Word2Vec was combined with a dense fully connected neural network algorithm to perform a binary classification on input protein candidates. In addition, several recent studies investigating Word2Vec in the areas of malware classification and detection have been carried out. In [18], a malware detection method named DroidVecDeep was designed to detect unknown malicious applications on the Android platform. Here, features were extracted by static analysis and ranked by mean decrease impurity first, and then transformed into compact vectors to train a deep classifier according to the Word2Vec model. In [19], a LeNet5 structure was developed for malware classification based on multi-channel feature matrices, which were converted from malware binary files and assembly files via the Word2Vec technique.
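As an illustration of the skip-gram training described above, a Word2Vec model can be pre-trained on opcode/API call name sequences with gensim (the 4.x API is assumed here); the two toy sequences and all hyperparameter values are illustrative:

```python
from gensim.models import Word2Vec

# Each "sentence" is one malware sample's opcode/API call name sequence
sequences = [
    ["push", "mov", "call", "CreateFileA", "ret"],
    ["mov", "xor", "jmp", "call", "VirtualAlloc"],
]

model = Word2Vec(sequences, vector_size=300, window=5,
                 min_count=1, sg=1, epochs=20)   # sg=1 selects skip-gram

vec = model.wv["mov"]                      # 300-dimensional embedding of one name
similar = model.wv.most_similar("call", topn=3)
```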
Temporal convolutional network
RNNs are considered the general methods for sequence modeling tasks. However, certain convolutional structures show state-of-the-art performance in some sequence modeling tasks, such as audio synthesis, machine translation, and language modeling [20][21][22]. Then, to verify whether convolutional structures are suited only to specific sequence modeling applications, the TCN structure was developed and compared with common RNNs, such as GRU and LSTM, on a comprehensive set of sequence modeling tasks. The comparison results on these tasks indicate better performance and longer effective memory of the TCN structure [23].
TCN uses a specific 1D convolutional structure for sequence information representation. Assuming x = (x₁, . . . , x_t, . . . , x_l) is the input sequence, l denotes the input sequence length, x_t denotes the input at time step t, g ∈ R^(h×n) represents n convolutional filters with kernel size h, and "∗" denotes the convolution operator, then a canonical 1D convolutional operation can be formed as [24]:

(x ∗ g)(t) = Σ_{i=0}^{h−1} g_i · x_{t−i}.   (1)

However, 1D convolutional networks face information leakage and output shrink problems. To overcome these limitations, TCN combines a 1D fully-convolutional network (FCN) and causal convolutions [25]. In a 1D FCN, hidden layers have the identical length as the input sequence to prevent the output length from shrinking. In causal convolutions, the output at time step t is convolved only with the neural nodes at time t and the earlier ones in the previous layer. Moreover, considering that the receptive field of a 1D FCN is linear in the number of convolutional layers, the dilated convolution technique is integrated into the TCN structure for longer effective memory. Then, the dilated convolutional layer can be defined as:

(x ∗_d g)(t) = Σ_{i=0}^{h−1} g_i · x_{t−d·i},   (2)

where "∗_d" denotes the convolution operation with dilation factor d.
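A direct single-channel rendering of eq (2) is sketched below; x is the input sequence, g the filter, and d the dilation factor, with causal zero padding so the output keeps the input length as in the 1D FCN described above:

```python
import numpy as np

def dilated_causal_conv(x, g, d):
    """y[t] = sum_i g[i] * x[t - d*i], with x[j] = 0 for j < 0 (causal zero padding)."""
    l, h = len(x), len(g)
    y = np.zeros(l)
    for t in range(l):
        for i in range(h):
            j = t - d * i
            if j >= 0:               # only current and past inputs contribute
                y[t] += g[i] * x[j]
    return y

y = dilated_causal_conv(np.arange(10.0), np.array([0.5, 0.3, 0.2]), d=2)
```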
Residual connection is another important ingredient of TCN [26]. In a residual connection, the output of a branch containing a series of transformations G is added to the input of the block. Assuming the input of the residual block is z and the output of the block is o, the residual block can be defined as:

o = Activation(z + G(z)).   (3)

Compared with canonical RNNs, such as LSTM and GRU, TCN always has longer effective memory and better performance. Additionally, two other advantages are determined by the particular TCN structure. The fact that neural nodes in each hidden layer are not sequentially connected enables parallel computation for higher computational efficiency, and the shared filters across each layer lead to fewer parameters in TCN. A common TCN structure is illustrated in Fig. 2 [27].
Machine learning methods on edge malware detection and categorization
With the rapid development of IoT, smart devices have suffered various attacks in the edge computing paradigm. For instance, in the distributed denial-of-service (DDoS) attack on October 21, 2016, large numbers of IoT devices, such as digital video recorders (DVRs) and internet protocol (IP) cameras, were infected by Mirai to participate in this attack [28]. Therefore, research on malware categorization and analysis in the IoT environment is of great significance. As machine learning methods, such as support vector machine (SVM), extreme learning machine (ELM), and neural network (NN), have shown good achievements on classification tasks, there has been a surge of interest in machine learning methods for edge malware detection in recent years. In [29], Sagar developed a three-stage malware detection model to improve detection performance. Term frequency-inverse document frequency (TF-IDF) and information gain (IG) features were extracted in the first stage, and then the principal component analysis (PCA) technique was brought in for feature extraction. Finally, a deep belief network (DBN) with an optimized activation function was constructed to attribute the malware. In [4], Niu et al. combined static analysis and the extreme gradient boosting (XGBoost) method to overcome the low accuracy of static analysis and the high resource overhead of dynamic analysis on X86-based IoT devices in an autonomous driving application. In [30], the opcodes of IoT applications were transmuted into a vector space, and then fuzzy and fast fuzzy tree methods were developed to detect and classify the malware. In addition, the control flow graph (CFG) was another common choice for malware classification. In [31], a CFG-based deep learning model was constructed to identify malware and benignware among IoT disassembled samples.
The proposed malware categorization scheme
In this section, a brief introduction to the malware dataset for this work is given first. Then, the pre-processing used to filter the input sequences is analyzed. Furthermore, a Word2Vec-based TCN for malware categorization is elaborated. Through the employment of a pre-trained Word2Vec model, the input name sequences are embedded into a vector space, and then a TCN structure is developed for malware categorization. The whole process is illustrated in Fig. 3; it consists of input, output, pre-processing and test set validation processes, and three network modules: the Word2Vec pre-trained model, the input embedding module, and the TCN categorization module. The comparison between the state-of-the-art Word2Vec-based LSTM approach (left) and our proposed scheme (right) is illustrated in Fig. 4. The comparison in Fig. 4 shows that the main differences between our proposed scheme and this Word2Vec-based LSTM are the pre-processing and the categorization network. In pre-processing, we apply extra useful tricks for feature extraction. Continuously repeated names, representing repeated processes in program execution, provide no additional information for malware categorization. Therefore, a strategy to remove such repeats is designed here. In addition, too-short sequences, which provide inadequate information for family classification and introduce much noise into feature representation, are eliminated in our scheme. Considering the categorization network, the TCN in our scheme has longer effective memory due to the dilated convolution structure. Moreover, the residual structure is another reason that our scheme performs better than Word2Vec-based LSTM. More details about the proposed scheme are described in the following parts.
Dataset
Experiments on the Microsoft Malware Classification Challenge (BIG 2015) dataset [32] are performed to evaluate the proposed scheme. The original dataset of approximately 500 GB consists of more than 20K malware samples belonging to nine malware families. In this work, considering that the test data with no labels are unavailable for supervised tasks, only the labeled training data in the whole competition dataset are utilized. The corresponding assembly source file of every malicious program is produced from the binary file through the interactive disassembler pro (http://www.hex-rays.com/products/ida/). Then the opcode and API call name sequences are extracted from the corresponding assembly source files.
Pre-processing
Input name sequences are roughly extracted from assembly source files, and therefore further data processing is an essential and primary step before feature representation [33]. Some extracted sequences contain many consecutive duplicate opcode and API call names which supply no additional information for modeling, so reducing consecutive repeated names is an imperative procedure. Meanwhile, the extracted sequences have unequal lengths, so unifying the length of the sequences is another consideration. As a whole, the main data pre-processing techniques in this work are as follows (a minimal sketch is given after the list):
• Filter consecutive duplicate opcode and API call names: remove the consecutive and identical names in input sequences to avoid redundant information.
• Filter short sequences: some sequences from assembly source files which consist of only several opcode and API call names may contain insufficient information to identify the corresponding programs, and these sequences are removed from the dataset.
• Unify the sequence length: samples with varying lengths are tricky for neural networks, and therefore unifying the sequence length is imperative for malware categorization. In this work, a sequence length L is pre-set to equalize the lengths [34]. The sequences with length longer than L retain the first L names, and those shorter than L are unified via zero-padding.
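A minimal sketch of the three steps, where the short-sequence threshold min_len is illustrative (its exact value is not stated) and vocab is assumed to map names to positive integer indices so that 0 can be reserved for padding:

```python
from itertools import groupby

PAD = 0  # index reserved for zero-padding

def preprocess(name_seq, vocab, L, min_len=10):
    """Filter consecutive duplicates, drop short sequences, and unify the length to L."""
    seq = [name for name, _ in groupby(name_seq)]   # collapse consecutive repeats
    if len(seq) < min_len:                          # too little information: discard
        return None
    ids = [vocab[name] for name in seq[:L]]         # keep the first L names
    return ids + [PAD] * (L - len(ids))             # zero-pad shorter sequences
```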
After the data pre-processing, the sample size of the dataset reaches 10868 and the vocabulary contains 1121 unique opcode and API call names. In the experiments, the extracted sequences are split into training, validation, and test sets in the proportions of 0.64, 0.16, and 0.2, respectively. The statistical information of each category is shown in Fig. 5 and data samples are shown in Fig. 6.
Word2Vec-based TCN structure
Word2Vec-based TCN mainly consists of a Word2Vec model and a TCN sequence analysis model. In this structure, input sequences are transmitted to the Word2Vec model in the first step, and then the embedding layer weights are initialized with the numeric vectors from the trained Word2Vec model. Subsequently, a specific TCN for malware categorization is trained. Finally, the Word2Vec-based TCN model is automatically evaluated on the test set. The algorithm description is presented in Algorithm 1. TCN, which consists of several specific convolutional structures, is an advanced sequence modeling structure. Compared with common RNNs, such as LSTM and GRU, TCN is characterized by fewer network parameters and faster training speed with better performance on sequence modeling tasks. In this article, a TCN structure as illustrated in Fig. 7 is developed for malware categorization. In Fig. 7, the TCN is constructed from stacked residual blocks where the dilation factor grows exponentially as the blocks are stacked. In addition, each residual block contains two dilated causal convolutional layers, and all the convolutional layers contain 32 filters in this TCN. Finally, in the last layer, "fc", which is a fully connected layer with 9 hidden neurons and a softmax activation function, outputs the predicted family probabilities.
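The following is a minimal Keras sketch of this structure, not the authors' exact code: the embedding initialization from the pre-trained Word2Vec matrix, the 32 filters per convolutional layer, the two dilated causal convolutions per residual block, and the 9-way softmax output follow the description above, while the dilation factors (1, 2, 4, 8), the kernel size of 3, and the global-average pooling before the "fc" layer are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model, initializers

def residual_block(x, filters, kernel_size, dilation):
    # Two dilated causal convolutions per residual block
    y = layers.Conv1D(filters, kernel_size, padding="causal",
                      dilation_rate=dilation, activation="relu")(x)
    y = layers.Conv1D(filters, kernel_size, padding="causal",
                      dilation_rate=dilation, activation="relu")(y)
    if x.shape[-1] != filters:                  # 1x1 conv to match the skip path
        x = layers.Conv1D(filters, 1)(x)
    return layers.Activation("relu")(layers.add([x, y]))

def build_model(vocab_size, w2v_weights, max_len, n_families=9):
    inp = layers.Input(shape=(max_len,))
    x = layers.Embedding(vocab_size, w2v_weights.shape[1],
                         embeddings_initializer=initializers.Constant(w2v_weights))(inp)
    for d in (1, 2, 4, 8):                      # exponentially growing dilations (assumed)
        x = residual_block(x, filters=32, kernel_size=3, dilation=d)
    x = layers.GlobalAveragePooling1D()(x)      # pooling choice is an assumption
    out = layers.Dense(n_families, activation="softmax")(x)  # the "fc" layer
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```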
Loss function and optimization
Considering that malware categorization on the Microsoft Malware Classification Challenge (BIG 2015) dataset is a multi-class problem, the categorical cross-entropy loss function is adopted in this article.
Assuming y_ij denotes the true probability of the ith sample belonging to malware family j, ŷ_ij denotes the predicted probability of the ith sample belonging to family j, N denotes the sample size, and M denotes the number of malware families, the categorical cross-entropy loss function is defined as:

L = −(1/N) Σ_{i=1}^{N} Σ_{j=1}^{M} y_ij · log ŷ_ij.   (4)

The Adam optimizer, which combines the first moment estimation and the second moment estimation of the gradient, is a common optimizer in neural networks [35]. Hence the Adam optimizer is employed in this work.
Time complexity
When a 1D convolutional structure is used for sequence modeling in natural language processing, the input sequences are always encoded into numeric vectors first. Then, assuming x ∈ R^(l×m) is an input sequence, where l denotes the length of the input sequence and m denotes the dimensionality of the embedding space, n denotes the number of convolutional filters, and h denotes the length of the 1D convolutional filter kernel (l ≫ h), the time complexity of the 1D convolutional layer is:

O(l · h · m · n).   (5)

Assuming d is the dilation factor of the dilated convolutional layer in TCN, the time complexity of this dilated convolutional layer is:

O((l − d(h − 1)) · h · m · n).   (6)

Moreover, the mathematical form of a residual connection is:

o = Activation(z + G(z)),   (7)

where o denotes the residual block output, z denotes the input of the block, and G denotes a series of transformations. It can be seen that the residual connection is linear. Assuming G in a residual block contains two dilated convolutional layers, which is the general case, the time complexity of this TCN structure can be approximately estimated as:

O(2(l − d(h − 1)) · h · m · n).   (8)

From (5) and (8), the time complexities of the TCN residual block and the 1D convolutional structure are roughly comparable. Considering that the input data are determinate after pre-processing and embedding space construction, the number of convolutional filters and the length of the filter kernels are the main variable parameters in the convolutional structure for time consumption. Moreover, since dilated convolutions are potent tricks in the TCN structure for a large receptive field, TCN residual blocks require less computing time with the growth of the dilation factor. Finally, the TCN achieves good performance with less time consumption by stacking several residual blocks.
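The growth of the receptive field with stacked residual blocks can be made explicit; assuming two dilated convolutions per block as above, each convolution with dilation d extends the field by (h − 1)·d steps:

```python
def tcn_receptive_field(kernel_size, dilations, convs_per_block=2):
    """Receptive field of a TCN: each dilated conv adds (kernel_size - 1) * d steps."""
    return 1 + convs_per_block * (kernel_size - 1) * sum(dilations)

# e.g., kernel size 3 with dilations (1, 2, 4, 8) -> 61 input time steps
print(tcn_receptive_field(3, (1, 2, 4, 8)))
```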
Experiments
To evaluate the performance of our proposed malware categorization scheme, the classical Naive Bayes Classifier for N-gram model (Ngram NBC, for short) is the baseline in our experiments [36]. In addition, to verify that the numeric vectors from pre-trained Word2Vec model are capable to represent the malware feature sequences more precisely, the current popular one-hot encoding technique combined with TCN (OneHotTCN, for short) is compared in our experiments. Then, our proposed scheme (Word2VecTCN, for short) is compared with the state-of-the-art malware categorization model in [34] (Word2VecLSTM, for short). Finally, our scheme is compared with some other recent works on the same malware dataset.
Experimental environment
Our experiments are conducted on the Kaggle competition of Microsoft Malware Classification Challenge (BIG 2015) dataset to evaluate the malware categorization performance on the IoT malware recognition task. Considering the samples in each category are different in quantity, we divide the dataset into training, validation, and test set in a stratified fashion to ensure the same relative proportion in each set. More dataset statistical information is in the previous section.
Here, our experiments are implemented by Python with some additional libraries, such as TensorFlow, Keras, and some others, while the training and evaluation processes are conducted on Tesla K80 GPU in Google Colaboratory system, which is a Google cloud service supporting artificial intelligence research [37]. In addition, early stopping and learning rate schedule are extra strategies in the training phase. The learning rate is initially 0.001, and then reduced to 10% of the original value if the validation loss stops declining for 5 epochs.
Metrics
The following basic criteria are universally defined for performance evaluation of machine learning techniques: true positive (TP), true negative (TN), false positive (FP), and false negative (FN). Here, to evaluate the performance of the malware categorization models, metrics based on the above criteria, such as true positive rate (TPR), false positive rate (FPR), positive predictive value (PPV), F-measure (F-M), and accuracy (ACC), are calculated and compared [38]. In the experiment, the metrics of each malware family are computed first. Then, considering the class imbalance problem in this dataset, further weighted results over the nine malware families are also calculated in this article. The metrics are defined as:

TPR = TP / (TP + FN),
FPR = FP / (FP + TN),
PPV = TP / (TP + FP),
F-M = 2 · PPV · TPR / (PPV + TPR),
ACC = (TP + TN) / (TP + TN + FP + FN).

Additionally, total training time, test time, and training time per epoch are other important indexes used for time consumption evaluation in this article.
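The weighted variants can be computed with scikit-learn's weighted averaging, which weights each family's metric by its support; the label vectors below are toy placeholders:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# y_true, y_pred: family indices (0-8) on the test set; toy values shown
y_true = [0, 1, 2, 2, 3, 4, 5, 6, 7, 8]
y_pred = [0, 1, 2, 1, 3, 4, 5, 6, 7, 8]

acc = accuracy_score(y_true, y_pred)
ppv, tpr, f_m, _ = precision_recall_fscore_support(y_true, y_pred,
                                                   average="weighted",
                                                   zero_division=0)
print(acc, ppv, tpr, f_m)
```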
Parameter selection
The parameters in our proposed scheme and the comparison methods are elaborated in this section. As shown in Table 1, TCN and LSTM have some similar parameters. Here, "max sequence length" is the maximum length of the opcode and API call name sequences. The sequences whose lengths are longer than this threshold are clipped to "max sequence length", and the shorter ones are padded with 0 to reach the fixed "max sequence length". The parameter "batch size" is the number of samples fed into the models in each iteration. The parameter "learning rate" is the learning rate in the optimization procedure. Malware opcode and API call names should be mapped into numeric vectors before feature representation, and "embedding size" is the dimension of the embedding space. The parameter "number of layers" gives the number of network layers; for example, two LSTMs are stacked in this article. The parameter "dropout rate" is the dropout proportion of the network nodes in the training phase. The parameter "hidden layer neuron" represents the number of neurons in the LSTM hidden layers. The parameter "number of filters" is the filter count in the convolutional layers. The parameter "number of stacks" is the number of stacked convolutional structures in the residual blocks. Considering the dilated convolutions used in TCN, "dilations" is a list of dilation factors in the dilated convolutions. The parameter "kernel size" is the filter kernel size in the convolutional layers. There is no need to tune all parameters in both networks, and "-" represents that the corresponding parameter is absent in the current network. Moreover, the parameters in the OneHot-based method are basically identical to those in Word2Vec-based TCN, except that there is no "embedding size" in OneHot-based TCN.
Results
Experimental results are presented in this section. Figures 8 and 9 show the accuracy and loss comparisons between our scheme and the OneHot-based TCN in the training phase, while Figures 10 and 11 show the accuracy and loss comparisons between our scheme and the Word2Vec-based LSTM. The confusion matrix of our scheme on the test set is illustrated in Fig. 12, and the metrics for each family are reported in Table 2.
The weighted evaluation metrics and the time consumption comparisons on this malware categorization task are presented in Tables 3 and 4, respectively. Finally, an accuracy comparison between our scheme and other works on this dataset is presented in Table 5.
The comparisons between our proposed Word2Vec-based scheme and the OneHot-based one are shown in Figs. 8 and 9. From Fig. 8, the validation accuracy of our scheme starts at 29.8% and rises to a final value of 97.9%, while that of the OneHot-based TCN starts at 11.4% and reaches 96.5%. From Fig. 9, the validation loss of our scheme starts at 5.88 and falls to 0.12, while that of the OneHot-based TCN starts at 2.34 and falls to 0.21. The two figures show that Word2Vec has stronger feature representation ability than one-hot encoding on this malware categorization dataset. Specifically, at the embedding layer, the dimension of the numeric vectors generated by one-hot encoding reaches 1121, the number of unique opcode and API call names, while the dimension of the vectors trained with Word2Vec is 300. This can substantially reduce the memory footprint on edge devices.
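The Word2Vec pre-training step could look like the following gensim sketch; only the 300-dimensional vector size comes from the text, while the toy corpus and the window/min_count/sg settings are illustrative assumptions.

```python
# Sketch of Word2Vec pre-training on opcode/API-call-name "sentences".
from gensim.models import Word2Vec

corpus = [["mov", "push", "call", "GetProcAddress"],
          ["mov", "xor", "jmp", "LoadLibraryA"]]

w2v = Word2Vec(sentences=corpus, vector_size=300, window=5,
               min_count=1, sg=1, workers=4)
print(w2v.wv["mov"].shape)  # (300,) instead of a 1121-d one-hot vector
```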
The comparisons between our proposed scheme and the state-of-the-art Word2Vec-based LSTM model are shown in Figs. 10 and 11. Because the "dropout rate" in our scheme is higher than that in the Word2Vec-based LSTM, our scheme lags slightly behind the LSTM model at the beginning of the training phase. Still, owing to its powerful feature representation ability, our scheme finally achieves higher accuracy and lower loss than the Word2Vec-based LSTM on both the training and validation sets. Furthermore, the Word2Vec-based LSTM needs to train about 672 thousand parameters while our scheme requires only approximately 582 thousand, so the Word2Vec-based TCN has better representation ability and a lower running-memory footprint in the training phase. Figure 12 visualizes the predictions of our scheme on the test set, and Table 2 reports the metrics for each malware family. Combining Fig. 12 and Table 2 reveals that the FPR of "Ramnit" is the highest among the nine families; therefore, identifying more accurately the samples mistaken for "Ramnit" is the bottleneck for further improving the Word2Vec-based TCN scheme. When applying this scheme in a practical IoT environment, samples recognized as "Ramnit" deserve extra attention. Tables 3, 4, and 5 compare our scheme with some representative methods. From Table 3, the weighted F-measure and accuracy of our scheme are approximately 1.2% and 1.3% higher than those of the Word2Vec-based LSTM, and the weighted FPR of our scheme is approximately 0.3% lower; among all these metrics, the Word2Vec-based TCN achieves the best performance. In Table 4, "Training time" is the runtime of the whole training phase and "Test time" is the runtime on the test set. Because the convolutional structure is easy to train in parallel and our scheme has fewer parameters than the LSTM, the TCN takes much less training time. In addition, our proposed scheme is compared in Table 5 with three other recent works on the same Microsoft malware dataset.
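For reference, a parameter count of the kind quoted above can be checked by assembling the model and calling count_params(); the sketch below uses the open-source keras-tcn package as one possible TCN implementation, with illustrative hyperparameters rather than the paper's exact settings.

```python
# Sketch: assembling a Word2Vec-based TCN and checking its size.
from tensorflow import keras
from tcn import TCN  # pip install keras-tcn (one possible implementation)

MAX_SEQ_LEN, VOCAB_SIZE, EMBEDDING_SIZE = 1000, 1121, 300

inputs = keras.Input(shape=(MAX_SEQ_LEN,))
x = keras.layers.Embedding(VOCAB_SIZE + 1, EMBEDDING_SIZE)(inputs)
x = TCN(nb_filters=64, kernel_size=3, nb_stacks=1,
        dilations=(1, 2, 4, 8), dropout_rate=0.3,
        return_sequences=False)(x)
outputs = keras.layers.Dense(9, activation="softmax")(x)  # nine families

model = keras.Model(inputs, outputs)
print(model.count_params())  # compare against the LSTM baseline (~672k)
```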
The comparison also verifies the good performance of our scheme.
Conclusion
In this article, a Word2Vec-based TCN scheme is proposed for malware categorization with edge computing security in mind. Opcode and API call name sequences are first extracted from malicious samples, and pre-processing is conducted for data cleaning. Subsequently, through Word2Vec pre-training on the feature sequences, numeric vectors of the input names are generated. The malware feature sequences represented by these numeric vectors are then fed into a TCN to fit an IoT malware categorization model, and the model's performance is evaluated on the test set. Comparisons with other representative works verify that our proposed scheme achieves decent performance while requiring little memory and training time. From the resource-occupancy point of view, the benefits of combining the Word2Vec model and the TCN structure are noticeable.
Considering its low resource occupancy and good computing performance, our scheme has potential applications on smart devices for security. As a universal malware categorization scheme, it suggests promising applications in multiple fields of edge computing security, such as intelligent transportation system security control, smart factory protection, and others. The application of our scheme to these edge computing fields will be considered in future work.
"Computer Science"
] |
Spatially Modulated Vacua in a Lorentz-invariant Scalar Field Theory
Spatial modulation has long been studied in condensed matter, nuclear matter, and quark matter, so far in non-relativistic field theories. In this paper, spatially modulated vacua at zero temperature and zero density are studied in relativistic field theories. We first propose an adaptation of the Nambu-Goldstone theorem to higher derivative theories under the assumption of the absence of ghosts: when a global symmetry is spontaneously broken due to vacuum expectation values of space-time derivatives of fields, a Nambu-Goldstone (NG) boson appears without a canonical kinetic (quadratic derivative) term but with a quartic derivative term in the modulated direction, while a Higgs boson appears with a canonical kinetic term. We demonstrate this in a simple model admitting a (meta)stable modulated vacuum of phase modulation (a Fulde-Ferrell state), where an NG mode associated with spontaneously broken translational and $U(1)$ symmetries appears.
Introduction
Spatially modulated ground states were theoretically proposed in superconductors a half century ago [1,2], and such states are now called Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) states.
More precisely, Fulde-Ferrell (FF) and Larkin-Ovchinnikov (LO) states denote modulations of a phase and amplitude of a condensation, respectively. The LO states were shown to be ground states in the presence of a magnetic field inducing the spin imbalance for a Cooper pair of a superconductor [3]. In the last couple of years, there have been several claims of its observation (see Ref. [4] for a review). Recently, ultracold atomic Fermi gases have renewed interest in FFLO states (see Ref. [5] for a review). The spin polarized superfluid state was observed in Ref. [6] and it was claimed that the FFLO state has been achieved in this experiment. FFLO states in a ring were also proposed in cold Fermi gases [7] and in superconductors [8].
FFLO states, called twisted kink crystals, were also studied in the chiral Gross-Neveu model in 1+1 dimensions [9,10,11] (see [12] for an application to a superconductor). Spatially modulated chiral condensations, such as FF states (called dual chiral density waves or chiral spirals) [13,14] and LO states (called real kink crystals) [15,16], have been proposed to appear in a certain region of the phase diagram of QCD in 3+1 dimensions (see Ref. [17] for a review). Although Cooper pairing usually refers to particle-particle condensates, the chiral condensation is related to particle-antiparticle (or hole) pairing. They were also proposed in diquark condensations exhibiting color superconductivity in high density QCD (see [18,19] for reviews) and were also discussed in the context of the AdS/CFT correspondence [20,21,22,23].
These spatial modulations were originally proposed in condensations of fermions forming Cooper pairs. In terms of the Ginzburg-Landau effective theory, which is a scalar field theory, these states are realized as ground states of the theory due to the presence of a wrong-sign gradient term and positive higher derivative terms. In general, these kinds of inhomogeneous states spontaneously break translational as well as rotational symmetries. Nambu-Goldstone (NG) modes associated with these broken symmetries in such backgrounds were studied in Refs. [24,25]. In all of these cases, the inhomogeneous states in condensed matter, nuclear matter, and quark matter studied so far are realized in theories where the Lorentz invariance is explicitly broken by finite density/temperature effects and the like.
In this paper, we study spatially modulated vacua at zero temperature and zero density (but not ground states in finite density and/or temperature) in manifestly Lorentz invariant field theories, with a particular attention to spontaneous symmetry breaking and NG bosons. From a viewpoint of low-energy effective theories, field theories generically receive higher derivative corrections.
We assume that there is no ghost in the theory, implying the absence of more than one derivative acting on a single field that cannot be eliminated by partial integration. For example, the term ∂²ϕ = ∂_m ∂^m ϕ, with space-time index m, generally causes the so-called Ostrogradski instability [26]. This is a crucial difference from non-relativistic cases. Then, all higher derivative terms come in a form with only a single space-time derivative acting on each field, ∂_m ϕ. Thus, the effective theory is in general a function of ∂_m ϕ (complemented by a potential term). In this set-up we study an adaptation of the NG theorem to higher derivative theories, stating that when a global symmetry is spontaneously broken due to vacuum expectation values of space-time derivatives of fields, an NG boson appears without a canonical kinetic (quadratic derivative) term but with a quartic derivative term in the modulated direction, while a Higgs boson appears with a non-zero canonical kinetic term.
After giving a general discussion of the stability of higher derivative models, we present a simple model illustrating this. Our model admits a (meta)stable modulated vacuum of phase modulation (a Fulde-Ferrell state), in which an NG mode associated with the spontaneously broken translational and U(1) symmetries appears.
2 Adaptation of the Nambu-Goldstone theorem to higher derivative theories In this section, we apply the NG theorem to the case where global symmetries of a Lagrangian are spontaneously broken due to vacuum expectation values (VEVs) of space-time derivatives of fields. We consider the case with no Ostrogradski instability [26], assuming that no more than one space-time derivative acts on a field. We show that an analogue of the NG boson appears without a canonical kinetic term but with a quartic derivative term. In addition, we will show that a Higgs boson, defined as the mode orthogonal to the above-mentioned NG mode, appears with a non-zero canonical kinetic term in the vacuum.
In the following, we consider d-dimensional relativistic field theories where the Lorentz invariant Lagrangian L is given by a functional of ∂_m ϕ_a. Here m = 0, 1, ..., d − 1 is the space-time index and ϕ_a (a = 1, ..., N) are complex scalar fields. The energy functional E of the theories depends only on the first space-time derivatives of the fields, which we denote Φ_I = ∂_m ϕ_a, Φ†_I = ∂_m ϕ̄_a. The index I = 1, ..., dN labels the fields and the directions of the space-time derivatives.
Vacua |0⟩ of the theories are defined in such a way that they provide extrema of the energy E with respect to Φ_I, Φ†_I (condition (1)). In these extrema, we assume that the fields Φ_I, Φ†_I develop VEVs, ⟨0|Φ_I|0⟩ = v_I (2). Some of these VEVs are non-zero, and they need not be constants in general; indeed, as we will see later, they are spatially varying functions for modulated vacua. Since the Φ_I are given by space-time derivatives of the fields (and are therefore Lorentz vectors), the non-zero VEVs (2) generically break the translational and rotational symmetries. Hereafter, we assume that the VEVs are spacelike vectors 1.
Now we introduce the dynamical fields Φ̃_I as fluctuations around a vacuum determined by the condition (1). We shift the fields around the VEVs, Φ_I → v_I + Φ̃_I, and expand the energy as in (3), where we have defined the Hermitian matrix M of second derivatives of E (4). Here the symbol |_v stands for values evaluated in the vacuum. We note that the matrix M, determined by the second derivatives of E, is just the curvature of the energy density and is in general a function of x^i (i = 1, 2, 3). In order that the extrema defined by (1) become local minima of the energy, M should be a positive semi-definite matrix in all regions of x.
These vacuum conditions do not guarantee global minima; meta-stable local minima are allowed in general. From the expression (3), one observes that the eigenvalues of M correspond to the coefficients of the quadratic kinetic terms of the dynamical fields φ̃_a, φ̃†_a.
Since M is positive semi-definite, there are no fluctuation modes whose kinetic terms have the wrong sign (a negative sign in the energy functional). However, we stress that a general M can have zero eigenvalues; when it does, the quadratic terms of the corresponding modes vanish.
In order to see the meaning of the zero eigenvalues of M, we elucidate the relation between the matrix M and spontaneous symmetry breaking. The fields Φ_I, Φ†_I transform according to the symmetries of the theory (5), where the Q_A are generators of the symmetry groups and the Hermitian matrices T_A form an irreducible representation of the Q_A. In a vacuum |0⟩, some of the fields Φ_I develop non-zero VEVs, and we define the corresponding vector v (7). For generators satisfying T_Â v = 0 the corresponding symmetry is preserved in the vacuum, while for T_A′ v ≠ 0 the symmetry is spontaneously broken. The energy functional E is invariant under the symmetry transformation (8), where the ε^A are infinitesimal real parameters. Differentiating the resulting invariance relation (9) with respect to Φ_I, Φ†_I and evaluating the result in a vacuum, we find that the T_A′ v are eigenvectors associated with the zero eigenvalues of M (10). The relation (10) indicates that when some of the fields Φ_I, Φ†_I develop VEVs that spontaneously break symmetries, the canonical quadratic kinetic terms vanish for the modes corresponding to the zero eigenvalues of M. We call these Nambu-Goldstone (NG) modes. On the other hand, the modes orthogonal to the NG modes appear with quadratic kinetic terms in the energy functional; we call these Higgs modes. We note that since the vector T_A′ v generically depends on x^i, there is a possibility that T_A′ v(x) vanishes at some specific points x^i = x^i_0 in a general setup. At these points, the broken symmetries are recovered locally and one expects that a non-zero quadratic term associated with the NG mode is recovered. We do not exclude this possibility, but it is not always the case.
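The elided relations of this argument can be summarized as follows; this is a reconstruction from the surrounding definitions, keeping the equation numbers that the text itself references.

```latex
% Symmetry action on the fields and the VEV vector:
\delta \Phi_I = i\,\varepsilon^A \,(T_A)_I{}^{J}\, \Phi_J , \qquad
\langle 0 | \Phi_I | 0 \rangle = v_I .
% Invariance of the energy under this transformation, differentiated with
% respect to \Phi^\dagger and evaluated in the vacuum, yields relation (10):
M\, T_{A'}\, v = 0 ,
% so T_{A'} v (for broken generators T_{A'}) are null eigenvectors of M.
```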
Indeed, as we will show in an explicit example of a spatially dependent VEV in the next section, the vector T_A′ v never vanishes at special points, and the theorem discussed in this section works in all regions of space-time.
A model for spatially modulated vacua
In order to illustrate the discussion of the previous section concretely, we introduce a Lorentz invariant scalar field model in which, in addition to the canonical quadratic kinetic term, higher derivative corrections are involved. We begin with the observation that the global stability of the modulation is guaranteed when the highest power of the derivative term |∂_m ϕ|² is odd and an appropriate sign of that term is chosen. We propose the simplest scalar field model in which a spatially modulated state is allowed as a (meta-)stable vacuum. We then apply the Nambu-Goldstone theorem discussed in the previous section to the model and show that there are modes whose quadratic kinetic terms vanish (NG modes). We demonstrate that there are always associated modes with non-zero quadratic kinetic terms (Higgs modes).
Global stability of modulation
Let us consider a complex scalar field ϕ. The general Lorentz invariant Lagrangian containing finite powers of |∂ϕ|² = ∂_m ϕ ∂^m ϕ̄ is given in (11), where n ∈ Z is the highest power of the derivative terms and the ellipsis denotes lower orders. The space-time index m is contracted by η_mn = diag(−1, 1, 1, 1). The dot in ϕ̇ stands for the derivative of the field with respect to x⁰, and ∇ denotes spatial derivatives. From the canonical conjugate momentum (12), the Hamiltonian associated with the Lagrangian (11) is given in (13). Let us discuss the stability of a vacuum in the model. First, looking at the second term in (13), we see that the energy is bounded from below only when one chooses the upper sign in (11); otherwise, the spatial gradient of the field causes an instability as |∇ϕ|² → ∞. Second, the first term in (13) implies that the energy is bounded from below only when the highest order n is odd; for even n, an instability in the temporal direction grows as |ϕ̇|² → ∞. Therefore, the simplest Lagrangian is of third order in |∂ϕ|², containing six derivatives. In the next subsection, we consider an example of a third-order Lagrangian allowing a modulated vacuum.
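A sketch of the boundedness argument, reconstructed from the statements above: for the highest term $c_n X^n$ with $X \equiv \partial_m\phi\,\partial^m\bar\phi = -|\dot\phi|^2 + |\nabla\phi|^2$, the Hamiltonian density follows from the Legendre transform.

```latex
\mathcal{H}_n \;=\; \dot\phi\,\pi + \dot{\bar\phi}\,\bar\pi - \mathcal{L}_n
             \;=\; -2n\,c_n\, X^{\,n-1}|\dot\phi|^2 \;-\; c_n\, X^{\,n} .
% Purely spatial gradients (X = |\nabla\phi|^2):
%   \mathcal{H}_n = -c_n |\nabla\phi|^{2n},  bounded below only for c_n < 0
%   (the "upper sign" in (11)).
% Purely temporal gradients (X = -|\dot\phi|^2):
%   \mathcal{H}_n = (-1)^n (2n-1)\, c_n\, |\dot\phi|^{2n},
%   which, with c_n < 0, is bounded below only for odd n.
```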
A model and vacua
We propose a four-dimensional Lorentz invariant complex scalar field model whose Lagrangian is given in (14), where k > 0, λ > 0, α > 0 are real constants 2. The Lagrangian (14) contains the ordinary kinetic term of the complex scalar field ϕ and higher derivative corrections. It is invariant under a global U(1) transformation ϕ → e^{iθ}ϕ with constant θ, in addition to the Poincaré symmetry including the SO(3,1) Lorentz and translational symmetries. The Lagrangian could also contain a potential term for ϕ; in this paper, for simplicity, we do not consider one. In this case, the Lagrangian possesses a shift symmetry ϕ → ϕ + c (15), where c is a constant.
We now employ the ansatz ϕ = ϕ(x¹) for one-dimensional spatial modulation along the x¹-direction, ⟨0|∂₁ϕ|0⟩ ≠ 0, and we assume static configurations. The energy functional (16) then becomes a function E(x) of x ≡ |∂₁ϕ|², which is interpreted as a potential: it has a local minimum at x = 0, in which the vacuum energy is E(0) = 0 and the scalar field ϕ has a constant VEV. Whether E(x) has another minimum crucially depends on the parameters k, λ, α. Since E′(x) = 3αx² − 2λx + k, if the condition λ² − 3αk > 0 is satisfied, E(x) has another vacuum. In this case, the function E(x) has extrema at x_± = (λ ± √(λ² − 3αk))/(3α). Since k > 0, x = x₋ corresponds to a local maximum while x = x₊ ≠ 0 is a minimum, which is a candidate for a modulated vacuum. Note that λ should be positive in order that x₊ > 0.
The condition α > 0 is necessary in order that the potential be bounded from below. The vacuum energy is classified according to the discriminant of the quadratic factor in E(x) = x(αx² − λx + k), and we have three distinct types of vacua. When the parameters k, λ, α satisfy λ² − 4αk < 0, the function E(x) is positive definite and the local vacuum at x = x₊ has positive energy; this is a meta-stable vacuum which decays to the global minimum (true vacuum) x = 0 within a finite time. See Fig. 1(a) for the potential profile.
When the parameters satisfy λ² − 4αk = 0, the two vacua x = 0 and x = x₊ are degenerate, while for λ² − 4αk > 0 the vacuum at x = x₊ is energetically favoured and becomes the true vacuum. In each such vacuum we have |∂₁ϕ|² = x₊. The general solution satisfying this relation is ϕ(x¹) = c + √x₊ ∫^{x¹} e^{iF(s)} ds, where c is a constant and F(s) is a real function. We are interested in a spatially modulated vacuum state; the most conservative choice is a linear function F(s) = ps, where p is a constant.
As we will see below, this vacuum preserves the highest symmetry in the theory. The vacuum is then given by ϕ = ϕ₀ e^{ipx¹} (20), where the constants p, ϕ₀ satisfy p²|ϕ₀|² = x₊. This is the ground state in which spatial modulation along the x¹-direction occurs, with modulation period 2π/p. It is straightforward to confirm that the modulated vacuum (20) satisfies the equation of motion.
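Collecting the reconstructed formulas, the vacuum structure can be summarized as follows (with $x \equiv |\partial_1\phi|^2$); the discriminant step makes explicit why $\lambda^2 - 4\alpha k$ classifies the vacua.

```latex
E(x) = kx - \lambda x^2 + \alpha x^3 = x\,(\alpha x^2 - \lambda x + k),
\qquad
x_\pm = \frac{\lambda \pm \sqrt{\lambda^2 - 3\alpha k}}{3\alpha}.
% Since x_+ > 0, the sign of E(x_+) is that of \alpha x_+^2 - \lambda x_+ + k,
% a quadratic in x_+ with discriminant \lambda^2 - 4\alpha k.  Hence:
%   \lambda^2 - 4\alpha k < 0 :  E(x_+) > 0, metastable modulated vacuum;
%   \lambda^2 - 4\alpha k = 0 :  E(x_+) = 0, degenerate with x = 0;
%   \lambda^2 - 4\alpha k > 0 :  E(x_+) < 0, the modulated vacuum is the true one.
% The FF choice F(s) = ps then gives the modulated vacuum (20):
\phi = \phi_0\, e^{ipx^1}, \qquad p^2 |\phi_0|^2 = x_+ .
```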
We have a spatially modulated vacuum (20) along the x¹-direction, in which ⟨0|∂₁ϕ|0⟩ = v = ipϕ₀e^{ipx¹} ≠ 0. It is obvious that the translational symmetry along the x¹-direction and the rotational symmetries in the (x¹, x²) and (x¹, x³) planes are spontaneously broken in the modulated vacuum. The four-dimensional Poincaré symmetry is thus broken down to that in three dimensions.
In fact, due to the U(1) symmetry ϕ → e^{iθ}ϕ, the simultaneous operation of the translation x¹ → x¹ + a and the U(1) transformation ϕ → e^{−ipa}ϕ is preserved in the modulated vacuum (20), where a is a constant. Meanwhile, the combination of the translation and the inverse U(1) rotation, x¹ → x¹ + a, ϕ → e^{+ipa}ϕ, is broken. Therefore the global symmetry including the translation along the x¹-direction is broken as P¹ × U(1) → [P¹ × U(1)]_sim, where P¹ represents the translation along the x¹-direction and "sim" means the simultaneous operation. As remarked above, this symmetry breaking pattern is a consequence of the simplest choice of the modulated vacuum (20); no other choice of F(s) results in this [P¹ × U(1)]_sim symmetry. Note that the translation P¹ and the rotations in the (x¹, x²) and (x¹, x³) planes are not independent of each other [30]. Therefore we expect one NG mode associated with the spontaneous symmetry breaking P¹ × U(1) → [P¹ × U(1)]_sim 3. We will clarify this issue in the following subsections.
Linear analysis: Nambu-Goldstone and Higgs bosons
In this subsection, we present the stability analysis of our model; the analysis employed here is completely general at the linear level for any model admitting a local vacuum exhibiting spatial modulation. We shift the field from the modulated vacuum (20) and introduce the fluctuation φ̃ as a dynamical field (24), where ϕ = ϕ₀e^{ipx¹} is the modulating VEV. In the following, we show that there are no fluctuation modes that cause instabilities of the vacuum (20). The quadratic terms of the dynamical scalar field φ̃ are extracted from the energy (25). Here the index m̃ = 0, 2, 3 is contracted by δ_m̃ñ, and the fluctuation vector and the matrix M are defined in (26), with block diagonal sectors M₁ and M₂. We have separated the quadratic terms into the SO(2,1) Lorentz invariant sector (transverse directions) and the direction of the modulation. Since M is a Hermitian matrix, it is diagonalized by a unitary matrix built from 2 × 2 unitary blocks U₁, U₂. The eigenvalues s₁, s₂ of M₁ satisfy s₁ > 0 and s₂ = 0 by the definition of x₊, and the eigenvalues t₁, t₂ of M₂ likewise satisfy t₁ > 0 and t₂ = 0. There are positive and zero eigenvalues, as anticipated. This implies that our assumption ϕ̇ = ∂₂ϕ = ∂₃ϕ = 0 guarantees the minimization condition of the energy. We are thus faced with the fact that there are modes whose quadratic kinetic terms disappear in the transverse (s₂ = 0) and modulation (t₂ = 0) sectors. In order to understand the nature of the zero eigenvalues of M, we analyze the broken generators of the symmetry in the vacuum.
The vacuum vector v is non-zero only in the modulation sector; namely, ∂_m̃ ϕ = 0. The action of the translation P¹ and U(1) transformation generators T_P¹, T_U(1) on the VEVs determines the symmetry breaking pattern. The generator associated with the unbroken symmetry is T_ub = T_P¹ − T_U(1); indeed, its action on v vanishes, T_ub v = 0. On the other hand, the generator associated with the broken symmetry is T_b = T_P¹ + T_U(1), and T_b v is exactly the eigenvector for the zero eigenvalue t₂ = 0 in the modulation sector. We note that the other zero eigenvalue, s₂ = 0, corresponds to the flat direction in the SO(2,1) invariant sector, which is not accompanied by spontaneous symmetry breaking.
By using the unitary matrix U₂ in (32), the matrix M₂ is diagonalized, and we derive the Higgs and NG modes associated with t₁ > 0 and t₂ = 0 in (36); these are the linear combinations of the fields that contain the derivative along the x¹-direction.
It is natural to define the modes (36) as derivatives of the Higgs and NG fields, ∂₁H(x) and its NG counterpart. Here and hereafter, the index m̃ is contracted by η_m̃ñ = diag(−1, 1, 1), and the transverse modes are defined accordingly: φ̃_m̃,c is the mode associated with s₁ > 0 and therefore has a canonical quadratic kinetic term, while φ̃_m̃,0 is the mode for s₂ = 0, whose quadratic kinetic term vanishes.
Note that they are distinguished from the Higgs and the NG modes in the modulation direction.
Again we define fields ∂_m̃A and ∂_m̃B corresponding to the modes (38); the linear transformations (38) are then interpreted as a field redefinition 4. The NG and Higgs modes in the modulation direction are represented in terms of these fields, and we obtain the Lagrangian for the dynamical fields A, B in the modulated vacuum at quadratic order. One observes that the field A does not propagate in the modulation direction while B does not propagate in the transverse directions; only the gradient of B in the modulation direction contributes to the energy. This reflects the fact that the term ∂₁A is included in the NG mode φ̃_NG and never appears in the Lagrangian at quadratic order. A similar analysis was done in [33] for the dispersion relations of NG and Higgs modes in a plane-wave type ground state in a Lorentz non-invariant theory.
We did not consider a potential term for ϕ, and consequently the system has the shift symmetry in Eq. (15). What we have identified as a "Higgs boson" here is actually an NG boson associated with the spontaneous breaking of the shift symmetry. If we add a potential term to the original Lagrangian, the Higgs boson acquires a mass. Therefore, the gapless property originates from the shift symmetry; what we have found here is the existence of the quadratic kinetic term of the Higgs boson, in contrast to its absence for the NG boson.
Higher order terms
Here, we study the higher-order expansion and show that the cubic order of the expansion of the Lagrangian contains no term consisting only of the NG boson, φ̃³_NG, while at quartic order there exists a term consisting only of the NG boson, φ̃⁴_NG. In general, we cannot exclude the possibility of φ̃³_NG a priori, since cubic derivative terms (∂₁φ̃)³ exist after the translational symmetry along the x¹-direction and the rotational symmetries in the (x¹, x²) and (x¹, x³) planes are broken. To see this, we calculate the cubic derivative terms of the fluctuation ∂φ̃ in the Lagrangian (14), after introducing the fluctuation ϕ → ϕ + φ̃ as in (24). The explicit calculation leads to the cubic derivative terms L_cub. By using Eq. (36), we find that the pure cubic NG term φ̃³_NG vanishes, the ellipsis denoting terms containing ∂_m̃φ̃. Now we see that the NG mode appears with a quartic derivative term. The quartic derivative terms L_quart in the Lagrangian (14) are obtained similarly, and the quartic derivative term containing purely the NG mode φ̃_NG is found by using (36). Therefore, we conclude that a term consisting only of the NG mode appears at quartic derivative order.
Conclusion and discussions
In this paper, we have studied spatially modulated vacua in a Lorentz invariant field theory in which no finite density/temperature effects are included. The NG theorem, for a global symmetry spontaneously broken due to vacuum expectation values of space-time derivatives of fields, states that there appears an NG boson without a canonical quadratic kinetic term but with a quartic derivative term in the modulated direction, together with a Higgs boson. We demonstrated this in a simple model whose energy functional can be written in terms of the derivative terms of the scalar fields. The potential for the derivative terms allows a local vacuum as the modulated vacuum, where the translational symmetry along one direction (which we choose as x¹) and the rotational symmetries involving that direction are spontaneously broken, as in (26). We have explicitly shown that the "mass eigenstates" in the modulation and transverse directions are different; therefore we are not able to perform the diagonalization in these directions simultaneously. We have employed the linear combinations of fields (39) and represented the NG and Higgs modes in terms of A and B. The A-mode propagates in the transverse directions while the B-mode only oscillates in the modulation direction. Finally, we have demonstrated that a term containing only NG modes appears at quartic derivative order. The Higgs mode, defined as the mode orthogonal to the NG mode in our discussion, is indeed an NG mode associated with the spontaneously broken shift symmetry; we note that it has a non-zero quadratic kinetic term. This is a consequence of the application of the NG theorem to higher derivative field theories.
Although we have illustrated the stability of the modulated vacuum in our simplest model, we would like to emphasize that the stability analysis employed in this paper is general at the linear level for any model admitting a local vacuum exhibiting spatial modulation.
Among the general solutions in Eq. (19), which are energetically degenerate, we have focused on the FF state, which has the highest unbroken symmetry. Which vacuum is chosen among energetically degenerate vacua with different unbroken symmetries is known as a vacuum alignment problem, first discussed in the context of technicolor models [34,35]. In such cases, quantum corrections pick the vacuum with the highest unbroken symmetry, and we therefore expect the same to happen in our case. We note that the structure of the vacuum modulation crucially depends on the model under consideration; for example, inhomogeneous chiral condensates in dense QCD appear to be of FFLO type rather than FF type.
We have studied the modulated vacuum in a Ginzburg-Landau type effective theory in the Lorentz invariant framework, without assuming any underlying microscopic theory. However, there is an argument for a no-go theorem for modulated vacua formed by fermion condensates in relativistic QCD-like theories [36]. It is an interesting open question whether our model can be obtained as the low-energy theory of a fermion condensation in relativistic theories. Several future directions are in order. Beyond the semiclassical level of this paper, a more rigorous proof of the generalized NG theorem at the full quantum level is needed. The Higgs mechanism in a U(1)-gauged model, spatial modulations along two or more directions [37], and a temporal modulation [38] are interesting directions. Applying our discussion to more general higher derivative theories, such as the higher-order Skyrme model [39], is another future direction. We would also like to embed our model into supersymmetric theories based on the formalism of Ref. [40]; supersymmetry breaking in modulated vacua will be reported elsewhere [41].
"Physics"
] |
Comparative Analysis of Mathematical Models for Blood Flow in Tapered Constricted Arteries
Introduction
Atherosclerosis is an arterial disease in humans which leads to malfunctioning of the cardiovascular system [1]. The intimal thickening of an artery is the initial stage in the progression of atherosclerosis [2-4]. The lumen of the arteries is narrowed by the development of atherosclerotic plaques that protrude into the lumen, resulting in stenosed arteries. The wall of the artery is stiffened by the growth of plaque with a lipid core and a fibromuscular cap, and the lumen of the artery is narrowed by deposits of fats, lipids, cholesterol, and so forth [5]. Stenoses of different shapes form in the arterial lumen; some of these shapes are axisymmetric, asymmetric, overlapping, and multiple [1, 6-8]. When a stenosis develops in an artery, its serious consequences are increased flow resistance and the associated reduction of blood flow downstream [9, 10]. Thus, the development of a stenosis in the lumen of an artery leads to serious circulatory disorders. Chakravarty et al. [11] pointed out that blood vessels bifurcate at frequent intervals, and although the individual segments of arteries may be treated as uniform between bifurcations, the diameter of the artery reduces considerably at each bifurcation. How and Black [12] noted that the study of blood flow in tapered arteries is useful in the design of prosthetic blood vessels, as grafts with a tapered lumen have a surgical advantage. Hence, it is important to analyze mathematically the blood flow in tapered arteries with stenosis. In many situations of routine life, such as traveling in vehicles, aircraft, or ships, swinging in a cradle, undergoing vibration therapy as a treatment for some disease, or making sudden body movements in sports activities, the body is exposed to accelerations or vibrations [8, 13-15]. In some situations, such as traveling in a bus or train, the whole body is subjected to vibrations, while on other occasions, such as operating a jackhammer or lathe, driving a car, or receiving vibration therapy as a medical treatment, a specific part of the body is forced to vibrate [16, 17]. Prolonged exposure of the body to high levels of unintended external body acceleration causes serious health hazards due to abnormal functioning of the cardiovascular system [18], and this leads to serious cardiovascular diseases with symptoms such as headache, abdominal pain, increased pulse rate, venous pooling of blood in the extremities, loss of vision, and hemorrhage in the face, neck, eye sockets, lungs, and brain [16, 18-20]. Thus, it is useful to investigate the effect of periodic body accelerations on the physiologically important flow measurements of blood flow in arteries of different diameters.
Blood exhibits anomalous viscous properties. When it flows in larger diameter arteries at high shear rates, blood behaves like a Newtonian fluid, but when it flows through narrow diameter arteries at low shear rates, it shows notable non-Newtonian behavior [21]. Several researchers have investigated blood flow properties in constricted narrow arteries in the absence and presence of externally imposed periodic body accelerations [22-27]. Several researchers [11, 28, 29] mentioned that when blood flows in smaller diameter blood vessels at low shear rates, there is an erythrocyte-free plasma layer adjacent to the vessel wall and a core layer of suspension of all erythrocytes, so it is not realistic to model blood simply as a single-fluid non-Newtonian model. Hence, it is appropriate to model blood as a two-fluid model when it flows through narrow diameter arteries at low shear rates (diameter up to 1300 μm) [30], treating the suspension of all the erythrocytes in the core region as a non-Newtonian fluid and the cell-free plasma in the peripheral layer region as a Newtonian fluid. The Herschel-Bulkley (H-B) fluid model and the Casson fluid model are non-Newtonian fluid models with yield stress that are commonly used to represent the suspension of all the erythrocytes in the core region of blood flow in narrow arteries [21, 28]. Some advantages of using the H-B fluid rather than the Casson fluid to model the suspension of erythrocytes in the core region of the two-fluid model of blood in narrow arteries are as follows. Iida [31] reports that "the velocity profiles of blood when it flows in the arterioles having diameter less than 0.1 mm are generally explained fairly by both Casson and H-B fluid models. However, the velocity profiles of blood flow in the arterioles whose diameters are less than 0.065 mm do not conform to the Casson fluid, but can still be explained by H-B fluid." Tu and Deville [22] reported that blood obeys the Casson fluid's constitutive equation only at moderate shear rates, whereas the H-B fluid's constitutive equation can be used even at low shear rates and represents fairly closely what occurs in blood. Chaturani and Palanisamy [6] propounded that when blood flows in arteries of diameter 0.095 mm, it behaves like an H-B fluid rather than other non-Newtonian fluids. Moreover, the Casson fluid's constitutive equation has only one parameter, namely the yield stress, whereas the H-B fluid's constitutive equation has one more parameter, namely the power law index n; thus one can obtain more detailed information about the blood flow characteristics by using the H-B fluid model rather than the Casson fluid model [32]. Hence, it is appropriate to represent the suspension of all the erythrocytes in the core region of the two-fluid model of blood flowing in narrow diameter arteries at low shear rates by an H-B fluid rather than a Casson fluid. Sankar [33] and Sankar and Lee [34] studied the two-fluid H-B model and the two-fluid Casson model, respectively, for blood flow in a narrow artery with mild axisymmetric stenosis under body accelerations. The pulsatile flow of the two-fluid H-B model and the two-fluid Casson model for blood flow through narrow tapered arteries with mild overlapping stenosis under periodic body acceleration has not been studied so far, to the knowledge of the authors.
Hence, in this study, a comparative study is performed for the pulsatile flow of the two-fluid H-B and Casson models for blood flow in narrow tapered arteries with mild overlapping stenoses in the presence of periodic body acceleration. For the two-fluid H-B model, the expressions obtained in Sankar [33] for shear stress, velocity distribution, wall shear stress, and flow rate are used to compute the data for the present comparative study. The corresponding flow quantities obtained by Sankar and Lee [34] for the two-fluid Casson model are also used to compute the data for this comparative study. The layout of the paper is as follows.
Section 2 mathematically formulates the two-fluid H-B and Casson models for blood flow and applies the perturbation method of solution. In Section 3, the results of the two-fluid H-B model and the two-fluid Casson model for blood flow in narrow tapered arteries with mild overlapping stenosis are compared, and some possible clinical applications of the present study are given. The main results are summarized in the concluding Section 4.
Mathematical Formulation
Consider an axially symmetric, laminar, pulsatile, and fully developed flow of blood (assumed incompressible) in the axial z direction through a narrow tapered artery with mild overlapping stenosis. The geometry of the segment of a narrow artery with mild overlapping stenosis is shown in Figure 1(a); for different angles of tapering, the geometry of the stenosed artery is depicted in Figure 1(b), and the geometry of the stenosed tapered artery at a cross-section in a time cycle is sketched in Figure 1(c). The segment of the artery under study is considered long enough that the entrance, end, and special wall effects can be neglected. Since the stenosis develops in the lumen of the arterial segment, it is appropriate to treat the stenosed arterial segment under study as rigid-walled. We assume that there is periodic body acceleration in the region of blood flow. Blood is modeled as a two-fluid model, treating the suspension of all the erythrocytes in the core region as a non-Newtonian fluid with yield stress and the plasma in the peripheral layer region as a Newtonian fluid. The non-Newtonian fluid in the core region is represented by (i) the Herschel-Bulkley (H-B) fluid model and (ii) the Casson fluid model. The cylindrical polar coordinate system (r, ψ, z) is used to analyze the blood flow.
Figure 1: Pictorial description of the segment of the artery with overlapping stenosis; panel (c) shows the changes in the shape of the arterial geometry in a time cycle at z = 2.3 and ψ = −0.1.
Governing Equations and Boundary Conditions
The geometry of the artery as shown in Figure 1 is mathematically defined by equations (2.1)-(2.2) [29, 35].
Here R(z, t) and R₁(z, t) are the radii of the tapered stenosed arterial segment in the peripheral layer region and core region, respectively; r₀ is the radius of the artery in the normal region; ψ and m = tan ψ are the angle of tapering and the slope of the tapered vessel, respectively; d is the location of the stenosis; 3L₀/2 is the length of the stenosis; δ_P cos ψ and δ_C cos ψ are the critical heights of the overlapping stenosis in the peripheral layer region and core region, respectively, with δ_C = αδ_P; a₁(t) is the time-variant parameter; b is a constant; and ω = 2πf_P is the angular frequency, with f_P the pulse frequency. The length of the arterial segment is taken to be a finite length L. It has been reported that the radial velocity is negligibly small and can be neglected for a low Reynolds number flow in a narrow artery with mild stenosis. The momentum equations governing the blood flow in the axial and radial directions then simplify as in [33], where the shear stress τ = |τ_rz| = −τ_rz (since τ = τ_H or τ = τ_N). The constitutive equations of the fluids in motion, the H-B fluid in the core region and the Newtonian fluid in the peripheral region, imply a plug flow wherever the shear stress is less than the yield stress, τ_H ≤ τ_y, and normal flow otherwise. The boundary conditions are given in (2.8).
Since the blood flow in arteries is driven by the applied pressure gradient due to the pumping action of the heart and is highly pulsatile, it is appropriate to assume the pressure gradient to be the periodic function of z and t given by −∂p/∂z = A₀ + A₁ cos(ω_p t) [16, 20], where A₀ is the steady component of the pressure gradient, A₁ is the amplitude of its pulsatile component, and ω_p = 2πf_p with f_p the pulse frequency in Hz. Both A₀ and A₁ are functions of z [16]. The periodic body acceleration in the axial direction is given by F(t) = a₀ cos(ω_b t + φ), where a₀ is the amplitude, ω_b = 2πf_b with f_b the body-acceleration frequency in Hz (assumed small enough that wave effects can be neglected [20]), and φ is the lead angle of F(t) with respect to the heart action.
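For a concrete feel of these forcing terms, the sketch below evaluates the reconstructed periodic pressure gradient and body acceleration over a time window; all numerical values are illustrative, not the paper's data.

```python
# Sketch of the periodic forcing terms as reconstructed above:
#   -dp/dz = A0 + A1*cos(wp*t)   and   F(t) = a0*cos(wb*t + phi).
import numpy as np

A0, A1 = 100.0, 40.0      # steady / pulsatile pressure-gradient components
a0, phi = 10.0, 0.2       # body-acceleration amplitude and lead angle
wp, wb = 2 * np.pi * 1.2, 2 * np.pi * 1.2   # angular frequencies (f = 1.2 Hz)

t = np.linspace(0.0, 2.0, 501)                 # two seconds of the cycle
pressure_gradient = A0 + A1 * np.cos(wp * t)   # -dp/dz
F = a0 * np.cos(wb * t + phi)                  # periodic body acceleration
```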
Nondimensionalization
Let us introduce the nondimensional variables given in (2.11).
Here μ₀ has the same dimension as the Newtonian fluid's viscosity, α_H is the pulsatile Reynolds number (generalized Womersley frequency parameter), and when n = 1 we recover the Womersley frequency parameter α_N of the Newtonian fluid. Applying (2.11) to (2.1)-(2.2), one gets the nondimensional form of the equations for the geometry of the tapered stenosed arterial segment, (2.12)-(2.13).
Using the above nondimensional variables in (2.3) and (2.5)-(2.7), we obtain equations (2.14)-(2.18). The boundary conditions in dimensionless form are given in (2.19).
The volumetric flow rate in nondimensional form is given by Q̄ = Q/(πR₀⁴A₀/8μ₀), where Q is the dimensional volume flow rate.
Perturbation Method of Solution
As (2.14)-(2.18) form a system of nonlinear partial differential equations, it is not possible to obtain an exact solution, so the perturbation method is applied to solve this system with the boundary conditions (2.19). Since the present study deals with slow (low Reynolds number) blood flow, where the effects of the pulsatile Reynolds numbers α_H and α_N are negligibly small, and since these parameters occur naturally in the nondimensional form of the momentum equation, it is appropriate to expand the unknowns u_H, u_N, τ_H, and τ_N in (2.14)-(2.18) in perturbation series in α²_H and α²_N. The plug core velocity u_P and the velocity in the core region u_H are expanded in a perturbation series in powers of α²_H (with α²_H ≪ 1), as in (2.21).
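The expansion (2.21) described above has the standard form (reconstructed here; analogous expansions hold for τ_P, τ_H, R_P, u_N, and τ_N):

```latex
u_P = u_{0P} + \alpha_H^2\, u_{1P} + O(\alpha_H^4), \qquad
u_H = u_{0H} + \alpha_H^2\, u_{1H} + O(\alpha_H^4),
\qquad \alpha_H^2 \ll 1 .
```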
Similarly, we can expand τ_P, τ_H, and R_P in powers of α²_H, and u_N and τ_N in powers of α²_N. Applying the perturbation series expansions of u_H and τ_H in (2.14) and equating the constant terms and the α²_H terms, we obtain (2.22).
Approximating (2.16) using a binomial series, then applying the perturbation series expansions of u_H and τ_H in (2.16) and equating the constant terms and the α²_H terms, one gets (2.23).
Substituting the perturbation series expansions of u_N and τ_N in (2.15) and equating the constant terms and the α²_N terms, one obtains (2.24).
Applying the perturbation series expansions of u_N and τ_N in (2.18) and equating the constant terms and the α²_N terms, we get (2.25).
Using the perturbation series expansions of u_H, τ_H, u_N, and τ_N in (2.19) and equating the constant terms and the α²_H and α²_N terms, the boundary conditions decompose to (2.26); in particular, τ₀P and τ₁P are finite at r = 0.
On solving the system of differential equations (2.22)-(2.25) with the help of the boundary conditions (2.26), one gets the expressions for the zeroth- and first-order unknowns, including τ₀N and τ₁N (details of obtaining these expressions are given in Sankar [33]). Here g(t) = 1 + e cos t + B cos(ωt + φ), D = (1/g(t)) dg(t)/dt, and q² = θ/g(t). The expression for the wall shear stress τ_w is obtained as (2.28) (see [33] for details).
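Numerically, the time factors entering these solutions are straightforward to evaluate; the sketch below uses the reconstructed g(t) = 1 + e cos t + B cos(ωt + φ), with parameter values taken from the ranges quoted later in the results and therefore illustrative.

```python
# Sketch: evaluating the time factors of the perturbation solutions.
import numpy as np

e, B, w, phi, theta = 0.5, 1.0, 1.0, 0.2, 0.1

def g(t):
    return 1.0 + e * np.cos(t) + B * np.cos(w * t + phi)

def D(t):
    dg = -e * np.sin(t) - B * w * np.sin(w * t + phi)  # dg/dt
    return dg / g(t)

t = np.linspace(0.0, 2.0 * np.pi, 361)
q2 = theta / g(t)   # q^2 = theta/g(t), as in the text
```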
The expression for the volume flow rate is obtained as (2.29) (see [33] for details).
The expression for the plug core radius is obtained similarly (details of obtaining this expression are given in [33]).
An expression for the resistance to flow in the artery is obtained in the same way. When R₁ = R, the present model reduces to the single-fluid H-B model, and in this case the expressions obtained in the present model for velocity, shear stress, wall shear stress, flow rate, and plug core radius are in good agreement with those of Sankar and Ismail [14].
Governing Equations and Boundary Conditions
Equations (2.1)-(2.2), which mathematically define the geometry of the tapered artery with overlapping stenosis, are assumed in this subsection. The momentum equations governing the flow in the core region and peripheral layer region simplify as in [34], where the shear stress τ = |τ_rz| = −τ_rz (since τ = τ_C or τ = τ_N); τ_C and τ_N are the shear stresses of the fluid in the core region (Casson fluid) and peripheral layer region (Newtonian fluid), respectively; u_C and u_N are the axial velocities of the fluid in the core region and peripheral layer region, respectively; ρ_C and ρ_N are the densities of the Casson fluid and Newtonian fluid, respectively; p is the pressure; and t is the time. Equations (2.9) and (2.10), which mathematically define the body acceleration term F(t) and the pressure gradient −∂p/∂z, are assumed in this subsection. The constitutive equations of the fluids in motion in the core region (Casson fluid) and peripheral layer region (Newtonian fluid) are given in (2.35).
Here τ_y is the yield stress; R_P is the plug core radius; μ_C and μ_N are the viscosities of the Casson fluid and Newtonian fluid, respectively. The appropriate boundary conditions of the two-fluid flow are given in (2.36).
Nondimensionalization
Let us introduce the nondimensional variables, where α_C and α_N are the pulsatile Reynolds numbers of the Casson fluid and Newtonian fluid, respectively. Using the nondimensional variables in the momentum equations (2.32) and (2.33) and the constitutive equations (2.35), the simplified forms of these equations are obtained as (2.38)-(2.42). Using the nondimensional variables, the boundary conditions become (2.43).
Equations (2.12)-(2.13), which mathematically define the nondimensional form of the geometry of the segment of the tapered artery with overlapping stenosis, are assumed in this subsection.
The nondimensional volume flow rate is given by Q̄ = Q/(πR₀⁴A₀/8μ_C), where Q is the dimensional volume flow rate.
Perturbation Method of Solution
As it is not possible to find an exact solution to the system of nonlinear partial differential equations (2.38)-(2.42), the perturbation method is applied to obtain asymptotic solutions for the unknowns u_C, u_N, τ_C, and τ_N. Since the present study deals with slow (low Reynolds number) blood flow, where the effects of the pulsatile Reynolds numbers α_C and α_N are negligibly small, and since these parameters occur naturally in the nondimensional form of the momentum equation, it is appropriate to expand (2.38)-(2.42) in perturbation series in α²_C and α²_N. The plug core velocity u_P and the velocity in the core region u_C are expanded in a perturbation series in α²_C (with α²_C ≪ 1), as in (2.45).
Similarly, one may expand u_N, τ_P, τ_C, τ_N, and the plug core radius R_P in perturbation series in α²_C and α²_N, where α²_N ≪ 1. Using the perturbation series expansions of u_C and τ_C in (2.38) and equating the constant terms and the α²_C terms, the momentum equation of the core region decomposes to (2.46), whose leading part reads ∂(rτ₀C)/∂r = 2[1 + e sin t + B cos(ωt + φ)]r.
Applying the perturbation series expansions of u_C and τ_C in (2.40) and equating the constant terms and the α²_C terms, the constitutive equation of the core region simplifies to (2.47).
Similarly, substituting the perturbation series expansions of u_N and τ_N in (2.39) and equating the constant terms and the α²_N terms, the momentum equation of the peripheral region decomposes to (2.48).
Applying the perturbation series expansions of u_N and τ_N in (2.42) and equating the constant terms and the α²_N terms, the constitutive equation of the peripheral region reduces to (2.49).
Using the perturbation series expansions of u_C, u_N, τ_C, and τ_N in (2.43) and equating the constant terms and the α²_C and α²_N terms, one obtains the decomposed boundary conditions (2.51); in particular, τ₀P and τ₁P are finite at r = 0.
Here g(t) = 1 + e cos t + B cos(ωt + φ), q² = r|_{τ₀P=θ} = R₀p = θ/g(t), and D = (1/g(t)) dg(t)/dt. The expression for the wall shear stress τ_w is obtained as (2.52) (see [34] for details).
The expression for the volume flow rate is obtained as (2.53) (see [34] for details).
The expression for the plug core radius R_P is obtained as (2.54) (see [34] for details).
The longitudinal impedance to flow is given by Λ = (1 + e cos t)/Q̄ (2.55).
When R₁ = R, the present model reduces to the single-fluid Casson model, and in this case the expressions obtained in the present model for velocity, shear stress, wall shear stress, flow rate, and plug core radius are identical to those of Nagarani and Sarojamma [16].
Numerical Simulation of the Results
The objective of the present mathematical analysis is to compare the two-fluid H-B and Casson models for blood flow in narrow tapered arteries with mild overlapping stenosis and to bring out the advantages of using the two-fluid H-B model rather than the two-fluid Casson model. It also aims to bring out the effects of body acceleration, tapering of the artery, depth of the stenosis, yield stress, power law index, lead angle, frequency ratio, and pressure gradient on the physiologically important flow quantities such as plug core radius, plug flow velocity, velocity distribution, flow rate, wall shear stress, and longitudinal impedance to flow. The range of values of the various parameters used in this analysis follows [33-36]. The pulsatile Reynolds number ratio α is defined as α = α_N/α_H or α = α_N/α_C, and its value is taken the same as that of α_H or α_C [29]; the value of α_N is computed from these relations. The value of the ratio β of the central core radius βR₀ to the normal artery radius R₀ is taken as 0.95. It is observed that the plug core radius decreases rapidly as the axial variable z increases from 0 to 2.3, increases slowly as z increases from 2.3 to 2.8, decreases slowly as z increases from 2.8 to 3.2, and then increases rapidly as z increases further from 3.2 to 3.5. For a given set of parameter values and any angle of tapering ψ, the plug core radius of the two-fluid H-B model is considerably lower than that of the two-fluid Casson model. The variation of plug core radius with maximum depth of the stenosis, for different values of the amplitude parameter b of the time-dependent artery radius and for the two-fluid H-B and Casson models with ψ = −0.1, t = 45°, δ_P = θ = 0.1, B = 1, e = 0.5, φ = α_H = α_C = 0.2, ω = 1, z = 2.3, and β = n = 0.95, is illustrated in Figure 3. For both two-fluid models, the plug core radius decreases slowly with increasing maximum depth of the stenosis. Figures 2 and 3 bring out the effects of the angle of tapering, depth of the stenosis, and amplitude of the time-dependent artery radius on the plug core radius of blood flow in a tapered artery with overlapping stenosis. For the two-fluid H-B model, the plug flow velocity decreases as the time variable t increases from 0° to 120°, increases slowly as t increases from 120° to 180°, decreases slowly from 180° to 210°, and then increases very rapidly as t increases further from 210° to 360°. For the two-fluid Casson model, the plug flow velocity decreases rapidly as t increases from 0° to 90°, increases very slowly from 90° to 120°, decreases very slowly from 120° to 180°, increases very slowly from 180° to 210°, decreases very slowly from 210° to 240°, and then increases rapidly from 240° to 360°. For a fixed value of the parameter b of the time-dependent artery radius, the plug flow velocity decreases slightly with an increase of either the power law index n or the peripheral layer thickness. On the other hand, the plug flow velocity decreases considerably with an increase of the amplitude parameter b of the time-dependent artery radius when all the other parameters are held constant.
Figures 4 and 5 bring out the effect of body acceleration, angle of tapering, peripheral layer thickness, yield stress, and power law index on the plug flow velocity of blood in a tapered narrow artery with mild overlapping stenosis.

Velocity Distribution

Figure 6 shows the velocity distributions for the two-fluid and single-fluid non-Newtonian models and the Newtonian fluid model for different values of the body acceleration parameter B with n = β = 0.95, t = 210°, b = 0.1, δ_P = θ = 0.1, e = 0.5, φ = α = α_H = α_C = 0.2, ω = 1, z = 2.3, and ψ = −0.05. It is found that the velocity is higher for fluids without yield stress than for fluids with yield stress. It is also seen that the highest velocity distribution is attained for the power law fluid model with n = 0.95. The velocity distribution of the Newtonian fluid model is slightly lower than that of the power law fluid model with n = 0.95, and the velocity distributions of the two-fluid models are considerably higher than those of the respective single-fluid models. For a given set of values of the parameters, the velocity of the two-fluid H-B model is significantly higher than that of the two-fluid Casson model. It is also found that the velocity of the two-fluid H-B and Casson models, or of the single-fluid H-B and Casson models, with body acceleration is significantly higher than that of the respective fluid models without body acceleration, meaning that the presence of body acceleration significantly influences the velocity distribution.
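The body acceleration entering these comparisons is periodic. A common dimensional form in this literature is sketched below; the dimensionless parameters B and ω quoted above are then built from the amplitude a0 and the body-acceleration frequency f_b, though the exact nondimensionalisation used in this paper may differ.

```latex
% Periodic body acceleration: amplitude a0, lead angle phi,
% body-acceleration frequency f_b (a common form in this literature).
\[
F(t) = a_0 \cos\!\left(\omega_b\, t + \phi\right), \qquad \omega_b = 2\pi f_b .
\]
```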
Flow Rate
The variation of flow rate with pressure gradient ratio for the two-fluid H-B and Casson models and different values of B and ψ, with the other parameters held fixed, is shown in Figure 7. It is clear that the flow rate of blood increases linearly with the increase of the pressure gradient when blood is modeled by either of the two-fluid models. But, for a given set of values of the parameters, the flow rate of the two-fluid H-B model is significantly higher than that of the two-fluid Casson model. It is also noticed that, for a given set of values of the parameters, the flow rate increases with the increase of either the body acceleration parameter B or the angle of tapering ψ; the increase in the flow rate is significant when B increases and marginal when ψ increases.

Figure 8 illustrates the variation of flow rate with yield stress θ for the two-fluid H-B and Casson models, with the other parameters held fixed. It is seen that the flow rate of blood decreases very slowly with the increase of the yield stress θ when blood is represented by the two-fluid H-B model, whereas, for the two-fluid Casson model, the flow rate decreases significantly as θ increases from 0 to 0.025 and then decreases slowly as θ increases from 0.025 to 0.2. Also, it is observed that the flow rate of blood increases considerably with the increase of the peripheral layer thickness and of the amplitude parameter b of the time-dependent artery radius. Figures 7 and 8 spell out the effect of peripheral layer thickness, angle of tapering, and body acceleration on the flow rate of blood when it is flowing through a tapered artery with mild constriction.

Wall Shear Stress

Figure 9 depicts the variation of wall shear stress with frequency ratio for the two-fluid H-B and Casson models and different values of φ and b with t = 60°, B = 1, ψ = −0.1, β = n = 0.95, δ_P = 0.1, α = α_H = α_C = 0.2, and z = 2.3. It is found that the wall shear stress decreases slowly when the frequency ratio ω increases from 0 to 0.2 and then decreases rapidly and nonlinearly when ω increases further from 0.2 to 1. It is also clear that the wall shear stress in blood flow increases considerably with the increase of the amplitude b of the time-dependent artery radius when the lead angle is fixed. On the other hand, the wall shear stress decreases significantly with the increase of the lead angle φ when all the other parameters are held constant. One can observe that the wall shear stress of the two-fluid H-B model is slightly lower than that of the two-fluid Casson model.
Longitudinal Impedance to Flow
The variation of the longitudinal impedance to flow with axial distance for the two-fluid H-B and Casson models and different values of B and ψ with β = n = 0.95, θ = δ_P = 0.1, t = 60°, φ = α = α_H = α_C = 0.2, and b = 0.1 is illustrated in Figure 10. It is observed that the longitudinal impedance to blood flow increases rapidly when the axial variable z increases from 2 to 2.3, then decreases slowly with the increase of z from 2.3 to 2.8, then increases slowly when z increases from 2.8 to 3.2, and then decreases rapidly when z increases further from 3.2 to 3.5. One can notice that, for a given set of values of the parameters, the longitudinal impedance to flow of the two-fluid H-B model is significantly lower than that of the two-fluid Casson model. It is also found that the longitudinal impedance of blood flow with body acceleration is considerably lower than that of blood flow without body acceleration, meaning that the presence of body acceleration considerably reduces the impedance to flow. It is clear that the longitudinal impedance to blood flow decreases with the increase of the angle of tapering of the artery. The estimates of the increase in the longitudinal impedance to blood flow for different values of the angle of tapering and the maximum depth of the stenosis are given in Table 1. It is observed that these estimates increase slowly with the increase of the maximum depth of the stenosis and decrease very slowly with the increase of the angle of tapering of the artery. It is also recorded that the estimates of the increase in the longitudinal impedance to flow of the two-fluid H-B model are marginally lower than those of the two-fluid Casson model.
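As a reading aid for Figure 10 and Table 1: the longitudinal impedance to flow is commonly defined as the ratio of the pressure gradient to the flow rate, and we assume here that this paper follows the usual convention.

```latex
% Longitudinal impedance to flow (usual definition, assumed here).
\[
\Lambda = \frac{P(t)}{Q(t)} ,
\]
```

Under this definition, any change that lowers the flow rate at a fixed pressure gradient, such as a deeper stenosis or the absence of body acceleration, raises Λ, consistent with the trends described above.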
Some Possible Clinical Applications
To discuss some possible clinical applications of this study, the physiological data for different types of arteries, their corresponding radii, and the steady and pulsatile pressure gradient values reported by Chaturani and Issac [20] are given in Table 2 and are used in this part of the study. For the clinical data given in Table 2, the estimates of the mean velocity for the two-fluid H-B and Casson models and different values of m and B with t = 60°, β = n = 0.95, z = 2.3, θ = δ_P = 0.1, φ = α = α_H = α_C = 0.2, e = 0.5, ω = 1, and b = 0.1 are computed in Table 3. It is noted that the mean velocity of blood decreases significantly with the increase of the artery radius, except in the arterioles, and increases considerably with the increase of the angle of tapering. It is also observed that the mean velocity of blood increases significantly with the increase of the body acceleration. From Tables 3(a) and 3(b), it is recorded that the estimates of the mean velocity of the two-fluid H-B model are significantly higher than those of the two-fluid Casson model. For the clinical data given in Table 2, the estimates of the mean flow rate for the two-fluid H-B and Casson models and different values of m and B with the same parameter values are computed in Table 4. It is found that the mean flow rate of blood increases very significantly with the increase of the artery radius and increases considerably with the increase of the angle of tapering. One can also note that the mean flow rate of blood increases significantly with the increase of the body acceleration. From Tables 4(a) and 4(b), it is observed that the estimates of the mean flow rate of the two-fluid H-B model are considerably higher than those of the two-fluid Casson model.
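The mean velocity estimates in Table 3 follow from the mean flow rate and the lumen cross-section; assuming the usual definition (the formula itself is not reproduced in this section):

```latex
% Mean velocity from mean flow rate over the cross-section of radius R0
% (assumed definition; the paper's exact expression may differ).
\[
\bar{v} = \frac{\bar{Q}}{\pi R_0^{2}} ,
\]
```

with the artery-specific radii and pressure gradients of Table 2 setting the mean flow rate for each artery type.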
Conclusions
The present comparative analysis brings out several useful rheological properties of blood when it flows through narrow tapered arteries with mild overlapping time-dependent stenosis in the presence of external periodic body acceleration, treating blood as (i) a two-fluid H-B model and (ii) a two-fluid Casson model. The major findings of this mathematical analysis, which reveal the advantages of treating blood as a two-fluid H-B model rather than a two-fluid Casson model in blood flow modeling, are summarized below.
(i) The plug core radius, wall shear stress, and longitudinal impedance to flow are marginally lower for the two-fluid H-B model than the corresponding flow quantities of the two-fluid Casson fluid model.
(ii) The plug flow velocity, velocity distribution, and flow rate of blood are considerably higher for the two-fluid H-B fluid model than those of the two-fluid Casson fluid model.
(iii) The estimates of the mean velocity and mean flow rate of the two-fluid H-B model are considerably higher than those of the two-fluid Casson model. On the other hand, the following similarities are noticed when modeling blood by either of these two models.
(iv) The plug core radius and the longitudinal impedance to flow increase with the increase of the maximum depth of the stenosis.
(v) When the angle of tapering increases, the plug flow velocity and flow rate increase and the longitudinal impedance to flow decreases.
(vi) The estimates of the mean velocity and mean flow rate increase considerably with the increase of the body acceleration, and this behavior is reversed when the maximum depth of the overlapping stenosis increases.
From the results discussed, one can observe that there is a substantial difference between the flow quantities of the two-fluid H-B model and the two-fluid Casson model; thus, the use of the two-fluid H-B model for blood flow in diseased arteries may provide better results, which may be useful to physicians in predicting the effects of periodic body accelerations and of the maximum depth of arterial stenosis on the physiologically important flow quantities. The results of this study may also provide useful information to surgeons for crucial treatment decisions, such as whether a cardiovascular disease can be managed with medication or requires surgery. Hence, it is concluded that the present study can be considered an improvement in the mathematical modeling of blood flow in narrow tapered arteries with mild overlapping stenosis under periodic body accelerations.
Nomenclature

r̄: Radial distance
r: Dimensionless radial distance
z̄: Axial distance
z: Dimensionless axial distance
n: Power law index
p̄: Pressure
p: Dimensionless pressure
P: Dimensionless pressure gradient
Q̄: Flow rate
Q: Dimensionless flow rate
R0: Radius of the normal artery
R̄(z): Radius of the artery in the stenosed region
R(z): Dimensionless radius of the artery in the stenosed region
F(t): Body acceleration function
a0: Amplitude of the body acceleration
R̄_P: Plug core radius
R_P: Dimensionless plug core radius
"Medicine",
"Mathematics"
] |
Regional financial performance and human development index: Study in Central Java and South Kalimantan provinces
The purpose of this study is to find out: (1) the difference in the Human Development Index (HDI) between Central Java Province and South Kalimantan Province, and (2) the effect of the financial performance of the Regional Government on HDI. The samples of this study were the cities and regencies in the Provinces of Central Java and South Kalimantan. This study found that there were differences in the level of people's prosperity, as reflected by HDI, between Central Java and South Kalimantan. Another finding showed that the financial performance of the Regional Government affected the level of people's prosperity as measured by HDI.
Introduction
With regional autonomy, Regional Governments are mandated to carry out several tasks. One of the main tasks of the Regional Government listed in the Law of Regional Government Number 32 of 2004 is to exercise the broadest autonomy, except in matters belonging to central governmental affairs, with the aim of improving people's welfare, public services, and regional competitiveness. Human development, as reflected in the Human Development Index (HDI), is highly dependent on the commitment of the government to providing supporting facilities. One of the most important elements in government administration and regional development is regional financial management that meets the development aspirations and the demands of the people. In the context of realizing regions with high human quality, regional governments use the Regional Budget to finance development in these sectors. Local governments must work hard to reduce the poverty rate. Low capacity and capability in regional financial management often bring negative effects, namely low-quality services for the people and an inability to improve HDI. The aspect of government performance that is often used as a reference in viewing the level of public welfare is financial performance. There are many measuring instruments to assess government financial performance, including the analysis of financial ratios in the Regional Budget of Revenue and Expenditure (Harliyani & Haryadi, 2016).
Previous studies have demonstrated the influence of local government financial performance on HDI. Indramawan (2018) examined the performance of local governments and HDI in the provinces of West Papua and Papua; the results showed that the Fiscal Decentralization Ratio had a positive impact on HDI in both provinces, while the Regional Government Financial Dependency Ratio and the Capital Expenditure Ratio had a negative impact on HDI in both provinces. Ananda (2017) found that the Fiscal Decentralization Degree Ratio, the Regional Financial Independence Ratio, and the Regional Original Revenue Effectiveness Ratio had a significant effect on HDI, whereas the Regional Financial Efficiency Ratio and the Capital Expenditure Allocation did not affect HDI. Harliyani and Haryadi (2016) stated that the fiscal decentralization degree ratio and the expenditure balance directly affected HDI, while three other variables, namely the regional financial dependency ratio, the effectiveness of regional original revenue, and the efficiency of regional original revenue, did not affect HDI.
Based on the description above, the problems of this study are: (1) Is there any difference in the level of people's welfare, as measured by the Human Development Index (HDI), between Central Java and South Kalimantan? (2) Does Regional Government financial performance affect HDI in Central Java and South Kalimantan? This research was conducted to find out whether the Human Development Index differs between Central Java and South Kalimantan, and whether the Regional Government's financial performance affects HDI in both provinces. The results of this study are expected to serve the Government as a consideration for making policies related to equitable development. In addition, the government must pay attention to the importance of local government financial performance in improving the prosperity of the people.
Literature Review

Agency Theory in Government
From the perspective of agency theory, the relationship between society and government is like the relationship between principals and agents: the community is the principal and the government is the agent. Principals give agents the authority to govern and delegate resource management (in the form of taxes and others) to them (Prasetyaningsih, 2014). Since the implementation of the regional autonomy system, each region has been given authority by the central government to take care of its own household affairs based on the initiatives and aspirations of its people within the framework of the unitary state of the Republic of Indonesia. Based on Law Number 23 of 2014, in lieu of Law Number 32 of 2004 and the previous law on Regional Government, there is a strict separation between the function of the regional government (executive) and the function of the people's representatives (legislative). Based on this differentiation of functions, the executive carries out the planning, implementation, and reporting of the regional budget, which is a manifestation of service to the public, whereas the legislative, as the community's representative, plays an active role in legislation, budgeting, and supervision (Halim & Abdullah, 2006).
Regional Government Financial Performance
Financial performance is a performance measurement that uses financial indicators. According to Munir et al. (2004), several ratios can be developed based on financial data sourced from the Regional Budget:

(1) Regional Financial Independence Ratio. Regional financial independence (fiscal autonomy) shows the ability of regional governments to fund their activities, development, and services to the people who have paid the taxes and retributions that constitute sources of regional revenue. The independence of regional finances is shown by the size of the regional original revenue compared to regional revenue from other sources, for example, central government grants or loans. The independence ratio thus illustrates the region's dependence on external funding sources: the higher the independence ratio, the lower the region's dependence on external parties (especially the central and provincial governments), and vice versa. The ratio also illustrates the level of people's participation in paying the taxes and retributions that are the main components of regional original revenue; the more the people pay in taxes and regional retributions, the higher the level of people's welfare will be.

(2) Fiscal Decentralization Degree Ratio. This measure shows the authority and responsibility given by the central government to regional governments to generate and manage revenue. The ratio measures the contribution of regional original revenue, as a source of self-managed revenue, to total regional revenue. Regional original revenue is revenue derived from regional taxes, regional retributions, regionally owned companies, the management of regional property, and other legitimate sources. Total regional revenue is the sum of all revenues in one fiscal year.

(3) Regional Original Revenue Effectiveness Ratio. The effectiveness ratio illustrates the ability of local governments to realize planned regional original revenue compared to targets set based on the real potential of the region. A region is categorized as effective if the ratio reaches a minimum of 1 (100 percent); the higher the effectiveness ratio, the better the region's capability.
Human Development Index (HDI)
HDI explains how people can access the results of development in terms of income, health, education, and so on. HDI was first introduced by the United Nations Development Programme (UNDP) in 1990 and is published regularly in the annual Human Development Report (HDR). HDI is formed by three basic dimensions: (1) a long and healthy life, (2) knowledge, and (3) a decent standard of living. The benefits of HDI are: (1) HDI is an important indicator of success in efforts to build the quality of human life; (2) HDI can determine the ranking or level of development of a region or country; and (3) for Indonesia, HDI is strategic data, because it is not only a measure of Government performance but is also used as one of the aspects in determining the General Allocation Fund (DAU) (BPS, 2014).
Previous Researches and Hypothesis Development
Indramawan (2018) examined the impact of regional government financial performance on the Human Development Index (HDI) in the regencies and cities of West Papua and Papua, the two provinces with the lowest HDI levels in Indonesia. To measure the financial performance of the regional governments, the study used several ratios, namely the Fiscal Decentralization Ratio, the Regional Government Financial Dependency Ratio, the Regional Original Revenue Effectiveness Ratio, and the Capital Expenditure Ratio. The data were panel data, a combination of cross-section and time-series data, and the regression model used was the Fixed Effect Model (FEM). The results showed that the Fiscal Decentralization Ratio had a significant positive impact on HDI in West Papua and Papua, while the Regional Government Financial Dependency Ratio and the Capital Expenditure Ratio had a significant negative impact on HDI in both provinces. The Regional Original Revenue Effectiveness Ratio had a negative but non-significant effect on HDI.
Ananda (2017) conducted a study to determine the influence of Regional Government financial performance, in the form of the Fiscal Decentralization Degree Ratio, the Regional Financial Independence Ratio, the Regional Original Revenue (PAD) Effectiveness Ratio, the Regional Financial Efficiency Ratio, and the Capital Expenditure Allocation Ratio, on the Human Development Index (HDI) in 38 regencies/cities in East Java Province in 2011-2015. The data analysis tool was panel data analysis, and the data were secondary data. The test results showed that: (1) the Fiscal Decentralization Degree Ratio had a significant effect on HDI; (2) the Regional Financial Independence Ratio and the Regional Original Revenue Effectiveness Ratio affected HDI; (3) the Regional Financial Efficiency Ratio had no effect on HDI; and (4) the Capital Expenditure Allocation Ratio had no effect on HDI. From these results, the Government of East Java Province was considered able to allocate an optimal portion to the sectors that support the improvement of people's welfare, namely by prioritizing capital expenditure for public services over operational and employee expenditure.

Harliyani and Haryadi (2016) conducted a study to analyze the development of regional revenue and expenditure in Jambi Province. It aimed to analyze financial performance as seen from the ratios of fiscal decentralization degree, regional financial dependency, regional independence, effectiveness of regional original revenue, efficiency of regional original revenue, and the balance of direct expenditure, as well as to analyze the effect of financial performance on the Human Development Index (HDI). The analysis used descriptive statistics, describing the observed variables in the form of ratios/percentages and tables, graphs, or diagrams, together with classic assumption tests, multiple linear regression, and hypothesis testing. The results showed that only two of the research variables significantly influenced HDI, namely the fiscal decentralization degree ratio and the balance of direct expenditure, while the other three variables, namely the regional financial dependency ratio, the effectiveness of regional original revenue, and the efficiency of regional original revenue, did not significantly affect HDI. Based on these findings, it was concluded that HDI in Jambi Province in the period 2001-2014 was influenced by the fiscal decentralization degree ratio and the balance of direct expenditure.
Agency problems that occur between the government and the community can be minimized when the government achieves good performance. The community, as principal, can see and measure the results of local government performance. The government must be able to manage and measure its performance using a correct performance measurement system in order to provide better services to the community and obtain community support (Nurdin et al., 2014).
Based on the description and previous studies, the hypotheses proposed are as follows.
H1: There are differences in the Human Development Index between the Provinces of Central Java and South Kalimantan.
H2: Regional Government Financial Independence influences the Human Development Index (HDI).
H3: Regional Government Fiscal Decentralization influences the Human Development Index (HDI).
H4: Effectiveness of Regional Original Revenue influences the Human Development Index (HDI).
[Research framework: differences in the Human Development Index between Central Java and South Kalimantan, and the financial performance of regional governments as measured by (1) financial independence, (2) fiscal decentralization, and (3) regional original revenue effectiveness.]
Research Population and Samples
The population of this research comprised the regency and city governments in the provinces of Central Java and South Kalimantan; purposive sampling was used to obtain samples that met the criteria. The samples of this study were selected based on the following criteria: (1) regency and city government financial statements in Central Java and South Kalimantan from 2014 to 2017, and (2) statements receiving an Unqualified or Qualified opinion from the Supreme Audit Board of the Republic of Indonesia, which contain reliable information.
Research Variables
The independent variables used in this study were measures of the financial performance of the Regional Government, comprising several ratios according to Munir et al. (2004): (1) the Independence Ratio (IR), calculated as total regional original revenue divided by grants from the central government plus loans; (2) the Fiscal Decentralization Ratio (FDR), calculated as total regional original revenue divided by total regional revenue; and (3) the Regional Original Revenue Effectiveness Ratio (RORER), calculated as realized regional original revenue divided by the regional original revenue target (an illustrative computation is sketched below).
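As an illustration only, the three ratios can be computed from a regional budget report as in the following minimal sketch; all figures are invented, and the function and variable names are ours, not from the source. The RORER denominator follows the effectiveness definition in the literature review (realization against target).

```python
# Minimal sketch of the three regional financial performance ratios.
# All figures are invented for illustration (billions of rupiah).

def independence_ratio(original_revenue, central_grants, loans):
    """Regional original revenue relative to external funding sources."""
    return original_revenue / (central_grants + loans)

def fiscal_decentralization_ratio(original_revenue, total_revenue):
    """Share of self-managed revenue in total regional revenue."""
    return original_revenue / total_revenue

def revenue_effectiveness_ratio(realized, targeted):
    """Realized vs. targeted regional original revenue; >= 1.0 is effective."""
    return realized / targeted

original_revenue = 350.0
central_grants, loans = 1_200.0, 50.0
total_revenue = original_revenue + central_grants + loans
realized, targeted = 350.0, 320.0

print(f"IR    = {independence_ratio(original_revenue, central_grants, loans):.3f}")
print(f"FDR   = {fiscal_decentralization_ratio(original_revenue, total_revenue):.3f}")
print(f"RORER = {revenue_effectiveness_ratio(realized, targeted):.3f}")
```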
The dependent variable in this study was the Human Development Index (HDI), calculated as the geometric average of the health, education, and expenditure indices, as shown in the formula below.
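The formula, as standardised by UNDP and applied by BPS, is reproduced here for reference; the min-max goalposts used to normalise each component index are part of the published method rather than of this paper.

```latex
% HDI as the geometric mean of three dimension indices, each
% normalised with the published minimum/maximum goalposts.
\[
\text{HDI} = \sqrt[3]{\,I_{\text{health}} \times I_{\text{education}} \times I_{\text{expenditure}}\,} \times 100,
\qquad
I_x = \frac{x - x_{\min}}{x_{\max} - x_{\min}} .
\]
```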
Data Types and Sources
This research used secondary data in the form of annual Regional Government Financial Statements audited by the Indonesian Supreme Audit Board for the period 2014 to 2017. In addition, the study used Human Development Index (HDI) data obtained from the Central Statistics Agency (BPS).
Data Analysis and Hypothesis Testing
Before hypothesis testing, the normality of the data was analyzed to determine the appropriate testing method. This analysis determines whether the research data are normally distributed, and it is a prerequisite of the difference test for two independent samples (Model 1) and the multiple regression test (Model 2). To assess normality, the non-parametric Kolmogorov-Smirnov test was used. For Model 1, if the normality test indicated a normal distribution of the financial ratios, the independent-samples t-test was performed; if it indicated a non-normal distribution, the Mann-Whitney U-test was used instead.
For Model 2, if the normality test indicated a normal distribution, the ratios were tested using multiple regression; if it indicated a non-normal distribution, the run test was applied to those ratios, as sketched below.
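A minimal sketch of this decision flow follows; it is our own illustration with hypothetical data, and the source's exact procedure may differ in details such as the normality variant used.

```python
# Sketch of the testing flow described above, using SciPy/statsmodels.
# All data are randomly generated stand-ins, not the study's data.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
central_java_hdi = rng.normal(70.1, 2.0, 140)
south_kalimantan_hdi = rng.normal(67.8, 2.0, 52)

# Model 1: Kolmogorov-Smirnov normality check, then choose the test.
all_hdi = np.concatenate([central_java_hdi, south_kalimantan_hdi])
z = (all_hdi - all_hdi.mean()) / all_hdi.std(ddof=1)
ks_p = stats.kstest(z, "norm").pvalue
if ks_p > 0.05:
    test = stats.ttest_ind(central_java_hdi, south_kalimantan_hdi)
else:
    test = stats.mannwhitneyu(central_java_hdi, south_kalimantan_hdi)
print(f"KS p = {ks_p:.3f}, difference test p = {test.pvalue:.4f}")

# Model 2: multiple regression of HDI on the three ratios (stand-ins).
X = rng.normal(size=(192, 3))                 # IR, FDR, RORER columns
y = 69 + X @ np.array([1.2, 0.8, 0.5]) + rng.normal(0, 1, 192)
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.pvalues)
```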
Population and Sample
The population of this research comprised all regencies and cities in Central Java and South Kalimantan: 36 regencies and cities in Central Java and 14 in South Kalimantan. The research sample can be seen in the following table.

Using the Kolmogorov-Smirnov test, the data of this study were found to be normal, because the significance value of the test, 0.069, is greater than 0.05. Based on this result, the hypotheses in this study were tested using multiple regression.

Based on Tables 4 and 5, the significance value of the t-test is 0.004, which is smaller than the specified alpha of 0.05. This means that Hypothesis 1 is accepted: there are differences in HDI between the Provinces of Central Java and South Kalimantan. On average, the HDI of Central Java Province is 70.09, higher than the HDI of South Kalimantan Province, which is only 67.78.

Based on Table 8, the Regional Government's financial performance, as measured by the Regional Financial Independence Ratio, the Fiscal Decentralization Ratio, and the Regional Original Revenue Effectiveness Ratio, affects the level of people's welfare as measured by the Human Development Index (HDI). This is indicated by the significance values of t for all independent variables, which are smaller than the specified alpha of 0.05; hence Hypotheses 2, 3, and 4 are accepted.

Based on the results of the data analysis and hypothesis testing, the findings of this study show that there are differences in the level of welfare, as measured by HDI, between Central Java and South Kalimantan, with the average HDI of Central Java higher than that of South Kalimantan. This study supports Hill's finding that eastern Indonesia still lags behind the western region because it has always been poorer and remains underdeveloped. Although lagging behind other provinces, eastern Indonesia has also advanced in terms of growth, and the difference in growth between western and eastern Indonesia is not large; however, because eastern Indonesia was poorer in the past and its growth rate was not as high as that of the western regions, the gap has kept widening (Antaranews.com, 2007).

The other finding shows that the Regional Government's financial performance, as measured by the Regional Financial Independence Ratio, the Fiscal Decentralization Ratio, and the Effectiveness of Regional Original Revenue, affects the level of welfare as measured by HDI. These results support the research conducted by Harliyani and Haryadi (2016) and Ananda (2017). The results also support agency theory: the community, as principal, gives the government (agent) the authority to govern and delegates resource management (in the form of taxes and others) to it in order to increase the community's welfare. The results of this study can serve the Government as a consideration for making policies related to equitable development. In addition, the government must pay attention to the importance of local government financial performance in improving the prosperity of the people.
"Economics"
] |
Plant Transglutaminases: New Insights in Biochemistry, Genetics, and Physiology
Transglutaminases (TGases) are calcium-dependent enzymes that catalyse an acyl-transfer reaction between primary amino groups and protein-bound Gln residues. They are widely distributed in nature, being found in vertebrates, invertebrates, microorganisms, and plants. TGases and their functionality have been less studied in plants than in humans and animals. TGases are distributed in all plant organs, such as leaves, tubers, roots, flowers, buds, and pollen, and in various cell compartments, including chloroplasts, the cytoplasm, and the cell wall. Recent molecular, physiological, and biochemical evidence pointing to the role of TGases in plant biology, and to the mechanisms in which they are involved, allows us to consider their role in processes such as photosynthesis, plant fertilisation, responses to biotic and abiotic stresses, and leaf senescence. In the present paper, an in-depth description of the biochemical characteristics and a bioinformatics comparison of plant TGases are provided. We also present the phylogenetic relationships, gene structures, and sequence alignments of TGase proteins in various plant species, not described elsewhere. Currently, our knowledge of these proteins in plants is still insufficient. Further research with the aim of identifying and describing the regulatory components of these enzymes and the processes regulated by them is needed.
TGases are known to be widely distributed in nature, being found in vertebrates, invertebrates, molluscs, plants, and microorganisms [2,4]. Among plants, TGase activity has been reported in angiosperms [5,6] and studied in several cellular processes. It is distributed in different organs, such as leaves, tubers, roots, flowers, buds, and pollen, as well as in various cell compartments, including chloroplasts, the cytoplasm, and the cell wall [7,8]. TGases have been reported to be associated with growth (e.g., the cell cycle, apical growth, seedling growth, and root growth), pollen-pistil interactions, differentiation, and programmed cell death. The biochemical properties reported for plant TGases are in several respects different from mammalian ones, suggesting that the active sites of plant TGases may be similar but not identical to the active sites of mammalian tissue enzymes. Furthermore, the activity of mammalian tissue TGases is regulated by GTP at low concentrations of Ca2+ [22]. In Arabidopsis, AtPNG1p was demonstrated to efficiently polymerise bovine serum albumin (BSA) in a Ca2+- and DTT-dependent manner; both GTP and EGTA inhibited enzyme activity, though it was not affected by magnesium, sodium, or potassium [19]. The TGase activity in chloroplasts isolated from the leaves of Helianthus tuberosus was inhibited by SH reagents, such as dithiobis-ethylamine (DTEA), N-ethylmaleimide (NEM), and DTT. In particular, DTT showed an inhibitory effect that reached its maximum at 10 mM (98% inhibition at 10 mM and 34% at 1 mM), whereas 1 mM DTEA caused 86% inhibition and 10 mM NEM caused 43% inhibition [24]. In lupine seedlings, DTT slowed down the rate of casein polymerisation induced by TGase activity [25]. Del Duca and co-workers reported that the in vitro TGase activity of chloroplasts was enhanced by 1 mM Ca2+ and severely inhibited by 1 mM EGTA in a dose-dependent manner [24]. Similarly, in Oryza sativa, TGase activity was shown to be increased by exogenous Ca2+ and inhibited by EGTA, while the presence of specific compounds, such as GTP, monodansyl cadaverine (MDC), and DTT, completely inhibited TGase activity [26]. In the green alga Chlamydomonas reinhardtii, TGase activity was not impaired by 1 mM Zn2+ but was completely blocked by NEM and p-chloromercuribenzoate [27].
The maize leaf TGase, when expressed in bacteria, was found to be inhibited by the competitive substrate MDC, by GTP, and by the absence of exogenous Ca2+. TGase activity in chloroplasts (thylakoids and grana) was inhibited by MDC, GTP, diethyldithiocarbamic acid (DIECA), and 3-(3,4-dichlorophenyl)-1,1-dimethylurea (Diuron) [28]. In Rosmarinus officinalis, TGase activity was increasingly stimulated by 2-6 mM CaCl2 (from 5 to 20%) and was not inhibited by 2-14% NaCl [29]. In addition, TGase activity is affected by different ions; in particular, it was reported that Mg2+ had a slightly inhibitory effect [24]. Moderate inhibition was reported for Fe, Cu, and Mn; in contrast, in rosemary, the TGase was not inhibited by Mg, Ba, or Zn [29]. Shu and co-authors [30] reported upregulated activity in the presence of NaCl; in the presence of o-phenanthroline, however, the TGase gene expression level declined, and the application of exogenous o-phenanthroline significantly decreased the endogenous PA content in cucumber leaves.
The inhibitory roles and precise functions of proteases are still unclear. What is known is that proteases can cleave TGase substrates, thereby favouring accessibility to the binding sites. Furthermore, in mammalian cells and microorganisms, TGases can be directly activated by protease-induced processing [31,32], and this activation mechanism cannot be excluded in plants. It has been hypothesised that the direct action of protease inhibitors may inactivate the Cys thiol group in the active site of plant TGases [24].
The influence of biogenic diamines on enzyme activity has also been exhaustively studied. When TGase activity was checked by testing the incorporation of [³H]putrescine (Put) into N,N′-dimethylcasein, cadaverine showed a higher apparent inhibition of TGases than diaminopropane [6]. Spermidine (Spd) and spermine (Spm) showed better incorporation than Put in sprout apices of H. tuberosus and in the apical meristematic tissue of etiolated pea seedlings. Additionally, 5 mM histamine, a competitive substrate of TGases, caused a 64% inhibition of the activity, as measured by the incorporation of labelled PAs into protein substrates [17]. Several authors have reported that TGase activity is affected by different substrates and amine concentrations; in particular, the large subunit of RuBisCO was shown to be a protein substrate in Medicago sativa [33,34].
TGase activity in plants impacts the photosynthetic machinery. The enzyme was shown to be light-inducible in Quercus ilex [35]. Likewise in rice, TGase activity was shown to be light-sensitive and completely inhibited by darkness [36]. A recent study indicated that the overexpression of TGases could promote the CO 2 assimilation rate by activating Calvin cycle enzymes [37].
Only a few researchers have attempted to characterise the enzymatic kinetics of plant TGases. Michaelis-Menten kinetics were calculated by testing the incorporation of [³H]Put into N,N′-dimethylcasein catalysed by a pea seedling TGase. The Lineweaver-Burk plot of the data showed an apparent V_max of 41 nmol/(mg protein·h) and an apparent K_M of 9.63 mM of Put [6]. A purified recombinant maize TGase had a K_M of 3.98 µmol L⁻¹ and a V_max of 2711 µmol L⁻¹ min⁻¹, as calculated with a fluorometric method [38].
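For context, these V_max and K_M values come from the standard Michaelis-Menten rate law and its Lineweaver-Burk (double-reciprocal) linearisation, reproduced here for reference:

```latex
% Michaelis-Menten rate law and its Lineweaver-Burk linearisation:
% Vmax and KM are read from the intercept and slope of the 1/v vs 1/[S] line.
\[
v = \frac{V_{\max}\,[S]}{K_M + [S]}
\qquad\Longleftrightarrow\qquad
\frac{1}{v} = \frac{K_M}{V_{\max}}\cdot\frac{1}{[S]} + \frac{1}{V_{\max}} .
\]
```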
The presence and distribution of TGases in some angiosperms and algae, with their main biochemical features, are presented in Table 1. Plant TGases have been identified in different cell compartments. Their molecular weights vary significantly, from the 30 kDa thylakoid-localised TGase isolated from Cucumis sativus cotyledons [39] to the 150 kDa TGases found in Zea mays thylakoid and grana extracts [28] and the 160-180 kDa band found in mature maize pollen [40]. However, the most frequently found form, based only on molecular weight, is the 58 kDa one. Most reports have indicated that the optimum pH for TGase activity assays falls within the range of 7.5-8.5.
Bioinformatics Analyses
Bioinformatics analyses, such as comparisons of gene sequences, can support biochemical data and add new knowledge regarding the phylogenetic relationships and genomic organisation of TGases in plants. Here, we present for the first time a comparison by sequence alignments, phylogenetic relationships, and data on the genomic organisation of different TGases in various plant species. To date, TGases have been identified in an increasing number of plant species, but a comparative analysis of their characteristics has not been performed. We selected the TGase family members from model plants [57] available in the PLAZA 5.0 database (https://bioinformatics.psb.ugent.be/plaza/, accessed on 17 March 2022): a total of 41 TGase genes were found, distributed across 30 plant species (Figure 1). Among angiosperms, Glycine max, Nicotiana tabacum, and Miscanthus sinensis have two duplicated genes each, and Eucalyptus grandis has three. Notably, Selaginella moellendorffii (Lycophyta) has five duplicated genes. Thus, gene duplication seems to have played a dominant role in the expansion of the TGase family in plants. Gene duplication, expansion, and subsequent diversification are features of the evolutionary process. The abundance of duplicate genes in plant genomes originated from ancient duplication events and a high rate of retention of extant pairs of duplicate genes. These duplicates have contributed to the evolution of novel functions, such as in growth and development, disease resistance, and stress tolerance [58].
The phylogenetic analysis (Figure 1) classified the plant TGases into taxonomic groups, i.e., monocots, dicots, bryophytes, lycophytes, marchantiophytes, and chlorophytes. The monocots and dicots (angiosperms) form a separate clade, suggesting that they are more evolutionarily divergent than the other species; this analysis is consistent with plant evolutionary history. The gene structure analysis (Figure 1) showed that angiosperms have different intron/exon arrangements compared to other plant taxa, though no major differences were observed between monocots and dicots. TGase gene size varied from 2 to 28 kbp in most of the examined plants; however, it was significantly larger in Vitis vinifera (54 kbp) and barley (60 kbp). Though V. vinifera has a genome size of only ~500 Mb, its TGase gene is large, with long introns. This might be because of the abundance (41%) of repetitive/transposable elements (TEs) in the grapevine genome [59]; moreover, introns are quite rich in repeats and TEs. The large size of the barley TGase might also be due to the specific characteristics of its genome, which is rich in pseudogenes and small gene fragments mainly located towards chromosome tips or as tandemly repeated units [60]; these repetitive regions are present in introns and/or intergenic spaces [61,62]. Most plant TGases include one conserved large exon, which might be associated with the enzyme's active site. The level of conservation of plant TGases was compared to that of animal and microbial ones via ConSurf analysis using PF01841 (Transglut_core, https://pfam.xfam.org/family/PF01841, accessed on 17 March 2022).
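As an illustration of the kind of pairwise comparison underlying such alignments, a minimal sketch follows; it is our own example (the FASTA file name is hypothetical), and the published analysis relied on PLAZA and ConSurf rather than on this script.

```python
# Sketch: percent identity between two TGase protein sequences
# retrieved as FASTA (e.g., exported from PLAZA). Requires Biopython;
# the file name "tgase_proteins.fasta" is hypothetical.
from Bio import SeqIO
from Bio.Align import PairwiseAligner, substitution_matrices

records = list(SeqIO.parse("tgase_proteins.fasta", "fasta"))
aligner = PairwiseAligner()
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10.0
aligner.extend_gap_score = -0.5

a, b = records[0], records[1]
alignment = aligner.align(a.seq, b.seq)[0]
aligned_a, aligned_b = str(alignment[0]), str(alignment[1])

# Identity over aligned residue pairs (gap positions excluded).
pairs = [(x, y) for x, y in zip(aligned_a, aligned_b) if x != "-" and y != "-"]
identity = sum(x == y for x, y in pairs) / len(pairs)
print(f"{a.id} vs {b.id}: {identity:.1%} identity")
```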
Physiological Role of TGases in Plants
In plants, TGases have been primarily studied by focusing on the molecular mechanisms linking PAs to proteins by inter- and intramolecular bonds. These findings have been correlated with several aspects of growth and differentiation, as well as with stress responses [5,15]. Research on plant TGases has been hampered by difficulties in the purification of the enzyme and by the limited/scarce sequence identities between animal TGases and those reported in the available plant databases [66]. In general, studies on plant TGases have mainly dealt with their distribution and function [48].
It is known that TGases are present in most plant organs and organelles. Here, we report the most recent evidence for their involvement in various processes, since the last extensive review on this topic did not account for the last decade of results [5,66]. A schematic model of the main physiological roles of plant TGases is shown in Figure 3.
Figure 3. A schematic model of the main physiological roles of plant TGases. In chloroplasts, TGases contribute to photosynthetic efficiency by increasing the level of bound PAs, ROS scavenging, and CO2 assimilation; these aspects are related to the action of TGases on different substrates, e.g., RuBisCO, PSII proteins, ATPase, and PMF proteins (A). The enzyme is also present in the cytosol, where it can post-translationally modify cytoskeletal proteins directly or via the binding of PAs, contributing to cytoskeletal organisation; this happens in the elongation of the pollen tube (B). In plant-pathogen interactions, TGases behave similarly to a PAMP, e.g., the PEP-13 motif of Phytophthora infestans; otherwise, specific plant TGase isoforms might contribute to plant defence mechanisms (C). Different abiotic stresses (temperature, wounding, light, and salt stress) stimulate TGase activity, improving plant resilience through the activation of several signalling pathways and the stimulation of several physiological processes, e.g., photosynthetic efficiency, the HSP response, the antioxidant system, and PA-based cell signalling; inhibition of TGases by specific inhibitors reduces resistance to abiotic stresses and causes decreases in the bound PA contents, decreases in the photochemical efficiency of PSII, and growth reduction (D). In the leaf senescence process, TGases increase the accumulation of HSPs and bound PAs; TGases are also involved in the modification of chloroplast proteins and the modulation of anti-senescence enzymes and ATP synthases, finally increasing the photosynthetic efficiency (E).

TGases and Photosynthesis

Villalobos et al. first reported that a TGase was mainly present in the grana-appressed thylakoids of light-exposed maize chloroplasts [54]. The activity of maize TGase was found to be inhibited by GTP, DTT, and other compounds but significantly increased when the enzymatic assay was performed in the presence of light [49]. Gene sequence analysis showed that maize TGase possesses a chloroplast import peptide composed of 47 amino acids and B-type repeats located in a non-catalytic domain of the enzyme. The overexpression of the maize plastidial TGase increased the activity of TGases in thylakoids of Arabidopsis thaliana [67]. In chloroplasts, TGases appear to stabilise both the photosynthetic complexes and RuBisCO. Being regulated by light and other external factors, TGases might exert a photoprotective effect on photosynthesis [7]. The overexpression of TGases has been shown to increase the CO2 assimilation rate through the activation of Calvin cycle enzymes in tomato leaves [37]. Changes in cellular redox homeostasis have been proposed to be involved in the activation of Calvin cycle enzymes [37]. The enhanced TGase-mediated binding of PAs to thylakoid membranes is involved in the aggregation of the light-harvesting complex (LHCII), which exerts a key regulatory role in dissipating excess excitation energy, thus improving photochemical efficiency under salt stress [68]. TGase activity increases salt stress tolerance in cucumber plants due to an increased endogenous PA content and ROS scavenging capacity, as well as the promotion of carbon assimilation and photosynthetic products; however, the mechanism by which TGase regulates the photochemical efficiency of plants under salt stress remains unclear [30]. Plastidial proteins involved in photoprotection and in promoting the thylakoid electrochemical gradient are TGase substrates. Consequently, TGase interconnects more PSII proteins with other photoprotective and proton motive force (PMF) proteins (e.g., LHCII and ATPase); moreover, TGase changes the balance of the PMF, thereby increasing the PA-linked protein pool [67]. A recent study confirmed that a PNG1 gene containing a typical TGase catalytic triad domain, like that of AtPNG1, plays a positive role in improving salt tolerance in cucumber plants [21].
TGases and Plant Fertilisation
Several studies have highlighted the involvement of TGases in the fertilisation process of angiosperms. In particular, the enzyme plays a role in pollen-pistil recognition and pollen rejection, and it is a crucial factor for pollen tube growth, being involved in the organisation of cytoskeletal proteins [49,69]. Moreover, several plant models suggest that TGase activity is involved in the self-incompatibility response [9,12,13,70].
TGases have been reported not only to be localised inside the pollen tube but also to exist extracellularly. In the pollen tube cytosol, TGases modify cytoskeletal proteins, thereby regulating apical growth. Some reports also suggest an extracellular localisation of the enzyme and its involvement in pollen tube cell-wall construction and organisation [5,49,71]. During pistil fertilisation, the pollen tube grows through the stigma and style following a precise set of extracellular signals, including PAs, which can regulate the growth of the pollen tube [72,73]. In fact, in in vitro pollen germination experiments, PAs were released into the germination medium together with other factors (RNAs and proteins). It has been reported that an extracellular TGase is required for apple pollen tube growth, suggesting its possible involvement in pollen tube and style adhesion, thus favouring cross-talk between the male and female counterparts [74]. In addition to a cytosolic form of TGase, data suggest the existence of TGase forms associated with the internal membranes and the cell wall of pollen tubes. This differential localisation extends the functional range of pollen TGases, which can be precisely redistributed among different cellular compartments [75]. The presence of an extracellular TGase raises a question regarding the function locally exerted by this enzyme.
TGases and Biotic Stress Responses
TGases may play an important role in plant-pathogen interactions and the resulting defence responses. TGases are involved in the hypersensitive reaction (HR), which consists of programmed cell death at the site of pathogen entry and is associated with the restriction of pathogen multiplication and spread [76,77]. The HR is accompanied by an increase in TGase activity and its products, which are distributed in different fractions, though mainly in those containing proteins released from membranes and cell walls by high ionic strength and detergents [11]. The synthesis of mono-(γ-glutamyl)-Put and bis-(γ-glutamyl)-Spd, which represents solid evidence of TGase catalysis, revealed that both transamidating and cross-linking activities were enhanced in leaves undergoing the HR but not in mock controls [78]. In a recent article, healthy pepper (Capsicum annuum) plants, both susceptible and resistant to Phytophthora capsici, showed a very similar pattern of TGase accumulation, though the patterns were distinct after inoculation with the pathogen. Such differently expressed post-infection patterns of TGases indicate that the defence mechanism of resistant plants might be based on the activation of specific plant TGase isoforms. These data support the hypothesis that TGases play roles in defence responses against some pathogens [42], such as Phytophthora spp., one of the most dangerous groups of plant pathogens.
TGase activity has also been detected in different Phytophthora species. A 42-kDa cell wall-associated TGase (GP42) of Phytophthora sojae was found to contain a surface-exposed fragment called PEP-13 that acts as an elicitor of defence responses in parsley and potato [79,80]. The PEP-13 motif was reported to be highly conserved in several Phytophthora species. These TGases activate defence responses, suggesting that they function as genus-specific recognition determinants in host and non-host plants [15]. In addition, TGase structural sequences with eliciting activity, associated with plant defence mechanisms, were isolated and characterised in Phytophthora cinnamomi. The fragments were found to encode a deduced protein of 533 amino acids that includes an ORF with high identity to the TGases of Phytophthora sojae (70%), Phytophthora megasperma (70%), and Phytophthora infestans (61%). The alignment of the TGase gene with several TGase proteins revealed that the protein contains the conserved catalytic domain [81]. Moreover, a recent study on the biochemical characterisation of an acyltransferase enzyme responsible for the pathogenicity of Phytophthora melonis indicated that this protein possesses two domains, A (ranging from residues 260 to 620) and B (ranging from residues 141 to 219); the A domain possesses TGase-elicitor properties [82].
TGases and Abiotic Stress Responses
TGases regulate the post-translational modification of proteins involved in a wide range of plant responses to environmental stresses. In general, the stress-related function of TGases could be ascribed to a positive relationship between enzyme activity, PA biosynthesis, and the photosynthetic efficiency maintained by the activation state of the Calvin cycle [37]. In addition, the TGase-improved photosynthetic capacity seems to be supported by changes in the cellular redox status and the activation of antioxidant enzymes [37]. This was confirmed by studies showing that TGase-deficient mutants (tgase-1 and tgase-2) of tomatoes exhibited decreased activity of antioxidant enzymes engaged in the ascorbate (AsA)-GSH cycle, while TGase-overexpressing (TGaseOE) plants showed enhanced activities of ascorbate peroxidase (APX), dehydroascorbate reductase (DHAR), and glutathione reductase (GR). High TGase activity in TGaseOE plants also correlated with significantly increased ratios of both GSH/GSSG and AsA/DHA [37]. This upregulated antioxidant machinery can prevent the redox homeostasis imbalance provoked by the over-accumulation of reactive oxygen and nitrogen species under stress conditions. More recently, Jahan et al. [83] reported that the overexpression of TGases in tomatoes enhanced tolerance to heat stress. A comparative transcriptomic study between wild-type (WT) and TGaseOE tomato plants revealed that a crucial role in TGase-induced heat tolerance is played by genes associated with pathways responsible for protein processing in the endoplasmic reticulum, as well as carbon fixation [83]. Moreover, the specific high-temperature response of TGaseOE plants was associated with increased expression of heat-induced heat shock factors compared to WT plants, which was consistent with the expression patterns of heat shock proteins, thus indicating that heat shock factors might perform a pivotal role in the thermotolerance of TGaseOE plants.
An enhanced salt tolerance was observed in tobacco plants overexpressing cucumber CsTGase [84]. The transgenic plants showed vigorous growth and a higher net photosynthetic rate, as well as higher stomatal conductance. In turn, the CsTGase-induced salt tolerance was associated with increased levels of chloroplast PAs, enhanced transcript levels of photosynthesis-related genes, and the accumulation of thylakoid membrane proteins such as D1 and D2 [84]. It is noteworthy that significantly higher TGase activity was observed in a salt-tolerant cultivar of cucumber in comparison to a salt-sensitive one [30,85]. The TGase-mediated tolerance to NaCl was proven by spraying leaves with 1 mM o-phenanthroline, which resulted in decreased bound PA levels, decreased photochemical efficiency of PSII, and growth reduction in both cucumber cultivars [30].
Interestingly, TGases appear to play functional roles during acclimation to high salinity. As shown in the green halophilic microalga Dunaliella salina, acute hypersaline stress under light caused an immediate change in the concentration of chloroplast TGases with concomitant variations in enzymatic activity [44]. Moreover, a PA-deficient variant of Dunaliella exhibiting low TGase activity was found to be more severely affected by salt stress; however, Put application visibly restored TGase activity and led to considerable increases in chlorophyll a and b content [44]. In the marine macroalga Grateloupia doryphora (Mont.) Howe exposed to moderate hyposaline conditions, diminished TGase activity correlated with an increased pool of free PAs [86]. This inverse relationship between TGases and free PAs could constitute a simple metabolic adjustment during acclimation to hyposaline conditions, since free PAs were found to be able to increase the photosynthetic rate in this macroalga.
According to Pinto-Marijuan et al. [35], TGases are involved in adaptation to different light conditions. Holm oak leaves exposed to darkness until midday and then subjected to abrupt high light intensity showed enhanced TGase activity, resulting in the maximum accumulation of bound Put. The photoprotective role of TGases was hypothesised to be due to their enhanced activity during increasing light intensity, as previously observed in the taxonomically distant PA-deficient strain of Dunaliella [44]. Although TGase activity in the microalga was induced by salt stress, it was always higher in the light than in the dark [5].
Finally, TGases could also mediate the response to wounding and the wound-healing process. As documented by Serafini-Fracassini et al. [87], TGase activity was enhanced in tuber explants of Helianthus tuberosus as a result of wounding, in which the enzyme triggers the resumption of the cell cycle. This highlights the role of TGases in linking abiotic and biotic stimuli, since insect/herbivore feeding and pathogen attack are often associated with plant tissue injury. Recently, the involvement of TGases in wounding was also reported in Arabidopsis thaliana [88]. In this experimental model, an Atpng1 knockout (KO) line was analysed during plant development and under heat and wounding stress. WT and KO lines were compared in terms of response to wounding and recovery from wounding (e.g., the formation of a scarring tissue that covered the entire wound). TGases accumulated differently in the two lines: in the stem of the WT line, TGases were mainly localised in the 2-3 cell layers underneath the dead cells on the stem surface, suggesting their involvement in the wound-healing process (probably by exerting a gluing function and by strengthening cell walls), as previously observed during senescence in Nicotiana tabacum petals [51]. In the KO line, the lack of TGase activity in the cell walls may have been related to the observed weaker anatomical structure, characterised by parenchyma with large spherical cells and wide intercellular spaces. These features were probably due to the reduced stiffness of the KO cell walls, which failed to counteract the internal turgor pressure [89]. In WT leaves, a rapid increase in TGase activity was observed: within 15 min it was about three-fold higher than the basal activity at time 0, and it then decreased. In KO leaves, by contrast, wounding had no such effect, as activity remained constantly low for 24 h. In the WT line, wounding-induced AtPNG1 transcript accumulation was observed within the first 5 min, declining to minimal levels after 15-30 min. The potential involvement of TGases in healing processes is still poorly understood in plants, but it has already been demonstrated for animal tissues [90,91]. These preliminary results suggest that the enzyme plays a role in plant wounding responses.
TGases and Leaf Senescence
In plants, senescence is a highly controlled and active process requiring global metabolic reprogramming, aimed at the organised disintegration and remobilization of valuable resources. It is a fundamental aspect of plant development that is necessary to optimise resource allocation and promote phenotypic plasticity in order to acclimate to adverse environmental conditions [92]. Structural changes of the chloroplast, eventually resulting in chloroplast degradation, mark the first phase of a sequential process that leads to leaf senescence, both developmental and stress-induced [93].
Physiological and structural changes in chloroplasts during senescence are associated with PA conjugation, modifications of chloroplast proteins, and the modulation of chloroplast-localised TGases (ChlTGases). The barley ChlTGase was found to be activated during dark-induced leaf senescence, which is associated with enhanced local TGase accumulation and activity, as well as the increased expression of the barley HvPng1-like gene [3]. Results with barley leaves also showed that TGase activity was lower when the samples were incubated with cytokinin, a phytohormone known for its anti-senescence properties [3]. The localization of ChlTGases within chloroplast structures, as well as the identification of post-translationally modified plastid proteins (PA-conjugated proteins), suggested a notable contribution of ChlTGases to dark-induced senescence-associated processes, including the stress response, photosynthesis inhibition, and cell death manifested by the chloroplast-to-gerontoplast conversion and subsequent degradation [94]. In situ localization and changes in ChlTGase activity during dark-induced senescence were shown to mirror the increase in the level of plastid membrane-bound Put and Spd [3,94]. In fact, ChlTGase was shown to catalyse the binding of [3H]Put and [3H]Spd to photosystem proteins [94]. Substrates of ChlTGases in senescing and non-senescing leaves include apoproteins of the chlorophyll a/b antenna complex, LHCII, ATP synthase, and PsbS (the 22-kDa photosystem II protein), proteins that are essential for energy-dependent quenching and the increased thermal dissipation of excessively absorbed light energy in the photosystems [7,23,47,55,94]. Several stress-responsive proteins detected in the PA-bound fraction only after induced senescence include the antioxidant enzyme peroxiredoxin, a heat shock protein, ent-copalyl diphosphate synthase, and an IAA-amino acid hydrolase [94-98]. The senescence-associated changes in the amounts of mono- and bis-(γ-glutamyl)-Put in senescent leaves also corroborated earlier studies on the tobacco corolla. In the latter experimental system, the amount of bis-(γ-glutamyl)-Put and bis-(γ-glutamyl)-Spd decreased and the amount of mono-(γ-glutamyl)-Put increased during petal senescence [50]. In this experimental model, it was also shown that TGase activity was involved in the PCD that takes place following senescence [51]. The functional involvement of PAs, in concert with TGases, in induced leaf senescence is supported by proteomic analyses and by TGase activity/transcript modulation [3,94]. The most studied plant gene coding for a protein with TGase activity, AtPNG1, is constitutively expressed at low levels in all plant organs during various stages of development and under various light conditions [55]. A similar expression pattern was found for the HvPNG1-like homolog in barley. However, HvPNG1-like transcription increased as soon as senescence was induced in the dark, concomitant with the start of cell structure disintegration [94].
Conclusions
In this review, we summarise information about plant TGases from studies carried out mainly during the last decade. These enzymes are involved in numerous cellular processes and are present in most plant organs of the species investigated so far. Deeper knowledge would allow us to better understand whether plant TGases are involved in the same basic cellular functions as those of animal TGases. Some features of plant TGases are shared with animal ones, such as their involvement in the wounding response and PCD, as well as in some cellular processes such as cell-to-cell adhesion, which, in plants, occurs in the pollen-style interaction during fertilisation. We also highlight some characteristics that, at least for now, seem to be specific for plant TGases, such as light dependence and apical growth.
In addition to the main biochemical characteristics of plant TGases, we present a bioinformatics analysis of TGases reported from different plant species for the first time. Gene structure results highlight that angiosperms have different intron/exon arrangements than other plant taxa; no substantial differences were observed between monocots and dicots.
"Biology"
] |
Seasonal variation of genotypes and reproductive plasticity in a facultative clonal freshwater invertebrate animal (Hydra oligactis) living in a temperate lake
Abstract Facultative sexual organisms combine sexual and asexual reproduction within a single life cycle, often switching between reproductive modes depending on environmental conditions. These organisms frequently inhabit variable seasonal environments, where favorable periods alternate with unfavorable periods, generating temporally varying selection pressures that strongly influence life history decisions and hence population dynamics. Due to the rapidly accelerating changes in our global environment today, understanding the population dynamics and genetic changes of facultative sexual populations inhabiting seasonal environments is critical to assess and prepare for additional challenges that will affect such ecosystems. In this study, we aimed to obtain insights into the seasonal population dynamics of the facultative sexual freshwater cnidarian Hydra oligactis through a combination of restriction site-associated DNA sequencing (RAD-Seq) genotyping and the collection of phenotypic data on the reproductive strategy of field-collected hydra strains in a standard laboratory environment. We reliably detected 42 multilocus genotypes (MLGs) among the 121 collected hydra strains. Most MLGs (N = 35, 83.3%) were detected in only one season. Five MLGs (11.9%) were detected in two seasons, one (2.4%) in three seasons, and one (2.4%) in all four seasons. We found no significant genetic change over the 2 years in the study population. Clonal lineages were detected across seasons and even years, suggesting that they can persist for a long time in a natural population. We also found that distinct genotypes differ in sexual reproduction frequency, but these differences did not affect whether genotypes reappeared across samplings. Our study provides key insights into the biology of natural hydra populations, while also contributing to the understanding of the population biology of facultative sexual species inhabiting freshwater ecosystems.
| INTRODUCTION
Facultative sexual organisms, such as cnidarians or cladocerans, are very important elements of marine and freshwater ecosystems, and their large numbers make them essential components of aquatic food webs. Facultative sexual organisms often inhabit ephemeral or highly seasonal environments where favorable periods alternate with unfavorable ones in either predictable or unpredictable ways. In favorable periods, clonal reproduction often occurs, allowing the maximal utilization of available resources (Hadany & Otto, 2007; Stelzer, 2012; Stelzer & Lehtonen, 2016). Conversely, the onset of adverse periods often triggers sexual reproduction, which ultimately results in the formation of resting eggs, as in aphids (Simon et al., 2002), rotifers (Schröder, 2005; Stelzer & Lehtonen, 2016), water fleas (Tessier & Caceres, 2004), and hydras (Steele et al., 2019). The study of such reproductive systems has become more important nowadays, as the frequency of extreme environmental conditions has significantly increased due to recent climate change, which has resulted in the large-scale disappearance or extinction of ecosystems engineered by facultative clonal species (e.g., coral reefs and mangrove forests; Carpenter et al., 2008; Polidoro et al., 2010; Waycott et al., 2009). However, the genetic variation of sexually produced clonal lines, coupled with the capacity of asexual reproduction to achieve quick population growth, could enable such species to better adapt to the challenges posed by climate change and to sustain high population sizes even under changing conditions (Pistevos et al., 2011).
The population genetic characteristics of facultative sexual organisms differ from those of obligate sexual organisms, partly because an increase in their population size is not necessarily accompanied by the emergence of new genotypes and partly because selection affects them in different ways. For facultative sexual organisms, Nunney's "lineage-selection" model suggests that purely asexual lines may enjoy short-term benefits (rapid exploitation of resources and high reproduction rates) but suffer disadvantages in the long term due to higher extinction rates compared with obligately sexual lines (Nunney, 1989). This lineage-selection model is probably even more important in a changing environment (e.g., seasonal habitats), as the creation of resistant forms (e.g., resting eggs) is linked to sexual reproduction (e.g., in Daphnia, Decaestecker et al., 2009; in rotifers, Stelzer, 2012; or in hydras, Steele et al., 2019), and asexually reproducing lines can easily be removed from the population by natural selection when conditions deteriorate (Stelzer & Lehtonen, 2016).
However, genotypes that reproduce only sexually can also be at a significant disadvantage in such an environment (Kokko, 2020), as they cannot reach a sufficiently large number of individuals during the optimal growth period. As a result, it is expected that in these populations genotypes that follow a strategy in which both modes of reproduction appear will prevail. In this case, the differences in the frequency and timing of their sexual reproduction (phenotypic plasticity of reproduction, Stelzer & Lehtonen, 2016), as has already been observed in rotifers (Tarazona et al., 2017), can greatly influence the survival of these genotypes. Thus, a special, seasonally changing genetic structure can emerge, in which the main driving force is the intermittent but large-scale genetic recombination due to sexual reproduction and clonal selection.
The genetic structure of populations of facultative sexual organisms, such as Daphnia species (cyclically parthenogenetic), is determined by the genetic consequences of combining sexual and asexual reproduction in the same life cycle (Carvalho, 1994;De Meester et al., 2006;Decaestecker et al., 2009), the pattern of which can be greatly influenced by the seasonal environment.
At the start of the growing period (usually in spring), hatching of sexually produced dormant eggs increases the genetic variation in the population (De Meester, 1996;De Meester et al., 2006). In contrast, asexual reproduction during the favorable season results in erosion of clone diversity through natural selection and extinction of clones, thus ultimately leading to lower genetic variation and deviations from Hardy-Weinberg equilibrium by the end of the favorable period ("clonal erosion," De Meester, 1996;De Meester et al., 2006;Tessier et al., 1992).
Directional/stabilizing selection in such natural populations can build up significant genetic imbalances in polygenic traits during the period of asexual reproduction. Levels of genetic disequilibria have a strong effect on the genetic structure of natural populations, so that selection during the period of asexual reproduction can erode the expressed genetic variation in quantitative traits.
Thus, when genetic disequilibrium exists, a portion of the total genetic variation is "hidden" by this disequilibrium. Therefore, significant amounts of "hidden" genetic variation may be present in populations where individuals reproduce clonally for a long time but undergo sexual reproduction in unfavorable periods (Deng & Lynch, 1996; Pfrender & Lynch, 2000). Studies to date have shown that a reduction in the disequilibrium of a purely additive polygenic trait converts up to 50%-75% of the "hidden" genetic variation into expressed genetic variation (Lynch & Gabriel, 1983). Moreover, even a single random mating is sufficient to reduce the gametic phase imbalance, thus increasing the expressed genetic variation in the population. Natural selection acts differently during the two reproductive phases (King & Schonfeld, 2001; Pfrender & Lynch, 2000), because during the asexual phase all genes effectively belong to one linked group, so that selection affects the whole genome. For this reason, clonal selection also shapes the interaction of genetic variation, in contrast to sexual reproduction, which breaks up the relationships of these linked alleles (Decaestecker et al., 2009).
De Meester et al. (2006) identified three key factors (population size, length of the favorable season, and strength of clonal selection) that influence the genetic structure of cyclical parthenogens. The extent to which "clonal erosion" affects the genetic structure of cyclical parthenogens is primarily determined by these three factors (De Meester et al., 2006). The remarkable effect of clonal erosion on seasonal changes in the genetic composition of populations has already been shown by Yin et al. (2010) in Daphnia, where the survival of clonal genotypes varied from population to population, significantly altering the genotype composition of the following year's population. Life history strategies and the physiological condition (e.g., age, body size, nutritional state, and stress state) of individuals can also significantly influence the genetic composition of populations (genotype diversity), as these factors clearly play a role in initiating sexual reproduction and thereby alter population dynamics (Hadany & Otto, 2007). However, the exact role of genotype-specific life history strategies has not been adequately explored in previous studies.
Hydra oligactis living in seasonal habitats of the temperate zone can serve as an excellent model animal for the ecological study of the reproductive system of facultative sexual organisms. H. oligactis polyps reproduce asexually (by budding) throughout much of the year but switch to sexual reproduction in response to cooling (Reisa, 1973). In natural habitats within the distribution range of H. oligactis, sexual reproduction occurs from late summer to December (Ribi et al., 1985; Sebestyén et al., 2018; Welch & Loomis, 1924). During sexual reproduction, persistent/diapausing embryos are produced that can tolerate desiccation and freezing (Steele et al., 2019). Based on our observations, however, in some natural populations asexually and sexually reproducing individuals occur simultaneously before the unfavorable periods, and adults can survive the unfavorable periods in large numbers (MM & JT, personal observations). However, the contribution of sexual individuals with diapausing eggs to the genetic structure and seasonal population dynamics of this species has not been studied so far.
In a recent study, we described the genetic structure underlying sexual and asexual reproductive strategies in H. oligactis in a spatial setting. That study showed substantial phenotypic plasticity in the mode of reproduction, with coexisting hydra polyps belonging to the same genotype often differing in their mode of reproduction. However, it is still unclear whether more subtle genotypic variance in the propensity for sexual reproduction exists, and how this variation affects population dynamics. This is because field studies are less likely to detect small differences in life history among clonal lines due to: I) difficulties in designing an adequate sampling strategy and the small number of individuals per clonal line that are often obtained from random sampling of natural populations (Halkett et al., 2005) and II) the fact that individuals in their natural environments can be exposed to a diversity of environmental effects, generating variation within clonal lineages that masks potential genotypic variation (Deng & Lynch, 1996; Thorson et al., 2017). To solve these problems, studying a large number of clonally descended individuals from multiple genotypes kept under standard laboratory conditions is required. In this study, we sought to fill this gap by simultaneously identifying the temporal genetic variation in a H. oligactis population and the reproductive strategies of the genotypes involved. To this end, we used data on reproductive strategy from laboratory strains that were collected from a single population during spring and autumn in two years (four collections in total) and genotyped these strains using restriction site-associated DNA sequencing (RAD-Seq). The population we sampled inhabits a small, shallow temperate lake where H. oligactis persists year-round (although with a highly variable population size; MM & JT, pers. obs.). Asexual individuals can be observed year-round (with peak population sizes during late winter/early spring), while sexual individuals are generally detected between late October and early December (Sebestyén et al., 2018). However, sexual and asexual individuals coexist during the autumn sexual period, and we do not know how these contribute to changes in the genetic composition of the population. We used the combination of laboratory-collected phenotype data and RAD-Seq genotyping to ask, first, whether clonal lineages survive the unfavorable period and how this affects the genetic composition of a population in such a seasonal temperate environment. Second, we wanted to explore the role of different genotypes in the propensity for sexual reproduction and its timing, as well as in the associated population dynamics.
| Study design and field collection of hydra polyps
Laboratory Hydra strains were established from a single oxbow lake in Eastern Hungary (Tiszadorogma, 47.6712 N) that is directly connected to the Tisza River through a canal. The water temperature in the lake can rise above 25°C in the warmest months (even though the lake is surrounded by woody vegetation, which provides substantial shade), while it stays below 12°C between October and April. The data on laboratory strains used here were collected as part of a previous study aimed at comparing spring- and autumn-collected hydra strains (Table S1). Hydra polyps were collected from free-floating and submerged macrophytes (most often Ceratophyllum demersum, Ceratophyllum submersum, Myriophyllum spicatum, and Stratiotes aloides) and placed in Falcon tubes with lake water. On the day of collection, the animals were transported to the laboratory in a cool box, where they were identified by stereomicroscopy (Euromex StereoBlue stereo microscope) based on morphology: tentacle length/body length ratio, the presence of a stalk, and tentacle formation in buds (Schuchert, 2010).
We selected up to five polyps from each location/collection and created strains from them through budding (asexual reproduction). Both field-collected polyps and their asexual offspring were kept individually in 6-well plastic plates containing 5 ml of M-solution per well. Experimental animals were propagated asexually for 10 weeks in the first phase and then placed in cold conditions to induce sexual reproduction in the second phase. Details of the standard living conditions of the hydra polyps, in both the asexual propagation phase and the cooling phase, can be found in Tökölyi et al. (2021). To keep the samples at a manageable size, a maximum of N = 18 polyps per strain were retained to collect data on reproductive mode.
Experimental animals were kept for 5 months in the second (cooling) phase and were checked twice per week under a stereomicroscope (Euromex StereoBlue) to detect the start of gonadogenesis.
| Drying and DNA extraction
Asexual buds detached from experimental animals were used to genotype strains. They were dried using silica gel and stored at room temperature to preserve DNA quality (see Miklós et al., 2021).
| RAD-Seq library preparation
Details of the library preparation protocol can be found in the supplement of our previous study (Supplementary Methods 1).
Quality and quantity of the library were checked with Bioanalyzer (High-Sensitivity DNA Kit). Libraries were sequenced on an Illumina NovaSeq platform (paired-end, 150 nt) at NovoGene (Beijing, China).
Our samples were sequenced in three separate RAD libraries (56 strains in the first, 53 in the second, and 30 in the third). We used the same methodology for all library preparations, except that in the third library only 15 PCR amplification cycles were used, to reduce the presence of PCR duplicates (18 cycles were used for the first two libraries). Finally, GenBank accession numbers associated with the identified genotypes were recorded (Table S1).
| Sequence processing and decontamination
FIGURE 1. Map showing the collection point of Hydra oligactis polyps in two distinct seasons (spring vs. autumn) in two consecutive years (2018 and 2019, four samplings in total) from a single population in Eastern Hungary. The numbers in the white circles represent the identified MLGs (genotypes) at each sampling point.
First, raw Illumina reads were processed using the Stacks process_radtags pipeline (Catchen et al., 2013; Miklós et al., 2021). We first ran the pipeline with default parameters and calculated the GC content of the resulting RAD loci with the BBMap suite of tools (https://sourceforge.net/projects/bbmap/). This showed a secondary GC peak suggesting sequence contamination from bacterial DNA. Therefore, we performed in silico decontamination using the NCBI Basic Local Alignment Search Tool (BLAST, v. 2.7.1; Altschul et al., 1990) to map sequences to the NCBI nucleotide collection database (nt, downloaded 31 March 2021), with the blastn task and an E-value cutoff of 1e-05. RAD loci whose best match was a cnidarian sequence in the nt database, or that showed no hit, were retained, while loci that mapped to any other taxonomic group were set aside to form a contaminants database. We then mapped our paired-end demultiplexed reads to this contaminants database with Bowtie 2 (--very-sensitive) and retained only unmapped reads. Sequence handling was carried out with the BBMap suite of tools, while taxonomic annotation was carried out with the taxonomizr R package (v. 0.5.3; Sherrill-Mix, 2019; R Core Team, 2020).
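As an illustration of the taxonomizr-based annotation step, the following R sketch shows how BLAST hits against nt could be translated into lineages and how non-cnidarian loci could be flagged as contaminants. The file names, the BLAST output layout (tabular -outfmt 6), and the phylum-level filter are assumptions for illustration, not the exact commands of the original pipeline.

```r
## Hypothetical decontamination helper: classify RAD loci by the taxonomy
## of their best BLAST hit (loci without any hit never appear in the hit
## table and are therefore retained implicitly, as described in the text).
library(taxonomizr)

# One-off download of the NCBI accession-to-taxonomy mapping (very large):
# prepareDatabase("accessionTaxa.sql")

hits <- read.delim("loci_vs_nt.tsv", header = FALSE,
                   col.names = c("locus", "subject", "pident", "length",
                                 "mismatch", "gapopen", "qstart", "qend",
                                 "sstart", "send", "evalue", "bitscore"))

# Keep only the best (lowest E-value) hit per RAD locus
best <- hits[order(hits$locus, hits$evalue), ]
best <- best[!duplicated(best$locus), ]

# Translate subject accessions into taxonomic lineages
taxa_ids <- accessionToTaxa(best$subject, "accessionTaxa.sql")
lineage  <- getTaxonomy(taxa_ids, "accessionTaxa.sql")

# Loci whose best hit is not cnidarian form the contaminants database
is_cnidarian <- lineage[, "phylum"] %in% "Cnidaria"
contaminants <- best$locus[!is_cnidarian & !is.na(taxa_ids)]
writeLines(as.character(contaminants), "contaminant_loci.txt")
```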
Next, we ran the de novo pipeline on the unmapped reads, setting the minimum depth of coverage required to create a stack (-m), the number of mismatches allowed between stacks within individuals (-M), and the number of mismatches allowed between stacks between individuals (-n). The optimal values for these parameters depend, among others, on sequencing error, genetic polymorphism, and ploidy level (Paris et al., 2017). We set all three parameters (-m, -M, and -n) following our previous study, which included some individuals from the population studied here and featured a detailed exploration of the parameter space in this study system.
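A hedged sketch of the corresponding Stacks call, wrapped in R with system2() for consistency with the rest of the analysis. The parameter value of 3, the directory names, and the exact wrapper flags are placeholders (the value used in the study was truncated in the source, and flag names can differ between Stacks versions).

```r
## Illustrative Stacks de novo assembly call (not the study's exact command)
stacks_param <- 3  # assumed common value for -m, -M, and -n

system2("denovo_map.pl", args = c(
  "--samples", "decontaminated_reads/",  # demultiplexed, decontaminated reads
  "--popmap",  "popmap.tsv",             # sample-to-population map
  "-o",        "stacks_out/",            # output directory
  "-m", stacks_param,   # minimum depth of coverage to create a stack
  "-M", stacks_param,   # mismatches allowed between stacks within individuals
  "-n", stacks_param    # mismatches allowed between stacks between individuals
))
```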
From the resulting set of loci, we retained those that were shared by at least 80% of the samples. Additional filtering was performed on the resulting locus catalog using VCFtools (Danecek et al., 2011); among other criteria, we required a minimum minor allele count.
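The filtering step might look roughly as follows when run from R. The genotype-quality (30) and read-depth (10) thresholds are those quoted later in the Results; the minor allele count of 3 and the file names are assumptions, since the actual values were truncated in the source.

```r
## Illustrative VCFtools SNP filtering call (thresholds partly assumed)
system2("vcftools", args = c(
  "--vcf",    "stacks_out/populations.snps.vcf",
  "--mac",    3,    # minimum minor allele count (value assumed)
  "--minGQ",  30,   # minimum genotype quality (stated later in the text)
  "--minDP",  10,   # minimum per-genotype read depth (stated later in the text)
  "--recode", "--recode-INFO-all",
  "--out",    "filtered_snps"
))
```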
| Clone detection and sibship reconstruction
To identify clones, we first inspected the spectrum of genetic diversity, that is, the distribution of pairwise genetic distances among the samples (Rozenfeld et al., 2007). Clonally derived individuals should in theory be genetically identical to each other. However, due to sequencing errors and somatic mutations, they frequently show a distribution of genetic distances greater than 0 but smaller than the genetic distance between distinct genotypes, which often results in a bimodal distribution (Figure S1). We also used the software COLONY (v. 2.0.6.6; Jones & Wang, 2010) to infer clones more formally, using an optimized threshold that takes into account mistyping rates, missing data, and the number and allele frequencies of markers (Wang, 2016). The method implemented in COLONY uses a likelihood framework to assign individuals to candidate relationships of clone mates or other close relationships (e.g., full sibships) and has been shown in simulations to accurately identify individuals belonging to the same multilocus genotypes (MLGs) (Wang, 2016). All individuals were included as potential offspring in the analysis, as the presence of clonality in Hydra implies that generations can overlap and there is no unequivocal way to assign candidate parents. These potential offspring were then assigned to clonal lineages. For the COLONY analysis, we used the full-likelihood-pair-likelihood score combined (FPLS) method, assumed a polygamous mating system for both parents, and kept all other parameters at their default values.
Initial error rates were set to 0.01 for both allelic dropout rate and other error rate of each locus in COLONY.
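A minimal R sketch of the spectrum-of-genetic-diversity check described above, assuming a strains-by-loci genotype table; the input file name and format are illustrative, and the distance is the same one used for the minimum spanning network below.

```r
## Pairwise genetic distances between strains; a bimodal histogram
## suggests a threshold separating clone mates from distinct genotypes.
library(ape)

snps <- read.table("filtered_snps_matrix.txt", header = TRUE,
                   row.names = 1)               # strains x loci (assumed layout)

d <- dist.gene(snps, method = "percentage",
               pairwise.deletion = TRUE)        # pairwise deletion of missing loci

hist(as.vector(d), breaks = 50,
     xlab = "Pairwise genetic distance",
     main = "Spectrum of genetic diversity")
# Clone mates are expected to cluster near zero; a trough between the
# two modes is the candidate threshold for delineating MLGs.
```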
| Seasonal genetic structure
To visualize the seasonal distribution of sample genotypes, minimum spanning networks (MSNs) were constructed using the poppr.msn function (Kamvar et al., 2014) in R. The network was constructed on the basis of a genetic distance matrix calculated with the dist.gene function of the ape R package, with pairwise deletion of missing loci. These relationships were visualized as an MSN (which can be a better visualization tool for clonal organisms than tree-drawing methods) generated using the R packages igraph and poppr (Csardi & Nepusz, 2005; Kamvar et al., 2014).
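The MSN step could look roughly like this in R, assuming `gid` is a genind/genclone object built from the filtered SNPs (with sampling occasion as the population) and `d` is the dist.gene matrix from the sketch above; the palette, seed, and scaling settings are illustrative choices.

```r
## Minimum spanning network over the strain-by-strain distance matrix
library(poppr)
library(igraph)

msn <- poppr.msn(gid, d, showplot = FALSE)

set.seed(42)  # reproducible network layout
plot_poppr_msn(gid, msn,
               palette   = rainbow,  # one color per sampling occasion
               inds      = "none",   # suppress strain labels for readability
               nodescale = 5)
```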
Basic population genetic statistics (expected heterozygosity, observed heterozygosity, fixation index, allelic richness, the number of private alleles, and clonal richness, evenness, and diversity) were calculated for two datasets. In the first analysis, all samples were included. However, as H. oligactis is a clonal species, the presence of clones can bias population genetic statistics. Therefore, we also prepared a reduced dataset that included one strain from each MLG per sampling (based on the results obtained from COLONY) and repeated the calculations. To detect genetic structure with respect to sampling dates, we performed discriminant analysis of principal components (DAPC; Jombart et al., 2010) on the reduced dataset. The DAPC analysis was performed using the adegenet (v. 2.1.1) package in R (Jombart, 2008; R Core Team, 2020). The number of principal components used in the DAPC analysis was set to 13 following a-score optimization, and we generated inertia ellipses encompassing ~67% of the cloud of points for each population. For the DAPC analysis, we included only one individual from each MLG. As DAPC might be sensitive to missing data, we repeated this analysis on a dataset that resulted from a more stringent selection of loci (only loci that were shared across 90% of the samples).
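A sketch of the DAPC call under the settings described above; `gid_cc` (a clone-corrected genind object with sampling occasion as the population) is an assumed object name, and the number of discriminant functions follows from having four groups.

```r
## DAPC on the clone-corrected dataset: 13 PCs (after a-score optimisation)
library(adegenet)

dapc_res <- dapc(gid_cc, pop(gid_cc), n.pca = 13, n.da = 3)

# The a-score optimisation mentioned in the text can be run as, e.g.:
# optim.a.score(dapc(gid_cc, pop(gid_cc), n.pca = 50, n.da = 3))

scatter(dapc_res,
        cellipse = 1.5,          # inertia ellipses (~67% of the points)
        scree.da = TRUE, posi.da = "bottomright")
```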
| Genetic structure and reproductive mode
To find out whether genotypes differ in sexual propensity, we fitted a generalized linear mixed model (GLMM) with binomial error distribution to the phenotype data collected from the polyps. In the model, mode of reproduction was the dependent variable, season and polyp age were explanatory variables, and genotype (as inferred from the COLONY analysis) was included as a random factor. We then used the get_variance function from the R package insight (Lüdecke et al., 2019) to extract the variance components associated with the fixed and random effects, as well as the residual variance. From these, we calculated the proportion of variance explained by the random factor (MLG identity), to quantify the degree to which genotype identity contributes to variation in sexual propensity.
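A minimal sketch of this model, assuming a data frame `polyps` with columns `sexual` (0/1), `season`, `age`, and `mlg` (genotype ID); all column names are hypothetical.

```r
## Binomial GLMM of reproductive mode with genotype as a random factor
library(lme4)
library(insight)

m <- glmer(sexual ~ season + age + (1 | mlg),
           data = polyps, family = binomial)

v <- get_variance(m)

# Proportion of variance attributable to genotype identity (MLG)
prop_mlg <- v$var.random /
  (v$var.fixed + v$var.random + v$var.residual)
prop_mlg
```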
| Establishment of field-collected strains in laboratory
We established strains from N = 211 polyps (N = 54, 40, 59, 58, respectively, for the four collection dates). However, some of these strains were lost due to mortality before yielding usable data and DNA samples. A total of 138 strains were genotyped (N = 40, 30, 38, 30 for the four sampling occasions).
| Read statistics and decontamination
There were altogether 699.8 million raw paired-end reads. From these raw reads, 93.6% were retained after filtering for low-quality reads, adapter contamination, ambiguous barcodes, and ambiguous RAD-tags. There were an average of 4.7 million reads per sample (range 2.3-20.9 million).
Running the Stacks de novo pipeline with default settings identified 1,602,803 RAD loci. Of these, 46.9% showed no hit in the nt database, and a further 13.6% mapped to cnidarian sequences. The remainder mapped to other taxonomic groups and were filtered out to form a contaminants database. The top contaminants were Pseudomonadales and Burkholderiales (Figure S1), two bacterial orders that are commonly found within the Hydra microbiome (Fraune et al., 2015). After removing presumed contaminant loci, the secondary GC peak diminished substantially (Figure S1).
| Clone detection
Inspection of the spectrum of genetic diversity showed no clear threshold to delineate multilocus genotypes (Figure S2). While there was a clear peak at low genetic distance (less than ~0.06), we also detected a smaller, secondary peak at a genetic distance of ~0.11. This secondary peak could stem from the fact that genotyping error rates are higher in some of the samples (as seen in our previous study for samples with low coverage; Miklós et al., 2021), or because somatic mutations are more prevalent in some of the samples. Given that we used very stringent filtering of SNPs (minimum genotype quality of 30 and minimum read depth of 10), we think that genotyping error rates should in general be low. Nonetheless, the presence of this secondary peak in the spectrum of genetic diversity makes the identification of clones more difficult.
The COLONY analysis also revealed these difficulties. We identified N = 53 multilocus genotypes in the set of N = 132 strains included in the analysis. However, 11 of these 53 MLGs (each consisting of a single strain) were inferred with a probability <0.9 (while all other MLGs were inferred with probability 1.0). Therefore, we removed these individuals from subsequent analyses, as we could not unequivocally assign them to MLGs. As a consequence, the final dataset comprised N = 121 strains assigned to 42 MLGs.
| Seasonal genetic structure
Clonal richness and evenness did not show clear seasonal trends, although Shannon-Wiener diversity was somewhat higher in the spring samples. Observed heterozygosity, expected heterozygosity, FIS, allelic richness, and the number of private alleles also did not show marked seasonal trends (Table 1).
The minimum spanning network showed that individuals collected in different seasons did not separate significantly from each other but clustered into four major branches (Figure 3).
Of these, three branches contained individuals from all collections, while in the fourth branch we found only polyps from spring collections (spring 2018 and spring 2019).
The DAPC analysis likewise indicated substantial overlap between seasons, with the last sample being more distinct from the rest (Figure 4). We repeated this analysis on a more stringent selection of loci (those shared across 90% of the samples; 269 RAD loci with an average missingness of 8%, range: 1%-43%) and obtained very similar results (Figure S3).
| Genetic structure and reproductive mode
Reproductive mode was inferred based on data from N = 921 polyps (belonging to the 121 genotyped strains). There were on average 7.74 polyps per strain available to estimate reproductive mode (range: 1-18). We could not estimate reproductive mode for two strains in which all polyps were lost before data collection. After genotyping, there were on average 21.9 polyps per MLG to estimate reproductive mode (range: 2-136). The proportion of sexual individuals in MLGs that were found in multiple samplings was 0.81 ± 0.20 (mean ± SD, N = 7), compared to 0.73 ± 0.31 in MLGs that were detected in a single sampling (N = 35); however, this difference was statistically nonsignificant (Kruskal-Wallis test, χ² = 0.014, p = .905).
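For transparency, the reported comparison can be reproduced with a single call, assuming a per-MLG table `mlg_tab` with the proportion of sexual polyps and a recurrence flag (both column names are hypothetical):

```r
## Kruskal-Wallis comparison of sexual propensity between MLGs detected
## in multiple samplings vs. a single sampling (data structure assumed)
kruskal.test(prop_sexual ~ recurrent, data = mlg_tab)
```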
| DISCUSSION
The primary objective of our study was to investigate how the genetic composition of a population of a common freshwater facultative sexual organism, Hydra oligactis, changes in a temperate seasonal environment. We detected (1) limited changes in seasonal population genetic composition and found that (2) some hydra clonal lines can survive across years and seasons. Furthermore, we also found that (3) distinct genotypes differ in sexual reproduction frequency.
Finally, (4) the above differences did not affect whether these genotypes reappear between samplings. We discuss the consequences and circumstances of these findings below.
We found no clear evidence for an abundance of spring genotypes in the hydra population, despite the fact that this would be a logical consequence of the high rate of sexual reproduction in winter (generation of new genotypes) and has been described previously for other facultative sexual organisms (Daphnia; De Meester, 1996; De Meester et al., 2006). Based on previous studies, we would have expected that asexual reproduction during the favorable period would lead to a reduction in clonal diversity (through natural selection and random extinction of clones), resulting in lower genetic variation at the end of the favorable period and greater deviations from Hardy-Weinberg equilibrium (clonal erosion; De Meester, 1996; De Meester et al., 2006; Tessier et al., 1992); however, we were not able to clearly detect such clonal erosion in our study population.
FIGURE 2. Heatmap of the genetic distance matrix between the N = 121 genotyped Hydra oligactis strains collected at four distinct time points (lower diagonal; each cell compares the genetic distance of two strains). Pairs of strains inferred to be clones in the COLONY analysis are shown in red in the upper diagonal of the heatmap.
Several factors could be behind this observation. One possibility is that the contribution of sexually produced offspring is low compared with asexually produced ones and there is no marked increase in the abundance of new genotypes during spring. Another possible explanation for the lack of clonal erosion is that no strong selection factors appear in the study population to induce clone extinction.
For example, an oversupply of food organisms available throughout the favorable period (cladocerans and copepods) could generate a relatively stable environment in which most clonal lines survive, thus eliminating clonal erosion. However, it is also possible that our sample size was too low to detect clonal erosion. This could especially be the case if only a few persistent eggs hatch in the spring and grow within a large asexual population, or if clonal selection is very weak (i.e., the signal of clonal erosion is weak), in which case a very large sample size would be needed to detect it. Unfortunately, we do not know much about the reproductive biology of the species in natural populations (neither the rate at which eggs hatch in the spring, nor how long they remain viable, etc.). However, our data are more in line with a scenario of weak clonal erosion in this system, as we did not observe a clear decrease in clone diversity as a function of season, but we did find MLGs that clearly survive the winter. Finally, another reason could be that we could not actually identify all clones when analyzing the data. Identifying clones in this study proved difficult because of the relatively high frequency of pairs of individuals with a genetic distance intermediate between clones and distinct genotypes.
We used very stringent filtering of SNPs to increase genotyping accuracy, which further reduced the sample size. However, all retained strains were assigned to MLGs with high certainty in the COLONY analysis; therefore, we think that the obtained results reflect true biological patterns. In general, we can conclude that there is definitely a set of genotypes in the population that persists for a relatively long time regardless of seasonality, without significant restructuring of the genetic composition across seasons, as we did not find significant genetic variation among spring- and autumn-collected samples. In addition, we observed a significant (although not large) change in genotype composition in the second autumn, but the exact reason for this is not known (it is possible that this was a consequence of the unusually warm preceding summer, which may have enhanced and modified the usual selection effects).
We reliably demonstrated that some clonal lines can survive across seasons and even years, contrary to the assumption in the literature that H. oligactis polyps die in winter due to freezing waters and that the species survives only as sexually produced dormant forms (Brien, 1953) in temperate or cold climates, as is the case for rotifers (Walsh et al., 2014) or cladocerans (Decaestecker et al., 2009). In climates where mild winters often occur, the survival of clones can also confer a significant adaptive advantage to a given clonal genotype, improving the return on investment in asexual reproduction over winter. It has already been observed in cladocerans that greater investment in asexual reproduction may be adaptive in a relatively mild winter climate, where the risk of freezing is small and the adaptive value of dormancy may be low (Tessier & Caceres, 2004). We also observed that different genotypes differ in sexual reproduction frequency. Based on previous studies (Tökölyi et al., 2017; Tomczyk et al., 2015), genotypic differences may contribute to variation in the propensity for sexual reproduction in hydras because, under standard laboratory conditions, H. oligactis strains express differences in the probability of initiating sexual reproduction and in post-sexual survival rates (Ngo et al., 2021). Conversely, in another previous study analyzing reproductive mode under field conditions, we detected a high degree of phenotypic plasticity in reproductive mode in this species. Here, however, we have also shown that there is a high degree of variation in the expression of the mode of reproduction among different genotypes.
Different genotypes show different propensities to initiate sexual reproduction under the same environmental conditions. Thus, individuals of the same genotype can respond differently to the same environmental stimulus, as has been described for Daphnia populations (Cousyn et al., 2001; Hairston et al., 2001). In addition, relevant differences between clonal lines (genetic individuals) of some marine species in their responses to environmental stimuli, including the choice of reproductive mode, have been described previously (Langer et al., 2009; Pistevos et al., 2011). Such standing genotypic variation is of great importance for these species in adapting to the environment through selection. This is also important because such metazoans will be more easily able to adapt to rapid global change through natural selection acting on existing genotypic variation (Balanyá et al., 2006; Bradshaw & Holzapfel, 2001), despite the slow onset of mutational changes (Hoffmann et al., 2003).
FIGURE 3. Minimum spanning network based on a dissimilarity matrix (as calculated in poppr) of N = 121 Hydra oligactis strains established from polyps collected in two distinct seasons (spring vs. autumn) in two consecutive years (2018 and 2019, four samplings in total) from a single population in eastern Hungary. Node colors represent sampling occasions; edge lengths are arbitrary.
FIGURE 4. Differentiation of Hydra oligactis strains from the four samplings based on discriminant analysis of principal components (DAPC) performed on a reduced dataset containing one individual from each MLG per sampling. The DAPC was constructed using 13 principal components (PCs). The inset shows eigenvalues for the discriminant analysis.
Interestingly, differences in genotype traits did not affect the reappearance of specific genotypes in different samplings. The simplest explanation may be that such a sampling time interval is not sufficient to accurately capture the consequences of these population dynamic effects, because random effects may still obscure them. An alternative explanation could be that each genotype is so plastic that even if a significant proportion of individuals in a genetic lineage reproduce sexually, asexually reproducing polyps may also survive alongside them and thus maintain the genetic lineage (bet-hedging; Simons, 2009; Steele et al., 2019).
| CONCLUSION AND PERSPECTIVE
In conclusion, the above findings suggest that the facultative sexual H. oligactis can maintain different reproductive strategies (asexual and sexual reproduction) in parallel, which may give it a significant advantage in predictably changing environments, thus increasing its adaptive capacity even in the face of unpredictable changes. This makes the study of this ability even more relevant today, as ecosystems containing populations with these traits could be key to mitigating the ecological damage caused by climate change.
ACKNOWLEDGMENTS
We are grateful to Jinliang Wang for help with the COLONY analysis.
"Environmental Science",
"Biology"
] |
Bovine elastin and kappa-elastin secondary structure determination by optical spectroscopies.
Elastin is the macromolecular polymer of tropoelastin molecules responsible for the elastic properties of tissues. The understanding of its specific elasticity is uncertain because its structure is still unknown. Here, we report the first experimental quantitative determination of the secondary structures of bovine elastin, as well as those of its corresponding soluble κ-elastin. Using circular dichroism, Fourier transform infrared, and near-infrared Fourier transform Raman spectroscopic data, we estimated the secondary structure contents of elastin to be ∼10% α-helices, ∼45% β-sheets, and ∼45% undefined conformations. These values were very close to those we had previously determined for the free monomeric tropoelastin molecule, thus suggesting that elastin is constituted of a closely packed assembly of globular, β structural class tropoelastin molecules cross-linked to form the elastic network (liquid drop model of elastin architecture). The presence of a strong hydration shell is demonstrated for elastin, and its possible contribution to elasticity is discussed.
The elasticity required for the appropriate functioning of skin, lung, and large blood vessels is due to the presence of elastic fibers within their extracellular matrix (1). The predominant component of these complex structures is the elastin protein, which endows them with its characteristic property of elastic recoil. Elastin is a macropolymeric protein synthesized by mesenchymal cells as a soluble precursor, tropoelastin, whose primary transcript undergoes alternative splicing resulting in the translation of several protein isoforms (2,3). After release into the extracellular space, most of the lysyl residues of tropoelastin are enzymatically deaminated. Following a series of non-enzymatic reactions, the activated residues condense to form specific tetrafunctional cross-links, named desmosines, whose appearance allows the spreading of the elastic network within the microfibrillar component of the fiber (for a review, see Ref. 1).
The primary structure of BTE 1 (3,4) consists of an alternation of cross-linking regions, where the lysyl residues are located, and large hydrophobic domains responsible for elastin elasticity. The highly hydrophobic BTE molecule possesses a very basic C-terminal sequence where its only two Cys residues are located. These were recently shown to form an intrachain disulfide bridge stabilizing a hydrophilic pocket (5). This C-terminal feature seems to be involved in elastin fiber assembly (6).
The presence of numerous cross-links and the extreme hydrophobicity of BTE chains are responsible for the great resistance of polymeric BE as well as its total insolubility in any solvent (1). BE-K is the heterogeneous mixture of peptides obtained from BE when it is solubilized by KOH (7). It is a form more suitable for biological tests than BE, as it is soluble. BE-K is thought to be a good model of insoluble BE because of its ability to form a matrix (coacervate) akin to hydrated insoluble BE, the elastic form of BE, at physiological temperatures and high concentrations (7).
The elasticity of BE has an entropic nature (8). However, its exact origin remains uncertain, as the structural data gathered on BE and BE-K are scarce. Indeed, the very peculiar physico-chemical properties of these molecules preclude significant structural results using the classical physical investigation methods. This lack of structural data explains why BE molecular models put forward explanations of the elastic mechanism without knowledge of its structures. A description of BE conformation is urgently needed.
Among the various models proposed (see Ref. 1 for a review), only three are still discussed. (a) In the globular liquid drop model of Weis-Fogh and Andersen (9, 10), BE is described as an aggregate of tropoelastin globules. Elasticity originates from hydrophobic interactions at the protein-solvent interface of the globules as they deform during stretching. (b) The random network model (8) regards BE as a protein devoid of any organization, much like rubber. It is connected to a classical elasticity theory. This model is supported by the works of Tamburro and coworkers (11,12), who established that peptides found in the BTE sequence form transient β-turns, whose stability is influenced by both the surrounding water (13,14) and the length of the peptide (15). (c) In the fibrillar β-spiral model of Urry (16,17), BE is considered a regular arrangement of consecutive β-turns (a β-spiral). In this context, elasticity arises from librational motions at the level of the spiral β-turns.
Following predictive (18) and experimental (19) evidence, we have proposed a β-class molecular model for BTE. The present work reports the structural investigation of the polymeric forms of this molecule, BE and BE-K, using CD, FT-IR, and NIR FT-R spectroscopies. The first estimations of BE and BE-K secondary structure contents are presented. The numerical values obtained for the polymers are compared with those formerly determined for monomeric BTE. Their implications for the existing BE models and the possible elasticity mechanisms are discussed.
MATERIALS AND METHODS
BE Purification-A fresh bovine ligamentum nuchae was collected at the local slaughterhouse. BE was purified according to the sequential method previously described (7,20). Prior to collagen removal, the Clostridium histolyticum collagenase (Sigma, type VII, clostridiopeptidase A, EC 3.4.24.3) was purified by affinity chromatography on a column of BE, as suggested earlier (20).
Amino Acid Analysis-Samples were hydrolyzed, in vacuo, at 110°C for 24 h in 6 N HCl and analyzed by high performance liquid chromatography using a Waters PICO TAG TM amino acid analysis system equipped with a reverse-phase C 18 PICO TAG column.
BE-K Preparation-Solubilization of elastin was achieved using 1 M KOH in 80% aqueous ethanol as described previously (7).
CD Spectroscopy-BE-K was used at 0.5 mg/ml in distilled water to avoid coacervation. The spectrum was measured at 21°C in 0.1-cm path length cells from 260 to 190 nm with a Mark III dichrograph (Jobin Yvon). Data are expressed as mean ellipticity per residue, [θ]r. The mean residue molecular mass used was 85.3 Da, as derived from the BTE sequence (4). The secondary structure contents were determined according to the method of Provencher and Glöckner (21) using their CONTIN program.
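For reference, the conversion from the observed ellipticity to the mean residue ellipticity follows the standard relation below; it is not spelled out in the text, and the numerical values are those stated above.

$$[\theta]_r = \frac{\theta_{\mathrm{obs}}\,(\mathrm{mdeg}) \times \mathrm{MRW}}{10 \times l\,(\mathrm{cm}) \times c\,(\mathrm{mg\,ml^{-1}})}$$

With MRW = 85.3 Da, l = 0.1 cm, and c = 0.5 mg/ml, this reduces to [θ]r = 170.6 × θobs, in deg cm² dmol⁻¹.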
FT-IR Spectroscopy-KBr pellets of BE and BE-K samples were prepared with 1 mg of protein per 100 mg of KBr. The spectra were recorded on a Bruker IFS 48 spectrometer by the accumulation of 200 interferograms at 4 cm⁻¹ resolution. The conformation-sensitive amide I, II, and III bands were checked for secondary structure variations, while the amide A band was used to investigate the N-H shielding of the peptide bonds (22).
NIR FT-R Spectroscopy-The spectra of BE and BE-K powders were recorded at room temperature on a Bruker FRA 106 system coupled to an IFS 88 spectrometer in the 200-4000 cm⁻¹ frequency range at 4 cm⁻¹ resolution. The infrared laser excitation line was 1.06 μm, with a power of 300 mW. The signal-to-noise ratio was improved by the accumulation of 200 interferograms. Secondary structures were determined by decomposition of the amide I band (C=O stretching mode of the peptide bond, 1630-1700 cm⁻¹) into individual components assigned to substructures, as the Raman sensitivity of this band to conformation is well known (23-25). First, Fourier self-deconvolution (26), second derivative (27), and maximum entropy (28) methods were independently applied to the original amide I bands. Second, among the components yielded by the resolution enhancement methods, only the positions of the most conserved and prominent ones were used as input parameters for a least-squares curve-fitting procedure. No parameters were fixed during the calculation except the nature of the underlying profiles, which were assumed to be 80% Gaussian and 20% Lorentzian. The structural assignments of the computed components were made according to their positions both before and after reconstruction (23-25). The cumulated fractional area assigned to a given substructure represented its relative total content in the protein conformation. The enhanced profiles were computed with the SPOV program (developed at the Ovtchinnikov and Shemyakin Institute in Moscow). The decompositions were made using the CURVEFIT module of the LabCalc package (Galactic Industries).
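A minimal, illustrative sketch of such an amide I decomposition (not the SPOV/CURVEFIT software used in the study): a least-squares fit of three pseudo-Voigt components with the 80% Gaussian / 20% Lorentzian mixing stated above. The starting peak positions, the number of components, and the `wn`/`intensity` spectral vectors (wavenumbers in cm⁻¹ and baseline-corrected intensities over 1630-1700 cm⁻¹) are all assumptions.

```r
## Pseudo-Voigt band (80% Gaussian / 20% Lorentzian), width given as FWHM
pseudo_voigt <- function(x, pos, width, amp, eta = 0.2) {
  g <- exp(-4 * log(2) * ((x - pos) / width)^2)  # Gaussian part
  l <- 1 / (1 + 4 * ((x - pos) / width)^2)       # Lorentzian part
  amp * ((1 - eta) * g + eta * l)
}

## Sum of three components; p = c(pos1, w1, a1, pos2, w2, a2, pos3, w3, a3)
model <- function(p, x) {
  pseudo_voigt(x, p[1], p[2], p[3]) +
  pseudo_voigt(x, p[4], p[5], p[6]) +
  pseudo_voigt(x, p[7], p[8], p[9])
}

rss <- function(p) sum((intensity - model(p, wn))^2)

start <- c(1655, 15, 1,    # alpha-helix component (position assumed)
           1670, 15, 1,    # beta-sheet component (position assumed)
           1685, 15, 0.5)  # turns/undefined component (position assumed)
fit <- optim(start, rss, method = "BFGS")

## Fractional areas of the fitted components estimate the relative
## secondary structure contents, as described in the text.
areas <- sapply(seq(1, 9, by = 3), function(i)
  sum(pseudo_voigt(wn, fit$par[i], fit$par[i + 1], fit$par[i + 2])))
round(areas / sum(areas), 2)
```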
For side chains such as alanine, valine, leucine, and isoleucine, the most prominent frequencies are those associated with the CH2 bending mode found at 1465 ± 20 cm⁻¹ (29) and with the CH3 antisymmetric deformation mode found at 1450 ± 20 cm⁻¹ (22). The behavior of the band centered around 940 cm⁻¹ was studied, as it is characteristic of ordered α-helices, as shown for poly-L-lysine used as a model (30,31). Bands arising from aromatic (tyrosine, phenylalanine) and sulfur-containing (cysteine, methionine) residues and from the polypeptide backbone were readily identified (32,33).
RESULTS AND DISCUSSION
The amino acid composition of BE (Table I) was in good agreement with those obtained by others (7,20). The preparation was free of collagen, as the level of Hyp residues was low and no hydroxylysine was detected. Likewise, the presence of Asp and Glu residues, with values comparable to those of Asp and Gln in BTE, respectively, demonstrated the absence of microfibrillar proteins. Fundamentally, no Trp or His was detected, and the estimated quantities of Gly, Ala, Pro + Hyp, Val, Leu, and Ile residues compared well with those of the BTE composition (Table I). The main discrepancy between elastin and BTE compositions was the estimated number of lysyl residues. This arose from the great difficulty of detecting all of elastin's cross-linking amino acids. With the occurrence of one Met residue being below the precision of the technique, the composition of BE (Table I) indicated a high level of purity, allowing its solubilization and the use of optical spectroscopic methods to analyze its structures.
The main features observed in the FT-IR spectra of proteins are those associated with the vibrational modes of the planar peptide bond, the so-called amide bands, whose positions, widths, and intensities are characteristic of the associated vibrational modes and thus of the local geometry of the peptide chain. They are the amide I (C=O stretching), amide II (mainly C-N stretching), amide III (N-H in-plane deformation), and amide A (N-H stretching) bands; the first three are very sensitive to conformational changes (34,35), while the last brings information about the hydrogen bonds formed by the peptide N-H groups (22). The FT-IR spectra of our samples in KBr pellets (Fig. 1) compared well with the data obtained by others for insoluble elastin (36) and solubilized elastin (37). The two molecules shared close global conformations, as their structure-sensitive amide I, II, and III bands were found at comparable positions (amide I at 1659 and 1657 cm⁻¹, amide II at 1538 and 1542 cm⁻¹, and amide III at 1237 and 1238 cm⁻¹ for BE and BE-K, respectively). However, the BE amide I band was much broader than that of BE-K, underlining some structural differences. The occurrence of the amide A bands at different positions (3322 and 3307 cm⁻¹, respectively) also demonstrated that their peptide N-H groups were involved in different types of hydrogen bonds.
Laser-visible Raman spectroscopy is a very powerful technique for investigating the conformation of biological molecules, as it can provide information about the secondary structures, the microenvironment of the residues, and the polypeptide backbone geometry. However, BE is so highly fluorescent that usable standard Raman data could not be obtained (38,39). We therefore preferred NIR FT-R spectroscopy to conventional visible Raman spectroscopy, as an infrared source is less likely to excite the intense protein autofluorescence.
The characteristic bands of a protein Raman spectrum were observed in the BE spectrum (Fig. 2) as follows: 1) the conformationally sensitive amide I (C=O stretch, 1630-1700 cm⁻¹) and amide III (N-H in-plane deformation, 1230-1310 cm⁻¹) bands, which arise from the Raman-active vibrational modes of the planar CONH peptide bond; 2) bands assigned to residue side chains, such as aromatic rings (only Phe and Tyr in the present case), CH, CH2, and CH3 groups, and stretching of bonds containing sulfur atoms; and 3) bands corresponding to the Cα-C and Cα-N stretches of the polypeptide backbone. The existence of disulfide bridges in BE was clearly demonstrated by the occurrence of two bands assigned to S-S (527 cm⁻¹) and C-S (665 cm⁻¹) stretching modes, respectively. These should mainly possess a local gauche-gauche-trans geometry, as the S-S mode was observed at 525 ± 10 cm⁻¹ (32). They certainly arose from BTE intrachain bonds. The NIR FT-R spectrum of BE-K, in contrast to the BE spectrum, showed a relatively poor signal-to-noise ratio, reflecting the very heterogeneous nature of the solubilized elastin (data not shown). Nevertheless, its amide I band was clear enough for structural analysis.
Our analysis of the Raman data has mainly been focused on secondary structure quantitation. The amide I band originates from the C=O stretching modes of all the peptide bonds of the protein. Depending upon the particular secondary structure, a C=O group is involved in a given type of hydrogen bond, whose characteristics influence its frequency of vibration. In that way, all the C=O groups occurring in α-helices will vibrate similarly, but differently from those encountered in β-sheets. Likewise, regular β-sheets and irregular ones will yield different signals. The vibrational frequencies from one substructure to another are not very different, but they differ sufficiently to be distinguished (22). They all occur in the same characteristic spectral range (1630-1700 cm⁻¹), and their respective Raman signals overlap to yield the complex amide I band. Decomposition methods aim at directly accessing those overlapping structural contributions. In this manner, it is thereafter possible to determine the secondary structure contents of the molecule by the standard assumption that the respective component areas correspond to their conformational contributions (23)(24)(25). The mathematical solution to a given decomposition problem is never unique, and it is always very difficult to tell which solution is the best, as one has no idea of how many contributions really exist and where they are located. Fortunately, the use of more than one resolution-enhancement method permits assessment of these parameters. Here, we have used the three major ones. Each of them proposed several underlying contributions in our amide I profiles (data not shown). By comparison and correlation of their results, we were able to choose a small representative number of components whose positions could be accurately estimated. The calculated components centered in the 1630-1700 cm⁻¹ range were assigned to α-helices, β-strands, and undefined (turns + coils) secondary structure elements according to both theoretical and experimental results (22)(23)(24)(25).
A component near 1640 cm⁻¹ corresponding to the hydration water bending mode (32) was resolved in both decompositions (Fig. 3, Table II), as the molecules were in the solid state. This result was in good agreement with the paradoxical demonstration that the very hydrophobic and insoluble BE forms very tight hydrogen bonds with water (40). One helix, two β-strand, and two undefined components were evidenced for both BE (Fig. 3a) and BE-K (Fig. 3b). The occurrence of an α-helical component correlated well with the observation of a band centered around 934 cm⁻¹ in the BE spectrum (Fig. 2), as that feature is characteristic of ordered helices (30,31). Moreover, the presence of β-strands was confirmed by the position of the BE amide III band (1248 cm⁻¹), since it fell within the typical Raman amide III domain of β-structures (24).
The quantitative results compiled in Table III showed that BE and BE-K possessed similar global conformations, consistent with high levels of both extended (43 and 46%, respectively) and unordered (48 and 41%, respectively) structures. However, the analysis of their respective NIR FT-R amide I components (Fig. 3, Table II) indicated strong variations in their local structures. For example, the first undefined-structure component (1661 cm⁻¹ for BE and 1656 cm⁻¹ for BE-K; see Table II) accounted for 45% of the global structure before solubilization and 12% after. Meanwhile, the second undefined-structure component (1672 cm⁻¹ for BE and 1666 cm⁻¹ for BE-K; see Table II) varied from 3 to 29%. Thus, an inversion of population between the modes giving rise to those components had occurred, and the hydrogen-bonding conditions differed between the two molecules, as revealed by the FT-IR data analysis. Insoluble and soluble elastins had quantitatively, but not qualitatively, identical conformations. This observation strongly suggests that BE-K is probably not a good model for BE as far as local conformations are concerned.
The CD spectrum of BE-K in water (Fig. 4) was in good agreement with those recorded previously by others for soluble elastins (41,42). The broad negative band observed at 200 nm tended to support the view that BE-K was disordered. However, this spectral feature could be assigned to short and distorted β-sheets (43), as was the case for the BTE spectrum (19). The CD quantitation (Table III) confirmed this possibility, as a high level of β-structures was estimated for BE-K dissolved in water. The value (47%) compared well with that estimated for BE-K in the solid state (46%). Nevertheless, the structural contents of BE-K seemed to change upon dissolution (Table III). The most striking feature was the apparent disappearance of α-helical structures. This was quite surprising, as BTE helices (∼5%) are preserved in solution (19). A hydration effect upon BE-K helices remained possible but uncertain, all the more so since the precision of the quantitative methods used was ±5%.
The conformation contents of BE (Table III) were in very good agreement with those of our free BTE β-class molecular model (19): for BTE, 5% helices (the cross-linking domains), 50% β-strands, and 45% undefined conformations (the rest of the molecule, or elastic regions). The finding that the global structures of free and cross-linked BTE were essentially identical strongly suggested that our monomeric model could apply to the elastin polymer, meaning that BE would consist of a three-dimensional arrangement of globular BTE molecules connected by cross-links. This structural description corresponds to the liquid drop model of BE (9,10). This model is also strongly supported by recent scanning tunneling microscopy observations of reconstituted BE (44) and of human recombinant tropoelastin coacervates.² Our structural results clearly contradicted the random network (8) and β-spiral (16,17) models of BE architecture, as BE does possess high levels of ordered structures. The structure of BE can thus be described as a three-dimensional repetition of our molecular model of BTE (19); that is, β-class BTE molecules are closely packed together and cross-linked by helical domains (∼10% α-helices), while the entropic "elastic" regions would consist mainly of buried short and/or distorted antiparallel β-strands (∼45%), probably packed in β-barrels and alternating with external turn and coil substructures (∼45% in total). In addition, we would like to underline that the hydrophobic domains of BE are highly mobile (45,46) and that coil-turn (11,12,15) or sheet-coil-turn (18) conformational transitions are possible. These transitions are most certainly mediated by the hydration water molecules (13,14,18).
[Table II legend: Spectral parameters and assignments of the elastin NIR FT-R amide I components. ν_i is the estimated initial frequency (cm⁻¹), ν_f the calculated frequency (cm⁻¹), and Δν the computed width at half-height (cm⁻¹). Assignments were made according to the ν_i and ν_f values, following experimental and theoretical results (22)(23)(24)(25). For components assigned to secondary structure elements, S represents the respective fractional area contribution in percent, rounded to the nearest integer. For a given substructure, the sum of the S values was assumed to represent its total content in the molecule conformation (see Table III).]
The present work reports the first experimental estimation of the secondary structures of insoluble bovine elastin. Conclusions about the tertiary and quaternary structures of the elastomer have also been reached. Our results provide valuable information for understanding the elastic function of BE as they demonstrate that the structure-elasticity relationships must be envisaged in a liquid drop architecture context (9, 10).
Nevertheless, we point out that our results do not mean that the elasticity mechanism (hydrophobic interactions) proposed in 1970 by Weis-Fogh and Andersen (9,10) is correct. We only agree with the architecture they proposed for the molecule. Indeed, we suggest that their explanation of elasticity is incorrect, as it is based on a diphasic description of the swollen polymer (protein chains + water) and neglects the hydration water of the molecule.
Recently, water was shown to act as a plasticizer for elastin (47); that is to say, it enhances its mobility. Moreover, the action of solutes on the structure of elastin is indirect and seems to be mediated through its hydration shell (48). In that way, if solvent water molecules are considered as particular solutes, their plasticizing effect should be processed through the hydration shell of the molecule. We thus feel that the strong hydration shell demonstrated for elastin could have some functional significance. Swollen elastin would then be better described as a triphasic system, protein + hydration water + solvent water, and, in this view, the conformational transitions we and others have proposed for BE hydrophobic domains (11,12,15,18) should have a functional role. The elasticity theory connected to this proposal and the molecular events occurring during stretching or relaxation now need to be fully described. Further experiments along these lines are underway, as are molecular modelings of isolated and/or cross-linked tropoelastins.
"Chemistry"
] |
Adsorption/desorption performance of Pb²⁺ and Cd²⁺ with super adsorption capacity of PASP/CMS hydrogel
Super-absorbent polyaspartic acid/carboxymethyl Salix psammophila powder (PASP/CMS) hydrogel was prepared by aqueous solution polymerization and characterized by Fourier-transform infrared (FTIR) spectroscopy, X-ray diffraction (XRD), scanning electron microscopy (SEM), and X-ray photoelectron spectroscopy (XPS). The results showed that the PASP/CMS hydrogel forms by graft copolymerization of the -COOH groups of polyaspartic acid (PASP) with the -CH2OH groups of CMS. The surface of the hydrogel changed from loose and porous to dense after adsorption, and Pb²⁺ and Cd²⁺ were adsorbed onto the hydrogel surface. The crystallinity of CMS was destroyed by the addition of PASP. The effects of the initial Pb²⁺ and Cd²⁺ concentration, pH, adsorption time, and adsorption temperature on adsorption were studied experimentally. The results showed that the hydrogel removes Pb(II) and Cd(II) ions effectively. The process follows pseudo-second-order kinetics and the Langmuir isotherm model; it is spontaneous, exothermic, and proceeds with decreased randomness, corresponding to monolayer chemisorption. The effects of the desorption parameters (initial HNO3 concentration, desorption time, and desorption temperature) were also studied and optimized.
INTRODUCTION
Heavy metals, especially lead, mercury, cadmium, and chromium, have high relative densities (Ma et al. 2018). Their sources mainly include the indiscriminate discharge of industrial wastewater and waste residue. Heavy metals harm water, soil, and human beings (Naushad & ALOthman 2015; Du et al. 2016). Among the various technologies for treating metal ions, adsorption is the most effective, and the choice of adsorbent is crucial (Naushad 2014; Marinah et al. 2017; Ma et al. 2018; Sofiah et al. 2018). Traditional adsorbents cannot be regenerated and cause secondary pollution to the environment (Xiong et al. 2016). Therefore, environmentally friendly adsorbents have attracted increasing attention in recent years.
Salix psammophila (SPP) is a renewable natural resource that can grow in saline-alkali land; it regenerates strongly, is widely available, and has been investigated by many researchers (Hao & Li 2019). Cellulose and lignin are present in SPP, and the hydroxyl group is the main active ingredient. By breaking the hydrogen bonds in SPP, the adsorption capability of the hydroxyl groups can be better exploited and the material's field of application expanded. In our previous research, carboxymethyl salix wood powder was prepared by treating alkalized salix powder with monochloroacetic acid, and its adsorption performance in methylene blue solution was studied. The adsorption capacity for methylene blue (1,908 mg/g) on carboxymethyl salix wood powder was much higher than that of raw salix (257 mg/g), making it an excellent natural polymer adsorbent.
Polyaspartic acid (PASP) is a polyamino acid that not only behaves as a water-soluble carboxylic acid (Yang et al. 2019) but also exhibits strong chelation, adsorption, and dispersion owing to its side-chain hydroxyl groups. In addition, PASP has been explored because of its valuable biodegradability (Jv et al. 2019), but its single functional group and poor adsorption performance limit its application. In our previous research, a polyaspartic acid/lignocellulose (PASP/LNC) hydrogel was prepared and its adsorption of Pb²⁺ was studied (Ye & Wang 2016). The PASP/LNC hydrogel showed good adsorption performance, with a Pb²⁺ capacity as high as 972.35 mg/g.
A hydrogel is a hydrophilic but water-insoluble polymer that rapidly reaches swelling equilibrium. It carries many functional groups (-OH, -COOH, -NH2, etc.) that can adsorb and exchange ions with heavy metal ions (Xiong et al. 2016). As an absorbent, hydrogel has become a popular option for treating heavy metal ions because of its high adsorption capacity, high adsorption speed, and recyclability (Song et al. 2020). Wang & Wang (2016) prepared a polyvinyl alcohol/carboxymethyl cellulose (PVA/CMC) hydrogel by the freeze-thaw method and studied its performance; the adsorption capacity of the PVA/CMC hydrogel for Ag⁺ (8.2 mg/g) was higher than that of the plain PVA hydrogel (4.7 mg/g). The PVA/CMC hydrogel prepared in that study can therefore be used to treat heavy metal ions.
To date, the adsorption/desorption of heavy metal ions by polyaspartic acid/carboxymethyl salix wood powder (PASP/CMS) hydrogel has rarely been reported. Here, PASP/CMS hydrogels were prepared and the adsorption/desorption of Pb²⁺ and Cd²⁺ was studied. The adsorption isotherm, kinetic, and thermodynamic models of Pb²⁺ and Cd²⁺ on the hydrogel were explored.
Materials
SPP was obtained from Erdos Xinjie, Inner Mongolia. KMnO4, glutaraldehyde, disodium ethylenediaminetetraacetate, and nitric acid were produced by Sinopharm Chemical Reagent Co., Ltd. PASP, lead nitrate, cadmium nitrate, hexamethylenetetramine, and xylenol orange were produced by Shandong West Asia Chemical Industry Co., Ltd, Tianjin Fengchuan Chemical Reagent Co., Ltd, Hunan Jinjinle Chemical Co., Ltd, Hongyan Reagent Factory (Hedong District, Tianjin), and Tianjin Shengao Chemical Reagent Co., Ltd, respectively.
Preparation of CMS
SPP was pulverized, sieved through a 200-mesh sieve, and dried; 2 g of the dried SPP was immersed in 15% sodium hydroxide solution for 12 h, filtered, and transferred into a flask, and 20 mL of absolute ethanol was added. Chloroacetic acid was added in batches; the mixture was reacted at room temperature for 30 minutes and then at 60 °C for 2 hours, and finally filtered and dried to obtain CMS.
Preparation of PASP/CMS hydrogel
CMS (0.1 g) and KMnO4 solution (0.06 mol/L, 50 mL) were added into a three-necked flask and mechanically stirred in a water bath for 15 min at 50 °C. Next, 20 mL of distilled water, the KMnO4-pretreated CMS, 15 g of PASP, and 1.0 g of glutaraldehyde were put into the three-necked flask and reacted at 70 °C for 3.5 h. The product was dried in an oven at 105 °C.
Characterization
FTIR spectra of CMS and PASP/CMS hydrogel were recorded on an FTIR spectrometer (Tensor 27, Bruker, Germany). The samples were prepared as KBr pellets, and the spectra were recorded at a resolution of 4 cm⁻¹ over the 500-4,000 cm⁻¹ range with 160 scans per sample.
XRD patterns of CMS and PASP/CMS hydrogel were collected on an X-ray diffractometer (XRD-6000, Shimadzu, Japan). The samples were placed on a blank slide and scanned over 5-60° at 4°/min with a step interval of 0.02°, using Cu Kα radiation (wavelength 0.154 nm).
SEM images were taken on a scanning electron microscope (JSM-6701F, JEOL, Japan) at an accelerating voltage of 5 kV and a magnification of 40,000×.
XPS spectra were acquired on an X-ray photoelectron spectrometer (ESCALAB21, VG, UK). Al Kα radiation (photon energy 1486.6 eV) was used as the excitation source; binding energies were corrected using the C1s peak at 284.6 eV as a reference, with an error of ±0.47 eV.
Adsorption of Pb²⁺ and Cd²⁺ by PASP/CMS hydrogel
A metal-ion solution of the specified concentration was prepared; 5 mL was transferred into a conical flask, and the pH was adjusted with buffer solution. The ion concentration in solution was determined by EDTA complexometric titration. Next, 50 mL of the prepared solution was transferred to another conical flask and its pH adjusted to the set value; 0.1 g of adsorbent was added, and the flask was shaken on an oscillator. After adsorption equilibrium was reached, 5 mL of supernatant was transferred to a conical flask, the pH was adjusted, and 2-3 drops of xylenol orange solution were added to color the Pb²⁺ and Cd²⁺. The molar concentration of the residual ions in solution was then determined by complexometric titration. The adsorption capacity Q_e (mg/g) is given by Equation (1) (Yatim et al. 2018):

Q_e = (C_0 − C_e) × V × M × 1000 / m    (1)

where C_0 and C_e (mol/L) are the initial and equilibrium molar concentrations, V (L) is the solution volume, M (g/mol) is the molar mass of the metal, and m (g) is the adsorbent mass.
Desorption of Pb²⁺ and Cd²⁺ by PASP/CMS hydrogel
Then, 50 mL of HNO3 solution and 0.1 g of adsorption-saturated adsorbent were added to a conical flask, which was placed in a constant-temperature water-bath oscillator. The PASP/CMS hydrogel was desorbed at the set temperature for the set time and centrifuged for 5 min after desorption reached equilibrium. The molar concentration of ions was determined as above. The desorption capacity Q_t (mg/g) is given by Equation (2):

Q_t = C_t × V × M × 1000 / m    (2)

where C_t (mol/L) is the molar concentration after desorption, and V, M, and m are as defined in Equation (1).
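A minimal numerical illustration of Equations (1) and (2) as given above; the equilibrium concentration used here is a hypothetical value chosen so that the example reproduces the reported Pb²⁺ capacity.

```python
# Adsorption/desorption capacity from molar concentrations (Eqs (1) and (2)).
M_PB = 207.2   # molar mass of Pb, g/mol
M_CD = 112.41  # molar mass of Cd, g/mol

def capacity_mg_per_g(c0_mol_l: float, ce_mol_l: float, volume_l: float,
                      molar_mass: float, adsorbent_g: float) -> float:
    """Mass of metal removed (or released) per gram of hydrogel, mg/g."""
    return (c0_mol_l - ce_mol_l) * volume_l * molar_mass * 1000.0 / adsorbent_g

# Example: 50 mL of 0.04 mol/L Pb(II) on 0.1 g of hydrogel, with a
# hypothetical equilibrium concentration of 0.024 mol/L
qe = capacity_mg_per_g(0.04, 0.024, 0.050, M_PB, 0.1)
print(f"Qe = {qe:.1f} mg/g")   # -> 1657.6 mg/g, the reported Pb(II) capacity
```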
RESULTS AND DISCUSSION
FTIR

Figure 1(a) shows the FTIR spectra of CMS and the PASP/CMS hydrogel. The characteristic peaks at 3,442, 2,900, 1,598, and 1,423 cm⁻¹ were assigned to the -OH, -CH, -COOH, and -CH2OH stretching vibrations of CMS. The absorption peak of the PASP/CMS hydrogel at 3,444 cm⁻¹ was enhanced after graft copolymerization of CMS and PASP, which is attributed to the introduction of PASP containing hydroxyl side chains; the number of hydroxyl groups thus increases after graft polymerization. The peak enhancement at 2,904 cm⁻¹ was attributed to the introduction of long saturated alkane chains into the PASP/CMS hydrogel after polymerization (Gaurav et al. 2017). The peak enhancement at 1,602 and 1,407 cm⁻¹ indicated that the hydrogel was formed by graft copolymerization of the -COOH of PASP with the -CH2OH of CMS.
XRD

Figure 1(b) shows the XRD patterns of CMS and the PASP/CMS hydrogel. The characteristic diffraction peak of CMS appeared at 2θ = 21.91°; this peak was obviously weakened in the PASP/CMS hydrogel, indicating that the order of CMS was destroyed and its crystallinity decreased with the addition of PASP and its side-chain hydroxyl groups. A diffraction peak at 2θ = 31.22°, corresponding to the crystalline structure of cellulose I and characteristic of CMS, was still observed, indicating that CMS had been grafted onto PASP to form the PASP/CMS hydrogel, consistent with the FTIR results.
SEM
SEM images of the PASP/CMS hydrogel (a) and of the hydrogel after adsorption of Pb²⁺ (b) and Cd²⁺ (c) are shown in Figure 2. The images clearly show the loose and porous surface of PASP/CMS. The surface of the hydrogel became dense after the reaction, indicating that ions were successfully adsorbed on the surface of the hydrogel.
XPS
The XPS spectra of the PASP/CMS hydrogel (a) and of the hydrogel after adsorption of Pb²⁺ (b) and Cd²⁺ (c) are shown in Figure 3. Peaks for Pb 4f (139 eV) and Cd 3d (412 eV) were detected in the hydrogel after adsorption of Pb²⁺ and Cd²⁺, indicating that the active sites on the hydrogel surface had been filled with heavy metal ions, consistent with the SEM results.
Initial concentrations of Pb²⁺ and Cd²⁺
Figure 4(a) shows the effect of the initial ion concentration on the adsorption of Pb²⁺ and Cd²⁺ by the PASP/CMS hydrogel. The removal of Pb²⁺ and Cd²⁺ first increased and then decreased with increasing initial concentration. The adsorption capacity reached its maximum at 0.04 mol/L: 1,657.6 and 719.6 mg/g for Pb²⁺ and Cd²⁺, respectively. This can be attributed to the presence of more Pb²⁺ and Cd²⁺, which provides a higher driving force for effective collisions between ions and the active sites of the hydrogel, thereby promoting ion transfer between the two phases (diffusion from the solution phase to the PASP/CMS hydrogel phase). At still higher concentrations, however, hydrolysis of the metal ions in solution was promoted and the adsorption capacity of the hydrogel was reduced. Therefore, an initial concentration of 0.04 mol/L is a reasonable choice.
pH of solution
Figure 4(b) shows that the adsorption capacity first increased to a maximum and then decreased with increasing pH. The adsorption capacity reached its maximum at pH 5.5: 1,656.4 and 717.3 mg/g for Pb²⁺ and Cd²⁺, respectively. At lower pH, the high concentration of H⁺ competed with Pb²⁺ and Cd²⁺ for the active sites. In addition, the amino, carboxyl, and other groups in the hydrogel were protonated and positively charged, generating electrostatic repulsion with the Pb²⁺ and Cd²⁺ in solution; the adsorption capacity decreased because the diffusion of heavy metal ions was hindered (Gamze et al. 2010). When the pH exceeded 5.5, the OH⁻ concentration in the solution increased, causing hydrolysis of the metal ions and destruction of the hydrogel network structure, and the adsorption capacity decreased (Gaurav et al. 2017). Therefore, a solution pH of 5.5 is a reasonable choice.
Adsorption time
Figure 4(c) shows that the adsorption capacity first increased to a maximum and then gradually leveled off with time. The adsorption capacity reached its maximum at 60 min: 1,657.2 and 718.4 mg/g for Pb²⁺ and Cd²⁺, respectively. In the early stage of adsorption, the heavy metal ions readily contacted the abundant active sites on the hydrogel surface, and the adsorption capacity increased continuously with time. In the late stage, the adsorption sites gradually became saturated, and the adsorption capacity approached equilibrium until adsorption saturation was reached. Therefore, an adsorption time of 60 min is a reasonable choice.
Adsorption temperature
Figure 4(d) shows that the adsorption capacity decreased with increasing adsorption temperature. At 30 °C (close to room temperature and easy to control), the adsorption capacity reached its maximum: 1,657.5 and 719.2 mg/g for Pb²⁺ and Cd²⁺, respectively. As the temperature increased, the interactions within the hydrogel weakened and the heavy metal ions were more easily desorbed back into solution, reducing the adsorption capacity. The adsorption is exothermic, consistent with the adsorption thermodynamics, so increasing the temperature is unfavorable for adsorption. Therefore, an adsorption temperature of 30 °C is a reasonable choice.
Adsorption isotherm model
Langmuir and Freundlich models were used to analyze the experimental data (Maneechakr & Karnjanakom 2017). In linearized form, the models are given in Equations (3) and (4):

Langmuir: C_e/q_e = C_e/q_m + 1/(K_L × q_m)    (3)

Freundlich: ln q_e = ln K_F + (1/n) ln C_e    (4)

where q_e (mg/g) is the equilibrium adsorption capacity, q_m (mg/g) the maximum adsorption capacity, K_L the Langmuir constant, and K_F and n the Freundlich constants. Table 1 shows the adsorption isotherm parameters. The R² values of the Langmuir model for the adsorption of Pb²⁺ and Cd²⁺ onto the PASP/CMS hydrogel were 0.9957 and 0.9913, respectively, and the theoretical adsorption capacities (1,954.7 and 847.5 mg/g) were close to the measured capacities (1,657.6 and 719.6 mg/g). The R² values of the Freundlich model were 0.9045 and 0.9278, respectively. In summary, the adsorption follows the Langmuir isotherm model, indicating monolayer chemisorption.
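The linearized fits described above can be sketched as follows; the equilibrium data are placeholders, not the measured values of Table 1.

```python
import numpy as np

# Equilibrium data (hypothetical): Ce in mg/L, qe in mg/g
ce = np.array([50.0, 120.0, 300.0, 700.0, 1500.0])
qe = np.array([620.0, 980.0, 1350.0, 1560.0, 1650.0])

# Langmuir, linearized: Ce/qe = Ce/qm + 1/(KL*qm)
slope, intercept = np.polyfit(ce, ce / qe, 1)
qm, kl = 1.0 / slope, slope / intercept
print(f"Langmuir: qm = {qm:.1f} mg/g, KL = {kl:.4f} L/mg")

# Freundlich, linearized: ln qe = ln KF + (1/n) ln Ce
s, b = np.polyfit(np.log(ce), np.log(qe), 1)
kf, n = np.exp(b), 1.0 / s
print(f"Freundlich: KF = {kf:.1f}, n = {n:.2f}")

# R^2 of each linear fit decides which model represents the data better
def r_squared(x, y):
    yhat = np.polyval(np.polyfit(x, y, 1), x)
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

print(f"R2 Langmuir   = {r_squared(ce, ce / qe):.4f}")
print(f"R2 Freundlich = {r_squared(np.log(ce), np.log(qe)):.4f}")
```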
Adsorption kinetic model
Adsorption equilibrium and mechanism can be better explained using pseudo-first-order and pseudo-second-order kinetic models, shown in Equations (5) and (6), respectively:

Pseudo-first-order: ln(q_e − q_t) = ln q_e − K_1 t    (5)

Pseudo-second-order: t/q_t = 1/(K_2 × q_e²) + t/q_e    (6)

where q_t (mg/g) is the adsorption capacity at time t, K_1 (min⁻¹) the pseudo-first-order rate constant, and K_2 (g/(mg·min)) the pseudo-second-order rate constant. Figure 6 and Table 2 show the kinetic models and parameters. The R² values of the pseudo-first-order and pseudo-second-order models for the adsorption of Pb²⁺ (Cd²⁺) onto the PASP/CMS hydrogel were 0.9720 (0.9552) and 0.9932 (0.9944), respectively, and the corresponding theoretical adsorption capacities were 1,480.3 (1,388.9) and 1,785.7 (808.4) mg/g. The maximum measured adsorption capacity of Pb²⁺ (Cd²⁺) on the hydrogel was 1,657.6 (719.6) mg/g. The adsorption therefore agrees better with the pseudo-second-order kinetic model, and chemical adsorption dominates the process, consistent with the isotherm analysis.
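A sketch of the two linearized kinetic fits, again with placeholder contact-time data; only the measured equilibrium capacity for Pb²⁺ is taken from the text.

```python
import numpy as np

# Contact-time data (hypothetical): t in min, qt in mg/g
t = np.array([5.0, 10.0, 20.0, 30.0, 45.0, 60.0, 90.0])
qt = np.array([620.0, 980.0, 1320.0, 1480.0, 1590.0, 1640.0, 1655.0])
qe_exp = 1657.6  # measured equilibrium capacity for Pb(II), mg/g

# Pseudo-first-order, linearized: ln(qe - qt) = ln(qe) - K1*t
mask = qt < qe_exp
k1_slope, k1_int = np.polyfit(t[mask], np.log(qe_exp - qt[mask]), 1)
print(f"PFO: K1 = {-k1_slope:.4f} 1/min, qe = {np.exp(k1_int):.1f} mg/g")

# Pseudo-second-order, linearized: t/qt = 1/(K2*qe^2) + t/qe
k2_slope, k2_int = np.polyfit(t, t / qt, 1)
qe_pso = 1.0 / k2_slope
k2 = k2_slope ** 2 / k2_int   # intercept = 1/(K2*qe^2) with qe = 1/slope
print(f"PSO: K2 = {k2:.2e} g/(mg*min), qe = {qe_pso:.1f} mg/g")
```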
Adsorption thermodynamics
The degree of adsorption and the internal driving force between adsorbent and adsorbate can be studied by adsorption thermodynamics. The Gibbs and van 't Hoff equations (Seyed et al. 2020) are shown in Equations (7) and (8):

ΔG = −RT ln K    (7)

ln K = −ΔH/(RT) + ΔS/R    (8)

Tables 3 and 4 show the thermodynamic parameters for the adsorption of Pb²⁺ (Cd²⁺) onto the PASP/CMS hydrogel. The adsorption enthalpy ΔH, Gibbs free energy ΔG, and adsorption entropy ΔS were all negative, indicating that the adsorption is exothermic (Mensah et al. 2019); the decrease in adsorption capacity with increasing adsorption temperature (see Adsorption temperature) is consistent with this. The adsorption processes are spontaneous and proceed with decreased randomness.
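Assuming equilibrium constants measured at three temperatures (the values below are hypothetical), ΔH, ΔS, and ΔG follow from Equations (7) and (8) as sketched here.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

# Hypothetical dimensionless equilibrium constants at three temperatures
T = np.array([303.15, 313.15, 323.15])   # K
kd = np.array([16.6, 11.2, 7.9])         # decreasing K -> exothermic process

# van 't Hoff: ln K = -dH/(R*T) + dS/R, i.e. linear in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(kd), 1)
dH = -slope * R            # adsorption enthalpy, J/mol
dS = intercept * R         # adsorption entropy, J/(mol*K)
dG = -R * T * np.log(kd)   # Gibbs free energy at each T, J/mol

print(f"dH = {dH / 1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol*K)")
for Ti, dGi in zip(T, dG):
    print(f"T = {Ti:.2f} K : dG = {dGi / 1000:.1f} kJ/mol")
```

With these placeholder values, ΔH, ΔS, and each ΔG all come out negative, matching the qualitative behavior reported above.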
Desorption of Pb²⁺ and Cd²⁺ from the PASP/CMS hydrogel

HNO3 concentration

Figure 7(a) shows the effect of HNO3 concentration on the desorption of Pb²⁺ and Cd²⁺ from PASP/CMS. The desorption capacity first increased to a maximum and then decreased with increasing HNO3 concentration.
The desorption capacities were 580.2 and 382.2 mg/g at an HNO3 concentration of 0.08 mol/L. The hydrogel has different affinities for different ions: its affinity for the hydrogen ion is greater than for Pb(II) and Cd(II). Therefore, the desorption capacity of Pb²⁺ and Cd²⁺ from the PASP/CMS hydrogel increased with increasing HNO3 concentration. At still higher concentrations, however, the excess H⁺ produced electrostatic repulsion with Pb²⁺ and Cd²⁺, which inhibited their desorption and reduced the desorption capacity (Mensah et al. 2019). Therefore, an HNO3 concentration of 0.08 mol/L is a reasonable choice.

Desorption time

Figure 7(b) shows the effect of desorption time. Desorption of Pb²⁺ and Cd²⁺ from PASP/CMS first increased and then stabilized with time. The maximum desorption capacities of the PASP/CMS hydrogel for Pb²⁺ and Cd²⁺ were 579.6 and 381.6 mg/g, respectively. In the early stage, the large amount of H⁺ in solution competed for adsorption with the saturated hydrogel, so the desorption capacity increased. In the later stage, the concentration difference between Pb²⁺/Cd²⁺ on the hydrogel and H⁺ in solution became smaller, and the desorption capacity gradually stabilized at its final value. Therefore, a desorption time of 90 min is a reasonable choice.
Desorption temperature
Figure 7(c) shows that the desorption capacity first increased and then gradually leveled off with increasing desorption temperature. The maximum desorption capacities of the PASP/CMS hydrogel for Pb²⁺ and Cd²⁺ were reached at 60 °C and were 580.1 and 381.6 mg/g, respectively. As the temperature rose, the thermal motion of H⁺ increased, making it easier for H⁺ to enter the microporous structure of the hydrogel; the heavy metal ions bound to functional groups in the PASP/CMS hydrogel were displaced into the solution, increasing the desorption capacity. When the desorption temperature was increased further, the exchange of heavy metal ions on the hydrogel reached equilibrium (Sun et al. 2018). Therefore, a desorption temperature of 60 °C is a reasonable choice.
CONCLUSIONS
A new hydrogel, PASP/CMS, was synthesized by graft copolymerization and characterized by Fourier-transform infrared (FTIR) spectroscopy, X-ray diffraction (XRD), scanning electron microscopy (SEM), and X-ray photoelectron spectroscopy (XPS). It proved to be an excellent adsorbent for the removal of Pb²⁺ and Cd²⁺ from aqueous media. The adsorption of Pb²⁺ and Cd²⁺ onto the PASP/CMS hydrogel reached its maximum at an initial concentration of 0.04 mol/L, pH 5.5, shaking time of 60 min, and temperature of 30 °C, giving 1,657.6 and 719.6 mg/g, respectively. The desorption of Pb²⁺ and Cd²⁺ from the PASP/CMS hydrogel reached its maximum at an HNO3 concentration of 0.08 mol/L, pH 5.5, desorption time of 90 min, and temperature of 60 °C, giving 580.16 and 382.16 mg/g, respectively. The results show that the adsorption process follows the Langmuir isotherm and pseudo-second-order kinetic models and is a spontaneous, exothermic reaction with decreasing randomness.
The PASP/CMS hydrogel has ultra-high adsorption capacity for heavy metal ions (such as Pb²⁺ and Cd²⁺) and offers environmental friendliness, economy, and regenerability, providing a theoretical basis for the treatment of metal ions (such as Pb²⁺ and Cd²⁺) in wastewater.
"Materials Science"
] |
N-representability of the Jastrow wave function pair density of the lowest-order
Conditions for the N-representability of the pair density (PD) are needed for the development of PD functional theory. We derive sufficient conditions for the N-representability of the PD that is calculated from the Jastrow wave function within the lowest order. These conditions are used as constraints on the correlation function of the Jastrow wave function. A concrete procedure for searching for the suitable correlation function is also presented.
where γ^(2)_SSD(rr′; rr′) is the PD calculated from the SSD of the N₀-electron system. In the preceding paper 17, we confirmed that Eq. (2) meets four kinds of necessary conditions for the N-representability of the PD and may become "approximately N-representable" [42][43][44][45][46][47]. However, the possibility of it being N-representable has not been discussed 17. This is a debatable point, since it concerns whether the reproduced PD is physically reasonable or not.
The aim of this paper is to discuss the N-representability of Eq. (2) and to show how to search for the suitable correlation function f(|r_i − r_j|). The organization of this paper is as follows. For the convenience of the subsequent discussions, we first examine the properties of the LO-Jastrow PD in the next section. Then, the sufficient conditions for the N-representability of Eq. (2), which are imposed on the correlation function, are derived recursively. Next, concrete steps for searching for the correlation function that meets these conditions are discussed. Finally, concluding remarks are given in the last section.
Results
Properties of the LO-Jastrow PD. In this section, we shall discuss the properties of the LO-Jastrow PD. To this aim, the properties of PDs that are calculated from SSDs are investigated by means of the cofactor expansion of the SSD. In what follows, suppose that the spin orbitals are given as the solutions of the simultaneous equations of previous work 17, and therefore that they are orthonormal to each other.
The minor determinants appearing in Eq. (3) denote (N₀ − 1)-electron SSDs, defined as the minors multiplied by the corresponding sign factors. By repeating this procedure, we arrive at the two-electron SSDs. The wave function of Eq. (2) is given by the Jastrow wave function, where γ̂_n^(2)(rr′; rr′) denotes the PD operator for an n-electron system. Substituting Eq. (9) into Eq. (6) and rearranging, we obtain Eq. (10). It should be noticed that the right-hand side of Eq. (10) has a characteristic form. Concerning the N-representability of this form, the following theorem holds:

Theorem. If there exists a set of single-valued, continuous, smooth, and finite functions {a_α(x_{n+1})} (1 ≤ α ≤ n + 1) that satisfy the conditions (11) and (12), then Eqs. (13) and (14) hold, where Ψ_α (1 ≤ α ≤ n + 1) denote n-electron wave functions, σ is a permutation operator acting on the electron coordinates, and |σ| is the number of interchanges in σ.
Proof. The left-hand side of Eq. (13) is related to the average of γ̂_n^(2)(rr′; rr′) with respect to a density matrix for a mixed state. Indeed, if the density matrix ρ̂_n for the mixed state is chosen appropriately, this average is calculated as Tr[ρ̂_n γ̂_n^(2)(rr′; rr′)]. On the other hand, it is expected that the average Tr[ρ̂_n γ̂_n^(2)(rr′; rr′)] may be given as the expectation value of γ̂_n^(2)(rr′; rr′) with respect to a pure state of the whole system that includes the n-electron system as a subsystem 48,49. We shall take an (n + 1)-electron system as the whole system and suppose that the wave function for this (n + 1)-electron system is given by Eq. (14). In addition to Eq. (8), the existence conditions for the functions of Eqs. (21) and (22) are also part of the sufficient conditions for the N-representability of Eq. (2). We assume that such a set of functions exists.
Discussions
Concrete steps for constructing the N-representable LO-Jastrow PD. In the preceding section, the sufficient conditions for the N-representability of the LO-Jastrow PD were derived. In this section, we consider the concrete steps for searching for the correlation function that meets these conditions, or for checking its existence.
1. First, we give a trial form of the correlation function. Using this, the simultaneous equations for the N₀-electron system are solved in a self-consistent way 17.
2. Let us consider the SSD that consists of the resultant spin orbitals of the simultaneous equations. The SSD can generally be expanded using the cofactor (a numerical sketch of this expansion is given below). The SSD for the N₀-electron system is expanded along the N₀th row, giving N₀ SSDs for the (N₀ − 1)-electron system. Successively, each SSD for the (N₀ − 1)-electron system is expanded along the (N₀ − 1)th row, giving (N₀ − 1) SSDs for the (N₀ − 2)-electron system in each case. The cofactor expansions are likewise repeated, and we finally arrive at the SSDs for the two-electron system.
3. Starting from these two-electron SSDs, we search for the functions satisfying Eqs. (21) and (22), and then, recursively, for the functions satisfying Eqs. (11) and (12), while modifying the correlation function. If we successfully find the correlation function that meets the conditions (11) and (12) for any n (≤ N₀ − 1), the LO-Jastrow PD of the N₀-electron system becomes N-representable.
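A minimal numerical sketch of the cofactor expansion used in step 2, with random numbers standing in for the spin-orbital values φ_j(x_i); it only illustrates the recursive structure of the expansion, not the actual solution of the simultaneous equations.

```python
import numpy as np
from math import factorial

def slater_determinant(phi):
    """Value of the normalized SSD built from the N x N matrix
    phi[i, j] = phi_j(x_i) of spin-orbital values at fixed coordinates."""
    n = phi.shape[0]
    return np.linalg.det(phi) / np.sqrt(factorial(n))

def cofactor_expansion_last_row(phi):
    """Expand det(phi) along the last row:
    det = sum_j (-1)^((N-1)+j) * phi[N-1, j] * M_j  (0-based indices),
    where M_j is the minor with the last row and column j removed, i.e.
    the (N-1)-electron determinants used recursively in step 2."""
    n = phi.shape[0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(phi, n - 1, axis=0), j, axis=1)
        total += (-1) ** ((n - 1) + j) * phi[n - 1, j] * np.linalg.det(minor)
    return total

rng = np.random.default_rng(0)
phi = rng.normal(size=(4, 4))  # placeholder orbital values for N0 = 4

# The expansion reproduces the full determinant, as used recursively above
assert np.isclose(np.linalg.det(phi), cofactor_expansion_last_row(phi))
print(slater_determinant(phi))
```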
As easily inferred, the above steps are feasible, from the practical viewpoint, only for systems with few electrons. However, it should be noted that a correlation function that makes the LO-Jastrow PD N-representable may, in principle, be found by following the above steps, though there is a possibility that a suitable correlation function does not exist for some systems.
Other possibilities to obtain antisymmetric wave functions. In the above-mentioned concrete steps, antisymmetric (n + 1)-electron wave functions are built up from given antisymmetric n-electron wave functions via Eq. (14). In this subsection, we show that there are other possibilities to obtain antisymmetric (n + 1)-electron wave functions without using Eq. (14).
Instead of Eq. (14), we can use the expression of Eq. (28) for Ψ_{N=n+1}(x_1, ⋯, x_{n+1}), where a_α(x_i) and b_α(x_{n+1}) denote functions to be determined. If these functions satisfy the conditions of Eqs. (29)-(31), then Eq. (28) becomes an antisymmetric wave function and yields the PD given by the left-hand side of Eq. (13). Therefore, if we use the expression of Eq. (28), we have to search for a set of functions a_α(x_i) and b_α(x_{n+1}) that satisfy Eqs. (29)-(31) instead of Eqs. (11) and (12). Alternatively, we may suppose the expression of Eq. (32), where a_α(x_i, x_{n+1}) denote functions to be determined. If a_α(x_i, x_{n+1}) satisfies the conditions of Eqs. (33) and (34), then Eq. (32) becomes an antisymmetric (n + 1)-electron wave function and yields the PD given by the left-hand side of Eq. (13). Therefore, if we use the expression of Eq. (32), what we have to do in the concrete steps is to search for a set of functions a_α(x_i, x_{n+1}) that satisfy Eqs. (33) and (34) instead of a set of functions that satisfy Eqs. (11) and (12).
Thus, various expressions for Ψ_{N=n+1}(x_1, ⋯, x_{n+1}) can be adopted. This means that the present method can provide various prescriptions for constructing the N-representable LO-Jastrow PD.
Concluding remarks
In this paper, the sufficient conditions for the N-representability of the LO-Jastrow PD are discussed. Using the properties of the LO-Jastrow PD, we derive sufficient conditions that are imposed on the correlation function of the Jastrow wave function. As shown in the previous section, additional steps for searching for the suitable correlation function, which satisfies the sufficient conditions, are attached to the computational scheme proposed previously 17. Although the number of steps rapidly increases with the number of electrons, the concrete steps presented in the previous section are feasible for a system with few electrons. Of course, there is a possibility that a suitable correlation function cannot be found. In this case, as mentioned in the previous section, we can adopt other expressions for Ψ_{N=n+1}(x_1, ⋯, x_{n+1}), so that we may possibly find a suitable correlation function. Otherwise, as mentioned in the previous paper, LO-Jastrow PDs are approximately N-representable in the sense that they satisfy four kinds of necessary conditions 17,[42][43][44][45][46][47].
"Physics"
] |
Quantum gravity with THESEUS
In this paper we explore the possibility of searching for a dispersion law for light propagation in vacuo with a sample of Gamma-Ray Bursts detected by the THESEUS satellite. Within Quantum Gravity theories, different models for space-time quantization predict relative discrepancies of the speed of photons w.r.t. the speed of light that (in a series expansion) depend on a given power of the ratio of the photon energy to the Planck energy. This ratio is as small as 10⁻²³ for photons in the soft γ-ray band (100 keV). The dominant effect is determined by the first significant term of this expansion. If the first order in this expansion is relevant, these theories imply a Lorentz Invariance Violation (LIV hereafter) and are generally dubbed LIV-theories. Therefore, to detect this effect, light must propagate over enormous distances and the experiment must have extraordinary sensitivity. Gamma-Ray Bursts, occurring at cosmological distances, could be used to detect this tiny signature of space-time granularity. Once the photons of a Gamma-Ray Burst are emitted at a given (cosmological) distance, they arrive at the detector with relative delays that depend linearly on the energy differences and on the distance travelled, which, given a set of cosmological parameters, is a unique function of the redshift. The strong temporal variability of Gamma-Ray Burst light-curves allows these delays to be computed with different techniques (e.g. cross-correlations) by comparing the light-curves of Gamma-Ray Bursts of known redshift in adjacent energy bands covering a sufficiently wide energy range. In this way, LIV-theories can be effectively constrained. THESEUS offers the opportunity to collect a homogeneous set of GRBs of known redshift, with a signal-to-background ratio sufficient to compute delays through cross-correlation techniques, and covering an energy band (from a few keV to a few MeV) wide enough to produce significant delays. In this article we explore the possibility of constraining LIV-theories with THESEUS by means of Monte Carlo simulations. In summary, within the nominal duration of 3 years, THESEUS could constrain (or detect) Quantum Gravity Lorentz Invariance Violation effects at a level of 17 times the Planck Length (1.6 × 10⁻³³ cm); if the mission is extended up to 7 years, this constraint is improved down to a level of 11 times the Planck Length.
GRB simulations and cross-correlation analysis
We have used cross-correlation techniques to investigate the temporal delays between the light-curves of a given GRB in different energy bands. The light-curve of a bright long GRB observed by Fermi-GBM is shown in Fig. 1, Panel a. The bright long GRB lasted t_GRB = 40 s, with an average flux in the 50-300 keV energy band of φ_GRB = 6.5 photons/s/cm² and a background flux of φ_BCK = 2.8 photons/s/cm². Moreover, the GRB was characterised by variability on a timescale of the order of ∼5 ms.
Starting from this, we derived a template with millisecond resolution (see [1] for more details). Figure 1, Panel b, shows the detail of the main peak of the GRB template, where the timescale of the fast variability is about 5 ms. Using Monte-Carlo simulations, we generated light-curves as seen by detectors of different effective areas. Since the number of photons collected in a given energy band is a fraction of the total number of photons collected, light-curves obtained from detectors of different areas are equivalent, w.r.t. cross-correlation accuracies, to light-curves in different energy bands, provided that the number of photons expected in a given energy band equals the number of photons detected with a detector of a given area. We performed cross-correlation analysis between pairs of simulated GRBs with the aim of investigating the capability to reconstruct time delays between the observed signals. As an example, the cross-correlation function at 1 μs resolution for a pair of detectors of 100 m² area is shown in Fig. 2, Panel a. Panel b shows the detail of the cross-correlation function around the peak and the best-fit Gaussian. To determine a reliable estimate of the accuracy achievable using cross-correlation analysis, we repeated the procedure described above 1000 times and fitted the resulting distribution with a Gaussian model, from which we estimated an accuracy of 0.27 μs. Distributions for different effective areas, 56 cm² (HERMES, see [2,3] for more details on the HERMES project), 125 cm² (Fermi-GBM), 1 m², 10 m², 50 m², and 100 m², are shown in the six panels of Fig. 3, top panel. The bottom panel shows the one-sigma delay accuracy as a function of the effective area. The accuracy scales as the inverse of the effective area A to the power 0.6, close to but slightly better than the theoretical lower limit of 0.5 (derived from counting statistics). The best-fit formula is:

E_CC(A) = 0.27 μs × (A / 100 m²)^(−0.6)    (1)

In terms of the number of collected photons N (adopting the same 2.8/6.5 ∼ 40% overall background fraction) we obtain:

E_CC(N) ∝ (N / 3.7 × 10⁶)^(−0.58)    (2)
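A toy version of this Monte-Carlo procedure is sketched below; the template, the injected delay, and the photon counts are placeholders, and the sign convention of the recovered lag follows scipy's correlation_lags.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

rng = np.random.default_rng(1)

def recovered_delays(template, dt, delay_s, counts_per_curve, n_trials=100):
    """Draw Poisson realizations of two light-curves (the second shifted by
    delay_s), cross-correlate them, and return the recovered lags."""
    shift_bins = int(round(delay_s / dt))
    rate = template / template.sum() * counts_per_curve  # expected counts/bin
    lags = correlation_lags(template.size, template.size, mode="full")
    out = []
    for _ in range(n_trials):
        lc1 = rng.poisson(rate)
        lc2 = rng.poisson(np.roll(rate, shift_bins))
        cc = correlate(lc2 - lc2.mean(), lc1 - lc1.mean(),
                       mode="full", method="fft")
        out.append(lags[np.argmax(cc)] * dt)
    return np.array(out)

# Toy template: a 40 s burst with ~5 ms variability on a flat background
dt = 1e-3                                   # 1 ms bins
t = np.arange(0.0, 40.0, dt)
burst = np.exp(-0.5 * ((t - 20.0) / 2.0) ** 2)
template = 1.0 + burst * (1.0 + 0.5 * np.sin(2.0 * np.pi * t / 0.005))

lags = recovered_delays(template, dt, delay_s=0.05, counts_per_curve=3.7e6)
print(f"mean recovered delay = {lags.mean() * 1e3:.2f} ms, "
      f"scatter = {lags.std() * 1e3:.2f} ms")
```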
A shallow dive into quantum gravity: minimal length hypothesis, Lorentz invariance violation, and dispersion relation for photons in vacuo
Several theories proposed to describe quantum space-time, for instance some String Theories, predict the existence of a minimal length scale of the order of the Planck length, ℓ_PLANCK = √(ħG/c³) = 1.6 × 10⁻³³ cm (see e.g. [4] for a review). This implies the following facts. i) These theories predict a Lorentz Invariance Violation (LIV, hereafter). According to Special Relativity, a proper length ℓ is Lorentz contracted by a factor (1 − v²/c²)^(1/2) when observed from a reference system moving at speed v w.r.t. the reference system in which ℓ is at rest. If ℓ_MIN = α ℓ_PLANCK (where α ∼ 1 is a dimensionless constant that depends on the particular theory under consideration) is the minimal length physically conceivable (in String Theories ℓ_MIN is the string length), no further Lorentz contraction can occur at this scale. This is a violation of Lorentz invariance. ii) These theories predict the remarkable fact that space has, somehow, the structure of a crystal lattice at the Planck scale. iii) In perfect analogy with the propagation of light in crystals, these theories predict the existence of a dispersion law for photons in vacuo [5]. Since, for photons, energy scales as the inverse of the wavelength, this dispersion law can be expressed as a function of the photon energy in units of the Quantum Gravity energy scale, the energy at which the quantum nature of gravity becomes relevant: E_QG = ζ m_PLANCK c², where ζ ∼ 1 is a dimensionless constant that depends on the particular theory under consideration, m_PLANCK = √(ħc/G) = 2.2 × 10⁻⁵ g is the Planck mass, and the Planck energy is E_PLANCK = 1.2 × 10¹⁹ GeV. We have:

|v_PHOT − c| / c = ξ × [E_PHOT / (ζ m_PLANCK c²)]^n    (3)

where ξ ∼ 1 is a dimensionless constant that depends on the particular theory under consideration, v_PHOT is the group velocity of the photon wave-packet, and E_PHOT is the photon energy. The index n is the order of the first relevant term in the expansion in the small parameter E_PHOT/(ζ m_PLANCK c²). In several theories that predict the existence of a minimal length, typically n = 1. Finally, the modulus present in (3) takes into account the possibility (predicted by different LIV theories) that higher energy photons are faster or slower than lower energy photons (discussed as sub-luminal, +1, or super-luminal, −1, as in [6]).
We stress that not all the theories proposed to quantise gravity predict a LIV at some scale. This is certainly the case for Loop Quantum Gravity (see e.g. [7][8][9]). No LIV is expected as a consequence of the recently proposed Space-Time Uncertainty Principle [10] or in Quantum Space-Time [11]. In some of these theories it is possible to conceive a photon dispersion relation that does not violate Lorentz invariance, although the first relevant term is quadratic in the ratio of photon energy to E_QG, i.e. n = 2 in this case. We explicitly note that, since E_QG ∼ 10¹⁹ GeV, second-order effects are essentially irrelevant even for photons at 0.1 PeV (10¹⁴ eV), the highest photon energies ever recorded, recently confirmed to be emitted by the Crab Nebula [12]. Indeed, even for these extreme photons, (E_PHOT/E_QG)² ∼ 10⁻²⁸.
Dispersion relation for photons in vacuo
During motion at constant velocity, the travel time is the ratio between the distance travelled D_TRAV and the speed. Therefore, differences in speed result in differences in the arrival times of photons of different energies E_PHOT departing from the same point at the same time, such as those emitted during a GRB event. For small speed differences, like those predicted by the dispersion relations discussed above, these delays scale with the same order n in the ratio E_PHOT/E_QG:

Δt_QG = ± ξ × (D_TRAV / c) × [E_PHOT / (ζ m_PLANCK c²)]^n    (4)

where ξ ∼ 1 is a dimensionless constant that depends on the particular theory under consideration, and the sign ± takes into account the possibility (predicted by different LIV theories) that higher energy photons are slower or faster than lower energy photons, respectively, as discussed above [6].
On the other hand, the distance traveled has to take into account the cosmological expansion, being a function of the cosmological parameters and the redshift. The comoving trajectory of a particle is obtained by writing its Hamiltonian in terms of the comoving momentum [13]. The computation of the delays has to take into account the fact that the proper distance varies as the universe expands. Photons of different energies are affected by different delays along the path; because of cosmological expansion, a delay produced further back along the path amounts to a larger delay on Earth. Taking these effects into account, a modified "distance traveled" D_EXP can be computed [13].
More specifically, we adopted the so-called Lambda Cold Dark Matter (ΛCDM) cosmology with the following values [14]: H₀ = 67.74(46) km s⁻¹ Mpc⁻¹; Ω_k = 0, implying a flat Universe; Ω_R = 0, implying a cold Universe; w = −1, a negative-pressure equation of state for the so-called Dark Energy, implying an accelerating Universe; Ω_Λ = 0.6911(62); and Ω_M = 0.3089(62) (see [14] for the parameters and related uncertainties).
With these values we have:

D_EXP = (c/H₀) × ∫_0^z (1 + z′)^n dz′ / √(Ω_M (1 + z′)³ + Ω_Λ)    (5)

where z is the redshift. Substituting D_TRAV in (4) with D_EXP derived in (5), we finally obtain the delays between the arrival times of photons of different energies as a function of the specific dispersion relation adopted, the specific cosmology adopted, and the redshift:

Δt_QG = ± (ξ/H₀) × [E_PHOT / (ζ m_PLANCK c²)]^n × ∫_0^z (1 + z′)^n dz′ / √(Ω_M (1 + z′)³ + Ω_Λ)    (6)
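Equation (6) can be evaluated numerically as in the following sketch, which assumes the ΛCDM parameters quoted above; the energy and redshift chosen for the printed example are illustrative only.

```python
import numpy as np
from scipy.integrate import quad

H0 = 67.74 * 1.0e3 / 3.0857e22   # Hubble constant in s^-1
OMEGA_M, OMEGA_L = 0.3089, 0.6911
E_PLANCK_KEV = 1.2e19 * 1.0e6    # Planck energy, 1.2e19 GeV expressed in keV

def qg_delay(z, delta_e_kev, n=1, xi=1.0, zeta=1.0):
    """LIV delay of Eq. (6), in seconds, between photons whose energies
    differ by delta_e_kev, for a source at redshift z (n = 1 by default)."""
    weight = lambda zp: (1.0 + zp) ** n / np.sqrt(
        OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L)
    integral, _ = quad(weight, 0.0, z)
    return xi * (delta_e_kev / (zeta * E_PLANCK_KEV)) ** n * integral / H0

# Delay between a 1 MeV photon and a very low-energy one from a GRB at z = 1
print(f"dt = {qg_delay(1.0, 1000.0):.2e} s")
```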
Computation of the expected delays: long GRB at different redshifts
We considered a bright long GRB lasting t_GRB = 40 s, with an average flux in the 50-300 keV energy band of φ_GRB = 6.5 photons/s/cm², a background flux of φ_BCK = 2.8 photons/s/cm², and a variability timescale of ∼5 ms, as discussed in Section 1.
We selected eight consecutive energy bands from 5 to 50 MeV. For an overall collecting area of 100 m², the number of detected photons in each band was computed adopting a Band function, an empirical function that fits GRB spectra well [16], where E is the photon energy and dN/dE the photon flux density. The delay accuracies were rescaled from (2) adopting the most conservative assumption that E_CC(N) scales as (N/3.7 × 10⁶)^(−0.5) (as expected from counting statistics) and not as the (N/3.7 × 10⁶)^(−0.58) of (2). We adopted the geometric mean of the lower and upper limits of a given energy band, E_min and E_max respectively, as representative of the average energy of the photons in that band: E_AVE = √(E_min × E_max). With this, the energy differences between photons of different energy bands w.r.t. photons of very low energy (E ∼ 0) are ΔE_PHOT = E_AVE. We adopted the cosmology described in Section 2.1, a first-order dispersion relation (n = 1), and, finally, ξ = 1 and ζ = 1. The Quantum Gravity delays between the arrival times of photons in the different energy bands were computed with (6) for redshift values z = 0.1, 0.5, 1.0, 3.0, typical of GRBs, as shown in Fig. 4. The results are shown in Table 1 (photon fluence and expected delays induced by a Quantum Gravity first-order dispersion relation for the bright long GRB described in Section 1, observed with a detector of cumulative effective area of 100 m², e.g. obtained by adding the photons collected by N = 10⁴ nano-satellites of 100 cm² each). Numbers in red and blue refer to delays below and just above the one-sigma accuracy, respectively; numbers in black are above three sigma.
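A sketch of this band-by-band photon bookkeeping: the Band-function parameters and normalization below are typical placeholder values (not the fitted spectrum of this GRB), and the bands used are the eight THESEUS bands adopted later in the analysis.

```python
import numpy as np
from scipy.integrate import quad

def band_dnde(e_kev, alpha=-1.0, beta=-2.3, e0_kev=300.0, norm=1.0):
    """Band (1993) photon spectrum dN/dE; norm (photons/cm^2/s/keV at
    100 keV) sets an arbitrary absolute scale for this illustration."""
    e_break = (alpha - beta) * e0_kev
    if e_kev < e_break:
        return norm * (e_kev / 100.0) ** alpha * np.exp(-e_kev / e0_kev)
    return (norm * (e_break / 100.0) ** (alpha - beta)
            * np.exp(beta - alpha) * (e_kev / 100.0) ** beta)

# Eight THESEUS energy bands (keV) used in the analysis section below
bands = [(2, 25), (25, 50), (50, 150), (150, 300),
         (300, 1000), (1000, 2000), (2000, 5000), (5000, 10000)]

area_cm2, duration_s = 1.0e6, 40.0   # 100 m^2 cumulative area, 40 s burst
for emin, emax in bands:
    flux, _ = quad(band_dnde, emin, emax)     # photons/cm^2/s in the band
    n_phot = flux * area_cm2 * duration_s     # photons collected in the band
    e_ave = np.sqrt(emin * emax)              # geometric-mean band energy
    print(f"{emin:5d}-{emax:5d} keV: E_ave = {e_ave:8.1f} keV, "
          f"N = {n_phot:.2e}")
```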
Intrinsic delays or quantum gravity delays?
Because of unknown details of the Fireball model, intrinsic delays in the emission of photons in different energy bands are possible. For a given GRB, these intrinsic delays can mix with, or even mimic, a genuine quantum gravity effect, making its detection impossible. However, intrinsic delays in the emission mechanism are independent of the distance of the GRB, whereas the delays induced by a photon dispersion law are proportional both to the distance traveled (a known function of redshift) and to the differences in photon energy. This double dependence on energy and redshift is the unique signature of a genuine Quantum Gravity effect. This behaviour, shown in Table 1, demonstrates that, given an adequate collecting area, GRBs of known redshift are indeed excellent tools for effectively searching for a first-order dispersion law for photons.
The sample
We considered a sample of long GRBs detectable by THESEUS in one year of mission, as derived from the mock catalogue of 2 million long GRBs produced by Ghirlanda et al. (see Ghirlanda et al., 2021, for details). Moreover, we considered the fraction of long GRBs in the adopted mock catalogue that are detectable by THESEUS and for which a measure of the redshift is obtainable, according to the sensitivity of the mission and the conditions under which a good redshift measurement is possible (see Mereghetti et al., 2021, for details). This gave us a sample of about 200 long GRBs per year. In order to apply the delay accuracy of (2), derived under the hypothesis of a signal-to-background ratio ∼2, we further selected the GRBs that fulfil this criterion (∼4 per year). It is possible, in principle, to use the whole sample of 200 GRBs per year; however, for GRBs with signal-to-background ratio < 2, the accuracy in computing the delays has a much more complex dependence on the number of photons in the GRB than that expressed in (2). In this case, further studies are required to obtain a reliable relation expressing the delay accuracy as a function of the number of photons in the GRB and in the background. The inclusion of the whole sample of GRBs in our analysis will be possible once this more complex relation is adequately investigated, but this is beyond the scope of this short report and will be discussed elsewhere. Finally, we considered two scenarios: a mission lasting for its nominal duration of 3 years (13 GRBs in the sample), and an extended mission lasting 7.5 years (27 GRBs in the sample).¹
The analysis
We first explored, for each GRB and for each energy band E_PHOT,i, i = 1, ..., N_bands, where N_bands is the total number of energy bands adopted, the dependence of the delays on the redshift; according to (6) with n = 1 and ξ ∼ 1, this dependence is linear in the redshift function f(z) of (8). Fits of (8) as a function of f(z) are shown in Fig. 5 for the eight energy bands chosen to cover the whole energy range of THESEUS, from 2 keV to 10 MeV. In particular these energy bands are: 2-25 keV; 25-50 keV; 50-150 keV; 150-300 keV; 300-1000 keV; 1000-2000 keV; 2000-5000 keV; 5000-10000 keV. The slopes s_i of these fits, one per energy band (i = 1, ..., N_bands), were then fitted with (10) as a function of h(E_PHOT,i) = g(E_PHOT,i)/H_0, as also shown in Fig. 5. The slope of this second linear fit gives the LIV strength parameter μ. As is customary in this field, the parameter μ expresses the strength of the LIV effect: the higher the μ, the greater the effect of Quantum Gravity, in the sense that a value of, e.g., μ = 5 implies that the effects of Quantum Gravity are already relevant at energies equal to 1/5 of the Planck energy, m_PLANCK c^2.
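The two-step fitting strategy can be illustrated with synthetic numbers. The sketch below is not the actual analysis code: the values of f(z) and h(E), the noise level and the sample size are all invented for illustration; it only shows how a LIV strength μ injected linearly in both energy and redshift is recovered from the slopes of two successive linear fits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: in the real analysis f(z) and h(E_i) come from
# (6), (8) and (10); here they are arbitrary illustrative values.
f_z = np.array([0.1, 0.4, 0.9, 2.1])   # one value per GRB
h_E = np.linspace(0.2, 1.0, 8)         # one value per energy band
mu_true = 30.0                          # injected dummy LIV strength

# Injected delays: linear both in f(z) and in h(E), plus measurement noise.
delays = mu_true * np.outer(h_E, f_z) + rng.normal(0.0, 0.5, (8, 4))

# Step 1: for each band i, fit delay vs f(z); the slope is s_i.
s = np.array([np.polyfit(f_z, delays[i], 1)[0] for i in range(8)])

# Step 2: fit s_i vs h(E_i); the slope of this line estimates mu.
mu_hat, _ = np.polyfit(h_E, s, 1)
print(f"recovered mu = {mu_hat:.1f} (injected {mu_true})")
```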
To quantify the capability of THESEUS in deriving a robust upper limit (or detection) of a LIV effect, we injected, in our sample, a LIV parametrised by a dummy value of μ. In particular we adopted μ = 30 in the simulations performed. The three sigma upper limit on μ has been computed from the one sigma error on the slope quoted in the fit.
To be conservative, in performing this last fit we made two assumptions: i) we artificially amplified by one order of magnitude the statistical errors on the quantities s_i w.r.t. those obtained by applying (2) to our Monte-Carlo simulations of long GRBs; this is done to take into account several effects that could worsen the statistical accuracies, the most relevant of which is the (a priori unknown) variation of the shape of the GRB light-curve in different energy bands, which could bias the value of the cross-correlation (we adopted μ = 30 for generating the dummy LIV effect and, as an upper limit to the value of the μ parameter, three times the one sigma uncertainty obtained from the distribution of 1000 Monte-Carlo simulations); ii) we considered two scenarios, the first in which the similarity of the light-curves is guaranteed over the whole THESEUS energy band, 2 keV-10 MeV; in this case all the s_i were included in our analysis. In a more conservative approach, we assumed that the similarity of the light-curves holds only for energies ≤ 1 MeV and, consequently, all the s_i above this threshold were excluded from our analysis.
Our results are shown in Fig. 6 where we explored the two scenarios mentioned above, namely a mission lasting for its nominal duration, 3 years (13 GRBs in the sample), and an extended mission, lasting for 7.5 years (27 GRBs in the sample).
We compare our results with [17], who set a robust constraint on LIV using Fermi-LAT GRB data. Applying different estimation procedures, developed on the basis of statistical measures, to the eight observed GRBs relatively bright at multi-GeV energies detected by Fermi-LAT, they constrained μ ≥ 50 and μ ≥ 15, depending on the hypotheses and statistical methods adopted. For a mission lasting for its nominal duration, 3 years, the results are a three sigma upper limit of μ ≤ 9 considering the whole energy range (2 keV-10 MeV) and a three sigma upper limit of μ ≤ 20 considering the energy range 2 keV-1 MeV. For a mission lasting for an extended duration, 7.5 years, the results are a three sigma upper limit of μ ≤ 7 considering the whole energy range (2 keV-10 MeV) and a three sigma upper limit of μ ≤ 15 considering the energy range 2 keV-1 MeV. | 4,621.4 | 2021-12-01T00:00:00.000 | ["Physics"] |
Characteristic wave velocities in spherical electromagnetic cloaks
We investigate the characteristic wave velocities in spherical electromagnetic cloaks, namely, phase, ray, group and energy-transport velocities. After deriving explicit expressions for the phase and ray velocities (the latter defined as the phase velocity along the direction of the Poynting vector), special attention is given to the determination of group and energy-transport velocities, because a cursory application of conventional formulae for local group and energy-transport velocities can lead to a discrepancy between these velocities if the permittivity and permeability dyadics are not equal over a frequency range about the center frequency. In contrast, a general theorem can be proven from Maxwell's equations that the local group and energy-transport velocities are equal in linear, lossless, frequency dispersive, source-free bianisotropic material. This apparent paradox is explained by showing that the local fields of the spherical cloak uncouple into an E wave and an H wave, each with its own group and energy-transport velocities, and that the group and energy-transport velocities of either the E wave or the H wave are equal and thus satisfy the general theorem.
Introduction
Invisibility cloaking has received considerable attention since the theory was first presented in [1] and experimentally demonstrated to within an approximation in [2]. The approach presented in [1] provides a conceptually simple method for designing a cloak. One can imagine that the trajectories of electromagnetic waves passing through a region of warped space must conform to the local metric. Once the desired trajectory is determined through a transformation applied to Cartesian straight trajectories, the differential operators in Maxwell's equations in the transformed space lead to space-dependent coefficients that can be reinterpreted in terms of constitutive relations of an anisotropic, inhomogeneous medium. In [3], it was shown that cloaking can be reformulated as a boundary value problem with a single first-order Maxwell differential equation for linear anisotropic media, leading to explicit formulae for the relative permittivity and permeability dyadics and fields of ideal spherical and circular cylindrical annular cloaks.
Causality-energy conditions imply that electromagnetic incident fields with a finite bandwidth cannot be perfectly cloaked [3]. Consequently, it is desirable to evaluate the performance degradation of electromagnetic cloaks as a function of the frequency for permittivity and permeability dyadics that satisfy realistic causality-energy relations [4]- [8]. Toward this end, we investigate some characteristic wave velocities of spherical electromagnetic cloaks as determined by the Poynting vector, the local differential propagation constant, and the phase velocity. Of particular interest are the group and energy-transport velocities, which provide information on the potential cloaking bandwidth.
Explicit formulae are provided for the phase velocity and the Poynting vector inside an ideal spherical cloak illuminated by a plane wave. It is shown that the 'complex' Poynting vector is real-valued and divergence-free inside the cloak, and that it is aligned with the 'compressed' rays obtained with the radial transformation function [9]. On the other hand, the phase velocity is not aligned with the Poynting vector, thus leading to the definition of a 'ray velocity' as the phase velocity along the direction of the Poynting vector.
The calculation of the group and energy-transport velocities requires a realistic frequency dependence for the permittivity and permeability dyadics. Since the frequency dependences of these two dyadics may be significantly different in an anisotropic material, it is useful to assume physics-based forms for these functions in order to investigate the degradation in cloaking as a function of frequency about a center frequency ω_0 at which the relative permittivity and permeability dyadics are equal. We calculate the local group velocity v_g = ∇_k ω(k) and the energy-transport velocity, defined as the time-average Poynting vector S(k) divided by the energy density U(k) [v_e = S(k)/U(k)], for a general spherical cloak. If the variation with frequency of the relative permeability and permittivity dyadics is the same, we find that the group and energy-transport velocities are identical (v_g = v_e). However, if the variation with frequency of the relative permeability and permittivity dyadics is different, it appears that the group and energy-transport velocities are not necessarily equal even at the center frequency where the relative permeability and permittivity are the same [4].
This apparent discrepancy between the values of the local group velocity and the energy-transport velocity is paradoxical in view of the general theorem, provable from Maxwell's equations for linear, lossless, frequency dispersive and anisotropic material, that the local group velocity and the energy-transport velocity are identically equal. In order to resolve this apparent paradox, we provide an especially transparent derivation of this general theorem and examine its application to the material of dispersive spherical cloaks [10].
The paper is organized as follows: section 2 summarizes the results for the phase and ray velocities in spherical cloaks at their operational frequencies. Section 3 provides an especially transparent proof of the equality between the local group velocity and the energy-transport velocity in linear, lossless, frequency dispersive and bianisotropic material. In section 4, the electromagnetic fields of the spherical cloaks are shown to uncouple into E and H waves, and the local group and energy-transport velocities are proven equal for the individual E and H waves. Numerical results for the ray, group and energy-transport velocities are presented in section 5 and the paper ends with a concluding section 6.
Ray and phase velocities in spherical cloaks
The boundary value formulation of electromagnetic cloaking presented in [3] is based on the requirements that the cloaking occurs for all possible incident fields, that the cloaks have continuous tangential components of the E and H fields across their outer surfaces, and that the normal components of the D and B fields are zero at the inner material surfaces of the cloaks. The tangential-field boundary conditions at the outer surface of a cloak ensure zero scattered fields, and the normal-field boundary conditions at the inner surface of a cloak are compatible with zero total fields inside the interior cavity of the cloak. For a cloak consisting of a spherical annulus of anisotropic material with inner radius a and outer radius b, these boundary conditions lead to the following expressions for the relative permittivity and permeability dyadics at the operational frequency ω_0:

ε(r)/ε_0 = μ(r)/μ_0 = [f^2(r)/(r^2 f'(r))] r̂r̂ + f'(r)(θ̂θ̂ + φ̂φ̂),    (1)

where ε_0 and μ_0 are the free-space permittivity and permeability, respectively, r is the radial coordinate of a spherical coordinate system (r, θ, φ) with origin at the center of the spherical cloak, f(r) is the 'compressed radial coordinate function' satisfying the boundary conditions f(a) = 0 and f(b) = b, and the prime superscripts denote differentiation with respect to r.
The field distributions associated with the relative permittivity and permeability dyadics in (1) are given in (2), where (E_inc, H_inc) are the incident fields. The particularly simple linear choice for the radial function, f(r) = b(r − a)/(b − a) (3), yields the spherical cloak of Pendry et al [1].
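As an illustration of (1) and (3), the following sketch evaluates the nonzero components of the relative permittivity (equal to the relative permeability) for the linear Pendry map; it assumes the standard transformation-optics form quoted above and is not code from the reference papers.

```python
import numpy as np

def pendry_cloak_parameters(r, a, b):
    """Relative permittivity/permeability components of a spherical cloak
    with the linear radial map f(r) = b (r - a) / (b - a), assuming the
    standard transformation-optics form eps_rr = f^2 / (r^2 f'),
    eps_theta = eps_phi = f'."""
    fp = b / (b - a)              # f'(r), constant for the linear map
    f = fp * (r - a)              # compressed radial coordinate
    eps_rr = f**2 / (r**2 * fp)   # radial component, vanishes at r = a
    eps_tt = fp                   # theta (= phi) component
    return eps_rr, eps_tt

a, b = 1.0, 2.0
for r in np.linspace(a, b, 5):
    err, ett = pendry_cloak_parameters(r, a, b)
    print(f"r = {r:.2f}: eps_rr = {err:.3f}, eps_theta = {ett:.3f}")
```

Note that eps_rr = 0 at r = a, consistent with the zero normal components of D and B required at the inner surface.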
Poynting vector and phase velocity
It is interesting to investigate the simple case of a plane wave impinging on the spherical cloak. It is not restrictive to assume that the plane wave travels in the −ẑ direction with electric field polarized along ŷ, as written in (4), where ζ_0 is the free-space impedance and an exp(−iω_0 t), ω_0 > 0, time dependence has been assumed and suppressed. Inserting these fields into (2), one obtains the fields (5) inside the cloak, from which the simple expression (6) can be derived for the time-average Poynting vector. The 'complex' Poynting vector, E × H*/2, inside the cloak is real, and this is true whenever the incident field exhibits a local plane-wave structure in the region captured by the space compression. Furthermore, it is easily verified that ∇ · S = 0 everywhere, a result that confirms that there is no power loss at any point of the cloak. The unit vector ŝ in the direction of the Poynting vector within the material of the spherical cloak, given by (7), defines the tangent to the power-flow rays at each point in the material of the cloak and can be interpreted as a generalized ray direction, thus leading to pictures that resemble ray-path propagation in inhomogeneous isotropic media. However, a conventional ray interpretation is not fully possible because of the anisotropy of the medium. The generalized ray-paths inside the cloak are plotted in figure 1(a) from (7) for the case b = 2a. It is apparent that the rays penetrate the cloaking material while avoiding the central free-space cavity.
Within the transformation optics framework (see, for instance, [1]), the power-flow rays are directly defined from a mapping of the free-space incident ray-fields to the incident ray-fields 'compressed' within the spherical cloak material by the coordinate transformation function f(r). We now show that the rays obtained from this mapping are identical to the power-flow rays obtained from the Poynting vector. Indeed, the equation of the incident ray-field at a distance ρ_0 from the z-axis is r = ρ_0/sin θ, while the equation of the 'compressed' ray-field within the material of the spherical cloak is f(r) = ρ_0/sin θ, from which the ray equation (8) is obtained. The unit tangent vector t̂ at a generic point of the ray is given in (9). Comparing (9b) with (7) shows that t̂ = ŝ, that is, the unit tangent vector at any point of the rays formed by simply mapping the original rays through the field transformation function f(r) is equal to the unit vector in the direction of the Poynting vector at that point. We have proven this for each straight-line ray of an incident plane wave illuminating the spherical cloak. However, because the incident rays of any source (lying outside the cloak) illuminating the spherical cloak are straight lines in free space, each incident ray can be considered locally as part of a plane wave in the high-frequency limit. Thus, the proof holds for any incident ray illuminating the spherical cloak. In other words, the rays defined by the Poynting vector in a spherical cloak illuminated by an arbitrary incident field are identical to the rays formed by mapping the incident rays through the compression function f(r).
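The 'compressed' ray-paths of figure 1(a) can be reproduced directly from f(r) = ρ_0/sin θ, since the linear Pendry map is invertible in closed form. The sketch below (with illustrative values a = 1, b = 2 and a few ρ_0) checks that every ray enters at r = b and never penetrates the inner cavity r < a.

```python
import numpy as np

a, b = 1.0, 2.0

def f_inv(fr):
    """Inverse of the Pendry map f(r) = b (r - a)/(b - a)."""
    return a + fr * (b - a) / b

# Each incident ray is a straight line at distance rho0 from the z-axis;
# inside the cloak it follows f(r) = rho0 / sin(theta).
for rho0 in (0.25, 0.75, 1.25):
    th0 = np.arcsin(rho0 / b)                 # entry angle where f(r) = b
    theta = np.linspace(th0, np.pi - th0, 201)
    r = f_inv(rho0 / np.sin(theta))
    # Coordinates in a plane of constant phi (these are what one would plot).
    z, rho = r * np.cos(theta), r * np.sin(theta)
    print(f"rho0 = {rho0}: closest approach r_min = {r.min():.3f} (> a = {a})")
```

The closest approach is r_min = a + ρ_0 (b − a)/b > a, so the traced rays bend around the cavity exactly as in figure 1(a).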
The equiphase wavefronts are obtained by equating the argument of the exponential in equations (5) to a constant, f(r) cos θ = constant, so that f'(r) cos θ dr − f(r) sin θ dθ = 0. The unit tangent vector τ̂ to these equiphase wavefronts then follows as in (10) and is plotted in figure 1(b). Comparing (10) with (7) shows that the tangents to the wavefronts are rotated 90° with respect to the Poynting vector lines (power-flow rays). This occurs in any plane of constant φ and holds for any compression function f(r).
The local differential propagation constant normal to the equiphase wavefront surfaces is given by (11) and its direction is shown in figure 1(c). It is worth noting that the local normal to the wavefronts is not aligned with the Poynting vector, in contrast to what happens for conventional rays. The magnitude of the local differential phase velocity normal to the wavefronts is given by (12), where c is the free-space speed of light, and the magnitude of the phase velocity along the Poynting-vector rays follows as (13). Along the ray path inside the cloak, the generalized rays propagate with an average phase velocity greater than the speed of light and recover, as they exit, the same phase as the plane wave along the straight path. The latter statement can be easily verified by considering an incident ray at a distance ρ_0 from the z-axis, as shown in figure 2. In the absence of the cloak, the time delay accumulated by the ray along the path from point A to point B is given by (14), where 2z_0 is the distance between A and B. In the presence of the cloak, the time delay is given by (15). Hence, as expected, all the rays travel from one side of the cloak to the other in the same time as they would take in the absence of the cloak.
Group and energy-transport velocities in bianisotropic media
Consider Maxwell's equations in k-ω space for lossless, homogeneous, spatially non-dispersive, source-free media, k × E = ωB and k × H = −ωD (16), with bianisotropic constitutive relations D = ε·E + ξ·H and B = ζ·E + µ·H (17), in which the exp[i(k · r − ωt)] space-time dependence has been suppressed and the rectangular components of the propagation vector k can take any values on the real line, (−∞, −∞, −∞) < (k_x, k_y, k_z) < (+∞, +∞, +∞). In a lossless medium, the permittivity, permeability and magnetoelectric constitutive dyadics obey the Hermitian relations ε = ε†, µ = µ† and ζ = ξ† (18). Elimination of all the fields but E from (16) and (17) leads to (19), from which the dispersion equation (20) follows, where the antisymmetric dyadic defined in (21) represents the cross product with k. Because the medium is lossless and k is a real propagation vector, energy conservation requires that the solutions ω(k) to (20) be real. Also, it can be proven [12] that for lossless, homogeneous, spatially non-dispersive, bianisotropic material that is also reciprocal, solutions for a given ω come in pairs with propagation vectors k and −k*, which equals −k for the real k propagation vectors considered here in three-dimensional (3D) Fourier k-space.
Depending on the values of the constitutive dyadics, more than one characteristic value of each positive or negative ω(k) may satisfy (20) and each of these characteristic values generally has a different associated characteristic field solution to (16) and (17). If this is the case, that is, there exists more than one characteristic wave solution to (16) and (20), then in the following derivations, ω(k) refers to any one of these characteristic values and the fields are those associated with that particular characteristic value.
Wavepacket and group velocity
The real space-time fields in the medium can be expressed as a four-fold integral consisting of a 3D Fourier transform of the fields in k space and an analytic Fourier transform over positive frequencies ω [13, chapter 5]. Moreover, for any one characteristic value of ω(k) > 0 obtained from the dispersion equation (20), the frequency Fourier transform reduces to this one discrete value and the electric field takes the form of (22), with dk = dk_x dk_y dk_z and integration limits (−∞, −∞, −∞) < (k_x, k_y, k_z) < (+∞, +∞, +∞). A wavepacket is defined by assuming that the wave propagation vectors are concentrated about a central propagation vector k_0 such that E(k) ≈ 0 unless |k − k_0|/|k_0| = |Δk|/|k_0| ≪ 1. Assuming ω(k) is expandable about k_0, we have ω(k) ≈ ω_0 + (k − k_0) · ∇_k ω(k_0) (23), so that E(r, t) in (22) can be approximated as in (24a), or simply as E(r, t) ≈ Re{exp[i(k_0 · r − ω_0 t)] g[r − ∇_k ω(k_0) t]} (24b), where ω_0 = ω(k_0) and g(r) is an envelope function that varies slowly over a distance equal to the wavelength λ_0 = 2π/|k_0| in the medium. The velocity of this envelope or wavepacket is called the group velocity and is determined from (24b) as v_g = ∇_k ω(k_0) (25), which is not necessarily in the same direction as the phase velocity v_p = (ω_0/|k_0|) k̂_0. As previously explained, if the material is reciprocal, the group velocities in the ±k_0 directions are equal, and thus only values of (k_0x, k_0y, k_0z) > 0 as well as ω(k) > 0 need to be considered for reciprocal material.
Proof that group and energy-transport velocities are equal
Taking the differential of (16a) and (16b) dotted into H* and E*, respectively, subtracting the two resulting equations, and then using the constitutive and lossless relations in (17) and (18) for bianisotropic media leads to (27), dk · S(k) = dω U(k), where S = Re{E × H*}/2 (28) is the time-average Poynting vector (power flow per unit area) and U (29) is the energy density in a lossless, homogeneous, frequency dispersive (but spatially non-dispersive), bianisotropic medium [14]. Since (27) holds for all dk, it follows that (30) and (31) hold, where v_e(k) = S(k)/U(k) is the 'energy-transport velocity' for a wavepacket with phase velocity in the k-direction. The relationships in (25), (30) and (31) can be summarized in the one extended equation v_g = ∇_k ω(k) = S(k)/U(k) = v_e (32), which says that the group and energy-transport velocities of a wavepacket are equal in a lossless, homogeneous, frequency dispersive (but spatially non-dispersive), bianisotropic medium.
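For reference, the chain of identities proved in this subsection can be summarized as follows; this is a compact restatement (a sketch, assuming Hermitian lossless dyadics and real k and ω), with the energy density written in the standard bianisotropic form, and should be compared against (27)-(32).

```latex
% Sketch of the group/energy-transport theorem, under the stated assumptions.
\begin{align*}
&\mathbf{k}\times\mathbf{E} = \omega\mathbf{B}, \qquad
 \mathbf{k}\times\mathbf{H} = -\omega\mathbf{D}, \\
&\mathbf{D} = \bar{\bar{\epsilon}}\cdot\mathbf{E} + \bar{\bar{\xi}}\cdot\mathbf{H},
 \qquad
 \mathbf{B} = \bar{\bar{\zeta}}\cdot\mathbf{E} + \bar{\bar{\mu}}\cdot\mathbf{H},
 \qquad
 \bar{\bar{\epsilon}} = \bar{\bar{\epsilon}}^{\dagger},\;
 \bar{\bar{\mu}} = \bar{\bar{\mu}}^{\dagger},\;
 \bar{\bar{\zeta}} = \bar{\bar{\xi}}^{\dagger}.
\end{align*}
% Differentiating along the dispersion surface omega = omega(k) and combining
% the two curl equations dotted into H* and E* gives dk . S = domega U, with
\begin{align*}
\mathbf{S} &= \tfrac{1}{2}\,\mathrm{Re}\{\mathbf{E}\times\mathbf{H}^{*}\}, \\
U &= \tfrac{1}{4}\Big[
 \mathbf{E}^{*}\!\cdot\frac{\partial(\omega\bar{\bar{\epsilon}})}{\partial\omega}\cdot\mathbf{E}
 + \mathbf{H}^{*}\!\cdot\frac{\partial(\omega\bar{\bar{\mu}})}{\partial\omega}\cdot\mathbf{H}
 + 2\,\mathrm{Re}\Big\{\mathbf{E}^{*}\!\cdot\frac{\partial(\omega\bar{\bar{\xi}})}{\partial\omega}\cdot\mathbf{H}\Big\}\Big],
\end{align*}
% so that v_g = grad_k omega(k) = S(k)/U(k) = v_e.
```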
Evaluation of group and energy-transport velocities in a spherical cloak
In order to evaluate the group and energy-transport velocities in a spherical cloak, specific frequency dependencies must be assumed for the relative constitutive dyadics. These frequency dependencies may be significantly different for the permittivity and permeability dyadics, but they are assumed to be equal in accordance with (1) at a center frequency ω 0 . Reasonable functional forms for these frequency dependencies will be explicitly given in the next section.
Here, we simply let the relative permittivity and permeability dyadics have the general frequency dependence of (33), where ε(ω_0)/ε_0 = µ(ω_0)/µ_0 such that (34) holds at the center frequency, but the dyadics are not necessarily equal for ω ≠ ω_0. Note that ε_r, ε_s, µ_r, µ_s are elements of the relative permittivity and permeability dyadics. The expressions in (25) and (31) for the group velocity and energy-transport velocity have been derived under the assumption that the medium is homogeneous. In order to apply these equations to the inhomogeneous material of the spherical cloak, we let the local values of ε(ω)/ε_0 and µ(ω)/µ_0 in (33) form part of an infinite homogeneous medium, insert those values into (20), and solve for ω(k). (The group and energy-transport velocities evaluated under this assumption of 'local homogeneity' give the 'ray-optics approximation' to the velocity of an actual pulse propagating in an inhomogeneous spherical cloak, an approximation that becomes more accurate with less variation per wavelength in ε(ω)/ε_0 and µ(ω)/µ_0 [15, sections 1.6 and 1.7]. The issue addressed here in section 4 is the discrepancy that occurs between the group and energy-transport velocities if it is not realized that the fields in the cloak separate into uncoupled E and H waves. This discrepancy exists independently of the accuracy of the ray-optics approximation of local homogeneity that allows us to apply (25) and (31).) Letting the local spherical components (k_r, k_θ, k_φ = 0) coincide with the rectangular components of k of the wavepacket in the associated infinite homogeneous medium, we find that the dispersion equation (20) can be locally factored as in (35), that is, into the product of the two functions F_E(k, ω) and F_H(k, ω) of (36). The individual dispersion equations F_E(k, ω) = 0 and F_H(k, ω) = 0 admit two generally different uncoupled solutions, ω = ω_E(k) and ω = ω_H(k) given in (37) and (38), respectively, corresponding to a local E wave (H_φ = 0) and a local H wave (E_φ = 0). The individual group velocities of the E and H waves in (37) and (38) can be determined from the identities (39), leading to the group-energy velocities in (40). Unless (41) is satisfied, the group and energy-transport velocities in (42b) and (43) are not equal, and neither of them gives the group-energy velocity in (40) of the E wave or the H wave. Failure to recognize that the spherical-cloak material supports two uncoupled wavepackets, and that these wavepackets can have different group-energy velocities if the variations of µ(ω)/µ_0 and ε(ω)/ε_0 do not satisfy (41) at ω = ω_0 (even though µ(ω_0)/µ_0 = ε(ω_0)/ε_0), leads to the imperfect results in (42b) and (43) for the group and energy-transport velocities in the spherical cloak, rather than to the correct results in (40).
Numerical results
In order to numerically evaluate the group and energy-transport velocities inside the spherical cloak, we assume the same realistic frequency variations for the relative permittivity and permeability dyadics as those adopted in [4]:
• ε_s(ω) and µ_s(ω) can be assumed to be slowly varying with frequency, and thus approximately constant, since ε_s(ω_0) = µ_s(ω_0) > 1;
• a Drude model is assumed for ε_r(ω), as in (44);
• a Lorentz model of the form µ_r(ω) = 1 + ... is assumed for µ_r(ω), as in (45).
Note that these values of ε_r, ε_s, µ_r, µ_s satisfy (34) at ω = ω_0. The expressions in (44)-(46) for the constituent parameters have been used together with (3) and (40) to numerically calculate the values of the group-energy velocities at the frequency ω_0 inside a spherical cloak with b = 2a. Figure 3 shows the values obtained for different values of the incident ray distance ρ_0 from the z-axis. It is apparent that the group-energy velocities for the E and H waves have similar behaviors but different values inside the cloak. The corresponding values of the ray velocity given in (13) are also depicted on the same plot. Finally, two-dimensional plots of the ray velocity and the group-energy velocities for E and H waves are shown in figures 4(a)-(c), respectively. The generalized ray trajectories [ŝ(k_0)], which are the same for both the E and H waves, are also shown in these figures.
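The frequency models listed above can be sketched as follows. The parameter values are illustrative only (the actual plasma and resonance frequencies of (44)-(45) are not reproduced in this excerpt); the snippet merely enforces, as in (34), that the Drude and Lorentz branches match the same cloak value at the center frequency ω_0.

```python
import numpy as np

w0 = 1.0   # center (operational) frequency, arbitrary units

def eps_r_drude(w, target):
    """Lossless Drude model eps_r(w) = 1 - wp^2 / w^2, with the plasma
    frequency wp chosen so that eps_r(w0) equals the cloak value `target`."""
    wp2 = (1.0 - target) * w0**2
    return 1.0 - wp2 / w**2

def mu_r_lorentz(w, target, wm=0.5):
    """Lossless Lorentz model mu_r(w) = 1 + F w^2 / (wm^2 - w^2), with the
    oscillator strength F chosen so that mu_r(w0) = target (wm is an
    illustrative resonance frequency, an assumption here)."""
    F = (target - 1.0) * (wm**2 - w0**2) / w0**2
    return 1.0 + F * w**2 / (wm**2 - w**2)

target = 0.5   # e.g. the radial cloak component at some r, from (1)
w = np.linspace(0.9, 1.1, 5) * w0
print("eps_r:", np.round(eps_r_drude(w, target), 4))
print("mu_r :", np.round(mu_r_lorentz(w, target), 4))
print("equal at w0:", np.isclose(eps_r_drude(w0, target), mu_r_lorentz(w0, target)))
```

Away from ω_0 the two branches differ, which is precisely the situation in which the combined formulae (42b) and (43) fail while the per-wave results (40) remain correct.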
As the plots in figures 3 and 4 indicate, there is an infinitely large pulse time delay for the ray that is directed toward the center of the cloak, a result found previously by Chen and Chan [4].
Conclusion
Characteristic phase, ray, group and energy-transport velocities have been determined in spherical electromagnetic cloaks with material constitutive parameters having realistic causal frequency variations. After providing explicit expressions for the phase and ray velocities, it is confirmed that the ray travel time across the space occupied by the cloak is the same with and without the cloaking material. Particular attention is given to the group and energy-transport velocities. The general theorem stating that the group velocity [∇_k ω(k_0)] equals the energy-transport velocity [S(k_0)/U(k_0)] in linear, lossless, homogeneous, frequency dispersive, source-free, bianisotropic material applies to a single characteristic local wavepacket in that material. If the dispersion equation allows for more than one characteristic solution, and thus more than one characteristic positive frequency ω(k) and wavepacket, the general theorem applies to each separate characteristic frequency and wavepacket but not to the combined dispersion equation and the combined fields of the characteristic waves. The discrepancy between local group and energy-transport velocities in the anisotropic material of spherical cloaks, which arises from a cursory application of the formulae for these velocities, is traced to the failure to recognize that the wavepacket solution in the spherical-cloak material uncouples into E and H characteristic waves with different characteristic frequencies if the frequency variations of the relative permittivity and permeability dyadics are not the same. A necessary | 5,297.6 | 2009-11-01T00:00:00.000 | ["Physics"] |
On the turbulent flow past a realistic open-cell metal foam
Abstract Turbulence is investigated in the lee of an open-cell metal foam layer. In contrast to canonical grids, metal foams are locally irregular but statistically isotropic. The solid matrix is characterised by two lengths, the ligament thickness $d_f$ and the pore diameter $d_p$. A direct numerical simulation is conducted on a realistic metal foam geometry for which $d_f/d_p = 0.14$ and the porous layer thickness is five times the pore diameter. The Reynolds number based on the pore size is ${\textit {Re}}_{d_p} = 4000$, corresponding to a Taylor-scale Reynolds number ${\textit {Re}}_\lambda \approx 80$. Closer to the foam than two pore diameters, the pressure and turbulent transports of turbulent kinetic energy are non-negligible. In the same region, ${\textit {Re}}_\lambda$ undergoes a steep decrease whereas the dissipation coefficient $C_{\epsilon }$ increases like ${\textit {Re}}_\lambda ^{-1}$. At larger distances from the porous layer, the classical grid turbulence situation is recovered, where the mean advection of turbulent kinetic energy equals dissipation. This entails a power-law decay of turbulent quantities and characteristic lengths. The decaying exponents of integral, Taylor and Kolmogorov scales are close to one-half, indicating that the turbulence simulated here differs from Saffman turbulence. Analysis of the scaling exponents of structure functions and the decorrelation length of dissipation reveals that small-scale fluctuations are weakly intermittent.
Fractal grids have attracted a lot of interest because of the specific type of turbulence they generate. A remarkable increase in the Reynolds number based on the Taylor scale with respect to usual passive grids was noticed by Seoud & Vassilicos (2007). Further studies investigating the decaying turbulence downstream of a set of multiscale grids (Krogstad & Davidson 2012) and a square-element fractal grid (Hearst & Lavoie 2014) have revealed that, while the region close to the grids can be characterised by residual inhomogeneity and is grid-dependent, in the far field, where development is accomplished, flow characteristics are in accordance with classical grid turbulence measurements.
The in-depth study of turbulence generated by fractal grids has revealed a peculiar behaviour of the dissipation coefficient C_ε = εL/u³ (here ε is the rate of dissipation) in a region close to the turbulence-generating grid, in conjunction with energy spectra that follow the −5/3 slope over a wide range of wavenumbers. This behaviour has been described as a breakdown of the classical dissipation scaling and is observed in wind tunnel experiments of grid-generated turbulence of different geometries (Mazellier & Vassilicos 2010; Isaza, Salazar & Warhaft 2014; Valente & Vassilicos 2014; Mora et al. 2019) and also in direct numerical simulation (DNS) data of decaying turbulence in a periodic box (Goto & Vassilicos 2016). The breakdown consists in a behaviour of C_ε at the initial stages of the decay that depends upon the inlet Reynolds number Re_M and the local Reynolds number Re_λ as follows: C_ε ∼ Re_M^{1/2}/Re_λ. This implies that L/λ ∼ Re_M^{1/2} along the direction of Re_λ decay (Valente & Vassilicos 2012).
The turbulent flow behind an open-cell metal foam has never been investigated before. Similar to regular grids, the open-cell metal foam geometry is characterised by two main length scales, i.e. the mean pore diameter d p and the mean ligament thickness d f . In contrast to regular grids, open cells are arranged randomly in space and their morphology, based on a polyhedral frame, is never exactly repeated. This generates a structure that is highly irregular and anisotropic at the pore scale but statistically isotropic at the macroscale. In addition, the metal foam layer investigated here has a thickness larger than a single ligament or a pore. Moreover, the solidity of metal foams, measured as 1 − ε, where ε represents grid porosity, is very different from the solidities typical of grids employed for grid turbulence, which generally range between 0.30 and 0.45. In high-porosity metal foams, this value is typically lower than 0.10 (Calmidi & Mahajan 2000).
In this paper, a description is given of turbulence downstream of a high-porosity open-cell metal foam. After a qualitative introduction to the flow investigated in § 3.1, the degree of homogeneity and isotropy of turbulence is investigated in § 3.2. The power-law behaviour of the turbulent kinetic energy k and of its dissipation rate ε is assessed in § 3.3 and § 3.4, respectively. The streamwise variations of turbulent length scales are considered in § 3.5. In § 3.6 the behaviour of the dissipation rate coefficient is investigated in detail, also in the context of recent research on the topic. In § 3.7 the discussion is about whether or not Saffman turbulence is achieved for the present flow configuration. The multiscale nature of the flow is examined by means of one-dimensional spectra in § 3.8. Then § 3.9 investigates high-order statistics, the role of intermittency and the decorrelation length of dissipation. Budget terms of turbulent kinetic energy and of velocity variances are reported in § 3.10, where the main mechanisms for the transport of energy and fluctuations are described in the vicinity of the solid structure and in the fully developed region. Final remarks are drawn in § 4.
Mathematical formulation and flow configuration
The system of equations solved numerically comprises the mass conservation and Navier-Stokes equations for incompressible flows (2.1), complemented with appropriate boundary conditions. The subscripts i and j take the values 1, 2 and 3 to denote the streamwise direction, x, and the two cross-flow directions, y and z, respectively. All the variables are made non-dimensional by the velocity at the inlet U_∞ and the mean pore diameter of the metal foam d_p. The Reynolds number based on the unperturbed velocity and the mean pore diameter is Re_dp = 4000. The governing equations (2.1) are solved using the high-order finite-difference method implemented in Incompact3d (Laizet & Lamballais 2009). Sixth-order compact schemes are used for spatial discretisation (Lele 1992), and time integration is performed with the third-order Adams-Bashforth scheme. The velocity field is evaluated on a Cartesian grid with uniform spacing along the three directions, and pressure is defined on a staggered grid. Pressure-velocity decoupling is accomplished by a fractional-step method, which determines the divergence-free velocity field by solving a Poisson equation. The Poisson problem is tackled in spectral space by using a modified wavenumber formalism, which allows for any kind of boundary conditions for the velocity field in physical space. Inflow/outflow boundary conditions are enforced along the streamwise direction and periodic boundary conditions are set along the cross-flow directions to represent statistical homogeneity. A uniform velocity field (U_∞, 0, 0) is prescribed at the inlet, while at the outlet the velocity is determined by the convection equation (2.2). The convection velocity c is calculated at each time step as the mean between the maximum and minimum values of the streamwise velocity component at the outlet. The representation of the intricate metal foam geometry is achieved via an immersed boundary method (IBM) based on direct forcing (Gautier, Laizet & Lamballais 2014). The IBM enforces the no-slip condition at the solid walls while preserving the simplicity of the finite-difference schemes applied to the Cartesian grid (Laizet & Lamballais 2009). Further details on the numerical methodology employed here are provided in Corsini & Stalio (2020).
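As a minimal illustration of the time-advancement and outflow treatment described above (and emphatically not Incompact3d code), the sketch below advances the one-dimensional convective outflow equation with the third-order Adams-Bashforth scheme, recomputing the convection velocity c at every step as the mean of the extreme outlet velocities.

```python
import numpy as np

# Sketch: du/dt + c du/dx = 0 advanced with third-order Adams-Bashforth
# (weights 23/12, -16/12, 5/12) and first-order upwind in space (c > 0).
nx, dx, dt = 64, 0.1, 0.01
u = np.exp(-((np.arange(nx) * dx - 3.0) ** 2))   # toy outlet profile

def rhs(u, c):
    dudx = np.zeros_like(u)
    dudx[1:] = (u[1:] - u[:-1]) / dx             # upwind difference
    return -c * dudx

hist = []
for n in range(200):
    c = 0.5 * (u.max() + u.min())                # convection velocity, as in the text
    hist.append(rhs(u, c))
    if len(hist) >= 3:
        u = u + dt * (23 * hist[-1] - 16 * hist[-2] + 5 * hist[-3]) / 12.0
    else:                                        # Euler start-up steps
        u = u + dt * hist[-1]
print("outlet profile advected; max u =", round(u.max(), 4))
```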
A sketch of the computational domain is displayed in figure 1. The extents of the domain along the streamwise and the two cross-flow directions are L_x = 45d_p and L_y = L_z = 11.25d_p, respectively. The origin of the coordinate system is located at the centre of the downstream face of the porous matrix; thus x = 0 describes the most upstream cross-flow plane where the fluid is not in contact with the solid phase. The thickness of the metal foam layer in the streamwise direction is 5d_p and it spans the whole domain in the cross-flow directions. It is placed at a 5d_p distance from the inlet section; this avoids interference between the upstream boundary and the solid matrix. The computational domain is discretised by n_x = 3073 grid nodes in the streamwise direction and n_y = n_z = 768 in the cross-flow directions. The spatial resolution is sufficiently fine to ensure that Δx = Δy = Δz ≤ 2η for x ≥ 5. Close to the porous layer (0 < x < 5), where dissipation is larger, Δx = Δy = Δz ≤ 5η. In the above comparisons, the Kolmogorov microscale η is calculated a posteriori from its definition. The time step is kept at Δt = 0.001 d_p/U_∞ during the simulation, which in terms of the Kolmogorov time scale τ_η yields Δt ≈ 0.033 τ_η. This corresponds to a Courant-Friedrichs-Lewy number CFL < 0.3. Statistical quantities are computed by averaging in time and along the homogeneous y and z directions. Gathering of statistics begins after one flow-through time from the start of the simulation. In order to obtain well-converged statistics, the time interval of collection is T = 225 d_p/U_∞, and three-dimensional snapshots of the velocity and pressure fields are sampled at equal time intervals of ΔT = 4.5 d_p/U_∞.
Metal foam geometry
The problem of the computer modelling of an intricate metal foam porous structure has been tackled in different ways. Pore-scale morphology can be reconstructed through X-ray tomography (Piller et al. 2013) or generated mathematically assuming an ideal cell geometry based on a virtual sample of regular polyhedra (Boomsma, Poulikakos & Ventikos 2003). In this study, where geometric periodicity is a key feature of the numerical representation and irregularity of the foam is a requirement, a third approach is adopted: the open-pore cellular structure is generated synthetically through a numerical algorithm, developed by August et al. (2015). Besides the excellent realism and periodicity, one further favourable feature of synthetic metal foams is the possibility to tune their porosity and permeability. Thanks to a diffuse interface representation of the phase-field approach (August et al. 2015), the thickness of ligaments can be easily adjusted. Figure 2 shows the details of a couple of cells of the synthetic structure.
The synthetic metal foam structure used in the simulation is characterised by the geometrical features listed in table 1. Both d_p and d_f are calculated as spatial averages and thus represent the mean pore diameter and the mean ligament thickness of the metal foam sample. The grid porosity, set to ε = 0.92, is representative of high-porosity metal foams (Calmidi & Mahajan 2000). Based on typical sizes of open-cell aluminium foams, an inflow velocity of U_∞ = 15 m s^-1 is obtained at Re_dp = 4000, assuming air at standard conditions as the working fluid. Table 1 also reports experimental conditions from previous wind tunnel studies on regular planar grids. Figure 2. Cells of the aluminium foam generated algorithmically (August et al. 2015).
A sample of the metal foam geometry with superimposed computational points is displayed in figure 2 of Corsini & Stalio (2020). Staircase patterns of the immersed boundaries approximate the rounded borders of the solid region. This referenced picture and the computed ratio between average ligament diameter and grid spacing d f / x ≈ 10 suggest that the ligaments are discretised by an adequate number of grid points. Figure 3(a) shows the streamwise component of the instantaneous velocity field in one of the snapshots collected. The uniform free stream of the inflow is disrupted by the irregularly arranged ligaments of the solid structure and velocity fluctuations arise within the porous matrix. Vortices of different orientations are shed from the ligaments and a wake is formed. The largest perturbations are observed close to the downstream edge of the porous matrix. The wakes originated by the ligaments develop in a non-uniform fashion and interact at variable lengths. The smaller wakes are seen to disappear after a couple of pore diameters, whereas larger wakes stemming from ligament clumps meet at a further distance from the foam. The larger wakes also last in time, as revealed by the streaks in the time-averaged velocity field U t shown in figure 3(b).
Homogeneity and isotropy
The approximation to statistical homogeneity in the cross-flow directions in grid turbulence is known to depend on the grid geometry and the Reynolds number. While, for regular grids, experiments suggest that the flow becomes nearly homogeneous for x/M > 40 (Comte-Bellot & Corrsin 1966; Mohamed & Larue 1990), for fractal grids, homogeneity is usually retrieved further downstream (Hearst & Lavoie 2014; Valente & Vassilicos 2011). The distribution of U_t in cross-flow planes is shown in figure 4 for six streamwise positions at increasing distance from the metal foam. Solid lines mark regions where the percentage variation of U_t relative to U_∞ has magnitude greater than 10 %, while dashed lines encompass regions of magnitude greater than 5 %. These are seen to gradually shrink along x. While inhomogeneity is in part to be ascribed to the limited size of the sample, composed only by the collection of snapshots in time, for x > 20 their extent is still appreciable. [Table 1 caption: Experimental conditions from Krogstad & Davidson (2010) and Kitamura et al. (2014) on the turbulence generated by regular grids are also included. Here M, d and σ denote the mesh width, the rod thickness and the solidity of the grid, respectively. An asterisk * denotes quantities expressed in dimensional form; quantities in parentheses indicate values deduced from other quantities provided in the same work.] In the x = 30 station, the time-averaged velocity U_t varies between 0.86 and 1.12. The isotropy level of the large scales of motion can be investigated through the ratio of root-mean-square (r.m.s.) velocity fluctuations along orthogonal directions. In the present case, the fluctuations of the x-component of velocity are the largest: in the developed region, the indicators u_rms/w_rms and u_rms/v_rms oscillate within the interval (1.5, 1.6), about a mean of 1.55 for u_rms/v_rms and of 1.56 for u_rms/w_rms, where the difference is ultimately due to the size of the sample employed in the simulations. In previous grid turbulence measurements (Kurian & Fransson 2009; Krogstad & Davidson 2010; Kitamura et al. 2014), the observed isotropy indicators are in general smaller than those measured here. Kitamura et al. (2014), who also collected experimental results by other authors, report u_rms/w_rms < 1.2 in all cases; similar values are also reported in the lee of fractal grids (Hurst & Vassilicos 2007; Gomes-Fernandes, Ganapathisubramani & Vassilicos 2012). More details about large-scale isotropy measures downstream of the present metal foam are reported in Corsini & Stalio (2020). Figure 5 displays the streamwise evolution of the variances of the three components of velocity, u_i u_i, as well as the turbulent kinetic energy k. Very close to the foam, for x < 1, velocity fluctuations are observed to remain constant. The negative slope then increases gradually in x until the fluctuations exhibit a power-law decay that persists until the end of the computational domain. As demonstrated, for example, in Tennekes & Lumley (1972), this is expected in the region where advection and dissipation of the turbulent kinetic energy become the only non-negligible terms in the transport equation of k; see § 3.10.
Decay of velocity fluctuations
Power-law parameters are sought in the form u²_rms = A(x − x_0)^(−n) (3.1) through a numerical procedure. In (3.1), A is the multiplicative coefficient, x_0 is the virtual origin and n is the decay exponent. As n is positive, the power law has a vertical asymptote (and a singularity) at the virtual origin x = x_0. As the parameters in (3.1) depend greatly upon the interval of sampling data considered, the interval limits are also determined inside the fitting procedure. A similar approach has been applied in Hearst & Lavoie (2014).
In the present work, a developed region I_d = [x_min, x_max] is employed, where the right border of the interval is kept fixed at x_max = 30.0, clear of possible, yet not evident, outflow condition effects, while x_min is discretely varied in the interval [0.015, 24.7] to seek the x_min coordinate that ensures the best fit. The coordinate x_min is taken as the start of the developed region. For each selected value of x_min, the virtual origin x_0 is discretely varied within I_0 = [0.015, x_min], as the singularity is not supposed to belong to I_d. The intervals of variation of both x_min and x_0 are discretised by the same subdivision as the computational mesh. Parameters A and n are determined through a least-squares fit. Deviations between computed data and fitting laws are then calculated as the Euclidean norm of the error divided by the number of data points, d = (Σ_j δ_j²)^(1/2)/N_d (3.2). In (3.2), δ_j represents the difference between the computed statistics and the least-squares fitted power-law approximation of u²_rms at the jth point of I_d, and N_d is the number of uniformly spaced points in the data fit region I_d. The (x_0, x_min) couple which ensures the smallest deviation from the computed data provides the final A and n coefficients. This procedure leads to the results given in table 2, with the error distribution shown in figure 6. In the region of parameters investigated, only one minimum is found, located far from the boundaries of the region investigated. The dependence of the power-law exponents on porosity is shown to be only weak in the Appendix.
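The fitting procedure lends itself to a compact implementation. The sketch below mimics it on synthetic decay data: the grid resolutions of the (x_0, x_min) search and the noise level are invented for illustration, and the fit of A and n is done in log-log form rather than on (3.1) directly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic decay data mimicking u_rms^2 ~ A (x - x0)^(-n).
x = np.linspace(0.5, 30.0, 400)
A_true, x0_true, n_true = 0.2, 0.8, 1.2
u2 = A_true * (x - x0_true) ** (-n_true) * (1 + 0.01 * rng.normal(size=x.size))

def fit_power_law(x, y, x_max=30.0):
    """Grid search over (x0, x_min); A and n from a least-squares fit of
    log y vs log(x - x0); deviation computed as in (3.2)."""
    best = (np.inf, None)
    for x_min in np.linspace(2.0, 25.0, 47):
        sel = (x >= x_min) & (x <= x_max)
        for x0 in np.linspace(0.0, x_min - 1.0, 21):
            n, logA = np.polyfit(np.log(x[sel] - x0), np.log(y[sel]), 1)
            fit = np.exp(logA) * (x[sel] - x0) ** n
            dev = np.linalg.norm(y[sel] - fit) / sel.sum()
            if dev < best[0]:
                best = (dev, (np.exp(logA), x0, -n, x_min))
    return best[1]

A, x0, n, x_min = fit_power_law(x, u2)
print(f"A = {A:.3f}, x0 = {x0:.2f}, n = {n:.2f}, developed region from x = {x_min:.1f}")
```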
In order to check the dependence of the results on the fitting method employed, a procedure from the literature is also applied to the same set of data; this alternative technique is that utilised by Hearst & Lavoie (2014). In the application of the method to the present case, the virtual origin x_0 and the lower bound x_min are varied within I_0 = [0, 4] and [4, 14.7], respectively. For both u²_rms and k, the process converges after three iterations. Applied to u²_rms, this fitting procedure provides x_0 = 0.610 and x_min = 6.78, which yield the estimate ñ_u = 1.12. In the case of k, the values obtained are x_0 = 0.360, x_min = 5.02 and ñ_k = 1.14. The results obtained by the method proposed by Hearst & Lavoie (2014) are only marginally different from those obtained as described above and reported in table 2.
Comte-Bellot & Corrsin (1966) found 1.15 ≤ n ≤ 1.29 for regular grids, while according to Mohamed & Larue (1990) n ≈ 1.3. More recently, Krogstad & Davidson (2010) found n = 1.13 ± 0.02. Besides the parameters x_0, A and n in (3.1), the procedure provides the coordinate x_min of the start of the developed region as the left boundary of the interval I_d. All the subsequent fittings in this work are carried out on I_d = [7.98, 30.0].
Turbulence decay rate
As opposed to experimental studies, where the rate of dissipation ε needs to be evaluated using the frozen turbulence assumption or isotropy (3.5), in DNS studies ε can be computed directly from its definition, ε = 2ν⟨s′_ij s′_ij⟩ (3.3), where s′_ij = (∂u′_i/∂x_j + ∂u′_j/∂x_i)/2 is the fluctuating rate of strain. Figure 7 displays the spatial evolution of ε, together with the least-squares fitted power law of the form ε ∼ a_ε x^h. The coefficients that fit the data over I_d = [7.98, 30.0] are a_ε = 0.217 and h = −2.20.
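Computing ε from its definition (3.3) is straightforward once the velocity-gradient tensor is available. The sketch below does this with periodic central differences on a toy Taylor-Green field (for which the exact result is ε = ν), rather than with the sixth-order compact schemes of the actual solver.

```python
import numpy as np

def dissipation_rate(u, dx, nu):
    """Dissipation eps = 2 nu <s'_ij s'_ij> from a fluctuating velocity
    field u of shape (3, nx, ny, nz), using periodic central differences."""
    eps = 0.0
    for i in range(3):
        for j in range(3):
            dui_dxj = (np.roll(u[i], -1, axis=j) - np.roll(u[i], 1, axis=j)) / (2 * dx)
            duj_dxi = (np.roll(u[j], -1, axis=i) - np.roll(u[j], 1, axis=i)) / (2 * dx)
            s_ij = 0.5 * (dui_dxj + duj_dxi)
            eps += 2.0 * nu * np.mean(s_ij ** 2)
    return eps

# Toy solenoidal field: a Taylor-Green vortex on a periodic cube; for this
# field the exact dissipation equals nu.
n, L, nu = 64, 2 * np.pi, 1e-2
xx = np.linspace(0, L, n, endpoint=False)
X, Y, Z = np.meshgrid(xx, xx, xx, indexing="ij")
u = np.stack([np.cos(X) * np.sin(Y), -np.sin(X) * np.cos(Y), np.zeros_like(X)])
print("eps =", dissipation_rate(u, L / n, nu))   # ~0.01
```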
The scaling of dissipation can also be set in relation to the scaling of k in § 3.3. As also shown quantitatively in § 3.10, in the developed region of a statistically steady, high-Reynolds-number flow one has U ∂k/∂x = −ε (3.4). Equation (3.4) suggests that the decay exponent of dissipation should equal h = −(n_k + 1).
In the present study, h = −2.20 and n_k = 1.14 are calculated; the percentage difference between −(n_k + 1) and h is below 3 %. Dissipation is compared in figure 7 to the same quantity computed under the hypothesis of isotropic turbulence, ε_iso (3.5). The close similarity between ε and ε_iso is only in apparent contradiction with the isotropy ratios larger than 1.5 reported in § 3.2. From the demonstration by Taylor (1935), it appears that hypotheses less strict than isotropy are required for equation (3.5) to hold. This weaker set of hypotheses holds true in the present case; see Corsini & Stalio (2020).
Length scales
The length scales examined for the characterisation of the turbulence generated by a metal foam are the Kolmogorov scale η, the Taylor microscale λ and the integral scales. All the length scales are computed directly from their definitions. Their distribution within the developed region is approximated by a power law of the distance x. The range of variation of each length scale along I d is reported in table 1 together with data from the literature on classical grid turbulence.
Kolmogorov scale
The Kolmogorov scale is defined through the dissipation ε. It is predicted from the power-law expression for ε (derived from the expression for k) that η can be represented by a function of the form η ∼ a_η x^s, where s equals −h/4 and finally s = (n_k + 1)/4. The decay exponent for k computed here, n_k = 1.14, gives s = 0.54. The power-law approximation is displayed in figure 8.
Taylor microscale
Figure 9 displays the streamwise distribution of the Taylor microscale, defined by λ² = 15ν u²_rms/ε. The grid turbulence data of Krogstad & Davidson (2010) include Taylor-scale values at a few measurement points (see their table 4); these fit a power law of exponent c = 0.53. It is predicted in the study by George (1992) that the Taylor scale of homogeneous isotropic turbulence increases in time with t^{1/2} and, for grid turbulence, λ grows as x^{1/2} in the laboratory frame. More recently, Kurian & Fransson (2009) reported that λ increases approximately like the square root of the streamwise coordinate. In the present study, fitting a power-law approximation of the form λ ∼ a_λ x^c over I_d gives a_λ = 0.0577 and c = 0.52. Results from the present investigation, not reported for brevity, show that the difference between (λ/η)²/√15 and Re_λ is less than 3 % over the whole computational domain.
Integral scales
The autocorrelation coefficient is defined as the ratio between the autocorrelation function at separation r = r e_j and the autocorrelation function for r = 0, ρ_ii(x, r e_j) = ⟨u′_i(x) u′_i(x + r e_j)⟩/⟨u′_i(x)²⟩ (3.8). The autocorrelation coefficients along y of the streamwise ρ_11(x, r e_2) and the cross-flow ρ_22(x, r e_2) velocity components are reported in figure 8 of Corsini & Stalio (2020). A distinction is made between the transverse and longitudinal correlations depending on the relative direction of the velocity components and the separation vector. Transverse correlations built with streamwise velocity fluctuations have longer tails than longitudinal correlations built with spanwise fluctuations. Integral scales are defined as integrals over r of the autocorrelation coefficients, L_ii(x, e_j) = ∫₀^∞ ρ_ii(x, r e_j) dr (3.9). These depend upon the coordinates along non-homogeneous directions as well as on the direction of separation e_j. In practice, since the computational domain has finite boundaries, the integral scales are calculated here as the distance over which the autocorrelation function decreases from 1 to 1/e, where e is Euler's number; for this calculation, correlations are assumed to decay exponentially. [Table 3 caption: Parameters of the power-law functions of the form f(x) ∼ a x^b fitting the turbulent quantities analysed; the table lists, for each variable, the power law, the multiplicative coefficient and the exponent.]
Integral scales for the present case are based on the streamwise or the cross-flow velocity components. Given the inhomogeneity of the streamwise direction, separation is set in the cross-flow directions e_2 and e_3. Thus, the integral scale based on the streamwise velocity is always transverse and is denoted by L. The integral scales based on cross-flow velocity components can be either longitudinal or transverse and are denoted by L_g and L_t, respectively. As statistically L_11(x, e_2) = L_11(x, e_3) = L(x), L_22(x, e_2) = L_33(x, e_3) = L_g(x) and L_33(x, e_2) = L_22(x, e_3) = L_t(x), these equalities have been exploited in the calculation of the integral scales. Figure 11 displays the integral scales. The power-law approximations L ∼ a_L x^q have the coefficients reported in table 3; the computed exponents are very close to the L ∼ x^{1/2} behaviour predicted by Wang & George (2002). Also in the higher-porosity case presented in the Appendix, the power-law exponent is close to 1/2.
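The 1/e criterion used here for the integral scales can be sketched in a few lines; the exponential correlation below is a toy stand-in for the computed ρ_ii, used only to check that the criterion returns the imposed scale.

```python
import numpy as np

def integral_scale_1e(rho, r):
    """Integral scale as the separation at which the autocorrelation
    coefficient rho(r) first drops to 1/e, as in the text."""
    idx = np.argmax(rho < 1.0 / np.e)        # first index below 1/e
    # linear interpolation between the bracketing points
    r0, r1, c0, c1 = r[idx - 1], r[idx], rho[idx - 1], rho[idx]
    return r0 + (c0 - 1.0 / np.e) * (r1 - r0) / (c0 - c1)

# Toy exponential correlation rho = exp(-r / L) with L = 0.5 pore diameters.
r = np.linspace(0.0, 5.0, 501)
rho = np.exp(-r / 0.5)
print("recovered L =", round(integral_scale_1e(rho, r), 3))   # ~0.5
```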
Notice that L is one order of magnitude smaller than the domain size, the transverse length of which is 11.25d p ; this suggests that the imposed lateral periodic boundary conditions can satisfactorily represent cross-flow homogeneity in the present case. The evolution of the Reynolds number based on the integral scale, defined as Re L ≡ Lu rms /ν, along the x-axis is displayed in figure 9 of Corsini & Stalio (2020).
Dissipation rate coefficient
In high-Reynolds-number turbulent flows away from solid walls, the dissipation rate can be scaled on the integral length scale and the velocity fluctuations through an order-one constant, ε = C_ε u³_rms/L (3.10). Figure 12 shows that, in the present case, after an initial steep increase for x < 2, the dissipation rate coefficient C_ε based on L fluctuates over I_d between 0.45 and 0.50, with a spatial average of C̄_ε = 0.483. This value is very close to values reported in Pearson, Krogstad & van de Water (2002) for shear turbulence at different Re_λ numbers. With regard to the initial steep increase, in recent years Vassilicos and coworkers have observed that, close to the turbulence-generating grid, there is a region characterised by spectra that closely match the −5/3 power law, in conjunction with an increase in C_ε. Under the hypothesis ε = ε_iso, combining the definition (3.10) with equation (3.5) leads to C_ε = 15(L/λ)/Re_λ (3.11). As only small variations of the ratio between length scales L/λ are observed at a given mesh Reynolds number, C_ε is seen to increase like Re_λ^{-1}. This behaviour is reported for both fractal and regular grids (Valente & Vassilicos 2012). Equation (3.11) is examined for the present case in figure 13, where the logarithmic plot emphasises the C_ε(Re_λ) behaviour close to the metal foam. Corresponding to the initial steep decrease in Re_λ shown in figure 10, C_ε is seen to increase like Re_λ^{-1}. In the same region, a well-defined −5/3 energy spectrum is observed over a broader wavenumber range than in the fully developed region; see the inset of figure 15 in § 3.8. The situation described in Valente & Vassilicos (2012) is thus observed in the present case.
The streamwise evolution of C_ε exhibits a transition between the Re_λ^{-1} behaviour and a region where the variations in C_ε are much smaller; see figures 12 and 13. This transition occurs at about x = 2. The turbulent kinetic energy budgets reported in § 3.10 indicate that, downstream of this location, the turbulent transport terms become negligible and the mean advection of k equals dissipation. On the contrary, for x < 2, this equality does not hold, and the variations of C_ε suggest a non-equilibrium condition between the energy at the large scales and dissipation. It should be noted that the present results confirm the predictions reported by Tennekes & Lumley (1972) in their (3.2.29) and (3.2.30), where the transition is expected at streamwise distances from the grid much larger than the integral scale: in the present configuration, x = 2 corresponds to x ≈ 6L. In the theory of Richardson and Kolmogorov, the constancy of C_ε requires that turbulence is at a high Reynolds number and far from solid walls. While the small variations of C_ε in the fully developed region can be attributed to the Reynolds number, which is not very high, the steep increase in C_ε in the vicinity of the porous matrix (x < 2) is to be ascribed to the vicinity of the solid filaments and ultimately to non-negligible turbulent transport terms in the budget equation of k.
Is grid turbulence Saffman turbulence?
In recent articles (Krogstad & Davidson 2010;Kitamura et al. 2014) it was discussed whether grid turbulence can be considered to be of the Saffman type. Both Krogstad & Davidson (2010) and Kitamura et al. (2014) conclude that grid turbulence is Saffman turbulence.
The theory by Saffman (1967) describes the decay of homogeneous turbulence as u²_rms = K C^{2/5} t^{−6/5}, L = K′ C^{1/5} t^{2/5} (3.12a,b), where K and K′ are constants and C is expressed by an invariant integral. Both equations (3.12a,b) have to hold for turbulence to be of the Saffman type. This is sometimes expressed by the requirement that u²_rms L³ = const. during decay, but the latter is a necessary, not sufficient, condition for Saffman turbulence.
The exponents reported in tables 2 and 3 suggest that the turbulence investigated in the present study is not of the Saffman type. Figure 14 provides graphical confirmation of this conclusion. In the fully developed region of grid turbulence, as well as in the case investigated here, the advection of turbulent kinetic energy is almost perfectly balanced by dissipation, U ∂k/∂x = −ε (3.13); see § 3.4. Setting n_k = n_u = n, (3.13) combined with the definition of C_ε in (3.10) and the hypotheses L ∼ x^q and C_ε ∼ x^f leads to the following relation: q = 1 − n/2 + f (3.14). As is apparent from (3.12a,b) and (3.14), grid turbulence generated with n = 6/5 is not of the Saffman type unless C_ε stays constant during the kinetic energy decay (f = 0).
Spectral scaling and energy transfer
One-dimensional spectra E_ii(κ_j) are obtained from the discrete Fourier transform of the two-point velocity correlation functions along e_j, where κ_j is the jth component of the wavenumber vector. Only j = 2 and j = 3 are used here because the streamwise direction is not homogeneous. Depending upon the pairing between velocity and wavenumber components, three distinct spectra can be calculated: the streamwise spectrum E_s = E_11(κ_j), the longitudinal spectrum E_g and the transverse spectrum E_t. While, in general, spectra at different stages of the decay could be expected to scale with different reference quantities, it is observed here that, as the factors η u_η², λ u_rms² and L u_rms² evolve at very similar rates in the streamwise direction (see table 3), any of them can be used equivalently. Spectra scaled by λ u_rms² and evaluated at increasing distances along the x-axis are displayed in figure 15. Figure 16 compares the three types of spectra (E_s, E_g and E_t) at the coordinate x = 20. For κ_2 η > 0.2, the streamwise and transverse spectra almost coincide, which confirms that anisotropy is confined to the large scales; see Mohamed & Larue (1990) and Corsini & Stalio (2020). Only a narrow range in wavenumber space (0.025 < κ_2 η < 0.1) is noticed where E_s exhibits a −5/3 behaviour. Figure 16 includes the longitudinal spectra E_11(κ_1) of Comte-Bellot & Corrsin (1971) for two regular grids of different mesh size and Re_λ values in the same range as the present case (Re_λ = 65 and 41, compared to the present Re_λ = 81). Close agreement is observed between measurements in turbulence generated by classical grids and turbulence simulated in the high-porosity metal foam for κ_2 η > 0.1. The present results match canonical grid turbulence spectra for κ_2 η > 0.1, the −5/3 law is not observed for wavenumbers κ_2 η > 0.1, and local isotropy is observed for κ_2 η > 0.2, which is the range of scales where dissipation becomes non-negligible. The distinction between the scales containing the bulk of the energy and those responsible for dissipation is made by considering the energy spectrum E(κ) and the dissipation spectrum D(κ) = 2νκ² E(κ).
The turbulence spectrum $E(\kappa)$ is obtained by integrating the three-dimensional spectrum tensor over spherical shells in wavenumber space,
$$E(\kappa) = \oint_{|\boldsymbol{\kappa}| = \kappa} \tfrac{1}{2}\,\Phi_{ii}(\boldsymbol{\kappa})\,\mathrm{d}S(\boldsymbol{\kappa}), \qquad (3.15)$$
where the summation convention is applied. The spectrum in (3.15) removes the directional information from both the velocities and the Fourier modes, as $E(\kappa)$ is given as a function of the wavenumber magnitude $\kappa = |\boldsymbol{\kappa}|$. The kinetic energy cumulated at wavenumbers lower than $\kappa$ is indicated by $k_{(0,\kappa)} = \int_0^\kappa E(\kappa')\,\mathrm{d}\kappa'$; its complement to $k$ is indicated by $k_{(\kappa,\infty)}$. The corresponding quantities for the dissipation are obtained in the same fashion using $D(\kappa)$ and are indicated by $\varepsilon_{(0,\kappa)}$ and $\varepsilon_{(\kappa,\infty)}$. In figure 17 the fraction of cumulative turbulent kinetic energy at wavenumbers higher than $\kappa$ and the fraction of cumulative dissipation at wavenumbers lower than $\kappa$ at a distance from the foam x = 20 are depicted as functions of $\kappa\eta$ and of the corresponding wavelength $\ell/\eta = 2\pi/(\kappa\eta)$. The peak of the energy spectrum occurs at $\kappa\eta \approx 0.02$ (see figure 16), but it may be observed that the main fraction of the kinetic energy is contained in the range of wavenumbers up to $\kappa\eta \approx 0.25$ (where $k_{(\kappa,\infty)} = 0.1k$), one decade further. The peak of dissipation is roughly at $\kappa\eta \approx 0.25$, which corresponds to the maximum derivative of $\varepsilon_{(0,\kappa)}/\varepsilon$. The contribution to dissipation reaches $\varepsilon_{(0,\kappa)} = 0.9\varepsilon$ at $\kappa\eta \approx 0.7$. The bulk of turbulent kinetic energy is contained in motions of length scales $\ell > 25\eta \approx \tfrac{1}{2}L$; this range can be viewed as the energy-containing range. On the other hand, dissipation is effective at the length scales $10\eta < \ell < 50\eta$. The overlap between $k_{(\kappa,\infty)}$ and $\varepsilon_{(0,\kappa)}$ reveals that energy starts to be dissipated at length scales where the energy content is non-negligible.

Structure functions

In this section, the local structure of turbulence is investigated at x = 25 by analysing the scaling properties of the structure functions with separation r. The general definition is
$$\langle(\delta u_{i,j})^p\rangle, \qquad \delta u_{i,j} = u_i(\boldsymbol{x} + r\boldsymbol{e}_j) - u_i(\boldsymbol{x}). \qquad (3.17)$$
Because of turbulence decay along x, two different structure functions are identified in this work: longitudinal structure functions $(\delta u_g)^p = (\delta u_{j,j})^p$ and transverse structure functions $(\delta u_t)^p = (\delta u_{i,j})^p$, where i = 2, j = 3 or i = 3, j = 2. According to the first, original theory by Kolmogorov (1941), for high Reynolds numbers, when the separation lies in the inertial subrange $\eta \ll r \ll L$, the moments of the velocity difference $(\delta u_g)^p$ take a universal form that depends only on $\varepsilon$ and r through the scaling property
$$\langle(\delta u_g)^p\rangle \sim (\varepsilon r)^{p/3}. \qquad (3.18)$$
In the refined similarity theory (Kolmogorov 1962), intermittency effects are taken into account by rewriting expression (3.18) in terms of $\varepsilon_r$, the dissipation averaged over a volume of linear dimension r, and assuming that $\varepsilon_r$ has a log-normal distribution, where $\mu$ is the exponent of the dissipation autocorrelation function, again for inertial-range separations (Monin & Yaglom 1975). Given the Reynolds number $Re_\lambda \approx 80$ of the present case, the study is conducted using the extended self-similarity (ESS) observation by Benzi et al. (1993). The scaling exponents of the pth moments obtained in this way indicate that the effect generated by intermittent structures on small-scale turbulence in the moderate-Reynolds-number range is less intense than at high Reynolds numbers. It is noticed that the method based on sixth-order structure functions, (3.21), displays a decay in the x-direction that is neither expected nor recovered in the calculation of $\mu$ directly from the definition (3.20).
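A hedged sketch of the ESS procedure just described: relative scaling exponents are obtained as slopes of one structure function against the third-order one in log-log coordinates. The synthetic signal below is a toy stand-in for the DNS velocity samples at x = 25:

```python
import numpy as np

def structure_function(u, p, lags):
    """Structure function S_p(r) = <|du(r)|^p> estimated on a 1-D sample."""
    return np.array([np.mean(np.abs(u[lag:] - u[:-lag])**p) for lag in lags])

def ess_exponent(u, p, lags):
    """Relative scaling exponent zeta_p / zeta_3 via extended self-similarity:
    slope of log S_p against log S_3 (Benzi et al. 1993)."""
    s3 = structure_function(u, 3, lags)
    sp = structure_function(u, p, lags)
    slope, _ = np.polyfit(np.log(s3), np.log(sp), 1)
    return slope

# Brownian-like toy signal; a real analysis would use the DNS data at x = 25.
rng = np.random.default_rng(2)
u = np.cumsum(rng.standard_normal(2**16)) * 1e-2
lags = np.arange(2, 200, 4)
print(ess_exponent(u, p=6, lags=lags))  # ~2 for this non-intermittent toy signal
```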
The decorrelation scale $\bar r$ is defined as the length of decorrelation of the instantaneous dissipation $\varepsilon$, i.e. the separation scale r where $\langle\varepsilon(x + r)\,\varepsilon(x)\rangle/\langle\varepsilon\rangle^2$ becomes unitary within a 1 % error. In figure 21 the development of $\bar r$ along x is compared to that of the integral scale L. The decorrelation scale can be considered a very large length scale, which in homogeneous isotropic turbulence depends on the Reynolds number and also on the intermittency characteristics of the flow.
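A minimal sketch of this definition of $\bar r$, applied to a synthetic correlated signal (the signal, smoothing window and tolerance handling are illustrative only):

```python
import numpy as np

def dissipation_decorrelation_length(eps, dx, tol=0.01):
    """Smallest separation r where <eps(x+r) eps(x)> / <eps>^2 is within tol of 1.

    eps is a 1-D sample of the instantaneous dissipation along a
    homogeneous direction; returns NaN if decorrelation is not reached.
    """
    mean2 = eps.mean()**2
    n = eps.size
    for lag in range(1, n // 2):
        corr = np.mean(eps[:-lag] * eps[lag:]) / mean2
        if abs(corr - 1.0) < tol:
            return lag * dx
    return np.nan

# Illustrative positive signal with a finite correlation length.
rng = np.random.default_rng(1)
raw = rng.lognormal(sigma=0.5, size=20000)
eps = np.convolve(raw, np.ones(50) / 50, mode="valid")  # smoothing -> correlation
print(dissipation_decorrelation_length(eps, dx=0.01))
```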
Turbulent energy budgets
The equation governing the transport of k in a statistically steady case can be written in symbols as
$$A = T_p + T_t + D_v + P - \tilde\varepsilon, \qquad (3.22)$$
where A is the contribution by mean advection and $T_p$, $T_t$ and $D_v$ represent pressure, turbulent and diffusive transports; P and $\tilde\varepsilon$ stand for the production and pseudo-dissipation rate of turbulent kinetic energy, defined as
$$P = -\langle u_i u_j\rangle\,\frac{\partial U_i}{\partial x_j}, \qquad \tilde\varepsilon = \frac{1}{Re_{d_p}}\left\langle\frac{\partial u_i}{\partial x_j}\,\frac{\partial u_i}{\partial x_j}\right\rangle. \qquad (3.23)$$
As the difference $\varepsilon - \tilde\varepsilon$ is seldom important (Pope 2000), $\tilde\varepsilon$ is used in this section and denoted simply $\varepsilon$. In grid turbulence cases, where mean velocity gradients are zero and cross-flow directions y and z are homogeneous, the production term vanishes, P = 0 (3.24). In addition, for the Reynolds number computed here, the molecular contribution to transport is negligible. The resulting expression for the budget equation (3.22) thus becomes
$$A = T_p + T_t - \varepsilon, \qquad (3.25)$$
where the single terms are distributed along the x-axis as depicted in figure 22. Two regions can be identified in figure 22: a near-field region, where the three conservative terms redistribute the kinetic energy along x, and a far-field region, where pressure and turbulent transports become negligible, so that dissipation is solely balanced by turbulent kinetic energy advection; see figure 22(a) and its inset. A possible downstream boundary for the near-field region can be set at x = 2, where the turbulent and pressure transports become 100 times smaller than advection and dissipation. The near-field region can be subdivided into two subregions. By considering the ligament diameter $d_f$ as the length scale, a region very close to the metal foam is identified for $x^*/d_f < 0.5$, where turbulent transport is larger than the advective one. This region is characterised by the sustainment of turbulence by means of streamwise fluctuations, counter-balanced by dissipation, as pressure and advective transports account for negligible shares. The second subregion extends over $0.5 < x^*/d_f < 14$, where turbulent kinetic energy is provided through pressure and advective mechanisms while dissipation and turbulent transport drain k; see figure 22(a).
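In the far field the budget reduces to $U\,\mathrm{d}k/\mathrm{d}x = -\varepsilon$; the sketch below checks this balance on synthetic power-law profiles standing in for the simulation output:

```python
import numpy as np

def budget_residual(x, k, U, eps):
    """Residual of the far-field balance U dk/dx = -eps, per unit dissipation.

    x, k, eps are streamwise profiles; U is the (uniform) mean velocity.
    A residual much smaller than one marks the region where advection
    balances dissipation.
    """
    adv = U * np.gradient(k, x)
    return (adv + eps) / eps

# Synthetic decay k ~ (x - x0)^(-n_k) with the analytically consistent eps.
x = np.linspace(8.0, 30.0, 200)
n_k, x0, U = 1.14, 1.0, 1.0
k = (x - x0)**(-n_k)
eps = U * n_k * (x - x0)**(-n_k - 1.0)
print(np.max(np.abs(budget_residual(x, k, U, eps))))  # small, finite-difference error
```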
Data gathered by Norberg (1998) indicate that the vortex formation length $\ell_f$ for the present Reynolds number based on the mean diameter of the filaments, $Re_{d_f} = 560$, lies in the range $1.5\,d_f < \ell_f < 2\,d_f$; see Bloor, Gerrard & Lighthill (1966) for other possible definitions of the vortex formation length. Figure 22(a) shows that this range matches the streamwise location where all the transport terms in (3.25) peak. Therefore, the present data suggest that kinetic energy originates in the second part of the near-field region, where the transport mechanisms show local peaks. Turbulent kinetic energy k is drained from here by the turbulent transport, which transfers turbulent fluctuations upstream, as indicated by the negative sign of the turbulent flux (see figure 22b), feeding the region very close to the metal foam. On the other hand, pressure and advective mechanisms are found to provide turbulent kinetic energy in the entire region $0.5 < x^*/d_f < 14$, and the signs of the relative fluxes indicate that velocity fluctuations are transferred downstream. At the coordinate $x^*/d_f = 14$ (x = 2), the fluxes $\overline{up}$ and $\overline{uk}$ become constant in the streamwise direction, leading to the budget reported in (3.4) for x > 2, which is typical of developed decaying flows; see figure 22(a) and its inset, which reports the budget terms in the $I_d$ region.
In order to separately assess the behaviour of streamwise and cross-flow velocity fluctuations, the transport equations for the velocity variances are analysed. The budget of the streamwise velocity variance reads as in Eq. (3.26), where, from left to right, the first three terms respectively represent the advective, turbulent and pressure transports of $u^2$, $\langle p\,\partial u/\partial x\rangle$ is the pressure strain term, and $\varepsilon_u$ stands for the u-variance pseudo-dissipation rate, $\varepsilon_u = \frac{2}{Re_{d_p}}\langle(\partial u/\partial x_j)(\partial u/\partial x_j)\rangle$. The diffusive transport is again neglected with respect to the other terms. Equations similar to (3.26) can be derived for the transverse velocity components, v and w. As the flow is isotropic on y-z planes, statistics for the transverse velocity components are obtained by averaging results along y and z. For conciseness, the transverse velocity component is indicated by v and its direction by y in Eq. (3.27), where the terms have the same interpretation as in (3.26) and the pseudo-dissipation rate of the v-variance is computed as $\varepsilon_v = \frac{2}{Re_{d_p}}\langle(\partial v/\partial x_j)(\partial v/\partial x_j)\rangle$. Note that, with respect to equation (3.25), the budgets of the velocity-component variances include the pressure strain terms. As will be shown in the following, the role of these terms is to redistribute kinetic energy among the velocity components; they are thus responsible for the 'return to isotropy' in homogeneous turbulence. Such a phenomenon has been studied for several decades as it is involved in second-order turbulence models; see, for example, the works by Rotta (1951) and Lumley & Newman (1977). Pressure strain terms are not included in (3.25) as their sum $S_u + 2S_v$ vanishes because of incompressibility. Figures 23 and 24 show the velocity variance budgets and fluxes along the streamwise coordinate. It appears that the transport mechanisms are more intense in the $u^2$ budget than in the $v^2$ one, while the dissipations $\varepsilon_u$ and $\varepsilon_v$ are almost equal. The behaviour of the pressure strain terms reveals that turbulent energy is drained from the streamwise velocity fluctuations, as $S_u$ represents a sink term in the $u^2$ budget, and provided to the cross-flow velocity fluctuations, where $S_v$ acts as a source; see figures 23(a) and 24(a). The role of $S_u$ and $S_v$ does not change along the whole streamwise extension of the domain, as shown by the insets in figures 23(a) and 24(a). This occurs because, as reported in § 3.2, flow anisotropy is conserved in the flow domain considered. In the decaying region $I_d$, the pressure strain terms maintain an intensity comparable to the advective and dissipation terms, suggesting that the isotropic condition would have been reached in a longer computational domain.
The profiles in figures 23 and 24 show that the u-variance terms behave very similarly to the budgets and fluxes of turbulent kinetic energy reported in figure 22. In particular, the peaks of turbulent, advective and pressure transports are located at the same distance from the metal foam, comparable to the vortex formation length. This suggests that velocity fluctuations are more intensely triggered along the streamwise direction than along the transverse ones. On the other hand, the v-variance terms peak slightly downstream of the terms in the $u^2$ and k budgets, and the negative peak of turbulent transport is not very intense (see figure 24a), but it drains enough energy to sustain cross-flow fluctuations in the region just downstream of the metal foam. Another difference with respect to the $u^2$ terms is the positive, yet not that intense, turbulent flux very close to the metal foam.
Conclusions
An analysis is presented of the turbulent flow behind a synthetic metal foam layer of thickness equal to five times the mean pore diameter. Unlike classical grid turbulence geometries, the metal foam ligaments are variably oriented, unevenly spaced and in general less ordered. Similar to classical grids, metal foams are mainly characterised by two length scales, i.e. the pore diameter and the ligament thickness. The analysis encompasses a single Reynolds number, $Re_{d_p} = 4000$, and porosity ε = 0.92, but the effect of a different porosity (ε = 0.97) on selected quantities is addressed in the Appendix.
The analysis of the turbulent kinetic energy budget suggests that turbulence is triggered in the region close to the metal foam, approximately at a distance equal to the vortex formation length indicated by Norberg (1998), $x^*/d_f \approx 2$. In the near-field region, x < 2, turbulent kinetic energy is distributed by transport mechanisms; the turbulent transport moves k upstream, sustaining fluctuations in close proximity to the metal foam, while pressure and advective transports provide velocity fluctuations to larger x-coordinates. At x = 2, pressure and turbulent transports are found to be negligible with respect to the other terms, entailing for x > 2 the expected behaviour of grid turbulence, where viscous dissipation is balanced by the mean advection of k. Budgets of streamwise and cross-flow velocity variances indicate that fluctuations along x are the most intensely triggered and pressure strain terms act to redistribute turbulent energy towards the isotropic condition, which however is not observed due to the limited domain extension.
In that same near-field region, $Re_\lambda$ decreases steeply and the dissipation rate coefficient C approximates $C \propto Re_\lambda^{-1}$ while L/λ remains almost constant. These observations, in conjunction with the typical $-5/3$ slope in the turbulent power spectra, represent a typical behaviour already observed in the literature (Valente & Vassilicos 2012). Given that this occurs very close to the porous matrix, where the hypotheses for the Richardson cascade are not verified, a considerable variation in C is not interpreted here as an anomalous behaviour.
The developed region is defined here as the region where $u_{\mathrm{rms}}$ decays following a power law, and it starts at $x_{\min} = 7.98$. Besides k, a number of relevant quantities, like the integral length scale, the Taylor microscale and the Kolmogorov scale of the flow, are observed to follow a power-law behaviour. The exponents obtained from a least-squares fit are in many cases very close to values predicted and measured in classical grid turbulence experiments. Given $n_k = 1.14$ and q = 0.52 (where $n_k$ and q are the exponents of the power laws for k and L), the constancy of $u_{\mathrm{rms}}^2 L^3$ does not hold in the developed region simulated here, and thus the turbulence does not follow the theory by Saffman.
Structure functions of different orders are calculated at a fixed position x = 25 in the fully developed region. The extended self-similarity is used to calculate the scaling exponents, and the results are interpreted in the light of the refined similarity hypothesis. The intermittency exponent $\mu$ computed directly from its definition is seen to behave uniformly within the computational domain. It reveals that intermittency is not very intense at the moderate Reynolds number of this study. The decorrelation length of dissipation $\bar r$ is larger than the integral scale; it depends on the Reynolds number as well as on the intermittency characteristics of the flow.

Appendix

The effect of a higher porosity (ε = 0.97) on selected quantities is assessed here by comparison with the ε = 0.92 case discussed in the body of the paper. The two foams have equal pore size but different ligament thickness, i.e. the foam with larger porosity is characterised by a thinner ligament. The comparison is conducted by means of DNS performed with the same numerical parameters as outlined in § 2 except for the computational grid, which consists of $n_x = 2049$ and $n_y = n_z = 512$ grid nodes. The porous matrix of higher porosity generates a turbulent field where fluctuations are less intense in all the spatial directions. Application of the fitting procedure outlined in § 3.3 to the high-porosity foam provides decay exponents for $u_{\mathrm{rms}}^2$ and k of $n_u = 1.05$ and $n_k = 1.08$, respectively. The results obtained on the coarse grid for the ε = 0.92 case yield $n_u = 1.12$ and $n_k = 1.13$; see table 2 for comparisons against results on the fine mesh. The power-law decay of the turbulent kinetic energy begins further upstream than in the ε = 0.92 case; see figure 25.
Inside the developed region, the Kolmogorov and the longitudinal integral length scales evaluated for ε = 0.97 behave according to power laws, with growth rates estimated by the exponents s = 0.54 and q = 0.47, respectively (see table 3 for notation). The lower-porosity case exhibits exponents along the x-direction of s = 0.56 and q = 0.51 (see table 3 for comparisons against results on the fine mesh), but the Kolmogorov and the integral scales differ in magnitude, as displayed in figures 26(a) and 26(b). Consistent with the smaller kinetic energy content of the flow and the smaller dissipation rate observed at ε = 0.97, the turbulence induced by the higher-porosity foam is characterised by a larger Kolmogorov scale and a smaller integral scale. Figure 27 displays the effects of porosity on the terms of the turbulent kinetic energy budget. The transport mechanisms of k for ε = 0.97 are the same as described in § 3.10 for ε = 0.92, but they are characterised by an upstream shift of the transport peaks with respect to that case. This shift is consistent with the reduction of the vortex formation length associated with a smaller ligament thickness, as indicated by Norberg (1998) and reported in § 3.10.
In summary, the results obtained for ε = 0.92 show only small quantitative differences with respect to ε = 0.97, and no significant discrepancies are observed between the two cases. An upstream shift in the peaks of the k transports characterises the higher-porosity case, which in turn implies a reduced vortex formation length and the achievement of power-law decays at shorter distances from the porous matrix; the size of this displacement is, however, very small.
"Physics"
] |
Integrated Metabolite and Transcriptome Profiling-Mediated Gene Mining of Sida cordifolia Reveals Medicinally Important Genes
Sida cordifolia is a medicinal shrub that is conventionally used in the Indian system of medicine; however, the genes contributing to its medicinal properties have been minimally explored, thus limiting its application. High-throughput sequencing and liquid chromatography with tandem mass spectrometry (LC-MS/MS) technologies were applied to unravel the medicinally important bioactive compounds. As a result, transcriptomic sequencing generated more than 12 GB of clean data, and 187,215 transcripts were obtained by de novo assembly. These transcripts were broadly classified into 20 classes based on the gene ontology classification, and 6551 unigenes were annotated using the Kyoto Encyclopedia of Genes and Genomes (KEGG) database, with more than 142 unigenes involved in the biosynthesis of secondary metabolites. LC-MS/MS analysis of three tissues of Sida cordifolia revealed that acacetin and procyanidin are some of the important metabolites identified that contribute to its medicinal value. Several key enzymes with a crucial role in the phenylpropanoid and flavonoid biosynthetic pathways were identified, especially phenylalanine ammonia lyase, which might be an important rate-limiting enzyme. Real-time quantitative reverse transcription polymerase chain reaction (qRT-PCR) analysis revealed that enzymes such as phenylalanine ammonia lyase (PAL), cinnamyl alcohol dehydrogenase 1 (CAD), cinnamoyl-CoA reductase 1 (CF1) and trans-cinnamate 4-monooxygenase (TCM) were predominantly expressed in root compared to leaf and stem tissue. The study provides a speculative insight for the screening of active metabolites and metabolic engineering in Sida cordifolia.
Introduction
Plants have been used as an immortal source of medicine for ages. Globally, herbal medicines have played an important role in human health to treat chronic and acute conditions without any toxic effect. They are usually used as therapeutics for health conditions including diabetes mellitus, wounds, cancer, heart diseases, tuberculosis, hypertension, etc. These herbs are used in medicinal fields, as they are rich in bioactive phytocompounds, such as flavonoids, alkaloids, terpenoids, polyphenols, and tannins, possessing various pharmacological properties [1]. A traditional subshrub, Sida cordifolia, belonging to the family Malvaceae, is widely spread across countries like India, Brazil, and Africa and is extensively used in the ayurvedic system of medicine. This has laid the foundation for a transcriptomic approach to elucidate the various biosynthetic pathways responsible for its medicinal properties [2]. Some important constituents isolated from various extracts of Sida cordifolia are 1,2,3,9-tetrahydropyrrolo[2,1-b]quinazolin-3-yl-amine, ephedrine, vasicine, pseudo-ephedrine, vasicinone, hypaphorine, vasicinol, stigmasterol, and sterculic acid [3][4][5][6][7]. Pharmacological properties exhibited by different extracts of Sida cordifolia include antimicrobial, anti-inflammatory, analgesic, anti-ulcer, nephroprotective, anti-diabetic, hepatoprotective, anticancer, and central nervous system depressant activity [8]. The identification of genes responsible for the production of various secondary metabolites by S. cordifolia is achieved using next-generation sequencing technology, which can unravel novel transcripts from medicinal plants with respect to secondary metabolite biosynthesis and gene expression analysis [9]. The technology is employed to generate functional data for non-model plants and EST sequences for the annotation of genes and targeted discovery. De novo sequencing produces large volumes of information that can be used for the identification of molecular markers, novel gene identification or discovery, and polymorphism analysis [10][11][12][13]. Transcriptomics is the study of the collection of all the transcripts present in a specific tissue and has been used to study the structure and function of genes at the molecular level. The study of the small-molecular-weight metabolites that exist in all cells to maintain their growth and function is referred to as metabolomics. Metabolite profiling reflects the overall biochemical and physiological conditions of the plant for survival in a particular environment [14]. LC-MS analysis is preferable for evaluating large groups of secondary metabolites like phenolic compounds, flavonoids, and alkaloids [14]. Though the plant has been extensively used in herbal formulations, the biosynthetic pathways responsible for the synthesis of the secondary metabolites that confer its pharmacological properties have yet to be unveiled. Thus, in this study, we aimed to annotate the transcripts, identify the putative transcripts involved in the major biosynthetic pathways attributing to the medicinal properties of the plant using transcriptomics, and analyze the different metabolites present by metabolomics.
Sample Collection and RNA Extraction
Root, stem, and leaf tissues were collected from a healthy plant near Potheri in Tamil Nadu, and taxonomic identification was done at SRMIST, IIISM. RNA was extracted using TRIzol® Reagent, followed by phase separation using chloroform and centrifugation at 12,000 rpm for 15 min at 4 °C. To the aqueous phase, 0.7 volume of isopropanol was added, and the mixture was centrifuged at 12,000 rpm for 10 min at 4 °C; the pellet obtained from the previous step was then washed with ethanol. Nuclease-free water was added to the air-dried pellet. To remove DNA contamination, the RNA was treated with DNase. Further purification of the total RNA was done using the Qiagen RNeasy® MinElute clean-up kit. The integrity of the RNA was analyzed using the Agilent Bioanalyzer, with RIN values ranging from 7.4 to 9.2; a Thermo Scientific NanoDrop Lite spectrophotometer was used to check the quality of the RNA, and the samples were visualized by agarose gel electrophoresis [15,16].
RNA Library Preparation and Illumina Sequencing
Purified RNA was used for cDNA preparation. The poly(A)-tailed mRNA was purified from the total RNA using magnetic beads attached to poly-T oligos, and a fragmentation buffer was used for fragmenting the mRNAs into smaller fragments. First-strand synthesis from these shorter fragments was carried out using Invitrogen's SuperScript II reverse transcriptase, and for second-strand synthesis, RNase H and DNA polymerase I were used. A single dATP was added to the fragmented cDNA, which was then ligated with adapters, followed by selection of templates for PCR amplification. A NextSeq 500 was used for sequencing, and the raw data were obtained in FastQ format (Illumina Inc., San Diego, CA, USA) [15,16].
Assembly of Transcripts
The raw data obtained from sequencing were subjected to quality control with the software FastQC v0.11.9; low-quality reads (Phred score < 30) were removed, and the adapter sequences were trimmed by the Cutadapt software. The TRINITY v2.14.0 de novo assembler was used for the reconstruction of the transcriptome data with three software modules: the Inchworm module assembles the transcripts from k-mers; the Chrysalis module clusters the contigs, thereby constructing a de Bruijn graph for the contigs; and the Butterfly module analyzes the contigs based on the graphs created and lists the isoforms. The CD-HIT v4.8.1 program was used to reduce sequence redundancy and increase the analysis performance [17,18].
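For orientation, a minimal sketch of such a trim-assemble-deduplicate pipeline is given below; all file names, adapter sequences, thread counts and memory settings are placeholders rather than the study's actual invocations:

```python
import subprocess

# Illustrative command lines only; flags shown are standard for each tool,
# but the exact parameters used in the study are not reproduced here.
steps = [
    # Adapter trimming with Cutadapt (adapter sequence is a placeholder).
    ["cutadapt", "-a", "AGATCGGAAGAGC", "-o", "trimmed_R1.fq",
     "-p", "trimmed_R2.fq", "raw_R1.fq", "raw_R2.fq"],
    # De novo assembly with Trinity.
    ["Trinity", "--seqType", "fq", "--left", "trimmed_R1.fq",
     "--right", "trimmed_R2.fq", "--max_memory", "50G", "--CPU", "8"],
    # Redundancy reduction with CD-HIT-EST at 95% identity.
    ["cd-hit-est", "-i", "trinity_out_dir/Trinity.fasta",
     "-o", "unigenes.fasta", "-c", "0.95", "-T", "8"],
]

for cmd in steps:
    subprocess.run(cmd, check=True)  # stop the pipeline on any failure
```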
Gene Ontology Classification and Functional Annotation
The de novo assembled transcripts of Sida cordifolia were searched against different databases, including the protein non-redundant database (nr) from NCBI with the BLASTX tool, and an E-value not greater than 1e-05 was considered a significant match. Further, OmicsBox tools were used for gene ontology (GO) classification and annotation with enzyme codes (EC). The pathway mapping was done by the KEGG Automatic Annotation Server to retrieve the pathway maps of the Kyoto Encyclopedia of Genes and Genomes (KEGG) [19].
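A sketch of the E-value filtering step on a tabular BLASTX report (default 12-column -outfmt 6, where the E-value is the eleventh column); the file name is a placeholder:

```python
import csv

def significant_hits(blast_tsv, evalue_cutoff=1e-5):
    """Yield rows of a tabular BLAST report (-outfmt 6) passing the E-value cut.

    In the default 12-column format, column 11 (0-based index 10) is the E-value.
    """
    with open(blast_tsv) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            if float(row[10]) <= evalue_cutoff:
                yield row

# Example: count queries with at least one significant hit (placeholder file).
annotated = {row[0] for row in significant_hits("blastx_vs_nr.tsv")}
print(len(annotated), "unigenes with at least one significant hit")
```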
SSR Identification
The microsatellites were identified using the Krait software, which offers an ultrafast and user-friendly graphical interface for investigating genome-wide microsatellites. The software also identifies VNTRs from large genomes, locates the SSRs in gene coding regions, and statistically analyzes and plots the results [20]. Simple sequence repeats are short tandemly repeated DNA sequences with motifs 1-6 bp in length; they are highly polymorphic and can be used as a tool in genetic mapping, population genetics, and phylogenetic analysis [21].
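A minimal illustration of perfect-SSR detection with regular expressions; the minimum repeat thresholds are illustrative defaults and not necessarily those used by Krait:

```python
import re

def find_ssrs(seq, min_repeats=None):
    """Locate perfect microsatellites with motif lengths 1-6 bp.

    Returns (motif, start, end) tuples; a real tool such as Krait also
    collapses redundant calls (e.g. a poly-A run matching as an AA motif).
    """
    if min_repeats is None:
        min_repeats = {1: 10, 2: 6, 3: 5, 4: 5, 5: 5, 6: 5}
    hits = []
    for motif_len, min_rep in min_repeats.items():
        # Capture a motif, then require (min_rep - 1) adjacent copies of it.
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (motif_len, min_rep - 1))
        for m in pattern.finditer(seq.upper()):
            hits.append((m.group(1), m.start(), m.end()))
    return hits

print(find_ssrs("TTTAAGAAGAAGAAGAAGCCGT"))  # -> [('AAG', 3, 18)]
```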
Transcript Quantification
The Salmon quantification tool was used to quantify transcript abundance from the sequenced data; it combines a dual-phase parallel inference algorithm with correction for biases such as GC content, and quasi-mapping was used for accurate and fast mapping of reads to study the expression levels [22].
Reverse Transcription PCR Validation
Validation of the assembled transcriptome data of Sida cordifolia from the root tissue by reverse transcription PCR was carried out. Full-length transcripts were found using the BLAST tool, and the complete transcripts were used to design the primers. The PRIMER-BLAST tool from NCBI was used to design the primers, and in silico PCR was conducted, with Actin considered as the housekeeping gene. Gene expression was normalized using reverse transcription PCR, and gradient PCR was conducted to optimize the annealing temperatures from 52 °C to 60 °C. PCR amplification was carried out with the following parameters: 95 °C for 5 min; 40 cycles of 95 °C for 30 s, 57-59 °C for 30 s, and 72 °C for 30 s; and a final extension at 72 °C for 5 min.
Validation of Gene Expression Analysis
S. cordifolia transcriptome data were validated by quantitative real-time PCR with a QuantStudio 5 (Thermo Scientific, Wilmington, DE, USA) PCR machine and the QuantiNova SYBR Green PCR (Qiagen Inc., GmbH, Germany) kit. Actin was used as an internal reference, and a negative control reaction was set up in all experiments. The relative gene expression levels were analyzed and calculated using the 2^-ΔΔCt method, which normalizes the cycle threshold (Ct) of the target gene to that of the housekeeping gene Actin. qRT-PCR analysis was carried out with three replicates [23].
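The 2^-ΔΔCt computation itself is a one-liner; the Ct values below are hypothetical, chosen only to illustrate a root-versus-leaf comparison normalized to Actin:

```python
def fold_change(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    """Relative expression by the 2^-ddCt method.

    dCt = Ct(target) - Ct(Actin); ddCt is taken against the calibrator
    tissue (e.g. leaf). All Ct values here are illustrative, not measured data.
    """
    d_ct = ct_target - ct_actin
    d_ct_ref = ct_target_ref - ct_actin_ref
    return 2.0 ** -(d_ct - d_ct_ref)

# Hypothetical Ct values: a PAL-like target in root vs leaf, Actin as reference.
print(fold_change(ct_target=22.1, ct_actin=18.0,
                  ct_target_ref=25.3, ct_actin_ref=18.2))  # -> 8.0-fold in root
```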
Extraction of Metabolites from Sida cordifolia
The leaf, root, and stem tissues of Sida cordifolia were collected, dried using a microwave oven, and pulverized using an electric blender. Then, 10 g of powdered plant tissues were transferred to a Schott bottle, and 100 mL of 99.9% methanol was added to it and macerated for three days. The solvent was evaporated using a rotary evaporator. The samples were centrifuged at 3000 rpm, and the supernatant was transferred into a fresh tube and stored at room temperature until further analysis.
LC-MS/MS Analysis
The secondary metabolites were analyzed using a Shimadzu LC-MS/MS-8040 triple quadrupole (QqQ) liquid chromatography-mass spectrometry system (North America) with a scan range of 50-1000 m/z in both positive and negative modes. Mobile phase A was 5 mM ammonium formate and 0.1% formic acid in water; mobile phase B was 5 mM ammonium formate and 0.1% formic acid in methanol. Isocratic elution with 20% A and 80% B was used at a flow rate of 0.6 mL/min. A Union column was used at a temperature of 40 °C with a flow rate of 500 µL/min. Data acquisition was done with the LabSolutions™ LCMS software, and the compounds were identified using free online resources, such as the METLIN, PubChem, and KEGG databases, along with some previously published articles.
RNA Sequencing Analysis of Raw and Processed Data
The root transcriptome of Sida cordifolia yielded 59,484,771 raw reads. From the raw reads, adapter sequences were trimmed to reduce redundancy, and 59,484,597 clean reads were obtained, with a GC content of 40.52%. The raw reads were submitted to the NCBI Sequence Read Archive (SRA), and the accession number PRJNA841821 was obtained. Overall, 187,215 transcripts were assembled using the de novo assembler TRINITY. The average transcript length was 1035.19 bases, and the median contig length was 717. The contig N50 value was found to be 1622 (Table 1).
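For reference, the N50 statistic quoted in Table 1 is the length L such that contigs of length at least L jointly cover at least half of the assembly; the five-contig list below is a toy example:

```python
def n50(contig_lengths):
    """N50: length L such that contigs of length >= L cover half the assembly."""
    lengths = sorted(contig_lengths, reverse=True)
    half = sum(lengths) / 2
    total = 0
    for length in lengths:
        total += length
        if total >= half:
            return length

print(n50([100, 200, 300, 400, 500]))  # -> 400 (500 + 400 >= 750, half of 1500)
```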
Functional Annotation of Unigenes
A similarity search using BLASTX was conducted against the nr database from NCBI; of the 187,215 unigenes assembled, 54,375 unigenes were non-annotated and 132,840 unigenes were annotated (Figure 1). Due to inadequate information on the Sida cordifolia genome, some annotated genes were classified as predicted, uncharacterized, or hypothetical proteins. The results from BLASTX were transferred to OmicsBox for further annotation. The similarity search of the assembled transcripts showed high similarity with Gossypium hirsutum, Gossypium raimondii, Gossypium arboreum, Durio zibethinus, Theobroma cacao, Herrania umbratica, and Quercus suber and the least similarity with Stipa magnifica and Stipa borysthenica (Supplementary Figure S1).
Functional Classification of Unigenes
Using the OmicsBox software, the assembled unigenes were annotated with GO classifications in three categories, viz. molecular function, cellular component, and biological process. The unigenes were categorized into 60 subcategories. In the cellular component category, 9467 unigenes were classified into 20 classes; this category includes intracellular anatomical structures, with the most unigenes in organelle, cytoplasm, and membrane and the fewest in nucleoplasm, supramolecular complex, and external encapsulating structure. In the molecular function category, a total of 19,593 unigenes were classified into 20 different classes, with the most unigenes in organic cyclic compound binding, heterocyclic compound binding, and ion binding and the fewest in isomerase activity, protein-containing complex binding, and carbohydrate binding. In the biological process category, 25,315 unigenes were grouped into 20 classes, with the maximum number of unigenes in organic substance metabolic process, cellular metabolic process, and primary metabolic process and the minimum number in signal transduction, response to chemicals, and vesicle-mediated transport (Supplementary Figure S2).
Untargeted Metabolic Profiling of Sida cordifolia
Transcriptomic analysis of metabolic pathways and the validation of key metabolites require further confirmation by the identification of metabolites. The metabolomic analysis of the samples was performed by an untargeted metabolomic method. A total of 298 different metabolites were detected in leaf, stem, and root tissues of S. cordifolia in both the positive and negative modes. Flavonoids were the major group of secondary metabolites found in the metabolome analysis of S. cordifolia, where naringin, cinnamic acid, cinnamaldehyde, caffeic acid, kaempferol derivatives, and quercetin were predominantly present in root, stem, and leaf tissues. Some tissue-specific metabolites, such as rosmarinic acid, caffeic acid and its derivatives, kaempferol derivatives, rutin, quercitrin, and gallic acid, were identified in root tissue. Apigenin, gallagic acid, quercetin and its derivatives, caffeic acid and its derivatives, and kaempferol and its derivatives were found in leaf tissue. In stem tissue, ferulic acid, gallic acid, a quinic acid derivative, and malic acid were found (Table 2). In some metabolic pathways, such as flavonoid biosynthesis, isoflavonoid biosynthesis, flavone and flavonol biosynthesis, phenylalanine, tyrosine and tryptophan biosynthesis, and ubiquinone biosynthesis, intermediate metabolites could be identified in the mass spectra. The results revealed that the transcripts encoding the pathway enzymes are consistent with the metabolites identified from the corresponding pathways (Supplementary Figure S4).
Gene Expression Analysis of Sida cordifolia
The qRT-PCR analysis was done to validate the transcriptome data and analyze the expression of the selected genes. The selected transcripts included phenylalanine ammonia lyase (EC: 4.3.1.24, 4.3.1.25), trans-cinnamate 4-monooxygenase (EC: 1.14.14.91), chalcone-flavonone isomerase (EC: 5.5.1.6), and cinnamyl alcohol dehydrogenase 1 (EC: 1.1.1.219) (Table 3). The primer sequences used for the analysis are depicted in Supplementary Table S1. The analysis revealed varied expression patterns of the selected genes, where all the genes were upregulated in the root tissue rather than the stem and leaf. Actin was used as the housekeeping gene. The results obtained showed significant agreement with the transcriptome data (Figure 3). RNA sequencing is a high-throughput sequencing method that has become an integral part of metabolome research in non-model species owing to its relative rapidity. This sequencing technology is effective for studying annotation and for elucidating the various transcripts responsible for biosynthetic pathways, gene expression analysis, and the distribution of secondary metabolites in different tissues [52]. The Illumina sequencing of the root tissue of Sida cordifolia yielded 59,484,771 reads, which were further processed to remove the adapter and redundant sequences. The de novo assembler TRINITY was used to assemble the transcripts, and a total of 187,215 transcripts were identified.
Functional annotation and classification comprise a vital step that provides homology information. Gene ontology classification was done to annotate the transcripts into three major categories: cellular component, molecular function, and biological process. In our study, out of 187,215 unigenes, a total of 54,375 unigenes were non-annotated, and 132,840 unigenes were annotated against different databases. Sida cordifolia showed high similarity with the genus Gossypium and the least similarity with the genus Stipa. These results likely reflect the scarcity of reference genomic resources for the genus Sida.
KEGG Pathway Analysis and Identification of Candidate Genes Involved in Secondary Metabolite Biosynthesis
Biological pathway analysis against the KEGG database was conducted to identify the various pathways represented in the transcriptome data, with a total of 150 pathway maps, of which a higher number of transcripts were annotated to metabolism pathways than to secondary metabolite biosynthesis pathway maps. The phenylpropanoid biosynthesis pathway is the key biosynthetic pathway responsible for the production of various secondary metabolites. The pathway is initiated by the conversion of phenylalanine to cinnamate and ammonia, catalyzed by the enzyme phenylalanine ammonia lyase (PAL). PAL is the key regulatory enzyme of the pathway and is found ubiquitously in plants [53]. Its product, trans-cinnamic acid, serves as a precursor for the flavonoid and lignin biosynthetic pathways [54]. A total of 12 transcripts encoding PAL were identified. The increased activity of the enzyme PAL in turn increases the production of phenylpropanoid products, which vary with different stress stimuli, developmental stages, and tissue and cell differentiation. The enzyme is reported to be stimulated by infection, radiation, drastic changes in temperature, or drought stress [55][56][57][58]. Cinnamoyl-CoA reductase 1, involved in the monolignol pathway, catalyzes the conversion of p-coumaroyl-, feruloyl-, and sinapoyl-CoA to p-coumaraldehyde, coniferaldehyde, and sinapaldehyde [59,60].
Flavonoids are naturally occurring secondary metabolites that are rich in antioxidant, antibacterial, anti-inflammatory, antifungal, and antidiabetic activity and act as anticancer agents. On the basis of the number of hydroxyl groups, flavonoids are grouped into flavones, flavonols, flavanones, anthocyanidins, and isoflavones. Flavonoids are synthesized from the phenylpropanoid biosynthesis pathway by several key enzymes. The enzymes involved in the first steps of flavonoid biosynthesis are 4-coumaroyl-CoA ligase, 4-coumarate-3-hydroxylase, and phenylalanine ammonia lyase. The genes involved in the flavonoid biosynthesis pathway are flavone synthase, dihydroflavonol-4-reductase, chalcone synthase, and chalcone isomerase [61]. The enzyme 4-coumarate-CoA ligase catalyzes the activation of 4-coumarate; it occurs in multiple isoenzyme forms that exhibit distinct substrate affinities and metabolic functions. It has a pivotal role in the biosynthesis of secondary metabolites from general phenylpropanoid metabolism. Other secondary metabolite biosynthesis pathways identified are the ubiquinone, triterpenoid, sesquiterpenoid, and isoquinoline alkaloid biosynthesis pathways.
A flavonoid, quercetin, and its derivatives are extensively used due to their bioactive effects, as they possess many pharmacological properties such as anti-arthritic, cardiovascular, anticancer, anti-Alzheimer's, antimicrobial, and wound-healing effects [62][63][64]. Studies suggest that flavonoids are potential inhibitors of coronaviruses, acting against SARS-CoV-2 infection by binding to the targets that promote the replication and entry of the virus [65,66]. The KEGG pathway analysis revealed that the flavonoids and their derivatives are derived from the phenylpropanoid biosynthesis pathway from L-tyrosine, with intermediate metabolites such as naringenin and kaempferol. Phenolic compounds function by reducing reactive oxygen species and suppressing lipid peroxidation, thus acting as potent antioxidant agents. They also play a vital role in the prevention and treatment of chronic illnesses like neurodegenerative disorders. The integrated LC-MS/MS and transcriptomic analysis revealed that acacetin and procyanidin were some of the important secondary metabolites that could be responsible for the pharmacological properties exhibited by the plant, which is used to treat various inflammatory conditions, exhibits bronchodilator activity, and has a cardioprotective nature.
Simple Sequence Repeats Analysis
Molecular markers reveal the genetic relationships between species, as they are unaffected by the environment, are easy to detect, and have high heritability [67,68]. These markers are widely used in plant improvement and genetic conservation [69]. A total of 36,197 SSRs were identified, with the highest number of mononucleotide motifs compared to di-, tri-, tetra-, penta-, and hexa-nucleotide motifs. SSRs with five tandem repeats were the most common in Sida cordifolia; A was the most abundant mononucleotide repeat (35%), and among the trinucleotide repeats the highest frequency was observed for AAG (11.51%), followed by AAT (4.36%). The SSRs discovered here can support plant genetics and molecular breeding as well as the identification of candidate molecular markers for understanding genetic variation in the genus Sida.
Conclusions
This study expedites molecular-level investigations in Sida cordifolia through the characterization of the root transcriptome to identify the unitranscripts responsible for the biosynthesis of secondary metabolites, since the root tissue contributes to several of the plant's medicinal properties. The results suggest that flavonoids are the major secondary metabolites responsible for the medicinal properties exhibited by the plant. The assembled transcripts were validated using the reverse transcription PCR method, and the expression levels of the key genes from the phenylpropanoid and flavonoid pathways were obtained using qRT-PCR. The secondary metabolites identified from the metabolomic studies pave the way for understanding the molecular mechanisms of the pharmacological properties exhibited by S. cordifolia. This is the first study that integrates the molecular basis and metabolomics of this plant.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/genes13101909/s1, Figure S1: Top hit species distribution of Sida cordifolia. Figure S2: Gene ontology classification of Sida cordifolia. Figure S3: KEGG pathway analysis based on the metabolism pathways in Sida cordifolia. Figure S4: KEGG pathway analysis of phenylpropanoid biosynthesis. Figure S5: KEGG pathway analysis of flavonoid biosynthesis. Figure S6: SSR identification from the root transcriptome of Sida cordifolia. Table S1: List of genes selected to validate gene expression and their primer sequences.
Author Contributions: Conceptualization, writing-review and editing, supervision, S.P.; software, P.N.; methodology, validation, writing-original draft preparation, D.P. All authors have read and agreed to the published version of the manuscript.
"Biology"
] |
Coulomb-actuated microbeams revisited: experimental and numerical modal decomposition of the saddle-node bifurcation
Electrostatic micromechanical actuators have numerous applications in science and technology. In many applications, they are operated in a narrow frequency range close to resonance and at a drive voltage of low variation. Recently, new applications, such as microelectromechanical systems (MEMS) microspeakers (µSpeakers), have emerged that require operation over a wide frequency and dynamic range. Simulating the dynamic performance under such circumstances is still highly cumbersome. State-of-the-art finite element analysis struggles with pull-in instability and does not deliver the necessary information about unstable equilibrium states accordingly. Convincing lumped-parameter models amenable to direct physical interpretation are missing. This inhibits the indispensable in-depth analysis of the dynamic stability of such systems. In this paper, we take a major step towards mending the situation. By combining the finite element method (FEM) with an arc-length solver, we obtain the full bifurcation diagram for electrostatic actuators based on prismatic Euler-Bernoulli beams. A subsequent modal analysis then shows that within very narrow error margins, it is exclusively the lowest Euler-Bernoulli eigenmode that dominates the beam physics over the entire relevant drive voltage range. An experiment directly recording the deflection profile of a MEMS microbeam is performed and confirms the numerical findings with astonishing precision. This enables modeling the system using a single spatial degree of freedom.
SUPPL. 1. SYMMETRIC EULER-BERNOULLI EIGENMODES
For a prismatic beam, clamped at both ends, the Euler-Bernoulli eigenmodes are defined by the eigenvalue equation Eq. (1) subject to the boundary conditions Eq. (11). The eigensystem for even indices, $\lambda_{2n}$ and $\psi_{2n}(\xi)$, is given by Eq. (S1), where the $\beta_{2n}$ are the zeros of
$$\tan(\beta_{2n}/2) + \tanh(\beta_{2n}/2) = 0 \qquad (S3)$$
and can be approximated using
$$\beta_{2n} \approx \left(2n + \tfrac{3}{2}\right)\pi + \dots \qquad (S4)$$
Table S1 lists the numerically determined $\beta_{2n}$ from Eq. (S3) in comparison to the approximation using only the first term in Eq. (S4). It is readily verified that the function in Eq. (S1) satisfies Eq. (1) and the boundary conditions in Eq. (11). Note that the eigenmodes as defined above are ortho-normal,
$$\int_{-1/2}^{+1/2} \psi_{2m}(\xi)\,\psi_{2n}(\xi)\,\mathrm{d}\xi = \delta_{mn}.$$
Moreover, the even eigenmodes form a complete ortho-normal basis of the Hilbert space of symmetric square-integrable functions over the interval $\left(-\tfrac{1}{2}, +\tfrac{1}{2}\right)$. This means that we can expand any symmetric deflection profile $w(\xi)$ in terms of these eigenmodes,
$$w(\xi) = \sum_{n} \hat{w}_{2n}\,\psi_{2n}(\xi), \qquad \hat{w}_{2n} = \int_{-1/2}^{+1/2} w(\xi)\,\psi_{2n}(\xi)\,\mathrm{d}\xi,$$
where the second relation follows from ortho-normality. From these definitions we also get Parseval's equation,
$$\int_{-1/2}^{+1/2} w^2(\xi)\,\mathrm{d}\xi = \sum_{n} \hat{w}_{2n}^2, \qquad (S10)$$
which is used to assess the relative contribution of an individual eigenmode to the deflection profile, $\hat{w}_{2n}^2 / \sum_m \hat{w}_{2m}^2$. Finally, to be able to apply the above formula to the evaluation of the experimental data, we use Parseval's equation (S10).

SUPPL. 2. NUMERICAL SOLUTION PROCEDURE

We use a fourth-order collocation algorithm provided by the SciPy library¹,², which requires a system of first-order differential equations as input. To this end we cast Eq. (12) into a set of four ordinary differential equations, containing however the integral γ (Eq. (S11)). In order to deal with γ, we introduce a new function Γ(ξ) whose derivative is the integrand of γ. The function Γ(ξ) can be directly obtained during the solution procedure by adding its defining differential equation to the system in Eq. (S11). We then define a simple iteration scheme, starting with γ₀ = 0 and solving the new system repeatedly, where i is the iteration index and $\gamma_{i+1}$ is updated from the solution of iteration i. The iteration is terminated upon meeting the criterion $|\gamma_{i+1} - \gamma_i| < \epsilon_\gamma$, where $\epsilon_\gamma$ is the targeted accuracy. Depending on parameter settings, a damping constant $c_{ID} < 1$, adjusting the step size, may be required to reach convergence.
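A compact, hedged sketch of this γ-iteration built on SciPy's fourth-order collocation solver (solve_bvp) is given below; the right-hand side, the load term and the stress-stiffening integral are simplified stand-ins for Eqs. (12) and (13), not the paper's exact expressions, and all parameter values are placeholders:

```python
import numpy as np
from scipy.integrate import solve_bvp

alpha1, v2 = 1.0, 0.5        # placeholder load parameters, not the paper's values
c_id, tol = 0.5, 1e-8        # damping constant c_ID and target accuracy eps_gamma

def rhs(xi, y, gamma):
    # y = (w, w', w'', w'''); illustrative beam equation with stress stiffening:
    # w'''' = gamma * w'' + alpha1 * v^2   (stand-in for Eq. (12))
    load = alpha1 * v2 * np.ones_like(xi)
    return np.vstack([y[1], y[2], y[3], gamma * y[2] + load])

def bc(ya, yb):
    # Clamped-clamped boundary conditions, Eq. (11): w = w' = 0 at both ends.
    return np.array([ya[0], ya[1], yb[0], yb[1]])

def gamma_from_profile(xi, w_prime):
    # Stress-stiffening integral, standing in for Eq. (13).
    return np.trapz(w_prime**2, xi)

xi = np.linspace(-0.5, 0.5, 201)
y = np.zeros((4, xi.size))
gamma = 0.0
for i in range(100):
    sol = solve_bvp(lambda x, s: rhs(x, s, gamma), bc, xi, y)
    gamma_new = gamma_from_profile(sol.x, sol.y[1])
    if abs(gamma_new - gamma) < tol:     # termination criterion
        break
    gamma += c_id * (gamma_new - gamma)  # damped update of gamma_i
    xi, y = sol.x, sol.y                 # reuse refined mesh and solution
```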
SUPPL. 3. EULER-BERNOULLI BEAM SUBJECT TO A CONCENTRATED LOAD
In this section we compile the solution to the concentrated load equation (Eq. (21)), including stress-stiffening γ according to Eq. (13) and subject to the boundary conditions Eq. (11). We start by noticing that outside the beam center we need to solve Eq. (20). It is worth emphasising that Eq. (20) holds irrespective of our claims about the shear force.
These can therefore be verified by inspecting the results below. The respective bending profile, compatible with the boundary conditions in Eq. (11) and satisfying Eq. (15), is given by Eq. (S18), where the parameters $c_a$, $c_b$ and $c_c$ are given by Eq. (S19). This may be readily verified by direct computation upon inserting Eq. (S18) into Eq. (20) and into Eq. (15).
The bending profile $w_c(\xi)$ and its first and second derivatives are continuous at the beam center ξ = 0; the third derivative, however, is discontinuous there, as required by the concentrated load. This establishes that $w_c(\xi)$ as defined in Eq. (S18) is the bending profile resulting from a concentrated load at the beam center, i.e. it is the solution to Eq. (21). In the limit of vanishing stress stiffening the corresponding simplified profile is obtained. To establish the relation between γ and $\alpha_1$, required to analytically deal with stress stiffening at the upper bifurcation point, we use Eq. (13). Performing the necessary integration and solving for $\alpha_1$ yields the desired relation; for practical applications it is helpful to notice its simplified form. Finally, we compute the contributions of the Euler-Bernoulli eigenmodes to $w_c(\xi)$ according to Eq. (S10), which admittedly is a bit tedious.
SUPPL. 4. TIMOSHENKO BEAM SUBJECT TO A CONSTANT LOAD
At very low drive voltages the Coulomb force generates a constant beam load of magnitude $\alpha_1 v^2$. In this section we compile the relevant formulae, doing so for the more general case of a Timoshenko beam; this helps us explore the limits of Euler-Bernoulli theory when varying the beam thickness. In contrast to the Euler-Bernoulli assumptions, Timoshenko beam theory allows for a rotation of the normal to the mid-surface of the beam. The rotation angle is denoted here by φ(ξ). In the formulae below, w(ξ) is, as usual, the displacement in dimensionless form. The Timoshenko beam equations for a prismatic beam of rectangular cross section, subject to a constant unit load, then read³, where $\alpha_1$ is taken from Eq. (9) and the value for $\nu_{zx}$ is given in Eq.
"Engineering",
"Physics"
] |
Lipopolysaccharide Inhibits Virus-mediated Induction of Interferon Genes by Disruption of Nuclear Transport of Interferon Regulatory Factors 3 and 7*
We have studied the effects of lipopolysaccharide (LPS) on the Newcastle disease virus (NDV)-mediated induction of cytokine genes expression. Raw cells treated with LPS before or after virus infection showed down-regulation in the expression of interferon A and, to a lesser extent, interferon B genes. In contrast, induction of the interleukin (IL)-6 gene was enhanced. The effects of LPS were not a result of the suppression of virus replication, because the transcription of viral nucleocapsid gene was not affected. Consistent with these findings, LPS also suppressed the NDV-mediated induction of chloramphenicol acetyltransferase reporter gene driven by murine interferon A4 promoter in a transient transfection assay. Furthermore, LPS inhibited virus-mediated phosphorylation of interferon regulatory factor (IRF)-3 and the consequent translocation of IRF-3 from cytoplasm to nucleus. The LPS-mediated inhibition of IFNA gene expression was much weaker in infected Raw cells that constitutively overexpressed IRF-3. The nuclear translocation of IRF-7 in infected cells was also inhibited by LPS. These data suggest that LPS down-regulates the virus-mediated induction of IFNA genes by post-translationally targeting the IRF-3 and IRF-7 proteins.
IFNs¹ are a family of natural proteins serving as part of the defense systems against infections. Cells can produce IFNs in response to virus infection, and the newly synthesized IFNs are secreted extracellularly, bind to IFN receptors, and activate the Jak-Stat signaling pathway that leads to the stimulation of expression of cellular genes generally called ISGs (1)(2)(3). Some of these genes encode proteins that can inhibit viral replication, thus conferring the antiviral state to the cells (3). While the molecular mechanism involved in the induction of IFN and ISG genes has been studied in great detail in vitro, it is largely unknown how much this system is affected by other stress factors. Of special concern is the potential impact of bacterial infection on this system, because concomitant infection with both bacteria and virus is a common clinical situation.
LPS is the major component of the outer membrane of Gram-negative bacteria (4). Through activation of target cells such as macrophages and B cells, LPS induces the innate immune response and expression of cytokine genes, which include IL-1, IFNB, and TNFα (5,6). These cytokines are responsible for most of the biological effects of LPS, and deregulated production of these cytokines results in generalized inflammation or endotoxic shock. The signal transduction pathway for LPS is initiated by its binding to LBP (LPS-binding protein) (7), an acute phase reactant produced by the liver, followed by binding to CD14, a glycosylphosphatidylinositol-anchored membrane protein (8,9), and Toll-like receptor (TLR2) (10). This receptor is activated by LPS, and the response depends on the binding of LPS to LBP and is enhanced by CD14. After binding to the receptor, LPS activates a number of tyrosine and serine kinases, including Raf-1, the p42 and p44 isoforms of the MAP kinase, the p38 kinase, c-Jun kinase, ceramide-activated kinase (CAK), and the CD14 receptor-coupled kinase p56lyn (11)(12)(13), with consequent activation of nuclear transcription factors such as NF-κB, Stat1 (signal transducer and activator of transcription), Stat3, and NF-IL6 (C/EBP) (14-18). LPS was also shown to activate nuclear factors binding to the interferon-stimulated response element, although the identity of these factors has not been elucidated (19,20). Furthermore, functional cooperation between LPS and IFNs, especially IFNγ, was shown to result in expression of a set of pro-inflammatory genes (21,22). Considering that IFNA and IFNB promoters contain cis-elements which are highly homologous to the interferon-stimulated response element, these findings indicate that LPS may potentially modulate the expression of IFN genes.
The signal transduction pathway leading to the induction of IFN gene expression in virus-infected cells is largely unknown. The analysis of the virus responsive element (VRE) of both IFNA and IFNB promoters has identified highly conserved purine-rich sequence repeats that were shown to bind the transcription factors of the IRF family (23)(24)(25)(26)(27)(28). In addition, the IFNB gene promoter also contains a functional NF-κB site. Three of the IRF factors, IRF-1, IRF-3, and IRF-7, were shown to activate the promoters of the IFNA or IFNB gene in transient transfection assays (23, 29-35). However, the virus-mediated induction of IFNA and IFNB genes was not impaired in mice or fibroblasts with homozygous deletion of the IRF-1 gene (36,37). The identification of IRF-3 and characterization of its role as a signaling transducer in virus-infected cells has provided a major step toward the understanding of the virus-mediated signaling pathway leading to the expression of type I IFN genes (38,39).
It was shown that IRF-3 is expressed constitutively in a variety of tissues and cell lines and synergistically cooperates with virus in the induction of both IFNA and IFNB genes. Virus infection induces phosphorylation of IRF-3 at one threonine and several serine residues at the carboxyl-terminal end. The phosphorylated IRF-3 is then translocated from the cytoplasm to the nucleus, where it forms a complex with the transcription coactivator p300/CBP (29,30). Recently, another IRF member, IRF-7, was identified that seems to play a critical role in the induction of IFNA genes (33)(34)(35). IRF-7 is also phosphorylated in infected cells and transported from the cytoplasm to the nucleus. In contrast to IRF-3, IRF-7 is preferentially expressed in cells of lymphoid origin, and its transcription is stimulated by IFNA and virus infection (34). While the role of IRF-3 and IRF-7 in the induction of IFN genes has been gradually unveiled, the kinases responsible for the phosphorylation of IRF-3 and IRF-7 have not yet been identified.

¹ The abbreviations used are: IFN, interferon; IRF, interferon regulatory factor; LPS, lipopolysaccharide; NDV, Newcastle disease virus; IL, interleukin; CMV, cytomegalovirus; CAT, chloramphenicol acetyltransferase; ISG, interferon-stimulated gene; CHX, cycloheximide; m.o.i., multiplicity of infection; GFP, green fluorescent protein; NF-κB, nuclear factor κB; Stat, signal transducer and activator of transcription; bp, base pair(s).
We have been interested in the influence of other pathogens, such as bacteria, fungi, or parasites, on the virus-mediated induction of IFNs. Mixed infection is a clinical condition that occurs with high frequency in immunocompromised people as a consequence of HIV-1 infection, organ transplantation, or cancer. As the initial step to address this question, we used LPS as a model to mimic bacterial infection and examined whether it can influence the virus-mediated signaling pathway and the induction of IFN gene expression. We have found that LPS is a potent suppressor of the virus-mediated induction of IFN genes. Characterization of the underlying molecular mechanism has correlated the LPS-mediated suppression with the inhibition of the virus-mediated phosphorylation and nuclear translocation of IRF-3 and IRF-7. Furthermore, overexpression of IRF-3 partially reverted the LPS effect. Our study illustrates a scenario in which bacterial infection interferes with the virus-mediated induction of IFN gene expression.
EXPERIMENTAL PROCEDURES
Cells, Virus, and Reagents-Raw cells were purchased from ATCC (American Type Culture Collection) and grown in RPMI medium supplemented with 10% fetal calf serum. NDV was propagated in the allantoic cavity of 10-day-old eggs. Sendai virus was purchased from Specific Pathogen-Free Avian Supply (Preston, CT). The antibody to mouse IRF-3 was a gift from Dr. T. Fujita (The Tokyo Metropolitan Institute of Medical Science). Antibodies to human IRF-3 were prepared by immunization of rabbit with GST-IRF-3 fusion protein (32). CD14 antibody was purchased from Santa Cruz Biotechnology (Santa Cruz, CA). LPS (Escherichia coli, serotype 055B5) was purchased from Sigma (St. Louis, MO), and mouse IFN (mixture of IFNA and IFNB) was purchased from Lee Biomolecular Research (San Diego, CA). The Raw-CMV-IRF-3 cell line was established by transfecting the pcDNA-IRF-3 into Raw cells and selecting for cells resistant to G418. A pool of stably transfected colonies was used in the experiments.
Northern Blot Hybridization-Total RNA was isolated with TRIzol reagent (Life Technologies, Gaithersburg, MD) and purified according to the protocol of the manufacturer. Ten micrograms of purified RNA was analyzed on a 0.8% formaldehyde-agarose gel, followed by transfer to nitrocellulose paper. The filters were prehybridized in hybridization buffer (50% formamide, 5× SSC, 150 µg/ml herring sperm DNA, 5× phosphatidylethanolamine) for 1 h and then hybridized with the 32P-labeled riboprobes in the same buffer at 65 °C (for cDNA probes, 50 °C) overnight. Blots were washed sequentially with the following buffers until a clear background was achieved: 2× SSC, 0.1% SDS; 0.5× SSC, 0.1% SDS; 0.1× SSC, 0.1% SDS. The ethidium bromide-stained gel was used as the loading control.
Transfection and CAT Assay-Raw cells (3×10^6) were seeded on 60-mm dishes 1 day before transfection. Five micrograms of the reporter IFNA4/CAT plasmid and 100 ng of β-galactosidase plasmid were transfected into Raw cells with Superfect reagent (Qiagen, Chatsworth, CA). When indicated, cells were infected with NDV or treated with LPS for the indicated time periods as described in the respective figure legends. The results from the CAT assay were normalized to an equal amount of β-galactosidase.
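To make the normalization step concrete, the sketch below shows how reporter activity is typically corrected for transfection efficiency by dividing CAT activity by β-galactosidase activity before computing fold changes. This is a generic illustration, not the authors' analysis code, and the numerical readings are hypothetical.

```python
# Illustrative normalization of co-transfected reporter data: CAT activity is
# divided by beta-galactosidase activity to correct for transfection
# efficiency. Generic sketch only; the example readings are hypothetical.

def normalized_cat(cat_activity: float, beta_gal_activity: float) -> float:
    """Return CAT activity normalized to beta-galactosidase activity."""
    return cat_activity / beta_gal_activity

# Hypothetical readings for an NDV-infected sample and an LPS-pretreated one.
ndv = normalized_cat(cat_activity=4200.0, beta_gal_activity=1.05)
ndv_lps = normalized_cat(cat_activity=1100.0, beta_gal_activity=0.98)

fold_suppression = ndv / ndv_lps
print(f"LPS suppressed IFNA4/CAT induction ~{fold_suppression:.1f}-fold")
```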
Western Blot Analysis-Raw cells were treated with LPS or infected with Sendai virus as described in the figure legends. Whole cellular extracts were prepared by incubation of cell pellets in cell lysis buffer (20 mM HEPES, pH 7.9, 50 mM NaCl, 10 mM EDTA, 2 mM EGTA, 0.1% Nonidet P-40, 10% glycerol, 1 mM dithiothreitol, 50 mM sodium fluoride, 5 mM sodium orthovanadate) on ice for 30 min. The extracts were cleared by centrifugation at 15,000 rpm for 30 min, and proteins (30 µg) were separated by electrophoresis in a 7.5% SDS-polyacrylamide gel and transferred to nitrocellulose paper. Western blotting was performed following the protocols of the manufacturer (ECL method, Amersham Pharmacia Biotech).
Fluorescence Microscopic Study-Raw cells (5×10^5 cells) were seeded in chambered cover glasses (Nunc, Naperville, IL) 24 h before transfection with 0.5 µg of GFP/IRF-3 (29) or GFP/IRF-7 (34) expression plasmids. Transfection was done with Superfect reagent (Qiagen); 16 h after transfection, cells were treated with LPS at the time points described in the figure legends and infected with Sendai virus for 6 h. Cells were examined under a fluorescence microscope at a wavelength of 507 nm. The pictures presented were recorded at the same magnification and the same exposure time.
Pretreatment of Raw Cells with LPS Differentially Modulates the NDV-mediated Induction of Cytokines-To examine the effect of LPS treatment on the virus-mediated induction of IFNA and IFNB genes, the mouse macrophage cell line Raw 264.7 was treated with LPS (1 µg/ml) for 1 h and then infected with NDV (m.o.i. 5) for 6 h. Analysis of the relative levels of IFNA and IFNB mRNA showed that IFNA mRNA could only be detected in NDV-infected cells but not in cells that were pretreated with LPS (Fig. 1A). The relative levels of IFNB mRNA were also suppressed in LPS-treated cells, but the level of suppression was lower than that of IFNA. In contrast, induction of IL-6 gene expression by NDV was enhanced by LPS pretreatment. The dose-response experiments (Fig. 1B) demonstrated that as little as 10 ng/ml of LPS was sufficient to suppress the levels of IFNA mRNA by about 80%, whereas the same amount of LPS suppressed the levels of IFNB mRNA by only about 30%. The kinetics study shown in Fig. 1B demonstrated that the suppression of IFN gene expression by LPS was very fast; treatment with LPS for 10 min was already able to suppress the virus-mediated induction of IFNA and IFNB genes. In an independent experiment, the effective dose of LPS to mediate the suppression of IFNA gene induction was determined to be as low as 3 ng/ml (data not shown). It was shown that the effect of LPS can be divided into low (~1 ng/ml) and high dose effects and that the low dose effect was CD14-dependent (7). Because the inhibition of IFNA and IFNB gene expression could be observed with low levels of LPS, this seemed to indicate that LPS may employ a CD14-dependent pathway to suppress the virus-mediated induction of IFN. However, co-incubation of anti-CD14 antibody (Santa Cruz Biotechnology) with LPS neither blocked the LPS inhibition nor modulated the virus-mediated induction of the IFNA and IFNB genes (data not shown).
FIG. 1. Raw cells were either untreated or treated with different concentrations of LPS as indicated for 1 h; cells were then washed and infected with NDV for 6 h. Total RNA was isolated as described under "Experimental Procedures," size-separated on a 0.8% formaldehyde-agarose gel, and analyzed by Northern hybridization with IFNA-, IFNB-, and IL-6-specific probes. The ethidium bromide-stained RNA on the agarose gel before transfer to the nitrocellulose filter is shown.
The LPS-mediated Suppression of IFN Induction by Virus Is Not a Consequence of Suppression of Viral Replication-To determine whether the observed inhibition of IFN gene expression by LPS is a result of the inhibition of viral replication, we examined NDV replication in LPS-treated cells and untreated controls by analyzing the NP gene transcripts. An earlier study observed that the levels of NDV NP transcripts correlate with the induction level of IFN mRNA (43). As shown in Fig. 2, at 6 h post-NDV infection, high levels of NP transcripts could be detected (Fig. 2A, lane 2), and these were not modulated by LPS treatment applied at 1, 2.5, or 3.5 h after NDV infection (Fig. 2A, lanes 3-5). The relative levels of NP transcripts were, however, lower in cells infected with NDV in the presence of CHX (5 µg/ml) (Fig. 2A, lane 7). Only when cells were treated with high levels of LPS (1 µg/ml) 1 h before virus infection (Fig. 2A, lane 6) was viral replication inhibited, and the relative levels of NP transcripts were lower than in the infected, untreated controls. Pretreatment with LPS at 10 ng/ml and lower did not affect NP synthesis. Therefore, in the following experiments, we used LPS at a concentration of 10 ng/ml for pretreatment and 1 µg/ml for LPS treatments initiated after virus infection.
LPS was shown to induce a low level of IFNB in monocytes and macrophages (44,45). A previous study disclosed that some of the effects of LPS on macrophages, such as the induction of nitric oxide, are due to the production of IFNB (46). Although LPS treatment (1 µg/ml, 4 h) was unable to induce IFNA gene expression in Raw cells, IFNB mRNA could be detected by reverse transcriptase-polymerase chain reaction under the same treatment (data not shown). While LPS pretreatment (1 h) totally suppressed the virus-mediated induction of the IFNA gene, exogenously added IFN (250 units/ml, 1 h) slightly enhanced the virus-mediated induction of the IFNA gene (data not shown). Furthermore, the LPS-mediated suppression of IFNA gene expression was not affected by the presence of CHX (5 µg/ml) (data not shown). This concentration of CHX blocked the transcription of the viral nucleocapsid gene by 80% (Fig. 2A, lane 7). These data demonstrated that the effect of LPS pretreatment was not caused by IFN or any other LPS-induced protein.
The study of the interaction between LPS and virus in the context of IFN induction was further extended by treating Raw cells with LPS at different time points after virus infection to determine at which stages of viral infection the LPS-mediated inhibition is still effective. As shown in Fig. 2B, the inhibition of IFNA and IFNB gene expression was less effective if LPS treatment was started later than 3 h after NDV infection. In contrast, the induction of IL-6 was enhanced by LPS treatment even when started as late as 3.5 h post-infection. The LPS-mediated inhibition was not specific for NDV infection and could also be demonstrated in Raw cells infected with Sendai virus (data not shown).
LPS Suppresses the Virus-mediated Activation of the IFNA Promoter-To determine whether the LPS-mediated inhibition of IFN gene induction is regulated at the transcriptional level, we analyzed the effect of LPS on the inducible expression of a reporter gene in transiently transfected Raw cells. The cells were transfected with a plasmid containing the CAT gene under the control of the IFNA4 promoter, infected with NDV, and treated with LPS as indicated in Fig. 3. It can be seen that LPS treatment, either before or after virus infection, suppressed virus-mediated induction of the IFNA/CAT plasmid by 2-5-fold, indicating that LPS suppression occurs at the transcriptional level. The suppression was more effective in cells treated with LPS before or 1 h after the infection than at later times post-infection. This is consistent with the results obtained when the expression of the endogenous IFNA gene was analyzed. To determine whether LPS inhibits the virus-mediated transcriptional activation of the IFNA promoter by induction of phosphatases, we treated cells with okadaic acid, a known inhibitor of phosphatases (PP1 and PP2A), before and simultaneously with LPS treatment. However, the inhibition of virus-mediated induction of the IFN/CAT plasmid was not affected by okadaic acid.
FIG. 2. Comparison of the relative levels of the NDV nucleocapsid (NP) mRNA and the respective cytokine mRNAs in LPS-treated and NDV-infected Raw cells. A, Raw cells were infected with NDV for 6 h, and LPS was added either 1 h before (designated as −1) or at 1, 2.5, or 3.5 h post-infection (designated as +1, +2.5, and +3.5, respectively). When indicated, CHX (5 µg/ml) was added to the medium 80 min before NDV infection. Total RNA (10 µg) was analyzed by Northern hybridization with the NP probe. B, LPS can suppress virus-mediated induction of the IFN genes when added shortly after virus infection. Raw cells were infected with NDV for 6 h, and LPS was added at the indicated time points post-infection. Ten micrograms of total RNA was then analyzed by Northern hybridization with the cytokine probes as indicated.
To further define the cis-elements of the IFNA4 promoter that are critical for LPS-mediated suppression, plasmids containing deletion mutants of the IFNA4 promoter linked to the CAT reporter gene were used for transfection. We found that as little as −118 bp of the IFNA promoter, which contains a 35-bp-long virus-inducible element (IE), is able to confer the LPS-mediated inhibition (data not shown).
LPS Suppresses the Virus-mediated Phosphorylation of IRF-3 and Impedes Its Translocation from Cytoplasm to Nucleus-We and others have recently described that IRF-3 serves as a transducer of virus-mediated signaling from the cytoplasm to the nucleus and plays a critical role in the induction of IFNA and IFNB genes (29,30,34). It was shown that IRF-3 is phosphorylated in infected cells and consequently translocated to the nucleus, where it interacts with the transcriptional coactivator CBP/p300. To determine whether the LPS-mediated inhibition of IFNA and IFNB gene expression is the result of interference with IRF-3 function, we analyzed the effect of LPS on the virus-mediated phosphorylation of IRF-3. IRF-3 is constitutively present in uninfected Raw cells, and infection of Raw cells with Sendai virus for 6 h resulted in a decrease in the relative levels of IRF-3 and in the phosphorylation of IRF-3, which was reflected by the appearance of a slower-migrating band (Fig. 4) on Western blot analysis. These results are in agreement with our previous findings, in which we demonstrated that phosphorylated IRF-3 is targeted for degradation, presumably by the ubiquitin pathway (29). In infected, LPS-treated cells, phosphorylation of IRF-3 was prevented, as indicated by the absence of the slower-migrating IRF-3 band. The absence of this band was clearest in cells treated with LPS 1 h before or 1 h after the infection (Fig. 4). Thus, for both the suppression of IRF-3 phosphorylation and the inhibition of expression of the IFN genes, LPS treatment has to be initiated before or soon after virus infection. LPS treatment alone did not induce phosphorylation of the IRF-3 protein (data not shown).
To determine whether the suppression of phosphorylation of IRF-3 by LPS results in the inhibition of translocation of cytoplasmic IRF-3 into nucleus, we analyzed the effect of LPS treatment on nuclear translocation of the GFP-tagged IRF-3.
In a transfection experiment with the IFN/CAT plasmid, the GFP-tagged IRF-3 was able to stimulate expression of this plasmid in infected cells with an efficiency of about 45% of that of wild-type IRF-3 (data not shown). Raw cells were transfected with the expression plasmid encoding the GFP/IRF-3 fusion protein (29), and 24 h later, transfected cells were infected with Sendai virus for 6 h. It can be seen that, in uninfected cells, GFP/IRF-3 is located only in the cytoplasm (Fig. 5a), whereas it is efficiently translocated into the nucleus at 6 h post-infection (Fig. 5b). Treatment with LPS 1 h before or 1 h after Sendai virus infection efficiently suppressed the nuclear translocation of GFP/IRF-3 (Fig. 5, c and d). When treatment with LPS was initiated 4 h after virus infection (Fig. 5e), some cells showed the presence of GFP/IRF-3 only in the cytoplasm, whereas others had GFP/IRF-3 dispersed in both the cytoplasm and the nucleus. These data clearly show that LPS treatment of the cells, initiated either before or within the first hour after viral infection, interferes with the phosphorylation of IRF-3 and its transport from the cytoplasm to the nucleus.
Overexpression of IRF-3 Reverts the LPS-mediated Suppression of IFN Induction-To further determine whether IRF-3 is the target of the LPS suppression of the virus-mediated induction of IFNA and IFNB gene expression, we generated a Raw cell line, Raw-CMV-IRF-3, which constitutively overexpresses human IRF-3. Using Western blot analysis with antibodies to human IRF-3, we could detect expression of human IRF-3 in Raw-CMV-IRF-3 cells but not in the parental Raw cells (Fig. 6A). Infection of the Raw-CMV-IRF-3 cells with Sendai virus resulted in the expression of IFNA genes, as determined by the presence of IFNA mRNA. As shown in Fig. 6B, treatment of Raw cells with LPS initiated 2 or 4 h after virus infection resulted in a decrease in the relative levels of IFNA mRNA (70 and 50% suppression, respectively), while no LPS-mediated suppression could be seen in the infected Raw-CMV-IRF-3 cells. These results indicate that overexpression of IRF-3 can overcome the LPS-mediated inhibition of IFNA gene expression in infected cells.
LPS Interferes with the Nuclear Transport of IRF-7 in Infected Cells-We and others have recently shown that IRF-7 plays an important role in the expression of IFNA genes (33)(34)(35). As with IRF-3, IRF-7 is transported from the cytoplasm to the nucleus in infected cells, and the phosphorylation of serine residues in the carboxyl terminus of this protein is important for its transactivating activity. We have, therefore, examined whether LPS treatment also affects the nuclear transport of the GFP/IRF-7 fusion protein in infected cells. The results in Fig. 7a show that, in transfected Raw cells, GFP/IRF-7 can be detected predominantly in the cytoplasm. In contrast, at 7 h post-infection, GFP/IRF-7 starts to accumulate in the nucleus, although the transport is not yet complete (Fig. 7b). In cells that were either pretreated with LPS or treated with LPS 1 h after the infection, GFP/IRF-7 is localized only in the cytoplasm (Fig. 7, c and d). When LPS treatment was started at 4 h post-infection, some cells showed the presence of GFP/IRF-7 in both the nucleus and the cytoplasm, while others showed only the cytoplasmic localization (Fig. 7e). These data indicate that LPS treatment, when started before or early after infection, prevents transport of IRF-7 from the cytoplasm to the nucleus. However, these data also show that, when overexpressed, low levels of GFP/IRF-7 can be detected in the nucleus even in the absence of viral infection.
DISCUSSION
Two families of transcription factors play a critical role in the virus-mediated induction of IFNA and IFNB gene expression. The IRF factors, especially IRF-3 and IRF-7, can serve as direct transducers of virus-mediated signaling from the cytoplasm into the nucleus and activate transcription of IFNA and IFNB genes. In addition, viral infection also leads to the phosphorylation and degradation of IκB and subsequent transport of NF-κB (p50/p65 heterodimer) into the nucleus, where it participates in the activation of transcription of the IFNB gene. In the present study, we have demonstrated that LPS is a potent suppressor of the virus-mediated stimulation of IFN gene expression. The suppression of IFN gene expression in Raw cells could be demonstrated when cells were treated with LPS before or soon after virus infection at concentrations as low as 3 ng/ml. Although LPS suppressed the induction of both IFNA and IFNB gene expression in infected cells, the suppression of IFNA gene expression was stronger than that of the IFNB gene. One reason for the lower sensitivity of IFNB to the LPS-mediated inhibition could be the presence of a functional NF-κB site in the promoter of the IFNB gene. In macrophages and monocytes, LPS treatment was shown to activate IKK kinases (47), resulting in the phosphorylation of IκB and release of the NF-κB factor (p50/p65 heterodimer) into the nucleus. Indeed, we have found that treatment of Raw cells with LPS for 15 min led to the accumulation of p65/RelA in the nucleus and stimulated the binding of NF-κB to the positive regulatory domain II (PRDII) of the IFNB promoter (data not shown). The role of NF-κB in virus-mediated induction of the IFNB gene has been well demonstrated (48-50). Thus, these data indicate that the LPS-mediated modulation of the transcriptional activity of the IFNB gene involves both positive and negative regulation.
The observation that the LPS-mediated suppression of IFN gene expression is most effective at the early stages of viral infection indicates that LPS interferes with the initial stages of the virus-induced signaling pathway. The suppression of IFN induction by LPS is not a result of an inhibition of virus replication, because LPS can suppress IFN induction without affecting the transcription of the nucleocapsid genes. While LPS efficiently suppressed virus-mediated induction of IFNA and IFNB genes, induction of the IL-6 gene in infected cells was stimulated by LPS. LPS alone was found to induce expression of several early inflammatory genes in macrophages, including TNFα, TNFR-2, IP-10, IL-1, IFNB, and IRF-1. This activation is dependent on the LPS-induced tyrosine phosphorylation of the MAP kinases (51,52). Thus, the selective susceptibility of IFN induction to LPS may indicate that LPS suppresses the activation of factors crucial for the induction of IFNs, but not for IL-6 and other cytokines.
As discussed above, IRF-3 and IRF-7 were recently identified as transducers of the virus-mediated signaling pathway (29,30,34,53). IRF-3 was shown to be post-translationally phosphorylated at Ser-385 and Ser-386 (30) and at Ser-396 and Ser-398 (29). Phosphorylation was shown to facilitate translocation of IRF-3 into the nucleus and to stimulate transcription of both IFNA and IFNB genes in infected cells. Our data clearly show that LPS suppresses the virus-mediated phosphorylation of IRF-3 and its consequent translocation to the nucleus. Complete inhibition of nuclear transport of IRF-3 in infected cells was observed in cells that were pretreated or treated with LPS within 1 h after the infection. When the LPS treatment was initiated 4 h after virus infection, IRF-3 could be detected both in the cytoplasm and in the nucleus. These data correlate the inhibition of virus-mediated IRF-3 phosphorylation and nuclear translocation with the inhibition of IFNA and IFNB gene expression in LPS-treated cells. Further evidence of the involvement of IRF-3 in the LPS-mediated inhibition came from experiments using a cell line constitutively overexpressing IRF-3. In these cells, suppression of the virus-mediated induction of the IFNA gene by LPS was partially reverted when cells were pretreated with LPS (data not shown), and no suppression was observed when the cells were treated with LPS at 2 or 4 h after virus infection. These data indicate that LPS interferes with the function of IRF-3, which results in the impairment of IFNA gene induction by virus. In addition to IRF-3, IRF-7 was recently demonstrated to stimulate expression of IFNA genes in infected cells (33)(34)(35) and, together with IRF-3, to be a part of the transcriptional enhanceosome binding to the promoter region of the IFNB gene in infected cells (31). As with IRF-3, IRF-7 is transported from the cytoplasm to the nucleus (34,35) in infected cells. Phosphorylation of IRF-7 on two carboxyl-terminal serines was shown to be required for induction of the IFNA genes (33). Our data indicate that LPS treatment also interferes with the nuclear transport of GFP/IRF-7 in infected cells. Additional experiments have to determine whether LPS also inhibits the virus-mediated phosphorylation of IRF-7.
While the signaling pathway(s) activated in NDV- or Sendai virus-infected cells is being unraveled, it is still unclear which serine/threonine kinase(s) phosphorylates IRF-3 or IRF-7. The similarity of the phosphorylation sites present on IRF-3 and IRF-7, as well as the similar kinetics of the transport of these two factors to the nucleus in infected cells, suggests that the same kinase may phosphorylate IRF-3 and IRF-7. It was shown previously that the double-stranded RNA-dependent protein kinase (PKR) can be activated in infected cells; however, it is unlikely that this kinase phosphorylates IRF-3 or IRF-7, because homozygous deletion of the PKR gene does not abolish virus-mediated induction of the IFN genes (54). LPS induces a complex signaling pathway which includes tyrosine phosphorylation of several targets, activation of G protein (55) and PKC (56), as well as activation of Stat3 (17). LPS was also found to post-transcriptionally regulate the transcriptional transactivator C/EBP (15). Which one of these pleiotropic responses to endotoxin abrogates the virus-induced signaling and the consequent phosphorylation of IRF-3 and IRF-7 remains to be established. | 7,134.4 | 1999-06-18T00:00:00.000 | [
"Biology",
"Medicine"
] |
The time evolution of $M_{\mathrm{d}}/\dot M$ in protoplanetary discs as a way to disentangle between viscosity and MHD winds
As the classic viscous paradigm for protoplanetary disk accretion is challenged by the observational evidence of low turbulence, the alternative scenario of MHD disk winds is being explored as potentially able to reproduce the same observed features traditionally explained with viscosity. Although the two models lead to different disk properties, neither of them has been ruled out by observations - mainly due to instrumental limitations. In this work, we present a viable method to distinguish between the viscous and MHD frameworks based on the different evolution of the distribution in the disk mass ($M_{\mathrm{d}}$) - accretion rate ($\dot M$) plane of a disk population. With a synergy of analytical calculations and 1D numerical simulations, performed with the population synthesis code \texttt{Diskpop}, we find that both mechanisms predict the spread of the observed ratio $M_{\mathrm{d}}/\dot M$ in a disk population to decrease over time; however, this effect is much less pronounced in MHD-dominated populations as compared to purely viscous populations. Furthermore, we demonstrate that this difference is detectable with the current observational facilities: we show that convolving the intrinsic spread with the observational uncertainties does not affect our result, as the observed spread in the MHD case remains significantly larger than in the viscous scenario. While the most recent data available show a better agreement with the wind model, ongoing and future efforts to obtain direct gas mass measurements with ALMA and ngVLA will allow this comparison to be reassessed in the near future.
INTRODUCTION
The gaseous component of protoplanetary disks has traditionally been described as undergoing viscous accretion (Lynden-Bell & Pringle 1974; Pringle 1981). In recent years, however, growing observational evidence has been challenging this picture, as the low levels of turbulence detected in protoplanetary disks appear incompatible with the observed evolution (Pinte et al. 2016; Flaherty et al. 2018; Rosotti 2023). The best alternative to the classic viscous scenario is currently provided by MHD disk winds, originally proposed by Blandford & Payne (1982). This model has gained increasing popularity in recent years, as several studies (see Lesur 2021 for a review) have shown it to reproduce the key evolutionary features of protoplanetary disks; moreover, Tabone et al. (2022a) have developed a simple analytical parametrization, making it a valid alternative to the viscous theory.
A compelling question is which of these mechanisms, or which combination of the two, drives angular momentum transport in protoplanetary disks (Manara et al. 2023). Answering this question has proven to be a surprisingly difficult task: even though the two models are in principle well distinguishable through their characteristic theoretical predictions, the observational counterpart is lagging behind (e.g., Rosotti et al. 2019b; Ilee et al. 2022). A good example of this problem is viscous spreading, a fundamental feature of viscous evolution that causes the gaseous component of disks to expand in radius as they evolve. As MHD evolution does not show a similar behavior (Zagaria et al. 2022b), it would in principle be a good candidate for disentangling between the two predictions; however, the high sensitivity required to detect it has until now represented a limit. While Class 0 objects are widely accepted to be born small (< 60 au; Maury et al. 2019, also supported by the numerical experiments of, e.g., Lebreuilly et al. 2021) and to grow wider in the first 1-2 Myr of evolution (Najita & Bergin 2018), whether the radius of Class II disks increases or decreases with time is widely debated. Dust continuum radii are observed to be shrinking with time (Hendler et al. 2020; Zagaria et al. 2022b), as an effect of radial drift, while gas observations (Ansdell et al. 2018; Sanchis et al. 2021; Toci et al. 2021; Long et al. 2022) have covered too small a sample at too low sensitivities to draw firm conclusions. The advent of ALMA Band 1 (Carpenter et al. 2020) and the next-generation VLA (ngVLA; Tobin et al. 2018) in the near future will allow surveys of protoplanetary disks to be performed at unprecedentedly long wavelengths, which will play a crucial role in determining the leading evolutionary mechanism. At the same time, finding novel approaches to tackle this problem is crucial to obtain significant results.
In this Letter, we suggest a new method to distinguish between the two models from the population perspective: through a joint theoretical and population synthesis approach, we investigate the time evolution of disks in the disk mass -accretion rate plane, proving it to be a good approach for our goal. This work is structured as follows: in Section 2 we describe the evolutionary prescriptions that we adopt and we discuss their numerical implementation. In Section 3 we present our results and we compare them with the observations. Finally, in Section 4 we discuss the implications of these results and draw our conclusions.
Secular evolution
The simulations presented in this work have been carried out using the 1D Python population synthesis code Diskpop. For a detailed description of the code, as well as its public release, we refer to our upcoming paper (Somigliana et al. in prep.; earlier implementations of the code, its basic assumptions and features have been described in Rosotti et al. 2019a, Rosotti et al. 2019b, Toci et al. 2021, and Somigliana et al. 2022). The viscous and MHD evolution are implemented following Lynden-Bell & Pringle (1974) and Tabone et al. (2022a) respectively. In this section we briefly present both models, referring to the original papers for a deeper discussion.
In the viscous case, we solve the classic evolution equation (Lynden-Bell & Pringle 1974),
$$\frac{\partial \Sigma}{\partial t} = \frac{3}{R}\frac{\partial}{\partial R}\left[R^{1/2}\frac{\partial}{\partial R}\left(\nu \Sigma R^{1/2}\right)\right], \quad (1)$$
where, following the prescription by Shakura & Sunyaev (1973), the viscosity ν is modeled as α SS c s H, where α SS is a dimensionless parameter, c s is the sound speed, and H is the height of the disk. Furthermore, assuming the viscosity to be a power-law of the disk radius for ease of solving the equation, ν ∝ R^γ (where γ is the power-law index and R c is a scale radius), the analytical solution by Lynden-Bell & Pringle (1974) holds.
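As a concrete illustration of this solution, the sketch below evaluates the standard Lynden-Bell & Pringle self-similar surface density for a power-law viscosity. This is a minimal numerical sketch under the stated assumptions, not the Diskpop implementation, and the fiducial disk parameters are illustrative only.

```python
import numpy as np

# Minimal sketch (not Diskpop): the Lynden-Bell & Pringle (1974) self-similar
# surface density for a viscosity nu ∝ R^gamma. Fiducial numbers are
# illustrative only.

def sigma_lbp(R_au, t_yr, M0_msun, Rc_au, t_nu_yr, gamma=1.0):
    """Self-similar surface density Sigma(R, t) in Msun/au^2."""
    T = 1.0 + t_yr / t_nu_yr                 # dimensionless time
    x = (R_au / Rc_au) ** (2.0 - gamma)      # scaled radius
    norm = M0_msun * (2.0 - gamma) / (2.0 * np.pi * Rc_au**2)
    return (norm * (R_au / Rc_au) ** (-gamma)
            * T ** (-(2.5 - gamma) / (2.0 - gamma)) * np.exp(-x / T))

R = np.logspace(-1, 3, 200)                  # 0.1-1000 au
for t in (0.0, 1e6, 1e7):                    # 0, 1, and 10 Myr
    sigma = sigma_lbp(R, t, M0_msun=0.01, Rc_au=30.0, t_nu_yr=1e6)
    print(f"t = {t:.0e} yr: Sigma(1 au) = {sigma[np.argmin(abs(R - 1))]:.2e} Msun/au^2")
```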
In the MHD case instead (Tabone et al. 2022a), the evolution equation is given by
$$\frac{\partial \Sigma}{\partial t} = \frac{3}{R}\frac{\partial}{\partial R}\left[\frac{1}{R\Omega}\frac{\partial}{\partial R}\left(R^{2}\,\alpha_{\rm SS}\,\Sigma c_{\rm s}^{2}\right)\right] + \frac{3}{2R}\frac{\partial}{\partial R}\left(\frac{\alpha_{\rm DW}\,\Sigma c_{\rm s}^{2}}{\Omega}\right) - \frac{3\,\alpha_{\rm DW}\,\Sigma c_{\rm s}^{2}}{4(\lambda-1)R^{2}\Omega}, \quad (2)$$
where Ω is the Keplerian orbital frequency, λ is the magnetic lever arm parameter, and α DW is a magnetic equivalent of α SS. Equation (2) is a generalization of Equation (1) in which the gas surface density evolves not only because of the viscous torque (first term on the RHS) but also because of the effects of MHD disk winds, which extract angular momentum and induce a mass loss (second and third terms on the RHS respectively). Assuming that both λ and α DW are constant across the disk, and that α DW ∝ Σ c^−ω (where Σ c = Σ(R = R c)), Equation (2) can be solved analytically (see Tabone et al. 2022a).
Isochrones
Isochrones are defined as the curves described by a population of objects of the same age in a given plane. In the case of protoplanetary disks, isochrones in the M d −Ṁ plane have been the focus of recent studies (Lodato et al. 2017; Somigliana et al. 2020). For viscously evolving disks (Lodato et al. 2017), the isochrone reads
$$\dot M = \frac{M_{\rm d}}{2(2-\gamma)\,t}\left[1-\left(\frac{M_{\rm d}}{M_{0}}\right)^{2(2-\gamma)}\right]; \quad (3)$$
the only free parameter in Equation (3) is the initial disk mass M 0, which only sets the starting point of the isochrone. Nonetheless, at late stages (when M d ≪ M 0) all disks in a population are bound to reach the same locus on the M d −Ṁ plane: while this happens at different ages for each disk, depending on its viscous timescale t ν = R c 2 /(3(2 − γ) 2 ν c ), a fully evolved population (t → +∞) will necessarily sit on the theoretical isochrone of the corresponding age.
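A quick numerical check of this convergence can be made with the self-similar solution, for which the lifetime is t lt = M d /Ṁ = 2(2 − γ)(t ν + t), so that the spread in t lt vanishes once t ≫ t ν for all disks. The toy sketch below (illustrative assumptions; not Diskpop output) shows the spread shrinking with age.

```python
import numpy as np

# Toy check (assuming the self-similar viscous solution above): for each disk,
# t_lt = M_d / Mdot = 2 * (2 - gamma) * (t_nu + t), so a population with a
# spread of viscous timescales t_nu converges onto the Eq. (3) isochrone.

gamma = 1.0
rng = np.random.default_rng(0)
t_nu = 10 ** rng.normal(5.5, 0.5, 1000)   # viscous timescales [yr], log-normal

for t in (1e5, 1e6, 1e7):                 # population ages [yr]
    t_lt = 2.0 * (2.0 - gamma) * (t_nu + t)
    spread = np.std(np.log10(t_lt))
    print(f"t = {t:.0e} yr: spread of log10(t_lt) = {spread:.2f} dex")
```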
For MHD disks, the isochrone (Equation 4) is defined in Tabone et al. (2022a). Equation (4) depends not only on M 0, but also on the equivalent of t ν in the MHD winds case, the initial accretion timescale t acc,0, through f M,0 (determined by the disk radius; see Tabone et al. 2022a for details).
The interpretation of the isochrones in the two models is therefore different: while the viscous curves for all disks in a population lie on top of each other (except at the early stages, when M d ∼ M 0), MHD evolution never loses memory of the initial conditions. This is because, depending on whether we fix M 0 or t acc,0, we can define two types of isochrones for an MHD population. As a result, disks with a different M 0 will occupy an area of the M d −Ṁ plane, rather than sitting on a single curve, and this will be the case even for evolved populations - which means that it is not possible to use the isochrones to obtain age estimates for disk populations. Based on this argument, we investigate whether the evolution of a population of disks in the M d −Ṁ plane could carry tangible signatures of the evolutionary model.
Population synthesis
In this work we adopt a population synthesis approach, which consists of generating and evolving a synthetic population of protoplanetary disks via numerical methods. We employ the Python tool Diskpop, which we expanded from our previous work (Somigliana et al. 2022) to include MHD disk wind evolution. In this section, we present a brief outline of the workflow, referring to the upcoming code release for a detailed description of the methods and the implementation.
First, we generate N ∼ 100 stars, whose masses M ⋆ follow the Kroupa (2001) initial mass function. We then assemble a Young Stellar Object (YSO) by assigning a disk to each star: to determine the initial mass and radius of said disk, we assume that the initial disk mass and accretion rate scale as power-laws of the stellar mass (M d ∝ M ⋆^λm and Ṁ ∝ M ⋆^λacc). In our previous work (Somigliana et al. 2022) we have demonstrated how λ m,0 ∈ [0.7, 1.5] and λ acc,0 ∈ [1.2, 2.1] can reproduce the slopes of observed correlations of disk properties with stellar mass at later ages; we refer to that paper for a detailed discussion. We determine M d and Ṁ for each disk by drawing from a log-normal distribution, centered on the mean value computed via the power-law correlations and with a width (σ) of choice; R d is then derived from considerations on Ṁ (see Somigliana et al. 2022 for details). The other relevant quantities besides M ⋆, M d and R d are fixed in our model: Table 1 shows the parameters that we used in the simulations presented in this work, based on the disk evolution studies of Lodato et al. (2017) and Tabone et al. (2022a) for viscosity and MHD winds respectively. While a detailed study of the parameter space is outside of the scope of this work, we have tested two more combinations of parameters (shown by Tabone et al. 2022b to reproduce the Lupus star-forming region) and we found that our results are independent of the particular combination chosen. Once the population of YSOs is generated, it is evolved following the viscous or MHD prescription via a 1D implementation of the models described in Section 2.1. Although Diskpop allows the evolution equations to be solved numerically, in this work we have used the analytical solutions to Equations (1) and (2); it is therefore important to note that our results depend on the assumptions needed to obtain such solutions (e.g., the power-law scaling of viscosity with the disk radius).
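The sketch below illustrates this initialization step in the spirit described above. It is not the Diskpop implementation: for brevity, the Kroupa (2001) IMF is approximated here by a single power law (an assumption), and the slopes, widths, and normalizations are free illustrative choices within the ranges quoted in the text.

```python
import numpy as np

# Illustrative population generator - not the actual Diskpop code. The Kroupa
# (2001) IMF is approximated by a single power law over 0.1-1.4 Msun (an
# assumption made for brevity); lambda_m, lambda_acc, and the widths are
# example choices within the ranges quoted above.

rng = np.random.default_rng(42)
N = 100

# Inverse-CDF sampling of dN/dM ∝ M^a (Kroupa's high-mass slope; simplified).
a, m_lo, m_hi = -2.3, 0.1, 1.4
u = rng.uniform(size=N)
M_star = ((m_hi**(a + 1) - m_lo**(a + 1)) * u + m_lo**(a + 1)) ** (1.0 / (a + 1))

lambda_m, lambda_acc = 1.1, 1.6       # within [0.7, 1.5] and [1.2, 2.1]
sigma_Md, sigma_Mdot = 0.65, 0.65     # log-normal widths [dex], illustrative

# Means scale as power laws of the stellar mass; normalizations are arbitrary.
log_Md = np.log10(0.01 * M_star**lambda_m) + rng.normal(0, sigma_Md, N)
log_Mdot = np.log10(1e-8 * M_star**lambda_acc) + rng.normal(0, sigma_Mdot, N)

print(f"median Md = {10**np.median(log_Md):.3g} Msun, "
      f"median Mdot = {10**np.median(log_Mdot):.3g} Msun/yr")
```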
It is crucial to point out that disk dispersal is an intrinsic feature of MHD winds, but not of viscous evolution. Our code includes an observational effect by considering as dispersed those disks with masses lower than 10^−6 M ⊙; this simulates a dispersal effect even in the viscous scenario, which would otherwise produce disks with infinite lifetimes that do not match the observed disk fraction (see Appendix C). This problem is usually solved in the literature by adding other physical effects to the purely viscous model, such as internal photoevaporation (see e.g. Hollenbach et al. 1994, Clarke et al. 2001, Owen et al. 2011, Picogna et al. 2019, Emsenhuber et al. 2023). In order to account for the statistical effect of our sample being reduced throughout the evolution by disk dispersal, we performed 100 simulations for both setups described in Table 1 and then considered not only the median evolution of the quantities of interest, but also the interval between the 25th and 75th percentiles (see Section 3).
RESULTS
In this Section, we show the results of the evolution of viscous and MHD populations of protoplanetary disks in the M d −Ṁ plane: in particular, we consider the ratio of the two quantities (hereafter t lt, the disk lifetime; see Jones et al. 2012). We first discuss the expected evolution of the distribution of disk lifetimes from an analytical point of view (paragraph 3.1), then we confirm our theoretical results through Diskpop simulations (paragraph 3.2); finally, we compare our results with the observations (paragraph 3.3).
Disk lifetimes distribution
In the traditional viscous picture (Dullemond et al. 2006; Lodato et al. 2017), disks lie on the theoretical isochrone (Equation 3) at a given age t if their initial viscous timescale t ν,0 is much shorter than t; as evolution proceeds, more and more disks reach this stage and therefore the population converges around the corresponding isochrone. As a consequence, the spread around the isochrones decreases with time: eventually, once the population is fully self-similar (i.e., its age is larger than all of the viscous timescales), the spread will be vanishingly small and the correlation between M d and Ṁ will be perfectly linear. This trend is illustrated in the top panel of Figure 1: the solid lines show three theoretical isochrones at different ages, while the dots represent a synthetic population of 100 disks obtained with Diskpop evolving in time with the same color coding. The aforementioned convergence to the theoretical isochrone starts as early as 1 Myr, while at 10 Myr the population is almost fully evolved and closely resembles the theoretical curve. From this argument, we can expect the moments of the distribution of t lt to evolve in the viscous case as follows: (i) the mean value of t lt will converge towards the actual age of the region, (ii) the spread will decrease until t ν < t for every disk in the population, and (iii) the skewness will increase. For a more detailed discussion of the expected and observed evolution of the skewness, we refer to Appendix A.
The bottom panel of Figure 1 shows a synthetic population of disks evolved via MHD winds in the M d −Ṁ plane. As discussed in paragraph 2.2, the evolved population does not converge to the same isochrone: the large spread at all ages is such that making a prediction on the time evolution of the distribution of t lt is not as straightforward as for a viscous population. Tabone et al. (2022b) have shown that, assuming an exponential distribution of t acc,0 (which is determined by fitting the observed disk fraction), the distribution of t lt does not depend on time; however, this result is specific to the exponential distribution. If we consider a different distribution of t acc,0, that of t lt for an evolved population may depend on time: this is the case for our choice of a log-normal distribution of t acc,0, which can still reproduce both the disk and accretion fraction (see Appendix B). Figure 2 shows the time evolution of the mean (top) and width (bottom) of the distributions of t lt for the viscous (blue) and MHD (yellow) models. The lighter shades of both models include an additional observational uncertainty, σ obs, that we implemented by adding an extra spread on the disk mass and the accretion rate, of 0.1 dex and 0.45 dex respectively (as an estimate of the observational uncertainty; see Manara et al. 2023, Testi et al. 2022). As stated in Section 2, we performed 100 runs for each simulation: the solid line represents the median, while the shaded areas around it show the 25th-75th percentile intervals. As the MHD model removes disks more effectively, the sample size decreases more than in the viscous case, making the statistical fluctuations between different simulations larger: this leads the yellow lines to have broader shaded areas. Considering the mean values of the distributions, adding σ obs only slightly shifts the curves for both the viscous and MHD case, resulting in a negligible difference. The two evolutionary models differ at early stages (< 1 Myr), but soon reach a common behavior that makes them indistinguishable within the 25th-75th percentile intervals. On the other hand, the widths of the distribution (bottom panel) significantly differ from one case to the other. The viscous case without additional uncertainty (darker blue) steeply decreases, as expected from viscous theory (Lodato et al. 2017) and discussed in paragraph 3.1. This is not the case for the MHD prescription (orange): while the general trend is still decreasing, it is not as steep as the viscous one, and ultimately does not tend to zero but rather to an evolved value determined by the initial conditions.
The convolution with observational uncertainty in the viscous case (light blue) significantly shifts the curve up, as well as modifying its shape. The total width of the distribution is the root sum square of the intrinsic spread (σ int) and the observational uncertainty (σ obs), $\sigma_{\rm tot} = \sqrt{\sigma_{\rm int}^{2} + \sigma_{\rm obs}^{2}}$. The intrinsic spread σ int, given by the initial conditions, tends to zero as discussed above: therefore, we expect the final width to tend to σ obs, which is exactly what we recover. This causes the evolved population to have a significantly larger spread than that predicted by theory. On the contrary, despite still being shifted to larger values as an effect of the additional uncertainty, in the MHD case (yellow) the shape of the curve is not dramatically modified. This is because σ int is comparable to σ obs at all times, which makes this argument strongly dependent on the initial condition: as the total spread is given by $\sqrt{\sigma_{\rm int}^{2} + \sigma_{\rm obs}^{2}}$, the behavior of the MHD case will only be significantly different from the viscous case if σ int is non-negligible with respect to σ obs. In our previous work (Somigliana et al. 2022) we have shown how initial spreads of 0.65 dex and 0.52 dex for M d and R d respectively are able to reproduce the observed spreads around the correlations with the stellar masses; therefore, we set these values for the MHD simulation, while we choose a larger spread of 1 dex for the viscous case, as it can better reproduce the observed values (see paragraph 3.3).
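The quadrature argument above is easy to verify numerically. The sketch below adds Gaussian scatter of 0.1 dex (mass) and 0.45 dex (accretion rate) to a toy intrinsic distribution of log t lt and compares the measured width with the quadrature expectation; the intrinsic width of 0.3 dex is an illustrative choice.

```python
import numpy as np

# Sketch of the convolution with observational uncertainties described above.
# Uncorrelated errors on log Md and log Mdot add in quadrature on their ratio,
# so sigma_tot = sqrt(sigma_int^2 + sigma_obs^2). Toy numbers only.

rng = np.random.default_rng(1)
N = 100_000

sigma_int = 0.3                                  # intrinsic spread [dex], illustrative
log_tlt = np.log10(1e6) + rng.normal(0, sigma_int, N)

err_mass, err_mdot = 0.1, 0.45                   # uncertainties quoted in the text
log_tlt_obs = log_tlt + rng.normal(0, err_mass, N) - rng.normal(0, err_mdot, N)

sigma_obs = np.hypot(err_mass, err_mdot)
print(f"measured width: {log_tlt_obs.std():.3f} dex; "
      f"expected: {np.hypot(sigma_int, sigma_obs):.3f} dex")
```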
As mentioned in Section 2.3, the purely viscous model does not account for disk dispersal. Without exploring the whole parameter space, which is beyond the scope of this Letter, we have run a test model with photoevaporation, assuming the standard model of Owen et al. (2010), with a mass-loss rate of 10^−10 M ⊙ yr^−1 following the latest constraints (Alexander et al. 2023). The mean and the width of the distribution of t lt increase with respect to the purely viscous case, but the difference is minimal and becomes negligible when the observational uncertainty is included; therefore, our conclusions are not affected.
Figure 3. Comparison of the mean (top) and width (bottom) for the viscous (blue) and MHD (yellow) models, including observational uncertainties, with the observations (gray diamonds). The shaded areas are as in Figure 2, while the gray bars represent the interval between the 16th and 84th percentiles (top) and the uncertainty on the width (bottom). While both models overestimate the mean values (see text for details), the evolution of the width of the distribution suggests a better match with the MHD model.
Comparison with the observations
In paragraph 3.2 we have shown the viscous and MHD predictions for the time evolution of the mean and width of the distribution of t lt; in this paragraph, we compare our results with observations of different star-forming regions. We used the table compiled by Manara et al. (2023), available at http://ppvii.org/chapter/15/, for Taurus, Lupus, Chamaeleon I and Upper Sco, and the data by Testi et al. (2022) for L1688 (to limit the contamination from sub-populations with different ages in the Ophiuchus complex).
Before commenting on the comparison itself, it is important to note that our simulations do not include dust evolution, making our definition of disk mass solely based on the gas content of disks; on the other hand, the observed disk masses rely on sub-mm fluxes, tracing the dust content instead. As the bulk of disk masses is in the gaseous phase, inferring the total mass from dust observations requires one to i) constrain the dust-to-gas ratio in disks and ii) assume optically thin emission; however, as the accuracy of these assumptions is debated, the community is striving towards obtaining more reliable disk mass estimates (see Bergin et al. 2013, McClure et al. 2016; Veronesi et al. 2021 for dynamical measurements; Anderson et al. 2022, Trapman et al. 2022 for a combination of gaseous tracers). The results of the ALMA Large Programs AGE-PRO and DECO will further contribute to this goal; moreover, the advent of ALMA Band 1 and the ngVLA will make it possible to move to longer wavelengths, where dust emission is less optically thick (Tazzari et al. 2021). In light of these forthcoming developments, our work can be considered a prediction that will be interpreted to its full potential with the results of this observational effort. The data comparison presented in the following is therefore intended as a state of the art, which we anticipate revising in the near future. Figure 3 shows the result of our comparison: the mean and width of the distribution are shown in the top and bottom panels respectively, and both include the viscous and MHD (blue and yellow lines, as in Figures 2 and 5) numerical evolution. The gray diamonds represent the observed star-forming regions. Neither of the two evolutionary mechanisms reproduces the observed mean values, which are systematically lower. A potential reason for this mismatch could be an underestimation of disk masses; a difference of a factor as little as 3 in the observed masses would be sufficient to explain the discrepancy with the models - confirming the need to repeat this comparison with more accurate disk mass estimates. Moreover, Zagaria et al. (2022a) have shown how taking stellar multiplicity into account can explain the high accretors in Upper Sco; we expect this effect to shift the theoretical prediction to lower values of t lt for evolved populations. Dust growth and evolution prescriptions, which were not included in this work, are also likely to play a role, as they can better explain the observed disk mass - accretion rate correlation (Sellek et al. 2020). The width of the distribution, on the other hand, provides more interesting results. The viscous prediction manages to marginally recover the observed values at the earliest evolutionary stages, but as such values increase in time, the discrepancy with the viscous expectation grows larger and larger. This result was already anticipated by Manara et al. (2020) (see also Manara et al. 2023). It should be kept in mind that our viscous simulations have a σ int of 1 dex for both the disk mass and radius (see Table 1); as large as the intrinsic spread can be, the steeply decreasing viscous trend will always evolve the width of the distribution to σ obs. The MHD simulation instead falls within the error bars of the earliest observed star-forming regions, up until ages of ∼ 2.5 Myr.
There is an increasing discrepancy for more evolved populations, up to around 20% for Upper Sco; however, the oldest populations also represent the least complete samples, and therefore they carry a significant bias that should be kept in mind when comparing with simulations. Moreover, there are caveats to our own simulations, as in the viscous case we neglect disk dispersal mechanisms (such as internal or external photoevaporation, e.g. Malanga et al. in prep.) and only consider a detection threshold in disk masses.
DISCUSSION AND CONCLUSIONS
In this work, we have investigated how the time evolution of the distribution of a population of disks in the M d −Ṁ plane is impacted by the evolutionary model, considering the viscous and MHD prescriptions respectively. We have presented a combination of analytical considerations and numerical simulations, performed through the 1D population synthesis code Diskpop, in the case of a log-normal distribution of initial accretion timescales (which reproduces both the disk and accretion fraction). We find that, while the mean of the distribution of t lt = M d /Ṁ is not significantly impacted by the chosen model, the expected behavior of the width shows considerable differences depending on the evolutionary prescription; when including the observational biases in the form of additional uncertainty, this distinctive behavior is maintained.
Our predictions will be exploited to their full potential through a comparison with the results of the current observational effort to obtain direct estimates of disk gas masses; for the time being, we compare our evolutionary trends with the latest available observational data (based on dust observations) in different star-forming regions. We find that the purely viscous case only manages to marginally reproduce the observations at the earliest ages, while the MHD curve resembles them better. Based on these results, we suggest the analysis of these distributions as a viable method to disentangle between the viscous and MHD evolutionary models; our data comparison hints at a better agreement with the MHD model.
ACKNOWLEDGMENTS
We thank an anonymous referee for their comments that helped us improve the clarity of the manuscript.
A. TIME EVOLUTION OF THE SKEWNESS OF THE DISTRIBUTION OF t lt
The skewness of a distribution, defined as the third standardized moment, measures the asymmetry of the distribution about its mean. As we mentioned in paragraph 3.1, alongside the mean value and the width, in the viscous case we also expect the skewness of the distribution of t lt to evolve in time; in this Appendix we discuss this theoretical expectation and show the results of our numerical simulations.
The left panel of Figure 4 shows a population of viscously evolving disks (dots) at three subsequent ages, as well as the corresponding theoretical isochrones (solid lines). Full dots represent disks whose initial viscous timescale is shorter than the age of the population, which as a whole can therefore be considered evolved: from viscous theory, such disks are expected to have reached the self-similar condition and to lie on the analytical isochrone, i.e., to show a linear correlation between the disk mass and the accretion rate. On the other hand, empty dots represent not-yet-evolved disks, which lie below the theoretical isochrone. As the population evolves, more disks satisfy the t ν < t condition, as can be visualized by the increasing number of full dots in Figure 4; this implies that more disks lie on the theoretical isochrone, bringing the population on the M d −Ṁ plane closer to a line. While this causes the width of the distribution of t lt to decrease with time, the skewness on the other hand increases - as we show in the right panel of Figure 4, which represents the corresponding histograms at all ages. This skewing effect is due to the fact that younger disks, which do not lie on the isochrone yet, have a t lt longer than the actual age of the region, and therefore contribute to positively skew the distribution - while evolved disks, which make up the bulk of the population, cluster close to the mean value. Figure 5 shows the evolution of the skewness of a population of disks generated and evolved with Diskpop with the same color coding and shaded areas as Figure 1; the left panel represents the case with no observational uncertainty, where the viscous distribution (blue) gets more and more skewed as expected, growing by a factor of 2 between 0.1 and 10 Myr. On the other hand, the MHD distribution (orange) remains symmetrical within the 25th-75th percentiles for the whole evolution, resulting in a factor 3 difference from the viscous model for evolved populations. As significant as this theoretical difference is, including the observational biases (right panel) completely smooths it out: the two expected observed behaviors are indistinguishable once convolved with the additional observational uncertainties.
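The skewing mechanism can be reproduced with the same toy model used earlier: disks with t ν > t populate the long-lifetime tail, so the skewness of log t lt grows as the bulk of the population converges. The sketch below is illustrative only and not a Diskpop simulation.

```python
import numpy as np
from scipy.stats import skew

# Toy illustration of the skewness argument above (not Diskpop): for the
# self-similar viscous solution, t_lt = 2(2 - gamma)(t_nu + t). At late times
# most disks cluster near t_lt ~ 2(2 - gamma) t, while the few disks with
# t_nu > t form a positive tail that skews the distribution of log10(t_lt).

gamma = 1.0
rng = np.random.default_rng(2)
t_nu = 10 ** rng.normal(5.5, 0.5, 100_000)   # viscous timescales [yr]

for t in (1e5, 1e6, 1e7):                    # population ages [yr]
    log_tlt = np.log10(2.0 * (2.0 - gamma) * (t_nu + t))
    print(f"t = {t:.0e} yr: skewness of log10(t_lt) = {skew(log_tlt):.2f}")
```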
In conclusion, while the evolution of the skewness makes an interesting theoretical argument stemming from the different interpretation of isochrones in the two models, it does not provide a reliable method to compare viscosity and MHD from the observational point of view.
B. TIME EVOLUTION OF THE DISTRIBUTION OF t lt
As t lt depends on t acc,0 as t lt = (1 + f M)(2t acc,0 − ωt), the evolved distribution of t lt is determined by the choice of the initial distribution of t acc,0. Tabone et al. (2022b) have shown that, choosing an exponential distribution for t acc,0, the corresponding distribution of t lt is given by Equation (B1), where f M is defined in Tabone et al. (2022a) and τ = 2.5 Myr is chosen to fit the disk fraction, f D (t) = exp (−t/τ). As f D is only a normalization factor, (B1) still has an exponential shape; moreover, neither the distribution nor its mean value depends on time. On the other hand, if we pick a log-normal distribution for t acc,0, we can still reproduce both the disk and the accretion fraction (see Appendix C), but in that case the evolved distribution of t lt becomes Equation (B2), where µ and σ are the mean value and width of the initial log-normal distribution. Notice that Equation (B2) is not a log-normal in t lt; moreover, it does depend on time, as do its mean value and spread.
C. IMPACT OF INTERNAL PHOTOEVAPORATION
As mentioned in the main paper, disk dispersal is an intrinsic feature of MHD winds. These models manage to reproduce both the disk and accretion fraction, defined respectively as the fraction of young stars with infrared excess (Hernández et al. 2007) and the fraction of accreting objects (i.e., with Ṁ > 10^−11 M ⊙ yr^−1, following Fedele et al. 2010), as shown by the orange lines in Figure 6. On the other hand, purely viscous models do not account for disk dispersal. This leads to a mismatch between the predicted and observed disk and accretion fractions, represented by the blue lines in Figure 6: the disk fraction is almost constant at 1, the small decrease being due to the observational threshold that we introduced in our simulations (considering as dispersed those disks with masses lower than 10^−6 M ⊙, see Section 2.3), while the accretion fraction does decrease, but not enough to match the observed values. This problem is usually overcome in the literature by including internal photoevaporation, a two-timescale process that introduces a disk dispersal mechanism, allowing the observations to be reproduced as shown by the purple lines in Figure 6. We ran the test simulation presented in this Appendix using the standard photoevaporative model of Owen et al. (2012), with a mass-loss rate of 10^−10 M ⊙ yr^−1, consistent with the latest constraints (Alexander et al. 2023).
Figure 6. The dashed blue lines show the exponential fits to the data. Following the original paper, we define the accretion fraction as the fraction of sources with accretion rate higher than 10^−11 M ⊙/yr. Our choice of a log-normal distribution of initial accretion timescales for the MHD model reproduces both the disk and accretion fraction, as does the exponential distribution chosen by Tabone et al. (2022b). The viscous model does not reproduce either fraction due to the lack of a disk dispersal mechanism, while including internal photoevaporation allows the observed behavior to be recovered.
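For concreteness, the sketch below computes these two diagnostics from a toy population, using the thresholds quoted in the text (a disk counts as dispersed below 10^−6 M ⊙ and as accreting above 10^−11 M ⊙ yr^−1). The population itself is synthetic and illustrative.

```python
import numpy as np

# Sketch of the disk- and accretion-fraction diagnostics described above,
# using the thresholds quoted in the text. Toy population only.

def fractions(Md, Mdot):
    detected = Md > 1e-6                   # observational mass threshold [Msun]
    accreting = Mdot > 1e-11               # Fedele et al. (2010) criterion [Msun/yr]
    return detected.mean(), accreting.mean()

rng = np.random.default_rng(3)
Md = 10 ** rng.normal(-3.5, 1.5, 1000)     # Msun, illustrative
Mdot = 10 ** rng.normal(-10.0, 1.0, 1000)  # Msun/yr, illustrative

f_disk, f_acc = fractions(Md, Mdot)
print(f"disk fraction = {f_disk:.2f}, accretion fraction = {f_acc:.2f}")
# For reference, the exponential disk fraction used above is
# f_D(t) = exp(-t / tau) with tau = 2.5 Myr.
```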
Once internal photoevaporation kicks in, it lowers the accretion rates for a given disk mass, introducing a spread in the M d −Ṁ plane (Somigliana et al. 2020); in principle, it could therefore affect the conclusions of this work. However, we have tested that the mean and width of the t lt distribution in the presence of photoevaporation do not significantly deviate from the purely viscous prediction; without the observational spread the photoevaporative case lies between the viscous and MHD models, and it becomes indistinguishable from the viscous case when the observational spread is included. | 7,652.4 | 2023-08-29T00:00:00.000 | [
"Physics"
] |
Socio-Economic Inequalities in the Double Burden of Malnutrition among under-Five Children: Evidence from 10 Selected Sub-Saharan African Countries
Background: Africa is unlikely to end hunger and all forms of malnutrition by 2030 due to public health problems such as the double burden of malnutrition (DBM). Thus, the aim of this study is to determine the prevalence of DBM and the degree of socio-economic inequality in the double burden of malnutrition among children under 5 years in sub-Saharan Africa. Methods: This study used multi-country data collected by the Demographic and Health Surveys (DHS) Program. Data for this analysis were drawn from the DHS women’s questionnaire focusing on children under 5 years. The outcome variable for this study was the double burden of malnutrition (DBM). This variable was computed from four indicators: stunting, wasting, underweight and overweight. Inequalities in DBM among children under 5 years were measured using concentration indices (CI). Results: The total number of children included in this analysis was 55,285. DBM was highest in Burundi (26.74%) and lowest in Senegal (8.80%). The computed adjusted Erreygers Concentration Indices showed pro-poor socio-economic child health inequalities relative to the double burden of malnutrition. The DBM pro-poor inequalities were most intense in Burundi (−0.2206) and least intense in Zimbabwe (−0.0294). Conclusions: This study has shown that across SSA, among under-five children, the poor suffer more from the DBM relative to the wealthy. If we are not to leave any child behind, we must address these socio-economic inequalities in sub-Saharan Africa.
Introduction
According to the World Food Programme (WFP), malnutrition is "a state in which the physical function of an individual is impaired to the point where he or she can no longer maintain adequate bodily performance processes such as growth, pregnancy, lactation, physical work and resisting and recovering from disease" [1]. Malnutrition can also be understood as bad nutrition in either direction, encompassing both over-nutrition and under-nutrition. Hunger and protein-energy malnutrition (PEM) have led to high mortality rates in children and mothers, contributing to poor growth and a rise in opportunistic infections. The underlying causes of DBM vary by sub-region in sub-Saharan Africa. For example, one study found cultural perceptions such as the view that a heavier body size in females may signify wealth, a good and stable marital home, and exceptional achievement [33]. However, these perceptions differ across sub-regions, as some regions expect women to work hard and have high levels of physical activity. Another study attributed obesity and increased weight to the rise in consumption of cheap, processed food at the expense of fresh, non-processed foods from subsistence farming [34]. The rise in the commercialization of food production is correlated with the decrease in subsistence farming. This has led to household diets of low nutritional value that are high in sugar, fat and energy-dense foods, which lead to obesity [35]. This article uses a population-based study of ten African countries (Burundi, Ethiopia, Guinea, Malawi, Mali, Senegal, Sierra Leone, South Africa, Zambia, and Zimbabwe) to understand the socio-economic inequalities in the double burden of malnutrition among under-5 children on the continent. Thus, the aim of this study is to determine the prevalence of DBM and the degree of socio-economic inequality in the double burden of malnutrition among children under 5 years in sub-Saharan Africa.
Data Source
This study used multi-country data collected by the Demographic and Health Surveys (DHS) Program.
Stunting
Children whose height-for-age Z-score was below minus three standard deviations (−3 SD) were considered stunted, and those at or above −3 SD were considered not stunted. Stunting was coded as a binary variable assigned values of zero and one: those not stunted (at or above −3 SD) were coded as "0", and those stunted (below −3 SD) were coded as "1".
Wasting
Children whose weight-for-height Z-score was below minus three standard deviations (−3 SD) from the median of the reference population were considered thin (wasted), and those with a Z-score at or above −3 SD were considered not wasted. Wasting was coded as a binary variable assigned values of zero and one: those not wasted (at or above −3 SD) were coded as "0", and those wasted (below −3 SD) were coded as "1".
Underweight
Children whose weight-for-age Z-score was below minus three standard deviations (−3 SD) from the median of the reference population were classified as underweight, and those at or above −3 SD were considered not underweight. Underweight was coded as a binary variable assigned values of zero and one: those not underweight (at or above −3 SD) were coded as "0", and those underweight (below −3 SD) were coded as "1".
Overweight
Children whose weight-for-height Z-score was more than two standard deviations (+2 SD) above the median of the reference population were considered overweight. Overweight was coded as a binary variable assigned values of zero and one: those not overweight (at or below +2 SD) were coded as "0", and those overweight (above +2 SD) were coded as "1".
Double Burden of Malnutrition (DBM)
The outcome variable for this study is the double burden of malnutrition (DBM). This variable was computed from four indicators: stunting, wasting, underweight and overweight. The first step was calculating the row total of the stunting, wasting and underweight indicators. Children who had a row total greater than 1 and were also overweight were defined as having experienced a double burden of malnutrition. The outcome variable, double burden of malnutrition (DBM), was then recorded as a binary variable where a value of "1" was assigned if DBM was present and a value of "0" if DBM was absent.
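A minimal pandas sketch of this coding is shown below; the z-score column names and the toy rows are invented for illustration, while the cutoffs (−3 SD for stunting, wasting and underweight; +2 SD for overweight) and the "row total greater than 1 plus overweight" rule follow the description above.

```python
import pandas as pd

# Hypothetical z-score columns: haz = height-for-age, whz = weight-for-height,
# waz = weight-for-age. The four toy children are invented for illustration.
df = pd.DataFrame({
    "haz": [-3.5, -1.0, -3.2, 0.5],
    "whz": [-3.1,  2.5,  2.6, 0.3],
    "waz": [-3.4, -0.2, -3.1, 0.1],
})

# Indicator coding exactly as described in the Methods above.
df["stunted"]     = (df["haz"] < -3).astype(int)   # below -3 SD -> 1
df["wasted"]      = (df["whz"] < -3).astype(int)
df["underweight"] = (df["waz"] < -3).astype(int)
df["overweight"]  = (df["whz"] > 2).astype(int)    # above +2 SD -> 1

# Row total of the three undernutrition indicators; DBM requires a row total
# strictly greater than 1 together with overweight, as stated in the text.
row_total = df[["stunted", "wasted", "underweight"]].sum(axis=1)
df["dbm"] = ((row_total > 1) & (df["overweight"] == 1)).astype(int)
print(df)  # third child: stunted + underweight + overweight -> dbm = 1
```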
Socio-Economic Status (SES)
Socio-economic status was depicted by the household wealth index, which measures a household's cumulative standard of living using indicators such as ownership of selected assets, type of housing, sanitation services, and type of water access, among others, combined using Principal Component Analysis (PCA) [32]. The household wealth index is considered a more reliable measure of wealth than income or consumption because it reflects a household's long-term standard of living, which makes it possible to identify problems particular to the poor members of society, such as unequal access to health care and to recommended nutrition [33]. For this study, wealth was grouped into five quintiles: poorest (Q1), poorer (Q2), middle (Q3), richer (Q4) and richest (Q5).
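As an illustration of the PCA construction, the sketch below builds a toy wealth score from invented binary asset indicators and splits it into the five quintiles used here; the real DHS index draws on a much larger set of household variables and a country-specific indicator list.

```python
import numpy as np
from sklearn.decomposition import PCA

# Invented binary asset indicators (1 = household owns the asset/service).
rng = np.random.default_rng(2)
assets = rng.binomial(1, 0.5, size=(1_000, 6)).astype(float)

# First principal component of the standardized indicators as the wealth score.
standardized = (assets - assets.mean(axis=0)) / assets.std(axis=0)
score = PCA(n_components=1).fit_transform(standardized).ravel()

# Quintiles Q1 (poorest) ... Q5 (richest), as in the text.
quintile = np.digitize(score, np.quantile(score, [0.2, 0.4, 0.6, 0.8])) + 1
print(np.bincount(quintile)[1:])  # roughly 200 households per quintile
```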
Statistical Analysis
The study analyzed the data using STATA 17.1 statistical software. Univariate and bivariate analyses were performed to describe the sample and patterns of DBM. Inequalities in DBM among children under 5 years were measured using concentration indices (CI). The concentration index approach is a standard measure for assessing health inequalities. Concentration curves show whether socio-economic health inequalities exist, but they do not by themselves estimate the magnitude of those inequalities; the index provides such a summary measure [36]. This paper used the Erreygers normalized concentration indices [37] to measure the socio-economic inequalities among children in DBM: wasting, stunting, underweight and overweight. Among the many indices that could have been used, we adopted the normalized Erreygers indices because they are corrected for bound issues and hence give more robust standard errors. The concentration index ranges from −1 to +1 and estimates the extent to which a health outcome (DBM) is concentrated among the rich or the poor. A negative concentration index value denotes a health outcome (DBM) concentrated among the poor, whereas a positive value implies that the health outcome (DBM) is concentrated among the rich [36]. A concentration index of zero implies that there is no socio-economic-related inequality, and a larger absolute value of the concentration index depicts a greater concentration of inequality [36,38].
The concentration index can be computed by making use of the covariance, as shown below:

CI = (2/ȳ) × COV(y_i, R_i)

where y_i is the health variable, ȳ is the mean of y_i, R_i is the fractional rank of the ith individual, and COV denotes the covariance.
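As a worked illustration of this formula, the sketch below computes the covariance-based concentration index on simulated data and adds the Erreygers normalization used in this paper; for an outcome bounded in [0, 1] the Erreygers index is 4·mean(y)·CI. The data and variable names are invented.

```python
import numpy as np

def concentration_index(y, ses):
    """CI = (2 / mean(y)) * cov(y_i, R_i), with R_i the fractional SES rank."""
    y = np.asarray(y, dtype=float)[np.argsort(ses)]   # order children poorest -> richest
    r = (np.arange(1, len(y) + 1) - 0.5) / len(y)     # fractional rank R_i
    return 2.0 / y.mean() * np.cov(y, r, bias=True)[0, 1]

def erreygers_index(y, ses):
    """Erreygers-normalized CI for an outcome bounded in [0, 1], e.g. binary DBM."""
    return 4.0 * np.mean(y) * concentration_index(y, ses)

rng = np.random.default_rng(1)
wealth = rng.normal(size=2_000)                        # SES score (e.g. PCA wealth index)
dbm = rng.binomial(1, 0.25 - 0.05 * np.tanh(wealth))   # DBM more likely among the poor
print(concentration_index(dbm, wealth), erreygers_index(dbm, wealth))  # both < 0: pro-poor
```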
Socio-Economic Inequalities
Across all countries, stunting disproportionately affected the poorest, as stunting was more prevalent in the poorest quintile (Q1) (Table 3). The concentration indices for stunting were all negative and statistically significant at a 95% confidence interval in all countries, ranging from −0.21 in Burundi to −0.03 in Zimbabwe (Table 3). Burundi, Ethiopia, Guinea, Mali, Senegal, Sierra Leone, and Zimbabwe all reported pro-poor wasting child health inequalities, with concentration indices statistically significant at a 95% confidence interval (Table 4). While Malawi, South Africa and Zambia reported pro-rich inequalities, their concentration indices were not statistically significant at a 95% confidence interval (Table 4).
Underweight children were also more prevalent in the poorest quintile (Q1) for most of the countries, except Zimbabwe (31.43%; Q2) and South Africa (35.71%; Q3): Burundi (38.84%), Ethiopia (51.52%), Guinea (30.69%), Malawi (34.11%), Mali (28.94%), Senegal (52.50%), Sierra Leone (33.33%), and Zambia (38.57%) (Table 5). The underweight concentration indices were statistically significant at a 95% confidence interval across countries; except for South Africa, which had pro-rich inequalities, all countries reported negative indices, indicating pro-poor underweight inequalities (Table 5). Conversely, overweight disproportionately affected children from rich households in many of the countries (Burundi, Ethiopia, Malawi, Mali, Senegal, Sierra Leone, Zambia and Zimbabwe); however, only Burundi, Mali, Senegal and Zimbabwe had statistically significant concentration indices at a 95% confidence interval (Table 6). While Guinea and South Africa reported pro-poor inequalities, their concentration indices were not statistically significant at a 95% confidence interval (Table 6). Across all countries, DBM was most prevalent among children in the poorest quintile (Q1) except in Zimbabwe, where DBM was most prevalent among children from the richer quintile (Q4) (Table 7). The computed adjusted Erreygers concentration indices were all negative and statistically significant at a 95% confidence interval, showing pro-poor socio-economic child health inequalities relative to the double burden of malnutrition across all countries (Tables 7 and 8). The DBM pro-poor inequalities were most intense in Burundi (−0.2206) and least intense in Zimbabwe (−0.0294) (Table 8).
Discussion
SDG Target 2.2 aims to "End all forms of malnutrition, including achieving, by 2025, the internationally agreed targets on stunting and wasting in children under 5 years of age". In this regard, our study contributes in several ways to the debate on DBM among children in African countries. First, our results provide evidence on the prevalence of individual nutritional statuses (underweight, wasting, stunting, and overweight) among children under 5 years of age at the national level and document the existence of socio-economic inequalities. The quality of evidence for our approach is supported by representative DHS data from 10 African countries. Because the analysis draws on the most recent available datasets, from 2015 to 2019, the geographic and social differences in DBM among under-five children in Africa, and the extent of the associated economic inequality, can be characterized comprehensively.
Sub-Saharan Africa has been described as characterized by the double burden of malnutrition (DBM): high levels of undernutrition alongside a growing burden of overweight/obesity and diet-related non-communicable diseases (NCDs) [39]. Recent research shows that despite a high prevalence of hunger and malnutrition, overweight and obesity epidemics are increasing in Africa [40]. This is still the case, as our study findings showed a significantly high prevalence of DBM, with Senegal reporting the lowest DBM prevalence of about 9% and Burundi the highest of about 27%. A recent study raised the concern that most countries will not meet the global nutrition targets by 2030 [39] and that Africa is unlikely to reach the Sustainable Development Goals and end hunger and all forms of malnutrition by 2030. The current study findings suggest that the concern raised in earlier papers is becoming a sad reality, as this study reported a significantly high prevalence of stunting (Burundi; 24%), underweight (Burundi; 8%) and overweight (South Africa; 13%). Earlier research reported this form of malnutrition as commonly observed in developed and affluent communities, but as early as 1996 it was also noticed in low-to-middle-income countries (LMICs) [41-43].
A recent study reported the prevalence of overweight and obesity among under-five children in South Africa to be almost double that of Malawi [44]. Our results showed similar findings: South Africa had the highest overweight prevalence, 13%, compared to Senegal, which had about 2%. However, South Africa had only the fifth highest DBM prevalence (16%), while Burundi had the highest DBM prevalence (27%). The high DBM prevalence of Burundi could be attributed to earlier trends of stunting and underweight among children, which have shown little to no change since the 1980s [43].
Contrary to what was observed in previous studies, which reported relatively well-off socio-economic groups to be at greater risk of the double burden of malnutrition [44-46], our study showed that DBM was more prevalent among children from the poorest households (Q1). This may be because of a shift in the epidemiology of NCDs, which were earlier perceived as diseases of developed countries but are currently more prevalent in developing countries. Furthermore, DBM is a global problem: it has been argued that it occurs when the prevalence of overweight and obesity in LMICs increases rapidly while, at the same time, the prevalence of undernutrition in these countries declines only slowly [47]. This was true for our study, as Burundi had the highest prevalence of stunting (24%) and underweight (8%) and, as a result, the highest DBM prevalence (27%).
Obesity in children under 5 years of age is still overlooked in the current literature. Our study provided evidence of the increasing burden of obesity in this age group, which was found primarily in households of high socio-economic status. Previous studies reported similar findings, arguing that belonging to a wealthier group, as well as community-level poverty, is a strong risk factor for the double burden of malnutrition [48-50]. Considering that the findings from this study showed that DBM is intertwined with underweight, stunting, wasting and overweight, addressing the social inequalities that shape the double burden of child malnutrition in the African region requires strategies that address why certain sub-populations are more exposed to these nutritional problems, so as to avoid strategies that solve one nutritional problem while exacerbating another.
There is a need to increase the engagement of various stakeholders to mitigate the double burden of malnutrition in sub-Saharan Africa. The active collaboration and participation of representatives from local and international non-governmental organizations, major corporations, and government institutions across sectors such as agriculture, finance, environment, education, communications, health care and nutrition could stimulate dialogue around this menace and yield solutions and recommendations. Furthermore, progress toward ending hunger and malnutrition by 2030 requires intensified efforts to reduce undernutrition and focused action on reducing obesity and diet-related non-communicable diseases. Key strengths of this study lie in its compilation of representative and generalizable DHS datasets from 10 countries. These are typically high-quality, high-response datasets from DHS surveys conducted with robust, well-documented methodologies. The DHS surveys use standardized survey modules and implementations that allow comparisons between countries. However, these are cross-sectional datasets, which limited our ability to assign causality.
Conclusions
In summary, there is a nutrition transition underway in Africa, with an increasing prevalence of overweight and obesity among children under five, making optimal child nutrition a key factor in achieving global health goals. The inequality in DBM was consistently pro-poor across the ten SSA countries, such that the lower socio-economic groups were more likely to experience DBM and bear a higher burden of the problem than the higher socio-economic groups. In addition, DBM was pro-poor even though some of the individual nutritional indicators were pro-rich. Therefore, the double burden of malnutrition in low- and middle-income countries poses a major global public health problem that could hinder the achievement of the SDGs if not properly addressed. Furthermore, this study has shown the existence of pro-poor inequalities relative to the double burden of malnutrition among under-five children. Therefore, if we are not to leave any child behind, we must address these socio-economic inequalities in sub-Saharan Africa.
Author Contributions: O.A.A. and A.T.L. designed the study, wrote the paper, analyzed data, reviewed the paper, and submitted it for publication. K.N. and E.N. wrote the background section and reviewed all drafts in preparation for publication; D.O. wrote the methods section and reviewed all drafts in preparation for publication; A.S. wrote the discussion section and reviewed all drafts in preparation for publication. P.C. and O.A.S. wrote the paper and reviewed all drafts in preparation for publication. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding; however, the International Journal of Environmental Research and Public Health (IJERPH) supported the study by waiving the article processing charges (APC) for this manuscript.
Institutional Review Board Statement:
This study used secondary analysis based on publicly available DHS datasets. Since the data used in this study were secondary, no ethics approval was sought.
Informed Consent Statement:
The parent study DHS surveys sought consent from the participants.

Data Availability Statement: All data sets are publicly available on the Demographic Health Survey website at: https://dhsprogram.com/what-we-do/survey/survey-display-406.cfm (accessed on 10 January 2023) and can be accessed upon request from the Demographic Health Survey team. | 4,169.2 | 2023-04-01T00:00:00.000 | [
"Medicine",
"Economics"
] |
Topological and electrical control of cardiac differentiation and assembly
Tissue engineering has developed many paradigms and techniques on how to best integrate cells and extracellular matrix to create in vitro structures that replicate native tissue. The strategy best suited for building these constructs depends mainly on the target cells, tissues, and organ of interest, and how readily their respective niches can be recapitulated in vitro with available technologies. In this review we examine engineered heart tissue and two techniques that can be used to induce tissue morphogenesis in artificial niches in vitro: engineered surface topology and electrical control of the system. For both the differentiation of stem cells into heart cells and further assembly of these cells into engineered tissues, these two techniques are effective in inducing in vivo like structure and function. Biophysical modulation through the control of topography and manipulation of the electrical microenvironment has been shown to have effects on cell growth and differentiation, expression of mature cardiac-related proteins and genes, cell alignment via cytoskeletal organization, and electrical and contractile properties. Lastly, we discuss the evolution and potential of these techniques, and bridges to regenerative therapies.
The adult mammalian heart is composed of a complex and well-integrated mosaic of anatomical modules. The contractile muscle (atria and ventricles) positioned between the supporting epi- and endocardium, the conduction system (pacemaker nodes and Purkinje fiber network), and the highly dense vasculature (endothelial and smooth muscle cells) constitute the key elements of the cardiac system, which is the engine for the larger cardiovascular system. During development, complex tissues are formed as pluripotent stem cells differentiate into increasingly specialized cell types. A primary goal of tissue engineering is to recapitulate the conditions occurring during in vivo development in an in vitro setting. To do this effectively, the complete cellular microenvironment (auto-, para-, and juxtacrine signaling, extracellular matrix (ECM) interactions, and electromechanical stimuli) must be quantitatively measured, understood, engineered, and recapitulated experimentally. In the heart, the many cell types form specific integrated structures that contribute to their individual cell and overall organ function. To engineer these cells in the appropriate positions and to temporally give them the correct biochemical, physical, and electrical cues is the overarching goal.
A functional engineered heart tissue requires the following four criteria: 1) aligned syncytium of cardiomyocytes (and stromal cells) with synchronous electromechanical coupling of adequate contractile force; 2) supportive ECM and scaffolding structure to mimic the mechanical and biochemical properties of native tissue; 3) functional microvasculature to provide adequate nutrient and oxygen delivery within a tissue of clinically relevant thickness; and 4) suitable degree of maturation for either successful implantation and host tissue integration or an appropriate in vitro model mimicking adult heart tissue.
Two techniques that have been used to manipulate cells progressing through cardiac differentiation and functional assembly into engineered heart tissue with positive functional effects are 1) control of extracellular surface topology and geometry, and 2) electrical control by stimulation and the use of conductive biomaterials.
The role of extracellular geometry and electrical properties in cells and tissue
The response of cells to changes in microenvironmental signals is enabled by biochemical pathways. A change in substrate stiffness, surface topography, tugging force, or the molecular composition of the surrounding ECM is seen by the cell as a biochemical signal via mechanotransduction-mediated ligand-receptor interactions. Similarly, a change in electrical charge density on either side of a cell membrane due to external stimulation, or a sudden influx of extracellular ions, is also a biochemical signal that the cell can understand. Many studies suggest that these types of signals are just as important as soluble factor-based autocrine and paracrine signaling in influencing cell fate and state [7,16-18].
The Chen and Discher groups have shown the importance of surface topography and substrate stiffness in directing mesenchymal stem cell fate [19,20]. The first study, by McBeath and colleagues [20], determined the significance of surface topography by micropatterning cells onto islands of ECM and observing the resulting effects on cell morphology. A connection was then made between cell morphology (round on small micropatterned islands versus spread out and flat on larger islands) and lineage fate. Specifically, spread out and flat cells under cytoskeletal tension were thought to mediate RhoA expression, which if expressed constitutively directed the mesenchymal stem cells into osteoblasts, and if not expressed, as in the non-spread and round cells, directed them into adipocytes [20]. Engler and colleagues [19] studied the effects of substrate stiffness on directing mesenchymal stem cell fate and found that cells cultured on ECM that mimicked native tissue elasticities were directed to that tissue type. For example, mesenchymal stem cells cultured on brain-like ECM differentiated primarily into neurogenic cells, and cells cultured on muscle-like ECM differentiated into myogenic cells.
During heart development, certain key genes have been shown to be critical for normal cell growth and differentiation. One such gene, Wnt11, has been shown to be necessary for patterning an electrical gradient in the zebrafish heart [21]. Interestingly, animals with this gene knocked down showed a uniform conduction velocity along the surface of the heart; in normal hearts, however, there were gradual changes in conduction velocity depending on the local area of propagation. The researchers excluded the possibility that this gradient of electrical coupling was due to cellular excitability, connexin localization, tissue geometry or mechanical inputs. Instead, they showed that Wnt11 expression was solely responsible and that it acted via expression of L-type calcium channels, which affected transmembrane calcium ion conductance in the conducting cardiomyocytes [21]. It is important then to note from this study that a linear electrical stimulus and conduction pattern in heart tissue may not be functionally suitable; it is just as important to quantify the spatial distribution and temporal activity of the ion channels that mediate electrical propagation and directly lead to concerted contractile function.
Structuring engineered heart tissue using topographical cues
It is well known that the architecture of the extracellular environment influences cell behavior at the nano-, micro- and macroscale with respect to the expression of cardiac-specific genes and proteins, cytoskeletal structure, morphology, and functionality. The main complexity involved in engineering functional myocardium is related to establishing appropriate structure-function correlation over different scales. Assembly of appropriate structure is required to achieve a desired function, which is characterized by the development of active force (for example, for rat heart, 20 to 50 mN/mm²) and impulse propagation (for example, for rat heart, 20 to 25 cm/s) [22], both of which are considered critical functional measurements. At the macroscale, native heart contains elongated myofibers aligned in parallel; this structure enables coordinated contraction of the ventricle and expulsion of blood. At the microscale, adult cardiomyocytes are rod shaped and contain registries of sarcomeres that enable cell contraction in response to electrical signals. At the nanoscale, each sarcomere contains precisely organized sarcomeric proteins (for example, sarcomeric α-actin/α-actinin and myosin heavy chain) that enable coordinated contractions of sarcomeres. By simply manipulating the topography of the surface to which cells adhere, repeated reports have indicated structural and functional effects pertaining to heart cells.
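As a simple illustration of how the second of these functional measurements can be extracted, the sketch below estimates conduction velocity from activation times recorded at equally spaced sites along the propagation direction; the electrode pitch and activation times are invented (chosen to land near the rat-heart range quoted above), and this generic construction is not the method of any study cited here.

```python
import numpy as np

# Hypothetical line of 8 recording sites, 1 mm apart, with activation times.
spacing_cm = 0.1
positions = np.arange(8) * spacing_cm                                  # cm
activation_ms = np.array([0.0, 4.2, 8.1, 12.3, 16.0, 20.4, 24.1, 28.2])

# The slope of a least-squares fit of position vs. time gives the velocity
# and smooths measurement jitter at individual sites.
v_cm_per_ms = np.polyfit(activation_ms, positions, 1)[0]
print(f"conduction velocity ~ {v_cm_per_ms * 1000:.0f} cm/s")  # ~25 cm/s, cf. rat heart
```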
Kim and colleagues [23] constructed a polyethylene glycol hydrogel substratum with anisotropic nanoscale features to mimic the native myocardial ECM. Although the topographic feature sizes in this study (nanoscale) were much smaller than those in previous studies (microscale), the cells still aligned along the direction of the presented topographic cue, showing a nanotopographic cell-substratum interaction for the first time. Distinguished from previous studies on the microscale [24], in which topographical cues were on the order of cell width, enabling the cells to be oriented by confinement, this study showed a nanotopographic cell-substratum interaction mimicking nanoscale cell-ECM interaction in vivo, which can also lead to cardiomyocyte orientation. There were no differences in surface treatment amongst the different groups, nor on the grooves versus the ridges of the engineered substratum, and as a result, cells were able to freely spread and adhere over several ridges. Analysis revealed that this alignment was due to the organization of focal adhesion proteins and the cortical cytoskeleton. Interestingly, the dimension of the grooves had an important effect on the cell-substratum interaction: when the grooves were too narrow (400 nm in this study), the cell membrane was unable to penetrate deep into the bottom of the grooves; whereas when the grooves were sufficiently wide (800 nm in this study), the cell membrane penetrated deep enough to fill the grooves completely, resulting in a more extensive cell-substratum adhesion. As a result, the cells on the 800 nm-wide-patterned substratum experienced stronger contraction-mediated stress, showed an increase in connexin-43 expression and an increase in conduction velocity of action potentials.
In an early study, Feinberg and colleagues [25] generated two-dimensional muscular thin films by seeding neonatal rat ventricular cardiomyocytes on a polydimethylsiloxane membrane that could be detached from a thermosensitive poly(N-isopropylacrylamide) substrate. Once detached, the muscular thin film spontaneously adopted a three-dimensional conformation determined by its film properties and the alignment of the cardiomyocytes, including a continuous anisotropic film or an array of discrete muscle fibers [25]. By careful tailoring of the cell alignment pattern, thin-film shape and electrical-stimulation protocol, these cell-covered sheets could be designed to perform tasks such as gripping, pumping, walking and swimming and could generate forces as high as 4 mN per mm².
High-resolution diffusion tensor magnetic resonance imaging (DTMRI) and microfabrication were combined by Badie and colleagues [26,27] to fabricate cell monolayers that replicate realistic cross-sections of native cardiac tissue. In-plane cardiac fiber directions in native mouse ventricle were first measured by DTMRI and then projected onto two-dimensional pixels to fabricate photomasks. The photomasks were then used to generate polydimethylsiloxane stamps via soft lithography, and to pattern fibronectin on coverslips to guide the local alignment of cultured cardiomyocytes, ultimately yielding a monolayer with replicated cell orientation. This novel method provides an improved platform to study intramural structure-function relationships, with one of their recent studies focused on the incidence and spatiotemporal characteristics of conduction block [28].
Takahashi and colleagues [29] have built anisotropic cell sheets by patterning hydrophilic (PIPAAm-b-PAcMo) domains onto thermosensitive (PIPAAm) domains in a stripe pattern. During cultivation, normal human dermal fibroblasts were aligned along the stripe patterns and showed physical and biological properties different from those of isotropic cell sheets: the anisotropic cell sheets showed increased shrinking rates parallel to cell alignment due to the collective orientation of contractile actin fibers. Moreover, the secretion of vascular endothelial growth factor by aligned fibroblasts was increased significantly and the collagen deposited onto fibroblast sheets was anisotropic. This technology, together with the cell sheet stacking technique [30], could generate complex anisotropic three-dimensional tissue in vitro.
With a well-developed cell entrapment method, Tiburcy and colleagues [31] generated three-dimensional engineered heart tissue (EHT) from neonatal rat cardiomyocytes and observed terminal differentiation and tissue-like cardiomyocyte maturation, supported by similar morphological and molecular features of EHT- and postnatal heart-derived cardiomyocytes. They also showed that EHT development had distinct phases similar to cardiomyocyte maturation, including 1) a consolidation phase with high levels of apoptosis and ECM degradation, and 2) a maturation phase with myocyte binucleation, rod-shaped cardiomyocyte formation, a shift from fetal-skeletal to adult-cardiac actin transcript expression, and ECM build-up.
Engelmayr and colleagues [32] created an accordion-like scaffold using laser boring of a 250 μm thick poly(glycerol sebacate) layer. The scaffolds were pretreated with cardiac fibroblasts by rotating culture, followed by seeding of enriched cardiomyocytes under static culture. At the end of cultivation, the authors obtained contractile cardiac grafts with heart cells aligned along the preferred direction and mechanical properties closely resembling those of a native rat right ventricle.
There were interesting findings in a study by Madden and colleagues [33] in which a bimodal scaffold architecture was developed that provided parallel channels and interconnected porous networks at the same time. The parallel channels were designed to develop cardiomyocyte muscle bundles in vitro, while the surrounding sphere-templated porous network was intended to improve diffusive mass transfer. The scaffold was first seeded with primary chicken embryonic-derived cardiomyocytes (approximately 20 to 25% cardiomyocyte purity) by centrifuging cells into the parallel channels. During cultivation, the proliferation of non-myocytes within the porous network and around the scaffold edge decreased the supply of oxygen and nutrients to cardiomyocytes, which principally remained in the channels. Therefore, the viability of cardiomyocytes was limited to within approximately 150 μm of the construct surface. However, when the scaffold was seeded with human embryonic stem cell-derived cardiomyocytes (10 to 65% cardiomyocytes), non-myocytes declined over a 5-day cultivation period, resulting in predominantly cardiomyocytes (approximately 95% β-myosin heavy chain-positive) in the cell population and porous channel walls free of cells. Because of the improved mass transfer, cell survival was increased up to 300 μm into the scaffold. The mechanism responsible for the decrease in the non-myocyte fraction within this scaffold is not entirely clear; however, it is likely related to the unique three-dimensional structure.
Understanding the mechanisms associated with topology-based signaling in two dimensions will certainly have implications for three-dimensional tissue engineering. Currently, however, there is a lack of established technologies that permit three-dimensional topological patterning inside three-dimensional matrices such as hydrogels. It is clear that cells are affected by topology, but to preserve distinct topologies in engineered three-dimensional substrates containing embedded cells that remain viable requires sophisticated technologies such as three-dimensional printing capabilities and hydrogel post-polymerization techniques, both of which need to occur at high resolution in the nanometre range. Therefore, current two-dimensional studies help determine favorable geometries of topology that may transfer well into three-dimensional systems once appropriate technologies are developed. Additionally, these studies can provide a strong basis for computational models designed to simulate three-dimensional tissue topographies.
Electrical control of engineered heart tissue
During embryo development, cells are exposed not only to gradients of soluble factors but also to endogenous electrical fields that may determine the emergence of spatial patterns and aid in tissue morphogenesis [34]. Exogenously applied electrical stimulation has been shown to also influence cell behavior [35]. In the cardiac development context, electrical field stimulation has been shown to affect the differentiation of mouse embryonic stem cells in vitro [36]. In the study by Sauer and colleagues [36], a single direct current field pulse was applied to 4-day-old embryoid bodies, and the authors found significant effects of pulses applied for 90 seconds on cardiomyocyte differentiation with field strengths of 250 and 500 V/m. This electrical stimulation protocol increased both the number of differentiating beating embryoid body foci as well as the size of the beating foci. A comparable increase in the number of beating embryoid bodies was achieved by incubation with H₂O₂, indicating that the electrical field effect was transduced via the intracellular generation of reactive oxygen species. The radical scavengers dehydroascorbate and pyrrolidinedithiocarbamate, and the NF-κB antagonist N-tosyl-L-phenylalanine chloromethyl ketone, inhibited cardiac differentiation, suggesting that reactive oxygen species and NF-κB may play a role in early cardiac development. Electrical stimulation has also been shown to play a role in cardiac differentiation of human embryonic stem cells [37], through mechanisms associated with the intracellular generation of reactive oxygen species. In the cardiac tissue engineering context, electrical field stimulation has been used to improve tissue properties [38-41]. After 24 hours of regular electrical stimulation of adult ventricular myocytes in culture, cells displayed higher caffeine-induced Ca²⁺ transients than non-stimulated controls [40]. Field stimulation also enhanced the mechanical properties of myocytes when compared to quiescent myocytes, suggesting that regular electrical stimulation is important when studying the function of adult ventricular myocytes in culture.
Radisic and colleagues [41] have shown that the application of electrical stimulation during construct cultivation markedly enhanced the contractile behavior of rat neonatal cardiomyocytes cultured on scaffolds. There was also a decrease in the excitation threshold and an increase in maximum capture rate, both with time and with electrical stimulation. Analysis of cardiomyocyte ultrastructure revealed that myofibrils aligned in the direction of the electrical field lines [41], and stimulation promoted a remarkable level of ultrastructural organization in three-dimensional tissues. Importantly, it was shown that if applied early after seeding (day 1), electrical stimulation inhibited the accumulation of cardiac proteins and yielded poor contractile behavior. If applied late (day 5), electrical stimulation was less effective because of the reduced amounts of connexin-43 and contractile proteins available in the cells [41], suggesting that there is a window in which electrical stimulation can yield more favorable results.
The effects of monophasic or biphasic electrical field stimulation on the structure and function of engineered cardiac organoids were also studied and shown to yield different results [38]. Field stimulation using symmetric biphasic square pulses was an improved stimulation protocol compared to no stimulation and to stimulation using monophasic square pulses of identical total amplitude and duration. This was demonstrated by the highest success rate for synchronous contractions, a lower excitation threshold, higher density, and higher expression of connexin-43 in the biphasic group compared to the monophasic group. Biphasic field stimulation was also effective at improving the electrical excitability of multicellular cardiac organoids to which fibroblasts and/or endothelial cells were also added [38].
Electrical stimulation can also be combined with bioreactor perfusion to generate thick, functional cardiac patches [42]. Bioreactor cultivation for 4 days under perfusion with continuous electrical stimulation promoted elongation and striation of rat neonatal cardiomyocytes and increased expression of connexin-43 [42]. This illustrates the effectiveness of electrical field stimulation even in a rather complex cultivation system such as a perfusion bioreactor. Electrical stimulation has also been shown to significantly increase the average conduction velocity of neonatal rat cardiomyocyte constructs [43], which correlated with the improved contractile behavior of the tissue constructs. Electrical stimulation during culture significantly improved the amplitude of contractions, tissue morphology, and connexin-43 expression compared to the non-stimulated controls [43].
Taken together, these reports demonstrate the benefits of electrical stimulation for cardiac tissue engineering in animal models. To date, however, there are no reports in the literature of the effects of electrical field stimulation in human cardiac tissue engineering.
Interactive effects of topographical and electrical cues
A small number of studies have focused on evaluating the interactive effects of topography and electrical field stimulation. When both cues are simultaneously applied, an interesting question is which of the two will preferentially guide the cell orientation and elongation response as well as determine the cell phenotype. In a related study, interactive effects were investigated using pulsatile electrical field stimulation and substrates with approximately 700 nm deep 'V'-shaped abrasions [44]. Although both fibroblasts and cardiomyocytes elongated and aligned on non-abraded surfaces upon application of electrical field stimulation, topographical cues were a significantly stronger determinant of cardiomyocyte orientation than electrical field stimulation. The orientation and elongation response of cardiomyocytes was completely abolished by inhibition of actin polymerization (cytochalasin D) and only partially by inhibition of the phosphatidylinositol 3-kinase (PI3K) pathway (LY294002).
In a subsequent set of related studies, precise topographical cues were engineered by hot embossing tissue culture polystyrene with defined microgrooves and microridges [45]. The electrical stimulation electrodes were deposited on the chip edges such that the grooves were oriented either parallel or perpendicular to the field lines. Substrates consisted of 0.5 μm-wide grooves and 0.5 μm-wide ridges (1 μm period) or 3 μm-wide grooves and 1 μm-wide ridges (4 μm period); in all cases the grooves were 400 nm deep, and smooth substrates were used as controls. Neonatal rat cardiomyocytes elongated and aligned along the microgrooves, forming a well-developed contractile apparatus staining positively for sarcomeric α-actinin, with a more pronounced effect on substrates with 1 μm compared to 4 μm periodicity. Importantly, simultaneous application of biphasic electrical pulses and topographical cues resulted in gap junctions confined to the cell-cell end junctions rather than the punctate distribution found in neonatal cells. Electrical field stimulation further enhanced cardiomyocyte elongation when the microgrooves were oriented parallel to the electric field lines.
By incorporating gold nanowires within alginate scaffolds, Dvir and colleagues [46] were able to increase the conductivity of this biomaterial and improve electrical communication between adjacent cardiac cells. Tissues grown on these composite matrices were thicker and better aligned than those grown on pristine alginate. In addition, higher levels of the proteins involved in muscle contraction and electrical coupling were detected in the composite matrices. When subjected to electrical stimulation, the cells in these tissues contracted synchronously.
Tandon and colleagues described a novel surface-patterned microbioreactor array, in which an excimer laser-based method was used to generate a micropatterned indium tin oxide substrate with an interdigitated array of electrodes designed for electrical stimulation of cultured cells. The excimer laser-based method enables direct patterning of the indium tin oxide in a single step, without the use of harsh chemicals or a customized photomask. This allowed for the generation of a patternable and optical imaging-compatible substrate for long-term, microscale cell culture with electrical stimulation [47]. The system has been used to culture primary cardiomyocytes and human adipose-derived stem cells. Over 6 days of culture with electrical stimulation (2 ms pulse duration, 1 Hz, 180 μm wide electrodes with 200 μm spacing), both cell types exhibited enhanced proliferation, elongation and alignment, and the adipose-derived stem cells exhibited higher numbers of connexin-43-composed gap junctions.
Perspectives
It is clear that much work and development is required to advance the field of stem cell and cardiac tissue engineering to the point of significant clinical impact. The emerging technologies within the fields of biology, material science, micro- and nano-fabrication, and computational modeling are all progressing at a rapid pace. The challenge, however, is choosing the correct combination of technologies, married with suitable biology, to create human tissue replacements and in vivo-like in vitro models that are functional.
In the context of microenvironmental control in the heart, it is necessary to mention the importance of the dynamic contractile forces that are present. The ECM plays a critical role in the heart cell niche during development, homeostasis, disease, and repair. One primary mode in which the ECM communicates with heart cells is through mechanotransductive cues. Aside from static biomechanical cues (facilitated by cell integrins and focal adhesions), dynamic cues that provide stretching forces to cells through the ECM have been shown to be important in heart development and maturation. The Eschenhagen and Zimmerman groups have investigated and reported on the role and beneficial effects of mechanical stimulation in cardiac cells [31,48-50]. External mechanical stimulation aims to recapitulate the electromechanical forces observed regularly in the contracting native heart. Much like electrical stimulation, mechanical stimulation directs the elongation and orientation of cardiomyocytes, in addition to improving force of contraction and stage of maturation. Electrical stimulation may, however, be a more physiological (albeit indirect) method of inducing mechanical stimulation (compared to stretching), as this occurs in vivo via excitation-contraction coupling.
Two methods that hold promise in generating mature engineered heart tissue are 1) the control of geometrical cues and 2) the manipulation of electrical properties within the cellular microenvironment. Figure 1 summarizes the main concepts discussed and how they link to downstream effects leading eventually to changes in function. Future development will likely bring interesting advances and marriages of the mentioned concepts; in fact, there is evidence that some aspects of this research are currently ongoing.
Computational modeling is often underutilized in tissue engineering. Recent advances in the sophistication and complexity of theoretical mechanotransduction models, in addition to empirical techniques with which to validate models, have made these approaches a rich source of insight and predictability (reviewed in [51]). The end function of heart muscle is to contract at a force and rate appropriate for blood circulation. The contractility of cardiomyocytes has been modeled by numerous groups. In a recent study, Shim and colleagues [52] developed a model system that can detect the force of contraction exerted by a monolayer. Cardiomyocytes were seeded onto a thin film that curled in response to the force of contraction of the adhered cardiomyocytes. The magnitude of the exerted force was calculated from the degree of curvature of the thin film. In order to determine optimized designs for their model, they developed a finite element-based three-dimensional phenomenological constitutive model, which accounted for both the passive deformation, including pre-stretch, and the active behavior of the cardiomyocytes.
One notion that may prove useful in screening studies is a surrogate system for EHT that has the capability not only to provide the correct control cues for heart development and maturation, but also to simultaneously sense tissue function. This is currently a key obstacle for model system development, especially for a system that attempts to integrate a tissue mimetic (as opposed to two-dimensional monolayer culture) in a high-content and high-throughput manner. A few groups have utilized polymer-based cantilever systems to culture miniature tissues that simultaneously restrain the remodeling of tissue and report the forces exerted [18,49,50,53]. It would be interesting to integrate electrical control with these types of systems to both stimulate and record electrical activity while maintaining appropriate force dynamics. A system like this would constitute a complete model whereby form and function of engineered heart tissue could be controlled and sensed concurrently.
In vivo, cells are able to communicate and self-assemble without much difficulty. Self-assembly in vitro has always been a desirable option for tissue engineers, although it has proven difficult to recapitulate the key signals present in vivo that influence cells to build appropriate structure and associated function. Recapitulation of tissue morphogenesis by inducing self-organization in vitro has so far been demonstrated in many organ subunits, including the eye [54], liver [55], intestine [56], and brain [57], although not yet in the heart. This is a highly promising method of inducing tissue morphogenesis in parallel with directed cardiac differentiation, and may be supplemented with biophysical and electrical control of the microenvironment. The next generation of engineered heart tissue should take further advantage of the intrinsic self-assembly and self-organization capabilities of cells with the aid of external electrical and mechanical cues to facilitate functional tissue construction. This bottom-up approach to tissue engineering may prove efficient, provided the microenvironment can be accurately recapitulated.

Figure 1. Depiction of current methods used to manipulate heart cells to develop, mature, and assemble into functional heart tissue. Tuning the cell microenvironment by means of geometry and electrical control exhibits upstream effects on adhesion, cell-cell and cell-extracellular matrix interactions, growth and differentiation, cellular and tissue alignment via cytoskeletal organization, and the electrical and contractile apparatus. The small dark arrows in the flow diagrams indicate the sequence by which the specific method of microenvironmental control effectively manifests downstream. These end changes in the cardiac cells include changes in gene/protein expression, electrical properties, and mechanical properties. Top: during development, pluripotent stem cells differentiate into mesodermal progenitors, then cardiovascular progenitors that give rise to various cell types in the heart (cardiomyocytes, fibroblasts, endothelial and smooth muscle cells). Cell differentiation and assembly into a highly organized structure is governed by biochemical, mechanical and electrical stimuli in vivo. Tissue engineering aims to recapitulate some of these environmental factors in vitro. Middle: control of substrate topography and stiffness affects cell orientation and, as a result, functional properties. Bottom: control of electrical properties is achieved by use of conductive biomaterials, electrical stimulation bioreactors or changes in gene expression of key ion channels. The large green arrows (middle and bottom) depict the span of current techniques used in the field and link them to the regimes of cardiac differentiation and assembly where they have been applied (top). CM, cardiomyocyte; CVP, cardiovascular progenitor; E-C, excitation-contraction; EC, endothelial cell; ECM, extracellular matrix; ET, excitation threshold; FB, fibroblast; MCR, maximum capture rate; PSC, pluripotent stem cell; SMC, smooth muscle cell.
Conclusion
When guiding the differentiation of human pluripotent stem cells into heart cells, recapitulating key factors found in the native environment of the cardiac niche is critical. In addition to biochemical factors, it is necessary to integrate appropriate topology and electrical control of the system to enable the assembly of functional cardiac tissue. Engineered human heart tissue that has the capability to mimic the mature molecular signature and physiology of adult heart tissue will prove to be critical in drug testing applications, studies in cardiac pathophysiology, and the development of new cell replacement therapies. | 6,806.8 | 2013-02-14T00:00:00.000 | [
"Biology",
"Engineering"
] |
Crystal structure of Ag3Dy2(NO3)9 and quantitative comparison to isotypic compounds
The crystal structure of the title compound and its particular relation to isotypic compounds is considered.
In this work a new member of this group of compounds, Ag 3 Dy 2 (NO 3 ) 9 , is presented, the first one containing Ag and Dy, which has been found to crystallize in the abovementioned structure type.
Structural commentary
Similar to many related compounds, the title compound was obtained from a melt of nitrates, in this case silver nitrate and dysprosium nitrate pentahydrate. However, while for the synthesis of related compounds oxides are often used as lanthanide sources and the respective alkali metal nitrate or a eutectic combination of nitrates acts as solvent as well as nitrate donor, in the present experimental setting the nitrates can be deployed in stoichiometric amounts. The crystals that were found to be suitable for structure determination were obtained from a 2:1 mixture of Ag and Dy nitrates, i.e. a slight excess of AgNO3, as described in the experimental section. The surplus Ag is present as remaining AgNO3 as well as elemental silver after partial thermal or light-induced decomposition. So far, no hint of another compound with a 2:1 composition of metals in the Ag/Dy system, as could be expected for smaller lanthanides similar to the alkali metal or ammonium systems (Manek & Meyer, 1992, 1993a), has been observed. Ag3Dy2(NO3)9 (Fig. 1) crystallizes in space group P4₁32 with most atoms at general positions, except for Ag, N1 and O1 at 12d and Dy at 8c Wyckoff positions. The asymmetric unit comprises one Ag, one Dy, two N, and five O atoms. The Dy atom, being located on a threefold axis, is coordinated by six bidentate nitrate anions with Dy-O distances of 2.557 (11)-2.732 (11) Å (see Fig. 2a); the surrounding oxygen atoms form a distorted icosahedron (Fig. 2b). The polyhedra are connected to neighbouring icosahedra via common vertices, and inside this polyhedron the Dy atom is slightly off-centre, as shown by the formation of the shortest Dy-O distances to O3 and O4 as part of the same NO3⁻ anion (the lower one in Fig. 2b), most probably driven by repulsion of next-neighbour Dy atoms. The silver atom is coordinated by five nitrate ions in an exclusively bidentate manner (Fig. 3). The Ag-O distances span quite a large range: besides eight distances between 2.741 (11) and 3.004 (11) Å, two relatively short distances of 2.383 (15) Å are found. These short bonds involve oxygen atoms in almost opposite positions, which form an O-Ag-O angle of 154.7 (6)°, indicating the preferred formation of AgO2 dumbbells even in an environment of quite rigid complex anions, as observed for instance in Ag4SiO4 (Klein & Jansen, 2008), in contrast to a more spherical 'alkali metal-like' coordination as in Ag3SbO4 (distorted rock salt structure; Klein & Jansen, 2010).

Figure 2. Twelvefold coordination of the Dy3+ ion by six bidentate nitrate ions in Ag3Dy2(NO3)9: (a) view along the threefold symmetry axis; (b) distorted icosahedron around Dy. Atoms are drawn at the 60% probability level.
Figure 1. Unit cell of Ag3Dy2(NO3)9, viewed along the c axis; atomic displacement ellipsoids are drawn at the 60% probability level.
The preference for this dumbbell-like coordination is further indicated by the largest axis of the Ag displacement ellipsoid lying perpendicular to the AgO2 dumbbell direction (see Fig. 3), which also represents the largest extension of an anisotropic displacement parameter of all atoms in this structure (see supporting information, U22). The two independent nitrate ions are perfectly planar, with O-N-O angle sums of 360.00° and 359.79° around N1 and N2, respectively. Both nitrate ions are situated between three bidentately coordinated metal atoms, forming almost planar AgDy2(NO3) and Ag2Dy(NO3) units, respectively, as illustrated in Fig. 4. The longest N-O distances and the smallest O-N-O angles are found in the direction of the coordinated Dy atoms; in addition, the Ag coordination that includes a short Ag-O distance shows an O-N-O angle slightly below the mean value. The appearance of this structure type for the combination Ag-Dy is somewhat remarkable. While silver, as an atypical single-charged cation, slightly deforms its direct environment to achieve a more convenient coordinative situation, as explained above, dysprosium represents the heaviest lanthanide observed in this structure type so far and, thus, the one with the smallest ionic radius (Shannon, 1976), and a twelve-coordinate site seems to be unusual for such a small lanthanide. This view is supported by the finding that compounds that include smaller lanthanide cations avoid adopting this structure type in favour of another structure with a smaller coordination number and even a slightly different composition (A/Ln = 2:1; Manek & Meyer, 1992, 1993a). Additionally, this might be confirmed by the 'underbonding' of the Dy cation, as the bond-valence sums (Brown & Altermatt, 1985) are calculated to be 2.51 valence units for the threefold positively charged ion, according to the parameters of Brese & O'Keeffe (1991).
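For readers who wish to retrace the arithmetic behind such a bond-valence check, the sketch below applies the standard expression v = exp[(R0 − d)/b] with b = 0.37 Å. The R0 value for Dy(III)-O (2.036 Å, as we recall it from the Brese & O'Keeffe tabulation; it should be verified against the original) and the twelve placeholder distances spread over the reported 2.557-2.732 Å range are illustrative, so the resulting sum only approximates the 2.51 valence units quoted above.

```python
import math

# Bond-valence sum for a 12-coordinate Dy site: v_i = exp((R0 - d_i) / b).
R0_DY_O = 2.036   # Angstrom, Brese & O'Keeffe value for Dy(III)-O (to be verified)
B = 0.37          # Angstrom, universal constant of the bond-valence model

# Placeholder Dy-O distances spanning the reported 2.557-2.732 A range;
# in practice the refined distances from the CIF would be used here.
dy_o = [2.557, 2.58, 2.60, 2.62, 2.64, 2.66,
        2.67, 2.69, 2.70, 2.71, 2.72, 2.732]

bvs = sum(math.exp((R0_DY_O - d) / B) for d in dy_o)
print(f"BVS(Dy) ~ {bvs:.2f} valence units")  # ~2.3 here; clearly below the formal +3
```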
The crystal structure has been quantitatively compared to isotypic structures by applying the program compstru (de la Flor et al., 2016), available at the Bilbao Crystallographic Server (Aroyo et al., 2006). With Ag3Dy2(NO3)9 as the reference structure, Table 1 lists the absolute distances between paired atoms as well as the arithmetic mean of these distances (d_av), the degree of lattice deviation (S) and the measure of similarity (Δ). Generally, the low values of S and Δ indicate a close relationship between all phases, with a trend towards larger values as the lattice parameters differ more strongly from the Na to the Rb compounds. The differences in d_av, S and Δ are, of course, determined to a higher degree by the more strongly differing radii of the (more frequent) alkali metal cations than by those of the more similar lanthanide ions. Significantly, in all cases the largest displacements between atom pairs are observed for O5, i.e. the O atom closest to Ag, confirming the special bonding situation of Ag, including the above-mentioned AgO2 dumbbells. Consequently, the whole NO3- anion of which O5 is a part is shifted slightly more than the atoms of the other anion. The Ag atom is also affected, as indicated by higher Ag-A displacements than those of the lanthanide cation pairs, while the coordination of the Ln cations remains similar (distorted icosahedral, slightly off-centred), merely accompanied by decreasing Ln-O distances with decreasing cation radii. An exception is the, so far, only known Na structure, for which the similarity measure as well as the relative displacements are about one order of magnitude lower than for all other examples, indicating that the packing is distorted to a similar degree by the small Na cation as it is in the title compound by the Ag cation. However, the closest Ag-O distance is shorter than all Na-O distances in the related Na3Nd2(NO3)9.
Figure 3
Coordination of the Ag+ cation by five bidentate nitrate anions. The shorter Ag-O bonds, which define the AgO2 dumbbell, are emphasized; displacement ellipsoids are drawn at the 60% probability level. [Symmetry codes: (ii) y, z, x; (iii) x + 1/4, -z + 1/4, y - 1/4.]
Figure 4
Planar surrounding of the two independent nitrate anions: NO3(1) (upper) coordinating two Dy and one Ag, viewed perpendicular to the twofold symmetry axis through Ag, N1 and O1; NO3(2) (lower) coordinating one Dy and two Ag, with the short Ag-O5 bond drawn thicker than the other Ag-O bonds. All atoms are shown at the 60% probability level.
Synthesis and crystallization
An alumina crucible was charged with 359 mg AgNO3 (2.1 mmol; Merck; p.a.) and 495 mg Dy(NO3)3·5H2O (1.1 mmol; Alfa Aesar; 99.99%). The mixture was melted at 573 K for 72 h in an Ar atmosphere and then cooled to 453 K at a rate of 0.1 K min-1. Within an amorphous yellow-grey matrix, pale-yellow plates were found, which are hygroscopic. EDX measurements on several crystals confirm the presence of Ag and Dy as the only elements heavier than oxygen. For the X-ray data collection, crystals were immersed in perfluoroalkyl ether, which covers them and acts as a glue on a glass tip during data collection at low temperature.
Trisilver didysprosium nonanitrate
Crystal data
Special details
Geometry. All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes.
Refinement. Refined as a 2-component inversion twin.
"Geology"
] |
Solution of Nonlinear Advection-Diffusion Equations via Linear Fractional Map Type Nonlinear QCA
Linear fractional map type (LFMT) nonlinear QCA (NLQCA), one of the simplest reversible NLQCAs, is studied analytically as well as numerically. The linear advection equation or the Time Dependent Schrödinger Equation (TDSE) is obtained from the continuum limit of linear QCA. Similarly, it is found that some nonlinear advection-diffusion equations, including the inviscid Burgers equation and the porous-medium equation, are obtained from LFMT NLQCA.
Introduction
Quantum Cellular Automaton (QCA) [1] is a quantum version of the (classical) cellular automaton (CA). The word QCA was introduced by Grössing and Zeilinger [2], but their model was not completely unitary. QCA in the proper sense, having both locality and unitarity, was first investigated by Meyer [3] [4] [5] [6] and then followed by Boghosian and Taylor [7] [8], although they used the term quantum lattice gas automata (QLGA) for the two-component case. Since the middle of the 2000s, new axiomatic approaches to QCA, different from the previous conventional or ad hoc ones, have been proposed by several researchers [9] [10] [11] in order to comprehend QCA in a more systematic and unified way, by clarifying the definitions and/or coping with the difficulties of extending it to a form relevant to an infinite dimensional Hilbert space. In most axiomatic QCAs, unitarity and causality (namely, the existence of an upper limit on the speed of information propagation) are fundamental, and locality is derived from them [10]. In this study, however, we describe QCA in a rather conventional fashion. There are several frameworks for quantum lattice systems other than QCA, namely Quantum Walk (QW) [12], Quantum Lattice Gas Automata (QLGA) [7] [8] and Quantum Lattice Boltzmann (QLB) [13]. They are similar or mathematically equivalent to some QCAs [14] [15] [16].
Consider the simplest partitioned QCA on a 1D-time 1D-space lattice whose time evolution rule is given by Figure 1 and Equation (1). This rule is governed by a 2 × 2 basic unitary matrix (called the scattering unitary matrix [11]) which operates on a vector consisting of the wave function values at adjacent grid points.
The simplest case is the QCA with constant U (independent of space and time). We then generalize it to the QCA with space-dependent U, as described by Equation (2).
(Equations (1) and (2): the update rule in which the scattering unitary U, constant in Equation (1) and space-dependent U(x) in Equation (2), acts on the pair of amplitudes at adjacent grid points; the explicit formulas are garbled in the source.)
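To make the partitioned update concrete, here is a minimal numerical sketch of one linear QCA step of the kind described above; the single-angle parametrization of U and all values are illustrative assumptions, not the paper's Equation (1):

```python
import numpy as np

def qca_step(psi, theta):
    """One step of a two-component partitioned QCA on a periodic 1D lattice.

    psi: complex array of shape (N, 2) holding left/right-moving components.
    A 2x2 scattering unitary U mixes the two components at each site,
    then each component is shifted one cell in its propagation direction.
    """
    c, s = np.cos(theta), np.sin(theta)
    U = np.array([[c, 1j * s],
                  [1j * s, c]])          # a simple unitary; illustrative choice
    psi = psi @ U.T                       # local scattering at every site
    out = np.empty_like(psi)
    out[:, 0] = np.roll(psi[:, 0], -1)    # left-mover shifts left
    out[:, 1] = np.roll(psi[:, 1], +1)    # right-mover shifts right
    return out

# Norm conservation check (unitarity):
rng = np.random.default_rng(0)
psi = rng.normal(size=(256, 2)) + 1j * rng.normal(size=(256, 2))
print(np.allclose(np.sum(np.abs(psi)**2),
                  np.sum(np.abs(qca_step(psi, np.pi / 5))**2)))  # True
```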
Moreover, QCA/QW with time-dependent U has been studied. Especially remarkable results have been obtained for the QW whose parameter is given by the Fibonacci sequence [17] [18]. In this paper we propose a nonlinear QCA (NLQCA) and investigate its properties. The basic 2 × 2 matrix is given by
(Equation (3): the amplitude-dependent 2 × 2 matrix; the explicit formula is garbled in the source.)
In the NLQCA, the basic 2 × 2 matrix depends on the amplitude of the wave function. Unitarity, locality and reversibility are fundamental and powerful properties of QCA, and it is important to preserve them when constructing an NLQCA. NLQCA was investigated by Meyer [19] in a rather general way, and several articles [20] [21] can be found, especially under the name of nonlinear quantum walk (NLQW). However, it seems that a concrete form of NLQCA has not yet been presented for study. We here propose the linear fractional map type (LFMT) nonlinear QCA (NLQCA) and study its properties in order to understand NLQCA clearly.
After this introduction, the rest of the article is organized as follows. In Section 2 we introduce the LFMT phase rotation and define three typical types of it: type-0, type-1 and type-2. We also perform its fixed point analysis, which is a useful means to investigate the characteristics of NLQCA. In Section 3 we introduce two kinds of reversible NLQCA using the LFMT phase rotation, namely complex-LFMT NLQCA and real-LFMT NLQCA. In Section 4 we investigate the properties of complex-LFMT NLQCA, focusing on type-0. In Sections 5 and 6 we investigate the properties of real-LFMT NLQCA (type-0 in Section 5 and type-2 in Section 6).
Definition
Consider the following map (from the complex plane to the complex plane itself) which conserves the absolute value.
Here A = A(r) and B = B(r) are complex numbers and functions of r = |z|, and X* denotes the complex conjugate of X. The explicit forms of Equations (4) and (5) are garbled in the source; a form of Equation (4) consistent with the stated properties is w = z (A z* + B) / (A* z + B*). As the numerator and the denominator of Equation (4) are complex conjugates of each other, the absolute value of the fraction is 1, and therefore Equation (4) represents a phase rotation map; Equation (5) is its equivalent polar form, and the equivalence of Equations (4) and (5) is easily proved starting from Equation (5) (the proof steps are garbled in the source). It is easily shown that LFMT phase rotations are closed with respect to inversion and composition (see Equations (44) and (55) in Appendix A). Generally A, B can be any functions of r; however, we restrict our discussion to the following three cases for simplicity. We discuss type-0 mainly in this study.
(1) type-0: A(r) = A0; (2) type-1: A(r) = A1 r; (3) type-2: A(r) = A2 r² (the explicit definitions are garbled in the source; this reading is consistent with the relation between type-0 and type-2 used later). Here A0, A1, A2 are constant complex numbers, and we assume B(r) is constant in all cases. There are several formulas on LFMT phase rotations, which we summarize in Appendix A; we also summarize the extension of this discrete phase rotation to a continuous one in Appendix B. Additionally, we use the following notation for simplicity. [Definition] The function which multiplies by the constant k is written as [k]; note that $ is not included in [] in this case. The complex conjugation operator C and the inversion operator V are defined correspondingly (the explicit definitions are garbled in the source). We omit the function composition symbol (∘) when [..] are used.
Small Amplitude Limit and Large Amplitude Limit
In the LFMT phase rotation (type-0), |Az| ≪ |B| or |Az| ≫ |B| corresponds to the small or large amplitude region, respectively.
In the small amplitude region this map becomes a linear map, and in the large amplitude region it becomes a linear map with complex conjugation. In Figure 2 we illustrate the LFMT phase rotation in the case A = B = 1. We can see that this map approaches a linear map in the limits z → 0 and z → ∞.
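As a quick numerical check of these limits, the following sketch evaluates the type-0 map in the reconstructed form w = z(Az* + B)/(A*z + B*) quoted above (itself a hedged reading of the garbled Equation (4)) and verifies absolute-value conservation together with the two limiting behaviors:

```python
import numpy as np

def lfmt(z, A=1.0, B=1.0):
    """Type-0 LFMT phase rotation (reconstructed convention)."""
    return z * (A * np.conj(z) + B) / (np.conj(A) * z + np.conj(B))

z = 0.3 * np.exp(1j * 0.7)
print(np.isclose(abs(lfmt(z)), abs(z)))                   # True: |w| = |z|

# Small amplitude: w ~ z * B/B* (a plain, z-independent phase rotation)
zs = 1e-6 * np.exp(1j * 0.7)
print(np.isclose(lfmt(zs, A=1, B=1j), zs * 1j / (-1j)))   # True

# Large amplitude: w ~ (A/A*) z* (linear map with complex conjugation)
zl = 1e6 * np.exp(1j * 0.7)
print(np.isclose(lfmt(zl, A=1j, B=1), (1j / -1j) * np.conj(zl)))  # True (approx.)
```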
Fixed Points and Their Stability
Approaches based on fixed point analysis are useful when we investigate the characteristics of complex-LFMT NLQCA. In general, a map from a circle to the circle itself is called a circle map, and the LFMT phase rotation of Equation (4) is a circle map for any fixed r. We now find the fixed points of this circle map. The fixed point equation, Equation (10), is garbled in the source; it is satisfied when Equation (11) holds, and Equation (11) has a real solution φ if and only if a ≥ sin θ (as far as the garbled inequality can be read). In order to investigate the stability of these fixed points, we calculate the gain of the linearized map around a fixed point φ. Let x be the small angle deviation from φ; then we obtain the linearized map as follows.
This means the phase gain at the fixed point φ is given by Equation (13); the second expression in the parentheses of Equation (13) is obtained by using Equation (11). If φ is a fixed point, then π - φ is another fixed point, and it can be proved that the product of the gains for these two fixed points is 1, as shown in Equation (14),
because the product of the two denominators equals the product of the two numerators (the explicit expression is garbled in the source). Therefore one fixed point is stable and the other fixed point is unstable.
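The gain-product relation can be checked numerically without the garbled intermediate formulas: sample the reconstructed circle map at fixed r, locate its fixed points, and compare the linearized gains. A minimal sketch, again assuming the reconstructed form of Equation (4) and illustrative parameter values:

```python
import numpy as np

def phase_map(phi, A, B, r):
    """Phase of the type-0 LFMT rotation at fixed radius r (reconstructed form)."""
    z = r * np.exp(1j * phi)
    w = z * (A * np.conj(z) + B) / (np.conj(A) * z + np.conj(B))
    return np.angle(w)

A, B, r = 1.0, np.exp(1j * 0.2), 0.5        # illustrative parameters
phis = np.linspace(-np.pi, np.pi, 200001)
g = phase_map(phis, A, B, r) - phis
g = np.angle(np.exp(1j * g))                 # wrap to (-pi, pi]
idx = np.where(np.diff(np.sign(g)) != 0)[0]  # sign changes locate fixed points

eps = 1e-6
gains = []
for i in idx:
    phi0 = phis[i]
    gains.append((phase_map(phi0 + eps, A, B, r)
                  - phase_map(phi0, A, B, r)) / eps)
print(gains, np.prod(gains))   # product of the two gains should be ~1
```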
LFMT NLQCA
In this study, we investigate two kinds of NLQCA using the LFMT phase rotation: complex-LFMT NLQCA and real-LFMT NLQCA. We refer to the Time Dependent Schrödinger Equation (TDSE)-type QCA [22] with an additional LFMT phase rotation at each grid point as complex-LFMT NLQCA, and we refer to the NLQCA whose basic 2 × 2 unitary matrix (in the sense of Equation (3)) is given by the mapping of the real and imaginary parts of the LFMT phase rotation as real-LFMT NLQCA.
In the above expression, σ1 = (0 1; 1 0) is the x-component of the Pauli matrices (the remainder of the expression is garbled in the source). In this study we perform the numerical experiment with A = 1 and vary the phase of B. (As a change of the phase of A can be compensated by that of the initial value of ψ, we need not vary the phase of A when investigating merely the qualitative behavior of the time evolution, which is not sensitive to the initial ψ; therefore we set A = 1.) The obtained qualitative behavior of complex-LFMT NLQCA can be summarized as follows. Before showing the summary of the simulation, we first illustrate, in Figure 3, the region of parameters A, B where a phase lock can occur, which is the key diagram for understanding the qualitative behavior. |B| is assumed to be 1 without loss of generality, and the unit of the horizontal axis is rad/2π. The region where a fixed point can exist is colored blue, although the neighborhood of purely imaginary B (namely 0.25, 0.75 on the horizontal axis) is left white because both of its fixed points are marginally stable/unstable.
In type-0 LFMT NLQCA, a singular amplitude point exists where |Aψ| = |B| (there the denominator of the map can vanish). We work in a domain far from such a singularity, namely |Aψ| ≪ |B| (small amplitude case) or |Aψ| ≫ |B| (large amplitude case) over all space and time. We discuss here only the small amplitude case. The qualitative behavior is roughly classified by the occurrence of a phase lock.
(Case 1) When imag(B) = 0 [000 in Figure 4]: a phase lock occurs over all space, and the waveform finally reaches a flat pattern. (Case 2) When imag(B) is sufficiently small but not 0 [001, 999 in Figure 4]: according to the magnitude relation between the relevant amplitude combination and imag(B) (the explicit expression is garbled in the source), a phase lock or an unlock occurs temporally or spatially. The noise from spatial high frequencies is rather low, and the waveform is smooth.
(Case 3) When imag(B) is not small [050, 075, 100, 900, 925, 950 in Figure 4]: a phase lock never occurs, and the phase keeps rotating.
In Figure 4, the 3-digit number DDD in the legend encodes the phase of B (the exact encoding is garbled in the source). A periodic boundary condition is used. At t = 0 the initial waveform is the Gaussian form 0.01 exp(-10 (x/256 - 0.5)²) (a hedged reading of the garbled expression), and the absolute values of the two-point averaged ψ are plotted. When arg(B) = 0 (Case 1), the perfect phase lock occurs. However, even in the parameter region of Case 3, singular behaviors are observed in the neighborhood of some special parameters, which also depend on θ (the parameter of the linear QCA part). In particular, there is a singular point above which the behavior is significantly different from that of the free TDSE; the observation of this phenomenon is shown in Figure 5 for θ = π/5. Moreover, other singular points are observed, above which the noise level with reference to the free TDSE becomes larger. We need further investigation of the nature and mechanism of these singularities.
Parameters for the Numerical Experiment
From this section we investigate real-LFMT NLQCA. First we explain the parameters A, B used in the case of type-0, described by Equation (17).
Symmetry and Classification
We consider the following CPTA symmetry (Equations (18)-(21)). We find that NLQCAs with parameters belonging to the same parameter group behave qualitatively similarly.
The CPTA mapping for ID = (n, m) of Equation (17) is shown in Table 1. As illustrated in Figure 6, C-inversion is simply a redefinition of the sign of the amplitudes. A-inversion is also a redefinition of sign (in this case, sign inversion of all amplitudes). In this section we show the simulated waveforms of types G/E/C. For these cases, the evolution of the waveform can be explained almost entirely by comparison with the corresponding continuous-time counterpart equations discussed later. In Figure 7 and Figure 8 the simulated waveforms of type-G are shown; in Figure 9 the simulated waveform of type-E is shown; and in Figure 10 the simulated waveform of type-C is shown. In all cases the initial (t = 0) waveform is set to the same Gaussian form as above and simulated both in forward time (blue) and backward time (red) with a periodic boundary condition. Backward time evolution is performed using the T-inversion formula (20).
Two-point-averaged |ψ| values are plotted, and the spatial axis indicates x/2.
Type-G (Figure 7) corresponds to the inviscid Burgers equation, where T-inversion simply corresponds to the inversion of the moving direction. After the steep slope collapses, the NLQCA waveform leaves the applicable range of the inviscid Burgers equation approximation, and KdV-like soliton wave packets spawn (see Figure 8).
In type-E (Figure 9) we can observe very slow diffusive behavior. This can be approximated by a certain kind of nonlinear diffusion equation which we discuss later. In the backward time simulation the sign of the amplitude is inverted at the beginning, after which it again behaves as a nonlinear diffusion equation; this initial amplitude inversion can be interpreted as the transition process from a non-positive diffusion constant to a positive one. Type-C (Figure 10) lies between type-G and type-E and behaves like a Burgers equation with viscosity, namely the steep slope does not collapse and keeps moving while lowering its height. In the backward time simulation the sign of the amplitude is inverted at the beginning, as in type-E, after which it behaves like a viscid Burgers equation.
Type-H (51), Type-F (71), Type-D (61)
In this section we show that type-F and type-H can be understood from the viewpoint of fixed points. In the case of real-LFMT NLQCA, the meaning of a fixed point differs slightly from that in the complex-LFMT NLQCA case discussed before: here, a fixed-point waveform does not stand still but propagates in the ±45° direction in spacetime. We now consider the (pseudo) fixed point equation for a fixed waveform moving to the left or right at a speed of one (Equations (22) and (23); the explicit formulas are garbled in the source). Here the sign + or - of ± indicates a true fixed point or a pseudo fixed point, respectively, where "pseudo" means accepting temporal sign alternation. Using Equations (45), (46) and (48) in Appendix A, Equation (23) can be rewritten as Equation (24). Obviously, in order for z to be a true or pseudo fixed point, the condition of Equation (25) must hold, where R and I denote the set of real numbers and the set of purely imaginary numbers, respectively. Sufficient (and presumably necessary) conditions for Equation (25) can be listed for the true and pseudo fixed point cases (the explicit lists are garbled in the source). From Equation (17) and Table 1, we conclude that type-F has true fixed points and type-H has pseudo fixed points.
In Figure 11, Figure 12 and Figure 13, the simulated waveforms of type-H, type-F and type-D are shown, respectively. In all cases the initial (t = 0) waveform is set to the same Gaussian form as above and simulated both in forward time (blue) and backward time (red) with a periodic boundary condition. Backward time evolution is performed using the T-inversion formula (20). Two-point-averaged |ψ| values are plotted, and the spatial axis indicates x/2. In all cases, waveforms around the start time, after a short time (t = 9900 to 10100) and after a long time (t = 499900 to 500100) are shown. Type-H (Figure 11) is half-stable, type-F (Figure 12) is super-stable, and type-D (Figure 13) is unstable; these results correspond to the above fixed point analysis. In these simulations we used an NLQCA parameter corresponding to the advection-type linear QCA (see [22]), where the time evolution of the waveform is described as the superposition of left-moving and right-moving wave packets (the explicit parameter values of Equation (27) are garbled in the source). In type-H (Figure 11), two mountain-shaped wave packets move in opposite directions. The right-moving wave packet (which corresponds to the pseudo fixed point) is stable forever, whereas the left-moving wave packet becomes unstable in the long run.
Here only the waveforms for even t are plotted; therefore the temporal sign alternation of the right-moving wave packet cannot be seen. In type-F (Figure 12) the right-moving wave packet disappears soon, whereas the left-moving wave packet (which corresponds to the true fixed point) is super-stable. In type-D (Figure 13) the waveform is similar to type-F, namely the right-moving wave packet disappears soon whereas the left-moving wave packet keeps moving; however, this left-moving wave packet becomes unstable in the long run.
Continuum Limit of Type-0 Real-LFMT NLQCA
It is known that the continuum limit of the simplest QCA becomes the linear advection equation or the TDSE (see for example [22]). We find that in the case of type-0 real-LFMT NLQCA the continuum limit becomes a nonlinear advection-diffusion equation. Concretely, in type-0 real-LFMT NLQCA the continuum limit exists if B is real. Consider the case where the velocity sin θ depends on the amplitudes (a, b, c, d) in the advection-type QCA (see for example [22]).
As (a + b + c + d)/4 is the average of the amplitudes and (b + d - a - c)/2 is the difference between the right-side average and the left-side average, p and q/2 are the coefficients of ψ and ψx in sin θ. Let α, β be (p - q)/4 and (p + q)/4, respectively; then we obtain Equation (29). Inserting Equation (28) into Equation (29) gives Equation (30), and Equation (31) implies the relation used below (the explicit formulas of Equations (29)-(32) are garbled in the source). As NLQCA is unitary, its continuum limit must be a unitary time evolution equation; the best candidate is Equation (33). Note that Equation (33) can also be rewritten in the following forms.
In conservation form (Equations (34) and (35); the explicit expressions are garbled in the source), ρ is a conserved quantity. We can regard this equation as a nonlinear advection-diffusion equation whose advection coefficient and diffusion coefficient are proportional to ρ. When p = 0, this type of nonlinear diffusion equation is called a porous-medium equation [21] [23] (in the case of Equation (35) the degree of porosity is 3/2). In the article [21] a certain kind of NLQW was proposed, and it was shown numerically that its continuum limit obeys a porous-medium equation with a degree of porosity of approximately 1.5.
In Figure 14 we demonstrate by numerical simulation that the continuum limit of the NLQCA of Equation (32) is indeed well described by Equation (33). The PDE (33) is solved using the Finite Difference Method (FDM) with a 4th-order Runge-Kutta time integrator; the real PDE parameters p, q are obtained from A using Equation (32). The linear QCA governed by Equation (36) is related to the TDSE-type linear QCA whose basic 2 × 2 matrix is given by Equation (37).
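The PDE side of such a comparison is straightforward to reproduce in outline. Below is a minimal sketch of an FDM/RK4 integrator for a porous-medium-type equation ρt = (ρ^m)xx with m = 3/2, the degree of porosity quoted above; the grid size, time step and initial profile are illustrative choices, not the paper's settings:

```python
import numpy as np

N, dx, dt, m = 256, 1.0, 0.05, 1.5   # illustrative grid/step; porosity m = 3/2

def rhs(rho):
    """Porous-medium RHS: d(rho)/dt = d2(rho^m)/dx2, periodic boundaries."""
    f = rho**m
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

def rk4_step(rho):
    k1 = rhs(rho)
    k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2)
    k4 = rhs(rho + dt * k3)
    return rho + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

x = np.arange(N)
rho = 0.01 * np.exp(-10 * (x / N - 0.5)**2)   # Gaussian initial profile
mass0 = rho.sum()
for _ in range(2000):
    rho = rk4_step(rho)
print(np.isclose(rho.sum(), mass0))   # True: rho is a conserved quantity
```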
Property of Type-2 Real-LFMT NLQCA
Both linear QCAs (Equations (36) and (37)) have essentially the same dispersion relation, Equation (38), and basically behave as the TDSE [22]. Note that, according to the argument in [22], the 2 × 2 unitary matrices in the Z-transformation representation for Equations (36) and (37) are U(s) and U0(s) (the explicit matrices are garbled in the source), and the eigenvalues of U(s) and U0(s) have essentially the same value up to a constant phase factor, which leads to Equation (38). However, in the s → 1 limit the eigenvectors of the two matrices differ. Therefore the linear QCA governed by Equation (36) is thought to be described by the superposition of two TDSEs (something like "forward going" and "backward going") in the small wavenumber limit, just like the advection-type linear QCA (see [22]). Now we assume the form of Equation (40), as in the case of type-0 NLQCA (Equations (28), (29)), expecting to obtain type-2 NLQCA.
(Equation (40): the assumed amplitude dependence; the explicit formula is garbled in the source.)
It is not straightforward to represent type-2 real-LFMT NLQCA by the continuum limit approach used for type-0 NLQCA.
Relation between Type-2 Large Amplitude and Type-0 Small Amplitude
In this section we try to understand the large amplitude behavior of type-2 NLQCA by relating it to the small amplitude behavior of type-0. Using the both-sides inversion and conjugation formula (Equation (54) in Appendix A), we can state that if z evolves by type-0 NLQCA, varies slowly in space, and has a continuum limit given by some PDE for ψ, then the corresponding large-amplitude type-2 evolution is expected to obey approximately the same PDE rewritten for 1/ψ. (Note that it is important to consider the appropriate pairing of adjacent grid points, which causes the sign-alternating behavior; the precise statements are garbled in the source.) We numerically examine the validity of this approximation in the case of the inviscid Burgers equation. The inviscid Burgers case (type-G) is the most promising for the above continuum-limit argument, because the PDE for ψ is a unitary time evolution too. (This can be easily verified by the fact that the flux J(ρ, ρx) in Equation (35) contains only ρ.) In Figure 15 we show the simulated NLQCA waveforms of both cases. The two waveforms match well, although a certain adjusting parameter needs to be introduced. Note that we do not plot the solution of the inverted Burgers equation itself but its inverted values. The initial waveform for the Burgers equation is 0.01 + 0.01 exp(-10 (x/256 - 0.5)²) (a hedged reading of the garbled expression), and its inverse is used for the inverted Burgers equation.
Conclusion
The linear fractional map type (LFMT) nonlinear QCA (NLQCA) has been studied analytically as well as numerically. First, we introduced the LFMT phase rotation, which maps the complex plane to itself while conserving the absolute value. We employed this LFMT phase rotation in two ways to construct reversible NLQCA, namely complex-LFMT NLQCA and real-LFMT NLQCA. In order to categorize the qualitative behavior of the LFMT NLQCA, a stability analysis around fixed points was introduced. Complex- and real-LFMT NLQCA were studied numerically using a simple model, and the results were summarized and analyzed according to the symmetry classification for real-LFMT NLQCA. We further studied the continuum limit of the real-LFMT NLQCA analytically and verified it numerically. The linear advection equation or the Time Dependent Schrödinger Equation (TDSE) is obtained from the continuum limit of linear QCA; similarly, it was found that nonlinear advection-diffusion equations, including the inviscid Burgers equation and the porous-medium equation, are obtained from real-LFMT NLQCA. Although the emergence of this porous-medium equation as the continuum limit of some NLQW was already reported in [21], the real-LFMT NLQCA in our study includes more general dynamics. We also observed soliton-like behavior.
Here A, B are constants, or A, B can be functions of |$| if |k| = 1.
[Proof] Equation (45a) is a special case of the general formula. (Note that the factor u(k) must be factored out to the left, or $, the evaluated value of the right side, would be changed.) Equations (45c) and (45d) can be obtained from Equations (45b) and (45a), respectively, by applying C from the right and using Equation (48). Note that if |k| = 1, Equations (45c) and (45d) can also be obtained by an appropriate replacement of A, and B can be a function of |$|. This formula means that the type-0 LFMT phase rotation is related to the type-2 LFMT phase rotation via complex conjugation (C). [Remark] A, B can be functions of |$|; therefore the statement holds especially for type-1 and type-2. [Proof] It is obvious by applying the right-side conjugation formula and then the both-sides inversion formula. (Several formulas in this appendix are garbled in the source.)
Both-sides inversion, and both-sides inversion with conjugation: the latter is easily derived from the both-sides inversion and both-sides conjugation formulas. (The explicit formulas and the proof steps are garbled in the source.)
Figure 1.
Figure 1. Evolution rule of QCA: the unit system with grid spacing Δx = 1 and time step Δt = 1 is used.
Figure 2.
Figure 2. Type-0 LFMT phase rotation in the case A = B = 1. Polar coordinates before/after the mapping are shown. Left: before the mapping; Center: after the mapping (small amplitude region); Right: after the mapping (large amplitude region: phase rotation of the double angle of the argument).
Figure 3.
Figure 3. The region of parameters A, B where a phase lock can occur (which depends on the amplitude level).
Figure 5.
Figure 5. A periodic boundary condition is used. (The remaining caption details are garbled in the source.)
(Note that [i]C(a + ib) = b + ia, namely [i]C means the P-inversion which swaps a and b.) (2) A commutes with P, T, C. (3) P commutes with T (PT = TP). (4) C "anti"-commutes with T and P (CP = PCA, CT = TCA). We define the six groups C, D, E, F, G, H so that ID1 and ID2 belong to the same group if ID1 and ID2 can be transformed into each other by C, P, T, A. IDs belonging to the same group show qualitatively similar behaviors. [Remark] These symmetry formulas are the same for type-2, because A, B can be regarded as functions of |$| and we can replace A with A|$|².
in all cases; therefore the same holds in the small amplitude limit (the explicit formula is garbled in the source).
Figure 15.
Figure 15. The comparison of the type-0 NLQCA solution with that of the Burgers equation.
Left-side conjugation is equivalent to both-sides inversion if A and B are swapped (see Equation (52)). A, B can be functions of |$|.
the waveform behaves like an attractive Nonlinear Schrödinger Equation (NLS) (namely, it does not diffuse as in the free TDSE case) and condenses with a vibration. We have only to investigate 32 out of the 64 (= 8 × 8) parameter sets, and moreover about half of those 32 are sufficient if we employ the space reflection symmetry (P), which we explain next.
Table 1.
Mapping table for ID = (n, m) by applying P, T, A, C or their combinations.
"Mathematics",
"Physics"
] |
An electromagnetic coupling treatment for improving the cutting performance of cemented carbide-coated tools
ABSTRACT To improve the cutting performance and prolong the service life of carbide-coated tools in the machining of ductile iron, an electromagnetic coupling treatment (EMCT) was carried out. Cutting experiments show that the cutting force, the cutting temperature and the roughness of the machined surface are all reduced after EMCT. It is found that after EMCT with optimal parameters, the dislocation density, microscopic strain, microhardness and bonding strength of the alumina coating increase by 109.2%, 28.2%, 28.3% and 26.6%, respectively. Using the actual machining of a differential housing to verify the tool life, it is found that after EMCT a single tool can process 18.4 more workpieces; in other words, the tool life increases by 44%. EMCT can promote element diffusion, optimize coating properties and has great potential for extending coated-tool life.
Introduction
Ductile iron is widely used in wear-resistant parts such as diesel engine bodies, cylinder heads and crankshafts due to its prominent casting performance, corrosion resistance, wear resistance and vibration damping. Ductile iron accounts for more than 40% of the cast iron market today [1]. But ductile iron is difficult to machine because of its high hardness and strength. This is especially true of pearlitic ductile iron, whose strength is higher and for which machining tool life is significantly reduced [2]. The dispersed matrix structure of ductile cast iron is the main factor affecting its machinability: the hard phase in ductile cast iron impacts the tool and affects the stability of the cutting process and the tool life [3]. Chip discontinuity increases tool vibration and wear, while heat accumulation at the tool nose accelerates oxidation and failure at that point. If the more expensive CBN cutting tool is used, the ferrite in the cast iron causes chemical erosion of the CBN particles in the tool, resulting in rapid wear and reduced tool life [4,5]. Currently, coated cemented carbide tools with good wear resistance and oxidation resistance are mainly used for this processing [6][7][8], but there are still some problems, such as short tool life and poor surface quality [9]. It is therefore necessary to find a way to enhance the cutting performance of coated cemented carbide tools, so as to improve the machining quality and prolong the tool life [10].
Electromagnetic coupling treatment (EMCT) is one of the special energy-field-assisted manufacturing technologies. The technology strengthens metal parts by coupling applied pulsed electric and magnetic fields in order to obtain modifications by adjusting the material microstructure [11]. It offers the advantages of quick processing, clear results and no environmental pollution, and is a brand-new means of advancing manufacturing technology [12]. A large number of studies have found that an external magnetic field affects the microstructure of materials and thus changes their properties [13]. Yongfeng Yang conducted magnetic treatment on a cemented carbide (WC-12Co) milling tool and found that the cutting process changed from two-body friction to three-body friction, so the wear resistance improved; moreover, the magnetic field treatment caused dislocation movement and improved tool hardness [14,15]. Qiuqin Li applied pulsed magnetic field treatment to a WC-6Co tool and carried out cutting experiments on TC4 with the treated tool, finding that the life of the tool was prolonged and the surface quality of the workpiece was enhanced [16]; Li's research shows that magnetic field treatment improves tool thermal conductivity and wear performance. Hao Qiu applied pulsed magnetic treatment to a coated tool and carried out cutting experiments, finding that the residual compressive stress of the coating increased, the adhesion improved and the tool life was prolonged [17]. Hanlin Fei carried out pulsed magnetic field treatment on a CBN tool, and the cutting experiment showed that flank wear of the tool was greatly reduced after the treatment [18]; Fei's research suggests that pulsed magnetic field treatment can increase the compressive stress and bonding strength of the coating. Studies have shown that electric fields can promote atomic diffusion and defect repair in metal materials [19,20]. An external magnetic field superimposed on an electric field produces an even more significant change in the performance of the metal material. Qianwen Zhang found that an electromagnetic coupling treatment can improve the thermal conductivity of WC-8Co [21], and Min Yuan found that the electromagnetic coupling treatment can improve the fracture toughness and reduce the cutting heat of WC-TiC-Co cermet tools [22]. Existing studies have shown that electromagnetic coupling treatment can optimize the properties of cemented carbide materials and extend cutting life, but there are few reports on improving the mechanical performance of coated cemented carbide tools.
In this paper, an electromagnetic coupling treatment is used to treat coated carbide tools for machining ductile iron, with the aim of studying the mechanical and cutting properties of coated carbide after the treatment and exploring the strengthening mechanism of the mechanical properties.
Coated carbide tool
The substrate of the tool (CNMG120412-RK MC5015, Mitsubishi, Japan) used in this experiment is WC-8Co, with a 9.7 μm TiCN transition layer and a 4.8 μm Al2O3 outer coating. The SEM image and EDS information of the carbide-coated tool are shown in Figure 1.
Workpiece materials
The material used in the cutting experiment is pearlitic ductile iron bar stock of QT600-3, with a diameter of 70 mm and a length of 200 mm. Its chemical composition is shown in Table 1. In the actual production process, this insert is used to machine the housing of a certain type of differential, which is also made of QT600-3. Photos of the ductile iron bar and the differential gear housing are given in Figure 2.
Electromagnetic coupling treatment (EMCT)
Figure 3 is a schematic diagram of the electromagnetic coupling field generator used in this experiment.
Copper electrodes are connected to both sides of the sample, and a pulsed current is applied. The excitation coil provides a spatial magnetic field. The current power supply and the excitation coil power supply work synchronously through PLC control: when the magnetic pulse is released, the current pulse is applied, realizing the electromagnetic coupling of the sample. Previous experiments demonstrate that the electric field provided by the current and the magnetic field provided by the excitation coil affect the performance of the sample to varying degrees. The electric field intensity and the magnetic field intensity are the two variables of this experiment, and the experimental parameters are set as shown in Table 2. Among them, Tool 1 was the control sample.
Cutting experiment
In order to study the impact of EMCT on the cutting performance of coated cutting tools, dry cutting experiments were carried out on a CK6140 CNC machine with the above-numbered cutting tools (Figure 4). The cutting parameters were as follows: spindle speed 400 r/min, cutting depth 0.5 mm, feed rate 0.2 mm/r. The distance of each axial feed is 130 mm, and the feed process was repeated 13 times until the workpiece diameter was reduced to 45 mm. During the cutting process, a Kistler 9257B three-way dynamometer was used to collect cutting force data at a sampling frequency of 800 Hz. The cutting temperature was measured by a FLIR A655SC thermal imager (FLIR, USA).
Actual machining verification
To further verify the cutting performance of the tools after EMCT, the tools were used in actual production to machine the differential housing. The tool life is evaluated by the maximum number of pieces processed, and the machining quality is evaluated by the surface roughness of the workpiece.
Measurement
The tool wear was measured by a VSM-3020 optical profilometer. A Bruker Contour GT-K was used to measure the surface roughness of the workpiece. A Nano Indenter G200 was used to measure the nanohardness and elastic modulus of the tool coating at a load of 8 mN. The bond strength of the coatings was measured by an Anton Paar RST automatic scratch tester with a loading speed of 198 N/min. The tool section morphology was observed with a Thermo Scientific Apreo S emission scanning electron microscope, and elemental analysis was performed with the Oxford X-Max energy spectrum detection system mounted on the electron microscope. The thermal conductivity of the tool as a whole was measured in this study: the shape of the tool is a rhombus, a pair of parallel planes was selected as the test surface, the LW-9389 device was used for testing according to the ASTM D5470 standard, and multiple measurements were averaged to obtain the final result.
Cutting force
Figure 5 shows the feed force (Fx), radial thrust force (Fy), tangential force (Fz) and resultant force of the tools after EMCT with different parameters in the cutting experiment, where Tool 1 represents the control group, i.e. the original tool without EMCT. As can be seen from the figure, with the extension of cutting time, the cutting forces in the three directions and the resultant force gradually decrease and become stable. When the cutting time exceeds 2200 s, the cutting forces of Tool 1 and Tool 2 start to increase abruptly, which may be because the flank of the tool is continuously worn away in the cutting experiment and the coating begins to fail [23].
Compared with Tool 1, the tools after EMCT show a lower level of cutting force and a longer stable cutting time, which may be related to the hardness and wear resistance of the tools' coatings [24]. Among all the cutting experiments, the cutting force of Tool 3 is the smallest and is still stable at 2600 s, indicating that the tool wear is still slight at this time and the tool has not reached the failure state.
Cutting temperature and thermal conductivity
The cutting temperature has a great influence on the tool life and the workpiece surface quality: continuous high-temperature cutting easily leads to tool failure. Studying the changing laws of cutting temperature can provide effective information for analyzing tool life and machining quality. In the process of cutting, the cutting tool and the cut material experience friction at high speed, generating considerable cutting heat. Some of the heat is carried away by the chip, and some is absorbed at the nose of the tool and conducted backwards.
If the thermal conductivity of the tool is poor, heat easily accumulates at the tool nose, which increases the tool's thermal deformation and wear, shortens the tool's life, and degrades the surface roughness of the machined object [25].
The cutting temperature was recorded by an infrared thermal imager. Figure 6 shows the temperature trend from the tool nose to the body during steady cutting: the horizontal axis represents the distance between the test point and the tool nose (mm), and the vertical axis represents the temperature (°C). As can be seen from the figure, the temperature gradually decreases from the nose to the body of the tool. The lower tool nose temperatures of Tools 2 through 5 compared with Tool 1 may be due to the EMCT, which lessens friction and heat generation during cutting [21,25]. Tool 1 has the steepest curve, while Tool 3 has a more gentle slope, reflecting the difference in thermal conductivity between samples; the thermal conductivity determines how easily heat from the nose can be transferred backwards. Tool 1 and Tool 3, with the largest difference in trend, were selected for the thermal conductivity test, and the results are shown in Table 3. It can be seen that after EMCT with a magnetic field intensity of 1.0 T and an electric field intensity of 1.2 V, the tool's thermal conductivity increased from 8.86 W/(m·K) to 9.47 W/(m·K), an increase of 6.9%. Therefore, after EMCT, heat is transferred through the tool more quickly, resulting in a smaller temperature differential between the tool's two ends, quicker heat dissipation at the tool nose, less accumulation of cutting heat, lower oxidation losses, and longer tool life.
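The quoted 6.9% improvement is simply the relative change between the two measured conductivities:

```python
k_before, k_after = 8.86, 9.47   # W/(m.K), Tool 1 vs Tool 3
print(f"{(k_after - k_before) / k_before:.1%}")   # 6.9%
```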
Surface roughness
Surface roughness is a vital index for evaluating the quality of the machined surface; it can reflect the wear behavior of the contact area between the cutting edge and the workpiece [26] and indirectly indicates the life of the tool. In industry, the average roughness (Ra) is mainly used to characterize the level of surface finish. The test results for the surface roughness of the experimental samples are shown in Figure 7.
The workpiece surface roughness obtained with the untreated Tool 1 is 4.08 μm. The workpiece surface roughness obtained with the tools treated by EMCT is reduced, and the value for the workpiece corresponding to Tool 3 is only 3.08 μm. After EMCT, the wear of the tool nose is reduced, so the workpiece surface roughness is smaller [26].
XRD analysis
The literature [27] shows that the performance and service life of coatings are affected by differences in microstructure. Figure 8 shows the phases of the transition and outermost layers of the tool coating. This is because the XRD probing depth is about 10 μm, which is greater than the thickness of the outermost Al2O3 layer (4.8 μm), so the inner TiCN bonding layer is also detected. The diffraction peaks of all samples did not shift, indicating that neither phase transformation nor generation of a new phase took place in the samples. From the XRD data, the Al2O3 diffraction peak with high diffraction intensity was selected and, combined with the Scherrer equation, the dislocation density (δ), microscopic strain (ε) and average grain size (D) of each sample's coating can be calculated [27]. Using the elastic modulus (E) of the Al2O3 coating, the residual stress can also be calculated. D is the average grain size of the coating (nm), obtained from the Scherrer equation D = kλ/(β cos θ); δ is the dislocation density of the coating (nm⁻²), obtained from δ = 1/D²; ε is the microscopic strain, calculated from ε = β/(4 tan θ); and σ is the residual stress (the equation numbering is inconsistent in the source, and the standard forms are assumed here). Here k = 0.89 (Scherrer constant), λ = 0.15406 nm (Cu Kα target), β = FWHM (radians), θ = peak position (radians). The relevant data are shown in Table 4.
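As an illustration of these formulas, a minimal sketch is given below; the FWHM and peak position are invented placeholder values, not the measured data, and the standard forms D = kλ/(β cos θ), δ = 1/D² and ε = β/(4 tan θ) are assumed:

```python
import math

k_sch = 0.89          # Scherrer constant
lam = 0.15406         # nm, Cu K-alpha wavelength

def microstructure(fwhm_deg, two_theta_deg):
    """Grain size D (nm), dislocation density delta (nm^-2), microstrain eps."""
    beta = math.radians(fwhm_deg)            # FWHM in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle in radians
    D = k_sch * lam / (beta * math.cos(theta))   # Scherrer equation
    delta = 1.0 / D**2                            # dislocation density
    eps = beta / (4 * math.tan(theta))            # microscopic strain
    return D, delta, eps

# Placeholder peak: FWHM = 0.35 deg at 2theta = 43.4 deg (illustrative only)
D, delta, eps = microstructure(0.35, 43.4)
print(f"D = {D:.1f} nm, delta = {delta:.2e} nm^-2, eps = {eps:.4f}")
```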
It can be seen from the data in Table 4 that Tool 3 has the highest microscopic strain, the largest dislocation density and the smallest Al2O3 grain size, while Tool 1 shows the opposite. The results show that the EMCT process increases the dislocation density (by 109.2%) and the microscopic strain (by 28.2%) of the coating, and thereby increases the residual stress. According to the literature [18,28,29], the Al2O3 coating should be in a state of compressive stress, and greater compressive stress helps to reduce the wear of the coating and improve the service life.
Nanoindentation analysis
Figure 9(a) shows the load-displacement curves of the five samples. From the nanoindentation curves, the elastic modulus (E), hardness (H) and elastic recovery rate can be calculated. The load-displacement curves of the same batch of cutters differ markedly after EMCT with different parameters, which indicates that the mechanical performance of the coating is changed to different degrees by EMCT. The elastic recovery rate can be calculated from Equation (5), Re = (h_max - h_f)/h_max,
where h_max represents the maximum displacement during loading and h_f the residual displacement after unloading. The coatings' Re values for Tool 1 to Tool 5 were therefore 53.80%, 69.35%, 81.55%, 64.50% and 54.60%, respectively. The higher the Re value of a material, the stronger its resistance to plastic deformation [30,31]. The calculation results show that, compared with the control sample, the anti-plastic-deformation ability of the tool coating is enhanced to different degrees after EMCT, with Tool 3 showing the largest improvement. Figure 9(b) shows the elastic modulus values and nanohardness of the five tools. The greater the H³/E² and H/E values, the stronger the anti-plastic-deformation ability of the coating, that is, the better the wear resistance of the coated tool under load [32][33][34].
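A small sketch of these two indicators follows, using the Re definition above; the indentation depths are placeholders chosen to reproduce Re values near those reported, and the elastic modulus E = 420 GPa is an assumed alumina-coating value, not the measured one (only the H values 15.05 and 19.31 GPa come from the text):

```python
def elastic_recovery(h_max, h_f):
    """Elastic recovery rate Re = (h_max - h_f) / h_max (Equation (5))."""
    return (h_max - h_f) / h_max

# Placeholder depths (nm) chosen so that Re ~ 54% and ~ 82%:
print(f"{elastic_recovery(200.0, 92.4):.2%}")   # ~Tool 1
print(f"{elastic_recovery(200.0, 36.9):.2%}")   # ~Tool 3

def wear_indices(H, E):
    """Anti-plastic-deformation indices H^3/E^2 and H/E (GPa units)."""
    return H**3 / E**2, H / E

for name, H, E in [("Tool 1", 15.05, 420.0), ("Tool 3", 19.31, 420.0)]:
    h3e2, he = wear_indices(H, E)
    # E = 420 GPa is an assumed coating modulus, not the paper's measured value
    print(f"{name}: H^3/E^2 = {h3e2:.3f} GPa, H/E = {he:.4f}")
```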
Adhesion strength
The bonding strength between coating and substrate is a significant index for evaluating coating quality, which directly determines the cutting properties and service life of a coated tool. The adhesion strength of the coatings after EMCT with different parameters was measured by the scratch method. In the scratching process, the sudden change of the friction coefficient is taken as the signal that the coating has been scratched through, and the loading force at this moment is the adhesion strength of the coating. Figure 10 shows a photograph of the scratch and the location of the friction coefficient jump. As can be seen from the figure, the bonding strengths of Tool 1 to Tool 5 are 52.3 N, 59.4 N, 66.2 N, 65.5 N and 60.2 N, respectively. In other words, the tool coating's bonding strength is improved to varying degrees following EMCT, with a maximum rise of 26.6% (Tool 3). The variation of the bonding strength of the coating may be connected with the diffusion of elements between coating and substrate [27]; the enhancement of adhesion may be due to an enhanced diffusion of elements between the coating layers. The failure of a coating mostly rests with plastic deformation and fracture: in the early phase of the scratch experiment, the coating begins to change from elastic to plastic deformation; with increasing load the coating is damaged, until the applied load exceeds the plastic deformation limit of the coating, at which point the coating is completely broken and spalls [35]. In the scratch experiment, the coatings' plastic deformation resistance is consistent with the results of the nanoindentation analysis. After EMCT with an electric field intensity of 1.2 V and a magnetic field intensity of 1.0 T, the bonding strength and plastic deformation resistance of the coating are improved.
The most immediate cause of increased adhesion strength is the diffusion of elements between the coating layers [36][37][38]. In order to further verify the effect of EMCT with an electric field intensity of 1.2 V and a magnetic field intensity of 1.0 T on the bonding strength of the tool coating, quasi-in-situ element distribution experiments were carried out. An EDS line scan was first performed at a specified location on the sample cross-section; the 1.2 V / 1.0 T EMCT experiment was then conducted on the sample; finally, an EDS line scan was performed again at the same location, and the results are shown in Figure 11. The main focus is on the metallic elements of the sample, namely Al, Ti and W. The regions where the element content changes abruptly are the coating transition regions. Figure 11(b) shows that, before treatment, there is an element diffusion region of about 1.80 μm between the outermost Al2O3 and the TiCN bonding layer of the sample, formed during the preparation of the coating, and a diffusion zone of 1.37 μm thickness exists between TiCN and the WC substrate.
After EMCT, the thickness of the element diffusion region between Al2O3 and TiCN increased to 2.01 μm, and that between TiCN and WC increased to 2.04 μm. This variation indicates that the EMCT process promoted element diffusion between the coating layers, enhancing the movement of aluminum from the outer alumina into the inner layer and the migration of Ti from the inner TiCN into the outer layer, as well as that of Ti and W across the next interface. Therefore, after EMCT with an electric field intensity of 1.2 V and a magnetic field intensity of 1.0 T, two element diffusion gradient layers of about 200 nm and 700 nm were formed where the coatings bond, which improved the adhesion strength of the coating.
Figure 12 shows the migration of different elements in the tools during the EMCT process. The diagram on the left shows the initial state of the tool coating: the matrix is WC, the bonding layer is TiCN, and the outermost layer is Al2O3. The stacking of atomic layers is a simplified representation of coating thickness. At this stage there is a transition region between the outermost layer and the bonding layer, represented by an element mixing layer; there is also an element mixing layer between the bonding layer and the matrix, indicating the transition between TiCN and WC. During EMCT, the elements begin to diffuse: Al in the outermost layer moves inward, Ti in the middle layer moves both inward and outward, and W in the matrix moves into the TiCN layer. The result is an increase in the thickness of the element mixing layers, represented by two layers of atoms. The final coating obtained is shown in the right diagram. Compared with the initial state, the coating now has a thicker element diffusion gradient layer, which is of great significance for improving the bonding strength, wear resistance and service life of the coating.
Under the action of the electric field, the disordered Al2O3 molecules are not ionized but undergo orientation polarization [39]. The orientations of the electric dipoles tend to become uniform, and the O-Al bond is deflected directionally and elongated under the electric field, lowering the breaking energy of the O-Al bond [40]. In addition, due to the orientation polarization, the electric dipole moment of the Al2O3 molecule is not zero, and the atoms are in an unstable state, so that the O-Al bond breaks easily, helping the O and Al atoms to diffuse into the inner TiCN. Under electric-field conditions, diffusion is a bidirectional process [41,42]. The magnetic field causes the movement of free vacancies in the material, resulting in lattice rearrangement in microscopic regions. Chemical bonds are formed and broken during this structural rearrangement, and the microscopic residual stress after fracture becomes the driving force of atomic migration [43]. Under electromagnetic coupling, the superimposed effects of the electric and magnetic fields reinforce each other, accelerating the breaking of the O-Al bond and the migration of Al and O atoms to the inner layer, finally realizing element diffusion. The macroscopic result is that the coating bonding force is enhanced and the cutting performance is markedly improved. A deeper theory of atomic migration between coatings during electromagnetic coupling treatment will be explored in subsequent studies.
Tool life verification
The cutting experiments show that the cutting temperature, the cutting force and the surface roughness of the workpiece are reduced after EMCT, and it is found that EMCT can improve the hardness, plastic deformation resistance and bonding strength of the coating. However, whether the practical performance of the tool is improved must be tested through actual machining. The differential housing was machined with the tools that had undergone the above EMCT, again numbered Tool 1 to Tool 5. After every 10 pieces machined, the flank wear of the tools was measured (Figure 13), and the maximum machining life of each tool was noted. In order to evaluate the machining quality, the surface roughness of the 40th shell was observed and recorded (Figure 14).
The VB in Figure 13(a) shows the extent of tool wear. With the increase of the number of machined pieces, the tool wear gradually intensifies. Compared with Tool 1, the wear of the tools after EMCT (Tool 2 to Tool 5) is reduced. The blue line is the gentlest, indicating that Tool 3 has the lowest wear rate. After machining 40 workpieces, Tool 1's VB reaches 0.186 mm, while Tool 3's VB is only 0.141 mm, a 24.2% reduction. Reducing flank wear is beneficial for prolonging tool life and improving machining quality [44]. As can be seen from Figure 13(b), the maximum number of pieces processed by Tool 1 is only 41.8, while the other tools are improved. The corresponding value for Tool 3 is 60.2, indicating that after EMCT at 1.2 V and 1.0 T, one tool can process 18.4 more workpieces, and the tool life is increased by 44%. This verification result is of great significance for improving tool utilization and reducing production cost.
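The two headline percentages follow directly from the reported numbers; a quick check:

```python
vb1, vb3 = 0.186, 0.141          # flank wear VB (mm) after 40 workpieces
n1, n3 = 41.8, 60.2              # maximum number of processed pieces

print(f"Wear reduction: {(vb1 - vb3) / vb1:.1%}")      # 24.2%
print(f"Extra workpieces per tool: {n3 - n1:.1f}")     # 18.4
print(f"Tool life increase: {(n3 - n1) / n1:.0%}")     # 44%
```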
Figure 14 shows the machined surface roughness of the 40th housing. The surface roughness of the housing produced by Tool 1 is the highest, whilst that of the housing produced by Tool 3 is the lowest. When the machining parameters are the same, the roughness of the machined surface is determined by the wear condition of the tool nose, so this part of the data is consistent with the flank wear results above. The working-condition verification shows that, after EMCT, the tool hardness increases, the wear slows down, the surface roughness of the housing decreases, and the machining quality is improved.
Figure 1.
Figure 1. SEM images with EDS analysis of the tool.
Figure 2.
Figure 2. Cast iron bar used in cutting experiment (a) and differential gear housing in actual machining (b).
Figure 6.
Figure 6. Photos of cutting temperature: (a) measurement, (b) temperature distribution of turning tool.
Figure 12.
Figure 12. Schematic of the element migration in the coatings.
Figure 13.
Figure 13. Tool life verification: (a) flank wear of tools, (b) maximum number of processed pieces.
Table 4.
Microstructure and properties of the Al2O3 coating.
Table 5.
Micromechanical properties of the five tools. After electromagnetic coupling treatment, the elastic modulus of the tool shows no significant change, but the hardness increases. The tool's initial hardness was 15.05 GPa, and it became harder after treatment; the most obvious case is Tool 3, which reaches 19.31 GPa, up 28.3%. Table 5 shows the micromechanical properties of the samples measured by the nanoindentation experiment. As can be seen from the table, the Tool 1 coating has the lowest deformation resistance indices (H³/E² and H/E), while the Tool 3 coating has the highest. According to plastic deformation theory, the greater the value of H³/E²
"Engineering",
"Materials Science"
] |
Witten index in supersymmetric 3d theories revisited
We have performed a direct calculation of the Witten index in N = 1, 2, 3 supersymmetric Yang-Mills Chern-Simons 3d theories. We do it in the framework of the Born-Oppenheimer (BO) approach, by putting the system into a small spatial box and studying the effective Hamiltonian depending on the zero field harmonics. At the tree level, our results coincide with the results of Witten, but there is a difference in the way the loop effects are implemented. In Witten's approach, one has only to take into account the fermion loops, which bring about a negative shift of the (chosen positive at the tree level) Chern-Simons coupling k. As a result, the Witten index vanishes and supersymmetry is broken at small k. In the effective BO Hamiltonian framework, fermion, gluon and ghost loops contribute on an equal footing. The fermion loop contribution to the effective Hamiltonian can be evaluated exactly, and its effect amounts to the negative shift k → k - h/2 for N = 1 and k → k - h for N = 2, 3 in the tree-level formulae for the index. In our approach, with rather natural assumptions on the structure of the bosonic corrections, the shift k → k + h brought about by the gluon loops also affects the index. Since the total shift of k is positive or zero, the Witten index appears to be nonzero at nonzero k, and supersymmetry is not broken. We discuss possible reasons for this disagreement.
Introduction
It has been known since [2] that N = 4 supersymmetric Yang-Mills theory in 4 dimensions is dual to supersymmetric string theory (10d supergravity in the leading strong coupling approximation) on an AdS5 × S5 background. In other words, many nontrivial results for N = 4 SYM theory at large Nc and large 't Hooft coupling can be obtained by string theory methods. Recently, a new interesting duality has been established. It relates certain 3d supersymmetric gauge theories, involving Chern-Simons terms and a particular set of matter fields and enjoying N = 8 or N = 6 supersymmetry, to string theories on AdS4 × S7 or AdS4 × CP3 backgrounds, respectively [3]. This means that, by duality, one can derive many nontrivial results for these 3d theories.
The theories in question are not so simple, and we do not understand their dynamics as well as we do for 4d theories. In our opinion, it makes sense to study them in as much detail as possible by purely field-theoretic methods, in order to be able to confront the results thus obtained with the results following from string-gauge duality. A wish to develop tools that would eventually allow us to perform such a comparison and to test the duality conjecture once again was the main motivation behind the present study.
As was mentioned, the 3d theories for which duality was established are complicated. Thus, we have decided to study first the simplest N = 1 SYMCS theory and, in particular, its vacuum dynamics. This question was addressed previously in Ref. [1]. Witten calculated the index (the difference of the numbers of bosonic and fermionic vacuum states) for this theory. His result for the theory with SU(N) gauge group at level k = κ/(4π) is given by Eq. (1.1); it is zero at |k| < N/2. For |k| ≥ N/2, it can be presented in the form (1.2). The way this result was derived was not direct, however. That is why we have tried to evaluate the index anew using more direct and clear physical reasoning. We use the same method that Witten successfully applied in [4] to 4d supersymmetric gauge theories: put the system in a small spatial box and impose periodic boundary conditions on all fields. If the size of the box is made small enough, most of the variables in the field Hamiltonian become fast, with large characteristic excitation energies. One can integrate them out and study the dynamics of the effective BO Hamiltonian that depends only on a few slow variables (zero Fourier modes of the gauge fields belonging to the Cartan subalgebra and their superpartners). However, it turns out that carrying out this program for 3d SYMCS theories is a more difficult task than for 4d gauge theories. It might even seem that it fails in the 3d case, because it is not sufficient to restrict oneself here to the tree-level effective Hamiltonian. Loop corrections are important, and they essentially change the value of the index. At the one-loop level, these corrections can be determined, however, and one can conjecture that higher-loop effects do not further change the result. This conjecture has not been rigorously proven yet, but, following Witten, we find it plausible (the arguments in its favor will be discussed later) and adopt it.
As we will see, the index of the effective finite-volume BO Hamiltonian depends on the r-th Chern class of a certain Abelian gauge field on the moduli space of flat connections, r being the rank of the group. In the case of SU(2), it is just the magnetic field flux on the dual torus. There are one-loop contributions to this (generalized) flux, both from fermion loops and from gluon loops. These corrections are associated with the renormalization of the Chern-Simons coefficient k in the infinite-volume theory. Thus, the index can be evaluated in two steps.
• At the first step, one evaluates the index for the tree-level effective BO Hamiltonian.
We have performed this calculation by a method different from Witten's and confirmed his tree-level result, Eq. (1.3) (this is for the SU(N) gauge group and positive k).
• At the second step, one takes into account loop effects, which boil down (we will argue this later) to a one-loop renormalization of k due to both fermions and bosons, Eq. (1.4). Note that one would obtain Witten's result (1.1) by doing the same, but keeping only the contribution of the fermionic loops in (1.4). The fact that gluon loops contribute to the shift of k is firmly established [5,6]. It is less clear, however, whether such a boson-induced shift of k is directly translated into a shift of the index; in Witten's approach, it is not. In our finite-volume approach, a direct and quite honest evaluation of the gluon contribution to the effective BO Hamiltonian is, technically, a more complicated problem than for the fermion contribution (the latter can be evaluated exactly), and it still remains to be solved. But under very natural assumptions, the boson contribution has the same structure as the fermion one. The result (1.5) is obtained under this assumption. The difference between (1.5) and (1.1) is essential. The product (1.1) vanishes at k < N/2, which suggests spontaneous breaking of supersymmetry. The expression (1.5) displays no such feature, meaning that supersymmetry is not broken. Neither is it broken in the N = 2, 3 theories, where the fermion-loop and gluon-loop effects in the renormalization of k cancel out (scalar loops contribute to the renormalization of g^2, but not to that of κ), and the index is given by the tree-level expression (1.3). We will discuss this controversy in more detail in the last section.
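For orientation, the bookkeeping of these one-loop shifts can be summarized as follows (c_V denotes the dual Coxeter number, called h in the abstract; c_V = N for SU(N)):

$$\mathcal{N}=1:\quad k \;\to\; k \,-\, \tfrac{c_V}{2}\ \text{(fermion loops)} \,+\, c_V\ \text{(gluon loops)} \;=\; k + \tfrac{c_V}{2},$$
$$\mathcal{N}=2,3:\quad k \;\to\; k - c_V + c_V \;=\; k,$$

whereas Witten's counting retains only the fermionic shift, k → k − c_V/2 for N = 1, which is what makes the index (1.1) vanish at small k.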
In the next section, we fix notations and calculate the index at the tree level. The index of the original theory is evaluated as the index of the effective SQM Hamiltonian, where one should impose the additional constraint of Weyl invariance of the wave functions (this is a corollary of the gauge invariance of the wave functions in the full theory). Before this restriction is imposed, one finds Nk^{N−1} vacuum states for the SU(N) gauge group. The wave functions of all these states can be explicitly determined: they represent generalized theta functions. Not all of these functions are invariant under Weyl transformations, however; the total number of Weyl-invariant functions is given by the expression (1.3).
We also calculate the index for the symplectic gauge groups Sp(2r) and for G_2. For the symplectic groups, the calculation is even more transparent than for the unitary groups; the (tree-level) result is given in Eq. (1.6). The result for G_2, for odd k, is given in Eq. (1.7). In Sect. 3 we discuss one-loop corrections. We show that they amount to shifting k, as dictated by (1.4). We also discuss the N = 2, 3 SYMCS 3d theories, which include extra adjoint Majorana fermions and extra adjoint real scalars, and show that the index there is given simply by Eqs. (1.3), (1.6) with unshifted k.
Sect. 4 is devoted to discussion. We spell out again the reasoning leading to the result (1.5) and confront it with Witten's reasoning. In addition, we address the so far unclear question of what might be wrong with the string-inspired arguments of Ref. [7], which favor the result (1.1) rather than (1.5).
Tree level
The action of the N = 1 SYMCS theory is given by Eq. (2.1); here λ_α is a two-component Majorana 3d spinor belonging to the adjoint representation of the gauge group, and we choose the conventions of Eq. (2.2). This is a 3d theory, and the coupling constant g^2 carries the dimension of mass. The physical boson and fermion degrees of freedom in this theory are massive, with mass m = κg^2 [Eq. (2.3)]. In three dimensions, a nonzero mass brings about parity breaking. The parameter κ is dimensionless. It cannot be an arbitrary number, however: the functional integral should be invariant with respect to large gauge transformations, which change the Chern-Simons number of the gauge field configuration by an integer. The requirement that e^{iS} be invariant under such a transformation leads to the quantization condition (2.5), with integer k. Two reservations are in order, however. First, we consistently assume in this paper that the field theory (2.1) is regularized in the infrared by putting it on a spatial torus with periodic boundary conditions. If so-called twisted boundary conditions were imposed [9], the Chern-Simons number could change by an integer multiple of 1/N, in which case k would be quantized in integer multiples of N [1]. Second, we have not yet taken loop effects into account. We shall learn in Sect. 3 that the loops may in some cases modify the quantization condition such that k must be half-integer. The parameter k is called the level of the theory.
Effective Hamiltonian
We put the system in a spatial box of size L and impose periodic boundary conditions on the fields. The Witten index does not depend on the size of the box, so we are allowed to consider the limit mL ≪ 1 and hence the condition (2.6) [the second inequality in (2.6) follows from the first one, from the definition (2.3), and from the quantization condition (2.5)]. We expand the dynamical field variables in Fourier series.
with integer n. When the condition (2.6) is satisfied, the zero Fourier components A_j^{(0)} and λ_α^{(0)} belonging to the Cartan subalgebra of the full Lie algebra play a special role: the characteristic excitation energies associated with these degrees of freedom are of order E^{(0)} ∼ g^2, which is much less than the characteristic excitation energy E_{higher modes} ∼ 1/L associated with the higher Fourier harmonics, and much less than the characteristic energy E_{non-Ab} ∼ (g/L)^{2/3} associated with the non-Abelian components of the vector potential. We can thus integrate over the fast variables. The situation is exactly the same as for 4d theories [4]. In the tree approximation, the effective Lagrangian is obtained by a simple truncation of all fast modes in (2.1). Proceeding in a similar way for 4d theories, we would obtain a Lagrangian/Hamiltonian describing free motion on T × T × T, with T representing the maximal torus of the group [4]. In the 3d case, the situation is more complicated.
Consider first the simplest SU(2) case. There are two slow bosonic variables C_{j=1,2} and their superpartners ψ_α ≡ λ_α^{3(0)}. The truncated Lagrangian follows immediately. To find the corresponding Hamiltonian, it is convenient to introduce ψ_± = ψ_1 ± iψ_2; the fermionic part of the Lagrangian then shows that the only fermionic dynamical variable is ψ_− ≡ ψ. Note how it transforms under spatial plane rotations. The canonical momentum is π_ψ = iL^2 ψ_+/(2g^2); after quantization, it goes over to −i∂/∂ψ. Ordering the product in the proper (Weyl) way and introducing also the bosonic canonical momenta P_j, we derive the quantum Hamiltonian (2.12). It describes motion in a uniform magnetic field B = κL^2 on the dual two-dimensional torus, C_{j=1,2} ∈ (0, 4π/L). The motion is finite because all the points C_j + 4πn_j/L with integer n_j are gauge equivalent. The motion of an electron in a uniform magnetic field is the first and simplest supersymmetric quantum problem ever considered [10]; the bosonic and fermionic sectors of the Hamiltonian (2.12) correspond, in the usual language, to spin-up and spin-down electrons. The index of this Hamiltonian, I = Tr{(−1)^F e^{−βH}}, can be calculated as a functional integral, which for small β (the semiclassical limit) reduces to the ordinary phase-space integral (2.13) [11]. When the motion extends over the whole plane, the index is infinite, indicating an infinite ground-state degeneracy. When the motion is finite, the number of vacuum states is finite, being proportional to the total magnetic flux; in our case, B = κL^2, and the result is Eq. (2.14). It is not difficult to generalize this analysis to other gauge groups. In general, we have 2r slow bosonic variables C_{ja} and their superpartners ψ_a, where the index a = 1, . . . , r labels the Cartan subalgebra. The effective Hamiltonian takes the form (2.15), with the magnetic field B_{ab} = ε_{jk} ∂_{aj} A_{bk}, describing a generalized multidimensional Landau-Dubrovin-Krichever-Novikov problem. For the tree-level Hamiltonian corresponding to the truncated Lagrangian of Eq. (2.1), the field is given by Eq. (2.16). By the same token as in the SU(2) case, the motion is finite and extends, for each C, over a parallelepiped formed by the simple coroots of the group (alias, the maximal torus T of the group). For SU(3), this is the rhombus represented in Fig. 1 (do not pay attention for a while to the dashed lines bounding the Weyl alcove, nor to the special fundamental coweight points marked by the box and the triangle). The index of the effective Hamiltonian is evaluated semiclassically as a generalized magnetic flux [this is nothing but the r-th Chern class of the U(1) bundle over T × T with the connection A_{ja}], Eq. (2.17). In the case of SU(N), this gives the pre-Weyl result (2.18), I = Nk^{N−1}.
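For SU(2), this semiclassical flux counting can be made explicit (a sketch; the value Φ_tree = 4πk is quoted in Sect. 3):

$$ I_{\text{pre-Weyl}} \;=\; \frac{1}{2\pi}\int_{T\times T} B\, d^2C \;=\; \frac{\Phi_{\rm tree}}{2\pi} \;=\; \frac{4\pi k}{2\pi} \;=\; 2k, $$

consistent with the SU(N) counting Nk^{N−1} at N = 2; imposing Weyl invariance then reduces this to the k + 1 states found in the next subsection.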
Counting Weyl invariant vacuum functions
The index of the effective Hamiltonian (2.15), (2.16) is given by the expression (2.18). But the index of the original theory is not. There are two reasons for which the result (2.18) is modified. The first reason (loop effects) was already mentioned; we will deal with loops in the next section. The second reason is that the Schrödinger equation with the effective Hamiltonian (2.15) should in fact be supplemented by the condition of Weyl invariance, which is a corollary of the gauge invariance of the original theory [4]. For example, for SU(2), the wave functions should be invariant under the reflection C_j → −C_j, ψ → −ψ. Not all eigenfunctions of (2.15) satisfy this requirement. As a result, the value of the index is less than the "pre-Weyl" index (2.18).
To find it, we simply write down explicit expressions for all vacuum wave functions and pick out the Weyl-invariant ones. To begin with, consider the simplest SU(2) case and let k first be positive. The ground states of the effective Hamiltonian then have zero fermion charge, so that the second term in the Hamiltonian (2.12) brings about a negative contribution to the energy.
Let us introduce x = C_1 L/(4π) ∈ (0, 1) and y = C_2 L/(4π) ∈ (0, 1). All eigenfunctions of the Hamiltonian satisfy the boundary conditions (2.19), whose origin can be traced back to the fact that the shifts x → x + 1 and y → y + 1 represent contractible (this is the non-Abelian specifics) gauge transformations. In most gauge theories, wave functions are invariant under such transformations, but the YMCS (or Maxwell + CS) theory is special in this respect [13]. Indeed, the Gauss law constraint in the YMCS theory involves the canonical momenta Π^a_j = F^a_{0j}/g^2 + (κ/2)ε_{jk}A^a_k. The second term gives rise to a phase factor associated with an infinitesimal gauge transformation δA^a_j(ξ) = D_j α^a(ξ) (we denote the usual spatial coordinates by ξ rather than x, so as not to confuse them with the rescaled vector potentials). This property holds also for the finite contractible gauge transformations α^a = (4πξ_{1,2}/L)δ^{a3} implementing the shifts C_{1,2} → C_{1,2} + 4π/L. The phase factors thus obtained coincide with those quoted in Eq. (2.19); they are nothing but holonomies. In other words, the (tree-level) index for SU(2) is k + 1. This explicit analysis was done for a constant magnetic field; however, the symmetry properties of the wave functions are robust with respect to deformations. We can thus be sure that the number of Weyl-invariant wave functions is equal to k + 1 also for a Hamiltonian with a nonuniform magnetic field of the same flux, 2k.
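This counting can be mimicked with a toy orbit count: the 2k pre-Weyl ground states may be labeled m = 0, ..., 2k − 1, with the Weyl reflection acting as m → −m (mod 2k). A minimal Python sketch (the labeling convention is an assumption, introduced for illustration):

```python
# Toy check of the SU(2) counting: 2k states m = 0..2k-1, Weyl acts as m -> -m (mod 2k).
# ASSUMED labeling convention; counts Weyl-even combinations (orbits).
def weyl_invariant_count(k: int) -> int:
    orbits = {frozenset({m, (-m) % (2 * k)}) for m in range(2 * k)}
    return len(orbits)  # fixed points m = 0 and m = k, plus (k - 1) two-element orbits

assert all(weyl_invariant_count(k) == k + 1 for k in range(1, 20))
```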
Fast Hamiltonian and its ground state.
What happens at negative nonzero k? The ground states of the effective Hamiltonian (2.15) are in this case not bosonic but fermionic, involving ψ as a factor. This factor is odd under the Weyl reflection. At first sight, to provide for the Weyl-evenness of the wave function, this should be compensated by picking Weyl-odd combinations of the functions (2.20). There are |k| − 1 such combinations, which would lead to the conclusion that the index is equal to k + 1 also for negative k (giving |k| − 1 fermionic states). This is obviously wrong, however: the number of vacuum states cannot depend on the sign of k. To resolve this paradox, one should go into some details of the BO procedure.
When k is positive, the wave functions (2.20) are the ground states of the effective Hamiltonian (2.15). They depend on the slow variables C_{1,2}, and the factor ψ is absent in this case: the states are bosonic. The corresponding ground states of the full Hamiltonian are obtained when the Ψ_m are multiplied by the ground states of the fast Hamiltonian, which depends on all Fourier modes (2.7) of the charged (with respect to the Cartan subalgebra) field components. The fault in the argument above (leading to the paradoxical result I(k < 0) = k + 1) does not depend, however, on the presence of higher Fourier modes, and it is sufficient to analyze the dimensionally reduced theory where the fields do not depend on x. Let us assume the condition (2.22), C ≫ m = κg^2. Then the fast Hamiltonian (in the approximation quadratic in the fast variables) acquires the form (2.23), where we have set L = 1 for simplicity, and the index a takes two "transverse" values, a = 1, 2. Let us look first at the bosonic part. For each a, it describes the motion of a scalar particle in the magnetic field B = κ with an additional oscillator potential ∝ (A^a_2)^2. The spectrum of such a generic Hamiltonian is well known [14]. In the case under consideration, the spectrum contains two zero modes. Their presence (as was mentioned above, H^{fast}_{bos} represents the sum of two identical Hamiltonians for a = 1, 2) is very natural: they are none other than the gauge modes corresponding to the action of the Gauss constraint operators G^a on the vacuum and all other physical wave functions. If one resolves the Gauss law constraints explicitly and expresses everything in terms of physical gauge-invariant variables, the zero modes associated with gauge rotations disappear. It is convenient, however, to leave the constraints unresolved. The bosonic vacuum wave function then has the form (2.28). It is annihilated by the operator G^3. The vanishing of G^{1,2}Ψ is not explicit, but that is because the operators G^{1,2} mix A^a_j and C_j, while (2.28) was written under the assumption that the slow bosonic variables have only the third color component. The corresponding eigenfunctions of the full bosonic Hamiltonian depend only on gauge-invariant combinations, like R_{jk} = Σ_{a=1}^{3} A^a_j A^a_k, and are annihilated by all three constraint operators. The wave function (2.28) is multiplied by the ground state (2.29) of the fermionic part of the Hamiltonian (2.23). The total energy is zero, as it should be: the contribution √(C^2 + m^2) of the bosonic part cancels the fermionic contribution −√(C^2 + m^2). The Hamiltonian (2.23) and the wave functions (2.28), (2.29) were written under the assumption (2.22). It is equally easy to write them for arbitrary C_j. We will only need the expression (2.30) for the fermion wave function. Recalling (2.8) and the bosonic (for k > 0) nature of the ground state of the effective Hamiltonian (2.12), we see that the ground states of the full Hamiltonian have the structure (2.31) (where now a = 1, 2, 3). These wave functions are gauge invariant. In the vicinity of the valley ε_{abc} A^b_j A^c_k = 0 and for large C ≫ m, the approximate equality Φ_1 ≈ 4ig^2 |C| Φ_2 holds. Restoring the distinction between the fast and slow variables, we can use the representation (2.32), and the gauge invariance of Φ_1, which means in particular its G-parity (invariance under rotations by π along the second color axis), entails the Weyl invariance of Ψ^{slow}(C_j). We thereby reproduce our previous result. We are now ready to go over to the negative-k case and to understand how the paradox is resolved.
The point is that, when k < 0, the expression (2.30) is inconvenient. The convenient expression (2.33) is obtained from (2.30) by multiplying it by the factor C_1 + iC_2. Indeed, as far as the fast Hamiltonian and its eigenfunctions are concerned, factors depending only on the slow variables are absolutely irrelevant and can be chosen arbitrarily.
The product ψΨ can now easily be promoted to a gauge-invariant eigenstate of the full Hamiltonian. Again, Φ_1 can be represented as in (2.32), and the coefficients Ψ^{slow}(C_j) (the effective wave functions are obtained from them by multiplication by ψ) should be even rather than odd with respect to Weyl reflections. Going back to (2.33), one can notice that, in contrast to the function (2.30), this function is odd with respect to rotations by π around the second color axis, which produce the reflections of C_j and ψ. This oddness compensates the Weyl-oddness of the factor ψ and requires the coefficient Ψ^{slow}(C_j) to be Weyl-even.
Higher unitary groups.
Consider first SU(3), and let k be positive. There are 2r = 4 slow bosonic variables, which are conveniently chosen as x^a = C^a_1 L/(4π), y^a = C^a_2 L/(4π). Both x^a and y^a vary within an elementary cell of the SU(3) coroot lattice, alias the maximal torus. The latter represents the rhombus shown in Fig. 1, such that exp{iLC^a t^a} = 1 at the vertices of the rhombus. The effective Hamiltonian (2.15) can be represented in a form where a = (1, 0) and b = (−1/2, √3/2) are the simple coroots. When k = 1, there are 3 such states (2.38), where the sums run over the coroot lattice, n = m_a a + m_b b with integer m_{a,b}, and the triangle and the box denote certain special points on the maximal torus (called fundamental coweights). The group elements corresponding to the point 0 and to the two fundamental coweights belong to the center of the group. They are obviously invariant with respect to the Weyl symmetry, which permutes the eigenvalues. Thus, all three states (2.38) at level k = 1 are Weyl invariant. But for k > 1, the number of invariant states is less than 3k^2. For arbitrary k and in a constant field, the wave functions of all 3k^2 eigenstates can be written in the same way as in (2.38). As a result, the number of Weyl-invariant states is equal to the number of the coweights w_n lying within the Weyl alcove. For example, in the case k = 4, there are 15 such coweights, shown in Fig. 2, and, correspondingly, 15 vacuum states. For generic k, the number of states is given by (2.42). The analysis for SU(4) is similar: the Weyl alcove is the tetrahedron whose vertices correspond to the center elements of SU(4), and a purely geometric counting gives (2.43). The generalization to arbitrary N is obvious; it gives the result (1.3).
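This geometric counting is easy to check by brute force; a minimal sketch in Python (the parametrization of the alcove coweights by nonnegative integers n_1, ..., n_{N−1} with Σ n_i ≤ k is an assumption, chosen to reproduce the counts quoted above):

```python
from itertools import product
from math import comb

# ASSUMED parametrization: coweights in the SU(N) Weyl alcove at level k
# <-> nonnegative integers (n_1, ..., n_{N-1}) with n_1 + ... + n_{N-1} <= k.
def alcove_points(N: int, k: int) -> int:
    return sum(1 for n in product(range(k + 1), repeat=N - 1) if sum(n) <= k)

assert alcove_points(3, 1) == 3    # SU(3), k = 1: the three center states
assert alcove_points(3, 4) == 15   # SU(3), k = 4: 15 vacuum states, as in Fig. 2
# Closed form consistent with this counting: binom(k + N - 1, N - 1).
assert all(alcove_points(N, k) == comb(k + N - 1, N - 1)
           for N in (2, 3, 4, 5) for k in range(7))
```

At N = 2 the count reduces to k + 1, matching the SU(2) result of the previous subsection.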
The large-k asymptotics is I ∼ k^{N−1}/(N − 1)!, which is simply the "pre-Weyl" index (2.18) divided by the order of the Weyl group. For negative k, the ground states of the effective Hamiltonian acquire a fermionic factor. For odd N, the ground states are then still bosonic and the index is still positive; for even N, the ground states are fermionic and the index is negative. One need not perform here a detailed analysis, as we did in the case of SU(2), but can simply use the symmetry requirements, which dictate the formula (2.44).
Symplectic groups
The counting of vacuum states for the symplectic groups Sp(2r) is simpler (sympler?) than for the unitary groups. The maximal torus of Sp(2r) can be represented as g = exp{i Σ_{p=1}^{r} α_p e_p}, where {e_p} is the orthonormal basis in the Cartan subalgebra and α_p ∈ (0, 4π). The coroot lattice is thus hypercubic. The effective BO Hamiltonian represents a simple sum of r copies of the BO Hamiltonian for Sp(2) ≡ SU(2), and the path integral for the pre-Weyl index is the r-th power of the corresponding SU(2) path integral, giving (2.46). The vacuum wave functions represent products of the SU(2) wave functions (2.20). The Weyl group changes the sign of each α_p and permutes them; its order is thus 2^r r!. Hence, the number of Weyl-invariant vacuum states can be counted as the number of components of a symmetric tensor of rank r, each of whose indices takes k + 1 values. For positive k, it is given by Eq. (1.6). The index for negative k is restored by symmetry.
Figure 3: Coroot lattice and Weyl alcove for G_2.
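The symmetric-tensor count just described is a standard stars-and-bars exercise, so Eq. (1.6) should presumably read

$$ I_{Sp(2r)}(k>0) \;=\; \binom{k+r}{r}, $$

which at r = 1 gives k + 1, recovering the Sp(2) ≡ SU(2) result of the previous subsection.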
The simple coroots for G_2 are a = (1, 0) and b = (−3/2, √3/2). The lattice of coroots and the maximal torus look exactly the same as for SU(3) (see Fig. 3). Hence, the pre-Weyl index is equal to 3k^2, as for SU(3). The difference is that the Weyl group now involves 12 rather than 6 elements, and the Weyl alcove is half as large as for SU(3). As a result, for k = 4, we have only 9 (rather than 15) Weyl-invariant states (see Fig. 2). The general formula is given in Eq. (1.7).
Loop corrections.
In (nonchiral) 4d SYM theories, the evaluation of the index based on the analysis of the tree-level effective BO Hamiltonian is not modified when loops are taken into account. For 3d SYMCS theories, this is not so, and loop effects are important. It seems plausible, however, that one can restrict oneself to a one-loop analysis; second and higher loops do not further modify the result. We will argue this point a bit later.
Infinite volume.
We are interested in the one-loop corrections to the effective Hamiltonian in finite volume. But these are intimately related to the one-loop renormalization of the infinite-volume theory [15]. For the pure YMCS theory, the latter was dealt with in Ref. [5]; for the N = 1, 2, 3 SYMCS theories, the corresponding calculations were performed in [6]. Let us recall their salient features.
After fixing the gauge and introducing the ghosts, the Lagrangian acquires the form (3.1). It is convenient to use the Landau gauge ξ → 0. The tree gluon propagator (3.2) then has a pole at p^2 = 0, associated with the gauge degrees of freedom, and a physical pole at p^2 = m^2. The transverse gluon polarization operator involves two structures (3.3). Introducing also the ghost polarization operator Π̃(p^2), the bosonic part of the renormalized Lagrangian is expressed as (3.4), with Π̃(0) the ghost polarization operator at zero momentum. Redefining the fields η, A, it can be rewritten in the form (3.5), (3.6). The relevant one-loop graphs are depicted in Fig. 4. Let us first discuss the renormalization of κ. The simplest contribution is that of the fermion loop in Fig. 4a. It gives Eq. (3.7), p being the Euclidean momentum. The bosonic contribution can be obtained from Eqs. (17), (18) of Ref. [6]. When c_V is odd (in particular, when N is odd for SU(N) groups), the coefficient k is shifted by a half-integer. The physical requirement for k to be an integer refers to k_ren rather than k_tree. This implies that, for consistency, k_tree should be half-integer. [Footnote 7: Another way to see this is to notice that, for odd c_V, a topologically nontrivial gauge transformation brings about an extra factor −1 due to the level flow in the fermion determinant. As a result, the quantization condition is not exp{2πik_tree} = 1, but rather exp{2πik_tree} = −1, giving half-integer k_tree [1,16].] The renormalization of the coefficient 1/g^2 of the kinetic term can be obtained from the result for the mass renormalization [Footnote 8: See Eq. (23) in Ref. [6]. Note that Eq. (22) there involves a misprint with a misplaced factor ln 3.], from Eq. (3.9), and from the relation (2.3). One obtains Eq. (3.11).
Background field calculation.
The calculations [5,6] were done in the conventional diagrammatic approach. But to generalize them to the finite-volume case, the background field technique is more appropriate and relevant. We are not aware of an honest background field calculation in SYMCS or YMCS systems. However, the bosonic shift k → k + c_V can be reproduced rather easily in the background field technique by making a little surgery in the regulator sector and replacing the gauge-invariant YM action by a simple-minded gluon mass term [17,18]. Consider the pure CS term and split the gauge field A_µ into two parts (the factor 1/√κ being introduced for convenience). The background field A^cl_µ is assumed to satisfy the classical equations of motion, F^cl_µν = 0. The CS action then reduces, in the approximation quadratic in a, to (3.13). To do perturbative calculations, one has to fix the gauge. The most convenient choice is the background Landau gauge D_µ a_µ = 0, where the covariant derivative D_µ involves only the classical part. We use a slightly nonstandard way of implementing this gauge condition, introducing a Lagrange multiplier φ and adding to the Lagrangian the term (3.14). There are also ghosts, with the Lagrangian L_ghost = −Tr{c̄ D^2_µ c}, but they do not affect the renormalization of κ we are interested in. One can now combine a_µ and φ into a four-dimensional object B_M = {a_µ, φ} (M = 1, . . . , 4), such that the relevant part of the quantum action takes the form (3.15), where the Γ_µ are certain traceless 4 × 4 matrices satisfying the same (anti)commutation relations as the Pauli matrices. This Lagrangian is very similar to the Dirac Lagrangian, with two differences: (i) there are twice as many B_M's as λ_α's, giving a twice larger contribution to the effective action; (ii) the B_M are bosons rather than fermions and contribute to the effective action with the opposite sign. Thus, the bosonic contribution in this approach has exactly the same structure as the fermionic one, up to a factor of −2. Of course, we are cheating a little bit here. The renormalization of κ in the theory with the action (3.15) is zero or, better said, undefined until the theory is regularized in the infrared. A natural regularization is provided by the Yang-Mills term in the action. But the calculation of Ref. [18] instead uses a simple-minded regularization consisting in adding to the Lagrangian the gauge boson mass term (3.16) (with a properly chosen sign). This regularization is not as nice as the YM one (it is not gauge invariant, etc.), but it has the advantage that the calculations become very simple. Actually, one does not need to do them anew, but can simply use the fermion results. This gives Δk_bos = −2Δk_ferm = c_V, which coincides with the result of [5].
Finite volume.
Consider first the SU(2) theory. As was mentioned above, the coefficient κ (with the factor L^2) has the meaning of the magnetic field on the dual torus for the effective finite-volume BO Hamiltonian. Renormalization of κ means renormalization of this magnetic field. At the tree level, the magnetic field was constant. The renormalized field is not constant, but depends on the slow variables C. To find this dependence, one has to perform the substitution (3.17) in the integral ∼ ∫ d^3p for Δκ. We derive, for positive k, the correction (3.18). For most values of C, this correction is of order ∼ mL^3 = κg^2 L^3, which is small compared to B_tree ∼ κL^2 if g^2 L ≪ 1, which we assume. Also in the "corner" of the torus, |C| ≪ m, the correction ΔB ∼ 1/m^2 is small compared to B_tree for very large k, k ≫ 1/(mL)^2. Otherwise, ΔB dominates there. In any case, the integral for the flux associated with the corrections (3.18) is saturated by the regions |C| ≲ m in the vicinity of the Weyl fixed points, and is equal to (3.19), which should be compared with the tree flux Φ_tree = 4πk. The total flux is thus (3.20). A renormalized flux means a renormalized index. For SU(2), we obtain the result 2(k + 1) for the pre-Weyl index. After taking into account the Weyl invariance condition, we derive

I(k ≠ 0) = sgn(k)(|k| + 2). (3.21)
When k = 0, the magnetic flux giving the pre-Weyl index is zero, and loop corrections do not modify this result (when k_tree vanishes, so does k_ren). A vanishing index suggests breaking of supersymmetry, but whether or not supersymmetry is actually broken in this case is a nontrivial question requiring special study. The result (3.21) involves the tree contribution and the one-loop correction. One can argue that higher-loop corrections must vanish. The reasoning is the same as for the renormalization of k in infinite volume: at large k, a two-loop correction should be suppressed as ∼ 1/k. But the coefficient of 1/k must vanish; otherwise, the renormalized flux and the renormalized index would not be integer.
A similar analysis (see the Appendix) can be done for the groups of higher rank. It shows that, at the one-loop level, the generalized magnetic flux (2.17) evaluated with the renormalized B_ab(C_a) is obtained from the corresponding tree-level expression by the substitution k → k + c_V/2. This suggests (though does not prove rigorously) that there is no nontrivial renormalization of the generalized flux due to second and higher loops. For SU(N > 2), the result is given by Eq. (3.22); for the symplectic groups, by Eq. (3.23). For G_2, c_V = 4, and the result for the index is given by the expression (1.7) with |k| substituted by |k| + 2. When k = 0 (this is allowed for even N and for odd r), the index vanishes.
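Taking the tree-level counts in the binomial forms motivated above, the shifted indices can be tabulated; the sketch below is a hedged reconstruction (the explicit closed forms of Eqs. (3.22), (3.23) are assumptions, checked against the SU(2) value |k| + 2 of Eq. (3.21); sign factors for negative k are omitted):

```python
from fractions import Fraction
from math import comb

# Hedged reconstruction: tree-level index binom(k' + rank, rank) evaluated at the
# shifted level k' = |k| + c_V/2, with c_V = N for SU(N) and c_V = r + 1 for Sp(2r).
# The binomial tree forms are ASSUMPTIONS taken from the alcove counting above.
def index_su(N: int, k: Fraction) -> int:
    ks = abs(k) + Fraction(N, 2)          # quantization makes this an integer
    assert ks.denominator == 1
    return comb(int(ks) + N - 1, N - 1)

def index_sp(r: int, k: Fraction) -> int:
    ks = abs(k) + Fraction(r + 1, 2)
    assert ks.denominator == 1
    return comb(int(ks) + r, r)

assert index_su(2, Fraction(3)) == 5              # = |k| + 2, as in Eq. (3.21)
assert index_su(3, Fraction(5, 2)) == 15          # half-integer k for odd c_V
assert index_sp(1, Fraction(3)) == index_su(2, Fraction(3))  # Sp(2) = SU(2)
```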
Metric and the index.
Let us restrict ourselves here to the discussion of SU(2). The index is a topological quantity and is determined by the relevant topological invariants, like the magnetic flux [alias, the first Chern class of the U(1) bundle, relevant to the problem, on the moduli space of flat SU(2) connections on T^2]. Thus, it is sensitive only to the modifications of the flux due to loops and is robust with respect to other loop corrections to the BO Hamiltonian. In particular, the index is not sensitive to corrections to the metric, which are certainly present and might significantly modify the effective Hamiltonian in the corner of the torus and at the other Weyl fixed points. At the one-loop level, these corrections are associated with the renormalization (3.11) of the coupling 1/g^2, in the same way as the correction (3.18) is associated with the renormalization of κ. The explicit calculation gives the corresponding correction to the metric. To see the insensitivity of the index to the metric explicitly, let us write the supersymmetric Hamiltonian for a system with nontrivial metric and calculate the corresponding phase-space integral, as in Eq. (2.13). The Hamiltonian is derived from a supersymmetric Lagrangian built from a chiral superfield Z, which is conveniently written in components, with A = i∂Φ and B = 2∂∂̄Φ. The canonical Hamiltonian, with f = g^{−1/2}, can be represented as the Poisson bracket {Q, Q̄} of the supercharges. When f = 1, the Hamiltonian (3.29) coincides with (2.15) (with g^2/L^2 set to 1 and the color index a suppressed) after an appropriate identification of z with the variables C_{1,2}. It is straightforward to see that the index does not depend on the metric, and the relation (2.13) still holds.
Higher N
The N = 2 SYMCS theory involves one more adjoint Majorana fermion and an extra adjoint real scalar Φ; its Lagrangian is given in Eq. (3.31). Its Yang-Mills part is obtained by dimensional reduction from the standard N = 1 4d SYM theory. The effective finite-volume Lagrangian now depends on 3r bosonic variables C_a, Φ_a and on 2r holomorphic fermion variables λ^a_f, a = 1, . . . , r. The Lagrangian enjoys N = 2 SQM symmetry. Similar to the effective Lagrangian for chiral 4d theories [21,22], it belongs to the class of generalized de Crombrugghe-Rittenberg supersymmetric Hamiltonians [23]. When r = 1, the latter reads as in (3.32), where B = ∇ × A = −∇K, and K is an arbitrary function of the three bosonic variables A. For chiral 4d QED, the function K was singular, K ∝ 1/|A|. The corresponding Hamiltonian described motion in a monopole field with an extra scalar potential ∼ 1/A^2, and the singularity at |A| = 0 led to a nontrivial Berry phase [21]. In our case, K is much simpler, K = mΦ (Φ ≡ Φ^{3(0)}). This corresponds to a uniform magnetic field supplemented by an oscillator potential in the z direction. The Hamiltonian can be presented in the form (3.33), with B = κL^2 (to establish its relationship to (3.32), one should rename ψ̄_2 ↔ ψ_2). In spite of the presence of the potential (such that the configuration space C_j, Φ does not have the meaning of a moduli space), the characteristic excitation energies associated with Φ are of the same order as the energies associated with C_j, and ignoring Φ would be inconsistent.
For positive k, the index of the Hamiltonian (3.33) is given, again, by the two-dimensional flux of the magnetic field, as in (2.13). But, in contrast to what happens in the N = 1 theory, it does not change sign for negative k. For SU(2) we derive the pre-Weyl result (3.34). Imposing the condition of Weyl invariance, we are left with only |k| + 1 bosonic vacuum states. Loop corrections do not change this result because, for the N = 2 theory, the fermion and gluon loop contributions to the renormalization of k and to the magnetic field flux cancel out. The generalization to higher N is straightforward. The final result for the index coincides with (2.44), but without the factor (−1)^{N−1}. When k = 0, supersymmetry is unbroken.
If k = 0, the pure N = 2 3d SYM field theory is known to involve the "runaway" vacuum: the degeneracy of the vacuum valley is lifted by a superpotential generated by instantons such that the minimum of energy is achieved at infinitely large field values [24]. It would be interesting to understand how this is reflected in the finite-volume version of the theory.
The Lagrangian of the N = 3 theory involves four fermions, ψ_{f=1,2,3} and χ; the fermion χ has a mass of sign opposite to that of the ψ_f. Besides, there are three real adjoint scalars Φ_f. The effective Hamiltonian (for the SU(2) theory) takes an analogous form, and its pre-Weyl index is evaluated in the same manner.
A paradox
The problem of calculating the index (i.e., the number of vacuum states) in the SYMCS theory is closely related to the problem of calculating the total number of states in the topological pure CS theory. Indeed, the canonical momenta derived from the Lagrangian contain no time derivatives on the right-hand side, and we thereby obtain a set of second-class constraints (they do not all commute), G^a_j = Π^a_j − (κ/2)ε_{jk}A^a_k = 0, supplemented by the gauge constraints F^a_{jk} = 0. When quantizing, we replace, as usual, Π^a_j → −iδ/δA^a_j and impose the conditions (4.3) on the wave functions (one has to use a kind of Gupta-Bleuler quantization procedure here and implement only half of the G^a_j [25]). On the other hand, it is not difficult to see that the supercharges of the SYMCS model (2.1) can be represented in a similar form. For positive k, the ground states are bosonic and are annihilated by Q̄ trivially; the condition Q|Ψ⟩ = 0 is equivalent to the set of constraints Ĝ^a_{1−i2} Ψ = 0. For negative k, the condition Q̄|Ψ⟩ = 0 is equivalent to the set of constraints Ĝ^a_{1+i2} Ψ = 0. It is therefore not surprising that our results [like (1.3)] for the tree-level index coincide with those derived earlier for pure CS theories. A conventional way to count the number of states in CS theories is to use their relationship [17] to 2d WZNW theories [26], the correspondence between WZNW theories and conformal theories, and the full conformal machinery [27]. But it can also be done by directly resolving the constraints (4.3) [28,29].
The SYMCS theory in question, however, also involves the supersymmetric YM part of the action, which might affect the index. Witten suggested that only the fermionic part of this action does. His logic was the following [30]. Let us integrate over the fermions (after which the effective coupling is shifted according to k → k − c_V/2) and obtain a purely bosonic theory. At low energies, this is pure CS theory. It also involves the YM term and still higher-derivative terms. Though these terms are irrelevant at low energies, in the sense that the dynamics depends exclusively on the lowest-dimension CS term in the Wilsonian effective Lagrangian, they can affect the coefficient of this term. However, in contrast to what happens, e.g., in a conventional 4d YM theory supplemented by a higher-derivative term ∼ Tr{F D^2 F}/M^2, where the effective low-energy YM coupling constant acquires a logarithmic dependence on M, in this case the renormalization of κ does not depend on the coefficient 1/g^2 of the YM term. Moreover, it does not depend on the form of the higher-derivative terms, the result being robust with respect to these details. One can therefore consider this shift an immanent feature of pure CS theory, with quantum effects taken into account. Indeed, the shift k → k + c_V appears in many exact formulae, like those for the energy-momentum tensor or Wilson loop expectation values, etc. [17,27]. On the other hand, this shift does not show up in the formula (1.3) for the number of states in CS theory (on the conformal side, this is the number of so-called conformal blocks). Thus, Witten concludes, one should not take into account the renormalization of k due to bosonic loops. The known pattern of the exact solution of pure CS theory shows that bosonic loops are indeed present; they affect κ and other quantities, but do not affect the number of states. This reasoning looks OK. Besides, the supersymmetry breaking at small k that it implies also follows from heuristically suggestive D-brane constructions [31]. However, it is somewhat formal, relying heavily on the correspondence with conformal theories and the exact results there, and it does not give a clear physical picture of what really happens. Our method, consisting in explicit evaluation of the low-energy Hamiltonian in finite volume, gives such a physical picture, but, surprisingly, the result of this analysis is different: bosonic loops do contribute to the index. This is an obvious paradox, which should be resolved somehow. Being unable at present to make essential comments on the conformal line of reasoning, let us try to see whether one can modify our prediction following from the analysis of the effective finite-volume Hamiltonian. One of the places in our proof which might involve a loophole is the following. In the Appendix, we carefully evaluated the contribution of the fermion loop to the effective finite-volume BO Hamiltonian and confirmed that, as far as the expression for the induced magnetic field is concerned, the simple rules (3.17) work and, as a result, the flux of the induced magnetic field is rigidly connected to the renormalization of κ in infinite volume. This generalizes to bosonic loops if the latter are evaluated with the simplistic infrared regularization (3.16).
It is difficult to imagine that the results could depend here on the regularization, but we cannot logically exclude at present that, when accurate calculations are done in the full SYMCS theory, where the extra terms in the action involve a couple of derivatives, the recipe (3.17) breaks down for bosonic loops. As a result, the flux of the induced magnetic field might be zero in spite of the nonvanishing renormalization of κ in the infinite-volume theory. To patch this hole, such an accurate calculation should be performed.
Another potential source of trouble is the fact that the BO approximation we use breaks down near some special points (fixed points of the Weyl transformation) on the space of flat connections [see footnote 10]. This allows one to suspect the presence of some extra contributions to the index that we did not take into account. These might be (i) extra one-loop contributions and/or (ii) higher-loop contributions. Speaking of the latter, we have not rigorously excluded their presence for higher-rank groups [see footnote 11], but for SU(2) we did. (We remind the reader: for SU(2), higher-loop contributions, if any, should involve inverse powers of k; this is not allowed at large k, and hence the coefficient should vanish for any k.) Speaking of the former, we carefully analyzed only a possible correction to the index due to the renormalization of the metric and showed that it vanishes. However, there are many other corrections, with four and more derivatives in the Lagrangian. As the index is a topological quantity, it is difficult to imagine that something other than the (generalized) flux might contribute, but, again, this is not a mathematical theorem. There is a logical possibility that something queer, like higher-derivative terms, contributes to the index and cancels the contribution of the flux induced by the bosonic loops.
The third possibility is the following. The Weyl group W has a natural Z_2 gradation into even and odd elements. (For example, for SU(N), the Weyl group S_N involves even and odd permutations.) Imagine now that, for some reason, we should have picked not Weyl-invariant wave functions, but Weyl-antiinvariant ones, i.e., functions invariant under the action of the even elements of W that change sign under the action of the odd elements [28,32]. Weyl-antiinvariant wave functions can be represented accordingly, with P(x) = ±1 depending on whether the element x is even or odd. It is then not difficult to see that the number of such Weyl-antiinvariant states is equal to the number of points in the Weyl alcove excluding the points on its boundary. Indeed, the latter are invariant with respect to a Z_2 subgroup of the Weyl group involving the unity and some odd element of W of second order. [For the Weyl alcove of SU(3) depicted in Fig. 2, this odd element is one of the permutations (12), (13), or (23); see footnote 4.] A glance at Fig. 2 shows that, for k = 4, there are only three points in the interior of the alcove, and this coincides with the number of points in the Weyl alcove for k = 1 counted in the conventional way (with the boundary included). One can now note that 1 = 4 − 3, i.e., for k = 4, the number of Weyl-antiinvariant states for the SU(3) effective Hamiltonian that takes into account the gluon-loop shift k → k + 3 is equal to the number of Weyl-invariant states for the unshifted Hamiltonian. A purely geometric inspection of larger triangles and multidimensional tetrahedra shows that this pattern holds for all k and N. For the higher unitary groups, the Weyl-antiinvariance condition "unwinds" the gluon-loop shift k → k + N. We enjoyed observing this also for the symplectic groups Sp(2r) (where counting the points in the interior of the alcove unwinds the shift k → k + c_V = k + r + 1) and for G_2. Indeed, looking at the Weyl alcove for G_2 in Fig. 2, one observes that, for k = 4, only one state is left, and this corresponds to unwinding k → k − 4 = k − c_V[G_2]. This theorem can be proven for an arbitrary group [28,32]: the Weyl-antiinvariance requirement always amounts to the negative shift k → k − c_V, which compensates the shift (4.5) due to the gluon loops. In other words, by imposing the Weyl-antiinvariance requirement on the wave functions, we would reproduce Witten's result. The problem is, however, that we do not see a reason to do so in the framework of our approach.
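The SU(3) "unwinding" can be checked numerically with the same alcove parametrization as assumed above: the interior points at level k match all points at level k − c_V, c_V = 3.

```python
# Same ASSUMED SU(3) alcove parametrization as above: points (n1, n2) >= 0 with
# n1 + n2 <= k; interior points have n1, n2 >= 1 and n1 + n2 <= k - 1.
def su3_alcove(k: int) -> int:
    return sum(1 for n1 in range(k + 1) for n2 in range(k + 1 - n1))

def su3_interior(k: int) -> int:
    return sum(1 for n1 in range(1, k) for n2 in range(1, k - n1))

assert su3_interior(4) == 3 == su3_alcove(1)   # the k = 4 example in the text
assert all(su3_interior(k) == su3_alcove(k - 3) for k in range(3, 40))
```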
Going down onto the quotient.
We calculated the index by studying the dynamics on the moduli space of all (not necessarily gauge-equivalent) Abelian flat connections and then imposing the Weyl invariance condition on the quantum states. The advantage of this approach is the simplicity of the moduli space: just the product T × T of two copies of the maximal torus of the gauge group. An alternative approach is to factorize [T × T] over the Weyl group W at the classical level and study the dynamics on the (more complicated) moduli space thus obtained. This is the way the index was calculated in Sect. 3 of Ref. [1]. That calculation uses a bunch of nontrivial mathematical facts, which we have understood (with the help of mathematician colleagues) only partially. Still, we have decided to make a few explanatory comments here, which might be useful for an unsophisticated physicist reader who shares the author's mathematical illiteracy.
The first nontrivial fact is that the moduli space M = [T_max × T_max]/W of gauge-equivalence classes of flat SU(N) connections on T^2 is CP^{N−1} [33]. In fact, the proof of the analogous statement for the symplectic groups (that M_{Sp(2r)} = [T^max_{Sp(2r)} × T^max_{Sp(2r)}]/W_{Sp(2r)} = CP^r) is much simpler. Consider first Sp(2) = SU(2). In this case, T^max is just a circle, and the Weyl group is Z_2. Then M is the set of points (x, y) identified by periodicity, (x, y) ≡ (x + 1, y) ≡ (x, y + 1), and by simultaneous Weyl reflection, (x, y) ≡ (−x, −y). This gives a triangle with glued edges, depicted in Fig. 5: the points symmetric with respect to the middles of the edges are identified. The "envelope" thus obtained is topologically equivalent to S^2.
For Sp(2r), T^max is a direct product of r such circles. The Weyl group has 2^r · r! elements, including the reflections on each circle and their permutations. It is clear then that M = [S^2 × · · · × S^2]/S_r, with r factors of S^2. Introduce a complex structure on each factor. A point in M can then be represented as an unordered set of r complex numbers (z_1, . . . , z_r). One can view this set as the set of roots of a polynomial of degree r and map the set of all such sets to the set of all complex polynomials of degree r, factorized over multiplication by a complex factor λ. Bearing in mind that a polynomial of degree r is specified by the set of its r + 1 coefficients, we derive M ≡ CP^r, as promised.
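Explicitly, the identification just described is the standard one (a sketch; roots at infinity on each S^2 = CP^1 account for the polynomials of lower degree):

$$\{z_1,\dots,z_r\}\ \longmapsto\ [a_0 : a_1 : \cdots : a_r],\qquad \prod_{i=1}^{r}(z-z_i)\;=\;\sum_{p=0}^{r}a_p\,z^{p},$$

so that the space of unordered r-tuples, [S^2]^{×r}/S_r, is identified with the projectivized space of coefficient vectors, i.e., CP^r.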
Let us go over to the unitary groups. The maximal torus of SU(N) is the set of matrices diag(e^{iα_1}, . . . , e^{iα_N}) with Σ_{l=1}^{N} α_l = 0. The product of two such tori can be represented as the space of sets {z_1, . . . , z_N}, where z_l = α_l + iβ_l belongs to T^2 [α_l, β_l ∈ (0, 2π)] and Σ_l z_l = 0. The Weyl group permutes the z_l. Thus, M_{SU(N)} is the set of unordered N-tuples on T^2 that add up to zero. Similarly to what was done in the case of Sp(2r), such an N-tuple can be represented by meromorphic elliptic functions defined on T^2 that have simple zeros at the N selected points and a pole of N-th order at zero. There is a one-to-one correspondence between these N-tuples and the classes of such functions F(z) under the identification F(z) ≡ λF(z). It is a known mathematical fact that the space of all such elliptic functions is a vector space of complex dimension N. Bearing in mind the identification with respect to multiplication by λ, the projective space CP^{N−1} arises.
Witten then relates the index to a certain topological invariant of CP^{N−1} associated with the presence of an extra Abelian gauge field on this manifold. We do not want to go into further details (especially bearing in mind that we do not understand this question completely), but we mention that an elementary calculation of this invariant was performed in [34]. The number of states at the tree level depends on the parameter k and is given by (1.3). As was discussed above, Witten suggests that k should be shifted due to fermion loops to k − c_V/2, while our analysis suggests the positive shift k → k + c_V/2.
Strings and walls
The last, rather confusing, issue we want to discuss here concerns the arguments of Ref. [7] relating the Witten index of the 3d SYMCS theory at level k to the multiplicity of domain walls in the N = 1 4d SYM theory with SU(k) gauge group. The standard reasoning for the appearance of these walls is the following. The tree Lagrangian of this theory enjoys an axial U(1) symmetry. As in QCD, this symmetry is anomalous, being broken by instantons. An instanton possesses 2k gluino zero modes, the 't Hooft determinant involves the factor ∼ λ^{2k}, and this means that the discrete Z_{2k} subgroup of the axial U(1) remains unbroken. This discrete symmetry is further spontaneously broken down to Z_2, with the phase of the gluino condensate, ⟨λλ⟩_l = Σe^{2πil/k}, l = 1, . . . , k (4.8), playing the role of the order parameter of this breaking [35]. This implies the existence of k distinct vacua and of domain walls separating them [36]. There are domain walls of different kinds interpolating between the vacua with phase differences p = l − l′ = 1, . . . , k − 1. For given k, p, there are several different domain walls, their multiplicity being evaluated (by brane methods) in [7] as #walls(k, p) = C(k, p), the binomial coefficient (4.9). Based on certain D-brane and duality arguments, Acharya and Vafa relate this number to the number of vacuum states in the N = 2 3d SYMCS SU(p) theory at level k (the main idea being that the effective theory on the domain wall is in fact a 3d SYMCS theory). And this relation holds if one uses the N = 2 generalization [31] of Witten's original formula (1.1), and not our formula (1.5)! Even though this agreement looks rather remarkable, it is not conclusive enough within our restricted rules of the game, where only pure field theory reasoning is admissible and duality arguments are not.
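The numerical consistency behind footnote 20 is a one-line binomial identity; assuming that the N = 2 SU(p) index (4.10) takes the Witten-type value C(k − 1, p − 1) (the tree-level count with the purely fermionic shift k → k − c_V, c_V = p), one indeed finds

$$ \frac{k}{p}\binom{k-1}{p-1} \;=\; \frac{k}{p}\cdot\frac{(k-1)!}{(p-1)!\,(k-p)!} \;=\; \frac{k!}{p!\,(k-p)!} \;=\; \binom{k}{p} \;=\; \#_{\rm walls}(k,p). $$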
SYM theory is a strongly coupled theory, and it is difficult to perform an honest study of domain walls there and count their number. The only "braneless" way to do it is to modify the theory by adding extra fundamental matter multiplets [37]. If the matter fields are light enough, one can integrate over all other degrees of freedom to obtain the effective ADS Lagrangian [38]. It is a Lagrangian of Wess-Zumino type, with a superpotential involving a special instanton-generated term. It has, indeed, k different vacua, and the classical solutions describing the different domain walls can be explicitly constructed and counted, their number being given by Eq. (4.9). But it is not evident that the number of walls in the pure SYM theory should be the same. The latter can be reached from the weakly coupled theory with light matter fields by increasing their mass. If the mass becomes very large, these fields decouple. If the number of walls does not change under such a deformation, the counting (4.9) works also for pure SYM.
This condition seems not to be fulfilled, however. In Refs. [39,40], this very question was studied in the framework of the Taylor-Veneziano-Yankielowicz Lagrangian [41], which involves, on top of the matter superfields, also the chiral superfield S, effectively taking into account the gluon and gluino degrees of freedom. This study revealed that, when the mass is increased, a chain of phase transitions (or rather bifurcation points) occurs, such that most of the walls disappear at large masses. For example, for k = 2, both walls disappear; for k = 3, only two "tenacious" walls out of three are left, etc. In other words, the counting (4.9) works for the ADS Lagrangian, but probably does not work for the SYM theory. Bearing this in mind, the agreement between Eq. (4.9), which does not count the number of walls correctly, and Eq. (4.10), which is not the correct value of the 3d index, looks really mysterious...
[Footnote 20: The exact agreement between (4.9) and (4.10) is achieved if one takes into account the presence of an extra U(1) factor in the effective theory. As a result, the number of walls is given by the SU(p) index (4.10) multiplied by the factor k/p [7], which coincides with (4.9).]
[Footnote 21: The TVY Lagrangian has correct symmetry properties, but it is not a Wilsonian effective Lagrangian, and one cannot be sure that results obtained in the TVY framework also hold for the full SYM theory. Anyway, it is the only field-theory method known to us for studying domain walls in the strongly coupled regime.]
[Footnote 22: The disappearance of walls in the pure SYM theory might be associated with the fact that the standard interpretation in terms of spontaneous breaking of the discrete chiral symmetry is actually questionable. The pure SYM theory, unlike a theory with fundamental matter, admits not only instanton Euclidean configurations with integer topological charge, but also configurations with fractional charge. Such configurations ('t Hooft torons [9]) are certainly present in a theory defined on a spatial torus with twisted boundary conditions [cf. the remark after Eq. (2.5)]. Then the phase of the fermion condensate ⟨λλ⟩ in Eq. (4.8) is not an order parameter, but plays the same role as the vacuum angle θ: it should be chosen once and for all, and there are no physical walls connecting vacua with different θ. See Refs. [37,40] for a discussion of this controversial issue.]
We are indebted to E. Witten for profound illuminating discussions and many valuable remarks. We also acknowledge useful discussions with B. Feigin, A. Gorsky, E. Ivanov, A. Pajitnov, V. Rubtsov, and S. Theisen.
Appendix. Magnetic flux induced by loops.
SU(2)
The formula (3.18) for the induced magnetic field on the dual torus is very natural and follows almost directly from (3.7) and the rules (3.17). However, this simple correspondence is formulated for the magnetic field B, while the bosonic part of the effective Lagrangian involves the vector potential A rather than B, and the formula for L_eff is more complicated. Because of this, and because of the controversy concerning the bosonic loop contribution, we decided to make some explanatory comments here.
Consider the fermion contribution. To find the correction to the effective Lagrangian, we have to evaluate the fermion loop in finite volume in the external background field (A.1), where τ is the Euclidean time (to evaluate the graphs, we perform, as usual, a Wick rotation, etc.). For any multileg graph in the expansion of Tr ln(iD̸ − m), we thus have to insert such a C(τ) in each leg and keep only the terms linear in E. The way the calculations are done here [42] closely parallels the technique of calculations in background nonperturbative Euclidean 4d fields developed in [43], based on the gauge choice [44] (x − x_0)_µ A_µ = 0, leading to (A.2). This gauge is not translationally invariant, but the physical results must not (and do not) depend on the choice of the "fixed point" x_0. This choice is in our hands. Likewise, the point τ_0 at which the linear term in the decomposition (A.1) vanishes is a convention. We choose τ_0 = 0, coinciding with the position of one of the legs in the graphs.
The graphs with an odd number of legs vanish, and we have to consider only the graphs with an even number of legs. There is only one two-leg graph, depicted in Fig. 6. The factor 1/2 coming from the expansion of ln(iD̸ − m) is displayed. The blob marks the "fixed point", the vertex at τ = 0; at this point, one can plug in only the constant part of the field. The resulting expression (A.3) involves the fermion Green's functions G(ε, 2πn/L); we took care to display explicitly the factor 1/2 coming from the expansion of the logarithm, but not the other numerical factors. From now on we also suppress the subscript 0 on C. For the graphs in Fig. 7, the factor τ multiplying E_k goes over into the operator ∂/∂ε acting on all the Green's functions between the point where E_k τ is inserted and the blob in, say, the clockwise direction [43]. For example, the graph in Fig. 7b gives (A.4). Again, only the expansion factor 1/4 is explicitly displayed. The six-leg graphs ∼ EC^5 involve the expansion factor 1/6, etc.
To resum all such contributions, let us compare the expressions (A.4), etc., to the corresponding terms in the expansion of the graph in Fig. 8, where the thick lines stand for Green's functions in the constant background C. These expansion terms have the same structure as in Eq. (A.4), but the coefficients 1/4, 1/6, etc. are replaced by a universal combinatorial prefactor 1/2. To find L_eff to any order in C, we thus have to take the corresponding resummed expression. This is nothing but the Fock-Schwinger gauge representation (A.2) for the vector potential via the magnetic field. We thus arrive at the result (3.18) for ∆B_F(C). An explicit evaluation of ∆B_B(C) in SYMCS theory is technically more involved. In the background field method, there are two types of vertices, with single and double external-field insertions. In addition, the expression for the gluon propagator is more complicated. What we can easily do, however, is calculate the induced magnetic field in the model where the YM term in the action is replaced by the gluon mass term (3.16). Then the action is exactly the same as for the fermions, and the results are also exactly the same up to the factor −2:
∆B_B(C) = −2 ∆B_F(C).   (A.8)
In view of the controversy discussed in the paper (whether gluon loops are relevant or not), it would make sense to perform this calculation with the "honest" YM action. It is difficult to imagine, however, that a result other than (A.8) would be obtained. At C = 0, the equality (A.8) is manifest with any regularization.
The total 1-loop contribution to the effective Lagrangian is expressed through a universal function A_k(C) taken from (A.7). Let us now add the contribution from the gluon loops (this amounts to changing the sign of A_k) and calculate B^{ab} and its determinant.
"Physics"
] |
Crash Analysis of Aluminum/CFRP Hybrid Adhesive Joint Parts Using Adhesive Modeling Technique Based on the Fracture Mechanics
This study describes the numerical simulation results of aluminum/carbon-fiber-reinforced plastic (CFRP) hybrid joint parts using the explicit finite-element solver LS-DYNA, with a focus on capturing the failure behavior of the composite laminates as well as the adhesive capacity of the aluminum–composite interface. Two types of adhesive modeling techniques were investigated: a tiebreak contact condition and a cohesive zone model. Both are widely commercialized models of structural adhesives used to simulate adhesive failure based on fracture mechanics. The CFRP was studied with numerical simulations utilizing LS-DYNA MAT54 to analyze the crash capability of the aluminum/CFRP part. To evaluate the simulation model, the force–displacement curves from the numerical analysis were compared with the experimental results. A parametric study was conducted to evaluate the effect of the different fracture toughness values used by designers to predict the crash capability and adhesive failure of aluminum/CFRP parts.
Introduction
Automotive structural parts are being replaced with lightweight materials, such as carbon-fiber-reinforced plastic (CFRP), plastics, and aluminum, instead of steel, to improve fuel efficiency and reduce carbon emissions in the automotive industry [1,2]. Because mechanical joining methods, such as bolting or welding, are unsuitable for these materials, structural adhesives are a good alternative for providing the required strength at joints between dissimilar materials [3]. In addition, when joining technologies for dissimilar materials are applied to automotive structures, it is important to be able to predict the performance of the adhesive joints [4].
Many studies have been conducted to predict and evaluate the strength of adhesive joints using continuum mechanics and fracture mechanics approaches [5]. The continuum mechanics approach has been used to analyze the strength of the adhesive; it requires stress distributions and adequate failure criteria [6]. However, it is difficult to use the stress or strain directly in structural design, because the stress or strain cannot be defined precisely in analytical terms owing to the stress singularity at the adhesive joint [7]. The cohesive zone model is based on fracture mechanics; it assumes that there is a softening zone ahead of the crack tip. In the fracture process zone, the crack-tip opening is resisted by tractions. Early conceptual work was conducted by Dugdale [8] and Barenblatt [9]. Hillerborg et al. [10] applied the cohesive zone formulation to cracking in a concrete beam.
Over the years, many adhesive modeling methods have been developed for adhesively bonded joints used in crash simulation [11,12]. Faruque et al. [13] proposed a practical modeling methodology for adhesively bonded structures using discrete springs for crash simulation. Dlugosch et al. [14] tested hybrid FRP (fiber-reinforced plastic)-steel tubes under dynamic axial loading and conducted numerical analyses using Abaqus Explicit; to study their predictability, the adhesives at the steel-FRP interface were modeled using cohesive-behavior and tied-surface modeling methods. Shin et al. [15] investigated the damage behavior of an aluminum/composite beam under bending conditions by conducting a finite-element analysis, with debonding and delamination modeled by a cohesive zone model. May et al. [16] proposed a rate-dependent constitutive cohesive law and validated the model against tests of a T-joint made of high-strength steel and structural adhesive under quasi-static and dynamic loading conditions.
In this study, the explicit dynamic analysis software LS-DYNA was used to analyze the joint performance of the aluminum/CFRP parts using two adhesive modeling techniques based on fracture mechanics (the cohesive zone model and the tiebreak contact condition). The results of the crash tests and the finite-element analysis were compared and analyzed. To define the material model, a fracture toughness test of the adhesive was performed; the results were then used in evaluating the strength of the parts under impact conditions, the failure of the composite material, and the failure behavior of the adhesive, and the validity of the analysis model was verified. The purpose of this study was to develop a practical method to model a large-scale, adhesively bonded joint structure with a simple procedure and acceptable computational cost based on the existing modeling approaches.
Mechanical Properties of Aluminum and CFRP Plates
Aluminum 5052-O has good formability and ductility, and it was used to increase the impact absorption capability [17]. The material properties of aluminum 5052-O (Korea Non-Ferrous Metals Corporation, Asan, Korea) are presented in Table 1. The CFRP plates (SHINSUNG BASIC MATERIALS, Anseong, Korea) were made from eight plies of CFRP with a stacking sequence of [0]₈. The plates were manufactured using a pultrusion process [21]. Tensile, compression, and shear stiffness and strength tests were performed according to the ASTM D3039, ASTM D6641, and ASTM D7078 standards to obtain the material properties [22-24]. A material testing machine, Instron model 5985 (Instron Corporation, Norwood, MA, USA), was used to run the tests. The obtained test results are presented in Table 2.
Fracture Toughness of Structural Adhesive
The fracture toughness test of the adhesive was performed to supply the critical energy release rate to the finite-element analysis. The Mode I fracture toughness test was performed according to the ASTM D3433 standard [25]. For the Mode II test, the fracture toughness was measured using the tapered end-notched flexure (TENF) test method [26]. Figure 1 shows the dimensions of the specimen and the fracture toughness test setup. In these tests, a urethane-based vehicle structural adhesive developed by Dongsung Chemical was used, and high-strength steel (STD-11, SeAH css Corporation, Changwon, Korea) was used for the adherends. The testing machine Instron model 5882 was used to conduct the fracture toughness tests. The fracture toughness values obtained were 2.010 kJ/m² for Mode I and 7.666 kJ/m² for Mode II. The test results are presented in Table 3.
Aluminum/CFRP Component Test
Hat-profile specimens were fabricated to carry out crash tests on the aluminum/CFRP hybrid joint parts. The Al5052-O aluminum alloy (Korea Non-Ferrous Metals Corporation, Asan, Korea), with a thickness of 2.5 mm, was shaped by a bending process, and a 2.0 mm thick CFRP plate was bonded to it using the structural adhesive [27]. The dimensions of the aluminum/CFRP hybrid joint parts are shown in Figure 2 and Table 4. The crash test was performed by dropping a semicircular impactor with a weight of 47.1 kg from a height of 2.3 m, imposing an impact at an initial speed of 6.38 m/s. The crash test setup is shown in Figure 3. The speed and displacement were measured using a photonic sensor and a rotary encoder sensor. The aluminum/CFRP specimen was installed on supporting parts made of hardened tool steel. Figure 4 shows the failure of the aluminum/CFRP structure after the crash test. The aluminum parts underwent large plastic deformations while absorbing energy during the crash. The CFRP was damaged as a result of the excessive deformation of the aluminum part while supporting the impact load; bending failure occurred in the fiber and transverse directions. Tearing failure was observed at the corners of the aluminum, caused by a reduction in the width of the aluminum plate due to bending during the manufacturing process [28].
In the graphs of the crash test results in Figure 5, the load decreased slightly over the interval from t = 0.005 s to t = 0.010 s, due to the failure of the CFRP as well as the failure of the adhesive between the aluminum and the CFRP.
Material Models and Finite-Element Model
LS-DYNA (Ansys Inc., Canonsburg, PA, USA), explicit dynamic analysis software, was used to create a finite-element model of the aluminum/CFRP component crash test. The finite-element model, shown in Figure 6, reproduces the test conditions for the crash analysis. The material model MAT20 (*MAT_RIGID) was used to model the impactor in LS-DYNA. The impactor was constrained in the X and Y displacements; only the Z direction, the impact load direction, was free, with an initial speed of 6.38 m/s corresponding to 959 J of impact energy. An automatic single-surface contact option was used to prevent interpenetration. LS-DYNA provides various anisotropic material models for composites [29]. In this study, the MAT54 (*MAT_ENHANCED_COMPOSITE_DAMAGE) material card was used to model the CFRP. The MAT54 material model is widely used in industry and is effective because it has simple input parameters and damage models for the failure mechanisms of complex composite materials, as shown in Figure 7. The elastic moduli (EA, EB, EC), Poisson's ratios (PRBA, PRCA, PRCB), and shear moduli (GAB, GBC, GCA) are the elastic material-property inputs (yellow section in Figure 7); the notations A, B, and C indicate the material directions. In addition, the strength input parameters (blue section) are designated for each direction: XC and XT define the compressive and tensile strengths in the fiber direction, YC and YT denote the compressive and tensile strengths in the matrix direction, and the shear strength is introduced by the parameter SC. Tensile, compressive, and shear tests are used to determine these mechanical properties; the elastic and strength inputs are therefore not involved in the calibration of the remaining input parameters [30]. The Chang-Chang criteria are two-dimensional failure criteria proposed to predict the progressive damage of composite structures under loading [31]. The strength parameters were applied to define the onset of ply degradation using the Chang-Chang failure criterion in MAT54. The Chang-Chang failure criteria distinguish the following modes [32] (the commonly quoted form of these criteria is sketched below):
• tensile fiber failure mode;
• compressive fiber failure mode;
• tensile matrix failure mode;
• compressive matrix failure mode.
In the MAT54 composite material model, the failure criteria are related to failure-strain parameters in the tensile/compressive directions of the fibers and matrix, such as DFAILT, DFAILC, DFAILM, DFAILS, and the effective failure strain (EFS). To apply a simple failure criterion for the composite material under complex deformation behavior, the EFS value, which is the overall failure-strain criterion, was set to 0.3 through a trial-and-error method, and the TFAIL and SOFT values, which are nonphysical parameters, were set to zero [33]. For the aluminum, MAT24 (*MAT_PIECEWISE_LINEAR_PLASTICITY) was used, a material model that generally reflects elastic-plastic behavior well and can also define a failure criterion according to the stress-strain relationship [34,35].
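The four failure-mode expressions elided in the list above take, in the commonly quoted textbook form of the Chang-Chang criteria, the shape sketched below. This is a minimal illustration using the strength parameters named in the text (XT, XC, YT, YC, SC); the weighting factor beta and the example stresses are assumptions, and MAT54's internal implementation may differ in detail.

```python
def chang_chang_failure(s11, s22, s12, XT, XC, YT, YC, SC, beta=1.0):
    """Textbook Chang-Chang ply failure indices; a value >= 1 signals
    failure. s11, s22, s12 are the in-plane ply stresses (fiber, matrix,
    shear); the strengths follow the MAT54 naming used in the text."""
    modes = {}
    if s11 >= 0.0:   # tensile fiber failure mode
        modes["fiber_tension"] = (s11 / XT) ** 2 + beta * (s12 / SC) ** 2
    else:            # compressive fiber failure mode
        modes["fiber_compression"] = (s11 / XC) ** 2
    if s22 >= 0.0:   # tensile matrix failure mode
        modes["matrix_tension"] = (s22 / YT) ** 2 + (s12 / SC) ** 2
    else:            # compressive matrix failure mode
        modes["matrix_compression"] = ((s22 / (2 * SC)) ** 2
            + ((YC / (2 * SC)) ** 2 - 1.0) * (s22 / YC)
            + (s12 / SC) ** 2)
    return modes

# Hypothetical ply stresses and strengths (MPa), for illustration only
print(chang_chang_failure(s11=1800.0, s22=20.0, s12=60.0,
                          XT=2000.0, XC=1200.0, YT=50.0, YC=200.0, SC=80.0))
```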
Cohesive Zone Model
The fracture process zone is modeled as a cohesive zone [36]. The fracture characteristics are defined by the traction-separation law that constitutes the cohesive element. The energy dissipated in the traction-separation relationship is equal to the critical energy release rate, i.e., the energy required for crack propagation [37].
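A minimal numerical sketch of this statement for a bilinear law: the area under the traction-separation curve equals the critical energy release rate. Only Gc is taken from the Mode I test reported earlier (2.010 kJ/m²); the peak traction T0 and penalty stiffness K are illustrative assumptions, not values from this study.

```python
import numpy as np

def bilinear_traction(delta, T0, Gc, K):
    """Bilinear traction-separation law (mode I sketch).
    K  : initial (penalty) stiffness, so damage onset is at delta0 = T0/K
    T0 : peak traction
    Gc : critical energy release rate = area under the curve, giving a
         final opening delta_f = 2*Gc/T0"""
    delta0 = T0 / K
    delta_f = 2.0 * Gc / T0
    if delta <= delta0:
        return K * delta                                    # elastic branch
    if delta <= delta_f:
        return T0 * (delta_f - delta) / (delta_f - delta0)  # softening
    return 0.0                                              # fully debonded

Gc, T0, K = 2.010e3, 30.0e6, 1.0e12   # J/m^2, Pa, Pa/m (T0 and K assumed)
d = np.linspace(0.0, 2.0 * Gc / T0, 2001)
t = np.array([bilinear_traction(x, T0, Gc, K) for x in d])
print(np.trapz(t, d))                  # ~2010 J/m^2, recovering Gc
```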
Figure 8 presents a few cohesive zone material models that can be used to model adhesive bonds in LS-DYNA. Solid elements with ELFORM = 20 are intended for use with cohesive material models. Depending on the situation, the cohesive zone model can also incorporate the plastic properties and the rate-dependent properties of the adhesive. In this study, the adhesive was modeled using MAT138 (*MAT_COHESIVE_MIXED_MODE), a cohesive zone model defined by the bilinear traction-separation relationship [38]. The cohesive zone model of the aluminum/CFRP hybrid joint parts is shown in Figure 9.
Tiebreak Contact
Adhesive debonding can also be modeled using the tiebreak contact condition between the adherends in LS-DYNA. Tiebreak contact is a penalty-based contact modeling technique. It is useful when constraints are applied to parts with different meshes; the contact condition is defined within the master-segment projection area based on the slave nodes, and the gap between a slave node and the master segment is a specific value based on the element dimensions. The contact option CONTACT_AUTOMATIC_ONE_WAY_SURFACE_TO_SURFACE_TIEBREAK was used for the finite-element model. The failure criteria were the same as those of MAT138. After tiebreak failure, the interface reverts to an automatic contact condition [39]. In the cohesive zone model, it is inconvenient to connect the nodes by modeling the adhesive in the joint area as solid elements; in contrast, it is easy to model the adhesive using the tiebreak contact condition, which requires only simple definitions, such as designating a segment area or parts on the surface of the joint [40].
Finite-Element Analysis Results and Verification
Figure 10 shows the failure of the aluminum/CFRP parts sequentially. It was confirmed that most of the energy is absorbed by plastic deformation in the impacted region of the aluminum/CFRP part. The large deformation caused the edge part to tear, breaking the composite material and the adhesive simultaneously. In the graph in Figure 11, there is a section where the load decreases as the adhesive at the aluminum–composite interface fails. Although the section where the composite material fails differs between the cohesive zone model and the tiebreak contact condition, the trend of the impact load is similar. Figures 12 and 13 show photographs of the composite failure together with the finite-element analysis results. The fracture of the composite material occurred in the fiber direction and transverse to the fiber. In the case of adhesive failure (Figure 14), the cohesive zone model produced a wider debonding area than the tiebreak contact condition. Evaluating the energy absorption of a structure under crash loading requires the definition of some indicators. Generally, these can be determined from the load-displacement curve: the absorbed energy is the area under the force-displacement curve in a crash situation and can be calculated as
E_a = ∫ F dδ,   (5)
where F is the crushing force and δ is the crushing depth. In addition, the crushing displacement of the impactor characterizes the energy absorption capability for the same amount of impact energy. Table 5 shows the energy absorption and the crush depth of the impactor.
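As a minimal numerical sketch of Eq. (5), the absorbed energy can be obtained by trapezoidal integration of a sampled force-displacement curve; the sample points below are hypothetical stand-ins for the measured curves of Figures 11 and 16.

```python
import numpy as np

# Hypothetical force-displacement samples (not the paper's measured data)
disp = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 40.0]) * 1e-3   # m
force = np.array([0.0, 18.0, 25.0, 22.0, 15.0, 10.0]) * 1e3  # N

E_a = np.trapz(force, disp)   # Eq. (5): area under the F-delta curve
print(f"Absorbed energy: {E_a:.1f} J")
```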
Effect of Mesh Size on Finite-Element Analysis of Adhesive Joint
The mesh size is one of the most significant limitations of the cohesive zone model method in finite-element analysis. It has been observed that it is essential to include between two and three interface elements in the cohesive zone to precisely represent the softening ahead of the fracture process zone [41]. Analyses that violate this condition show a characteristic stick-slip behavior after failure, which results in an incorrect and uncertain solution [42,43]. To create a reliable finite-element model with an appropriate mesh size, it is necessary to investigate the mesh dependency of the model. Figure 15 and Table 6 show the results of the finite-element analysis for different mesh sizes. The results for a mesh size of 2 mm are acceptable in terms of computational cost and prediction accuracy. For the mesh size of 5 mm, since no failure of the composite material occurred, the decrease in structural rigidity could not be observed in the graph.
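One way to turn the "two to three interface elements" rule into a concrete element-size bound is a Hillerborg-type estimate of the cohesive zone length, l_cz ≈ E·Gc/T0². In the sketch below, only Gc comes from the Mode I test; the adhesive modulus E and peak traction T0 are illustrative assumptions.

```python
def cohesive_zone_length(E, Gc, T0):
    """Hillerborg-type estimate l_cz = E*Gc/T0^2; other estimates in the
    literature differ by O(1) prefactors."""
    return E * Gc / T0 ** 2

E, Gc, T0 = 2.0e9, 2.010e3, 30.0e6           # Pa, J/m^2, Pa (E, T0 assumed)
l_cz = cohesive_zone_length(E, Gc, T0)       # ~4.5 mm for these values
for n_elem in (2, 3):
    print(f"{n_elem} elements -> max element size {1e3 * l_cz / n_elem:.2f} mm")
```

With these assumed values the bound comes out between roughly 1.5 and 2.2 mm, in line with the 2 mm mesh found acceptable above; since the prefactor is only order-one, this is guidance rather than a sharp criterion.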
Effect of Fracture Toughness on Finite-Element Analysis of Adhesive Joint
A parametric study was conducted to assess the effect of different fracture toughness values used by engineers to predict the impact strength and adhesive failure of aluminum/CFRP components. The fracture toughness values in the adhesive material models were varied to investigate the impact strength and adhesive failure of the aluminum/CFRP hybrid adhesive joint parts. Case I used low fracture toughness values and Case II used high values, and the two cases were analyzed for comparison. In general, because the fracture toughness value of Mode II is approximately three to four times that of Mode I [44], the fracture toughness values for each case were set as shown in Table 7.
Table 7. Input parameters of fracture toughness values for a parametric study (the same values were used for the cohesive zone model and the tiebreak contact condition):
Case I: G_IC = 1.0 kJ/m², G_IIC = 3.0 kJ/m²
Case II: G_IC = 6.0 kJ/m², G_IIC = 18.0 kJ/m²
Figure 16 shows the comparison of force-displacement graphs for the different fracture toughness values from the crash simulation. In the case of the cohesive zone model, it was confirmed that the structural rigidity of the aluminum/CFRP component decreased as the adhesive with the low fracture toughness value debonded early (t = 0.003 s). In Case II, the crushing depth was reduced due to the high fracture toughness value; the results of the tiebreak contact condition, however, show that Case I and Case II are quite similar. In Figure 17, the CFRP was not damaged, but the adhesive at the aluminum–CFRP interface was broken and separated due to the low fracture toughness. In particular, in the results of the cohesive zone model the debonding occurred in the center of the aluminum–CFRP interface, whereas with the tiebreak contact condition the CFRP separated on one side of the joint. In Case II (Figure 18), the fracture area of the adhesive was significantly reduced compared with the previous results. Because of the high fracture toughness of the adhesive, bending failure was also confirmed in the CFRP. As a result, when a dissimilar-material part in a crash situation uses unidirectional CFRP with high stiffness as a reinforcing material, the structural rigidity of the part can be maintained by delaying the interfacial separation between the aluminum and the CFRP through an adhesive with a high fracture toughness value.
Conclusions
Special numerical methods are required to represent adhesively bonded joint parts within a crash simulation. This paper presented the finite-element analysis results of aluminum/CFRP hybrid joint parts using LS-DYNA, focusing on capturing the failure behavior of the structural adhesive interface as well as of the aluminum and the composite laminates. To apply adhesive models to reliable crash analysis of aluminum/CFRP hybrid adhesive joint components, fracture toughness tests were performed, and the results of the finite-element analysis were compared to verify the validity of the structural adhesive modeling techniques. The results are summarized as follows:
(1) A test setup for investigating the response of the aluminum/CFRP structure in crash situations was proposed. Failure of the aluminum and the CFRP was observed at the corners of the aluminum and at the center of the CFRP. From the force-displacement graphs, it was confirmed that the load and the stiffness of the structure decreased slightly due to the failure of the CFRP as well as the debonding between the aluminum and the CFRP.
(2) A finite-element analysis model was constructed by selecting material models suitable for the characteristics of the aluminum/CFRP joint parts. The material model MAT54 in LS-DYNA was employed to simulate the failure of the CFRP in a practical design process, since it requires simple input parameters. For the aluminum, the commercial material model MAT24 was used to reflect the elastic-plastic behavior. Fracture toughness tests were performed for the material models of the structural adhesive; the obtained values were 2.010 kJ/m² for Mode I and 7.666 kJ/m² for Mode II.
(3) Modeling techniques for structural adhesives between the dissimilar materials (aluminum and CFRP) were proposed. The two adhesive modeling techniques are particularly well suited for numerical analyses of adhesive joints in large structures, since they provide a compromise between accuracy and computational cost. A crash analysis was performed to verify the reliability of the structural adhesive modeling techniques; the results of the two adhesive modeling techniques were similar in the crash simulation.
(4) To study the effect of mesh size, analyses were carried out for element sizes of 1 mm, 2 mm, and 5 mm. A mesh size of ≤2 mm is necessary to obtain converged solutions. The simulation results for the coarse 5 mm mesh significantly overpredicted the experimental results; moreover, no decrease in the stiffness of the aluminum/CFRP component could be observed, because the CFRP did not fail in the coarse-mesh simulation.
(5) The finite-element analysis results were compared and analyzed to confirm the impact strength of the aluminum/CFRP hybrid adhesive joint parts and the effect of the adhesive fracture toughness values on adhesive failure. The numerical results showed that the adhesive plays a critical role in maintaining the structural stiffness of the component in a crash situation when composite materials with relatively high stiffness are used as reinforcement in dissimilar-material parts such as aluminum and CFRP.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are available upon request from the corresponding author.
"Engineering",
"Materials Science"
] |
Scrambling and Recovery of Quantum Information in Inhomogeneous Quenches in Two-dimensional Conformal Field Theories
We study various quantum quench processes induced by the Möbius/sine-square deformation of the Hamiltonian in two-dimensional conformal field theories, starting from the thermofield double state in the two copies of the Hilbert space. These quantum quenches, some of which are directly related to the operator entanglement of the time-evolution operators, allow us to study scrambling and recovery of quantum information. In particular, under the SSD time evolution, we show from the time dependence of mutual information that the Bell pairs, initially shared by the subsystems of the two Hilbert spaces, may revive even after the mutual information for small subsystems is completely destroyed by the quantum information scrambling dynamics. This mutual information is robust against the strong scrambling dynamics. As a consequence, the steady state has a non-local correlation shared not by any two parties but by three parties. In the holographic dual description, a wormhole connecting the two Hilbert spaces may grow non-linearly with time during the quantum quenches. We also propose effective pictures that describe the dynamics of mutual information during the time evolution by inhomogeneous Hamiltonians.
Non-equilibrium dynamics in quantum many-body systems is a subject of intense research. One of the recurrent themes is how quantum entanglement is generated and propagates during non-equilibrium processes. It has been shown that complex ("chaotic") quantum many-body systems can scramble quantum information non-locally. Quantum information scrambling entails the loss of the information of initial states, at least locally, and results in thermalization [1,2,3,4,5]. Experimental techniques to measure scrambling in laboratories have rapidly been developed in the past few years (e.g., [6,7,8,9,10,11,12,13,14,15,16,17,18,19]). Non-equilibrium dynamics in the context of (1+1)-dimensional conformal field theory (CFT) has been widely studied in recent years [5,20,21,22,23,24,25,26,27,28]. In particular, recent works constructed a series of solvable models of quantum quench and Floquet dynamics in (1+1)-dimensional CFT using a class of inhomogeneous Hamiltonians. These works provide rare examples where the dynamics of interacting many-body quantum systems can be solved exactly. The inhomogeneous Hamiltonians used in these works include, in particular, the so-called sine-square deformation (SSD) and the Möbius deformation of (1+1)-dimensional quantum many-body systems. In these deformations, the evolution operators are given as a linear superposition of three Virasoro generators (L_0, L_{±1}), which form an sl(2,R) subalgebra of the Virasoro algebra. Besides being exactly solvable, these quantum quench and Floquet dynamics exhibit rich behaviors, such as dynamical "phase transitions" that separate heating and non-heating behaviors during the time evolution [29,30,31,32,33,34,35,36,37,38].
Ref. [31] studied quantum quench problems in 2d CFT using these inhomogeneous Hamiltonians, starting from the Gibbs state as the initial state. One of the main findings of Ref. [31] is that the time evolution generates an inhomogeneous temperature profile. In particular, when the inhomogeneous post-quench Hamiltonian is the SSD Hamiltonian, it heats up a spatial sub-region near the point where the Hamiltonian density vanishes, while it cools down the rest of the system. (The idea of using inhomogeneous Hamiltonians to prepare low-temperature states has been explored also outside the Möbius/sine-square deformation; see, for example, [39,40,41,42,43,44].) This heating process results in a local excitation that carries (almost) the entire entropy of the system, which we call a black-hole-like excitation (B.H.-like excitation). On the other hand, in the cooled region, non-local quantum correlations emerge under the inhomogeneous time evolution. The SSD Hamiltonian can thus be used to "simulate" the formation of a black hole.
In this paper, we further study inhomogeneous deformations of the CFT Hamiltonian and the associated non-equilibrium dynamics. To be concrete, we will discuss three setups presented in Section 2. All these processes are quantum quenches starting from the thermofield double (TFD) state defined on two copies of the Hilbert space, H 1 and H 2 .
There are three motivations for studying these setups (roughly one for each setup). First, in the previous works almost all properties discussed (cooling/heating, the formation of B.H.-like excitations) are universal in the sense that they depend only on conformal symmetry. Little is known about the effects of the inhomogeneous deformations on the details of theories and on quantum information scrambling.⁶ Different CFTs can exhibit different kinds of dynamics, e.g., integrable, chaotic, or something in between. For these dynamics, effective descriptions have been developed: the quasi-particle picture for integrable dynamics and the membrane (line-tension) picture for chaotic and holographic dynamics. As discussed in [46,47], quantum information scrambling can be detected by studying operator entanglement. In particular, the operator entanglement for undeformed CFT time-evolution operators was previously discussed in [47]. In this work, we study the effect of the inhomogeneous temperature profile and B.H.-like excitations on quantum information scrambling.
In the quantum quench setups starting from the TFD state, we will study bipartite and tripartite mutual information between subsystems in H 1 and H 2 which measures operator entanglement in disguise.
Second, by considering two-step time-evolution operators, we discuss the recovery of quantum information. In the past decades, information retrieval from a black hole has received considerable attention [48,15,49,50]. In the setups considered in these works, quantum information is thrown into a black hole, scrambled in its interior, and then emitted as Hawking radiation, and these works investigated efficient ways of retrieving the quantum state from the emitted Hawking radiation. Investigating information retrieval from typical states, i.e., states in which information is scrambled, should lead to a deep understanding of quantum thermalization and black hole dynamics. Our setups, which use inhomogeneous time-evolution operators in 2d CFT, are rather different from those considered in the above works, where quantum-information-theoretic models were studied. Nevertheless, we will demonstrate the recovery of quantum information in our setups: if we start from the TFD state or a typical state and then evolve the system with the SSD Hamiltonian acting on a single Hilbert space, then the mutual information between the subsystems in the different Hilbert spaces, H_1 and H_2, locally returns to its initial value. (Here, in our setups, the time-evolution operator acts solely on H_1.) From this recovery of mutual information, we can see that the Bell pairs initially shared by the subsystems of H_1 and H_2 may be revived during the SSD time evolution. This recovered correlation may be robust against the scrambling effect of 2d holographic CFTs. Furthermore, under the evolution induced by the uniform holographic Hamiltonian, when the subsystems do not include the so-called fixed points, the system can develop a genuine tripartite correlation, i.e., a non-local correlation shared by three parties but not by any two parties alone.
Finally, we are also interested in the dynamics of B.H.-like excitations. In Setup 3 presented in Section 2, we once again consider two-step time-evolution where the first step creates a pair of B.H.-like excitations while the second step induces non-trivial dynamics thereof.
We back up the above analyses for the specific setups by developing an effective description of the entanglement dynamics. In particular, we develop the line-tension picture for inhomogeneous time evolution. We also develop the holographic bulk description of these inhomogeneous quenches by keeping track of the spatiotemporal deformations of the bulk black hole horizon. Finally, we also discuss the wormhole connecting the two Hilbert spaces. Due to the non-trivial dynamics of the B.H.-like excitations, the size of the wormhole exhibits an oscillatory growth.
The rest of the paper is organized as follows. In Section 2, we describe the inhomogeneously-deformed Hamiltonians in 2d CFT, the three setups considered in this paper, and the measures of entanglement of interest. In Sections 3 and 4, we present the time dependence of mutual information under the evolution by the inhomogeneous Hamiltonians, starting from the thermofield double and typical states. In the following three sections, we report the time dependence of the entanglement measures in the three setups. In Section 5, we report the time dependence of entanglement entropy and mutual information when we start from the thermofield double state, evolve the system with the SSD Hamiltonian, and then subsequently evolve it with the uniform Hamiltonian. In Section 6, we propose an effective model that describes the operator entanglement hydrodynamics of the Möbius/SSD time-evolution operators. In Section 7, we report the dual geometries of the systems considered in this paper, and also present the growth of wormholes. Finally, in Section 9, we discuss possible applications of our results to experiments and comment on a few future directions.
Preliminaries
In this section, we describe the inhomogeneously-deformed Hamiltonian, the setups of our interest, and the measures of entanglement considered in this paper.
In these deformations, we naturally identify two special locations on the spatial circle, x = 0 ≡ X_f^1 and x = L/2 ≡ X_f^2. Being the minimum and maximum of the envelope functions, these points are where we expect the effect of the envelope functions on the quantum dynamics to be most significant. We will soon show that these points play special roles under the inhomogeneous time evolution by looking at various quantities, such as the Heisenberg time evolution of operators.
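As a concrete sketch of the envelope functions, the standard choice in this literature (quoted here as an assumption, since the paper's own equations are not reproduced above) is f_Möbius(x) = 1 − tanh(2θ) cos(2πx/L), which reduces to f_SSD(x) = 2 sin²(πx/L) in the limit θ → ∞; f_SSD vanishes at X_f^1 = 0 and is maximal at X_f^2 = L/2, and f_Möbius has its minimum and maximum at the same two points.

```python
import numpy as np

def f_mobius(x, L, theta):
    """Mobius envelope f(x) = 1 - tanh(2*theta)*cos(2*pi*x/L):
    theta = 0 gives the uniform Hamiltonian, theta -> infinity the SSD."""
    return 1.0 - np.tanh(2.0 * theta) * np.cos(2.0 * np.pi * x / L)

def f_ssd(x, L):
    """SSD envelope f(x) = 2*sin^2(pi*x/L) = 1 - cos(2*pi*x/L); it vanishes
    at x = 0 (X_f^1) and is maximal at x = L/2 (X_f^2)."""
    return 2.0 * np.sin(np.pi * x / L) ** 2

L = 1.0
x = np.linspace(0.0, L, 5)
print(f_ssd(x, L))                                     # ~ [0, 1, 2, 1, 0]
print(np.allclose(f_mobius(x, L, 20.0), f_ssd(x, L)))  # True: SSD limit
```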
For the bulk of the paper, we mainly focus on the Möbius and SS deformations. The details of the analysis and the calculations of the entanglement dynamics under the CSD time evolution are presented in Appendix C.
The systems evolved with the inhomogeneously-deformed Hamiltonians
We consider the following three setups in this paper. In all setups, we start from the thermofield double (TFD) state, |TFD⟩ = N Σ_a e^{−εE_a} |a⟩_1 |a⟩_2, where H_i and |a⟩_{i=1,2} denote the undeformed 2d CFT Hamiltonian acting on H_i and its eigenstates, respectively. The regulator ε is half of the inverse temperature, ε = β/2, and the square of the normalization factor N guarantees that ⟨TFD|TFD⟩ = 1. In Setup 1, we evolve the TFD state with the Möbius/SS-deformed Hamiltonian acting only on H_1, |Ψ(t_1)⟩ = (e^{−it_1 H^1_{Möbius/SSD}} ⊗ 1_2)|TFD⟩ (2.4). We will mainly work with holographic CFTs, i.e., CFTs that admit holographic dual descriptions; however, we also study the 2d free fermion CFT as a representative of non-chaotic (integrable) CFTs and make comparisons between the two. The TFD state was previously used as a "convenient" initial condition in quantum quench problems [41]. The TFD state is a short-range entangled state and can be considered as a ground state of a gapped Hamiltonian [66,67]. Our setups are hence in a similar spirit to the seminal work by Calabrese and Cardy on quantum quenches in 2d CFTs [5,20].
Setup 2: In the second setup, we once again start from the TFD state, and consider the two-step time evolution first by e^{−it_0 H_0} and then by e^{−it_1 H_{Möbius/SSD}}, both acting on H_1. Here, the first time evolution can be interpreted as creating an excited state, which is then time-evolved during the second step. Setup 3: Finally, in the third setup, we exchange the ordering of the two time-evolution operators in Setup 2. Let us now elaborate on the motivations for studying these setups and provide an overview of our results.
-We first note that, in addition to the interpretation as a quantum quench, we can give an interpretation of these states (and of the entanglement measures for these states) from the perspective of operator entanglement. Consider, for a unitary time-evolution operator U_unitary, an effective time-evolution operator U_effective = U_unitary e^{−εH_0}. Using the channel-state map [46,68], define the dual state of U_effective as a state on the doubled Hilbert space, where |·⟩* is the CPT conjugate of |·⟩, and |a⟩_i is an eigenstate of H_i.⁷ The unitary time-evolution operator acts only on H_1. The dynamical properties of U_effective are represented by the entanglement structure of the dual state. Thus, the above states can be interpreted as the dual states of the effective time-evolution operators. In particular, by considering the state (2.4) and its entanglement structure, we can discuss the operator entanglement of the Möbius/SS-deformed time-evolution operator and the effect of the inhomogeneous deformation on quantum information scrambling. For the case of the regular, homogeneous Hamiltonian H_0 of 2d CFTs, the operator entanglement and quantum information scrambling were studied in [47].
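As a minimal finite-dimensional sketch of this channel-state map, consider a two-qubit unitary in place of the CFT evolution operator (a toy analogue under our own simplifying assumptions, not the paper's setup): the dual state of U is the normalized vectorization of U, and the BMI of Eq. (2.9) between an output qubit A and an input qubit B can be computed directly from its Schmidt spectra.

```python
import numpy as np

def entropy(psi, keep):
    """von Neumann entropy of the reduced state on the axes in `keep`,
    for a pure state given as a tensor with one axis per qubit."""
    axes = list(keep) + [ax for ax in range(psi.ndim) if ax not in keep]
    m = np.transpose(psi, axes).reshape(2 ** len(keep), -1)
    p = np.linalg.svd(m, compute_uv=False) ** 2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

def bmi_of_unitary(U):
    """I(A,B) = S_A + S_B - S_{A u B} for the dual state |U> ~ vec(U)/2,
    with A = first output qubit and B = first input qubit."""
    psi = U.reshape(2, 2, 2, 2) / 2.0      # axes: (a1, a2, b1, b2)
    return entropy(psi, [0]) + entropy(psi, [2]) - entropy(psi, [0, 2])

print(bmi_of_unitary(np.eye(4)))           # identity: 2*log(2) ~ 1.386
rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U_haar, _ = np.linalg.qr(M)                # Haar-random (scrambling) unitary
print(bmi_of_unitary(U_haar))              # typically well below 2*log(2)
```

The identity leaves the initial Bell pair between matched input and output qubits intact (BMI = 2 log 2), while a generic unitary delocalizes it, a toy analogue of the early-time decay of I_{A,B} discussed below.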
-In Setups 2 and 3, we have two-step time-evolution operators. In Setup 2, the first time evolution under H_0 is expected to scramble quantum information (for holographic CFTs). Our interest here is the effect of the second time evolution on the scrambled information. As we will see, the SSD evolution recovers the non-local correlation between the subsystems A and B in H_1 and H_2 when the subsystem A includes X_f^1. Namely, by the SSD evolution, we can retrieve the information from the typical state, the state in which the information is fully scrambled. The motivation for Setup 2 is thus in line with information retrieval from a black hole [48,15,49,50]. In these works, quantum information is first thrown into a black hole, scrambled in its interior, and then emitted as Hawking radiation, and efficient ways of retrieving the quantum state from the emitted Hawking radiation were investigated.
-In Setup 3, the first part of the two-step time evolution (with the SSD Hamiltonian on H_1) can be interpreted as preparing a pair of black-hole-like excitations (B.H.-like excitations). Here, the first step of the time evolution is the same and still creates a pair of B.H.-like excitations. The second time evolution is, however, given by H_CSD instead of H_0, whose envelope-function profile is complementary to that of H_SSD. The details of the entanglement dynamics for (2.8) are presented in Appendix C.
Entanglement entropies and the twist operator formalism
Entanglement entropies, bipartite and tripartite mutual information
The main quantities of interest in this paper are the entanglement entropies for various subsystems, as well as the bipartite and tripartite mutual information (BMI and TMI, respectively). Below, we consider a subsystem (sub-Hilbert space) of H_2, which we call A. Similarly, we consider a sub-Hilbert space B of H_1. When discussing TMI, we consider two subsystems of H_1, denoted B_1 and B_2. More specifically, subsystem A is a spatial interval with its left and right ends located at X_1 and X_2, and, similarly, B is an interval with its left and right ends located at Y_1 and Y_2, where 0 < X_2 < X_1 and 0 < Y_2 < Y_1. (B_1 and B_2 are also intervals; their geometries are specified in the following.) Starting from the total density matrix |Ψ⟩⟨Ψ|, we consider the reduced density matrix ρ_V (V = A, B, A∪B, ...), and the corresponding von Neumann and Rényi entropies, denoted S_V and S_V^{(n)}, respectively.
Bipartite mutual information (BMI) for A and B is defined as a linear combination of the entanglement entropies, I_{A,B} = S_A + S_B − S_{A∪B}. (2.9) We note that I_{A,B} is independent of the lattice spacing: since the universal pieces of the entanglement entropies cancel out, I_{A,B} depends only on the non-universal pieces of these entropies.
To define tripartite mutual information (TMI), we consider three subsystems A, B_1, and B_2. The TMI for A, B_1, and B_2 is then defined as a linear combination of BMIs, I_{A,B_1,B_2} = I_{A,B_1} + I_{A,B_2} − I_{A,B_1∪B_2}. (2.10) As in [69,47,70,71,72,73,74,75,76], the TMI for operator entanglement can serve as a measure of scrambling. The time dependence of the TMI may detect how the Bell pairs initially shared by A and H_1 are delocalized and become non-locally hidden in H_1 under the time evolution.
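Continuing the two-qubit toy sketch above (and reusing its `entropy` helper and `U_haar`), Eq. (2.10) can be evaluated for the dual state; a vanishing TMI for the identity and a negative TMI for a Haar-random unitary reproduce the standard diagnostic of scrambling. This is a hypothetical illustration, not a computation from the paper.

```python
def tmi_of_unitary(U):
    """I(A, B1, B2) of Eq. (2.10) for the dual state of a two-qubit U,
    with A = first output qubit and B1, B2 = the two input qubits."""
    psi = U.reshape(2, 2, 2, 2) / 2.0      # axes: (a1, a2, b1, b2)
    def bmi(bx):
        return entropy(psi, [0]) + entropy(psi, bx) - entropy(psi, [0] + bx)
    return bmi([2]) + bmi([3]) - bmi([2, 3])

print(tmi_of_unitary(np.eye(4)))           # 0.0: the identity does not scramble
print(tmi_of_unitary(U_haar))              # typically negative: scrambling
```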
Parameter regimes of interest
For the bulk of the paper, we are interested in the above entanglement quantities in the coarse-grained regime, which is defined as follows. Let V̂ denote the subsystem consisting of spatial intervals, and let L̂, l̂_V, â, ε̂, and t̂ denote the system size, the subsystem size, the lattice spacing, the regularization parameter that guarantees that the states considered in this paper have unit norm, and the times associated with the Hamiltonians considered. Here, a hatted symbol denotes a dimensionful parameter, and the corresponding unhatted symbol is the dimensionless one obtained by dividing by â. In the following, we will use only dimensionless parameters. The parameter region considered is the hierarchy of these dimensionless scales that defines the coarse-grained regime. The interest in this regime comes from the expectation that in it we can potentially use effective descriptions of entropy propagation, such as the quasiparticle picture or the line-tension (membrane) picture.
Path integral formulation and twist operators
To develop the path-integral formalism, let us define the Euclidean density operators as in (2.12), where N_E^{−2} = tr e^{−2εH_0} guarantees that tr ρ_E = 1. These density operators are obtained from the ones defined in Section 2.2 by analytic continuation to imaginary time. Here, the Euclidean evolution operator is given, depending on the setup, by the Euclidean counterpart of the corresponding evolution operator above. We now define the reduced Euclidean density operators for V as ρ_{E,V} = tr_{V̄} ρ_E, where V̄ denotes the complement of V. Let us define the Euclidean entanglement entropy associated with ρ_{E,V} as the von Neumann entropy of this reduced density matrix. Thus, in the von Neumann limit n → 1, the n-th Rényi entropy, S^{(n)}_{E,V} = (1/(1−n)) log tr_V (ρ_{E,V})^n, reduces to the Euclidean entanglement entropy. In the path-integral formalism, S^{(n)}_{E,V} is computed from Z_n, the partition function on an n-sheeted geometry defined by sewing the copies of V together in a cyclic fashion, as in [77,78]. At the end of the calculations, we analytically continue τ_{i=0,1,2} to it_{i=0,1,2} to obtain the time evolution of the entanglement entropies. With this procedure in mind, from now on we drop the subscript "E" and simply write S_{E,V} → S_V.
To compute S_V, we employ the twist-operator formalism, in which tr_V (ρ_V)^n is given by a 2m_V-point function of twist and anti-twist operators inserted on the torus, where V is composed of m_V intervals. Consequently, the Rényi entropies can be expressed in terms of ⟨···⟩_{2ε}, the expectation value on the thermal torus whose thermal and spatial circumferences are 2ε and L, respectively. The complex coordinate is defined as (w_x, w̄_x) = (ix, −ix), and h_n = c(n² − 1)/(24n) denotes the conformal dimension of the twist and anti-twist operators. Using the operator identities, the twist operators evolved by Ũ_1 in (2.16) can be regarded as operators in the Heisenberg picture; the Euclidean-time evolution of the twist and anti-twist operators in the Heisenberg picture is presented in Appendix A. During the evolution by U^1_E e^{−εH_0}, the location of the operators is mapped to (w^New_{x,ε}, w̄^New_{x,ε}). As a consequence, S^{(n)}_V splits into a piece that is independent of the details of the 2d CFT, which we call the universal piece, and the two- and four-point functions of the twist fields on the torus, which depend on the details of the 2d CFT and which we call the non-universal pieces. The variables w^New_{x,ε} and w̄^New_{x,ε} depend on the imaginary times τ_{i=0,1,2}; after we analytically continue τ_{i=0,1,2} to it_{i=0,1,2}, only the imaginary parts of w^New_{x,ε} and w̄^New_{x,ε} depend on these real times. In other words, during the evolution by U^1_E e^{−εH_0}, the twist and anti-twist operators move spatially with time, as detailed in Appendix A.1. Under the evolution by H_SSD/CSD, primary operators at x = X_f^1 = 0 or x = X_f^2 = L/2 do not move spatially; we therefore call X_f^1 and X_f^2 fixed points.
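As a quick symbolic consistency check of the twist-field dimension h_n = c(n² − 1)/(24n): in the simplest setting of a single interval on an infinite line at zero temperature (simpler than the thermal torus correlators used here, so this is an illustration rather than the paper's computation), the n → 1 limit of the Rényi entropy reproduces the familiar von Neumann result.

```python
import sympy as sp

n, c, l, a = sp.symbols('n c l a', positive=True)
h_n = c * (n**2 - 1) / (24 * n)            # twist-field conformal dimension
# tr(rho^n) ~ (l/a)^(-4*h_n) for a single interval on the line at T = 0
S_n = sp.simplify(-4 * h_n * sp.log(l / a) / (1 - n))
print(S_n)                                  # c*(n + 1)*log(l/a)/(6*n)
print(sp.limit(S_n, n, 1))                  # c*log(l/a)/3, the von Neumann limit
```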
Non-universal pieces in 2d holographic CFTs
Let us have a closer look at the non-universal pieces of the entanglement entropy for the single and double intervals in 2d holographic CFTs. To compare the results on 2d holographic CFTs with the ones in the 2d free fermion CFT, we also calculated the non-universal pieces in the free fermion CFT. The results and calculations for the free fermion CFT are reported in Appendix D.1.
Single interval
Here, we present the non-universal piece of the entanglement entropy for a single interval in the coarse-grained regime. In this regime, the gravity dual of the system on the torus is the BTZ black hole [79]. Therefore, in the von Neumann limit n → 1, the non-universal piece is given by the geodesic length in the BTZ black hole [80,81]. Let V denote the subsystem, and let v_1 and v_2 denote the endpoints of V, with v_1 > v_2 > 0. The non-universal piece of the entanglement entropy for the reduced density matrix associated with V is then given holographically by the corresponding geodesic length.
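The paper's finite-size expression is elided above; for orientation only, the familiar single-interval result dual to a geodesic in the planar BTZ geometry at inverse temperature 2ε (a standard formula, up to a non-universal constant and finite-size corrections, and not necessarily the exact torus expression intended here) reads

```latex
S_V \;\simeq\; \frac{c}{3}\,
\log\!\left[\frac{2\varepsilon}{\pi a}\,
\sinh\!\left(\frac{\pi\,(v_1 - v_2)}{2\varepsilon}\right)\right].
```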
Double intervals
Let us turn to the non-universal piece of the entanglement entropy for a union of double intervals. In 2d holographic CFTs, the non-universal piece for a pair of intervals is given by the smaller of two contributions, S_dis and S_con, where S_dis is determined by the lengths of the geodesics that connect the endpoints of the intervals on the same Euclidean time slices, while S_con is determined by those of the geodesics connecting points on different Euclidean time slices. Some details of S_dis and S_con are reported in Appendix B.1. The Euclidean temporal and spatial locations, τ^New_{x,ε} and X^New_{x,ε}, of the endpoints are defined in (2.21).
Setup 1
Let us now turn to the analysis of the time-dependence of BMI and TMI in Setup 1, (2.4).
One of our main findings is summarized in Fig. 2, where we plot the BMI as a function of time for various choices of θ. This plot should be compared with, e.g., Fig. 11 in Ref. [47], where the BMI (or bipartite operator mutual information) of the regular, homogeneous time-evolution operator for holographic CFTs was studied. Interestingly, we find a threshold value of θ that separates the two types of behavior of the BMI presented in the left and right panels of Fig. 2, respectively. We also compare holographic CFTs with the free fermion CFT, which is described by the quasiparticle picture.
Analysis of the geodesic length
We first discuss the time dependence of the geodesics corresponding to the non-universal pieces of S_A, S_B, and S_{A∪B} in the Heisenberg picture. For simplicity, let us suppose that the center of B is at x = X_f^1. The twist and anti-twist operators associated with ρ_A are stationary, so that in the coarse-grained regime the entanglement entropy is approximated by a stationary constant, where l_A is the subsystem size of A. Let us look closely at the time dependence of the non-universal pieces of S_B and S_{A∪B}. The twist and anti-twist operators associated with B evolve under H_{Möbius/SSD} and periodically move between the two fixed points x = X_f^1 and x = X_f^2 with period L cosh 2θ. In the SSD limit θ → ∞, the oscillation disappears, and these operators move asymptotically toward one of the fixed points, x = X_f^2. The traveling speed of these operators depends on their locations and on θ. Following the time evolution of the twist and anti-twist operators, the size of the subsystem associated with them grows and shrinks with time; consequently, the geodesic length associated with this subsystem increases and decreases. For the non-universal piece of S_{A∪B}: in the small-t_1 regime it may be given by the lengths of the geodesics connecting the endpoints of A and B, S_con, while in the large-t_1 regime it may be given by the lengths of the geodesics connecting the endpoints on the same Euclidean time slices, S_dis. Therefore, for large t_1, the non-universal pieces of S_B and S_{A∪B} may be determined by the lengths of the geodesics connecting the endpoints of the subsystems on the same Euclidean time slices, as in Fig. 1. More specifically, in this t_1 regime, S_dis is given either by a contribution Ŝ_1 or by a contribution Ŝ_2, where Ŝ_2 is the same as the non-universal piece of S_B in this time regime. Which of these contributions, Ŝ_1 and Ŝ_2, is dominant depends on θ, and there is a threshold value θ_C separating the two cases. In the small-θ regime, 0 ≤ θ ≤ θ_C, S_dis is given by Ŝ_2, so that for small θ but large t_1, I_{A,B} is zero. On the other hand, in the large-θ regime, θ_C < θ, S_dis is given by Ŝ_1. In this time regime, S_A and S_B are approximated by (3.1) and its analogue for B, respectively, so that I_{A,B} is approximated by a nonzero value. The critical value θ_C separating these two cases depends on Y_{i=1,2}, X_{i=1,2}, and L, and can be determined as follows. Suppose that σ_n(w^New_{Y_2,ε}, w̄^New_{Y_2,ε}) moves with time between x = X^Nearest_{Y_2} and x = X^Furthest_{Y_2}. If θ becomes larger, then X^Nearest_{i=Y_1,Y_2} gets closer to X_f^2. Let t_{1,Max} denote the time at which the effective size of B reaches its maximum; this time depends on θ, Y_1, and Y_2. Let θ_C denote the value of the inhomogeneous parameter at which, at t_1 = t_{1,Max}, the two contributions Ŝ_1 and Ŝ_2 exchange dominance.
[Figure 1: (a) the geodesics associated with B in H_1; (b) the geodesics associated with A in H_2. For 0 ≤ θ ≤ θ_C, the non-universal piece of S_{A∪B} is given by the orange dashed line, while for θ_C < θ it is given by the purple dotted lines; the red arrow illustrates the growth of X^Nearest.]
In the large-t_1 regime under the SSD evolution, when the non-universal piece of S_{A∪B} is given by that of S_A + S_B with S_B approximated by the entanglement entropy of the vacuum state, I_{A,B} is zero at late times. This means that the reduced density matrix on A ∪ B approximately factorizes as ρ_{A∪B} ≈ ρ_{Thermal,A} ⊗ ρ_{Vacuum,B}, where ρ_{Thermal,A} is the reduced density matrix of a thermal state at inverse temperature 2ε for subsystem A, and ρ_{Vacuum,B} is the reduced density matrix of the vacuum state for subsystem B.
The θ- and position-dependence of I_{A,B}
The behavior of the geodesics and the time evolution of the subsystems in the Heisenberg picture described above translate directly into the time dependence of I_{A,B}. In Fig. 2, we plot I_{A,B} for various choices of θ as a function of t_1. In this plot, the center of B is at x = X_f^1. The solid lines illustrate the time dependence of I_{A,B} for A centered at x = X_f^1, while the dashed line illustrates that for A centered at x = L/4. In Fig. 2(a), we show the time dependence of I_{A,B} in the small-θ region, 0 ≤ θ ≤ θ_C, while in (b) we show that in the large-θ region, θ_C < θ. As discussed in Section 3.1, in the late-time regime, I_{A,B} for 0 ≤ θ ≤ θ_C is practically zero, while that for θ_C < θ becomes positive. For 0 ≤ θ ≤ θ_C, I_{A,B} monotonically decreases with t_1 up to t_{1,*}, and is practically zero thereafter. Here, t_{1,*} is the transition time at which S_con exchanges dominance with S_dis. The details of the early-time decay depend on θ: for larger θ, the early-time decay is slower (t_{1,*} is bigger). This behavior for θ < θ_C is similar to what was found for the bipartite operator mutual information of the regular homogeneous time-evolution operator of holographic CFTs in Ref. [47].
On the other hand, the behavior for $\theta_C < \theta$ is markedly different. Except in the SSD limit, $I_{A,B}$ first monotonically decreases with $t_1$ up to $t_{1,*}$, and then oscillates with period $L\cosh 2\theta$. For larger $\theta$ (closer to the SSD limit), the early-time decay is slower, and $I_{A,B}$ at $t_1 = t_{1,*}$ is larger. In the SSD limit, after $t_1 = t_{1,*}$, $I_{A,B}$ grows with $t_1$ and saturates to a value proportional to the size of $A$. We will revisit this behavior in Sec. 6 by developing the line-tension (membrane) picture for inhomogeneous chaotic time-evolution operators.
Let us turn to the analysis of the position dependence of $I_{A,B}$. For simplicity, we consider the SSD limit with $l_A = l_B$ and $P_{C,A} = P_{C,B} = P_C$. From the time dependence of $I_{A,B}$ we can see how scrambling may destroy the non-local correlation between $A$ and $B$, as well as the times at which $\rho_{A\cup B}$ may approximately factorize into $\rho_A \otimes \rho_B$, cf. (3.6). In Fig. 3, we depict $I_{A,B}$ for various $P_C$ as a function of $t_1$, taking $P_C$ to be $\frac{L}{4}$ and $\frac{L}{2}$. From the time dependence of $I_{A,B}$ in Fig. 3, we can see that as $\theta$ becomes larger, the early-time decay of $I_{A,B}$ for $P_C = \frac{L}{2}$ becomes faster, and the time for $\rho_{A\cup B}$ to factorize into $\rho_A$ and $\rho_B$ may become smaller. For $P_C = \frac{L}{4}$, the $t_1$-dependence of $I_{A,B}$ may be independent of $\theta$. One possible explanation is that the inhomogeneous deformation may promote scrambling, destroying the non-local correlation around $P_C = X^2_f$, while it may make scrambling destroy the correlation around $P_C = X^1_f$ more slowly, thereby preventing $\rho_{A\cup B}$ from factorizing into $\rho_A$ and $\rho_B$.
Theory-dependence of I A,B under evolution
We have so far focused on holographic CFTs. However, as one of our motivations is to understand quantum information scrambling behaviors and their theory dependence, we now make a comparison, for the time dependence of $I_{A,B}$, between 2d holographic CFTs and the 2d free fermion CFT.
[Figure 3: $I_{A,B}$ for $\theta = 0$ (dashed line) and in the SSD limit (solid lines) as a function of $t_1$. Here, we choose $l_A = l_B$ and $P_{C,A} = P_{C,B} = P_C$. The solid lines illustrate the $t_1$-dependence of $I_{A,B}$ for $P_C = \frac{L}{4}, \frac{L}{2}$ in the SSD limit. For $\theta = 0$, the time dependence of $I_{A,B}$ is independent of $P_C$.]
First, as we show in Appendix D, for the free fermion CFT with inhomogeneous time evolution, we can establish that its entanglement dynamics is described by the quasiparticle picture, just as in the standard case of homogeneous time evolution. In this picture, the time dependence of $I_{A,B}$ follows the propagation of quasiparticles at speeds set by the envelope function of $H_{\text{Möbius/SSD}}$. Some details of the calculations of $I_{A,B}$ in the 2d free fermion CFT and a detailed description of the quasiparticle picture can be found in Appendix D. The upshot is that the BMI in the 2d free fermion CFT is carried separately by left- and right-moving quasiparticles that move independently of one another. These quasiparticles are localized packets of information, and their number is conserved. In Fig. 4, we plot $I_{A,B}$ in the SSD limit as a function of $t_1$. We see that if the size and center of $A$ are the same as those of $B$, then the time dependence of $I_{A,B}$ in 2d holographic CFTs follows the quasiparticle picture. This is, however, not the case otherwise. We will propose an effective picture that describes the $t_1$-dependence of $I_{A,B}$ in 2d holographic CFTs in Section 6.
Tripartite mutual information
Let us turn to TMI. Suppose that we divide $\mathcal{H}_2$ into $A$ and its complement, and $\mathcal{H}_1$ into $B_1$ and $B_2$, and then define TMI as
$$I_{A,B_1,B_2} = I_{A,B_1} + I_{A,B_2} - I_{A,B_1\cup B_2}.$$
Here, we also assume that $l_A = l_{B_1}$ and $P_{C,A} = P_{C,B} = X^1_f$. Then, the time dependence of $I_{A,B_1}$ is the same as that of $I_{A,B}$ reported in Section 3.1.1, while $I_{A,B_2}$ is independent of $t_1$ and approximately zero. In the coarse-grained regime, $I_{A,B_1\cup B_2}$ is also independent of $t_1$ and approximated by (3.4). During the evolution by $H_{\text{Möbius}}$ with $0 \le \theta \le \theta_C$, $I_{A,B_1,B_2}$ is a stationary constant given by (3.4), while for $\theta_C < \theta$, $I_{A,B_1,B_2}$ is a periodic function of $t_1$ with period $T = L\cosh 2\theta$, ranging between zero and (3.4). In the SSD limit, in the large-$t_1$ regime, $I_{A,B_1,B_2}$ saturates to zero. One possible explanation for this late-time value is that the correlation initially shared by $A$ and $B_1$ may not be scrambled over the whole of $\mathcal{H}_1$, so that the correlation between $A$ and $B_1$ may be revived.
Setup 2
In this section, we present the time dependence of the BMI and TMI for Setup 2, (2.5). Recall that in (2.5) the state is first time-evolved by the homogeneous Hamiltonian and then by the SSD Hamiltonian. In holographic CFTs, the first step of the time evolution scrambles the quantum information of the initial state and produces a typical state (the Page state) [82,83]. Our focus here is the effect of the second step of the time evolution on the scrambled information.
Let us focus on the analysis of the lengths of geodesics corresponding to $I_{A,B}$. Let $V_1$ and $V_2$ denote sub-regions of $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively, and let $l_{V_{i=1,2}}$ denote their sizes. For large $t_0$, $t_0 \gg O(L)$, the 2d holographic CFT Hamiltonian evolves the system to the Page state, so that for all $V_{i=1,2}$ with $\sum_{i=1,2} l_{V_i} < L$, $I_{V_1,V_2}$ should be completely destroyed. Subsequently, we evolve the system with $H_{\text{SSD}}$. In the large-$t_1$ regime, $S_{\text{con}}$ should be larger than $S_{\text{dis}}$. For simplicity, let us assume that $A$ and $B$ include $x = X^1_f$. For $S_{\text{dis}}$ and $S_B$, the shifts by $it_0$ cancel, so that $S_{A\cup B}$ and $S_B$ in this setup are the same as those in Setup 1. Since for small $t_1$ we have $S_{A\cup B} = S_A + S_B$, the early-time $I_{A,B}$ is zero. For large $t_1$, excluding certain $t_1$-regimes, the distance between $X^{\text{New}}_{Y_1}$ and $X^{\text{New}}_{Y_2}$ decreases with $t_1$, so that $I_{A,B}$ may grow with $t_1$ and saturate to (3.4).
[Figure 5 parameters: $(L, \epsilon, P_{C,A}, l_A, t_0) = (100000, 10, 0, 6000, 10L)$.]
In fact, as shown in Fig. 5, for $P_{C,A} = P_{C,B} = 0$ and $l_A = l_B$, even in the large-$t_0$ regime, $I_{A,B}$ grows with $t_1$ and then saturates at the value in (3.4). One possible interpretation of the $t_1$-dependence of $I_{A,B}$ in this figure is that the SSD evolution may recover the non-local correlation between $A$ and the subsystem including $x = X^1_f$, even when the system is in the typical state. In other words, the SSD time evolution is able to recover the mutual information from the typical state.
The above recovery of quantum information is analogous to that discussed in quantum circuit models of quantum information scrambling and black holes, e.g., the Hayden-Preskill thought experiment [84,15], where the authors considered the retrieval of quantum information from a black hole. To make the comparison, we can describe Setup 2 in quantum circuit language as in Fig. 5. Here, in the parameter region considered in this paper (see (2.11)), the TFD state may be approximated by a product of Bell states, $|\text{TFD}\rangle \approx \bigotimes_{x=0}^{L} |\text{Bell}; x\rangle$, where $|\text{Bell}; x\rangle$ denotes the Bell state at spatial location $x$. For example, if the dimension of the local Hilbert space at $x$ is $d$, then a single Bell state is defined by
$$|\text{Bell}; x\rangle = \frac{1}{\sqrt{d}} \sum_{i=1}^{d} |i\rangle_{\mathcal{H}_1, x}\, |i\rangle_{\mathcal{H}_2, x}.$$
Let us divide these Bell pairs into two groups, $G_1$ and $G_2$. Let $R$ and $E$ denote the sub-regions associated with $G_1$ in $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively, while $B$ and $N$ denote the sub-regions associated with $G_2$ in $\mathcal{H}_1$ and $\mathcal{H}_2$. In Fig. 5, $U_{\text{SSD}}$ and $U_{\text{Hol.}}$ denote the time evolution induced by $H_{\text{SSD}}$ and by the uniform holographic Hamiltonian, respectively. The process under the dashed line is the same as the one considered in the Hayden-Preskill thought experiment. If we interpret $U_{\text{SSD}}$ as a unitary decoder, the location where this decoder acts differs from that discussed in the Hayden-Preskill thought experiment. Therefore, it would be interesting to consider information retrieval in a system where $U_{\text{SSD}}$ acts on $E$ and $R$. This is left for future work.
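To make the Bell-pair approximation concrete, here is a minimal Python sketch (an illustration, not the paper's computation) that builds a single Bell state of local dimension d, traces out each side, and checks that the mutual information between the two sides takes the maximal value $2\log d$:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log rho), dropping numerically zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

d = 4
# |Bell> = (1/sqrt(d)) sum_i |i>_1 |i>_2, stored as a d x d coefficient matrix.
psi = np.eye(d) / np.sqrt(d)

rho_1 = psi @ psi.conj().T   # reduced state on H_1: maximally mixed, 1/d
rho_2 = psi.conj().T @ psi   # reduced state on H_2: maximally mixed, 1/d

# The pair is pure, so S_12 = 0 and I(1:2) = S_1 + S_2 = 2 log d.
I_12 = von_neumann_entropy(rho_1) + von_neumann_entropy(rho_2)
print(I_12, 2 * np.log(d))   # both approximately 2.7726 for d = 4
```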
Setup 3
In this section, we study the entanglement dynamics of the state (2.6). Here, the first part of the two-step time evolution (with the SSD Hamiltonian acting on $\mathcal{H}_1$) creates B.H.-like excitations (Fig. 7). As we will show below, the propagating B.H.-like excitations lead to periodic behavior of the entanglement quantities. Furthermore, in this setup, the system acquires genuine tripartite entanglement due to the strong scrambling effect of the dynamics. By contrast, in the 2d free fermion CFT, the B.H.-like excitations are merely clusters of quasiparticles, and no such tripartite entanglement arises.
Entanglement entropy
Let us first study $S_B$. In particular, we present the $t_0$-dependence of $S_B$ in three cases. In Fig. 6, we plot $S_B$ for various $t_1$ as a function of $t_0$. The $t_0$-dependence of $S_B$ is periodic with period $L$. This periodic behavior follows from the evolution of the twist and anti-twist operators reported in Appendix A.1. The larger $t_1$ is, the larger the amplitude of the oscillation of $S_B$, and the further the system deviates from the typical state. The time dependence of $S_B$ for a single interval can be understood via the quasiparticle picture (we provide the details in Appendix D). For $t_1 \gg 1$, the $t_0$-dependence of $S_B$ is approximated by (5.1), where $n$ is an integer.
[Figure 6: $S_B$ for various $t_1$ as a function of $t_0$. The subscript in $(a_{i=1,2})$ distinguishes the small- and large-$t_1$ regimes (top and bottom rows, respectively). The dashed line illustrates the asymptotic behavior of $S_B$ in (5.1) in the large-$t_1$ limit.]
The periodic behavior (5.1) can be understood from the relativistic propagation of two local objects that carry a huge amount of information, i.e., the B.H.-like excitations. Here, we introduce an effective model that describes the time evolution of $S_B$ induced by $H_0$ in the large-$t_1$ regime; it captures the leading behavior of $S_B$ in the coarse-grained regime. At $t_1 = t_0 = 0$, in the coarse-grained regime, the leading behavior of the TFD state (2.3) may be approximated by a product of Bell pairs, where $\bar{x}$ denotes the coarse-grained position label, and $|\text{Bell}; \bar{x}\rangle_{L,R}$ denote the Bell pairs consisting of the two quasiparticles at $\bar{x}$ in $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively.
Bipartite and tripartite mutual information, and genuine tripartite entanglement
In the previous sections, we developed an effective picture in terms of B.H.-like excitations to describe the time evolution of the entanglement entropy for a single interval; this behavior is universal for any CFT. We now generalize it to the time evolution of the BMI and TMI, where the distinction between integrable theories (e.g., the free fermion theory) and chaotic theories (holographic theories) becomes important. In Fig. 9, we plot $I_{A,B}$ as a function of $t_0$ for various choices of $t_1$, $A$, and $B$. For the configurations of $A$ and $B$, we consider three cases. For (b), we assume that $A$ and $B$ are disjoint intervals for simplicity; then $I_{A,B}$ is approximately zero. For (a) and (c), for large $t_1$, $I_{A,B}$ is approximated by a periodic function of $t_0$ with period $L$. The dashed lines in Fig. 9 illustrate these asymptotic behaviors. In cases (a) and (c), there are $t_0$-regimes where both B.H.-like excitations are in $B$, and in these regimes $I_{A,B}$ is approximated by $\frac{c\pi l_A}{3}$, while in (b) there are no such $t_0$-regimes. In Fig. 10, we take $Y_1 > Y_2 > X_1 > X_2 > 0$. In this case, for small $t_1$, $I_{A,B=B_1\cup B_2}$ is practically zero, while for large $t_1$ the $t_0$-dependence of $I_{A,B}$ is approximated by a periodic function of $t_0$ with period $L$. In the relevant $t_0$-regimes, $I_{A,B_1\cup B_2}$ is approximated by $\frac{c\pi l_A}{3}$. This suggests that we may be able to reconstruct $I_{A,B}$ from all the quasiparticles in $A$ and $\mathcal{H}_1$ even under the 2d holographic time evolution. Also, we can see from the $t_0$-dependence of $I_{A,B}$ for large $t_1$ that the time evolution of $I_{A,B}$ may follow the relativistic propagation of the local excitations, as in [24,85,25].
Tripartite mutual information
By combining the BMI for single intervals $A$, $B$ and for a single interval $A$ with a double interval $B = B_1 \cup B_2$, we can discuss the TMI. By considering the local and global TMI defined below, we show that the amount of information scrambled by the dynamics depends on the observer. Define the local TMI for the configuration (5.4); the $t_0$-dependence of $I_{A,B_1\cup B_2}$ is given by (5.5), which determines the local TMI. By contrast, both the local and global TMI for free fermions are zero for the double-interval setup in Fig. 10, in agreement with the quasiparticle picture.
Genuine tripartite entanglement
Let us also note the following behavior of the BMI in the large-$t_1$ regime. For the symmetric double intervals in (5.4), $I_{A,B_{i=1,2}}$ and $I_{B_1,B_2}$ are approximately zero. On the other hand, the $t_0$-dependence of $I_{A,B_1\cup B_2}$ is given by (5.5). One possible interpretation of these BMI is that the system in its steady state may possess only tripartite entanglement, which we call genuine tripartite entanglement. This genuine tripartite entanglement may be a characteristic property of the steady state under the 2d SSD holographic time evolution. In contrast, in the 2d free fermion theory, there are $t_0$-regimes where $I_{A,B_{i=1,2}}$ and $I_{B_1,B_2}$ become positive (see Appendix D.3).
An atypical state
To close this section, let us consider the entanglement structure of the steady state under the evolution by $H^1$. In Table 1, we summarize the entanglement properties of the system for various $t_1$ and large $t_0$. From Table 1, we see that if the system is highly inhomogeneous, then we cannot evolve it to the typical state even with the 2d holographic Hamiltonian. This atypical state may retain a quantum nature, because the $t_0$-dependence of $S_B$ and $I_{A,B}$ is periodic (quantum revival).
6 Line tension picture
[Figure 11: A curve $C$ that divides the unitary circuit into two parts. In the line-tension picture, the entanglement entropy $S_U(x, y, t_1)$ is given by the integral of the line tension $\mathcal{T}(v)$ along the curve $C$.]
In Sec. 3, we studied the time dependence of the BMI after a quantum quench with an inhomogeneous Hamiltonian as the post-quench Hamiltonian, and observed that the BMI is not fully described by the quasiparticle picture. In this section, we propose a generalization of the so-called line-tension picture to a random unitary circuit with SSD time evolution. In a chaotic system, entanglement production is effectively described by the line-tension picture introduced in [86,87,88,89,90]. To explain the basic idea, we first assume that the spatial direction is homogeneous and infinite, and that the system is time-evolved by the unitary operator $U(t_1)$ from $t = 0$ to $t = t_1$. We divide the infinite line on which the system lives into two pieces at position $x$ at $t = t_1$, and also divide the line at position $y$ at $t = 0$. The entanglement entropy $S_U(x, y, t_1)$ of the unitary operator is then computed as
$$S_U(x, y, t_1) = \min_{C} \int_{C} dt\; \mathcal{T}(v),$$
where the minimization is taken over all possible curves $C$ connecting the points $(x, t_1)$ and $(y, 0)$, as in Fig. 11. The curve $C$ in spacetime has a velocity $v = dx/dt$, and the line tension $\mathcal{T}(v)$ depends on $v$. When the spacetime is uniform, the minimal curve is a straight line with constant velocity $v = (x - y)/t_1$.
The details of the function $\mathcal{T}(v)$ depend on the system; in chaotic systems they are estimated using random unitary circuits, which exhibit quantum information scrambling. In the scaling limit and in the limit of large bond dimension $q$, the line tension is obtained by simply counting the number of bonds cut by the curve,
$$\mathcal{T}(v) = \begin{cases} \log q, & |v| \le 1, \\ |v| \log q, & |v| > 1. \end{cases} \tag{6.2}$$
To compute the entanglement of the unitary operator in a holographic CFT using the line-tension picture, we need to identify the bond dimension (the local Hilbert space dimension) $q$ of the random unitary circuit. This can be accomplished by comparing the rates at which information is scrambled. While the entanglement entropy grows at a rate of $\log q$ in random unitary circuits, it is known that in holographic CFTs the entanglement of the unitary operator (computed as the entanglement between the two CFTs in the time-evolved thermofield double state) grows at a rate of $\frac{c\pi}{6}$, which is dimensionless as it is written in units of the lattice spacing. Therefore, we make the identification
$$q \sim e^{\frac{c\pi}{6}}. \tag{6.3}$$
Notice that $\log q$ simply equals the entropy density given by the Cardy formula, $S_{\text{Cardy}}/(2\pi R) = \frac{c\pi}{6}$. Using this, one can correctly reproduce the growth of the entanglement in holographic CFTs,
$$S_U(x, y, t_1) \sim \frac{c\pi}{6}\, t_1. \tag{6.4}$$
[Figure 12: SSD time evolution deforms the spacetime in the line-tension picture non-uniformly.]
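As a sanity check on this picture, the following Python sketch (an illustration under the large-$q$ line tension (6.2), with hypothetical endpoint and discretization values) minimizes $\int dt\,\mathcal{T}(v)$ over randomly perturbed piecewise-linear curves from $(y,0)$ to $(x,t_1)$ and confirms that nothing beats the straight-line cost in uniform spacetime:

```python
import numpy as np

def tension(v, log_q=1.0):
    """Large-q line tension: log q for |v| <= 1, |v| log q otherwise."""
    return log_q * max(1.0, abs(v))

def path_cost(waypoints, t1):
    """Cost of a piecewise-linear curve through equally spaced time slices."""
    n = len(waypoints) - 1
    dt = t1 / n
    return sum(dt * tension((b - a) / dt)
               for a, b in zip(waypoints[:-1], waypoints[1:]))

rng = np.random.default_rng(0)
x, y, t1, n = 3.0, 0.0, 10.0, 8
straight = np.linspace(y, x, n + 1)
best = path_cost(straight, t1)
# Random perturbations of the interior waypoints never beat the straight line
# (curves with |v| <= 1 everywhere tie with it; faster segments cost extra).
for _ in range(2000):
    trial = straight.copy()
    trial[1:-1] += rng.normal(scale=2.0, size=n - 1)
    best = min(best, path_cost(trial, t1))
print(best, t1 * tension((x - y) / t1))   # both equal t1 * log q here
```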
Line tension picture with inhomogeneity
Above, we assumed that the spatial direction is homogeneous and infinite. Here, we describe how to generalize the line-tension picture to the situation where the spatial direction is inhomogeneous and compact, which fits the SSD time evolution in a compact space discussed in this paper. Similar arguments apply to other inhomogeneous Hamiltonians, on which we briefly comment at the end of this section. The main idea is as follows. The line-tension picture above was based on a geometric representation of a random unitary circuit consisting of quantum gates arranged uniformly in the spatial direction. In the Schrödinger picture, the spatial direction is deformed non-uniformly by the SSD time evolution; to obtain a line-tension picture that captures the entanglement dynamics under SSD evolution, we should therefore formulate the line-tension picture in a deformed, inhomogeneous spacetime, see Fig. 12.
We look for an appropriate spacetime generated by the SSD time evolution; we are especially interested in coordinates in which the metric is conformally flat. As in [31], we introduce new coordinates in which the action of the SSD Hamiltonian is simple: the evolution under the SSD Hamiltonian is simplified by introducing the Poincaré coordinates $(z_P, \bar{z}_P)$, related to the boundary global coordinates $(w, \bar{w})$ by the map of [31]. Here, $z_P = x_P - i\tau_P$ and $\bar{z}_P = x_P + i\tau_P$ are the complex coordinates on the plane where the Poincaré coordinates are defined, $\tau_P$ is the Euclidean time, and $x_P$ is the spatial coordinate ($-\infty < x_P < \infty$). Notice that in these Poincaré coordinates, the two fixed points of the SSD Hamiltonian are located at the origin and at spatial infinity. Let us now see how the Poincaré coordinates simplify translation under the SSD Hamiltonian. The flow of Poincaré time is generated by the Hamiltonian (6.5); using the usual transformation rule for the energy-momentum tensor, including the Schwarzian derivative $\text{Sch}(z_P, w)$, and moving back to the original global coordinates $(w, \bar{w})$, one finds that the SSD Hamiltonian generates the time flow in the Poincaré coordinates defined by (6.5). This indicates that the line-tension picture in the Poincaré coordinates appropriately captures the entanglement dynamics under the SSD time evolution; the corresponding metric is given by (6.9). We propose that the entanglement entropy computed in the line-tension picture on a curved spacetime with metric $g_{z\bar{z}}$ is given by the line integral
$$S_A = \min_{\gamma_A} \int_{\gamma_A} ds\, \sqrt{g_{z\bar{z}}\, \dot{z}\, \dot{\bar{z}}}\;\mathcal{T}(v),$$
where $\gamma_A$ is a curve anchored at the edges of the subregion $A$ and homologous to $A$, and, using a pair of coordinates $(z(s), \bar{z}(s))$ on the two-dimensional spacetime, $\dot{z} = dz/ds$ and $\dot{\bar{z}} = d\bar{z}/ds$. In our case, we have the curved metric (6.9) with the line tension (6.2). Let us compute the entanglement entropy by taking a subregion $A = [Y_1, Y_2]$ at time $t$ after the SSD time evolution.
[Figure 13: Configuration of the minimal curve in the line-tension picture.]
The entanglement entropy $S_A$ is given by the space-like (or light-like) curve with $\mathcal{T}(v) = \log q = \frac{c\pi}{6}$, as in Fig. 13. The segments of the curve, $\gamma_1$ and $\gamma_2$, intersect at the point $(z^M_P, \bar{z}^M_P)$. The computation of the entanglement entropy for the subregion $A$ can be simplified in the new coordinate system $(w^{\text{New}}, \bar{w}^{\text{New}})$ with flat metric, defined by pulling the curved coordinates $(z_P, \bar{z}_P)$ back to the $w$ coordinates after the SSD (= Poincaré) time evolution, i.e., $w^{\text{New}}$ and $\bar{w}^{\text{New}}$ are related to the original $w$ and $\bar{w}$ as in (6.14); $w^{\text{New}}$ and $\bar{w}^{\text{New}}$ are nothing but $w^{\text{New},\alpha}$. Since we can treat $t$ just as a parameter in the integral, we have $dw^{\text{New}} = dw$, and the integral evaluates to an expression in terms of $X^{\text{New}}_M$, the intersection point $(z^M_P, \bar{z}^M_P)$ in the $X^{\text{New}}$ coordinate. This correctly reproduces the result obtained by the holographic computation (2.19) at leading order in the coarse-grained limit.
There are more interesting configurations for the entanglement entropy of double intervals $A$ and $B$, placed at $t = 0$ and at time $t$, respectively. Consider a sufficiently late time, when the disconnected configuration dominates over the connected ones, as in Fig. 14. Two candidate curves that could give the entanglement entropy are drawn in Fig. 14. In the case of the uniform Hamiltonian (see the left panel of Fig. 12), if the regions are taken small enough, the left configuration in Fig. 14 always dominates, and we have trivial mutual information, i.e., $S_{A\cup B} = S_A + S_B$. This is not the case for the SSD Hamiltonian. As one can see in Fig. 14, if the subregion contains the fixed point $X^1_f$ of the SSD Hamiltonian, the vertical lines representing "gates", originally aligned uniformly, are condensed around $X^1_f$. The amount of entanglement counts the number of lines cut by the minimal curve. Therefore, no matter how small a subregion is taken, at sufficiently late times it is more efficient to take a curve like the right one in Fig. 14, which is not homologous to the subregions $A$ and $B$ separately (while the union of the curves is homologous to $A \cup B$), than to take the left one. Such curves give non-trivial mutual information. This is the characteristic entanglement behavior of systems driven by Hamiltonians with fixed points, such as the SSD Hamiltonian, and it correctly reproduces the holographic calculations.
The prescription of the line-tension picture proposed in this section can be generalized to other inhomogeneous time evolutions. In the case of the cosine-square deformation, an appropriate coordinate system that simplifies the action of its Hamiltonian is given by the coordinate transformation
$$\tilde{z}_P = L \tan\frac{i\pi w}{L}, \qquad \tilde{\bar{z}}_P = L \tan\frac{i\pi \bar{w}}{L}, \tag{6.16}$$
and the $w^{\text{New}}_\theta$ and $\bar{w}^{\text{New}}_\theta$ are defined analogously. As we pointed out in [31], in the case of the general Möbius Hamiltonian, we can find an appropriate coordinate system by a coordinate transformation replacing (6.5) of the SSD case. The Möbius Hamiltonian generates a simple time translation in the $(z_\theta, \bar{z}_\theta)$ coordinates. The $(w^{\text{New}}_\theta, \bar{w}^{\text{New}}_\theta)$ coordinates, analogous to (6.12), are then defined so as to simplify the integral that computes the entanglement. One can check that $w^{\text{New}}_\theta$ reduces to the SSD expression in the limit $\theta \to \infty$.
Gravitational description
Let us now turn to the gravitational dual descriptions of Setups 1, 2, and 3. As in [91,92], these dual geometries are constructed from the expectation value of the energy density under the evolution by the Hamiltonians considered. Equivalently, these geometries are obtained by a map from the BTZ black hole in the $w^{\text{New}}$ coordinates to a time-dependent geometry in terms of $t_{i=0,1,2}$. The dual geometry of the reduced density matrix associated with $\mathcal{H}_2$ is a stationary BTZ black hole. Since $\rho_{\mathcal{H}_1}$ is a mixed state, its gravity dual should be a black hole geometry; the details of the complicated metric associated with $\rho_{\mathcal{H}_1}$ are reported in Appendix E.1. Here, we describe the spacetime profile of the black hole horizon in these dual geometries. Let us introduce a radial coordinate $r$ that guarantees that the asymptotic geometry near the AdS boundary is given by pure AdS$_3$, or by the modified geometry whose metric is obtained by replacing the time component of pure AdS$_3$ with $g_{tt} = -4L^2 r^2 \sin^4(\pi X/L)$. The spatial and temporal dependence of the black hole horizon in the dual geometries for Setups 1 and 2 is then almost the same as that in [31]. In Fig. 15, we plot the black hole horizon.
Wormhole growth
In addition to the horizon, another geometrical object of interest is the wormhole connecting the two Hilbert spaces. As a measure of wormhole growth, we use the "free" energy defined from a two-point function,
$$F(X_1, Y_1) \equiv -\log \big\langle O_1(Y_1)\, O_2(X_1) \big\rangle. \tag{8.2}$$
In the Heisenberg picture, this free energy is given by a universal and a non-universal piece. Here, we consider light operators with $1 \ll h_O \ll c$. In this regime, the non-universal piece is determined by the length of geodesics in the stationary BTZ black hole.
Setup 1
Let us begin by analyzing the free energy (8.2) for Setup 1. We consider the $t_1$-dependence of (8.2) with general $Y_1$, assuming $\frac{L}{2} > X_1 > Y_1 > 0$. Under the evolution by $H^1_{\text{Möbius}}$ with $\theta \neq \infty$, the imaginary parts of $w^{\text{New}}_{Y_1}$ and $\bar{w}^{\text{New}}_{Y_1}$ in (8.2) monotonically increase with $t_1$. In the large-$t_1$ regime, $F(X_1, Y_1)$ is approximately a monotonically increasing function of $t_1$; in the intermediate regime, $F(X_1, Y_1)$ is approximately given by a function following the trajectory of the local operator, and in that $t_1$-regime $F(X_1, Y_1)$ may decrease with $t_1$.
In the SSD limit $\theta \to \infty$, if $Y_1 = 0$, $F(Y_1 = 0, X_1)$ is a stationary constant, approximated as $F(Y_1 = 0, X_1) \approx h_O\, \pi X_1$. Unless $Y_1 = 0$, for large $t_1$ the imaginary parts of $w^{\text{New}}_{Y_1}$ and $\bar{w}^{\text{New}}_{Y_1}$ keep growing; consequently, the $t_1$-dependence of $F(X_1, Y_1)$ in this limit is approximately stationary except for a logarithmic growth with $t_1$. From these analyses, we see that the Möbius/SS deformation may prevent the wormhole from growing with $t_1$. In Fig. 16, we plot $F(X_1, Y_1)$ in Setup 1 as a function of $t_1$; for larger $\theta$, the growth of $F(X_1, Y_1)$ is slower.
Setup 2
In Setup 2, $F(X_1, Y_1)$ grows linearly with $t_0$ under the evolution by $H^1$, and then grows with $t_1$ under the evolution by $H^1_{\text{Möbius}}$ as in the previous section.
Setup 3
Let us turn to the analysis of $F(X_1, Y_1)$ in Setup 3. As before, we assume $\frac{L}{2} > X_1 > Y_1 > 0$. As in Setup 1, for various $t_1$, the imaginary parts of $w^{\text{New}}_{Y_1}$ and $\bar{w}^{\text{New}}_{Y_1}$ in (8.2) monotonically increase with $t_0$. Therefore, the early-time behavior of $F(X_1, Y_1)$ may be approximated by (8.4), while the late-time $t_0$-dependence is approximated by (8.3). For large $t_1$, the $t_0$-dependence of $F(X_1, Y_1)$ is given by the asymptotic form (8.6), where $m$ is an integer. In Fig. 16, we plot $F(X_1, Y_1)$ for Setup 3 for various $t_1$ as a function of $t_0$. For larger $t_1$, $F(X_1, Y_1)$ is not given by simple linear growth, but is approximated by a sequence of step functions. The asymptotic behavior (8.6) can be interpreted using the description in Section 5.2.1. For large $t_1$, at $t_0 = 0$, two B.H.-like excitations emerge near $x = X^1_f$ and move towards the left and right at the speed of light under the evolution by $H^1$ (Fig. 17). Here, we assume that the size of these excitations is $O(\epsilon)$, so that in the coarse-grained regime they are approximated as local excitations. Recall that we have operators $O_{i=1,2}$ on $\mathcal{H}_{i=1,2}$, inserted at $Y_1$ and $X_1$, respectively. At $t_0 \approx mL \pm Y_1$, where $m$ is an integer, the B.H.-like excitations pass the insertion point of $O_1$, producing the step-like jumps of $F(X_1, Y_1)$ seen in Fig. 16.
Discussions and future directions
In this paper, we studied three quantum quench processes with inhomogeneously-deformed Hamiltonians in 2d CFT. Of particular interest to us is the interplay between inhomogeneous deformation and quantum information scrambling. With these setups, we discussed the operator entanglement, the recovery of quantum information, and the dynamics of B.H.-like excitations. As mentioned in Ref. [31], these inhomogeneously-deformed Hamiltonians may be engineered in both digital and analog quantum simulators, such as cold atoms and Rydberg atoms. Simulating our quench processes in these systems opens up the possibility of studying quantum aspects of black holes in the lab. In particular, from the findings of this paper, it would be interesting to look into the following aspects: • Quantum black hole: The $t_0$-dependence of the correlation function may be described by the propagation of the B.H.-like excitations (see Section 8.3). In the frame where one of the B.H.-like excitations is stationary, a local operator falls into, and is radiated from, this excitation. As in [84], this excitation has almost the same amount of entropy as a black hole, and its interior may have a strong scrambling effect. Therefore, if we can create these excitations in experimental systems, we may be able to simulate the dynamics of black holes in the laboratory.
• Genuine tripartite entanglement: Let us consider an application of the genuine tripartite entanglement obtained in this paper. In 2d holographic CFTs, in the large-$t_1$ regime, the local BMI is approximately zero, while the global BMI can be $O(1)$ in certain $t_1$-intervals (see Section 5.2.3). One possible interpretation of this entanglement property of the steady state is that, in the $t_1$-regime where only $I_{A,B_1\cup B_2}$ is $O(1)$, three parties associated with $A$, $B_1$, and $B_2$ may be able to share quantum information, while any two of them alone may not. In other words, without the cooperation of all three parties, the quantum information can never be retrieved correctly. This entanglement property may find application in secure quantum communication.
Finally, we conclude by listing some future directions: • Multipartite entanglement: It would be interesting to create a system where the local MI is effectively zero, while the global BMI shared by $n\,(>3)$ parties is $O(1)$. If the number of fixed points increases [93], then the number of parties sharing the global BMI might also increase.
• Quantum scars: In this paper, we found systems that do not evolve to the typical state under a 2d homogeneous holographic Hamiltonian. These states may be interpreted as quantum scar states. It would be interesting to establish the relationship between the states considered in this paper and the quantum scar states of [94,95,96,97,98,99,100,101,102].
A Evolution of operators induced by the inhomogeneously-deformed Hamiltonians
The Euclidean time-evolution operators considered in the main text and Appendices are those of (2.13) and (2.8). In these appendices, we use the index $\alpha = 0, 1, 2, 3$ to distinguish these cases. The new complex variables $(w^{\text{New},\alpha}_{x}, \bar{w}^{\text{New},\alpha}_{x})$ in (2.17) are given, for each $\alpha$, by expressions in the variables and parameters $z$, $\bar{z}$, and $\lambda_1$.
B The details of calculations and results in 2d holographic CFTs
Let us present the details of the calculations and results in 2d holographic CFTs.
B.1 Non-universal piece of OEE in 2d holographic CFTs
We now present the details of the non-universal pieces, $S_{\text{dis}}$ and $S_{\text{con}}$, of the entanglement entropy. Let us first concentrate on $S_{\text{dis}}$, the contribution from the geodesics connecting the endpoints of the subsystems on the same Euclidean time slice. Then, let us turn to $S_{\text{con}}$, the contribution from the geodesics connecting the endpoints of the subsystems on different Euclidean time slices, which is given by
$$S_{\text{con}} = \text{Min}\left[\tilde{S}^1_{\text{con}},\, \tilde{S}^{2,\pm}_{\text{con}},\, \tilde{S}^{3,\pm}_{\text{con}},\, \tilde{S}^{4,\pm}_{\text{con}}\right], \tag{B.3}$$
where the $\tilde{S}^i_{\text{con}}$ are defined by the corresponding geodesic lengths.
B.2 The definition of θ C
Here, we describe the definition of $\theta_C$ introduced in Section 3.1.1. Let $B$ be a subsystem of $\mathcal{H}_1$ including $X^1_f$, and let $A$ be a subsystem of $\mathcal{H}_2$ including the origin. Furthermore, assume that $S_{\text{dis}}$ for small $t_1$ is given by (B.5). The time at which (B.5) is maximized is determined by $\partial_{t_1} X^{\text{New},\alpha=1} = 0$; let $t_{1,\text{Max}}$ denote this time, which depends on $\theta$, $Y_1$, $Y_2$, and $L$. We define $\theta_C$ as the value of $\theta$ satisfying the corresponding condition on $X^{\text{New},\alpha=1}$ at $t_{1,\text{Max}}$.

C The entanglement dynamics for (2.8)
C.1 The t 2 -dependence of entanglement entropy
Let us consider the state (2.8) and report the $t_2$-dependence of its entanglement entropy for the subsystems considered in Section 5.1. In Fig. 18, we depict $S_B$ for various $t_1$ as a function of $t_2$. In the large-$t_1$ limit, the $t_2$-dependence of $S_B$ takes a simple asymptotic form. We can see from the $t_2$-dependence of $S_B$ that, apart from the vacuum entropy, it may be described by the propagation of quasiparticles with velocities
$$v_{L,R}(x) = \pm 2\cos^2\frac{\pi x}{L},$$
where $v_{L,R}(x)$ denote the speeds of the left- and right-moving quasiparticles, respectively.
C.2 The t 2 -dependence of BMI
Now, we report the t 2 -dependence of BMI for the subsystems discussed in Section 5.2.
C.2.1 The single interval
For the single intervals considered in Section 5.2.1, we depict $I_{A,B}$ for various $t_1$ as a function of $t_2$ in Fig. 19. For case (c), $I_{A,B}$ is approximately zero. In the large-$t_1$ limit, the asymptotic behavior of $I_{A,B}$ for the single interval is given by the following form.
C.2.2 The double intervals
Let us now turn to the $t_2$-dependence of $I_{A,B_1\cup B_2}$ for the subsystems in (5.4). In Fig. 20, we depict $I_{A,B_1\cup B_2}$ for large $t_1$ as a function of $t_2$. The asymptotic behavior of $I_{A,B=B_1\cup B_2}$ in the large-$t_1$ regime is given by the following form.
C.3 The t 2 -dependence of TMI
We present the asymptotic behavior of the TMI in the large-$t_1$ limit. The TMI we consider are $I_{A,B,B}$ and $I_{A,B_1,B_2}$, defined by (5.6) and (5.7), respectively. The value of the global TMI at large $t_1$ is zero. In the early $t_2$-regime, $\frac{L}{2\pi}\tan\frac{\pi Y_2}{L} > t_2 > 0$, the local MI is zero; in the intermediate regime it becomes of order the size of $A$; and in the late $t_2$-regime, $t_2 > \frac{L}{2\pi}\tan\frac{\pi Y_1}{L}$, it is zero again. We can see from the $t_2$-dependence of the global TMI that, as in the case of (2.6), there is no non-locally-hidden correlation between $A$, $B$, and $B$. Furthermore, we can see from the $t_2$-dependence of the local TMI that there may exist a non-locally-hidden correlation shared by $A$, $B_1$, and $B_2$.
By contrast, both the local and global TMI for the setup in Fig. 20 vanish for both physical spin structures $\nu = 3, 4$ in the free fermion CFT, as expected. This is because the entanglement is carried by Bell pairs in the free theory, and hence there is no tripartite entanglement.
C.4 Growth of wormhole for (2.8)
Let us present the $t_2$-dependence of $F(X_1, Y_1)$ for various $t_1$ for the state (2.8). In Fig. 21, we depict $F(X_1, Y_1)$ for various $t_1$ as a function of $t_2$. In the large-$t_1$ regime, the $t_2$-dependence of $F(X_1, Y_1)$ is given by the following asymptotic form.
D Non-chaotic theories
Let us present the details of the calculations and results in 2d free fermion CFT.
D.1 The entanglement entropy in 2d free fermion CFT
In this section, we outline the technique for calculating the entanglement entropy of free Dirac fermions using bosonization, as explained in [103]. There are two possible boundary conditions one can impose on the fermions along each cycle of the torus, namely the periodic (R) and anti-periodic (NS) boundary conditions; the four possibilities are summarized in Table 2. In this coordinate system, the cycle along $\tau = iL/2$ corresponds to the spatial direction. Let $A$ and $B$ denote the subsystems of $\mathcal{H}_2$ and $\mathcal{H}_1$, respectively. The edges of $A$ are denoted by $X_1$ and $X_2$, while those of $B$ are denoted by $Y_1$ and $Y_2$. Here, we assume that $X_1 > X_2 > 0$ and $Y_1 > Y_2 > 0$. The Rényi entanglement entropy is given by a two-point function of twist operators on the 2-torus. This is equivalent to the partition function of the orbifolded theory with a branch cut running along the entanglement cut; such a partition function can be computed using bosonization [103]. The resulting operator entanglement entropy can be divided into one piece that depends on the spin structure and another that does not. For a subsystem $V$, the former shall be referred to as the non-universal piece $S^{(n)}_{V,\nu,\text{non-univ.}}$, while the latter will be referred to as the universal piece $S^{(n)}_{V,\text{univ.}}$. For $V = A \cup B$, the universal piece decomposes as
$$S^{(n)}_{A\cup B,\text{univ.}} = S^{(n)}_{A,\text{univ.}} + S^{(n)}_{B,\text{univ.}} + \frac{n+1}{12n}\log(\cdots),$$
where the $\log 2$ terms come from rescaling the torus coordinates to have periodicities $1$ and $\tau$. Note also that, when applying the bosonization formulas of [103], the coordinates of the twist operators in the different Hilbert spaces are swapped relative to one another, as explained in [47].
D.2 Quasiparticle picture
Suppose that we prepare the systems in the thermofield double state, and then evolve them with Hamiltonians acting only on $\mathcal{H}_1$. In the infinite temperature limit, the thermofield double state can be written as a product of Bell pairs of quasiparticles, as in (5.2). The quasiparticles living on the Hilbert space acted upon by the Hamiltonians move according to the inhomogeneous velocity fields $f(x)$ and $-f(x)$ for the right- and left-moving quasiparticles, respectively. These quasiparticles describe the dynamics of entanglement in non-chaotic theories. When the Hamiltonian changes, as in the case where different unitary operators are composed, the velocity field is simply replaced by the envelope of the new Hamiltonian governing the time evolution. In the uniform case, $f(x) = 1$, the quasiparticles propagate with unit speed, as explained in [47]. In the SSD limit, the speed vanishes at the fixed point $X^1_f$; therefore, the quasiparticles tend to cluster around $X^1_f$, as shown in [31], giving rise to black-hole-like excitations. Let $\rho^{(n)}_{L,R}(x, t)$ denote the quasiparticle densities; the superscript $n$ denotes the Rényi index, which determines the density of quasiparticles. Assuming that the quasiparticles are conserved, the densities obey the continuity equation
$$\partial_t \rho^{(n)}_i(x,t) = \pm\, \partial_x\!\left[f(x)\,\rho^{(n)}_i(x,t)\right], \tag{D.4}$$
where the $+$ ($-$) sign is for the $i = L$ ($R$) chirality. Since the quasiparticles move with speed $f(x)$, a quasiparticle initially located at $x_0$ at time $t_0$ will be located at position $x$ at a later time $t$, as determined by
$$\int_{x_0}^{x} \frac{dx'}{f(x')} = \pm\,(t - t_0),$$
where "$+$" refers to right-moving quasiparticles and "$-$" to left-moving ones. The integral is straightforward to perform and yields the trajectories $x_i(t)$ for $i = L, R$. The trajectory can also be inverted to give the initial position $x_{i,0}(x, t)$ of a quasiparticle that is at position $x$ at time $t$. Since the number of quasiparticles is conserved, the number of particles initially located in the interval $dx_{i,0}$, namely $\rho^{(n)}(x_{i,0}(x,t), 0)\,dx_{i,0}$, equals the number of quasiparticles in $dx$ at time $t$, namely $\rho^{(n)}(x,t)\,dx$. Hence, the solution of the continuity equation (D.4) for the velocity fields $\pm f(x)$ is [104]
$$\rho^{(n)}_i(x,t) = \rho^{(n)}\big(x_{i,0}(x,t), 0\big)\left|\frac{\partial x_{i,0}(x,t)}{\partial x}\right|, \qquad i = L, R. \tag{D.6}$$
Since the trajectory $x_{i,0}(x, t)$ is a periodic function with period $L\cosh 2\theta$, the corresponding quasiparticle densities possess the same periodicity. We now turn to the computation of the entanglement entropy and mutual information using the quasiparticle picture. In this paper, the unitaries act on only one Hilbert space, so only the quasiparticles in that Hilbert space move, while their immobile partners remain fixed at position $x_0$. Each such Bell pair contributes to the correlation between the point $x$ in $\mathcal{H}_1$ and the point $x_0$ in $\mathcal{H}_2$. The methods for computing the mutual information and the entanglement entropy in the quasiparticle picture are similar but not identical, so we explain each technique separately.
Entanglement entropy
The entanglement entropy of a pure state measures the amount of correlation between a subsystem and its complement. The entanglement entropy of a subsystem $B$ is therefore proportional to the number of Bell pairs shared by $B$ and its complement. Since the Bell-pair partner of any quasiparticle in $B$ lives in the other Hilbert space, any Bell pair whose quasiparticle winds up in $B$ at time $t$ contributes to the entanglement entropy $S_B(t)$. The initial quasiparticle density in (D.6) is therefore a simple constant, fixed by equating the quasiparticle prediction for the entanglement entropy with the entanglement entropy of the 2d free fermion CFT; this constant turns out to be $\rho_0 = \frac{(n+1)\pi}{24n}$. For a single interval $B = [Y_2, Y_1]$, the entanglement entropy according to the quasiparticle picture is
$$S_B(t) = \sum_{i=L,R} \rho_0 \left[\big(x_{0,i}(Y_1, t) - x_{0,i}(Y_2, t)\big) \bmod L\right],$$
where the integral was carried out by a simple change of variables from $x$ to the initial position $x_{0,i}(x, t)$, and the modulo operation takes the periodicity of the system into account. This result simply states that the quasiparticles initially in the interval $[x_{0,i}(Y_2, t), x_{0,i}(Y_1, t)]$ flow to $[Y_2, Y_1]$ at time $t$.
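To make the counting concrete, here is a minimal Python sketch (an illustration with hypothetical density and grid values, not the paper's normalization) that integrates the quasiparticle trajectories $dx/dt = \pm f(x)$ under the SSD envelope and estimates $S_B(t)$ by counting the quasiparticles that end up in $B$:

```python
import numpy as np
from scipy.integrate import solve_ivp

L = 100.0

def f_ssd(x):
    """SSD envelope; the quasiparticle speed vanishes at x = 0 (mod L)."""
    return 2.0 * np.sin(np.pi * x / L) ** 2

def final_positions(x0, t, sign):
    """Integrate dx/dt = sign * f(x) for a whole array of quasiparticles."""
    if t == 0.0:
        return x0 % L
    sol = solve_ivp(lambda s, x: sign * f_ssd(x), (0.0, t), x0, rtol=1e-8)
    return sol.y[:, -1] % L

# Uniformly distributed Bell-pair quasiparticles; rho0 is an illustrative value.
n, rho0 = 1000, 0.5
xs = np.linspace(0.0, L, n, endpoint=False)

def entropy_B(Y2, Y1, t):
    """S_B(t) ~ rho0 * (length occupied by quasiparticles that end up in B)."""
    total = 0.0
    for sign in (+1.0, -1.0):               # right- and left-movers
        xf = final_positions(xs, t, sign)
        total += np.sum((xf > Y2) & (xf < Y1)) * (L / n)
    return rho0 * total

# The interval [40, 60] sits away from the fixed point: quasiparticles drain
# toward x = 0, so S_B decreases under the SSD evolution.
for t in (0.0, 20.0, 80.0):
    print(t, entropy_B(40.0, 60.0, t))
```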
Mutual Information
The MI is obtained by the same integral; the only difference is the initial quasiparticle density in (D.6). This is because the MI between subsystems $B$ and $A$ of $\mathcal{H}_1$ and $\mathcal{H}_2$ measures the correlations between $A$ and $B$, and hence only receives contributions from Bell pairs with one quasiparticle in subsystem $A$ and the other in subsystem $B$. Since the quasiparticles in the second Hilbert space are immobile, only the quasiparticles initially in subsystem $A$ can potentially contribute to the MI. Therefore, for the computation of the mutual information, the initial quasiparticle density is supported only on $A$, with a constant value fixed by equating the initial MI for two symmetric intervals $A = B$ with that of the 2d free fermion CFT. If $B$ is the union of $m$ disjoint intervals $[Y_{2j}, Y_{2j-1}]$ for $j = 1, \ldots, m$, the MI between the two subsystems $A$ and $B$ at fixed time $t$ is given by the corresponding integral over $B$; the second equality follows from the usual change of variables from $x$ to $x_{0,i}$, where $t$ is held fixed so that $x_{0,i}$ is viewed as a function of the single variable $x$. The final expression has a simple interpretation: the quasiparticles located in $[x_{0,i}(Y_2, t), x_{0,i}(Y_1, t)]$ are the only ones that can be in subsystem $B$ at time $t$, and out of these, only the ones that were also initially in $A$ contribute to the mutual information between $A$ and $B$.
D.2.2 Systems 2 and 3
Systems 2 and 3 correspond to time evolutions where two different unitaries are applied one after the other. The overall evolution is a product of two unitary evolutions, for durations $t$ and $T$, that transports each quasiparticle along the composition of the two trajectories. Under each unitary, the quasiparticle density evolves according to (D.6), so the final quasiparticle density is related to the initial one by the chain rule. The entanglement entropy is then given by (D.12), where the final equality comes from exactly the same reasoning as in the Möbius/SSD case. Just as in System 1, this result simply says that the entanglement entropy of a subsystem at a particular instant is given by the number of quasiparticles that end up in the subsystem at that time. The mutual information predicted by the quasiparticles is similar to the entanglement entropy, except for the initial quasiparticle density. If subsystem $B$ is a union of $m$ disjoint intervals $[Y_{2j}, Y_{2j-1}]$, $j = 1, \ldots, m$, the mutual information is obtained by the analogous integral, where a change of variables from the final spatial coordinate $y$ to the initial position $x_{i,0}$ was made to carry out the integral. The physical meaning of this result is identical to that in the Möbius/SSD case.
D.3 Summary of results for non-chaotic systems
Using the formulas outlined in the previous subsections, the entanglement entropy and MI can be computed in the 2d free fermion CFT and in the quasiparticle picture. For the various subsystems and unitary time evolutions, the entanglement for the two physical spin structures $\nu = 3$ and $\nu = 4$ is found to be identical. Furthermore, the global TMI for the 2d free fermion CFT vanishes in all the cases considered. This is because the entanglement entropy and MI of the 2d free fermion CFT agree with the quasiparticle picture to leading order in $1/\epsilon$. The quasiparticle picture exposes the key differences between the 2d free fermion and holographic CFTs. Firstly, for finite values of $\theta$, the quasiparticle distributions are periodic with period $L\cosh 2\theta$, so the MI possesses the same periodicity. Secondly, the MI is carried separately by the right- and left-moving quasiparticles, which travel independently of one another, as opposed to the holographic theory, where the MI is non-zero only when the subsystem contains both the left- and right-moving B.H.-like excitations. Lastly, the TMI is observed to vanish for the 2d free fermion CFT, which is not always the case for the holographic theories. In Fig. 22, we show a representative plot comparing the 2d free fermion CFT MI with the quasiparticle prediction.
[Figure 22: Plots of the operator mutual information when the system is first acted upon by the SSD evolution for a duration $t_1$, followed by a time evolution of duration $t_2$ under the CSD Hamiltonian. The solid lines are the 2d free fermion CFT results, while the dots are the predictions of the quasiparticle picture.]
In this setup, we first evolve the system with the SSD Hamiltonian before evolving it with the CSD Hamiltonian, which is essentially the SSD Hamiltonian but with the envelope function vanishing at $X^2_f$ instead. The holographic results for this kind of evolution are discussed in Appendix C. The subsystems in Fig. 22 are placed away from both fixed points $X^1_f$ and $X^2_f$. The quasiparticles pass through $B$, giving rise to a non-zero BMI. However, since the subsystem does not contain the CSD fixed point, these quasiparticles eventually leave $B$, although they take a long time to do so for the subsystem in Fig. 22, because $B$ is located close to the CSD fixed point $X^2_f$, where the quasiparticle speed is small. This figure also highlights the key difference between the dynamics of the BMI in the free fermion CFT and in the holographic CFTs: the BMI vanishes for this choice of subsystems in the latter but not in the former for large values of $t_1$. This is because the BMI is non-zero in the holographic theory only when both chiral and anti-chiral B.H.-like excitations are simultaneously present in $B$, which does not occur when $B$ does not contain any of the fixed points and the SSD evolution time $t_1$ is large, causing the B.H.-like excitations to be sharply peaked. By contrast, in the free theory the BMI is carried separately by the left- and right-moving quasiparticles, so as long as either one of them is present in $B$, the BMI is non-zero. For this choice of subsystems, the left-moving quasiparticles travel leftwards around the spatial circle and approach the CSD fixed point $X^2_f$ from the side opposite to subsystem $B$, and hence do not contribute to the BMI. When the SSD quench time is $t_1 = 5000$, there are already right-moving quasiparticles in the output subsystem, so the initial value of the BMI is non-zero. Some right-moving quasiparticles start off at $t_2 = 0$ at positions infinitesimally close to the CSD fixed point $X^2_f$ and take a long time to go around the spatial circle, leading to a long tail in the BMI. When $t_1 = 11000$, the right-moving quasiparticles start off at $t_2 = 0$ to the right of the CSD fixed point $X^2_f$ and eventually circle around back to subsystem $A$, giving rise to a bump in the BMI.
E The gravity dual of the systems
Here, we report the gravity dual of the systems considered in this paper.
E.1 The dual geometries
The dual geometries of $\rho_{\mathcal{H}_1}$ considered in this paper are given by (E.1), with the details of the metric summarized in Appendix E.3. In the expression (E.1), cross components such as $dt_j\, dt_{i\neq j}$ exist; however, in the time evolutions considered, one of the times should be held constant. In the case of Setup 1, the system is evolved with $H^1$ from $t_0 = 0$ to $t_0 = t_{0,\text{const.}}$, and then with $H^1_{\text{SSD}}$ from $t_1 = 0$. Therefore, we take $t_0$ to be constant and consider the $t_1$-dependence of the geometry. In this procedure, we rewrite the radial coordinate as $r = \frac{L^2}{\pi^2}\, f^1_{\alpha=1;XX}\, \tilde{r}$, which guarantees that the asymptotic geometry near the boundary, $\tilde{r} \to \infty$, is the SSD AdS$_3$ geometry, whose time component of the metric depends on $X$.
In the case of Setups 2 and 3, the system is evolved with $H^1_{\text{SSD}}$ from $t_1 = 0$ to $t_1 = t_{1,\text{const.}}$, and then with $H^1$ from $t_0 = 0$ or with $H^1_{\text{CSD}}$ from $t_2 = 0$. Therefore, we take $t_1$ to be constant and consider the resulting geometries. Rewriting the radial coordinate as $r_{\alpha=2,3} = \frac{L^2}{\pi^2}\, f^1_{\alpha=2,3;XX}\, \tilde{r}$, the metric near the boundary, $\tilde{r}_{\alpha=2,3} \to \infty$, is given by the global AdS$_3$ for $\alpha = 2$ and by the CSD AdS$_3$ for $\alpha = 3$. As a consequence, $r_{\alpha=2;\text{Horizon}}$ depends on $X$, $t_0$, and $t_1$, while $r_{\alpha=3;\text{Horizon}}$ depends on $X$, $t_1$, and $t_2$.
E.1.1 The temporal and spatial dependence of the inhomogeneous horizon
Let us focus on the temporal and spatial dependence of the black hole horizon in the geometries dual to Setups 2 and 3.
Asymptotic behavior of the horizon for small $t_1$. Let us begin by looking closely at the temporal and spatial dependence of the inhomogeneous black hole horizon in the small-$t_1$ region. At second order in the small-$t_1$ expansion, the $t_0$-dependence of $r_{\alpha=2;\text{Horizon}}$ and the $t_2$-dependence of $r_{\alpha=3;\text{Horizon}}$ are given by $r_{\alpha=2;\text{Horizon}} \approx \tilde{r}_0\,\frac{\pi^2}{L^2} + \cdots$, where $r_{\alpha=2;\text{Horizon}}$ at $X = 0, \frac{L}{2}$ is independent of $t_0$, and $r_{\alpha=3;\text{Horizon}}$ at $X = \frac{L}{2}$ is independent of $t_1$.
Asymptotic behavior of the horizon as $t_1 \to \infty$. Now we turn to the temporal and spatial dependence of the inhomogeneous black hole horizon in the large-$t_1$ regime. In this regime, excluding $t_0 \approx X + nL$ and $t_0 \approx -X + nL$, the asymptotic time dependence of $r_{\alpha=2;\text{Horizon}}$ is approximated by the expression below, where $n$ is an integer.
Extremes of $r_{\alpha=2;\text{Horizon}}$
Let us analyze the spatial extremes of $r_{\alpha=2;\text{Horizon}}$. These extremes are determined by $\partial_X r_{\alpha=2;\text{Horizon}} = 0$, whose solutions we denote $X_{j=L,R}$; at $X_{j=L,R}$, $r_{\alpha=2;\text{Horizon}}$ takes the corresponding extremal values. In the large-$t_1$ limit, $X_{i=L,R}$ are approximated by $X + mL = \pm t_0$, where $m$ is an integer. Using the physical interpretation discussed in Section 5.2.1, $X + mL = \pm t_0$ are the trajectories at time $t_0$ of the right- and left-moving B.H.-like excitations, respectively. In other words, the spatial extremes at large $t_1$ are determined by the trajectories of the right- and left-moving B.H.-like excitations. As a consequence, the asymptotic behavior of $r_{\alpha=2;\text{Horizon}}$ at large $t_1$ takes the form below, where $n$ is an integer. Thus, if the B.H.-like excitations are at $X = X^{i=1,2}_f$, then $r_{\alpha=2;\text{Horizon}}$ depends only on $t_0$, while if these excitations are away from $X = X^{i=1,2}_f$, then $r_{\alpha=2;\text{Horizon}}$ depends only on $t_1$ and increases linearly with $t_1$. Note that the asymptotic form of the black hole horizon for $t_0 \neq \frac{nL}{2}$ is invalid in the $t_0$-regimes where $t_0 \approx \frac{nL}{2}$; in these regimes, more detailed calculations are needed.
Extremes of $r_{\alpha=3;\text{Horizon}}$
Now, let us turn to the analysis of the spatial extremes of $r_{\alpha=3;\text{Horizon}}$. These extremes are determined by $\partial_X r_{\alpha=3;\text{Horizon}} = 0$, and the solutions of this equation are given by (E.15). Thus, these extremes grow linearly with $t_2$. In Fig. 23, we depict $r_{\alpha=3;\text{Horizon}}$ for various $t_1$ and $t_2$ as a function of $X$.
E.3 The metric of the inhomogeneous black holes
Here, we present the inhomogeneous black hole geometries. The dual geometries of $\rho_{\mathcal{H}_1}$ considered in this paper are given by (E.1), with components as follows. For $\alpha = 1$, the components include
$$f^2_{XX} = \frac{4\pi^6}{L^6}\left(L^2(t_2 - t_1) + 4\pi^2 t_1^2\, t_2\right)^2 \sin^2\frac{2\pi X}{L}.$$

| 21,936.6 | 2023-02-16T00:00:00.000 | ["Physics"] |
A New Competitive Binary Grey Wolf Optimizer to Solve the Feature Selection Problem in EMG Signals Classification
Features extracted from the electromyography (EMG) signal normally consist of irrelevant and redundant features. Conventionally, feature selection is an effective way to evaluate the most informative features, which contributes to performance enhancement and feature reduction. Therefore, this article proposes a new competitive binary grey wolf optimizer (CBGWO) to solve the feature selection problem in EMG signals classification. Initially, short-time Fourier transform (STFT) transforms the EMG signal into time-frequency representation. Ten time-frequency features are extracted from the STFT coefficient. Then, the proposed method is used to evaluate the optimal feature subset from the original feature set. To evaluate the effectiveness of proposed method, CBGWO is compared with binary grey wolf optimization (BGWO1 and BGWO2), binary particle swarm optimization (BPSO), and genetic algorithm (GA). The experimental results show the superiority of CBGWO not only in classification performance, but also feature reduction. In addition, CBGWO has a very low computational cost, which is more suitable for real world application.
Introduction
Electromyography (EMG) signals recorded from the residual muscles have the potential to be used as a control source for assistive rehabilitation devices and myoelectric prostheses [1]. EMG is a bioelectrical signal that offers rich muscle information, which can be used to identify and recognize hand motions [2]. The development of EMG-based rehabilitation devices is becoming of major interest to many biomedical researchers. However, the development of EMG-controlled prosthetics is still a challenging issue in developing countries [3]. In past studies, most researchers have applied advanced signal processing, feature extraction, machine learning, and feature selection algorithms to enhance the performance of the EMG pattern recognition system [4][5][6][7]. Generally, signal processing performs the signal transformation to obtain useful signal information, and feature extraction aims to extract the valuable information from the signal. The feature selection algorithm attempts to evaluate the optimal features from the original feature set. Finally, machine learning acts as the classifier, classifying the features in order to recognize the hand movements.
In recent years, many EMG features have been proposed and applied in EMG pattern recognition [8][9][10]. However, increasing the number of EMG features not only increases the complexity of the recognition system, but also introduces irrelevant and redundant information that can degrade classification performance.
EMG Data
In the present study, the fourth version of the EMG database (DB4) from the Non-Invasive Adaptive Prosthetics (NinaPro) project (https://www.idiap.ch/project/ninapro) is applied [22]. DB4 comprises surface EMG signals acquired from 10 healthy subjects. In this work, the EMG signals of 17 hand movement types (Exercise B) are used. Twelve EMG electrodes were used in the recording process, and the EMG signals were sampled at 2 kHz. In the experiment, each subject was instructed to perform each hand movement for 5 s, followed by a resting state of 3 s; each movement was repeated six times. Note that all resting states were removed before further processing was conducted.
Feature Extraction Using STFT
Short-time Fourier transform (STFT) is the most fundamental of the time-frequency distributions. Compared with more advanced signal processing tools, such as the Stockwell transform, the B-distribution, and the Choi-Williams distribution, STFT is the simplest and fastest. Mathematically, STFT can be formulated as [8]:
$$\text{STFT}(\tau, f) = \sum_{n} x(n)\, w(n - \tau)\, e^{-j 2\pi f n},$$
where $x(n)$ is the input EMG signal and $w(n - \tau)$ is the Hanning window function. In this study, an STFT with a window size of 512 ms (1024 samples) is utilized. Generally, STFT transforms the signal into a two-dimensional matrix; the signal is thus represented in both the time and frequency planes, which yields a high-dimensional representation. To reduce the dimensionality, ten time-frequency features, namely Renyi entropy, spectral entropy, Shannon entropy, singular value decomposition-based entropy, concentration measure, mean frequency, median frequency, two-dimensional mean, variance, and coefficient of variation, are extracted from the STFT coefficients.
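A minimal sketch of this step in Python, assuming the scipy library and a 2 kHz signal; the 1024-sample Hanning window matches the text, while the 50% overlap and the random placeholder signal are assumptions:

```python
import numpy as np
from scipy.signal import stft

fs = 2000                      # NinaPro DB4 sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)    # one 5-second movement segment
x = np.random.randn(t.size)    # placeholder for one EMG channel

# 512 ms Hanning window = 1024 samples at 2 kHz; 50% overlap is an assumption.
f, tau, Z = stft(x, fs=fs, window="hann", nperseg=1024, noverlap=512)

S = np.abs(Z)                  # magnitude of the STFT: (freq bins, time bins)
P = S ** 2                     # power spectrum
print(S.shape)                 # e.g., (513, 21): M frequency bins x L time bins
```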
Renyi Entropy
Renyi entropy (RE) is a time-frequency feature that estimates the complexity of the signal. A higher RE indicates that the signal contains a high degree of non-stationary components [9]. RE can be defined as
$$RE = \frac{1}{1 - a} \log_2 \left( \sum_{l=1}^{L} \sum_{m=1}^{M} \bar{S}(l, m)^a \right), \qquad \bar{S}(l, m) = \frac{S(l, m)}{\sum_{l=1}^{L} \sum_{m=1}^{M} S(l, m)},$$
where $S$ is the magnitude of the STFT, $a$ is the Renyi entropy order, and $L$ and $M$ are the total numbers of time and frequency bins, respectively. Previous work affirmed that the order $a$ should be an odd integer greater than 2; in this work, $a$ is set to 3 [9].
Spectral Entropy
Spectral entropy (SE) is used to determine the randomness of the energy distribution of the signal. A higher SE indicates that the signal energy is less concentrated in a specific region of the time-frequency plane [9,23]. SE can be expressed as
$$SE = -\sum_{l=1}^{L} \sum_{m=1}^{M} \bar{P}(l, m) \log_2 \bar{P}(l, m), \qquad \bar{P}(l, m) = \frac{P(l, m)}{\sum_{l=1}^{L} \sum_{m=1}^{M} P(l, m)},$$
where $P$ is the power spectrum, and $L$ and $M$ are the total numbers of time and frequency bins, respectively.
Shannon Entropy
Shannon entropy (Sh) is the foundation of the entropy family, and it can be written as

Sh = − Σ_{l=1}^{L} Σ_{m=1}^{M} S[l, m] log2 S[l, m]

where S is the normalized magnitude of the STFT, and L and M are the total lengths of the time and frequency bins, respectively.
Singular Value Decomposition-Based Entropy
Singular value decomposition-based entropy (E_SVD) is an entropy estimated from the singular value decomposition (SVD). Initially, SVD is applied to decompose the time-frequency amplitude into a signal subspace and an orthogonal alternate subspace. The entropy based on singular values offers time-frequency information related to the complexity and magnitude of the STFT [9]. Mathematically, E_SVD can be formulated as

E_SVD = − Σ_k S̄_k log2 S̄_k, with S̄_k = S_k / Σ_j S_j

where S̄_k is the normalized singular value and S_k is the k-th singular value of the matrix S[n, m] obtained from the singular value decomposition.
Concentration Measure
Concentration measure (CM) is a time-frequency feature that describes the concentration of the signal energy distribution on the time-frequency plane [9]. CM can be defined (in its common l^(1/2)-norm form) as

CM = ( Σ_{l=1}^{L} Σ_{m=1}^{M} |S[l, m]|^(1/2) )^2

where S is the magnitude of the STFT, and L and M are the total numbers of time and frequency bins, respectively.
Mean Frequency
Mean frequency (MNF) is the sum of the product of the frequencies and their corresponding power spectrum values, divided by the total power estimated from the power spectrum [24]. MNF at each instant of time is given by

MNF = Σ_{m=1}^{M} f_m P_m / Σ_{m=1}^{M} P_m

where P is the power spectrum, f_m is the frequency value at frequency bin m, and M is the total number of frequency bins. In this work, the MNF averaged across all time instants is calculated.
Median Frequency
Median frequency (MDF) is the frequency that partitions the power spectrum into two equal halves [24]. MDF at each instant of time is given by

Σ_{m=1}^{MDF} P_m = Σ_{m=MDF}^{M} P_m = (1/2) Σ_{m=1}^{M} P_m

where P is the power spectrum and M is the total number of frequency bins. In this study, the MDF averaged across all time instants is calculated.
2.2.8. Two-Dimensional Mean, Variance, and Coefficient of Variation

Generally speaking, statistical features that refer to one-dimensional statistical properties, such as the mean, variance (VAR), and coefficient of variation (CoV), can be extended into two dimensions as follows [9,24]:

µ = (1 / LM) Σ_{l=1}^{L} Σ_{m=1}^{M} S[l, m],  VAR = (1 / (LM − 1)) Σ_{l=1}^{L} Σ_{m=1}^{M} (S[l, m] − µ)^2,  CoV = σ / µ

where σ is the standard deviation and µ is the mean value.
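The sketch below illustrates how these ten features could be computed from an STFT magnitude matrix S. The exact normalizations are assumptions where the equations are not reproduced above, and the concentration measure is taken in its common l^(1/2)-norm form.

```python
import numpy as np

def tf_features(S, freqs, alpha=3, eps=1e-12):
    """Ten time-frequency features from an STFT magnitude S (freq x time)."""
    P = S ** 2
    Pn = P / (P.sum() + eps)                   # normalized energy distribution
    feats = {}
    feats['renyi'] = np.log2((Pn ** alpha).sum() + eps) / (1 - alpha)
    feats['spectral_entropy'] = -(Pn * np.log2(Pn + eps)).sum()
    Sn = S / (S.sum() + eps)                   # normalized magnitude
    feats['shannon'] = -(Sn * np.log2(Sn + eps)).sum()
    sv = np.linalg.svd(S, compute_uv=False)
    svn = sv / (sv.sum() + eps)
    feats['svd_entropy'] = -(svn * np.log2(svn + eps)).sum()
    feats['concentration'] = np.sqrt(S).sum() ** 2
    tot = P.sum(axis=0) + eps                  # total power per time frame
    feats['mnf'] = np.mean((freqs[:, None] * P).sum(axis=0) / tot)
    cum = np.cumsum(P, axis=0)                 # first bin reaching half power
    mdf_idx = np.argmax(cum >= tot / 2, axis=0)
    feats['mdf'] = np.mean(freqs[mdf_idx])
    mu, var = S.mean(), S.var()
    feats['mean2d'], feats['var2d'] = mu, var
    feats['cov'] = np.sqrt(var) / (mu + eps)   # coefficient of variation
    return feats
```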
Grey Wolf Optimizer
Grey wolf optimizer (GWO) is a recent metaheuristic optimization method developed by Mirjalili and colleagues in 2014 [25]. Grey wolves normally live in packs of 5 to 12. GWO mimics the hunting and prey-searching behavior of grey wolves in nature. In GWO, the population is divided into alpha, beta, delta, and omega wolves. The alpha wolf is the main leader, responsible for decision-making. The beta wolf is the second leader, assisting the alpha in decision-making and other activities. The delta wolf is the third leader in the group, dominating the omega wolves.
Mathematically, the top three fittest solutions in GWO are called alpha (α), beta (β), and delta (δ), respectively; the rest are assumed to be omega (ω). In GWO, the hunting process is guided by α, β, and δ, while ω follows these three leaders. The encircling behavior of the pack when hunting prey can be expressed as

X(t + 1) = X_p(t) − A · D

where X_p is the position of the prey, A is a coefficient vector, and D is defined as

D = |C · X_p(t) − X(t)|

where C is a coefficient vector, X is the position of the grey wolf, and t is the iteration number. The coefficient vectors A and C are determined by

A = 2a · r_1 − a,  C = 2 · r_2

where r_1 and r_2 are two independent random numbers uniformly distributed in [0, 1], and a is the encircling coefficient used to balance the tradeoff between exploration and exploitation. In GWO, the parameter a decreases linearly from 2 to 0 according to Equation (17).
a = 2 (1 − t / T)

where t is the iteration number and T is the maximum number of iterations. In GWO, the leading alpha, beta, and delta wolves are assumed to have better knowledge of the potential position of the prey; thus, the leaders guide the omega wolves toward the optimal position. Mathematically, the new position of a wolf is updated as in Equation (18):

X(t + 1) = (X_1 + X_2 + X_3) / 3
where X_1, X_2, and X_3 are calculated as follows:

X_1 = X_α − A_1 · D_α,  X_2 = X_β − A_2 · D_β,  X_3 = X_δ − A_3 · D_δ

where X_α, X_β, and X_δ are the positions of alpha, beta, and delta at iteration t; A_1, A_2, and A_3 are calculated as in Equation (15); and D_α, D_β, and D_δ are defined as in Equations (22)-(24), respectively:

D_α = |C_1 · X_α − X|,  D_β = |C_2 · X_β − X|,  D_δ = |C_3 · X_δ − X|

where C_1, C_2, and C_3 are calculated as in Equation (16). Generally, GWO is designed to solve continuous optimization problems. For binary optimization problems, such as feature selection, a binary version of GWO is required. Recently, Emary et al. [15] proposed two binary grey wolf optimizations (BGWO1 and BGWO2) to tackle feature selection problems. The operation of BGWO1 and BGWO2 is described as follows.
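Before moving to the binary variants, here is a minimal sketch of one iteration of the continuous GWO position update (Equations (13)-(21)), assuming a minimization problem; function and variable names are illustrative.

```python
import numpy as np

def gwo_step(X, fitness, a):
    """One GWO iteration: X is an (N, d) array of wolf positions."""
    order = np.argsort([fitness(x) for x in X])
    alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
    new_X = np.empty_like(X)
    for i, x in enumerate(X):
        Xs = []
        for leader in (alpha, beta, delta):
            r1, r2 = np.random.rand(x.size), np.random.rand(x.size)
            A, C = 2 * a * r1 - a, 2 * r2          # Equations (15)-(16)
            D = np.abs(C * leader - x)             # Equations (22)-(24)
            Xs.append(leader - A * D)              # Equations (19)-(21)
        new_X[i] = sum(Xs) / 3                     # Equation (18)
    return new_X

# In the full loop, a decreases linearly from 2 to 0: a = 2 * (1 - t / T).
```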
Binary Grey Wolf Optimization Model 1 (BGWO1)
For the first approach, BGWO1 utilizes a crossover operator to update the position of a wolf as follows:

X(t + 1) = Crossover(Y_1, Y_2, Y_3)

where Crossover(Y_1, Y_2, Y_3) is a crossover operation between solutions, and Y_1, Y_2, and Y_3 are the binary vectors affected by the movement of the alpha, beta, and delta wolves, respectively. In BGWO1, Y_1, Y_2, and Y_3 are defined using Equations (26), (29), and (32), respectively.
Y_1^d = 1 if (X_α^d + bstep_α^d) ≥ 1, and Y_1^d = 0 otherwise,

where X_α^d is the position of alpha, d is the dimension of the search space, and bstep_α^d represents the binary step, which can be expressed as

bstep_α^d = 1 if cstep_α^d ≥ r_3, and bstep_α^d = 0 otherwise,

where r_3 is a random vector in [0, 1], and cstep_α^d denotes the continuous valued step size, which can be calculated as in Equation (28):

cstep_α^d = 1 / (1 + e^(−10 (A_1^d D_α^d − 0.5)))
where A_1^d and D_α^d are determined by applying Equations (15) and (22).
Y_2^d = 1 if (X_β^d + bstep_β^d) ≥ 1, and Y_2^d = 0 otherwise,

where X_β^d is the position of beta, d is the dimension of the search space, and bstep_β^d represents the binary step, which can be expressed as

bstep_β^d = 1 if cstep_β^d ≥ r_4, and bstep_β^d = 0 otherwise,

where r_4 is a random vector in [0, 1], and cstep_β^d denotes the continuous valued step size, which can be calculated as in Equation (31):

cstep_β^d = 1 / (1 + e^(−10 (A_1^d D_β^d − 0.5)))
where A_1^d and D_β^d are determined by applying Equations (15) and (23).
Y_3^d = 1 if (X_δ^d + bstep_δ^d) ≥ 1, and Y_3^d = 0 otherwise,

where X_δ^d is the position of delta, d is the dimension of the search space, and bstep_δ^d represents the binary step, which can be expressed as

bstep_δ^d = 1 if cstep_δ^d ≥ r_5, and bstep_δ^d = 0 otherwise,

where r_5 is a random vector in [0, 1], and cstep_δ^d denotes the continuous valued step size, which can be calculated as in Equation (34):

cstep_δ^d = 1 / (1 + e^(−10 (A_1^d D_δ^d − 0.5)))
where A_1^d and D_δ^d are determined by applying Equations (15) and (24). After obtaining Y_1, Y_2, and Y_3, the new position of the wolf is updated using the crossover operation as follows:

X^d(t + 1) = Y_1^d if r_6 < 1/3; Y_2^d if 1/3 ≤ r_6 < 2/3; and Y_3^d otherwise,

where d is the dimension of the search space, and r_6 is a random number uniformly distributed in [0, 1]. The pseudocode of BGWO1 is shown in Figure 1. Initially, the population of grey wolves is randomly initialized (each bit either 1 or 0). Afterward, the fitness of each wolf is evaluated, and the best, second best, and third best solutions are defined as alpha, beta, and delta. For each wolf, Y_1, Y_2, and Y_3 are computed using Equations (26), (29), and (32), respectively. Then, the position of the wolf is updated by applying the crossover between Y_1, Y_2, and Y_3. Next, the fitness of each wolf is evaluated, and the positions of alpha, beta, and delta are updated iteratively. The algorithm is repeated until the termination criterion is satisfied. At last, the alpha solution is selected as the optimal feature subset.
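A compact sketch of the BGWO1 update for a single wolf follows; the sigmoid form of the continuous step size matches the reconstruction of Equations (28), (31), and (34) above, which is itself an assumption based on Emary et al.'s scheme.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-10.0 * (x - 0.5)))

def bgwo1_update(x, leaders, A, D):
    """x: binary vector; leaders/A/D: three arrays each for alpha, beta, delta."""
    Y = []
    for X_L, A_L, D_L in zip(leaders, A, D):
        cstep = sigmoid(A_L * D_L)                              # continuous step size
        bstep = (cstep >= np.random.rand(x.size)).astype(int)   # binary step
        Y.append(((X_L + bstep) >= 1).astype(int))              # Equations (26), (29), (32)
    r6 = np.random.rand(x.size)                                 # crossover, Equation (35)
    return np.where(r6 < 1 / 3, Y[0], np.where(r6 < 2 / 3, Y[1], Y[2]))
```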
Binary Grey Wolf Optimization Model 2 (BGWO2)

For the second approach, BGWO2 updates the position of a wolf by converting the position into a binary vector, as shown in Equation (36):

X^d(t + 1) = 1 if S((X_1^d + X_2^d + X_3^d) / 3) ≥ r_7, and X^d(t + 1) = 0 otherwise,

where r_7 is a random vector in [0, 1], d is the dimension of the search space, and S is the sigmoid function, which can be expressed as

S(x) = 1 / (1 + e^(−10 (x − 0.5))).

The pseudocode of BGWO2 is represented in Figure 2. Firstly, the initial population of wolves is randomly initialized (each bit either 1 or 0). Secondly, the fitness of the grey wolves is evaluated, and the three leaders, alpha, beta, and delta, are selected based on fitness. For each wolf, X_1, X_2, and X_3 are computed using Equations (19)-(21), respectively. Next, the new position of the grey wolf is updated by applying Equation (36). Afterward, the fitness of the wolves is evaluated, and the positions of alpha, beta, and delta are updated. The algorithm is repeated until the termination criterion is satisfied. Finally, the alpha solution is selected as the optimal feature subset.
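A minimal sketch of the BGWO2 binarization, assuming the sigmoid of Equation (37) takes the common form S(x) = 1/(1 + e^(−10(x − 0.5))):

```python
import numpy as np

def bgwo2_update(X1, X2, X3):
    """X1, X2, X3: continuous positions from Equations (19)-(21)."""
    Xmean = (X1 + X2 + X3) / 3.0
    S = 1.0 / (1.0 + np.exp(-10.0 * (Xmean - 0.5)))   # sigmoid transfer function
    r7 = np.random.rand(Xmean.size)
    return (S >= r7).astype(int)                       # binary position, Equation (36)
```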
Competitive Binary Grey Wolf Optimizer
Generally, BGWO has the advantages of being simple, flexible, and adaptable compared to other metaheuristic optimizations. However, BGWO is prone to becoming trapped in local optima. BGWO applies the best three solutions (leaders) in the position update, which means all the wolves try to move toward the positions of the leaders. In this way, the wolves slowly become nearly identical to the leaders and gradually become trapped in a local optimum, leading to low diversity and premature convergence [26,27]. Therefore, we propose a new competitive binary grey wolf optimizer (CBGWO) to address this limitation of BGWO in feature selection.
The general idea of CBGWO comes from the concept of competition between couples of wolves in the population. In CBGWO, the wolves are randomly selected, pairwise, from the population for competition: the N wolves in the population are randomly divided into N/2 couples, where N is the number of wolves in the population. A competition is then held between the two wolves in each couple, so each wolf participates exactly once. The wolf with the better fitness in each couple is called the winner; the wolf that loses the competition is the loser. The winners pass directly to the next generation without performing a position update, while the losers update their positions by learning from the winners. In other words, only the positions of N/2 wolves in the population are updated. The general concept of competition in CBGWO is illustrated in Figure 3.
New Position Update
By applying the competition strategy, CBGWO allows the winners (half of the population) to pass directly to the next generation, while the remaining N/2 wolves (the losers) update their positions according to Equation (38).
where S is the sigmoid function shown in Equation (37), r_8 is a random vector in [0, 1], and X_1, X_2, and X_3 are defined as follows: where X_α, X_β, and X_δ are the positions of alpha, beta, and delta at iteration t; A_1, A_2, and A_3 are computed as in Equation (15); and D_α, D_β, and D_δ are calculated as in Equations (42)-(44), respectively.
where X w is the winner wolf, X l is the loser wolf, C 1 , C 2 , and C 3 are calculated as in Equation (16).
As can be seen in Equations (42)-(44), the losers update their positions by learning from the winners. This means that the losers are not only instructed by the alpha, beta, and delta wolves, but are also guided by the winners toward the best prey position. In this way, CBGWO can explore the search region effectively.
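The pairwise competition itself can be sketched as follows; the subsequent loser update (Equation (38)) would reuse the sigmoid thresholding shown for BGWO2 above, with the winner entering the D terms as described. Names are illustrative, and the population size N is assumed even.

```python
import numpy as np

def cbgwo_compete(X, fits):
    """Return winner and loser indices from random pairwise competition.

    X: (N, d) binary population; fits: per-wolf classification error rates.
    """
    idx = np.random.permutation(len(X))
    winners, losers = [], []
    for i, j in idx.reshape(-1, 2):       # N/2 random couples
        if fits[i] <= fits[j]:            # smaller error rate wins
            winners.append(i); losers.append(j)
        else:
            winners.append(j); losers.append(i)
    return winners, losers
```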
Leader Enhancement
The leaders, alpha, beta, and delta, play an important role in CBGWO. Generally, the wolf population is guided by these leaders toward a better prey position. To prevent CBGWO from becoming trapped in a local optimum, the leaders enhance themselves with a leader enhancement strategy, in which a random walk performs a local search around the leaders (alpha, beta, and delta). The random walk is given by

X_L^d = rand{0, 1} if r_9 < R, and X_L^d otherwise,

where R is the change rate, X_L is the leader (alpha, beta, or delta), rand{0, 1} is a randomly generated bit (either 1 or 0), and r_9 is a random number uniformly distributed in [0, 1]. In CBGWO, R decreases linearly from 0.9 to 0, as shown in Equation (46).
R = 0.9 (1 − t / T)

where t is the iteration number and T is the maximum number of iterations. According to Equation (46), a larger R at the beginning of the iterations allows more positions to be changed, leading to high exploration. As the iterations pass, a smaller R tends to promote exploitation around the best solutions. Since there are only three leaders in CBGWO, only three new leaders are generated in each iteration using Equation (45); hence, little additional computational cost is incurred. In the leader enhancement process, if the fitness value of a new leader is better, the current leader is replaced; otherwise, the current leader is kept for the next generation.
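A minimal sketch of this leader enhancement random walk (Equations (45)-(46)): each bit of a leader is replaced by a random bit with probability R, and the candidate is kept only if its fitness improves.

```python
import numpy as np

def enhance_leader(X_L, fitness, t, T):
    """Random walk around one leader; fitness is the error rate to minimize."""
    R = 0.9 * (1 - t / T)                 # change rate, Equation (46)
    r9 = np.random.rand(X_L.size)
    candidate = np.where(r9 < R, np.random.randint(0, 2, X_L.size), X_L)
    return candidate if fitness(candidate) < fitness(X_L) else X_L
```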
The pseudocode of CBGWO is demonstrated in Figure 4. In the first step, the population of wolves is randomly initialized (either 1 or 0). In the second step, the fitness of the wolves is evaluated. The alpha, beta, and delta wolves are selected according to the fitness value. Next, the population is randomly partitioned into N/2 couples. The competition is made between two wolves in each couple. From the competition, the wolves with better fitness are defined as winners. The winners are directly passed into the new population. On the other hand, the losers update their positions by applying Equation (38). After that, the fitness of new losers is evaluated, and the new losers are added into the new population. The alpha, beta, and delta are then updated. Furthermore, the new leaders are generated by performing the random walk around alpha, beta, and delta. Afterward, the fitness of newly generated leaders is evaluated. The alpha, beta, and delta are again updated according to the newly generated leaders. The algorithm is repeated until the termination criterion is satisfied. In the final step, the alpha solution is chosen to be the optimal feature subset.
The following observations illustrate how the proposed CBGWO theoretically has the ability to tackle the feature selection problem in the classification of EMG signals.
• In CBGWO, only the positions of N/2 wolves (half of the population) are updated, which makes the processing speed of CBGWO extremely fast.
• CBGWO applies leader enhancement, which helps keep the leaders (alpha, beta, and delta) from being trapped in a local optimum.
• CBGWO includes the roles of winner and loser in the position update, so the hunting and prey-searching process of the wolves is guided not only by the leaders but also by the winner wolf in each couple.
• CBGWO employs a dynamic change rate, R, in the random walk strategy, which balances exploration and exploitation in the leader enhancement process.
Proposed CBGWO for Feature Selection
In this paper, the new CBGWO is proposed to tackle the feature selection problem in EMG signal classification. For feature selection, the solutions are represented in binary form, either bit 1 or 0. Bit 1 denotes a selected feature, while bit 0 represents an unselected feature. For example, the solution X = {0,1,1,1,0,0,0,0,0,1} indicates that the second, third, fourth, and tenth features are selected. Figure 5 illustrates the flowchart of the proposed CBGWO for feature selection. Initially, STFT is employed to transform the EMG signal into a time-frequency representation. Next, features are extracted from the STFT coefficients to form a feature set. Afterward, the STFT feature set is fed into CBGWO for the feature selection process. The initial population (solutions) is randomized, and the solutions evolve iteratively through fitness evaluation. Note that the classification error rate obtained by the classifier is used as the fitness function in this work; it is defined as the ratio of the number of wrongly classified samples to the total number of samples. In the fitness evaluation, if two solutions result in the same fitness value, the solution with the smaller number of features is selected. At the end of the iterations, the alpha wolf is selected as the global best solution (optimal feature subset).
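A minimal sketch of this wrapper fitness function, assuming scikit-learn's 1-NN classifier and a pre-computed train/validation split; variable names are illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X_train, y_train, X_valid, y_valid):
    """Classification error rate of the feature subset encoded by a binary mask."""
    if mask.sum() == 0:                      # an empty subset is invalid
        return 1.0
    cols = mask.astype(bool)
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(X_train[:, cols], y_train)
    acc = knn.score(X_valid[:, cols], y_valid)
    return 1.0 - acc                         # smaller is better
```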
Results
STFT transforms the EMG signal into a time-frequency representation, and ten time-frequency features are extracted from the STFT coefficients. In total, 120 features (10 features × 12 channels) are extracted from each movement of each subject. For fitness evaluation, the k-nearest neighbor (KNN) classifier with k = 1 is used as the learning algorithm, due to its speed and simplicity [16,28]. Following [22], the 2nd and 5th repetitions are used as the testing set, while the remaining four repetitions are used as the training set.
To examine the effectiveness of the proposed method in feature selection, CBGWO is compared with BGWO1, BGWO2, binary particle swarm optimization (BPSO), and the genetic algorithm (GA). The parameter settings of the feature selection methods are as follows: the population size, N, and the maximum number of iterations, T, are fixed at 30 and 100, respectively. It is worth mentioning that there are no additional parameter settings for BGWO1, BGWO2, and CBGWO. For BPSO, the inertia weight, w, decreases linearly from 0.9 to 0.4; the acceleration coefficients, C_1 and C_2, are set at 2; and the maximum and minimum velocities are set at 6 and −6, respectively. For GA, the crossover rate, CR, is set at 0.6; the mutation rate, MR, is set at 0.01; roulette wheel selection is applied for parent selection; and single-point crossover is implemented.
For performance evaluation, four statistical measures, namely classification accuracy, precision (P), F-measure, and the Matthews correlation coefficient (MCC), are computed from the true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) obtained from the confusion matrix [29][30][31]. In this study, each feature selection algorithm is executed for 20 runs with different random seeds, and the averaged results of the 20 runs are used for performance comparison. All the analysis is done in MATLAB 9.3 on a computer with an Intel Core i5-3340 3.1 GHz processor and 8 GB of random access memory (RAM). Figure 6 demonstrates the classification accuracy of the proposed methods for individual subjects. As can be seen, eight out of ten subjects obtained the best classification accuracy with CBGWO; for subjects 6 and 8, the best results are achieved by BPSO. From this point of view, CBGWO is more capable of selecting the relevant features. Figure 6 also shows that BGWO2 is the second-best feature selection method, providing better results on six subjects compared to GA, BGWO1, and BPSO. Evidently, BGWO performs well in feature selection, consistent with results in the literature [15].
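For reference, the four measures can be computed from the confusion-matrix counts as in the following sketch (per class; the multi-class case averages over classes):

```python
import numpy as np

def measures(TP, TN, FP, FN):
    """Accuracy, precision, F-measure, and MCC from confusion-matrix counts."""
    acc = (TP + TN) / (TP + TN + FP + FN)
    precision = TP / (TP + FP)
    recall = TP / (TP + FN)
    f_measure = 2 * precision * recall / (precision + recall)
    mcc = (TP * TN - FP * FN) / np.sqrt(
        (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
    return acc, precision, f_measure, mcc
```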
Experimental Results
On average, across all subjects, the best mean classification accuracy is obtained by CBGWO (92.69%), followed by BGWO2 (90.79%). Thanks to leader enhancement, the leaders (alpha, beta, and delta) in CBGWO are able to enhance themselves iteratively; hence, CBGWO has a higher chance of avoiding being trapped in a local optimum. A t-test shows a significant difference in classification performance between CBGWO and GA (p = 3.8907 × 10^−4), CBGWO and BGWO1 (p = 9.2063 × 10^−4), CBGWO and BGWO2 (p = 0.0023), and CBGWO and BPSO (p = 0.011). This shows that the performance of CBGWO is significantly better than that of GA, BGWO1, BGWO2, and BPSO, and the statistical results reveal the superiority of CBGWO over the other algorithms in feature selection.

Table 1 displays the number of selected features and the precision of the proposed methods. It is observed that not all the features are required in the classification process; a proper selection of features is more capable of achieving higher classification performance with lower complexity. As presented in Table 1, CBGWO contributed the smallest number of features for all ten subjects. This means that CBGWO can achieve promising classification accuracy while keeping a smaller number of features. In contrast, GA and BGWO1 have higher mean numbers of selected features, 61.29 and 61.49, respectively. It can be inferred that GA and BGWO1 did not evaluate the relevant features very well, leading to poorer classification performance in this work. Table 2 outlines the F-measure and MCC of the proposed methods. As can be seen in Tables 1 and 2, CBGWO offered higher precision, F-measure, and MCC values for most of the subjects. Overall, CBGWO showed competitive performance compared to GA, BGWO1, BGWO2, and BPSO, which underscores its suitability for solving the feature selection problem in EMG signal classification.

Figure 7 demonstrates the convergence curves of the proposed methods for individual subjects. From this point of view, CBGWO has very good diversity; with the leader enhancement process, it has the ability to escape from local optima. Unlike BGWO1 and BGWO2, CBGWO keeps tracking the global optimum, leading to very good performance. On the other hand, GA and BGWO1 converged faster but then stagnated, which shows that they were easily trapped in local optima. From Figure 7, it can be inferred that CBGWO is effective and reliable in evaluating the optimal feature subset.

Figure 8 shows the mean class-wise accuracy (classification accuracy of the 17 hand movement types) across all subjects. Inspecting the results, CBGWO showed competitive performance compared to GA, BGWO1, BGWO2, and BPSO. By applying CBGWO, 14 out of 17 hand movement types were successfully recognized (accuracy above 90%). A similar performance was found for BGWO2; however, CBGWO overtook BGWO2 in 14 hand movement types. Other algorithms, such as GA and BGWO1, had difficulty selecting the relevant features, leading to ineffective solutions. The results clearly evince the effectiveness of CBGWO in EMG feature selection.

Figure 9 illustrates the average computational time of the proposed methods. CBGWO obtained the fastest processing speed in this work, indicating that it can reach the optimal feature subset in a very short period. The reason CBGWO has a very short computational time is that it utilizes the competition strategy, which performs the position update for only half of the population. Moreover, leader enhancement is applied only to the three leaders, so it has little influence on the computational complexity. In short, CBGWO excels not only in feature selection but also in computational cost.
Discussion
In this study, a novel CBGWO has been proposed to tackle the feature selection problem in EMG signal classification. CBGWO has been tested and compared with other popular feature selection methods, including BGWO1, BGWO2, BPSO, and GA. The findings of the current study show the superiority of CBGWO in selecting the optimal feature subset. Compared to BGWO1 and BGWO2, CBGWO introduces a competition strategy to keep the high-quality solutions (winners) and to promote cooperation between the competitors. In the hunting and prey-searching process, the winner guides the loser toward a better prey position, which in turn improves the quality of the search. Only half of the population (the losers) participate in the position update, while the rest (the winners) pass directly into the new population; as a result, CBGWO incurs a very low computational cost, since the updating process is applied only to the losers. Furthermore, CBGWO utilizes the leader enhancement strategy to improve the quality of the leaders: in each iteration, a leader is replaced if the newly generated leader has a better prey position. In this way, CBGWO keeps tracking the global optimum and avoids becoming trapped in a local optimum. By making full use of these mechanisms, CBGWO proves successful in feature selection.
Through the analysis, we found that CBGWO is the best feature selection method in this work. CBGWO not only yields the optimal classification performance but also provides the minimal feature size, showing that the proposed model is more capable and efficient at solving the feature selection issues in EMG signal classification. Since EMG signals are subject-dependent, the best combination of features for each subject for achieving optimal classification performance is not known in advance.

In practice, users might have difficulty selecting the best features for each subject. Unlike traditional feature selection methods, CBGWO can be applied to select potential features without prior knowledge: it automatically selects the optimal features for a specific subject, and that feature subset can then be used in real-world applications. This, in turn, reduces the complexity and improves the performance of the recognition system. In sum, the proposed CBGWO is useful for feature selection.
Conclusions
A competitive binary grey wolf optimizer (CBGWO) is proposed in this study. CBGWO includes a competitive strategy that allows the wolves to compete in couples: the winners pass directly into the new population, while the losers update their positions by learning from the winners. In addition, CBGWO implements a leader enhancement strategy to improve the quality of the leaders in each iteration. For feature selection, CBGWO is compared with BGWO1, BGWO2, GA, and BPSO. The experimental results reveal that CBGWO yields better performance and overtakes the other algorithms in feature selection, while also offering a very low computational cost. In summary, the proposed CBGWO is successful and well suited for clinical and rehabilitation applications. As future work, a chaotic map can be used to fine-tune the parameters of CBGWO, the number of leaders can be increased to improve diversity, and CBGWO can be applied to other optimization areas, such as neural network training, knapsack, and numerical problems. | 10,068.8 | 2018-11-05T00:00:00.000 | [
"Engineering",
"Medicine",
"Computer Science"
] |
PARTIALLY CONCENTRATING STANDING WAVES FOR WEAKLY COUPLED SCHRÖDINGER SYSTEMS
Abstract. We study the existence of standing waves for the following weakly coupled system of two Schrödinger equations
Introduction
The mathematical analysis of singularly perturbed semilinear elliptic equations and systems has been the object of a wide range of studies in the last decades. Among the many motivations, a big role is played by models in Quantum Mechanics, and in particular by the semiclassical analysis of Schrödinger-type equations. In this context, one postulates that classical Newtonian Mechanics should be recovered from the Quantum one by letting the Planck constant ℏ vanish. Accordingly, the wave function of the quantum particle should concentrate and collapse to one or more Dirac deltas, whose positions should describe the sharp location of classical particles. When different quantum waves interact, e.g. in the case of weakly coupled NLS systems, the commonly investigated setting is the one in which all the waves concentrate in point particles. From the analytical point of view, this study raises different challenges. On the one hand, one may ask what the limit concentrating profile is at specific energy levels, for instance for ground states; this is typically done exploiting variational methods and blow-up analysis. On the other hand, solutions concentrating with prescribed shape and position can be constructed, mainly using the Lyapunov-Schmidt reduction approach.
A largely studied model is the case of a binary mixture of Bose-Einstein condensates, usually described by the Gross-Pitaevskii system, namely a system of two weakly coupled nonlinear Schrödinger equations for N = 1, 2, 3. Here, ψ 1 and ψ 2 are the order parameters of the two components of the mixture, m 1 and m 2 the corresponding masses, and V and W the external potentials, bounded from below. In general, the (trapping) potentials may be different, opening interesting possibilities concerning the geometrical configurations of the condensates. Finally, the interaction parameters µ 1 , µ 2 , β depend on the scattering lengths associated to the different states. These interaction parameters can be fine-tuned across a wide range of values, profiting from the presence of a Feshbach resonance. For more details on this model we refer to the book by Pitaevskii and Stringari [21], in particular Chaps. 5 and 21.
Looking for standing waves (ψ 1 (x, t), ψ 2 (x, t)) = (e iE 1 t/ℏ u(x), e iE 2 t/ℏ v(x)) of frequencies E i we have that (u, v) with V(x) = E 1 + V(x) and W(x) = E 2 + W(x), for E i such that inf R N V > 0 and inf R N W > 0.
The usually studied case is the one in which µ i , β are both very large, or ℏ is very small with respect to the other parameters, so that one is led to consider the singularly perturbed elliptic system. While the autonomous case, namely the case V i ≡ λ i with λ i positive constants, has been widely studied in the last two decades (see the recent paper [25] for an exhaustive list of references), few results concerning the non-autonomous situation are known.
The first result seems due to Lin and Wei [11], who studied the case of a binary mixture in a singularly perturbed regime and in the presence of trapping potentials. They prove the existence of ground state solutions, derive their asymptotic behavior as ε → 0, and show that each component has one maximum point (possibly the same), called a spike, which is trapped at the minimum points of the potentials V i . The existence of concentrating ground state solutions was also established by Montefusco, Pellacci and Squassina [15], Pomponio [22], Ikoma and Tanaka [7] and Byeon [4]. All the previous papers are concerned with system (1.2) where both equations are affected by the presence of the small parameter ε, so that every wave (given by the components of the vector solution) concentrates as ε approaches zero.
Here, we focus on another type of regime, as it may happen that only some of the waves act in a semiclassical way, while the others persist in a quantum behavior. To the best of our knowledge, this kind of analysis is not present in the PDE literature yet, and this paper is a first contribution in this direction.
More precisely, in our study we consider 2m 1 = ℏ 2 and m 2 = 1 2 in (1.1), so that (u, v) solves the following elliptic weakly coupled system where ε 2 := ℏ 2 .According to the previous discussion, along this paper we deal with the system above in the singularly perturbed regime ε → 0.Moreover, our study will deal with the case of µ i > 0 and β < 0, corresponding to positive intraspecies and to a negative interspecies scattering length, describing a repulsive interaction between the condensates.Our analysis will include the class of potentials satisfying the following assumptions.
which is non-degenerate in the space H 1 e (R N ) of functions even with respect to each variables, i.e.
that is the only solutions to (W) W(x) is even with respect to all the variables, Our main result is stated as follows.
Theorem 1.1.Let N = 2, 3, and suppose that (V 1 ), (V 2 ) and (W) hold.Assume set ω 0 := ω(0) > 0 and assume that Then there exists ε 0 > 0 such that for every ε ∈ (0, ε 0 ) there exists a solution (u ε , v ε ) of system (1.3) even with respect to each variable and having the following asymptotic profile as ε → 0 where Υ solves (1.5), U is the positive radial solution of and the peaks P ε and −P ε collapse to the origin as (1.12) Theorem 1.1 states the existence of a solution whose first component looks like a genuine solution to (1.5), in particular it does not concentrate, and whose second component concentrates at two opposite points which collapse to the origin as ε goes to 0. As a consequence of the coupling in the equations, the first component of (1.3) plays the role of an additional potential in the singularly perturbed second equation, so that the concentration will be triggered by the modified potential W − βΥ 2 .Because of the assumption β < 0, we will obtain a solution in the repulsive regime, and, when the origin is a maximum point of W our solution exists for every β negative; while when ∂ 2 11 W(0) > 0 we obtain a solution for β < β 0 < 0 (see (1.14)).
Let us make some comments.
Remark 1.2. We point out that, in case Υ satisfies then assumption (1.9) is satisfied as long as On the other hand, assumption (1.13) is verified in case V is radially non-decreasing near 0 and Υ is radial and has a (local) strict maximum at 0, as one can verify applying Hopf's Lemma. Indeed, first we observe that, in such a case, for any i the function ∂ i Υ solves By Hopf's Lemma we deduce ∂ ii Υ(0) < 0 and (1.13) follows.
Remark 1.3.It is useful to recall the classical results concerning the case of constant potential, i.e. −∆U (1.15) It is well known that (1.15) has an unique positive solution which is radially symmetric and also that the set of solution of the corresponding linearized equation is spanned by the N partial derivatives ∂U ∂x i which are odd in each variable.In addition, U is radially decreasing and it satisfies the following exponential decay (see [2,3,9]) Remark 1.4.We observe that a class of potentials V which satisfy hypotheses (V 1 ) and (V 2 ) includes both the constant potentials and, at least in dimension N = 3, the trapping ones, like V(x) = λ + |x| m for some λ > 0 and m > 0. Indeed, (V 1 ) follows from Remarks 1. we believe that a version of such results should hold also in lower dimension, but this is far beyond the aim of this paper).
Remark 1.5.Our result relies on the simmetry of the potentials V and W which allows to build symmetric solutions with symmetric peaks P ε and −P ε collapsing to the origin.We strongly believe that a similar construction could be carried out in a more general setting in the spirit of Kang and Wei [8], when the radial solution of the first equation (1.5) is non-degenerate in the whole space H 1 (R N ) (i.e.V is a trapping potential as in [5,23]).In that case, it should be possible to build a solutions whose first component resembles the radial solution Υ of (1.5) and the second component has two different peaks collapsing to a maximum point of the modified potential W − βΥ 2 .
Remark 1.6. We will prove Theorem 1.1 using a classical Lyapunov-Schmidt reduction. This will allow us to build each component with a prescribed profile: the first component will look like a one-bump solution for ε sufficiently small, while the second will develop two spikes collapsing at the origin and will be exponentially small far from them. In performing this classical procedure, we faced some new difficulties. First of all, in view of the square growth of the coupling term, we need to correct the ansatzes of both components to detect the suitable reduced problem. Moreover, the use of regularity theory will be crucial in order to make suitable expansions of all the terms involved in the construction. Our existence result does not cover the case N = 1, as in this case the size of the error term does not produce the suitable smallness of the remainder terms in the ansatz despite the presence of the correction term. We think that this point could be managed by introducing further correction terms, again in both equations, which, at the price of heavy technicalities, should allow one to construct a remainder term that is sufficiently small.
Remark 1.7.Our result deals with the case of a binary mixture and it is natural to ask if our construction can be extended to the case of a larger number of equations, i.e.
In particular, we wonder if it is possible to build a solution whose components v j concentrate at different pairs of points (P ε j , −P ε j ) collapsing to the origin as ε goes to zero.Remark 1.8.The unperturbed version of system (1.2) (let us say ε = 1) was firstly studied by Peng and Wang [19] who (in presence of radial potentials) constructed an unbounded sequence of non-radial solutions exhibiting an arbitrarily large number of peaks.Their result has been successively extended to the case of more than two equations by Pistoia and Vaira [20] and very recently by Li, Wei and Wu [10].We wonder if it is possible, by combining the ideas used in the above papers, to produce a solution to system (1.3) with ε = 1 whose first component looks like the solution to (1.5) and second component concentrates at an arbitrary large number of points approaching infinity as ε goes to zero.Remark 1.9.Let us finally observe that the existence of solutions to the system (1.2) is closely related to the study of the normalized solutions for nonlinear Schrödinger systems.We refer the reader to the recent papers by Lu [14], Liu and Yang [13], Guo and Xie [6] and Liu and Tian [12].In particular, it would be interesting to produce normalized solutions using as a parameter their L 2 -norms, in the spirit of the results obtained by Pellacci, Pistoia, Vaira and Verzini [18].
The paper is organized as follows. In the next section we set the problem, by introducing the main blocks of our construction and by reformulating problem (1.3) as a system of two equations, one set in an infinite dimensional space, the other, called the reduced problem, in a finite dimensional one. In Section 3 we solve the infinite dimensional equation. Finally, in Section 4 we study the reduced problem and complete the proof of Theorem 1.1.
Acknowledgments. The authors warmly thank the anonymous referee for her/his precious comments, and in particular for having pointed out a gap in the proof of Lemma 2.1 in a previous version of this manuscript.
Setting of the problem
Let us introduce the Banach spaces equipped with the norms and Henceforth, we omit the subscript ε in u, v and we agree that a b means |a| ≤ c|b| for some constant c which does not depend on a and b.
Performing a change of variable in the second equation, we are lead to seek a solution (u, v) of −∆u in the space ))}.In the next subsection, we introduce the main building blocks in the construction of our solution.
The ansatz and the correction terms. In view of assumptions
and U be the solution of where, since β < 0 We look for a solution (u, v) of (2.2) of the form where and the concentration points satisfy (see (1.12)) The functions Φ ε and Ψ ε are suitable correction terms, whose existence and properties are established in Lemmas 2.1 and 2.3. The remainder terms ϕ and ψ belong to the space where solves the linear equation Moreover, it is worthwhile to point out that all the functions Φ ε , U ε and Z ε are even functions.
In the following we introduce the two correction terms we need in our construction of the solution.Let us start from the term in the first component.
Lemma 2.1. There exists a unique even
(2.9) Proof.By exploiting assumptions (V 1 ) − (V 2 ) we deduce that for any even function , for some constant c which does not depend on f .Now we point that the function ε is an even function with Indeed, by scaling x = εy we immediately deduce As a direct consequence, we infer . Now, we write (2.9) as in R N and we observe that, reasoning as in (2.10), we deduce that , for any m ≥ 2; then, assumptions (V 2 ) implies, Remark 2.2.It is useful to remark that by Lemma (2.1) since ∇Φ ε (0) = 0 we deduce Moreover, since ∇Υ(0) = 0 we also have (2.12)
Lemma 2.3. There exists a unique
Proof. As U is radial, there exists a unique Ψ radial solution to then the function Ψ ε defined as ). The regularity properties of Ψ ε are a consequence of the regularity properties of U, while the bound from above of the H 2 (R N ) norm follows from the upper bound on the The exponential decay of Ψ ε will follow from the analogous decay of Ψ. In order to prove this property, let us first show that there exists R > 1 such that Ψ(r) ≥ 0 for every r > R. By contradiction, there exists a sequence r n → +∞ of minimum points at a negative level for Ψ. Then as soon as r n is sufficiently large so that the parenthesis is positive.
Remark 2.4.Let us point out that assumptions (V 1 ) and (V 2 ) are satisfied by constant potentials as well as by potentials of the type V(x) = |x| m with m > 0 as shown in [5, 23].
Remark 2.5.As a consequence of Sobolev embedding, the bounds from above stated in Lemma 2.1 hold for Ψ ε as well.
Finally, the nonlinear term N = (N 1 , N 2 ) is defined by
The linear theory
Let us start the study of the second equation in (2.15) by proving the following crucial result.
Lemma 3.1.
There exist c > 0 and ε 0 > 0 such that for every ε ∈ (0, ε 0 ) and for every β < 0 it results Proof. We argue by contradiction and suppose that there exist Step 1. Let us first show that ) and almost everywhere in R N . Taking into account that By applying Lemma 2.1 one deduces that Moreover, by applying Lemma 2.3 we obtain Arguing analogously on the other terms on the right hand side, it follows that ϕ solves Since ϕ is an even function, by the assumption (V 1 ) we get ϕ ≡ 0.
In order to show that ϕ n → 0 strongly in H 2 V (R N ), it is enough to exploit Lemma 2.1 and 2.3 to verify that the L 2 (R N )−norm of the right hand side (R.H.S. for short) of the first equation goes to zero, and then apply hypothesis (V 1 ).Indeed, as (ϕ n , where we have also taken into account that Υ decays exponentially and ϕ → 0 strongly in L 2 loc (R N ), so that we also have Step 2. We now study the second equation in (3.1) and we prove that t n → 0. We test with Z ε n and we remind that Z ε n solves =o (1).
Indeed, we use the exponential decay of U and of its derivatives.Since Moreover a direct computation and Lemma A.1 shows that It is possible to show that all the other integral terms on the left hand side tend to zero by applying Lemma 2.1 and 2.3, Step 1. and Sobolev embeddings.
Finally, since it is immediate to check that for some C > 0, we deduce that t n = o(1).
Step 3. Let us now introduce the sequences We will show that (up to subsequences) ψ ±P n ⇀ 0 weakly in H 1 (R N ) and strongly in L 2 loc (R N ). Both these sequences are bounded in H 2 W εn (R N ), so that, up to subsequences, ψ ±P n ⇀ ψ ± weakly in H 1 (R N ) and strongly in L 2 loc (R N ). Let us first show that ψ + ≡ 0; then an analogous argument will yield that ψ − ≡ 0. In the following we will use the notation ψ +P n (x) = ψ P n (x). Recalling (3.1), the function ψ P n satisfies the equation Arguing as in the first step, applying Lemma 2.1 and Lemma 2.3, and taking into account that ϕ n L ∞ (R N ) → 0, we obtain that the limit function ψ + solves the limit problem On the other hand, the function ψ + inherits the symmetry properties of the function ψ ±P n , namely it is even in the last two variables and it satisfies the orthogonality condition This, together with (3.3), yields ψ + ≡ 0.
Step 4. Let us prove that a contradiction arises.First, let us prove that . By testing the second equation with ψ n , and recalling that β < 0 in view of (1.14), we deduce that where we have repeatedly applied Lemma 2.1, 2.3, that φ n L ∞ (R N ) = o(1) and that g n → 0 strongly in L 2 (R N ).Concerning the last term, we have that because ψ ±P n → 0 strongly in L 2 loc (R N ) (as shown in the previous step) and U decays exponentially.This implies that ψ n → 0 strongly in H 1 (R N ), thanks to (1.7).Finally, let us prove that a contradiction arises by showing that also In order to show this, it is enough to use hypothesis (V 2 ) and to check that the L 2 (R N )−norm of the right hand side of the second equation in (3.1) goes to zero.Indeed, by Lemma 2.1, taking into account that Remark 3.2.Let us observe that the hypothesis β < 0 is needed only in the proof of Step 4.Moreover, it is not needed in the case of a constant potential so that the sequence W 0 − βΥ 2 (ε n x) ψ 2 n ≥ 0 for every x and the final contradiction can be obtained applying Fatou Lemma.However, even if at this point we can manage β ≥ 0 (in the case of W constant), the study of the finite dimensional problem will require β < 0 as shown in hypothesis (1.14).
3.1.The size of the error term.In this subsection we compute the L 2 (R N ) of E which will determine the norm of the remainder term (ϕ, ψ).
Proposition 3.3. There exists $\varepsilon_0 > 0$ such that for every $\varepsilon \in (0, \varepsilon_0)$ the $L^2(\mathbb{R}^N)$-norm of the error term $E$ satisfies the stated bound.
Proof. Let us start by studying $E_1$, given in (2.17). Lemma 2.1 and the Sobolev embedding imply the first bounds (see Remark 2.2), from which we deduce the corresponding estimates. Moreover, by applying Lemma 2.3 we obtain the next bound. As far as the last three terms are concerned, similar computations apply. Let us now study the $L^2(\mathbb{R}^N)$-norm of the terms in (2.18). In view of (1.7) and (1.12), and recalling that $x_0 = 0$ is a critical point of $\omega$ by symmetry, the first term is estimated directly. On the other hand, Lemma 2.3 allows us to deduce the second bound. Moreover, conclusion (ii) of Lemma A.1 and (1.12) yield the next estimate, as, by using (1.12), it follows that $\rho_\varepsilon \sim \frac{1}{\sqrt{\omega_0}}\,\varepsilon\,|\ln \varepsilon|$. The last terms in (2.18) are studied in the same way, concluding the proof.

Solving the second equation in (2.15)

Lemma 3.1 and Proposition 3.3 yield the following result.

Proposition 3.4. There exists $\varepsilon_0 > 0$ such that for every $\varepsilon \in (0, \varepsilon_0)$ there exists a unique solution $(\phi, \psi) \in K^\perp$ of the equation.
Proof. We will obtain the result by applying the contraction principle to the continuous map $T$, which is well defined thanks to Lemma 3.1, where $A$ is a suitable positive constant to be chosen. In order to find $A$, it is sufficient to prove the two mapping properties below; together with (3.7), they imply (3.6). Then the claim follows by the contraction mapping theorem.
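The displayed conditions did not survive extraction; in the standard Lyapunov-Schmidt scheme they take the following generic form (a sketch of the usual argument, not the paper's exact constants):
$$\|T(\phi,\psi)\| \le A\,\|E\|_{L^2(\mathbb{R}^N)}, \qquad \|T(\phi_1,\psi_1)-T(\phi_2,\psi_2)\| \le \tfrac{1}{2}\,\|(\phi_1,\psi_1)-(\phi_2,\psi_2)\|$$
for all pairs in the ball of radius $A\|E\|_{L^2(\mathbb{R}^N)}$ in $K^\perp$: the first property makes $T$ map the ball into itself, the second makes it a contraction there, and the Banach fixed-point theorem then provides the unique fixed point.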
Solving the reduced problem
In this section, we are going to study the first equation in (2.15). Let $(\phi, \psi)$ be the solution of the second equation in (2.15); our goal will be to prove that $c_0 = 0$. From now on, we fix $(\phi, \psi)$ as given in Proposition 3.4.
Proof. Arguing as in the proof of Proposition 3.4, it is easy to obtain the first estimate. Now, by using (2.16) and (2.8), we study the right-hand side. First of all, arguing as in (3.4) and taking into account (3.5), we obtain the first bound, where we have applied Lemma A.1; the second term is estimated similarly. In addition, (3.5) and Lemmas 2.1 and 2.3 yield the next bounds, and the terms in (4.2) can be handled analogously. Let us focus on the terms in (4.3). Taking into account Remark 3.5 (choosing $\alpha = \tfrac12$ for $N = 2$), we infer the corresponding estimate. Moreover, (1.12) and (3.5) yield the next one. The last two terms in (4.5) can be studied similarly, by applying Lemma A.1. The last term in (4.3) can be studied more easily, concluding the proof.
We are now in a position to study the relevant term in (4.1). We claim that the other terms on the right-hand side of (4.7) are of higher order with respect to $\varepsilon\rho_\varepsilon$. Indeed, Lemma A.2 and (1.12) yield the first estimate, and the last term in (4.7) can be handled by applying Lemma 2.3. Let us now study the second term on the right-hand side of (4.6): in view of Remark 2.2, we obtain the corresponding bound. Finally, let us study the cubic and quadratic terms in $U_{P_\varepsilon}$ in (4.6). Applying again Lemma A.2-(ii) with $s = t = 2$ completes the estimate.
"Mathematics",
"Physics"
] |
The Case for Octopus Consciousness: Valence
Octopuses may demonstrate perceptual richness, neural unity, temporality, and finally, valence or affective evaluation, as the neural basis for consciousness. Octopuses attach a positive valence to food as 'specializing generalists' with long-term learning and flexible choices. They value shelter, yet modify, adapt and even transport it where necessary. They attach a negative valence to what may be described as pain, monitoring and protecting the damaged area and learning to associate locations with pain relief. Finally and surprisingly, octopuses attach a negative value to uncertainty so that they explore their environment before exploiting certain aspects of it and even exhibit motor play. This series of four papers, culminating in the present one, demonstrates in detail why the Cambridge Declaration of Consciousness has suggested octopuses might have the substrate for consciousness, although it is likely not similar to or as complex as that shown by 'higher' vertebrate lineages.
Introduction
Birch et al.'s [1] fourth dimension of animal consciousness is titled e-richness. For them, this represents affective experiences or 'feelings'. Evaluating any animal's abilities in this category produces a major difficulty. The 'hard question' assumes that affect needs to be reported to be known, and non-human animals (with few exceptions) cannot report their emotions or anything else about themselves. One way around this lack is to evaluate both the behavioral reactions and physiological states that would accompany affect or are reported to produce affect in humans. Another is to report on valence, or the value associated with particular situations or responses. Birch et al. [1] (p. 792) comment that "valence must be present whenever there is affect-based decision making". In other words, affect comes when you value particular situations or sensory feedback, and by that definition, affect accompanies all choices. This cannot be true as many choices are automatic, possibly reflex responses to a narrow range of stimuli; it must be true only when choices are learned and used in future decisions or responses involving a set of actions in a complex situation. For a background on this link, it is useful to look at an investigation of human motivation as a foundation for the neural and behavioral processes that underlie our emotions. In a review, Lyon [2] suggests that valence informs an organism's decisions about what to do next and reminds us of Damasio's finding that reason and emotion together fuel any animal's decision-making.
From a biological view of the decision-making process, Barrett [3] suggests that the brain (1) monitors one's needs and current states, (2) infers causes, (3) monitors trends, and (4) decides what to do next. This assumes that both 'needs' and decisions have valence. Barrett further reminds us that it is all one package and that subjective feelings have accompanying neural, physiological, and behavioural changes. Such valence can be negative (as in pain) or positive (in pleasure), although a negative input such as pain, signaling actual or potential tissue damage [4], has a strong valence and is perhaps the most likely one to be seen across the animal kingdom. There is one kind of valenced situation that is not the result of simple sensory input. Again with reference to humans performing this decision-making process, uncertainty and unpredictability are stressful [5] and organisms work to reduce them by attention (focusing on what one needs to know), learning (acquiring information), and habituation (learning what not to attend to). In addition, for decision making an organism must know its current states, then its attainable states, and then compute its goal states to cope with change. It is this change that needs to be either planned for or reacted to, and remember that octopuses live in a complex and swiftly changing environment. Can they attach valence to the reduction of uncertainty?
Evaluation of positive and negative valences (what does it seek? what does it avoid?) in an octopus can thus approach evaluation of the category of emotion and begin to probe into possible affective states that help guide its behavior. Perceptual richness [6] clearly guides the input, evaluates one's state, and predicts future actions, and organisms monitor the effects of actions to judge possible causes. Unity of the self [7] allows monitoring of one's needs and current trends, and temporality [8] unites past experiences with future actions to predict responses. This paper adds affective valence to the three other aspects of consciousness previously covered.
It is nevertheless difficult to understand what categories of valence can guide octopuses' behavior. Certainly, as they are generally not social beings, they do not have positive affective experiences when affiliating with conspecifics. Yet all organisms must find nourishment, so positive valence is likely attached to (1) decision-making about food. Similarly, the soft-bodied octopus must (2) seek shelter and should attach a positive valence to finding, modifying, and attaining it. Pain (3) is a negative experience for complex organisms, including octopuses, and can be seen to have a strong negative valence. Investigating these situations will give us some foundation for understanding valence in this group. We will also examine a fourth situation, the negative value attached to (4) uncertainty and how it is resolved, for similar characteristics.
Valence and Food Choice
An influential book, Foraging Theory by Stephens and Krebs [9], suggested that nonhuman animals' food choices were driven simply by energetics: that energy produced by the digestion of food, minus energy expended on finding and preparing it, predicted food choice. This simple equation, which was promptly challenged, contrasts with the varied influences known to predict human choice of food, where sensory cues, learned choices, and context are all major drivers of food consumption [10]. Despite the simplicity of the theory, energetics was often held up as the expected predictor of prey choice in any animal. The equation is much too simple for many species, including the octopuses, as research has shown abundantly since then. Prey items have valence or value attached to them, and the food-finding situation is controlled partly by availability but also by learning, sensory characteristics assumed but often not known to scientists, and other influences such as predator pressure. Energetics and its modifiers can control choices in the laboratory but the situation is not so simple in the 'real world' where animals make their choices. The evidence of the outcome of these complex pressures and the value that must be attached to particular prey is fairly easy to view in octopuses. They take prey to a sheltering 'home' and consume it there, so the hard remains of food are usually though not always available for assessment.
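Stated schematically (my rendering of the verbal description above, not a formula quoted from the book [9]), the energetics-only model ranks each prey type by its net energy value,
$$V(\text{prey}) \;=\; E_{\text{digested}} \;-\; E_{\text{finding}} \;-\; E_{\text{preparing}},$$
and predicts that the animal simply chooses the prey with the highest $V$; the rest of this section catalogues the influences that break this prediction in octopuses.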
How do we know that octopuses have valences for and make choices about what items to find and consume as food? Because hard remains of prey are easily available as middens outside octopus dens, many researchers have collected them [11][12][13][14][15][16]. All these authors noticed octopuses took a wide variety of prey species but also that the percentage of prey chosen did not equal the number of prey individuals of each species in the immediate area. What might have influenced these active choices? One variable might be ease of access. E. dofleini predominantly chose crabs as prey, but Hapalogaster mertensii, known casually as the 'hairy crab', was under-represented in midden remains [16]. The spiky surface is likely to make it difficult for the smooth suction cups of the octopus' suckers to gain a grip on the crab (Astroturf is similarly repellent when placed on the upper walls of aquariums) and sponges on the surface of scallops similarly protect the host from octopod predation [17].
Chitons were very common on the subtidal rocky areas where both E. dofleini [16] and O. insularis [14] foraged yet were almost absent from prey remains. There might be a problem with the ease of removal from the rocks or penetration through the integument, or it may be that low food value is available from each prey item, but chitons seemed to have limited 'value' for octopuses. These examples suggest that some combination of choice and availability must drive actual consumption. A clear example of the modification of preference by availability is that of O. bimaculatus, who much preferred crabs over two species of Tegula snail in the laboratory. In the field, crabs were hardly available and so the octopuses consumed much of the more preferred of the two abundant snail species [12].
Predation on the hard-shelled bivalves offers an opportunity to evaluate the limitation of energetics in controlling prey-selection decisions, both within prey species in terms of size and across species. Penetration techniques varied. The valves of smaller and weaker bivalves such as mussels can be simply pulled apart but larger ones are drilled through the shell and poisoned to weaken the adductor muscles. As larger clams are resistant to pulling, this choice of technique is also dependent on octopus size, with bigger and stronger octopuses generally choosing bigger prey, but not always [18]. Across prey species, prey choice somewhat follows an energetic preference, with the strongest muscled clam species taken least often [19]. Yet when the clams were opened and presented 'on the half shell', that species was preferred, so again for choice of species, energetics, and preferences interacted. A study of selection by large octopuses among clam species [20] showed that on average the prey species selected took less 'handling time' and thus less energy expenditure, but post-evaluation data revealed that one of the species less often selected actually had the shortest handling time. More than effort and energetic reward were evaluated.
When penetrating bivalve and gastropod shells, octopuses first test muscle resistance and, if it is large, they drill into the shell [21]. This leads to a trade-off in laboratory studies of size preference; smaller prey needs little work but yields meager rewards, and larger prey demands more work with greater return. The result was a compromise: medium-sized clams were preferred over bigger and smaller ones [22]. Another study of strategies and size looked at the choice of pulling versus drilling, and found that individuals chose a wide range of sizes at which to make the switch [23].
In 'real world' oceans, there are other influences besides energetics on prey choice that may be evaluated. Prey is first located and then prepared, and octopuses appear to use visual guidance to go to likely places and then use chemotactile search to actually find prey [14]. Smaller octopus species may calculate predator pressure, take more small prey, and be time-minimizing foragers [16], whereas much larger ones can resist predation and take larger prey, following an energy-maximizing strategy [15]. Additionally, octopuses simply prefer crabs. A carefully measured study of energy expenditure and gain [24] revealed that crabs were chosen over clams by a ratio of 4:1, yet crabs required more energy to prepare and consume and yielded less energy for the work. Octopuses prefer crabs as prey when given many different choices, consume more, and gain more weight when crab is offered in the lab [25] and this preference cannot be modified by early learning [26]. We do not know why crabs are valued, but they are.
Most of these studies looked at situations where the prey was provided, but to see the mechanism by which several influences affect prey selection, it is necessary to evaluate choices in the field and also to focus on the individual. After all, it is the individual that makes decisions. Departures from the notion of octopuses as generalist predators at the population level have already been noticed [14][15][16][21]. However, seeing the particular patterns required observation at the individual level and with frequent collections of prey remains, since shells discarded from the home are often removed by water movement or scavengers [14]. When middens were checked daily at a rich and varied site in Bonaire [27], the octopus population could be seen as generalist, yet some individuals within it were specialists. This is an interesting pattern of choices because a strict specialist species could be seen to be automatically responding to a narrow set of stimuli and a generalist species to be simply taking whatever prey was immediately available. The variation amongst individuals suggests an animal without strong preferences, but rather ones based on learning, with some presumably having learned particular foraging strategies or prey availability. One individual had a 'run' on juvenile conch snails, for instance, which give a large energetic yield yet also require effort/learning, as they need to be drilled and were found in sandy areas some distance from the rocky den location (and see [11] for a similar preference). This specialist/generalist combination showed a diverse mix of both strategies [28,29] within and across species. Interestingly, in Alaska E. dofleini had a very diverse diet, yet further south the population specialized on crabs [29], suggesting specialization only when this preferred prey was abundant. Habitat richness, both across mainland and island populations [30] and at the micro level in the same area [31], still predicted the breadth of prey choice; the more species that were available, the more were taken, on average. Within the second study another variable, octopus personality [32], influenced the breadth of choice. Shy octopuses had a narrower selection of prey species than bold ones, possibly because they explored less and discovered fewer food sources.
So the studies of octopus prey consumption reveal valence, but they also show what a variety of influences can intervene between preference and consumption. Sensory cues are important, learned choices modify tendencies, and context including availability is vital, a set of pressures similar to that proposed for our species by Mela [10]. One preference remains unexplained: why do octopuses so prefer crabs? They may provide needed nutritional trace elements, though perhaps they taste good, as they do to us, but we have no way of asking the octopus.
Assessment of valence in food choices generally is much more sparse in the cuttlefish. Evaluation of stomach contents [33] showed diet shifts across the lifespan and led to the conclusion that common cuttlefish were generalists. Yet a couple of studies of cuttlefish cognition revealed an interesting way in which valence was expressed. First, the authors decided that animals would be willing to wait longer for access to preferred prey [34]. So they offered a choice in a variation of the so-called 'marshmallow test' that was featured for small children as a measure of self-control [35]. They offered preferred and non-preferred food items with a small delay for delivery of the preferred one, but as soon as either was taken, the other was withdrawn. Individual cuttlefish were able to wait for 50-130 s to gain the preferred food, showing a clear control of responses. Second, cuttlefish were offered non-preferred food in the daytime, and preferred food in the evening. Cuttlefish learned the schedule and refused the non-preferred food in the daytime when the preferred was scheduled for later delivery [36]. More interesting, if the schedule of evening food delivery was reliable, the cuttlefish were willing to wait, but if the delivery was random they took the immediate but less-preferred reward. Again as with humans [5], the cuttlefish seek predictability and respond to it, but because these cognitive tests were done in isolation from any natural history observations, we have little idea of how these abilities fit into the cuttlefish's actual choices in nature.
Valence and Shelter Use in Octopuses
Because of their shell-less soft bodies, cephalopods are at major risk of predation. The benthic octopuses consequently spend little time foraging for prey [37], staying within a shelter for around 70% of their active period, thus fitting the definition of a time-minimizing forager [16]. The value of such shelter is quite simple: no shelter, no octopus survival. This is particularly true for female octopuses, which attach their eggs to a solid substrate and guard and tend them. Deep sea octopuses may cluster on the scarce solid outcropping above the soft sediment in locations such as 'Baby Bare' [38]. Shallow water species can congregate in similar locations and interact with actions unusual for 'asocial' species [39] even though, given a choice, they will not shelter near each other [40,41]. Additionally, the picture is complicated by the fact that the flexible and opportunistic octopus has preferences but is also able to manipulate its environment [42][43][44] and modify it to suit its needs.
In general, octopuses will be somewhat confined to solid substrates, within which suitable shelter might be abundant [45] or somewhat limited [46]. Areas such as reef edges that offer both shelter and easy access to sandy areas nearby for hunting [47] are selected. Octopuses also seem to prefer site locations looking downward and out from shore, a 'room with a view' [43], maybe because they can evaluate the immediate environment before they go out hunting. Octopuses selecting ready-made shelters seem to value ones that have a volume a little more than their own and have a small aperture, as with their boneless body they can squeeze through an opening that would block competitors [42]. However, if an aperture is large, octopuses will block it off with items collected from nearby, including remains of shelled prey and small rocks [43]. Females with eggs, who no longer hunt, may completely withdraw and thoroughly block off the aperture [48]. Even though octopuses can gather tactile information indicating they are sheltered, they also can see outside and prefer shelter that is dark.
Although they need solid shelter, many octopus species can use natural shelter available far from solid rock, such as molluscan shells. The availability of empty mollusc shells enlarged the range and shaped the distribution of O. joubini on sea-grass beds and enrichment of the area with gastropod shells increased octopus density [43]. Such enrichment by artificial shelters can rejuvenate a population that has been over-fished by providing shelter for brooding females [49]. Oyster beds far from solid rock offered empty shells to shelter O. tehuelchus [50], a scallop bed gave shelter for an unusually crowded group of O. tetricus [39] and a variety of human-made structures allowed O. vulgaris to survive on soft sediment [44]. All of these observations indicate the imperative need for some kind of shelter, but the fact that they can modify physical structures makes it difficult to specify what type of shelter an octopus 'values'.
One of the few positive results of pollution of the marine environment is the widespread use of glass, plastic, and metal waste for octopus shelter [51][52][53], sometimes allowing them to live in 'urbanized' seascapes [54]. Dark beer bottles are ideal for O. rubescens [55], tires for O. vulgaris [44]. Split coconut shells are light enough to be portable so that O. tetricus can imitate hermit crabs and actually take shelter with them when they forage on the sandy/mud substrate [56]. Perhaps the value of shelter is best demonstrated by this, how much the octopuses work and adapt their environment to attain it.
Valence and Pain
Pain, defined as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage" [4], is based on a universal sensory input across the animal kingdom yet lifted to the experiential level by central monitoring, evaluation, and decision-making. The sensation is vital and imperative, the signal amount increasing swiftly with more potential damage and the input habituating poorly. Yet the central monitoring matters, as humans without pain sensation cannot self-monitor damage and do not live long lives. Pain is thought to have sensory, cognitive, and affective components in humans and thus it is impossible to absolutely prove its existence in non-human animals. There is a widespread debate about whether and which animals can experience pain, as those with simpler nervous systems are often designated as having nociception, solely receiving the sensory component. Sneddon et al. [55] acknowledge this problem and suggest that many pieces of evidence can be accumulated to make a good case for true pain and thus valence in non-human animals. They suggest that there should be a neurobiological, physiological, and behavioral response to a noxious event, that this should result in avoidant and protective responses afterward, and that we should see subsequent changes in motivational state such as place preference and either analgesia self-administration or that the animal pay an energetic cost to access it.
Besides the obvious tissue damage, many marine animals presumably experience nociception from the sting of nematocysts (also sensed by humans) of the phylum Cnidaria [56]. Octopuses show signs of aversion after contact with cnidarian sea anemones, and hermit crabs place anemones on their sheltering gastropod shell to protect themselves against predators, including octopuses [57]. After being stung, octopuses try different approaches to an anemone-carrying crab, such as coming from a different angle or trying to blow the anemone off the crab with jets of water. This avoidance is only partly successful, but observation of aversion to stings gave the early researchers looking for a learning situation in octopuses the idea to use a small electric shock in aversive conditioning [58].
Whether it was from chemical or mechanical stimuli, researchers have slowly accumulated evidence that cephalopods have more than simple nociception. A complex set of responses takes place if squid are given a small injury and then exposed to a potential predator [59]. First, and notably, a sea bass recognized some behavioural or chemical differences about a squid that had been injured and was more likely to attack it. The injured squid changed their behavior and became wary. They were alerted to the predator from further away and sooner in its approach and fled sooner (note that loliginid squid in midwater are 'jumpy', moving away from a potential threat on average eight times per hour [60]). The injured squid had a heightened sensitivity to mechanical stimuli, but if they were given anesthetic, they lost this sensitivity and wariness and were more likely to be captured by the fish. While squid thus have a heightened response to damage, the authors did not see any behavioral response to the wound itself. However, this study was only short-term.
Octopuses given a similar arm crush injury showed their normal aversive behavior of inking, jetting away, and, in the case of Abdopus, arm autotomy [61]. Their sensitivity to mechanical touch increased, both locally around the damaged area and more generally across the skin surface. Local mechanoreceptors were sensitized, and twenty-four hours later they still had a lower threshold on the ipsilateral but not the contralateral arms. There was a local seeking response of nearby suckers towards the damaged area, as if exploring for the source of the problem (a protective response). This sensitivity increase was abolished with anesthetic. Interestingly, octopuses showed immediate arm grooming near the injury and persistent arm guarding, criteria expected as part of affective responses [55]. A different octopus species, O. bocki, was given a presumably painful injection of acetic acid in the arm and then confined in a previously positively conditioned chamber [62]. The experience led octopuses given the injection to now avoid this chamber. If, however, they were given a lidocaine injection that relieved the sensation, they stayed in what had now become a rewarding location. Both of these place responses fulfil the motivational changes expected to demonstrate likely sentience [55], so we can recognize that damage has the expected negative valence.
Valence and Lack of Information
There is one surprising aspect of cephalopod behavior which has been studied mostly in octopuses, that a positive value is attached to gaining information. It is not one that we normally think of as part of animals' lives, mainly because studying details of behavior in a controlled and constrained laboratory limits its expression. Additionally, Peters [5], focusing on humans from a biological standpoint, pointed out that uncertainty (unexpected novelty, unpredictability, and uncontrollability) is stressful and that organisms act to reduce this uncertainty. In that sense, knowledge of the environment has a positive valence. He suggested that three processes reduce uncertainty-attention, learning, and habituation-and that in mammals, glucocorticoids in the brain are central to these processes. In rats, there is a clear effect of environmental enrichment on the reduction of anxiety as well as an increase in exploratory behavior, again linked to glucocorticoid receptors in the brain [63]. The value of this uncertainty reduction can be seen from two different viewpoints. One is motivational, in that researchers have found that mobile animals always explore a new environment, expressed in terms of orienting to situations of interest, manipulation of items nearby, and locomotion through the area [64]. It might be a basic drive to explore or one to increase the amount of sensory information available. Separately, information seeking might be linked to ecological necessity, a need to balance exploration (gaining information) with exploitation (using the information to satisfy basic needs) [65]. The authors suggest that animal monitoring of this balance might be a simplistic form of the metacognition that we humans use.
There are two spatial referents in which the balance is expressed, reflecting the division between close-by egocentric and larger allocentric space [8]. In egocentric space, animals explore their immediate environment, probably to reduce the uncertainty about what is around them [5]. Again, object manipulation is linked across primate species to being a generalist feeder with a diversity of food handling procedures [66], and remember that octopuses are also generalists. It also appears to be linked to brain size [67]. This kind of object manipulation may be a foundation for the cognitive skill of mental manipulation [68], one of the bases for human sentience. While the exploration of one's immediate environment is widespread, it extends in some species into two different special activities, tool use and play. Although tool use was once described as discriminating humans from non-human animals [69], simple tool use, defined as "the exertion of control over a freely manipulable external object with the goal of altering the physical properties of another object, substance, surface or medium via a dynamic mechanical interaction" [70] is now known to be much more widely distributed in the animal kingdom. It is thought to be more sophisticated in mammals, especially primates, and corvid and parrot bird groups. The use of tools may actually extend the egocentric body space [71] and the tool can be included in the immediate body schema and change neural networks in the process. Such actions must be planned and the goal has a positive valence in some way.
The aquatic environment is not always supportive of object manipulation and tool use, with the lack of objects except on the benthos and the higher density of water making object movement more difficult [72]. As well, we lack information about marine animals because the oceans are poorly explored. On the other hand, water itself may be a tool, and cephalopods have used jets of water aimed through the flexible funnel for many different functions. Octopuses [73], cuttlefish [74] and sepiolid squid [75] manipulate the substrate, particularly sand, to form hiding places in crevices or underneath a sandy surface. Octopuses also use jets of water to repel scavenging fishes [76] and occasional pesky experimenters, and finally in object play [77]. This manipulation is a prime example of domain generality, extending the use of behavior that originally evolved as circulation of water through the mantle cavity for respiration, to jet propulsion, and into object propulsion then apparently to play, an excellent example of octopuses' behavioral flexibility.
A second category of manipulative behavior that is thought to have evolved from exploration is play. Except for having large complex brains and a demanding environment, cephalopods do not fit the category of animals that are presumed to play. Play consists of incomplete or out-of-context action, is not stereotypical, is produced spontaneously in a stress-free situation, and is not obviously 'useful' [78]. Normally play is seen in mammals. It is thought to arise because young mammals have excess energy resources, a sheltered juvenile period that gives them time to express play, and because it is useful, i.e., has positive valence for preparing them for adult lives, particularly in social roles. Some groups such as the parrot play through adulthood, perhaps to cope with a varied environment and to practice generalist foraging strategies [79]. We can think of exploration as finding the affordances (potential ecological roles) of objects in the environment, and object play as manipulating those affordances. Octopuses, being asocial, do not perform social play, but do play with objects using different actions. The first play behavior recorded used the water jet, moving a floating object around an aquarium [77], and the second was moving an object by one or more of the mobile arms [80]. A third context has been casually reported, where a floating object is pulled underwater and allowed to bob back up again. As in tool use, the octopus has a great deal of behavioral flexibility in play. Yet play is defined as having a hedonistic character in that it is 'pleasurable', while exploration is assumed to reduce anxiety [67] so by definition these are valenced actions.
Uncertainty is also resolved by exploration in the larger allocentric environment, as mobile animals need to know in particular what resources are in the areas into which they will move. Foraging bees balance exploration of areas with potential resources with exploitation of these resources, beginning with short trips near the hive, moving to longer exploratory ones, and then to directed exploitation by traplining [81]. Rodents establish a 'home base' and first make short excursions from it, later making longer ones and using a saltatory stop-and-go pattern of movement [82], and this pattern is also found in octopuses [14]. Yet rats also shift their routes depending on the certainty of the potential rewards along them [83]. Exploration is linked to diet, with frugivorous primates exploring less and having simpler paths than generalists [84]. It also seems tied to the demands of a complex environment across the life history, since corvid birds explored and played as juveniles and parrots throughout their lives [79], and remember that octopuses live in a diverse and changing environment. Given a novel laboratory environment, octopuses spent much of their first 24 h exploring, and this activity decreased with time [85]. They learned the location of sheltering burrows during exploration and used them when they were needed later, increasing their movement again when the testing arena was rotated and locations had to be re-calibrated. Exploration is part of the normal lives of cephalopods, although it is very poorly documented, and its reduction is likely valenced.
Conclusions
The four papers of this series have attempted to look at the foundation for sentience in the cephalopod mollusc octopuses and have evaluated as much as presently possible their 'consciousness profile' [1]. Cephalopods have rich perceptual experiences [6], but the dimensions of these experiences are not necessarily the same as those of the mammals we are related to and think in terms of. Despite the decentralized distribution of the nervous system, with many neurons in the octopod arms, cephalopods have a basic unity of evaluation and a central brain that learns and makes decisions [7]. They make and use these decisions across time, from monitoring their needs and inferring causes to monitoring trends [3], before deciding on and carrying out actions (see Neisser [86] for this process in humans). Rather than simply seeing this ability as an accumulation of 'unlimited associative learning' capacities [87], we may need to evaluate the patterns of attention, learning, and habituation [5] that are necessary to live in a complex world. It is more than acquiring bits of information to use immediately. In fact, picking up a sheltering coconut shell to carry for future shelter [54] as well as choosing to reject unpreferred prey due to the certainty of later preferred prey delivery [36] come close to the 'mental time travel' [1] suggested as evidence for consciousness.
Much of the evidence for a basis for sentience in these complex animals comes from evaluating valence, what 'matters' to animals. The multiple influences on food choice contradict the simplistic energy tradeoff model [9] and leave us wondering whether cephalopods may actually like some foods more than others. Shelter is vital to the soft-bodied octopuses, but they do not simply choose ready-made spaces, instead manipulating objects, again suggesting that they have some kind of mental template [3] of what they need. Parallels with human valences can be useful in testing for animal sentience. As Sneddon et al. [55] suggest, behavioral parallels in cognition lead us to conclude that animals such as cephalopods may have subjective pain. Similarly, parallels between humans and cephalopods in the exploration/exploitation processes may suggest a need to reduce the uncertainty about the world around them, which is stressful for humans [5]. This pattern of information acquisition seems to lead to play and again may be evoked by living in a complex and varying environment [79]. While we can never 'prove' that any species has sentience, the information presented in this series of papers clearly points out that the foundation for this capacity is present in cephalopod molluscs such as octopuses. Acknowledgments: I would like to acknowledge my husband Lynn Mather for unwavering support over all these decades, and Roland Anderson for a rich research partnership across many years. In addition, I should acknowledge that the administration of my university, who provoked a strike and refused to bargain for weeks, gave me the time to write the first draft of this paper.
Conflicts of Interest:
The author declares no conflict of interest.
"Biology",
"Philosophy"
] |
Concrete Object Anomaly Detection Using a Nondestructive Automatic Oscillating Impact-Echo Device
The goal of this study was to develop an impact-echo device that can conduct automatic oscillation tests, process signals rapidly, and apply it to concrete object anomaly analysis. The system presented in this study comprises three parts, namely the impact device, the oscillator circuit, and signal processing software. The design concept of the impact-echo device was inspired by a pendulum clock, and its implementation used a nondestructive wooden hammer instead of a conventional manual steel hammer. In this study, we used a pulse generator in the adjustable oscillator circuit to produce delayed changes. The delayed changes would activate the wooden hammer that struck the surface of the object. To process the signal, our lab used a built-in sound card in the computer to transfer the reflection soundwave from striking the wall to MATLAB software to analyze the energy of the frequency spectrum. This was conducted to evaluate whether the object contained anomalies and, if so, to determine the location of the anomalies to serve as a reference for real-life implementation.
Introduction
Concrete is commonly used in many types of construction and in a variety of configurations. Although materials scientists have been trying to improve the capabilities of concrete in recent years [1,2], the shrinkage of concrete, the differential settlement in a building's foundations, and the impacts of temperature stress and loading may cause cracks. In addition, the corrosion of reinforcing bars in concrete may lead to the development of longitudinal cracks along the bars. Cracks are the most common form of deterioration in concrete objects [3,4]. Moreover, Taiwan is located on the boundary of the Philippine Sea Plate and Eurasian Plate, where geotectonic movements are frequent, leading to severe or partial damage to the internal structure of buildings. The severity of such damage cannot be identified by sight, and, without repair, damaged structures may lead to large numbers of injuries and deaths. For example, several structures collapsed in recent years, including Houfeng Bridge in 2008, the Weiguan Building in 2016, and the Marshal Hotel in Hualian city (located on the east coast of Taiwan) in 2018, resulting in several casualties [5,6] and demonstrating the importance of structural safety inspections. Among the evaluation methods for structure damage, nondestructive structural safety evaluations have become increasingly popular. The advantage of a nondestructive testing method is that it does not cause damage to a building, and its implementation (online testing) does not interfere with the regular usage of the building. It is mobile, convenient, and can evaluate the internal deterioration condition of a structure in a short period of time. To date, the most common nondestructive tests in civil engineering were developed using the theories of electromagnetic waves and stress waves. For example, ground-penetrating radar uses electromagnetic testing technology [7][8][9][10], whereas the ultrasonic method and the impact-echo method involve stress wave testing technology [11][12][13][14][15]. The electromagnetic testing technology involves expensive equipment, namely transceiver signal devices, transceiver antennas, signal recorders, and analysis systems; consequently, it is typically used in geotechnical engineering or oil-mining engineering. When the testing objects are concrete structures, such as those in the present study, testing technologies based on stress wave theories are more suitable. In addition, the ultrasonic method involves preburied boreholes that cause damage to the structure of a building. Therefore, the impact-echo method has gradually become the leading technology in civil engineering for conducting concrete structure testing. In recent years, the implementation of the impact-echo method has become increasingly diverse. It is used to test internal anomalies in rod structures (e.g., beams and columns) or in the concrete lining structure in tunnels, to examine the quality of a concrete structure, and to test the minimum width and the depth of cracks in a concrete structure. However, the implementation of the impact-echo method typically involves a device generating an impact on the testing object, and subsequently using a digital receiver to acquire the echo wave signals and signal acquisition devices to transmit the signals to the computer for analysis [16][17][18][19]. For disaster prevention, however, fixed-point automatic detection and rapid signal processing are critical.
To reduce these latencies, our lab developed a low-cost, labor-reducing, real-time nondestructive testing system that applies a mechanical impact method to determine whether a concrete object contains defects. The sound card in our system captures the echo soundwave through the audio software interface, and the data are then processed with MATLAB software [20]. The echo waves are acquired and reconstructed on a personal computer (PC). Because the system transfers signals directly to the PC, it can process a substantial amount of data, significantly reducing the time required for the acquisition and reconstruction of echo signals.
System Architecture
This study proposes an impact-echo detection system that automatically oscillates and rapidly processes data. In terms of the system's architecture, the hardware comprises an automatic oscillation impact device designed by our lab for this study and adjustable automatic oscillator circuits, while the software handles echo signal acquisition and structural anomaly analysis. The overall system framework is presented in Figure 1. The circuit comprises a buck circuit, a timer circuit, and an amplifier circuit. The system produces continuous pulse outputs through an energized coil and a pig iron core that generate electromagnetic effects and magnetic forces. Adjusting the frequency of the electric field produces a fixed force that activates the oscillating impact-echo device to strike a concrete object at regular intervals.
1. First-level buck circuit: For future applications, our lab designed the system for simplicity, portability, and convenience. Thus, resistor-capacitor buck circuits were adopted to reduce volume and cost and to replace cumbersome transformers. For rectification, a bridge rectifier composed of four 1N4007 diodes was adopted. Subsequently, nine 1 kΩ/5 W cement resistors were connected to limit the charging current. For filtering, four 470 µF capacitors rated at 220 V were used in the AC circuit; the reactance of the capacitors reduced the voltage to the level required by the electrical load. For voltage regulation, three 1 kΩ/5 W cement resistors served as voltage dividers, and Zener diodes maintained a specific voltage.
2. Second-level IC555 timer circuit: The 5 V output from the previous buck stage activates the IC555, which can produce modulation at periods ranging from microseconds to hours. R1 and R2 (precision variable resistors) were used to adjust the frequency and generate a complete, continuous square-wave signal (the standard astable timing relations are sketched just after this list).
3. Third-level amplifier circuit: Because the output signal from the previous level had a low drive capability (5 V), and because the current amplification ratio of a single transistor was limited and could not drive a high-power load, two 2SC3457 transistors were used to construct an amplifier, improving the high-frequency characteristics and preventing abnormal power loading from damaging the components. The resulting power was sufficient to drive the enameled-wire coil for charging and discharging. In addition, two light-emitting diodes indicate whether the amplifier circuit is connected. The overall amplifier circuit is illustrated in Figure 2.
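The paper does not report the timing component values, so the sketch below simply encodes the standard NE555 astable-mode equations; the R1, R2, and C values in the example are hypothetical placeholders (R1 and R2 correspond to the precision variable resistors mentioned in item 2).

```python
import math

def ne555_astable(r1_ohms, r2_ohms, c_farads):
    # Standard astable-mode timing: the capacitor charges through R1 + R2
    # and discharges through R2, giving ln(2)*RC time constants.
    t_high = math.log(2) * (r1_ohms + r2_ohms) * c_farads  # output-high time (s)
    t_low = math.log(2) * r2_ohms * c_farads               # output-low time (s)
    period = t_high + t_low
    return 1.0 / period, t_high / period                   # (frequency Hz, duty cycle)

# Hypothetical component values; the paper's actual values are not reported.
freq, duty = ne555_astable(r1_ohms=10e3, r2_ohms=100e3, c_farads=10e-6)
print(f"f = {freq:.2f} Hz, duty cycle = {duty:.1%}")       # about 0.69 Hz, 52%
```

Turning R2 therefore shifts both the strike rate and (slightly) the duty cycle, which is how the circuit sets how often the hammer is released.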
3. Third level amplifier circuit: Because the output signal from the previous level had a low current value (5 V) and because the current amplification ratio of a single transistor was limited and could not drive the load of a large power system, two 2SC3457 transistors were used to construct an amplifier to improve high-frequency characteristics and prevent abnormal power loading from damaging the elements. Therefore, high power was acquired to drive enameled wire for charging or discharging. In addition, the on and off positions of two light-emitting diodes was used by our lab to indicate whether the amplifying circuit was connect. The overall amplifier circuit is illustrated in Figure 2. Before the hardware (circuits) was connected to the software (the programs), it was required to undergo simulation and verification tests using Multisim (It is an electronic schematic capture and simulation program. Production company location in Austin, Texas, USA) After the hardware system settings were finalized, the next step was to transmit echo signals to the computer program for anomaly analysis. The overall experimental setup is presented in Figure 3. Before the hardware (circuits) was connected to the software (the programs), it was required to undergo simulation and verification tests using Multisim (It is an electronic schematic capture Appl. Sci. 2019, 9, 904 4 of 14 and simulation program. Production company location in Austin, Texas, USA) After the hardware system settings were finalized, the next step was to transmit echo signals to the computer program for anomaly analysis. The overall experimental setup is presented in Figure 3.
Oscillating Impact-Echo Device
The oscillating impact-echo device contains an oscillating wooden hammer that strikes the wall. At the bottom of the wooden hammer, an iron core made of pig iron was installed. In addition, an electromagnet composed of two coils of enameled wire prepared for this study was erected.
1. Oscillating wooden hammer: First, the center point of the wooden hammer was identified; a computer numerical control (CNC) lathe was then used to machine a steel fitting that fixes the wooden hammer to the rotary mechanism, joining the steel and wooden parts. A bearing was used as the fixed part of the impact device to keep its main body centered.
2. Copper wire coil: Enameled wire was prepared in this study to convert electromagnetic energy. Copper wire with an insulation layer was wound on a cylindrical acrylic form to create the coils used for electromagnetic induction. Although more turns produce a stronger magnetic force, saturation is eventually reached (see the note after this list). To monitor this, an electric field tester and a current intensity meter were installed to measure the field strength and the current in the coils.
3. The bottom of the wooden hammer: Pig iron was used as the iron core, while the magnetic force and coils were used to produce changes in magnetic flux that induce an electromotive force. When the coils were not energized, no magnetic force occurred, and the electromotive force drove the electrons to generate an induced current, thereby achieving electromagnetic induction. The device produces a stable impact force of 0.03 kg on every strike, as shown in Figure 4a.
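For reference, the standard long-solenoid approximation from textbook electromagnetism (not a formula given in the paper) shows why both the number of turns and the current raise the field only up to core saturation:
$$B \;=\; \mu_r\,\mu_0\,\frac{N}{L}\,I,$$
where $N$ is the number of turns, $L$ the coil length, $I$ the current, and $\mu_r$ the relative permeability of the pig-iron core. As the core saturates, $\mu_r$ collapses toward 1, so adding further turns yields diminishing force, which is why the authors monitored the field strength and current directly.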
After integrating the abovementioned oscillating hammer, winding coil, and hammer base, the complete swinging tapper measured 35 cm × 15 cm × 32 cm and weighed 1.7 kg. The physical system is shown in Figure 4b.
Echo Signal Acquisition Processing
The sound card on the computer was used to transfer the echo soundwave, and GoldWave (a commercial digital audio editor; the producing company is located in St. John's, Newfoundland, Canada) was used to produce an audio file [21]. For signal acquisition, a considerable number of data files were transferred directly to the PC to facilitate the construction of a measurement database in the future. The schematic of the hardware-software connection is illustrated in Figure 5. After the microphone acquired the echo soundwave (as analog signals) from the wooden hammer striking the test object, the GoldWave audio software acquired the soundwave signals through the sound card (Figure 6).
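As a minimal illustration of this acquisition-and-analysis path, the sketch below re-creates it in Python with numpy/scipy rather than the paper's GoldWave-plus-MATLAB toolchain; the WAV file name is a hypothetical placeholder for one recorded strike.

```python
import numpy as np
from scipy.io import wavfile

# "impact_echo_strike.wav" is a hypothetical file standing in for a
# GoldWave recording of one hammer strike on the concrete surface.
fs, x = wavfile.read("impact_echo_strike.wav")
if x.ndim > 1:
    x = x[:, 0]                 # keep one channel if the recording is stereo
x = x.astype(np.float64)

# Amplitude spectrum of the echo; the dominant peak (DC bin excluded)
# characterizes the reflection and shifts when an internal defect is present.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
dominant = freqs[1:][np.argmax(spectrum[1:])]
print(f"dominant echo frequency: {dominant:.1f} Hz")
```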
Development of Object Anomaly Analysis Software
The anomaly analysis software was primarily developed using MATLAB. After the aforementioned echo signal was acquired and processed, the impact-echo signals stored in an audio file were passed to time-domain and frequency-domain routines for analysis in MATLAB. The goal of the time-domain analysis was to obtain the consecutive reflected waveforms following the first reflected wave in order to evaluate the velocity of the soundwave. In the frequency domain, substantial peaks were observed in the frequency graph, and the established time-frequency curve was used to evaluate structural anomalies in the object, further confirming the accuracy of the analysis.
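For context, the classical impact-echo relation from the literature (not stated explicitly in this paper) connects the dominant spectral peak to the depth of the reflecting interface,
$$f \;=\; \frac{\beta\,C_p}{2\,T},$$
where $C_p$ is the P-wave velocity in the concrete, $T$ the depth of the reflector (an internal defect or the back wall), and $\beta \approx 0.96$ a shape correction factor for plate-like members; a shallow defect therefore appears as a peak at a higher frequency than the full-thickness echo.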
1. Time domain interface program design: Two string signals were tested. Their frequencies were 25 Hz and 200 Hz, and their amplitudes were 1.5 and 2.0, respectively. The discrete signal was sampled at 1 kHz, and 1000 samples were obtained. A portion of the code for generating this signal, along with the acquired signals, is presented in Figure 7.
2. Frequency domain interface program design: After applying a fast Fourier transform (FFT) to the time-domain signals, we obtained the frequency-domain signals shown in Figure 8. The frequency graphs confirmed amplitudes of 1.5 at 25 Hz and 2.0 at 200 Hz.
When using the FFT to obtain the spectrum of a discrete signal, the maximum frequency that can be analyzed is half of the sampling frequency; beyond this frequency, spectrum aliasing occurs, making the spectrum difficult to read.
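As a concrete illustration of the two interface programs above, the following minimal MATLAB sketch generates the two-tone test signal (25 Hz with amplitude 1.5 and 200 Hz with amplitude 2.0, sampled at 1 kHz for 1000 samples) and computes its single-sided FFT amplitude spectrum up to the Nyquist frequency. This is a sketch of the described procedure, not the authors' original code; all variable names are ours.

```matlab
% Two-tone test signal and its FFT amplitude spectrum (illustrative sketch).
fs = 1000;                                        % sampling frequency (Hz)
N  = 1000;                                        % number of samples
t  = (0:N-1)/fs;                                  % time vector (s)
x  = 1.5*sin(2*pi*25*t) + 2.0*sin(2*pi*200*t);    % 25 Hz and 200 Hz components

X    = fft(x);
f    = (0:N-1)*fs/N;                              % frequency axis (Hz)
amp  = 2*abs(X)/N;                                % single-sided amplitude scaling
half = 1:floor(N/2);                              % keep frequencies below fs/2 (Nyquist)

subplot(2,1,1); plot(t, x);
xlabel('Time (s)'); ylabel('Amplitude');
subplot(2,1,2); plot(f(half), amp(half));
xlabel('Frequency (Hz)'); ylabel('Amplitude');    % peaks of 1.5 at 25 Hz, 2.0 at 200 Hz
```

Frequencies above fs/2 = 500 Hz would alias, which is why the spectrum is plotted only up to the Nyquist frequency.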
Experimental Setup Testing and Analysis Results
When a natural disaster happens, such as an earthquake, it often damages concrete structures, specifically creating cracks. It is critical to locate the cracks in a concrete structure in a timely manner to repair them and prevent the structure from incurring further damage. To address this issue, our lab developed an oscillation impact device and a signal acquisition and reconstruction system that can detect the location of a crack in a concrete object. In this section, the systematic integration of the module design is described in detail and the results of testing the experimental setup of the automatic oscillator circuit, oscillating impact device, echo signal acquisition device, and anomaly analysis programs are presented.
Experimental Setup
The environmental setup included an oscillating impact device, an automatic oscillator circuit, a notebook computer, a microphone, and concrete objects. The experimental setup of the overall system is shown in Figure 9. The environmental test of the experimental setup was divided into three parts: (1) using the automatic oscillator circuit to produce a continuous square wave and using the iron core of the oscillating impact device and the circuit to produce electromagnetic induction in order to perform automatic impacting; (2) connecting the microphone to the sound card on the computer to convert the original analog signals into digital signals; and (3) importing the audio files produced by the audio analysis software GoldWave into the self-developed MATLAB program to evaluate anomalies.
Explanation of Experimental Procedure
The experimental procedure in this study followed the standard operating procedure of the impact-echo method and comprised the following steps: (1) Wave velocity calculation: The wooden hammer was used as the impact-echo device. The moment the impact-echo device struck the concrete object and generated the first signal was considered the point of origin of the impact, and the first arrival of the echo wave reflected back to the surface was considered the receiving point. The time lag between impact and reflection and the distance between the impact device and the echo signal acquisition device were used to calculate the velocity of the P-wave. (2) Concrete object thickness test: When the impact-echo device struck the surface of the concrete object, it produced echo waves that traveled downward until they reached the bottom of the object, where they were reflected back to the surface. Each reflected wave then created another reflected wave that continued to travel between the bottom and the surface of the object; the resulting multiple reflected waves were used to evaluate the thickness of the object. (3) Concrete object internal defect detection: The same method as in step (2) was used to detect defects in the concrete object. The echo waves traveled downward until they reached a crack in the object, at which point they were reflected back to the surface. Each reflected wave created another reflected wave that continued to travel between the crack location and the surface, and the resulting multiple reflected waves were used to evaluate the internal defects of the object.
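The three steps above use the standard impact-echo relations, which we summarize here for reference; the notation is ours, chosen to match the quantities named in the text:

$$V_p = \frac{\Delta d}{t_2 - t_1}, \qquad T \approx \frac{V_p}{2 f_T}, \qquad d_{\mathrm{crack}} \approx \frac{V_p}{2 f_{\mathrm{crack}}}$$

Here Δd is the spacing between the impact point and the receiver, t2 − t1 is the time lag between impact and first reflection, and f_T and f_crack are the dominant reflection frequencies associated with the full thickness and the crack depth, respectively.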
Concrete Object Crack Detection
Since the purpose of this study was to develop a system that detects defects in a damaged concrete object in a timely manner and can be used on real damaged structures such as buildings and bridges, our lab commissioned a concrete manufacturer to produce regular and defective concrete objects. A total of one regular and two cracked objects were used for characteristic analysis (Table 1). The characteristics tested were thickness and internal defects. Thickness testing was performed on both sides of each object, whereas internal defect testing was performed on only one side.
(1) Thickness test: The impact-echo device was used to strike one side of the studied object. The microphone collected the sounds, and the sound card processed the sound to form an audio file for MATLAB to conduct simulations. The obtained waveform graph is presented in Figure 10. Figure 10a presents the waveform of the incident impact point. Figure 10b depicts the waveform of the first reflected wave, where t1 is the impact time (243.3 µs) and t2 is the reflected wave receiving time (257.7 µs). The time lapse is t2 − t1 = 14.4 µs.
Because the distance between the impact device and the receiver was 5 cm, the velocity of the P-wave was 3742 m/s. Figure 10c shows the overall reflected wave, which was simulated using MATLAB, and Figure 10d presents the wave frequency, which was obtained using the FFT and indicates that the main frequency was 6.4 kHz. Dividing the wave velocity by twice the frequency revealed that the thickness of the object was approximately 28.1 cm, which was close to the actual thickness of 30 cm. To verify the feasibility of the system, we also struck the other side of the studied object. Figure 11a presents the waveform of the incident impact point. Figure 11b shows the waveform of the first reflected wave, where t1 was the impact time (243.3 µs) and t2 was the reflected wave receiving time (257.7 µs). The time lapse was obtained by subtracting t1 from t2 (14.4 µs). Because the distance between the impact device and the receiver was 5 cm, the velocity of the P-wave was 3742 m/s. Figure 11c shows the MATLAB simulation of the reflected wave. Figure 11d presents the wave frequency graph obtained from the FFT; the main frequency was 6.1 kHz. The same calculation revealed that the object thickness was approximately 29.5 cm, which was similar to the actual thickness of 30 cm. A total of 10 measurements were performed on this object, five on each side. The measured values on one side were 28.1 cm, 27.7 cm, 27.4 cm, 27.9 cm, and 28.9 cm; on the other side they were 28.7 cm, 29.1 cm, 29.6 cm, 29.5 cm, and 29.8 cm. The average value on each side was close to the actual thickness of 30 cm. (2) Internal defect testing: This test was conducted on a regular concrete object without internal cracks (Object 1). The frequency domain waveforms in Figures 10d and 11d reveal the absence of continuous high-frequency reflected waves after the highest main frequency, so this object was considered to contain no internal cracks.
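A minimal MATLAB sketch of the Object 1 thickness estimate, using the reported values, follows. Note that the text describes the plain ratio of velocity to twice the frequency, which yields about 29.2 cm from the reported values; reproducing the reported 28.1 cm requires the shape factor of roughly 0.96 that is common in impact-echo practice, so including it is our assumption.

```matlab
% Thickness estimate for Object 1 from the reported measurements (sketch).
vp   = 3742;      % reported P-wave velocity (m/s)
f    = 6.4e3;     % reported dominant frequency from the FFT (Hz)
beta = 0.96;      % assumed impact-echo shape factor (not stated in the text)

thicknessPlain = vp/(2*f);          % plain ratio: ~0.292 m
thicknessBeta  = beta*vp/(2*f);     % with shape factor: ~0.281 m, matching 28.1 cm
fprintf('Plain: %.1f cm, with shape factor: %.1f cm\n', ...
        100*thicknessPlain, 100*thicknessBeta);
```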
Testing a Concrete Object with Cracks (Object 2)
Object 2 was 30 cm in thickness with a crack 15 cm deep in the concrete. The impact-echo device struck one side of the studied object. The microphone was utilized to collect the sounds, and the sound card processed the recording and produced the audio file to be imported into MATLAB for simulation, resulting in the waveform graph shown in Figure 12. Figure 12a presents the incident impact point wave graph. Figure 12b shows the waveform graph of the first reflected wave, where t1 is the impact time (243.3 µs) and t2 is the reflected wave receiving time (256.7 µs). The time lapse is t2 − t1 = 13.4 µs. Because the distance between the impact device and the receiver was 5 cm, the velocity of the P-wave was 3748 m/s. Figure 12c presents the overall reflective wave from the MATLAB simulation.
Concrete Object with Cracks (Object 3) Testing
Object 3 was 50 cm in thickness with a crack 12 cm deep in the concrete. The impact-echo device struck one side of the studied object. The microphone was utilized to collect the sounds, and the sound card processed the recording and produced the audio file to be imported into MATLAB for simulation, resulting in the waveform figure shown in Figure 13. Figure 13a depicts the incident impact point wave graph. Figure 13b presents a waveform graph of the first reflected wave, where t1 is the impact time (243.3 µs) and t2 is the reflected wave receiving time (256.7 µs). The time lapse is t2 − t1 = 13.4 µs. Because the distance between the impact device and the receiver was 5 cm, the velocity of the P-wave was 3748 m/s. Figure 13c presents the overall reflective wave from the MATLAB simulation. Figure 13d reveals two conspicuous frequencies obtained using the FFT, signal A at 3.8 kHz and signal B at 13.7 kHz. From the wave velocity of 3748 m/s, a thickness of 49.3 cm and a crack depth of 13.6 cm were calculated. In addition, at 7.3 kHz the amplitude was the same as that of signal B at 13.7 kHz, corresponding to a depth of approximately 23.9 cm, possibly resulting from an internal defect at the bottom of the concrete. A total of five crack depth measurements were performed on this object, and the measured values were 12.7 cm, 14.1 cm, 13.9 cm, 13.6 cm, and 13.3 cm. The average crack depth matched the actual crack depth of the studied object as originally manufactured. The testing results from the regular concrete object (Object 1) and the cracked concrete objects (Objects 2 and 3) are organized by thickness, P-wave velocity, main frequency, measured crack depth, and error in Table 2.
Conclusions
The main purpose of this study was to develop an automatic oscillating impact-echo device that can rapidly process signals and reconstruct the measured signal waveform on a PC. The results of our study indicate that our self-developed system significantly reduces the time spent locating the cracks of a damaged building. The main contributions of this study are (1) a lightweight, low-cost, and ready-to-measure automatic oscillating impact-echo device that replaces traditional, manually operated steel hammers and reduces damage to buildings; (2) the use of the sound card in a personal computer to retrieve the echo signal, replacing a traditional, expensive digital echo collector; (3) a complete automatic oscillating impact-echo device that connects to a personal computer, analyzes the tapping wave characteristics on the computer, and transmits them to a remote network through the Internet; and (4) the capability of the device to perform fixed-point testing, which improves staff safety and saves manpower. | 8,171.8 | 2019-03-04T00:00:00.000 | [
"Computer Science"
] |
Pairs of quadratic forms over finite fields
Let Fq be a finite field with q elements and let X be a set of matrices over Fq. The main results of this paper are explicit expressions for the number of pairs (A,B) of matrices in X such that A has rank r, B has rank s, and A + B has rank k in the cases that (i) X is the set of alternating matrices over Fq and (ii) X is the set of symmetric matrices over Fq for odd q. Our motivation to study these sets comes from their relationships to quadratic forms. As one application, we obtain the number of quadratic Boolean functions that are simultaneously bent and negabent, which solves a problem due to Parker and Pott.
Introduction
Let F_q be a finite field with q elements. Let X be a set of matrices of the same size over F_q and let X_k contain all matrices in X of rank k. Define
$$N_X(r,s,k) = \#\{(A,B) \in X_r \times X_s : A + B \in X_k\}, \qquad (1)$$
which is the number of pairs (A, B) of matrices in X such that A has rank r, B has rank s, and A + B has rank k. We are interested in the numbers N_X(r, s, k) when X is the set of m × m alternating matrices over F_q and when X is the set of m × m symmetric matrices over F_q (recall that a matrix is alternating if it is skew-symmetric and its diagonal contains only zeros). Our motivation to study these sets comes from their relationships to quadratic forms over finite fields. Some consequences of our results for quadratic forms are discussed later in this section.
Our main results are explicit expressions for the numbers N_X(r, s, k), which involve the q^2-binomial coefficient given by
$$\binom{x}{k}_{q^2} = \prod_{i=1}^{k} \frac{q^{2x-2i+2} - 1}{q^{2i} - 1}$$
for real x and nonnegative integral k (see [1] and [8], for example, for elementary properties of these numbers). For now we state our results for the most important case when r = s = k = m. The general results are postponed to later sections. We begin with the case that X is the set of alternating matrices over F_q. Recall that every alternating matrix has even rank (see [8, Lemma 10], for example). We have the following result, which holds for finite fields of arbitrary characteristic.
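Since the q^2-binomial coefficient recurs throughout the results below, the following MATLAB sketch evaluates it directly from the product formula above; the function name is ours.

```matlab
% q^2-binomial coefficient: prod_{i=1}^{k} (q^(2x-2i+2) - 1) / (q^(2i) - 1).
function b = qbinom2(x, k, q)
    b = 1;
    for i = 1:k
        b = b * (q^(2*x - 2*i + 2) - 1) / (q^(2*i) - 1);
    end
end
```

For example, qbinom2(2, 1, 2) returns 5, in agreement with the Gaussian binomial coefficient of 2 over 1 to the base q^2 = 4.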
Theorem 1. Let m be even and let X be the set of m × m alternating matrices over F_q. Writing n = m/2, we have [equation omitted], where [equation omitted] is the number of nonsingular matrices in X.
For the symmetric matrices we have the following result for finite fields of odd characteristic.
Theorem 2. Let q be an odd prime power and let X be the set of m × m symmetric matrices over F_q. Write n = ⌊(m + 1)/2⌋. Then, for odd m, we have [equation omitted], and for even m, we have [equation omitted], where [equation omitted] for even m is the number of nonsingular matrices in X.
It can be shown that Theorem 2 also holds for even q and odd m. In particular, it can be shown that, if q is even, X is the set of m × m symmetric matrices over F_q, and Y is the set of (m+1) × (m+1) alternating matrices over F_q, then [relation omitted], and so N_X(m, m, m) can be obtained from Theorem 1. This follows from a relationship between two association schemes (see [16, Section 5], for example) and our discussion on association schemes in Section 2. We could not prove, but conjecture based on its verification for m ∈ {2, 4, 6}, that Theorem 2 also holds for even q and even m.
A quadratic form on F_q^m that is nonsingular is also called bent or a quadratic bent function. (There is a more general definition [2] of the bent property for arbitrary functions from F_q^m to F_q, which however is not required here.) Recall that there is a one-to-one correspondence between quadratic forms on F_q^m and m × m alternating matrices over F_q if q = 2, and m × m symmetric matrices over F_q if q is odd. Thus, for q = 2 or odd q, a quadratic form on F_q^m is bent if the corresponding matrix is nonsingular. Vector spaces of bent functions are important in cryptography and coding theory (see [2] and [3], for example), and m-dimensional spaces of bent functions on F_p^m for odd prime p (also called planar functions) are equivalent to commutative semifields of odd characteristic [5]. Our results give the number of 2-dimensional spaces of quadratic bent functions on F_2^m. A related and more difficult problem is the determination of the number of inequivalent 2-dimensional spaces of quadratic bent functions on F_q^m. This number is known for odd q and m ∈ {2, 3} and equals 1 in these cases [13], [14].
A quadratic form on F_2^m is negabent if its associated alternating matrix M is such that M + I is nonsingular, where I is the identity matrix [15] (again there is a more general definition of negabent functions from F_2^m to F_2 [15], which we do not require here). A quadratic form on F_2^m is bent-negabent if it is simultaneously bent and negabent. Hence bent-negabent quadratic forms on F_2^m can only exist if m is even. It has been shown in [15, Theorem 8] that a quadratic form on F_2^m is bent-negabent if and only if its associated alternating matrix M is such that M and M + I + J are both nonsingular, where I and J are the identity and the all-ones matrix, respectively.
Let X be the set of m × m alternating matrices over F_2 and let X_k contain all matrices in X of rank k. Since X_0, X_1, . . ., X_m are the fibres of an association scheme (see Sections 2 and 3), we find by a general property of association schemes that, for fixed A ∈ X_r, the number of B ∈ X_s such that A + B ∈ X_k is independent of the particular choice of A. Therefore, Theorem 1 gives the number of bent-negabent quadratic forms, solving a problem due to Parker and Pott [15, Problem 2].
A general method
Suppose that (X, +) is an abelian group of matrices over F_q (which is certainly true when X is the set of m × m alternating or symmetric matrices over F_q). In this case the numbers N_X(r, s, k) can be computed as follows. Recall that the characters of (X, +) are the homomorphisms from (X, +) to the multiplicative group of the complex numbers and form themselves a group, which is isomorphic to (X, +).
Lemma 4. Let (X, +) be an abelian group of matrices over F_q and let X_k contain all matrices in X of rank k. Then the numbers defined in (1) satisfy
$$N_X(r,s,k) = \frac{1}{|X|} \sum_{\phi} \Big(\sum_{A \in X_r} \phi(A)\Big) \Big(\sum_{B \in X_s} \phi(B)\Big) \overline{\Big(\sum_{C \in X_k} \phi(C)\Big)},$$
where the first sum ranges over all characters φ of (X, +).
Proof. Indeed, by an elementary property of characters, the sum $\sum_{\phi} \phi(A)\,\phi(B)\,\overline{\phi(C)} = \sum_{\phi} \phi(A + B - C)$ equals |X| if A + B = C and is zero otherwise. The lemma follows easily from this.
The computation of the numbers N_X(r, s, k) is particularly simple in the case that X has the structure of a (symmetric) translation scheme, which is an association scheme with additional properties. Let X_0, X_1, . . ., X_m be a partition of X. Then X is a translation scheme with fibres X_0, X_1, . . ., X_m if the following properties are satisfied: (P1) X_0 contains only the identity of (X, +).
(P2) For all r ∈ {1, . . ., m}, we have x ∈ X_r if and only if −x ∈ X_r. (P3) For all r, s, k and all x, y ∈ X with x − y ∈ X_k, the number of z ∈ X with x − z ∈ X_r and z − y ∈ X_s equals a number p(r, s, k) (called the intersection numbers) depending only on r, s, and k, but not on the particular choice of x and y.
We refer to [6] and [9] for background on association schemes and in particular to [9, Section V] for background on translation schemes. Let X_k contain all matrices in X of rank k and suppose that X_0, X_1, . . ., X_m are the fibres of a translation scheme. Then, by taking y equal to the zero matrix in (P3), it is readily verified that the numbers N_X(r, s, k) can be computed from the intersection numbers p(r, s, k) via
$$N_X(r,s,k) = |X_k|\, p(r,s,k). \qquad (2)$$
Let X̂ be the group of characters of (X, +). There is a unique partition X̂_0, X̂_1, . . ., X̂_m of X̂ with the property that
$$\sum_{x \in X_k} \phi(x) \qquad (3)$$
is constant for each φ ∈ X̂_i. The numbers (3), denoted by P_k(i), are the eigenvalues of the translation scheme. It then follows from Lemma 4 that
$$N_X(r,s,k) = \frac{1}{|X|} \sum_{i=0}^{m} |\hat X_i|\, P_r(i)\, P_s(i)\, P_k(i),$$
which, via (2), gives a well known formula for the intersection numbers (see [10, p. 227], for example). Hence, to compute N_X(r, s, k), it is sufficient to know the multiplicities |X̂_i| and the eigenvalues P_k(i) of the translation scheme. This principle can be applied for example when X is the set of m × n matrices over F_q. Without loss of generality, assume that m ≤ n, in which case X_0, X_1, . . ., X_m are the fibres of an association scheme whose multiplicities and eigenvalues are given in [7]. The principle can also be applied in the case that X is the set of m × m alternating matrices over F_q, which is discussed in Section 3. However, in general, the principle cannot be applied in the case that X is the set of m × m symmetric matrices over F_q, since then (P3) in the definition of a translation scheme does not hold. We can however still apply Lemma 4 in this case, which we shall do in Section 4.
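Combining the last two displays gives the well-known expression for the intersection numbers that the text alludes to; we record it here for convenience, under the convention (valid for symmetric schemes) that the eigenvalues are real:

$$p(r,s,k) = \frac{1}{|X|\,|X_k|} \sum_{i=0}^{m} |\hat X_i|\, P_r(i)\, P_s(i)\, P_k(i).$$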
Alternating matrices
Throughout this section, let X be the set of m × m alternating matrices over F_q and write n = ⌊m/2⌋ and c = q^{m(m−1)/(2n)}, so that |X| = c^n. Let X_k contain all matrices in X of rank k. It is well known that X_0, X_1, . . ., X_m are the fibres of a translation scheme [8]. Let v(k) be the cardinality of X_k. (It turns out that these numbers are the multiplicities of the translation scheme.) It is known (see [12, Theorem 3], for example) that v(k) = 0 for odd k and that [equation omitted] for each i ∈ {0, . . ., n}. Let A, S ∈ X and write a_ij and s_ij for their entries, respectively (indexed from 1 to m). Let χ be a nontrivial character of (F_q, +) and define φ_S : X → C by [equation omitted]. Since X is an F_q-vector space of dimension m(m − 1)/2, the mapping φ_S ranges through all characters of (X, +) as S ranges over X. For S ∈ X_2i, the numbers [equation omitted] are well defined. They are the eigenvalues of the translation scheme and are given by [8] [equation omitted]. The following result is now a straightforward consequence of Lemma 4.
Theorem 5. Let X be the set of m × m alternating matrices over F_q. Then the numbers defined in (1) satisfy
$$N_X(r,s,k) = \frac{1}{|X|} \sum_{i=0}^{n} v(2i)\, P_r(i)\, P_s(i)\, P_k(i),$$
where v(2i) and P_k(i) are given in (4) and (5), respectively.
To obtain Theorem 1 from Theorem 5, let m be even, so that m = 2n, and observe that in this case [equation omitted]. This formula can be obtained either from (5) by a tedious calculation using the q-binomial theorem
$$\sum_{j=0}^{h} q^{j(j-1)} \binom{h}{j}_{q^2} x^{h-j} y^j = \prod_{k=0}^{h-1} (x + q^{2k} y)$$
for real x and y, or by observing that P_n(0) = v(2n) and that P_n(i) satisfies the recurrence [equation omitted], which can be obtained from [8, Lemma 12] and (5). From (4) we find that [equation omitted]. Theorem 1 is now easily obtained from Theorem 5 using (6) and (8).
Symmetric matrices
Throughout this section, let q be an odd prime power and let η be the quadratic character of F_q. Let X be the set of m × m symmetric matrices over F_q and write n = ⌊(m + 1)/2⌋ and c = q^{m(m+1)/(2n)}, so that |X| = c^n. As usual, let X_k be the subset of X containing all matrices of rank k. Let A, S ∈ X and write a_ij and s_ij for their entries, respectively (indexed from 1 to m). Let χ be a nontrivial character of (F_q, +) and define φ_S : X → C by [equation omitted]. Since X is an F_q-vector space of dimension m(m + 1)/2 and q is odd, the mapping φ_S ranges through all characters of (X, +) as S ranges over X.
Two matrices A, B ∈ X are equivalent if there exists a nonsingular matrix L such that L A L^T = B. We recall some well known facts (see [11, Section 6.2], for example). Every matrix A ∈ X of rank r is equivalent to a diagonal matrix with main diagonal [d_1, . . ., d_r, 0, . . ., 0], where d_1, . . ., d_r are nonzero. The value η(d_1 ⋯ d_r) is preserved under equivalence and is called the type of A (an empty product equals 1 by convention, and so the all-zero matrix has type 1). Two matrices in X are equivalent if and only if they have the same rank and the same type.
Our further analysis crucially relies on the following lemma.
Lemma 6. The number $\sum_{A \in X_k} \phi_S(A)$ depends only on the type and rank of S.
Proof. Let L be an arbitrary m × m matrix over F_q. For A ∈ X, we readily verify the identity [equation omitted]. If L is nonsingular, then the mapping A ↦ L^T A L induces a permutation on X_k, and hence [equation omitted], as required.
In view of Lemma 6, we may write
$$P_k(i,\delta) = \sum_{A \in X_k} \phi_S(A),$$
where S is of rank i and of type δ.
The equivalence relation defined above partitions X into 2m + 1 equivalence classes. Let v(i, δ) be the cardinality of the equivalence class containing matrices of rank i and type δ. It will be convenient to write v(0, −1) = 0 and P_k(0, −1) = 1.
The following result is a consequence of Lemmas 4 and 6.
Theorem 7. Let q be an odd prime power and let X be the set of m × m symmetric matrices over F_q. Then the numbers defined in (1) satisfy
$$N_X(r,s,k) = \frac{1}{|X|} \sum_{i=0}^{m} \sum_{\delta \in \{-1,1\}} v(i,\delta)\, P_r(i,\delta)\, P_s(i,\delta)\, P_k(i,\delta).$$
In what follows let v(i) be the number of m × m symmetric matrices of rank i, so that v(i) = v(i, 1) + v(i, −1).
Proposition 9. Let F(m, k, s) be given by [equation omitted] whenever this expression is defined, and let F(m, k, s) = 0 otherwise. Then P_0(i, δ) = 1 and P_k(0, δ) = v(k), and for k, i ≥ 1, the numbers P_k(i, δ) are given by [equation omitted] and [equation omitted]. To prove Proposition 9, we require the following recurrence relation for the numbers P_k(i, δ). Henceforth, we write P_k^(m)(i, δ) and v^(m)(i) for P_k(i, δ) and v(i), respectively, to indicate dependence on m.
Proof of Proposition 9. From the definition of P_k(i, δ) we see that P_0(i, δ) equals 1 and that P_k(0, δ) is the number of symmetric m × m matrices of rank k, namely v(k).
From this last identity and Lemma 10 we find that [equation omitted]. With elementary manipulations we then deduce from Proposition 8 that [equation omitted], which we can write as [equation omitted]. Using [equation omitted], we find that
$$F(m,k,0) = (-1)^k \sum_{j=0}^{n-1} q^{j(j-1)} \binom{n-1}{j}_{q^2} (-1)^j q^{2j} c^{-j}.$$
Applying the q-binomial theorem (7), we then see from (11) that [equation omitted], as required. Now substitute the recurrence in Lemma 10 into itself to obtain [equation omitted]; it is readily verified that P_k^(m)(2s + 1, δ) = F(m, k, s) satisfies the recurrence (13) for all s ≥ 1. Combination with (12) proves (9). The identity (10) is then a straightforward consequence of Lemma 10 and (9).
Applying the q-binomial theorem (7), we then see from (11) that as required.Now substitute the recurrence in Lemma 10 into itself to obtain it is readily verified that P (m) k (2s + 1, δ) = F (m, k, s) satisfies the recurrence (13) for all s 1. Combination with (12) proves (9).The identity (10) is a then straightforward consequence of Lemma 10 and (9). | 3,889.2 | 2016-04-15T00:00:00.000 | [
"Mathematics"
] |
Lepton flavor universality violation without new sources of quark flavor violation
We show that new physics models without new flavor violating interactions can explain the recent anomalies in the $b\to s\ell^+\ell^-$ transitions. The $b\to s\ell^+\ell^-$ arises from a $Z'$ penguin which automatically predicts the $V-A$ structure for the quark currents in the effective operators. This framework can be realized either in a renormalizable $U(1)'$ setup or be due to new strongly interacting dynamics. The di-muon resonance searches at the LHC are becoming sensitive to this scenario since the $Z'$ is relatively light, and could well be discovered in future searches by ATLAS and CMS.
Introduction. Lepton flavor universality (LFU) of electroweak interactions is one of the key predictions of the standard model (SM). The electric charge is copied from one generation of fermions to the other, so that the photon couples with the same strength to the electron as it does to the muon and the tau lepton. Similarly, the Z boson couples in the same way to all three generations of leptons, a fact that has been tested at the permille level for on-shell Z couplings at LEP [1,2]. Any deviation from LFU either in on-shell processes or from off-shell exchanges would be a clear indication of new physics (NP) (LFU violations from differing charged lepton masses are usually negligibly small, but will be kept in our discussion when needed).
If confirmed, these would constitute a discovery of NP. The NP models that have been put forward to explain the b → sℓ⁺ℓ⁻ anomalies fall into two categories. Most of the analyses so far have focused on the case where the b → sℓ⁺ℓ⁻ transition receives a contribution from a tree level exchange of a new heavy vector boson, Z′, with flavor violating couplings to b and s quarks, as well as couplings to either electrons [23] or muons [24][25][26][27][28][29][30][31][32][33][34][35][36][37] (in the case of Ref. [38] the latter is generated at loop level), or through tree level exchange of leptoquarks [39][40][41][42][43][44]. The other set of models generates the b → sℓ⁺ℓ⁻ transition through box loop diagrams with new heavy fields [45,46]. Both of these sets of solutions require flavor changing couplings beyond those present in the SM. One thus needs to make sure that the generated flavor changing transitions are consistent with other precision flavor observables such as B_s−B̄_s and D−D̄ mixing.
In this paper we show that there is a third class of models where all the NP couplings are flavor diagonal, but not flavor universal. The simplest realization is in terms of a Z′ whose dominant couplings in the SM sector are to the right-handed top quarks and to the muons; see Fig. 1. Other realizations are possible, for example in strongly coupled scenarios, as we briefly discuss below.
The NP models that we are proposing as possible explanations of the b → sℓ⁺ℓ⁻ anomalies have several salient features. They are examples of NP with (general) minimal flavor violation (MFV) [47][48][49][50][51] and thus easily satisfy the present experimental bounds from other flavor changing neutral current transitions besides b → sℓ⁺ℓ⁻. The b → sℓ⁺ℓ⁻ transition is generated via the exchange of the SM W gauge boson in the loop. This class of models thus leads automatically to the V − A structure of the quark current in the NP operators, as preferred by the global fits to the data [12][13][14][15]. There is more freedom in the structure of the couplings to muons, where both V − A and V + A currents are possible. Finally, since in this class of models the b → sℓ⁺ℓ⁻ transition is generated at the one-loop level, the Z′ is quite light, with a mass of a few hundred GeV, and can be searched for at the LHC in high-p_T processes.
General discussion. The effective weak Hamiltonian that describes the b → sℓ⁺ℓ⁻ transitions is given by
$$\mathcal{H}_{\rm eff} = -\frac{4 G_F}{\sqrt{2}} V_{tb} V_{ts}^* \frac{e^2}{16\pi^2} \sum_i \big( C_i O_i + C_i' O_i' \big) + \text{h.c.},$$
where e is the EM gauge coupling and the sum runs over the dimension-five and dimension-six operators. Denoting the SM and NP contributions to the Wilson coefficients as C_i = C_i^SM + C_i^NP, the global fits show some preference for a NP solution with C_9^{μ,NP} = −C_10^{μ,NP} ≈ −0.60(15); see, e.g., [15]. Here the relevant four-fermion operators are O_9 = (s̄γ_μ P_L b)(ℓ̄γ^μ ℓ) and O_10 = (s̄γ_μ P_L b)(ℓ̄γ^μ γ_5 ℓ). The data thus imply the presence of NP contributions with a V − A structure in the quark sector. However, additional contributions of comparable magnitude but with a V + A structure from the NP operators O_9′ = (s̄γ_μ P_R b)(ℓ̄γ^μ ℓ) and O_10′ = (s̄γ_μ P_R b)(ℓ̄γ^μ γ_5 ℓ) are still allowed by the current data.
In the class of models we are considering, only O_9 and O_10 are generated at one loop; see Fig. 1. The V − A current in the quark sector is a clear prediction of these models, while the structure of the couplings to leptons depends on the details of the model. For simplicity we assume that NP predominantly affects the b → sμ⁺μ⁻ transition and not b → se⁺e⁻. This leads to LFU violation when comparing the two channels. It also modifies the total rates in various b → sμ⁺μ⁻ decays, in accordance with indications of global fits [12][13][14][15]. On the other hand, B_s, B_d, and K⁰ mixing via Z′ exchange arises only at the two-loop level and is well within present experimental and theoretical precision.
Since the NP sector does not contain new sources of flavor violation, this class of models respects the MFV ansatz. In MFV, a shift to C_9,10 can be correlated with the analogue contributions to rare kaon decays. For instance, the K⁺ → π⁺νν̄(γ) decay branching ratio is modified to [52] [equation omitted], in which a numerical constant equal to 1.10 + 0.24i appears, with the X_i defined, e.g., in [53]; we have written for the weak mixing angle s_W ≡ sin θ_W ≈ 0.48 and c_W ≡ cos θ_W. For values of C_9,10^{μ,NP} that are preferred by current b → s data, the resulting effect in K → πνν̄ is small compared to current experimental uncertainties, but could be within reach of the ongoing NA62 experiment [54]. Similar comments apply to the theoretically very clean K_L → π⁰νν̄ decay.
Figure 1: The NP contributions to the d_i → d_j processes from the exchange of a Z′ that couples to the top quark and a heavy top partner T.
The decay is modified at the level of O(5%) by such NP models. To observe these effects, the experimental sensitivity [55] would need to be improved by two orders of magnitude in conjunction with some improvements in theoretical precision [56]. The decay modes K⁺ → π⁺e⁺e⁻ and K⁺ → π⁺μ⁺μ⁻ are dominated by long distance contributions, while the NP contributions are expected to give effects only below the permille level and thus be unobservable. The same is true for the K_L → μ⁺μ⁻ transition, where again the NP contribution is drowned by the SM long distance effects.
The minimal aligned U(1)′ model. We discuss next the simplest realization of the above framework. We restrict ourselves to the case where on the leptonic side only the muons are affected by NP. The minimal model has a new U(1)′ gauge symmetry that is spontaneously broken through the vacuum expectation value (VEV) of a scalar field, Φ, transforming as Φ ∼ (1, 1, 0, q′) under SU(3)_c × SU(2)_L × U(1)_Y × U(1)′. The model contains, in addition, a colored Dirac fermion T ∼ (3, 1, 2/3, q′). The SM is thus supplemented by the Lagrangian [equation omitted], where D_μ ⊃ i g̃ q′ Z′_μ is the U(1)′ part of the covariant derivative, F′_μν = ∂_μ Z′_ν − ∂_ν Z′_μ is the field strength for the gauge boson Z′, and Φ = (φ + ṽ)/√2. Here g̃ is the U(1)′ gauge coupling, ṽ is the VEV that breaks the U(1)′, and φ is the physical scalar boson that obtains mass m_φ after spontaneous breaking of U(1)′.
All the SM fields are singlets under U(1)′. There are only three renormalizable interactions between the SM and the U(1)′ sector: the Higgs portal coupling of Φ to the SM Higgs, H; the U(1)′ kinetic mixing with the SM hypercharge, B_μν; and a Yukawa-type coupling of T and Φ to the SM right-handed up-quarks u_R^i [equation (6) omitted]. Summation over the generation index i = 1, 2, 3 is implied. While the y_T^i can in general take any values, we assume they are aligned with the right-handed up-quark Yukawa coupling, i.e., that the two satisfy the basis independent condition [y_T† y_T, y_u† y_u] = 0. In the up-quark mass basis, then, y_u ∼ diag(0, 0, y_t) and y_T ∼ (0, 0, y_T^t), so that at leading order the Z (Z′) couplings to light quarks remain exactly SM-like (vanish); see Refs. [57,58] for more detailed discussion. Such a structure is natural in flavor models of quark masses where the commutator above does not vanish exactly but is still sufficiently small to avoid dangerous Z- and Z′-mediated flavor changing neutral currents. For example, in Froggatt-Nielsen type models with horizontal U(1) symmetry [59] one has y_T ∼ y_T^t (c_u λ_C³, c_c λ_C, 1), with λ_C ∼ 0.2 and c_u,c ∼ O(1). If the U(1) is gauged, the charm mixing [60] bounds the corresponding Z′ to m_Z′ ≳ |Re(c_u c_c*)| × 250 GeV, for O(1) gauge couplings and large mixing between t and T, as in Fig. 2. While these constraints start to probe interesting parameter space, they do not yet exclude the above explanation of the b → sμ⁺μ⁻ anomaly.
In the rest of the paper we ignore the mixing of T with the first two generations of quarks. For simplicity we also neglect the Higgs portal and the kinetic mixing couplings. After electroweak symmetry breaking the t−T part of the mass matrix, M_u, for up-type quarks and T is given by [equation omitted], where v ≈ 246 GeV is the SM electroweak (EW) VEV. The mass eigenstates, t and T, with masses m_t ≈ 173 GeV and m_T, are admixtures of the interaction eigenstates, with the mixing angles for the two chiralities, θ_L,R, given by [equation omitted]. In the phenomenological analysis we take y_t v, y_T^t ṽ ≪ M_T, in which case θ_R ∼ y_T^t ṽ/M_T and θ_L ∼ θ_R v/M_T. The two mass eigenstates, t and T, then have masses m_t ≈ y_t v/√2 and m_T ≈ M_T, or more precisely, [equation omitted]. The couplings to the massive gauge bosons are given by [equation omitted], where we use for brevity s_L,R (c_L,R) ≡ sin θ_L,R (cos θ_L,R). The SM weak coupling constant is g ≡ 2 m_W (√2 G_F)^{1/2} ≈ 0.65, the V_ij are the elements of the unitary 3×3 CKM matrix, and J^μ_EM ≡ (2/3)(t̄γ^μ t + T̄γ^μ T) is the relevant EM current.
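One standard parameterization of the lost t−T mass matrix display is sketched below; this is our reconstruction under the assumption that the only new mass terms are the y_t and y_T^t Yukawas and the Dirac mass M_T, not a verbatim restoration of the paper's equation:

$$\mathcal{L}_{\rm mass} \supset -\,(\bar t_L,\ \bar T_L)\begin{pmatrix} y_t v/\sqrt{2} & 0 \\ y_T^t \tilde v/\sqrt{2} & M_T \end{pmatrix}\begin{pmatrix} t_R \\ T_R \end{pmatrix} + \text{h.c.}$$

Diagonalizing this matrix in the limit y_t v, y_T^t ṽ ≪ M_T reproduces the scalings quoted in the text, θ_R ∼ y_T^t ṽ/M_T and θ_L ∼ θ_R v/M_T, up to O(1) factors.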
In the limit M_T ≫ v, ṽ, the dominant effect is the new Z′ coupling to right-handed tops, t̄ γ^μ P_R t Z′_μ, while modifications of the W and Z couplings appear only at O(1/M_T⁴). The mixing angle θ_L is constrained by electroweak precision tests. The modification of the ρ parameter is given by [58] [equation omitted], where r ≡ m_T²/m_t². A comparison with the experimental value Δρ_exp = (4 +3/−4) · 10⁻⁴ [1] yields s_L ≲ 0.2 for m_T ≈ 1 TeV.
The renormalizable vector and axial muonic current couplings to the Z′ are in general given by [equation omitted]. We assume that the Z′ couplings to charged leptons are flavor diagonal and focus on the couplings to muons. The effective couplings of the Z′ to the muon, q′_ℓ,V and q′_ℓ,A, depend on the embedding of U(1)′ in the UV theory. For instance, if only μ_L couples to the Z′, then q′_ℓ,V = −q′_ℓ,A, giving C_9^{μ,NP} = −C_10^{μ,NP}. This possibility is somewhat preferred by present b → s global fits. Such a structure arises if the SM muon EW doublet, L = (μ_L, ν_μ), mixes with a heavy Dirac fermion lepton, L_T, through a Yukawa interaction y_μ L̄ Φ L_T (a possibility of this type was first discussed in [61]). The L_T has the same electroweak charges as L, but is in addition charged under the U(1)′ with the opposite charge to Φ. The L_T decays predominantly through L_T → μZ, νW → μνν. Chargino searches at the LHC in the dilepton+MET channel bound M_{L_T} ≳ 600 GeV from L_T pair production [62,63]. If there is in addition a heavy U(1)′ lepton with the electroweak charges of the right-handed muon, then there is no fixed relation between C_9^{μ,NP} and C_10^{μ,NP}. Furthermore, the Z′ can also couple to electrons and taus, a possibility we do not pursue in detail, but which may be important for LHC searches and their relation to LFU violating observables in B decays, as well as to K → πνν̄ decays. Depending on the details of how the leptonic sector is extended, one may also potentially explain the (g − 2)_μ anomaly.
The leading Z′ effects in rare semileptonic B meson decays are captured by the shifts to the Wilson coefficients (see also [64]) [equation omitted], where we have kept only the dominant, logarithmically enhanced term. We observe that sufficiently large C_9,10^{μ,NP}, as preferred by current data, can be generated for O(1) couplings. The searches at the LHC for dimuon resonances could put important bounds on the Z′ couplings and its mass, or lead to its discovery [65,66]. The most important production channels are the tree level pp → t̄tZ′, as well as pp → ZZ′ and pp → jZ′ production through top and T loops. The representative diagrams for these are shown in Fig. 3 (see also [67]). For the calculation we use MadGraph5_aMC@NLO [68] with a modified model file [69] for the model of Ref. [70].
The Z′ boson decays to pairs of muons and, if its mass is above the 2m_t threshold, also to top quarks. The relevant fermionic widths are given by [equation omitted], neglecting the m_t²/m_Z′²-suppressed terms. Similar expressions apply to potential Z′ → νν̄ and/or Z′ → τ⁺τ⁻ decays, with obvious replacements in the notation. For a Z′ that predominantly couples to one left-handed lepton flavor (two lepton flavors with the same strength), one has Br(Z′ → ℓ⁺ℓ⁻) ≈ 0.5 (0.25) for each charged lepton ℓ.
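Since the displayed width formulas are lost, the following MATLAB sketch illustrates the branching-ratio counting with the standard partial width of a massive vector into a massless fermion pair, Γ = N_c m (g_V² + g_A²)/(12π), for the interaction Z′_μ f̄ γ^μ (g_V + g_A γ_5) f; the normalization convention and all numerical couplings below are our illustrative assumptions, not the paper's values.

```matlab
% Illustrative Z' partial widths and branching ratios (all inputs assumed).
gammaVff = @(m, gV, gA, Nc) Nc .* m .* (gV.^2 + gA.^2) ./ (12*pi);

mZp = 350;    % GeV, illustrative Z' mass above the 2*mt threshold
gqt = 1.0;    % illustrative gtilde*q' for t_R: P_R coupling -> gV = gA = gqt/2
gqm = 1.0;    % illustrative gtilde*q' for mu_L: P_L coupling -> gV = -gA = gqm/2

Gtt = gammaVff(mZp, gqt/2, gqt/2, 3);  % Z' -> t tbar, mt^2/mZp^2 terms neglected
Gmm = gammaVff(mZp, gqm/2, gqm/2, 1);  % Z' -> mu+ mu-
Gnn = gammaVff(mZp, gqm/2, gqm/2, 1);  % Z' -> nu nubar (SU(2) partner of mu_L)

Gtot = Gtt + Gmm + Gnn;
fprintf('Br(mumu) = %.2f, Br(tt) = %.2f\n', Gmm/Gtot, Gtt/Gtot);
% Below the tt threshold only the leptonic modes remain, reproducing the
% Br(Z' -> mu mu) ~ 0.5 quoted in the text for a single left-handed flavor.
```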
In Fig. 2 we show the constraint from the recent ATLAS high-mass dilepton resonance search [65] in the g̃q′ versus m_Z′ plane. We fix the heavy top partner mass to be m_T = 5 m_Z′ with s_R = 0.4, taking q′_μ,V = −q′_μ,A = q′/3. The mixing s_L ∼ s_R v/m_T is small enough that electroweak precision tests are not constraining in the shown parameter space. For the above parameter choice the branching ratios to t̄t and μ⁺μ⁻ are similar. Following the ATLAS analysis, we use a 40% acceptance for the dominant Z′j production channel and show the bounds derived for Γ_Z′/m_Z′ = 0.08, adjusting for the fact that ATLAS assumes equal decay probabilities for Z′ → μ⁺μ⁻ and Z′ → e⁺e⁻. The regions that are excluded by the dilepton resonance search [65], for Br(Z′ → μμ) = 0.25, 0.50, 1, are shown in orange. The 1σ region preferred by the b → sℓ⁺ℓ⁻ transitions is shown in blue. We see that existing dimuon searches are already covering interesting parameter space. Still, it would be important to gain another order of magnitude in the sensitivity of the experimental searches, as the precise value of Br(Z′ → μμ) is model dependent. In the most interesting Z′ mass range, m_Z′ ≲ 300 GeV, the tree level pp → t̄tZ′ cross section is larger than the loop induced pp → Z′j process. Thus, searches for dimuon resonances in association with t̄t can provide an important additional handle on this model. An important probe of the Z′ coupling to left-handed muons is neutrino trident production [71]. The resulting upper bound on g̃q′_μ is given by the dashed purple line in Fig. 2. This is much more constraining than the bounds from LFU violation in leptonic Z couplings, induced at one loop because the Z′ couples to muons but not to electrons [72] (see also [73]). Finally, since the heavy quark, T, and the vectorlike leptons, L_T, are charged under both the U(1)′ and hypercharge, one expects kinetic mixing between the Z′ and the SM B gauge field at the one-loop level, ∼ 10⁻³. This is below present bounds in our preferred range of Z′ masses; see e.g. [74].
Beyond the minimal model. The above minimal model can be extended in several ways. For b → sℓ⁺ℓ⁻ decays the only essential ingredient is that the Z′ couples to top quarks and to muons. It is very easy to deviate from this minimal assignment and also couple the Z′ to τ leptons without significantly changing the phenomenology. The main effect is on Z′ searches, since in that case the branching ratio for Z′ → μ⁺μ⁻ is reduced, making the searches less sensitive, while on the other hand opening a new search channel, Z′ → τ⁺τ⁻.
The simplest model can also be viewed as a simplified model for strongly interacting NP. In this case the Z′ is the lightest resonance in the strongly interacting sector, while the Φ field can be thought of as a condensate of the strong dynamics that dynamically breaks the hidden U(1)′ corresponding to the Z′ vector. The couplings of the Z′ to tops and muons then depend on the compositeness fractions of these two fermions. It is then also natural for the Z′ couplings to the lighter quarks to be suppressed, since these are presumably less composite, while one would expect the couplings of the Z′ to tau leptons and possibly b quarks to be enhanced. In this case the Z′ → μ⁺μ⁻ branching ratio can be significantly smaller than in the minimal renormalizable model we considered above, while searches for resonances in the ditau channel can become more sensitive (see e.g. [75]).
Conclusions. In conclusion, we introduced a Z′ model, whose defining feature is that the Z′ couples to the up sector, and which can explain the b → sμ⁺μ⁻ anomaly. The V − A structure of the quark current in the b → s transition is a clear prediction of such models. The b → sμ⁺μ⁻ decay is due to a Z′ coupling to muons and top quarks, where the flavor changing transition is predominantly due to a top-W penguin loop. The flavor structure is of the minimal flavor violating type, naturally leading to b → sμ⁺μ⁻ decays as the most important precision flavor observables. The Z′ is expected to be light, m_Z′ ≲ 1 TeV, and can be as light as a few hundred GeV. It can be searched for in dimuon and ditop channels, either in inclusive searches or in production in association with a Z or with a t̄t pair.
"Physics"
] |
User Behaviors and User-Generated Content in Chinese Online Health Communities: Comparative Study
Background: Online health communities (OHCs) have increasingly gained traction with patients, caregivers, and supporters globally. Chinese OHCs are no exception. However, user-generated content (UGC) and the associated user behaviors in Chinese OHCs are largely underexplored and rarely analyzed systematically, forfeiting valuable opportunities for optimizing treatment design and care delivery with insights gained from OHCs. Objective: This study aimed to reveal both the shared and distinct characteristics of 2 popular OHCs in China by systematically and comprehensively analyzing their UGC and the associated user behaviors. Methods: We concentrated on studying the lung cancer forum (LCF) and breast cancer forum (BCF) on Mijian, and the diabetes consultation forum (DCF) on Sweet Home, because of the importance of the 3 diseases among Chinese patients and their prevalence on Chinese OHCs in general. Our analysis explored the key user activities, small-world effect, and scale-free characteristics of each social network. We examined the UGC of these forums comprehensively and adopted the weighted knowledge network technique to discover salient topics and latent relations among these topics on each forum. Finally, we discussed the public health implications of our analysis findings. Results: Our analysis showed that the number of reads per thread on each forum followed a gamma distribution (H_L=0, H_B=0, and H_D=0); the number of replies on each forum followed an exponential distribution (adjusted R²_L=0.946, adjusted R²_B=0.958, and adjusted R²_D=0.971); and the number of threads a user is involved with (adjusted R²_L=0.978, adjusted R²_B
Introduction
Background
An online community is a social group created by internet users for a variety of purposes and interests. The rapid development of "internet plus" [1] technology has further promoted the value of online communities in recent years. Meanwhile, the health consciousness of populations and their motivation for better self-health management have been steadily growing. Propelled by the strong desire to mitigate the information asymmetry between doctors and patients pervasive in traditional health care communications, patients have gained a new way to share their disease situation and receive needed health care advice through online health communities (OHCs). For example, a rising number of patients and their relatives continually participate in OHCs, actively share their treatment experiences, and openly express their personal opinions and feelings on various issues encountered during treatment or the whole care journey. The value of OHCs for exchanging emotional communications and delivering social support for patients and their families has also been widely recognized [2]. At present, a growing volume of user-generated content (UGC) and associated online user behaviors are becoming available on OHCs. The US Office of the National Coordinator for Health Information Technology defines patient-generated health data as health-related data created, recorded, or gathered by or from patients (or family members or other caregivers) to help address a health concern [3]. The Chinese National Health Commission publicly released an Action Plan for the Further Improvement of Medical Services (2018-2020), which emphasizes the role of patient organizations in knowledge sharing, whole-course disease management, rehabilitation support, drug development, and clinical trials [4]. However, such UGC in Chinese OHCs and the associated user behaviors are often underexplored and rarely analyzed systematically, thus losing valuable clues and evidence for improving treatment design and patient care.
Status of Research Concerning OHCs
A number of OHCs exist, providing users with diverse and convenient ways to exchange information, share experiences, seek answers, and receive support. PatientsLikeMe is the first and also the largest social network platform in the world dedicated to patients. By 2018, more than 650,000 users had communicated and shared their health information over the platform, covering more than 2900 diseases [5]. MyHealthTeams [6] is a social network for people living with chronic diseases, which aims to provide mutual aid for its participants and has gathered more than 2 million users spread over 33 online disease platforms. Several OHCs, such as Breastcancer [7] and BecomeAnEX [8], are also popular among patients. In China, Manyoubang [9] provides an interactive OHC for patients with chronic diseases, with more than 22 subforums. Yi Xiang Network [10] is the largest case-sharing website in China, providing services such as case upload, communication, and mutual assistance for patients. Mijian [11] is the largest interactive OHC for patients in China at the time of writing this manuscript, integrating multiple single-disease forums with interactive question and answer functions. Other OHCs in China, such as Sweet Home [12] and Lymphoma Home [13], mainly focus on serving patients with chronic conditions. Meanwhile, some general-purpose Chinese online forums (eg, Tianya [14], Tieba [15], and Zhihu [16]) also host subforums specifically dedicated to disease-centric discussions.
Since the emergence of versatile OHCs, scholars have attempted to analyze these virtual communities from various perspectives. For example, Smailhodzic et al [17] conducted a literature review covering 22 articles, according to which patients' use of social media was classified into the following 6 categories: emotional support, information support, esteem support, network support, social comparison, and emotional expression. Dongxiang [18] overviewed OHCs in China from 3 perspectives, including their UGC and the characteristics of participants and underlying online communities. Wu et al [19] summarized research hotspots concerning OHCs and their evolution, as well as the key analysis methods for OHCs. To reveal factors that may motivate knowledge sharing in OHCs, scholars have utilized text mining to better understand and predict user participation [20,21]. Fernandes et al [22] adopted a netnography method to analyze the positive impact of OHCs on the prognosis of diabetes. Li [23] utilized the structural equation model to study factors affecting individual patients' willingness to share medical information. Scholars have also studied the distribution of health topics according to questions and answers on OHCs using machine learning approaches [24][25][26][27]. In addition, scholars have utilized social network analysis methods to analyze knowledge exchange behaviors among users in OHCs by constructing and examining the underlying knowledge-sharing networks [28][29][30][31][32][33].
Overall, existing research on OHCs has mainly focused on uncovering users' motivation for participating in the OHCs, discussing factors affecting users' online knowledge-sharing behaviors, and mining UGC in OHCs. For Chinese OHCs, existing research primarily concentrates on examining small-scale single-disease forums. In comparison with peer international studies, both the breadth and depth of current analysis regarding Chinese OHCs are much more limited, calling for expanded efforts to broaden the understanding and strengthen preliminary findings produced through existing studies. To meet the demand and fill the gap, this study comprehensively examined 3 representative disease forums hosted on the 2 most popular Chinese OHCs. The large-scale evaluation reveals both the shared traits and distinct characteristics of user behaviors and UGC on these forums to shed light on understanding user behaviors and UGC on Chinese OHCs in general.
Objectives
Given the popularity and proliferation of OHCs, understanding multifaceted patient experiences reflected from UGC in these forums and related user behaviors can provide many valuable insights for enhancing public health awareness and improving the quality of the care delivered. Comprehensive and in-depth analysis of such user content and behaviors can also help optimize the design and management of OHCs from a software engineering perspective, as well as the design and development of better community-based knowledge services at large. Driven by the above anticipated benefits, this study performed an in-depth analysis on UGC and related online user behaviors in 3 large-scale OHCs in China. We utilized a variety of social network analysis methods and constructed a knowledge-sharing network for each OHC to study the evolution of OHCs, discover characteristics of user behaviors, uncover salient topics and their relations in each of the virtual communities, and reveal common traits and distinct characteristics in the 3 OHCs examined. Through these case analyses, we also aimed to offer insights regarding user behaviors and UGC in Chinese OHCs in general.
Data Collection
Two influential OHCs in China, Mijian and Sweet Home, were selected for analysis in this study. Mijian was selected because it is currently the largest OHC for patients in China. The site serves patients diagnosed with chronic, severe, or rare diseases, aiming to help relieve their psychological stress, help them learn disease-related health knowledge, and help them effectively acquire medical resources. Sweet Home was selected because it is the largest OHC in China for patients with diabetes. The site offers categorized forums for medical consultation, service guidance, and emotional expression. Through the site, patients with diabetes can not only discuss their medical conditions but also connect and communicate remotely with other patients across the country. Within these 2 focus OHCs, our analysis concentrated on the lung cancer forum (LCF) and breast cancer forum (BCF) on Mijian, and the diabetes consultation forum (DCF) on Sweet Home. These forums were chosen because of their predominant popularity among users of the 2 OHCs and because of the significance of the 3 diseases for the well-being of Chinese patients and the population as a whole: breast cancer and lung cancer represent the 2 leading chronic noncommunicable diseases in the world, and China has the largest number of patients with diabetes globally [34,35].
In a disease forum, a single conversation is referred to as a "thread" (ie, a topic). Users can respond to another person's thread, which is referred to as a "reply." Thus, a post made by a user on a forum can either be an original thread created by the user or a reply to another user's thread [36]. We crawled all threads on the 3 focus disease forums from their respective establishment dates (LCF: November 15, 2013; BCF: August 25, 2015; DCF: September 1, 2005) to October 20, 2020. For each crawled post, we also obtained its ID, posting time, number of reads, and number of replies. We then performed a series of data cleaning operations, including filtering out posts automatically created by chatbots on these forums and removing entries with missing data. Table 1 presents key statistics of the acquired data sets in comparison with those used in peer studies [20,24,26,28,30,37,38], showing that the scale of the current analysis substantially exceeds that of all prior efforts.
Social Network Analysis
A social network is a social structure made up of a set of social actors (such as individuals and organizations), sets of dyadic ties, and other social interactions between actors [41]. Social network analysis can help identify community structures at the network level, as well as individual behaviors at the single-user level. Since a user could be both a thread author and a reply author, this study adopted a directed network structure to model the community network. In such a directed network, each edge of the network is directional, where the in-degree of a node refers to the number of directed edges ending with the node. Conversely, the out-degree of a node is the number of directed edges starting from the node. The total degree of a node is the total number of its network neighbors irrespective of the tie direction (ie, the sum of its in-degree and out-degree).
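As a concrete illustration, the following minimal Python sketch (our own, not the authors' code; the edge list, field ordering, and edge direction are hypothetical) builds such a directed reply network and reports the degree measures just defined:

```python
import networkx as nx

# Each pair is (reply_author, thread_author); pointing the edge from the
# replier to the thread author is an assumption made for illustration.
replies = [("user_b", "user_a"), ("user_c", "user_a"), ("user_a", "user_c")]

G = nx.DiGraph()
G.add_edges_from(replies)

for node in G.nodes:
    print(node,
          "in-degree:", G.in_degree(node),    # replies this user's threads received
          "out-degree:", G.out_degree(node),  # replies this user wrote
          "total degree:", G.degree(node))    # sum of in-degree and out-degree
```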
We conducted a topological analysis for complex networks [41] to explore the structural characteristics of each forum. Our analysis was carried out in 2 steps.

First, we explored the small-world effect of each social network. We took the path length between 2 nodes as the minimum number of edges connecting these nodes in the network. The average path length, also known as the characteristic path length, is defined as the average number of steps along the shortest paths for all possible pairs of network nodes. Let $d_{ij}$ denote the shortest distance between 2 nodes $i$ and $j$ in the network, and assume that $d_{ij}=0$ if $i=j$ or if $j$ cannot be reached from $i$. Then, the average path length $L$ is as follows:

$$L = \frac{1}{N(N-1)} \sum_{i \neq j} d_{ij} \qquad (1)$$

where $N$ is the number of network nodes. The clustering coefficient of a network measures the degree of node clustering in the network. Assume a node $k$ has $n$ adjacent neighboring nodes ($N_1, N_2, \ldots, N_n$). If 2 of these nodes $i$ and $j$ are connected with a link, the directed link is denoted as $e_{ij}$. The local clustering coefficient of the node $k$ is defined as follows:

$$C_k = \frac{|\{e_{ij}\}|}{n(n-1)} \qquad (2)$$

where $|\{e_{ij}\}|$ is the number of directed links that exist among the neighbors of $k$, and $n(n-1)$ is the maximum possible number of such links. Assume the entire network has $K$ nodes in total. The average clustering coefficient $C$ of the network is the mean of the clustering coefficients of all its nodes, that is,

$$C = \frac{1}{K} \sum_{k=1}^{K} C_k \qquad (3)$$

The small-world effect, also known as the 6 degrees of separation, is the idea that all strangers can be related through 6 or fewer people [42]. Watts et al proposed a small-world network model (WS model) [43], in which a small-world network is characterized by a small average path length and a high clustering coefficient. As a general method for quantifying the small-world effect, a network can be assessed by comparing its clustering coefficient and average path length with those of an equivalent Erdös-Rényi (ER) random network that has the same number of nodes and edges [44]. To construct such an equivalent random network, we employed the following generative procedure: Let $N$ and $M$ be the expected numbers of nodes and edges of the network to be generated, respectively. The network is initialized with $N$ unconnected nodes. At each step, we randomly selected and linked a pair of nodes not currently connected in the network, and we repeated this step until all $M$ edges were added to the network. Given the generated random network, $\sigma$ can be calculated as follows:

$$\sigma = \frac{C / C_r}{L / L_r} \qquad (4)$$

where $C$ is the average clustering coefficient of the network, $L$ is the average path length of the network, $C_r$ is the average clustering coefficient of the equivalent ER random network, and $L_r$ is the average path length of the equivalent ER random network. If $\sigma > 1$, the network is considered a small-world network [44], in which case it is assumed that knowledge can spread efficiently and rapidly in the community represented by the network.
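A minimal sketch of this $\sigma$ test, assuming the networkx library and the paper's convention that unreachable pairs contribute 0 to the path-length sum (our illustration, not the authors' code):

```python
import networkx as nx

def avg_path_length(G):
    # Average path length with d_ij = 0 when j is unreachable from i (Eq. 1):
    # unreachable targets are simply absent from the shortest-path dictionary.
    n = G.number_of_nodes()
    total = sum(d for _, dists in nx.all_pairs_shortest_path_length(G)
                for _, d in dists.items())
    return total / (n * (n - 1))

def small_world_sigma(G, trials=50):
    # Compare C and L with equivalent Erdos-Renyi random networks (Eq. 4).
    n, m = G.number_of_nodes(), G.number_of_edges()
    C, L = nx.average_clustering(G), avg_path_length(G)
    vals = []
    for _ in range(trials):
        R = nx.gnm_random_graph(n, m, directed=True)
        Cr, Lr = nx.average_clustering(R), avg_path_length(R)
        if Cr > 0 and Lr > 0:           # skip degenerate random draws
            vals.append((C / Cr) / (L / Lr))
    return sum(vals) / len(vals)        # sigma > 1 suggests a small world
```

Averaging over 50 random networks mirrors the procedure reported later in this paper; on large forum graphs, the all-pairs shortest-path step would need sampling or parallelization in practice.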
Second, we explored the scale-free property of the social networks. The scale-free property is a structural characteristic of a network introduced by Barabási et al [45], and across scientific domains and classes of networks it is common to encounter the claim that most or all real-world networks are scale free. A network is deemed scale free if the fraction of nodes with degree $k$ follows a power-law distribution $P(k) = c \times k^{-r}$, where $c$ and $r$ are network-specific constants and $r > 1$. The property comprises 2 aspects. First, the degree distribution follows a power law: most nodes have few links, while a small fraction of nodes has a large number of links. Second, during network growth, new nodes preferentially establish relations with well-connected nodes. The scale-free network model is also commonly referred to as the B-A model [45].
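A minimal sketch of this degree-distribution check (our illustration, not the authors' code), fitting $P(k) = c \times k^{-r}$ by least squares in log-log space, with synthetic Zipf-distributed degrees standing in for real forum data:

```python
import numpy as np

def fit_power_law(degrees):
    ks, counts = np.unique(degrees, return_counts=True)
    x, y = np.log10(ks), np.log10(counts / counts.sum())
    slope, intercept = np.polyfit(x, y, 1)       # linear fit in log-log space
    resid = y - (slope * x + intercept)
    r2 = 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return 10 ** intercept, -slope, r2           # c, r, and goodness of fit R^2

degrees = np.random.default_rng(0).zipf(2.0, 10_000)  # placeholder degree data
c, r, r2 = fit_power_law(degrees)
print(f"P(k) ~ {c:.3f} * k^(-{r:.3f}), R^2 = {r2:.4f}")
```

Log-log least squares is only a rough diagnostic (maximum-likelihood estimators are preferred for rigorous power-law testing), but it mirrors the curve fitting with $R^2$ values reported later in this paper.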
Weighted Knowledge Network
Topic analysis techniques can be leveraged to extract conceptual topics, determine their types, and analyze their internal structures latent in a large text corpus. In this study, we analyzed health topics on OHCs to identify hot topics and the salient health information needs of their users.
We executed the topic analysis in 2 steps. First, we extracted key phrases from the UGC according to point-wise mutual information (PMI), as well as left and right information entropy, with which co-occurrence relationships between words can be efficiently found. In this step, mutual information, which in information theory measures the degree of correlation between 2 signals [46], is repurposed to measure the degree of interdependence between 2 variables. In natural language processing (NLP), PMI is used to calculate the degree of correlation between 2 words, so that word co-occurrences can be found from a statistical perspective and examined for semantic or thematic correlation. The PMI of 2 adjacent words $x$ and $y$ is computed as follows:

$$\mathrm{PMI}(x, y) = \log_2 \frac{p(x, y)}{p(x)\,p(y)} \qquad (5)$$

where $p(x)$ is the probability of word $x$ appearing in all threads, that is, $p(x)$ = the number of occurrences of word $x$ / the total number of words in all threads; $p(y)$ is defined analogously for word $y$; and $p(x,y)$ is the joint probability of $x$ and $y$, that is, the probability that the 2 words appear adjacent to each other in the text. A higher PMI of $x$ and $y$ indicates higher internal aggregation and a greater possibility that the 2 words form a phrase; conversely, a lower PMI indicates that a phrasal boundary is more likely to lie between them.
Entropy is an uncertainty measure associated with a random variable: higher entropy corresponds to greater underlying information content and hence higher uncertainty [47]. In NLP, the left and right entropy of a word $W$ are defined as follows:

$$E_L(W) = -\sum_{a \in A} p(a \mid W) \log_2 p(a \mid W), \qquad E_R(W) = -\sum_{b \in B} p(b \mid W) \log_2 p(b \mid W) \qquad (6)$$

where $E_L$ and $E_R$ are the left entropy and right entropy of the word $W$, respectively; $A$ and $B$ represent the sets of all words appearing to the left and right of $W$, respectively; and $a$ and $b$ represent the words appearing immediately on the left and right sides of $W$, respectively. Greater left or right entropy is associated with a higher degree of freedom of the word, indicating more varied contexts surrounding the given word $W$.
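To make the scoring concrete, here is a minimal Python sketch (our illustration, not the authors' code) that computes the PMI of adjacent word pairs together with the left and right entropy of each candidate phrase; the whitespace tokenization and the simple additive combination of the 3 quantities are assumptions for illustration:

```python
import math
from collections import Counter

def entropy(counter):
    """Shannon entropy (base 2) of an empirical distribution of context words."""
    total = sum(counter.values())
    return -sum(c / total * math.log2(c / total) for c in counter.values()) if total else 0.0

def score_phrases(tokens):
    """Score each adjacent word pair by PMI plus left/right context entropy."""
    n = len(tokens)
    word_p = {w: c / n for w, c in Counter(tokens).items()}
    pairs = list(zip(tokens, tokens[1:]))
    pair_p = {p: c / len(pairs) for p, c in Counter(pairs).items()}
    scores = {}
    for (x, y), pxy in pair_p.items():
        pmi = math.log2(pxy / (word_p[x] * word_p[y]))
        # Words immediately to the left/right of every occurrence of the pair.
        left = Counter(tokens[i - 1] for i in range(1, n - 1)
                       if (tokens[i], tokens[i + 1]) == (x, y))
        right = Counter(tokens[i + 2] for i in range(n - 2)
                        if (tokens[i], tokens[i + 1]) == (x, y))
        scores[(x, y)] = pmi + entropy(left) + entropy(right)
    return scores

tokens = "blood glucose control requires diet control and blood glucose monitoring".split()
for pair, s in sorted(score_phrases(tokens).items(), key=lambda kv: -kv[1])[:3]:
    print(pair, round(s, 3))
```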
Second, we treated each keyword as a node and the co-occurrence relationship between a pair of keywords as an edge to construct a weighted knowledge network (WKN) [37,48]. In the process, we assigned weights to the nodes and edges according to the weights of the corresponding keywords and the relationship strength between the corresponding key phrases. The WKN integrates and models the fragmented knowledge of the thematic content, which can be used to effectively discover the internal relationships and overall characteristics of a knowledge network. More specifically, we summed the PMI value and the left and right entropy calculated above, and used this sum as a measure of how likely 2 words are to form a phrase. We then extracted the key phrases in each post and their respective weights, where the frequency of a key phrase is used as its weight. We define $E$ as the keyword co-occurrence set and $Q(E)$ as the weight set associated with $E$, as follows:

$$E = \{e_{ij}\}, \qquad Q(E) = \{q(e_{ij})\} \qquad (7)$$

where $e_{ij} = 1$ if keywords $k_i$ and $k_j$ form a key phrase, as indicated by a co-occurrence relationship between them in the WKN, and $e_{ij} = 0$ otherwise; $q(e_{ij})$ is the weight of $e_{ij}$, with $q(e_{ij}) = n(e_{ij})/N$, where $n(e_{ij})$ is the number of occurrences of the key phrase among all phrases and $N$ is the total number of phrases. Next, all detected key phrases were organized into a keyword set $K$, for which an associated keyword weight set $Q(K)$ was introduced as follows:

$$K = \{k_1, k_2, \ldots\}, \qquad Q(K) = \{q(k_1), q(k_2), \ldots\} \qquad (8)$$

According to the constructed WKN model, the results can be displayed with social network visualization tools.

Descriptive Statistics

Table 2 presents descriptive statistics of the OHC data analyzed in this study, with extreme outliers removed during preprocessing. In terms of the number of reads and replies per thread, the data distribution was severely nonuniform: most threads had relatively few reads and replies, while only a few threads received a large number of reads and replies, which suggests that most users preferentially read the threads that had already received many replies, resulting in a polarized distribution. The SD measures the amount of variation or dispersion of a set of numbers and is also affected by the volume of data analyzed. The coefficient of variation is a statistical measure of the dispersion of data points around the mean of a data series; for a general normal distribution it is less than 1. Considering the large coefficient of variation for each attribute listed in Table 2 and the skewness of the frequency distribution graphs ($SK_L$=35.41, $SK_B$=25.65, and $SK_D$=12.45), we concluded that none of the attributes follows a normal distribution. Next, we plotted the frequency distribution of the number of reads per thread in each forum, as shown in Figure 1. We also applied the Kolmogorov-Smirnov (K-S) test ($H_L$=0, $H_B$=0, and $H_D$=0), from whose results we can determine that the number of reads per thread follows a gamma distribution [49]. When we plotted the frequency distribution of the number of replies per thread, we found an obvious long-tail phenomenon, which at first suggests a power-law distribution. However, the log-log distribution of the number of replies per thread was noticeably curved, as shown in Figure 2, indicating a deviation from a pure power law: the number of replies per thread is better described by an exponential distribution.
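As an illustration of this distribution check, here is a minimal Python sketch (our assumption of the workflow, not the authors' code); the synthetic data, the fixed location parameter, and the 5% significance level are placeholders:

```python
import numpy as np
from scipy import stats

# Placeholder reads-per-thread data; real data would come from the crawled threads.
reads = np.random.default_rng(0).gamma(shape=0.8, scale=300.0, size=5_000)

shape, loc, scale = stats.gamma.fit(reads, floc=0)          # gamma fit by MLE
stat, p = stats.kstest(reads, "gamma", args=(shape, loc, scale))
H = int(p < 0.05)                                           # H = 0: gamma not rejected
print(f"K-S statistic = {stat:.4f}, p = {p:.4f}, H = {H}")

cv = reads.std(ddof=1) / reads.mean()                       # coefficient of variation
print(f"CV = {cv:.2f}, skewness = {stats.skew(reads):.2f}")
```

Note that estimating the gamma parameters from the same sample used for the K-S test biases the p-value; a stricter analysis would use a parametric bootstrap.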
User Activities
We explored key user activities on each forum. To understand user stickiness in a community, we analyzed online user activities; based on these behaviors, community managers can adopt different strategies and incentives to improve the user experience. Figure 3 shows the percentage of all posts (threads plus replies) created on each day of the week. The forums showed similar trends, especially between the LCF and DCF. Moreover, most online question-and-answer or vertical knowledge-sharing communities are more active on weekdays than on weekends, presumably reflecting users' work-life routines. By counting the posting frequency for each day of the week by month and drawing a box plot, it can also be seen that users in the LCF and DCF posted more frequently, and were more active, during the week than on the weekend (Multimedia Appendix 1). The same pattern was found in non-health online communities such as the DISboards [36,50] and the Tianya community [15,51]. Figure 4 shows the percentage of threads and replies created at each hour of the day. In each forum, the number of posted threads and replies increased significantly from 4 AM. In terms of thread posting times, the BCF had a peak around 8 AM, a decline in the middle of the day, and a second peak around 9 PM. Both the LCF and DCF showed 3 peak posting moments, at around 10 AM, 4 PM, and 9 PM, with their least active posting moment at around 6 PM. Furthermore, there was high similarity (ρ=0.927; P<.001) between the LCF and DCF in terms of the relative thread posting frequencies during each hour of the day. The numbers of threads posted in the 3 disease forums around 12 PM and 6 PM were lower than at other times of the day, presumably because these moments overlap with common lunch and dinner hours. Most users became active from 7 PM, after dinner, and activity peaked again at 9 PM, after which the number of posted threads gradually declined toward bedtime. We also found that the number of posted threads in the LCF and DCF peaked around 2 or 3 hours before the typical Chinese mealtimes (12 PM and 6 PM), likely because patients with diabetes pay close attention to their diet to control blood glucose. Similarly, lung cancer affects patients' digestive function, which may cause decreased appetite; especially in the advanced stage of lung cancer, controlling and adjusting the diet is indispensable. Therefore, many users consulted about diet in advance, leading to a significantly increased number of posted threads before mealtimes. Because the dietary behaviors of breast cancer patients differ from those of patients with the other 2 diseases, the active posting periods of the BCF differed from those of the other 2 forums. In terms of reply posting times, we found that the relative posting frequencies of threads and replies during each hour of the day were highly positively correlated in each forum; Table 3 shows the Spearman correlation test results. Similarly, the fewest replies were posted at around 12 PM and 6 PM, again because of the overlap with common mealtimes.
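The hour-of-day comparison reduces to correlating two 24-bin frequency profiles. A minimal Python sketch (our illustration, not the authors' code; the posting-hour arrays are placeholders) is:

```python
import numpy as np
from scipy.stats import spearmanr

def hourly_profile(post_hours):
    """Relative posting frequency for each hour of the day (0-23)."""
    counts = np.bincount(np.asarray(post_hours) % 24, minlength=24)
    return counts / counts.sum()

rng = np.random.default_rng(1)
lcf_hours = rng.integers(0, 24, 2_000)   # placeholder posting hours for the LCF
dcf_hours = rng.integers(0, 24, 2_000)   # placeholder posting hours for the DCF

rho, p = spearmanr(hourly_profile(lcf_hours), hourly_profile(dcf_hours))
print(f"Spearman rho = {rho:.3f}, P = {p:.3g}")
```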
Social Network Structure
The social network structure graph visually presents the node relationship matrix of the network [41]. Table 4 shows the characteristics of the aggregated social network based on users' reply postings on each forum. In the structure graph, each node represents a user in the community, and edges are directed links formed by replies between users. The average clustering coefficient of all 3 forums was lower than those of Facebook (0.519), Flickr (0.313), and LiveJournal (0.330) [52]; a higher global clustering coefficient indicates a closer connection between users, where friends tend to find each other through their mutual friends [36]. In a network, a user's degree is the sum of the user's out-degree and in-degree. In our data set, all users had at least one thread due to the crawling strategy used at the time of data acquisition. In the LCF, 39.7% (8968/22,610) of users had a user degree of 1; in other words, more than 39% of users had only a single post. In the BCF, 51.4% (15,886/30,901) of users had a user degree of 1, and in the DCF, 24.7% (6604/26,751) of users had a user degree of 1. Most users had a relatively low degree, while only a few users had high degrees. Figure 5 shows the user degree distribution in each forum, from which we can visually observe that the degree distribution of each forum follows a power-law distribution. To quantitatively verify this finding, we treated the user degree as the independent variable and the number of users as the dependent variable to fit a power-law curve. The resulting fitted curves were $y = 3.571x^{-1.330}$ ($R^2 = 0.9976$) for the LCF, $y = 3.253x^{-1.056}$ ($R^2 = 0.9946$) for the BCF, and $y = 3.873x^{-1.445}$ ($R^2 = 0.9683$) for the DCF. The fit of the power-law curves for each forum was nearly perfect, indicating that the distribution of user degrees on each forum follows a power-law distribution with $r > 1$, which further shows that each underlying social network is a typical scale-free network. Many studies have reached similar conclusions: other OHCs [20,29,53], as well as social networks such as Facebook [52] and Weibo [54], are also typical scale-free networks. We additionally calculated the average clustering coefficient and the average path length of the equivalent ER random networks for each forum: we randomly generated 50 sets of equivalent ER random networks, calculated σ according to equation (4), and used the mean of the 50 values of σ as the final judgment coefficient. The evolution of each community network over time is shown in Multimedia Appendix 2. The results indicated that the numbers of nodes and edges have increased yearly since the creation of the LCF; with the development of the community, the number of users gradually increased, accompanied by more active user behaviors. Since the establishment of the BCF in 2015, its number of nodes has increased yearly, indicating that new users constantly join the forum. Meanwhile, the number of edges in the network increased in the beginning, reached a peak in 2017, and declined afterwards; this trend shows that the forum reached its most active period in 2017. Since the creation of the DCF back in 2005, the numbers of nodes and edges of the forum had been steadily increasing until their peak in 2015, after which the activity of the community continuously declined, in particular between 2018 and 2019. The "sleeping rate" or "loss rate" of users in the forum was noticeable.
More concretely, the number of nodes active in 2019 was 21% (943/4510) of that in 2015, while the number of edges active in 2019 was only 5% (2731/55,741) of that in 2015, and both statistics indicate an apparent recession phase of the forum.
Analyzing UGC Using WKNs
Due to the COVID-19 outbreak in 2020 and its likely impact on user behaviors on OHCs, we divided our observation window into 2 periods (one before January 1, 2020, and one afterwards). By comparing user behaviors between these 2 periods, we analyzed whether the UGC changed notably due to the disease outbreak. We extracted the top 200 key phrases from the UGC of each of the 3 forums. In preprocessing, we first filtered out keywords carrying no factual information, following peer literature, and merged synonymous keywords [25,37]. For each forum, we then constructed its corresponding WKN according to the construction method discussed above. Figures 6-8 show the resulting WKNs for each forum (see Multimedia Appendix 3 for detailed pictures). A larger node in the WKN corresponds to a greater keyword weight, implying more attention received by the keyword. Applying this criterion, we detected significant keywords in each forum, for example, "treatment" and "chemotherapy" in the LCF, according to Figure 6A. A darker connection link between 2 keywords corresponds to a higher co-occurrence frequency between them. For example, the keyword "target" most frequently co-occurs with the keyword "treatment" in the LCF; thus, the link between these nodes has the darkest color in the forum's WKN, as shown in Figure 6A. Dense connections around a keyword indicate that the keyword co-appears with many other keywords in a sentence, for example, "treatment" and "patients" in the LCF, according to Figure 6A. Based on the classification of OHC information in the PubMed literature [55], we used 8 major thematic categories: "etiology and pathological knowledge," "diagnosis and examination," "treatment," "disease management," "complications," "social life," "disease prevention," and "education and research." The keywords of the 3 disease forums were assigned to these 8 categories, and the topic distribution was classified at the macro level, so as to identify the hot topics discussed in the content more clearly and quickly. There were 400 keywords in the 200 key phrases. In the LCF, the topic "disease treatment" (145/400, 36.3%) included keywords such as "treatment" and "chemotherapy;" the topic "examination and diagnosis" (118/400, 29.5%) included keywords such as "examination," "confirmed diagnosis," and "detection;" and the topic "social life" (38/400, 9.5%) included keywords such as "sick friends" and "life." In the BCF, the topic "disease treatment" (128/400, 32.0%) included keywords such as "treatment" and "chemotherapy;" the topic "examination and diagnosis" (119/400, 29.8%) included keywords such as "examination" and "confirmed diagnosis;" and the topic "social life" (58/400, 14.5%) included keywords such as "sisters," "foods," and "sport." In the DCF, the topic "disease treatment" (155/400, 38.8%) included keywords such as "control," "treatment," and "injection;" the topic "examination and diagnosis" (115/400, 28.8%) included keywords such as "examination," "confirmed diagnosis," and "hyperglycemia;" and the topic "social life" (31/400, 7.8%) included keywords such as "foods," "sport," and "sick friends." It can be concluded that the 3 forums shared the "disease treatment," "examination and diagnosis," and "social life" themes.
However, we noticed that topics related to disease prevention were rarely discussed in all 3 forums. Table 5 shows the top 10 keywords appearing in each forum, which primarily concern disease treatment and diagnosis. Given that the target users of Mijian are patients who have already been diagnosed, users of its forums tend to discuss disease treatment and examination more frequently. More specifically, in both forums on Mijian (LCF and BCF), users generally paid more attention to topics on disease reexamination, metastasis, recurrence, anticancer drugs, and drug side effects. In both the BCF and DCF, users paid more attention to topics on healthy diet, exercise, and disease management. According to Figure 6A, hot topics on the LCF before January 1, 2020, mainly concentrated on lung cancer treatment, examination and diagnosis, and social life. The most salient topic on the LCF was disease treatment, because this topic had the greatest weight; this category mainly covered keywords such as "lung cancer treatment," "chemotherapy," "surgical treatment," "drug treatment," and "treatment effect." The topics most related to treatment were examination and diagnosis, including keywords such as "lung pain" and "cough symptoms," indicating that users discussed examination and diagnostic content alongside the treatment of lung cancer. Ego networks consist of a focal node known as the ego and the nodes to which the ego is directly connected, called alters, with edges showing links between the ego and alters or between alters. Each alter in an ego network can have its own ego network, and all ego networks combine to form the social network. The red nodes in Figure 6A form 2 ego networks for the keywords "father" and "mother." We found co-occurrence relationships among the keywords "mother," "father," "confirmed diagnosis," "reexamination," and "surgery," which appeared in the same threads, implying that many users were probably children consulting and communicating online health information on behalf of their parents. In the BCF, the hot topics before January 1, 2020, mainly concerned breast cancer treatment, social life, examination and diagnosis, and disease management, as shown in Figure 7A; users mainly focused on breast cancer treatment and chemotherapy, such as endocrine therapy and treatment effects. The green nodes in Figure 7A form an ego network for the keyword "children." We found that keywords such as "result" and "health" were associated with "children:" due to the particularities of breast cancer, patients considered special factors such as the health status of the next generation. Different node colors were used to distinguish the ego networks of particular keywords more clearly; because the ego networks of other nodes did not show distinctive patterns, only the ego networks with characteristic results are discussed in this paper. In the DCF (Figure 8), the UGC mainly concentrated on topics such as diabetes control and management, disease treatment, examination and diagnosis, and social life, including keywords such as "blood glucose control" and "diet control." Meanwhile, the examination and diagnosis of diabetes were mostly related to disease management. In Figures 6B and 7B, yellow nodes form the ego network for the keyword "COVID-19," relating to keywords such as "infection," "remission," "confirmed diagnosis," and "influences."
Cancer patients are among the populations susceptible to COVID-19: they are more vulnerable to COVID-19 complications [56] and prone to severe events on exposure, such as admission to an intensive care unit or death. Thus, they should pay particular attention to self-protection and social distancing. Since 2020, in both the LCF and BCF, users have tended to mention COVID-19-related matters while discussing their medical issues, while the relative mention frequencies of other key topics did not change noticeably. Given that only a few threads (130) were posted on the DCF after 2020, no clear thematic change could be observed, and no qualitative conclusions could be drawn regarding the response of its participants to COVID-19; thus, no before-and-after comparison was made for the DCF.
Principal Findings
This study carried out an in-depth analysis of the UGC and related online user behaviors of 3 large-scale OHCs in China. We utilized a variety of social network analysis methods and constructed a knowledge-sharing network for each OHC to study the evolution of the corresponding online community, discover characteristics of user behaviors, and uncover salient topics and their relations shared in each virtual community.
Existing research on OHCs in China has examined only small-scale single-disease forums; as shown in Table 1, these data sets comprised fewer than 2000 threads and fewer than 10,000 replies [28,30,37,38], a scale significantly smaller than that of analyses performed in Western countries [20,24,26], which limits the reliability and comprehensiveness of their findings. To fill this gap, we conducted thorough and extensive research on 3 representative disease forums selected from the 2 most popular Chinese OHCs. Over 80,000 users, 190,000 threads, and more than 2.8 million replies were crawled to reveal the common traits and unique characteristics of user behaviors and UGC in these forums, which better supports our findings and represents the overall characteristics of OHCs in China. The results are discussed in detail below.
First, we found that the data of these 3 disease forums were polarized, and the underlying data distributions were clearly nonuniform. In these disease forums, the number of reads per thread followed a gamma distribution ($H_L$=0, $H_B$=0, and $H_D$=0), and the number of replies per thread followed an exponential distribution. Second, users were more active on weekdays than on weekends. The thread posting frequencies and reply posting frequencies in the 3 forums were highly positively correlated with each other during each hour of the day. In particular, the LCF and DCF exhibited high temporal similarity (ρ=0.927; P<.001) in thread posting frequencies during each hour of the day. The numbers of threads and replies increased significantly from 4 AM, and the number of posted threads was relatively small in each forum around 12 PM and 6 PM. Because both lung cancer patients and diabetes patients need to pay attention to their diets, the number of posted threads in the LCF and DCF peaked around 2 or 3 hours before the typical Chinese mealtimes (12 PM and 6 PM).
Besides, the study showed that all 3 forums had the small-world effect ($\bar{\sigma}_L$=517.15, $SD_L$=13.31; $\bar{\sigma}_B$=275.23, $SD_B$=13.02; $\bar{\sigma}_D$=525.18, $SD_D$=14.38) and scale-free characteristics, with user degrees following power-law distributions ($R^2_L$=0.997, $R^2_B$=0.994, and $R^2_D$=0.968), while their global clustering coefficients were lower than those of international peer OHCs. According to the dynamic trends of the community networks, the LCF was still in its developing stage, the BCF needed to stimulate the activity of "zombie users," and the DCF needed to attract more new users and improve its user retention rate.
Finally, we found that several hot topics were commonly shared among the abovementioned 3 disease forums, such as disease treatment, disease examination, diagnosis, and social life. The most relevant topics for treatment were examination and diagnosis, and many children consulted related information for their parents in the LCF. In the BCF, users paid more attention to the next generation's health, while in the DCF, users paid more attention to the detection of blood glucose and diet control. Furthermore, we noticed that in both the LCF and BCF, users tended to mention COVID-19-related matters while discussing their medical issues after the outbreak of the disease in 2020.
Limitations
There are a few limitations to this paper. On one hand, although 2 influential OHCs in China (Mijian and Sweet Home) were selected for analysis, the results cannot be extended to all Chinese OHCs. On the other hand, this paper focused only on the characteristics of the overall social network structure and did not distinguish strong and weak connections between users or user roles. In addition, this study analyzed only the topical content of threads, without considering replies or the topic type of each thread. Therefore, subsequent research should try to add weights to the connection edges between users to study user influence in social networks, or study thematic changes across different periods.
Conclusions
Our findings shed light on the basic characteristics of social networks, user behaviors, and UGC in Chinese OHCs. UGC in OHCs and related online user behaviors can be leveraged as an important source of information to gain insights into individual and population health conditions, which can help users understand hot topics in different forums and gain knowledge of health management. Although OHCs are still developing in China, it is essential to take measures to improve user retention and activity, increase user stickiness, analyze user behavior, and mine forum content themes, so that latent content can be better mined to provide users with useful information and knowledge. In conclusion, our research not only contributes to the understanding of the different characteristics of OHCs, but also helps to discover the salient topics and latent relations among these topics in each forum. Effective, timely, and consistent mining and utilization of this content can provide valuable evidence for health providers and policymakers.
"Medicine",
"Computer Science",
"Sociology"
] |
Extended focal depth Fourier domain optical coherence microscopy with a Bessel-beam – LP 02 mode – from a higher order mode fiber
We present a robust fiber-based setup for Bessel-like beam extended depth-of-focus Fourier-domain optical coherence microscopy, where the Bessel-like beam is generated in a higher order mode fiber module. In this module a stable guided LP 02 core mode is selectively excited by a long period grating written in the higher order mode fiber. Imaging performance of this system in terms of lateral resolution and depth of focus was analyzed using samples of suspended microbeads and compared to the case where illumination is provided by the fundamental LP 01 mode of a single mode fiber. Illumination with the LP 02 mode allowed for a lateral resolution down to 2.5 µm as compared to 4.5 µm achieved with the LP 01 mode of the single mode fiber. A three-fold enhancement of the depth of focus compared to a Gaussian beam with equally tight focus is achieved with the LP 02 mode. Analysis of the theoretical lateral point spread functions for the case of LP 01 and LP 02 illumination agrees well with the experimental data. As the design space of waveguides and long-period gratings allows for further optimization of the beam parameters of the generated Bessel-like beams in an all-fiber module, this approach offers a robust and yet flexible alternative to free-space optics approaches or the use of conical fiber tips.
Introduction
Optical coherence tomography (OCT) is a three-dimensional imaging modality which has proven to be a powerful tool for biological imaging and healthcare diagnostics [1][2][3]. This imaging modality is based on interferometric analysis of backscattered light generated within the medium being imaged [4]. In Fourier domain OCT (FD-OCT) axial resolution is provided via broad-band illumination and a spectral analysis, thus enabling high-speed imaging with high sensitivity [5][6][7][8]. This approach captures a full depth profile (A-scan) within a single detector dwell time without axial scanning, with the imaging speed being essentially limited by the frame rate of the array detector used [9]. In FD-OCT it is important to consider the depth of focus (DOF) of the imaging system, which determines the range at which the lateral extension of the beam remains within predefined limits. For Gaussian beam illumination the DOF is defined as twice the Rayleigh range. The DOF is inversely proportional to the square of the (effective) numerical aperture (NA) of the imaging lens. The beam waist, which determines the lateral resolution of the excitation point spread function (PSF), is also determined by the focusing conditions and is linearly proportional to 1/NA. Increasing the lateral resolution at the focus by use of high NA optics thus leads to a significant shrinking of the DOF. The total PSF of the OCT system and its available signal levels outside the focal plane are further influenced by the detection PSF for the light that is collected after back-scattering.
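To make this NA trade-off quantitative: for an ideal Gaussian beam, the focused waist and DOF obey the standard textbook relations below (generic Gaussian-beam optics, not equations quoted from this paper):

```latex
% Focused waist and depth of focus of a Gaussian beam (standard relations):
w_0 \approx \frac{\lambda}{\pi\,\mathrm{NA}}, \qquad
\mathrm{DOF} = 2 z_R = \frac{2\pi n w_0^2}{\lambda}
             \approx \frac{2 n \lambda}{\pi\,\mathrm{NA}^2}
```

Halving the focal spot size by doubling the NA thus shrinks the DOF by a factor of 4, which is the trade-off motivating the extended-focus approaches discussed below.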
High lateral resolution implementations of OCT are referred to as optical coherence microscopy (OCM). OCM uses high NA objectives to obtain a higher lateral resolution as compared to standard OCT. The increase in lateral resolution reduces the DOF and therefore limits the multiplexing advantages of FD-OCT. To maintain a high lateral resolution over a larger depth range additional axial scanning can be performed at the cost of compromised imaging speeds [10,11].
The applicability of OCM as a non-invasive technique for 3-dimensional (3D) in vivo structural and functional imaging with micrometer scale resolution was demonstrated in recent years [12][13][14]. Extension of FD-OCT based on illumination and detection of the fundamental spatial laser mode of the light source (so called Gaussian beam illumination) to applications in FD-OCM is hindered by the fact that the DOF in OCM is further limited by confocal gating implied by the spectrometer entrance aperture (or coupling efficiency of the backscattered light into an optical single mode fiber) [15]. For standard OCM implementations using Gaussian beam illumination the DOF is very short (typically a few µm) as compared to OCT implementations where low NAs are used at the expense of lower lateral resolution.
A fundamentally different, but also very attractive, approach to extend the DOF beyond the limit given by Gaussian optics that was proposed for FD-OCT is based on wavefront engineering [9,[27][28][29] to generate so-called diffraction-less (self-healing) beams (most prominently Bessel beams), to be used instead of the traditional Gaussian beam illumination. Because of this feature Bessel beams are also considered to be advantageous for imaging within scattering media. A combination of computational methods and Bessel-like illumination has also been applied for simultaneous optimization of lateral resolution, signal-to-noise-ratio and DOF [30].
Traditionally, free-space axicon lenses have been the most popular method for the generation of Bessel-beams for FD-OCT [9,[31][32][33][34]. Optical fiber-based methods for Bessel-like beam generation are desirable for imaging applications because they potentially allow for building more robust and compact setups, enable remote delivery, and because they are inherently more compatible with endoscopic applications.
Bessel-like beams have been generated in a fiber by focusing a ring mode with a lensed fiber tip [35], or by fabricating a micro-axicon directly onto a fiber core [36]. These methods improve the robustness and reliability of Bessel-beam generation but offer limited design flexibility [37]. Alternatively, illuminating a large-core multimode fiber on-axis can also provide a Bessel-like output, albeit with a strong degree of axial variation in the near field and inherently strong wavelength-dependent performance [37,38].
In fiber optics, due to rotation symmetry, the degeneracy of the HE and TE eigenmodes makes it possible to transform the mode solutions into Bessel-like LP nm modes. LP 0m cladding modes selectively excited by means of a long period grating (LPG) have been shown to behave like diffraction-resistant self-healing Bessel beams in free space [37]. LPGs achieve high mode conversion efficiency and accurate control of the number of rings in the mode. To achieve coupling to cladding modes, LPGs were written in a H 2 -loaded high-NA single mode fiber [37]; due to the inherent sensitivity of LPGs to fiber bends or temperature fluctuations, a practical implementation would be written in a double clad fiber that can reliably propagate LP 0m modes [39]. In this type of fiber, instead of light propagating in the fundamental, approximately Gaussian-shaped (LP 01 ) mode, the light can be forced to travel in a single sought-after higher order mode (HOM). Since the Bessel-like LP 0m mode is delivered directly from the fiber facet, accidental damage to the fiber tip can be repaired by simply cleaving off a short piece of fiber or polishing the fiber tip; fabrication of a new module, as would be the case with a micro-axicon fiber tip, is not required.
In this work we exploit the favorable properties of higher order mode fibers. We demonstrate a proof-of-principle realization of the feasibility and imaging capabilities of an FD-OCT imaging system where the Bessel-like beam is generated in a stable higher order mode fiber LPG module. The HOM fiber used here is designed to support four LP modes, including LP 01 and LP 02, at 1030 nm (fiber 2 of [40]). The HOM fiber is spliced to a fiber that is single moded at 1030 nm (OFS ClearLite 980-14) to launch the LP 01 mode of the HOM fiber. To obtain a pure LP 02 mode at the fiber output, an LPG is UV written into the HOM fiber with a period (Λ) matching the difference in propagation constant β between the two modes.
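The period that accomplishes this follows the standard LPG phase-matching condition between two co-propagating core modes (a textbook relation, not an equation quoted from this paper):

```latex
% LPG phase matching between the co-propagating LP01 and LP02 core modes:
\Lambda = \frac{2\pi}{\beta_{01} - \beta_{02}}
        = \frac{\lambda}{n_{\mathrm{eff},01} - n_{\mathrm{eff},02}},
\qquad \beta_{0m} = \frac{2\pi\, n_{\mathrm{eff},0m}}{\lambda}
```

Here $n_{\mathrm{eff},0m}$ denotes the effective index of the respective mode; the symbols are standard notation rather than values taken from this paper.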
The LPG then converts the LP 01 mode to the LP 02 mode. The HOM fiber is furthermore designed so that the difference in propagation constant versus wavelength is flat around 1030 nm, which results in very broadband conversion [40][41][42]. It is important to reiterate that the LP 02 mode propagating in the HOM fiber is not a cladding mode but a guided core mode, thus providing a very robust and stable approach to generating a Bessel-like beam at the output. The resolving power of this system is characterized by imaging an optical phantom target containing randomly distributed point scatterers (Ø 3 µm polystyrene beads) embedded in agarose gel, and a standard resolution test target. We compare the performance of the system with the fiber-generated Bessel-like LP 02 beam to the performance of the same system where the HOM fiber is replaced by a single mode fiber providing a Gaussian-like beam.
Optical setup
The FD-OCM imaging system used in this study is shown in Fig. 1. The illumination source is an all-polarization maintaining femtosecond Ytterbium-doped fiber oscillator [43]. The free-space output of the oscillator is coupled into a single mode fiber via fiber collimator L1 (Thorlabs F240APC-1064). The single mode fiber is spliced to the single-mode pigtail of the HOM fiber (half)-module for imaging using the LP 02 output mode of the (∼2 m long) HOM fiber. The HOM fiber output is fixed on a stainless-steel v-groove mounted on a 3-axis translation stage, and the output beam is collimated using an aspheric lens L2 (Thorlabs C240TME-B, f = 8 mm). The insertion loss of the HOM module is 0.8 dB. It was designed for use as broadband intracavity dispersion compensation (of single mode fiber) in ultrafast Yb:fiber oscillators [44], and hence supports about 60 nm bandwidth around 1040 nm.
For our comparative measurements using a Gaussian-like beam, the oscillator output is also coupled to a single mode fiber using L1, and the output end of this fiber is placed in the same v-groove mount and collimated by L2. This approach allows us to quickly change from Bessel-like illumination to Gaussian-like illumination and vice-versa without the need to realign any system component other than the 3-axis translation stage.
The collimated output is relay-imaged with a 4f-telescope (L3 and L4, both Thorlabs AC254-100-B-ML, f = 100 mm) to the Michelson interferometer, which consists of a 50/50 beamsplitter (Thorlabs BS011). The sample arm consists of two galvanometric scan mirrors and a scan lens (L5, Thorlabs LSM02-BB, f ∼ 18 mm). The reference arm contains a glass block for dispersion compensation (Thorlabs LSM02DC) and a plane silver mirror for reflection. L4 is placed one focal length away from the center point between the two galvanometric mirrors, which coincides with the back-pupil of L5. The output of the Michelson interferometer is focused by lens L6 onto a pinhole in front of the detection spectrometer. The effective spectral resolution of our spectrometer is better than 0.1 nm, resulting in a depth sensing range of more than 2 mm. The axial resolution, resulting from the spectral bandwidth covered by the spectrometer, is about 20 µm. The use of different lenses at the position of lens L6 allows changing the point spread function (PSF) (i.e. resolution and signal attenuation away from the geometric focus of lens L5) of the detection pathway, without affecting the spectrometer resolution. Generally, shorter focal lengths of lens L6 enlarge the detection PSF. The detection PSF can also be changed by using different sized pinholes; however, changing the pinhole also influences the resolution of the spectrometer.
The diffraction efficiency of our gold-coated grating favors light polarized perpendicular to the grating grooves. To ensure optimal detection efficiency, a fiber polarization controller before the HOM fiber and a half-wave plate between L2 and L3 are used to fix the polarization state of the light in the setup.
Note that the effective mode-field area of the LP 02 mode in the HOM fiber module is comparable to that of the LP 01 mode in the standard single mode fiber used (PM980). As a result, the diameter of the outer ring lobe of the LP 02 mode is comparable in size to the LP 01 mode at the fiber output as well as in the far-field (collimated) beam profile, and the same collimation and focusing optics can be used for optimal performance of both the Bessel-like and Gaussian-like illumination. The collimation lens is chosen to utilize most of the available NA of the scan lens L5 (and the clear aperture of the galvanometric mirrors), without clipping the beam.
The insets in Fig. 1 show the collimated far-field beam profiles (intensities, not electric fields) of both the SMF and HOM fiber output. Note that the far-field beam profile of the HOM fiber (Bessel-like) output is very different from the annular far-field profile (i.e. without a central lobe) of a Bessel-like beam from an axicon. The ring-shaped far-field profile from an axicon can be used in a dark-field illumination setup when combined with a low-pass filter in the detection pathway [9,31]. Dark field illumination can be advantageous when bright direct reflections (e.g. from a cover glass at the top of the sample) cause artifacts that limit the ability to detect weaker reflections from diffuse scatterers. In contrast, the LP 02 mode has a bright central spot, and as a result, (bright) direct reflections are not suppressed in the detection path, thus representing a bright field detection scheme. This facilitates the direct performance comparison to Gaussian-like illumination.
Simulations
We have simulated the beam propagation of the Gaussian-like LP 01 mode, the Bessel-like LP 02 mode, and an ideal high-NA Gaussian beam throughout the optical imaging system of Fig. 1. These create the excitation PSFs shown in Figs. 2(a), 2(b) and 2(c), respectively. We simulated the high-NA Gaussian by over-filling the back aperture of lens L5, such that it reaches a similar lateral width as the LP 02 mode in focus; however, it quickly diverges outside of the focus. We further simulated the detection PSF for light that is back-scattered by the sample. The detection PSF results from the combination of the finite numerical aperture of the scan lens L5 and the rejection of out-of-focus light by the pinhole after lens L6. The effective detection mode in Fourier space can be modeled as a Gaussian of finite width, as if the sample were illuminated by light passing through the pinhole from the other direction [15]. It can be shown that in object space (i.e. real space) the total PSF is simply the product of the excitation and the detection PSF [15]. We assumed a low-NA Gaussian for the detection mode, determined by the size of the pinhole. The corresponding total PSFs are shown in Figs. 2(d)-(f). The detection PSF is not explicitly shown in Fig. 2, but its 1/e width is shown as the magenta curves in Figs. 2(g) and 2(h).
The evolution of the waist of the total PSFs for the high-NA Gaussian and the LP 01 mode, and of the corresponding width of the central lobe of the LP 02 mode, as a function of distance from the focus is representative of the transverse resolution of our FD-OCT setup. Figures 2(g) and 2(h) show the waists of the excitation beams and total PSFs, respectively, for the high-NA Gaussian (green), the LP 01 mode (blue), and the LP 02 mode (red) as a function of distance from the focus. Furthermore, in Fig. 2(i) we show the signal damping outside the focus in units of dB, which can be relevant if the scattered light does not provide a high SNR.
For the calculations we started with the near-field profiles at the fiber facet, which can be calculated as solutions of the Maxwell equations for each fiber, respectively. The high-NA Gaussian is modeled as a Gaussian distribution with a small waist at the fiber facet. We then applied a 2D Fourier transform to obtain the far-field (collimated) beam profiles. Taking into account the rotational symmetry of the fiber modes, the 2D Fourier transform can be expressed as a zeroth order Hankel transform, yielding the far-field beam profile as a function of angle φ from the center of the beam [45]:

$$E_{FF}(\varphi) = \int_0^\infty E_{NF}(r)\, J_0(k r \sin\varphi)\, r\, \mathrm{d}r \qquad (1)$$

with $r$ being the distance from the optical axis in the near-field profile $E_{NF}$, $k = 2\pi/\lambda$ the wavenumber, and the wavelength λ being 1030 nm. Appropriate (linear) scaling of the angular distribution of the far-field profile $E_{FF}$ allows obtaining the correct beam size on the scan lens, by matching the simulated far-field profile to the experimentally observed profile.
The 3D focus distributions for the given far-field beam profiles can be calculated by evaluating the following scalar focal integral [46]:

$$E(r, z) = \int_0^\alpha E_{FF}(\theta)\, J_0(k r \sin\theta)\, e^{i k z \cos\theta}\, \sqrt{\cos\theta}\, \sin\theta\, \mathrm{d}\theta \qquad (2)$$

where $r$ and $z$ are the radial and axial coordinates, respectively, and α reflects the finite numerical aperture of the focusing element. The resulting excitation PSFs are depicted in Figs. 2(a)-(c). Under some conditions (under- or over-filling of the pupil of the scan lens L5) it can be observed that the propagation of the LP 01 and LP 02 modes does not display a monotonically decreasing beam size (or central spot size) towards the focus, especially around the focus, whereas an under-filled Gaussian-shaped beam displays a monotonically decreasing beam size when propagating towards the focus. Similar effects have been observed in multiphoton-microscopy experiments with the output from a HOM fiber [47]. A simple explanation for this behavior is that both the LP 01 and LP 02 modes have positive and negative electric field values in their far-field profiles. In addition, overfilling of the back pupil creates a hard cut-off for the beam profiles, which typically leads to small ripples in the focal distributions both in r and z (cf., for instance, the well-known Airy disk in 2D and its corresponding 3D focal distribution [46]).
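For readers who want to reproduce the qualitative behavior, the following Python sketch (our illustration under the stated equations, not the authors' simulation code; the near-field profile, aperture angle, and grid sizes are placeholder assumptions) numerically evaluates Eqs. (1) and (2):

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import trapezoid

lam = 1.03e-6                          # wavelength: 1030 nm
k = 2 * np.pi / lam
alpha = 0.15                           # assumed half-aperture angle of L5 (rad)

r_nf = np.linspace(0.0, 30e-6, 600)    # radial grid at the fiber facet
E_nf = np.exp(-(r_nf / 3.3e-6) ** 2)   # placeholder Gaussian-like near field

def far_field(phi):
    # Eq. (1): zeroth-order Hankel transform of the near-field profile.
    return trapezoid(E_nf * j0(k * r_nf * np.sin(phi)) * r_nf, r_nf)

def focus_field(r, z, n_theta=400):
    # Eq. (2): scalar focal integral over the aperture half-angle alpha.
    # (The linear rescaling of the angular profile onto the scan lens is omitted.)
    theta = np.linspace(0.0, alpha, n_theta)
    E_ff = np.array([far_field(t) for t in theta])
    integrand = (E_ff * j0(k * r * np.sin(theta)) * np.exp(1j * k * z * np.cos(theta))
                 * np.sqrt(np.cos(theta)) * np.sin(theta))
    return trapezoid(integrand, theta)

print(abs(focus_field(0.0, 0.0)) ** 2)  # on-axis focal intensity (arbitrary units)
```

Evaluating |focus_field(r, z)|² on an (r, z) grid yields excitation PSF maps analogous to Figs. 2(a)-(c); replacing the placeholder near field with the LP 01 or LP 02 mode profile reproduces the mode-dependent behavior discussed above.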
For Gaussian beams with focused 1/e² beam waist w 0 and wavelength λ, a much simpler expression can also be used to calculate the 1/e² width w(z) as a function of distance z from the focus in a medium with refractive index n. For the case of our LP 02 beam, the addition of a scaling parameter C = 4 (effectively expanding the Rayleigh length of an equally sized Gaussian beam 4-fold) into that well-known expression allows matching both the size of the focus and the divergence of the central spot of the beam:

$$w(z) = w_0 \sqrt{1 + \left( \frac{\lambda z}{C \pi w_0^2 n} \right)^2} \qquad (4)$$

When comparing the results from this equation to the results of our simulations, the LP 01 beam (while using C = 1) appears to have a slightly larger spot size in the focus than a Gaussian beam with equal far-field divergence. Estimating the confocal range using the beam evolution (of the excitation beams, dashed curves in Fig. 2(g), and a virtual detection beam, magenta curve in Fig. 2(g)) calculated with the above approximation yields a confocal range of 324 µm for the LP 02 beam and a confocal range of 96 µm for the extended Gaussian beam, both with a minimum 1/e radius of 2.8 µm. The confocal range for the PSF obtained with the LP 01 beam approximated with Eq. (4) is estimated to be 440 µm, with a minimum 1/e width of 5.3 µm. Note that the 1/e width of the PSF obtained with the LP 02 beam is smaller than this size over a range of more than 500 µm.
Results and discussion
To test and compare the resolution performance of the HOM fiber-based FD-OCT system (LP 02 ) to the performance of the FD-OCT system seeded with a standard single mode fiber output (LP 01 ), we prepared a phantom sample by dispersing 3 µm Ø polystyrene beads in agarose gel (1% weight/volume low-melt in dH 2 O). We acquired A-scans on a 180 × 180 pixel grid, with a pixel pitch of 0.78 µm, to obtain an OCT image of the phantom sample. The geometric focus of the scan lens was located about 0.5 mm below the surface of the phantom sample. Figures 3(a) and 3(b) show 3D (isosurface) renderings of the reconstructed volume, for the Gaussian-like and Bessel-like illumination, respectively. Before drawing the isosurfaces, each plane in both volumes was normalized to its respective maximum signal. The red (iso)surfaces are chosen to appear approximately at the 1/e value of the beads (as the detected signals vary between beads and measurements). Figure 3(c) shows the measured width of 81 and 61 beads detected in the volumes reconstructed from the LP 02 and LP 01 FD-OCT measurements, respectively. The measured width stems from a Gaussian fit to the total distribution (LP 01 ) or the central peak (LP 02 ), respectively. More beads could be identified in the reconstructed volumes than the respective 81 and 61 beads whose size we determined; however, the size of these additional beads could not be determined with sufficient accuracy because their signature interfered with the signal from other nearby beads. As the spot size in the LP 01 measurements is larger, this effect reduced the number of beads whose size could be determined in those measurements.
The apparent width of the beads as a function of distance z from the focus is very different in the two measurements. In the measurement with the LP 02 mode, beads appear with a profile reflective of the LP 02 mode profile, i.e., as a bright central spot enclosed by a ring. The width of the central spot slowly increases with increasing distance from the focal plane. The central spots of the beads observed in the focal plane have about the same size as the central spot of the beam profile of the LP 02 mode in the focus. In the measurements with the LP 01 mode, beads appear as approximately Gaussian-shaped spots, which increase in size significantly with increasing distance from the focal plane. For the LP 02 mode the size of the beads (∼2.5 µm) measured in focus agrees well with the simulated size of the central peak of the LP 02 mode. For the LP 01 mode the in-focus size of the beads (∼4.5 µm) is significantly smaller than the size of the LP 01 mode. Away from the focus the extracted bead size as a function of distance from the focal plane does not coincide with the calculated size of the excitation beam. The reason for this is that the resolution of our OCT system is not only determined by the spot size of the imaging beam, but also by the confocal gating of the detection system. In addition to the above measurements of our 3 µm bead phantom sample, we have also taken OCT measurements of a USAF 1951 resolution test target (Thorlabs R3L1S4P). Images taken with the Gaussian-like beam illumination and the Bessel-like beam illumination are shown in Figs. 4(a) and 4(b), respectively. Figure 4(c) shows the measured steepness of the edge of the square between size groups 6 and 7 as a function of the shift of the surface away from the geometric focal plane of the scan lens. In this measurement, different lenses were used at the positions of lenses L2 and L6 (Thorlabs C280TME-B, f = 18.4 mm and Thorlabs AC254-100-B-ML, f = 100 mm, respectively), resulting in a smaller focal spot size of the excitation beams and an increased detection NA. Nonetheless, the analysis of the apparent sizes of the beads in our phantom sample and of the mechanical displacement of the USAF 1951 resolution test target yield similar results for the resolution as a function of distance from the geometric focus. While the latter measurement allowed us to reconstruct the resolution over a slightly larger depth range, the measurement with the 3 µm bead phantom sample shows that the LP 02 (Bessel-like) illumination not only provides an extended depth of focus, but also provides sufficient signal-to-noise ratio to detect and quantify structures across this extended depth range. The measurements with the USAF 1951 resolution test target confirm the expansion of the PSF beyond this extended range.
Comparing Bessel-like (LP02) with Gaussian-like (LP01) illumination, the Bessel-like illumination provides an improved lateral resolution (in our phantom sample measurement, ∼2.5 µm for the LP02 beam versus ∼4.5 µm for the LP01 beam) and a 3-fold extension of the depth of focus compared to a high-NA Gaussian-like beam with a similar focal spot size. Note that such a hypothetical high-NA Gaussian-like beam requires overfilling the back pupil of the imaging lens, so that a large portion of the excitation beam power is discarded.
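For reference, the scaling underlying this trade-off follows from standard Gaussian beam optics: a Gaussian beam with waist w_0 at wavelength λ_0 obeys

\[
w(z) \;=\; w_0\sqrt{1+\Big(\frac{z}{z_R}\Big)^2}\,,\qquad z_R \;=\; \frac{\pi w_0^2}{\lambda_0}\,,
\]

so its depth of focus (≈ 2z_R) shrinks quadratically with the focal spot size. Maintaining a small spot over an extended depth range therefore requires either a non-Gaussian (Bessel-like) beam, as used here, or discarding a large fraction of the power by overfilling the pupil.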
Compared to Bessel beams generated by other methods, such as with an axicon lens, the LP02 mode has only one ring outside the main peak. Because the Bessel-like LP02 mode is generated in the higher-order mode fiber module, the Bessel-like beam does not depend on the alignment of any components in the optical setup. Whereas there is virtually no bandwidth limitation to the generation of Bessel-like beams using axicon lenses, LPG-based mode converters rely on phase matching, which can limit the bandwidth of the generated Bessel-like beam. With an appropriate LPG design, efficient conversion from the LP01 mode to the LP02 mode can nevertheless be achieved over large bandwidths. The LPG used in our setup provided about 60 nm of bandwidth, and larger bandwidths have been demonstrated [42]. State-of-the-art OCT systems use fiber-based interferometers with detection through the same fiber used to deliver the excitation light, rather than the free-space interferometer used in our setup. In axicon lens-based OCM systems, coupling the backscattered light back into the delivery fiber is inefficient because it must travel back through the axicon lens, so a free-space detection geometry is favored there. In our system, because the Bessel-like beam is delivered directly from the fiber, this problem can be avoided. However, because light coupled to different modes in our HOM fiber experiences different dispersion, the relatively long length of HOM fiber after the LPG would cause additional artifacts in the detection, as the backscattered light can couple to each of the modes supported by the fiber. This would be avoided in a setup with a shorter HOM fiber. A mode filter (possibly preceded by a second mode converter, as backscattered light may couple more efficiently to the fundamental mode, to be converted into a back-propagating LP02 mode by the LPG) before combining the back-propagating light with the reference arm may be needed to ensure proper overlap of the sample and reference arms in a fiber-based interferometer.
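The bandwidth statement can be made concrete through the usual first-order phase-matching condition for a long-period grating (a standard relation, not specific to our device): conversion from LP01 to LP02 is resonant at the wavelength λ_res for which the grating period Λ equals the modal beat length,

\[
\Lambda \;=\; \frac{\lambda_{\rm res}}{\,n_{\rm eff}^{\rm LP01}(\lambda_{\rm res})\;-\;n_{\rm eff}^{\rm LP02}(\lambda_{\rm res})\,}\,.
\]

The conversion bandwidth is then set by how rapidly the effective-index difference disperses with wavelength; fiber and grating designs that flatten this difference broaden the bandwidth, consistent with the ∼60 nm achieved by our LPG and the larger values demonstrated in [42].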
In conclusion, we have demonstrated, to our knowledge for the first time, the feasibility of an extended-focus OCM imaging system in which the Bessel beam illumination is achieved through a HOM fiber module, using an LPG as mode converter to selectively excite the LP02 mode in a fiber specially designed to support it.
We compared the OCT imaging capabilities of this system, in terms of depth of focus and lateral resolution, with the performance of a standard (Gaussian-like beam) illumination setup with a similar beam waist. Our analysis shows that the Bessel-like beam provides a larger depth of focus and higher resolution than the standard system. Given the simplicity of generating a Bessel-like beam in a fiber-based module and the large design space available for long-period gratings and higher-order mode fibers, this approach could be used to generate Bessel-like beams with parameters that further improve depth of focus and lateral resolution. It also lends itself to simpler FD-OCT imaging systems compared with alternative free-space optics approaches (such as axicon lenses). The use of higher-order mode fibers also offers a flexible and more robust alternative to conical-tip fibers.

Disclosures. The authors declare no conflicts of interest.

Data availability. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
"Physics",
"Engineering"
] |
Bulk Viscosity at Extreme Limits: From Kinetic Theory to Strings
In this paper we study bulk viscosity in a thermal QCD model with a large number of colors at two extreme limits: very weak and very strong 't Hooft coupling. The weak coupling scenario has a clear description in terms of kinetic theory, and one may pass to the very strong coupling dynamics via an intermediate coupling regime. This intermediate regime, which uses lattice results, suffers from the usual technical challenges that render an explicit determination of the bulk viscosity somewhat difficult. On the other hand, the very strong 't Hooft coupling dynamics may be studied using string theory at both weak and strong string couplings, via gravity duals in type IIB and in M-theory, respectively. In type IIB we provide the precise fluctuation modes of the metric in the gravity dual responsible for bulk viscosity, compute the speed of sound in the medium, and analyze the ratio of the bulk to shear viscosities. In M-theory, where we uplift the type IIA mirror dual of the UV complete type IIB model, we study and compare both the bulk viscosity and the sound speed by analyzing the quasi-normal modes of the system at strong IIA string coupling. By deriving the spectral function, we show the consistency of our results, both for the actual values of the parameters involved and for the bound on the ratio of bulk to shear viscosities.
Introduction and summary
The wide-ranging and thorough experimental programs pioneered at the Relativistic Heavy Ion Collider (RHIC) and pursued at the Large Hadron Collider (LHC) offer a unique opportunity to study the properties of a most exotic state of matter: the quark-gluon plasma (QGP). Although there is common agreement that droplets of QGP are produced in heavy ion collisions, an unequivocal and quantitative determination of the properties of such a state is still the topic of much research. The time evolution of the plasma, its transport properties, and the parameters of the transition to the confined phase are some of the features currently being addressed, along with many others. The difficulty of extracting QGP properties owes much to the fact that the excited nuclear matter produced by colliding heavy ions at currently achievable energy scales is strongly coupled. Accordingly, the applicability of known fundamental methods and approaches to studying the system in this regime is very limited, and hence all findings obtained in this limit have to be examined critically. On the other hand, this situation also provides an opportunity to explore new technical facets of known tools and to explore new directions.
One of the methods that have proven useful for studying the properties of QGP in the experimentally accessible domain is viscous hydrodynamics, a low-frequency, long-wavelength effective theory. The application of the hydrodynamic framework to heavy ion collisions [1,2,3,4,5,6,7,8,9] and its use in the interpretation of a wide range of experimental observables [10,11,12,13] led to the conclusion that the experimentally produced QGP is a strongly coupled system. In particular, studies of the hadronic flow and the emergence of other collective phenomena in the hydrodynamic description of QGP were taken as an indication of its fluid-like nature. Moreover, the success of hydrodynamics seemed to necessitate a fast near-thermalization of the QGP. All these arguments led to the conclusion that the created quark-gluon plasma must be strongly coupled [14]. For reviews on hydrodynamic applications and formulations see [15,16,17,18,19].
Another powerful tool to study systems in the limit of strong 't Hooft coupling originated with the discovery of the AdS/CFT correspondence [20]. Even though it was hydrodynamic predictions and analyses that provided the empirical evidence that the shear viscosity to entropy density ratio is small [21], it was the AdS/CFT conjecture that established the analytical value η/s = 1/(4π), conjectured to be a lower bound [22].
Transport coefficients are valuable elements of the hydrodynamic description as they carry information about the microscopic properties of a medium. In the case of the shear viscosity of strongly interacting matter, numerous phenomenological studies, the AdS/CFT result, kinetic theory calculations in the high-temperature weakly coupled regime of QGP, η ∝ 1/(g_YM⁴ log(1/g_YM)) [24], and non-perturbative estimates [25] allow a schematic global understanding of the shear viscosity to entropy density ratio. It is known that the shear viscosity is large in the perturbative, high-temperature limit, smaller near the phase transition temperature [26,27], and large again in the confined, pion gas domain [28]. The physics of the bulk viscosity, however, is less satisfactorily understood. There are strong indications that the bulk viscosity follows a trend opposite to that of the shear viscosity. In the limit of high-temperature QCD, the bulk viscosity was found to have a very small value [29]: this is to be expected, as the coefficient of bulk viscosity can be written as a correlator of the trace anomaly (see Section 2.1), and QCD is known to be approximately conformal at high temperatures. Although it may seem that, in the very large coupling regime, a direct application of AdS/CFT techniques to the exploration of bulk viscosity is not relevant, since the conjecture relies on the N = 4 super Yang-Mills theory (which is perfectly conformal and in which the bulk viscosity vanishes identically), this is not quite true. Approaches based on holography have in fact proven useful by providing a lower bound on the ratio of bulk to shear viscosities: ζ/η ≥ 2(1/3 − c_s²) [30]. In the vicinity of the transition from QGP to hadronic degrees of freedom, the bulk viscosity should, in principle, be calculated from the equation of state extracted from the lattice data [33,34]. It is expected to be proportional to the trace anomaly (ε − 3P)/T⁴ and hence to be notably peaked. Various investigations, both formal and phenomenological [35,36,37,38,39,40], confirm this expectation. Recently, it was demonstrated that the presence of a coefficient of bulk viscosity is important in hydrodynamical simulations, as it has a significant impact on the elliptic flow coefficients [40,41,42] and other heavy-ion observables, strongly interacting and otherwise [43,44,45,46,47]. However, it is fair to say that the precise behavior of the bulk viscosity for systems in extreme conditions of temperature and density is not yet firmly established and therefore needs further study.
Understanding the behavior of the bulk viscosity, and knowing how it changes when the coupling strength varies, is important for several reasons. First, bulk viscosity is an inherent property of nonconformal systems, and finite-temperature systems governed by QCD are good examples of such environments. The behavior of the bulk viscosity is fixed by the parameters that break conformal symmetry. These include, at least in the perturbative region, the finite masses of the plasma constituents and the Callan-Symanzik β-function, which expresses the coupling constant as a function of an energy scale [29,31]. Equivalently, these parameters enter the definition of the speed of sound, and the bulk viscosity can be conveniently expressed as a function of 1/3 − c_s². From the phenomenological point of view, expressing the bulk viscosity via the speed of sound is practical, as this enables a direct connection with the lattice QCD equation of state. Second, bulk viscosity plays an essential role in the hydrodynamical description and modelling of hot and dense strongly-interacting matter. One could attempt to compute the coefficient within a theory which captures the microscopic interactions, and then insert it into the hydrodynamic equations. Alternatively, fluid dynamics may be viewed as an effective theory of the long-wavelength behaviour, with transport coefficients to be extracted empirically. Either way, viscous hydrodynamics serves as a powerful tool to investigate the strongly coupled nuclear medium produced in RHIC and LHC experiments. It provides information on the dynamics of the plasma, informs how the plasma evolves, and also helps to extract, or at least constrain, other plasma characteristics. In addition, bulk viscosity studies have the potential to further the development of new theoretical methods for studying the conformal anomaly of QCD. Because the system dynamics differs between coupling regimes, one may expect a different dependence of the bulk viscosity on the factor 1/3 − c_s². This is what was observed by comparing the bulk to shear viscosity ratios in the perturbative and very strong-coupling limits: ζ/η ∝ (1/3 − c_s²)² [29] and ζ/η ∝ (1/3 − c_s²) [30], respectively. Analyzing this difference is one of the main objectives of our studies.
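To make the contrast between these two regimes explicit, the following small Python sketch evaluates both parametric behaviours for a range of nonconformality values; the overall prefactor of the weak-coupling curve is illustrative only (the true coefficient follows from the kinetic theory calculation of [29]):

```python
# Compare the weak-coupling scaling zeta/eta ~ (1/3 - cs^2)^2 [29]
# with the holographic (Buchel) lower bound zeta/eta >= 2(1/3 - cs^2) [30].
import numpy as np

delta = np.linspace(0.01, 0.15, 8)   # nonconformality parameter 1/3 - cs^2
weak = 15.0 * delta ** 2             # quadratic scaling; prefactor assumed
buchel = 2.0 * delta                 # linear lower bound

for d, w, b in zip(delta, weak, buchel):
    print(f"1/3 - cs^2 = {d:5.3f}   weak ~ {w:7.4f}   Buchel bound = {b:6.4f}")
```

For small deviations from conformality the quadratic weak-coupling ratio lies below the linear bound, which is consistent: the bound was proposed for strongly coupled plasmas with gravity duals, not for the perturbative regime.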
We examine the bulk viscosity of systems governed by SU(M) theories with the interaction strength determined by the 't Hooft coupling λ = g_YM² M, where g_YM is the gauge coupling and M is the number of colors. The 't Hooft coupling may be thought of as an effective coupling of the QGP. We distinguish three regions of the 't Hooft coupling: the weak coupling region, the intermediate coupling region (near the phase crossover temperature), and the strong (infinite) coupling region. In each region a different microscopic approach is applicable. The extreme limits are discussed comprehensively, while the intermediate coupling part includes a brief summary and a discussion of the conceptual difficulties preventing one from determining the bulk viscosity in this domain.
As already mentioned, the weak-coupling studies of bulk viscosity for QCD were done within kinetic theory in [29]. In our work we adapt the kinetic theory result to the 't Hooft coupling. In this way we provide a form of the bulk viscosity which can be directly confronted with its strong-coupling counterpart discussed via string theory methods. In this approach the quark contribution is always suppressed by a factor 1/M and may be neglected in the leading order analysis. Kinetic theory [24,48,49] is an effective theory which is commonly and successfully used to compute transport coefficients. Its correspondence to the fundamental microscopic theory was shown directly for scalar theory in [31] and then also for QED [50,51]. In this manuscript we undertake the task of justifying the validity of kinetic theory for the SU(M) theory by providing the power counting of the microscopic processes contributing to the collision kernel of the Boltzmann equation. Since a derivation of the transport equation from the diagrammatic representation of any non-Abelian theory is highly non-trivial, we present a procedure for representing the collision kernel diagrammatically. We discuss how pinching poles and nearly pinching poles control the power counting of elastic and inelastic processes, respectively. The consequences of soft physics for the power counting are emphasized. We also show how the integral equations emerge by discussing all topological structures of the planar diagrams contributing to them. We believe that this examination provides solid arguments for the equivalence of the Boltzmann equation with the analysis based on the loop expansion.
The intermediate coupling region is considered mostly to summarize the status of studies of bulk viscosity based on microscopic analyses. The bulk viscosity in this regime can be obtained if one can extract the low frequency behavior of the corresponding spectral density [35,36,37,52,53,54,55]. Although these approaches provide some constraints, they do not yet allow definite conclusions on the behaviour of the bulk viscosity to be drawn. We briefly discuss the difficulties.
On the other hand, the strong 't Hooft coupling behavior of bulk viscosity is an interesting playground for studying string theory and gauge theory, because of the use of gauge/gravity duality. In fact, since the bulk viscosity should truly be studied in a theory with running couplings, the famed AdS/CFT duality is not very useful, as discussed above. Going beyond CFT requires finding the right gravity dual to answer any questions related to running couplings, and especially questions related to bulk viscosity. The gravity dual that we seek was first proposed in [56,57]; the full UV completion was given from the type IIB side in [58,59,60] and more recently from the type IIA side in [61].
At this stage one might ask how a gravitational background, which hitherto had no connection to gauge theory, could in principle enter the picture to help us solve a strongly coupled system like the one we concentrate on here. There are two ways to answer this question, but neither is completely satisfactory. The first is to relegate this to the magic of duality. However, this duality is special, because all dualities studied before AdS/CFT [20] had been either between two different gauge theories or between two different supergravity theories; there had never been a duality between a gauge theory and a gravitational theory.
The second is to view the gauge theory as being somehow contained inside a gravitational background. To elaborate this viewpoint, let us consider a four-dimensional Minkowski spacetime. This serves as an arena for gauge theory interactions, and for simplicity we decouple all gravitational interactions by tuning the Newton constant to zero. The gauge theory interactions can happen at various energy scales, and we can assume that a specific slice of the four-dimensional Minkowski spacetime is associated with a specific energy configuration. We can stack all the slices together such that the low energy slices are at the bottom and the high energy slices are placed on top of one another in increasing order. Clearly the topmost slice will be at infinite energy.
The above construction immediately provides a five-dimensional space, and if we assume the energy direction to be parametrized by a radial coordinate r, then at r = 0 we have IR physics and at r → ∞ we have UV physics. This is also, by construction, a five-dimensional gravitational background, and by this simple assumption we seem to have obtained a five-dimensional gravity theory that captures the dynamics of the four-dimensional gauge theory from IR to UV! Of course this is a very simple construct and does not answer all questions related to gauge/gravity duality, but it is instructive to see how two seemingly unrelated kinds of physics, one of gauge theory and the other of gravity, may be united in a framework like the above.
A few quick checks may easily be performed at this stage. If the gauge theory is a CFT, i.e., scale independent, then the slicing idea tells us that we need not worry too much about the physics at any scale r, and may instead study the dynamics of the corresponding gauge theory from the boundary at r → ∞. Of course this is what the AdS/CFT dictionary tells us; what is lacking in our simple construct is the justification that the gravitational background is indeed an AdS₅ space. Perhaps the idea of scale invariance, combined with decoupling and the supergravity EOMs, could uniquely fix that, but this has not been checked.
On the other hand, if we are dealing with a gauge theory that is not scale invariant, then every point on the slice matters. At every r we have the corresponding gauge theory dynamics at that scale. (This argument assumes that if we keep r fixed and move along the remaining four dimensions, nothing changes. One can, however, envision a more generic scenario in which the energy scale is mapped to a certain combination of r and the other three directions; in that case the Wilsonian effective action will be sensitive to where we are on a given slice. Of course, it should then be possible to redefine the coordinates so as to find a new radial coordinate that again corresponds to the energy scale. In this paper we stick to the simplest case, in which r is mapped to the energy scale, r_c to the UV cut-off, and r_h, the horizon radius, to the temperature.) Indeed, in the Wilsonian sense, at this scale all high energy degrees of freedom are integrated out and we are left with a set of relevant, marginal and irrelevant operators. This is of course the premise of our construction in [58], and the UV completion in [59,60] is achieved by introducing new degrees of freedom from the so-called Region 2 of [59] onwards.
Other checks, which include the exact mapping of the gauge theory operators to supergravity states, are much harder to perform; in fact the dictionary of gauge/gravity duality for the non-conformal case is not yet as fully developed as for the conformal case. Nevertheless, one thing is certain: to have any control over the computations on the supergravity side we need small g_s. For a background with a constant dilaton (an example would be the Klebanov-Strassler background [56]), a little bit of numerology tells us that g_s may be made arbitrarily small.
There is also the additional requirement of a large number of colors. For an SU(M) gauge theory, the corresponding supergravity theory makes sense if λ ≡ g_s M is very large. In this limit all computations can be restricted to classical supergravity alone, and stringy corrections can be entirely ignored. However, if we want to study actual large-M QCD, we have to explore string coupling g_s = O(1). How can we then ignore stringy corrections and restrict ourselves to supergravity alone?
A way out of this conundrum was first proposed in [63], by performing a sequence of two stringy dualities: a mirror transformation and an M-theory uplift. The mirror transformation is a special kind of duality that takes a type IIB background to a type IIA background by simply interchanging the Kähler and complex structures of the internal manifolds on the two sides of the duality. In [63] this was implemented by performing three T-dualities along the isometry directions of the internal manifold on the type IIB side [64]. Being T-dualities, they do not change the behavior of the dilaton much, and therefore take a weakly coupled background into another weakly coupled one.
The second duality arises when we increase the type IIA coupling. At strong coupling a new internal direction opens up and the theory goes to eleven-dimensional M-theory, where the dynamics is now, miraculously, governed by eleven-dimensional supergravity. All the type IIA stringy corrections are then captured succinctly by a classical supergravity analysis in M-theory [65], and therefore g_s = O(1) can again be studied using supergravity, albeit from eleven dimensions. Such a dual description was termed the MQGP limit of thermal QCD with a large number of colors in [63].
The above considerations tell us that the strong 't Hooft coupling regime may be studied from the perspectives of both weak and strong string couplings. In the presence of N_f flavors, this means we are exploring both the g_s N_f → 0 and the g_s N_f = O(1) limits (this is because we always want to keep the combination (g_s N_f)^k (g_s M²/N)^m ≪ 1, even for m = 1 and k ∈ Z; see also footnote 19). This in turn boils down to saying that we can have analytic control over the transport coefficients (here we concentrate only on bulk viscosities) for pure glue as well as for flavored large-M thermal QCD. Section 4 of the paper is therefore dedicated to studying the bulk viscosity at weak string coupling and with vanishing number of flavors, whereas section 5 is dedicated to studying the bulk viscosity in the other limit, namely strong string coupling and non-vanishing number of flavors.
There is yet another limit where we can remain at weak string coupling but explore strong YM coupling. On the type IIB side such a scenario becomes possible once N_f flavor degrees of freedom are switched on. That this could happen is a consequence of two conspiracies: one, the dilaton picks up O(g_s N_f) corrections, forcing it away from being a constant; and two, the NS 2-form field, through the vanishing two-cycle on which the M D5-branes are wrapped, also picks up O(g_s N_f) corrections. These corrections provide additional structure to the already non-constant field, but more importantly they add to the dilaton factor constructively to provide the full structure of the YM coupling.
Interestingly, from either of these limits at strong 't Hooft coupling, the ratio of the bulk to shear viscosities remains proportional to the linear power of 1/3 − c_s². The difference lies in the precise coefficients that control the lower bounds at weak and strong string couplings. For example, at strong string coupling the lower bound is almost 9 times bigger than the Buchel bound [30], as we will discuss in section 5. Of course, nowhere do we see any violation of the Buchel bound, so presumably a violation can only occur once we dimensionally reduce the four-dimensional theory to two dimensions. This is much like the scenario presented in [66], but we will not discuss it any further here.
What we will discuss, however, is the appearance of the linear power of the deviation factor, 1/3 − c_s², when we study the spectral function using the weakly coupled type IIA theory. The spectral function is an important aspect of the study of QGP, and its derivation is rather complicated at weak 't Hooft coupling. At strong 't Hooft coupling there is a way to derive it from gauge/gravity duality, but the derivation is technical and involves various manipulations of the background. Nevertheless, an answer can be found in the present set-up, and the final result shows a linear dependence on the deviation factor. In the limit of vanishing frequency, the result matches well with the actual QGP, despite the presence of a large number of colors. Such a success points towards some inherent universality, and it will be interesting to explore this further.
Organization of the paper
The paper is organized as follows. In section 2 we study bulk viscosity at weak 't Hooft coupling. After short introductory remarks on the definition of bulk viscosity and the applied microscopic theory, in section 2.1 we discuss the Kubo formula, which provides a general, first-principles method of computing the coefficient. In section 2.2 we briefly summarize results on the leading order bulk viscosity calculation performed within kinetic theory by solving the Boltzmann equation. Sections 2.3 − 2.6 contain a diagrammatic, though only qualitative, analysis intended to justify the validity of the effective kinetic theory formulation for transport coefficient studies. In section 2.3 we consider the one-loop diagram to find the typical size of the bulk viscosity. This step also shows that fermionic contributions are subleading with respect to the gluonic ones. Then, in section 2.4, the power counting of the relevant self-energies is carried out. Section 2.5 is devoted to an evaluation of the typical sizes of the multi-loop diagrams which represent scattering processes. Both particle number conserving and particle number changing processes are studied, and the role of soft physics is emphasized in subsections 2.5.1 and 2.5.2. In section 2.6 a schematic form of the integral equations needed for a diagrammatic bulk viscosity computation is presented.
In section 3 the intermediate coupling regime is discussed. The section consists of a brief overview of the literature on approaches aimed at extracting the bulk viscosity from lattice QCD results, mostly by studying QCD sum rules and finding constraints on the spectral density. The difficulties in the quantitative determination of the bulk viscosity are pointed out.
The strong coupling results are discussed in sections 4, 5 and 6. In section 4, the weak string but strong 't Hooft coupling regime is discussed. We start by giving a detailed description of where the string theory techniques fit into the study of bulk viscosity. The various domains of compatibility as well as the UV completion are emphasized, and the consistency of the background is shown from both the type IIB and the dual type IIA pictures. In section 4.1, a slightly simplified background is taken to quantify the various parameters associated with the computation of bulk viscosity. For example, one of the important parameters is the fluctuation associated with the vielbeins. This is elaborated in section 4.2. The fluctuation modes can be divided into positive and negative frequencies, and we show that there are pieces of the fluctuations, called p_nk, that are related to certain sources Δ^(n)_ab in the gravity dual picture. The analysis of the sources is rather complicated, and therefore in section 4.2.1 we first take a toy example to study the equations connecting the p_nk fluctuations with the Δ^(n)_ab sources. The toy example is based on a simplifying constraint, and using this the simplest zero and non-zero modes of the fluctuations are shown to satisfy equations that relate them to the sources. In section 4.2.2 we go beyond the simple toy example by studying the equations governing the fluctuating modes in a generic setting. As before, the zero and non-zero modes satisfy equations relating them to certain sources.
Once we have the fluctuations, we can use them to compute the transport coefficients. In section 4.3 we perform two important computations: one, the sound speed, and two, the ratio of the bulk to the shear viscosities. The former is given by an equation which takes into account not only the scale dependence of the temperature, but also the background fluctuations. Needless to say, the ratio of the bulk to the shear viscosities should depend on the sound speed, and we elucidate this by first computing the precise ratio and then showing that the ratio is indeed bounded below by the deviation of the sound speed from its conformal value.
The remaining two sections are devoted to studying bulk viscosity at strong string and strong 't Hooft couplings. The first, i.e., section 5, has to do with obtaining a Buchel-like bound on the bulk-to-shear viscosity ratio by looking at scalar modes of metric perturbations and the associated quasi-normal modes. The second, i.e., section 6, has to do with obtaining the same result from spectral functions. Here is a more detailed plan of these two sections.
In section 5, we first briefly review the Strominger-Yau-Zaslow (SYZ) type IIA mirror of [58]'s top-down type IIB holographic dual of large-N thermal QCD, as well as its M-theory uplift as constructed in [63]. This is followed, in section 5.2, by a discussion of how to obtain the EOM for a linear combination of scalar modes of metric perturbations that is gauge invariant under infinitesimal diffeomorphisms, and of the associated quasi-normal modes; it is noted that with a non-zero bare resolution parameter, the horizon turns out to be an irregular singular point, a fact that in fact proves quite helpful in obtaining the aforementioned quasi-normal modes. In section 5.3, we show that one cannot avoid non-normalizable modes if one turns off the bare resolution parameter, resonating well with the similar non-normalizable perturbation modes obtained in section 4. A Buchel-like bound for the ratio of the bulk and shear viscosities, in terms of the linear power of the deviation of the square of the speed of sound from its conformal value, is finally obtained in section 5.4, both for N_f = 0 and N_f ≠ 0.
In section 6, we follow a different route: that of the spectral function involving the correlation function of gauge fluctuations about a background value of the gauge fields on the world volume of the flavor D6-branes of the aforementioned SYZ type IIA mirror. In section 6.1, we obtain the background value of a D6-brane world volume gauge field A_t(r), r being the radial coordinate, and set up the EOM for fluctuations about it. We obtain and explicitly solve the EOM (there turns out to be only one linearly independent EOM) in the zero-momentum limit in section 6.2. From the on-shell action, the gauge-fluctuation correlation function, and hence the spectral function per unit frequency in the vanishing frequency limit, is worked out in section 6.3, and it is explicitly seen that the difference between its values at non-zero and zero temperatures is precisely proportional to the deviation of the square of the speed of sound from its conformal value. In section 6.4, we argue that, unlike sections 6.1 − 6.3, in which one considered the weak-string-coupling strong-'t-Hooft-coupling limit, the result of section 6.3 goes through even in the limit of strong string and strong 't Hooft couplings. We argue therein that the g_s → 0 limit, along with a non-trivial B-field along the vanishing two-cycle, conspires to produce a g_YM² on the gauge theory side that is no longer a small number.
Finally, in the appendices we discuss three topics. The first is a gauge invariant combination of the scalar modes of metric fluctuations; such a combination is useful for studying the quasi-normal modes. The second is the derivation of the on-shell action and Green's function required to study the spectral function. The third is an estimation of the horizon radius.
Bulk viscosity at weak 't Hooft coupling
When a system exhibits a small deviation from thermal equilibrium, its evolution is well described by the equations of hydrodynamics. These are given in terms of conservation laws of currents, accompanied by the equation of state. Here we focus only on the energy and momentum currents, which are encoded in the stress-energy tensor T^μν. Its spatial part is

\[
T_{ij} \;=\; \delta_{ij}\,P \;-\; \eta\Big(\partial_i u_j + \partial_j u_i - \tfrac{2}{3}\,\delta_{ij}\,\nabla\!\cdot\!\mathbf{u}\Big) \;-\; \zeta\,\delta_{ij}\,\nabla\!\cdot\!\mathbf{u}\,, \tag{2.1}
\]

where ζ and η are the bulk and shear viscosities, u_i is the fluid flow velocity, and the metric is mostly negative. A many-body system can be driven out of its equilibrium state through uniform compression or rarefaction, and both processes lead to changes in the energy density ε: an increase or a decrease, respectively. The pressure P also changes, but its change is different from that provided by the equation of state P(ε). The trace of the stress tensor carries the information on the changes in pressure. The deviation from the equilibrium pressure when the system is expanding or contracting is characterized by the bulk viscosity ζ:

\[
P_{\rm noneq} \;=\; P(\epsilon) \;-\; \zeta\,\nabla\!\cdot\!\mathbf{u}\,, \tag{2.2}
\]

where ∇·u is the expansion parameter. Bulk viscosity, like the other transport coefficients, is determined by the microscopic dynamics. Here we discuss how bulk viscosity emerges when the system is governed by the non-Abelian SU(M) gauge theory with the Lagrangian

\[
\mathcal{L} \;=\; -\tfrac{1}{4}\,F^{a}_{\mu\nu}F^{a\,\mu\nu} \;+\; \bar\psi\, i\gamma^\mu D_\mu \psi\,. \tag{2.3}
\]

Here ψ is the quark field with M × N_f degrees of freedom, where M is the number of colors and N_f is the number of flavors, D_μ = ∂_μ + i g_YM A_μ is the covariant derivative with the gluon field A_μ, which has M² − 1 degrees of freedom, and F_μν = (1/(i g_YM)) [D_μ, D_ν] is the field strength tensor. The strength of the interaction is fixed by the gauge coupling g_YM.
Classically, this theory has conformal symmetry as long as the quarks are massless. Quantum mechanically, renormalization breaks the conformal symmetry, since the Callan-Symanzik β-function is non-zero. Therefore, it is expected that the bulk viscosity of the massless SU(M) gauge theory is directly related to the β-function. This is shown manifestly within the effective kinetic theory analysis of Ref. [29]. In the rest of our analysis, we mainly consider the large-M limit. In this limit, the relevant interaction strength turns out to be the 't Hooft coupling λ = g_YM² M, and then β ∼ λ²/M [67].
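As a consistency check of this scaling (our rewriting of textbook formulas, not a result of [67]): for pure SU(M) gauge theory the one-loop beta function is β(g_YM) = −(11M/3) g_YM³/(16π²), so that

\[
\frac{d\,g^2_{YM}}{d\ln\mu^2} \;=\; g_{YM}\,\beta(g_{YM}) \;=\; -\,\frac{11}{48\pi^2}\,\frac{\lambda^2}{M}\,,
\]

which is the β ∼ λ²/M behavior quoted above; the 't Hooft coupling itself then runs as dλ/d ln μ² = −11λ²/(48π²), which stays finite as M → ∞.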
In principle, to study bulk viscosity comprehensively one should consider massive fermion fields, since a constant mass is also a parameter that breaks conformal symmetry. In light of the forthcoming discussion this is, however, not necessary here, as the quark contribution is M-suppressed compared to the gluon contribution.
Kubo formula for bulk viscosity
The first-principles prescription to compute bulk viscosity is given by the Kubo formula [68]:

\[
\zeta \;=\; \lim_{\omega\to 0}\,\frac{\rho_{PP}(\omega,\mathbf{k}=0)}{2\,\omega}\,, \tag{2.4}
\]

where ρ_PP(ω, k) is the spectral function of the pressure-pressure correlator and ω is the frequency of the hydrodynamic mode. In the following discussion we will often omit the k dependence of the correlation functions and spectral densities; the common k → 0 limit should be understood in those cases. The spectral function is related to the imaginary part of the pressure-pressure retarded correlation function:

\[
\rho_{PP}(\omega,\mathbf{k}) \;=\; 2\,\mathrm{Im}\,G_R^{PP}(\omega,\mathbf{k})\,, \tag{2.5}
\]

where we used G_A = G_R^*. In the rest frame of the fluid cell, the pressure operator is given by the trace of the stress tensor, P̂ = −(1/3) T̂^i_i. Because of energy-momentum conservation, one can easily show that the spectral functions ρ_Pε(ω, k) and ρ_εε(ω, k) must vanish in the same limit [19], where ε̂ = T̂^00 is the energy density operator. For theoretical analysis it is often more advantageous to use the trace of the full stress-energy tensor, Θ̂/3 = T̂^μ_μ/3 = (1/3)ε̂ − P̂, which is Lorentz invariant, or the more kinetic-theory-friendly combination P̂* = P̂ − c_s² ε̂, which reduces to −Θ̂/3 in the conformal limit. Here c_s² = ∂P/∂ε is the speed of sound squared. Therefore, the Kubo formula can be written generally as

\[
\zeta \;=\; \lim_{\omega\to 0}\,\frac{\rho_{\mathcal{OO}}(\omega,\mathbf{k}=0)}{2\,\omega}\,, \tag{2.6}
\]

with the retarded correlation function given in coordinate space as

\[
G_R^{\mathcal{OO}}(t,\mathbf{x}) \;=\; -\,i\,\theta(t)\,\big\langle\big[\hat{\mathcal O}(t,\mathbf{x}),\,\hat{\mathcal O}(0,\mathbf{0})\big]\big\rangle\,. \tag{2.7}
\]

Here the operator Ô can be P̂, Θ̂/3 or P̂*. This correlation function contains all the essential information about the physics of bulk viscosity, and its structure is fixed by the Lagrangian (2.3) and by thermal medium effects. Although the Kubo formula (2.6) is general, in this section we focus on the regime of sufficiently high energy scales, where the expansion in the small 't Hooft coupling λ may be applied. We consider the limit where the 't Hooft coupling λ = g_YM² M remains small and the number of flavours N_f is fixed while M → ∞. In this limit one should, in principle, be able to calculate the bulk viscosity perturbatively. Due to the very complex multi-scale nature of the non-Abelian theory, a comprehensive quantitative computation of the bulk viscosity using field theoretical tools is not an easy task. To date, a complete diagrammatic analysis of the bulk viscosity in QCD has not been carried out (for other transport coefficients of QED, see [50,51]). However, an equivalent approach to computing the coefficient is offered by effective kinetic theory.
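As a simple numerical illustration of how ζ is read off from the ω → 0 limit in Eq. (2.4), the sketch below uses a model Lorentzian (relaxation-time) spectral density, ρ(ω) = 2ζω/(1 + ω²τ²); this ansatz is purely illustrative and is not derived from the microscopic theory discussed here:

```python
# Illustrate zeta = lim_{w->0} rho(w)/(2 w) with a model spectral density.
import numpy as np

zeta_true, tau = 0.8, 3.0            # model parameters (arbitrary units)
w = np.logspace(-4, 0, 5)            # frequencies approaching zero
rho = 2 * zeta_true * w / (1 + (w * tau) ** 2)

for wi, ri in zip(w, rho):
    print(f"w = {wi:9.5f}   rho/(2w) = {ri / (2 * wi):.5f}")
# rho/(2w) -> zeta_true = 0.8 as w -> 0
```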
Bulk viscosity from kinetic theory
The foundations of the effective kinetic theory of the SU(M) theory were formulated in Refs. [24,48,49]. The scattering processes governing the transport properties of the medium are embedded in the collision kernel of the Boltzmann equation, and their sizes, in terms of the gauge coupling g_YM, the numbers of degrees of freedom, and the Casimir operators, are shown explicitly in Ref. [48]. This formulation was then used in Ref. [29] to calculate the bulk viscosity of QCD. Here we briefly summarize these results at leading order in the 't Hooft coupling λ = g_YM² M. In the large-M limit, the bulk viscosity at leading order in λ is governed only by pure gluodynamics, since quarks are suppressed by at least a factor of M. This can be seen clearly from the following analysis. Bulk viscosity depends on two factors. First, it must be proportional to the nonconformality parameter reflecting the incompressibility of the system. Second, it is controlled by the mean free path, which carries the information on the microscopic properties of the medium, in particular on the nature of the interaction and the relevant degrees of freedom. From Ref. [29] one observes that the same dependence of the bulk viscosity on the nonconformality parameter is obtained for both the quark and the gluon contributions. The mean free path of the quark contribution and that of the gluon one are parametrically the same, but each is associated with the corresponding number of degrees of freedom, and these are different: while the number of gluons scales as M², the number of quarks scales as M. This dependence occurs for both the number conserving and the number changing processes and can be extracted by analyzing all the matrix elements and associated degrees of freedom shown explicitly in [48]. Hence we ignore the quark contribution at every step of the forthcoming analysis.
In kinetic theory one focuses on the evolution of the distribution function of the relevant quasiparticles. The evolution of the gluon distribution function f(p, x, t) is governed by the Boltzmann equation of the form

\[
\big(\partial_t + \mathbf{v}_p\cdot\nabla_{\mathbf{x}}\big)\,f(p,x,t) \;=\; -\,C[f]\,,\qquad \mathbf{v}_p=\frac{\partial E_p}{\partial\mathbf{p}}\,. \tag{2.8}
\]

Since f(p, x, t) is slightly out of equilibrium, it can be expressed as f = f_eq + f_1, where f_eq has the form f_eq(p, x) = [e^{β(t)E_p(x)} − 1]⁻¹. f_eq is therefore a function of time-space dependent quantities: β(t), the inverse of the temperature T(t), and E_p(x) = √(p² + m_th²(x)), the energy of a gluon, where the x dependence appears through the thermally fluctuating mass m_th(x). f_1 is the nonequilibrium correction, which includes both the action of hydrodynamic forces and the correction due to the thermally fluctuating mass. C[f] is the collision term, which contains processes involving only gluons, namely the number conserving gg → gg scatterings and the number changing g ↔ gg splittings. Its explicit form can be found in [48]. At the linearized order, the left-hand side of the Boltzmann equation is then, schematically,

\[
\big(\partial_t + \mathbf{v}_p\cdot\nabla_{\mathbf{x}}\big) f\,\Big|_{\rm lin} \;=\; \beta\,f_0\,(1+f_0)\;q(p)\;\nabla\!\cdot\!\mathbf{u}\,, \tag{2.9}
\]

where f_0 is the Bose-Einstein distribution function (e^{βE_p} − 1)⁻¹. The form of the quantity q(p) is most essential, as it establishes the final parametric dependence of the bulk viscosity on the nonconformality parameter. Schematically it reads

\[
q(p) \;=\; \Big(\tfrac{1}{3}-c_s^2\Big)\,p \;-\; \frac{\bar m^2}{3\,p}\,, \tag{2.10}
\]

where the quantity m̄² is of the form

\[
\bar m^2 \;=\; m^2_{\rm th} \;-\; \frac{d\,m^2_{\rm th}}{d\log T^2}\,. \tag{2.11}
\]

The formula (2.10) is obtained by taking into account the stress-energy conservation law, thermodynamic relations, and the space dependence of the quasiparticle energy. Note that, as a consequence of the temperature dependence of the quasiparticle mass, given by m_th² = g_YM²(T) M T²/6, the beta function of the SU(M) theory, β_λ = −11λ²/(48π²M), arises in the formula (2.11) and, consequently, in Eq. (2.10). The β_λ function is precisely the parameter that breaks conformal symmetry in the system, and the factor 1/3 − c_s², with the speed of sound squared c_s² = ∂P/∂ε, is equivalent to it through the relation

\[
\frac{1}{3}-c_s^2 \;\propto\; \frac{\bar m^2}{T^2} \;\propto\; -\,M\,\beta_\lambda \;\sim\; \lambda^2\,. \tag{2.12}
\]

Due to this dependence, q(p) can be expressed in the simple form

\[
q(p) \;\simeq\; \Big(\tfrac{1}{3}-c_s^2\Big)\Big(p \;-\; \kappa\,\frac{T^2}{p}\Big)\,, \tag{2.13}
\]

with κ a purely numerical constant. In all formulas, terms suppressed by any power of M have been omitted. The form of the left-hand side of the Boltzmann equation, Eq. (2.9), also dictates the form of the correction f_1 which, in turn, fixes the form of the linearized collision kernel. The correction is f_1 = β² f_0 (1 + f_0) χ ∇·u, so that both sides of the Boltzmann equation are proportional to ∇·u. Dropping this scalar factor, the Boltzmann equation can be expressed in the convenient form S(p) = [Cχ](p). The bulk viscosity may then be found as

\[
\zeta \;=\; \tilde S_m \big(\tilde C^{-1}\big)_{mn} \tilde S_n\,, \tag{2.14}
\]

where the matrix is C̃_mn = 2M² ∫_p φ_m(p) [Cφ_n](p) and the column vector is S̃_m = 2M² ∫_p φ_m(p) S(p), with the basis functions φ_m(p) = p^m T^{K−m−1}/(T + p)^{K−2} and m = 1, ..., K. The numerical procedure relies on the variational method. Since ζ ∝ S² ∝ q², the bulk viscosity is clearly proportional to the nonconformality parameter squared, (1/3 − c_s²)², or equivalently to β_λ², while the inverted collision kernel introduces the mean free path. The final expression then scales as

\[
\zeta \;=\; a\,\Big(\tfrac{1}{3}-c_s^2\Big)^2\,\frac{M^2\,T^3}{\lambda^2\,\ln(b/\lambda)}\,, \tag{2.15}
\]

where a and b should be obtained by solving the variational equation (2.14). The whole procedure for finding the bulk viscosity coefficient of QCD is comprehensively discussed in [29] for different values of the number of flavors N_f. One can then reproduce the dependence of the bulk viscosity of the SU(M) theory on the coupling constant λ from Fig. 1 of Ref. [29] by setting all quark masses to 0, taking N_f = 0, and rescaling the coupling 4πMα_s → λ.
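The variational step, Eq. (2.14), is easy to illustrate: expand χ(p) in the basis φ_m(p), project the source and the collision operator onto that basis, and invert the resulting finite matrix. The sketch below does exactly this with a mock, positive-definite collision matrix; it is a structural illustration only, not the leading-order SU(M) kernel of [29,48]:

```python
# Toy version of the variational formula zeta = S~^T C~^{-1} S~ with the
# basis phi_m(p) = p^m T^(K-m-1) / (T+p)^(K-2), m = 1..K. The source and
# collision matrix below are placeholders with the right structure only.
import numpy as np

T, K, M = 1.0, 4, 3
p = np.linspace(1e-3, 20 * T, 4000)
measure = p ** 2 / (2 * np.pi ** 2)              # d^3p / (2 pi)^3 measure
f0 = 1.0 / np.expm1(p / T)                       # Bose-Einstein distribution

phi = np.array([p ** m * T ** (K - m - 1) / (T + p) ** (K - 2)
                for m in range(1, K + 1)])
source = f0 * (1 + f0) * p                       # stand-in for f0(1+f0) q(p)

S = 2 * M ** 2 * np.trapz(phi * source * measure, p, axis=1)
# Mock collision matrix: a positive-definite Gram matrix of the basis,
# weighted by the thermal measure (a real kernel would also carry 1/Gamma).
C = 2 * M ** 2 * np.trapz(phi[:, None, :] * phi[None, :, :]
                          * (f0 * (1 + f0) * measure), p, axis=2)

zeta = S @ np.linalg.solve(C, S)
print(f"toy variational zeta = {zeta:.3e} (arbitrary units)")
```

Enlarging the basis size K improves the variational estimate; in practice a small basis already converges well for kernels of this type.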
Given that the nonconformality parameter and the square of the 't Hooft coupling constant are of the same size, as expressed by the relation (2.12), one can write

\[
\frac{\zeta}{s} \;\propto\; \frac{\big(\tfrac{1}{3}-c_s^2\big)^2}{\lambda^2\,\ln(b/\lambda)} \;\sim\; \frac{\tfrac{1}{3}-c_s^2}{\ln(b/\lambda)}\,, \tag{2.16}
\]

where we used the entropy density s = (P + ε)/T ∝ M²T³. The formula (2.16) shows that in the very weak coupling regime the leading order bulk viscosity to entropy density ratio is a linear function of the nonconformality parameter 1/3 − c_s², up to the logarithm. This occurs because the β_λ function is of the same order as the inverse of the mean free path. This behavior is characteristic of theories in which the conformal symmetry is broken only by the β_λ function; examples are SU(M) in the large-M limit and massless QCD. The shear viscosity coefficient of QCD with the effective coupling λ was studied in [69], with the result

\[
\eta \;=\; A\,\frac{M^2\,T^3}{\lambda^2\,\ln\big(B/\sqrt{\lambda}\big)}\,, \tag{2.17}
\]

with A and B numerical constants. Combining Eqs. (2.17) and (2.15), one finds that the ratio ζ/η is characterized by a quadratic dependence on the nonconformality parameter:

\[
\frac{\zeta}{\eta} \;\propto\; \Big(\tfrac{1}{3}-c_s^2\Big)^2\,. \tag{2.18}
\]
One-loop diagram and power counting
So far, kinetic theory has been the only usable method allowing a quantitative computation of the transport coefficients of non-Abelian weakly coupled gauge theories. However, it is an effective description of quasiparticle dynamics, and its equivalence to the quantum field theoretical approach has not been fully shown for the SU(M) theory.
In particular, a complete diagrammatic analysis of the bulk viscosity of that theory has not yet been carried out (for transport coefficients of QED, see [50,51]). As was shown in Refs. [31,70,71,72], the equivalence of the diagrammatic method and the kinetic theory description can be established when the ladder diagram resummation dominates the leading order result. To carry out a qualitative analysis of the weakly coupled large-M Yang-Mills theory, it will therefore be enough to confirm that the planar ladder diagrams dominate the viscosity calculations. The goal of the forthcoming subsections is to do just that, for the purpose of establishing the qualitative behavior of the bulk viscosity in the weakly coupled theory. We will not, however, attempt the full analytical computation necessary for a quantitative analysis, as this is beyond the scope of this work.
To perform the qualitative analysis we need to establish the basic ingredients dictated by the Kubo formula (2.6). The full stress-energy tensor of the SU(M) gauge theory is given by

\[
T^{\mu\nu} \;=\; -\,F^{a\,\mu\lambda}F^{a\,\nu}{}_{\lambda} \;+\; \tfrac{1}{4}\,g^{\mu\nu}F^{a}_{\alpha\beta}F^{a\,\alpha\beta} \;+\; \tfrac{i}{2}\,\bar\psi\big(\gamma^{\mu}D^{\nu}+\gamma^{\nu}D^{\mu}\big)\psi\,. \tag{2.19}
\]

To gain some insight into the parametric form of the bulk viscosity, and to establish a starting point for evaluating the size of the microscopic processes governing its behavior, it is illuminating to consider only the kinetic terms of the stress-energy tensor, that is, the first term in Eq. (2.19). Since quarks are subleading, we focus only on the gluonic contribution to the stress-energy tensor; we comment briefly on this issue later. Power counting of the gluon one-loop diagram is most conveniently accomplished using the (r, a) basis of thermal field theory. This was shown for the scalar field theory in [73,74] and also for gauge theories in [70]. In this basis, the elementary gluon propagators are the retarded propagator G_ra, the advanced one G_ar, and the auto-correlation function, which carries the information on the medium momentum distribution, G_rr = (1 + 2n_B)(G_ra − G_ar), where n_B is the Bose-Einstein distribution. These propagators carry indices related to color, spin and Lorentz structure, but within this analysis we will not show them explicitly.
Since all these propagators describe the propagation of a given particle in a thermal medium, they are dressed with self-energies. The retarded self-energy is given by Π = Re Π − i Im Π and the retarded propagator is

\[
G_{ra}(p) \;=\; \frac{i\,A_g(p)}{p_0^2 - \mathbf{p}^2 - \mathrm{Re}\,\Pi(p) + i\,\mathrm{Im}\,\Pi(p)}\,, \tag{2.20}
\]

where A_g(p) carries the necessary color and tensor indices. The advanced propagator is then given by G_ar = G_ra^*. In the weakly coupled limit, the retarded propagator has poles at p_0 ≈ ±E_p − iΓ^g_p, where the quasi-particle energy is given by E_p = √(p² + m_th²) with the thermal mass m_th² = Re Π(p), and the thermal width is given by the imaginary part of the self-energy at the on-shell momentum, Γ_p = Im Π(E_p, |p|)/(2E_p). The resummed propagator can then be expressed as

\[
G_{ra}(p) \;\simeq\; \frac{i\,A_g(p)}{2E_p}\left[\frac{1}{p_0 - E_p + i\Gamma_p} \;-\; \frac{1}{p_0 + E_p + i\Gamma_p}\right]. \tag{2.21}
\]

In using the propagators in the (r, a) basis to evaluate the Kubo formula, we encounter two different types of singularities: the pinching pole singularity and the collinear singularity. Both are regulated by the thermal self-energies, but they complicate the power counting. In this section we discuss the pinching pole singularity and its ramifications; the effect of the collinear singularity is discussed in a later section.
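The origin of the 1/Γ enhancement can be made explicit with a one-line contour integral. Near the positive-energy pole, the product of a retarded and an advanced propagator carrying the same momentum reduces, using Eq. (2.21), to a Lorentzian whose frequency integral is

\[
\int \frac{dp_0}{2\pi}\;\frac{1}{\big(p_0-E_p\big)^2+\Gamma_p^2} \;=\; \frac{1}{2\,\Gamma_p}\,,
\]

so that each retarded-advanced pair sharing a loop momentum contributes a factor ∝ 1/Γ_p (parametrically 1/(λ²T), as established in the next subsection). It is precisely this enhancement that promotes an infinite class of higher-loop (ladder) diagrams to leading order and forces the resummation discussed below.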
Using the operator P̂* defined below Eq. (2.5), one finds the gluonic one-loop contribution to the bulk viscosity in the pinching pole approximation as, schematically,

\[
\zeta_{\text{1-loop}} \;\sim\; \lim_{\omega\to 0}\frac{1}{2\omega}\int\frac{d^4p}{(2\pi)^4}\,\big[1+2n_B(p_0)\big]\;\hat P^*(p)\,G_{ra}(p)\;\hat P^*(p)\,G_{ar}(p)\,. \tag{2.22}
\]

Note that the propagator part is written in a symbolic way, as all internal indices and the traces over them are not written explicitly. The retarded propagator has poles at p_0 = ±E_p − iΓ_g and the advanced one at p_0 = ±E_p + iΓ_g. Hence the two poles at Re p_0 = E_p, for instance, are separated by 2iΓ^g_p in the imaginary direction, on opposite sides of the integration contour. When integrated over, these "pinching poles" result in a large 1/Γ_g factor, leading to the following power counting (again schematic):

\[
\zeta_{\text{1-loop}} \;\sim\; \big(M^2-1\big)\int\frac{d^3p}{(2\pi)^3}\;\frac{\beta\,f_0(1+f_0)}{\Gamma^g_p}\left[\Big(\tfrac{1}{3}-c_s^2\Big)p \;-\; \frac{m^2_{\rm th}}{3p}\right]^2\,. \tag{2.23}
\]

This expression requires a few comments. First, the factor in the square bracket has a form analogous to the quantity q(p) found within kinetic theory and given by Eq. (2.10), up to the thermal mass term. The expression (2.23) is obtained, however, only from the one-loop analysis and does not include all effects. We expect that when the Lagrangian part and the interaction terms of the stress-energy tensor operator are included in the computation, the term d(m_th²)/d(log T²) will emerge in Eq. (2.23). This term, when subtracted from m_th² in (2.23), will make the bracket analogous to the expression (2.11) and will therefore result in the emergence of the β_λ function, or equivalently of (1/3 − c_s²), analogously to what is obtained within kinetic theory. The inclusion of the temperature dependence of the thermal mass was justified in Ref. [31] and explicitly incorporated into the formulation of fluid dynamic equations in Ref. [75], but for scalar theories only. We expect that performing a full analysis of the spectral function of the SU(M) theory would yield this dependence on the nonconformality parameter, but we do not intend to derive it. We focus instead on discussing the consequences of the presence of the 1/Γ_g factor in formula (2.23), which governs the mean free path behavior. Before doing so, let us point out that M² − 1 in (2.23) reflects the number of degrees of freedom; since the number of colors is large, we will henceforth neglect the constant "−1". To represent the expression (2.23) diagrammatically, it is convenient to use the 't Hooft notation [76], in which a double line corresponds to a gluon propagator and any fermion propagator is represented by a single line. In this representation the power counting relies on the simple formula [76]

\[
\text{diagram} \;\sim\; \lambda^{(V_3+2V_4)/2}\;M^{\,L-(V_3+2V_4)/2}\;N_f^{\,L_f}\,, \tag{2.24}
\]

where L is the number of closed loops, V_3 is the number of 3-body interaction vertices and V_4 is the number of 4-body interaction vertices; when fermions occur there is an extra factor of N_f for each of the L_f fermion loops. Using the 't Hooft notation, the one-loop diagram corresponding to the expression (2.23), together with its typical size, is depicted in Fig. 1a), where the crossed vertices stand for the insertion of the renormalized operator of the trace of the stress-energy tensor. For comparison, in Fig. 1b) we also present the fermionic one-loop diagram, with its typical size given in terms of the corresponding degrees of freedom and the fermionic thermal width Γ_f, given by the imaginary part of the fermionic self-energy. The gauge boson contribution to the correlation function at leading order therefore scales as M²/Γ_g, since the diagram is made of two closed loops, while the fermionic one scales as MN_f/Γ_f. As may be inferred from Fig. 1, each factor of the thermal width is associated with the presence of a pair of propagators. One thus observes that the fermion contribution is subleading by a factor of N_f/M compared to the gluonic one, as long as the same parametric dependence on the parameter 1/3 − c_s² holds and Γ_f is of the same order as Γ_g. From the kinetic theory findings of Refs. [29,48] one finds the parameter (1/3 − c_s²)² to be common to gluons and fermions, and we rely on this result. Given that, estimates of the sizes of the fermionic and gluonic thermal widths are still needed. Moreover, to fully estimate the leading order 2-point correlation function for the bulk viscosity, one also needs the typical sizes of the corresponding thermal masses, which are essential for the number changing processes.
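The counting rule (2.24) is mechanical enough to encode in a few lines; the helper below (our reading of Eq. (2.24), so a sketch rather than a definitive implementation) returns the powers of λ, M and N_f for a diagram specified by its loop and vertex content:

```python
# 't Hooft power counting of Eq. (2.24):
#   diagram ~ lambda^((V3 + 2 V4)/2) * M^(L - (V3 + 2 V4)/2) * Nf^Lf,
# where L counts closed (colour) loops, V3/V4 the 3- and 4-point vertices,
# and Lf the fermion loops (each bringing an extra factor of Nf).
def thooft_size(L, V3=0, V4=0, Lf=0):
    half_v = (V3 + 2 * V4) / 2
    return {"lambda": half_v, "M": L - half_v, "Nf": Lf}

# Gluonic one-loop correlator, Fig. 1a): two colour loops, no vertices -> M^2.
print(thooft_size(L=2))
# Fermionic one-loop, Fig. 1b): one colour loop, one fermion loop -> M * Nf.
print(thooft_size(L=1, Lf=1))
# Gluon tadpole self-energy, Fig. 2a): one colour loop, one 4-vertex -> lambda.
print(thooft_size(L=1, V4=1))
```

The last example reproduces the thermal-mass scaling m_th² ∼ λT² quoted in the next subsection, and the first two exhibit the N_f/M suppression of the fermionic loop relative to the gluonic one.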
Self-energy power counting
Both the real and the imaginary part of the self-energy play a key role in the calculation of transport coefficients. The role of the imaginary part (the thermal width) as a regulator of the pinching pole singularity has already been discussed in the previous section. The role of the real part (the thermal mass) is to regulate the infrared and collinear singularities that occur at finite temperature. Hence, the size of the thermal mass defines the soft scale, while the temperature itself defines the hard scale. In QCD, we know that the thermal mass is of O(g_YM T) while the thermal width is of O(g_YM⁴ T) when the particle momentum is hard (see, for instance, [50]). In the large-M limit, these need to be re-expressed using the 't Hooft coupling.
The thermal mass is determined by the real part of the self-energy of a particle. In our case, the leading order contribution comes from one-loop diagrams. The corresponding diagrams contributing to the gluon thermal mass in the double-line representation are shown in Fig. 2. For a systematic comparison, the leading fermionic contribution to the real part of the self-energy is shown in Fig. 3. The coupling dependence comes from counting the interaction vertices and the number of degrees of freedom using the formula (2.24). As in the case of the one-loop diagrams contributing to the spectral density, when M → ∞ the gluon loops, Figs. 2a) and 2b), dominate over the fermion one by a factor of M/N_f. Thus, in the large-M limit, the leading order gluon thermal mass, as well as the fermionic one (Fig. 3), scales as

\[
m^2_{\rm th} \;\sim\; \lambda\,T^2\,;
\]

for explicit expressions, see [77,78,79]. The imaginary part of the one-loop self-energy vanishes when bare propagators are used, due to kinematic constraints. It does not vanish when the resummed propagators are used, but that is equivalent to the two-loop self-energy, which we discuss next. The relevant two-loop self-energy diagrams and their sizes, for both gluons and quarks, are shown in Figs. 4 and 5, respectively. It is then apparent again that the gluon contributions dominate over the fermionic ones by a factor of M, for both the gluon and the quark self-energies. The size of the imaginary parts of the self-energies is

\[
\mathrm{Im}\,\Pi \;\sim\; \lambda^2\,T^2\,,
\]

which leads to thermal widths of the same size, Γ_g ∼ Γ_f ∝ λ²T, at leading order. This is already enough to justify that quark loops need not be considered any further, since the gluon contribution to a given quantity is always M times bigger than the quark one, up to the factor of N_f, which is a fixed constant much smaller than M. This justifies omitting all quark contributions in the forthcoming analysis. One can then observe that the typical size of the propagator part of the correlation function is

\[
\frac{M^2}{\Gamma_g} \;\sim\; \frac{M^2}{\lambda^2\,T}\,,
\]

up to the logarithm. The parametric estimate of the self-energy is of significant importance, since it controls the power counting of the scattering processes establishing the bulk viscosity coefficient. Consequently, as we discuss in the next subsection, it is the form of the self-energy at one- and two-loop order that controls the form of the collision kernel of the Boltzmann equation. In particular, by studying the self-energy one is able to find which processes contribute to the collision kernel and what their sensitivity to the different scales is. In contrast to the shear viscosity, whose typical size is dominated by the hard scale, the bulk viscosity is sensitive to the soft scale as well. Since the soft scale is dictated by the size of the thermal mass, whenever we refer to it we mean momenta of order m_th ∼ √λ T.
Diagrammatic justification of the processes contributing to the Boltzmann equation
Although the parametric estimate of the bulk viscosity can be found by considering the one-loop diagram of the correlation function, in a thermal medium an infinite number of processes have to be included at leading order. Equivalently, an infinite number of relevant loops need to be resummed. In the kinetic theory formulation this procedure is captured by the collision term of the Boltzmann equation. The equivalence between the two approaches can be established by showing that, at a given order of the coupling constant, the microscopic processes obtained in the diagrammatic representation have their counterparts in the collision kernel of the Boltzmann equation. Here we discuss this issue. The need for the resummation of an infinite number of diagrams has a twofold source, each part related to the presence of a different type of singularity. The first case has already been discussed: it is the pinching pole singularity, regulated by the thermal width Γ_g, where no other singularity occurs. The diagrams reflecting this type of singularity correspond to the number conserving, 2 → 2, processes. Within the one loop already discussed, any number of gluon exchanges between the side rails is then possible. Any possible insertion of the permissible gluon exchanges, understood as rungs, is of order λ², and it is compensated by an accompanying 1/Γ_g factor coming from the pinching poles of the pair of retarded and advanced propagators. Infinitely many such combinations are possible. The other type of singularity, characteristic of gauge theories, is the collinear singularity associated with a small angle between the scattering and scattered particles. This type of singularity governs the number changing processes, 1 + N → 2 + N, where N is the number of hard particles taking part in the splitting of one hard gluon into two. The collinear processes contribute to the bulk viscosity computation at the same order of the coupling constant as the number conserving processes. The splitting process occurs when a hard gluon traversing the medium interacts with another hard gluon via a soft momentum exchange and then emits an additional hard gluon, so that both the emitted and the emitting gluons move almost collinearly, within an angle θ ∼ O(√λ). The collinear region of propagating particles is always associated with the corresponding product of retarded and advanced propagators, which, if not dressed, has a singular behavior. The collinear singularities, similarly to the pinch singularities, are regulated by the thermal width, but in this case the soft scale fixed by the thermal mass plays an essential role as well. As discussed in detail later, all hard gluons taking part in the process can interact infinitely many times via soft exchanges with the thermal background, and these interactions have to be coherently resummed.
Since each type of singularity involves a resummation of the corresponding set of infinitely many diagrams, there are two integral equations that need to be solved, each of them associated with the corresponding type of singularity. We first focus on the physics of number conserving processes which can be represented by a set of diagrams involving the pinch singularities only. The case of collinear processes is discussed later.
2.5.1 The case without collinear singularities

2 → 2 processes are represented by rungs, which have to be inserted in the one-loop spectral function and then resummed. Finding the structures of the rungs is not trivial, and to do so one needs to rely on a few constraints: Ward identities, power counting and kinematic boundaries. The most essential constraint is imposed by the Ward identities, which provide relations between the effective vertex and the dressed propagators and also dictate how to maintain gauge invariance. Thus, the Ward identities should be used to obtain relations between the full on-shell imaginary part of the self-energy and the possible rung insertions. This is discussed for QED transport coefficients in [50,51], and for any SU(M) theory one should expect similar relations. Accordingly, one can reproduce the corresponding rungs by cutting the two-loop self-energy diagrams in all possible ways and then opening one line in every diagram in all permissible ways. In Fig. 6 we show one schematic example. The two-loop diagram in Fig. 4 a) is cut through the two loops and the cross denotes the lines which are opened. In this way one gets two possible topologies of a rung. The lines which are cut, but not opened, represent particles put on shell. Similarly, the external lines represent thermal on-shell excitations. By opening the cut lines one reproduces the 2 → 2 scattering processes shown in the right column of Fig. 6. The first row shows how to obtain the t and u channels of the scattering events; they are obtained from the same rung but with a different momentum flow along the rung, which is not shown explicitly. The second row shows how one gets the s channel. The topological structures obtained by this procedure are depicted in Fig. 7. First, only the diagrams a)-e) in Fig. 4 of the two-loop gluon self-energy have to be examined when looking for the topological structures of the rungs. The diagrams g) and h) are tadpole diagrams with one-loop corrections and they contribute to the real part of the self-energy. The diagram f) is a one-loop diagram with a tadpole correction and it also provides a contribution to the resummed propagator. Therefore, in general, all diagrams containing tadpoles do not have to be investigated further in this qualitative analysis. For the power counting analysis one should include all rungs which carry a g_{YM}^4 factor coming from the interaction vertices and which have one closed loop contributing a factor of M. The other factor of M, expected for the proper 't Hooft coupling order, is obtained when the external lines of the rungs on the right-hand side are joined with other rungs or with each other. In Fig. 7 we present all topological structures arising from the use of the Ward identity alone. The final relevant contributions to the kernel of the integral equation can, however, be found by using kinematic constraints and power counting arguments. The kinematic constraints are schematically represented by the dashed and the dotted lines. The dashed lines represent the cuts through rungs which are allowed by kinematics; all possible dashed lines that lead to the number conserving processes are shown. The dotted lines, although coming from the Ward identity analysis, reflect forbidden processes, since one on-shell massless particle cannot decay into two on-shell massless particles. Therefore, the structures j) and k) in Fig. 7 do not contribute to the kernel.
Also, any diagram involving crossed lines does not have to be considered here, since it is always suppressed by some powers of M. Accordingly, only the diagrams a)-i) constitute the kernel of the integral equation determined by the pinching singularity, and they all contribute at the order O(λ^2). It is also easy to observe that all the contributing rungs with the associated cuts may be converted to reproduce matrix elements in the scattering amplitude defining the collision term of the Boltzmann equation. The rungs a) and b), shown also in Fig. 6, represent the contribution to the scattering amplitude squared given by the t, u, and s channels. The diagram c) leads to the respective contribution from the contact interaction. The diagrams d)-i) reflect all possible interference terms.
At the hard scale all allowed rungs contribute at the order O(λ^2), where it is enough to count the number of interaction vertices and the number of color loops. To see the relevant M-dependence it is more convenient to count the closed loops of the spectral function shown in Fig. 1 a) with the rungs inserted.
When the soft scale starts to play a role, the power counting of the diagrams presented in Fig. 7 can change and not all diagrams remain of the same size. The rung c) is not affected by the soft physics, since all its lines must be hard and on-shell. To do the power counting of the other diagrams with momenta of the order O(√λ T), we use the (r, a) basis. It is important to notice that there can be many diagrams of the same topology but with different a and r assignments. In Fig. 8 we show exemplary diagrams from Fig. 7, with the a and r positions and the momentum convention indicated. The expression corresponding to the rung a) is proportional to

G_{ra}(l) G_{ar}(l) G_{rr}(k+l) G_{rr}(p+l).   (2.28)

The size of the rung is estimated as follows. All incoming and outgoing momenta are hard, k ∼ p ∼ O(T), and on-shell, while the loop momentum l is soft, l ∼ O(√λ T), and off-shell. In this case both the G_{ra}(l) and G_{ar}(l) propagators are of the order O(λ^{-1} T^{-2}). Additionally, since both G_{rr}(k+l) and G_{rr}(p+l) are on-shell, they contain delta functions maintaining energy-momentum conservation. When the loop momentum integration is performed, the phase space d^4l combined with the delta functions reduces to d^2l, which is O(λ T^2) when l is soft. Combining all these factors, one gets λ^2 from the explicit interaction vertices, λ from the phase-space suppression and λ^{-2} from the two soft propagators, which makes this rung of the order O(λ). The rung therefore has a different size at the soft scale than at the hard one, which is due to the Coulomb divergence characteristic of these scattering processes. This is, however, only a superficial difference, since there is an additional mechanism which makes this rung contribute to the integral equation at the expected O(λ^2) order. The best way to see it is to refer to the 2 → 2 collision kernel of the Boltzmann equation [49], which is, schematically,

C_{2→2}[χ] ∝ ∫_{k p k' p'} |M|^2 (2π)^4 δ^4(k+p−k'−p') f(k) f(p) [1+f(k')][1+f(p')] [χ(k)+χ(p)−χ(k')−χ(p')],   (2.29)

where the functions χ represent small nonequilibrium deviations from the Bose-Einstein distribution function f. When the 2 → 2 scattering processes represented by the rungs a) and b) in Fig. 7 occur in the medium via a soft momentum exchange, that is when k − k' = l with l ∼ √λ T, one encounters the following cancellation between the χ functions:

χ(k) + χ(p) − χ(k') − χ(p') ∼ O(√λ),   (2.30)

since the primed and unprimed momenta differ only by the soft momentum l. The prescription dictated by the Kubo formula has a structure similar to that of the Boltzmann equation [31], where the term [χ(k)+χ(p)−χ(k')−χ(p')] needs to be squared to compute any transport coefficient. This introduces an additional power of λ from the soft momentum and softens the contribution of the rung a), so that its final size is O(λ^2). An analogous mechanism applies to the diagram b), where the two vertical soft lines cause a λ^{-2} enhancement, the explicit vertices and the phase space introduce λ^3, and the Boltzmann equation structure (2.30) introduces the factor λ, which altogether gives the size O(λ^2). We also need to evaluate the interference terms, that is, the rungs d)-i). They are all of the order O(λ^2) when the off-shell exchange momentum is soft. To see this, we first consider the rungs f) and g) (for the notation of the rung f) see Fig. 8). They both contain one propagator with a soft momentum l whose contribution is O(λ^{-1}), but this is canceled by an additional phase-space suppression: due to the combination of the two delta functions in the propagators G_{rr}(k+l) and G_{rr}(p+l) with the phase space d^4l, the latter is reduced to d^2l ∼ O(λ). When assessing the size of the rungs h) and i) the same arguments hold as before.
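Since the estimates above reduce to adding integer or half-integer powers of λ, the bookkeeping can be automated. The sketch below (ours, purely illustrative) tallies the powers quoted in the text for each class of rung; the tuples are assumptions copied directly from the counting above.

```python
# Tally of the powers of lambda assigned to each rung class in the text:
# (vertices, phase-space reduction, soft propagators, squared chi-cancellation).
RUNGS = {
    "a/b  (t,u,s channels)":    (2, 1, -2, 1),
    "c    (contact, all hard)": (2, 0,  0, 0),
    "d/e  (interference)":      (2, 1, -1, 0),
    "f/g  (interference)":      (2, 1, -1, 0),
    "h/i  (interference)":      (2, 1, -1, 0),
}

for name, powers in RUNGS.items():
    print(f"{name:28s} -> O(lambda^{sum(powers)})")
# Every class comes out at O(lambda^2), the leading order of the kernel.
```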
The rung h) is shown in Fig. 8, and its expression contains a single soft propagator. The soft propagator G_{ra}(l) introduces O(λ^{-1}), and the number of integrals over the loop momentum is reduced as previously, so that we are left with d^2l ∼ O(λ). These two factors cancel each other, leaving the rungs at O(λ^2). In the rungs d) and e) the vertical line represents the soft propagator, which is λ^{-1}. Including further the phase-space suppression and the couplings from the explicit interaction vertices, one gets this rung at the expected O(λ^2) size.
2.5.2 The case with collinear singularities
Number changing processes contribute at the same order as 2 ↔ 2 processes (up to a logarithm). They are entangled in the same topological structures as the number conserving processes, shown in Fig. 6, but emerge under different kinematic conditions. The mechanism responsible for their occurrence is also more complicated than the one discussed above and is fully controlled by soft physics. Here we briefly and qualitatively discuss how they emerge and evaluate their sizes.
Collinear processes occur when one hard particle splits into two hard particles, accompanied by a soft gluon exchange with the thermal medium [70,71,72]. The topological structures corresponding to these processes can be obtained by the procedure shown in Fig. 9. As presented, the rungs representing collinear processes are reproduced by opening one outer line of the two-loop self-energy; the line which is opened is denoted by the black cross in the figure. The internal (shaded) lines of the self-energy represent propagators with soft momenta. They contain the hard thermal loop corrections, which are not shown explicitly in the first two columns of Fig. 9. Thus, whenever a cut goes through a soft line it means that the hard thermal loop is cut. Consequently, all the cut lines and the external lines are hard and nearly on-shell. In particular, in contrast to the number conserving processes, the thermal masses in the respective propagators must be included. To evaluate the size of the processes in Fig. 9 we consider in detail the rung shown in Fig. 10, reproduced with the a and r positions and the momentum convention. As before, there is more than one layout of the a and r assignment, and a complete analysis of the kernel of the spectral function has to include all the possibilities. The size of this rung can be evaluated similarly to the case without collinear singularities, but the power counting is more subtle. First, it is important to point out that whenever a soft line appears in the rung, it must be a G_{rr} propagator, since it carries the distribution function that accounts for the interaction with the medium. The G_{rr} propagator, in contrast to the other propagators, introduces a 1/√λ enhancement in the soft momentum region. Moreover, the process under consideration is in the collinear regime when there is a pair of adjacent retarded and advanced propagators with respect to a given momentum. If these propagators were bare, their product would be singular, as their poles would nearly pinch the real axis in the contour integration. This is, however, cured by the inclusion of the self-energies, which leads to a finite expression. As in the case of the pinching pole approximation, diagrams containing the products G_{ra}G_{ra} or G_{ar}G_{ar} instead of G_{ra}G_{ar} for the same momentum give a much smaller contribution to the whole expression and can be neglected in the leading order analysis. The expression corresponding to the rung shown in Fig. 10 is proportional to

G_{rr}(l) G_{ra}(l+k) G_{ar}(l+k) G_{rr}(l+k−p) (. . .),

where (. . .) denotes the contribution from the external propagators, which does not need to be shown explicitly. As mentioned, the external momenta are hard and nearly on-shell, k ∼ p ∼ T and k^2 ∼ p^2 ∼ O(λT^2), while the loop momentum is soft, l ∼ √λ T. In this kinematic region the integral over the loop momentum is dominated by dl_0 ∼ O(λT) in the frequency region and d^3l ∼ O(λ^{3/2} T^3). What is more, the G_{ra}(l+k) and G_{ar}(l+k) propagators are both O(λ^{-1}), since they are dressed with the self-energies that cure the pinch singularities. Additionally, since G_{rr}(l+k−p) is in the collinear regime with G_{ar}(l+k), it is also dressed and is of the order O(λ^{-1}). The properties of the propagators impose that (l+k)^2 and (l+k−p)^2 are O(λT^2), and the same holds for l^2, which is soft and dressed with the HTL correction. These conditions are, in turn, equivalent to the statement that the angles between all the participating particles are parametrically small, so that they all propagate collinearly: θ_{kl} ∼ θ_{pl} ∼ O(√λ).
The constraints on the angles impose constraints on the phase spaces, that is, d^3p ∼ |p|^2 d|p| sinθ_{pl} dθ_{pl} dφ ∼ O(λT^3) and d^3k ∼ |k|^2 d|k| sinθ_{kl} dθ_{kl} dφ ∼ O(λT^3). The loop momentum l must be spacelike and, since it is soft, there is an additional Bose-Einstein enhancement making G_{rr} of the order O(λ^{-3/2}). Combining all these powers of λ with the couplings coming from the explicit interaction vertices and the closed color loops, one finds this rung to be O(λ^2).
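Collecting the estimates of this paragraph and the previous one in a single tally (our schematic bookkeeping; the individual factors are the ones quoted above):

\begin{equation}
  \underbrace{\lambda^{2}}_{\text{vertices}}\,
  \underbrace{\lambda^{5/2}}_{dl_0\, d^3l}\,
  \underbrace{\lambda^{-2}}_{G_{ra}(l+k)\,G_{ar}(l+k)}\,
  \underbrace{\lambda^{-1}}_{G_{rr}(l+k-p)}\,
  \underbrace{\lambda^{-3/2}}_{G_{rr}(l)}\,
  \underbrace{\lambda^{2}}_{d^3k\, d^3p}
  \;\sim\; \mathcal{O}(\lambda^{2}).
\end{equation}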
The presence of the self-energy in the G_{ar}(l+k) and G_{ra}(l+k) propagators signals further interactions, which have not yet been explicitly shown or discussed. In fact, one can attach other lines to the side rails of the rung to reproduce processes involving a larger number of participating excitations. For example, one could add a hard line so as to obtain a double gluon emission; such a process is, however, subleading [49]. One could also add many soft lines to reflect the process of a hard excitation interacting many times with the medium via soft exchanges and finally splitting into two hard particles. Attaching any number of soft lines to the side rails is possible and they all contribute O(1) corrections. These processes, however, do not need to be explicitly included inside the diagram in Fig. 10, since they are resummed within the integral equation for the bulk viscosity. Also, apart from the pair of propagators with pinching poles, there is a pair of G_{ar}(l+k) and G_{ra}(l+k−p) propagators which contains nearly pinching poles, where other insertions are possible. We do not examine them here since they are a part of the forthcoming collinear analysis, which investigates the emergence of an effective vertex in the collinear regime. In Fig. 9 we depicted how one can reproduce the collinear processes when one soft line appears in the two-loop self-energy. The rightmost column of Fig. 9 presents the squares of the amplitudes of these processes. For the analysis to be complete one also needs the interference terms. The leading order interference terms which contain number changing processes are those with only 3-gluon vertices; diagrams with 4-gluon vertices are suppressed. The diagrams in question are shown in Fig. 11. The figure presents the procedure of opening one cut line of the self-energy to reproduce the topological structures representing the interference terms between collinear processes. To reproduce number changing processes the cutting line has to go through three lines of the two-loop self-energy, including the soft (shaded) line, as shown in Fig. 11. The line which can be opened is the cut line which is a part of the side rails. Opening the internal (vertical) line would mean opening all color loops and would lead to the emergence of nonplanar diagrams, which are suppressed in the large M limit. One can also cut the self-energy so that the soft line remains uncut; diagrams obtained in this way can only represent number conserving processes. In Fig. 11 we show only a few typical topologies with respect to the position of the soft line. The same structures, but inverted upside down, are also possible; they are the complex conjugates of those shown in Fig. 11. The interference terms are essentially the sums of the rungs shown in Fig. 11 and their complex conjugates. Additionally, all the structures shown can have different momentum and a and r assignments, and the full computation of the imaginary part of the spectral function requires a summation over all possibilities. To show that collinear splittings occur at the same order as 2 → 2 processes we examine one representative rung, depicted in the (r, a) basis in Fig. 12. Other rungs with collinear singularities can be considered analogously.
The expression corresponding to this rung is a product of dressed propagators, where (. . .) stands for the insertion of the propagators corresponding to the incoming and outgoing states; integrals over all momenta are included since pairs of propagators with respect to every momentum can have nearly pinching poles in the collinear regime. In this particular case there are three such pairs: G_{rr}(l−k+p)G_{ar}(l+p) ∼ G_{ra}(l−k+p)G_{ar}(l+p), G_{ra}(k)G_{ra}(p−k), and G_{ar}(p)G_{ra}(p−k), which have singularities with respect to the momenta l, k, and p, respectively. Notice that in the case of the k momentum integration the propagators which have pinching poles are both denoted as G_{ra}, due to the notation and the assignment of r and a with respect to p in Fig. 12. However, taking into account that G_{ra}(p−k) = G_{ar}(k−p), one obtains the expected combination G_{ra}(k)G_{ar}(k−p) responsible for the emergence of the singularity. Due to all these constraints G_{rr}(l), G_{rr}(l−k+p), and G_{ra}(p−k) have to be dressed and therefore they are all of O(λ^{-1}). In the leading order analysis l has to be spacelike, and thus G_{rr}(l) is O(λ^{-3/2}). All these properties of the propagators have their equivalents in the kinematic constraints, which reflect the collinearity conditions, namely small scattering angles, θ ∼ O(√λ). These, in turn, limit the respective phase spaces to d^3k ∼ d^3p ∼ O(λT^3). Additionally, the integral over dl_0 is dominated by a narrow frequency width, ∼ O(λT). Collecting all the powers of the coupling constant one finds this rung to be O(λ^2).
When assessing the size of the rung in Fig. 12 one realizes that the enhancement coming from the collinear singularities is always balanced by the phase-space suppression caused by the small scattering angle. Given that, there are more effects that need to be included in the leading order evaluation. One can attach infinitely many soft lines to a given pair of propagators with nearly pinching poles and still get a rung of the same order. This is schematically shown in Fig. 13, where a few exemplary insertions of a soft line are displayed (the r, a positions and the momentum convention are the same as in Fig. 12). The insertion of soft lines at leading order is governed by a few rules. The lines have to be G_{rr} propagators and they cannot cross each other, since they must be ordered in time and coherent. Moreover, their insertion must follow the standard a and r assignments, so that one has an odd number of a's at a given vertex. Also, a pair of propagators with nearly pinching poles must appear, otherwise the rung is suppressed by some powers of λ. If all these rules are respected, then attaching a G_{rr} soft line to the pair of lines with nearly pinching poles always introduces λ^{-3/2} from the size of the propagator itself, λ from the two explicit interaction vertices and a closed color loop, λ^{-2} from the pair of new propagators with nearly pinching poles, and a phase-space suppression d^4l ∼ λ^{5/2}. Altogether the insertion is O(1), and thus infinitely many soft lines can be added in this way without changing the size of the rung. All the possibilities have to be resummed, and this procedure reflects the diagrammatic representation of the LPM effect.
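In compact form, each soft-line insertion costs (our restatement of the factors just listed)

\begin{equation}
  \underbrace{\lambda^{-3/2}}_{\text{soft } G_{rr}} \times
  \underbrace{\lambda}_{\text{two vertices + color loop}} \times
  \underbrace{\lambda^{-2}}_{\text{new pinching pair}} \times
  \underbrace{\lambda^{5/2}}_{d^4l}
  \;=\; \mathcal{O}(1),
\end{equation}

so the number of insertions is unbounded and the whole series must be resummed coherently; this is the diagrammatic manifestation of the LPM effect.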
The resummation of all the possible ways of adding a soft line to a given rung is most efficiently done by finding an effective vertex. The vertex involves three hard and nearly on-shell lines, with all possible insertions of a soft line included. One exemplary vertex in the (r, a) basis is shown in Fig. 14, where one soft line can be added in three possible ways. There are more such combinations, since the r and a labels can be assigned in different ways. The inclusion of all possible combinations reflects the need for an integral equation, which has to be solved to find the form of the effective vertex. The solution should then be inserted into the kernel of the integral equation established by the pinch singularities. This approach is, however, demanding within the quantum field theory framework. The essential point to notice is that the insertion of any number of soft lines changes neither the size of the vertex nor, consequently, the size of a rung in which the effective vertex appears.
Integral equations
The bulk viscosity is controlled by the elementary scattering processes entangled in the rungs discussed above, and both 2 → 2 and effective 1 → 2 processes between gluons contribute at the same order in the 't Hooft coupling λ. For a quantitative computation of the bulk viscosity coefficient all diagrams representing scattering events have to be resummed, which leads to the relevant integral equations. For the prescription given by the Kubo formula the integral equation is shown schematically in Fig. 15; the kernel of the equation is presented in Fig. 16 and includes both the 2 → 2 processes and the effective 1 → 2 processes. For the number changing processes another integral equation needs to be solved: the equation for the effective vertex, shown schematically in Fig. 17. The shaded regions denote resummed parts. In this schematic representation of the integral equations we do not distinguish between the propagators with hard and with soft, HTL-resummed, momenta, but this can easily be done taking into account the discussion in Sec. 2.5. In general, the leading order analysis requires all propagators to be dressed with the self-energies. In principle all bare vertices could be replaced by the effective ones, but since their contribution in all diagrams apart from the last one in Fig. 16 is subleading, we do not show them explicitly. In the last diagram of Fig. 16 the effective vertices must be used, as arbitrarily many interactions with the medium through soft momentum exchanges occur at the same order. This rung is responsible for the interference terms, and the coherent resummation of all contributions reflects the LPM effect for the effective 1 → 2 processes.
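Schematically, suppressing indices, measures and the hard/soft distinction (our shorthand, not the precise notation of Figs. 15-17), both resummations are linear integral equations of the generic form

\begin{align}
  \mathcal{D}(k) &= \mathcal{S}(k) + \int_p \mathcal{K}_{2\to 2}(k,p)\, \mathcal{D}(p),
  && \text{(pinching-pole ladder, Figs. 15-16)} \\
  \Gamma_{\rm eff}(k) &= \gamma_0(k) + \int_l \mathcal{K}_{\rm soft}(k,l)\, \Gamma_{\rm eff}(k+l),
  && \text{(effective vertex / LPM, Fig. 17)}
\end{align}

with the kernels built from the rungs discussed in Sec. 2.5.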
The analytical computation of the bulk viscosity spectral function with quantum field theory tools is very challenging, and only a qualitative picture can be sketched. The same physics is, however, embodied in the Boltzmann equation as long as the same elementary processes govern its collision kernel. As examined in this section, both 2 → 2 and 1 → 2 processes can be reproduced from the planar diagrams of the spectral function. Both classes of processes occur at the same order and thus contribute to the kernel of the Boltzmann equation, discussed in detail in Refs. [24,48,49]. This justifies that the collision kernel of the Boltzmann equation captures the same physics as the kernel of the spectral function, shown in Fig. 16, and serves as a convenient way to compute transport coefficients. In particular, the analysis justifies the use of the Boltzmann equation to calculate the bulk viscosity coefficient of the SU(M) theory, as carried out in Ref. [29] and summarized in Sec. 2.2 of this manuscript.
Bulk viscosity at intermediate coupling
In the previous section we discussed the behavior of the bulk viscosity of the SU(M) gauge theory in the weak coupling limit. In the next sections we will discuss the strong coupling behavior. In these two limits we have well defined calculational tools: perturbation theory in the weakly coupled limit and the AdS/QCD correspondence in the strongly coupled limit. When the coupling is neither weak nor strong, the only reliable QCD results come from Euclidean Lattice QCD (LQCD) calculations. Unfortunately, the direct extraction of viscosities from LQCD is very nontrivial, since viscosities have to do with dissipation in real time while LQCD calculations are inherently static. Of course, if one could calculate the full Euclidean correlation functions, they could be analytically continued to real-time correlation functions. But since only a discrete and finite number of data points is available from LQCD, in practice this procedure introduces large uncertainties.
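The ill-posedness of this continuation is easy to exhibit numerically. The sketch below (ours; a minimal illustration, not an LQCD analysis) discretizes the standard finite-temperature relation between a Euclidean correlator and the spectral density, G(τ) = ∫dω ρ(ω) cosh[ω(τ − β/2)]/sinh(ωβ/2), and shows that the resulting kernel matrix is catastrophically ill-conditioned, so tiny statistical errors in G(τ) blow up in any naive inversion for ρ(ω).

```python
import numpy as np

beta = 1.0                                   # inverse temperature (units of 1/T)
taus = np.linspace(0.05, 0.95, 16) * beta    # Euclidean times: a finite data set
omegas = np.linspace(0.1, 20.0, 64)          # frequency grid for rho(omega)
domega = omegas[1] - omegas[0]

# Kernel of G(tau) = int dw rho(w) cosh[w(tau - beta/2)] / sinh(w beta/2)
K = np.cosh(np.outer(taus - beta / 2, omegas)) / np.sinh(omegas * beta / 2)
K *= domega

# The condition number measures how much relative errors in G(tau) are
# amplified when solving for rho(omega): here it is astronomically large.
print(f"condition number of the kernel: {np.linalg.cond(K):.3e}")
```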
In the literature, efforts were made to extract information on the bulk viscosity from LQCD results using sum rules [35,36,37,52,53,54,55]. In this section, we summarize the main points and explain why it is difficult to get any information on the bulk viscosity from the sum rules in particular and from LQCD results in general.
Extraction of the bulk viscosity from the sum rule relies on the Kubo formula for the bulk viscosity ζ, which relates ζ to the zero-frequency limit of ρ_{PP}(ω, 0)/ω (up to a convention-dependent normalization), where ρ_{PP} is the spectral density for the pressure-pressure correlator. Hence one may expect that the bulk viscosity can be extracted from a sum rule involving ρ_{PP}(ω, 0)/ω. Equivalently, it may be possible to extract ζ from a sum rule involving the correlation function of the trace operator Θ = T^μ_μ = T^{00} − 3P, because the energy-momentum conservation laws dictate that the zero-wavenumber limit of the T^{00} correlators vanishes. Note that different forms of the trace operator can be used; they were briefly discussed in the previous section.
In Ref. [36], low energy sum rules for the trace of the stress-energy tensor, Θ = T^μ_μ, were derived, relating the frequency integral of ρ_ΘΘ(ω, 0)/ω to thermodynamic quantities; here Θ_G is the gluon contribution to the stress-energy tensor trace and ρ_ΘΘ is the spectral density for the ΘΘ correlation function. As we argued in the previous section, the quark contribution is negligible in the large M limit, and we will not consider it here either; consequently we drop the subscript G. The trace average is taken with the vacuum contribution Θ_0 subtracted. In Ref. [52], a re-derivation of the results with a direct subtraction of the vacuum spectral density led instead to a modified sum rule, Eq. (3.4), formulated in terms of δρ, the deviation of the spectral density from its vacuum value at finite temperature T. The difference between the two sum rules was attributed to the non-commutativity of the limits lim_{ω→0} and lim_{k→0}, see [52]. In Ref. [54], the sum rule Eq. (3.4) was re-cast as Eq. (3.5), in terms of ρ*, the spectral density for the operator Θ* = T^μ_μ − (1 − 3c_s^2) T^{00}. The spectral density of the operator Θ* satisfies the same Kubo formula but has the added benefit that the limits lim_{ω→0} and lim_{k→0} commute.
The right-hand side of the sum rule (3.5) can be evaluated using the LQCD results. If one can then show that the left-hand side is a well defined function of the bulk viscosity, then these sum rules may be used to determine the bulk viscosity in the temperature region where LQCD calculations can be performed.
A first attempt at relating the sum rule integral (the left-hand side of Eq. (3.5)) to the bulk viscosity was carried out in [35,36]. In Ref. [36] an ansatz for the low-frequency behavior of the spectral density was introduced, Eq. (3.6), which does satisfy the Kubo formula and makes the sum rule integral on the left-hand side of Eq. (3.5) proportional to ζω_0. However, this form lacks the contribution from frequencies higher than the unknown parameter ω_0, see Ref. [80]. It turned out that the high frequency contribution is actually negative and largely cancels the low frequency contribution. The fact that this ansatz is not adequate has been shown in [37,52,54], both perturbatively and non-perturbatively. The biggest problem is that the right-hand side of Eq. (3.5) is negative while Eq. (3.6) makes the left-hand side strictly positive. This difference can be attributed to the presence of the glueball [54]. Hence, the sum rule Eq. (3.5) is not particularly useful, since it cannot be definitely established that the dominant contribution to the sum rule integral comes from the low frequencies.
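This cancellation is easy to visualize with a toy model. In the sketch below (ours, purely illustrative; the Lorentzian low-frequency shape and all numbers are assumptions, not the actual ansatz of [36]), a positive low-ω piece carrying the ζ information is offset by a negative high-ω piece, so the total sum-rule integral tells us almost nothing about ζ:

```python
import numpy as np

omega0 = 1.0   # (hypothetical) scale below which the ansatz is supposed to hold
zeta = 0.1     # (hypothetical) zero-frequency slope, playing the role of zeta

w = np.linspace(0.0, 50.0 * omega0, 200001)
low = (zeta / np.pi) * omega0**2 / (w**2 + omega0**2)              # positive, carries zeta
high = -(0.5 * zeta / np.pi) * omega0**2 / ((w - 10 * omega0)**2 + omega0**2)
rho_over_w = low + high

def integrate(f, x):
    """Trapezoidal rule, to keep the sketch dependency-free."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

print(f"full sum-rule integral : {integrate(rho_over_w, w):+.4f}")
print(f"low-frequency piece    : {integrate(low, w):+.4f}")
# The near-total cancellation shows the integral barely constrains zeta.
```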
If one cannot rely on the sum rule, then one needs to obtain the spectral density directly from LQCD calculations, at least in the k = 0 and |ω| ≪ T limit. This is not an easy task, as it involves analytic continuation when only a finite number of data points in Euclidean space is known. First attempts in this direction were made in [53,54], which do show that δρ*(ω, 0)/ω has a peak at ω = 0. Unfortunately, the actual values and behavior of ζ/s obtained in this way contain too much uncertainty at this point. All one can conclude from the LQCD studies right now is qualitative: the spectral density is peaked at low frequencies, while a reliable value of ζ/s is still out of reach.
Bulk viscosity at weak string and strong 't Hooft couplings with zero flavors
In the previous section we studied the bulk viscosity at weak 't Hooft coupling, and argued how the ratio of the bulk to shear viscosities should be interpreted at both weak and strong couplings. As mentioned therein, the strong coupling result depends on the existence of a gravity dual of the resulting framework. Our aim here is to analyze the SU(M) gauge theory at various values of the 't Hooft coupling and at high temperatures, as depicted in Fig. 18. The three regimes of interest are shown there: the yellow box denotes weak, the green box intermediate, and the blue box strong 't Hooft coupling, all at high temperatures. The theories governing the three regimes also differ, as discussed above: the weak and the intermediate couplings are studied using kinetic theory and LQCD, whereas the strong coupling will be studied using string theory. The latter is more elaborate because of its UV properties, and in fact differs quite a bit from what we expect from kinetic theory and LQCD. Let us start with the kinetic theory, which was discussed in detail earlier. The regular RG flow of such a theory is governed by the black lines in Fig. 18. At low energies the YM coupling becomes very large and the theory confines.

Figure 18: The three different regimes of interest used here to study bulk viscosity. The yellow box denotes the regime of kinetic theory, the green box denotes the regime of LQCD, and the blue box denotes the regime of string theory. All these regimes are analyzed at high temperatures, i.e. above the deconfinement temperature, and the regular RG flows connecting the yellow and the green boxes are denoted by black curves. The cascading RG flows, which specifically arise from string theory, are shown in the blue box. All three different RG flows lead to a consistent picture at low energies connecting the weak, intermediate and strong 't Hooft couplings.
However, if we increase the temperature, the coupling can be made smaller. In fact, at high temperature the 't Hooft coupling λ ≡ g_{YM}^2 M can become very small even for large M. This is of course the regime where kinetic theory can be applied (see section 2 for more details), and it is denoted by the yellow box in Fig. 18. One can similarly go to the intermediate coupling regime, whose dynamics is governed by LQCD.
What we now require is to understand the regime where the 't Hooft coupling λ can be very large for both weak and strong YM couplings. This is the regime where neither kinetic theory nor LQCD can help us, and therefore the only way we can have any analytic control is to use techniques of string theory. Of course, when M, the number of colors, is small, even string theory cannot provide a controlled laboratory, so it is the large M limit that can be tackled using stringy techniques. This is then the regime of gauge/gravity dualities, i.e. the dynamics at strong 't Hooft coupling on the gauge theory side may now be studied using a gravity dual description.
Clearly, since string theory provides a UV complete picture, it is natural to ask what UV completion would mean in the present set-up. Before we explore this side of the story, however, let us first elucidate the IR dynamics of the theory directly from the gravity dual description. The gravity dual description was originally provided in [58] (see also [63] for the mirror set-up, which will be useful soon). In simple terms, the gravity dual is given in terms of a resolved warped-deformed conifold with fluxes, with an additional black hole that provides the high temperature physics on the gauge theory side, i.e. the physics above the deconfinement temperature. In the absence of a black hole we expect minimal four-dimensional supersymmetry (which may also be broken), whose simplest description appears, on one side, from wrapped D5-branes on a non-Kähler resolved conifold [62], and on the other side, from fluxes on the resolved warped-deformed conifold alluded to above [61]. The "resolution" parameter in the resolved warped-deformed conifold is responsible for the UV completion, which we will discuss soon (see also [82] for a slightly different realization of the same story). In the following we want to discuss the background as well as the issue of supersymmetry, mostly for the IR part of the gauge theory. For simplicity we will concentrate on the Baryonic branch of the gauge theory, where the issue of supersymmetry is most prominently displayed. Later on, in section 4.1, we will concentrate on a more specific point in the moduli space of the corresponding gauge theory.
In the Baryonic branch, generated by M wrapped D5-branes on a non-Kähler resolved conifold [62], the gravity dual for the IR physics may be given by a type IIB background with three- and five-form fluxes, Eqs. (4.1)-(4.3) [61]. Here (θ_i, φ_i, ψ) are the angular coordinates, β is the parameter of the Baryonic branch, h is the warp-factor, and J is the fundamental (1, 1) form, which is not closed. We have also denoted the dilaton by φ and the five-form by C_5. The internal metric ds^2_6 of Eq. (4.2) carries additional warp-factors H_i(r). Note that the two-spheres, parametrized by (θ_1, φ_1) and (θ_2, φ_2), have different curvatures, governed by H_3 and H_4 respectively, and their inequality will be responsible for the UV completion. The complexified three-form flux G_3 then takes the form of Eq. (4.3) [61], where the M_i are certain functions expressed in terms of the vielbeins, whose form may be ascertained from eq. (2.113) of [61]. The E_i are defined with a choice of almost complex structure, Eq. (4.4), which is integrable for a constant dilaton; otherwise the three-form flux G_3 is defined as an ISD (Imaginary Self-Dual) form with respect to the almost complex structure (4.4). Note that (4.3) is a (2, 1) form, as one would expect from a supersymmetry-preserving background. Additionally, the choice of the Baryonic branch tells us that the gauge group is SU(2M) × SU(M), which is in fact one cascading step away from the confining SU(M) gauge group that we seek! In the blue box of Fig. 18 this may be seen as the second-to-last stage of the cascading RG flow before permanent confinement sets in.
One can also give a physical meaning to the Baryonic branch directly from the wrapped five-brane picture. The SU(2M) × SU(M) gauge group implies that, along with the M wrapped D5-branes, we have M D3-branes too. The five-branes wrap the two-sphere parametrized by (θ_2, φ_2). For vanishing size of this two-sphere, the M additional D3-branes preserve the same supersymmetries as the M wrapped D5-branes. However, if the two-sphere is of finite size, supersymmetry is completely broken, and the only way to preserve supersymmetry in this case would be to dissolve the D3-branes in the D5-branes.
Being on the Baryonic branch does not give a well-defined UV picture; we will still need to find the UV completion of the model. This will be discussed a bit later, but note that being on the Baryonic branch does tell us that one Seiberg duality lands us in the confining SU(M) gauge theory description. At non-zero temperatures, we will require black holes on the gravity side of our story. Since this is the premise on which the calculations in this paper are based, let us elaborate the story a bit more. At zero temperature, the duality sequence that we shall use is laid out in Fig. 19. In the bottom left corner, i.e. box (a), is the gauge theory configuration discussed in [58,62] with M D5-branes wrapped on the two-sphere parametrized by (θ_2, φ_2). This is a non-Kähler resolved conifold because at r = 0 there is a resolved two-sphere parametrized by (θ_1, φ_1). The usefulness of such a configuration will be spelled out a little later. The wrapped D5-branes on the non-Kähler resolved conifold give rise to the gravity dual background, which is a non-Kähler deformed conifold with three-form fluxes, much along the lines of (4.1), (4.2) and (4.3), and is given by box (b) in Fig. 19. The computations performed in section 4 will be based on this configuration, albeit with a black hole that signifies non-zero temperature, but with no flavors.

Figure 19: The two configurations, one in type IIB and the other in the M-theory uplift of type IIA, on which all the computations of sections 4 and 5 respectively will be based. On the left is the type IIB picture with the gravity dual given by a resolved warped-deformed conifold with fluxes. On the right is the M-theory uplift of the type IIA gravity dual. The IIA gravity dual involves a non-Kähler resolved conifold with fluxes, whereas the M-theory uplift is a seven-dimensional manifold with a G_2 structure. The type IIB computations will be done at high temperatures, i.e. above the deconfinement temperature, but with zero flavors. The type IIA computations, and also the M-theory uplift, will take into account both high temperatures as well as non-zero flavors.
A mirror transformation, à la Strominger-Yau-Zaslow [64], on both type IIB boxes of Fig. 19 produces the IR type IIA background whose gravity dual configuration involves a non-Kähler resolved conifold with fluxes, as shown in box (d) of the figure. The M-theory uplift of this is given in the top right-hand box of Fig. 19, i.e. box (e), which is a seven-dimensional G_2 structure manifold with G-fluxes [63]. Our computations in section 5 will be based on this specific M-theory manifold, albeit, again, at non-zero temperature but now including non-zero flavors. Interestingly, for the spectral function computation of section 6 we shall return to the type IIA picture.
Let us now come to the UV completion of these models that we alluded to earlier. On the type IIB side, this was first discussed in [58], but a full elaboration of the actual ingredients that constitute the UV degrees of freedom was given in [59] and [60]. We expect the UV theory to be a strongly coupled conformal field theory, as this would be the closest to being asymptotically free. The reason for choosing a CFT − and not an asymptotically free theory − as the UV theory is that we require strong 't Hooft coupling to allow for a gravity dual. In fact, a gravity dual description only exists if the corresponding gauge theory is strongly coupled at all scales, i.e. strongly coupled from UV to IR. For a large but finite number of colors, this means that the requirement of asymptotic freedom is not quite compatible with the existence of a gravity dual. Therefore the closest we can come to asymptotic freedom is to allow for a CFT in the UV. In the limit of an infinite number of colors, the 't Hooft coupling can be very large, yet the YM coupling can be made arbitrarily small.
One specific choice of a UV group that could lead to a CFT is SU(N + M) × SU(N + M), where we have introduced an extra parameter N. In the present context, the choice of N has a special meaning. In the type IIB theory, N signifies the number of D3-branes whereas M counts the usual wrapped D5-branes. The two-cycle on which the D5-branes wrap, i.e. the two-cycle parametrized by (θ_2, φ_2), should now be of vanishing size to preserve supersymmetry. In the blue box of Fig. 18, we have denoted the UV group SU(N + M) × SU(N + M), which is shown to get Higgsed to a smaller group SU(N + M) × SU(N) at a certain IR scale. This is followed by a series of cascading RG flows that eventually take us to the confining gauge group SU(M) in the far IR.
The complete RG flow depicted in the blue box of Fig. 18 can be described rather succinctly from both the type IIB and the type IIA theories. This will also answer the questions that we put aside earlier. From the type IIB side, the UV CFT may be easily described by allowing an additional M anti-D5-branes distributed on the northern hemisphere of the resolved sphere parametrized by (θ_1, φ_1). These anti-D5-branes are stabilized against collapse by fluxes, the details of which have appeared in [59,84]. The strings connecting the branes and the anti-branes are heavy, and they are integrated out at low energies. Thus at low energies we only see the cascading SU(N) × SU(N + M) theory. At high energies, the anti-brane degrees of freedom are integrated in, and the M D5-branes and the M anti-D5-branes combine to give M D3-brane degrees of freedom. Together with the N D3-branes localized at the south pole of the resolved sphere, this leads to the UV CFT described above. Therefore the three stages of operation, namely (1) the emergence of the CFT, (2) Higgsing, and (3) the cascading behavior, are all described neatly by the type IIB configuration of N D3-, M D5- and M anti-D5-branes on a non-Kähler resolved conifold with fluxes.
The correctness of our construction may also be ascertained from a T-dual type IIA configuration, as shown in Fig. 20. This is a single T-duality along the ψ direction and therefore should not be confused with the three T-dualities that we performed earlier to determine the mirror configuration. A single T-duality of a conifold along the ψ direction, in the type IIB theory, leads to a configuration of two orthogonal NS5-branes on the type IIA side. In the presence of N + M extra D3-branes on the type IIB side, the T-dual configuration is shown on the left of Fig. 20. The M D3-branes have five-brane origins, as discussed above, and so the configuration on the left of Fig. 20 gives us a CFT with gauge group SU(N + M) × SU(N + M). The reason why this is a CFT comes from the fact that the NS5-branes are not bent. Clearly, any bending of the NS5-branes would have led to running couplings of the gauge theories on the D4-branes [97]. Such bendings can be achieved by having an unequal number of D4-branes on the two sides of the NS5-branes. This feature may be achieved independently, but does not seem to come naturally from the configuration on the left of Fig. 20: a consequence of the absence of Coulomb branches in N = 1 gauge theories.
However, all is not lost, as these theories do have other branches, namely Baryonic, Mesonic and possible remnants of the N = 2 Coulomb branches. Without going into too much detail, which the reader may find in [61,82], one may easily see that a branch in the moduli space arises by putting an extra NS5-brane along the dotted line in the left configuration of Fig. 20. Happily, this does not break any extra supersymmetries but creates the necessary Higgsing effect that we require to jump-start the cascading process! On the right of Fig. 20 we show how one may go from UV conformal to IR cascading behavior. As should be obvious from Fig. 20, moving the M D4-branes along the parallel NS5-branes bends the NS5-branes, thus creating RG flows on the remaining D4-branes. The far IR physics is then exactly the confining SU(M) gauge theory with decoupled U(1)'s that we seek here. Switching on a non-zero temperature, we can study the various transport coefficients.
In the gravity dual, the IR story is clear: it is given as in (4.1) and (4.2).
The UV degrees of freedom start appearing from Region 2 onwards, as shown in [59], and as we go to large r we are effectively in Region 3, where the three-form fluxes vanish and the background asymptotes to an AdS_5 space. In this section we will use a slightly simplified form of this background and mainly concentrate on Region 1 − that is, on low energies − to study the bulk viscosity at strong 't Hooft but weak string coupling in the absence of fundamental flavors. In the next section we will put in the flavors and study the bulk viscosity, as well as the ratio of the bulk to shear viscosities, at both strong 't Hooft and strong string couplings, again concentrating on Region 1. For earlier work on bulk viscosity with a bottom-up approach, using two different AdS spaces in the UV and IR and for a wide class of models, see [83]. Note, however, that the study of bulk viscosity in [83] differs from our study here in at least two respects. First, the model considered in [83] has two fixed points, one in the UV and the other in the IR; this differs from the IR confining model that we consider here. Second, the study of bulk viscosity in [83] finds a violation of the Buchel bound [30]. Although this is possible in our set-up as well, by choosing a different lower bound for d_1 in (4.85) and (4.86), we do not analyze such cases here.
The type IIB dual background for large N thermal QCD
In [84] we made a preliminary study of bulk viscosity using the UV complete large N thermal QCD model of [58] with N_f = 0. The metric that we took in [84] is of the form of Eq. (4.5), a ten-dimensional black-hole background whose internal space is a warped resolved conifold, and not the resolved warped-deformed conifold that one would have expected from (4.2). This is a simplifying assumption which allowed us to study the bulk viscosity without worrying about the far IR regime of the gauge theory. Recall that the far IR regime of the gauge theory is governed by the blown-up three-cycle of the resolved warped-deformed conifold.
However, since the small r regime of the geometry is hidden behind the horizon radius r_h, our choice of the metric (4.5) is not too far from the correct answer. The resolution parameter a^2(r) is not the resolution parameter used on the brane side to control the UV behavior of the theory. On the brane side, i.e. in the gauge theory description, the M D5-branes wrap the vanishing two-cycle of the resolved conifold parametrized by (θ_2, φ_2), in such a way that the D5-branes (and the N D3-branes) sit at the south pole of the resolved two-cycle parametrized by (θ_1, φ_1), while the anti-D5-branes are distributed over the upper hemisphere of this two-cycle.
In the language of the metric (4.5), this means that putting the M D5-branes on the (θ_2, φ_2) two-cycle has caused an asymmetry quantified by the resolution parameter a^2. From the discussion above this would mean that a(r)^2 = O(ε) and should have no terms that are zeroth order in ε. This can be confirmed by plugging the metric into the equations of motion. The Einstein equations, Eq. (4.6), involve G_3, F_5 and τ, which are respectively the complexified three-form flux, the five-form flux and the axio-dilaton, as defined in (4.1) and (4.3). To figure out how the wrapped D5-branes, inserted in the non-extremal system, affect the warp-factor, we express the change as in Eq. (4.7), where A_0 ≡ −(1/4) log(L^4/r^4) is the conformal value and ε = 3g_sM^2/(2πN) is our expansion parameter. The resolution parameter a^2 may now be expressed as in Eq. (4.8), where, as we emphasized above, to zeroth order in ε the D5-branes wrap a vanishing two-cycle; we start seeing a non-zero resolution only at first order in ε. The two functions P(r) and Q(r) are related via the set of equations (4.9), where D_1, D_2 and D̃_1 are constants that may be fixed from the boundary conditions. This has been discussed in detail in [84] and, after the dust settles, the functional forms for P(r) and Q(r) can be explicitly represented as in Eq. (4.10); both behave well in the limit r → r_h, as one would have expected. In fact, knowing the functional form of Q(r) immediately tells us what the black-hole factor e^{2B} in the metric (4.5) should be. This may be expressed in the integral form of Eq. (4.11), which reproduces the conformal result for vanishing ε. Finally, plugging (4.10) into (4.7) and (4.8) gives us the O(ε) corrections to the conformal values of the resolution and the warp-factors, Eq. (4.12). The functional forms for a^2, e^{2B} and e^{−4A} are consistent with the general picture developed in [58], [59] and [85]. In particular, the O(ε) correction to the black-hole factor is consistent with the O(ε) corrections to the two black-hole factors g_1 and g_2 in [58]. There are also O(g_sN_f) corrections, from the N_f flavors, that we do not consider here; these are relegated to section 5.
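Even without the explicit expressions, the structure of the ε-expansion can be sketched schematically (our sketch; a_1, A_1 and b_1 are placeholders for the functions determined by P(r) and Q(r) in [84], and the zeroth-order black-hole factor is written in its conformal form):

\begin{align}
  a^{2}(r)  &= \epsilon\, a_{1}^{2}(r) + \mathcal{O}(\epsilon^{2}), \\
  A(r)      &= A_{0}(r) + \epsilon\, A_{1}(r) + \mathcal{O}(\epsilon^{2}),
  \qquad A_{0} \equiv -\tfrac{1}{4}\log\frac{L^{4}}{r^{4}}, \\
  e^{2B(r)} &= \Big(1 - \frac{r_{h}^{4}}{r^{4}}\Big)\big[1 + \epsilon\, b_{1}(r)\big]
               + \mathcal{O}(\epsilon^{2}),
  \qquad \epsilon = \frac{3 g_{s} M^{2}}{2\pi N}.
\end{align}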
Details of the bulk viscosity computations from the gravity dual
Bulk viscosity appears, in a system with SO(3) spatial symmetry, from the correlation of T_{xx} at two different points of four-dimensional space-time, with one point fixed at the origin. This means, as discussed earlier, that in the gravity dual the bulk viscosity may be computed from the fluctuations of the vielbeins e_k with k = 0, x and r. These fluctuations may be divided into positive and negative frequency parts, as expressed in Eq. (4.13), where ε is the same non-conformality factor as before and ω is the frequency. The other parameters appearing in (4.13) may be defined in the following way. The coefficients p_{nk} are in general functions of r as well as of |ω|, not constants; with constant p_{nk} the bulk viscosity would vanish despite the existence of a complex piece in (4.13). Note, however, that the δe_k ≡ δe_k(r, t) are all real functions of r and t.
The coefficients Γ_{0k}(r, |ω|) and Γ_{1k}(r, |ω|) capture the essence of the bulk viscosity computations here. In a system with SO(3) symmetry, Γ_{0x} takes the form of Eq. (4.14), where B(r, 0) is given in (4.11) and T, the temperature, is proportional to the horizon radius r_h. We also expect Γ_{0y} = Γ_{0z} = Γ_{0x}. On the other hand, Γ_{00} and Γ_{0r} take the form of Eq. (4.15). Although (4.14) and (4.15) are related to the conformal theory, we will use them to analyze the non-conformal regime of our model, as the imaginary parts of the fluctuations in (4.13) depend on the Γ_{0k} as well as on the p_{nk}. The latter are associated with extra sources coming from the distribution of anti-D5-branes in Regions 2 and 3. We can quantify these sources as in Eqs. (4.16) and (4.17), where we see that the imaginary part involves three infinite series of modes specified by the sources ∆^{(n)}_{2k}; here A_0 is defined in (4.7) and B_0 ≡ B(r, 0) is given in (4.11). Note that (4.17) involves five fluctuation modes, p_{n0}, p_{nx}, p_{nr}, p_{(n−1)x} and p_{(n−1)r}, as well as the three Γ_{0k}'s defined in (4.14) and (4.15). This means that, knowing ∆^{(n)}_{2k}, we will need at least five equations to solve for the fluctuations p_{nk}. One may also construct recursion relations from (4.17), beginning with Eq. (4.18), and so on. Note that there are two types of non-derivative terms in the first equation of (4.18): (a) the terms proportional to p_{00}, p_{0x} and p_{0r}, and (b) the terms proportional to p_{(−1)x} and p_{(−1)r}. The latter have no dynamics, so maybe we could use them to cancel the former terms via an identification of the form of Eq. (4.19). This would mean that, knowing p_{00}, p_{0x} and p_{0r}, one could not only build the next set of fluctuation modes from (4.18) but also determine the functional forms of the non-dynamical modes p_{(−1)k}. Unfortunately, such an identification would either overconstrain the dynamics or lead to apparent contradictions. To avoid this, we will simply set the non-dynamical modes to zero, p_{(−1)k} = 0, Eq. (4.20). In any case, an identification like (4.19) can never be used to cancel the non-derivative terms p_{1k} against the p_{0k}, as both sets of fluctuations are now dynamical. Thus generically we should assume the existence of the p_{(n−1)k} modes along with the p_{nk} modes. The next series of sources appears from ∆^{(n)}_{2x} and follows a strategy similar to the one above. These sources may be expressed in terms of the fluctuation modes p_{nk} as in Eq. (4.21), where this time four, instead of five, modes are needed: p_{nx}, p_{n0}, p_{nr} and p_{(n−1)x}. As before, the zeroth and first order recursion relations may be written as in Eq. (4.22), where this time an input of p_{0x} is needed to build the first order fluctuation equation.
In a similar vein, we now construct the third series of sources, associated with ∆^{(n)}_{2r}, as in Eq. (4.23), with the input of the four fluctuation modes p_{nx}, p_{n0}, p_{nr} and p_{(n−1)r} governing the dynamics. The recursion relations at zeroth and first order can easily be expressed in terms of the fluctuation modes, Eq. (4.24). At this stage let us ask whether the above three sets of equations, (4.17), (4.21) and (4.23), are enough to determine the five unknown functions p_{n0}, p_{nx}, p_{nr}, p_{(n−1)x} and p_{(n−1)r}. It would seem we need at least two more equations. However, a careful look tells us that the first equations in each of the three recursion series, (4.18), (4.22) and (4.24), are enough to determine the three functions p_{00}, p_{0x} and p_{0r}, provided the sources ∆^{(0)}_{2k} and the boundary conditions are adequately specified. Similar arguments apply to the next three functions, p_{10}, p_{1x} and p_{1r}: once we specify the sources ∆^{(1)}_{2k} and the boundary conditions, this in principle fixes the functional forms of all the p_{1k}. Thus the three equations (4.17), (4.21) and (4.23) should suffice.
For the present case we will work out the equation satisfied by p_{0x}, as this is the only component relevant for the bulk viscosity; this will be explained soon (see also [84]). In fact, what is required is not p_{0x} itself but rather its derivative p′_{0x}, and therefore we will work out the equation for Y_x(r, |ω|) ≡ p′_{0x}. In the process we will also see how to write the equations for p_{00} and p_{0r}. To start, let us define a few variables f_i, g_i and h_i, using which the zeroth order equations in (4.18), (4.22) and (4.24) may be re-expressed as in Eq. (4.25), where we have identified p_{01} ≡ p_{0x} and p_{02} ≡ p_{0r} to avoid clutter. One may define similar equations for the first order fluctuations p_{1k} using the recursion relations. The various coefficients appearing in (4.25) are written out in Eq. (4.26), where the Γ_{0k} have been defined in (4.14) and (4.15), and A_0 and B_0 are defined in (4.7) and (4.11) respectively. It is interesting to note that on the LHS of the equations in (4.25) the coefficients of p″_{0k} and p′_{0k} are mostly functions of the Γ_{0k}, whereas on the RHS the coefficients of the undifferentiated p_{0k} are all functions of the derivatives of the Γ_{0k}. The set of equations (4.25) is highly non-linear, and solving it will in general be a non-trivial exercise. It might therefore be instructive to first solve a slightly simpler system than (4.25), to gain some familiarity with the solutions, and then proceed to address the full set of equations. In the following subsection we analyze the simpler case, and in the subsequent one we study the full system.
A toy example in full detail
To study a toy example based on (4.25), the first question is: how can we simplify the set of equations? This is where the observation made above becomes useful: we can assume that the derivatives of the Γ_{0k} are much smaller than the Γ_{0k} themselves at some r ≫ r_h. This immediately implies Eq. (4.27), i.e. Γ′_{0k} ≈ 0, making the RHS of all the equations in (4.25) depend only on the sources ∆^{(0)}_{2k}. Note that (4.27) does not amount to absorbing the p_{0k} terms into the definition of the sources, because the sources are independent of the bulk fluctuations. Nor does it amount to invoking relations like (4.19), since such a procedure is generically prone to errors. Thus (4.27) is the only way to simplify (4.25).
With this in mind, the next set of steps may be elaborated as follows. Using (4.26), let us define another set of functions, F_i, as in Eq. (4.28); these will help us avoid cluttered formulae later when we write the equations for the fluctuations p_{nk}. Note that these functions are all expressed in terms of certain definite integrals (the lower bounds of these integrals could be r_h or r = 0, but these details will be irrelevant). There are also four other functions, G_i, that are not expressed in terms of integrals; they are given in Eq. (4.29), where the ∆^{(0)}_{2k} are the zeroth order sources that appear in (4.25). Note that G_2 and G_4 are functions of r as well as of |ω| because they depend on the sources ∆^{(0)}_{2k}(r, |ω|). Therefore, with (4.26), (4.28) and (4.29), we are ready to write the equation governing the fluctuation Y_x(r, |ω|) ≡ p′_{0x}, Eq. (4.30), which is a second order differential equation and therefore requires boundary conditions, both at the cut-off r = r_c and at the horizon radius r = r_h, to determine the functional behavior precisely. The coefficients a_{I1} appearing in (4.30) are non-trivial functions of the F_i and G_i variables defined in (4.28) and (4.29); they are written out in Eq. (4.31), where a_{41} = a_{41}(r, |ω|) and all the other a_{I1} are functions of r only. This implies Y_x = Y_x(r, |ω|), as expected. Solving (4.30) provides the fluctuation mode p_{0x}. Once p_{0x} is known, we can use it to determine the next fluctuation mode, p_{00}. Let us now define Y_0(r, |ω|) ≡ p_{00} — not p′_{00} this time — and write the equation for Y_0 as in Eq. (4.32), where one may use either of the two sets of expressions on its RHS to solve for Y_0; the equality between the two expressions can be argued easily from (4.25). Finally, knowing Y_x and Y_0, one may use any of the three equations in (4.25) to solve for Y_r(r, |ω|) ≡ p_{0r}.
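Numerically, an equation of the type (4.30) with data at r_h and at the cut-off r_c is a standard two-point boundary value problem. The sketch below (ours; the coefficient functions and the boundary values are hypothetical stand-ins, since the explicit a_{I1} come from (4.31)) shows one way to treat it with scipy's collocation solver:

```python
import numpy as np
from scipy.integrate import solve_bvp

r_h, r_c = 1.0, 10.0                      # horizon and cut-off radii (illustrative)
a11 = lambda r: np.ones_like(r)           # hypothetical stand-ins for (4.31)
a21 = lambda r: 3.0 / r
a31 = lambda r: -1.0 / r**2
a41 = lambda r: np.exp(-r / r_c)          # source-like term, mimicking a41(r,|w|)

def rhs(r, y):
    # y[0] = Y_x, y[1] = Y_x'; the ODE is a11 Y'' + a21 Y' + a31 Y = a41.
    return np.vstack([y[1], (a41(r) - a21(r) * y[1] - a31(r) * y[0]) / a11(r)])

def bc(ya, yb):
    # Dirichlet data at the horizon and at the cut-off (illustrative choice).
    return np.array([ya[0], yb[0]])

r = np.linspace(r_h, r_c, 200)
sol = solve_bvp(rhs, bc, r, np.zeros((2, r.size)))
print("solver converged:", sol.success)
```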
Let us now work out the first order fluctuations for our case, invoking (4.27). Again we expect three sets of fluctuations of the form $p_{10}$, $p_{1x}$ and $p_{1r}$, similar to the three sets of fluctuations $p_{00}$, $p_{0x}$ and $p_{0r}$ respectively for the zeroth order case. The equations satisfied by the first order fluctuations are a slight variation of (4.25), namely: where $f_i$, $g_i$ and $h_i$ are exactly the ones appearing in (4.26); $\Gamma_{0k}$ are as in (4.14) and (4.15); and $A_0$ and $B_0$ are the zeroth order values in (4.7) and (4.11) respectively. However not everything remains the same: the RHS of the equations (4.33) has two kinds of sources, (a) the first order sources $\Delta^{(1)}_{2k}$, and (b) sources appearing from the zeroth order fluctuations, $p_{0x}$ and $p_{0r}$. These changes in the sources imply that $G_4(r, |\omega|)$ and $G_2(r, |\omega|)$ in (4.29) may be replaced by $\widetilde{G}_4(r, |\omega|)$ and $\widetilde{G}_2(r, |\omega|)$, obtained by replacing the zeroth order sources with the shifted sources $\widetilde{\Delta}^{(1)}_{2k}$ in $G_2$ and $G_4$. Note that there are no additional changes to $G_1(r)$ and $G_3(r)$ in (4.29). The above observation immediately tells us that the equation satisfied by $p'_{1x} \equiv Y_{1x}(r, |\omega|)$ should be: where we see that the coefficients appearing on the LHS of (4.35) are the same as the ones appearing in (4.30), with $a_{11}$, $a_{21}$ and $a_{31}$ as given in (4.31). The only difference from (4.30) is the replacement of $a_{41}$ by $\widetilde{a}_{41}$, where: Similarly the equation for $p'_{10} \equiv Y_{10}(r, |\omega|)$ will be similar to (4.32), with the replacement of $Y_x$ by $Y_{1x}$ and of $G_2$ and $G_4$ by $\widetilde{G}_2$ and $\widetilde{G}_4$ respectively. Once we know $Y_{1x}$ and $Y_{10}$, we can use (4.33) to determine the equation for $Y_{1r}$. This way the first order fluctuations may be completely determined. The picture is now clear for the generic order fluctuations. If we want to study the $n$-th order fluctuations $Y_{nx}$, $Y_{n0}$ and $Y_{nr}$, all we need is to rewrite the sources, for example:
$$\widetilde{\Delta}^{(n)}_{20} \equiv \Delta_{20} - e^{-4(A_0 + B_0)} \int^{r} dy \left[\,3\,Y_{(n-1)x}(y, |\omega|)\,\Gamma_{0x}(y) + Y_{(n-1)r}(y, |\omega|)\,\Gamma_{0r}(y)\right].$$
Once these sources are specified, we can construct $\widetilde{G}_4$ using $\widetilde{\Delta}^{(n)}_{2r}$ and $\widetilde{\Delta}^{(n)}_{2x}$, and $\widetilde{G}_2$ using $\widetilde{\Delta}^{(n)}_{20}$ and $\widetilde{\Delta}^{(n)}_{2r}$, via the definitions in (4.34); and finally $\widetilde{a}_{41}$ using (4.36). The equations for $Y_{nx}$, $Y_{n0}$ and $Y_{nr}$ would then follow the steps outlined above.
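The order-by-order structure just described is naturally implemented as a loop: each order's shifted source is a definite integral over the previous order's modes, after which the same boundary-value solve is repeated. Below is a schematic Python sketch of that recursion; `Gamma_0x`, `Gamma_0r`, the warp factor and `solve_order` are all placeholders standing in for the data of (4.26) and for the solver of the $Y_{nx}$ equation.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Schematic order-by-order recursion: the n-th order source is shifted by an
# integral over the (n-1)-th order modes, mimicking
#   Delta_20 - e^{-4(A_0+B_0)} \int^r dy [3 Y_{(n-1)x} Gamma_0x + Y_{(n-1)r} Gamma_0r].
# Everything below is a placeholder model, not the actual supergravity data.

r = np.linspace(1.0, 10.0, 400)
Gamma_0x = 1.0 / r**2                  # placeholder for Gamma_0x(r)
Gamma_0r = 1.0 / r**3                  # placeholder for Gamma_0r(r)
warp = np.exp(-4 * (0.1 + 0.2))        # placeholder for exp(-4(A_0 + B_0))

def solve_order(source):
    """Placeholder for the boundary-value solve of the Y_nx equation."""
    return source / (1.0 + r**2)       # stand-in for the true solution

Delta_20 = np.exp(-r)                  # placeholder zeroth-order source
Y_x = solve_order(Delta_20)            # zeroth-order mode
Y_r = 0.5 * Y_x                        # placeholder for the 0r mode

for n in range(1, 4):
    shift = warp * cumulative_trapezoid(
        3 * Y_x * Gamma_0x + Y_r * Gamma_0r, r, initial=0.0)
    Y_x = solve_order(Delta_20 - shift)
    Y_r = 0.5 * Y_x
    print(f"order {n}: max |Y_nx| = {np.abs(Y_x).max():.3e}")
```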
Towards exact solutions for the fluctuations
To study exact solutions of the system of equations in (4.25), one way would be to eliminate the $p_{0k}$ pieces on the RHS by rearranging the set of equations there. However, a slightly simpler approach is to keep the RHS only as a function of $p_{0x}$ and eliminate the others. This leads to the following set of equations: where, as mentioned above, we kept the RHS as a function of $p_{0x}$ only. The set of equations (4.38) is in some sense more symmetrical than the earlier set (4.25). The coefficients are expressed in terms of brackets, which may be defined as: This formalism has some distinct advantages that will become clear soon. Note also that, in (4.38), there are no second derivatives of $p_{0r}$, which in turn will help us rearrange the set of equations further. But before we do so, let us define the coefficients appearing in (4.38). The $k_i$ are defined in the following way: where $k_6$ and $k_7$ will be used to describe the sources $\Delta_1$ and $\Delta_2$ in (4.38) below. All the $k_i$ are in turn constructed out of the $(f_i, g_k)$ coefficients defined earlier in (4.26).
In a similar vein, the $l_i$ coefficients are defined as: where the $(h_i, g_k)$ coefficients, used here to define $l_i$, are given in (4.26). As before, the $(l_6, l_7)$ coefficients will be used below to describe the sources $\Delta_1$ and $\Delta_3$. Finally, the $m_i$ coefficients may be defined in the following way: where again the $(h_i, f_k)$ coefficients are given in (4.26), and $m_6$ and $m_7$ will be used to describe the sources $\Delta_2$ and $\Delta_3$. The sources $\Delta_k$ may now be expressed as: The new sources are combinations of the original sources $\Delta_{2k}$, the coefficients defined in (4.40), (4.41), (4.42) and (4.26), and $p_{0x}$. These equations explicitly take us away from the simplifying assumption (4.27), and so are only valid when no approximations are made. Additionally, the dependence of all the sources only on $p_{0x}$ means that any further rearrangement of the sources will not introduce new dependences on other fluctuation modes. This means one may eliminate the $p_{0r}$ pieces from (4.38) to simplify them further in the following way: which mixes the sources $\Delta_1$ and $\Delta_2$, as well as $\Delta_2$ and $\Delta_3$. We could also write another equation, parametrized by $\gamma_i$ coefficients, that mixes the sources $\Delta_1$ and $\Delta_3$, but that won't be necessary for us. The new sources may be expressed in the following way: which explicitly shows that they are not only linear with respect to the fluctuation mode $p_{0x}$, but also that no other modes show up in the definition (4.45). The precise coefficients of $p_{0x}$ appearing in the sources above are respectively: which do not vanish generically, although special cases with vanishing coefficients could appear. Of course in the limit (4.27) everything vanishes, but since we are no longer considering the simplifying condition (4.27), we will assume non-zero coefficients. This consideration also allows us to express the other coefficients in (4.44), namely $\alpha_i$ and $\beta_i$, in the following suggestive way: which again do not generically vanish. At this stage the signs of the various $\alpha_i$ and $\beta_i$ coefficients are not important, but they could be worked out by carefully studying the relevant terms. The relevant terms depend on the $(k_i, l_i, m_i)$ coefficients defined in (4.40), (4.41) and (4.42) respectively, which in turn are expressed in terms of the coefficients given in (4.26). We also expect $\alpha_i \neq \beta_i$ as well as $\alpha_i \alpha_j \neq \beta_i \beta_j$ for all $i \neq j$, which may be inferred from (4.47).
Something interesting happens here. Eliminating $p_{00}$ from (4.44) lands us directly on an equation for $p'_{0x} \equiv Y_x$ whose form is similar to what we had earlier when we analyzed the toy example. This means, as in (4.28) there, we can define the following functions: using the integrals of the functions defined in (4.47), assuming that none of the $\alpha_i$ or $\beta_j$ vanishes. If any of the $\alpha_i$ or $\beta_j$ vanishes, the analysis has to be changed completely to get the requisite equation for $Y_x$. We can also define another set of functions, using $\alpha_i$, $\beta_j$ and the sources $\Delta_{[a,b]}$, that do not involve integrals, much like the ones in (4.29). They are: where, as before, we have, similar to $G_2$ and $G_4$ in (4.29), $P_1$ and $P_2$ that are functions of both $r$ and $|\omega|$ because of their dependences on the sources $\Delta_{[1,2]}$ and $\Delta_{[2,3]}$ respectively. Thus, using (4.48) and (4.49), we can write the equation for $Y_x$ in the following way: similar to (4.30). The coefficients $a_{I2}$ are defined in a form somewhat similar to (4.31), in the following way: Note that, although the analysis is similar to what we had for (4.30), there is an important difference now. The RHS of the equation (4.50), defined using $a_{42}$, is constructed with $P_3$ and $P_4$, which are in turn defined in (4.49). Both $P_3$ and $P_4$ are linear in $p_{0x}$, as may be seen from (4.45) and (4.43). Thus $a_{42}$ in (4.51) differs from $a_{41}$ in (4.31) by the presence of $p_{0x}$, implying that (4.50) is a third order equation in $p_{0x}$. We can use the above set of equations to formulate the equation for $p'_{00}$, instead of $p_{00}$, as we had in (4.32). Needless to say, the equation for $Y_0(r, |\omega|) \equiv p'_{00}$ follows a similar route as before, and we can write the equation for $Y_0$ in the following way: where the equality between the two sides is a consequence of (4.50). The way we have constructed the sources $P_3$ and $P_4$ in (4.49), $Y_0$ does not appear on the RHS of (4.52), and therefore knowing $Y_x$ we would know not only $Y_0$ but also $Y_r$. These may be worked out with some effort, but we will not do so here, as these fluctuation modes are not important for computing the bulk viscosity to the order that we want to analyze. The story however does not end here, as there are additional constraints on the $p_{nk}$ modes that appear from the flux EOMs, namely the five-form, the three-form and the axio-dilaton EOMs. We can also get another equation from the cross-term in the metric, namely the $rt$ component. All these should further constrain the fluctuation modes, and there is a worry that these additional EOMs may over-constrain the system, rendering it inconsistent. The scenario is subtle, so let us proceed carefully. First, and to $\mathcal{O}(\epsilon)$, we may ignore the three-form EOMs, as they start changing the equations only at $\mathcal{O}(\epsilon^2)$. Similarly, once we switch off the $g_s N_f$ corrections we are also effectively switching off the contributions from the axio-dilaton EOMs. On the other hand, we cannot ignore the five-form and the $rt$ EOMs. They will constrain the $p_{nk}$ modes, and it is easy to see how the $rt$ component of the metric EOM does this: where the summation convention for $k$ is the same as in (4.25). The other coefficients appearing in (4.54) are defined in the following way: $c_{nk}$ are constants that one may determine from the way the sources arrange themselves in the $rt$ EOM, whereas $c_{[q]nkm}$ are functions of $r$ such that: In fact this is where the above-mentioned constraints show up: one can determine the functional forms of the coefficients $c_{[q]nkm}$ and the constants $c_{nk}$ by comparing with the LHS of (4.54).
One may also get these coefficients directly from the $rt$ EOM. We expect these two ways of getting the coefficients to match because, in the absence of the sources, i.e. for the conformal case, the extra $rt$ equation did not over-constrain the system [84]. Motivated by the above discussion, one may now give similar arguments for the five-form EOM, where the constraint equation takes the following form: for every choice of $n$, and with $d_{nk}$ and $d_{[q]nkm}$ being coefficients similar to $c_{nk}$ and $c_{[q]nkm}$ respectively in (4.54), with $d_{[q]nkm}$ vanishing for $m \geq 3$ as in (4.55). As before, the RHS of (4.56) may be expressed in terms of the modes $p_{nk}$ and their derivatives, which may be compared with the LHS of (4.56). The system will be consistent when all the coefficients on both sides match. There is also a simpler way to see why the coefficients on both sides of the equations in (4.54) and (4.56) would match, once the RHS of these equations has been specified in terms of the sources and the modes. This is because all three equations in (4.17), (4.21) and (4.23) may be expressed as: with $f^{(k)}_{[q]nlm}$ being constrained in the same way as in (4.55), implying that the RHS of either of the two equations (4.54) and (4.56) takes the following form: where $b$ can be either $c$ or $d$ for (4.54) and (4.56) respectively. In this form (4.58) may easily be made to match the LHS of the respective equations. Finally, let us give a reason why the RHS of the two equations (4.54) and (4.56) is expressed in terms of the sources $\Delta^{(n)}_{2k}$ and the modes $p_{nk}$ and $p_{(n-1)k}$. For (4.54) it is easy to justify, since it is the Einstein equation for the $rt$ component and therefore should depend on the sources and the fluctuation modes. To $\mathcal{O}(\epsilon)$ we expect only a linear combination of the form given on the RHS of (4.54). On the other hand, in the five-form EOM (4.56), the fluxes used to balance the system against collapse [84] would in turn induce three-brane sources on the anti-D5 branes. The fluctuation modes should also affect these sources, and therefore the RHS of (4.56) is expressed as a linear combination of the sources and the fluctuation modes to $\mathcal{O}(\epsilon)$, justifying the above analysis.
The speed of sound in the strongly coupled plasma
We are now ready to perform the two sets of computations related to bulk viscosity: the speed of sound and the bound on the ratio of bulk viscosity to shear viscosity. The latter is again related to the speed of sound [30], so it will suffice to compute the speed of sound in the strongly coupled plasma. However, before we go about computing the sound speed, let us present the formula for the ratio of the bulk viscosity $\zeta$ to the entropy density $s$, which was already derived in [84] for an appropriate choice of the quadrant: where $Y_x(r, \omega) \equiv p'_{0x}(r, \omega)$ satisfies the differential equation given in (4.50). The result for the ratio of bulk viscosity to entropy density in a different quadrant can also be written down, and even their equivalence may be shown as in [84], but we will not do so here. Instead we will analyze the sound speed in the medium using all the ingredients we have collected so far.
One of the ingredients that we shall use extensively to compute the sound speed is the entropy density $s$. This has already appeared in (4.59) above, but the $s$ appearing there is only the conformal result, as the ratio (4.59) is already proportional to $\epsilon \equiv \frac{3 g_s M^2}{2\pi N}$. What we now need is the non-conformal correction to $s$. This may be written as: where the non-conformal corrections to $s$ enter through the resolution parameter $a^2$ given in (4.12). We have chosen zero bare resolution parameter for simplicity and therefore, as evident from (4.12), a non-zero resolution already implies non-conformality in this set-up. One may worry that a zero bare resolution parameter may fail to capture the essential ingredients for a UV completion [58,84]. However, that is not much of a concern here, as we are not exploring the UV physics. Thus a cut-off $r_c$ will feature prominently in our results, as evident from (4.59) already. All of this may be easily rectified, and we will elaborate on it somewhat in the next section, but since no essential IR physics is lost in this simplified set-up, we will continue with this construction. There is however one issue that we do want to emphasize at this point, and it has to do with the sign of the first expression in (4.12). Of course we naively expect $a^2$ to be positive, but the expression (4.12) involves various functions of log and dilog, so it will be instructive to check the sign of (4.12). Let us therefore start by defining $x \equiv \frac{r_h^2}{r_c^2} \ll 1$, using which we can express (4.12) as: which is negative definite. This may trigger an alarm because $a$ now becomes imaginary. Note that this problem does not arise if there is a bare resolution parameter $a_0$, however small (as $\epsilon$ may be tuned to be smaller than the smallest $a_0$). The way out of this conundrum is to notice that all expressions for fluxes etc. involve $a^2$ and not $a$. Further, $a^2$ appears in the metric (4.5) in the combination $r^2 + 6a^2$, and since we are only exploring the region $r \geq r_h$, the sign of $a^2$ does not create any problem there either. On the other hand, when there is no black hole, $r_h$ vanishes, and so does $a^2$ (4.61). All this has also appeared in [85], see the discussion around figure 3 therein, for a more generic choice of $a^2$ given as eq. (2.63) in [85]. We can of course resort to a more conservative approach by writing an expression for $|a|$ instead, and we shall do so in (5.13) in the next section, wherein a non-zero bare resolution parameter will also be taken into account. Coming back, the entropy density computed above in (4.60) is proportional to powers of $r_h$, so it vanishes when $r_h \to 0$. Additionally, when $r \to r_h$, the entropy density receives corrections that take us away from the conformal value. These corrections may be easily quantified as powers of $\epsilon$, but we won't analyze them here.$^{12}$

$^{12}$ For example, to first order in $\epsilon$ and for $r \to r_h$, we can sum up the series (4.61), or use (4.12), to show that the entropy density may be expressed as: where $s_0$ is the conformal value of the entropy density that can be read off from (4.60). To this order we can see that there is no $r_h$ dependence at the horizon.
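As a quick numerical illustration of the sign discussion above, the snippet below models $a^2$ as a negative, $\epsilon$-suppressed quantity and checks that the metric combination $r^2 + 6a^2$ stays positive throughout $r \geq r_h$. The magnitude assigned to $a^2$ is hypothetical (the log/dilog structure of (4.61) is not reproduced); only the $\epsilon$ suppression matters for the check.

```python
import numpy as np

# Sign check: even with a^2 < 0, the combination r^2 + 6 a^2 entering the
# metric stays positive on r >= r_h, provided |6 a^2| < r_h^2. The model for
# a^2 below is hypothetical, keeping only the epsilon suppression of (4.61).

g_s, M, N = 0.1, 5, 1000
eps = 3 * g_s * M**2 / (2 * np.pi * N)   # epsilon = 3 g_s M^2 / (2 pi N)

r_h, r_c = 1.0, 10.0
x = r_h**2 / r_c**2                       # x << 1, as in (4.61)
a_sq = -eps * x * r_h**2                  # model: negative and O(eps)-suppressed

r = np.linspace(r_h, r_c, 500)
combo = r**2 + 6 * a_sq
print("a^2 =", a_sq, "| min(r^2 + 6a^2) =", combo.min(), "| positive:", (combo > 0).all())
```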
At this point it may be instructive to spell out the steps that went into the computation of the entropy density $s$. This will in turn affect the computation of the sound speed $c_s$, since it depends upon $s$ via: The entropy density may be determined directly from supergravity by first computing the energy-momentum tensor and then dividing the result by the temperature $T$. The energy-momentum tensor, on the other hand, arises from the variation of the action of the form given by eq. (3.120) of [58]. One may add a Gibbons-Hawking term to it to control the boundary behavior, as evident from equations (3.121) and (3.123) of [58], but that does not alter the required linear term for our case. One may also add counter-terms to holographically renormalize the subsequent action, but since we are using a finite cut-off $r_c$, it is not necessary to add them at this stage. This aspect has already been alluded to earlier, and here we see a more concrete realization of it. Putting everything together, the sound speed for $r > r_h$ will be given by: where $x$ is the same parameter used in (4.61) before. Expectedly, the sound speed reduces to $c_s = \frac{1}{\sqrt{3}}$ in the conformal limit, and is smaller than $\frac{1}{\sqrt{3}}$ when non-conformal corrections are included. One may justify this by looking at either of the two expressions in (4.63): the two terms that account for the non-conformal corrections are negative definite$^{13}$ when $x < 1$ (or $r_h < r_c$). In the limit $r_h \ll r_c$, the sound speed (4.63) may be approximated by the expression in (4.64). The above limit is not without merit, as we expect $r_c$ to be much bigger than $r_h$ even if we restrict the dynamics completely to Region 1 of [58]. We can now use (4.59) and (4.64) to express the ratio of the bulk viscosity to the shear viscosity in a suggestive way, which we do next.

$^{13}$ This may be easily seen from the first expression in (4.63), written in terms of the variable $x$ in the following way: which is by construction smaller than $c_s = \frac{1}{\sqrt{3}}$. Note that for vanishing $\epsilon$ we get back the conformal result for the sound speed, as one would expect.
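To illustrate the claim of footnote 13, that the non-conformal corrections push the sound speed below $\frac{1}{\sqrt{3}}$, here is a small sketch using the thermodynamic relation $c_s^2 = d\ln T/d\ln s$. The entropy profile $s(T) = T^3\left(1 + \epsilon \log(T/\Lambda)\right)$ is a hypothetical stand-in, not the precise correction derived in (4.60)-(4.63).

```python
import numpy as np

# Toy check that a small non-conformal correction to the entropy density
# lowers the sound speed below the conformal value c_s^2 = 1/3, using
#   c_s^2 = d ln T / d ln s.
# The profile s(T) = T^3 (1 + eps * log(T/Lambda)) is made up for illustration.

eps, Lam = 0.05, 0.1
T = np.linspace(0.5, 5.0, 2000)
s = T**3 * (1 + eps * np.log(T / Lam))

cs2 = np.gradient(np.log(T), np.log(s))   # numerical d ln T / d ln s
print("conformal value:", 1/3)
print("c_s^2 stays below 1/3:", (cs2 < 1/3).all(), "| range:", cs2.min(), "-", cs2.max())
```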
The ratio takes the form given in (4.65), where $\eta = \frac{1}{4\pi}$ is taken at its conformal value to this order in $\epsilon$, $x = \frac{r_h^2}{r_c^2}$ as before, and $\alpha_x \equiv \frac{Y_x(r_c, 0)}{Y_x(r_h, 0)}$ is the ratio of the two fluctuations. We have also defined, without loss of generality, $Y_x(r_h, 0) \equiv \frac{1}{r_h}$ for $x \ll 1$. We expect the ratio (4.65) to be positive definite, as (4.59) is positive definite. The second term is already positive, and the first term can become positive if $\alpha_x$ is constrained in the following way: There is something puzzling about (4.65) that we should clarify right away. The way we have expressed (4.65) would seem to put an additional constraint on the ratio $\alpha_x$ of the fluctuations, as evident from (4.66). However, such a constraint does not seem to follow from (4.59). In fact, as long as $x^2 < \frac{13}{16}$, both (4.59) and (4.65) should be positive definite. Since the expression (4.65) is basically a rewriting of (4.59) using the expression (4.64), it implies that (4.65) should not introduce any additional constraint of the form (4.66) on the ratio $\alpha_x$. Then why is there a new constraint? One way to argue for this would be to observe that the expression (4.65) is generic in the sense that it may be re-expressed as: where $a(x)$ and $b(x)$ are variations of the coefficients appearing in (4.65). This generalization however suffers from the appearance of explicit cut-off dependences of the respective variables. There could also be $\mathcal{O}(\epsilon)$ corrections to (4.63) and (4.64) that may change the coefficients of (4.65). These corrections appear from $\mathcal{O}(\epsilon)$ corrections to the temperature $T$, which we had identified with the horizon radius $r_h$. To see this, let us first take the cut-off temperature $T_c$ used in [58], which may be expressed as: where $e^{2B}$ and $e^{2A}$ are defined in (4.11) and (4.12) respectively. To $\mathcal{O}(\epsilon)$, the functional forms of the various parameters, i.e. $g(r) \equiv e^{2B(r, \epsilon)}$ and $h(r) \equiv e^{-4A(r, \epsilon)}$, appearing in (4.68) may be determined exactly as: where $L^4 \equiv \frac{27 g_s N}{4}$, and note the appearance of higher powers of $x$ in the black-hole factor $g(r)$ defined at the cut-off $r_c$. This series is similar to the series (4.61) defined for the resolution parameter $a^2$. The connection was of course spelled out earlier in (4.11), and once we plug (4.69) into (4.68), the temperature may be expressed as an expansion involving the series $\sum_{n=1}^{\infty} \frac{n^2\, x^{2(n+1)}}{(n+1)(2n-1)}$, where the second line of the expansion applies in the limit $x \ll 1$. The corrections are exactly the ones that one would expect from switching on non-conformalities in the system. However, note that even in the limit $\epsilon \to 0$, our expression for $T_c$ seems to have an additional factor of the form: which implies a cut-off dependence of the temperature. Clearly when we take $r_c \to \infty$ we recover the conformal result, but the appearance of $x$ in (4.71) as well as in (4.70) means that a UV completion is necessary to argue for the physical value of $T_c$ here. Naively taking $r_c \to \infty$ for non-zero $\epsilon$ will not give us the correct answer, which of course resonates well with the UV completion discussed in [58]. Thus there is a way to holographically renormalize the system, following the procedure given in [58], that would take care of the log pieces in the metric and other variables in the problem. Once this is accomplished one may, in some restricted sense, take $r_c \to \infty$. This is a specific UV completion wherein the UV cap gives rise to an asymptotically conformal theory. For such a case the temperature does take a physical value, which may be expressed as: where $\Lambda$ is related to the QCD scale for this model.
The above is the so-called boundary temperature of [58], defined in the far UV. We will however need to define the temperature at any given scale, not just in the UV, to avoid issues like (4.71) in the absence of any non-conformalities. Let us therefore take the following definition of the temperature: where $a_1$ and $a_2$, which can be functions of $x = \frac{r_h^2}{r_c^2}$, will be determined below. Note that $T$ and $T_c$ are similar when the $a_i$ take specific values extracted from (4.72). In general, however, $T$ should be the temperature that occurs naturally in this framework. This means we need to change slightly the formula for the entropy in (4.60), by replacing $r_h^3$ in (4.60) by $r_h^4/T$, with $T$ given by (4.73). The sound speed will also change from (4.63) to the following: where the expected cut-off dependence appears through $x$ as before. Clearly when $a_1 = \frac{1}{\pi L^2}$ and $a_2 = 0$, we recover the sound speed computed in (4.63). However now, when both $a_1$ and $a_2$ are functions of $x$, the $\epsilon = 0$ limit gives us: which takes us away from the value $c_s^2 = \frac{1}{3}$ in the conformal limit. This is not what we expect here, so we can use (4.75) to determine the functional form of $a_1(x)$. There are clearly two possible solutions for $a_1(x)$, namely: where $b$ is an as yet undetermined constant. The second choice is not acceptable in a theory that is holographically renormalizable, as it blows up when the cut-off is taken to infinity. This implies that $T$ in (4.73) can only be: with constant $b$. What value can $a_2(x)$ take? To determine this we would need to study the full holographically renormalized temperature. This is in general a tedious exercise, but we can get a hint from the renormalized boundary temperature $T_c$ that we determined earlier in (4.72). To first order in $\epsilon$, the renormalized boundary temperature depends on $\log r_h$. This tells us that we can make the following ansatz for $a_2(x)$: where $c_1(x)$ and $c_2(x)$ are polynomials in $x$ that have neither $\log x$ nor $x^{-n}$ pieces. The two functions $c_1(x)$ and $c_2(x)$ contribute to the full sound speed through a series of the form $\sum_n \frac{x^{2n}}{n(2n-1)}$, and we see that the result is not so different from our earlier value of the sound speed (4.63). The difference lies in the additional term proportional to $da_2/dx$, which in turn depends on how $c_1(x)$ and $c_2(x)$ depend on $x$. If $c_n(x) = -|c_n|$ with constant $c_n$, and $x \ll 1$, the sound speed is simple and is given by: where $b > 0$, and the signs are dictated by the fact that the beta function is negative, so the sound speed is smaller than $\frac{1}{\sqrt{3}}$. The additional term in the sound speed (4.80) means that the ratio of bulk to shear viscosities, i.e. (4.65), changes to: where expectedly for $|c_2| = 0$ we recover (4.65). The RHS now crucially depends on $\alpha_x$, i.e. on the ratio of the fluctuations $Y_x(r_c, 0)$ and $Y_x(r_h, 0)$ satisfying (4.50). The equation (4.50) is difficult to solve, partly because of our ignorance of the precise sources $a_{42}$ defined in (4.49) using $\Delta_{[1,2]}$ and $\Delta_{[2,3]}$ via (4.45). Nevertheless, the constraint (4.66) allows us to make the following ansatz for $\alpha_x$: where $d_1$ is another positive definite quantity. Note that we have not included a term proportional to $x^{3/2}$ in (4.82). This is done precisely to bring the ratio of bulk to shear viscosities (4.81) into the following suggestive form: where the cut-off dependence, compared to (4.81), now appears only through the last term. In the absence of precise knowledge of $d_1$ and $c_2$, this is the best we can do at this stage.
However, note that the negative terms in (4.83) cannot offset the sign of $\zeta/\eta$ in (4.83), because we have already established the positivity of (4.83) from the original expression (4.59). The concern, however, is the choice (4.82). How are we justified in the selective choice of the coefficients in (4.82)? How do we even know that such a choice will solve the EOMs? The answer to both questions lies in the specific UV completion, or more appropriately in the distribution of the anti-D5 branes in Regions 2 and 3. Once we plug (4.82) into (4.50), we can in principle determine the form of the sources $\Delta_{[1,2]}$ and $\Delta_{[2,3]}$ in $a_{42}$, given via (4.49). One can then re-arrange the anti-D5 distributions to match the functional forms of $\Delta_{[1,2]}$ and $\Delta_{[2,3]}$. This way the ansatz (4.82) may be justified.
Once this is settled, note that the negative definite second term cannot be very large, as all three constants appearing there, namely $d_1$, $|c_2|$ and $b$, are finite numbers. In fact, for large cut-off $r_c$ it is easy to establish, without loss of generality, the following upper bound on the coefficient $d_1$: where $r_h$ is the horizon radius. This also means that the cut-off dependent term in (4.83) will dominate over the negative definite second term. This is still consistent with the overall positivity of the ratio (4.83). However we now need the lower bound on $d_1$. To determine this, we first note that the ratio of bulk to shear viscosities (4.83) satisfies the following bound: which would eventually control the behavior of the fluctuation modes studied in section 4.2. In the next section, we will study the sound speed and the viscosity bound with non-zero fundamental flavors and with string coupling of order 1. We will rederive some of the above results, but in a different regime of the parameter space.
Such an analysis will hopefully shed light on the underlying universality of the results derived here.
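Before moving on, here is a small numerical sketch of the kind of consistency check discussed above. The precise bound (4.85) is not reproduced in the text, so for illustration we test a Buchel-type form $\zeta/\eta \geq 2\left(\frac{1}{3} - c_s^2\right)$ against a toy $\zeta/\eta$ profile built from hypothetical constants $d_1$, $|c_2|$ and $b$ of (4.82)-(4.83); all numbers are made up.

```python
import numpy as np

# Toy consistency check of a bulk-viscosity bound. The profile below mimics
# the x-dependence discussed around (4.83): a sqrt(x) piece from the alpha_x
# ansatz, an |c_2| piece, and a cut-off (log) piece. The tested bound is the
# Buchel-type form zeta/eta >= 2(1/3 - c_s^2), assumed here for illustration.

d1, c2_abs, b = 0.3, 0.1, 1.0              # hypothetical positive constants
x = np.linspace(1e-4, 0.5, 200)             # x = r_h^2 / r_c^2 << 1

cs2 = 1/3 - 0.05 * x                        # toy non-conformal sound speed
zeta_over_eta = d1 * np.sqrt(x) + c2_abs * x - b * x * np.log(x)

bound = 2 * (1/3 - cs2)
print("zeta/eta positive:", (zeta_over_eta > 0).all())
print("bound satisfied on the whole range:", (zeta_over_eta >= bound).all())
```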
Bulk viscosity at strong string and strong 't Hooft couplings with non-zero flavors
In the previous section we saw how one may study the bulk viscosity, the sound speed and the bound on the ratio of the bulk to shear viscosities at strong 't Hooft coupling using a gravity dual in type IIB theory. At this stage one may attempt a few improvements of the present scenario by including both the flavor degrees of freedom and the UV regions. One may even ask the questions in the regime where the string coupling itself is of order 1, which of course still maintains strong 't Hooft coupling on the gauge theory side. The latter is however harder to study, because it is the regime where even S-duality does not help. The question then is whether we can say something concrete in this regime of parameter space. One simple answer to the enigma may be to T-dualize the system to type IIA, by including the flavor branes, and then lift the configuration to M-theory. This should in principle accomplish the task, except that the T-dual scenario leads to a configuration of intersecting NS5-branes with the intersection region blown up to a diamond [86,87]. This is not necessarily bad, and in fact useful results have been drawn out of this configuration in the past [88], but the requirement of keeping track of the NS5 degrees of freedom may thwart a simple analysis of the system. What we are looking for is a configuration of manifolds and fluxes that we could use to succinctly address a similar set of questions as in the previous section, avoiding the unnecessary requirement of including extra degrees of freedom. This is exactly where the mirror dual of the type IIB framework comes in handy. In fact, lattice-compatible results pertaining to glueball spectroscopy were obtained in [101], and P(article)D(ata)G(roup)-compatible results pertaining to meson spectroscopy were obtained in [102], by working with the mirror dual.
The mirror type IIA model and its M-theory uplift
As discussed above, and also alluded to in Fig. 19, the model that we want to use here is the M-theory uplift of the type IIB scenario that we studied earlier. This is the MQGP model of [63,89], which at weak string coupling has a type IIA description. One important procedure that goes into the construction of [63,89] is the so-called delocalized mirror symmetry via the Strominger-Yau-Zaslow (SYZ) prescription [64]. This prescription involves a two-step procedure: first, viewing the Calabi-Yau manifold as a special Lagrangian $T^3$ fibered over a base that is taken very large, and second, performing three T-dualities along the $T^3$ fiber. In this subsection, we discuss the details of the procedure.
The first requirement, that of a large base, is important. It has to do with nullifying the contributions from open-string disc instantons whose boundaries appear as non-contractible 1-cycles in the special Lagrangian (sLag) $T^3$ fibered over the base. To see this more clearly, let us define three delocalized T-dual coordinates $(x, y, z)$, which are basically proportional to the $(\phi_1, \phi_2, \psi)$ coordinates respectively that we encountered earlier. These coordinates are valued in the fiber torus $T^3$ via [63]: where the $s_i$ are constants whose values may be derived from [91]. Interestingly, the choice of the coordinates $(x, y, z)$ allows us to study the local geometry of the underlying manifold. Furthermore, using the results of [90], the following conditions, as shown in [89,103], are satisfied: for the underlying $T^2$-invariant special Lagrangian manifold of [90] for the resolved and deformed conifolds. This immediately implies that, if the underlying resolved warped-deformed conifold is predominantly either completely resolved or completely deformed, the local sLag $T^3$ of (5.1) is then the required sLag that allows for the SYZ mirror construction via local T-dualities. Let us analyze this further by taking the type IIB background given in (4.5), but now with $e^B = 1$. The latter requirement is simply to streamline the ensuing discussion. As we saw above, to enable the use of SYZ mirror duality via three T-dualities, one is required to take a large base. This immediately means taking large complex structures of the aforementioned two two-tori of the sLag $T^3(x, y, z)$ fibration. One may easily implement this via the following considerations [92]: for appropriately chosen large values of $f_k(\theta_k)$ with $k = 1, 2$. This choice does not change the local NS three-form flux, as was shown in [91,92]. Globally the underlying manifold can be a non-Kähler manifold, as we discussed earlier. This is the advantage of using the $(x, y, z)$ coordinates. On the other hand, the fact that one may be allowed to choose large values of $f_k(\theta_k)$ was justified later in [63]. The main idea is basically the requirement that the metric obtained after the SYZ mirror transformation, applied to the non-Kähler resolved warped-deformed conifold, should resemble, at least locally, a non-Kähler warped resolved conifold. This means that after incorporating (5.3) into (4.5), the $(x, y, z)$ coordinates discussed in (5.1) will parametrize the local behavior succinctly. The global considerations will follow afterwards, as shown in [93,92].$^{15}$

$^{15}$ To justify the delocalization method used while constructing the type IIA mirror, a la the SYZ triple-T-duality prescription [64], and its subsequent M-theory uplift, one may argue as follows. Consider the example of the mirror of a D5-brane wrapping the resolved $S^2$ with fluxes, as studied in the first reference of [91]. The M-theory uplift can be made free of delocalization, ensuring that one can construct a permissible $G_2$ structure manifold for the entire domain of validity of the delocalized coordinates. For example, in the delocalized large-complex-structure limit and after a fixed-$\psi$ coordinate rotation, one obtains the SYZ mirror to be a D6-brane wrapping a non-Kähler deformed conifold. Now, as shown in section 6 of [91], one can define an appropriate set of vielbeins to construct an explicit $G_2$ structure, in terms of which the M-theory uplift of the previously obtained type IIA mirror could be rewritten, and which is valid for all values of $\psi$. In other words, the mirror for $\psi = \psi_0$ coincides with the triple-T-dual, fixed-$\psi$ rotated type IIA mirror obtained assuming delocalization. This essentially states that the type IIA mirror in equation (6.23) of the first reference in [91], obtained by descending to type IIA from the arbitrary-$\psi$ M-theory uplift, will be the same as the fixed-$\psi_0$ type IIA mirror of equation (5.64) obtained using delocalization for $\psi = \psi_0$. Hence we could just replace $\psi_0$ by $\psi$ in the type IIA mirror obtained assuming delocalization. This therefore implies that the type IIA mirror is effectively free of the delocalization restriction.
In the local geometry we can now perform three T-dualities$^{16}$, first along the coordinate $x$, then along $y$ and finally along $z$, to get the local mirror manifold. The details of this construction, utilizing the results of [91,92], were first worked out in [63]. The local mirror captures all the right properties of the expected dual configuration on the type IIA side, and one may then use the coordinates $(\phi_1, \phi_2, \psi)$ to express the global metric as $ds^2_{IIA}$ (see [91,93,92,63] for details). An additional ingredient that appears naturally from the SYZ procedure, from the type IIB three- and five-form fluxes as well as the axio-dilaton, is the one-form type IIA potential $A$. Such a one-form is useful for constructing the M-theory uplift of the mirror type IIA, as was shown in [63]$^{17}$. The global M-theory metric takes the following form: where $\varphi$ is the type IIA dilaton that appears from the mirror transform of the type IIB dilaton. Once the dilaton is allowed to take a non-trivial value, both in type IIB and on the mirror type IIA side, one starts seeing the effects of the flavors. This is simply because, on the type IIB side, a non-trivial axio-dilaton shows up only when we switch on $N_f$ seven-branes. Of course, not all the $N_f$ seven-branes are required to be local D7-branes, but having D7-branes makes the mirror picture more transparent, as these eventually contribute to the dilaton $\varphi$ on the type IIA side. Once the dust settles, the $g_{\mathbb{R}^3}$ and $g_{tt}$ components appearing in (5.4) may be defined in the following way$^{18}$:

$^{16}$ Now also switching on $e^B$ in (4.5).

$^{17}$ As is standard in such constructions, the one-form $A$ may not be globally defined, although its field strength will be. On the type IIB side such a one-form will lead to either a RR two-form field or the axion, depending on the T-duality direction.

$^{18}$ Note that, unless mentioned otherwise, we shall always assume that $\log r$, in expressions like (5.5), is written as $\log \frac{r}{r_c}$, with $r_c$ being the cut-off radius. To avoid clutter, we will also take $r_c = 1$ so that $r$ remains dimensionless.
(5.5)
where $r_h$ is the horizon radius, and both $g_s N_f$ and $\frac{g_s M^2}{N}$ are expectedly small$^{19}$. Note also that both metric components are independent of the resolution parameter $a^2$. In fact, the only metric component that depends on the resolution parameter is the $g_{rr}$ component, whose explicit value is given by: where the full structure of $a^2$ will be given later. The functional forms of the coefficients appearing in (5.5) and (5.6) are determined by mapping the local metric to the warped resolved conifold metric$^{20}$ with a resolution parameter $a^2$. In addition to that, and in the MQGP limit of [63], the $\alpha_{\theta_k}$ factors for $k = 1, 2$ are angular coordinates such that, in the neighborhood specified by (5.7), we can allow the decoupling of the five-dimensional spacetime $M_5(t, x_{1,2,3}, u)$ from the internal six-dimensional space $M_6(\theta_{1,2}, \phi_{1,2}, \psi, x_{10})$. This decoupling is effected by making the Kaluza-Klein (KK) modes very heavy. The above discussion more or less summarizes the mirror construction as well as its M-theory uplift. However, it would be instructive to compare this with the type IIA brane construction of Fig. 20, which deals with both the UV and the IR brane configurations. The IR picture is of course the Klebanov-Strassler construction, which is obtained by making a single T-duality along a direction orthogonal to the wrapped D5-brane world volume, i.e. along $z$ of (5.1). This yields the RHS of Fig. 20, if we ignore the parallel NS5-brane. In other words, we get $M$ D4-branes straddling a pair of orthogonal NS5-branes whose world-volume directions are parametrized by $(\theta_1, x)$ and $(\theta_2, y)$ [95,96]. The mirror picture discussed here is then obtained by making two further T-dualities, along the $x$ and $y$ directions. Each of these T-dualities would yield Taub-NUT spaces from the corresponding NS5-branes [98]. The $N_f$ flavor D7-branes would yield $N_f$ D6-branes that are then uplifted to M-theory as KK monopoles [99]. These are also Taub-NUT spaces.

$^{19}$ In section 4 we took $g_s \to 0$ with $N, M$ very large and $N_f$ vanishing, such that $\frac{g_s M^2}{N} \ll 1$ and $g_s N_f = 0$. Here we take $g_s < 1$ and $N_f = \mathcal{O}(1)$ with $N, M$ still very large. Again $\frac{g_s M^2}{N} \ll 1$, but now $g_s N_f < 1$. The latter can be implemented, for example, by choosing $g_s \sim 0.4$ and $N_f \sim 2$. Such a choice will guarantee that $(g_s N_f)^m \left(\frac{g_s M^2}{N}\right)^n \ll 1$ even for $n = 1$ and $m \in \mathbb{Z}$. Note however that $g_s \to 0$ does not always imply $g^2_{YM} \to 0$. We can have $g^2_{YM} = \mathcal{O}(1)$ when $N_f = 0$. This will be elaborated in section 6.4.

$^{20}$ Recall that globally we can only put a non-Kähler metric on the resolved conifold [62].
Combining everything together then leads to a seven-dimensional manifold with a $G_2$ structure and with G-fluxes. This configuration is precisely equivalent to the uplift of the wrapped D5-branes on a warped resolved conifold of [92,91,62].
Quasi-normal modes, attenuation constant and the sound speed
Let us now discuss the main ingredient of our construction, namely the quasi-normal modes in the dual gravitational background. The procedure involves a few steps that we lay out in the following. Building on the ideas developed in [100] and [103], and using combinations of metric perturbations that are invariant under infinitesimal diffeomorphisms, as discussed in [100], the gauge-invariant combination of scalar modes of the (M-theory) metric$^{21}$ perturbations was constructed in [103]. A discussion of the same appears in Appendix A.
Next, we work near the decoupling limit prescribed in (5.7), and choose the other three angular coordinates $(\psi, \phi_{1,2})$ in the mirror metric (5.4) as $\psi = 2n\pi$, with $n = 0, 1, 2$, and small $\phi_{1,2}$. We also choose our radial variable henceforth as $u \equiv \frac{r_h}{r}$. Using these we can define $B(u)$ as in (5.9), in the context of the gravitational dual of large-$N$ thermal QCD with $N_f \neq 0$, where $N_f$ is the number of flavors. This functional form of $B(u)$ appears in the construction of the gauge-invariant $Z_s(u)$ as in (5.10), where the $H_{ab}$ functions are given in appendix A, and $T$ is the temperature whose form will be given below. Note that the upshot of appendix A is essentially the construction of the gauge-invariant $Z_s(u)$, which satisfies a certain EOM to be elaborated in the following$^{22}$.
In obtaining an EOM for $Z_s(u)$, we will make use of $q_3 = \frac{q}{\pi T}$ and $w_3 = \frac{w}{\pi T}$, where $T$ is the temperature that appears in (5.10) above.

$^{21}$ As discussed above, this corresponds to the local uplift of the delocalized Strominger-Yau-Zaslow [64] type IIA mirror of the holographic type IIB dual of [58] of large-$N$ thermal QCD, having integrated out the six angular directions as in [104], up to NLO in $N$ in the MQGP limit of [63].

$^{22}$ Our emphasis here will be to determine the EOM up to NLO in $N$.
We will express $T$ in terms of all the variables that appear in the metric. To proceed, and for later brevity, we start by defining the quantity $C_{kj}(u)$ as in (5.11), where $(k, j)$ are integers. Now, assuming the resolution to be larger than the deformation in the resolved warped-deformed conifold in the type IIB background of [58] in the MQGP limit, and using the decoupling limit (5.7), the temperature $T$ may be expressed as in (5.12) (see also [103]), where $G_{\mu\nu}$ is the M-theory metric (5.4) and $C_{11}(u)$ may be extracted from (5.11). We can also go to the limit where the $\alpha_{\theta_i}$ are $\mathcal{O}(1)$ numbers. This way the temperature may be written completely in terms of the resolution parameter $a^2$ and the horizon radius $r_h$. Interestingly, when $a^2 \gg r_h^2$, the temperature scales as inverse $r_h$; otherwise, the temperature is proportional to $r_h$. In the limit of vanishing flavors, small bare resolution parameter, and large cut-off, the expression for the temperature becomes identical to what we took on the type IIB side (see (4.77) and (4.78)). The bare resolution parameter on the type IIB side, as given in (4.12), was taken to be zero. A natural question then is what happens if we take a non-zero bare resolution parameter$^{23}$. A particular choice of $a(u)$ can be as in (5.13); this way $b$ may serve as the bare resolution parameter in (5.12), and $c_1(u)$, $c_2(u)$ are some slowly varying functions of the $u$ parameter (not to be confused with the $b$ and $c_1$, $c_2$ of (4.77) and (4.78)). One may compare (5.13) with the type IIB resolution parameter (4.12) in the limit $b \to 0$. The functional forms in the two cases are similar, but not identical. This is intentional, because the choice (5.13) allows us to perform computations on the mirror side more efficiently than the choice (4.12) would. This in turn will also affect some of our final results, so the comparison with the type IIB side will have to be done carefully.
With these definitions at hand, we are now ready to write down the equation of motion for $Z_s(u)$ appearing in (5.10); it may be expressed as in (5.14).$^{23}$

$^{23}$ Note that allowing a bare resolution parameter on the type IIB side allows us to perform the SYZ mirror transformation more efficiently [91]. Here, however, we use the word bare to denote the part of the resolution parameter that is independent of $g_s N_f$ and $\frac{g_s M^2}{N}$. Of course the $r_h$-independent piece of $a(u)$ vanishes.
Equation (5.14) is a second-order differential equation in $u$ whose solutions will give us the precise gauge-invariant variables that we seek here. It depends on two non-trivial functions of $u$, namely $m(u)$ and $l(u)$, whose functional forms will be important. Both these functions may be expressed in terms of the $C_{kj}(u)$ of (5.11), as well as certain other functions that we elaborate in the following. We start with $m(u)$, which may be written as: where note the appearance of $C_{21}(u)$ and $C_{23}(u)$ defined in (5.11), as well as the $A_i(u)$ that form the various coefficients above. The function $A_1(u)$ may be written as: which expectedly simplifies for vanishing $b$. On the other hand, at the boundary, when $u$ vanishes, $A_1(0)$ is proportional to $(\omega_3^2 - q_3^2)^2$, which is now expressed in terms of $T$ defined at $u \to 0$ instead. Similarly, $A_2(u)$ may be written in the following way: which is again proportional to $(\omega_3^2 - q_3^2)^2$ at the boundary $u \to 0$. When $b$ vanishes, $A_2$ is unaffected, but the term itself comes multiplied with $b$ in (5.15), so it decouples completely. The third term in (5.15) is proportional to $b^2$, so we expect it to decouple in the limit of vanishing $b$. To see what happens at the boundary, i.e. when $u \to 0$, we express $A_3(u)$ as: which is expectedly proportional to $(\omega_3^2 - q_3^2)^2$, but the term itself decouples because it appears together with a factor of $u$ in (5.15), much like the previous term in (5.15). The remaining two coefficients, $A_4(u)$ and $A_5(u)$, are in a similar vein to (5.16), (5.17) and (5.18) and share much of the same properties. They take the following form: and become proportional to $(\omega_3^2 - q_3^2)^2$ when $u \to 0$ at the boundary, but do not decouple in a simple way as before. In this sense they share the property of the first term in the definition (5.15). The boundary behavior of $m(u)$ can then be given by the limiting expression (5.20), where $\epsilon \equiv \frac{3 g_s M^2}{2\pi N}$ is the same expansion parameter that we used in section 4. Both the $C_{2j}$ factors behave as $\log u$, but are suppressed by $\epsilon$ as well as by $g_s N_f$ (5.11) (any constant factors get suppressed by $\epsilon$ from (5.15)). Thus $m(0)$ seems to blow up as $\frac{1}{u}$ or $\frac{\log u}{u}$. This preliminary analysis is however naive, because precisely in this regime the UV cap modifies the boundary behavior appropriately to avoid any such pitfalls. Therefore a more relevant question to ask is the behavior of $m(u)$ at the horizon, i.e. when $u \to 1$. We will analyze this below, but before that let us discuss the behavior of the other function, $l(u)$, appearing in (5.14).
The expression for $l(u)$ turns out to be very long, so we shall content ourselves with demonstrating that the horizon $u = 1$ is an irregular singular point whenever the bare resolution parameter is non-zero. To see this, we give below the expansion of $l(u)$ about $u = 1$: where $C_{21}(1)$ may be extracted from (5.11) by putting $u = 1$ therein. For vanishing bare resolution parameter (5.21) vanishes, so a minimal resolution is necessary to see this behavior at the horizon. The above expression for $l(u)$ near the horizon is what we need, and we could also go to the $u \to 1$ limit of $m(u)$ in (5.15) to determine its behavior at the horizon. However, the results are expressed in terms of both $q_3$ and $\omega_3$. To elaborate further, we need to first express $\omega_3$ in terms of $q_3$ and then identify the subsequent behavior of $m(u)$ and $l(u)$ at the horizon. To this effect, we make the ansatz (5.22) and substitute it into $m(u)$ and $l(u)$. Here $\alpha(u)$ and $\beta(u)$ are certain functions whose values will be determined near the horizon. We then first perform a small-$q_3$ expansion, followed by an expansion around $u = 1$, and lastly a large-$N$ expansion. The procedure is straightforward, albeit a little tedious. After the dust settles, we come up with the following expansion for $m(u)$, keeping terms up to $\mathcal{O}\big(q_3, \frac{g_s M^2}{N}\big)$ and the most singular term in $u$ near $u = 1$, namely: with $C_{21}$ as in (5.11). For vanishing bare resolution parameter there is a further simplification: $m(u \to 1)$ behaves simply as $\frac{1}{1-u}$, as may be easily seen from (5.23). On the other hand, the behavior of $l(u)$ at the horizon may be read more directly from (5.21) as: which expectedly vanishes for vanishing bare resolution parameter, and has the required irregular singular point. The functional forms of $m(u)$ and $l(u)$, expressed using the dispersion relation (5.22) and analyzed near the horizon $u \to 1$, define the regime that we want to concentrate on here. We could also study the system at the boundary by attaching an appropriate UV cap controlling, in turn, the behavior of $m(u)$ and $l(u)$, but this will not be the emphasis of this section. Our aim is to explore the near-horizon behavior, where one sees $u = 1$ as an irregular singular point of (5.14). To proceed, let us make the ansatz $Z_s(u) \equiv e^{S(u)}$, where we shall assume $[S'(u \sim 1)]^2 \gg |S''(u \sim 1)|$. This derivative requirement essentially converts (5.14) into a simple quadratic equation in $S'(u)$ with coefficients $m(u)$ and $l(u)$. The solutions are: At this stage it would be interesting to ask what happens when the derivative condition is not satisfied. Clearly in this case we will get a second order inhomogeneous differential equation, which becomes homogeneous when the bare resolution parameter vanishes. Generically it is harder to deal with the inhomogeneous case, because of the complicated forms of $m(u)$ and $l(u)$, and the homogeneous form is not a suitable choice for the system undergoing SYZ transformations [64,91]. Thus the simplification and calculability attained from the derivative requirement guarantee not only analytic control, but also solutions not far from the regime of interest. With this in mind, the next set of steps is standard$^{24}$. Choosing the minus sign in (5.26), one obtains the following: which has a simple pole structure of $-\frac{1}{2(u-1)}$ in the limit of vanishing bare resolution parameter $b$. The other parameters appearing in (5.27) are $C_{21}(u)$, defined in (5.11), and the two functions $B_1$ and $B_2$, defined in (5.28).
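Since the full $m(u)$ and $l(u)$ are unwieldy, the reduction above is easy to illustrate symbolically. The sketch below assumes (5.14) has the schematic shape $Z_s'' + m\,Z_s' + l\,Z_s = 0$, so that the ansatz $Z_s = e^{S}$ with $[S']^2 \gg |S''|$ collapses it to a quadratic in $S'$; the $m(u)$ and $l(u)$ used are placeholders that only mimic the pole structures discussed above.

```python
import sympy as sp

# WKB-type reduction sketch: with Z_s = exp(S) and [S']^2 >> |S''|, an
# equation of the schematic form Z'' + m Z' + l Z = 0 reduces to
#   (S')^2 + m S' + l = 0.
# The m, l below are toy models of the pole structures near u = 1; they are
# NOT the full expressions built from the C_kj(u) of (5.11).

u = sp.symbols('u', positive=True)
Sp = sp.Symbol('Sp')                    # stands for S'(u)
b = sp.symbols('b', positive=True)      # bare resolution parameter

m = 1 / (1 - u)                         # toy: simple pole at the horizon
l = b**2 / (1 - u)**4                   # toy: irregular singularity for b != 0

roots = sp.solve(sp.Eq(Sp**2 + m * Sp + l, 0), Sp)
for rt in roots:
    print(sp.simplify(rt))
# For b -> 0 one root vanishes and the other keeps a simple pole at u = 1,
# mirroring the b -> 0 behavior quoted after (5.26)-(5.27).
```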
Let us now assume that $q_3 \to 0$ as $N^{-1-\kappa}$ with $\kappa > 0$. One might worry that imposing this one would obtain, near $u = 1$ (which is an irregular singular point), a solution of the type $e^{S(u)} = (1 - u)^{\gamma} F(u)$, implying $u = 1$ to be a regular singular point. This does not happen, and therefore demanding the vanishing of the residue of $S'(u)$ at $u = 1$ gives the following values for $b$, $\alpha$ and $\beta$: where $C_{21}(u)$ is defined in (5.11). In fact this is all we need to determine both the sound speed $c_s$ and the attenuation constant $\Gamma$, because the first term in (5.22), i.e. the term proportional to $q_3$, gives us the sound speed as$^{25}$: where one can see that the result differs from the conformal answer of $\frac{1}{\sqrt{3}}$, expectedly, by the $\frac{g_s M^2}{N}$ and $g_s N_f$ factors. Even in the absence of the fundamental flavors $N_f$, the sound speed deviates from the conformal answer. The form of the deviation is consistent with what we had earlier in (4.63), although the precise factors differ. This is understandable in the light of the different choices of the supergravity parameters in the type IIB and the M-theory pictures.

$^{24}$ Although we do not undertake it here, a more generic analysis away from $u = 1$ can be performed, and from there the limiting form of (5.27) can be ascertained. Needless to say, the results match.

$^{25}$ Recall that we can express the dispersion relation (5.22) in terms of the sound speed $c_s$, the shear viscosity $\eta$ and the bulk viscosity $\zeta$ as: where $s$ is the entropy density. This means $\alpha(u)$ in (5.22) is related to $c_s$, and $\beta(u)$ in (5.22) is related to the shear and bulk viscosity combination, i.e. the attenuation constant $\Gamma$. However, since we are analyzing the system close to the horizon, i.e. $u \to 1$, the relevant parameters for us will be $\alpha(1)$ and $\beta(1)$.
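The way $c_s$ and $\Gamma$ sit inside the dispersion relation of footnote 25 also suggests a simple numerical cross-check: given quasi-normal frequencies $w_3(q_3)$, a least-squares fit in the monomials $q_3$ and $q_3^2$ recovers $\alpha$ (hence $c_s$) and $\beta$ (hence $\Gamma$). The snippet below runs this on synthetic data; the values $\alpha_{\rm true}$ and $\Gamma_{\rm true}$ are made up.

```python
import numpy as np

# Extracting c_s and the attenuation constant from a dispersion relation of
# the schematic form w3(q3) = alpha*q3 + beta*q3^2, with alpha real (sound
# speed) and beta pure imaginary (attenuation). Synthetic data stand in for
# actual quasi-normal mode frequencies.

alpha_true, Gamma_true = 1/np.sqrt(3) - 0.01, 0.06   # made-up "true" values
q3 = np.linspace(0.01, 0.2, 30)
w3 = alpha_true * q3 - 1j * Gamma_true * q3**2        # synthetic QNM data

A = np.vstack([q3, q3**2]).T.astype(complex)          # design matrix
coeff, *_ = np.linalg.lstsq(A, w3, rcond=None)

print("c_s   ~ Re(alpha) =", coeff[0].real)           # recovers ~0.567
print("Gamma ~ -Im(beta) =", -coeff[1].imag)          # recovers ~0.06
```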
The attenuation constant $\Gamma$ may now be easily extracted from (5.22) by plugging in the value of $\beta$ from (5.29). To NLO in $N$, $\Gamma$ may be written as: where $T$ is the temperature, and again we see that even in the absence of fundamental flavors the attenuation constant differs from the conformal value of $\frac{1}{6\pi T}$. The parameter $C_{21}(1)$, defined in (5.11), becomes unity when $N_f = 0$, so the deviation from the conformal value is then solely governed by $\frac{g_s M^2}{N}$.
The case with a vanishing bare resolution parameter
Let us now discuss what happens if one sets $b = 0$ in (5.13). We briefly dwelt on this earlier, where we saw how (5.15) and (5.21) behave when $b$ vanishes: (5.21) decouples completely, but some remnants of (5.15), as seen from (5.16), (5.17), (5.18) and (5.19), survive. Interestingly, this now makes $u = 1$ a regular singular point of (5.14). To proceed, let us then rewrite $Z_s(u)$ using an analytic function $F(u)$ in the following way: where $C_{23}(1)$ can be evaluated from (5.11) by putting $u = 1$ in the required expression, and we have defined $\epsilon \equiv \frac{3 g_s M^2}{2\pi N}$ as the non-conformality factor. With the definition (5.32), the EOM (5.14) becomes: which is a second order homogeneous differential equation with coefficients defined by the parameters $a_1$, $a_2$ and $b_2$. The $\frac{1}{u}$ and $\frac{\log u}{u}$ terms are remnants of the equivalent terms in (5.20). The $a_1$ and $a_2$ coefficients take the following form:
where $C_{21}(1)$ is extracted from $C_{21}(u)$ in (5.11). It is interesting to note that the combined expression with $a_1$ and $a_2$ may be succinctly expressed as: which simply converts $C_{21}(1)$ in (5.34) to $C_{21}(u)$. This is expected from the way we represented the EOM for $Z_s$ in (5.14). On the other hand, the form of $b_2$ in (5.33) may be expressed as: where $C_{23}(u)$ is given in (5.11) (note the appearance of $C_{23}(u)$ instead of $C_{23}(1)$, much like what we have in (5.35)), and $D$ is defined as in (5.37). To solve (5.33) we replace $C_{21}(u)$ by $C_{21}(1)$; implementing this, $F(u)$ takes the following form: where $d_1$ and $d_2$ are constants. We can motivate the replacement $C_{21}(u) \to C_{21}(1)$ in both (5.35) and (5.36) in the following way. In (5.35), this amounts to dropping the $\log u$ term near $u = 0$ compared to the $\log N$ term in the large-$N$ limit, i.e. setting $a_2 = 0$ in (5.33). In (5.36), this amounts to keeping only terms of $\mathcal{O}\big(\frac{g_s M^2}{N}\big)$, as the $\log u$ term in $C_{21}(u)$ is already suppressed by $g_s N_f$ (see (5.11)).
The $u \to 0$ limit may seem a bit puzzling, because so far we have analyzed the system near the horizon, i.e. near $u \to 1$. However, in (5.14), for vanishing bare resolution parameter, as we saw earlier, $l(u)$ vanishes and the EOM is solely governed by $m(u)$ (5.15). The latter may be defined both at the boundary (5.20) and at the horizon (5.23). Thus extrapolating $F(u)$ to the boundary is still well defined, modulo the subtlety of including a UV cap.
Let us now turn to the various choices of the parameters $d_1$ and $d_2$ in the solution (5.38). If one sets $d_2 = 0$, then the small-$u$ expansion of $F(u)$ will be given by the small-$u$ expansion of the first part of the solution (5.38), i.e. the $d_1$ part, as: To analyze the boundary conditions, let us first set ${\rm Im}\, b_2 = 0$. There is already a problem at this stage, but let us still carry on. At the boundary $u = 0$, if ${\rm Re}\, b_2 < 0$ then (5.39) blows up exponentially. This behavior persists even if we include the $a_2 \log u$ piece in (5.33). Thus, to be able to impose a Dirichlet boundary condition on $F(u)$, i.e. to impose $F(u) = 0$ at the boundary, one needs to set $b_2 = 0$. Now, substituting $b_2 = 0$ in (5.33), one obtains: where $d_3$ and $d_4$ are constants. We can fix $d_4$ in terms of $d_3$ by demanding the Dirichlet boundary condition on $F(u)$. This immediately gives us: This is almost what we need, except for an important caveat. Putting $b_2 = 0$ (or ${\rm Im}\, b_2 = 0$) in (5.36) gives us: where $D$ is defined in terms of $\alpha$, $\beta$, $q_3$ as well as $\frac{g_s M^2}{N}$ in (5.37). Since the RHS of (5.42) is a c-number, and $\beta$ defined in (5.22) is a pure imaginary number (at least at the horizon), the equation (5.42) can only be solved if: The above forms of $\alpha$ and $\beta$ are unfortunately not acceptable, as they will not only lead to the wrong sound speed and attenuation constant, but also take us away from the perturbative regime where all our computations were focussed. One might think that this could be rectified if we had started off with a non-zero ${\rm Im}\, b_2$, but unfortunately the conclusions don't change much, as $F(u)$ would still oscillate infinitely fast or blow up. The above failure is a near miss, but it teaches us an important lesson about the choice of the function $F(u)$: the boundary conditions are subtle and important, but one still needs to choose the function carefully, as an arbitrary choice may take us away from the perturbative regime of interest. Therefore at this stage there are two ways to fix the function $F(u)$. One, we may not impose a Dirichlet boundary condition, and instead allow a non-normalizable functional form for $F(u)$. Two, we again allow for a Dirichlet boundary condition, but choose the functional form of $F(u)$ a little differently from the previous choice (5.39). The latter case is easier to implement, so we start by setting $d_1 = 0$ in (5.38). This gives us: We see that if ${\rm Im}\, b_2 = 0$ then we would encounter a similar problem as in (5.43).
On the other hand, if we allow ${\rm Re}\, b_2 < 0$, then we can control the amplitude of oscillation from the ${\rm Im}\, b_2$ piece, provided: The above set of conditions does help us solve for $F(u)$ as before, allowing the required Dirichlet boundary condition at $u = 0$, although the procedure for getting the exact functional form of $F(u)$ is not as straightforward as in (5.40). However, the condition ${\rm Re}\, b_2 < 0$ now leads to the following condition on $\alpha$ and $\beta \equiv i\gamma$: (5.46) where $\epsilon = \frac{3 g_s M^2}{2\pi N}$ is the non-conformal factor. Although the above condition gets further refined by (5.45), finding $\alpha$ and $\gamma$ satisfying (5.46) can at least indicate the behavior of $\alpha$ and $\beta$ with respect to $\frac{g_s M^2}{N}$. A careful look at (5.46) tells us that if both $\alpha$ and $\gamma$ are proportional to $\epsilon$, then $q_3$ gets constrained. This cannot be right, so it seems the only way to satisfy (5.46) would be to take $\alpha$ and $\gamma$ to be inversely proportional to $\epsilon$, much like what we had in (5.43) before. Such a choice will again take us away from the perturbative regime of interest. Thus it seems the only way to analyze the behavior of $\alpha$ and $\beta$ from the boundary $u = 0$ point of view is to allow for a non-normalizable $F(u)$. This resonates well with the analysis of the fluctuation modes of the metric in section 4, where the $p_{nk}$ and $\Gamma_{0k}$ functions were both non-normalizable. Note that we did not encounter these issues while studying the $b \neq 0$ case, because there the analysis was performed at the horizon $u = 1$, where these subtleties are not visible.
Shear viscosity, entropy and the bulk viscosity bound
After this detour, it is time to return to our analysis of the bulk viscosity and the bound on the ratio of the bulk to shear viscosities. To proceed, we first quantify the functional forms of f_1(θ_1) and f_2(θ_2) in (5.3) following [94], as in (5.47), where α_N and the choice (5.7) ensure a large base for implementing the SYZ [64] mirror transformation. Recall the necessity of a large base in our set-up to nullify certain disc instanton contributions. The choice (5.47) is essential for computing transport coefficients and entropy in the M-theory uplift of the mirror set-up. We can now combine this with the value of the bare resolution parameter b = √6 obtained in (5.29), and using the results of [63] and [103], we can show that the shear viscosity near the horizon takes the form (5.48), where Ĉ_21(1) is defined as C_21(u) with u = α_θi = 1 in the definition (5.11). Note the appearance of α_θ1, and not α_θ2, in (5.48). This is because θ_1 and θ_2 defined in (5.7) approach zero at different rates, so the former gets selected in the computation 26. We have also introduced a coefficient Υ in the formula (5.48) for η, whose value will be fixed shortly. It is now time to compute the entropy density s. The procedure for computing s remains similar to what we did in section 4, although the choice of mirror variables differs from the type IIB case. The entropy density at the horizon may then be expressed as (5.49), where Ĉ_kj(1) is defined from C_kj(u) with u = α_θi = 1 in (5.11). One may now compare (5.49) with (4.60), as well as with the entropy computed in [58], where we see similar suppressions with respect to g_sM^2/N and g_sN_f. The precise coefficients understandably differ because of the different choices of variables alluded to above. One may get around this by choosing a uniform definition of the variables in all the models; however, this reduces the efficiency of the computations of physical quantities in some models while increasing it in others 27.
The above discussion does not, however, spell out a failure to compare physical quantities across the different models; rather, one should interpret the validity of the different results as holding at different ranges of parameter values. For the choice of parameters in the mirror set-up, and using (5.48) and (5.49), we can now express the ratio between the shear viscosity and the entropy density as (5.50); note the absence of the parameter α_N from (5.47). We have also written the first term as 1/4π. In the absence of the g_sM^2/N correction this should reproduce the conformal result [106], and we have therefore used it to fix the parameter Υ in (5.48) as in (5.51). There are a few issues regarding the ratio (5.50) that we should now take into account. First, observe the appearance of an inherent cut-off in (5.50). This appears through the log r_h term as log(r_h/r_c), where r_c is the cut-off radius. A physical result should not depend on the cut-off, so we should interpret (5.50) carefully.
The r_c dependence in (5.48), for example, should remind us of a similar r_c dependence of the shear viscosity on the type IIB side, as given in eq. (3.198) of [58]. The introduction of a UV cap to the geometry contributed an additional piece, eq. (3.200) of [58]. This eventually led to the ratio of the shear viscosity to the entropy density, eq. (3.222) therein, which depended on the UV degrees of freedom N_uv as e^{-N_uv}. The result for infinite UV degrees of freedom was exactly 1/4π, so we should expect a similar result in our case too. However, the analysis of η/s in (5.50) is done at the horizon with a UV cut-off r_c, and one may easily see that the cut-off dependence is log r_c, which is the expected answer for a QCD-like model. This means that, even with a UV cut-off, we expect η/s to be at least bigger than 1/4π, so that the KSS bound [106] is not violated. In (5.50) it is easy to see that the r_h dependent terms are positive definite because log(r_h/r_c) < 0. Thus, if we define c_1, which is as yet an unfixed function, as in (5.52), with σ another undetermined function, then η/s > 1/4π. The UV cap can then change the result accordingly, but we will not elaborate on this further here. At this stage it suffices to see that the KSS bound is not violated.
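As a quick numerical illustration of this sign structure (with hypothetical placeholders for the coefficients, since the exact expression in (5.52) is not reproduced here), one can check that a negative definite c_1 together with log(r_h/r_c) < 0 indeed pushes η/s above 1/4π:

```python
import numpy as np

# Hedged sketch of the KSS check around (5.52): with a schematic correction
# eta/s = (1/4pi)*(1 + |sigma|*(g_s M^2/N)*|log(r_h/r_c)|), the
# log(r_h/r_c) < 0 terms enter with positive sign and raise eta/s above 1/4pi.
# All numbers below are toy placeholders, not the paper's exact coefficients.
gsM2_over_N = 1e-3                    # small non-conformal parameter (assumption)
abs_sigma = 2.0                       # stand-in for the undetermined |sigma|
r_h, r_c = 0.1, 10.0                  # horizon and cut-off radii, r_c > r_h
eta_over_s = (1/(4*np.pi)) * (1 + abs_sigma*gsM2_over_N*abs(np.log(r_h/r_c)))
print(eta_over_s, 1/(4*np.pi), eta_over_s > 1/(4*np.pi))  # KSS bound respected
```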
All the ingredients are now at hand to compute both the bulk viscosity ζ and the ratio of the bulk to shear viscosities, ζ/η. As we saw earlier, the shear and bulk viscosities are connected by the relation (5.53), where s is the entropy density (5.49), T is the temperature (5.12) and Γ is the attenuation constant (5.31). One can therefore use (5.53) to express the ratio ζ/s in terms of Γ and η/s, as in (5.54), where κ_0 ≡ 364π√6/45, C_jk(1) is given by (5.11) for u = 1, and Ĉ_jk(1) is given by (5.11) with u = α_θi = 1. The overall factor of g_sM^2/N is interesting and crucial: it tells us that the ratio (5.54) vanishes for conformal theories. This is of course consistent with what we had in section 4, and we note that the bulk viscosity may easily be derived, to this order in g_sM^2/N, by simply multiplying (5.54) by the conformal entropy density. Any non-conformal corrections to s change the bulk viscosity only at higher orders in g_sM^2/N. Note also that, in the limit of vanishing fundamental flavors, i.e. N_f = 0, the ratio (5.54) takes the form (5.55), where we have reinstated the cut-off radius r_c (set to 1 so far). Looking at (5.55), one might be tempted to compare it with the bulk viscosity obtained in (4.59), which was expressed using the fluctuation mode Y_x satisfying (4.50).
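A minimal sketch of how a relation like (5.53) is used in practice, assuming the standard first-order sound-attenuation relation Γ = (4η/3 + ζ)/(2sT); the paper's normalization in (5.53) may differ by numerical factors:

```python
import numpy as np

# Hedged sketch: invert the standard sound-attenuation relation
#   Gamma = (4*eta/3 + zeta) / (2*s*T)
# to express zeta/s in terms of Gamma, T and eta/s, mimicking the use of (5.53).
def zeta_over_s(Gamma, T, eta_over_s):
    return 2.0*Gamma*T - (4.0/3.0)*eta_over_s

T = 0.3                               # temperature in horizon units (assumption)
eta_s = 1.0/(4.0*np.pi)               # KSS value at leading order
Gamma_conf = (2.0/(3.0*T))*eta_s      # purely conformal attenuation
print(zeta_over_s(Gamma_conf, T, eta_s))  # ~0: zeta vanishes conformally, cf. (5.54)
```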
Both have a somewhat similar r_h/r_c dependence, but the exact factors differ. This has already been alluded to earlier, and stems from the different choices of parameters in the two theories. Moreover, as discussed in section 5.3, the ratio (4.59) is derived for a vanishing bare resolution parameter, whereas (5.54) is derived with a non-zero bare resolution parameter. This is of course not the only difference. The zero bare resolution case, according to section 5.3, involves the study of quasi-normal frequencies, whereas the result (4.59) is derived from the study of the fluctuation modes Y_x. The point of comparison between the two results may be that both involve certain non-normalizable functions at the boundary u = 0. Plugging the non-normalizable function F(u) into (5.32) helps us find α and β in (5.22), which in turn may be compared to (5.54).
Finally, the ratio of bulk to shear viscosities may now be determined from (5.54), to first order in g_sM^2/N, by taking the conformal limit of (5.50). The result, (5.56), is similar to (5.54) up to a factor of 4π, where κ_0 is defined earlier and we have reinstated r_c, the cut-off radius. To see whether (5.56) violates the Buchel bound [30], we have to determine c_1 and c_2 in (5.56). In (5.52) we expressed c_1 in terms of a negative definite function −|σ| at the horizon u = 1, assuming c_2 to be positive definite there. Underlying this choice, however, was the assumption that both the bare resolution parameter b and the full resolution parameter a in (5.13) remain positive definite. As long as b > 0, this could still be arranged with c_2 > 0. However, b can be zero, as we saw in sections 4 and 5.3, and in that case c_2 > 0 would make a < 0 in (5.13) with the choice of c_1 in (5.52), rendering the whole construction meaningless. One might think that c_1 could be changed, but then the KSS bound [106] would be affected. Therefore it seems the only way to avoid contradictions is to take c_2 = −|c_2|, with (5.57) holding at the horizon u = 1. By construction this preserves the KSS bound [106] and keeps the resolution parameter (5.13) positive definite 28. With this at hand, it is now time to see whether the ratio of bulk to shear viscosities (5.56) preserves the Buchel bound [30]. We start with the simplest case of vanishing flavor, i.e. N_f = 0. Referring back to the sound speed (5.30) and the ratio (5.56), we get (5.58), where c_1 and c_2 now satisfy (5.52) and (5.57) respectively. Since log(r_h/r_c) < 0, all terms in ζ/η in (5.58) are positive definite. In the limit r_c > r_h, the ratio of bulk to shear viscosities may be related to the sound speed as in (5.59), which clearly satisfies the Buchel bound [30]. Interestingly, for r_c >> r_h exp[(11+|c_1|)/|c_2|], one may ignore the second piece in (5.59), and the ratio (5.56) may be expressed solely in terms of 1/3 − c_s^2. In either case, one may easily infer from (5.58) that the Buchel bound is always satisfied, at least for vanishing fundamental flavors N_f.
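A tiny numeric check of this logic, under the stated assumption that the deviation 1/3 − c_s^2 is small and positive; the coefficient is borrowed from the refined bound (5.62) derived below, and the additive piece is a toy placeholder for the positive log(r_h/r_c)-dependent terms:

```python
# Schematic check of the Buchel bound for the N_f = 0 result (5.58)-(5.59):
# if zeta/eta = kappa*(1/3 - c_s**2) + extra, with kappa >= 2 and extra >= 0
# (every term positive since log(r_h/r_c) < 0), the bound follows immediately.
# kappa and extra are placeholders, not the paper's exact expressions.
delta = 1e-2                          # 1/3 - c_s^2, small deviation (assumption)
kappa, extra = 91.0/5.0, 5e-3         # kappa from the later bound (5.62); extra > 0
zeta_over_eta = kappa*delta + extra
print(zeta_over_eta >= 2.0*delta)     # True: the Buchel bound is satisfied
```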
What happens when N_f ≠ 0, i.e. when we switch on fundamental flavors? Both the ratio of bulk to shear viscosities and the sound speed have been computed above, in (5.56) and (5.30) respectively, so it is time to combine them and see whether the specified combination satisfies the Buchel bound. It is easy to see that the bulk to shear ratio (5.56) may now be expressed as (5.60), where the C_ij and Ĉ_ij terms may be extracted from (5.11) using the limits u = 1 and u = α_θi = 1 respectively. Switching off the g_sN_f terms recovers (5.59) from (5.60), so the question now is whether the g_sM^2/N terms in (5.60) can again be positive definite.
It turns out, after some algebraic manipulation, that the g_sM^2/N terms appearing on the RHS of (5.60) may be rewritten in the suggestive form (5.61), where σ_0 ≡ (201/20π) log 4 + 603/20 ≈ 100.86 is a positive coefficient. Since N is very large and r_c >> r_h, every term in (5.61) can be shown to be positive definite, and the negative piece σ_0 g_sN_f does not change anything as long as log N log(r_c/r_h) >> 160. The latter is not a constraint, as we saw above 29. Thus our model generically satisfies the Buchel bound [30], and comparing (5.59) and (5.61) we see that there is in fact a new bound on the ratio of bulk to shear viscosities, given by (5.62). To see that the terms on the RHS of (5.61) can be positive definite, choose N ≡ exp(n_0 + 60.3) with n_0 a very large number approaching infinity, and r_c ≡ r_h exp(n_1 + 2.5125) with n_1 another large number, not necessarily infinite. The condition for positivity of the RHS of (5.61) is then n_0 n_1 > 160. This is easily achieved because going to the gravity dual description forces us to choose both n_0 and n_1 very large.
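The positivity criterion can be verified numerically; under the parametrization quoted above, N ≡ exp(n_0 + 60.3) and r_c ≡ r_h exp(n_1 + 2.5125), the requirement log N · log(r_c/r_h) >> 160 is easily met:

```python
# Check of the positivity criterion below (5.61) with the parametrization
# N = exp(n0 + 60.3), r_c = r_h*exp(n1 + 2.5125) quoted in the text.
for n0, n1 in [(20.0, 10.0), (100.0, 50.0)]:
    logN = n0 + 60.3
    log_rc_rh = n1 + 2.5125
    print(n0*n1 > 160, logN*log_rc_rh)   # condition met; product far above 160
```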
Type IIA spectral function and the viscosity bound at strong coupling with non-zero flavors
In the previous section we found an interesting bound (5.62) for the ratio of bulk to shear viscosities at strong string and strong 't Hooft couplings. In fact, the form of the bound seems consistent over the whole strong 't Hooft coupling regime, as is clear from the weak string coupling bound obtained earlier in (4.85): both (5.62) and (4.85) are proportional to 1/3 − c_s^2, although their coefficients differ. On the other hand, the weak 't Hooft coupling bound differs by being proportional to the square of the strong coupling bound, as shown in (2.18). The reason for the different results at the two ends of the coupling has been motivated in section 2.2. Loosely, it is the ratio of the shear viscosity to the entropy density that creates the difference at the two ends. At weak 't Hooft coupling the ratio is not a constant and is given by (2.17), whereas at strong 't Hooft coupling we expect the ratio to be a constant [106]. This is a reasonably strong argument for justifying the difference between the two bounds, despite the fact that we have no control over the dynamics in the intermediate coupling regime, as argued in section 3. The very weak coupling results have been justified in great detail, and in sections 4 and 5 we provided some justification for the strong coupling results. However, one might be interested in deriving the bound at strong coupling directly from the spectral function, as such an analysis would be in line with the discussions of section 3. Further, we make the following observations:
• The ratio of the bulk viscosity ζ to the entropy density s is of O(g_sM^2/N), as we saw in (5.54), while the ratio of the shear viscosity η to the entropy density is dominated by the conformal result plus an O(g_sM^2/N) correction term from (5.50). This means that up to O(1/N) the ratio ζ/η mimics ζ/s.
• The gauge and metric perturbations may need to be considered simultaneously; see subsection 4.2 of [81] and references therein.
• The correlation of gauge fluctuations, A_{x_i}A_{x_i} for i = 1, 2, 3, along the same direction could hence mimic the spirit behind the correlation of the metric perturbations, h_{x_ix_i}h_{x_ix_i}, along the x_i axis relevant to the evaluation of the bulk viscosity, as for example in [38] or in section 4.
The above three observations provide the necessary motivation for this section. We would therefore like to evaluate the aforementioned gauge-field correlation function (in the hydrodynamical limit, using the prescription of [107]) and see whether one obtains the linear bound seen in (4.85) and (5.62). Even if this is not explicitly tied to ζ/η, we feel the result obtained in this section is, in itself, sufficiently interesting.
Background gauge fluxes and perturbations on the flavor branes
Our starting point is a configuration of N_f D6-branes in the type IIA mirror set-up. For our purpose, we will isolate one D6-brane and study the world-volume dynamics on it. Alternatively, one can view this as a D6-brane probing a non-Kähler warped-resolved conifold with N_f flavor D6-branes. The DBI action for a single D6-brane is given by (6.1), with 2πα′ = 1. Here the worldvolume directions of the D6-brane are denoted by the coordinates (t, x_1, x_2, x_3, Z, θ_2, ϕ_2), with (t, x_1, x_2, x_3) the usual Minkowski coordinates, Z the newly defined radial direction, and (θ_2, ϕ_2) two angular coordinates; Z is related to the usual radial coordinate r by r = r_h e^Z, and ϕ_2 is the local value of the angle φ_2 (for more details, see sections 3 and 4 of [102]).
In the above, and as before, ϕ denotes the type IIA dilaton, which is the triple T-dual version of the type IIB dilaton. The pullback metric and the pullback of the NS-NS B-field on the worldvolume of the D6-brane are denoted by g and B in (6.1). F is the field strength of a U(1) gauge field A_µ, whose only nonzero component is the temporal one, A_t. In the gauge A_Z = 0, the only nonzero component of F is F_Zt = −F_tZ. Combining the symmetric g field and the antisymmetric B field as G ≡ g + B and expanding the DBI action up to quadratic order in A, we get (6.2). The second term in (6.2) arises because of the antisymmetric B field in G.
As none of the fields in the DBI action depends on the angular coordinate ϕ_2, the integrand in equation (6.2) is independent of it. We also choose to work around the same small values of both θ_2 and θ_1 given earlier in (5.7). The upshot is that the integration over θ_2 and ϕ_2 is trivial, and we denote by Ω_2 the factor one gets after this integration, such that (6.3) holds. The equation of motion for the temporal gauge field A_t(Z), as obtained from the lagrangian in (6.3), is given by (6.4). At this point we can use the precise functional forms of the background data, namely G_tt, G_ZZ as well as the dilaton ϕ, to rewrite this equation in the form (6.5), where a^2 is the resolution parameter, (α_θ1, α_θ2) are the two angular values in (5.7), and C is an integration constant. In the large Z and small a^2 limit, (6.5) yields (6.6), which follows from the fact that the second line in (6.5) dominates over the first. The large Z limit is also the large r limit, where one might worry about UV issues from the AdS cap. This is not much of a concern at this stage because, as long as r_h e^Z >> a, (6.6) continues to hold. With this in mind, the solution to (6.6) is (6.7), where we have used Ā_t to denote the background value, to avoid confusion. The other parameters appearing in (6.7) are C_1, yet another constant, and Ei, the exponential integral 30. In the second line of (6.7) we have shown the leading piece in the large Z limit; higher powers of 1/Z can then be ignored. This background value also prepares us to study the fluctuations of the gauge field components. For example, we can express the gauge field appearing in (6.3) as in (6.8), where the fluctuation A_µ exists only along the directions µ = (t, x_1, x_2, x_3) due to the particular gauge choice, and depends only on the radial variable Z. Including the perturbations in the lagrangian of the DBI action (6.1), one gets (6.9), with F the field strength of the gauge field fluctuations. Now defining G ≡ g + B + F and again expanding the lagrangian up to quadratic order in the gauge field fluctuation, one gets (6.10). 30 This definition can be used for positive values of x, but the integral has to be understood in terms of the Cauchy principal value, due to the singularity of the integrand at zero.
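Since the solution (6.7) is written in terms of the exponential integral Ei, it may help to recall its large-argument behavior, Ei(x) ~ (e^x/x)(1 + 1/x + ...), which is what justifies keeping only the leading piece at large Z. A quick numerical confirmation, using SciPy's principal-value implementation:

```python
import numpy as np
from scipy.special import expi   # Ei(x), computed as a Cauchy principal value

# Verify the large-x asymptotics Ei(x) ~ (e**x/x)*(1 + 1/x + ...), which
# underlies the truncation of (6.7) to its leading large-Z piece.
for x in (5.0, 10.0, 20.0):
    leading = np.exp(x)/x
    print(x, expi(x)/leading)    # ratio -> 1 (from above) as x grows
```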
Writing the field strength F in terms of the gauge field fluctuation A_µ, and after some simplifications involving interchanges of indices, one can write the above lagrangian as (6.11). The second line in equation (6.11) is a total derivative term, and equating the last line to zero for arbitrary A_β gives the equation of motion for the gauge field fluctuation, (6.12). The total derivative term in (6.11) does not necessarily have to vanish at Z → ∞, as there could be non-normalizable modes serving as sources for the dual gauge theory operators. Our EOM (6.12) is, however, not affected by this, and in the following section we discuss possible solutions of (6.12).
Equation of motion for gauge field fluctuations
To derive the equation of motion for the gauge field fluctuation, we first write the fluctuating field in the Fourier decomposed form (6.13), where we assume the fluctuation to have momentum along the x_1 direction only, with k_0 = ω, k_1 proportional to q, and k_2, k_3 arbitrary. Now, the equation (6.12) has a free index β, and for β = (t, x_1, x_2, x_3, Z) one gets a total of five equations of motion. For example, for β = Z, plugging (6.13) into (6.12) yields (6.14), where the RHS vanishes because of the antisymmetry of G_[αβ]. The dilaton does not appear because it is independent of the four-dimensional spacetime coordinates. This equation relates A_t with A_x1. On the other hand, if we take β = t, we get the EOM (6.15), which now involves both A_t and A_x1. A somewhat similar equation, (6.16), appears when we choose β = x_1 in (6.12). At this stage one can easily verify that plugging (6.14) into (6.15) recovers (6.16), which shows that the three equations (6.14), (6.15) and (6.16) are self-consistent. Finally, one may find the equations for β = x_2 and β = x_3. We expect them to be equivalent, and they are given by (6.17). To proceed further, we have to define gauge invariant variables. For our case there are two such variables, E_x1 and E_β with β = x_2 or x_3, expressed as in (6.18). With these new variables, the three equations (6.14), (6.15) and (6.16) can be cast into a single second order equation involving E_x1. Even more obviously, the fourth equation, for β = x_2 or x_3, can be rewritten in terms of the new variable E_T. Moreover, in the zero momentum limit, i.e. in the limit q → 0, it can be shown that the equation involving E_x1 is the same as the one involving E_T, given by (6.19), implying that in the zero momentum limit all we need is to solve one second order differential equation. This is of course a huge simplification, and one can even rewrite (6.19) in the suggestive form (6.20), where all functions appearing there depend only on the Z variable. In fact, P(Z) and Q(Z) may easily be seen from (6.19) to take the form (6.21). The suggestive point alluded to above is that equation (6.20) can be recast in Schrödinger-like form by a redefinition of variables, as in (6.22), with Ê_T defined as Ê_T(Z) ≡ √P(Z) E_T(Z), and with V_{E_T} the potential term, expressed as (6.23). The Schrödinger-like equation is a valid description in the zero momentum limit. Once we go away from that limit, we will have more equations for the fluctuations with different choices of potentials. This is a more complicated scenario, and fortunately our present analysis does not call for it. Nevertheless, the potential (6.23) is still highly non-trivial, as both P(Z) and Q(Z) take non-trivial values when expressed in terms of the background metric and dilaton in (6.21). For example, P(Z) may be written as (6.24), where C is the constant that first appeared in (6.5). The other parameters appearing there are the G_i and P_i. All the G_i depend only on the fixed parameters of the theory, and are defined in (6.25), where a is the resolution parameter (5.13) and the α_θi are defined in (5.7). The other variables appearing in (6.25) are the P_i, of which only P_1 is a constant. They are defined in (6.26), where we have laid out explicitly the g_sN_f dependence of each coefficient. One may see that the g_sN_f-independent terms appear only in P_2 and P_3.
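The step from (6.20) to the Schrödinger form (6.22) is the standard Liouville substitution; assuming Ê_T = √P E_T (the natural choice that removes the first-derivative term), the potential picks up the combination Q − ½(P′/P)′ − ¼(P′/P)², which should match (6.23) up to the paper's sign conventions. A symbolic check:

```python
import sympy as sp

# Symbolic check of the Liouville substitution psi = sqrt(P)*E_T that turns
#   E_T'' + (P'/P) E_T' + Q E_T = 0            (cf. (6.20))
# into the Schrodinger form psi'' + (Q - (P'/P)'/2 - (P'/P)**2/4) psi = 0,
# so that -V matches the bracketed combination (cf. (6.23), signs hedged).
Z = sp.symbols('Z')
P = sp.Function('P', positive=True)(Z)
Q = sp.Function('Q')(Z)
psi = sp.Function('psi')(Z)

E = psi/sp.sqrt(P)                      # E_T = psi/sqrt(P)
lhs = (E.diff(Z, 2) + (P.diff(Z)/P)*E.diff(Z) + Q*E)*sp.sqrt(P)
target = psi.diff(Z, 2) + (Q - sp.Rational(1, 2)*(P.diff(Z)/P).diff(Z)
                             - sp.Rational(1, 4)*(P.diff(Z)/P)**2)*psi
print(sp.simplify(sp.expand(lhs) - sp.expand(target)))   # 0: the forms agree
```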
In a similar vein, we can also work out the Q(Z) piece in (6.21); this is given by (6.27), where P_2 and G_1 have already been defined in (6.26) and (6.25) respectively, but G_4 and G_5 are new. They can be related to, say, G_3 as in (6.28). With this set of definitions, the functional forms of P(Z) and Q(Z) are fully determined, although there is one issue that should be clarified at this point. It has to do with the presence of terms with a relative minus sign inside the square root in (6.24). To prevent (6.24) from developing complex values, we require (6.29), where we have used the fact that g_sN_fP_2 ≥ 4π in the limit g_s → 0 (see also (6.26)). This seems to constrain short distances, but since r > r_h, (6.29) does not impose strong constraints. In fact, we can take the small Z, large N and vanishing momentum limits to re-express the potential (6.23) as in (6.30). Written this way, one may clearly see how the various terms in the sum are increasingly suppressed by g_sN_f and g_sM^2/N. The constant β appearing there is related to the c_i in (5.13) as β = c_1 = c_2 for simplicity 31; and m_0^{++} is the mass of the lightest glueball, given via (6.31) and parametrized by the scale m_0. This is computed using M-theory metric perturbations, much like the analysis of section 5, and is further detailed in [101]. We have also used (6.31) to define the constant d_o, as in (6.32). Note that in (6.31) the first term is independent of ω^2 and depends only on Z, b^2 and the glueball mass. The glueball mass also features in the definition of α(Z), which appears in (6.30), via (6.33), where b is the bare resolution parameter defined in (5.13). Comparing with the definition of the glueball mass in (6.31), we see that α(Z) is proportional to g_sN but suppressed by 1/Z^2. This clearly indicates that the potential (6.30) goes as 1/Z^2 for small Z.
Note that Z = 0 (the horizon) is a regular singular point of (6.22), and the exponents of the indicial equation near Z = 0 can then be written as 1/2 ± iI, where I is defined in (6.34). The functional form of I shows that it is suppressed by both g_sN_f and g_sM^2/N, so to zeroth order there remains only a piece that depends on the bare resolution parameter b, the frequency ω, the horizon radius r_h and the 't Hooft coupling g_sN. We can also express the solution of the Schrödinger-type equation (6.22) using I as in (6.35), with F_T(Z) a function that is analytic at the horizon radius r_h (i.e. at Z = 0). This tells us the precise behavior of the eigenfunction Ê_T(Z) at Z = 0. This is useful, but not exactly what is relevant for the present case, as what we actually need is the form of Ê_T(Z) when Z >> 1. The question then is how the Z = 0 analysis can be useful for the large Z domain.
The answer lies in our choice of the ansatz (6.35), which in fact remains a good ansatz even when Z >> 1. In other words, the exponent I of the indicial equation that we computed in (6.34) still remains valid for large Z. What changes for large Z is the functional form of F_T(Z).
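The exponents 1/2 ± iI quoted above follow from a standard Frobenius analysis near the regular singular point: with the potential behaving as v_2/Z^2 (cf. the small-Z form of (6.30)), the ansatz ~ Z^s gives the indicial equation s(s − 1) = v_2, and complex exponents arise when v_2 < −1/4. A minimal symbolic check, with v_2 treated as a free parameter:

```python
import sympy as sp

# Frobenius sketch near the regular singular point Z = 0: substituting
# psi ~ Z**s into psi'' = (v2/Z**2)*psi gives the indicial equation
# s*(s - 1) = v2.  Choosing v2 = -(1/4 + I0**2) reproduces the complex
# exponents s = 1/2 +- i*I0 quoted in the text, with I0 playing the role of I.
s = sp.symbols('s')
I0 = sp.symbols('I0', positive=True)
v2 = -(sp.Rational(1, 4) + I0**2)
print(sp.solve(sp.Eq(s*(s - 1), v2), s))   # [1/2 - I*I0, 1/2 + I*I0]
```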
Of course there is yet another change in (6.22): the functional form of the potential V_T(Z) that we computed earlier in (6.30). Naturally, since (6.30) was derived for small Z, this must change. The change is easy to work out, and may be written as in (6.36), where we ignore higher powers of e^{-2Z} that would appear from the corresponding higher powers of e^{-2Z} in P(Z) and Q(Z) in (6.24) and (6.27) respectively. The A and B appearing in (6.36) are not constants; A is defined in (6.37), where P_2 is given in (6.26). The function P_2 is defined in terms of N, r_h and Z, and one may take appropriate limits in any of these parameters. Before doing so, let us write the expression for B in terms of the background parameters, (6.38), where the successive suppressions with respect to g_sM^2/N as well as g_sN_f are shown. The term independent of all of these is proportional to b/β_o, where b is the resolution parameter and β_o ≡ β log(er_h), with β = c_1 = c_2 in (5.13). The other parameters appearing in (6.38) are defined in (6.39). One can now take the form of the potential, given in (6.36), and the wave-function ansatz, given in (6.35), and plug them into the Schrödinger-type equation (6.22) to obtain the equation (6.40) for F_T(Z), where I is still given by (6.34). This second order differential equation is rather hard to solve because of the presence of the exponential term e^{-2Z}. However, since we seek the spectral function only in the limit of large Z, where e^{-2Z} vanishes, we can simply remove the problematic term from (6.40). Doing so yields the form (6.41) for Ê_T(Z), where C_+ and C_- are two integration constants whose values will be determined later. To extract the actual fluctuation E_T(Z) from (6.41), we need the functional form of P(Z) in the large Z limit. This is easy to extract from (6.24) and may be written as (6.42), which is, as expected, proportional to g_sN_f; P_2 is defined in (6.26). The other coefficient, G_6, appearing there can be extracted from combinations of the G_i and P_i in (6.25) and (6.26) respectively at large N. Here we write it simply as (6.43), where n_o is a numerical constant given by n_o = 4√2 · 3^{1/3} π^{3/2} ≈ 45.43. Combining (6.41), (6.42) and (6.43), and recalling the definition of Ê_T(Z) given just after (6.22), we can finally determine the form of the fluctuation at large Z, (6.44). A few comments are in order regarding the form of (6.44). First, we see that the suppression factor is (g_sN_f)^{-1/2}. From this it might seem that there is no natural zero-flavor, i.e. N_f = 0, limit. However, when combined with P_2, the combination g_sN_fP_2 does have a zero-flavor limit, given by (6.45), which one may also verify directly at the level of the Schrödinger equation (6.22). Secondly, the integration constants C_± appearing in (6.44) can in principle be complex valued, so we will need to investigate a few possibilities associated with choices of C_± satisfying the boundary conditions. Let us start by investigating the form of A given in (6.37). First, assume that Z goes to infinity as in (6.46). This makes sense because N → ∞ and g_sN_f → 0. In this limit P_2 may be replaced by −3 log r_h. In other words, A in (6.37) becomes (6.47), valid for large log r_h so that the inverse suppression in (6.47) makes sense. Assuming this is possible, plugging (6.47) into (6.44) implies the form (6.48) for the fluctuation E_T(Z), where we have suppressed inverse log^2 r_h dependences.
Note that (6.48) is not the only way to express E_T(Z) from (6.44). For example, if the horizon radius scales as in (6.49) in the limit of very large Z_uv and vanishing g_sN_f, then one may rewrite P_2 simply in terms of log N rather than log r_h. This means that A in (6.37) will in turn be expressed in terms of log N and not log r_h, implying (6.50). Given the multiple ways of expressing (6.44), for example (6.48) and (6.50), one might worry that the final result depends on our approximation scheme. We will show in section 6.3 that this is not the case. Finally, the functional form of E_T(Z) in the zero momentum limit matches the functional form of E_x1, as may be seen from (6.18). This will help us express the on-shell action completely in terms of the known parameters appearing in (6.44), allowing us to compute the spectral function more efficiently. This is the topic to which we turn in the following section.
On-shell action and the strong coupling spectral function
In the previous section we found the functional form of the gauge field fluctuation E_T(Z) in the large Z and zero momentum limits. What we now want is the four-dimensional on-shell action. This can be extracted from the boundary piece of the lagrangian (6.11). Earlier we used (6.11) to determine the EOM for E_T(Z) and subsequently for Ê_T(Z); plugging the EOM back into (6.11) leaves only the boundary term, which we label the on-shell four-dimensional action S_4. This takes the form (6.51), where x_0 ≡ t and Ω_2 is the same two-sphere volume factor that appeared in (6.3). Note that we took Z_h as the lower limit of Z, to be consistent with the lower bound (6.29) 32. However, what we seek here is the on-shell action at the boundary Z = Z_uv, so the near-horizon geometry is not too relevant. At the boundary F_tZ = −F_Zt = 0, so we must set G_tZ = 0 and replace the determinant of the full G (which includes F) by that of g + B alone. Incorporating these changes, the boundary value of the on-shell action is given by (6.52). Using the gauge field EOM (6.14), now with the metric combination without F rather than with it, together with the result in Appendix B, the action can be rewritten in terms of the gauge invariant variables E_x1, E_x2 and E_x3 as (6.53), with k_a^2 given in (B.9). From this point on we focus on the x_1 part of the fluctuation; in other words, we only want to study the behavior of E_x1 at zero momentum. At zero momentum, according to (6.18), the fluctuations E_x1 and E_T obey the same equation (6.19). Using this identification we can define (6.54), where one may match the Lorentz indices using (6.18). Plugging (6.54) into (6.53) and using E_0(k)E_0(−k) = 1, it is easy to see that the zero momentum limit yields the action (6.55) for the x_1 piece of the fluctuation. Before moving ahead, let us make a couple of observations. One, E_T(Z) is exactly the fluctuation (6.44) derived earlier, and is therefore subject to either of the two possible limits (6.48) and (6.50) mentioned above. Two, the coefficient of E′_T(Z)/E_T(Z) looks very similar to P(Z) in (6.21), so one might think it takes the functional form (6.24). This is unfortunately not the case, because P(Z) in (6.21) and (6.24) involves the determinant including F_ab, whereas the coefficient of E′_T(Z)/E_T(Z) in (6.55) involves the determinant without it. The above discussion more or less sets the tone for the rest of the computation, which has two parts: one is the coefficient of E′_T(Z)/E_T(Z) in (6.55), and the other is E′_T(Z)/E_T(Z) itself. To condense some of the subsequent formulae, let us define P_7 as in (6.56), where P_2 is given in (6.26). The coefficient of E′_T(Z)/E_T(Z) can then be represented as in (6.57), where we suppress higher order 1/N terms, and n_o is the numerical constant that appeared in (6.43). Note that the two denominators are suppressed differently with respect to N, α_θ1 and e^{2Z}. The numerators are non-trivial functions of e^{2Z}, and they govern the behavior of the spectral function. Let us therefore study them carefully, first writing out the form of Σ_11, (6.58), which we see is proportional to g_sN_f. This makes sense because in the absence of the flavor D6-branes we would not see this contribution. The forms of P_1 and P_2 were given earlier in (6.26), where P_2 is a function of Z and r_h, while P_1 is independent of both.
At this stage we can make (6.58) vanish by choosing (6.59). A few questions immediately arise from (6.59). What is the logic behind this choice, as opposed to making the other bracketed terms in (6.58) vanish? And what would happen if we did make those other terms vanish? The answers to both questions lie in the following observation: since the b^2 as well as the α_θi pieces cannot be large, the first three brackets in (6.58) cannot vanish; making them zero would lead to contradictions. From (6.46) we see that Z can be very large, and we can use this to fix the value of r_h using (6.59). This gives (6.60), in which a small number appears that can be derived from the above. One may also see that (6.60) cannot be related to (6.49). This is because of our choice between (6.46) and (6.49): we are allowed either of them, but not both. Correspondingly, we can choose either (6.48) or (6.50), but not both, and one may easily verify that only the choice (6.46), and therefore (6.48), can be consistent with (6.59). The caveat, however, is that since log r_h is no longer a large number, the expansion in (6.47) cannot be terminated, and we require the exact form of A in (6.47); we will discuss a way out of this soon. After the dust settles there is no Σ_11 term, and so we have to go to the next term, Σ_22. This term incorporates both g_sN_f and g_sM^2/N, and takes the form (6.61), which depends on r_h^2 as well as various other factors of log r_h. There are also e^Z and N dependences that take large values, so we must be careful taking the large Z and large N limits. The other quantities appearing there, α_b, K(Z) and L(Z), are defined in what follows. First, α_b is defined in (6.62), where on the right we show its behavior at large Z: the resolution parameter b^2, being small, contributes nothing to α_b. In the same vein, K(Z) is defined in (6.63), with P_7 defined in (6.56) above. Using this definition of P_7, and the choice (6.46) for Z taken earlier, one can easily show (6.64), leading to some simplification in (6.63). It also means that for large Z, K(Z) goes as −3e^{6Z}, consistent with the other coefficient of α^4_θ1, as evident from (6.61). Finally, the last term L(Z) takes the form (6.65), where the large Z behavior is governed solely by the vanishing of P_7 in (6.64). Plugging the limiting values of (6.62), (6.63) and (6.65) into (6.61), and then into (6.57), leads to the large Z behavior (6.66) of (6.57), with κ_o a constant that depends on N as κ_o ≡ bβN^{0.1}/n_o, where n_o remains the same numerical constant that appeared in (6.43).
Before moving ahead, let us pause briefly to examine the situation at hand. The crucial outcome of (6.66) is the dominance of Σ_22 over Σ_11, due to the imposed constraint (6.59). This further led to the form of the horizon radius given in (6.60), which is of order 1. This in turn puts us in the high temperature limit, so one might ask whether there is a way to analyze the spectral function for small r_h; otherwise an expression like A in (6.47) does not have a good expansion in inverse log r_h. The situation is subtle because we would still have to impose (6.59) to eliminate the Σ_11 piece in (6.57). How can we then avoid the outcome (6.60) for the horizon radius?
A way out of this conundrum is not to impose (6.46), which determines Z from the start, but instead to use (6.59) to fix Z. This means that (6.47) for A no longer holds, although the form of A in (6.37) continues to hold. Z then satisfies (6.67), which is extracted from (6.59). The RHS involves log r_h and, as discussed above, we cannot use either (6.60) or (6.49) for r_h. Instead we use a different route, shown in Appendix C, to determine the horizon radius by demanding the vanishing of the effective number of three-brane charges on the original type IIB side. Solving (6.67) then gives the value (6.68) for Z, where W_n is the analytic continuation of the product log function with integer n. By construction this is a large positive number, because N is large whereas r_h is very small. Plugging (6.68) into (6.37) then gives the value (6.69) for A, which is expectedly different from (6.47). This form shows that A is in fact a very large number: in addition to being inversely proportional to a small number, i.e. r_h << 1 as mentioned above, it also depends exponentially on a large number as g_sN_f → 0. This will be useful because a large A simplifies the expression for E_T(Z) in (6.44); we will come back to this soon. Let us now compute the coefficient (6.57), which in turn means computing Σ_11 and Σ_22. As mentioned earlier, Σ_11 vanishes, so we only need Σ_22 at large Z. For this we need the limiting values of α_b, K(Z) and L(Z) in (6.62), (6.63) and (6.65) respectively. The limiting value of α_b remains e^{6Z} as before, but the limiting values of K(Z) and L(Z) change because we can no longer apply (6.64). They now take the values (6.70), where P_7 is given in (6.56). Plugging (6.70) into (6.61) and using (6.59) gives the value (6.71) for the coefficient in (6.57), where κ_1 = κ_o/g_s^{3/2} and κ_o is the same constant that appeared in (6.66); in the last line we have used the large Z limit (6.68) to eliminate the e^{-2Z} piece. This result clearly differs from (6.66), which was computed for r_h as in (6.60). Here we expect r_h to be small, as shown in Appendix C, so (6.71) will finally be proportional to r_h^2 log r_h.
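The appearance of the product log in (6.68) is the generic way equations of the type (6.67) are inverted. A hedged numerical sketch, for a schematic equation Z e^{2Z} = X (the actual argument of W_n in (6.68) is not reproduced here):

```python
import numpy as np
from scipy.special import lambertw

# Hedged sketch of the inversion behind (6.68): an equation of the schematic
# form Z*exp(2Z) = X is solved by Z = W(2X)/2 on the principal branch n = 0.
X = 1e12                                  # large, since N is large (assumption)
Z = 0.5*np.real(lambertw(2.0*X, 0))
print(Z, Z*np.exp(2.0*Z)/X)               # second entry ~1: solution verified
```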
Having completed the first part of the computation in (6.55), let us now investigate the second part, the ratio E′_T(Z)/E_T(Z). The functional form of E_T(Z) is given in (6.44) and is expressed in terms of the coefficients C_±, which could in principle be complex. The ratio can then be written as (6.72), where we have introduced three functions α, g and Q that are in general complex.
In fact, what we require is that the functions α, g and Q remain complex for large Z and small r_h. Their precise forms are given in (6.73), where I and A are defined in (6.34) and (6.37) respectively. Note that in the limit of large N, small g_sN_f and small r_h, A is a large number, implying a small (but non-zero) complex piece in g. On the other hand, I is a large number, being proportional to g_sN and inversely proportional to the horizon radius r_h. To avoid contradictions, however, we take no limits at this stage and continue the manipulations with exact expressions. This gives (6.74), in which there are three distinct sources of imaginary pieces: the C_± coefficients, the exponential term e^{iZ/√A}, and the bracketed terms. The bracketed terms are defined with respect to two new functions P_o and Q_o, written out in (6.75). The limit we are after, as mentioned earlier, is the large Z, large N and small r_h limit, where Z becomes large as in (6.68); essentially it is the large N and large |log r_h| limit. In this limit P_2 can be expressed through Z as in (6.59), which tells us that it is a small number 33. Plugging in the value of A from (6.69), and of Z from (6.68), now implies that Q_o may be approximated by (6.76), where on the RHS we show the behavior of the function as it approaches zero, ignoring a constant additive factor, since the bracketed term on the LHS of (6.76) always dominates. We have also defined x, and then Z as a function of x, in (6.77); the latter should be viewed as an alternative expression for (6.68). For large N, small r_h and g_sN_f → 0, it is easy to see that x vanishes whereas Z becomes very large; Q_o, however, always goes to zero in this limit. What we now claim, in this limit, is (6.78), which is easy to justify from the form of Z in (6.77) and the fact that x multiplied by any power of log x still approaches zero in this limit. The dominance of I/Z over Q_o is a huge simplification: it not only renders the expression (6.74) manageable, without worrying about contributions from the exponential pieces, but also removes the ambiguity of its dependence on the constants C_±, whose values have not been explicitly determined. In fact, after plugging all the values from (6.75) and (6.34) into (6.74) and using the limiting conditions (6.76) and (6.78), it is easy to see that (6.79) holds, where m_0^{++} is the mass of the lightest glueball, expressed in terms of the scale m_0 as given in (6.31). This is all we need, because the imaginary part of (6.79) can then be cast in the form (6.80), where we have used (6.31). One may note its linear dependence on ω, the frequency parameter encountered earlier; it is also inversely proportional to the horizon radius r_h, a fact that will be useful soon. The logic behind this series of computations should now be clear. What we are looking for is the retarded Green's function in the zero momentum limit. This is easy to extract from (6.55) and can be written as (6.81), which contains precisely the two pieces computed above, namely the coefficient of E′_T/E_T in (6.71) and the ratio E′_T/E_T itself in (6.79). One additional input was the imaginary piece of (6.79), extracted in (6.80).
The reason for this extra bit of work is apparent: the spectral function is precisely the imaginary part of the retarded Green's function, as in (6.82), where T is the temperature, which will be related to the horizon radius r_h. Since (6.71) is entirely real, the imaginary piece of the retarded Green's function can only come from (6.79); any other contributions are suppressed by higher powers of 1/N and do not concern us here. Putting everything together then gives the required expression (6.83) for the spectral function, which is, as expected, proportional to g_sN_f and g_sM^2/N. It is also proportional to r_h (and to log r_h), so at zero temperature ρ(0, ω) = 0. We can use (5.13), or footnote 31, to express the combination br_h in terms of the resolution parameter. This way, the pre-factor multiplying log r_h in (6.83) depends on r_h only implicitly, and it brings out the resolution in the gravity dual rather succinctly. The two other functions appearing in (6.83) are defined in (6.84), where n_o is the numerical constant defined after (6.43), and β is defined in footnote 31. Note that if we use the strong string coupling result, as opposed to the weak string coupling analysis presented here (both at strong 't Hooft coupling, of course), β can be defined from (5.13) with β = c_1 = c_2. The coefficient c_1 appears in (5.52) and c_2 is bounded by (5.57). Following this logic, what we need are the g_sN_f-independent pieces to define β. Thus, if we take the negative definite constant piece of c_1 from (5.52) and use it to define both c_2 and β, we can ignore higher order g_sN_f dependences. Essentially, then, for both strong and weak type IIA couplings, β is a constant to O(g_sN_f), which in turn makes F_b another constant 34, which we shall call f_b. The worrisome feature, however, is the other function in (6.84), F_a, which depends on N, g_s and Z_uv. Both N and Z_uv, with Z_uv defined in (6.68), go to infinity, whereas g_s approaches zero. If we define ζ_1 ≡ g_s, ζ_2 ≡ 1/N and ζ_3 ≡ 1/Z_uv, then we can choose the behavior of each of these parameters such that (6.85) holds. 34 Recall that the parameters α_θ1 and α_θ2 are constants.
Here f_a is a constant 35. As T → 0, r_h vanishes, and from the expression (6.83) the spectral function also vanishes. We can therefore finally put everything together and argue for (6.86), where we have used (5.58) to express the RHS in terms of the sound speed. Of course, as mentioned above, (5.58) is a strong coupling result, so the comparison has to be made with c_1 and c_2 proportional to g_sN_f rather than constants (as opposed to the weak string but strong 't Hooft coupling answer). Taking all this into consideration, we see a clear linear dependence on (1/3 − c_s^2) at strong 't Hooft coupling, perfectly consistent with the results of sections 4 and 5.
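A minimal sketch of the extraction in (6.82), using the common convention ρ(ω) = −2 Im G_R(ω) (the paper's normalization may differ by an overall factor) and a toy retarded correlator whose imaginary part is linear in ω, mimicking (6.80):

```python
import numpy as np

# Spectral function from the retarded Green's function, cf. (6.82):
# rho(omega) = -2*Im G_R(omega) in a common convention (normalization hedged).
def spectral_function(G_R):
    return -2.0*np.imag(G_R)

omega = np.linspace(0.0, 1.0, 5)
G_R = 0.7 - 1j*0.05*omega          # toy correlator: Im part linear in omega
print(spectral_function(G_R))      # linear in omega and vanishing at omega = 0
```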
A few comments are in order now. Our analysis is based on small r_h, as derived in Appendix C, so the natural question is what happens when r_h is of order 1, i.e. the value given in (6.60). When the horizon radius is of order 1, we are at the point where new degrees of freedom are about to enter, i.e. we are in Region 2 of [58]. Therefore, unless we know the detailed metric configuration of Region 2 and beyond, we cannot perform the analysis as cleanly as we have done here, because of our definition of the radial coordinate as r = r_h e^Z. When r_h is small we are still in Region 1 of [58], and precise computations can be performed (as shown here).
Secondly, r_h itself is bounded below by (6.29). This bound is of course there to prevent any appearance of unphysical imaginary pieces in the computations. Clearly, for the range of Z of concern here, it poses no constraints. Thus, happily, all the results lead to the same concluding bound, linear in (1/3 − c_s^2).

The strong string coupling limit and pure classical supergravity

Most of the analysis of section 6 has been done with g_s → 0 and large M. This differs somewhat from section 5, where g_s = O(1), so the natural question is whether we can work through the analysis of sections 6.1 − 6.3 assuming (g_s, N_f) ∼ O(1) and N >> 1, as part of the MQGP limit of [63] 36. This is an unusual large N limit, but it still warrants the use of pure classical supergravity. To see this, one notes that by including terms of higher order in g_sN_f in the RR and NS-NS three-form fluxes than those considered in [63], as well as the NLO terms in the angular part of the metric, one sees that in the IR, in the MQGP limit, there occurs an IR color-flavor enhancement of the length scale compared to the Planckian length scale in the Klebanov-Strassler (KS) model [56] for large M, thereby showing that stringy corrections are suppressed. To see this more explicitly, we summarize here the main ideas of [94,103]. Using [58], let us define an effective number of colors as in (6.87), where M_eff and N^f_eff are the effective numbers of bi-fundamental and fundamental flavors respectively, defined for our background in (6.88) and (6.89); the (m, n) indices are summed from (m, n) = (0, 0) onwards, and henceforth, to avoid clutter, we use the Einstein summation convention. The coefficients k_mn ≡ k_mn(r, g_s) and f_mn ≡ f_mn(r, g_s), and therefore the effective flavors, are constructed from the higher order g_sN_f and g_sM^2/N corrections [58]. Combining these, it was argued in [94,103] that the length scale in the IR at r = Λ is dominated by (6.90). In the IR, relative to the KS geometry, we thus see that (6.90) implies the above-mentioned color-flavor enhancement of the length scale. Therefore in the IR, even for g_s = 0.45, M = 3 and N_f = 2, upon inclusion of the n, m > 1 terms in M_eff and N^f_eff in (6.89), the characteristic length scale in the MQGP limit [63] involving g_s ≤ 1 satisfies (6.91), where L_KS is the characteristic length scale of the Klebanov-Strassler model [56] in the far IR. Because of this enhancement the stringy corrections are suppressed, implying that one can still trust classical supergravity. It is, however, interesting to note that in the IR one can obtain g^2_YM = O(1) even for g_s → 0, provided N_f ≠ 0. To see this, let us first consider vanishing N_f. The NSVZ RG flow equation for the SU(M) gauge group that survives at the end of the Seiberg duality cascade gives us (6.92), where the RHS comes from the integral of the NS two-form field over a vanishing two-cycle S^2 on the type IIB side. This is of course the same two-cycle discussed at the beginning of section 4, parametrized by (θ_2, φ_2), on which we have M wrapped D5-branes. The question is whether (6.92) can allow g^2_YM = O(1). 35 In the MQGP limit wherein g_s ≲ 1, one can argue that f_a will be a finite non-zero constant as follows. As r_h < r_0, i.e. |log r_h| > |log r_0| (r_0 being the r at which the D3-branes have been entirely cascaded away, noting min(r) = r_h), instead of choosing r_h to satisfy (C.9), assume |log r_h| = N^{1/3} κ^{1/f}, 0 < f < 1 and κ = n_b g_sM^2/3 from (C.9). As Z_UV ∼ |log r_h| + log N^{1/3} ∼
Solving equation (6.92) gives the inverse YM coupling in terms of M and log r, at r = Λ. It is easy to see that, with M = O(1), this is only possible if Λ is proportional to the UV cutoff itself. Since we want to concentrate on far IR physics, such a choice is clearly not feasible. Additionally, since near the UV cutoff we expect the theory to become scale invariant, M automatically vanishes there.
On the other hand, when N_f ≠ 0, the above conclusion can change, because the dilaton on the gravity side is no longer a constant. Recall that, with N_f flavors, the dilaton takes the form (6.93) [57,58], where a^2 is the resolution parameter that we encountered earlier. Using the fact that we have an almost vanishing resolution parameter, and that the angular coordinates (θ_1, θ_2) are parametrized by (5.7), the inverse of the YM coupling now satisfies (6.94) at the scale r = Λ, measured with respect to the cutoff scale Λ_∞. What we are looking for now is a Λ in the IR at which g^2_SU(M) = O(1). The scenario is more subtle now because of the additional O(g_sN_f) pieces appearing in (6.94). These pieces come from carefully examining the NS B-field threading the vanishing two-sphere on which we have the wrapped D5-branes. The B-field is more non-trivial than the one above, and is given by (6.95), where the first term is precisely what we had on the RHS of (6.92) for vanishing N_f, and the second term involves the g_sN_f corrections. These correction terms have been worked out in [58], and may be expressed as (6.96), where we have removed any dependence on the resolution parameter when writing (6.96) from [58] 37. In fact, the O(g_sN_f) term alluded to in (6.94) comes precisely from Q in (6.96). There is, however, one subtlety associated with the angular variables θ_i and φ_2. Since the integral of the B_2 field over the two-sphere parametrized by (θ_2, φ_2) contributes to the YM coupling g^2_YM, one needs to be careful when imposing (5.7). One way would be to impose (5.7) on θ_1 in (6.95) and then integrate over θ_2; in that case an additional N dependence appears from the second term in (6.96). Alternatively, we could insert the value of θ_2 from (5.7) after integrating over the two-sphere. The latter would imply that the integration of the B_2 field over the two-sphere is concentrated mostly near the regime defined in (5.7). After the dust settles, the equation that we need to solve to determine Λ can be derived from (6.94) as (6.97), which is a quadratic equation to first order in g_sN_f. At higher orders in g_sN_f the equation becomes more complicated. The various coefficients of (6.97) are defined in (6.98) 38. Let us pause a bit to see which terms dominate in this set of coefficients. We want g_s → 0 and small N_f, but also very large N. Let us therefore take the limiting values (6.99) for g_s, N_f and N, where α_N can be a large number and 1 < b < 2. This clearly shows that the g_sN_f log N term in B dominates, so B^2 >> 4AC. Using this criterion and solving (6.97) immediately gives (6.100), implying that Λ can be in the deep IR. Hence one can obtain an O(1) g_YM in the IR without requiring an O(1) g_s, in the presence of flavors but not in their absence. 37 There is one subtlety that we are putting under the rug: a part of the B-field in (6.95) goes as (9/4π)(g_sM)(g_sN_f) log r log|a|, where a is the resolution parameter. This blows up in the limit a → 0, so one might worry that Q given in (6.96) is not well defined in this limit. This is not the case, however, because the derivation of the B-field in [58] was done with a non-zero resolution parameter; for zero resolution parameter the analysis has to be done separately. The result is then of course independent of the log|a| piece, and is as given in (6.96). 38 We have used the values of the integrals governing the B_2 field that follow from the θ_i values given in (5.7).
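The final step, solving the quadratic (6.97) in the regime B^2 >> 4AC, is worth making explicit: the two roots collapse to approximately −B/A and −C/B, and it is the small root that places Λ in the deep IR. A toy numerical illustration with placeholder coefficients:

```python
import numpy as np

# Solving A*x**2 + B*x + C = 0 when B**2 >> 4*A*C, cf. (6.97): the roots
# approach -B/A (large) and -C/B (small).  Coefficients are toy placeholders.
A, B, C = 1.0, 1.0e4, 2.0
disc = np.sqrt(B**2 - 4.0*A*C)
x_small, x_large = (-B + disc)/(2.0*A), (-B - disc)/(2.0*A)
print(x_small, -C/B)               # small root ~ -C/B: Lambda in the deep IR
print(x_large, -B/A)               # large root ~ -B/A
```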
In the IR, of course, N_f ≠ 0. Before ending this section, let us make a few observations. First, if we also take M to be very large, then the first term of C in (6.98) is suppressed by 1/M; this, of course, does not change the conclusion (6.100). Secondly, in section 6, (6.29) is replaced by the observation that for large Z the argument of the square root in (6.24) is manifestly positive, while for small Z it reduces to a positivity condition involving P_2 (given in (6.26)), which holds as long as log N, |log r_h| >> 1. The latter is obviously true from our earlier considerations. Therefore the argument of the square root in P(Z) in (6.24) is always positive.
Conclusions and discussions
In this work we have studied the bulk viscosity over the whole range of the 't Hooft coupling constant λ. One of the main goals of our study was to express the bulk viscosity as a function of the speed of sound within well-established first-principles theories. Our efforts went into clarifying possible differences in the parametric form of the ratio ζ/η obtained at different values of the coupling constant. Apart from the discussion of the final forms of the bulk viscosity, we have also elaborated on the analytic methods employed, adapting them to the regimes examined and justifying their relevance. We focused on the extreme limits of the coupling, where analytical methods are applicable. At weak coupling, kinetic theory was used, which is currently the most common effective approach to calculating the bulk viscosity and other transport coefficients. To confirm the validity of kinetic theory, we provided its justification from a more fundamental diagrammatic approach. At strong ('t Hooft) coupling, the UV-complete type IIB holographic dual (and its M-theory uplift when also addressing the strong string coupling limit) of large-N thermal QCD was employed. The intermediate coupling behavior, most relevant for the quark gluon plasma produced experimentally in heavy ion collisions, was also briefly discussed; there we mainly summarized known challenges related to the first-principles extraction of the bulk viscosity.
To discuss the weak coupling limit, we summarized and matched the analysis of the bulk viscosity of QCD done extensively within effective kinetic theory in Ref. [29] to the case where the interaction is governed by the 't Hooft coupling λ = g^2_YM M. In this case the bulk viscosity is controlled by gluons only, as the quark contributions are suppressed by a factor of 1/M, where M → ∞ is the number of colors. 39 The parametric form of the bulk viscosity as a function of the speed of sound is ζ/s ∝ 1/3 − c_s^2, while the ratio ζ/η ∝ (1/3 − c_s^2)^2. Then, starting from the Kubo formula, we performed a multi-loop analysis which enabled us to determine which scattering processes contribute to the collision kernel of the Boltzmann equation, and which provided a power counting at weak 't Hooft coupling and high temperature. Collecting all the evaluated diagrams, we exhibited a schematic procedure for deriving an integral equation which may be thought of as a diagrammatic representation of the Boltzmann equation. The integral equation is formed by an infinite number of planar diagrams with dressed propagators and vertices. Both number-conserving and number-changing processes have to be included in a complete treatment of the bulk viscosity. For the vertices, a separate integral equation, governed mainly by soft physics and capturing the LPM effect, has to be solved. The diagrammatic analysis presented in this work stands as the first explicit justification of the validity of the Boltzmann equation, whose solution is needed for the investigation of transport coefficients in SU(M) theories.
For the intermediate coupling region, we have summarized the state of knowledge on bulk viscosity studies. Although the prescription for calculating the bulk viscosity is given by the Kubo formula, it is difficult to reliably establish the hydrodynamic limit of the spectral function and to determine which physical phenomena may be responsible for its shape. We can therefore only conclude that the compiled findings do not allow for a quantitative first-principles determination of the bulk viscosity in this region, and that new methods and/or perspectives are needed.
After analyzing the weak and the intermediate 't Hooft coupling regimes, we turn to the next stage, i.e., the strong 't Hooft coupling regime. Clearly, neither pQCD nor lattice results can help us here. A new paradigm is needed, and it is given by the so-called gauge/gravity duality. This is a refined form of the famed AdS/CFT duality, constructed precisely to tackle strongly coupled gauge theories that are nonconformal. In section 4 we study an SU(M) gauge theory in the IR at high temperature (i.e., temperatures above the deconfinement temperature) and at strong 't Hooft coupling. We take large M, but keep the string coupling g_s very small, such that λ = g_s M is still very large. To avoid additional complications, we include no flavor degrees of freedom. Footnote 39: The number of colors is N + M in the UV and M in the IR; both are kept very large in sections 2.2 and 4, and N_f (along with the string coupling) could be taken to be O(1) in sections 5 and 6, keeping N very large as part of the "MQGP" limit.
In such a setup, the computation of the bulk viscosity boils down to the computation of metric fluctuations in the corresponding gravity dual. In section 4.2 we study the equations governing the fluctuations in two steps: first, in section 4.2.1, we relax some of the constraints and study a toy example which provides a nicely solvable system; second, in section 4.2.2, we perform a more precise and careful computation of the fluctuation equations. Knowing the precise fluctuations helps us to compute both the sound speed and the ratio of the bulk to the shear viscosity. In section 4.3 we perform the aforementioned computations and show that the ratio of the bulk to the shear viscosity is indeed bounded below by the deviation of the square of the sound speed from its conformal value.
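For reference, the "Buchel-like" bounds discussed here and in section 5 generalize Buchel's conjectured lower bound for strongly coupled gauge-theory plasmas,

\frac{\zeta}{\eta} \;\geq\; 2\left(\frac{1}{3} - c_s^{2}\right),

with the holographic computations below yielding the same parametric dependence on (1/3 − c_s²) but with theory-specific numerical coefficients.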
It is believed that the QGP is an example of a strongly coupled system at finite temperature wherein, unlike in most gravity duals, the gauge coupling, and hence the string coupling, is of O(1). Motivated by this, and with the idea of also including the flavor degrees of freedom as well as the UV region, in section 5 we calculate holographically, at finite string coupling, the deviation of the square of the speed of sound from its conformal value, the attenuation constant, and the ratio of the bulk and shear viscosities, and find a Buchel-like bound for the latter. Finite string coupling necessitates addressing these issues from the M-theory uplift of the type IIB construct of [58], which was obtained in [63] as the M-theory uplift of the SYZ type IIA dual. This also enjoys the additional benefit of not having to keep track of the NS5-brane degrees of freedom, which one would need to do when working with a single T-dual of the type IIB configuration of [58]. Based on [100,103], an equation of motion (EOM) for a combination of scalar modes of metric perturbations invariant under infinitesimal diffeomorphisms is constructed. Upon investigating this EOM near the horizon, one realizes that for a non-zero bare resolution parameter the horizon turns out to be an irregular singular point. Demanding the same of an ansatz for the solution, in section 5.2 the dispersion relation for the quasinormal modes yields not only the conformal values of the speed of sound and the attenuation constant but also their respective non-conformal corrections. Interestingly, for the case of a vanishing bare resolution parameter, by looking at the solution to the EOM near the asymptotic boundary, one realizes in section 5.3 that one cannot consistently impose a Dirichlet boundary condition at the asymptotic boundary; as in section 4, non-normalizable modes are required to propagate. In section 5.4, with a non-zero bare resolution parameter, we first show that the KSS bound on the shear-viscosity-to-entropy-density ratio is not violated once the non-conformal corrections are incorporated. We then obtain the bulk-viscosity-to-entropy-density ratio and the deviation of the square of the speed of sound from its conformal value, and confirm that the conformal values of both vanish; both are hence determined entirely by the non-conformality of the theory. One of the main results of this section is a crisp bound: ζ/η ≥ (91/5)(1/3 − c_s²) with(out) the flavor degrees of freedom. In section 6 we approach the issue of obtaining the deviation of the square of the speed of sound from its conformal value from two-point correlators involving gauge field fluctuations on the world-volume of flavor D6-branes, using the prescription of [107]. To begin with, one considers the weak-string-coupling, strong-'t Hooft-coupling limit. The fluctuations are considered over a background value of the gauge field, worked out in section 6.1, assumed to have only a temporal component and radial dependence. In the zero-momentum limit, interestingly, and as shown in section 6.2, there is only a single second-order equation for a gauge-invariant perturbation field, the 'electric field', which is solved in section 6.3.
Finally, the subtracted (zero-temperature from the non-zero-temperature) spectral function per unit frequency, in the vanishing-frequency limit, turns out to be proportional to the linear power of the deviation of the square of the speed of sound from its conformal value, thereby validating the results obtained in the previous sections 4 and 5. We conclude section 6 with some remarks (in section 6.4) arguing that this result remains unchanged even in the strong-string-coupling, strong-'t Hooft-coupling, or true MQGP, limit of [58].
Let us briefly discuss some future directions. It would be rather interesting to better probe the regime of intermediate 't Hooft coupling, where the number of colors is large and the gauge coupling is small, but the 't Hooft coupling is finite, i.e., neither small (weak coupling regime) nor large (strong coupling regime). As discussed, techniques based on QCD do not currently offer a reliable way to explore this region. The ansatz proposed for the spectral function parametrization does not properly capture the high-frequency tail, and the QCD sum rule cannot be directly applied to constrain the bulk viscosity. Since it is not clear how to handle these issues with QCD tools, the region can alternatively be explored within the supergravity framework. One could invoke higher derivative corrections in the supergravity action, which would then back-react on the background. The same, in the context of N = 4 SYM, has been studied recently in [108]. For the present case there are two ways to go about it. One, we could start from the type IIB background of [58], consider corrections to the metric and fluxes in powers of α′, and solve the modified equations of motion up to O(α′³). Two, we could use the MQGP limit (with g_s ∼ O(1), large N, but finite g_s M), start with the D = 11 supergravity action up to sextic power in the eleven-dimensional Planck length [109], and construct solutions to the EOMs as Planck-length perturbations of the M-theory uplift of [63]. Clearly the latter is a bit more practical because of the reduced number of fields in M-theory. Following this, one can then include metric perturbations, solve their EOMs, and hence see the effects of the inclusion of the aforementioned higher derivative terms on some spectral functions. It would be interesting to evaluate the non-zero-frequency contribution to the spectral function per unit frequency and compare with previous studies on this topic in N = 4 SYM, such as [110] (which had excluded higher derivative corrections).
Another possible future direction would be to look at simultaneously turning on gauge and vector modes of metric perturbations [81], and then see the modification of the spectral function of gauge fluctuations considered in section 6. The analogous computation in the type IIB context, for evaluating electrical and thermal conductivities, was considered in [103].
A.1 The equation of motion for the fluctuation mode H_tt
To elaborate the implications of the above discussion, let us discuss the EOM for H_tt(u). It can be expressed in terms of the other fluctuation modes as in (A.7), which at first glance seems to be well defined in the regimes r_h > 0 and u ≥ 1.32. The precise regimes of interest are, however, not important for the kind of details that we are aiming for here; this will be illustrated later. Note also that the A_i are not constants but certain nested functions, in which a denominator of the form (a, b) is to be understood as being identified with the subscript bracket A_(a,b), so that individual relations for A_a and A_b may be constructed. The nested function D_1 is expressed in terms of B(u) in a form somewhat similar to (A.9) above; together they determine the coefficient A_1 in (A.7). The next coefficient, A_2, is determined in terms of D_3 and D_4. The former is simple, given in (A.11) and expressed in terms of B(u), whereas the latter is more involved; in it, the B(u)-independent terms appear, in our notation, as B^(0)(u), and one may verify the uniqueness of the proposed form (A.13). This is also evident from the next coefficient, namely D_6, which may be determined from the coefficients c_6nm. Its structure is somewhat similar to (A.14), i.e., the coefficient D_5, in the sense that the B^(0), B^(1), B^(2) and B^(3) terms are distributed in an identical way (although the precise c_knm coefficients differ), as the derivations of these terms involve similar manipulations of the Einstein equations. This is also evident from the form of the next coefficient, D_8, which is structurally similar to D_6 in (A.15). On the other hand, the last two coefficients require a slightly different analysis, and therefore we expect them to differ from the above D_k coefficients. This becomes clear from the expression for D_9, which is written as

D_9 ≡ 6 g_s π (u^4 − 15)(u^4 − 1)(q^2 + w^2) B'(u) u^5 + 8 g_s π (u^4 − 3)[(u^8 + 8u^4 − 3) q^2 + (4u^4 + 3) w^2] B(u) + [(u^4 − 3) − 2 r_h^2 (u^8 − 12u^4 + 3)] u^2 + 9 g_s π (u^4 − 1)[(u^4 − 1) q^2 + w^2] B'(u) u^2 + 8 g_s N π [(u^8 + 8u^4 − 3) q^2 + (4u^4 + 3) w^2],   (A.18)

a form which, although similar to (A.13), differs from the other D_k coefficients. The final coefficient, D_10, may be quoted to illustrate the same point. This completes our analysis of the EOM (A.7) for H_tt(u). Our next step is to analyze the EOM for H_s(u) defined above in (A.6).
A.2 The equation of motion for the combined mode H_s
The functional form for H_s(u), as evident from (A.6), can be expressed as a certain linear combination of H_xx and H_yy. As in (A.7), the EOM for H_s(u) can be written in the form (A.20), where both the denominator and the numerator have the same set of factors as the denominator and the numerator of (A.7); the only things that differ are the actual values of the B_k. The functional forms for the B_k may be expressed in terms of certain nested functions F_k, which may be compared with the D_k functions in (A.8). In fact, one may even express the functional forms of the F_k as power series in u and derivatives of B(u), much like (A.13), but now with coefficients g_knm instead of c_knm. The coefficients g_knm are independent of u and may easily be determined, as before, by analyzing the corresponding Einstein equations. For example, finding g_1nm and g_2nm immediately reproduces the functional forms for F_1 and F_2, which, as expected, take the form (A.13). One may also easily see the pattern repeating for the next two coefficients, F_3 and F_4. We can now go to the other set of coefficients, where we can see how they relate to the D_k coefficients studied above. A priori there shouldn't be any apparent connection, but the functional forms for F_5 and F_6 are similar to what we had earlier: F_5 is somewhat reminiscent of (A.14), and comparing the functional form of F_6 with those of F_1 and F_2 shows that they are related, so that knowing F_1 determines the functional forms of F_2 as well as F_6. In fact, one can show that F_1, or F_6, also fixes the functional forms of two other coefficients, F_8 and F_10, as in (A.27). The remaining two coefficients, F_7 and F_9, are however more complicated and are not related to F_1 in any simple way. For example, the functional form for F_7 follows a pattern similar to (A.13) but cannot be decomposed in terms of any of the above F_k coefficients; a similar statement may be made for the coefficient F_9. With the indices a, b taking values in (t, x, y), the components H_ab(u) may, via (A.30), be expressed in terms of one another and H_tt(u). This pattern continues for the component H_yy(u):

H_yy(u) = −( q (u^4 − 1) H_tt'(u) + 2 q u^3 H_tt(u) + w H_tx(u) ) / ( 2 q (u^4 − 1) ),   (A.31)

implying that solutions may be found once we know the background values. Finally, combining the above set of equations with the defining equation for H_s(u), namely (A.6), gives us a way to formulate the EOM for H_xx(u). Basically this is all we need to construct gauge-invariant perturbation modes. For us, following [100], a specific combination of the above perturbations, Z_s(u) in (A.33), is used to quantify the required perturbation. Once m(u) is determined, l(u) can also be obtained by equating the coefficient of H_tt from Z_s(u) to the sum of the coefficients of H_tt from Z_s'(u) and Z_s''(u). In (5.21) we quoted the functional form of l(u) for u → 1. The generic form of l(u) is straightforward to obtain but technically cumbersome, and is therefore left as an exercise for the reader.
After the dust settles, one may verify that the EOM (A.34) is satisfied by the gauge-invariant choice of the perturbation Z_s(u) in (A.33).
B. A derivation of the on-shell action and the Green's function
The four-dimensional action that we considered in (6.51) uses the pull-back metric G_µν constructed out of the type IIA metric, the NS B-field and the world-volume gauge field background. When the gauge field fluctuation, whose Fourier component is written as A_µ in (6.13), is also taken into account, the four-dimensional action takes the form (B.1), where x^0 ≡ t, T_D6 is the tension of the probe D6-brane, and Ω_2 is the volume of the two-sphere that we had in (6.3). The presence of the Z derivative in the integrand, despite the Z variable having been integrated out, comes from a total derivative term, as may be inferred from (6.51). This also explains the two limits of Z in (B.1). Note that we took Z_h to be the lower limit of Z, to be consistent with the lower bound (6.29). However, what we seek here is in fact the on-shell action and the Green's function at the boundary Z = Z_uv, so the near-horizon geometry is not too relevant for us. At the boundary F_tZ = −F_Zt = 0, so we must set G_tZ = 0 and replace √−G by its value at G_tZ = 0. Incorporating these changes, the boundary value of the on-shell action may now be rewritten from (6.52) as (B.2), where we have suppressed the ω dependence; we would have to revert to the full G_µν components if we wanted to go to Z_h, i.e., the lower limit of Z. Recall also that we have used the EOM to get to the boundary action (B.2), so it makes sense to use the EOM further to simplify this action. For example, we can use (6.14) to rewrite G_tt as in (B.3), valid for non-vanishing ω. Plugging (B.3) into (B.2) then gives the action (B.4), which is similar to the action (B.2) except for three major differences: one, the appearance of 1/ω as an overall factor; two, the sum over a now running from 1 to 3; and three, the appearance of three new variables E_xa for a = 1, 2, 3. The new variables are defined in (B.5), which is clearly borne out from (B.3) and explains the appearance of the 1/ω suppression of the full action. We could also use (B.5) to express A_y and A_z in terms of E_x2 and E_x3 respectively, but we won't do this right away. Instead, let us use the first equation in (B.5) to write (B.6), where to get the second equality we have used equation (B.3). To complete the picture we need the ratio of the two metric components. Using the fact that r ≡ r_h e^Z, we can easily argue that G_xx/G_tt = −1/(1 − e^{−4Z}). Plugging this into (B.6) gives (B.7). This is all we need, because the derivatives on the other components are obtained by the straightforward replacements of E_xa with a = 2, 3. Therefore, combining (B.7) with (B.5) and plugging the result into (B.4) gives the final action (B.8), which is the action given earlier in (6.53). The k_a² appearing in (6.53) are the poles in (B.8). Since we are only interested in the x_1 part of the fluctuation, the values of k_2² and k_3² are not very useful for us; of course one may perform a more generic study, but we will not do so here. For the simplest case, the next step would be to define (6.54) and then rewrite the action as in (6.55). From here the story follows as depicted in section 6.3.
C. Effective number of three-brane charges with background three-forms and the horizon radius
The horizon radius that we computed in (6.60) was typically an O(1) number, written as r_h = 1 − 2ε, for a small parameter ε, by demanding the vanishing of (6.59). The small parameter is defined in terms of the resolution parameter b and g_s N_f, both of which are small. This choice of the horizon radius is not very useful for us, because it would imply that r_h is placed right at the point where new degrees of freedom appear to UV-complete the system. With the definition of the radial coordinate r as r = r_h e^Z, this means Z only measures the geometry beyond r_h, i.e., the geometry of Regions 2 and 3. The question then is how to place r_h deep inside Region 1, where the background is well known. However, we cannot make r_h arbitrarily small, as there exists a lower bound on r_h given in (6.29).
If r_h^(o) denotes the lower bound, then it is given by (C.2), with C being an integration constant that appeared in (6.5), and we expect the horizon radius to satisfy r_h > r_h^(o). Such a lower bound is necessary, as otherwise an expression like P(Z) in (6.24) would develop an unphysical imaginary piece.
To find an appropriate r_h it is easier to do the analysis on the type IIB side instead of the mirror type IIA side. Such an analysis won't change the expression for r_h, as the mirror transformation à la SYZ [64] keeps the radial coordinate unchanged. To proceed, let us define an effective number of three-brane charges as

N_eff = ∫_{M_5} (F_5 + B_2 ∧ F_3),   (C.3)

where B_2, F_3 and F_5 are given, for N_f = 0 and in the Baryonic branch, in (4.1). The five-dimensional internal space M_5, with coordinates (θ_i, φ_i, ψ), is basically the resolved warped-deformed conifold of (4.2), or its simplified avatar given in (4.5).
If we now collectively denote the lower limits of all the angular variables (θ_i, φ_i, ψ) as R_− and their upper limits as R_+, and also use the fact that at fixed r, dr = 0, then the effective number of three-brane charges takes the simplified form (C.5), reducing the expression (C.3) tremendously. Here N denotes the integral over F_5, and is therefore related to the integer number of D3-branes on the dual gauge theory side at the Higgsing scale. The second term, combined with N, then denotes the effective number of cascading D3-brane charges at the scale r = r_0. The functional forms for (a_0, e_0, f_0, b_2, c_2, d_2) can be extracted from (C.5). Combining everything leads to the expression (C.8) for N_eff, which involves a² and log r dependent terms and is of the schematic form

N_eff = N [ 1 + 6π log r / ((3 g_s N_f log r + 2π)(9 g_s N_f log r + 4π)) + … ],

where we have kept only terms linear in g_s M²/N, linear and quadratic in g_s N_f, and ignored higher-order terms. Of course, one may question the logic of suppressing a term linear in log N. Such a term typically comes with (g_s N_f)² and with either a² or higher powers of r = r_0. Since we will be assuming r_0 ≪ 1, we can safely ignore the log N piece. Note that the assumption of small r_0 is crucial here: it implies the domination of g_s N_f |log r_0| over the other constant pieces in (C.8). Implementing this (see footnote 40) and putting N_eff = 0 gives an estimate for r_0, which we identify with the horizon radius r_h, in terms of n_b ≡ 3(6π)^{1/3}. Since both g_s N_f and g_s M²/N are very small quantities, the horizon radius is indeed deep inside Region 1. Note that this estimate has to be bigger than the lower bound r_h^(o), which in turn has the range (C.2). Footnote 40: Otherwise one would have to solve a cubic equation in log r_0 from (C.8); this has one real solution, which we can identify with the horizon radius r_h.
"Physics"
] |
Predicting rice phenotypes with meta and multi-target learning
The features in some machine learning datasets can naturally be divided into groups. This is the case with genomic data, where features can be grouped by chromosome. In many applications it is common for these groupings to be ignored, as interactions may exist between features belonging to different groups. However, including a group that does not influence a response introduces noise when fitting a model, leading to suboptimal predictive accuracy. Here we present two general frameworks for the generation and combination of meta-features when feature groupings are present. Furthermore, we make comparisons to multi-target learning, given that one is typically interested in predicting multiple phenotypes. We evaluated the frameworks and multi-target learning approaches on a genomic rice dataset where the regression task is to predict plant phenotype. Our results demonstrate that there are use cases for both the meta and multi-target approaches, given that overall, they significantly outperform the base case.
Introduction
Machine learning algorithms are increasingly being adapted for the prediction of plant phenotypes (Grinberg et al. 2016, 2019). This task is most commonly regression based, as most agronomic phenotypes are quantitative. This observation is true of rice (Spindel et al. 2015), the most agronomically important crop in the world, as a significant proportion of the global population relies on it for their dietary needs (Maclean et al. 2013). With a growing global population, estimates suggest that we need to double rice yields over the next few decades (Ray et al. 2013; UN 2015). Therefore, it is crucial that we develop high yielding varieties that are resilient to an increase in biotic and abiotic stresses caused by climate change (Tai et al. 2014). The predictive phenotype models built for such plant populations are most commonly used in genomic selection (GS). In GS, these predictive models are used to estimate the likelihood that an individual in a population will express a trait of interest. This likelihood is expressed as a genomic estimated breeding value (GEBV) and is used by plant breeders to select individuals that will serve as parents for the next generation of progeny. Therefore, it is desirable that the models used to estimate GEBVs are as accurate as possible.
GS has only recently been adopted in rice (Grenier et al. 2015), and a model based on a single learning algorithm is often used for phenotype prediction, most commonly a variant of the best linear unbiased predictor (Grenier et al. 2015; Onogi et al. 2015). In this context, we propose the use of meta-learning, which seeks to improve overall predictive accuracy by leveraging the predictive power of multiple learning algorithms, and which has been shown in other domains to outperform a single learning algorithm when the goal is to optimize predictive accuracy (Jahrer et al. 2010). The process can be broadly split into two main steps: a meta-feature generation step and a meta-feature integration step. In the former, a set of base models is built using a collection of learning algorithms. Each base model is then used to predict meta-features, which are predictions of a phenotype of interest. In the latter, the meta-features generated in the previous step are combined using another learning algorithm to form the final prediction.
A vital consideration is the nature of the attributes or features present in the input data used to build phenotype prediction models. The input data is often genomic, with features that are representative of the genetic diversity present in a population and that lie at different loci across an organism's genome (Spindel et al. 2015). These features are themselves representative of genes which control phenotypes and are located on different chromosomes. Therefore, the features in such genomic data can naturally be grouped by chromosome. In typical predictive experiments, the feature groupings by chromosome in the genomic data are ignored when models are built. The advantage of this approach is that potential interactions between features belonging to different chromosomes are captured. However, it may lead to suboptimal predictive accuracy if features lie on a chromosome whose genes are not associated with a phenotype, as they introduce noise into a built model. Therefore, it might be the case that systematically diminishing the effects of features on irrelevant chromosomes leads to higher accuracy. To address this problem, we propose two meta-learning frameworks which seek to improve phenotype prediction accuracy. The first ignores the feature groupings present in the input genomic data, and the other does not (Orhobor et al. 2018).
Given that one is typically interested in predicting multiple phenotypes, we considered the viability of multi-target regression for phenotype prediction, where the interest lies in building models that simultaneously predict multiple outputs (Aho et al. 2012;Appice and Džeroski 2007;Kocev et al. 2009;Spyromitros-Xioufis et al. 2012;Tsoumakas et al. 2014). The key insight of this approach is that by jointly learning models for different outputs, one is able to leverage the relationships between the outputs, which may be correlated, in building better models. This approach has been applied in various fields, and like meta-learning, has been shown to outperform a single base model (Han et al. 2012;Kocev et al. 2009;Tuia et al. 2011).
The remainder of this paper is organized as follows. In Sect. 2 we present the different considerations in meta-feature generation and integration, and in Sect. 3, we describe the proposed frameworks. In Sect. 4, our experimental setup is given, detailing the learners used in our evaluation. In Sect. 5 we discuss the outcome of evaluating the proposed frameworks, where our results show that there are use cases for both. Lastly, we conclude in Sect. 6.
Background
Rather than using a single learning algorithm, meta-learning seeks to improve the predictive accuracy of models used to predict phenotype by combining the predictive power of a set of base learners via a combining (meta-level) learner. For example, assume a rice population with input genomic data (the learning set) where one is interested in predicting grain width. Furthermore, assume that the goal is to improve predictive accuracy by combining the predictive power of random forests (RF) (Breiman 2001) and support vector regression (SVR) (Cortes and Vapnik 1995) using simple linear regression (LR). RF and SVR are then the base learners, while LR is the combining learner.
To amalgamate the predictive power of RF and SVR, each is used independently to build a model to predict grain width, and the predictions made by these models are treated as grain width meta-features. Meta-features are typically generated by resampling the learning set using v-fold cross-validation (Breiman 1996; Parmanto et al. 1996), where each fold serves as a validation set and the remainder as a training set. We adopt this approach in the proposed frameworks. The first advantage that v-fold cross-validation offers is computational: given the advances in genotyping and sequencing technologies, the genomic data used in phenotype prediction experiments typically has on the order of a million input features (Alexandrov et al. 2015). Building a single model therefore takes a substantial amount of time, so other resampling methods such as Monte Carlo cross-validation (Xu et al. 2007) may be infeasible. The second advantage is a reduction in overfitting. As stated earlier, genomic data can have on the order of a million input features, so there is potential for overfitting, as the number of features often far outnumbers the number of samples (p ≫ n). Using our example, assume 3-fold cross-validation in the meta-feature generation step. In this case, RF and SVR are each used to build three models on the different training sets, which are then used to predict three meta-feature vectors on the validation sets. This means that we end up with three independent meta-feature matrices, with one column per base learner. Therefore, three sets of combining weights can be learned using LR and applied to the predictions made on unseen data. By doing this, we obtain combining weights that do not fit closely to one set of examples. A similar approach has been applied to positive effect in super learners (Van der Laan et al. 2007).
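As an illustration of this resampling scheme, the following is a minimal Python sketch of out-of-fold meta-feature generation with two base learners. The authors' actual implementation is in R; the learner settings, function names and fold seed here are hypothetical and purely illustrative.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

def generate_meta_features(X, y, X_test, base_learners, v=5, seed=1):
    # Step 1 of the frameworks: per fold, fit each base learner on the
    # training portion, then predict meta-features on the held-out fold
    # (validation meta-features) and on the test set (test meta-features).
    V, y_folds, T = [], [], []
    splitter = KFold(n_splits=v, shuffle=True, random_state=seed)
    for train_idx, val_idx in splitter.split(X):
        val_preds, test_preds = [], []
        for make_learner in base_learners:
            model = make_learner()  # a fresh model for every fold
            model.fit(X[train_idx], y[train_idx])
            val_preds.append(model.predict(X[val_idx]))
            test_preds.append(model.predict(X_test))
        V.append(np.column_stack(val_preds))   # one column per base learner
        y_folds.append(y[val_idx])
        T.append(np.column_stack(test_preds))
    return V, y_folds, T

base_learners = [lambda: RandomForestRegressor(n_estimators=1000, random_state=1),
                 lambda: SVR(kernel="rbf")]

With v = 3 this yields three validation meta-feature matrices and three test meta-feature matrices, exactly as in the example above.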
The diversity of the set of base models used to generate the meta-features is vital, as it is desirable for the base models to be incorrect in different ways (Caruana et al. 2004). That is, it is better for their predictions on some test set to be wrong on different samples, so that the amalgamation of their predictions yields improved results. There are two main ways of achieving this. The first is to use a set of different base learners, as alluded to in our example, since they make different assumptions about the nature of the relationships between the features in the input data (Džeroski and Ženko 2004). For example, RF might make predictions based on nonlinear interactions amongst the features, whereas nearest neighbour techniques (Altman 1992), which consider the level of relatedness between samples, might yield a unique perspective. The second way of achieving model diversity is by varying the input data. That is, the input data can be split into multiple datasets which have different subsets of the features from the original. A base learner can then be used to build models on each of these new datasets, which are then used in the generation of meta-features. This approach is used in the stacked interval partial least squares framework (Ni et al. 2009), where meta-features from various intervals in spectral data are combined using partial least squares. We have adopted the first approach in both of the proposed frameworks. The second is used only in the framework in which feature groupings are considered. The main difference between what we propose and the work using partial least squares (Ni et al. 2009) is that we use an ensemble of base learners for each input data subset.
Having generated a set of meta-features, the next step is to integrate them, creating the final prediction. In our example, this entails integrating the meta-feature predictions made by RF and SVR. Several integration methods have been proposed; however, most are better suited to classification than to regression problems (Džeroski and Ženko 2002; Ting and Witten 1999). In a regression setting, meta-feature integration is done using weights. These weights are coefficients which determine how much each base learner's meta-feature will influence the final prediction. A constant or dynamic weighting approach can be used (Merz 1998). Constant weighting in its simplest form involves averaging the meta-feature values for each sample. If the meta-features generated by the base models are incorrect on different samples but are all mostly accurate, averaging the meta-features improves overall accuracy by adjusting the incorrectly predicted samples. A more sophisticated constant weighting approach is to learn the weights using a combining learner, which is LR in our example. Note that on a test set, the learned weights are applied uniformly to every sample. We utilize both of these constant weighting approaches in the proposed procedures. In contrast, dynamic weighting assigns individual weights to each sample in a test set. This is done by learning individual weights for each sample in the test set using only the most closely related samples in the learning set (Rooney et al. 2004). This approach is computationally expensive in terms of time, and we do not use it in the proposed procedures. However, we conjecture that it may yield interesting results, and it will be a subject of future study.
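A toy numeric example of the simplest constant weighting: suppose two base models predict a trait whose true value is 10.0 for both samples. The snippet below (plain numpy; the values are invented for illustration) shows averaging pulling two oppositely wrong predictions back toward the truth.

import numpy as np

meta = np.array([[10.1,  9.8],   # both models close for sample 1
                 [12.0,  8.2]])  # models wrong in opposite directions for sample 2
print(meta.mean(axis=1))         # -> [9.95, 10.1], both near the true value 10.0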
The natural feature groupings present in the genomic data used for phenotype prediction can also be thought of as views in multi-view learning. This assertion is based on the fact that the groups in this context are chromosomes, which carry genes that may influence a phenotype of interest. Therefore, each group of features represents a different perspective or view in terms of gene-phenotype associations. Several approaches have been proposed in multi-view learning (Xu et al. 2013), and multiple kernel learning (MKL) (Sonnenburg et al. 2006) is the most closely related to the current discourse. In typical multi-view learning problems, the views are often distinct, with different underlying structures and distributions of the input features. In MKL, learning algorithms that are best suited to each distinct view are used, and their predictions are then combined (Cortes et al. 2009; Lanckriet et al. 2004). This approach is similar to what we propose, in that a combining learner is used to integrate the meta-features of different learners. However, our proposal differs in that multiple learners are used within each group or view to form a consensus on its influence on a trait.
As stated in the introduction, multi-target learning involves simultaneously learning models for different outputs to leverage output relatedness. Multi-target learning approaches have been classified into problem transformation methods and algorithm adaptation methods (Borchani et al. 2015). In problem transformation methods, the model building process is modified to accommodate several outputs, which usually involves augmenting the predictive features with the outputs before building the model. Examples of such methods are multi-target regressor stacking, ensemble of regressor chains, and ensemble of regressor chains corrected (Spyromitros-Xioufis et al. 2012). We considered all three of these methods in our evaluation, as they are the most closely related to the proposed frameworks in that they can be used independently of a particular learning algorithm. For algorithm adaptation methods, known algorithms such as SVR, which are typically used in single-target problems, are adapted to a multi-target setting. Several such methods have been proposed (Abraham et al. 2013; Appice and Džeroski 2007; Ikonomovska et al. 2011; Sánchez-Fernández et al. 2004); however, we do not consider them in our evaluation, and they could be a subject of future study in the phenotype prediction domain.
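To make the problem transformation idea concrete, here is a minimal Python sketch of multi-target regressor stacking under the setup described later (out-of-fold first-stage predictions augment the training features; full-model predictions augment the test features). It is illustrative only, and any learner could be substituted for SVR.

import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR

def mtrs_fit_predict(X, Y, X_test, make_learner=lambda: SVR(), v=5):
    # First stage: one model per target.
    t = Y.shape[1]
    oof = np.column_stack([cross_val_predict(make_learner(), X, Y[:, j], cv=v)
                           for j in range(t)])
    stage1 = [make_learner().fit(X, Y[:, j]) for j in range(t)]
    test_meta = np.column_stack([m.predict(X_test) for m in stage1])
    # Second stage: refit per target on the augmented feature matrices.
    X_aug, Xt_aug = np.hstack([X, oof]), np.hstack([X_test, test_meta])
    return np.column_stack([make_learner().fit(X_aug, Y[:, j]).predict(Xt_aug)
                            for j in range(t)])

Ensembles of regressor chains can be sketched analogously by fitting the targets in several random orders, with each model seeing the predictions for the targets earlier in its chain; scikit-learn's sklearn.multioutput.RegressorChain provides one such chain.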
Proposed frameworks
In this section, we describe the two proposed meta-learning frameworks, framework A and framework B. Framework A is for a situation in which the feature groupings present in an input dataset are ignored, and framework B is for a situation in which feature groupings are considered.
Framework A
The motivation for this framework is the overall improvement of phenotype prediction accuracy by leveraging the predictive power of multiple learning algorithms. In this case, we assume that although the features in an input dataset can be grouped by chromosome, these groupings are ignored when building a predictive model. We first describe the procedure using an example, followed by a more formal description.
Assume a scenario with a learning and a test genomic dataset, and the goal of predicting grain width. The test set contains samples for which we want to predict the phenotype, and it is not used to build models. The two base learners are RF and SVR, and the combining learner is LR. We also assume v folds. For the meta-feature generation step, first split the learning data into v folds. Using each fold as a validation set and the remainder as a training set, build an RF and an SVR model for grain width on the training set, then predict the learning meta-features using the validation set and the test meta-features using the test set. At the end of this, v sets of learning and test meta-feature matrices have been generated, all with two columns corresponding to the predictions made by RF and SVR.
For the integration step, form a single test meta-feature matrix, T_avg, by averaging the v predictions made by each base model (RF and SVR). Using LR, learn combining weights with each of the v learning meta-feature matrices. This produces v sets of weights. Apply each of these weight sets to T_avg, producing v predictions. Finally, average these v predictions to form the final prediction for grain width. More formally: assume a learning set, a test set with samples for which we want to predict the phenotype, a set of base learners, a combining learner, and v cross-validation folds.
Step 1
1. Split the learning set into v folds, aiming for an approximately equal number of samples in each fold.
2. For each fold v:
(a) validation set = current fold.
(b) training set = the combination of the other folds.
(c) build b base models using the base learners on the training set.
(d) predict the validation response using the base models, generating a meta-feature matrix V_v ∈ ℝ^(m×b), where m is the number of samples in the vth fold and b is the number of base models.
(e) predict the test response using the base models, generating a meta-feature matrix T_v ∈ ℝ^(n×b), where n is the number of samples in the test set and b is the number of base models.
3. Output: (a) the set of validation meta-feature matrices V = (V_1, …, V_v), and (b) the set of test meta-feature matrices T = (T_1, …, T_v).
Step 2 Using V and T from step 1 and a combining learner:
1. Average the v test meta-feature matrices in T. The average predictions for all base models can then be represented as a single matrix T_avg ∈ ℝ^(n×b), where n is the number of samples and b is the number of base models.
2. Learn combining weights on each validation meta-feature set in V using the combining learner. This produces v weight sets, which are applied to T_avg, producing predictions P_1, …, P_v. The final prediction is given by the average of P_1, …, P_v.
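Continuing the Python sketch from Sect. 2 (illustrative only; the authors' implementation is in R), step 2 of framework A can be written directly from the two items above:

import numpy as np
from sklearn.linear_model import LinearRegression

def framework_a_predict(V, y_folds, T, combiner=LinearRegression):
    # 1. Average the v test meta-feature matrices into T_avg (n x b).
    T_avg = np.mean(T, axis=0)
    # 2. Learn one set of combining weights per validation fold, apply
    #    each to T_avg, and average the resulting v predictions.
    fold_preds = [combiner().fit(Vi, yi).predict(T_avg)
                  for Vi, yi in zip(V, y_folds)]
    return np.mean(fold_preds, axis=0)

Here V, y_folds and T are the outputs of the generate_meta_features sketch given earlier.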
Framework B
Like framework A, this framework is motivated by improving overall phenotype predictive accuracy by leveraging the predictive power of multiple learning algorithms. However, in contrast to framework A, the feature groupings present in the input genomic data are considered. The rationale is that, for phenotype prediction, including features from regions containing genes that are not associated with a trait might only serve to introduce noise into a built model, leading to suboptimal predictive accuracy. Therefore, systematically diminishing the influence of such features might be better.
For a general genomic dataset, it is assumed that the group to which each feature belongs is known, and that all features in the dataset have been separated into their respective groups, c. That is, a general dataset D ∈ ℝ^(m×f), where m is the number of samples and f is the number of features, has been separated into c subsets, D_1, …, D_c, such that the intersection between the features in any pair of subsets is empty and the union of the features in all subsets is equal to the features in D.
The procedure for this framework can be described using the example in Sect. 3.1. However, we assume that both the learning and test datasets have been split into their c subsets by chromosome. For the meta-feature generation step, first split the learning set into v folds across all c data subsets, ensuring that across each subset the same samples are in each split. Using each fold as a validation set and the remainder as a training set in all c subsets, build an RF and an SVR model for grain width on each training subset, then predict the learning meta-features using the corresponding validation subset and the test meta-features using the corresponding test subset. At the end of this, v sets of learning and test meta-feature matrices have been generated for the c subsets, all with two columns corresponding to the predictions made by RF and SVR. Therefore, there are v × c meta-feature matrices for the learning and test sets. For the learning meta-feature matrices, merge all c subsets for each fold. This produces v learning meta-feature sets, where each set has c pairs of RF and SVR meta-features, or c × p meta-features. For the test meta-feature matrices, first form a single test meta-feature matrix for each subset, T_avg^c, by averaging the v predictions made by each base model (RF and SVR) within that subset. These c averaged test meta-feature matrices are then merged in the same order in which the learning meta-feature matrices were merged, forming T_merged.
Using LR, learn combining weights with each of the v merged learning meta-feature matrices. This produces v sets of weights. Apply each of these weight sets to T_merged, producing v predictions. Finally, average these v predictions to form the final prediction for grain width. More formally: assume a learning and a test set that have been split into c subsets according to the chromosome to which features belong, a set of base learners, a combining learner, and v cross-validation folds.
Step 1
1. Split the learning set into v folds across all c subsets, aiming for an approximately equal number of samples in each fold and ensuring that the same samples appear in each fold for every subset.
2. For each subset and each fold, use the fold as a validation set and the combination of the other folds as a training set, build b base models on the training set, and predict the validation and test responses, as in framework A.
3. This produces, for each subset, a set of v validation meta-feature matrices and v test meta-feature matrices, collected as V_1, …, V_c and T_1, …, T_c.
4. Merge V_1, …, V_c in order for all v validation meta-feature sets, creating v merged validation meta-feature sets V_merged = (M_1, …, M_v), with each M_i ∈ ℝ^(m×p), where p is b × c.
5. For each test meta-feature subset T_1, …, T_c, average the v predictions of each base learner. This produces the average prediction matrices of all base models for all c subsets, T_avg^1, …, T_avg^c. Merge all c average prediction matrices in order to form T_merged ∈ ℝ^(n×p), where p is b × c.
6. Output: (a) the set of v merged validation meta-feature matrices V_merged; (b) the merged test meta-feature matrix T_merged.
Step 2 Using V_merged and T_merged from step 1 and a combining learner:
1. Learn combining weights on each validation meta-feature set in V_merged using the combining learner. This produces v weight sets, which are applied to T_merged, producing predictions P_1, …, P_v.
2. The final prediction is given by the average of P_1, …, P_v.
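A compact Python sketch of the whole of framework B (again illustrative; the group subsets, learner choices and names are assumptions, and the authors' implementation is in R):

import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression

def framework_b_predict(X_groups, y, Xt_groups, base_learners,
                        combiner=LinearRegression, v=5, seed=1):
    # One shared fold assignment across all c chromosome subsets.
    folds = list(KFold(n_splits=v, shuffle=True,
                       random_state=seed).split(X_groups[0]))
    V_merged = [[] for _ in range(v)]
    y_folds = [y[val] for _, val in folds]
    T_parts = []
    for Xc, Xtc in zip(X_groups, Xt_groups):        # one subset per chromosome
        T_c = []
        for i, (tr, val) in enumerate(folds):
            vp, tp = [], []
            for make_learner in base_learners:
                m = make_learner().fit(Xc[tr], y[tr])
                vp.append(m.predict(Xc[val]))
                tp.append(m.predict(Xtc))
            V_merged[i].append(np.column_stack(vp))
            T_c.append(np.column_stack(tp))
        T_parts.append(np.mean(T_c, axis=0))        # fold-averaged test meta-features
    V_merged = [np.hstack(mats) for mats in V_merged]  # each m_v x (b*c)
    T_merged = np.hstack(T_parts)                      # n x (b*c)
    fold_preds = [combiner().fit(Vi, yi).predict(T_merged)
                  for Vi, yi in zip(V_merged, y_folds)]
    return np.mean(fold_preds, axis=0)

X_groups and Xt_groups are lists of per-chromosome learning and test matrices with rows aligned across subsets.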
Experimental setup
In this section, we discuss the dataset and methods used in our evaluation.
Dataset
We evaluated the proposed procedures using data from the 3000 rice genomes project (Alexandrov et al. 2015), downloaded from http://SNP-Seek.irri.org/_download.zul. For the genotype data, we used version 0.4 of the core single nucleotide polymorphism (SNP) subset of the 3000 rice genomes, which consists of 3023 samples and 996,009 markers. It is a filtered SNP set with a fraction of missing data below 20%. Using linkage disequilibrium in Plink (Purcell et al. 2007), we pruned this dataset using a window of 50 SNPs, a step size of 5, and an r² value of 0.001, where r² is the allowed correlation coefficient between the SNPs. This generated a smaller dataset with 12,286 features, representing the twelve rice chromosomes. The total proportion of missing values in this dataset is approximately 7%. We converted each SNP call for all varieties to numeric values: class 1 homozygotes are represented as 1, class 2 homozygotes as -1, and heterozygotes as 0. Missing values were imputed using column means, as it has been shown that mean imputation is sufficient in cases where less than 20% of the data for each marker is missing (Rutkoski et al. 2013).
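The LD pruning step corresponds to a Plink invocation of the form plink --bfile <genotypes> --indep-pairwise 50 5 0.001 (file names hypothetical). The numeric encoding and mean imputation can then be sketched in a few lines of Python; the column layout of the genotype table is an assumption here, the assignment of which homozygous class receives +1 is arbitrary, and the authors' actual processing was done in R.

import pandas as pd

def encode_marker(col: pd.Series) -> pd.Series:
    # Map one marker's calls, e.g. "AA"/"GG"/"AG", to {1, -1, 0}:
    # the two homozygous classes to +1/-1 and heterozygotes to 0.
    obs = col.dropna()
    is_het = obs.str[0] != obs.str[1]
    homs = sorted(obs[~is_het].unique())          # at most two classes
    mapping = dict(zip(homs, (1, -1)))
    out = obs.map(lambda g: 0 if g[0] != g[1] else mapping[g])
    return out.reindex(col.index)                 # keep NaNs for missing calls

def encode_and_impute(geno: pd.DataFrame) -> pd.DataFrame:
    num = geno.apply(encode_marker)               # samples x markers
    return num.fillna(num.mean())                 # column-mean imputation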
Twelve quantitative traits were considered: culm diameter, culm length, culm number, grain length, grain width, grain weight, days to heading, ligule length, leaf length, leaf width, panicle length, and seedling height. Only 2266 samples in the genotype data are represented in the trait data, and some of these samples have missing values for some traits. We created two datasets. In the first, we excluded samples with unavailable or missing trait data for each trait experiment; we used this in the initial evaluation of the proposed frameworks, so a variable number of samples was used in each trait experiment. In the second, we removed all samples with missing data for any trait. This dataset consists of 1865 samples, and we used it in the evaluation of the proposed frameworks and the multi-target regression approaches. We refer to these datasets as I and II respectively. The raw and processed forms of the data used in our experiments are available in the Mendeley Data Repository at http://dx.doi.org/10.17632/86ygms76pb.1.
Setup
In our evaluation of the proposed approaches we used v = 5 folds and split the dataset into learning (75%) and testing (25%) sets with random sampling. For multi-target regressor stacking (MTRS), we generated the training output meta-features using 5-fold internal cross-validation and the test set meta-features using the models built for the base case. For the ensemble of regressor chains (ERC) and the ensemble of regressor chains corrected (ERCC), we used 10 chains, and for ERCC we used 5-fold internal cross-validation to generate the training output meta-features. Predictive accuracy was calculated as the coefficient of determination (R²). All experiments were performed in R (Ihaka and Gentleman 1996). The code for the initial evaluation of the proposed frameworks is available at https://github.com/oghenejokpeme/DS2018. The code for the multi-target evaluation is available at https://github.com/oghenejokpeme/DSMLSE.
For the learners that require parameter tuning, we performed parameter selection using a grid search and cross-validation on the training data. We opted for grid search over random search (Bergstra and Bengio 2012), as the parameters which require tuning and the ranges of values we explored for these parameters were modest; details can be seen in the provided source code. We considered three sets of learners: learners that take feature groupings into account, base learners which do not take groupings into account, and combining learners.
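For instance, tuning SVR could look like the following scikit-learn sketch; the grid values and the synthetic stand-in data are hypothetical, with the actual ranges living in the authors' R source code.

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)                    # stand-in for the 75% learning split
X_train, y_train = rng.normal(size=(100, 50)), rng.normal(size=100)

grid = GridSearchCV(SVR(kernel="rbf"),
                    param_grid={"C": [1, 10, 100],
                                "gamma": [1e-4, 1e-3, 1e-2]},
                    scoring="r2", cv=5, n_jobs=-1)
grid.fit(X_train, y_train)
best_svr = grid.best_estimator_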
Group learners
In our evaluation, we considered learners which take feature groupings into account. These learners are the group least absolute selection and shrinkage operator (GLASSO) (Friedman et al. 2010), group bridge-penalized regression (GBRIDGE), and the group minimax concave penalty (GMCP) (Breheny and Huang 2009). For GLASSO, the optimal value of lambda along the regularization path was chosen using five-fold internal cross-validation. For GBRIDGE and GMCP, the Akaike Information Criterion was used, as it has been shown to produce slightly better accuracies (Ogutu and Piepho 2014).
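For context, the standard group lasso objective that GLASSO optimizes penalizes whole groups of coefficients via unsquared l2 norms,

\hat{\beta} = \arg\min_{\beta}\ \frac{1}{2}\,\lVert y - X\beta \rVert_2^2 \;+\; \lambda \sum_{g=1}^{c} \sqrt{p_g}\,\lVert \beta_g \rVert_2 ,

where \beta_g collects the coefficients of the features in group (chromosome) g and p_g is the group size; the non-differentiability of \lVert \beta_g \rVert_2 at zero is what drives entire groups of coefficients to exactly zero, matching the group-exclusion behaviour described below.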
Base learners
The base learners used are the ridge regression best linear unbiased predictor (RBLUP) (Endelman 2011), random forests (RF), gradient boosted machines (GBM) (Friedman 2001), support vector regression (SVR) (Cortes and Vapnik 1995), k nearest neighbors (KNN) (Altman 1992), and eXtreme gradient boosting (XGB) (Chen and He 2015). RBLUP is specially designed for genomic prediction and has no parameters that require tuning. For RF, the default of 1/3 of the total number of variables is considered at each split, five observations are used for each terminal node, and 1000 trees were grown for each forest. For GBM we used a shrinkage parameter of 0.1, an interaction depth of 6, a minimum of 15 observations in each node, and 1500 trees. For SVR we used a radial basis kernel, and the hyperparameters were tuned using a grid search; XGB was also tuned with a grid search. Lastly, the optimal number of neighbors, n, used in the KNN models was chosen using cross-validation, where 1 ≤ n ≤ 30.
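A hypothetical scikit-learn translation of these settings is given below; parameter semantics differ between R packages and scikit-learn (for example, gbm's interaction.depth is not identical to max_depth), so this is only an approximate mapping.

from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor

rf  = RandomForestRegressor(n_estimators=1000, max_features=1/3,
                            min_samples_leaf=5)
gbm = GradientBoostingRegressor(n_estimators=1500, learning_rate=0.1,
                                max_depth=6, min_samples_leaf=15)
knn_param_grid = {"n_neighbors": list(range(1, 31))}  # 1 <= n <= 30, tuned by CV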
Combining learners
The combining learners used are linear regression (LR), gradient descent (GD) (Kivinen and Warmuth 1997), kernel regularized least squares (KRLS) (Hainmueller and Hazlett 2014), ridge regression (RR) (Tibshirani 1996), and principal component regression (PCR) (Jolliffe 1982). The regularization parameter for RR was selected using internal cross-validation. A radial basis kernel was used with KRLS, and the bandwidth and regularization parameters were chosen using a grid search. For PCR, the number of components used was chosen using internal cross-validation.
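In the same hypothetical scikit-learn terms, the combining learners might be instantiated as follows. KRLS is closely related to kernel ridge regression, which is used as a stand-in here; the GD combiner has no direct scikit-learn equivalent and is omitted, and all parameter values are illustrative.

from sklearn.linear_model import LinearRegression, RidgeCV
from sklearn.kernel_ridge import KernelRidge
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline

lr   = LinearRegression()
rr   = RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0, 100.0])      # lambda via internal CV
krls = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3)   # both tuned by grid search
pcr  = make_pipeline(PCA(n_components=10), LinearRegression())  # components via CV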
Results
In this section we discuss the results from the evaluation of the proposed frameworks and the multi-target regression approaches.
Evaluation of frameworks
The results discussed in this section are from the evaluation of the proposed approaches using dataset I (see Sect. 4.1).
Group and base learner performance
The group and base learner performances serve as a baseline for the performance of the combining learners on the proposed frameworks. For the twelve rice traits considered, a base learner which does not take feature groupings into account outperforms all other learners on ten of the twelve traits (Table 1). In general, SVR and XGB outperform all other learners, even outperforming RBLUP, a learner designed for genomic prediction. We argue that this is the case for two reasons: (1) the traits considered are controlled by features with strong nonlinear interactions which RBLUP does not detect, and (2) SVR and XGB are better able to deal with a large number of irrelevant features. This is significant, as recent advances in genotyping and sequencing technologies mean that genomic data is now being generated on the order of a million features, most of which are irrelevant in a built model. Therefore, rather than using traditional methods like RBLUP for phenotype prediction, more sophisticated methods like XGB should also be considered if one wants to use a single learning algorithm. The best performing group learner was GLASSO, which excludes features belonging to groups with low signal by assigning a zero coefficient to all features in such groups. It outperforms all other learners on one trait, seedling height, suggesting that it is indeed the case that some traits might benefit from excluding features from certain chromosomes. We assumed a null hypothesis that there is no difference in performance between GLASSO, the best performing group-based method, and SVR and XGB, the best base learners. A sign test showed that at a significance level of 0.05 the null hypothesis can be rejected in both cases, as both comparisons (GLASSO-SVR and GLASSO-XGB) produced a p-value of 0.006.
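The sign test used throughout is simply a two-sided binomial test on the win counts over the twelve traits (ties dropped); for example, the p-value of 0.006 quoted above corresponds to one method winning on 11 of 12 traits, as the following snippet reproduces.

from scipy.stats import binomtest

result = binomtest(11, n=12, p=0.5, alternative="two-sided")
print(round(result.pvalue, 3))   # 0.006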
Combining learner performance
In our evaluation of the proposed frameworks, the six base learners outlined in Sect. 4.4 were used to generate meta-features for twelve rice traits. To evaluate the frameworks, five learning algorithms were then used as combining learners to integrate the generated meta-features. We found that in a meta-learning setting, some traits benefit when the feature groupings are ignored in the meta-feature generation and integration steps, while others benefit from having the feature groupings considered. We argue that the latter case occurs for two reasons. Firstly, each group has its own unique set of meta-features, generated by its own set of models. Therefore, noise is not introduced into these models from groups that may not be strongly associated with a phenotype. Secondly, the meta-features for a group represent the degree of association that the group has with a phenotype. Therefore, generating meta-features for each feature group in isolation before learning combining weights aids a combining learner in estimating the amount of influence each group has on a phenotype. Comparing frameworks A and B based on the performance of the combining learners showed that for LR, framework A outperforms B on eleven of the twelve traits. For GD, framework A outperforms B on nine of the twelve traits. For KRLS, framework A outperforms B on eight of the twelve traits. For RR, framework A outperforms B on ten traits, they perform equally well on one trait, and framework B outperforms A on one trait. For PCR, framework A outperforms B on nine of the twelve traits, they perform equally well on two traits, and framework B outperforms A on one trait. See Table 2 for the results. These results suggest that on a per-learner basis, framework A, in which feature groupings are ignored, is generally the better meta-learning approach. For each combining learner, we assumed a null hypothesis that there is no difference in performance between frameworks A and B. A sign test showed that at a significance level of 0.05 the null hypothesis cannot be rejected for GD, KRLS and PCR, with p-values of 0.146, 0.774, and 0.146 respectively, whereas it can be rejected for LR and RR, with p-values of 0.006 and 0.039 respectively. This suggests that the extent to which a given framework outperforms the other is learner dependent.
Evaluating the performance of the frameworks on a per-trait basis, irrespective of the combining learner, tells a different story: frameworks A and B each perform better on six traits. The results also show that no particular learner consistently performs best across trait-framework pairs. This suggests that if the proposed approaches are to be used, combining learners should be chosen based on the framework of choice and the trait one is interested in predicting. One way of making this decision might be to modify the well-known model selection procedure used to select a single model from a set of competing models. However, we acknowledge that this would be computationally expensive, given the number of models that are built in both frameworks. It is also worth noting that GLASSO, a single-model approach, outperforms both frameworks (Table 2). Therefore, one should also consider single-learner approaches.
For each trait, we compared the best performing combining learner on both frameworks to the best performing base learner. For framework A, we found that the best performing combining learner performs as well as or outperforms the best performing base learner on ten of twelve traits. For framework B, the best performing combining learner performs as well as or outperforms the best performing base learner on eight of twelve traits (see Table 3). These results show that it is not always the case that one of the meta-learning approaches outperforms a single base model. However, the best performing combining learner on at least one of the proposed meta-learning approaches outperforms the best performing single base learner on ten of the twelve traits. We assumed null hypotheses that there is no difference in performance between the best performing learner on framework A and the base case, the best performing learner on framework B and the base case, and the best performing learner on both frameworks and the base case. A sign test showed that at a significance level of 0.05 the null hypothesis can be rejected for the first and third cases but not for the second, with p-values of 0.039, 0.388 and 0.039 respectively. Therefore, we conclude that the proposed frameworks generally increase the accuracy with which plant phenotype can be predicted by leveraging the predictive power of multiple learning algorithms, in scenarios where the feature groupings present in genomic data are considered and where they are ignored. (Table caption: The best performing framework for each learner is in boldface; the overall best performing learner-framework pair is in italics; '-' marks cases where the model building failed, which to the best of our knowledge was due to multicollinearity.)
Comparison to multi-target learning
In this section, we discuss the results from evaluating the proposed frameworks and problem transformation multi-target approaches using dataset II (see Sect. 4.1). We used only SVR and XGB as the learners of choice for the experiments with the proposed frameworks and with the multi-target approaches as they were the two best performing learners in our initial evaluation (see Table 1). We used LR as the combining learner for the meta-features generated by both frameworks.
In the base case, we found that XGB performs as well as or outperforms SVR on nine of the twelve traits, which is consistent with the results in Table 1. However, one of either MTRS, ERC or ERCC outperforms the base case for both SVR and XGB on all traits (see Tables 4, 5), suggesting that even in a high-dimensional setting such as this, where approximately 12,000 features are present in the genome data, the signal from the augmented output features is not lost. We also assumed a null hypothesis that there is no difference in performance between the base case and the best performing multi-target approaches. A sign test showed that the null hypothesis can be rejected for both SVR and XGB, with a p-value of 0.0004 at a significance level of 0.05. These results demonstrate that in a multi-phenotype prediction setting, multi-target approaches should be used if one wants to optimize predictive accuracy. In comparison to the proposed frameworks, one of either framework A or B outperforms base SVR on nine of the twelve traits and base XGB on eight of the twelve traits, which is also consistent with the results of the initial evaluation (see Table 3). More interestingly, we compared the performance of frameworks A and B to that of the unweighted average predictions of SVR and XGB for MTRS, ERC and ERCC. We found that at least one of the multi-target approaches outperformed the frameworks on nine of the twelve traits (Table 6). We assumed a null hypothesis that there is no difference in performance between the proposed frameworks and the unweighted average predictions of SVR and XGB for MTRS, ERC and ERCC. At a significance level of 0.05, a sign test showed that for framework A the null hypothesis cannot be rejected for MTRS, ERC, and ERCC, with p-values of 0.146, 0.388, and 0.146 respectively. For framework B, a sign test showed that the null hypothesis can be rejected for MTRS, ERC, and ERCC, with p-values of 0.006, 0.0004, and 0.0004 respectively. We argue that these results further demonstrate the utility of the multi-target approaches and highlight the need to consider weighted approaches for averaging predictions in a multi-target setting.
Conclusion
In this paper, we investigated the prediction of rice phenotypes. We argued that because rice is the most agronomically important crop in the world, the models used by plant breeders for the selection of the parents that will produce progeny with desirable traits should be as accurate as possible. We proposed that meta-learning, which leverages the predictive power of multiple learning algorithms, could improve the accuracy with which rice and plant phenotypes in general can be predicted. We noted that the genomic datasets often used in predicting phenotype consist of features that can naturally be separated into groups by chromosome, and argued that including features from chromosomes which may not influence a trait might lead to suboptimal predictive accuracy, as it introduces noise into the built model. With this in mind, we proposed two meta-learning frameworks, one which does not consider feature groupings (framework A) and another which does (framework B). Our results show that framework A generally outperforms framework B on a per-learner level of analysis, but that they perform equally well on a per-trait level of analysis. More importantly, the results show that the best performing meta-learner on at least one of the proposed meta-learning approaches outperforms the best performing single base learner on ten of the twelve traits. Furthermore, we evaluated three problem transformation multi-target learning approaches: multi-target regressor stacking, ensemble of regressor chains, and ensemble of regressor chains corrected. We demonstrated that whether a single learner is used or the predictions made by multiple learners are combined, the multi-target learning approaches performed best.
In future work, we intend to apply the proposed procedures to other agronomically relevant crops like wheat and barley, and possibly to human population data. Furthermore, we intend to extend the proposed procedures by introducing meta-feature pruning, which aids in the selection of the meta-features that will eventually be integrated (Mendes-Moreira et al. 2012). There are several methods (Caruana et al. 2004) that can be used to perform meta-feature pruning, and we conjecture that the different techniques will perform differently on the proposed frameworks. As stated in the discussion of the considerations we made in developing the proposed frameworks (Sect. 2), we also intend to extend the proposed frameworks by introducing dynamic weighting for the integration of meta-features. It would also be interesting to apply these extensions to problem transformation multi-target approaches given a multiple-learner scenario.
| 9,361.8 | 2020-08-02T00:00:00.000 | [ "Computer Science" ] |
Bottom-Up Kinetic Chain in Drop Landing among University Athletes with Normal Dynamic Knee Valgus
The study investigated the influence of ankle strength and ankle range of motion (ROM) on knee kinematics during drop landing. Fifteen male and fifteen female university athletes with a normal range of dynamic knee valgus (DKV) (knee frontal plane projection angle: males = 3° to 8°, females = 7° to 13°) were recruited. They performed drop landings from heights of 30 cm and 45 cm with three-dimensional motion capture and analysis. Knee angles were compared at specific landing phases. Isokinetic ankle strength was tested at an angular velocity of 60°/s, while the weight-bearing lunge test was conducted to evaluate ankle ROM. For males, the strength of both the plantarflexors and dorsiflexors was associated with knee kinematics at both heights (30 cm: r = −0.50, p = 0.03; 45 cm: r = −0.45, p = 0.05) during the maximum vertical ground reaction force (MVGRF) phase. For females, ankle invertor strength and knee kinematics were associated at both the 30 cm (r = 0.53, p = 0.02) and 45 cm landing heights (r = 0.49, p = 0.03), while plantarflexor strength and knee kinematics showed a significant association during the initial contact (r = 0.70, p < 0.01) and MVGRF (r = 0.55, p = 0.02) phases at the 30 cm height only. Male and female athletes with a normal range of DKV showed a significant relationship between ankle strength and knee kinematics at specific landing phases, and these relationships varied with increased landing height.
Introduction
Dynamic knee valgus (DKV) is a mechanism of medial knee collapse due to a combination of hip internal rotation, hip adduction, knee valgus, and external rotation of the tibia during dynamic motions such as jump-landing [1]. Biomechanical factors observed from a poor technique of landing such as high impact loading, sudden decelerations, and high vertical ground reaction forces (GRFs) predispose athletes to lower limb injuries and pain [2]. Kinetic chain theory states that abnormalities of a joint may influence risks of injuries in other joints as observed in excessive DKV [3].
Fortunately, DKV is a modifiable factor of non-contact lower extremity injuries. Hence, previous studies conducted exercise interventions targeting the kinetic chain of DKV [4]. During closed chain activities, owing to the interdependence of joint motions, excessive motion at one joint may overload subsequent tissues in the kinetic chain [5,6]. There are two types of kinetic chains related to DKV: top-down (i.e., proximal origins) and bottom-up (i.e., distal origins). Regarding the top-down kinetic chain, the function of muscles and other soft tissues either at the trunk or hip joint may influence the occurrence of altered kinematic patterns at the subsequent distal joints [7]. It was shown that weakness of hip musculature was associated with greater knee valgus during single leg ballistic and single leg squat tasks [6]. Hip and trunk muscle strengthening are commonly recommended to modify lower limb kinematics such as excessive hip medial rotation and adduction during weight-bearing tasks and to treat and prevent injuries at the distal joints of the lower limbs [8].
On the other hand, in the bottom-up kinetic chain, weakness of ankle musculature and foot structure may cause a lack of control at the knee joint and thus increase risks of knee injuries [9]. Khamis et al., [9] stated that DKV is often associated with the top-down kinetic chain of lower limbs. For instance, decreased isometric strength of hip abductors, adductors, and extensors was closely correlated with increased peak valgus angle at the knee [10]. Studies on the bottom-up kinetic chain of dynamic knee valgus are limited despite some evidence that pointed out the influence of the ankle joint on subsequent medial joints. For example, tibial rotation was significantly affected by ankle and foot kinematics [9]. Additionally, knee rotation was shown to be affected by toe directions (i.e., toe-in, toe-out, and natural position) [11]. However, the study by [11] was limited to physically active females who were not screened for excessive dynamic knee valgus.
Reduced dorsiflexion range of motion (ROM) is linked to increased knee valgus excursion during landing [12] and altered landing mechanics that predisposed athletes to injury [13]. Deficits in ankle dorsiflexion ROM may occur due to the decreased extensibility of the gastrocnemius/soleus complex and restricted posterior talar glide on the tibia, thus creating DKV [14]. A significant correlation was found between ankle dorsiflexion flexibility and the peak knee abduction angle (r = 0.355, p = 0.048) during landing [15]. Moreover, individuals with greater ankle dorsiflexion ROM demonstrated smaller GRFs and greater knee-flexion displacement during landing, which may be associated with a reduced risk of anterior cruciate ligament (ACL) injury [16].
In the present study, we investigated the association between knee kinematics during drop landing and ankle strength and ROM among male and female university athletes. Previous studies by [17] and [18] did not exclude those with excessive DKV, which may influence their findings. It was shown that changes in ankle kinematics may cause excessive DKV or inward movement of the knee [19]. Furthermore, when strength was gender-matched among skilled athletes, the differences in hamstring and quadriceps activity during landing were reduced despite biomechanical differences observed across gender [20,21]. Hence, when a biomechanically homogenous group is studied, the effects of landing heights could be less visible. Therefore, we aimed for a biomechanically homogenous group by including only those with normal DKV. The normal range of DKV, which can be assessed by the two-dimensional (2D) knee frontal plane projection angle (FPPA), is 3° to 8° for males and 7° to 13° for females [19]. We hypothesized that reduced ankle dorsiflexion ROM and ankle strength may be associated with knee angles during drop landing among university athletes with normal DKV.
Materials and Methods
The protocol of this cross-sectional study was approved by the Human Research Ethics Committee of a local university (USM/JEPeM/18020138). A priori sample size calculation showed that 15 participants per gender were sufficient to yield a study power of 0.8 with an effect size of 0.5 [22]. The sample size calculation was conducted using G*Power software (v.3.1.9.2, University of Düsseldorf, Düsseldorf, Germany).
We recruited university athletes who participated regularly in sports at least three times per week, had a normal body mass index (BMI: 18.5-24.9 kg/m²), were aged between 19 and 25 years, and exhibited a normal range of DKV during a drop vertical jump (DVJ). We included those with normal BMI to reduce the influence of body weight on knee biomechanics during landing. Those who were pregnant or had any lower limb injury at the time of data collection were excluded. After obtaining their medical history and written informed consent, anthropometric measurements such as body weight, height, body fat percentage, and leg length were recorded.
A screening test was conducted based on the methods by [19] to distinguish those with and without excessive DKV. Briefly, markers were placed at the midpoint of the knee joint, the center of the ankle joint, and the anterior superior iliac spine (ASIS). Then, participants performed three trials of DVJ with a one-minute rest interval between trials. The trials were captured in the frontal plane using a digital camera (SONY, Tokyo, Japan) and were further analyzed using Silicon Coach Pro v.8 (The Tarn Group, Dunedin, New Zealand). The two-dimensional (2D) knee FPPA is the angle formed at the intersection of the line created between the ASIS and the center of the knee joint and the line between the center of the knee joint and the center of the ankle joint. The normal range of DKV is 7-13° for females and 3-8° for males [19]. The screening test took approximately 30 min to complete. Upon analysis, participants were contacted regarding their results; those with excessive DKV were excluded from further tests.
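For readers implementing the 2D FPPA measurement, the sketch below is illustrative only and not part of the original protocol: it computes the angle between the ASIS-knee and knee-ankle lines from digitized marker positions, and the coordinates shown are hypothetical. Note that it returns the magnitude of the angle only, without the valgus/varus sign.

import numpy as np

def fppa_degrees(asis, knee, ankle):
    # 2D frontal-plane projection angle at the knee, in degrees.
    # Measured between the thigh line (ASIS -> knee) and the shank line
    # (knee -> ankle); 0 degrees means the two segments are collinear.
    thigh = np.asarray(knee, float) - np.asarray(asis, float)
    shank = np.asarray(ankle, float) - np.asarray(knee, float)
    cos_a = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical digitized marker positions (x, y) in image coordinates
print(round(fppa_degrees(asis=(0.32, 0.95), knee=(0.30, 0.55), ankle=(0.27, 0.15)), 1))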
Drop Vertical Jump at Different Heights
Thirty-five reflective markers were attached bilaterally to the posterior superior iliac crest, anterior superior iliac spine, greater trochanter, medial and lateral knee, and medial and lateral malleolus based on the Plug-in-Gait marker set (Figure 1). Next, participants were required to drop off an adjustable plyometric box without an upward or forward jump action and to adopt a stable double-leg landing posture for a trial to be considered successful [18]. After a 5-minute rest interval, the test was repeated at the other height. The sequence of landing heights (i.e., 30 and 45 cm) was randomized. Three successful trials at each landing height were selected for analysis. Upon completion, participants cooled down by cycling on a bike ergometer and stretching their legs.
The trajectories of the reflective markers during these trials were sampled at 100 Hz and were identified using Qualisys Track Manager software (Qualisys, version 2.6.673, Gothenburg, Sweden). Then, the raw data of the marker coordinates were low-pass filtered using a fourth-order, zero-lag Butterworth filter with a cutoff frequency of 12 Hz by using Qualisys Track Manager software (Qualisys, version 2.6.673, Gothenburg, Sweden). The missing trajectories were pattern filled using spline estimates. Next, data were transferred to Visual 3D (version 5, C-Motion, Inc, Rockville, MD, USA) to construct a bone model and calculate the kinematic variables of the hip, knee, and ankle joint.
Weight-Bearing Lunge Test
Maximum weight-bearing ankle dorsiflexion ROM was quantified in terms of the maximum distance reached during the Weight-Bearing Lunge Test (WBLT). The test followed the procedure by Hoch et al. [17]. Briefly, the participants stood facing a wall with the tested foot at the front and parallel with a tape measure attached to the floor while the big toe touched the wall (Figure 2). The uninvolved foot was placed comfortably behind the tested foot. Next, participants lunged until their knee touched the wall while the heel remained firmly planted on the floor. This was to ensure that the foot posture did not influence the measurement. Then, they were asked to step backward in 1 cm increments until heel or knee contact could no longer be sustained during the lunge [17]. The maximum lunge distance was measured from the tip of the big toe to the wall to the nearest 0.1 cm. The WBLT was repeated for three trials for each leg, and the values were averaged for further analysis.
Isokinetic Ankle Strength Test
After at least 24 hours of rest, participants performed isokinetic ankle strength tests using a dynamometer (Biodex System 3 Pro, Shirley, NY, USA). They were seated with 90° knee flexion and a neutral position of the ankle, while keeping their back straight on a chair with a Velcro strap attached to a strain gauge placed around the lower leg and foot. The ankle axis was set on the same line as the equipment axis, and the handles were held by both hands. We followed the Biodex manual, whereby the ankle axis was determined based on the head of the talus. Ankle strength in dorsiflexion/plantarflexion and eversion/inversion motions was tested in concentric mode at angular velocity of 60°/s for three sets of five repetitions per set and 120 s rest interval between sets. The data were averaged for further analysis. Ankle strength was measured in terms of peak torque per body weight (PT/BW, %). The antagonist:agonist ratio was determined by dividing the PT/BW of the antagonist muscle group by the PT/BW of the agonist muscle group.
Statistical Analysis
Data were tested for normal distribution with the Shapiro-Wilk test, which is appropriate for small sample sizes (<50 samples) [22]. The kinematics data observed at the two landing heights were compared across three landing phases, namely, the initial contact (IC), maximum vertical ground reaction force (MVGRF), and maximum knee flexion (MKF) phases. IC was defined as the point in the trial when the vertical GRF exceeded 10 N [17]. Ankle dorsiflexion ROM and isokinetic ankle strength were compared between female and male collegiate athletes using an independent t-test. Then, the relationships between ankle dorsiflexion ROM, ankle strength, and knee kinematics during landing at different heights were determined using Pearson correlation coefficients. All statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS) (version 22.0, IBM Corp., Armonk, NY, USA). The level of significance was set at p < 0.05.
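The same pipeline can be reproduced outside SPSS; the sketch below is a minimal illustration using SciPy rather than the software named in the study, and the arrays are random placeholders, not study data.

import numpy as np
from scipy.stats import shapiro, ttest_ind, pearsonr

rng = np.random.default_rng(1)
male_rom = rng.normal(10.5, 2.0, 15)      # placeholder WBLT distances (cm)
female_rom = rng.normal(11.0, 2.0, 15)
knee_fppa = rng.normal(8.0, 2.5, 15)      # placeholder knee angles (deg)

w, p_sw = shapiro(male_rom)               # normality check
t, p_t = ttest_ind(male_rom, female_rom)  # male vs. female comparison
r, p_r = pearsonr(female_rom, knee_fppa)  # ROM vs. knee angle relationship
print(f"Shapiro-Wilk p = {p_sw:.3f}, t-test p = {p_t:.3f}, Pearson r = {r:.2f} (p = {p_r:.3f})")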
No statistically significant differences in the isokinetic strength of the evertors, dorsiflexors, and antagonist:agonist ratios were observed across gender (Table 1). Males showed statistically greater strength in invertors and plantarflexors than females. The knee angles during the three specific landing phases from heights of 30 cm and 45 cm are tabulated in Table 2. The relationships between knee kinematics and weight-bearing ankle dorsiflexion (Table 3) and isokinetic ankle strength (Table 4) are presented according to gender.
Table 3. Relationship between ankle dorsiflexion range of motion and knee angle during landing among male and female university athletes (n = 30).
Discussion
Among male athletes, significant relationships were observed between plantarflexor strength and knee kinematics at the 30 cm landing height during the maximum vGRF and maximum knee flexion (MKF) phases and at the 45 cm landing height during the maximum vGRF phase. A significant relationship was also noted between dorsiflexor strength and knee kinematics at the 30 cm landing height during the maximum vGRF and MKF phases and at the 45 cm landing height during the maximum vGRF phase only. Additionally, inverse relationships were observed between plantarflexor and dorsiflexor strength and knee kinematics during landing at both heights. These findings indicate that greater plantarflexor or dorsiflexor strength was associated with male athletes landing with a higher varus knee angle.
Regarding female athletes, a significant relationship was noted between invertor strength and knee kinematics during the MKF phase for both landing heights. This proportional relationship indicates that greater invertor strength may predispose them to land in a valgus knee position. Decreased neuromuscular control of the ankle may increase inversion angular velocity, which is possibly a key contributor to recurrent ankle injuries [23]. In addition, a significant relationship was observed between plantarflexor strength and knee kinematics at the 30 cm landing height during the IC and maximum vGRF phases. The direct relationship shows that the greater the plantarflexor strength, the lower the tendency to land in the varus position. Indeed, it was previously shown that the knee was the primary shock absorber for both genders, whereas the ankle plantarflexor muscles were the second largest contributor to energy absorption in females [24]. Moreover, knee kinematics were also significantly related to the evertor:invertor strength ratio at the 30 cm landing height during the MKF phase only and at the 45 cm landing height during all phases. An increased evertor:invertor strength ratio implies greater chances of landing in the varus position. Among athletes with chronic ankle instability (CAI) and copers, ankle eversion was frequently used as the major adaptation strategy in the initial landing phase to reduce the effect of ankle over-inversion [25].
Knee kinematics and ankle strength showed an inverse relationship in males but a proportional relationship in females. Males had greater vertical leg stiffness than females, which occurred with a lower center of mass (COM) vertical displacement per height and greater peak GRF per body weight [26]. A previous study showed that males had greater ankle joint stiffness, a lower initial plantarflexion angle, lower ankle ROM, and greater changes and peaks in ankle moment compared with females [26]. The greater ankle joint stiffness was thought to be due to a reduced ankle joint ROM during landing.
Peak vertical and posterior GRFs increase with greater vertical landing height [27]. Increased height may elevate the maximum vGRF experienced during landing, thereby increasing the risk of sustaining traumatic injuries such as ACL rupture [18]. Moreover, a previous study conducted among male athletes showed that maximum vGRF was negatively correlated with ankle plantarflexion at increasing vertical landing heights [27], which is similar to our findings. However, we are not aware of studies that excluded females with excessive DKV with which to compare our findings.
Ankle dorsiflexion ROM among female athletes showed a positive relationship with knee kinematics during all phases of landing at the 30 cm height. Similarly, Brookreson [28] observed a positive linear relationship (r = 0.75, p = 0.001) between weight-bearing dorsiflexion ROM of the dominant ankle and knee kinematics at a 40 cm landing height during the MKF phase. Greater passive ankle dorsiflexion ROM was associated with greater knee-flexion displacement and smaller GRFs during landing, which may be associated with a reduced risk of ACL injury [29]. However, that study involved only thirteen physically active male athletes who were proficient in landing and jumping techniques and free from lower limb injury, and no screening test was conducted to exclude people with excessive DKV. Additionally, male athletes are likely to demonstrate increased stiffness in the ankle, knee, and hip joints with maturation [27].
We noted that there were no statistically significant differences in ankle dorsiflexion ROM between male and female university athletes (Table 2). In a study of 107 healthy university students, the average total distance covered was 32.7 cm for males and 33.9 cm for females [28]. Our findings on total distance covered are within the range of the findings by Hankemeier & Thrasher [30], who also conducted the WBLT on the dominant leg only.
In a previous study among 128 healthy adults (55 men, 73 women), ankle dorsiflexion ROM for females was greater than for males, although the differences were not statistically significant [28]. Additionally, females showed greater ROM in lower-extremity joints than adult males [31]. The results from these two studies concur with our findings. Greater ankle mobility among females is due to the greater capacity of plantarflexors, as compared to males [31]. Greater passive ankle dorsiflexion ROM was associated with greater hip and knee flexion and lower GRFs during a jump-landing task in healthy individuals [15]. Dorsiflexion deficits may limit the ability to fully achieve a closed-packed, stable position of the ankle during dynamic activities, such as gait and jump-landing [17]. Hence, athletes and coaches should focus on improving ankle ROM in their jump-landing training to prevent injury.
Isokinetic strength was normalized to body weight and expressed as a percentage to reduce the influence of participants' body weight on results. We observed that males exerted greater peak torque/body weight (PT/BW) for ankle inversion than females (p = 0.01) (Table 1). Also, males showed higher plantarflexion PT/BW than females (p = 0.02). Males have greater muscle mass especially in lower extremities than females, which may help them to achieve higher PT/BW during the isokinetic plantarflexion test [31].
There are some limitations in the current study that should be addressed in future studies. Firstly, our participants were barefoot during trials, which may not represent the real condition of sporting activities. Bare feet are preferable because wearing shoes during trials may influence the athletes' landing strategies due to the shoes' shock absorption effect [2,32]. Secondly, although we ensured that the participants maintained their foot position during WBLT, the foot posture (i.e., pronation and supination) was not quantified. It was also shown that foot position did not influence knee kinematics during single leg squats among male athletes with normal DKV [33]. Future studies are recommended to investigate the relationship between foot structure (i.e., posture and arch) and landing mechanics. Finally, our results are limited to physically active young adults due to their greater exposure to injury risks than other members of the population.
Conclusions
Male athletes with normal DKV showed a consistent relationship between knee FPPA and plantarflexor strength at both landing heights, particularly during the maximum vGRF phase. Moreover, dorsiflexor strength was significantly associated with knee FPPA at both landing heights during the maximum vGRF phase. In female athletes with normal DKV, invertor strength was associated with knee FPPA at 30 cm during the MKF phase, while plantarflexor strength and knee FPPA showed a significant association during IC and maximum vGRF at the 30 cm landing height only. Knee FPPA was also significantly associated with the evertor:invertor strength ratio during all landing phases at 45 cm. Ankle ROM was significantly correlated with knee FPPA at the 30 cm landing height only. Overall, male and female athletes with normal DKV showed a significant relationship between ankle strength and knee FPPA at specific landing phases, and these relationships varied when the jump height increased. | 6,011.6 | 2020-06-01T00:00:00.000 | [ "Biology" ] |
DETERMINING 3D COORDINATES BASED ON THE TRACK GEOMETRY DESCRIPTION
This paper is focused on the data description of the railway infrastructure. Its aim is to present the possibilities of constructing linear elements representing tracks in 2D and their subsequent transformation into 3D using the parametric description of horizontal and vertical curves. The transformation is based on determining the spatial coordinates of the newly emerging 3D linear elements. This process is supposed to be implemented in a graphical editor environment that allows the relevant data to be transferred to the database based on the Multipurpose Railway Infrastructure Model. With the use of this data, it is possible to perform, among other things, the visualization of the railway infrastructure.
Introduction
In recent years, the data description of railway infrastructure has become more important, as many intelligent transport systems are being developed for which it is an essential input. It is desirable to address the question of how to describe the infrastructure in such a way that the description can be used for the widest possible range of target applications [1]. This is also what the Multipurpose Railway Infrastructure Model aims for.
The Multipurpose Railway Infrastructure Model is a data model reflecting some principles of the UIC RailTopoModel [2, 3], with data stored in the form of a relational database. It is gradually being developed in the Railway Laboratory at CTU in Prague, Faculty of Transport Sciences [4]; its latest version, referenced in this paper, is version 12.2.
One of the fundamental aspects of the railway infrastructure description, in terms of expressing the track geometry, is the set of spatial characteristics of the railway line, which are the basis for the location of many other infrastructure facilities. There are several different ways to express the track geometry. The Multipurpose Railway Infrastructure Model allows us to express the individual points of the centre line of the track described by coordinates as well as to describe its geometric parameters analytically, using the appropriate attributes [5,6].
Both of these approaches have their specific advantages and cases in which it is appropriate to apply them. Therefore, in order to create a consistent data description, it is necessary to design software tools that allow both of these approaches to be used so that the outputs provided are in accordance with each other.
Consistent filling of the model with data
For the purpose of filling the Multipurpose Railway Infrastructure Model database with data, a specialized editing script working in the environment of computer-aided design software is being developed.
In the following text, this script together with the computer-aided design software will be referred to as the graphical editor.
The graphical editor allows individual instances of the model classes to be visualized as graphical objects arranged into corresponding layers. These graphical objects can also be described by relevant data. Each graphical object is described by several items corresponding to the attributes of the respective class.
Within a specific graphical object, each item is expressed using a record of the structure
[table][attribute][attributeValue],
where table expresses the name of the table in which the value of the respective attribute is to be stored, attribute expresses the name of the attribute to be stored and attributeValue expresses its value. On the basis of such a description of all graphical objects belonging to layers corresponding to the Multipurpose Railway Infrastructure Model classes, the data from the computer-aided design software can be uploaded to the relational database, based on its structure.
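For illustration only (this particular record is hypothetical and not taken from the model documentation), a graphical object representing a circular-arc horizontal curve might carry a record such as

[CircularArcHC][radius][750]

indicating that the value 750 is to be stored in the radius attribute of the table corresponding to the CircularArcHC class.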
The mutual consistency of individual data items, which cannot be verified at the database level, is supposed to be ensured by the functionalities of the graphical editor. Among other things, the graphical editor provides for the creation of graphical objects representing individual data objects of the Multipurpose Railway Infrastructure Model and for describing them with the respective data items. Some of the data is entered by the graphical editor user; other data can be obtained based on the spatial aspects of these objects in the environment of the computer-aided design software. The way the data objects are graphically visualized usually depends on the class to which they belong.
In addition, the model makes it possible to assign some data objects to individual network levels. According to the RailTopoModel, a network level is a data object expressed by an instance of the LevelNetwork class. It can be described by the descriptionLevel attribute expressing the respective level of detail [2, 3].
The Multipurpose Railway Infrastructure Model additionally introduces the dimension and representation attributes. These attributes make it possible to distinguish between descriptions of the network using different spatial dimensions and whether the network is described only schematically or realistically. Of course, the way individual data objects are visualized also depends on the attribute description of the respective network level to which these data objects are assigned.
When interested in creating the track geometry data description, it is therefore necessary to pay attention to the development of such software tools of the graphical editor that allow coherent transformation across the different description methods and levels. Providing these transformations is one of the other functionalities of the editing script. Since the track geometry data provide considerably detailed information, it is appropriate to express them at a detailed level corresponding to the value micro of the descriptionLevel attribute of the LevelNetwork class.
Graphical representation of the Multipurpose Railway Infrastructure Model data is suitable either in 2D or in 3D, which can match the values xy and xyz of the dimension attribute. In order to enable the derivation of attribute values based on the spatial aspects of the visualized data objects, it is desirable to work with such network levels described by the value realistic of the representation attribute.
Linear elements and associated positions
In accordance with the RailTopoModel, the basic units of the topological network description are the so-called net elements, which can be classified into non-linear and linear elements [2, 3]. According to the Multipurpose Railway Infrastructure Model, each linear element is additionally described by its length in meters, expressed by the length attribute. The interest in describing linear elements by their length was originally promoted by the railML community in order to allow the position within a linear element to be expressed in a more practical way than by specifying the relative value of the intrinsicCoord attribute [7].
In the latest RailTopoModel version 1.4, the length attribute was also added to the NetElement class [8].
Since the information of the geometrical track description has mainly linear features and because linear elements make it possible to better express the information about the admissible routing within the network, the recommended way of expressing the network structure for the needs of describing track geometry is to use linear elements. In that case, the linear elements represent individual track sections (routes) between nodes. After all, such an approach implies the micro description level of the network.
Each linear element is represented by an instance of the LinearElement class, within RailTopoModel version 1.4 renamed to LinearNetElement. Individual net elements can be connected by positioned relations, which are instances of the PositionedRelation class. Each positioned relation connects exactly two net elements and, in the case of a linear element, it can be bound either at its beginning or at its end. Each positioned relation is also described by the navigability attribute, expressing whether it is passable and, if so, in which direction [2,3,8].
The Multipurpose Railway Infrastructure Model allows us to define any number of associated positions bound to a particular linear element. Each associated position, represented by an instance of the AssociatedPosition class, is described by its intrinsicReference and deltaPosition attributes.
The intrinsicReference attribute expresses the intrinsic coordinate within the relevant net element. In the basic concept, for a linear element, it can either take the value of 0 (at the beginning of the element) or the value of 1 (at the end of the element).
The deltaPosition attribute expresses the difference in position, measured along the net element, from the position expressed by the intrinsicReference attribute to the resulting position in meters. This attribute can also take the values of negative numbers. Therefore, the resulting position parameter p of each associated position can be calculated as

p = c · l + ∆p,

where c expresses the value of the intrinsicReference attribute, ∆p expresses the value of the deltaPosition attribute and l expresses the value of the length attribute of the linear element to which the respective associated position is bound. Since net elements can appear in different dimensions, the values of the resulting positions calculated from the attributes of mutually corresponding associated positions bound to linear elements of different dimensions are generally slightly different. This fact can be demonstrated on the case of two network levels, which will be referred to as the 2D network level and the 3D network level according to the respective dimension.
Let the linear elements expressed at the 2D network level be visualized as the perpendicular projection of the centre line of the real objects that they represent into the xy horizontal plane, and let the respective instance of the LevelNetwork class be described by the attribute values as follows:
• descriptionLevel ← micro,
• dimension ← xy,
• representation ← realistic.
Let the linear elements expressed at the 3D network level be visualized as a basically accurate representation of the centre line of the real object in three-dimensional space xyz, and let the respective instance of the LevelNetwork class be described by the attribute values as follows:
• descriptionLevel ← micro,
• dimension ← xyz,
• representation ← realistic.
When we compare the linear element expressed at the 2D network level with the linear element expressed at the 3D network level, they have different lengths (except for cases where they express a horizontal track section). This difference is also reflected in the calculation of the corresponding resulting positions on the linear elements and is caused by the gradient profile, which is not taken into account when determining resulting positions at the 2D network level.
The described ways of graphic visualization can also be reasonably applied to objects of other classes assigned to the stated network levels. The direct description of linear elements does not say anything about their shape. They can nevertheless be visualized based on the geometric entities that are localized to them. These are instances of the terminal classes of the GeometryEntity module, which is one of the specific Multipurpose Railway Infrastructure Model extensions used by many projects [5,6]. The GeometryEntity module was designed using some aspects of the railML® 3.1 data format [9], which is an exchange format based on the RailTopoModel, so that portability can be ensured in the future. Nevertheless, the module has some specific features of its own. Its structure can be seen in Figure 1.
Horizontal curves
When filling in the data description of the Multipurpose Railway Infrastructure Model using the graphical editor, it is advantageous to construct the linear elements of the micro level first in 2D using horizontal curves. Horizontal curves are instances of all the terminal classes which are specializations of the HorizontalCurve class contained in the GeometryEntity module. Together with the HorizontalCurve abstract class, the module also includes the VerticalCurve and Superelevation abstract classes, which are all specializations of the GeometryEntity abstract class, the top class of the GeometryEntity module, derived from the NetEntity abstract class.
Generally, net entities, as introduced by the RailTopoModel, are those objects representing the facilities and properties of the railway infrastructure. They can be localized to individual net elements. The RailTopoModel is so general that it does not define specific classes of net entities [2,3,8].
Although the railML® specifications do so, the railML® 3.1 data format implements the HorizontalCurve class as a common class whose instances are horizontal curves of all types [9]. Nevertheless, the Multipurpose Railway Infrastructure Model introduces a separate class for each curve type, allowing instances of these classes to be described with more specific attributes.
The currently used version of the GeometryEntity module includes the following terminal classes of horizontal curves:
• StraightHC - describing straight horizontal curves,
• CircularArcHC - describing horizontal curves of the shape of a circular arc,
• CubicParabolaHC - describing horizontal transition curves of the shape of a cubic parabola,
• ClothoidHC - describing horizontal transition curves of the shape of a clothoid.
Each of these specialized classes of horizontal curves has several attributes defined. The only attribute common to all Multipurpose Railway Infrastructure Model classes except association classes is id, inherited from the BaseObject class. This attribute has the meaning of a unique identifier across all objects of these classes. Based on its value, the records of all relational database tables belonging to one data object are joined. The attributes that are common to a significant number of named object classes, representing the formalized and user-defined naming of the relevant objects, are the name and longname attributes inherited from the NamedResource class.
The azimuth0 and deltaAzimuth attributes are the attributes common to all specialized classes inherited from the HorizontalCurve class. The azimuth0 attribute determines the azimuth at the starting point of the respective horizontal curve expressed in degrees, and it can take values from 0 to 360. The deltaAzimuth attribute expresses the difference between the azimuth at the end and the azimuth at the beginning of the respective horizontal curve. It takes the value of a positive number for right-turning curves and the value of a negative number for left-turning curves. For each instance of the StraightHC class it takes the value of 0.
In order to express the azimuth also at the horizontal curve end point, we can define the azimuth1 parameter. For each horizontal curve, this parameter can be calculated as follows:

α1 = α0 + ∆α, (4)

where α0 expresses the value of the azimuth0 attribute, ∆α expresses the value of the deltaAzimuth attribute and α1 expresses the value of the azimuth1 parameter.
In terms of the individual specialized classes of horizontal curves, the StraightHC class additionally has the horizontalLength attribute, which expresses the length of the line segment representing the straight horizontal curve in meters. It takes the value of a positive number. The CircularArcHC class has the radius attribute expressing the radius in meters instead. It takes the value of a positive number for right-turning curves and the value of a negative number for left-turning curves.
For transition curves, which are instances of the ClothoidHC and CubicParabolaHC classes, however, two radius values must be expressed, both at the beginning and at the end of the respective curve. This is provided by the radius0 and radius1 attributes. One of these points is often a point with zero curvature. For such a point, the corresponding attribute takes the value of 0, although it does not express the radius.
When constructing horizontal curves in the xy plane (where z = 0) using the graphical editor tools, we can express the starting point of each horizontal curve using the x0 and y0 coordinates and its end point using the x1 and y1 coordinates as follows:

x1 = x0 + ∆x,
y1 = y0 + ∆y.

The ∆x and ∆y values can be calculated for each horizontal curve based on knowledge of the class of which it is an instance and the set of values of the following attributes: azimuth0, deltaAzimuth and the specific attributes of the individual specialized classes of horizontal curves. Based on knowledge of the horizontal curve specialized class and the attribute values, the horizontal length of the respective curve can also be calculated.
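As an illustration of this calculation, the following sketch is not part of the described editing script; it assumes azimuths are measured clockwise from the +y axis, covers only the straight and circular-arc cases, and expects |∆α| below 180°. Transition curves (clothoid, cubic parabola) would require numerical integration of the curvature and are omitted.

import math

def horizontal_curve_endpoint(x0, y0, azimuth0, delta_azimuth,
                              horizontal_length=None, radius=None):
    # Returns (x1, y1) for a StraightHC (radius is None) or a CircularArcHC.
    a0 = math.radians(azimuth0)
    da = math.radians(delta_azimuth)
    if radius is None:                      # straight horizontal curve
        dx = horizontal_length * math.sin(a0)
        dy = horizontal_length * math.cos(a0)
    else:                                   # circular arc
        chord = 2.0 * abs(radius) * math.sin(abs(da) / 2.0)
        chord_azimuth = a0 + da / 2.0       # chord bisects the tangent azimuths
        dx = chord * math.sin(chord_azimuth)
        dy = chord * math.cos(chord_azimuth)
    return x0 + dx, y0 + dy

# Example: a right-turning arc of radius 500 m turning through 30 degrees
print(horizontal_curve_endpoint(0.0, 0.0, 45.0, 30.0, radius=500.0))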
Creating linear elements in 2D using horizontal curves
If we choose a specialized horizontal curve class and enter the x0 and y0 coordinates (for example by selecting a point of the xy plane) and the values of the relevant attributes in the graphical editor, we are able to plot a curve representing one instance of the given specialization of the HorizontalCurve class. After plotting the horizontal curve with the specified attribute values in the base position (which can start from the origin of the coordinate system at an azimuth of 90°, i.e., not yet taking into account the value of the azimuth0 attribute), it is necessary to move it to the determined starting point with the x0 and y0 coordinates and rotate it so that the azimuth at its beginning matches the value of the azimuth0 attribute. In that case, the x1 and y1 coordinates can also be calculated by means of the graphical editor. If we declare the x1 and y1 coordinates of the current horizontal curve to be the x0 and y0 coordinates of the consecutive horizontal curve, it is then possible to construct the consecutive horizontal curve in the same way, whereby the value of the azimuth0 attribute of the newly constructed horizontal curve should be equal to the azimuth1 parameter of the already constructed horizontal curve. This procedure can be used to plot the graphical representation of the entire linear element to which these horizontal curves belong. In that case, the mentioned steps must be repeated until all the horizontal curves belonging to the linear element have been constructed. Although all the horizontal curves are considered net entities connected to a net element, the graphical representation of the linear element created using the graphical editor at the 2D network level is based on the spatial aspects of the horizontal curves. While net entities of all other classes are localized to an already existing element along with creating their graphical representation, horizontal curves are, exceptionally, supposed to be inserted (not yet located using an associated location) in advance of the linear element itself. The relevant 2D linear element is then created by enclosing selected consecutive horizontal curves. Horizontal curves that can be enclosed into several linear elements can be seen in Figure 2.
In order for a linear element to be enclosed, several conditions must be met. If the linear element is to be created from n horizontal curves, the (k + 1)th horizontal curve must start at the point where the kth curve ends, and the azimuth at the beginning of the (k + 1)th curve must be equal to the azimuth at the end of the kth curve, where k can take the value of a natural number from 1 to n − 1. When plotting subsequent horizontal curves in the above-mentioned manner, these conditions are already ensured.
The length attribute of the linear element is also calculated based on the particular horizontal curves used to enclose it. If we denote the horizontal length of the kth horizontal curve out of the total number of n horizontal curves forming the linear element as sk, we can express the linear element length l as follows:

l = s1 + s2 + … + sn.

These horizontal curves are then retrospectively located to the linear element which was created on their spatial basis. Each horizontal curve is to be located to the linear element using a line associated location that uses individual associated sections. Each associated section is defined by two associated positions bound to the same linear element. In order to create the associated sections required to create the associated locations intended to locate horizontal curves, individual associated positions at the boundaries and interfaces of the individual horizontal curves must be created.
For the stated purpose, we need to define n + 1 associated positions. Since the associated positions are instances of the AssociatedPosition class, it is necessary to set the values of their intrinsicReference and deltaPosition attributes as well. Although this can be done in various ways, the following one can be recommended:

ck = 0 for k from 1 to n,
∆p1 = 0, (9)
∆pj = s1 + s2 + … + s(j−1),
c(n+1) = 1, ∆p(n+1) = 0,

where j takes the value of a natural number from 2 to n, ck expresses the value of the intrinsicReference attribute of the kth created instance of the AssociatedPosition class and ∆pk expresses the value of the deltaPosition attribute of the kth created instance of the AssociatedPosition class. Within the Multipurpose Railway Infrastructure Model, each associated position can be linked to a geo-point. Each geo-point can have its coordinates assigned within each defined coordinate system. In the case of geometric or geographical coordinates, this is carried out using the GeoPointGeoCoordinate association class, while the respective coordinates are expressed by the x, y and z attributes.
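A minimal sketch of this bookkeeping follows; the helper name and the convention of anchoring all intermediate positions to the element start are illustrative, not prescribed by the model.

def associated_position_attributes(segment_lengths):
    # Returns (intrinsicReference, deltaPosition) pairs for the n + 1 associated
    # positions at the boundaries and interfaces of n consecutive horizontal curves.
    positions = [(0, 0.0)]                  # beginning of the linear element
    cumulative = 0.0
    for s in segment_lengths[:-1]:          # interfaces between consecutive curves
        cumulative += s
        positions.append((0, cumulative))
    positions.append((1, 0.0))              # end of the linear element
    return positions

# Example: three horizontal curves of lengths 120 m, 85 m and 240 m
print(associated_position_attributes([120.0, 85.0, 240.0]))
# [(0, 0.0), (0, 120.0), (0, 205.0), (1, 0.0)]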
Taking into account the coordinate system of the graphical editor, we can fill the x and y attributes related to the individual geo-points connected to the associated positions created in order to locate horizontal curves with the x0 and y0 (or x1 and y1) values of the respective horizontal curves in meters. Since we are describing a linear element in 2D, we are supposed to set the value of each z attribute to 0.
Vertical curves
After a linear element has been successfully created, other net entities can be located to it. This usually brings with it the creation of new associated positions and associated sections used to define their associated locations. In some cases, existing associated features can also be used. We can also calculate the coordinate values with which the attributes of the GeoPointGeoCoordinate class instances are to be filled when creating new geo-points based on the new associated positions. The means of the graphical editor can also be used to fulfill this task.
In terms of track geometry, the GeometryEntity extension module of the Multipurpose Railway Infrastructure Model allows us to describe selected types of vertical curves and separable superelevation sections.
The currently used version of the GeometryEntity module includes the following terminal classes of vertical curves:
• StraightVC - describing straight vertical curves, i.e., sections of constant slope,
• ParabolaHC - describing vertical curves in the shape of a parabola.
Both of these specialized classes of vertical curves have several attributes defined. The common ones are again the id, name and longname attributes and, in addition, the attributes of the VerticalCurve class, which are the elevation0, deltaElevation and horizontalLength attributes. The elevation0 attribute determines the elevation at the starting point of the respective vertical curve, expressed in meters above the reference level (e.g., above sea level), and it can take the value of a real number. The deltaElevation attribute expresses the difference between the elevation at the end and the elevation at the beginning of the respective vertical curve.
In order to express the elevation also at the vertical curve end point, we can define the elevation1 parameter. For each vertical curve, this parameter is calculated as follows:

z1 = z0 + ∆z,

where z0 expresses the value of the elevation0 attribute, ∆z expresses the value of the deltaElevation attribute and z1 expresses the value of the elevation1 parameter.
The horizontalLength attribute determines the length of the perpendicular projection of the respective vertical curve onto the horizontal xy plane in meters. It must have the same value as the difference between the resulting positions of the associated positions defining the associated section intended to locate the respective vertical curve to the corresponding 2D linear element.
In terms of the individual specialized classes of vertical curves, the StraightVC class has no additional attributes, while the ParabolaHC class additionally has the parabolaVertexRadius attribute defined, expressing the radius of the respective parabola in meters. It takes the value of a positive number for sag roundings and the value of a negative number for crest roundings.
Separable superelevation sections
The currently used version of the GeometryEntity module includes the following terminal classes of separable superelevation sections:
• ConstantSuperelevation - describing sections of constant superelevation,
• LinearSuperelevationRamp - describing linear superelevation ramps.
Both of these specialized classes of separable superelevation sections have several attributes defined. The common ones are again the id, name and longname attributes and, in addition, the attribute of the Superelevation class, which is the anchoredAxisReference attribute. The anchoredAxisReference attribute determines which axis remains at its original height even after the superelevation is constructed. If it is the track axis, the attribute takes the value of 0. If it is the left rail axis, the attribute takes the value of −1. If it is the right rail axis, the attribute takes the value of 1.
In terms of the individual specialized classes of separable superelevation sections, the ConstantSuperelevation class additionally has the superelevation attribute, expressing the height difference between the track rails in millimeters. It takes the value of a positive number if the left rail is higher, the value of a negative number if the right rail is higher and the value of 0 if both rails are at the same height level.
For superelevation ramps, i.e. instances of the LinearSuperelevationRamp class, two superelevation values must be expressed: one at the beginning and one at the end of the respective separable section. These are provided by the superelevation0 and superelevation1 attributes.
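As an illustration only, the following Python sketch shows how the rail height offsets implied by a superelevation value and the anchoredAxisReference attribute could be evaluated, together with the linear interpolation along a LinearSuperelevationRamp. The function names and the symmetric split about the track axis are assumptions of this sketch, not part of the model specification:

def rail_height_offsets(superelevation_mm, anchored_axis_reference):
    """Return (left_offset_mm, right_offset_mm) relative to the original height.

    superelevation_mm > 0 means the left rail is higher (ConstantSuperelevation
    convention); anchored_axis_reference: 0 = track axis, -1 = left rail,
    1 = right rail stays at its original height.
    """
    u = superelevation_mm
    if anchored_axis_reference == 0:      # track axis anchored: rails move symmetrically
        return (u / 2.0, -u / 2.0)
    elif anchored_axis_reference == -1:   # left rail anchored
        return (0.0, -u)
    elif anchored_axis_reference == 1:    # right rail anchored
        return (u, 0.0)
    raise ValueError("anchoredAxisReference must be -1, 0 or 1")

def ramp_superelevation(u0_mm, u1_mm, p, p0, p1):
    """Linearly interpolate the superelevation along a LinearSuperelevationRamp
    between its superelevation0 (u0_mm) and superelevation1 (u1_mm) values."""
    t = (p - p0) / (p1 - p0)
    return u0_mm + t * (u1_mm - u0_mm)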
Transformation of 2D linear elements to 3D
A 2D linear element that is coherently, completely and unambiguously described by vertical curves can be transformed into the corresponding 3D linear element in the graphical editor. This involves creating a new graphical representation of the newly emerging linear element. The default assumption for doing this is the ability to express the x and y coordinates and the corresponding resulting position p at each point of the 2D linear element graphical representation (we can imagine an instance of the AssociatedPosition class being created at each of these points). Since the element is already plotted in the graphical editor, which has the tools to obtain these values, this assumption can be considered fulfilled.
In order to plot the corresponding 3D linear element, it is necessary to calculate the z coordinate belonging to the individual points of the 2D linear element. It is advisable to proceed according to the associated sections to which the individual vertical curves are located. The determination of the z coordinate is carried out depending on the class of the respective vertical curve and the resulting position at the relevant point. It is based on calculations related to the design of rail transport structures [10, 11].
For straight vertical curves, the z coordinate of each point within the associated location of the respective vertical curve can be calculated from its resulting position within the source 2D linear element simply by interpolating between the stated values of the elevation0 attribute and the elevation1 parameter of the respective vertical curve in the following manner:

z_p = z_0 + (z_1 − z_0) · (p_p − p_0) / (p_1 − p_0),

where p_p expresses the resulting position of the given point, p_0 represents the resulting position at the beginning of the vertical curve, p_1 expresses the resulting position at the end of the vertical curve, z_0 expresses the value of the elevation0 attribute, z_1 expresses the value of the elevation1 parameter and z_p expresses the z coordinate of the given point.
For parabolic vertical curves, the calculation of the z coordinate of each point within the associated location of the respective vertical curve additionally requires knowledge of the parabolaVertexRadius attribute value, below referred to as r_v. First of all, it is advisable to calculate the values of the resulting position p_v and the elevation z_v at the vertex of the respective parabola:

p_v = (p_0 + p_1) / 2 − r_v · (z_1 − z_0) / (p_1 − p_0),
z_v = z_0 − (p_0 − p_v)² / (2 r_v).

With the use of these auxiliary values, the final calculation can already be performed:

z_p = z_v + (p_p − p_v)² / (2 r_v).

Once this procedure is done for all points of all vertical curves located to the 2D linear element, it is possible to plot the corresponding 3D linear element. How the result of the transformation looks for several linear elements can be seen in Figure 3.
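A minimal Python sketch of the two z-coordinate calculations above, for straight and parabolic vertical curves, assuming the vertex-form reconstruction of the parabola given in the text; the function names are illustrative only:

def z_straight(p, p0, p1, z0, z1):
    """z on a StraightVC: linear interpolation along the resulting position."""
    return z0 + (z1 - z0) * (p - p0) / (p1 - p0)

def z_parabola(p, p0, p1, z0, z1, rv):
    """z on a parabolic vertical curve with vertex radius rv
    (rv > 0 for sag roundings, rv < 0 for crest roundings)."""
    pv = 0.5 * (p0 + p1) - rv * (z1 - z0) / (p1 - p0)  # vertex resulting position
    zv = z0 - (p0 - pv) ** 2 / (2.0 * rv)              # vertex elevation
    return zv + (p - pv) ** 2 / (2.0 * rv)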
The transformation may also include creating 3D associated points bound to the 3D linear element based on 2D points bound to the corresponding 2D element. As individual associated positions may also correspond to individual geo-points with assigned geo-coordinates of a defined geo-positioning system, this transformation may further include the creation of matching geo-points in 3D. If a suitable geo-coordinate system is used, this basically means preserving the values of the x and y attributes and changing the z attribute value of the original 2D geo-point from 0 to the calculated value when assigning new geo-coordinates to the corresponding geo-points in 3D.
Visualization
The presented method can also be used to visualize the infrastructure data related to track geometry directly on the basis of records in the database of the Multipurpose Railway Infrastructure Model. This procedure is nevertheless more complex, as it also includes the construction of the curve representing the 2D linear element. This can, indeed, be achieved using tools similar to those used to create the horizontal curves in the graphical editor environment.
The construction of the 2D linear element curve assumes that the linear element is coherently, completely and unambiguously described by horizontal curves. If this data was obtained from the graphical editor, where the 2D linear element was created by enclosing the inserted horizontal curves, this condition is already fulfilled. It only must be ensured that the resulting data description was not subsequently improperly tampered with.
At the beginning of the 2D curve construction, it is essential to determine the coordinates of the starting point. For this purpose, it is appropriate to find the geo-point assigned to the associated position with the attribute values intrinsicCoordinate = 0 and deltaPosition = 0 that is bound to the relevant 2D linear element. Its coordinates, expressed in the appropriate geo-coordinate system, determine the starting point.
The first horizontal curve, whose associated location uses the stated associated position, is supposed to be plotted from the starting point. The subsequent procedure is similar to the one carried out when working in the graphical editor, only there is no need to enter individual attribute values, as these are loaded from the database. An example of the visualization of a linear element representing a track based on data from the Multipurpose Railway Infrastructure Model database is shown in Figure 4.
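A minimal sketch of the starting-point lookup described above, assuming a hypothetical in-memory representation of the database records (the attribute and collection names below are illustrative, not part of the Multipurpose Railway Infrastructure Model API):

def find_starting_geo_point(linear_element, geo_points):
    """Find the geo-point bound to the associated position with
    intrinsicCoordinate == 0 and deltaPosition == 0 on the given element."""
    for gp in geo_points:
        ap = gp.associated_position
        if (ap.linear_element is linear_element
                and ap.intrinsic_coordinate == 0
                and ap.delta_position == 0):
            return gp
    raise LookupError("starting geo-point not found")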
Net entities of other classes can also be used for visualization purposes. E.g., in terms of track geometry, these are the above-mentioned separable superelevation sections.
Conclusions
This paper introduced the possibilities of creating a realistic graphic representation of linear elements representing tracks in 2D and 3D. The related procedures and mathematical operations use data compatible with the structure of the Multipurpose Railway Infrastructure Model and the GeometryEntity module, its extension module focused on the track geometry description. Their implementation is supposed to be carried out in the graphical editor, which also serves to fill the database of the Multipurpose Railway Infrastructure Model with consistent data. The way of describing horizontal curves, vertical curves and separable superelevation sections using the classes of the GeometryEntity module was presented throughout.
The 2D linear element formation is based on the insertion of individual horizontal curves intended to be enclosed into it. The 3D linear element creation presupposes the transformation of the 2D linear element into 3D. This is based on the calculation of the z coordinates of the individual points of the 2D linear element graphical representation. The method of the z coordinate calculation within a given associated section used to locate any of the vertical curves of the linear element varies based on the class of the respective vertical curve. The resulting data description stored in the Multipurpose Railway Infrastructure Model database can be graphically presented with the use of dedicated visualization tools. Further development of the described GeometryEntity module could include the introduction of other types of transition curves and superelevation ramps, as well as the possibilities of expressing different values of track gauge and track gauge widening.

Figure 1. The GeometryEntity module as a specific extension of the Multipurpose Railway Infrastructure Model.

Figure 2. Graphically expressed horizontal curves of a station throat area to be enclosed into 2D linear net elements.

Figure 3. Graphically expressed linear elements representing line and station tracks in 2D and in 3D.

Figure 4. Visualization of the Multipurpose Railway Infrastructure Model data in software created by Martin Němec. | 7,256.2 | 2023-11-22T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Non-Linear Numerical Analysis of Earthquake-Induced Deformation of Earth-Fill Dams
Rayleigh damping is used to promote the level of hysteretic damping during dynamic analysis. Masing rules are implemented into the constitutive model to precisely describe the non-linear soil response under general cyclic loading. The numerical model is then calibrated using centrifuge test data as well as field data. The field data are obtained from real measurements of the Long Valley earth-fill dam subjected to the 1980 Mammoth Lake earthquake. The results of the dynamic analysis obtained in this study are compared with the real measurements of the Long Valley dam in terms of accelerations computed at the crest of the dam in both the time and frequency domains. As confirmed by the validation analyses, the proposed numerical model can properly reproduce the overall seismic behaviour of earth-fill dams, both qualitatively and quantitatively, under earthquake loading conditions. After validation, the effects of dam height, real earthquake loading, soil behaviour and strength of shell materials on the seismic response of earth-fill dams are evaluated through a comprehensive parametric study. This book sheds light on recent advances in Geotechnical Earthquake Engineering with special emphasis on soil liquefaction, soil-structure interaction, seismic safety of dams and underground monuments, mitigation strategies against landslide and fire whirlwind resulting from earthquakes, and vibration of a layered rotating plant and Bryan's effect. The book contains sixteen chapters covering several interesting research topics written by researchers and experts from several countries. The research reported in this book is useful to graduate students and researchers working in the fields of structural and earthquake engineering. The book will also be of considerable help to civil engineers working on the construction and repair of engineering structures, such as buildings, roads, dams and monuments.
Evaluating the seismic response of earth-fill dams demands sophistication in terms of proper problem formulation, characterization of material properties and modelling of soil stress-strain behaviour. Dynamic analysis methods based on numerical simulation techniques allow a comprehensive analysis of earth-fill dam responses to dynamic loading. The development of geotechnical computation and numerical modelling offers interesting facilities for dam response analysis, considering complex issues such as soil non-linearity, evolution of pore water pressures and real earthquake records. In this regard, Prevost et al. (1985) presented 2D and 3D non-linear dynamic finite element (FE) analyses of an earth-fill dam, based on non-linear hysteretic analysis using multi-surface plasticity theory. Lacy & Prevost (1987) proposed a general and efficient numerical procedure for analyzing the seismic response of earth-fill dams; in their procedure, the dams were considered as non-linear two-phase systems, and appropriate coupled dynamic field equations for the response of the two-phase soil system were outlined. Abouseeda & Dakoulas (1998) studied the non-linear seismic behaviour of earth-fill dam-foundation interaction using boundary element (BE) and finite element (FE) methods. Chen & Harichandran (2001) studied the stochastic response of the Santa Felicia earth-fill dam, in southern California, to spatially varying earthquake ground motion (SVEGM), using an SVEGM model in which the propagation of seismic waves is taken into account. Cascone & Rampello (2003) investigated the seismic stability of an earth-fill dam using decoupled displacement analysis. Ming & Li (2003) conducted a fully coupled analysis of the failure and remediation of the Lower San Fernando Dam, using a critical state model incorporating the concept of state-dependent dilatancy to describe the soil behaviour over the full loading range. Adalier & Sharp (2004) studied the seismic behaviour and remediation of an embankment on a liquefiable foundation. Papalou & Bielak (2004) studied the non-linear seismic response of earth-fill dams with canyon interaction; in their FE-based method, the dam was idealized as a shear beam and the surrounding medium as a halfspace, and the dam's non-linearity was considered using multi-yield surface plasticity theory. Ebrahimian & Vafaeian (2005) considered the seismic response of earth-fill dams during earthquakes using 2D fully non-linear dynamic analysis; they used the finite difference (FD) method, adopted the Mohr-Coulomb elastic-perfectly plastic constitutive model to describe the stress-strain relation of the soil, and focused on the seismic behaviour of very high earth-fill dams. Wang et al. (2006) presented dynamic analyses in which a non-linear, effective-stress-based soil model was employed; they used a bounding surface hypoplasticity model for sand and implemented it into a 2D finite difference (FD) program. The advantages of their non-linear approach, applied to a rock-fill dam, were illustrated by comparing the obtained results with those of the equivalent linear approach, and the model's capability was demonstrated by evaluating the seismic performance of an earth-fill dam. Transient dynamic time history FE simulations have also been carried out to investigate the performance of earth-fill dams under seismic excitation and to study different failure modes of earth-fill dams as the earthquake aftermath. Sica et al. (2008) studied the effect of loading history on the seismic response of earth-fill dams.
They considered the static and seismic behaviour of a real case history using a coupled dynamic approach, solved numerically by the FE method. Rampello et al. (2009) studied the response of an earth-fill dam to seismic loading using displacement-based analysis and FE effective-stress dynamic analysis. The FE analysis was carried out using a constitutive model capable of reproducing soil non-linearity and calibrated against laboratory measurements. They also investigated the effects of the assumed input motion and bedrock depth on the seismic response of the earth-fill dam. Ebrahimian (2009) presented a numerical model of the seismic behaviour of an earth-fill dam resting on a liquefiable foundation. The numerical simulation was carried out using an effective-stress-based, fully coupled non-linear dynamic analysis, in which the Finn-Byrne model with extended Masing rules was used to model pore water pressure generation in the liquefied soils. Ebrahimian (2011) investigated the dynamic behaviour of earth dams using a fully non-linear dynamic finite difference analysis, identifying the effects of input motion characteristics and dam reservoir condition on the dynamic response of earth dams. For this purpose, three real earthquake records with different levels and PGAs were used as the input motions.
In many parts of the world, the repetition of medium-strong intensity earthquake ground motions at brief intervals of time has been observed. The design philosophies for dams in seismic regions are based on multi-level design approaches, which take into account more than a single damageability limit state. According to these approaches, a sequence of seismic actions may produce important consequences on the dam safety. In fact, dams have been among the first structures that have been designed systematically against different earthquake levels. Since 1989, the ICOLD guidelines have introduced several levels of seismic loading, namely the Operating Basis Earthquake (OBE), Maximum Credible Earthquake (MCE), Maximum Design Earthquake (MDE) and Safety Evaluation Earthquake (SEE). However, the terms MDE or SEE are used as substitutes for the MCE. In order to analyze the behaviour of dams for specified levels of seismic hazard, several requirements should be considered. The seismic input and performance levels associated with serviceability, damage control, and collapse prevention are also defined. A thorough review about the different earthquake levels for dam design has been given in Wieland (2008). Amadio et al. (2003) analyzed the effects of repeated earthquake ground motions on the response of single-degree-of-freedom systems (SDOF) with non-linear behaviour. Accordingly, a comparison study was performed to investigate the effect of a single seismic event on the originally non-damaged system for different hysteretic models in terms of pseudo-acceleration response spectra and damage parameters. They showed that the elastic-perfect plastic system is the most vulnerable one under repeated earthquake ground motions. Moustafa & Takewaki (2010) modelled ground motions of multiple sequences that produce the maximum damage in the structure. It was shown that critical repeated acceleration sequences produce larger structural damage compared to single critical earthquakes. Afterwards, Moustafa (2011) developed a new framework to model the design earthquake loads for inelastic structures. New measures of the structure performance that were based on energy concepts and damage indices were introduced in his paper.
Concerning the seismic-resistant design of dams, several international standards have been developed by scientific communities in the past 25 years. However, only a few countries have their own guidelines and regulations for the seismic design of dams. Therefore, the ICOLD Bulletins and the local seismic building codes (e.g., Eurocode 8) are used as references. In brief, other well-known international codes are USCOLD (United States Committee on Large Dams), US Army Corps of Engineers (USACE), ANCOLD (Australian National Committee on Large Dams), the IITK-GSDMA Guidelines for Seismic Design of Earth Dams and Embankments and the Canadian Dam Association Guidelines for Dam Safety.
In this chapter, a numerical study of the seismic behaviour of earth-fill dams overlying bedrock and subjected to real earthquake records is presented. For this purpose, fully non-linear dynamic finite difference (FD) analysis is employed, incorporating a simple elastic-perfectly plastic constitutive model and Rayleigh damping; the former is used to describe the stress-strain response of the soil and the latter to increase the hysteretic damping level. The effect of non-linear soil behaviour is thus considered in the analysis from the very beginning of earthquake loading. In order to precisely describe the soil response under general cyclic loading, Masing rules (Masing, 1926) are implemented into the constitutive model, so that soil stiffness and hysteretic damping change with loading history. Firstly, the procedures of calibrating the constructed numerical models with centrifuge test data as well as a real case history are presented and explained, and some important aspects of model calibration are discussed. The Long Valley earth-fill dam, subjected to the 1980 Mammoth Lake earthquake, is analyzed as the real case history, and the obtained numerical results are compared with the real ones measured at the site in both the time and frequency domains. The computed values show relatively good agreement with the measured ones. It is shown that the Masing rules, combined with the simple elastic-plastic model, offer reasonable numerical predictions. A comprehensive parametric study is also carried out to identify the effects of dam height, input motion characteristics, soil behaviour and strength of shell materials on the seismic response of earth-fill dams. It is demonstrated that the fundamental aspects of the seismic behaviour of earth-fill dams can be accurately captured by the current numerical procedure. It should be mentioned that this study does not consider fluid-skeleton interaction, which may have significant effects on the seismic response of earth-fill dams.
Numerical modeling procedure
Here, the numerical analysis is conducted using the FLAC program, based on a continuum finite difference discretization applying the Lagrangian approach (Itasca, 2004). Every derivative in the set of governing equations is directly replaced by an algebraic expression written in terms of field variables (e.g., stress or displacement) at discrete points in space. Regarding dynamic analysis, an explicit finite difference scheme is applied to solve the full equation of motion using the lumped grid point masses derived from the real density of the surrounding zone. The calculation sequence first invokes the equations of motion for deriving new velocities and displacements from stresses and forces; then, strain rates are derived from velocities, and new stresses from strain rates. Every cycle around the loop corresponds to one time step. Each box updates all grid variables from known values which are fixed over the time step being executed (Fig. 1). The equation of motion, in its simplest form, relates the acceleration (du̇/dt, with u̇ the velocity) of a mass (m) to the applied force (F), which may vary with time. Newton's law of motion for the mass-spring system is

m du̇/dt = F. (1)

In a continuous solid body, Eq. (1) is generalized as follows:

ρ ∂u̇_i/∂t = ∂σ_ij/∂x_j + ρ g_i, (2)

where ρ = mass density; t = time; x_j = components of the coordinate vector; g_i = components of gravitational acceleration (body forces); σ_ij = components of the stress tensor; i = components in a Cartesian coordinate frame.
For the problem analysis, the strain rate tensor and the rotation rate tensor are calculated from the velocity gradients by the following equations:

ė_ij = (1/2) (∂u̇_i/∂x_j + ∂u̇_j/∂x_i),
ω̇_ij = (1/2) (∂u̇_i/∂x_j − ∂u̇_j/∂x_i),

where ė_ij = components of the strain rate; ω̇_ij = components of the rotation rate; u̇_i = components of the velocity.
The specific mechanical relationship is used in order to obtain the stress tensor, as below:

σ_ij := M(σ_ij, ė_ij, κ),

where M = specific rule of behaviour; κ = history parameters (based on the specific rules, which may or may not exist).
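To make the explicit calculation cycle concrete, the following minimal Python sketch advances a 1D elastic shear column by one explicit time step, mirroring the loop described above: velocities and displacements from stresses and forces, then strain rates from velocities, then new stresses from strain rates. It is an illustrative toy model, not the FLAC implementation; the grid layout, material values and boundary handling are assumptions:

import numpy as np

# toy 1D shear column: n zones, n + 1 grid points
n, dz, dt = 20, 1.0, 1e-4
rho, G = 2000.0, 8.0e7           # density (kg/m^3), shear modulus (Pa)
v = np.zeros(n + 1)              # grid-point velocities
u = np.zeros(n + 1)              # grid-point displacements
tau = np.zeros(n)                # zone shear stresses

def step(v, u, tau, base_accel):
    # 1) equations of motion: unbalanced force -> new velocities/displacements
    f = np.zeros_like(v)
    f[1:-1] = (tau[1:] - tau[:-1]) / dz   # internal stress gradient at interior points
    m = rho * dz                          # lumped grid-point mass (unit cross-section)
    v[1:-1] += dt * f[1:-1] / m
    v[0] += dt * base_accel               # prescribed base excitation
    v[-1] = v[-2]                         # crude free-surface condition
    u += dt * v
    # 2) strain rates from velocities
    gamma_dot = (v[1:] - v[:-1]) / dz
    # 3) new stresses from strain rates (elastic constitutive rule M)
    tau += dt * G * gamma_dot
    return v, u, tau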
The problem selected here is a simplified representation of a typical earth-fill dam geometry. The dam section is a symmetric zoned section with a central clay core resting on bedrock.
Constitutive model
The Mohr-Coulomb constitutive relation is used to model the behaviour of the soil. The failure envelope for this model corresponds to a Mohr-Coulomb criterion (shear yield function) with tension cutoff (tensile yield function). The stress-strain relationship is linear elastic-perfectly plastic: linear behaviour is defined by the elastic shear and bulk moduli, while plastic behaviour is determined by the angle of internal friction and the cohesion of the soil. The shear modulus of the sandy shell materials is calculated from a Hardin-type empirical formula of the form (Kokusho & Esashi, 1986)

G_max = A (B − e)² / (1 + e) · (σ'_m)^n,

where G_max = maximum (small strain) shear modulus in kPa; e = void ratio; σ'_m = mean effective confining stress in kPa; and A, B and n are the empirical constants given in the cited reference. Poisson's ratio is taken as 0.3 for the shell materials.
The shear modulus of the clayey core materials is calculated by an analogous empirical formula (Hardin & Black, 1968), with the empirical constants taken from the cited reference. Poisson's ratio for the core materials is taken as 0.45.
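A short sketch of the Hardin-type evaluation in Python; the constants A, B and n are placeholders, and the actual published values must be taken from Kokusho & Esashi (1986) or Hardin & Black (1968):

def g_max_hardin(e, sigma_m_kpa, A, B, n):
    """Hardin-type small-strain shear modulus (kPa).
    A, B, n are empirical constants; the values used below are
    made up for illustration, NOT the published ones."""
    return A * (B - e) ** 2 / (1.0 + e) * sigma_m_kpa ** n

# illustrative call with assumed constants and state variables:
G = g_max_hardin(e=0.6, sigma_m_kpa=100.0, A=8400.0, B=2.17, n=0.5)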
Here, the basic elastic-perfectly plastic model is modified in order to better fit the curves of shear modulus and damping ratio derived from the experimental data. This modified model can predict the seismic behaviour and the associated deformations of earth-fill dams. Masing behaviour is implemented into FLAC via a FISH subroutine (Itasca, 2004) in order to represent more accurately the non-linear stress-strain behaviour of soil, following the actual stress-strain path during cyclic loading. The Masing model consists of a backbone curve and several rules that describe the unload-reload behaviour of the soil and the cyclic modulus degradation. The backbone curve can be constructed from the modulus reduction curves coupled with the small-strain modulus (G_max). The unload-reload rules can similarly be formulated to reproduce the hysteretic damping values expected from the standard curves of damping ratio versus shear strain (e.g., Seed et al., 1986; Vucetic & Dobry, 1991). These formulations are described later.
In this study, shear modulus and damping ratio curves, proposed by Seed et al. (1986) for sandy soils and Vucetic & Dobry (1991) for clayey soils, are adopted as the references. The geotechnical properties of earth-fill dam materials, used in the analyses, are presented in Table 1.
Boundary conditions
Geotechnical problems can be idealized by assuming that the regions far from the area of interest extend to infinity. The theoretically unbounded model should be truncated to a manageable size by using artificial boundaries, both to minimize the computation time and to prevent outward-propagating waves from reflecting back into the model. The viscous boundary developed by Lysmer & Kuhlemeyer (1969) is used in the current calculations. In this case, independent dashpots are used in the normal and shear directions at the model boundaries, as shown in Fig. 3. During the static analysis, the bottom boundary is fixed in both the horizontal and vertical directions, and the lateral boundaries only in the horizontal direction. In the dynamic analysis, when the dam rests on a foundation (and not on bedrock), the lateral boundaries are changed into the free-field boundaries available in the FLAC library, in order to eliminate wave reflections from the truncated boundaries.
Element size
Numerical distortion of a propagating wave can occur in dynamic analysis as a function of the modelling conditions. The numerical accuracy of wave transmission is affected by both the frequency content of the input wave and the wave speed characteristics of the system. Kuhlemeyer & Lysmer (1973) showed that for an accurate representation of wave transmission through the soil model, the spatial element size ∆l should be smaller than 1/10 to 1/8 of the wavelength associated with the highest frequency component of the input wave, i.e.

∆l ≤ λ/10 to λ/8,

where λ = wavelength associated with the highest frequency component that contains significant energy. Considering the above-mentioned criterion, the element size is defined small enough to allow seismic wave propagation throughout the analysis.
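A back-of-the-envelope check of this criterion as a short Python sketch; the shear wave speed below is an illustrative value (the 10 Hz cutoff matches the input filtering mentioned later in the chapter):

def max_element_size(shear_wave_speed, f_max, fraction=10.0):
    """Kuhlemeyer & Lysmer (1973): element size <= wavelength / (8..10)."""
    wavelength = shear_wave_speed / f_max  # shortest significant wavelength
    return wavelength / fraction

# e.g. Vs = 300 m/s and input filtered at 10 Hz (illustrative numbers):
dl = max_element_size(300.0, 10.0)  # -> 3.0 m maximum element size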
Damping
Material damping in soil arises generally from its viscosity, friction and the development of plasticity. Indeed, the role of damping in numerical models is to reproduce the energy losses of natural systems subjected to dynamic loads. Dynamic damping is provided in the model by the Rayleigh damping option available in FLAC. Rayleigh damping was primarily used in analyzing structures and elastic continua in order to damp the natural oscillation modes of the system. A Rayleigh damping of R_d = 5% is used to compensate for the energy dissipation in the media (Itasca, 2004). In dynamic analyses incorporating plasticity constitutive models, a considerable amount of energy dissipation can occur during plastic flow; in the calculations of such cases, a minimal percentage of hysteretic damping (e.g. 2%) is considered as well. The dam's natural frequency, used as the Rayleigh damping centre frequency, is determined by Fourier analysis of its free response, as shown in Fig. 4. The fundamental frequency (f_1 = 1.71 Hz) of the dam with 40 m height is shown in this figure, and those of dams with different heights are tabulated in Table 2.
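For single-centre-frequency Rayleigh damping of the kind used here, the mass- and stiffness-proportional coefficients can be recovered from the damping ratio and the centre frequency. A minimal sketch, assuming the standard Rayleigh formulation (general theory, not FLAC's internal code):

import math

def rayleigh_coefficients(xi, f_center):
    """Mass- and stiffness-proportional coefficients (alpha, beta) giving
    damping ratio xi at the angular centre frequency w = 2*pi*f_center,
    where xi(w) = alpha/(2w) + beta*w/2 attains its minimum value xi."""
    w = 2.0 * math.pi * f_center
    alpha = xi * w   # mass-proportional term
    beta = xi / w    # stiffness-proportional term
    return alpha, beta

# e.g. 5% damping centred at the 40 m dam's fundamental frequency of 1.71 Hz:
alpha, beta = rayleigh_coefficients(0.05, 1.71)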
Time step
The governing equations should be integrated incrementally in time to complete the numerical solution. The solution time step should be small enough to accurately define the applied dynamic loads and to guarantee the stability and convergence of the solution. In this regard, the time step is about 10⁻⁶ s in the current FLAC model.
Input excitations
Selecting the dynamic input motion is an important task in seismic evaluation processes. In non-linear dynamic analysis, the expected earthquakes should be expressed as a set of ground motion time histories. For a more correct evaluation, input motions that produce an appropriate range of dam responses should be selected from the available time-history realizations. This procedure may be intractable due to the number of time-history realizations; in reality, however, the level of earthquake response that can actually be reached by the physical system is limited. Quantifying such responses demands a good understanding of the seismic response of the system as well as of the ground motion parameters that characterize the damage potential of the seismic input (USCOLD, 1999). Different parameters can be employed to identify the severity and damage potential of a given acceleration time history assumed to represent the expected earthquake ground motion; the peak ground acceleration (PGA) is one such parameter. The use of this descriptor is intuitively natural, since accelerations and the resulting inertial forces are directly related by Newton's second law. However, there is no direct relation between PGA and the structural response at the dominant natural frequencies of most typical dams. Moreover, large PGA values are not sufficient for generating response conditions which lead to significant damage. Despite these limitations, PGA is still the fundamental parameter used to judge the damage potential of given acceleration histories. On the other hand, the seismic response of a system is strongly affected by the frequency content of the earthquake. Therefore, a better characterization of a given input motion can be achieved by using some form of spectral representation. In particular, the use of the Fourier amplitude spectrum is at the core of earthquake engineering practice. However, such characterizations do not provide a direct description of the duration or time-variation features of a given input motion.
Based on the above, in this chapter, three different real acceleration time histories are selected from a database of earthquake records: Tabas, PGA = 0.93g at the MCE level; Naghan, PGA = 0.72g at the MDE level; and San Fernando, PGA = 0.21g at the DBE level. In the dynamic analysis of the dams, the scaled records are filtered to a maximum frequency of 10 Hz, transferred to the "inside" bedrock formation by standard de-convolution analysis and applied at the base of the numerical model. The information on the earthquake records is summarized in Table 3.
Full non-linear dynamic analysis
Equivalent linear analysis is the common method used for evaluating the seismic behaviour of earth structures. In this approach, the response is first analyzed linearly using initial values of the damping ratio and shear modulus. Then, new values of the damping ratio and shear modulus are estimated using the maximum value of shear strain and laboratory curves, and these values are used to redo the analysis. This procedure is repeated several times until the material properties show no variation. Therefore, no non-linear effect is directly captured by this method, as it assumes linearity during the solution process. Strain-dependent modulus and damping functions are considered only roughly in order to approximate some effects of non-linearity (damping and material softening).
In the non-linear analysis employed in this study, the non-linear stress-strain relationship is directly followed by each zone. The damping ratio and shear modulus of the materials are calculated automatically at different strain levels. The real behaviour of soil under cyclic loading is non-linear and hysteretic. Such behaviour can be simulated by the Masing model (Masing, 1926), which can model the dynamic behaviour of soil. In this model, the shear behaviour of soil may be described by a backbone curve:

τ = F_bb(γ) = G_max γ / (1 + (G_max/τ_max) |γ|), (9)

where F_bb(γ) = backbone or skeleton function; γ = shear strain amplitude; G_max = initial shear modulus; τ_max = maximum shear stress amplitude.
In the first loading, the stress-strain curve follows the backbone curve, as shown in Fig. 7(a); however, to describe the unload-reload process, the above equation must be modified. If a load reversal occurs at the point (γ_r, τ_r), the stress-strain curve follows the path given by

(τ − τ_r)/2 = F_bb((γ − γ_r)/2). (10)

In other words, the shapes of the unload-reload curves are identical to that of the backbone curve (with the origin shifted to the load reversal point) except that they are enlarged by a factor of 2, as shown in Fig. 7(b). Eqs. (9) and (10) describe the Masing behaviour (Masing, 1926).
Masing rules alone are not sufficient for a precise description of the soil response under general cyclic loading. Finn et al. (1977) developed modified rules to describe irregular loading, suggesting that unloading and reloading curves follow two additional rules. If a new unloading or reloading curve exceeds the last maximum strain and cuts the backbone curve, it follows the backbone curve until the next reversal point, as shown in Fig. 7(c). If a new unloading or reloading curve passes through the previous one, it follows the former stress-strain curve, as shown in Fig. 7(d). According to this model, the tangent shear modulus at points on the backbone curve and on the unload-reload curves can be obtained by differentiating Eqs. (9) and (10), respectively:

G_t = G_max / (1 + (G_max/τ_max) |γ|)², (11)
G_t = G_max / (1 + (G_max/(2 τ_max)) |γ − γ_r|)². (12)

Based on the results obtained in this research, the shear stress decreases as the number of load cycles increases, which means that the shear stress-strain loops become more inclined. In this study, the Masing rules are implemented into FLAC via a series of FISH functions in order to simulate the non-linear stress-strain relationships.
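The following Python sketch illustrates the hyperbolic backbone and the factor-of-2 Masing unload-reload rule described above. It is a minimal illustration of Eqs. (9) and (10) with made-up material values, not the FISH implementation used in the study:

def backbone(gamma, g_max, tau_max):
    """Hyperbolic backbone curve, Eq. (9)."""
    return g_max * gamma / (1.0 + (g_max / tau_max) * abs(gamma))

def unload_reload(gamma, gamma_r, tau_r, g_max, tau_max):
    """Masing unload-reload branch, Eq. (10): the backbone scaled by 2
    about the reversal point (gamma_r, tau_r)."""
    return tau_r + 2.0 * backbone((gamma - gamma_r) / 2.0, g_max, tau_max)

# first loading up to 0.2% strain, then unloading (illustrative values):
g_max, tau_max = 8.0e7, 1.0e5
gamma_r = 0.002
tau_r = backbone(gamma_r, g_max, tau_max)
tau_unload = unload_reload(0.0005, gamma_r, tau_r, g_max, tau_max)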
Validation analysis
In this research, a one-zone sample is simulated using the unit cell shown in Fig. 8 in order to validate the implementation of the Masing rules in the FLAC program. The one-zone sample consists of sandy soil, and a periodic motion is exerted at its base. Vertical loading is established only by gravity, and the equilibrium stresses are then installed in the soil. The stress-strain loops of the sample are shown in Fig. 9(a) for several cycles. According to the figure, the shear modulus decreases as the shear strain increases, and the hysteretic model appears to handle the multiple nested loops reasonably. Energy dissipation and shear stiffness degradation are clearly observed during seismic loading, as shown in Fig. 9(b). The shear modulus reduction curve obtained in this study follows well the empirical relation proposed by Seed et al. (1986) and the test data. The results obtained from the numerical analyses are compared with the experimental ones in order to evaluate the capability of the proposed model. One of the centrifuge tests related to an embankment, performed in the VELACS project (VErification of Liquefaction Analysis using Centrifuge Studies; Scott, 1993, 1994), is chosen as a reference. It is attempted to create almost similar conditions for both the laboratory model test and the numerical model, as shown in the corresponding figures. Finally, the model's ability to simulate the seismic behaviour of an earth-fill dam during a real earthquake is verified against a well-documented real case history. In this regard, the results of the non-linear dynamic analysis of the Long Valley (LV) earth-fill dam in California subjected to the 1980 Mammoth Lake earthquake (Griffiths and Prevost, 1988) are presented and compared with the real measurements recorded at the site and with the results presented by previous authors. The LV dam is located in the Mammoth Lake area (California) in close proximity to active faults. The dam is a rolled earth-fill dam formed mainly of the impervious zone, with a maximum height of 55 m, a crest length of 182 m, and upstream and downstream slopes of 3h/1v. The LV dam was instrumented in the 1970s with a multiple-input-output array; it has 3 accelerometer stations to monitor the boundary conditions and 5 stations to record the dam response (Fig. 12(a)). Thus, the array comprised a total of 22 accelerometers linked to a common triggering mechanism. In May 1980, a series of 6 earthquakes occurred in the Mammoth Lakes area. The magnitudes of these earthquakes were M_L = 4.9-6.7, and the peak acceleration induced at the crest centre was 0.5g in the upstream-downstream direction (x direction, as shown in Fig. 13(a)) during the strongest event. Extensive arrays of 22 input-output (excitation-response) accelerations were recorded, providing a valuable source of information on the dam's seismic responses over a wide range of deformation levels. In this study, the dam is subjected to the input motion recorded downstream at the outlet during the Mammoth Lake earthquakes. The first 12 seconds of the recorded acceleration are used, with data points at 0.02-second intervals; the peak acceleration is 0.135g in the upstream-downstream (x) direction and 0.084g in the vertical (y) direction.
The cross section of the LV dam is shown in Fig. 12(b), and its detailed information can be found in Griffiths and Prevost (1988). The numerical grid constructed in FLAC is presented in Fig. 12(c). The input accelerations are applied in the horizontal and vertical directions at the model base. Free-field boundary conditions are applied to the lateral boundaries of the numerical model. This research focuses on the computed acceleration at the crest, which can be compared directly with the measured values. Previously, the LV dam has been analyzed by Lai & Seed (1985), Elgamal et al. (1987), Griffiths & Prevost (1988), Yiagos & Prevost (1991), Zeghal & Abdel-Ghaffar (1992) and Woodward & Griffiths (1996). Fig. 13(a) shows the computed horizontal acceleration of the crest; it indicates that amplification occurs between the base and the crest. The magnification factor of the peak amplitude at the crest is about 5.47 over the peak base amplitude. The crest response computed in the horizontal direction is compared with the measured values, as shown in Fig. 13(b); the dashed line corresponds to the computed response. Excellent overall agreement is achieved between the computed and measured values; however, the computed values show higher amplitudes. The frequency contents of the two time records are compared in the form of Fourier amplitude spectra (FAS), as shown in Fig. 13(c). Their peaks are in close agreement, although the computed values show rather more energy associated with the fundamental frequency around 1.8 Hz. The frequency content of the upstream/downstream motion, presented in Fig. 13(c), shows that the energy is concentrated at frequencies below 2 Hz.
In the vertical direction, the calculated acceleration shows lower agreement with the measured values, as shown in Fig. 14(a). In this figure, the plots of vertical acceleration at the base and crest are superimposed. The excitation is considerably noisier and less intensive in the vertical direction compared with the horizontal one. The maximum accelerations at the crest recorded in the vertical and horizontal directions are 0.172g and 0.64g, respectively. The computed accelerations in the vertical direction are compared with the values measured at the crest of the LV dam, as shown in Fig. 14(b); the computed values generally have lower amplitudes than the measured ones. The Fourier amplitude spectra of these time histories are given in Fig. 14. The results obtained in the validation analysis of the LV dam in terms of crest acceleration are given in Table 5 and compared with other numerical results presented by previous authors. According to the comparisons, the numerical procedure presented in this study can properly capture the fundamental aspects of the seismic behaviour of earth-fill dams. As mentioned earlier, owing to the satisfactory modelling of the validation cases, the numerical model is then used for the parametric study of hypothetical earth-fill dams.
Parametric study
Here, the analyses are carried out to investigate the effects of dam height, input motion characteristics, soil behaviour and strength of shell materials on the seismic behaviour of earth-fill dams. The effects of the different earthquakes on the horizontal permanent deformations, permanent shear strains and maximum accelerations are shown in Fig. 15; the values are those induced at the crests of dams with different heights. The displacements are shown in Fig. 15(a) and the corresponding shear strains in Fig. 15(b). It is clear that the shear strain variation is similar to that of the displacement. The horizontal displacements and shear strains in the dam body increase with increasing dam height. The calculated values are much higher for the Tabas earthquake, and failure occurs in the dam body. According to Fig. 15(a), the maximum horizontal displacement computed at the crest of the dam is about 94 cm at the end of the Tabas earthquake. It can be observed in Figs. 15(a) & (b) that an increase in the input motion energy leads to a significant increase of displacements and shear strains. Fig. 15(c) illustrates the coupled effects of dam height and earthquake type on the maximum acceleration induced at the dam crest. According to the figure, the crest acceleration decreases as the dam height increases, and no amplification is seen, possibly due to the more flexible behaviour, larger damping and larger developed plastic zones observed in higher dams. Because of these factors, more energy is absorbed in higher dams than in shorter ones. It can be seen in Fig. 15(c) that the crest accelerations are reduced more in higher dams than in smaller ones. It should be mentioned that the PGA of the Naghan earthquake (0.72g) is much higher than that of the San Fernando earthquake (0.21g); however, the displacements and shear strains created at the dam crest by the Naghan earthquake are close to those of the San Fernando input motion. It can be concluded that the PGA parameter alone is not sufficient for evaluating the effect of a given earthquake time history on the dam response; other earthquake parameters such as effective duration, magnitude and frequency content should also be considered in the analysis. The failure mechanism, with the permanent shear strain contours in the dam body, is shown in Fig. 16 for two different heights at the end of the Naghan earthquake. The slip surface is much deeper and more obvious in the dam with 280 m height (Fig. 16(b)) than in that with 120 m height (Fig. 16(a)). A dam with 40 m height, subjected to the mentioned earthquakes, is chosen as a reference with two different behaviours, elastic and elastic-perfectly plastic, in order to investigate the effect of soil behaviour on the seismic response of the dam body. As expected, for linear elastic behaviour, smaller displacements and shear strains are observed along the dam height (Figs. 17(a) & (b)). However, large amplification occurs, especially for the strongest earthquake (Fig. 17(c)); this means that plasticity reproduces more energy dissipation during dynamic loading. In such cases, the accelerations are reduced across the dam height and therefore become lower than the base acceleration. According to Fig. 17(a), the maximum displacement occurs at about Z/H = 0.88 for linear elastic behaviour, while it occurs at the crest of the dam for elastic-perfectly plastic behaviour.
Furthermore, in elastic-perfectly plastic behaviour, the dynamically induced residual (permanent) displacement increases largely in the upper part of the dam, especially for the Tabas and Naghan earthquakes, as confirmed in previous research works (Ohmachi and Kuwano, 1994; Ozkan et al., 2006). That is why the crest should be given special consideration in designing embankment dams, due to the stronger shaking at the upper parts, in order to avoid undesirable deformations. The distribution of shear strain along the dam height is extremely non-linear for the stronger earthquakes, as shown in Fig. 17(b). In the elastic dams, the maximum acceleration occurs at the dam crest, as shown in Fig. 17(c). In the acceleration profile of the Tabas earthquake, a distinct increase is seen along the dam centerline at Z/H = 0.38. The strength of the dam materials is an important parameter which can significantly affect the seismic response of the dam. In this regard, different friction angles are assumed for different dam heights, subjected to the Naghan earthquake, in order to clarify this effect. Fig. 18(a) shows the horizontal displacement values versus dam height for different friction angles of the shell materials. The variations of the shear strains at the crests of dams with different heights are shown in Fig. 18(b) for different friction angles of the shell materials. As expected, when the friction angle increases, the horizontal displacement and shear strain at the dam crest decrease. It can be seen in the above figure that the variation of the friction angle causes no significant change in the displacement and shear strain for φ ≤ 40°. However, the variation is more significant for φ = 45° compared with the lower friction angles; the highest displacement values correspond to φ = 30°. The horizontal displacements computed at the crests of the dams with 40 and 120 m heights are about 16 and 13 cm, respectively, and their shear strains are about 5.3 × 10⁻³ and 5.2 × 10⁻³, respectively. The maximum acceleration induced at the top of the dam decreases as the friction angle decreases or the dam height increases, as shown in Fig. 18(c). Considering larger friction angles for the shell materials (e.g., φ = 45°) leads to about a 70% increase in the dynamic amplification. The computed maximum crest acceleration of the dam with 40 m height is about 0.89g for φ = 45°, while that of the dam with 120 m height is 0.52g. Thus, when the dam height decreases, the horizontal displacement and shear strain increase, and the acceleration at the crest of the dam also increases. All variations are linear for φ = 45°, but for the other friction angles they are slightly non-linear, as shown in Fig. 18.
Lessons learned
The author experienced several interesting points and noteworthy items during numerical model calibration and the numerous dynamic analyses, which are listed below:
• In fully non-linear dynamic analysis, soil stiffness degradation is automatically taken into account by the constitutive model of the soil, and only the initial shear modulus is needed as an input parameter. It is therefore important to ensure that, in the numerical model, the trends of shear modulus decrease and damping ratio increase agree with laboratory test results during dynamic loading.
• A Poisson's ratio of about 0.5 should not be used in calculating the bulk modulus of undrained soil layers (such as clay). Otherwise, the bulk modulus increases irrationally, the time step of the analysis decreases drastically, and consequently the calculation time increases excessively. The Poisson's ratio should therefore not exceed 0.45 in such cases.
• If a "raw" acceleration record from a site is used as a time history, the FLAC model may exhibit residual displacements once the motion is finished. This arises from the fact that the integral of the complete time history may not be zero. Therefore, baseline drift correction should be performed in such cases.
• The input motion should be filtered before being applied to the FLAC grid in order to eliminate all high-frequency components from it.
• The stages of construction should be considered in the numerical simulation of earth-fill dams. In the present study, the stages are: initial state of the foundation (if any); layer-by-layer dam placement; applying the hydrostatic water pressure due to the filling of the dam reservoir; seepage analysis in the dam body; mechanical adjustment to the new flow field; and finally dynamic analysis. Each stage is run to equilibrium before the next stage is started. However, construction sequences have a much greater effect on static results than on dynamic results.
Conclusions
This chapter presents the non-linear seismic behaviour of earth-fill dams using explicit finite difference method. In this regard, a simple elastic-perfectly plastic constitutive model with Mohr-Coulomb failure criterion is used to describe the stress-strain response of the soil.
Here, Rayleigh damping is used to promote the level of hysteretic damping during the dynamic analysis. Masing rules are implemented into the constitutive model to precisely describe the non-linear soil response under general cyclic loading. The numerical model is then calibrated using centrifuge test data as well as field data, the latter obtained from real measurements of the Long Valley earth-fill dam subjected to the 1980 Mammoth Lake earthquake. The results of the dynamic analysis obtained in this study are compared with the real measurements of the Long Valley dam in terms of accelerations computed at the crest of the dam in both the time and frequency domains. As confirmed by the validation analyses, the proposed numerical model can properly reproduce the overall seismic behaviour of earth-fill dams, both qualitatively and quantitatively, under earthquake loading conditions. After validation, the effects of dam height, real earthquake loading, soil behaviour and strength of shell materials on the seismic response of earth-fill dams are evaluated through a comprehensive parametric study, with particular focus on the effect of dam height on the non-linear seismic behaviour. The following conclusions are obtained based on the performed parametric study:
• If the dam materials keep their elastic behaviour during dynamic loading, the horizontal acceleration increases along the dam height (from the base to the top). In this case, the higher dams show larger amplifications, especially if the natural periods of their bodies coincide with the periodic nature of the earthquake waves.
• When the dam body shows non-linearity or the materials move towards plastic behaviour during strong shaking, the attenuation of acceleration waves in the dam body becomes more effective. Consequently, the amplitudes of the earthquake accelerations decrease when moving from the base towards the top.
• According to the non-linear elastic-plastic analyses, when the height of the dam increases, the strongest dynamic loading (the Tabas earthquake) induces plasticity in large parts of the dam body. In fact, strong earthquakes are more effective in changing the material behaviour from elastic to plastic condition than weak earthquakes.
• The higher dams are more flexible than the smaller ones. This flexibility affects the shear strains, which in turn influence the shear modulus degradation and the attenuation coefficient. All these effects tend to weaken the accelerations along the height.
• Soils with lower strength (e.g., a low friction angle) yield under a small amount of dynamic force, which causes the attenuation of acceleration along the dam height in weak materials compared with strong ones.
• For a dam subjected to an earthquake with lower energy, the dam body behaves as an elastic material. In this case, the induced seismic accelerations inside the dam body become larger from the base of the dam to its top, only small plasticity zones develop in the dam body, and the dam remains safe during dynamic loading.
Finally, the non-linear dynamic analysis shows that plasticity should be considered in investigating the seismic response of earth-fill dams: it decreases the acceleration of the dam crest and increases the displacements and shear strains of the dam body as well as the energy dissipation. All of these can significantly affect the seismic response of earth-fill dams. | 9,326.8 | 2012-02-10T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Rapid and Visual Detection of SARS-CoV-2 Using Multiplex Reverse Transcription Loop-Mediated Isothermal Amplification Linked With Gold Nanoparticle-Based Lateral Flow Biosensor
Background Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a novel coronavirus that has caused the outbreak of coronavirus disease 2019 (COVID-19) all over the world. In the absence of appropriate antiviral drugs or vaccines, developing a simple, rapid, and reliable assay for SARS-CoV-2 is necessary for the prevention and control of COVID-19 transmission. Methods A novel molecular diagnosis technique, named multiplex reverse transcription loop-mediated isothermal amplification linked with a nanoparticle-based lateral flow biosensor (mRT-LAMP-LFB), was applied to detect SARS-CoV-2 based on the SARS-CoV-2 RdRp and N genes, and the mRT-LAMP products were analyzed using the nanoparticle-based lateral flow biosensor. The mRT-LAMP-LFB amplification conditions, including the target RNA concentration, amplification temperature, and time, were optimized. The sensitivity and specificity of the mRT-LAMP-LFB method were tested in the current study, and the mRT-LAMP-LFB assay was applied to detect the SARS-CoV-2 virus from clinical samples and artificial sputum samples. Results The SARS-CoV-2-specific primers based on the RdRp and N genes were valid for the establishment of the mRT-LAMP-LFB assay to detect the SARS-CoV-2 virus. The multiplex RT-LAMP amplification condition was optimized at 63°C for 30 min. The full process, including reaction preparation, viral RNA extraction, RT-LAMP, and product identification, could be completed in 80 min. The limit of detection (LoD) of the mRT-LAMP-LFB technology was 20 copies per reaction. The specificity of mRT-LAMP-LFB detection was 100%, and no cross-reactions to other respiratory pathogens were observed. Conclusion The mRT-LAMP-LFB technique developed in the current study is a simple, rapid, and reliable method with great specificity and sensitivity for identifying the SARS-CoV-2 virus for the prevention and control of COVID-19, especially in resource-constrained regions of the world.
INTRODUCTION
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), a virus with a non-segmented positive-sense RNA genome, is a novel coronavirus causing an outbreak of respiratory disease all over the world (Bao et al., 2020; Zhang, 2020). In the 21st century, two other important coronaviruses, severe acute respiratory syndrome coronavirus (SARS-CoV) and Middle East respiratory syndrome coronavirus (MERS-CoV), have severely threatened public health (in 2003 and 2012, respectively) (Chen, 2020; Wang et al., 2020). Since December 2019, the novel SARS-CoV-2 coronavirus has been found in many countries around the world, and the disease was declared a "public health emergency of international concern" by the World Health Organization (WHO) (Rothe et al., 2020). Most patients infected with SARS-CoV-2 present with acute onset of fever, cough, dyspnea, and radiological evidence of ground-glass lung opacities compatible with atypical pneumonia (Tu et al., 2020). Moreover, asymptomatic or mildly symptomatic cases have also been reported (Coronaviridae Study Group of the International Committee on Taxonomy of Viruses, 2020; Jiang et al., 2020). Owing to the current disease situation, SARS-CoV-2 has become the third coronavirus posing significant threats to public health worldwide. In the absence of appropriate antiviral drugs or vaccines, developing a reliable, simple, and rapid assay for SARS-CoV-2 is necessary for the prevention and control of COVID-19 transmission.
Since the outbreak of COVID-19, real-time reverse transcription-polymerase chain reaction (RT-PCR) has been the most robust and widely used technology for the detection of SARS-CoV-2 in hospitals and other medical institutions (Corman et al., 2020; Tahamtan and Ardebili, 2020; Zhen et al., 2020). However, RT-PCR assays require special experimental instruments, are time-consuming, and require skilled personnel, which may not be readily available in many resource-poor settings. Therefore, a cost-effective, simple, reliable, rapid, sensitive, and specific assay for the identification of SARS-CoV-2 urgently needs to be developed to improve the detection capability and prevent the spread of COVID-19.
To overcome the drawbacks of RT-PCR detection, a wide variety of isothermal amplification-based methods have been developed for molecular identification (Wang et al., 2015; Wang et al., 2017). Loop-mediated isothermal amplification (LAMP), a reliable, sensitive, and rapid assay with low equipment cost, has been widely applied to detect many pathogens, including SARS-CoV, MERS-CoV, and influenza virus (Huang et al., 2018; Kim et al., 2019; Ravina et al., 2020). LAMP products have been analyzed by various methods, including visual inspection of color changes, turbidimetry changes, and fluorescence dyes (Notomi et al., 2000; Wang et al., 2019; Lu et al., 2020). However, these detection techniques require special apparatus and reagents. To overcome this defect, a target-specific, visual, and simple nanoparticle-based lateral flow biosensor (LFB) detection method was successfully designed and applied to analyze mRT-LAMP products (Li et al., 2019; Wang et al., 2019). In this study, a multiplex reverse transcription LAMP technique linked to an LFB detector (mRT-LAMP-LFB) was developed for the simple, specific, reliable, sensitive, and visual identification of SARS-CoV-2 by targeting the RNA-dependent RNA polymerase gene (RdRp gene) and the nucleocapsid protein gene (N gene) (Huang et al., 2020). The optimal amplification conditions and the feasibility of the mRT-LAMP-LFB assay were confirmed with SARS-CoV-2 pseudo-virus, clinical samples, and artificial sputum samples. The biosensor materials, including the backing card, sample pad, absorbent pad, conjugate pad, and nitrocellulose membrane (NC), were purchased from Jie-Yi Biotechnology Co., Ltd. (Shanghai, China). Anti-FAM (rabbit anti-fluorescein antibody) and biotin-BSA (biotinylated bovine serum albumin) were purchased from Abcam Co., Ltd. (Shanghai, China). Dyed (crimson red) streptavidin-coated polymer nanoparticles (129 nm, 10 mg ml⁻¹; 100 mM borate, pH 8.5, with 0.1% BSA, 0.05% Tween 20 and 10 mM EDTA) were purchased from Bangs Laboratories, Inc. (Indiana, USA).
Design of RT-LAMP Primers
Based on the reaction mechanism of LAMP, two sets of specific primers were designed according to the target genes RdRp and N (GenBank Accession No. NC_045512.2), respectively. The primers were designed with Primer Explorer V5 (http:// primerexplorer.jp/e/; Eiken Chemical Co., Ltd., Tokyo, Japan) online primer design software and checked with the basic local alignment search tool (BLAST). The primer positions are shown in Figure 1, and the RdRp and N genes sequence alignment among seven human coronaviruses (SARS-CoV-2, SARS-CoV, MERS-CoV, HCoV-HKU-1, HCoV-NL63, HCoV-OC43, and HCoV-229E) are shown in Supplementary Figure 1. The primer sequences and modifications are shown in Table 1. All of the primers were synthesized by TsingKe Biotech Co., Ltd. (Beijing, China) with HPLC purification grade.
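For orientation, a hypothetical sketch (not the authors' pipeline, which used Primer Explorer V5 and BLAST) of a quick in-silico sanity check on candidate LAMP primers is shown below; the target and primer strings are toy placeholders, not the published sequences.

```python
# Hypothetical sketch: count exact binding sites of candidate primers on a target
# sequence and estimate a rough melting temperature. Toy data, illustration only.

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def exact_sites(target: str, primer: str) -> int:
    """Count exact binding sites of a primer on either strand of the target."""
    return target.count(primer) + target.count(revcomp(primer))

def wallace_tm(seq: str) -> int:
    """Rough melting-temperature estimate (Wallace rule: 4*GC + 2*AT), orientation only."""
    gc = seq.count("G") + seq.count("C")
    return 4 * gc + 2 * (len(seq) - gc)

target = "ATGCCTGGTACCTTAGGCATCGATTACGGAACCTGGTCAAGCT"   # toy stand-in for a target region
candidate_primers = {
    "F3-like": "ATGCCTGGTACCTTAGG",                      # toy forward primer
    "B3-like": revcomp("ACCTGGTCAAGCT"),                 # toy reverse primer
}

for name, seq in candidate_primers.items():
    print(f"{name}: {exact_sites(target, seq)} exact site(s), Tm ≈ {wallace_tm(seq)} °C")
```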
SARS-CoV-2 RNA Standard and Artificial SARS-CoV-2 Virus Preparation
The SARS-CoV-2 RNA standard material was obtained from the Chinese Academy of Metrology (Code NO. GBW (E) 091089). The RNA transcripts contained ORF1ab gene segment (13201-15600), complete E gene, and N gene (GenBank NO. NC_045512), and the concentration of RNA was measured by absolute quantitative digital PCR.
RNA Template Preparation
In the current study, viral RNA from both pseudo-virus (TsingKe Biotech Co., Ltd.) and clinical samples was extracted using Viral RNA Extraction Kits (Qiagen, Hilden, Germany) in accordance with the manufacturer's instructions. The RNA templates were stored at −80°C before use. The concentration was assayed by quantitative PCR against the RNA standard. Then, 10-fold serial dilutions of the pseudo-virus ranging from 1×10^4 copies/ml to 1 copy/ml were prepared.
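As a purely illustrative sketch of the dilution arithmetic (the template volume per reaction below is an assumption, not a value stated in the protocol):

```python
# Ten-fold dilution series of the quantified pseudo-virus stock, with an assumed
# template volume used to convert copies/ml into approximate copies per reaction.

stock_copies_per_ml = 1e4
template_ul = 2.0          # hypothetical RNA volume added per reaction (µl)

levels = []
c = stock_copies_per_ml
while c >= 1:
    levels.append(c)
    c /= 10                # one 10-fold dilution step

for conc in levels:
    per_reaction = conc / 1000.0 * template_ul   # copies/ml -> copies/µl -> copies/reaction
    print(f"{conc:>7g} copies/ml  ->  ~{per_reaction:g} copies per reaction")
```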
Gold Nanoparticle-Based Lateral Flow Biosensor Preparation
The LFB platform was prepared according to a previous report (Cheng et al., 2019). Briefly, the LFB contained four components: an absorbent pad, NC membrane, sample pad, and conjugate pad (Jie-Yi Biotechnology. Co., Ltd.). The components were assembled orderly on a backing card. The capture reagents, including anti-FAM, anti-Dig, and biotin-BSA (Abcam. Co., Ltd.), were immobilized by physical adsorption on the reaction regions. Then, anti-FAM was immobilized at test line 1 (TL1) (RdRp), and anti-Dig was immobilized at test line 2 (TL2) (N), while biotin-BSA was immobilized at the control line (CL); each line was separated by 5 mm. SA-PNPs (dye streptavidin-coated polymer nanoparticles) were gathered on the conjugate pad. The prepared biosensors were preserved in a plastic box with a desiccant gel at room temperature before use.
RT-LAMP Products Detection
The monitoring techniques, including 2% agarose gel electrophoresis, the visual detection reagent MG (VDR, Haitai-Zhengyuan Biotech Co., Ltd., Beijing, China), and the lateral flow biosensor (LFB), were applied for the determination and verification of the RdRp-RT-LAMP, N-RT-LAMP, and mRT-LAMP products. For products amplified effectively, the agarose gel presented ladder-like bands, and the color changed from colorless to light green in the MG assay. In contrast, negative and blank controls showed no bands in gel electrophoresis, and their color remained colorless. The strategy for visualization of RT-LAMP products with the LFB was as previously described (Gong et al., 2019).
Temperature Optimization of the RT-LAMP Assays
To determine the optimal amplification temperature for RdRp-RT-LAMP and N-RT-LAMP, the SARS-CoV-2-RdRp-N pseudo-virus was used as a positive control at a concentration of 1×10^4 copies per reaction, and the RT-LAMP amplifications were monitored by a real-time turbidity technique. Reaction temperatures ranging from 60 to 67°C at 1°C intervals were tested, and the turbidity (DNA concentration) curves of each amplification were recorded. A turbidity > 0.1 was considered positive. Three replicates were tested for each temperature.
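A minimal sketch (not the instrument software) of how the optimal temperature can be read from such real-time turbidity curves is shown below: a run is scored positive once turbidity exceeds 0.1, and the temperature with the shortest mean time-to-threshold is selected. The curves are invented numbers for illustration only.

```python
THRESHOLD = 0.1

def time_to_positive(times_min, turbidity):
    """Return the first time point at which turbidity exceeds the threshold, or None."""
    for t, y in zip(times_min, turbidity):
        if y > THRESHOLD:
            return t
    return None

# toy data: {temperature: list of replicate turbidity curves sampled every 5 min}
times = [5, 10, 15, 20, 25, 30, 35, 40]
runs = {
    62: [[0.0, 0.0, 0.02, 0.06, 0.12, 0.25, 0.30, 0.31]],
    63: [[0.0, 0.01, 0.05, 0.15, 0.28, 0.31, 0.32, 0.32]],
    64: [[0.0, 0.0, 0.04, 0.13, 0.26, 0.30, 0.31, 0.31]],
}

best = min(
    runs,
    key=lambda temp: sum(time_to_positive(times, c) or 1e9 for c in runs[temp]) / len(runs[temp]),
)
print(f"Fastest amplification at {best} °C")  # -> 63 °C with these toy curves
```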
Optimization of the Amplification Time for the mRT-LAMP-LFB Assay
To optimize the reaction time of mRT-LAMP-LFB, four amplification times (20, 30, 40, and 50 min) were evaluated. The mRT-LAMP-LFB reactions were carried out as described above, and the results were tested by LFB. Each reaction time was tested at least three times.
Analytical Sensitivity of mRT-LAMP-LFB Assays
The sensitivity of each RT-LAMP-LFB reaction (RdRp-RT-LAMP-LFB, N-RT-LAMP-LFB, and mRT-LAMP-LFB) was determined using SARS-CoV-2 pseudo-virus in ten-fold serial dilutions ranging from 1×10^4 copies to 1 copy per reaction. The RT-LAMP reactions were carried out as described above, and the results were assessed using the visual detection reagent (MG) and the LFB. The limit of detection (LoD) of the singleplex and multiplex reactions was defined as the lowest template amount that still gave a positive result. The LoD of RT-PCR, using an Applied Biosystems™ 7500 Real-Time PCR System (Life Technologies, Singapore) with the Novel Coronavirus Nucleic Acid Diagnostic Real-Time RT-PCR Kit (Sansure Biotech Inc., China), was also determined in the current study. Three replicates were tested for each dilution.
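A hedged sketch of one common LoD read-out for such a dilution series is given below: the LoD is taken as the lowest copy number at which all replicates remain positive. The replicate calls are invented for illustration; only the resulting 20 copies per reaction matches the value reported later.

```python
# Dilution levels follow the series used in the sensitivity experiments (Figure 5).
dilution_copies = [10_000, 1_000, 100, 20, 10, 1]

# three replicate LFB calls per dilution (True = test lines visible); invented values
calls = {
    10_000: [True, True, True],
    1_000:  [True, True, True],
    100:    [True, True, True],
    20:     [True, True, True],
    10:     [False, True, False],
    1:      [False, False, False],
}

lod = min(c for c in dilution_copies if all(calls[c]))
print(f"LoD = {lod} copies per reaction")  # -> 20 with these toy calls
```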
Specificity Analysis of mRT-LAMP-LFB Detection
To evaluate the specificity of the mRT-LAMP-LFB assay, pseudo-viruses of SARS-CoV-2, SARS-CoV-2 positive clinical samples, and other pathogens ( Table 2) were used for mRT-LAMP detection, and all of the results were tested using the LFB method. All examinations were confirmed at least three times.
Application of the mRT-LAMP-LFB Method to Analyze the Clinical Samples and Artificial Sputum Samples
To verify the applicability of the mRT-LAMP-LFB assay for detecting SARS-CoV-2, one hundred and ten clinical nasopharyngeal swab specimens were collected from patients with suspected SARS-CoV-2 infection, and sixty artificial sputum samples (each 200 ml artificial sputum sample spiked with 100 copies of SARS-CoV-2 pseudo-virus) were used in the current study. The artificial sputum samples were pretreated with N-acetyl-L-cysteine-2% NaOH. Initial processing of all specimens was performed in a validated biological safety cabinet by trained staff wearing appropriate personal protective equipment. The clinical samples and artificial sputum samples were tested for SARS-CoV-2 using both the RT-PCR and mRT-LAMP-LFB methods. The mRT-LAMP detection was performed as described above. The Novel Coronavirus Nucleic Acid Diagnostic Real-Time RT-PCR Kit (Sansure Biotech Inc., China), which is recommended by the Chinese Center for Disease Control and Prevention, was used as the reference standard. The RT-PCR detection was performed with an Applied Biosystems™ 7500 Real-Time PCR System (Life Technologies, Singapore), and a threshold cycle (Ct value) < 38 was considered a positive result. The mRT-LAMP-LFB and RT-PCR assays were performed simultaneously in a biosafety level 2 laboratory, as detailed in the WHO Laboratory Biosafety Manual, third edition.
RESULTS
COVID-19 is a newly emerging, life-threatening respiratory disease caused by the novel coronavirus SARS-CoV-2, and it has had a significant impact on public health and the economy worldwide (Bao et al., 2020; She et al., 2020). The purpose of the current study was to develop a reliable, rapid, sensitive, and easy-to-use assay for SARS-CoV-2.
Verification and Analysis of RT-LAMP Products
To confirm amplification with the two sets of LAMP primers, the RdRp-, N-, and mRT-LAMP mixtures were incubated at a constant temperature of 65°C for 1 h. The RdRp-, N-, and mRT-LAMP products were then analyzed with 2% agarose gel electrophoresis, the colorimetric indicator (MG), and the lateral flow biosensor (LFB), respectively. Ladder-like bands were observed on the agarose gel for the positive amplifications but not for the negative controls (Figures 2A, D, G). The color of the positive RdRp-, N-, and mRT-LAMP reactions changed from colorless to bright green, while the negative reactions remained colorless (Figures 2B, E, H). The LFB was used for further confirmation of the RdRp-, N-, and mRT-LAMP products. For RdRp-RT-LAMP detection, two crimson red bands (CL and TL1) appeared, indicating positive results; CL and TL2 were visible for N-RT-LAMP, indicating successful amplification; and the negative controls showed only a single crimson red line (CL) on the biosensor (Figures 2C, F, I). These results suggested that the two sets of RT-LAMP primers for RdRp and N detection were valid for the development of the mRT-LAMP assay.
Optimal Reaction Temperature for RdRp-RT-LAMP and N-RT-LAMP Amplification
The reaction temperature is crucial for RT-LAMP amplification.
In this study, RdRp- and N-RT-LAMP amplification was tested at temperatures from 60 to 67°C at 1°C intervals with genomic templates (1×10^4 copies) from the SARS-CoV-2 pseudo-virus. The RT-LAMP amplification protocol was as described above, the RdRp- and N-RT-LAMP amplifications were monitored by the real-time turbidity technique, and kinetics graphs were recorded at all temperatures. The fastest RdRp-RT-LAMP amplification was obtained at 63 to 64°C, and the fastest N-RT-LAMP amplification at 62 to 63°C (Figure 3). Hence, 63°C was chosen as the optimal temperature for all subsequent mRT-LAMP reactions in the current study.
Optimization of Amplification Time for mRT-LAMP-LFB Assay
To obtain an optimal reaction time for mRT-LAMP, four amplification times (20, 30, 40, and 50 min) were tested at the 63°C amplification temperature. The results showed that the LoD of the genomic RNA templates (20 copies) was detected when the mRT-LAMP amplification lasted 30 min (Figure 4). Hence, a reaction time of 30 min was considered the optimal amplification time for mRT-LAMP detection. In summary, the whole detection procedure, including reaction preparation (approximately 10 min), target genomic RNA preparation (30 min), mRT-LAMP (30 min), and analysis of results (approximately 2 min), could be completed within 80 min.
Sensitivity of RdRp-, N-, and mRT-LAMP Detection
The sensitivity of RdRp-, N-, and mRT-LAMP detection was evaluated with serially diluted pseudo-virus RNA ranging from 1×10^4 copies to 1 copy per reaction. The RT-LAMP amplification products were analyzed by visual inspection with MG reagent and by lateral flow biosensor. The appearance of the CL and TL1 lines on the biosensor indicated a positive RdRp-RT-LAMP result, and two crimson lines (CL and TL2) indicated a positive N-RT-LAMP result. The CL, TL1, and TL2 bands simultaneously became crimson on the biosensor when both the RdRp and N genes were detected. For the negative controls, only the CL line appeared on the biosensor. The results showed that the LoD of mRT-LAMP was 20 copies per reaction, the same as the LoD of the singleplex RdRp- and N-RT-LAMP assays (Figures 5A, B, D, E, G, H). The sensitivity of the RT-PCR technique was also tested in the current study; its LoD was 100 copies per reaction (Figures 5C, F, I).
Specificity of the mRT-LAMP Assay
The specificity of mRT-LAMP detection was confirmed with SARS-CoV-2 pseudo-viruses, 12 clinical SARS-CoV-2-positive samples, and 36 other pathogens (Table 2). The mRT-LAMP amplification was performed as described above. The genomic RNA extracted from SARS-CoV-2 gave positive results, whereas the other pathogens and the blank control gave negative results (Table 2). Hence, these results confirmed that the mRT-LAMP-LFB method could accurately distinguish SARS-CoV-2 from other pathogens.
Feasibility of the mRT-LAMP-LFB Method Using Clinical Samples
To further demonstrate the feasibility of mRT-LAMP-LFB as a valuable method for the detection of SARS-CoV-2, 110 clinical nasopharyngeal swab specimens and 60 artificial sputum samples (each 200 ml artificial sputum sample spiked with 100 copies of SARS-CoV-2 pseudo-virus) were tested in parallel by mRT-LAMP-LFB and RT-PCR. Among them, 12 clinical samples and 35 artificial sputum samples were identified as SARS-CoV-2 positive, with concordant results between RT-PCR and mRT-LAMP-LFB (Table 3). The Cq values of the RT-PCR and mRT-LAMP-LFB detection results are shown in Supplementary Table 1. These results suggest that the mRT-LAMP-LFB assay established in the current study could serve as an advanced tool to detect SARS-CoV-2.
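A small sketch of how such paired results can be summarized against the reference RT-PCR is shown below; the counts are an assumed layout consistent with the numbers in the text (12 positives among 110 clinical swabs, full concordance assumed for illustration), not the published Table 3.

```python
def agreement(tp, fp, fn, tn):
    """Sensitivity/specificity of the index test versus the reference, plus overall agreement."""
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    overall = (tp + tn) / (tp + fp + fn + tn)
    return sens, spec, overall

# Assumed 2x2 counts for the clinical swabs: mRT-LAMP-LFB vs RT-PCR reference.
sens, spec, overall = agreement(tp=12, fp=0, fn=0, tn=98)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, overall agreement={overall:.1%}")
```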
DISCUSSION
SARS-CoV-2 is the seventh coronavirus known to cause human infections. Like SARS-CoV and MERS-CoV, this virus can cause lethal pneumonia (Chiappelli, 2020). Moreover, it has a stronger human-to-human transmission capacity than the other two coronaviruses (Ki, 2020; Wilson and Chen, 2020). To date, up to 140 million COVID-19 cases have been confirmed, including more than 3 million deaths (www.who.int/emergencies/diseases/novel-coronavirus-2019). The main finding of the current study is the establishment of a simple, sensitive, reliable, and rapid mRT-LAMP-LFB assay for SARS-CoV-2 with high specificity and low equipment cost. To avoid false-positive or false-negative results, two target genes, RdRp and N, were chosen to detect viral RNA in clinical samples (Huang et al., 2020; Pang et al., 2020). To reduce the amplification time, loop primers were designed. Briefly, six primers targeting eight regions generate a self-priming dumbbell-shaped template upon isothermal incubation with a strand-displacing polymerase, resulting in the rapid production of large quantities of the complex amplicon. The specificity of the mRT-LAMP assay was confirmed with genomic RNA from SARS-CoV-2 pseudo-viruses, clinical samples, and other pathogens. The mRT-LAMP detection of the RdRp and N genes identified SARS-CoV-2 with 100% specificity (Table 2).
(Figure 5 legend: tubes/biosensors represent 1×10^4, 1×10^3, 1×10^2, 20, 10, and 1 copies of genomic RNA per reaction plus a blank control (DW); the LoD of the RdRp-, N-, and mRT-LAMP assays was 20 copies per reaction, while the LoD of the corresponding RT-PCR detection was 100 copies per reaction.)
Several molecular diagnostic tests for SARS-CoV-2 based on RT-LAMP technology have been reported previously. Most of them used visual inspection of color changes, turbidimetry, or fluorescence dyes to analyze RT-LAMP products (Lu et al., 2020; Park et al., 2020; Yan et al., 2020). However, these techniques rely on special instruments and expensive reagents, such as colorimetric indicators, turbidimeters, and fluorescence detectors, which may not be readily available in many resource-poor settings. To overcome these drawbacks, a target-specific, visual nanoparticle-based lateral flow biosensor (LFB) detection method that is easy to operate and low-cost (approximately $2 USD) was designed and applied to analyze mRT-LAMP products in the current study. The SARS-CoV-2 mRT-LAMP-LFB result is read directly by the naked eye and does not require special instruments. Owing to its specificity and the elimination of special instruments, the LFB-based LAMP assay can easily be applied in various settings (Cheng et al., 2019; Wang et al., 2019). In particular, the LFB applied in this study can simultaneously and visually detect two target genes (RdRp and N) in a single test.
Compared with the RT-PCR method, the mRT-LAMP-LFB technique is more sensitive, time-saving, and cost-saving. The newly developed mRT-LAMP-LFB method was able to detect 20 copies of genomic RNA per reaction, which is more sensitive than the RT-PCR method (Figure 5). The entire detection process, including reaction preparation (approximately 10 min), template preparation (approximately 30 min), isothermal amplification (30 min), and LFB reading (approximately 2 min), can be accomplished within 80 min, whereas the RT-PCR assay requires 2-3 h for the whole process. The running cost of one test, including genomic RNA extraction (approximately $1 USD), the LAMP reaction (approximately $3.5 USD), and LFB reading (approximately $2 USD), is estimated at $6.5 USD, which is comparable to that of RT-PCR testing (approximately $7.0 USD). In addition, this technology can decrease labor costs because performing the mRT-LAMP-LFB assay does not require skilled technical personnel. More importantly, the mRT-LAMP-LFB technology has great potential for the development of point-of-care (POC) testing in clinical practice. The detection results can be easily judged by the naked eye: three crimson red bands (CL, TL1, and TL2) indicate a positive result, while a negative result shows only a single crimson red line (CL) on the biosensor. A patent application based on the findings of this study has been filed with the State Intellectual Property Office of the People's Republic of China (Patent Application NO. 202010717954. X). A shortcoming of this assay is that the RT-LAMP amplification products must be removed from the reaction tube for LFB detection, which carries a risk of contamination during post-reaction processing of LAMP products. Strict control of the laboratory environment is therefore critical for reducing aerosol production during the experimental process. Promptly spraying 10-15% sodium hypochlorite solution and 70% ethanol after completion of detection is an effective way to prevent nucleic acid contamination in the laboratory. In the current study, the mRT-LAMP-LFB detection results were consistent with the RT-PCR results in the evaluation of clinical samples, indicating that false-positive results were effectively controlled in our laboratory.
The main limitation of this study is that, as SARS-CoV-2 continues to spread widely, the accuracy of the mRT-LAMP-LFB technology may be affected by mutations occurring in the primer-binding regions of the target genes. It is therefore necessary to monitor mutation sites in the viral genome by whole-genome sequencing. In addition, owing to laboratory biosafety restrictions, live SARS-CoV and MERS-CoV viruses could not be included in the specificity testing of the mRT-LAMP-LFB assay, so pseudo-viruses of SARS-CoV and MERS-CoV were used as alternatives.
In conclusion, a simple, rapid, and reliable mRT-LAMP-LFB technique targeting the RdRp and N genes was successfully developed for assaying SARS-CoV-2 in the current study. The method detects SARS-CoV-2 rapidly, reliably, specifically, and sensitively, and the amplification products are analyzed with the LFB, which is objective, fast, and easy to interpret. Hence, the mRT-LAMP-LFB assay can be considered a useful method for the reliable and rapid detection of SARS-CoV-2 in clinical samples, especially in resource-constrained regions of the world.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The study was approved by the Human Ethics Committee of the Second Affiliated Hospital of Guizhou University of Traditional Chinese Medicine (Approval No. TYH2020011) and the Human Ethics Committee of the Zhejiang Hospital (Approval No. 2020 Lin Shen Di (7K) Hao), and complied with the Declaration of Helsinki. All data/isolates were analyzed anonymously.
AUTHOR CONTRIBUTIONS
XC, QZ, and SD conceived and designed the study. XC and SD participated in primer design. XC, QZ, BC, YW, and HY contributed to all the laboratory work. BC and HY contributed to the data collection. XC, SL, and QZ performed the statistical analysis. XC wrote the initial draft of the manuscript, and SD revised the manuscript. All authors contributed to the article and approved the submitted version.
"Medicine",
"Engineering"
] |
Effect of Flame Retardants and Electrolyte Variations on Li-Ion Batteries
Lithium-ion batteries are being increasingly used and deployed commercially. Cell-level improvements that address flammability characteristics and thermal runaway are currently being intensively tested and explored. In this study, three additives with fire-retardant properties (namely, lithium oxalate, sodium fumarate and sodium malonate) are investigated with respect to their incorporation into graphite anodes, and their chemical and electrochemical interactions within the anode and the cell are studied. It is shown that flame-retardant concentrations of up to approximately 20 wt.% within the anode coating do not cause significant capacity degradation but can provide a flame-retardant effect due to their inherent release of fire-retardant CO2 gas. The flame-retardant-containing layers exhibit good adhesion to the current collector. Their suitability in lithium-ion cells was tested in pouch cells and, when compared to pure graphite anodes, showed almost no deterioration in cell capacity when used in moderate (≤ 20 wt.%) concentrations.
Introduction
Li-ion batteries (LIB) are promising energy-storage devices for portable consumer electronics as well as energy sources for electric vehicles. However, Li-ion batteries are susceptible to high temperatures, which can lead to ignition and even the explosion of these batteries. In the event of improper operation, gas and heat may be generated inside the battery as a result of electrode, electrolyte or solvent degradation. This gas mixture can be very explosive or even self-igniting if it is released from the cell and mixed with air, or through oxygen-release reactions from the electrodes (especially the cathode). Physical damage by crushing or puncturing can also cause rapid ignition and the destruction of the battery [1,2].
Many efforts have been made to improve the safety of Li-ion batteries.There have been developments to obtain safer electrolytes: for instance, by the addition of flame retardants (FRs) to electrolytes, or by using less-flammable electrolyte solvents such as ionic liquids (ILs) and hydrofluoroethers (HFEs) instead of organic electrolytes [1].Fluorinated solvents are also a possibility for enhancing the battery's safety characteristics [3].Safety devices incorporated into battery cells and modules, such as a shutdown separator, cell vent, and current interrupt devices, etc., are further options for managing battery safety [2,4].
Several different materials can be used as anode materials [5]. Typically, graphite, LTO or Si-graphite blend materials are used in commercial cells. They are processed as a layer together with a binder and carbon black. Accordingly, the combustible fraction involved in a thermal runaway fire (expressed as carbon) is correspondingly high. To our knowledge, no flame retardants are currently processed directly into the anode material.
Standard organic, carbonate-based electrolytes are sensitive to increases in local temperature and to overcharge, even if the temperature does not exceed 100 °C [6]. Such abusive conditions can cause exothermic reactions inside the cell and lead to thermal runaway. The main strategies to improve LIB safety at the cell level are the development of less-flammable or non-flammable electrolytes, the use of electrolyte additives, and the addition of fire retardants [7][8][9][10]. ILs are of major interest due to their high thermal, chemical and electrochemical stability. However, their high viscosity and correspondingly poor ionic conductivity hamper the application of ILs in batteries. Therefore, organic solvents are often added to such IL electrolytes to improve the ionic conductivity and/or reduce the viscosity of pure IL electrolytes [11,12]. Solid-state electrolytes, namely inorganic solid electrolytes and polymer electrolytes, are often described as more thermally stable; they can therefore replace liquid electrolytes, provided the issue of their low ionic conductivity is overcome [13]. Additionally, organic phosphorus compounds can be used as fire retardants. They improve the thermal stability of the electrolyte, reduce its flammability by interrupting decomposition reactions, and can even diminish the ageing process [14][15][16][17][18][19][20][21][22]. At the same time, some organic phosphates (e.g., trimethyl phosphate) can also act as electrolyte components in the battery itself. Non-flammable phosphate electrolytes improve the thermal stability of the battery and suppress gas generation during charging and discharging [17,23].
Lithium bis(trifluoromethanesulfonyl)imide (LiTFSI) is an alternative conducting salt. Compared to the commonly used, highly reactive LiPF6, it offers higher thermal and electrochemical stability [24]. Its main disadvantage is its corrosivity towards aluminum (the cathode current collector) [24]. Electrolyte additives such as lithium difluoro(oxalato)borate (LiDFOB), in combination with fluoroethylene carbonate (FEC), can prevent Al corrosion in the presence of LiTFSI and improve cell cycling and cell ageing [24].
A solid electrolyte interface (SEI) layer forms on the anode surface during the first charging cycles as a result of electrolyte decomposition; this layer protects the electrolyte from further decomposition. Thus, the thermal stability of the SEI is crucial for battery safety. The SEI decomposition temperature can be increased if a combination of thermally stable Li salts and high-boiling electrolytes is used. Some additional additives (e.g., organic phosphorus compounds) can also strengthen the SEI and improve its thermal stability [25,26]. Jiang et al. proposed a composite electrolyte additive consisting of perfluoro-2-methyl-3-pentanone (PFMP) and N,N-dimethylacetamide (DMAC), which provides a dual protection mechanism: DMAC improves the thermal stability of the electrolyte and PFMP serves as a self-cooling component [27].
The separator in the battery cell also plays a crucial role in battery safety.It prevents physical contact between the cathode and anode while simultaneously allowing ion transport between both electrodes [4].Damage caused by external puncturing, dendrite growth or shrinkage by overheating or overcharging leads to an internal short circuit and thus accelerates thermal runaway in the battery.Separators with a so-called "shutdown behavior" are favored because of their ability to "close pores" if the temperature increases [2].Single-or three-layer separators composed of polyolefins are the most widely used separators in batteries [9,28,29].They exhibit a high porosity, low thickness and demonstrate a low ionic resistivity.A crucial disadvantage of such separators, however, is their narrow melting temperature range, which is between 135 °C and 165 °C [4].Additionally, a shrinkage of the separator occurs at even lower temperatures.Composite separators made of ceramics such as LiAlO2, Al2O3, MgO, etc., coated onto polyolefins were developed with the aim of improving the melting point of the separator and thus the cell safety [4,9,[29][30][31].There are still many efforts being made with respect to the thermal-stability improvements of separators.On one hand, different polymers and ceramics are intensively investigated with respect to their higher thermal stability.For example, fluorinated polyimide nanofibers with improved flame-retardant properties were proposed [32].Luo et al. demonstrated that no visible changes occurred when the separator was heated up to 160 °C.This was additionally confirmed through differential scanning calorimetry and thermogravimetric analysis [32].Liu et al. developed a polyphenylene sulfide separator which displayed a high porosity, high wettability and high thermal stability (up to 280 °C) [33].A composite, polyvinyl-alcohol-based separator with a wider shutdown temperature window of 155 °C was fabricated and successfully tested [34].Cellulose-based separators are a good option for both battery safety improvement and the environment.In addition to demonstrating negligible shrinkage at an elevated temperature, they can also improve rate capability and enhance capacity retention and cycling stability [35,36].The incorporation of MoO3 and Al-doped Li6.75La3Zr1.75Ta0.25O12into poly(vinylidene fluoride hexafluoropropylene) demonstrates superior safety to flame events.Only approximately 5% of shrinkage was observed after heating the separator at 160 °C for 4 h [31].
Another possible avenue to improving the flame-retardant effect of a separator is to include additives with known flame-retardant properties as surface coatings or even introduce them into the separator structure.Liao et al. proposed an environmentally friendly separator consisting of bacterial cellulose, attapulgite rod and ammonium polyphosphate which displayed self-extinguishing characteristics after ignition with a low heat and gas contribution [36].Lee et al. significantly improved the thermal stability of a tri-layer separator by coating it with brominated poly(2,6-dimethyl-1,4-phenylene oxide) [37].Peng et al. soaked ceramic separators with phenol-formaldehyde resin, which is known for its insulating properties and is already widely used in electrical equipment [38].
Li dendrite growth is an adverse process inside the battery which can lead to separator damage and a short circuit [39]. Dendrite penetration through the separator can be suppressed by coating a polyolefin film with an aramid resin. The small pores of such a separator do not allow dendrites to grow into or through it, leading instead to granular or spherical deposits along the plane of the separator [39]. Good separation between the cathode and anode can also be achieved by coating both electrodes with an α-Al2O3 slurry. Such a coating is very thin, less sensitive toward high temperatures and more mechanically and dimensionally stable when compared to commercial separators [40]. Flame retardants can even be added to the current collector foils. Thus, Ye et al. proposed an ultralight, polyimide-coated Cu current collector with the addition of the flame retardant triphenyl phosphate, thus improving cell safety [41].
Another potential approach to reduce the incidence of fires is to use substances that can release fire-retardant gases.It is known that oxalates, formates, fumarates and malonates can release CO2 under thermal exposure.The exact decomposition depends strongly on the surrounding atmosphere and the reaction conditions.Nevertheless, it is conceivable that the released CO2 has a fire-retardant effect or can influence the gas composition limits for explosion.Lithium oxalate in particular has already been investigated mechanistically; therefore, it is known that it decomposes into the corresponding lithium carbonate at approximately 550 °C, with the CO being split off [42][43][44][45].The CO immediately reacts again in the presence of oxygen, forming CO2.
In this study, we have incorporated various flame retardants directly into the anode material to answer the open question of how to best introduce flame retardants into the cell and whether they have the desired safety-enhancing effect.This allows for the corresponding substances to act directly in the cell so that they can react appropriately (well and quickly) in the case of cell abuse or cell failure.Since this function is embodied directly in the cell, failures due to electrical issues or time delays from activating external safety measures can also be overcome.At the same time, the ion transport and cell chemistry are not or are only marginally affected.Due to the high flammability of the carbonate mixtures typically used as electrolytes, we have considered the influence of the electrolytes to address the overall safety.
In the present study, three organic salts (namely, lithium oxalate, sodium fumarate and sodium malonate) with two groups of -COO-in their structure were introduced into the Li battery anode.The intent was to achieve a CO2 release during thermal decomposition at elevated temperatures, thus slowing down fire extension or even preventing battery ignition.For that purpose, full coin cells as well as pouch-bag cells with modified anodes vs. NMC111 (NMC-lithium nickel manganese cobalt oxide) cathodes were assembled and filled with three different electrolytes: (1) An LP30-standard Li-ion battery electrolyte containing ethylene carbonate (EC) and dimethyl carbonate (DMC) in equivalent volumetric parts as solvents and 1 M LiPF6 as a conductive salt; (2) Ethylene carbonate (EC) and propylene carbonate (PC) + 1 M LiDFOB; (3) 1,2-butylene carbonate (1,2-BC) and fluoroethylene carbonate (FEC) + 1 M LiTFSI.
The performance of these battery cells (within 122 cycles) and the cell aging (within 1024 cycles) was investigated.Finally, the pouch cells were overcharged, and the released gases were studied using gas chromatography (GC).Postmortem analyses on coin cells using liquid GC-MS (MS-mass spectrometry) to investigate the decomposition reactions taking place in the presence of flame retardants are also provided.
Electrode and Cell Preparation
The anode preparation procedure consisted of three steps: (1) mixing a slurry, (2) doctor-blade processing and (3) drying the sheet. An amount of 59.4 g of graphite mixed with 1.234 g of carbon black was added step by step to 55.2 g of a 2 wt.% Na-CMC solution and stirred using a vacuum-equipped dissolver (VMA Getzmann, Reichshof, Germany; rotational speed of 500 min−1). After all the solid particles were dispersed in the Na-CMC binder, the rotational speed was increased to 2000 min−1 and the mixture was stirred with additional cooling for approximately 40 min until homogeneity was achieved. The resulting slurry consisted of approximately 52.4 wt.% of dispersed components and 47.6 wt.% of water. The anode materials with flame retardants were prepared by adding well-defined amounts of Li oxalate, Na fumarate or Na malonate to the slurry, resulting in 5, 10, 20, 35 and 50 wt.% of FR in the dried anode layer (see Table 1). Extra water was added to the mixture to keep the component-to-water ratio similar to that of the pristine slurry. Finally, 1.25 wt.% of SBR binder was added to the slurry. The obtained slurries were coated onto 10 µm thick Cu foil (width of 10 cm, Nippon Foil Mfg. Co., Tokyo, Japan) using a miniature tape-casting coater (MSK-AFA-HC100, MTI Corp.) and a doctor blade with an adjustable film height at a speed of 0.2 m·min−1. The film thickness was varied in order to prepare anodes with a specific capacity higher than 2.0 mAh·cm−2 (see Table 1). The foils were dried in a furnace at 40 °C for at least 12 h. For cell assembly, all the electrodes and separators were dried overnight in a vacuum furnace at 100 °C for the coin cells and at 130 °C for the pouch cells.
Li-Ion Battery Cells
Coin cells were prepared in an argon-filled glove box (MBraun GmbH, Garching, Germany) with oxygen and water levels below 0.5 ppm. The NMC111 cathode (Ø 16 mm, Custom Cells, Germany), anode (Ø 16 mm, with or without FR) and separator (Ø 17 mm, Whatman, QMA, U.K.), loaded with 110 µL of electrolyte, were assembled in a coin cell (CR 2032 type, PI-KEM, Tamworth, U.K.) with a digital, pressure-controlled electric crimper (MSK-160E, PI-KEM, Tamworth, U.K.) at approximately 0.8 T. For better contact and uniform current distribution, a stainless-steel spacer and spring were placed between the cathode and the coin cell case. The coin-cell tests were carried out using an in-house-developed cell cycler, Liccy (Institute of Data Processing and Electronics, KIT, Karlsruhe, Germany), and a CTS battery test system (BaSyTec GmbH, Asselfingen, Germany) (Table 2). The individual anode specifications are listed in the supporting information in Table S1 and Figure S1. All coin cells were cycled with a constant-current charge to 4.2 V and discharged with a constant current to 3.0 V (Table S3, supporting information). The pouch-bag cells were assembled in a dry room at a dew point below −70 °C. The size of the NMC111 cathode was 5 cm × 5 cm and the size of the anode (with and without FR) was 5.5 cm × 5.5 cm. The cathode loading was 2.0 mAh·cm−2. The anode loading varied from material to material but exceeded 2.1 mAh·cm−2 for each electrode to avoid lithium plating during cycling. A ceramic-coated PET membrane was used as the separator. All cells were loaded with 450 µL of electrolyte. The individual anode data are listed in Table S2 (supporting information). The pouch bags were cycled with a constant-current charge to 4.2 V and discharged with a constant current to 3.0 V (Table S4, supporting information).
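As a back-of-the-envelope illustration of why thicker coatings are needed at higher FR contents, the sketch below estimates the coating mass per area required to keep the anode loading above 2.1 mAh·cm−2; the graphite specific capacity and the binder/carbon-black fraction are assumptions for illustration, not the authors' design values.

```python
GRAPHITE_CAPACITY = 372.0   # mAh/g, theoretical value assumed for graphite
TARGET_AREAL = 2.1          # mAh/cm², minimum anode loading used in the study

def required_loading(graphite_fraction: float) -> float:
    """Coating mass per area (mg/cm²) needed to reach the target areal capacity."""
    return TARGET_AREAL / (GRAPHITE_CAPACITY * graphite_fraction) * 1000.0

for fr_wt in (0, 5, 10, 20, 35, 50):
    # ~0.96 accounts for binder and carbon black (assumed); only graphite stores lithium
    graphite_frac = (100 - fr_wt) / 100 * 0.96
    print(f"{fr_wt:2d} wt.% FR -> ≈ {required_loading(graphite_frac):.1f} mg/cm² coating")
```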
Gas Chromatography Coupled to Mass Spectrometry (GC-MS, Gas)
Gas analyses were performed using a Clarus 690 GC (Perkin Elmer, Waltham, Massachusetts, U.S.A.), coupled with an ARNEL 4019 system (Perkin Elmer, Waltham, Massachusetts, U.S.A.) and a mass spectrometer (MS, SQ8S, Perkin Elmer, U.S.A.).This setup allowed for the detection and quantification of the gases CO2, CO, CH4, C2H2, C2H4, C2H6, O2, He, Kr and Ar from a concentration of approximately 150 ppm.A calibration gas mixture with the components included (Basigas) was used for quantification.The gas samples were injected at room temperature and switched to the columns at a normal pressure.Evaluation and control took place using TotalChrom 6.3.4 software (Perkin Elmer, Waltham, Massachusetts, U.S.A.).To reference the gas intensities, krypton (Kr) was used as an internal standard after the electrochemical tests.
Gas Chromatography Coupled to Mass Spectrometry (GC-MS, Liquid)
Gas chromatographic measurements of the liquid electrolytes were performed using a Clarus 690 (Perkin Elmer, Waltham, Massachusetts, U.S.A.) with a coupled MS (SQ8T, Perkin Elmer).The method is described in detail in [46].Briefly, the samples were introduced into the device via an autosampler (CAP injector, T = 250 °C, 0.5 µ L) and separated using a 5MS column (ELITE-5MS, PerkinElmer, 30 m length, 0.25 inner diameter, 0.5 µ m film thickness).A temperature program was used, and the pressure was adjusted accordingly (40 °C, 1.5 min; 20 K•min −1 heat up to 320 °C; initial pressure of 175 kPa for 2 min, then pressure increased at 7,8 kPa/min to 300 kPa).After separation, a split into the MS (T (ion source) = 200 °C, T (transfer line) = 200 °C, filament voltage = 70 kV) and into an additional FID (T = 280 °C) took place.The analysis and hardware control were performed using the software TurboMass 6.1.2(Perkin Elmer, Waltham, Massachusetts, U.S.A.).
Light Microscopy
The electrode surface was studied using a Keyence digital microscope VHX-7000 and an objective VHX E500S with 2000× magnification.
Mandrel Bend Test
The influence of the addition of flame retardants on the adhesive and cohesive properties of the anode electrodes were studied using the mandrel bend test.For this purpose, the anode foils were prepared as described and tested using a Mandrel Bending Tester EQ-MBT-12-LD (PI-KEM, Tamworth, U.K.) equipped with 12 cylinders with diameters ranging from 2 mm to 32 mm.Beginning with the largest one, the coated foils were bent over the bending cylinder for 2-3 s at 180°.They were precisely examined with a light microscope AX70 (Olympus, Hamburg, Germany) after each test.The results were compared with a commercially available anode purchased from Custom Cells.
Resistance
Anode slurries with and without FRs were coated onto a glass plate (20 cm × 20 cm × 0.2 cm) using a miniature tape-casting coater (MSK-AFA-HC100, PI-KEM, Tamworth, U.K.) and a doctor blade with an adjustable film height at a speed of 0.2 m/min. The film thickness was adjusted to match the anode foils with the corresponding FR concentrations in the layer (Table 1). The resistance was measured between 1 MHz and 1 Hz using the electrochemical workstations Zennium E and X and THALES software (Zahner-Elektrik GmbH, Kronach, Germany). The glass plate with the anode coating was placed inside an in-house-built, 3D-printed sample holder (see Figure 1a). Two copper plates (20 cm × 2 cm × 0.2 cm) were positioned on the anode surface and connected to the workstation. Several measurements were performed on the same layer by changing the distance between the Cu plates from 1 cm to 7 cm in 1 cm increments (Figure 1b).
Rheology
The rheological properties of the pristine slurry and the slurries with added FRs were studied using a Gemini HR Nano (Netzsch Gerätebau GmbH, Selb, Germany) rotational rheometer with a 40 mm diameter, cone-plate measuring system and a 4° cone angle.All samples were measured at 25 °C with shear rates from 1 s −1 to 200 s −1 .
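Flow curves measured in this way are often summarized with the Ostwald-de Waele power law, eta = K * gamma_dot**(n - 1), where n < 1 indicates shear thinning. The sketch below is a minimal, hedged illustration with invented data points, not the measured curves reported later in Figure 4.

```python
import numpy as np

shear_rate = np.array([1, 5, 10, 50, 100, 200], dtype=float)      # 1/s
viscosity = np.array([8.0, 3.2, 2.1, 0.9, 0.6, 0.45])             # Pa·s, made-up values

# Linear fit in log-log space gives the power-law parameters.
slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
n = slope + 1.0                 # flow-behaviour index
K = float(np.exp(intercept))    # consistency index, Pa·s^n

print(f"n ≈ {n:.2f} (shear thinning if < 1), K ≈ {K:.1f} Pa·s^n")
```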
Scanning Electron Microscope (SEM)
SEM images were taken with a Zeiss Supra 55 FE-SEM (Zeiss, Oberkochen, Germany). All samples containing FRs were additionally sputter-coated with gold/palladium (Au/Pd = 80/20) prior to SEM inspection to reduce sample charging.
Solubility
The solubility limit of each FR was determined by repeatedly dissolving defined amounts of FR (10 mg) in water (1 mL) under stirring. When no more FR dissolved, the cumulative amount previously dissolved was taken as the solubility limit. The limit was verified by directly weighing the amount that was just barely soluble.
Thermogravimetric Analysis (TGA-IR)
All the chosen flame retardants were examined with TGA-IR using a thermogravimetric analyzer (Netzsch STA 449 F3) coupled with an IR spectrometer.Amounts of 23.4 mg of lithium oxalate, 22.8 mg of sodium fumarate and 24.8 mg of sodium malonate were measured in an Al2O3 crucible.The samples were heated from 30 °C to 1000 °C with a heating rate of 10 K•min −1 under air gas flow.IR spectra were recorded between 400 cm −1 and 4,500 cm −1 .The decomposition of the FRs was additionally examined by Kieran Evans from PerkinElmer (U.K.) using a thermogravimetric analyzer (PerkinElmer TGA 8,000) coupled with an IR spectrometer (PerkinElmer Spectrum 3, transfer line: TL9000e).All valves, lines and cell temperatures were set to 280 °C and the gas flow was set to 85 mL•min −1 .The samples were heated from 30 °C to 1000 °C at a 10 K•min −1 heating rate.There was an air purge of 40 mL•min −1 with a 60 mL•min −1 nitrogen balance purge.IR spectra were acquired from 4000 cm −1 to 600 cm −1 with two scans per spectrum and a resolution of 8 cm −1 .
Overcharging Abuse Test
The pouch-bag cells were connected to a programmable, DC power supply (Korad, KD6005P, Welectron, Germany) and placed inside a fume hood.Initially, all the studied cells were charged up to 4.2 V with a 0.02 A current (corresponding to 0.5 C).Following this, the current was increased up to 0.2 A (5 C).The cell potential was monitored with the power supply, and the cell temperature was measured with an IR thermometer (Mestek, IR03A, Shenzhen Mestek Tools Co., LTD, Longhua, Shenzhen, China).The current was switched off as soon as the cell reached 50 V.Finally, the cell was removed from the power supply and prepared for GC-MS gas investigation.
Electrolyte Formulations and Electrolyte Characteristics
In the present study, three electrolyte mixtures, namely 0.77 mol·kg−1 of LiPF6 in EC/DMC (LP30), 0.75 mol·kg−1 of LiDFOB dissolved in EC/PC 1:1 (v/v) and 0.75 mol·kg−1 of LiTFSI dissolved in 1,2-BC/FEC 1:1 (v/v), were used, and their impact on the flammability of Li-ion battery cells was investigated in detail. In Table 2, the electrolyte formulations and the electrochemical and physicochemical properties of the studied electrolytes are summarized. The data indicate that the two newly prepared electrolytes, with comparable Li+ concentration ranges, achieved similar oxidative stabilities but had slightly lower ionic conductivities and higher viscosity values, respectively, when compared to the LP30 electrolyte. This suggests a slightly lower ion mobility. However, the flash points of the new electrolytes were significantly higher than that of LP30. Additionally, the flash point of the EC/PC mixture (160 °C) was higher than that of the 1,2-BC/FEC electrolyte formulation (149 °C). Temperature-dependent conductivity measurements revealed a typical behavior that is well known from organic liquid electrolytes (Figure S2, supporting information). In addition, based on its distinctly better conductivity, the newly investigated EC/PC electrolyte suggested an improved cell performance when compared to the BC/FEC electrolyte.
Flame Retardants and CO2 Release
As indicated in the introduction, methods are still being explored to improve the intrinsic safety behavior of Li-ion cells during thermal runaway. In this study, therefore, a new approach based on the release of internal gas (CO2) was tested to improve the safety characteristics of the corresponding Li-ion cells. Accordingly, three selected flame retardants were tested in detail. Their decomposition pathways are depicted in Figure 2. To evaluate the potential of the FRs in terms of CO2 release, the decomposition of lithium oxalate, sodium malonate and sodium fumarate was investigated by TGA (Figure 3). CO2 is expected to be released when these salts are heated to elevated temperatures. Lithium oxalate is already known as a cathode additive for Li-ion batteries [47,48] and has been shown to act as a "sacrificial salt" that donates Li ions when the cell is charged at a high potential (approximately 4.7 V), simultaneously releasing CO2 [47,48]. Moreover, a higher charge capacity, cycling stability and coulombic efficiency were observed when lithium oxalate was introduced into the cathode material [47,48]. The thermal decomposition of lithium oxalate has also been studied previously [42][43][44][45]. As described in the literature, the gas products formed depend strongly on the atmosphere in which the TGA measurement is performed [43,44,49]. For example, the gas products and reaction enthalpies in TGA measurements differ depending on the purge gas employed, e.g., in studies of pure lithium oxalate. Air or oxygen as a purge gas leads to the direct conversion of CO into CO2, so that CO2 is predominantly detected in the subsequent IR analysis; the amount of CO actually released can then only be inferred from the mass loss. In a practical case, oxygen will be present in a battery due to oxygen release from the cathode material or through the venting of a cell and the subsequent contact with ambient air. Therefore, dry air was used as the carrier gas in this study. The precise decomposition mechanism of each individual compound was not investigated in the present study; instead, it was taken from literature findings [42,49,50] or postulated on the basis of the mass loss and the decomposition gases detected by IR spectroscopy (Figure 2).
The behavior of lithium oxalate described in the literature was confirmed in the present study (Figure 3a). Here, the Li oxalate decomposition occurred in two steps: (1) at 480 °C-530 °C and (2) between 700 °C and 1000 °C. The temperature range of the second mass loss depended on the heating rate and, in the present case, was probably not yet complete at 1000 °C. The experiment confirmed the theoretical mass loss of the first stage (480 °C-530 °C) for a CO release of 27.4% (27.3% found). Time-resolved IR spectra provided absorption bands characteristic of CO2 (2330 cm−1 and 2360 cm−1 correspond to stretching, 690 cm−1 to bending, and 3600 cm−1 and 3720 cm−1 to a combination of bending and stretching vibrations; see also Figure S3, supporting information) [51]. Only a small amount of CO was detected (weak bands at 2100 cm−1 and 2180 cm−1), because the released CO reacts with ambient atmospheric oxygen to form CO2. The residue of thermal decomposition was Li2O. Thus, the thermal decomposition of Li oxalate in an oxygen-containing atmosphere releases two CO2 molecules in total (the second, however, only beginning at approximately 700 °C). A pre-decomposition was identified in the case of Na fumarate but, owing to the much slower kinetics, this pre-decomposition between 400 °C and 450 °C is not marked. The second decomposition of lithium oxalate is likewise not marked because of its high temperature (> 700 °C). The uncertainty of the mass loss is on the order of 0.1-0.3 percentage points.
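The quoted first-step mass loss follows directly from the stoichiometry Li2C2O4 -> Li2CO3 + CO; the short sketch below is a cross-check of that number and is not part of the original analysis.

```python
# Theoretical first-step mass loss of lithium oxalate: the expelled CO as a fraction
# of the starting salt mass, compared with the ~27.3-27.4 % seen in the TGA trace.

ATOMIC_MASS = {"Li": 6.94, "C": 12.011, "O": 15.999, "H": 1.008, "Na": 22.990}

def molar_mass(formula: dict) -> float:
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

li_oxalate = molar_mass({"Li": 2, "C": 2, "O": 4})   # Li2C2O4 ≈ 101.9 g/mol
co = molar_mass({"C": 1, "O": 1})                    # CO ≈ 28.0 g/mol

print(f"theoretical first-step mass loss: {co / li_oxalate:.1%}")  # ≈ 27.5 %
```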
In the case of sodium fumarate (Figure 3b), a slight initial mass loss of 4.1% between 400 °C and 515 °C was observed prior to the main decomposition, which occurred around 530 °C. In contrast to the literature, the main mass loss (27.4%) occurred at a higher temperature range, between 516 °C and 550 °C (rather than 440 °C-490 °C [52]). However, this mass loss is in good agreement with the release of CO2 (a theoretical loss of 27.5%), which was also observed by Ionashiro et al. [52]. The IR spectra show strong bands at 2330 cm−1 and 2360 cm−1, which are characteristic of CO2, and some weak bands in the range between 1250 cm−1 and 2000 cm−1 (Figure S3, supporting information). Apparently, the main decomposition product was CO2, but some organic residues were also released as the result of molecular rearrangement. Ionashiro et al. assigned these organic bands to methane and some traces of CO [52]. The total mass loss up to 550 °C in the present case was 31.6%, indicating the formation of Na2CO3 (33.8%, determined theoretically). The mass decrease above 550 °C indicates the continuous slow formation of sodium oxide under CO2 evolution, and CO2 could indeed be detected in the corresponding temperature range (550 °C-1000 °C).
The TGA curve of sodium malonate (Figure 3c) shows a strong initial mass loss between 330 °C and 360 °C (22.5%). This range was also described by Caires et al. for sodium malonate, with the formation of Na2CO3 (in a TGA experiment under dry air) [53]. However, the loss is too large for a 1:1 molar CO release (18.9%, determined theoretically) but too small for a 1:1 molar CO2 release (29.7%, determined theoretically). The IR spectra indicate that large amounts of CO2, as well as some organic fragmentation products, were released, but this can also be explained by the conversion of CO to CO2 under the dry-air atmosphere, as previously described for lithium oxalate. It is assumed that the decomposition was not equimolar, but that a gas mixture of CO and CO2 (approximately 70:30) was released, which was then detected as CO2 in the IR spectrometer. Analogously to sodium fumarate, sodium malonate also continuously released CO2 up to 1000 °C, starting at 360 °C, as observed in the IR spectra.
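The "approximately 70:30 CO/CO2" estimate can be reproduced from the observed 22.5% mass loss by a simple mixing calculation (one mole of gas per mole of salt assumed); the sketch below makes that reasoning explicit.

```python
# Find the CO2 fraction x for which x*M(CO2) + (1-x)*M(CO) matches the gas mass
# released per mole of sodium malonate (Na2C3H2O4), given the 22.5 % mass loss.

M_SALT = 22.990 * 2 + 12.011 * 3 + 1.008 * 2 + 15.999 * 4   # ≈ 148.0 g/mol
M_CO, M_CO2 = 28.010, 44.009
observed_loss = 0.225

lost_mass = observed_loss * M_SALT                    # g of gas per mole of salt
x_co2 = (lost_mass - M_CO) / (M_CO2 - M_CO)           # linear mixing between CO and CO2
print(f"implied gas mixture: {1 - x_co2:.0%} CO / {x_co2:.0%} CO2")  # ≈ 67 % CO / 33 % CO2
```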
Electrode Slurry Preparation and Anode Characterization
Anode materials with FRs were prepared in three steps: (1) a pristine slurry consisting of graphite, carbon black and 2 wt.% Na-CMC was mixed using a dissolver; (2) defined amounts of FRs were added to the slurry and stirred in manually; and (3) an SBR binder was added and likewise stirred in manually. All tests on anode materials and anode coatings were performed immediately after the anode material was prepared to avoid any degradation processes. The solubility of the FRs in water was also estimated, with the aim of better understanding the form in which the FRs are present inside the slurry. Table 1 provides an overview of the component contents in the slurry as well as the solubilities of the FRs. Lithium oxalate was the least soluble and sodium malonate the most soluble of the flame retardants studied here. The solubilities of the FRs in water at 23 °C were obtained as follows: lithium oxalate, 55 ± 2 g·L−1; sodium fumarate, 220 ± 5 g·L−1; and sodium malonate, 1155 ± 10 g·L−1.
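Whether a given FR dose stays dissolved or remains partly as solid particles in the slurry follows from these solubility limits and the water content of the batch; the sketch below illustrates that check with assumed masses (the actual FR and water masses per batch are given in Table 1 and are not reproduced here).

```python
# Rough, assumption-laden check: can a given FR mass dissolve completely in the
# slurry water? Solubilities are the measured values quoted above; the example
# FR and water masses are placeholders, not the batch compositions from Table 1.

SOLUBILITY_G_PER_L = {"Li oxalate": 55, "Na fumarate": 220, "Na malonate": 1155}

def fully_dissolved(fr_mass_g: float, water_g: float, fr_name: str) -> bool:
    water_l = water_g / 1000.0          # density of water ≈ 1 g/ml
    return fr_mass_g <= SOLUBILITY_G_PER_L[fr_name] * water_l

# e.g. an assumed 3 g of FR (roughly a low-wt.% case) in ≈ 60 g of water:
for fr in SOLUBILITY_G_PER_L:
    print(fr, "fully dissolved" if fully_dissolved(3.0, 60.0, fr) else "partly undissolved")
```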
The slurry for the anode materials studied here is a suspension of solid components, such as graphite, carbon black and partly the FR, in liquid components, namely Na-CMC and an SBR binder dissolved in water; these binders improve the adhesion and cohesion of the electrode layer, and part of the FR is dissolved as well. The rheological properties of the slurry are crucial for electrode coating. On one hand, viscosity defines the stability and lifetime of the slurry (e.g., sedimentation). On the other hand, viscosity affects the coating procedure. A viscosity that is too high or too low can lead to non-uniform coating, runniness and pooling, which in turn lead to inhomogeneous active-material loading on the electrode surface. This causes local differences in dis/charge currents and the appearance of hot spots during battery cycling [54][55][56][57][58][59][60].
To evaluate the influence of the FR additives on the slurries, their viscosity was studied. Figure 4 displays the viscosity of the anode slurry as a function of shear rate. A viscosity drop was observed for all slurries when the shear rate increased. This is the so-called shear-thinning effect, which is typical for suspensions containing polymer macromolecules [59,61,62]. Furthermore, a clear dependence of the viscosity on the amount of added water, with increasing FR content, was detected. Extra water was added to the slurry to keep the solid-particles-to-water ratio constant; note, however, the partial dissolution of the FRs in this water. As a result, the anode material became less viscous. Another reason for the decreasing viscosity with increasing amounts of water and FR was the decline in the Na-CMC content in the final anode material (see Figure S1, supporting information) [56,63]. It should also be mentioned that deviations in viscosity between the three FRs were less pronounced at concentrations of up to 20 wt.%, especially at higher shear rates. Beginning at concentrations of 35 wt.%, the anode material containing lithium oxalate appeared to be more viscous, and the slurry with sodium malonate less viscous; this trend persisted for the samples with 50 wt.%. A possible explanation for this observation is the difference in solubility of these FRs in water (Table 1): lithium oxalate, with a solubility of 55 g·L−1, is the least soluble FR, sodium fumarate has a solubility of 220 g·L−1, and sodium malonate, with a solubility of 1155 g·L−1, is the most soluble FR.
In order to realize the large variety of different lithium-ion cell designs, electrodes must be folded and wound many times during the manufacturing procedure. Therefore, highly flexible electrodes with stable material layers are needed for the large-scale production of batteries. Additionally, the heating of the anode foils during electrode preparation (drying), as well as charge-discharge cycling, reduces the adhesion of the graphite. This leads to a higher fragility of the active material and to its delamination from the copper current collector, limiting its applicability in battery assembly. Delamination of the electrode material strongly and negatively influences the electrochemical performance, e.g., by an increase in overall resistance; it may even cause an internal short circuit [50,[64][65][66][67][68]]. The adhesion/cohesion properties of the electrode layer and the layer thickness determine its mechanical stability and define the applicability of the anode for battery assembly. The mandrel bend test serves as a fast and easy method for evaluating adhesion/cohesion properties. For this purpose, several anodes with predefined thickness were prepared in the same way as for the coin-cell or pouch-bag tests and dried as described (Table 1). Each foil was placed on the cylinder surface, beginning with the largest (32 mm diameter) and ending with the smallest (2 mm diameter), and then bent. After each bending procedure, the foil was carefully examined using a light microscope in order to detect any splits or signs of detachment. The test results are listed in Table 3. The commercial graphite foil from Custom Cells did not reveal any sign of splitting or detachment down to 2 mm, whereas the self-made graphite without FR showed small splits at 2 mm but no sign of particle detachment.
The presence of small amounts of FR (up to 10 wt.%) did not deteriorate the adhesion/cohesion properties of the anode material. Only small splits in the coating were observed for Li oxalate at a concentration of 10 wt.%. The adhesion/cohesion properties were comparable with pristine graphite (both self-made (SM) and purchased from Custom Cells). Poorer cohesion was observed for moderate concentrations of FRs (20 wt.% and 35 wt.%) in the layer, especially for samples with Na malonate: the first small cracks on the electrode sheet were detected after bending it over a cylinder with a diameter of 20 mm. This effect became even more obvious when the concentration reached 50 wt.%. Signs of cracks could already be observed when bending the sheet over the 32 mm cylinder for both Na fumarate and Na malonate, and the poor adhesion led to detachment from the current collector when a bending diameter of 10 mm was reached. This behavior can be explained by the lower content of the Na-CMC binder in the layer relative to the other solid components, which decreased with increasing FR concentration, as well as by the changes in the binder structure described above. Surprisingly, the anode foils with added lithium oxalate were less affected; these layers demonstrated good adhesion properties even at 50 wt.%. Another reason for poor adhesion and cohesion is the layer thickness. As mentioned previously, thicker films must be coated for slurries with higher FR concentrations to fulfil the specific capacity criterion (a minimum of 2.0 mAh·cm−2). Kishimoto et al. [69] showed that crack initiation depends on the maximum strain from bending, which is higher in thicker films.
Anodes prepared for the cell performance tests were also studied using light microscopy and SEM to gain an overview of the FR distribution inside the anode layer. Figure 5 shows the anode with 50 wt.% of sodium fumarate and the anode without added FR. The image of the pristine anode shows predominantly graphite particles as the main component of the anode material; carbon black, Na-CMC and the SBR binder cannot be identified because of resolution limitations. In contrast, some blueish inclusions on the graphite particles are observed when the FR was added (highlighted in the figure with red circles). These inclusions can be ascribed to crystals of the organic FR. Thus, the FRs do not cover the entire anode surface with an impenetrable film, even at high FR concentrations, and the graphite therefore remains accessible for Li-ion de/intercalation during dis/charging. SEM images provide a deeper insight into the anode layer structure when an FR is added (Figure 6).
Graphite particles covered with active carbon can be recognized. The organic Na-CMC and SBR binder partially cover the graphite surface. The small amount of FR (5 wt.%) added to the slurry dissolved completely in the binder during anode material preparation (Table 1). Therefore, the added FRs can be expected to affect the binder structure, and Figure 6b demonstrates these changes. Crystals of the organic salts can be distinguished from graphite, active carbon, and the pristine binder by their appearance: lithium oxalate builds wire-like crystals between particles and single crystals on the graphite-particle surface; sodium fumarate appears as sharp crystals in the binder; and sodium malonate crystals appear as fine grains connected to the pore structure. In turn, this finding explains and complements the results of the mandrel bend test (Table 3): binders mixed with FRs are not able to maintain the good cohesion and adhesion properties of the anode, especially at high FR contents.

The resistivity of a lithium-ion battery depends on the resistance of its components and the resistance due to phase changes. For a better electrochemical performance of the battery, electrodes with a low internal resistance are required. To gain a better overview of the impact of FRs on the anode resistance, the specific resistance of a pristine anode coating was studied. For that purpose, four layers with different thicknesses (namely, 80 µm, 240 µm, 750 µm and 1040 µm) were coated on a glass plate, dried, and studied using electrochemical impedance spectroscopy (EIS). Figure 7 shows the measured electronic resistance of the FR-free electrodes as a function of the sample length for all four layers. The measurements demonstrate a clear linear tendency. The resistance was high for a thinner layer and decreased with increasing thickness; the higher slurry volume allowed electrons to move more freely. At the same time, the measured resistance grew with an increasing path of coating between the Cu plates. Apparently, the slurry layer can be considered a typical conductor: its resistance is directly proportional to its length (or distance, L) and inversely proportional to its cross-sectional area, A (Equation (1), R = ρL/A), in which R is the resistance [Ohm] and ρ is the specific electrical resistivity [Ohm·m]. An offset (Figure 7) can be caused by Cu plates that are not perfectly in place. The specific resistivity of the layer can be calculated using Equation (2), ρ = R·d·h/L, in which d is the width of the slurry stripe on the glass plate in meters and h is the layer height (thickness) in meters, so that A = d·h. The calculated specific electrical resistivities for all layer thicknesses are summarized in Table 4.
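As a concrete illustration of Equations (1) and (2), the sketch below fits a straight line to resistance-versus-length data and converts the slope into a specific resistivity, separating out the contact offset as in Figure 7. The numerical values, the stripe width d, and the fitting approach are illustrative assumptions, not the measured data of this study.

```python
import numpy as np

# Hypothetical resistance readings R [Ohm] of one coating at several
# electrode separations L [m]; the numbers are illustrative only.
L = np.array([0.01, 0.02, 0.03, 0.04])  # distance between the Cu plates [m]
R = np.array([3.1, 5.2, 7.3, 9.3])      # measured resistance [Ohm]

d = 0.02     # width of the slurry stripe on the glass plate [m] (assumed)
h = 240e-6   # layer thickness [m]

# Equation (1) with A = d*h plus a contact offset: R = rho*L/(d*h) + R_offset.
# Fitting a line, as in Figure 7, separates the slope from the offset.
slope, offset = np.polyfit(L, R, 1)

rho = slope * d * h  # Equation (2): specific electrical resistivity [Ohm*m]
print(f"contact offset: {offset:.2f} Ohm")
print(f"specific resistivity: {rho:.2e} Ohm*m")
```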
The anode slurry consisted of a large amount of graphite and a few percent of conductive carbon black, SBR binder and Na-CMC. Consequently, the resistivity of the layer depended predominantly on the specific electrical resistivity of graphite. The electrical conductivity or resistance of a material depends on the charge mobility in its structure. From this point of view, graphite can be considered semi-metallic: electrons can move easily along a basal plane, as in a conductor, but not between layers. As a result, the resistance perpendicular to the graphene layers is very high, and in this direction graphite acts as an electrical insulator. Hence, reported resistivity values for graphite vary between 0.0025 and 0.005 × 10−3 Ohm·m (when the current flows parallel to the graphite layers) and 3 × 10−3 Ohm·m (when the current flows perpendicular to the graphite layers) [70]. The calculated resistance is in good agreement with these data.

Table 4. Calculated specific resistance of the graphite layer correlated with its thickness. Independent of layer thickness, all values are of the same order of magnitude (within the experimental error).
Layer thickness [µm] | Specific electric resistivity [10−3 Ohm·m]

The same experiment was performed for layers containing FRs. FRs are organic salts; thus, their electric conductivity is considerably smaller than that of graphite and carbon black. Consequently, the resistivity of a slurry/FR mixture is expected to be significantly higher and to increase with increasing FR concentration. Table 5 and Figure S4 in the supporting information summarize this behavior. The resistance for all three FRs at different concentrations was measured in the same way as for the FR-free coating. The measured resistance rose with increasing concentration for all the chosen FRs. Li oxalate, Na fumarate and Na malonate demonstrated similar resistance values for concentrations up to 20 wt.%. This behavior remained similar for Li oxalate and Na fumarate at even higher concentrations, while the resistance observed for Na malonate appeared to be ca. 5 times larger at 50 wt.%. The calculated specific resistance supports this observation. Hence, a higher amount of FR in the electrode is desirable from the safety point of view but appears to be challenging for the electrochemical performance of the battery.
Table 5. Specific resistance of the graphite/FR layer calculated for different concentrations of FR. Errors (standard deviation) are on the order of ±10%. The thickness of the layer depends on the FR concentration.
Assembly and Testing of Lithium-Ion Cells
The influence of flame retardants on electrochemical cell performance was studied in coin cells with an NMC111 cathode. The cells were filled with an electrolyte (LP30, LiDFOB + EC/PC or LiTFSI + 1,2-BC/FEC), and glass fiber (QMA) was used as the separator. All coin-cell parts, as well as the electrodes and separator sheets, were dried carefully in a vacuum oven to remove traces of water. As shown above, high concentrations of FRs in the anode led to poor adhesion and cohesion and, as a result, to splits inside the anode layer and its subsequent delamination from the current collector (Table 3). Such damage shortens the battery cycle life, so the use of "damaged" anodes must be avoided; the anodes with FRs were therefore processed and handled very carefully.
Coin-cell tests were performed on anodes with FRs using the three different electrolytes. Figure 8 shows the discharge capacities for cells with 5 wt.%, 20 wt.% and 50 wt.% of FR discharged at 1 C, 5 C and 10 C. From an electrochemical point of view, LP30 and LiDFOB + EC/PC appear to be better electrolytes than LiTFSI + 1,2-BC/FEC: cells loaded with these electrolytes delivered, on average, more capacity than cells loaded with LiTFSI + 1,2-BC/FEC at all discharge rates. The type of FR has less impact on the capacity. Higher discharge rates led to a capacity decrease for all electrolytes and all flame retardants because of the higher resistance. The magnitude of this decrease is individual and characteristic of both the flame retardant and the electrolyte. Thus, the discharge capacities of the cells with LP30 were, on average, higher than those for the LiDFOB + EC/PC and LiTFSI + 1,2-BC/FEC mixtures (125-135 mAh·g−1 at 1 C, 30-80 mAh·g−1 at 5 C, and 10-30 mAh·g−1 at 10 C). This can be explained by the lower ion conductivity/higher viscosity of the two latter electrolyte mixtures, which hampers lithium mobility at high current rates (Table 2). Individual deviations in discharge capacity can also be explained by the fact that the self-made anodes were not calendered. This especially affected the electrodes with higher FR contents and thus higher layer thicknesses.
Cells loaded with the LiDFOB + EC/PC electrolyte mixture delivered discharge capacities at 1 C comparable to the cells containing LP30. Increased C-rates led to a significant capacity drop (from 120-130 mAh·g−1 at 1 C to 20-40 mAh·g−1 at 5 C and to 5-20 mAh·g−1 at 10 C), which was caused by the higher resistance, as described previously. Interestingly, increasing the FR content did not lead to further discharge-capacity losses, as was observed for LP30; the capacity values for 5% and 20% Li oxalate and Na fumarate were comparable. Cells using Na malonate loaded with LiDFOB + EC/PC behaved similarly to those with LP30: an increased amount of FR resulted in a decrease of cell capacity if the electrode dimensions (thickness, etc.) were maintained. Lithium-ion cells loaded with LiTFSI + 1,2-BC/FEC had the lowest discharge capacities (100-120 mAh·g−1 at 1 C, 15-20 mAh·g−1 at 5 C and 1-10 mAh·g−1 at 10 C) because of their low ion conductivity (Figure S5, supporting information). As an electrolyte, LiTFSI + 1,2-BC/FEC caused performance fading at all C-rates and for all FRs. This effect was less pronounced for Li oxalate and Na fumarate and more obvious for Na malonate; evidently, Na malonate has more impact on SEI formation and thus on the capacity decrease. Although it is known from the literature that LiTFSI is highly corrosive toward Al, no signs of this were observed in the coin-cell tests up to potentials of 4.2 V vs. Li/Li+ [24,71]. Thus, the decreasing specific capacity in the presence of LiTFSI was predominantly caused by its low conductivity. The Coulomb efficiencies of the FR-containing cells show that the cells did not change significantly in the presence of the FR (Figure S6, supporting information).

Pouch-bag cells were used for the aging tests. Anodes for pouch-bag cells with 5 wt.% of FR were prepared in the same way as the anodes for coin cells. All cell components were dried overnight at 130 °C in the vacuum oven and assembled in a dry room. The pouch-bag cell specifications are listed in Table S2 (supporting information). The pouch-bag cycling data delivered results similar to those for the coin cells (Figure 9). The initial capacities for the three studied electrolytes are almost identical and correspond to the capacities measured for the coin cells (130-140 mAh·g−1). Capacity deviations between the coin and pouch-bag cells can be explained by differences in the stack pressure between the two cell formats [72]. The main differences between the electrolytes and FRs were revealed after approximately 1000 cycles (see Table S4, supporting information). LP30 demonstrated the most stable behavior when compared to the LiDFOB + EC/PC and LiTFSI + 1,2-BC/FEC mixtures: at least 80% (for Na fumarate and pure graphite) and 87% (for Li oxalate and Na malonate) of the initial capacity were retained in the pouch-bag cells after 1024 cycles. The LiDFOB + EC/PC electrolyte caused a significant capacity loss for all the examined anode materials. This effect was more obvious for cells with Na fumarate and Na malonate: after 100 cycles, the capacity had already decreased to 60% of its initial value and remained constant up to 1024 cycles. Cells with graphite and Li oxalate demonstrated a more stable cycling performance and therefore less capacity fade (approximately 72% of the initial capacity after 1024 cycles). The LiTFSI + 1,2-BC/FEC mixture is highly corrosive to the Al current collector, as mentioned previously [24,71]. The pouch-bag cell tests supported this finding.
The discharge capacity dropped dramatically after the 100th cycle for Li oxalate and after the 160th cycle for graphite, reaching almost 0 mAh·g−1 after the 200th and 750th cycles, respectively.
A rate capability test was also performed for the pouch-bag cells by charging them at 0.5 C and discharging at different C-rates from 0.5 C to 10 C (0.5 C, 1 C, 2.5 C, 5 C, 7.5 C and 10 C) (Figure 10). This test was performed directly after the formation cycles at the beginning of cell cycling. The cells loaded with LP30 and LiDFOB + EC/PC showed similar capacities at 0.5 C, which decreased only slightly at 1 C. For detailed information, the discharge curves for the FR-containing cells are shown in Figure S7, supporting information. Increased discharge rates (2.5 C and higher) caused a capacity fade for both electrolytes but, in the case of the LiDFOB mixture, an increased current led to a dramatic capacity drop (55 mAh·g−1 for LiDFOB vs. 110 mAh·g−1 for LP30 at 2.5 C, and 10 mAh·g−1 for LiDFOB vs. 4 mAh·g−1 for LP30 at 10 C). The hindrance of Li-ion diffusion at high current rates is related to the lower ion conductivity of LiDFOB + EC/PC and therefore to a higher resistance. Similar observations can be made for the cells containing FRs: the discharge capacities of the cells loaded with the LiDFOB + EC/PC mixture were more affected by the increasing current than the cells with LP30.
Differential capacity curves (dQ/dU) show the redox response of the NMC111- and graphite-based cells with and without FR during charging/discharging, depending on the studied electrolyte. Figure 11 shows the results for the pure graphite anode; the corresponding plots for the FR-containing anode sheets are shown in Figures S8-S10 (supporting information). Charging at 1 C demonstrated one intense peak at 3.7 V and two less intense peaks at 3.5 V and 3.8 V. During discharge, only two peaks were observed: a weak peak at 4.0 V and an intense peak at 3.6 V. The intense peaks at 3.7 V (charging) and 3.57 V (discharging) correspond to the Ni2+/Ni4+ couple [73]. The peak at 3.6 V originates from Li intercalation into the graphite anode [74]. When discharging at a rate of 5 C, only two peaks could be found in the anodic scan, at 3.7 V and 3.8 V; the peak observed at 3.57 V at 1 C had almost merged with the intense peak at 3.7 V. Only two peaks were observed in the cathodic scan direction, shifted toward lower potentials compared with 1 C (3.4 V and 3.9 V at 5 C vs. 3.57 V and 4.0 V at 1 C). The increased separation between the redox peaks at 5 C was caused by a strong polarization effect. Pouch-bag cells loaded with the LiDFOB + EC/PC mixture demonstrated three strong, overlapping peaks between 3.5 V and 3.8 V, corresponding to the Ni2+/Ni4+ redox couple and Li-ion intercalation, and an intense peak at 4.15 V, which was probably caused by a redox reaction in the electrolyte. At a higher discharge rate, all discharge peaks were shifted to lower potentials; they merged even more strongly and lost intensity. The added FRs influenced this behavior only slightly. Evidently, the electrolyte has more impact on the redox processes during charging/discharging than the FRs.
Post-Mortem Analysis of the Li-Ion Cells
After the coin cells had completed 122 cycles, they were disassembled inside an argon-filled glove box. Electrolytes were extracted from the separator, diluted with dichloromethane, and studied with GC-MS (m/z > 15, Figure 12). Chromatograms were acquired for coin cells with pure graphite anodes loaded with LP30, LiDFOB + EC/PC and LiTFSI + 1,2-BC/FEC; anodes with FRs in different concentrations were examined as well. Since the aim was to detect decomposition products, the main electrolyte components were necessarily present at concentrations too high for the column, which caused tailing and distorted peak shapes. LP30 is a standard electrolyte for Li-ion batteries, and its decomposition paths have already been studied and reported in the literature [75]. In the presence of water or other impurities, the conducting salt LiPF6 readily decomposes to form LiF, POF3, HF and POF2H and/or POF2(OH) [75]. These decomposition products can be found in the chromatogram in the case of LP30: POF3 can be detected at 1.81 min and POF2(OH) at 2.2 min (note that both compounds can only be observed in the MS analysis, not in Figure 12, top). An intense peak at 2.9 min can be assigned to the electrolyte solvent dimethyl carbonate (DMC). The following peak at a higher retention time originates from ethylene carbonate (EC, onset at 6.3 min). The last well-resolved peak, at 8.1 min, is associated with dimethyl-2,5-dioxahexanedioate (DMDD), a product of DMC + EC decomposition. No signs of other analytes were found in the chromatograms, even from cells with high amounts of an organic FR within the anode. The electrolyte mixtures LiDFOB + EC/PC and LiTFSI + 1,2-BC/FEC appear to be more stable than LP30. A broad, poorly resolved peak at 6.9 min in the chromatogram in Figure 12 (middle) belongs to the EC and PC solvents, which were not baseline-separated under the given GC conditions; no other signs of decomposition products can be observed in this chromatogram. The chromatogram of the LiTFSI + 1,2-BC/FEC mixture (Figure 12, bottom) demonstrates two well-resolved peaks, which can be ascribed to FEC (onset at 5.1 min) and 1,2-BC (onset at 7.4 min), respectively. It can be concluded that the FRs neither cause additional decomposition of the electrolyte nor change the LP30 decomposition pathways.
Li-ion batteries maintain their capacity only if they operate within a defined voltage range. Overcharge and deep discharge cause irreversible capacity loss, reduce cycling life and lead to heat generation, which has a negative impact on battery safety. Strong overcharge triggers Li plating, SEI and electrolyte decomposition, and gas generation, which further accelerates cell degradation [76,77]. Systematic overcharge tests on small-format pouch cells are always associated with very large errors. Therefore, a systematic accelerating rate calorimetry (ARC) measurement, including gas detection, will be used to investigate larger-sized pouch cells in a subsequent study.
The abuse test (overcharging) was performed to demonstrate the fundamental principle of the FR effects in a battery cell. Overcharging usually leads to electrolyte decomposition and thus to gas evolution [78,79]. As shown previously, the FRs mainly decompose into CO2, which should protect the cell from ignition. For this purpose, the Li-ion cells were prepared in specially designed pouch cells that included a gas extraction valve; an image is shown in Figure S11 (supporting information). This setup made it possible to extract gas without contamination from the surrounding atmosphere. The prepared pouch cells were first cycled during cell formation (10 cycles) and subsequently overcharged: the cell was first fully charged (100% SOC) and then overcharged at 15 C up to a cell voltage of 50 V. As sodium malonate had shown the weakest performance in the current rate tests, the gas tests were performed without sodium malonate as an FR additive. During this process, the cells (with a capacity of approximately 40 mAh) showed significant swelling but did not burst. A reference gas (Kr) was then introduced into the cell, and the amounts of the gases present were determined relative to the reference gas using GC-TCD. Figure 13 shows the corresponding measurements for the FRs Li oxalate and Na fumarate. At an FR fraction of 20%, significantly increased CO2 formation was observed for both lithium oxalate and sodium fumarate. It should be noted that, as the proportions can only be estimated relative to one another, quantitative statements comparing the gases are not possible. It is noticeable that the proportion of CO was almost constant in all three cells although, as expected, lithium oxalate decomposes to CO in the first step.
It should also be noted that no O2 was detected in the cells, so a conversion of CO to CO2 and carbon cannot be excluded [80].
Discussion
In this study, three flame retardants incorporated into the anode layer (namely, lithium oxalate, sodium fumarate and sodium malonate) were investigated together with three electrolytes (namely, 1 M LiPF6 in EC/DMC, 1 M LiTFSI in 1,2-BC/FEC and LiDFOB in EC/PC). The electrolytes were optimized in terms of flammability (especially with respect to the flash point, Table 2), and the flame retardants were selected with respect to their potential for CO2 release.
Based on the measurements, the gas release of the FRs can be discussed in more detail. The release of CO2 into the air atmosphere, neglecting energy balancing, can be estimated from the mass losses revealed by the TGA measurements. Assuming ideal gas behavior under atmospheric conditions (which are present after release) in a temperature range between 300 °C and 600 °C (the temperature range in which the thermal runaway of a battery usually occurs), and assuming a direct 1:1 molar conversion between CO and CO2, the values in Table 6 were obtained. From this point of view, lithium oxalate is more favorable as a flame retardant than sodium fumarate or malonate.

The fabrication of the anode layers was specifically investigated and described; in particular, the rheological behavior of the anode slurry was studied in detail. A slurry containing 10 wt.% or more of lithium oxalate can be considered a mixture of graphite, active carbon and undissolved FR crystals in a saturated Li-oxalate-Na-CMC/SBR-water solution. In contrast, the major part of the sodium fumarate and sodium malonate dissolved in the slurry, even at high concentrations. Adding a water-soluble FR to a slurry containing a Na-CMC-water solution can also cause changes in the Na-CMC polymer structure and thus in the entire graphite/active carbon/Na-CMC network.
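As a rough cross-check of this estimate, and of the Table 6 convention that one mole of ideal gas occupies 22.4 L, the sketch below converts one gram of each salt into a gas volume. The one-mole-of-gas-per-mole-of-salt stoichiometry follows the lithium oxalate pathway (Li2C2O4 -> Li2CO3 + CO, with CO later counted 1:1 as CO2); applying the same ratio to the fumarate and malonate is an assumption made here for illustration only.

```python
# Ideal-gas estimate of the gas volume released per gram of FR,
# following the Table 6 convention (1 mol of gas occupies 22.4 L).
MOLAR_VOLUME_L = 22.4

molar_masses_g_mol = {
    "lithium oxalate (Li2C2O4)": 101.90,
    "sodium fumarate (Na2C4H2O4)": 160.04,
    "sodium malonate (Na2C3H2O4)": 148.03,
}

for name, molar_mass in molar_masses_g_mol.items():
    # one mole of gas per mole of salt (assumed for all three salts)
    ml_per_gram = (1.0 / molar_mass) * MOLAR_VOLUME_L * 1000.0
    print(f"{name}: ~{ml_per_gram:.0f} mL of CO/CO2 per gram")

# Lithium oxalate yields ~220 mL per gram, consistent with the
# "more than 200 mL" figure quoted in the Conclusions.
```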
Mandrel bending tests around a metal cylinder showed no significant changes in the adhesion of the self-made anode sheets up to an FR content of 10%. For lithium oxalate, almost the same adhesive strength was retained even at a 35 wt.% FR content. SEM, light microscopy and conductivity tests on the electrodes showed that the FR-containing anode layers were very similar to the pristine ones up to approximately 20 wt.% FR in the case of lithium oxalate and up to 10 wt.% FR in the cases of sodium fumarate and sodium malonate. This was confirmed by the adhesion tests and suggests that the FRs are well incorporated into the layers.
The FRs were investigated in coin cells to evaluate their impact on cell capacity, especially at higher current rates. A significant capacity loss was observed at higher C-rates when the FR concentration increased. This can be explained by the fact that the FRs themselves are a source of additional resistivity: the mobility of electrons through the anode layer at elevated current rates was hampered because the intrinsic electrical conductivity of the FRs is lower than that of the surrounding electrode layer. Another reason for the capacity loss at high C-rates may be changes in the SEI layer caused by the high amount of FR; possibly, the transport of Li ions during charging and discharging was slowed by SEI changes caused by FR decomposition. At the same time, there was no noticeable difference in rate capability between cells with Li oxalate and Na fumarate. By contrast, the discharge capacities of the cells with Na malonate were remarkably smaller, and higher C-rates intensified this effect. The influence of the FRs on discharge capacity was more pronounced at high C-rates (>5 C). Lithium oxalate featured the slightest effect on concentration-dependent capacity losses, sodium fumarate had a moderate impact, and sodium malonate had the largest impact on capacity losses, an impact that was also highly dependent on the electrolyte used.
The cell tests indicate that, at low current rates, the flame retardants have no negative effect on performance and cell capacity, even at high FR concentrations. Higher currents above 5 C, on the other hand, lead to a reduction in discharge capacity with increasing FR content due to the deteriorated conductivity of the layers. While lithium oxalate and sodium fumarate exhibited similar behavior, sodium malonate showed the weakest performance and exhibited a significant capacity loss even at low current rates. Gas-release tests demonstrated that the amount of evolved CO2 was higher when flame retardants were present in the anode layer.
Conclusions
In the present study, the properties of three flame retardants (lithium oxalate, sodium fumarate and sodium malonate) and their combinations with three electrolytes (one standard electrolyte, LP30, and two low-flammability electrolytes, LiDFOB + EC/PC and LiTFSI + 1,2-BC/FEC) were investigated. The addition of FRs to a pristine anode slurry changes the physical, mechanical and electrical properties of the resulting anodes, especially at high concentrations. From this perspective, 20 wt.% of FR in the slurry can be considered the optimum: the viscosity of the slurry/FR mixture is still high enough to enable coating of the anodes with the desired thickness, the produced electrodes are stable enough for further handling during cell manufacturing and, at the same time, the resistivity of the FR-containing anode remains low and comparable to that of the pristine graphite slurry, which in turn indicates little impact on the overall cell resistance. Coin cells and pouch-bag cells with 5 wt.% of added FR were studied to demonstrate how the electrochemical performance is affected in the presence of FRs. Even at high discharge rates (10 C), the discharge capacity values were comparable to the pristine anode. Long-term performance tests on the pouch-bag cells with and without FRs showed that at least 80% of the pristine capacity remained in the cell after 1024 cycles (ca. 3 months at 1 C). Lithium oxalate is the most promising FR from this study: during the thermal decomposition of 1 g of Li oxalate, more than 200 mL of CO2 is released (in the first step as CO).
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/batteries9020082/s1. Figures S1-S11; Table S1: Composition of anode materials with and without FRs in combination with the electrolytes LP30, LiDFOB + EC/PC and LiTFSI + 1,2-BC/FEC studied in coin cells, including their area capacities; Table S2: Anode material data in combination with the electrolytes LP30, LiDFOB + EC/PC and LiTFSI + 1,2-BC/FEC studied in pouch-bag cells, including their area capacities; Table S3: Cycling procedure for coin cells; Table S4: Cycling procedure for pouch-bag cells.
Figure 1. (a) The in-house-built, 3D-printed sample holder made of polylactide (PLA) for the resistance measurements. (b) The graphite anode coating on a glass plate, together with the sample holder.
Figure 2. Chemical structures and potential decomposition pathways, including the decomposition temperatures, of the investigated flame retardants.
Figure 3. TGA measurements of lithium oxalate (a), sodium fumarate (b) and sodium malonate (c), all with dry-air purge gas. The ranges (blue dots) where the mass loss was identified are marked. A pre-decomposition of Na fumarate between 400 and 450 °C was identified but, due to its much slower kinetics, is not included. The second decomposition step of lithium oxalate is likewise not marked because of its high temperature (>700 °C). The uncertainty of the mass loss is on the order of 0.1-0.3 percentage points.
Figure 4. Viscosity of the pristine slurry and the slurries with added FRs as a function of shear rate.
Figure 5. Microscope images of anodes without FRs (a) and with 50 wt.% of added sodium fumarate (b). Red circles emphasize FR inclusions on the anode surface. The sample "slurry" denotes the dried slurry without any FR.
Figure 6. SEM images of the anode layer structure with added FRs.
Figure 7. Analysis of the resistance measured on anode layers of different thicknesses coated on a glass plate.
Figure 9. Cycling data of pouch-bag cells with graphite (blue), 5 wt.% Li oxalate (green), 5 wt.% Na fumarate (red), and 5 wt.% Na malonate (magenta) cycled 1024 times. At cycle 516, a C/10 cycle was performed; thus, the discharge capacity recorded there is too high. In the case of LiTFSI + 1,2-BC/FEC in combination with self-made graphite, some spikes were recorded (between cycles 100 and 200) which were removed for better comparison.
Figure 10. Discharge curves of pouch-bag cells (NMC111//graphite) loaded with LP30 (a) and LiDFOB + EC/PC (b) for C-rates from 0.5 C to 10 C (the charge rate is 0.5 C).
Figure 13. Gas composition measured with GC-MS after the abuse test. C: graphite.
Table 1. Composition of anode materials with added flame retardants.
Table 2. Composition, density, viscosity and conductivity, as well as the glass, flash and melting points estimated for the studied electrolytes. Results for the EC/DMC and 1,2-BC/FEC electrolytes are taken from reference [24] for comparison.
Table 3. Results of the mandrel bend test on anodes with different FR contents. The table states the critical cylinder diameter (smallest diameter) at which the first defects become visible. NA: no signs of any damage, even at the lowest bending diameter of 2 mm.
Table 6. Gas formation of the FR compounds. Gas volumes were calculated assuming ideal behavior (1 mol corresponds to 22.4 L). A total of 100 g was converted to moles, and the molar equivalents of CO and CO2 were determined.
"Materials Science",
"Engineering",
"Chemistry"
] |
Computer Aided Patterning Design for Self-Assembled Microsphere Lithography (SA-MSL)
In this paper, we use a finite difference time domain solver to simulate the near field optical properties of self-assembled microsphere arrays when exposed to an incoherent light source. Such arrays are typically used for microsphere lithography where each sphere acts as a ball lens, focusing ultraviolet light into an underlying photoresist layer. It is well known that arrays of circular features can be patterned using this technique. However, here, our simulations show that additional nanometer scale features can be introduced to the pattern by optimising the sphere dimensions and exposure conditions. These features are shown to arise from the contact points between the microspheres which produce paths for light leakage. For hexagonally close packed arrays, the six points of contact lead to star shapes in the photoresist. These star shapes have subfeature sizes comparable to the current achievable resolution of low-cost fabrication techniques.
Self-assembled microsphere lithography (SA-MSL) is a cost-effective, fast, highly ordered, repeatable, and innovative method of microarray fabrication, the origins of which lie in the work of Van Duyne's group 1. A colloidal crystal mask (CCM) is used instead of a conventional mask and is applied directly onto the surface of the substrate. CCMs can be formed by gravity sedimentation 2, electrophoretic deposition 3, solvent evaporation 4, the Langmuir-Blodgett technique 5, the air-water interfacial floating method 6 and spin coating 7. We chose spin coating because it is relatively cheap and fast and relies on equipment commonly found in many laboratories.
Thorough and accurate simulations of incident illumination on the physical setup have been used to explore the behaviour and characteristics of the light within the photoresist. This enables the exploitation of the inherent phenomena of nanosphere lithography (NSL) without the use of unconventional, less cost-effective equipment. The resultant array of 2D six-point microstars has nanoscale sub-features which can be produced at a length scale of the order of 100 nm. This resolution is comparable to other nanofabrication techniques, for example star arrays formed using electron beam lithography 8. The stars produced in ref. 8 are as small as 1 μm, a size that is reduced further in this paper (the 2 μm spheres produce a star of 800 nm). Projection photolithography is a method of NSL that forms complex designs by projecting an image through the microspheres, replicating this image via the array of microsphere lenses. However, this method requires complex equipment and is limited to large microsphere sizes 9. In this paper, a cost-effective method of producing star shapes by exploiting the inherent optical properties of the microspheres within the resist is demonstrated. The unique benefit of this work lies in fabrication techniques that utilise the complex optical response of the microspheres to produce the star shape, coupled with the simplicity and low cost of the technique, which allows potentially large areas of these features to be produced.
The microstar arrays could be utilised as arrays themselves or could be lifted off to be used as individual star-shaped particles (microstars). NSL is already used for plasmonic enhancement by using deposition and lift-off to form triangular shapes between the spheres 10. Using NSL to produce star-shaped arrays for plasmonic enhancement could provide more angular stability than the triangular counterpart. Reference 11 uses NSL to produce micro-rings which can be used for resonance; any periodic metallic shape has a resonance, and this method is a cheap and simple way of producing an array of stars, one of the more complex shapes, so it could show promise in this area. The unique shape of the microparticles produced here is expected to have a particularly useful topology: the points of the star are of particular interest as they lend themselves to increasing the likelihood of tunnelling within composite materials 12. Whilst many applications already exist that utilise the topology of 3D star microparticles, 2D (flat) versions could have their own unique benefits in terms of reducing physical space.
An important field which could benefit from these advancements is bioengineering, in both biosensors and drug delivery. Both the arrays and the microstars could be of use in the development of label-free impedance biosensors 13 and electrochemical sensors 14. Reference 15 stresses the importance of microparticle shape with respect to drug delivery. A current bottleneck in this field is the production of easily fabricated alternative shapes. Therefore, advances such as those reported in this paper are critical to the future development of the microparticle-based drug-delivery field.
Results and Discussion
Basic microsphere lithography theory. In microsphere lithography, a suspension of polystyrene microspheres is spin coated on top of a photoresist layer. Under certain spin coating conditions, these spheres can self-assemble into close-packed arrays. When the array is top-illuminated by a light source (e.g. from a UV mask aligner), as shown in Fig. 1a, each microsphere acts as a ball lens which focuses the light with an effective focal length of EFL = nD/[4(n − 1)], where n is the refractive index and D is the diameter of the sphere. This assumes that the sphere diameter is significantly larger than the wavelength of the incident light. As the focal point is close to the edge of the sphere, the focused light starts to diverge at a short distance from the sphere. This limits the maximum resist thickness that can be used without blanket-exposing the photoresist. The minimum thickness of resist is constrained by practical fabrication limitations (e.g. the requirement for a protective etch layer).
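The focal-length expression above has been reconstructed as the standard thick-lens (ball-lens) formula, measured from the sphere centre; the short sketch below evaluates it for the sphere sizes used in this work. The polystyrene refractive index n ≈ 1.6 near 400 nm is an assumed value.

```python
# Ball-lens focal length check, using the reconstructed formula
# EFL = n*D / (4*(n - 1)), measured from the sphere centre.
def efl(n: float, diameter: float) -> float:
    return n * diameter / (4.0 * (n - 1.0))

n_ps = 1.6  # refractive index of polystyrene near 400 nm (assumed)
for d_um in (0.5, 2.0, 3.0):  # sphere diameters used in this work [um]
    f = efl(n_ps, d_um)
    print(f"D = {d_um:.1f} um: EFL = {f:.2f} um "
          f"({f - d_um / 2:.2f} um beyond the sphere edge)")
```

For a 2 μm sphere this places the focus roughly 0.3 μm beyond the sphere edge, consistent with the statement that the focal point lies close to the edge of the sphere.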
In conventional photolithography, a mask is used to block the UV light in specific regions to provide feature contrast. The SA-MSL technique differs in that all of the incident light passes through the spheres, and feature contrast instead arises from the focusing of the light by each sphere. The arrays were modelled using the finite difference time domain (FDTD) method, whose time step is bounded by the Courant stability condition, Δt ≤ 1/(c·sqrt(1/Δx² + 1/Δy² + 1/Δz²)), where c is the speed of light in vacuum. This condition typically results in a large grid and a large number of time steps, which can be computationally intensive. However, the flexibility in simulating a variety of material properties, as well as the ability to visualise the propagation of broadband pulses, makes this method attractive for simulating optical and quasi-optical devices.
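The Courant bound above has been reconstructed in its standard form. For the cubic cells quoted in the following paragraphs (25 to 50 nm edges), it gives time steps of order 10⁻¹⁷ s, which is why these simulations require so many steps; a quick check:

```python
import math

C_LIGHT = 2.998e8  # speed of light in vacuum [m/s]

# Courant limit for a cubic cell of edge dx: dt <= dx / (c * sqrt(3)).
for dx in (25e-9, 50e-9):  # cell sizes used in this work [m]
    dt_max = dx / (C_LIGHT * math.sqrt(3))
    print(f"dx = {dx * 1e9:.0f} nm -> dt_max = {dt_max:.2e} s")
```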
In this work, we have used custom FDTD software, Lucifer 17, to study the light propagation throughout the microsphere array and photoresist, in order to identify situations where the contact areas between the spheres and the substrate produce interesting patterns, e.g. star shapes.
The simulation used cubic cells with an edge length between 25 and 50 nm. Periodic boundary conditions were used on the sides of the simulation domain, while the top and bottom boundaries were absorbing. The light source was modelled as a linearly polarised incoherent source to account for the spectrum of the mercury lamp. A plane-wave source is enforced by the periodic boundary conditions at each of the sides of the simulation domain. The spectral features measured from the mercury lamp of an EVG620 aligner, as used for the experiments (Fig. 1b), have been simulated using the Ornstein-Uhlenbeck process as defined in equation (3) 18, which can be written as X(t + Δt) = X(t)·e^(−Δt/τ) + sqrt((Cτ/2)(1 − e^(−2Δt/τ)))·n,
where Δt is the time step, τ is a constant which accounts for the line width, C is the diffusion constant and n is a pseudorandom number drawn from a Gaussian distribution. The X value at each numerical time step is multiplied by a sine function at the frequency of the lamp emission line; similarly, a second process Y is multiplied by a cosine. This is repeated for each of the peaks found in the spectrum, and the τ values are optimised to correctly model the line widths.
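A minimal sketch of this source model is given below, assuming the exact one-step Ornstein-Uhlenbeck update reconstructed above for equation (3). The time step, the choice of the mercury g-line as the emission frequency, and the τ and C values are illustrative assumptions; the real model sums one such quadrature pair per measured lamp peak.

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_series(n_steps: int, dt: float, tau: float, c_diff: float) -> np.ndarray:
    """Discrete Ornstein-Uhlenbeck path using the exact one-step update."""
    mu = np.exp(-dt / tau)
    sigma = np.sqrt(c_diff * tau / 2.0 * (1.0 - mu**2))
    x = np.zeros(n_steps)
    for k in range(1, n_steps):
        x[k] = x[k - 1] * mu + sigma * rng.standard_normal()
    return x

dt = 4.8e-17                 # FDTD time step (Courant limit for a 25 nm cell)
freq = 2.998e8 / 436e-9      # mercury g-line, one lamp peak (assumed)
t = np.arange(20_000) * dt

# tau controls the line width and c_diff the diffusion; values illustrative.
x = ou_series(t.size, dt, tau=1e-14, c_diff=1.0)
y = ou_series(t.size, dt, tau=1e-14, c_diff=1.0)

# Quadrature mixing described in the text; one term per emission peak,
# summed over all peaks of the measured lamp spectrum.
source = x * np.sin(2 * np.pi * freq * t) + y * np.cos(2 * np.pi * freq * t)
```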
The dielectric materials were modelled as perfect dielectrics; the silicon, novolak-based positive resist, and polystyrene microspheres were assigned relative dielectric constants of 12 (ref. 19), 2.8 (ref. 20) and 2.5 (ref. 21), respectively.
The exposure of the resist has been calculated by integrating the light intensity, I, over time, where the intensity was obtained from the field via the relationship in equation (4), I ∝ |E|².

Each of the microspheres will be physically touching its neighbours and is likely to be slightly deformed, as the polystyrene is flexible. Therefore, in a hexagonally close-packed array, the spheres will have small flat sections at the contact areas. The points of the stars emerged from the absence of reflective boundaries between adjacent spheres, which then essentially behave as a uniform material with star-shaped focal points. This geometrical arrangement has been simulated by overlapping the spheres, which removes the reflective boundaries. Figure 2b shows the integrated light intensity across the sections of the resist below the touching spheres. This can be seen in the comparison of the two images in Fig. 2b, where the top image is taken perpendicular to the line of touching spheres and the bottom image is taken through the touching spheres. In general, in the photoresist surrounding the central axis of the sphere, the exposure is higher in the region along the star points (Fig. 2b, bottom), whereas in the orthogonal direction, where there are no star points, the exposure is lower (Fig. 2b, top). By exploiting this effect, it is possible to expose patterns with features which are significantly smaller than the spheres. Further reducing the sphere dimensions is challenging, as it is more difficult to form an even monolayer of smaller spheres 7. Simulations have been repeated for various sphere dimensions, showing that the star pattern only appears for diameters above 1.5 μm (≈3 wavelengths). This has been confirmed experimentally using three different diameters: 3 μm, 2 μm and 500 nm. Simulations suggest that the star pattern is not present when there is strong diffraction from the spheres, as shown in Fig. 3a,b, which smears out the smallest features. Therefore, if this method were to be repeated with a shorter wavelength, the minimum sphere size capable of producing the star shape would also be reduced.
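Equation (4) has been reconstructed above as a proportionality, I ∝ |E|². A minimal post-processing sketch of the dose calculation is shown below; the array shapes, the toy field data, and the development threshold are assumptions for illustration, not values from this study.

```python
import numpy as np

def exposure_dose(e_field_history: np.ndarray, dt: float) -> np.ndarray:
    """Integrate I ∝ |E|^2 over time for each resist voxel.

    e_field_history: shape (n_steps, nx, ny), E sampled at one resist plane.
    """
    intensity = np.abs(e_field_history) ** 2
    return intensity.sum(axis=0) * dt  # discrete approximation of the time integral

def developed_mask(dose: np.ndarray, threshold: float) -> np.ndarray:
    """Voxels whose dose exceeds the resist threshold count as developed."""
    return dose >= threshold

# Toy usage with random fields; the threshold is a resist property (assumed).
fields = np.random.default_rng(1).normal(size=(1000, 64, 64))
pattern = developed_mask(exposure_dose(fields, dt=4.8e-17), threshold=4e-14)
```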
The experimental results in Fig. 4b, which show the circular holes formed by the 500 nm spheres, are consistent with the simulation results of Fig. 3a. As previously mentioned, these spheres are too small to produce a star pattern because of the increased diffraction. Similarly, Fig. 2a shows the simulated exposure through the resist, while Fig. 4a displays the experimental findings for the 3 μm spheres. Here, the star pattern is clearly present and, once again, the simulations are consistent with the experimental findings. Increasing the exposure level increases the size of the features; hence, although the pitch is directly proportional to the sphere size, there is some degree of control over the ratio of feature size to pitch.
Both the theoretical and experimental findings show that the star shape is produced by removing the air-polystyrene reflective boundary between adjacent spheres, allowing homogeneous light transmission through the sphere monolayer. In Fig. 4a, the points of the stars are directed towards neighbouring stars. The theoretical and experimental work shows that the star pattern is not observed when strong diffraction is present. The implication of these findings is that, with control of the microsphere arrangement, the feature shape in the photoresist can be controlled at the nanometre scale (square feature shapes have been found at lattice dislocations; see supplementary material). Multi-sized-particle MSL could be an interesting avenue to explore here, possibly allowing feature-shape tuning. This is significant since previous work has only reported simpler patterns, none of which were produced using only a microsphere monolayer.
Methods
SPR-350 positive photoresist was used on clean, 1 cm², diced, oxidised silicon <100> substrates to produce 0.4 μm layers for the 3 μm spheres and 0.2 μm layers for the 2 μm and 500 nm spheres. The 0.4 μm photoresist layer was deposited by mixing SPR-350 1.2 and EC solvent in a 2:1 ratio and spinning at a maximum speed of 7000 rpm. The 0.2 μm photoresist layer was deposited by mixing SPR-350 1.2 and EC solvent in a 1:2 ratio and spinning at a maximum speed of 3700 rpm.
"Physics"
] |
TOWARD A PEDAGOGY GROUNDED IN CHRISTIAN SPIRITUALITY
INTRODUCTION

Catholic educators are called to foster the spirituality of their students. This statement is validated by Catholic Church documents on education (Congregation for Catholic Education [CCE], 1977, 1982, 1988), along with myriad authors who have written on vocational spirituality (Carotta & Carotta, 2005; Durka, 2002; Groome, 1998; Jacobs, 1996, 2005; Palmer, 1983, 1998, 2000, 2004; Shimabukuro, 1998, 2007). Although the term, spirituality, pervades the literature, it remains an abstract concept, described by compelling phrases, such as promoting the "integral formation" of students (CCE, 1982, §28), instilling in students "the spirit of Christ" (CCE, 1977, §90), fostering the "growth of the whole person" (CCE, 1977, §29), and assisting in students' "interior formation" (CCE, 1988, §95). These concepts are at the heart of teaching and learning in the Catholic school and, when concretized into progressive teaching and learning methodologies, suggest a pedagogy that is grounded in Christian spirituality and that will meet the needs of today's "millennial generation students" (Nicoletti & Merriman, 2007).
OVERVIEW
In order to construct a conceptual model of a pedagogy grounded in Christian spirituality, the meaning of the term "pedagogy" will be explored, along with the evolution of three pedagogical models as they relate to the millennial generation. The term "spirituality" will be investigated, particularly as it relates to teaching and learning in a Catholic context, incorporating the work of Jacobs (2005), who suggested five spiritual components that equip the teacher to implement a spiritually-based pedagogy. Finally, a New Science approach to learning will be presented as foundational to a spiritually-based pedagogy.

THREE PEDAGOGICAL MODELS

Cambron-McCabe and Dutton (2000) identified the evolution of three approaches to teaching and learning. The first, referred to as a "transmission approach," places the learner in "a passive role of having something 'done to you'…where experts [teachers] 'tell' participants [students] what they need to know" (p. 206). They portrayed this pedagogical model through the following classroom parody from the movie Ferris Bueller's Day Off:

A high school economics teacher stands in front of a chalkboard. In a monotone, deadpan delivery, devoid of a shred of enthusiasm, he addresses the students with a fill-in-the-blank lecture. "In the 1930s," he intones, "the Republican-controlled House of Representatives, in an effort to alleviate the effects of the-" He pauses for a second. "Anyone? Anyone?" Having received no answer, he fills in the blank "-Great Depression-" and continues with the sentence: "passed the-Anyone? Anyone? The Tariff Bill." Students sitting at their desks, eyes glazed over, bored, disinterested, comatose, or asleep. This classroom parody…though cartoonlike in its exaggeration, taps into people's shared experiences or beliefs. We have yet to see a group of teenagers watch this movie without a hilarious response and comments like "That is so true!" (pp. 205-206)
The authors described a range of contemporary educational experiences that continue to fall into this approach to learning, from high school and college instruction to workplace trainings to conference sessions. Palmer (1998) characterized this hierarchical model of instructional delivery as an objectivist myth, in which "truth flows from the top down, from experts who are qualified to know truth…to amateurs who are qualified only to receive truth" (p. 101). Palmer identified two problems with this myth:

It falsely portrays how we know, and it has profoundly deformed the way we educate. I know a thousand classrooms where the relationships of teacher, students, and subject look exactly like this image. But I know of no field-from astronomy to literature to political science to theology-where the continuing quest to know truth even vaguely resembles this mythical objectivism. (p. 101)

According to Cambron-McCabe and Dutton (2000), many educators today have been moving away from the "transmission approach" toward a "generative" one. This model, based on theories and methods such as constructivism, collaborative learning, and cooperative learning, advocates coaching students through exploration, inquiry, and discovery. Cambron-McCabe and Dutton succinctly summarized this teaching methodology:

Learners create knowledge by building on their own experiences and by interacting with the subject matter and with other people, including the teacher or facilitator. New knowledge is created layer by layer. Contrary to popular criticism, generative pedagogy does not minimize content. It is built on a belief that learning is about both content and process, and that students more actively engaged in the process retain more and have a deeper understanding of the content. (p. 206)

According to these authors, effective pedagogy must extend beyond the transmission and generation of knowledge in the classroom; it must extend into the larger context of the world. Thus, the "transformative" pedagogical model, built on the generative model's active learning and student engagement, emerged from the educational theory and practice of critical pedagogy. "Through this pedagogy, an individual can tap into the deep learning cycle, which provides a means to think critically about the world so that learning is a process of both self- and social transformation" (p. 207). Foundational to this pedagogical model is social action, in which learners become empowered to use their knowledge to transform society.
CHRISTIAN PEDAGOGY
Christian pedagogy, as illustrated throughout the documents on Catholic education (CCE, 1977, 1988; National Conference of Catholic Bishops [NCCB], 1972, 1976), addresses all three pedagogical models, as exemplified in Table 1. Groome (1996) crystallized the essence of Christian pedagogy as engaging "the very 'being' of…students, to inform, form and transform their identity and agency-who they are and how they live-with the meaning and ethic of Christian faith" (p. 118). However, the instructional methods that teachers employ, which ultimately stem from their beliefs about teaching and learning, that is, whether they are predominantly teacher-centered or learner-focused, will determine the extent to which their students become informed, formed, and transformed.

Table 1
Generative: "The [Catholic] school considers human knowledge as a truth to be discovered" (CCE, 1977, ¶41).
Transformative: "Since the Gospel spirit is one of peace, brotherhood, love, patience and respect for others, a school rooted in these principles ought to explore ways to deepen its students' concern for and skill in peacemaking and the achievement of justice" (NCCB, 1972, ¶109).
MILLENNIAL GENERATION STUDENTS
As written in The Catholic School (CCE, 1977), teachers must continually "adapt their work to the needs of the contemporary world" (§17). Thus, a realistic assessment of today's students and their learning needs is central to an effective pedagogy. Prensky (2001a) designated today's students "digital natives." He wrote, "Our students today are all 'native speakers' of the digital language of computers, video games and the Internet" (p. 1). They become engaged through interactivity and, according to Prensky, have short attention spans "for the old ways of learning" (p. 4), but not for topics that interest them.
Digital Natives crave interactivity-an immediate response to their each and every action. Traditional schooling provides very little of this compared to the rest of their world (one study showed that students in class get to ask a question every 10 hours). So it generally isn't that Digital Natives can't pay attention, it's that they choose not to. (2001b, p. 4)

Nicoletti and Merriman (2007) identified today's learners as "millennial students," part of Generation Y, born between 1982 and 2003. Based on the work of Jonas-Dwyer and Pospisil (2004) and Sweeney (2006), the following are conditions under which millennial students prefer to learn:
• In a collaborative learning environment. They exhibit a preference for teamwork incorporating cooperative learning and constructivist principles.
• In a challenging environment that has as its purpose a "life plan" that is goal oriented and directed toward their future plans.
• In a flexible, personalized, and customized program.
• In an environment that makes learning interesting.
• In a structured environment.
• In an environment that uses technology to enable them to be more productive and connected.• In an environment that is goal and achievement orientated.
Thus, the wise teacher will implement a pedagogy that will engage today's millennial students with their distinct learning needs.
Having explored three major pedagogical models, along with the uniqueness of today's learners, components of Christian spirituality will now be investigated as foundational to a spiritual pedagogy for Catholic schools. Ó Murchú (1998) provided a helpful definition of spirituality: "an openness to the creative Spirit of wisdom and love who inhabits the whole of creation and dwells in my inner being, informing my every instinct and my desire for meaning" (p. 28). Ó Murchú explained that formation in a specific religious system is not a precondition for a spiritual experience, but that

My formal faith tradition will enable me to name my experience, to couch it in words and concepts which will assist in deepening the experience and will enable me to engage with others in shared spiritual discourse, the basis for any authentic participation in the community of the church. The temptation here is that the ability to name can also become the occasion to label, to box things into categories that belong to the linear and literalist mind-set, to establish immutable dogmas. (p. 28)

Based on Ó Murchú's definition, a teacher wishing to craft a spiritually-based pedagogy creates a learning space with students that invites the creative spirit of God into their lives and, likewise, encourages students' expressions of the spirit through their learning. In addition, the teacher delivers instruction through intentionally thought-out methodologies that encourage students to discover ways that the spirit permeates the "whole of creation," as well as engages learners in self-exploration and awareness of the spirit dwelling within them. This teacher does not reserve this pedagogy for religion class, but implements it throughout the curriculum.
SPIRITUALITY IN CATHOLIC SCHOOLING
According to Jacobs (2005), there are five graces from God (see Figure 1) that empower educators to become spiritual leaders. In the case of teachers, these graces are foundational to employing a spiritually-based pedagogy with their students and, hence, to becoming spiritual leaders who integrate the spiritual dimension throughout their curricula. They represent areas of continuing growth and practice for the teacher.
The first grace is "understanding the nature of the soul and of spiritual experience" (Jacobs, 2005, p. 69). Jacobs challenged educators to stretch beyond the study of child development, which involves interpreting the child's physical and cognitive development in relationship to learning, to incorporate the developmental nature of the soul.
Understanding the nature of the soul and of spiritual experience adds greater depth and texture to the mind-body interaction by reminding Catholic educational leaders of the crucial third element constitutive of every human life, namely, the unique and unrepeatable soul God has breathed into each human being. (p. 69)

Particularly in the Catholic school, but throughout the world of education, attention to students' souls and the indwelling of the divine in their souls is essential to the educative process, particularly when wishing to teach to the whole child. Moreover, teachers are called to an "integral formation" of students (CCE, 1982, §28), which discourages a disconnected, fragmented approach to student development and, rather, promotes the holistic, integrated development of body, mind, and soul. The second grace, "adopting a contemplative stance," provides teachers with "the clarity of insight needed to discern better what God is calling them to do as they nurture…the souls entrusted to their spiritual leadership" (Jacobs, 2005, p. 69). This grace involves an ongoing contemplation of one's "personal vocation," which, in the words of Alphonso (2001), designates "the essence of our being…that expresses itself in everything we do" (p. x). Once discerned, one's personal vocation "becomes the criterion of discernment for every decision in life, even for the daily details of decision making…'God's will' in the deepest theological meaning of this much-repeated and much-misused phrase" (p. 43). In the little book, Discovering Your Personal Vocation, Alphonso offered a brief, yet profound, reflection on identifying one's "personal vocation" as a means to discovering the very essence of one's being that is unique and unrepeatable. Alphonso wrote:

The heart of ongoing formation…[lies in] a person's inmost resources of being, that person's unrepeatable meaning in life that is the source and secret of all his or her ongoing formation: that individual's personal vocation constitutes the life antennae, which are constantly picking up from the atmosphere or the whole range of human experience that which is meaningful for true growth and ongoing formation. (p. 50)

In other words, the personal vocation is precisely a person's unrepeatedly unique way of opening out onto community-opening out onto social reality, social responsibilities, social commitment. (p. 53)

Durka (2002), reiterating the unique dimension of each teacher's vocation, wrote, "Even though there are common threads in the calling of each teacher, each teacher dwells in the role in a unique way. We each give our vocation a distinctive stamp" (p. 6).
The teacher's vocation, which penetrates one's deepest self and interfaces with one's God-given uniqueness, requires the consistent contemplative/reflective practice of entering one's interior space to further discern one's vocation. Subsequent student formation relies upon this process of the ongoing "interior synthesis" of the teacher (CCE, 1982, §29). "Exhibiting a magnanimous spirit," Jacobs' (2005) third grace, involves a genuine openness and sensitivity to "the presence and movement of the Holy Spirit within oneself, the school community, and its members" (p. 68). This "magnanimous spirit" requires continually becoming awake to the movement of the Holy Spirit in one's own life, as well as in the lives of learners. Challenging, but rewarding, dimensions of this grace may involve discernment of the presence and movement of the Spirit in conflicts that arise in school settings, in students who are psychologically troubled or who struggle with learning difficulties, and in other similarly demanding situations.
Exhibiting a magnanimous spirit connotes a teacher who is kind, generous, and forgiving.
"Possessing interpersonal sensitivity," the fourth grace, attunes the teacher to the learning needs of his or her students.It requires a willingness to put aside one's professional agenda for the sake of the learner and the potential teachable moment, ultimately guiding students to discern "who God is calling each of them to become as well as what God is calling them to do both as individuals and as a community" (Jacobs, 2005, p. 70).Palmer (1998), addressing the concept of interpersonal sensitivity specifically as a capacity for connectedness, wrote, "Bad teachers distance themselves from the subject they are teaching-and in the process, from their students.Good teachers join self and subject and students in the fabric of life" (p.11).Palmer continued to expand upon the teacher's interpersonal skills in the context of his or her pedagogy: Good teachers possess a capacity for connectedness.They are able to weave a complex web of connections among themselves, their subjects, and their students so that students can learn to weave a world for themselves.The methods used by these weavers vary widely: lectures, Socratic dialogues, laboratory experiments, collaborative problem solving, creative chaos.The connections made by good teachers are held not in their methods but in their hearts-meaning heart in its ancient sense, as the place where intellect and emotion and spirit and will converge in the human self.(p.11) Jacobs (2005) defined the fifth grace, "acting with courage," as "the strength of character enabling Catholic educational leaders to proclaim God's word to the school community and its members" (p.68).Expansion of this concept embraces the courage required of the teacher on a daily basis to implement a spiritually-based pedagogy.
The courage to teach is the courage to keep one's heart open in those very moments when the heart is asked to hold more than it is able so that teacher and students and subject can be woven into the fabric of community that learning, and living, require. (Palmer, 1998, p. 11) The courageous teacher is open to discovering creative ways for students to recognize the Spirit of God in their lives and to express the Spirit through their learning. Such a teacher is not driven by fear, reverting, for example, "to the safety of teaching by rote rather than relationship" (Palmer, 2004, p. 110), but rather, is motivated by integrity. According to Palmer (1998),
Integrity requires that I discern what is integral to my selfhood, what fits and
what does not-and that I choose life-giving ways of relating to the forces that converge within me: Do I welcome them or fear them, embrace them or reject them, move with them or against them? By choosing integrity, I become more whole, but wholeness does not mean perfection. It means becoming more real by acknowledging the whole of who I am. (p. 13) The five graces offered by Jacobs (2005) provide areas to be cultivated by the teacher who wishes to implement a spiritually-based pedagogy. These five areas form the preconditions for a pedagogy that successfully invites the creative Spirit of God into the classroom and supports the learner's expression of the Spirit throughout the curriculum.
FROM AN INDUSTRIAL-AGE TO A NEW SCIENCE APPROACH TO LEARNING
Educational literature is replete with descriptions of the paradigmatic shift that is occurring in perspectives on learning, mainly from an assembly-line approach to educating children to one that is anchored in the New Sciences (Brown & Moffett, 1999; Caine & Caine, 1994; Marzano, Pickering, & Pollock, 2001; Senge et al., 2000; Wheatley, 1999; Zemelman, Daniels, & Hyde, 2005). This section will support the contention that a spiritually-based pedagogy must be firmly based in a New Science, systems approach to student learning.
INDUSTRIAL-AGE APPROACH TO LEARNING
In the mid-19th century, educators in the United States used the factory as their model for the design of public schools. Senge et al. (2000) detailed this assembly-line influence on schooling: Like any assembly line, the [school] system was organized in discrete stages. Called grades, they segregated children by age. Everyone was supposed to move from stage to stage together. Each stage had local supervisors-the teachers responsible for it. Classes of twenty to forty students met for specified periods in a scheduled day to drill for tests. The whole school was designed to run at a uniform speed, complete with bells and rigid daily time schedules. Each teacher knew what had to be covered in order to keep the line moving, even though he or she had little influence on its preset speed, which was determined by school boards and standardized curricula. (pp. 30-31) Unfortunately, today, too many schools continue to resemble assembly lines and endorse the "transmission" pedagogical approach discussed earlier.
Based on this model, Senge et al. derived a summary of underlying assumptions about learning and schooling that are displayed in Figure 2. Clearly, a spiritually-based pedagogy, which inherently strives for the formation and transformation of students, cannot interface within an industrial-age model of education.
THE NEW SCIENCE REVOLUTION
Over the past 100 years, a "systems revolution" has been provoking a shift in scientific and social worldviews. Originating in the fields of physics and biology, this revolution has progressed throughout the cognitive and social sciences. Senge et al. (2000) stated that this shift is just at its outset, especially the appreciation of living systems as opposed to static mechanistic systems. Because it takes a very long time for a fundamental shift in scientific worldview to work its way into society, even though the beginnings of the systems view dates to 1900 or so, our institutions are still organized based on machine thinking that dates to the seventeenth century.
Probably another fifty to one hundred years will pass before the systems revolution truly becomes integral to our way of living as has the machine thinking that preceded it. (p. 52) Systems thinking, in contrast to "machine thinking," is based on the holism of living systems, rather than the exclusive focus on mechanistic parts of systems, with an emphasis placed on the relationships within those systems (Wheatley, 1999). Applied to teaching, a New Science perspective advocates: • Learner-centered learning rather than teacher-centered learning; • Encouraging variety, not homogeneity-embracing multiple intelligences and diverse learning styles; and • Understanding a world of interdependency and change rather than memorizing facts and striving for right answers. (Senge et al., 2000, p. 55) Palmer (1998), in opposition to the objectivist myth of education, proposed a spiritually-based model, which he named a "community of truth." The community of truth represents knowing quite differently….In the community of truth, as in real life, truth does not reside primarily in propositions, and education is more than delivering propositions about objects to passive auditors. In the community of truth, knowing and teaching and learning look less like General Motors and more like a town meeting, less like a bureaucracy and more like bedlam. (p. 101) Central to learning in "a community of truth" is the "subject," which represents the "great things of life," with learners interacting to form a web of relationships among themselves and with the subject matter. This interactive, relational model is in sharp contrast to the traditional, hierarchical model of instructional delivery in which "truth flows from the top down, from experts who are qualified to know truth…to amateurs who are qualified only to receive truth" (p. 101). Palmer elaborated the distinction between these two models: This distinction is crucial to knowing, teaching, and learning: a subject is available for relationship; an object is not. When we know the other as a subject, we do not merely hold it at arm's length. We know it in and through relationship. (pp. 102-103) [In such a learning community] students and the act of learning are more important than teachers and the act of teaching. The student is regarded as a reservoir of knowledge to be tapped, students are encouraged to teach each other, the standards of accountability emerge from the group itself, and the teacher's role varies from facilitator to co-learner. (p. 116) This style of teaching and learning is respectful of and hospitable to the souls of its learners, whose value is not derived from objectified means, such as test scores and assignments, but rather, from their very beings who are in relationship with one another. Learning is participative, interactive, and cooperative. Thus, this type of classroom may evolve into a sacred space; it is fertile ground for entry of the Spirit of God and welcoming to students' expressions of the Spirit in their learning.
INSTRUCTIONAL BEST PRACTICES THAT SUPPORT SPIRITUAL DEVELOPMENT
A teacher's instructional practices will either advance or impede the creation of a classroom environment that can evolve into a sacred learning space. According to Zemelman, Daniels, and Hyde (2005), practices that discourage such an environment include excesses in the following areas: "teacher directed instruction" such as lecturing; "student passivity" in the form of "sitting, listening, receiving, and absorbing information"; "one-way transmission of information from teacher to student"; "prizing and rewarding of silence in the classroom"; "classroom time devoted to fill-in-the-blank worksheets, dittos, workbooks, and other 'seatwork'"; "time spent reading textbooks and basal readers"; "attempts by teachers to thinly 'cover' large amounts of material in every subject"; "rote memorization of facts and details"; "competition and grades"; "tracking or leveling students into 'ability groups'"; "pull-out special programs"; and "use of and reliance on standardized tests" (p. 8).
In contrast, instructional practices that can advance a community of learners to incorporate the creative spirit of God into their learning consist of more: "experiential, inductive, hands-on learning"; "active learning, with all the attendant noise and movement of students doing, talking and collaborating"; "diverse roles for teachers, including coaching, demonstrating, and modeling"; "emphasis on higher-order thinking"; "deep study of a smaller number of topics, so that students internalize the field's way of inquiry"; "reading of real texts," such as entire books, primary sources and nonfiction resources; transference of responsibility to students for their work, such as self-assessment, goal setting, and record keeping; "choice for students," such as choosing their own research projects, writing topics, and team partners; "enacting and modeling the principles of democracy"; "attention to affective needs and varying cognitive styles of individual students"; "cooperative, collaborative activity"; "heterogeneous classrooms where individual needs are met through individualized activities, not segregation of bodies"; "delivery of special help to students in regular classrooms"; "varied and cooperative roles for teachers, parents, and administrators"; and "reliance on descriptive evaluations of student growth, including observational/anecdotal records, conference notes and performance assessment rubrics" (Zemelman et al., 2005, pp. 8-9). From these various instructional practices that can advance a community of learners, 13 interconnected principles emerged for Zemelman and colleagues that encapsulate this model of education. These principles are identified and explained in Figure 3. Zemelman et al.'s (2005) best practice principles may be examined through the lens of the definition of a spiritually-based pedagogy, couched in Ó Murchú's (1998) definition of spirituality cited earlier, in which a learning space is created with students that invites the creative spirit of God into their lives and, likewise, encourages students' expressions of the spirit through their learning. As mentioned earlier, for this type of learning environment to become actualized, the teacher must implement distinct teaching and learning methodologies that encourage students to become aware of ways that the spirit permeates the "whole of creation," as well as to engage learners in self-exploration of the spirit dwelling within them. Zemelman et al. proposed explicit teaching and learning methods, embedded in their 13 principles (see Figure 3), which promote the holistic development of students and nurture their spiritual development. For example, learning that is "Experiential," "Authentic," and "Holistic," is personalized and relevant to the learner. Experiential learning engages students and activates their curiosity and desire to learn. In essence, such learning methodologies activate the spirit of God that lies within each learner. Passive "drill and kill" methods deactivate the spirit within the student and create disconnects between learners and their inner lives. As stated by Zemelman et al., "Active, hands-on, concrete experience is the most powerful and natural form of learning" (p. 10). When a student is engaged in an exciting learning experience, he or she comes in contact with that dimension of self that Alphonso (2001) characterized as "the 'name' by which God calls me-that is, my truest or deepest self" (p. 8), the home base of his or her personal vocation.
Likewise, when students become involved in "meaning-making" in which they construct their own knowledge ("Constructivist") and understand concepts through higher-order, including metacognitive, thinking ("Cognitive"), they learn not only that life has meaning and consequent
STUDENT-CENTERED
The best starting point for schooling is young people's real interests; all across the curriculum, investigating students' own questions should always take precedence over studying arbitrarily and distantly selected "content."
Experiential
Active, hands-on, concrete experience is the most powerful and natural form of learning.Students should be immersed in the most direct possible experience of the content of every subject.
Holistic
Children learn best when they encounter whole ideas, events, and materials in purposeful contexts, not by studying subparts isolated from actual use.
Authentic
Real, rich, complex ideas and materials are at the heart of the curriculum.Lessons or textbooks that water down, control, or oversimplify content ultimately disempower students.
Challenging
Students learn best when faced with genuine challenges, choices, and responsibility in their own learning.
COGNITIVE
The most powerful learning comes when children develop true understanding of concepts through higher-order thinking associated with various fields of inquiry and through self-monitoring of their thinking.
Developmental
Children grow through a series of definable but not rigid stages, and schooling should fit its activities to the developmental level of students.
Constructivist
Children do not just receive content; in a very real sense, they recreate and reinvent every cognitive system they encounter, including language, literacy, and mathematics.
Expressive
To fully engage ideas, construct meaning, and remember information, students must regularly employ the whole range of communicative media-speech, writing, drawing, poetry, dance, drama, music, movement, and visual arts.
Reflective
Balancing the immersion in experience must be opportunities for learners to reflect, debrief, and abstract from their experiences what they have felt and thought and learned.
SOCIAL
Learning is always socially constructed and often interactive; teachers need to create classroom interactions that "scaffold" learning.
Collaborative
Cooperative learning activities tap the social power of learning better than competitive and individualistic approaches.
Democratic
The classroom is a model community; students learn what they live as citizens of the school.
value, but also, that meaning emerges from within themselves. When they are encouraged to reflect upon ("Reflective") and express ("Expressive") their learning in multiple, creative formats, they tap into the spirit of God within them for this information. When they are guided to express their learning collaboratively ("Collaborative") and creatively, students learn experientially that collaboration with others can be a powerful means to contributing to the social and spiritual capital of the world.
CONCLUSION
When students actively engage in their learning through New Science teaching and learning methodologies, namely through "generative" and "transformative" pedagogical models, they experience opportunities to activate the spirit of God dwelling within them. This activation propels their spiritual development, which lies at the heart of Catholic education. In contrast, Industrial-Age methods emanate from a "transmission" pedagogical model, which relegates students to passive modes of learning and may deactivate the spirit of God dwelling within them and, hence, their spiritual development. Today's "millennial generation students" crave interactivity in their learning, which may indicate this generation's need for spiritual activation.
A teacher wishing to implement a spiritually-based pedagogy must be equipped to do so with the God-given graces intrinsic to effective spiritual leadership. However, these graces must be cultivated and refined within the teacher through routine spiritual practice. According to Palmer (1998), "Who is the self that teaches?"…is the most fundamental question we can ask about teaching and those who teach-for the sake of learning and those who learn. By addressing it openly and honestly, alone and together, we can serve our students more faithfully, enhance our own well-being, make common cause with colleagues, and help education bring more light and life to the world. (p. 7)
Figure 1. Jacobs' (2005) five graces foundational to spiritual leadership, adapted and expanded from Edwards (2001).
Figure 2. Industrial-age assumptions about learning and schooling (Senge et al., 2000, pp. 35-42, 43-49): …deficient and schools fix them; learning takes place in the head, not in the body as a whole; everyone learns, or should learn, in the same way; learning takes place in the classroom, not in the world; there are smart kids and dumb kids. About School: schools are run by specialists who maintain control; knowledge is inherently fragmented; schools communicate "the truth"; learning is primarily individualistic and competition accelerates learning. | 6,843.2 | 2008-06-01T00:00:00.000 | [
"Education",
"Sociology",
"Philosophy"
] |
Readout of an antiferromagnetic spintronics system by strong exchange coupling of Mn2Au and Permalloy
In antiferromagnetic spintronics, the read-out of the staggered magnetization or Néel vector is the key obstacle to harnessing the ultra-fast dynamics and stability of antiferromagnets for novel devices. Here, we demonstrate strong exchange coupling of Mn2Au, a unique metallic antiferromagnet that exhibits Néel spin-orbit torques, with thin ferromagnetic Permalloy layers. This allows us to benefit from the well-established read-out methods of ferromagnets, while the essential advantages of antiferromagnetic spintronics are only slightly diminished. We show one-to-one imprinting of the antiferromagnetic on the ferromagnetic domain pattern. Conversely, alignment of the Permalloy magnetization reorients the Mn2Au Néel vector, an effect, which can be restricted to large magnetic fields by tuning the ferromagnetic layer thickness. To understand the origin of the strong coupling, we carry out high resolution electron microscopy imaging and we find that our growth yields an interface with a well-defined morphology that leads to the strong exchange coupling.
A basic concept of antiferromagnetic (AFM) spintronics is to store information by the alignment of the staggered magnetization or Néel vector N, typically along one out of two perpendicular easy axes [1][2][3][4] . The major benefits of using AFMs as active elements in spintronics are their intrinsically fast THz dynamics 5 and their stability against external magnetic fields, e.g., up to 30 T in the case of Mn 2 Au 6 .
Regarding the manipulation of the Néel vector orientation, the application of short current pulses creating spin-orbit torques (SOT) is a promising approach. These can be created at interfaces with heavy metal layers [7][8][9] or for metallic compounds in the bulk of the AFM itself 10 . In the latter case, only CuMnAs and Mn 2 Au have been identified to combine the required crystallographic and magnetic structure with strong spin-orbit coupling, such that a current along a specific direction can create a bulk Néel spin-orbit torque (NSOT) acting on the Néel vector 10 . Indeed, for both compounds, current pulse-induced magnetoresistance effects of the order of 1% and below were observed and associated with a rotation of the Néel vector [11][12][13][14][15] . Furthermore, the current pulse-induced reorientation of the Néel vector of CuMnAs(001) and Mn 2 Au(001) was directly demonstrated by magnetic microscopy 16,17 . From these only two available compounds with a bulk NSOT, Mn 2 Au stands out concerning potential memory applications due to its metallic conductivity, high Néel temperature (>1000 K) 18 , and magnetocrystalline anisotropy, which results in a long term room temperature stability of Néel vector aligned states 6 .
Having established current-induced writing, still, the read-out of the Néel vector orientation in a potential device poses a major challenge. The magnitude of the anisotropic magnetoresistance effects (AMR) associated with the reorientation of N of metallic antiferromagnets amounts to only 0.1-1% 13,[19][20][21][22] . The spin-Hall magnetoresistance, most often utilized for insulating AFMs, is even smaller [7][8][9] . However, most applications require magnetoresistance (MR) effects above 20% 23 .
In contrast to the case of AFM spintronics, such MR values are easily obtained in ferromagnetic (FM) spintronics, e.g., based on tunnel magnetoresistance (TMR) of FM/MgO/FM junctions 24,25 . Thus, coupling AFM layers to thin FM films enabling e.g., TMR-based read-out is highly desirable, provided that the fast dynamics of the AFM, as well as its resistance against disturbing magnetic fields, is only moderately diminished. The former requires sufficiently strong coupling forcing the magnetization to follow the Néel vector, the latter is obtained by using very thin FM layers as we show in this work. Additionally, the sensitivity to external fields could be further reduced by replacing the single FM layer by a synthetic antiferromagnetically coupled FM bilayer with zero net magnetization 26 .
Here, we demonstrate a strong exchange coupling of 40-nm-thick Mn 2 Au(001) epitaxial thin films with very thin (down to 2 nm) Permalloy (Py) layers. The coupling of the magnetization vector M F of the FM and of N results in the perfect imprinting of the AFM domain pattern of the Mn 2 Au film on the FM domain pattern of the soft Py layer. Similar imprinting was only reported for insulating AFM/metallic FM bilayers such as 1.2 nm of Co on 40 nm of LaFeO 3 27 or Fe(5 nm) on NiO (15 nm) 28 . However, in these cases, it was associated with a coercive field of only ≃100 Oe. In contrast, we show that 5000 Oe are required to reverse the magnetization of 2 nm of Py on 40 nm of Mn 2 Au(001). This, at room temperature, represents a coercive field corresponding to a field stability range, which is more than an order of magnitude larger than that in related CuMnAs/Fe(2 nm) bilayers, where a long term stable remanent magnetization was reported measuring at 200 K 29 . We show that in Mn 2 Au(001)/Py even at the coercive field of 5000 Oe M F and N do not decouple, but rotate together driven by the Zeeman energy of the FM layer. Thus, Mn 2 Au(001)/Py represents an excellent system for read-out in AFM spintronics. Furthermore, we identify the morphological origin of the exceptionally strong magnetic coupling between these AFM and FM layers.
Results
Mn 2 Au/Py samples. Mn 2 Au has a tetragonal crystal structure and orders antiferromagnetically well above 1000 K 18 . It shows collinear AFM order consisting of antiparallel stacked planes with FM order, as indicated in Fig. 1c. Within the easy (001)-plane, there are four equivalent easy <110> -directions, which results for our as-grown Mn 2 Au(001) thin films in the formation of AFM domains with a typical size of 1 μm corresponding to all four associated orientations of N 6 .
Here, we investigate Mn 2 Au(001)(40 nm)/Ni 80 Fe 20 (Py) (2 to 10 nm) bilayers, which are grown on Al 2 O 3 (r-plane) substrates with a Ta(001)-buffer layer and a capping layer of 2 nm of SiN x . Scanning transmission electron microscopy with high-angle annular dark-field imaging (STEM-HAADF) of the complete multilayer is shown in Fig. 1a. The Fourier transform of the epitaxial Mn 2 Au(001) thin film region (inset in Fig. 1a) shows regularly-spaced Bragg peaks, which confirm that the layer is monocrystalline. Most importantly, Fig. 1b shows a magnification of the interface between Mn 2 Au(001) and Py, where we find that our growth leads to a defined Au termination of the AFM thin film. This is supported by UHV STM-images of a pristine Mn 2 Au surface, which show atomically flat terraces with steps corresponding to the half or the full length of the c-axis (Fig. 1d). Such steps are consistent with the well-defined Au termination. This, as we demonstrate later, implies that the same AFM sub-lattice couples to the FM at the interface, which generates a very strong coupling and leads to a one-to-one correspondence of the AFM and FM domain patterns.
Hysteresis loops. We quantified the coupling between the Mn 2 Au(001) and Py layers by measuring magnetic hysteresis loops. Single Py films are in general magnetically soft with coercive fields of the order of a few Oe, which, as we will show below, drastically increases if they are coupled to AFM Mn 2 Au films. Figure 2 shows hysteresis loops of a Mn 2 Au(40 nm)/Py(4 nm) bilayer measured in a superconducting quantum interference device (SQUID) with the easy [110] axis of the Mn 2 Au(001) thin film aligned parallel to the magnetic field direction.
After the as-grown sample consisting of an AFM multidomain state was placed in the SQUID, subsequent hysteresis loops with increasing maximum fields from 300 to 1000 Oe were measured as shown in the left inset of Fig. 2. In this field range, we obtained smooth hysteresis loops with almost zero remanent magnetization. This is consistent with a strong coupling of the Py magnetization to the AFM domain configuration of Mn 2 Au, which either remains unaffected by the magnetic field or restores the original domain configuration when the field is zero again.
However, once the magnetic field exceeds a threshold value, square-shaped easy axis loops with a coercive field H c ≃ 1600 Oe were obtained, as shown in Fig. 2. This behavior can be readily explained by assuming a strong exchange coupling of M F and N, which results in the Zeeman energy of the FM layer driving a reorientation of N. Measuring hysteresis loops of Mn 2 Au(40 nm)/ Py bilayers with various Py thicknesses varying from 2 to 10 nm, we observed linear scaling of the coercive field H c with the inverse saturation magnetic moment 1/m F , as shown in the right inset of Fig. 2. This is consistent with the assumption that the Zeeman energy M F ⋅ H of the Py layer sets the remanent reorientation of both M F and N.
To check if this holds, we next probe the orientation of the magnetization and the Néel vector: Probing the Néel vector orientation. We investigated the orientation of N and M F of a Mn 2 Au(40 nm)/Py(4 nm)/ SiN x (2 nm) sample by x-ray absorption spectroscopy (XAS), in the surfacesensitive total electron yield (TEY) as well as in the bulk sensitive substrate fluorescence yield (FY) mode. The magnetic contrast was obtained based on the x-ray magnetic linear and circular dichroism (XMLD/XMCD) effects, as described in the Methods section (the experimental geometry is shown in Fig. 3a).
In the as-grown state of the sample, due to the relatively large diameter of the x-ray beam (≃500 μm), we average over many AFM/FM domains canceling both the XMLD and XMCD to zero. We then apply a magnetic field of 1 T (i.e., well below the spin-flop field of ≃30 T 6,20 ) parallel to the x-axis, i.e., to the easy [110]-direction of Mn 2 Au(001) and subsequently reduce the field to zero again. In agreement with the hysteresis loops discussed above, this results in a significant Ni-L 2,3 edge XMCD signal (measured at a tilt angle of Θ = 60 ∘ ) showing a sizable remanent in-plane magnetization of Py (Fig. 3b). Additionally, we demonstrate that the Néel vector of the AFM becomes aligned as well, both at the interface and in the bulk parallel to M F of the FM. The corresponding XMLD spectra (measured at Θ = 0 ∘ ) are shown in Fig. 3, obtained in the surface-sensitive TEY mode (Fig. 3c) as well as in the bulk sensitive substrate FY mode (Fig. 3d). From the characteristic sign change of the XMLD signal, we can conclude that N becomes aligned parallel to M F as deduced by comparison with previous results 30 . After applying the magnetic field parallel to the [1̄10]-direction of Mn 2 Au(001) and subsequently reducing the field to zero, the XMCD (Θ = 60 ∘ ) becomes zero since the remanent magnetization is oriented perpendicular to the direction of photon incidence. Consistently, the XMLD signal (Θ = 0 ∘ ) is inverted both at the interface and in the bulk as expected for a 90 ∘ rotated Néel vector. With this strong parallel coupling of N and M F , we expect the AFM domain pattern of Mn 2 Au, as formed during thin film growth, to be imprinted into the FM domain pattern of the Py layer, which we verify by XMLD-photoelectron emission microscopy (PEEM).
Imaging the AFM/FM domain pattern. XMLD-photoelectron emission microscopy (PEEM) of the FM domain pattern of the Py layer and simultaneously of the AFM domain pattern of the Mn 2 Au(001) layer of a Mn 2 Au(40 nm)/Py(4 nm)/SiN x (2 nm) sample was performed with a perpendicular incidence of the photon beam, as described in the Methods section.
We first discuss samples in the as-grown state, which were not exposed to a magnetic field. As shown in Fig. 4, panels a (AFM domains) and b (FM domains), the AFM domain pattern of Mn 2 Au(001) is perfectly imprinted on the FM domain pattern of the Py layer. Note that we have chosen also the FM XMLD contrast mechanism. As a result, also for the FM domains, only the axis along which M F is aligned produces a brightness contrast, not its direction, allowing for a direct comparison.
The size of the AFM/FM domains of ≃1 μm, is at least one order of magnitude larger than the typical distance between the steps at the surface shown in Fig. 1. Due to the well-defined Au termination of the Mn 2 Au(001) layer as discussed above, crossing a morphological step at the interface does not change the AFM sub-lattice to which the M F of Py couples. This enables a strong planar exchange coupling at the interface, as indicated in Fig. 5, which we address in more detail in the Discussion section.
Due to the associated one-to-one correspondence of the AFM and FM domain pattern, we can indirectly obtain microscopic images of the Mn 2 Au AFM domains using scanning electron microscopy with polarization analysis (SEMPA) 31 (Fig. 4c-e). Horizontal (Fig. 4c) and vertical (Fig. 4d) in-plane components of M F of Py are measured separately, revealing the direction of M F (Fig. 4e). Furthermore, SEMPA shows the same characteristics of the domain structure as the XMLD-PEEM images discussed above, but with the experimental advantage of the enhanced availability of a lab-based technique.
Discussion
The initial AFM domain pattern of the 40 nm Mn 2 Au(001) thin films forms during growth driven by an intrinsic mechanism such as the minimization of elastic energy 32 and couples to the Py magnetization during the room temperature deposition of this thin layer. No exchange bias is present or obtainable in the bilayers investigated here, because of the high Néel temperature >1000 K of Mn 2 Au 18 , which prevents the creation of exchange bias by standard field cooling procedures 33 .
Nevertheless, for a discussion of the magnetic coupling mechanism of the Mn 2 Au/Py bilayers, the physics of exchange bias systems 34 is a good starting point. Such systems can be used as references for the magnitude of the coupling of the Py magnetization M F to the Néel vector N of Mn 2 Au(001): The coercive fields presented above are at least one order of magnitude larger than those of typical exchange bias systems with a comparable Py-layer thickness [35][36][37][38] . Regarding other work on Mn 2 Au/FM bilayers, coercive fields of the order of only 100 Oe or less were reported, which could be related to different growth directions of these Mn 2 Au thin films 39,40 as well as to the selection of Fe as the FM layer 33,39 , or to a different morphology.
An explanation for the substantially different coercive fields is obtained by considering the morphology of the interface between the AFM and FM layers. Whereas in the general framework of exchange bias, typically interface irregularities of the AFM such as roughness or random alloy effects associated with uncompensated spins are discussed as pinning centers for FM domain walls 41 , a homogeneous exchange coupling across a well-defined, uncompensated interface can couple the respective layers much more strongly 46 . This necessarily requires ferromagnetically ordered atomic layers of one defined AFM sub-lattice forming the surface of the AFM. Nevertheless, just the appropriate type of magnetic ordering is not sufficient, as depending on the interface morphology atomic steps can result in an alternating sign of the exchange coupling if different sublattices are present at the AFM/FM interface.
However, we find that our growth of the Mn 2 Au(001)/Py bilayers leads to a well-defined Au termination of the surface of the Mn 2 Au(001) thin films (Fig. 1). This means that everywhere at the surface the same AFM sub-lattice couples to the FM resulting in the same sign of exchange coupling over each AFM domain as shown schematically in Fig. 5. This leads to a maximum coupling strength and thus enables the perfect imprinting of the AFM domain pattern of the Mn 2 Au(001) thin film on the FM domain structure of the Py layer, which by itself would be magnetically highly isotropic and soft.
The coupling of the Py magnetization M F to the Néel vector N is so strong that the layers are not even decoupled at the large coercive field but jointly rotated. We describe this reorientation under the action of the magnetic field using a macrospin model. Within this model, we describe the density of the magnetic energy per unit area of the Mn 2 Au/Py bilayer by Eq. (1), where the first term contains the magnetocrystalline anisotropy energy of Mn 2 Au (H an > 0 is the in-plane anisotropy field, M s = |N|), and the second term describes the exchange coupling of the Néel and Py magnetization vectors. t AF and t F are the thicknesses of the antiferromagnetic and ferromagnetic layers, J coup > 0, and ξ describe the strength and the characteristic length of the exchange coupling between the AFM and FM layers. The third term represents the Zeeman energy of the Py layer in a magnetic field H. The coordinate axes x and y are aligned along the easy magnetic axes of Mn 2 Au. The equilibrium configurations are obtained by minimizing (1) with respect to the magnetic vectors M F and N. In zero field M F ↑↑ N are both parallel to the x or y axis. A magnetic field H along x splits the degeneracy between the states with M F ↑↑H, M F ↑↓H, and M F ⊥H and creates a ponderomotive force ∝ BM F , which acts on the domain walls 47 . This force shifts the domain walls thus reducing the fraction of the energetically unfavorable states (domains). This process shows up in the smooth growth of the magnetization as shown in Fig. 2a. As long as the value of the applied field is not sufficient to remove all domain walls from the sample, the process is reversible, corresponding to the minor hysteresis loops.
However, above a threshold field value, the sample develops a single domain state and further cycling of the magnetic field induces switching between the states M F ↑↑H and M F ↑↓H. From the stability conditions for these states we obtain the coercive field. In the limit of strong exchange coupling, J coup M F ξ ≫ H an t AF , the coercive field H coer → 4H an M s t AF /(M F t F ) is associated with the magnetocrystalline anisotropy of the Mn 2 Au layer and shows a linear dependence on the inverse magnetization 1/m F = 1/(M F t F ) as observed experimentally (see Fig. 2b).
Inserting the experimentally determined magnetization of the Py layers of M F = 1.8μ B per Ni 80 Fe 20 -atom and taking M S = 4μ B per Mn-atom from 18 , we obtain H an ≃ 40 Oe, which corresponds to an anisotropy energy K 4 = 2 ⋅ 4μ B ⋅ 40 ⋅ 10 −4 T = 1.8 μeV per formula unit (f.u.). This is in good agreement with our previous estimation of K 4 > 1 μeV/f.u. 6 , thereby supporting the validity of our model assumptions.
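To make the quoted strong-coupling limit concrete, the scaling H coer → 4H an M s t AF /(M F t F ) can be evaluated numerically. The sketch below is illustrative only: the parameter values are placeholders in arbitrary consistent units rather than the measured Py and Mn 2 Au magnetizations, and it simply exhibits the linear dependence on 1/(M F t F ) and the growth with t AF described above.

```python
# Illustrative sketch of the strong-coupling limit of the coercive field quoted in
# the text: H_coer -> 4 * H_an * M_s * t_AF / (M_F * t_F).
# All numerical values are placeholders in arbitrary consistent units, not the
# measured Py / Mn2Au magnetizations.

def coercive_field(h_an, m_s, t_af, m_f, t_f):
    """Strong-coupling limit of the coercive field of the AFM/FM bilayer."""
    return 4.0 * h_an * m_s * t_af / (m_f * t_f)

H_AN, M_S, M_F = 1.0, 1.0, 1.0   # anisotropy field and magnetizations (placeholders)

# Linear scaling with 1/(M_F * t_F): thinner FM layers give larger coercive fields.
for t_f in (2.0, 4.0, 10.0):
    print(f"t_AF = 40 nm, t_F = {t_f:4.1f} nm -> H_coer = {coercive_field(H_AN, M_S, 40.0, M_F, t_f):6.1f} (arb. units)")

# The coercive field also grows with the AFM thickness t_AF in this limit.
for t_af in (40.0, 60.0):
    print(f"t_AF = {t_af:4.1f} nm, t_F = 4 nm -> H_coer = {coercive_field(H_AN, M_S, t_af, M_F, 4.0):6.1f} (arb. units)")
```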
H coer should also increase with the thickness t AF of the Mn 2 Au layer, which we indeed observed experimentally (e.g., H coer = 2000 Oe for Mn 2 Au(60 nm)/Py(4 nm)). However, this increase is not linear, presumably due to a thickness dependence of the crystallographic order.
In summary, we demonstrated an extremely strong exchange coupling between the Néel vector N of the metallic antiferromagnet Mn 2 Au and the magnetization M F of ferromagnetic Py thin films. This results in the perfect imprinting of the AFM domain patterns of 40 nm Mn 2 Au(001) thin films on the FM domain pattern of thin (2-10 nm) Py layers, while maintaining high stability in external magnetic fields. The strong coupling results from the particular morphology that we obtain from our growth at the Mn 2 Au/Py interface, identified by TEM and STM, where within every domain always the same AFM sub-lattice couples to the FM thus maximizing the exchange coupling. This strong coupling of N and M F enables the electric detection of the Néel vector orientation via standard techniques used for ferromagnetic thin films, thereby providing a solution for the challenging read-out in antiferromagnetic spintronics.
Methods
All layers of the Al 2 O 3 (r-plane)/Ta(001)(13 nm)/Mn 2 Au(001)(40 nm)/Ni 80 Fe 20 (Py)(2 to 10 nm)/SiN x (2 nm) samples were prepared by rf sputtering. The epitaxial Ta(001) buffer layers were sputtered with a substrate temperature of 700 ∘ C, the epitaxial Mn 2 Au(001) thin films were deposited at 500 ∘ C followed by an annealing procedure at 700 ∘ C as described in refs. 48,49 . All other layers are polycrystalline and were deposited at room temperature. SiN x (2 nm) sputtered from a Si 3 N 4 target in Ar provides a very effective capping layer, which prevents sample oxidation while being highly transparent for photoemitted electrons 50 .
The magnetic hysteresis loops of the samples were measured in a Quantum Design MPMS SQUID-magnetometer.
The antiferromagnetic order in Mn 2 Au causes an x-ray magnetic linear dichroism (XMLD), i.e., an asymmetry in the absorption of linear polarized x-rays at the Mn-L 2,3 -edge for the polarization direction parallel and perpendicular to N 30 . It is sensitive to the components of N, which are parallel versus perpendicular to the electric field vector of the x-ray beam, but it does not change sign upon reversal of N. Thus we were able to investigate the orientation of N of a Mn 2 Au(40 nm)/Py(4 nm)/SiN x (2 nm) sample by x-ray absorption spectroscopy (XAS) at the I06 branch line at Diamond Light Source, which is equipped with a superconducting vector magnet. In parallel to probing the Néel vector orientation by XMLD, we were also able to probe the orientation of the magnetization of the Py layer using the x-ray circular dichroism (XMCD) at the Ni-L 2,3 edge. The XMCD is an asymmetry in the absorption of circular polarized x-rays determined by the component of M F parallel to the direction of the incoming x-ray beam 51 . It does change sign upon reversal of M F . In the surface-sensitive total electron yield (TEY) mode, the sample current induced by the photoemitted electrons with an escape depth of 2-3 nm is probed 52 . In the bulk sensitive substrate fluorescence yield (FY) mode, the measurement signal originates from the x-rays transmitted through the magnetic layer stack generating fluorescence in the MgO substrates, which is probed by a photo diode 53 . The experimental geometry is shown in Fig. 3a.
XMLD-photoelectron emission microscopy (PEEM) was performed with a perpendicular incidence of the photon beam at the MAXPEEM beamline of MAX IV, Lund. The FM XMLD contrast was obtained at the L 3 edge of Fe 54 , the AFM XMLD contrast at the L 3 edge of Mn 30 . The SPELEEM microscope (Elmitec GmbH) of the MAXPEEM beamline provides, due to its perpendicular photon incidence on the sample surface, the highest possible AFM contrast for the in-plane orientation of the Néel vector of Mn 2 Au.
A SEMPA 31 differs from an ordinary SEM in that it is additionally equipped with a spin detector to analyse the net spin of outgoing secondary electrons from ferromagnetic materials. This enables a spatial mapping of the spin polarization and by extension of the domain structure with a spatial resolution of less than 30 nm. As a ferromagnet possesses an inherent spin asymmetry in its density of states, the outgoing population of secondary electrons (which are ejected from a broad binding energy range) is spin-polarized. These secondary electrons undergo spin-dependent scattering at a W(100) single crystal via spin-polarized low-energy electron diffraction (SPLEED) 55 . Thus, up and down spin electrons are preferentially deflected towards separate electron counters and the difference of the count rates reflects the local magnetization direction. | 5,601.4 | 2021-06-04T00:00:00.000 | [
"Physics"
] |
Ensemble Learning For Imbalanced Data Classification Problem
Imbalanced data is a kind of data that occurs in real life, such as medical diagnosis, in which records of seriously ill patients are far outnumbered by records of healthy ones. Such imbalanced data affect the learning performance of algorithms in data mining. The decision boundary chosen by most standard machine learning algorithms on imbalanced data tends to be biased toward the majority class and hence misclassifies the minority class. Therefore, we present an approach for dealing with the imbalanced data classification problem by applying decision tree ensemble learning, using both bagging and boosting techniques, to build models that compensate for misclassification with cost-sensitive learning. In this research, we build the model templates from synthetic data with different characteristics and choose an appropriate model template for real data with different imbalance ratios and overlapping ratios. The results showed that the chosen model templates can solve the imbalanced data classification problem efficiently, although some model templates cannot classify correctly when the imbalance ratio increases.
Introduction
Data mining (1) is a method that has been extensively used to retrieve hidden knowledge from a large information repository. Data mining tasks fall into many categories depending on the purpose of the application. One of those categories is data classification, which aims to learn patterns in order to make predictions about the class of unknown data. Most standard algorithms for data classification can be applied very efficiently in terms of overall classification accuracy if the data in each class are in equal proportion. However, these algorithms show poor learning performance when classifying imbalanced data, in which the number of instances in the class of interest is much smaller than in the other classes (2) .
For example, we can demonstrate a comparison between classifying balanced and unbalanced data with 300 instances and two classes. For the balanced dataset, the amounts of data in the two classes are equal; that is, 150 instances in each class. For the unbalanced dataset, there are 285 instances in class a, whereas there are only 15 instances in class b. Both datasets are classified by decision tree induction, and the results are shown in Fig. 1. Both datasets show good performance in terms of overall classification accuracy. When considering the accuracy in each class, however, we found that class b is classified more accurately in the balanced data than in the imbalanced data: the classification accuracy drops drastically from 0.947 to 0.467. This example indicates that classifying imbalanced data affects the learning performance of algorithms, which tend to be biased toward the majority group and cause a high misclassification rate over the minority group.
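The effect illustrated in Fig. 1 can be reproduced with any standard decision-tree implementation. The following sketch assumes scikit-learn and randomly generated Gaussian data rather than the datasets used for Fig. 1, so the exact numbers will differ from 0.947 and 0.467, but the drop in minority-class accuracy on the 285/15 split shows the same qualitative behaviour.

```python
# Minimal sketch (assumes scikit-learn): per-class accuracy of a decision tree on
# a balanced (150/150) versus an imbalanced (285/15) two-class dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def make_data(n_a, n_b):
    # Two Gaussian classes with partial overlap.
    xa = rng.normal(loc=0.0, scale=1.0, size=(n_a, 2))
    xb = rng.normal(loc=1.5, scale=1.0, size=(n_b, 2))
    return np.vstack([xa, xb]), np.array([0] * n_a + [1] * n_b)

for name, (n_a, n_b) in {"balanced": (150, 150), "imbalanced": (285, 15)}.items():
    X, y = make_data(n_a, n_b)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    y_pred = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr).predict(X_te)
    # Recall of each class equals the accuracy within that class.
    acc_a = recall_score(y_te, y_pred, pos_label=0)
    acc_b = recall_score(y_te, y_pred, pos_label=1)
    print(f"{name}: class-a accuracy = {acc_a:.3f}, class-b accuracy = {acc_b:.3f}")
```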
The problem of classifying imbalanced data mentioned above has drawn attention from many researchers, who have proposed various methods to solve it. The proposed methods focus on more accurate classification of the minority group. Some important work proposing such methods is as follows: Brown and Mues (3) proposed the undersampling technique to deal with five imbalanced credit scoring datasets.
Cateni et al. (4) proposed oversampling and undersampling techniques to deal with benchmark imbalanced datasets from the KEEL Repository.
Lopez et al. (5) studied the classification performance of the hybrid techniques SMOTE+ENN and SMOTE. They solved the problem by combining techniques at the data level and the algorithm level and focused on cost-sensitive learning. They compared these techniques with their proposed hybrid techniques and performed experiments with 66 datasets taken from the KEEL Repository.
Krawczyk et al. (6) proposed decision tree ensemble classification and cost-sensitive learning to deal with six binary benchmark imbalanced datasets from the KEEL Repository. They proposed a technique to prune the decision trees using a novel algorithm and an optimal C minority derived from ROC analysis. They compared the proposed method with six other methods. The result showed that their proposed method was efficient on some datasets.
Liao et al. (7) proposed ensemble learning for binary classification. They used a Support Vector Machine (SVM) for rebalancing data in the preprocessing stage and then selected the features for ensemble learning with a Back-Propagation Neural Network (BPNN). The outputs from ensemble learning were used to build new knowledge by applying rough set theory. They performed experiments with listed electronics companies from 2005 to 2011: 63 financial-crisis corporations and 2680 non-financial-crisis corporations. The result showed that their proposed method was more efficient than other methods.
We recognize the importance of solving the imbalanced data classification problem with an effective method. Therefore, we present an approach for dealing with imbalanced datasets by applying decision tree ensemble learning, using both bagging and boosting techniques, to build models, combined with a compensation technique that handles misclassification through cost-sensitive learning.
The rest of this research is organized as follows: Section 2 gives details of the background and relevant techniques. Section 3 presents details of our proposed method. The experimental results and analysis are presented in Section 4. Finally, the research is concluded in Section 5.
Imbalanced Data
Imbalanced data is data in which the number of instances in the class of interest is much smaller than in the other classes (2) . The group of data that has the larger number of instances is called the majority class or negative class, whereas the group of data that has the smallest number of instances is called the minority class or positive class (8) . When classifying imbalanced data, the decision boundary chosen by standard algorithms tends to be biased toward the majority class and to misclassify the minority class, as illustrated in Fig. 2.
Characteristics of imbalanced data that can influence the classification algorithms (9) are divided into three cases as follows: (a) Imbalanced ratio. Imbalanced data can be characterized by the degree of imbalance, which represents the ratio between the number of data instances in the majority class (n majority ) and the number of data instances in the minority class (n minority ) (10) . The imbalance ratio is defined by Eq. (1) as IR = n majority / n minority . (b) Lack of data. This problem occurs when the size of the sample in the minority class is too small (9) : a small sample size makes it difficult to find the patterns.
(c) Overlapping ratio between classes. Overlap occurs when the data of the classes share a region of feature space. Overlap that occurs in conjunction with imbalanced data results in a more complex situation for classification (11) . The maximum Fisher's discriminant ratio (F1) is one measure of the overlap between classes; for each feature it compares the squared difference of the class means with the sum of the class variances, and F1 is the maximum of this ratio over all features, as defined by Eq. (2). (A brief computational sketch of the imbalance ratio and F1 appears after the list of approaches below.) The methods for solving the imbalanced classification problem (5,12) can be divided into three approaches as follows: (a) Data-level approaches. This approach solves the problem in a pre-processing stage by rebalancing the class distribution using sampling techniques: oversampling, undersampling, and hybrid techniques.
(b) Algorithm-level approaches. This approach attempts to adapt existing algorithms by adjusting their parameters.
(c) Cost-sensitive approaches. This approach combines the data-level approach, by adding a special cost to misclassification, and the algorithm-level approach, by modifying the algorithms toward classifications that lead to fewer errors.
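As a concrete illustration of items (a) and (c) of the data characteristics above, the following sketch computes the imbalance ratio of Eq. (1) and a per-feature Fisher's discriminant ratio for a labelled dataset. It assumes NumPy and the standard per-feature form of the maximum Fisher's discriminant ratio, since Eq. (2) itself is not reproduced in the text.

```python
# Sketch of the two data characteristics discussed above (assumes NumPy and the
# standard per-feature form of the maximum Fisher's discriminant ratio).
import numpy as np

def imbalance_ratio(y):
    """IR = n_majority / n_minority (Eq. 1)."""
    counts = np.bincount(y)
    return counts.max() / counts.min()

def max_fisher_ratio(X, y):
    """F1: maximum over features of (mu0 - mu1)^2 / (var0 + var1)."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0)
    return float(np.max(num / den))

# Toy example: 285 majority vs 15 minority instances, three features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (285, 3)), rng.normal(1.0, 1.0, (15, 3))])
y = np.array([0] * 285 + [1] * 15)
print("IR =", imbalance_ratio(y), " F1 =", round(max_fisher_ratio(X, y), 3))
```

A larger F1 indicates better-separated (less overlapping) classes, while a larger IR indicates a more severe imbalance.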
Ensemble Learning
A single model can achieve high classification performance, but its fixed set of parameters introduces bias. Such bias can be reduced through ensemble learning.
The performance of ensemble learning depends on the precision of the individual classifiers. In ensemble classification learning, multiple classifiers learn from the original dataset together; the results of learning are combined and then used to classify unknown data. The process of ensemble learning is given in Fig. 3 (13) . Ensemble learning can be divided into three approaches as follows (a sketch of the first two appears after this list): (a) Boosting method. Boosting (14) is an ensemble classification method in which each classifier receives a weight derived from the precision of its learning. The resulting models are used to predict the unknown data by majority vote. The popular technique is AdaBoost.
(b) Bagging method. Bagging (15) builds the models from the same learning algorithm, but each model learns from a different sample of instances. This method also uses majority vote for prediction of unknown data. The popular technique is Bag.
(c) Random subspace method. The random subspace method, or attribute bagging (16) , learns from the same dataset but performs sampling without replacement over the features. The method also uses majority vote for the prediction of unknown data.
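The bagging and boosting approaches described above can be sketched with scikit-learn's generic implementations. These are stand-ins for the Bag and AdaBoostM1 templates used later in this paper, not the authors' Matlab implementation, and the dataset here is synthetic.

```python
# Sketch of decision-tree bagging and boosting on a synthetic imbalanced dataset
# (assumes scikit-learn; both ensembles use decision trees as their default base learner).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=3, n_informative=3,
                           n_redundant=0, weights=[0.9, 0.1], random_state=0)

ensembles = {
    "Bagging (majority vote over resampled trees)": BaggingClassifier(n_estimators=200, random_state=0),
    "AdaBoost (weighted weak trees)": AdaBoostClassifier(n_estimators=200, random_state=0),
}
for name, model in ensembles.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy")
    print(f"{name}: balanced accuracy = {scores.mean():.3f}")
```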
Cost Sensitive Learning
Cost-sensitive learning takes into account the cost of misclassification. Penalties for misclassification are encoded as a cost matrix, as shown in Table 1.
From Table 1, let C(i,j) be the cost of predicting a sample in class i as class j. C(0,0) and C(1,1) are the costs of correct classification, which are set equal to 0. C(0,1) is the cost of misclassifying a majority-class instance as minority, and this cost is set to 1. C(1,0) is the cost of misclassifying a minority-class instance as majority; this cost is C minority , which can be adjusted according to the specific algorithm.
The most important issue in solving the imbalanced data classification problem is correctly recognizing the positive instances (minority class) rather than the negative instances (majority class). Therefore, the cost of misclassifying the minority class must be higher than the cost of misclassifying the majority class (C(1,0) > C(0,1)).
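One common way to apply the cost matrix of Table 1 during training is to weight every minority-class sample by C minority, so that errors on the minority class are penalized more heavily than errors on the majority class. The sketch below assumes scikit-learn and uses sample weights as the carrier of the cost matrix; this is one possible realization, not necessarily the exact mechanism used by the authors.

```python
# Sketch of cost-sensitive training (assumes scikit-learn): the cost matrix of
# Table 1 is applied as per-sample weights, so that misclassifying a minority
# instance (label 1) costs C_minority while misclassifying a majority instance costs 1.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_cost_sensitive_tree(X, y, c_minority):
    sample_weight = np.where(y == 1, float(c_minority), 1.0)  # C(1,0) = C_minority, C(0,1) = 1
    return DecisionTreeClassifier(random_state=0).fit(X, y, sample_weight=sample_weight)

# Example: with an imbalance ratio of 10, the text sets C_minority = 10.
# tree = fit_cost_sensitive_tree(X_train, y_train, c_minority=10)
```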
Methodology
In this research, our main objective is to find a classification model that solves the imbalanced data classification problem with high accuracy and efficiency. Our concern is to improve the process of imbalanced classification at different imbalance ratios and overlapping ratios between classes. We apply decision tree ensemble learning, using both bagging and boosting techniques, to build models and compensate for misclassification with cost-sensitive learning by building a cost matrix and then using its values to adjust the parameters of ensemble learning. We also find the optimal number of trees by visualization. The framework and the steps are shown in Fig. 4; each stage is explained in detail as follows: (a) Building the Model Templates. This stage builds the model templates with the following two steps: 1.
Generate the numeric synthetic data, 1000 instances with normal distribution, slightly overlapped with imbalance ratios of 10%.
(b) Building the System Model This stage is for building the system model with the following six steps: 1.
Normalize the features to zero mean and a standard deviation equal to 1.
2. Sample the data using stratified sampling to draw samples from the imbalanced data at several imbalance ratios. We call the sampled data DB1.
3. Find the optimal cost value for the minority class by generating a cost matrix for the misclassification cost. We set the IR of DB1 to be C minority . The other values in the cost matrix are constants: C majority equals 1, and C(0,0) and C(1,1) equal 0.
4. Match the model by analyzing the characteristics of DB1 and then choosing the appropriate model from the model templates.
5. Build the imbalanced classification model using the model templates and the best value of C minority . We initialize the number of decision trees to 200 and test the model by k-fold cross-validation with k = 5.
6. Find the optimal number of decision trees. We employ visualization to reduce the number of decision trees obtained from ensemble learning to a suitable number: the visualization shows the test classification error for each number of decision trees, and we choose the top-10 decision trees with the minimum error.
(c) Test Performance of the Ensemble Model. The optimal ensemble model is tested to evaluate its performance with the evaluation measures (a condensed sketch of stages (b) and (c) follows).
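The stages above can be strung together roughly as follows. This is a condensed, hypothetical sketch assuming scikit-learn and AdaBoost as the matched template: scanning a small grid of ensemble sizes stands in for the paper's visualization-based choice of the number of trees, and the helper name build_model is not from the original work.

```python
# Condensed sketch of stages (b) and (c): normalization, C_minority = IR, ensemble
# training with sample weights, 5-fold cross-validation, and selection of the
# number of trees with the lowest CV error. Helper names and the grid of tree
# counts are assumptions, not taken from the original paper.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler

def build_model(X, y, n_trees_grid=(10, 30, 50, 100, 200)):
    X = StandardScaler().fit_transform(X)            # step 1: zero mean, unit standard deviation
    counts = np.bincount(y)
    c_minority = counts.max() / counts.min()         # step 3: C_minority set to the imbalance ratio
    w = np.where(y == 1, c_minority, 1.0)            # cost matrix applied as sample weights
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)   # step 5: k = 5
    best_err, best_n = np.inf, None
    for n in n_trees_grid:                           # step 6: pick the tree count with minimum CV error
        fold_errs = []
        for train_idx, test_idx in cv.split(X, y):
            clf = AdaBoostClassifier(n_estimators=n, random_state=0)
            clf.fit(X[train_idx], y[train_idx], sample_weight=w[train_idx])
            fold_errs.append(np.mean(clf.predict(X[test_idx]) != y[test_idx]))
        err = float(np.mean(fold_errs))
        if err < best_err:
            best_err, best_n = err, n
    return best_n, best_err                          # stage (c): the selected size and its CV error
```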
Datasets
For our experiment, the proposed ensemble models have been developed and applied for binary classification on the following datasets: (a) Synthetic datasets. The synthetic datasets were created using Matlab. We created five datasets that have a slight overlap, an initial imbalance ratio of 10%, two classes, and three features. Details of the synthetic datasets and the optimal model templates are given in Table 2, and one of the synthetic datasets is shown in Fig. 5.
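The Matlab generation script is not reproduced in the paper; the following NumPy sketch produces data in the same spirit (1000 instances, two Gaussian classes, three features, roughly 10% imbalance). The mean separation that controls the overlap is an assumed value.

```python
# NumPy sketch of a synthetic dataset in the spirit described above: 1000 instances,
# two Gaussian classes, three features, ~10% imbalance, slight overlap.
# (The authors used Matlab; the mean separation chosen here is an assumption.)
import numpy as np

def make_synthetic(n=1000, n_features=3, minority_frac=0.10, separation=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n_min = int(n * minority_frac)
    n_maj = n - n_min
    X_maj = rng.normal(0.0, 1.0, size=(n_maj, n_features))
    X_min = rng.normal(separation, 1.0, size=(n_min, n_features))  # shift controls class overlap
    X = np.vstack([X_maj, X_min])
    y = np.array([0] * n_maj + [1] * n_min)
    return X, y

X, y = make_synthetic()
print(X.shape, np.bincount(y))   # (1000, 3) [900 100]
```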
Evaluation Metrics
In order to evaluate the effectiveness of the proposed method, we used a confusion matrix to show the accuracy of the classification and the reliability of the model. Details of the confusion matrix are given in Table 4.
In Table 4, each row of the matrix shows the number of actual instances of each class and each column shows the number of predicted instances of each class. It is divided into the following four cases: TP: number of instances that are correctly classified as the positive class.
TN: number of instances that are correctly classified as negative class.
FP: number of instances that are negative class incorrectly classified as positive class.
FN: number of instances of the positive class incorrectly classified as the negative class. Specificity measures the precision of the classification model in classifying the negative class and is defined as the ratio between the number of correctly classified negative instances and the total number of actual negative instances, TN / (TN + FP), as given by Eq. (4). Accuracy evaluates the overall performance of the model over all classes and is defined as (TP + TN) / (TP + TN + FP + FN), as given by Eq. (5). (These measures, together with the sensitivity of Eq. (3), are sketched in code below.)
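For reference, the three evaluation measures can be written directly from the confusion-matrix counts defined above; the counts in the example call are hypothetical.

```python
# Sensitivity, specificity, and accuracy (Eqs. 3-5) computed from the confusion
# matrix counts defined above.
def sensitivity(tp, fn):
    return tp / (tp + fn)                     # Eq. (3): recall of the positive (minority) class

def specificity(tn, fp):
    return tn / (tn + fp)                     # Eq. (4): recall of the negative (majority) class

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)    # Eq. (5): overall accuracy

# Example with hypothetical confusion-matrix counts.
tp, fn, tn, fp = 12, 3, 270, 15
print(sensitivity(tp, fn), specificity(tn, fp), accuracy(tp, tn, fp, fn))
```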
Results and Analyses
In this section, we present the results from the evaluation of our proposed model using six binary benchmark imbalanced datasets. We perform stratified sampling to draw samples from the imbalanced datasets at different imbalance ratios and then analyze the characteristics of the data to find the suitable model from the model templates. The resulting suitable model templates are given in Table 5.
We derive C minority from the imbalance ratio (IR) and then use it in the step of building the ensemble model. The initial number of decision trees is 200, and we then reduce this number to obtain the optimal number of decision trees using visualization.
Table 5. Optimal model templates for benchmark datasets. Fig. 6 shows the classification error of the yeast dataset as the number of decision trees is reduced toward the smallest error rate; the optimal number of decision trees equals 30.
From the optimal model templates shown in Table 5, we performed the experiments over the benchmark datasets with these models. We also performed the experiments with the rest of the model templates as a baseline for comparison. We experimented with three imbalance ratios: 1:10, 1:25 and 1:50; each imbalance ratio is set equal to C minority . The performance of these models in terms of sensitivity (SE), specificity (SP) and accuracy (Acc) is given in Table 6, Table 7 and Table 8, respectively. The results of the experiments in Table 6, Table 7 and Table 8 show that the chosen model templates could classify efficiently, such as the RUSBoost model for the yeast and pima datasets and the TotalBoost model for the page-blocks dataset. For the vehicle dataset, the chosen LogitBoost model template is inappropriate; the RUSBoost model could classify efficiently. The rest of the datasets demonstrate that some of the chosen model templates can classify efficiently and that some model templates that were not chosen (the RUSBoost model for the shuttle and page-blocks datasets) are also able to classify efficiently. Some model templates are unable to classify efficiently when the imbalance ratio increases: for example, the Bag model for the segment dataset can classify correctly at an imbalance ratio of 1:10, and the AdaBoostM1, TotalBoost and LogitBoost models for the segment dataset can classify correctly at an imbalance ratio of 1:25. For increasing imbalance ratios of the shuttle dataset, our proposed method could improve the classification of imbalanced data significantly with the chosen model template, the LogitBoost model.
A comparison with the results of Krawczyk et al. (6) is shown in Table 9. The proposed method shows quite satisfactory performance, especially for imbalance ratios of 1:25 and 1:50. For an imbalance ratio of 1:10, the proposed method was statistically better on three of the imbalanced datasets, while the competing method was statistically better on the rest. For an imbalance ratio of 1:25, the proposed method was statistically better on all six imbalanced datasets. For an imbalance ratio of 1:50, the proposed method was statistically better on five of the imbalanced datasets, while the competing method was statistically better on the rest.
Conclusions
Imbalanced data classification is a significant challenge for standard machine learning algorithms. In this research, we propose a method to deal with imbalanced data classification problems, with the main focus on improving recognition of the minority class. We combine cost-sensitive learning with ensemble decision tree classification using bagging and boosting techniques. The number of decision trees is also reduced to an optimal number by visualization. We create normally distributed synthetic data with binary classes, three features, and 1000 instances. Then, we build the model templates from five algorithms: AdaBoostM1, Bag, TotalBoost, LogitBoost and RUSBoost. We analyze the benchmark datasets and select the best model template by considering the overlap between classes and the mean and standard deviation of the minority and majority classes, matching each dataset to the closest model template. The experiments show that the overlap ratio between classes affects the performance of the proposed model. If a dataset has overlap between classes, the model can classify correctly at an imbalance ratio of no more than 1:25, and the appropriate ensemble techniques are boosting methods such as RUSBoost, LogitBoost, TotalBoost, and AdaBoostM1. The Bag model classifies correctly at an imbalance ratio of no more than 1:10. The best model is RUSBoost, which can classify imbalanced data that have overlap between classes and a high imbalance ratio.
Fig. 1. Comparison of classification between a balanced dataset and an imbalanced dataset.
Fig. 2 .
Fig. 2. Linear classification of imbalanced data which bias towards majority class.
Table 1.Cost matrix C for binary classification.
Fig. 4 .
Fig. 4. The framework of steps for building the imbalanced classification models.
(a) Sensitivity, True Positive Rate (TPRate), or Recall. This measure shows the ability of the model to classify the positive class; it is defined as the ratio between the number of correctly classified positive-class instances and the total number of actual positive-class instances. The sensitivity is defined by Eq. (3): Sensitivity = TPRate = Recall = TP / (TP + FN) (3). (b) Specificity or True Negative Rate (TNRate)
Fig. 6 .
Fig. 6. The classification error of the yeast dataset with a varying number of decision trees.
Table 2 .
Details of the synthetic datasets and optimal model templates.
Table 3 .
Details of the datasets used in the experiments.
Table 4 .
Confusion Matrix for binary classification.
Table 6 .
Classification results for the benchmark datasets at an imbalance ratio of 1:10; the best value of C minority is 10.
Table 7 .
Classification results for the benchmark datasets at an imbalance ratio of 1:25; the best value of C minority is 25.
Table 8 .
Classification results for the benchmark datasets at an imbalance ratio of 1:50; the best value of C minority is 50.
Table 9 .
Comparison between the best results from our proposed method and the method proposed by Krawczyk et al. (6).
| 4,354 | 2015-02-05T00:00:00.000 | [ "Computer Science" ] |
Links Between Optical and X-ray Light in Cygnus X-2
We observed the low mass X-ray binary Cyg X-2 for a total of 18 nights over two observing runs in July and September of 2006, using the Otto Struve Telescope at McDonald Observatory and the Rossi X-ray Timing Explorer. Using discrete cross correlations, we found peaks occurring at near-zero lags in the flaring branch of the colour-colour diagram, which could signify reprocessing, in addition to an anti-correlation within the normal branch. When comparing optical flux to the system's placement on the Z track, two distinct behaviors were seen: (1) a state with no correlation, and (2) a multi-valued (horizontal and normal branches)/correlated (flaring branch) state. The correlation was the result of direct steps and more gradual falls to and from the flaring branch respectively. Finally, we modeled time-resolved spectra in 64 second bins with an extended accretion disc corona model. We found that correlations occurred between the optical and the various fitted parameters, particularly the blackbody normalization (and blackbody radius by extension) in higher intensity regions. Despite this, the Z track location was found to be a far better predictor of physical parameters than the optical flux, with clean correlations seen in every branch of the Z track. Where optical correlations are found, the location on the Z track was a better predictor of optical flux than any individual physical parameter.
INTRODUCTION
An X-ray binary (XRB) system is a semi-detached binary system that contains a normal star or white dwarf donor and a neutron star (NS) or black hole (BH) accretor. Through processes such as Roche lobe overflow, winds, or circumstellar disc interaction, material streams down into an accretion disc surrounding the central object. At the inner radii of the accretion disc, there is enough viscous heating for the material to emit X-rays. Depending on the mass of the donor, an XRB can be divided into low mass (< 1 M⊙) or high mass (> 10 M⊙) classes.
Neutron star low mass XRBs (LMXBs) can be divided into two subclasses, known as Z types and atolls (Hasinger & van der Klis 1989), depending on the shape of their X-ray colour-colour diagrams (CDs) and hardness-intensity diagrams (HIDs). Atolls have three main branches: the island state takes the shape of a formless clump, and the lower and upper banana branches together create a curve shaped like their namesake. Z sources are shaped like the letter Z, and have three branches as well: the horizontal branch (HB), the normal branch (NB), and the flaring branch (FB). They tend to be brighter than their atoll counterparts, generally being near or above the Eddington limit, but also less numerous, with only six persistent Z sources having been confirmed in our Galaxy. Z sources also move through all three states more quickly than atolls, on the orders of hours to days and weeks respectively (Hasinger & van der Klis 1989). Finally, both see their shapes change and translate on the CDs, something known as secular drift. Cyg X-2 in particular has been observed to have a high amount of secular drift (Kuulkers et al. 1996).
The main driver of where LMXBs lie on their CDs is generally thought to be the mass accretion rate.Traditionally, it has been thought to increase in Z sources from HB to FB, but there are a number of proposed alternatives.Church et al. (2006) suggests that minimum accretion occurs between the NB and FB (the soft apex), while Lin et al. (2009) claims that other physical properties drive the location, and that mass accretion rate is constant across the whole track.
Z sources can be further broken down into two subcategories (Kuulkers et al. 1994): Sco-likes and Cyg-likes. Sco-likes (Sco X-1, GX 17+2, GX 349+2) are named after Sco X-1, and have a Z track shaped more like a Greek "ν", with a short, upturned HB, and a prominent FB. Cyg-likes (Cyg X-2, GX 340+0, GX 5-1) are named after Cyg X-2, and have a Z track that looks more like the letter "Z", with a more prominent HB and smaller FB than the Sco-likes. A number of mechanisms have been proposed to explain the differences between the aforementioned subtypes, as well as (more generally) Z types and atolls. Suggestions included inclination angles, NS spin rates, NS masses, and NS magnetic field strengths (Hasinger & van der Klis 1989). The discovery of XTE J1701-462 (Remillard et al. 2006) put these options to rest, and suggests that accretion rate is what determines XRB type. Because it went through all three source states over its 19 month outburst (Homan et al. 2007a,b), the necessary timescales for changes of the previously mentioned physical parameters disqualify them. Along with J1701-462, other LMXBs have been seen to transition between multiple states. GX 13+1 was seen by Fridriksson et al. (2015) to transition between Cyg-like and Sco-like states, and Rhodes et al. (2022) found that Swift J1858.6-0814 bridged the atoll/Z source gap on the radio/X-ray plane.
It is well established that the spectra of neutron star LMXBs have both a thermal component and a non-thermal component.However, there is not yet a single agreed upon model.Two models in particular are often cited: the Eastern model (Mitsuda et al. 1989) and the Extended Accretion Disc Coronal (ADC) model (Church & Bałucińska-Church 1995, 2004).In the Eastern model, soft X-ray emission would be dominated by a multicoloured blackbody disc.The hard component comes from the Comptonization of seed photons from the inner disc in a spherical corona around the NS.On the other side of the coin, the extended ADC model also has soft emission dominated by a blackbody, this time originating from on or near the NS surface.Comptonization occurs in a corona that exists as a layer above the inner radii of the accretion disc.An additional line component is often necessary for a good fit.A contribution from the K transition in iron is usually found in the mid-6 keV range.
Dipping sources, which are LMXBs viewed at an angle between 65-85°, may be the key to discerning which model is correct. The region on the disc that touches the accretion stream is puffed up, creating a thick, absorbing bulge at the rim which extends up to 70° in azimuth (White & Swank 1982). This leads to orbital-related brightness dips as the bulge covers an observer's line of sight to the corona. Because this is a purely geometric effect, models are more strongly constrained, as they should fit well both inside and outside of the dips. Dipping sources have already been used to get a clearer picture of the geometry of an extended ADC. Church & Bałucińska-Church (2004) showed that the ADC would extend from 20,000 km for faint sources to 700,000 km for bright ones, since the Comptonized component is obscured very slowly with dipping. The fractions of extended ADC radius to disc radius ranged from 6.4% to 64.8%, so the ADC has been seen to cover a significant portion of the accretion disc. The ADC also seems to be geometrically thin (H/R << 1), because 100% deep dipping would not be seen otherwise. Cyg X-2 itself has shown evidence of an extended ADC. Vrtilek et al. (1988) found that variations in Cyg X-2 dips line up with the scenario where the dips are dependent on the thickness of the disc and ADC (both geometric and optical). Another example is given in Schulz et al. (2009), who used line widths and the ratios of forbidden and intercombination line fluxes to determine a lower limit of (6.4±1.4)×10^14 cm^-3 for the density of the corona.
The namesake of the Z source subclass it belongs to, Cygnus X-2 was first discovered in the X-ray by Byram et al. (1966) using a sounding rocket. It is comprised of a neutron star and a late type (A9) companion, circling each other in an ∼9.8 day orbit (Cowley et al. 1979). Orosz & Kuulkers (1999) found an inclination of 62.5 ± 4°, and NS and companion masses of 1.78 ± 0.23 M⊙ and 0.60 ± 0.13 M⊙ respectively. Cowley et al. (1979) found the source to be at a distance of ∼8 kpc, but more recently, Ding et al. (2021) used Gaia EDR3 data to calculate a distance of 11.3 +0.9/-0.8 kpc. Because Cyg X-2 is so luminous, the system has been well studied spectrally. It is well established that its spectra contain an iron K line at ∼6.7 keV (Smale et al. 1993), which likely results from reflection on the disc. In addition, the system is known to have spectral features in the 0.2-1.5 keV range that are consistent with plasma that is collisionally excited, or possibly photoionized (Vrtilek et al. 1986). Spectral modeling of NuSTAR data using a relativistically blurred reflection model in Mondal et al. (2018) found large inner disc radii of ∼24-32 km in the dipping spectrum and a radius of ∼30-73 km in the non-dipping.
In the HID of Cyg X-2, the intensity usually decreases on the FB, creating a backwards "C" (Cyg-like) shape (Ludlam et al. 2022;O'Brien et al. 2004).However, intensity has also been seen to increase on the FB, making a "Z" shaped HID, as in Bałucińska-Church et al. (2010).The authors of this paper say that the difference lies in the distinction between dipping and flaring, where the true FB is an intensity increase, and occurs rarely (Bałucińska-Church et al. 2012).Fridriksson et al. (2015) disagree with the interpretation that the dipping and flaring branches are caused by different mechanisms (nuclear burning and outer disc absorption), as they noticed the change between the two occurring as a smooth transition (rotation) from one to the other.Instead, the authors suggest that the accretion rate is the primary driver of the dipping/flaring branch differences.They note that when the Z track lies at higher intensities, the shape becomes Cyg-like, and when the Z track lies at lower intensities, the shape becomes Sco-like.
In binary systems such as XRBs, X-ray photons from the inner disc radii can photoionize material on the outer disc or companion star.When the electrons recombine, they release a photon of lower energy than the initial X-ray (optical/UV), a process known as thermal reprocessing.Reprocessing is thought to account for much of the optical light seen in LMXBs (van Paradijs & McClintock 1994).Over time, an entire X-ray signal can be reprocessed into the optical regime, where the optical lightcurve is observed with a delay on the order of the light travel time through the system (∼1-10 s for most LMXBs).The timescale of reprocessing itself is nonzero, although it is negligible compared to light travel times (Cominsky et al. 1987).
The reprocessed signal is given by F_opt(t) = ∫ Ψ(τ) F_X(t − τ) dτ, where F_opt is the optical flux, F_X is the X-ray flux, t is the time, τ is the optical time delay, and Ψ is the transfer function, which describes the amount of X-ray light that is reprocessed into the optical as a function of lag. The possible lag times are based on the geometry of the system, ranging from a minimum of zero to a maximum of 2r/c, where r is the distance to the furthest reprocessing site in the system and c is the speed of light. Lags in between can be shown in the form of isodelay surfaces, summarized in Horne (2003). O'Brien et al. (2002) modeled the transfer function of a typical LMXB as a function of lag and phase, which revealed two main components. The first is from the disc, which occurred at lower lags and was constant in phase.
The second was from the companion star, which was quasisinusoidal and at higher lags.The accretion stream would also be included in the second component, but its contribution is relatively small due to the small area.
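To make the convolution above concrete, here is a minimal numerical sketch (our own, not from the paper): it pushes a simulated X-ray lightcurve through an assumed transfer function kernel and returns the delayed, reprocessed optical signal. The kernel shape and all variable names are illustrative assumptions.

```python
import numpy as np

def reprocess(xray_flux, transfer_kernel):
    """Discrete version of F_opt(t) = sum_k Psi(tau_k) * F_X(t - tau_k).

    xray_flux       : X-ray lightcurve sampled on a uniform grid
    transfer_kernel : Psi(tau) * dtau sampled on the same grid (should sum to ~1)
    """
    # Causal convolution, truncated so the output aligns with the input times
    return np.convolve(xray_flux, transfer_kernel, mode="full")[: len(xray_flux)]

# Toy example: a transfer function peaking at a ~5 s disc lag, 1 s sampling
tau = np.arange(0, 30, 1.0)
psi = np.exp(-0.5 * ((tau - 5.0) / 2.0) ** 2)
psi /= psi.sum()
xray = 1.0 + 0.3 * np.random.default_rng(0).standard_normal(600)
optical = reprocess(xray, psi)   # lags the X-rays by roughly the kernel peak
```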
In this paper, we analyze how the X-ray regime is related to the optical from a number of angles.In Section 2, we elaborate on the conditions used for data acquisition.Section 3 contains descriptions of trends and features seen in the lightcurves and CDs.Section 4 discusses the process for Z track parameterization.Section 5 describes the search for thermal reprocessing using cross correlation functions, and the results of that search.Section 6 shows the results of the Z track parameterization.Section 7 contains the results for the timed spectral fits that were performed on the X-ray data, and how they are related to the optical and the Z track location.Finally, Sections 8 and 9 are the discussion of results and conclusions respectively.
OBSERVATIONS
The optical observations were performed over two runs at McDonald Observatory, using the Argos instrument on the Otto Struve Telescope (Mukadam & Nather 2005). The first was from July 25, 2006 to August 1, 2006, and the second from September 23, 2006 to October 2, 2006. Data were taken every night, except on July 31 due to weather. The time ranges with data can be viewed in Table 1. A broad BVR filter was used to make lightcurves with 1 s time resolution. Using the bias, dark, and flat frames taken nightly, custom IDL codes written for Argos were used to reduce the data. Most of the lightcurves were ∼7 hours of continuous data, although due to weather, seeing, etc., some of the lightcurves are shorter or are taken with some data gap, like in Fig. A1h. Syncing the absolute time was done using Network Time Protocol (NTP) servers and Global Positioning System (GPS) 1 s ticks. To reduce the effects of the atmosphere, all lightcurves used were differential, using 2MASS J21444211+3817558 as the comparison star. If atmospheric effects became too strong, even differential lightcurves were not enough to mask them. As such, a minimum count threshold was set on an observation-by-observation basis, so that these bad data could be ignored.
The X-ray data were taken with the Rossi X-ray Timing Explorer (RXTE) Proportional Counter Array (PCA). The PCA was comprised of five Proportional Counter Units (PCUs), with a total collecting area of 6500 cm^2 and a usable energy range of 2-60 keV. Some of the PCUs began to discharge with instrument aging. In an effort to prevent further damage, some PCUs are turned off for periods of time. This leads to noticeable changes in count rates, which need to be accounted for, specifically in the STANDARD-1 data. The STANDARD-2 data were reduced using only the count rate from PCU2 to ensure consistency. We used both STANDARD-1 and STANDARD-2 data for this study. The STANDARD-1 data, which do not contain energy bands, were taken with 1 s time resolution, matching the Argos data. The STANDARD-2 data were taken with 16 s resolution. We extracted lightcurves in the energy bands 2.06-3.68 keV, 3.68-6.12 keV, 6.12-8.98 keV, and 8.98-14.76 keV. Soft colour is defined as the ratio of the 3.68-6.12 keV band over the 2.06-3.68 keV band, and hard colour is defined as the 8.98-14.76 keV band over the 6.12-8.98 keV band.
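As a simple illustration (ours; the file name and array layout are hypothetical), the colours defined above follow directly from the band-limited lightcurves:

```python
import numpy as np

# Count-rate lightcurves in the four STANDARD-2 bands, one sample per 16 s bin:
# 2.06-3.68, 3.68-6.12, 6.12-8.98 and 8.98-14.76 keV (assumed column order in the file).
band_a, band_b, band_c, band_d = np.loadtxt("std2_bands.txt", unpack=True)

soft_colour = band_b / band_a        # (3.68-6.12 keV) / (2.06-3.68 keV)
hard_colour = band_d / band_c        # (8.98-14.76 keV) / (6.12-8.98 keV)
intensity = band_a + band_b + band_c + band_d
```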
The spectra were reduced from the raw outputs using standard tools in HEASoft (version 6.29), based on the instructions of the "New PCA Tools: Overview" recipe in the RXTE Cookbook1 .First, the raw data are prepared for analysis through processes such as creating filter files and estimating the background.Next, the good time filter is set, which uses user set conditions to determine what data to keep.The conditions used in the extraction were (1) ensuring that the target was above the Earth's horizon (ELV > 4), (2) making sure the PCA is pointing within 0.1 degrees of the target (OFFSET < 0.1), and (3) keeping only times where at least one PCU was active (NUM_PCU_ON > 0), but getting rid of unreal active PCU numbers (NUM_PCU_ON < 6).All of these are considered basic screening recommendations.The X-ray spectral data were extracted for 64 s bins.These bins were aligned with the times in the STANDARD-2 lightcurves, so that they could be matched with the X-ray observables and binned optical data to create timed spectra.
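The screening conditions can be summarized schematically as a boolean mask over the filter-file quantities; this is only our illustration of the logic, not the actual HEASoft screening syntax or tooling.

```python
import numpy as np

def good_time_mask(elv, offset, num_pcu_on):
    """Boolean mask reproducing the screening criteria described in the text.

    elv, offset, num_pcu_on are hypothetical arrays of filter-file quantities
    sampled on the same time grid as the data being screened.
    """
    return (
        (elv > 4)            # target above the Earth's horizon
        & (offset < 0.1)     # pointing within 0.1 deg of the target
        & (num_pcu_on > 0)   # at least one PCU active
        & (num_pcu_on < 6)   # reject unphysical active-PCU counts
    )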
DATA DESCRIPTION
The CD has some significant secular drift, as may be expected for Cyg X-2. As such, the data are separated into three groups (summarized in Table 2, CDs in Fig. 1, HIDs in Fig. 2), two of which are in July and the third comprising all September data. These groups will from now on be referred to as groups 1, 2, and 3, in order of the time the data were taken (see Table 3 for the parameters needed to align the groups). Within each group, drift is much lower, with a maximum of about 1000 counts/s on the intensity axis. The amount of data differs by group, as does the Z track coverage, where groups 2 and 3 each have almost twice as much data as group 1. Groups 1 and 2 have nearly identical branch coverage, stretching through the HB to the soft apex. Group 3 has less HB data, but it is the only group that includes a significant amount of data in the FB.
The X-ray data comprise segments that generally last about an hour (one per Earth orbit of the satellite; see Appendix A). As is seen in the HIDs, most of the data are in the HB or NB, and very little FB exists in the dataset. This could contribute to the lack of large scale changes seen within the data. However, some hour-long timescale changes can be seen, like in Fig. 3a, where the count rate nearly doubles over the course of a couple of hours. Where Cyg X-2
does venture into the FB, the lightcurve becomes more variable. For example, at MJDs ∼54004.81 (Fig. 3b) and ∼54005.70 (Fig. 3a), the count rate briefly decreases and returns to near its original value in ∼7 minutes. The system is the most active on MJD 54001 (Fig. 3c), where the lightcurve alternates between count rates of about 750 and 1100 counts/s/PCU. Interspersed among these steps are the previously mentioned quick dips (MJD 54001.789, Fig. 4), as well as spikes of similar duration (MJD 54001.792, 54001.800, Fig. 4). The dips appear when the count rate is high, and the spikes when it is low. Although the transition between the steps is usually quick, one slower decrease can be seen starting at MJD 54001.780 (Fig. 4). It is worth mentioning that this behavior looks very similar to the behavior in the Kepler K2 lightcurves of Sco X-1 in Hynes et al. (2016), which was noted to be bimodal. The dips and spikes in Fig. 4 could then be thought of as quick reversions to and from the opposite state. One very clear X-ray burst is seen on MJD 53945 (Fig. 5; see Fig. A1e for the full lightcurve), taking place on the HB.
Z TRACK PARAMETERIZATION
To quantify the state evolution, the Z track needed to be parameterized. The scheme developed is very similar to the one described in Dieters & van der Klis (2000), and is further discussed in Appendix A of Igl et al. (2023). The main difference is that instead of visualizing the Z track in 2D, it uses a 3D colour-colour-intensity (CCI) diagram. Three parameters are thus created: one representing the location along the Z track, one the distance away from the Z track, and one the angular component around the track. An accurate analogy would be to think of a "warped" cylindrical coordinate system, in which these three parameters play the roles of the height, radial, and azimuthal coordinates respectively. Using the CCI for the ranking is especially important for Cyg X-2, as the FB doubles back onto the NB in colour-colour space alone, but has a distinct intensity behavior.
One roadblock to a good mapping was the significant secular drift that occurs in the CD. Of course, the parameterization could be performed separately on each group, but this would make direct comparison between the behaviors of the three much more difficult. To get around this, the CDs of each group were transformed through scaling and shifting, until all three appeared to be part of the same Z track (parameters in Table 3). The mapping could then be transformed back, and used with the original data. Another issue that arises is the scale of each CCI axis. The intensity scale (∼10^3 counts/s) is much larger than the colour scales (∼10^-1 to 10^0), and so left as is, the intensity would dominate the mapping. This problem was avoided by normalizing the axes to the NB midpoint before the mapping.
To start, the groups were transformed to a single "CCI" as previously explained. The locations of the hard and soft apexes were then defined by hand, and labeled with ranks 1 and 2 respectively. Using the scale of the normal branch, ranks were then defined for the rest of the Z track. The Z-track position parameter was then obtained by spline interpolating the ranks against hard colour, soft colour, and intensity; the radial parameter is easily calculated from there as the distance of each data point from those splines. Fig. 6 shows selected rank values over the transformed data for all three groups.
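A minimal sketch of this interpolation step (our own simplification; the function and variable names are hypothetical, and the real procedure also handles the secular-drift alignment and the angular component described below):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def ztrack_position(hc, sc, intensity, anchor_ranks, anchor_points, n_samples=2000):
    """Assign a Z-track rank and an off-track distance to each CCI point.

    anchor_ranks  : hand-chosen rank values (e.g. hard apex = 1, soft apex = 2), increasing
    anchor_points : (N, 3) array of (hard colour, soft colour, intensity) at those ranks,
                    already normalized to the NB midpoint so no axis dominates
    """
    # One spline per CCI axis, parameterized by rank
    splines = [CubicSpline(anchor_ranks, anchor_points[:, k]) for k in range(3)]

    # Densely sample the interpolated track
    ranks = np.linspace(anchor_ranks.min(), anchor_ranks.max(), n_samples)
    track = np.stack([s(ranks) for s in splines], axis=1)            # (n_samples, 3)

    data = np.stack([hc, sc, intensity], axis=1)                     # (n_data, 3)
    # For each data point, find the nearest point on the track
    d2 = ((data[:, None, :] - track[None, :, :]) ** 2).sum(axis=2)   # (n_data, n_samples)
    idx = d2.argmin(axis=1)
    return ranks[idx], np.sqrt(d2[np.arange(len(data)), idx])        # position, off-track distance
```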
Calculating is less straightforward than the other parameters.With regular cylindrical coordinates, the angular and radial components are calculated on planes normal to the height axis, which means that the vectors defining the angular origins are parallel at any height.For these parameterizations, the "height" axis is allowed to bend and twist in space, meaning that the normal planes do as well.If the movement of these planes can be defined, along with an initial plane, then one can calculate the equation of a plane for any value of , and thus the angular component.Borrowing terminology from −1 about its origin to be parallel with .It was decided that the aforementioned Frenet-Serret equations would not be used, even though they are applied to similar problems.The reasoning comes from the definition of the axis, which is given as /, where is the arc length.Thus, where / changes direction ("wiggles"), the plane could be rotated significantly when compared to the −1 −1 plane.In comparison, this system is reliant on a static axis (intensity), and so no extreme plane rotations should occur, unless passes through a vector parallel to the intensity axis.
CCF METHODOLOGY AND REPROCESSING SEARCH
CCFs taken of a set of real data are usually performed with either the interpolated CCF (Gaskell & Peterson 1987) or the discrete CCF (Edelson & Krolik 1988). The DCCF is generally preferred, as data interpolated over larger gaps can dominate in a set of equally weighted interpolated points. Gaps like this occur in cases such as the removal of bad data or clouds. DCCFs take the form DCCF(τ) = (1/M) Σ (a_i − ā)(b_j − b̄) / (σ_a σ_b), where the sum runs over all pairs (i, j) whose time separation falls within the lag bin τ ± Δτ/2. Here τ is the lag, a and b are the data trains in question (X-ray and optical lightcurves in this paper), σ_a and σ_b are the standard deviations of the data trains, M is the number of pairs per bin, and Δτ is the bin size. For these reasons, all CCFs presented in this paper are discrete. CCFs were obtained systematically by splitting the lightcurves into segments with no large gaps. A CCF was then calculated for every instance of these split lightcurves overlapping in time. Additional segments were created to split the lightcurves at changes in the number of active PCUs. This was done instead of dividing the lightcurve by the active PCUs. The main reason for this was an effect that can be seen in the lightcurves, where the transition to the new operating PCU number is not immediate, but instead lasts anywhere from 30 s to 2 min (Fig. 7). The switch is sometimes accompanied by an intensity spike. Thus, data within 2 minutes of the PCU change were considered "bad", and were not used in the CCFs. Although this may seem a bit liberal, 4 minutes is negligible when compared to the amount of data in a lightcurve segment. The timescale of a CCF feature (i.e. peak lag and width) is dependent on the physical processes within the binary that created it. For reprocessing, X-rays produced near the central object can be reprocessed into optical wavelengths at distances as far as the companion star. As such, the mark of potential reprocessing in LMXBs is a peak occurring at positive optical lags (X-rays lead the optical) on the order of seconds. For Cyg X-2 specifically, these values can range from 0 s to ∼110 s. Reprocessing on the companion follows the equation given in O'Brien (2000), τ = (a/c)(1 + sin i sin 2πφ), where a is the semi-major axis, c is the speed of light, i is the inclination, and φ is the binary phase. Cyg X-2 has a quite high inclination (62.5°; Orosz & Kuulkers 1999), meaning that reprocessing occurring on that region could cover a large range of lags (∼5-110 s). Because much of the calculated CCFs are comprised of broad features, Butterworth filters (which have a maximally flat passband) with cutoff periods of 15 min were applied to the optical and X-ray lightcurves, to make potential reprocessing peaks (which would have much smaller lag widths) more visible and easier to identify. The magnitude of the CCFs is much less important than the actual behaviors seen.
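A compact sketch of this procedure (ours, simplified relative to the actual analysis; it assumes regularly sampled but possibly gappy lightcurves and uses global means and standard deviations rather than per-bin normalization):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def butterworth_highpass(flux, dt=1.0, cutoff_period=15 * 60, order=4):
    """Remove variations slower than cutoff_period seconds with a maximally flat filter."""
    sos = butter(order, (2.0 * dt) / cutoff_period, btype="highpass", output="sos")
    return sosfiltfilt(sos, flux)

def dccf(t_a, a, t_b, b, lags, dlag):
    """Discrete cross-correlation function (Edelson & Krolik 1988) of data trains a and b.

    Positive lags mean that b (e.g. the optical) lags a (e.g. the X-rays).
    """
    a_res = (a - a.mean()) / a.std()
    b_res = (b - b.mean()) / b.std()
    udcf = np.outer(a_res, b_res)                 # unbinned correlation of every pair
    dt = t_b[None, :] - t_a[:, None]              # pairwise time separations
    out = np.empty(len(lags))
    for k, lag in enumerate(lags):
        in_bin = (dt >= lag - dlag / 2) & (dt < lag + dlag / 2)
        out[k] = udcf[in_bin].mean() if in_bin.any() else np.nan
    return out
```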
Out of the entire dataset, four events of interest were identified, all of which were in group 3 and occurred at near-zero lag: one on MJD 54001, one on MJD 54002, and two on MJD 54005.Most of these events also have corresponding features in their optical and Xray lightcurves.On its CCF, the MJD 54001 peak appears relatively small and difficult to see.After applying the Butterworth filter to the data, the CCF peak becomes clear, standing out amongst the surrounding features (Fig. 8).It has a large peak centered several seconds before a lag time of 0 s.This can be confirmed by checking the data themselves, where the close alignment of features is obvious.MJD 54002 has a possible event, but it is not likely that it is an echo (Fig. 9).Although it is well defined against the background and occurs at a lag on the order of seconds, the peak is negative (an anti-correlation).This is still an interesting occurrence, however, as Cyg X-2 is firmly on its NB, which is associated with X-ray/optical anti-correlations in Sco X-1 (Hynes et al. 2016).The first MJD 54005 plot has corresponding features which take place at nearly identical times, and as may be expected, the Butterworth filter reveals a large peak occurring at 0 s of lag (Fig. 10).As the X-ray time resolution is 1 s, this could be an instance of reprocessing from the inner disc.The second MJD 54005 plot has one peak that occurs at an optical lag of ∼1 min, which could suggest companion reprocessing, but the lack of peaks at similarly high lags makes this unlikely.However, it does contain some interesting behavior, in that there are multiple instances of rises in the optical, followed by drops in the X-ray (i.e., an anti-correlation at negative lag, Fig. 11).
During the times investigated, Cyg X-2 was on all three Z source branches (Fig. 1). Therefore, it was worthwhile to investigate how the object's location on the CD affected the correlation between the optical and X-ray light curves. No definitive trends were found, but the first MJD 54005 correlation (from Fig. 10) was dominated by the data in the flaring branch, as the peak disappears completely when the data are filtered for the NB, but remains when filtered for the FB. Another intriguing find was on MJD 54008. The cross correlation for the entire time period contains oscillatory behavior, with no clear peak that stands out among the others. However, when selected for the flaring branch, a thin peak centered near zero lag (∼3 s) appears, as in Fig. 12. This was not repeated in any of the other Cyg X-2 data that were analyzed, but this near-zero lag peak structure can be seen in Figs. 8 and 10. Similar patterns were seen in the Z source Sco X-1, which saw positive peaks occurring only in FB and soft apex data (Igl 2023; Igl et al. 2023). NB behaviors (Figs. 9 and 11) were also comparable, as Hynes et al. (2016) noted broad anti-correlations in the same region. Although there was a search for reprocessing during the times near the X-ray burst, no interesting features were found in the CCF. This could be tied to the system being on the HB during that section of the lightcurve.
Z TRACK BEHAVIORS
The results of the Z-track transformations can be seen in Figs. 13, 14, and 15 for groups 1-3 respectively. For all three groups, the soft colour, hard colour, and flux trends as a function of Z-track position are approximately linear, changing slope at the apexes. There are some deviations from this, however, occurring most obviously towards the middle of the NB. The plot of off-track distance versus position also makes clear that the CCI track is quite consistent in radial thickness along its length, save for a broadening near the hard apex (position 1).
The time derivative of the Z-track position shows different behaviors for each group. Hertz et al. (1992) and Dieters & van der Klis (2000) both found a clear taper in this derivative at the soft apex of Sco X-1, but this does not appear present in our Cyg X-2 data. Group 2 (Fig. 14) may have a taper at the hard apex, which is the opposite of the results in Dieters & van der Klis (2000), where a local maximum of the speed along the Z track is reached. Note that both Hertz et al. (1992) and Dieters & van der Klis (2000) calculated the track position using only soft and hard colours. The inconsistencies in the definitions may account for the differences in these behaviors.
In addition, there is a noticeable increase in scatter of the time derivative in the NB for all three groups. The noise component in these measurements is likely small, as indicated by the low scatter in the HB section of Fig. 15 as compared to the NB. Another common thread for the groups, but especially group 2, is a region of drastically higher and lower derivative values centered at the apexes. On the concave side of two branches, the mapping tends to push consecutive points farther away from each other, and if these points are mapped to separate branches, the jump in track position can become quite large. Therefore, even if the distances between consecutive data points are similar, the magnitude of the time derivative (the change in position divided by the time step) can be outsized in these regions. Large positive values appear at lower positions, as the system begins moving toward the FB, and large negative values appear at higher positions, as the system moves toward the HB. Thus, these sharply-sloped features are not due to binary behavior, but are artifacts purely of the mapping.
Fig. 16 shows the Cyg X-2 optical data plotted against Z-track position. The most immediately interesting behavior is in the top plot (group 1), which takes somewhat of an "S" shape. This carries the implication that, as the optical flux decreases, Cyg X-2 is bouncing between the HB and NB. It also makes clear that the optical flux is multi-valued, and cannot be uniquely determined by the Z-track position. The bottom plot (group 3) may show this as well, with a switch to the HB at a differential optical flux of about 0.95. Group 3 extends further into the FB than group 2, and there appears to be a positive correlation where the position exceeds 2.
In the HB and NB, the optical data remain multi-valued. Group 2 looks very different from groups 1 and 3, in that there seems to be no correlation between the Z-track position and the optical flux. Thus, the data suggest that there are two different relations to optical behavior along the Z track (a no-correlation state and a multi-valued/correlation state). It is worth noting that, along with its optical-position plot, group 3 has a HID that looks quite different than the other two (Fig. 2), containing both a FB and a noticeably different NB slope.
Because secular drift is accounted for when calculating the Z track parameters, all three plots in Fig. 16 can be combined, with the groups directly compared (Fig. 17). Viewed like this, it looks as though the HB and NB are comprised of steps. These steps have very little optical scatter by themselves, but the optical locations of each step can occur at a wide variety of flux levels. The collated plot reframes the group 2 data as well. Instead of being a completely uncorrelated state, it now appears as more HB and NB steps, along with a potential transition on the hard apex. Similar transitions have been seen in Sco X-1 (Igl et al., submitted). The difference between group 2 and groups 1 and 3 might then simply be that group 2 is less well-sampled.
Looking at the group 3 (the only group with FB data) position-optical plots by observation reveals another interesting trend: evidence of a clear jump in the optical near the soft apex (Fig. 18), moving from a low scatter NB level to a high scatter FB level. Similar behavior has been noted in Scorpius X-1, which included a step at the hard apex as well (Igl et al., submitted). Note that even though the jumps do not always occur exactly at the soft-apex position of 2, it is likely that the soft apex is moving even within the group, as Cyg X-2 is known to have particularly high secular drift. The behavior of the jump depends on the direction in which the system is heading. When moving to higher positions, the jump is more like a step, moving directly up to a new optical level, before continuing onto the FB with higher variability than was seen on the NB. With descending position, the jump is a slope, descending linearly down to the new optical flux over a position interval of ≃0.3. Note that although group 3 has an apparently large scatter in the NB (comparable to the scatter in the soft apex and FB), this occurs over many observations. Single observation scatter is much smaller, but the flux level itself varies. This reframes the position-optical correlation in group 3 and its scatter (days worth of data) as a combination of several slopes and steps from repeated crossings of the soft apex on the order of hours.
SPECTRAL FITTING
Data in the energy range of 3-20 keV were passed into Xspec (version 12.12.0) for spectral fitting. A number of models commonly applied to Z sources were tested using spectra from a full observation of data (∼1-2 hours), but none were found to be a clear best fit. The adopted model was similar to the well-known Extended ADC model (Church & Bałucińska-Church 1995; Bałucińska-Church et al. 2010), TBABS(BB+CPL+GAUSS), where TBABS is an absorption model, BB is a blackbody (soft X-ray source), CPL is a cutoff power law (hard X-ray source), and GAUSS is a Gaussian to represent the iron K line.
Once the model had been chosen, a fitting strategy needed to be selected in order to produce an accurate fit. This involved testing fitting parameters in different orders, as well as freezing parameters at their initial values for the entirety of the fitting. Plotting the fitted parameters versus time was the method used to check whether the fits were sound. A good fit was free from parameters that hit a hard minimum/maximum, as well as instances where multiple parameters drastically changed without reason within the data. Two consecutive fits were used to obtain the parameters from a full observation of data: (1) a fit where the Gaussian parameters are frozen, and (2) a fit where every parameter except for the Gaussian is frozen. The absorption column N_H was frozen for both fits at 0.2 × 10^22 cm^-2. Although locking N_H is not ideal, as it can vary, the fits could not be stably constrained otherwise. Other examples exist in the literature where N_H is frozen as well (Titarchuk et al. 2007; Devasia et al. 2021). In addition, as is commonly done in other works fitting XRBs (Barret et al. 2000; Ding & Huang 2015), a systematic uncertainty of 0.5% was added to the fits. The time-resolved spectra required more finesse to obtain a good fit, so a slightly different process was used. Firstly, because the K lines proved particularly difficult to fit, it was decided that the Gaussian energies for the higher time resolution spectral fits would be frozen at the values from the full observation (hours of data) spectral fits. The absorption parameter was again frozen to 0.2 × 10^22 cm^-2. The value of the cutoff power law index was also frozen at 1.7, as Bałucińska-Church et al. (2010) found that the parameter remained close to 1.7 when fit, for Cyg X-2 and other Cyg-like sources (Church et al. 2006). The power law cutoff energy was temporarily frozen at its value from the full observation fit. After an initial fit, the cutoff energy was freed, and the model was refit to give the final values.
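A minimal PyXspec sketch of this setup (our own, not the authors' scripts; the spectrum file name and the placeholder parameter values are hypothetical, and we assume the standard Xspec bbody/cutoffpl/gaussian component names rather than the exact variants used in the paper):

```python
from xspec import AllData, AllModels, Model, Fit

AllData("cygx2_64s.pha")                    # hypothetical 64 s spectrum file
AllModels.systematic = 0.005                # 0.5% systematic uncertainty
Fit.statMethod = "chi"

m = Model("tbabs(bbody + cutoffpl + gaussian)")
m.TBabs.nH = 0.2                            # 10^22 cm^-2, frozen throughout
m.TBabs.nH.frozen = True
m.cutoffpl.PhoIndex = 1.7                   # frozen for the time-resolved fits
m.cutoffpl.PhoIndex.frozen = True
m.cutoffpl.HighECut = 5.0                   # placeholder full-observation value, frozen initially
m.cutoffpl.HighECut.frozen = True
m.gaussian.LineE = 6.7                      # keV; frozen at the full-observation value
m.gaussian.LineE.frozen = True

Fit.perform()                               # initial fit with the cutoff energy frozen
m.cutoffpl.HighECut.frozen = False          # free the cutoff energy and refit
Fit.perform()
```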
This process was performed on data reduced at 16, 32, 64, and 128 second intervals, so that a balance between fit noisiness and a clear idea of how the fitting parameters were behaving in time could be found.Plots of parameter fits vs. time for the 16 second spectra were extremely noisy, making it clear that a lower time resolution was necessary.The noise levels in the 32 s spectra decreased significantly, and even more so in the 64 s spectra.Ultimately, it was decided that the 64 s spectra would be used, because on plotting certain parameters vs. optical data, the additional time resolution in the 32 s spectra did not seem to add any new information to the plots.
Results of the timed spectra fitting can be seen in Figs. 19, 20, and 21 (more group 3 fits can be found in Igl (2023)), plotted against Z-track position and differential optical flux. The plots show clearly that most of the physical parameters behave monotonically within a branch. The blackbody temperature and the powerlaw cutoff energy, plotted against Z-track position, both show very similar behaviors: near constant values in the HB, a decrease moving from hard to soft apex, and an increase from soft apex through the FB. In all three groups, the temperature plot has a noticeable increase in scatter at the lowest positions. The same parameter behaviors can be seen in Hasinger et al. (1990) when fitting with a blackbody and a Boltzmann-Wien spectrum.
The blackbody normalization, powerlaw luminosity, and blackbody radius all follow different patterns. Note that the blackbody radius was calculated using the equation R_BB = √(L_BB / (4πσT_BB^4)), where L_BB is the blackbody luminosity (obtained from the blackbody normalization), σ is the Stefan-Boltzmann constant, and T_BB is the blackbody temperature. All three parameters monotonically increase moving through the HB to the hard apex. Continuing onto the NB, the blackbody normalization and radius continue to increase, albeit not as quickly for the normalization (the parameters are barely correlated in group 3). This results in the radius rate of increase being the same as in the HB, despite the change in both the blackbody normalization and temperature. Group 1 (Fig. 19) is an exception to this, as starting in the mid-NB, the rate of increase changes to a new value for both the blackbody normalization and the powerlaw luminosity. The Z-track position where this begins (∼1.6) is likely too low to attribute to secular drift of the soft apex. Of the three aforementioned parameters, the powerlaw luminosity is the only one that begins to decrease in the NB. The rate of NB decrease appears somewhat dependent on the rate of increase in the HB, with group 3 showing steep slopes in both, while the correlation decreases in groups 1 and 2. In the FB, all three parameters decrease with Z-track position, with the group 3 powerlaw luminosity decrease remaining relatively unchanged from its behavior in the NB. The optical-fitted parameter plots in Figs. 19, 20, and 21 display distinct behaviors for each group. Group 1 has a defined "wiggle" in all of the parameters, similar to the group 1 optical-position plot in Fig. 16, centered near position 1 (Fig. 19). It also contains a correlation between the optical and blackbody radius at the highest optical fluxes. The blackbody temperature and the cutoff energy have a maximum at the lowest optical intensities and a minimum at the highest, and vice versa for the blackbody normalization and blackbody radius. Group 2 contains no correlation at all between the optical flux and fitted parameters, which one may expect given the decoupling of the optical and X-ray intensities (Fig. 20). Group 3 shows no correlation between the optical and fitted parameters when the differential optical flux is less than 1.0, accompanied by a high optical scatter (Fig. 21). Higher optical intensities contain clear correlations with blackbody parameters (normalization and temperature, and consequently blackbody radius; see highlighted data in Fig. 21 and additional group 3 fits in Igl (2023)). Ultimately, it does not appear as though the optical correlates any better with these physical parameters than it does with the Z-track position. (In Figs. 19-21, the blackbody normalization is in units of 10^39 (ergs/s)/(10 kpc)^2, and the cutoff powerlaw luminosity is in units of 10^38 ergs/s.)
The exception to the above statement may lie with the powerlaw luminosity, which shows a decrease in the parameter with increasing optical.The behavior is seen most clearly in group 3, where the luminosity hovers around 1.5×10 38 ergs/s, and dips down to about 0.8×10 38 ergs/s at a differential optical value of 1.0.This dip is clearly present in group 1 as well, although the disparity between luminosity steps is not as large, and the change occurs at a higher optical value.The optical coverage in group 2 is too low to make a conclusive statement about the behavior, but the powerlaw luminosity drops to a lower level somewhere between 1.0 and 1.25 on the optical, and jumps to higher levels when the system is optically brightest.The jump may imply that the pattern is inconsistent with the other groups.Scaringi et al. (2015), who observe optical-XR anticorrelations on the NB and near-zero positive correlations on the FB, discuss and interpret multiple models that can explain this in our CCFs.In the Psaltis et al. (1995) model, soft photons are emitted by the neutron star magnetosphere, and hard photons are emitted when the soft ones are Comptonized in a hot central corona or the radial inflow from an outer corona.In the NB, the accretion rate increases, leading to higher radiation pressure and a pileup of material around the neutron star.X-ray photons are then absorbed and reemitted in the optical regime by this material.The more photons are absorbed, the fewer reach the observer in the X-ray regime, which would lead to an anti-correlation.On the FB, the electron scattering optical depth has increased enough that X-ray light is being scattered onto the outer disc or companion, leading to reprocessing on those regions.An increase in X-ray intensity leads to more photons being scattered and reprocessed (and vice versa), resulting in an optical lag and a positive CCF peak.
Cross Correlation Behaviors
In the extended ADC model from Bałucińska- Church et al. (2010), the NB is also associated with increasing accretion rate, although this time it is when moving away from the soft apex.Scaringi et al. (2015) predicts that this would still lead to anti-correlations due to the optical depth increasing with the accretion rate.Moving up the FB, the extended ADC model predicts that Cyg-likes will have a constant accretion rate, but an increasing blackbody luminosity.Thus, FB reprocessing still fits within the framework of the extended ADC model.However, these data show that the blackbody luminosity is decreasing through the FB (Fig. 21).It is possible that the reason for this lies in the distinction between a "dipping" branch and a "flaring" branch, a more in depth discussion of which is contained in Section 8.3.
Due to the short lags of the potential reprocessing peak maxima, it is likely that most of the reprocessing would be taking place on the accretion disc.Even though Cyg X-2 has a large orbit and a high inclination (meaning that companion reprocessing lags could be seen as low as ∼6 s), the largest and most obvious CCF peaks occur at optical lags of around 0 s.Fig. 11 has a small peak that occurs at about 1 min of lag, but being the only one in the tens of seconds range within this comprehensive dataset makes it an unlikely candidate for companion reprocessing.The lags seen in these data are similar to the lags observed in Sco X-1 (Igl et al. 2023), in spite of the different orbital periods.
In the Igl et al. (2023) Sco X-1 dataset, a number of "minor peaks" appeared in the shape of well defined piecewise exponential functions at optical lags of less than 4 s.These peaks were very small compared to surrounding CCF features, and were likely weaker versions of more obvious reprocessing peaks.After a search, it was determined that minor peaks do not appear in these data.The few possible reprocessing events that appear are not necessarily comparable in width or shape (i.e. Figs. 8 and 12).The minor peaks were also much more common, with eight appearing over the course of the nine nights with overlapping optical and X-ray data.However, in both datasets, reprocessing events occurred on the FB or soft apex, with none appearing outside of those regions.
The question could be raised why so little solid evidence of reprocessing is seen within this dataset, especially in the secondary star.Orosz & Kuulkers (1999) argues that heating of the companion star is not a large contribution based on the small amount of excess light at the photometric phase 0.5.Their model has the edge of the disc shielding the companion from the central X-ray source.They also note that the disc is fainter than the companion in the optical (a factor of 9 smaller than anticipated, based on van Paradijs & McClintock (1994)), which might be unexpected, as one would think the disc would receive more illumination.One possible solution they provide is that the X-rays are reprocessed into the ultraviolet, a regime which this comprehensive study does not attempt to detect.Also of note is the unusually long period of Cyg X-2, ∼9.8 days for one full orbit.This naturally leads to a large separation between the X-ray source and the companion, as well as an accretion disc with a bigger radius.As the donor heating drops off with inverse distance squared, one may expect that it would be more difficult to see reprocessing in Cyg X-2 because the X-ray illumination is less intense and leads to less heating compared to the intrinsic luminosity of the A9 companion.
Z Track Behaviors
These results can be contrasted with the work of O'Brien et al. (2004), who observed that (for Cyg X-2) optical flux tended to monotonically increase from HB to FB. They also found that X-ray and optical intensities did not have a simple one-to-one relationship, and that the data could be contained within an envelope comprised of three spectral components, described by an equation combining those components. Here, I is the X-ray intensity, V is the optical flux, F, B, and A are the flaring, baseline, and accretion spectral components respectively, and α is a coefficient that describes variability in F, such that 0 ≤ α(t) ≤ 1. The group 1 data (Fig. 22, top plot) follow the scheme laid out in this equation quite nicely. Because group 1 does not stretch into the FB, not much can be said about the baseline behavior. However, the value of α seems to be increasing with Z-track position. O'Brien et al. (2004) interpret this behavior as a result of inhomogeneities in the accretion flow, resulting in more variability in higher accretion rate (higher position) branches. The group 2 plot contains less data, but the HB and NB data within could conceivably be following the same pattern as in group 1, with similar conclusions about the spectral components. Group 3 looks much different than the other two, with a more significant X-ray decrease during the NB. Here, α is larger than in both groups 1 and 2 at high optical fluxes, and remains large even through most of the envelope. The group 1 and 3 plots in Fig. 16 also both support the idea that the Z-track position parameter is more useful than X-ray flux at tracing optical behavior. This is because, although accompanied by multi-valued behavior at low positions, the optical has a general increasing monotonic trend across the Z track (for two different Cyg X-2 data groups). The optical-position states show very distinct behavior between the groups, and in group 2, the optical flux appears to be completely decoupled from the position parameter (although this could be due to the data sampling). Figs. 16 and 22 falsify the null hypothesis that the optical perfectly traces X-ray radiation. One possible explanation of this could be disc warping, as in Pringle (1997), where X-ray illumination from the central object causes a growth in the form of a prograde spiral on the inner accretion disc (where the X-rays are strongest). Such distortions to the disc are often used to explain super-orbital periodicities, including with Cyg X-2 (Vrtilek et al. 2003). A warp of this nature could allow for outer disc illumination in some regions, and shielding in others, preventing reprocessing and decoupling the X-ray and optical behavior in these sections. The amount of reprocessed light that reaches the observer would be dependent on the inclination of the system and the precession phase of the warp. This scenario would be consistent with the lack of reprocessing events found in the group 2 data.
Ultimately, XR intensity is better correlated with the Z track location than the optical flux.Fig. 22 shows that no single XR-optical intensity relation applies to the system at any given time, as while the group 1 data resembles what is seen in O'Brien et al. ( 2004), group 3 is quite different.Thus, the optical is likely not as useful for understanding the system and constraining models.
Physical Behaviors
In the Bałucińska-Church et al. (2010) extended ADC model, the mass accretion rate is at a minimum at the soft apex (position 2). It increases as the Z-track position decreases towards the hard apex (position 1), along with a decreasing blackbody radius, which can be interpreted as the emitting region shrinking down to an equatorial strip. This, combined with the flux reaching super-Eddington values at the hard apex, causes enough radiation pressure to disrupt the inner disc and move matter vertically to form jets in the HB. On the FB, the blackbody luminosity increases while the ADC luminosity remains constant, which implies that the neutron star has gained a non-accretion powered energy source. The accretion rate divided by the blackbody emitting area (Ṁ/4πR_BB^2) drops lower than the critical value for unstable burning, leading to flaring. Thus, in the FB, the increase of the blackbody luminosity was interpreted as power from nuclear burning with constant ADC luminosity (constant accretion rate). Note that the "Z" shaped HID in Bałucińska-Church et al. (2010) looks different than the more common "C" shaped HID seen in these data.
The blackbody temperature and cutoff energy behave very similarly to what is seen in Bałucińska-Church et al. (2010), whose observations also show little to no decrease through the HB, a greater reduction in the NB, and an increase again in the FB. The authors posit that the minimum at the soft apex corresponds to an accretion rate minimum at the same spectral location. The blackbody radius here is at a maximum, which would mean that the entire neutron star is emitting.
At the hard apex, Bałucińska-Church et al. ( 2010) also found that the blackbody radius decreased, and continued to do so into the HB.They interpret this as the region of blackbody emission shrinking down to an equatorial strip, which then has an increased emissive flux.The greater radiation pressure moves accreted material from the inner disc vertically, potentially forming jets.Our results show that this inner disc disruption may undergo a period of "uncertainty" before fully returning to the NB.Fig. 23 shows the system bouncing between the HB and NB before continuing towards the FB in group 1.However, even while Cyg X-2 moves around the hard apex, the optical continues to increase.This behavior was not found in group 3, where the hard apex optical level does not have any general trend with time.
Ultimately, the results of the spectral fitting reveal that the Z track location is a far better way to track physical behaviors than the optical flux in Cyg X-2 data. Although interesting trends can be found within the optical plots, such as correlations during periods of high optical intensities, the Z-track plots display clearer positive and negative correlations by branch. In addition, the behaviors of these parameters are unique with regard to Z track location. Other than the cutoff energy and the blackbody temperature, each parameter has a different sequence of positive and negative correlations by branch. The optical plots, however, have more similarities between parameters, and lack consistency in patterns between groups, with the relationship to physical parameters often being multi-valued.
CONCLUSIONS
In this study, we analyzed simultaneous optical (Argos) and X-ray (RXTE) lightcurves with 1 s time resolution.Performing discrete cross correlations on overlapping segments of optical and X-ray lightcurves revealed both positive and negative correlation peaks occurring near zero lag in the CCF, many of which had visible corresponding features in the lightcurves.The clearest near zero positive peaks all occurred on the FB, where reprocessing is normally seen.Only one instance of an anti-correlation was seen, but it occurred while Cyg X-2 was on the NB, which is associated with such features.Filtering data by branch further solidified the aforementioned conclusions.Potential reprocessing peaks that were present with all of the data disappeared when looking only at the normal branch data, and peaks that did not exist in the CCFs using all of the data in a segment appeared when using only the FB data.
The Z track was parameterized using a modification of the rank number scheme. Instead of performing spline interpolations on only the hard and soft colours, the X-ray intensity was included to account for the doubling back of the FB onto the NB in the group 3 data. Of most interest were the position-optical plots, which showed two different behaviors: • A multi-valued/correlated state: This can be seen in the group 1 and 3 data. The Z track position does not uniquely determine the optical intensity in the HB and NB. In the FB, the two are correlated.
• A no-correlation state: In group 2, there is no correlation between optical intensity and Z track position on any branch. However, this effect may be due to the limited amount of data.
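The sketch below illustrates one way such a rank-number-style parameterization can be built: a parametric spline is drawn through hand-picked track nodes in (soft colour, hard colour, intensity) space, and each observation is assigned the parameter of its nearest point on the interpolated track, rescaled so the hard and soft apexes land at 1 and 2. The node list, apex indices, and the linear rescaling are assumptions for illustration; they are not the exact scheme used here.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def z_track_position(nodes, data, i_hard, i_soft, n_samples=2000):
    """nodes: (N, 3) ordered track points (soft colour, hard colour, intensity),
    normalized the same way as the data.  data: (M, 3) observations.
    i_hard, i_soft: indices of the hard- and soft-apex nodes in `nodes`.
    Returns a rank-number-like position for every observation
    (1 = hard apex, 2 = soft apex)."""
    tck, u = splprep(nodes.T, s=0)            # parametric spline through the nodes
    uu = np.linspace(0.0, 1.0, n_samples)
    track = np.array(splev(uu, tck)).T        # densely sampled interpolated track
    d2 = ((data[:, None, :] - track[None, :, :]) ** 2).sum(axis=-1)
    u_near = uu[d2.argmin(axis=1)]            # nearest track point per observation
    return 1.0 + (u_near - u[i_hard]) / (u[i_soft] - u[i_hard])
```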
Plotting all three groups together reframes group 2 as HB and NB steps, with a transition on the hard apex (similar to behaviors seen in Sco X-1). The HB and NB data in groups 1 and 3 would then also be steps occurring at a wide variety of values.
The multi-valued correlation between the optical intensity and Z track position in group 3 became clearer on shorter timescales. When looking at data covering about an hour, the transition from the NB to the FB occurred at a distinct step at the soft apex, accompanied by increased optical scatter in the FB. The step was much steeper when the system was moving onto the FB rather than off of it.
Timed spectra with 64 s resolution were fit with a two-component model that included a blackbody and a cutoff power law, along with a Gaussian to account for the iron K line. Fitted parameters tended to behave linearly in each branch, with changes in slope occurring at the apexes. Each parameter had a unique behavior moving through the Z track, with the exception of the blackbody temperature and the cutoff energy. In the plots of optical intensity against the fitted parameters, the plots tended to lack consistency between groups. The power-law luminosity was an outlier, however, in that in each group it tended to slide down to lower values as the optical increased. For all of these reasons, the Z track location remains a better predictor of physical parameters than the optical flux.
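For concreteness, a sketch of the two-component spectral model (blackbody plus cutoff power law, with a Gaussian for the iron K line) is given below, written as a photon spectrum in keV. The normalization conventions loosely follow common XSPEC-style forms (bbodyrad, cutoffpl, gauss) and are assumptions; the actual fits were of course done with response-folded models, not this standalone function.

```python
import numpy as np

def photon_spectrum(E, kT_bb, N_bb, Gamma, E_cut, N_pl, E_fe, sig_fe, N_fe):
    """Photons / cm^2 / s / keV at energies E (keV).
    kT_bb, N_bb : blackbody temperature and bbodyrad-like norm ~ (R_km/D_10kpc)^2
    Gamma, E_cut, N_pl : cutoff power-law index, e-folding energy, norm at 1 keV
    E_fe, sig_fe, N_fe : iron-line centroid, width and total line flux."""
    bb = N_bb * 1.0344e-3 * E**2 / np.expm1(E / kT_bb)
    pl = N_pl * E**(-Gamma) * np.exp(-E / E_cut)
    fe = N_fe * np.exp(-0.5 * ((E - E_fe) / sig_fe)**2) / (sig_fe * np.sqrt(2 * np.pi))
    return bb + pl + fe
```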
Figure 1. The CDs for each time-separated group, where the axes are normalized to the NB midpoint (S_z = 1.5) for each group. Each group corresponds to a different date range (see Table 2).
Figure 2. The HIDs for each time-separated group, where the axes are normalized to the NB midpoint (S_z = 1.5) for each group. Each group corresponds to a different date range (see Table 2).
Figure 3. In each subplot, the top panel is the differential optical lightcurve, the middle panel is the optical lightcurve, and the bottom panel is the Z track location. Fig. 3a is the data taken on MJD 54005. The X-ray data contain examples of hour-long features and increased variability while the system is on the FB. Fig. 3b is the data taken on MJD 54004. The X-ray lightcurve contains an example of increased variability while the system is on the FB. The effect described in Section 6 has adversely impacted the branch location plot here, as S_z ranges from 1.25 to 2.15. Fig. 3c is the data taken on MJD 54001. The system is most active here, alternating between count rates of ∼750 and 1100 counts/s/PCU.
Figure 4. A zoomed version of the MJD 54001 data (Fig. 3c) for identification of X-ray lightcurve features.
Figure 5. The X-ray burst seen on MJD 53945. This is a subset of the lightcurve seen in Fig. A1e.
Adjustments made to group data for ( , , ) coordinate mapping. The shifts have been normalized to the NB midpoint, for comparison with Figs. 1 and 2.
Figure 6. Selected S_z values are labeled on the interpolated Z track, along with the transformed data used to create them. The dashed cyan line represents the FB.
Figure 7. An example of the effect seen when the number of active RXTE PCUs changes.
Figure 8. The CCF of MJD 54001 using data that has been Butterworth filtered. The peak occurs at about -3 s of lag. The dashed lines in the lightcurves occur at the same time for both the optical and the X-ray. Note that 0.001 days corresponds to 1.44 minutes.
Figure 9. The CCF of MJD 54002, containing a clear anti-correlation at about 24 s of lag. Note that 0.001 days corresponds to 1.44 minutes.
Figure 10. A CCF from MJD 54005 using data that has been Butterworth filtered. The peak is centered at 0 s of lag. Note that 0.001 days corresponds to 1.44 minutes.
Figure 11. A CCF from MJD 54005. The peak occurs at lags too high to be reprocessing, but the data contain multiple instances of optical peaks followed by X-ray dips. Note that 0.001 days corresponds to 1.44 minutes.
Figure 12. A CCF from MJD 54008, containing only FB data, high-pass Butterworth filtered. This reveals a thin CCF peak centered at ∼3 s. Note that 0.001 days corresponds to 1.44 minutes.
Figure 13. Various group 1 observables and parameters plotted against S_z.
Figure 14. Various group 2 observables and parameters plotted against S_z.
Figure 15. Various group 3 observables and parameters plotted against S_z.
Figure 16. Plots of optical intensity against S_z for all three groups.
Figure 17. The optical-intensity plot of all three groups.
Figure 18. Optical-S_z plots for four separate group 3 observations. Colours move from blue to green for earlier and later times respectively (normalized to each observation).
Figure 19. Group 1 spectral fits. The highlighted points are from the same observation (blue represents earlier times, green represents later times), and show how the correlated behavior at high optical intensity occurs at the soft apex. The blackbody normalization is in units of 10^39 (erg/s)/(10 kpc)^2, and the cutoff power-law luminosity is in units of 10^38 erg/s.
Figure 20. Group 2 spectral fits. The blackbody normalization is in units of 10^39 (erg/s)/(10 kpc)^2, and the cutoff power-law luminosity is in units of 10^38 erg/s. No points are highlighted here because there are no distinct optical correlations.
Figure 21. Group 3 spectral fits. The highlighted points are from the same observation (blue represents earlier times, green represents later times), and show how the correlated behavior at high optical intensity occurs at the soft apex. The blackbody normalization is in units of 10^39 (erg/s)/(10 kpc)^2, and the cutoff power-law luminosity is in units of 10^38 erg/s.
Figure 22. Optical-X-ray intensity plots for all three groups.
Figure 23. The group 1 optical and S_z data plotted against time.
Figure A1 (cont.). The top plot is the differential optical lightcurve (normalized by the median), the middle is the X-ray intensity, and the bottom is the location on the Z track.
Table 1. Time ranges of all data.
Table 2. Time ranges of the group separations used to compensate for secular drift.
"Physics"
] |
The Predicted Mannosyltransferase GT69-2 Antagonizes RFW-1 To Regulate Cell Fusion in Neurospora crassa
Cell wall remodeling is a dynamic process that balances cell wall integrity versus cell wall dissolution. In filamentous fungi, cell wall dissolution is required for somatic cell fusion and conidial separation during asexual sporulation.
between some strains carrying alternative cwr alleles and cells complete the fusion process (29). However, in screening germinated conidia (germlings) from a Δcwr-1 Δcwr-2 mutant (Δcwr-1 ΔNCU01381 Δcwr-2) (Table S1 in the supplemental material) against other wild-type N. crassa isolates, we observed that the Δcwr-1 Δcwr-2 mutant failed to undergo cell fusion when paired with wild-type strain JW224 (Fig. 1A), suggesting the existence of a second locus that regulated cell wall dissolution during somatic cell fusion. To identify this second locus, we performed bulk segregant analysis (BSA) of progeny from a cross between FGSC2489 (the parental laboratory strain of the Δcwr-1 Δcwr-2 mutant) and JW224. Progeny segregated into three classes: (i) progeny that underwent chemotropic interactions with FGSC2489 and JW224, but only completed cell fusion with FGSC2489; (ii) progeny that underwent chemotropic interactions with FGSC2489 and JW224, but only completed cell fusion with JW224; and (iii) progeny that failed to fuse with either parent. This third class of progeny was paired with the Δcwr-1 Δcwr-2 mutant; approximately half of these progeny fused with the Δcwr-1 Δcwr-2 strain, while the other half did not. Genomic DNA from these two progeny pools of the third class, one pool of progeny that fused with the Δcwr-1 Δcwr-2 mutant and a second pool that failed to fuse with the Δcwr-1 Δcwr-2 mutant, was isolated and subjected to whole-genome resequencing. From a comparison of single nucleotide polymorphisms (SNPs) between these two pools, a region spanning approximately 3 Mb on chromosome VI was identified that showed SNP segregation between the Δcwr-1 Δcwr-2 fusion-compatible and the Δcwr-1 Δcwr-2 fusion-incompatible pools of progeny (Fig. 1B). Upon further inspection of this 3 Mb region, mapped read coverage at NCU05915 was significantly lower in the Δcwr-1 Δcwr-2 fusion-incompatible progeny pool than in the Δcwr-1 Δcwr-2 fusion-compatible progeny pool (Fig. S1A).
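As a rough illustration of how such a bulk-segregant scan works, the sketch below computes the alternate-allele frequency of each SNP in the two pools from read counts and smooths the difference in fixed windows along the chromosome. The input arrays, window size, and smoothing choice are hypothetical; the actual pipeline used here is the one cited in Materials and Methods.

```python
import numpy as np

def bsa_delta_snp_index(ref_a, alt_a, ref_b, alt_b, positions, window=100_000):
    """Per-SNP alternate-allele frequencies in two pooled samples and their
    difference, averaged in windows of `window` bp.
    ref_*/alt_*: per-SNP read counts; positions: bp coordinates (same length)."""
    freq_a = alt_a / (ref_a + alt_a)
    freq_b = alt_b / (ref_b + alt_b)
    delta = freq_a - freq_b
    centers, smoothed = [], []
    for start in range(int(positions.min()), int(positions.max()), window):
        mask = (positions >= start) & (positions < start + window)
        if mask.any():
            centers.append(start + window / 2)
            smoothed.append(delta[mask].mean())
    return np.array(centers), np.array(smoothed)
```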
Using assembled genome sequences of 23 N. crassa isolates (26), we analyzed polymorphisms at NCU05915 and linked loci (NCU05914, NCU05916, and NCU05917) (Fig. S2). Among the 23 strains, alleles at NCU05914 and NCU05917 were highly conserved (>90% amino acid identity) (Fig. 1C, Fig. S1B and S2). In contrast, alleles of NCU05916 showed high sequence diversity and fell into two haplogroups among the 23 wild isolates (Fig. 1C, Fig. S1B and S2). We defined the alleles of NCU05916 with high conservation to FGSC2489 (the laboratory strain; amino acid identity > 96%) as haplogroup I, and alleles that were highly similar to each other but different from haplogroup I alleles, and which included JW224, as haplogroup II (Fig. 1C, Fig. S1 and S2). Interestingly, all the strains within haplogroup II lacked the linked locus NCU05915, while within haplogroup I strains, NCU05915 alleles were highly conserved, with above 98% amino acid identity (Fig. 1C, Fig. S1 and S2).
Cell fusion-deficient phenotype of Δgt69-2 is suppressed by mutations in rfw-1. To determine whether gt69-2 and/or rfw-1 was responsible for cell fusion arrest, we generated Δgt69-2 and Δrfw-1 single deletion mutants, and a Δrfw-1 Δgt69-2 double deletion mutant, by replacing gt69-2, rfw-1, or the whole region containing both rfw-1 and gt69-2 with a hygromycin B-resistance cassette in an FGSC2489 background (see Materials and Methods) (Fig. S3A and B). Cell fusion assays were performed by pairing FM4-64-stained mutant germlings with FGSC2489 germlings expressing cytoplasmic green fluorescent protein (GFP). The Δgt69-2 and Δrfw-1 Δgt69-2 germlings underwent chemotropic interactions, but failed to complete cell fusion and cytoplasmic mixing with FGSC2489 germlings (Fig. 2B). In contrast, the Δrfw-1 mutant showed a wild-type cell fusion phenotype when paired with FGSC2489. These data indicated that gt69-2 was required for successful cell fusion with its wild-type parental strain.
To confirm that a deletion of rfw-1 suppresses the cell fusion defect of Δgt69-2, we generated a second double mutant by introducing a Δrfw-1 deletion into a Δgt69-2 mutant by replacing rfw-1 with a nourseothricin-resistance cassette (see Materials and Methods) (Fig. S3A and B). This independently derived double mutant (ΔNCU05915 Δgt69-2) (Table S1) showed an identical slant phenotype to the Δrfw-1 Δgt69-2 mutant (Fig. S3C) and, identical to the Δrfw-1 Δgt69-2 mutant, underwent fusion in self pairings but not when paired with FGSC2489 (Fig. S3D). These data supported the original observation that deletion of rfw-1 suppressed the cell fusion defects of the Δgt69-2 mutant.
To quantify cell fusion frequencies in the mutants relative to wild-type cells, we utilized a flow cytometry method based on a robust postfusion death response in germinated spores that is mediated by genetic differences at sec-9 (29, 30). In brief, FGSC2489 and mutant strains were engineered to carry sec-9^GRD2 at the native sec-9 locus. When germlings carrying incompatible sec-9 alleles undergo cell fusion, cell death is induced within 20 min, which can be used as a proxy for cell fusion frequency using vital dyes and flow cytometry (29, 30). FGSC2489 + FGSC2489^sec-9swap pairings were used as a positive control and showed a high death rate (∼22%), while a negative-control pairing between cells unable to complete cell fusion (FGSC2489 with cwr-1^JW228 + FGSC2489^sec-9swap) showed a low death frequency (∼5%) (Fig. 2D), a value consistent with that previously reported (29). As predicted by microscopic analyses, the Δgt69-2 + FGSC2489^sec-9swap pairings, the Δgt69-2 + Δgt69-2^sec-9swap pairings, and the Δrfw-1 Δgt69-2 + FGSC2489^sec-9swap pairings all showed a low death frequency (2 to 5%) (Fig. 2D), consistent with a block in cell fusion. In line with the microscopy results, the Δrfw-1 + FGSC2489^sec-9swap pairings and the Δrfw-1 + Δrfw-1^sec-9swap pairings both showed a high level of death frequency, showing that cells lacking rfw-1 are not affected in cell fusion (Fig. 2D). The Δrfw-1 Δgt69-2 + Δrfw-1 Δgt69-2^sec-9swap self-pairings also showed a high death frequency (Fig. 2D), confirming that the lack of rfw-1 suppressed the cell fusion defect of the Δgt69-2 mutant. Additionally, these data also showed that neither GT69-2 nor RFW-1 was essential for cell fusion, as Δrfw-1 Δgt69-2 germlings showed self-fusion frequencies that were slightly higher than those of parental WT germlings (Fig. 2D).
Figure 1 legend (partial): ... Table S1) paired with FM4-64-stained FGSC2489 (the parent of the Δcwr-1 Δcwr-2 mutant), or Δcwr-1 Δcwr-2 (GFP) germlings blocked in cell fusion when paired with FM4-64-stained wild isolate JW224, by epifluorescence microscopy. (B) SNP segregation on linkage group VI (from 1.2 Mb to 4.2 Mb) after bulk segregant analysis and sequencing of two pools of genomic DNA from FGSC2489 fusion-compatible versus fusion-incompatible progeny from a cross between FGSC2489 and JW224. Blue line: SNP frequency in pooled segregants compatible with FGSC2489. Red line: SNP frequencies in pooled segregants incompatible with FGSC2489. Black box shows the region of the centromere. Red arrow shows the position of gt69-2 and rfw-1. (C) Genomic organization of gt69-2 (NCU05916) linked loci in FGSC2489 and wild isolates. The percentage identity of the predicted protein sequences from sequenced wild isolates was calculated using FGSC2489 as the reference. The strains lacking NCU05915 (rfw-1) are marked with a dash.
Genetic interactions between gt69-2 and rfw-1. The Δgt69-2 mutant showed a lower height of aerial hyphae compared to FGSC2489 (Fig. 3A), a phenotype that has been observed in other cell fusion mutants (21, 32, 33). However, this phenotype was not observed in the Δrfw-1 or Δrfw-1 Δgt69-2 mutant strains, indicating that, analogously to the cell fusion process, the short aerial hyphae phenotype of Δgt69-2 was suppressed by deletion of rfw-1. To test whether the Δgt69-2 mutant showed a lower growth rate, we inoculated hyphal plugs or conidial suspensions of each strain on Vogel's minimal medium (VMM) agar plates and measured the diameters of colonies up to 2 days postinoculation. When a conidial suspension was inoculated onto plates, the Δgt69-2 mutant showed a smaller colony diameter and fewer aerial hyphae compared to FGSC2489 (Fig. 3B and C). By plotting colony diameter over time, the Δgt69-2 mutant showed a lower growth rate for 24 h, consistent with a lag in colony establishment, a phenotype that has also been observed in other cell fusion mutants (21) (Fig. 3C). In contrast, with hyphal plug inoculations (that is, after the colony was already established), the Δgt69-2 mutant and FGSC2489 showed a similar growth rate (Fig. 3C). These data indicated that gt69-2 was dispensable for the growth rate of a mycelial colony, but important for colony establishment via germling fusion.
Cells lacking gt69-2 affect oscillation of MAK-2 and are blocked in cell wall dissolution. To assess when the cell fusion defect occurred in Δgt69-2 cells, we first used transmission electron microscopy to determine whether the fusion defect in Δgt69-2 cells was due to a failure in cell wall dissolution or in membrane merger. In FGSC2489 + FGSC2489 samples, cell wall and plasma membrane dissolution at the point of contact between germling fusion pairs was easily observed (Fig. 5A). In contrast, in Δgt69-2 + Δgt69-2 pairings, we failed to find cell wall dissolution at contact points (Fig. 5A), and accumulation of cell wall material at cell-cell contact sites was not observed, in contrast to cell pairings between incompatible cwr strains (29). These data indicated that the block of cell fusion in the Δgt69-2 mutant was caused by a failure of cell wall breakdown upon contact between Δgt69-2 cells.
During chemotropic interactions between compatible cells, the mitogen-activated protein kinase (MAPK) signal transduction protein complex (NRC-1, MEK-2, MAK-2, and the scaffold protein HAM-5) is recruited to conidial anastomosis tubes (CATs) (19). The MAK-2 complex assembles and disassembles at CAT tips every 8 to 10 min; chemical inhibition of the phosphorylation activity of MAK-2 results in immediate cessation of chemotropic growth (20). A second protein complex bearing SOFT (SO) also assembles and disassembles at CAT tips, but perfectly out of phase with the MAK-2 complex (20). FGSC2489 (MAK-2-GFP) + FGSC2489 (SOFT-dsRED) cells display oscillation of MAK-2 and SOFT to CATs during chemotropic interactions until physical contact. Previously, we showed that in cell pairings between incompatible cwr strains, MAK-2 and SO continued to oscillate at the contact point, consistent with an inability of cwr-incompatible cells to transit from chemotropic growth to cell wall dissolution (29).
To further explore the block in self cell fusion in the Δgt69-2 cells, we analyzed MAK-2-GFP localization in Δrfw-1 (mak-2-gfp) germlings, in Δgt69-2 (mak-2-gfp) germlings, and in Δrfw-1 Δgt69-2 (mak-2-gfp) germlings. In wild-type pairings, MAK-2-GFP shows dynamic localization to CATs during chemotropic interactions, localizing to one CAT tip while disappearing from its partner cell every ∼4.5 min (Fig. 5B). Consistent with microscopic observations showing wild-type levels of cell fusion, the Δrfw-1 cells showed normal dynamics of MAK-2 oscillation during chemotropic interactions (Fig. 5C). In pairings between Δgt69-2 cells, oscillation of MAK-2 was observed during chemotropic interactions, but when Δgt69-2 germlings were in close proximity, MAK-2 localization to CATs was no longer observed (Fig. 5D). Additionally, MAK-2 localization at the contact point between Δgt69-2 germlings was not observed, which is apparent in wild-type pairings. These data indicated that Δgt69-2 germlings were affected during interactions when cells were in close proximity and in subsequent cell wall dissolution. Importantly, normal MAK-2-GFP dynamics during chemotropic interactions were restored in self pairings of Δrfw-1 Δgt69-2 germlings, consistent with the suppression of the cell fusion defect of the Δgt69-2 cells by deletion of rfw-1 (Fig. 5E).
GT69-2 and RFW-1 localization, overexpression phenotypes, and sensitivity to cell wall stress. Both GT69-2 and RFW-1 have predicted signal peptides. To characterize the subcellular localization of GT69-2 and RFW-1, we fused GFP to the N-terminal region of the predicted proteins immediately after the predicted signal peptides. The GFP-fused gt69-2 and rfw-1 were driven by the ccg-1 promoter and expressed in Δgt69-2 and Δrfw-1 cells, respectively; GFP fluorescence was not observed in constructs using the gt69-2 or rfw-1 native promoters. The ccg-1-regulated gfp-gt69-2 construct fully complemented the growth and cell fusion defects of the Δgt69-2 mutant (Fig. S3E). Both GFP-GT69-2 and GFP-RFW-1 showed a similar subcellular localization pattern as numerous fluorescent punctate structures in hyphal compartments (Fig. 6A and B), with a similar localization pattern in germlings (Fig. S4). It is likely that increased protein levels from ccg-1-driven gt69-2 and rfw-1 expression resulted in a more abundant localization to the Golgi. Localization of GFP-GT69-2 or GFP-RFW-1 to puncta within the cell did not change in germlings undergoing chemotropic interactions or cell fusion. To determine which organelles the puncta were, we coexpressed GFP-GT69-2 or GFP-RFW-1 with the Golgi marker mCherry-VPS-52 or the ER marker mCherry-ERV-25 in heterokaryotic strains. Colocalization of GFP-GT69-2 or GFP-RFW-1 with the ER marker ERV-25 was not observed; however, many of the GFP-GT69-2 and GFP-RFW-1 puncta colocalized with mCherry-VPS-52 (Fig. 6A and B). These data suggested that the punctate structures to which GFP-GT69-2 and GFP-RFW-1 localized were Golgi compartments.
The gt69-2 locus encodes an alpha-1,3-mannosyltransferase predicted to transfer a mannosyl group to either a carbohydrate or a lipid. We therefore hypothesized that loss of gt69-2 might affect aspects of cell wall biosynthesis. To test this hypothesis, we assessed growth of the Δrfw-1, Δgt69-2, and Δrfw-1 Δgt69-2 mutants on agar media containing different cell wall stress drugs, including the β-1,3-glucan synthase inhibitor caspofungin and two different anionic dyes that bind chitin and block chitin-glucan cross-linking, calcofluor white and Congo red. Similar to the parental strain FGSC2489, the Δrfw-1 and Δrfw-1 Δgt69-2 mutants were mildly sensitive to all three drugs (Fig. S5). Consistent with conidial inoculations, the Δgt69-2 mutant showed a slight growth defect in drug-free medium. However, these defects were not exacerbated on caspofungin, calcofluor white, or Congo red, indicating that the absence of gt69-2 did not result in major cell wall defects.
Alleles at gt69-2 and rfw-1 show evidence of balancing selection. Genes that regulate allorecognition, such as the major histocompatibility complex (MHC) in humans, the S locus in plants, allorecognition loci in colonial ascidians, and heterokaryon incompatibility loci in fungi, often show evidence of balancing selection, which includes the presence of discrete haplotypes in populations, nearly equal frequency of allelic classes in population samples, and transspecies polymorphisms (26, 34-36). In N. crassa populations, gt69-2 alleles fell into two discrete haplotypes, suggesting a role in allorecognition (Fig. 2A). In strains containing rfw-1, the gene was always linked with gt69-2 and was highly conserved among isolates. Phylogenetic trees were constructed to test whether allelic polymorphisms at rfw-1 (NCU05915) and gt69-2 (NCU05916) were retained among different Neurospora species. Consistent with their potential role in allorecognition, the gt69-2 alleles clustered by haplogroup rather than by species (Fig. 7B). The gt69-2 alleles from Neurospora discreta and Neurospora tetrasperma isolates grouped into the same two N. crassa haplogroups. Similar to N. crassa, the haplogroup I gt69-2 alleles in both N. discreta and N. tetrasperma were linked to rfw-1, while all strains within haplogroup II lacked rfw-1. The transspecies polymorphisms observed in the gt69-2 alleles suggested that this locus was under balancing selection and that allelic polymorphism at this locus predates the divergence of these species. We tested this hypothesis by calculating Tajima's D values for the gt69-2 alleles. The high, positive, and significant Tajima's D value calculated for gt69-2 (Tajima's D = 2.07708; P < 0.05), but not for NCU05914 (Tajima's D = 0.73738; P > 0.1) or NCU05917 (Tajima's D = 1.07540; P > 0.1), indicated that gt69-2 is under balancing selection in Neurospora species.
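For readers unfamiliar with the statistic, a minimal sketch of Tajima's D from summary values (number of sequences, segregating sites, and mean pairwise differences) is shown below, using the standard Tajima (1989) constants. The values and significance thresholds reported above were obtained with dedicated population-genetics software, not this snippet.

```python
import numpy as np

def tajimas_d(n, S, pi):
    """Tajima's D from n sequences, S segregating sites and mean pairwise
    differences pi, using the usual Tajima (1989) normalizing constants."""
    i = np.arange(1, n)
    a1, a2 = (1 / i).sum(), (1 / i**2).sum()
    b1 = (n + 1) / (3 * (n - 1))
    b2 = 2 * (n**2 + n + 3) / (9 * n * (n - 1))
    c1 = b1 - 1 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1, e2 = c1 / a1, c2 / (a1**2 + a2)
    return (pi - S / a1) / np.sqrt(e1 * S + e2 * S * (S - 1))
```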
To assess whether allelic polymorphisms were present in other species of fungi, we analyzed the gt69-2 and rfw-1 homologs among various species of Fusarium, in particular Fusarium oxysporum, as genome sequences for multiple isolates are available (Table S3). In Fusarium species, most strains have more than one paralog of gt69-2 and rfw-1 (Fig. S6). However, in strains of different species of Fusarium, if rfw-1 was present, it was always linked with gt69-2, although gt69-2 loci were identified that lacked a linked rfw-1. In a sample of F. oxysporum isolates, although variation was observed in the number of gt69-2 and rfw-1 homologs, allelic polymorphisms and discrete haplotypes were not observed (Fig. S6B).
DISCUSSION
In this study, we identified a linked gene pair, gt69-2 and rfw-1, that functions to regulate somatic cell fusion in N. crassa. The gt69-2 locus is predicted to encode a CAP59-like α-1,3-mannosyltransferase and, based on its similarity to C. neoformans CMT1, to catalyze the transfer of mannose from GDP-mannose to α-1,3-linked mannose disaccharides (31). A paralog of CMT1 in C. neoformans, CAP59, is required for capsule synthesis by playing a role in the export of the capsular polysaccharide glucuronoxylomannan (31). Both gt69-2 and CAP59 orthologs belong to glycosyltransferase family 69 and contain the conserved CAP59 family α-1,3-mannosyltransferase catalytic domain. In Aspergillus fumigatus, the Golgi-localized protein ClpA adds an α-1,3-linked mannose to glycosylphosphatidylinositol (GPI) anchors (37); clpA is a homolog of Cap59. GPI anchors are important for anchoring cell surface proteins to the plasma membrane/cell wall (38). The attachment of the GPI anchor occurs in the ER, but the understanding of the maturation of the GPI anchor that occurs in the Golgi is limited.
We hypothesized that GT69-2 functions to modify secreted protein(s), such as GPI-anchored proteins, destined for the cell wall or plasma membrane, or that a small fraction of GT69-2 is trafficked to the cell surface during chemotropic interactions, modifying proteins important for late stages of MAK-2 signaling and cell wall remodeling/dissolution during the process of cell fusion. A wrinkle in this hypothesis was the observation that loss-of-function mutations in rfw-1 suppressed the cell fusion defect of the Δgt69-2 mutant; Δgt69-2 Δrfw-1 mutants were fusion competent. These data indicated that neither GT69-2 nor RFW-1 is essential for cell fusion in N. crassa, but rather that, in the absence of GT69-2, RFW-1 functions to block cell fusion. We predict that in the absence of GT69-2, RFW-1 may inappropriately modify a protein or block secretion of a protein needed for mediating the transition from chemotropic interactions to cell wall dissolution, resulting in the loss of MAK-2 localization at cell contact sites and cessation of the cell fusion process. Localization of MAK-2 to the fusion pore as cell wall dissolution and membrane merger are occurring has been reported previously (20), and MAK-2 kinase activity is required for cell wall dissolution (39).
Consistent with the above hypothesis, overexpression of rfw-1 resulted in a block in cell fusion, even in the presence of gt69-2. The rfw-1 overexpression strain also showed a conidial separation deficiency associated with an inability to remove cell wall material at the double-doublet stage of conidial development. The phenotype of the rfw-1 overexpression strain most closely resembles the csp-2 mutant in N. crassa, where csp-2 encodes a homolog of grainy head-like transcription factors (40). An inability to remove the thin connectives between adjacent conidia has been associated with a decrease in autocatalytic activity of the cell wall, hypothesized to be due to a lack of secreted enzymes, such as chitinases (41); a gene encoding a chitinase and additional proteins associated with cell wall structure were identified as transcriptional targets of CSP-2 (40). Two cell wall glycosyl hydrolases, the CGL-1 β-1,3-glucanase and the NAG-1 exochitinase, function in remodeling the cell wall between adjacent conidia to facilitate conidia formation and dissemination (42). Two additional predicted GPI-anchored proteins, BGT-1 and BGT-2, encoding predicted β-1,3 endoglucanases (GH17 family) (43), localize to double-doublets in developing conidia and also to fusion points of germlings and hyphae (44). The Δbgt-1 and Δbgt-2 mutants display a deficiency in conidial separation, but do not display a cell fusion defect (44). Other mutants in N. crassa that show defects in conidial separation do show defects in cell fusion, however, including loss-of-function mutations in whi-2, csp-6, and amph-1 (23, 32). CSP-6 and WHI-2 physically interact (45), and WHI-2, which localizes to the cell periphery, is required for signaling during chemotropic interactions via the MAK-2 MAPK pathway (23). Future studies to identify targets of RFW-1 and GT69-2 should help to understand the molecular basis of the cell wall remodeling process regulated by the RFW-1/GT69-2 system.
In the genomes of Fusarium and Neurospora species, all predicted rfw-1 genes were always linked to gt69-2 genes, although homologs of gt69-2 occurred without a linked rfw-1 gene (Fig. S6). These observations suggest that GT69-2 and RFW-1 also function as a pair in species other than N. crassa. Coevolution of linked genes to maintain physical or functional interactions of their products occurs via coordinated sequence changes between the gene pairs (46). In Neurospora species, gt69-2 orthologs found in two haplogroups showed evidence of balancing selection, similar to other systems regulating allorecognition (25, 27, 29, 30, 47). However, expression of a gt69-2^JW224 (haplogroup II) allele in a gt69-2^FGSC2489 (haplogroup I) strain was insufficient to activate allorecognition and block cell fusion. The gt69-2^JW224 allele was fully functional, as it fully complemented the fusion-deficiency phenotype of a Δgt69-2 mutant. One possible explanation is that the gt69-2 alleles from haplogroup II have adapted to the loss of rfw-1, while haplogroup I strains need both gt69-2 and rfw-1 to correctly modify their targets in the Golgi. Alternatively, it is possible that the evolutionary forces driving balancing selection at gt69-2/rfw-1 do not reflect the function of these two proteins in cell fusion/conidial separation. Further work to identify the targets of the GT69-2/RFW-1 pair from haplogroup I relative to GT69-2 from haplogroup II will help to resolve this question, in addition to identifying cell membrane/cell wall-associated proteins required for late functions of MAK-2 signaling involved in cell wall dissolution and membrane merger during somatic cell fusion.
MATERIALS AND METHODS
Strains and growth conditions. Standard procedures and protocols for N. crassa can be found on the Neurospora homepage at the Fungal Genetics Stock Center (FGSC, www.fgsc.net/Neurospora/NeurosporaProtocolGuide.htm). Vogel's minimal medium (VMM) (with supplements, if required) was used to culture all strains (48). Crosses were performed on Westergaard's synthetic crossing medium (49). All the strains used in this study are listed in Table S1 in the supplemental material. The wild N. crassa isolates from a Louisiana population have been previously described (25, 26, 50). FGSC2489 served as the wild-type (WT) control for all experiments and the parental strain for gene engineering, unless stated otherwise.
Strain construction. All gene deletion constructs were generated by double-joint PCR (25, 51). The deletion mutants were obtained as described (25, 29). For the Δrfw-1 Δgt69-2 double mutant, the whole region containing both NCU05915 and NCU05916 was replaced with the hygromycin B-resistance cassette in FGSC2489. For the independently derived ΔNCU05915 Δgt69-2 double mutant, rfw-1 was replaced with the nourseothricin-resistance cassette (52) in the Δgt69-2 mutant. Putative deletion mutants were screened for drug resistance and further confirmed by PCR (Fig. S3A and B). The primers are listed in Table S2.
The FGSC2489^sec-9swap strain, which was engineered to carry sec-9^GRD2 at the native sec-9 locus, has been previously described (30). The Δrfw-1 and/or Δgt69-2 mutants were crossed with FGSC2489^sec-9swap to obtain the resulting sec-9swap strains.
Bulk segregant analysis. Bulk segregant analysis (BSA) followed by whole-genome resequencing was performed as previously described (25). Approximately 60 ng of genomic DNA from ∼49 progeny strains in each DNA pool was used for library preparation and sequencing. All paired-end libraries were sequenced on a HiSeq2000 sequencing platform using standard Illumina operating procedures (QB3 Genomics Lab, University of California, Berkeley).
Microscopy. Cell fusion experiments were performed as described (25). Cytoplasmic or histone H1-tagged GFP-expressing cells and FM4-64-stained (Thermo Fisher Scientific) cells were mixed in a 1:1 proportion and incubated on VMM plates at 30°C in the dark for 4 h. Cytoplasmic mixing was examined with a Zeiss Axioskop 2 microscope equipped with a QImaging Retiga-2000R camera (Surrey) using a 40×/1.30 Plan-Neofluar oil immersion objective and the iVision Mac 4.5 software.
Heterokaryotic strains bearing both GFP and mCherry fluorescent proteins were prepared as described (25) for colocalization analysis. Images were taken with a Leica SD6000 confocal microscope equipped with a Yokogawa CSU-X1 spinning disk head, and a 488-nm or 561-nm laser controlled by Metamorph software.
For MAK-2 oscillation experiments, conidia from strains expressing MAK-2-GFP were prepared for microscopy as described (25). Time-lapse microscopy was performed using the confocal microscope system as described above. Images were captured at 30 s intervals. The software ImageJ was used for image processing. Fluorescence signals were quantified as previously described (20).
Transmission electron microscopy. Conidia were inoculated in 100 ml of liquid VMM at a final concentration of 10^6 conidia/ml for 5 h at 30°C (shaking at 220 rpm for 2.5 h and standing for 2.5 h). Cells were harvested by centrifugation and then fixed with electron microscopy fix buffer (2% glutaraldehyde, 4% paraformaldehyde, 0.04 M phosphate buffer [pH 7.0]), followed by 2% KMnO4 treatment. Samples were dehydrated using a graded ethanol series before embedding the samples in resin.
Flow cytometry. Flow cytometry was performed as described (29). For each experiment, 20,000 events per sample were recorded on a BD LSR Fortessa X-20 (BD Biosciences, Franklin Lakes, NJ, USA). Cell death frequencies were analyzed with a specifically designed MATLAB script (29). Each experiment was performed at least three times.
Growth assays. To evaluate growth rate, a hyphal plug (1 mm^2) or 5 µl of a conidial suspension (10^6 conidia/ml) was inoculated onto the center of 14.2-cm diameter petri dishes and grown at 30°C in constant dark. The colony diameter was recorded twice a day.
Cell wall stress assays were conducted on VMM + FGS with 1.3 µg/ml caspofungin, 1.5 mg/ml calcofluor white, or 1 mg/ml Congo red as described (55). A 1:5 dilution series was prepared starting with a concentration of 10^6 conidia/ml. Conidial solutions were then spotted onto freshly poured plates at 5 µl per spot.
SUPPLEMENTAL MATERIAL
Supplemental material is available online only.
"Biology"
] |
Amplitude analysis and branching fraction measurement of Bs->J/\psi K+K-
An amplitude analysis of the final state structure in the Bs->J/\psi K+K- decay mode is performed using 1.0/fb of data collected by the LHCb experiment in 7 TeV center-of-mass energy pp collisions produced by the LHC. A modified Dalitz plot analysis of the final state is performed using both the invariant mass spectra and the decay angular distributions. Resonant structures are observed in the K+K- mass spectrum as well as a significant non-resonant S-wave contribution. The largest resonant component is the \phi(1020), accompanied by f0(980), f'2(1525), and four additional resonances. The overall branching fraction is measured to be B(Bs->J/\psi K+K-)=(7.70 +/-0.08 +/- 0.39 +/- 0.60)x 10^(-4), where the first uncertainty is statistical, the second systematic, and the third due to the ratio of the number of Bs to B- mesons produced. The mass and width of the f'2(1525) are measured to be 1522.2 +/- 2.8^{+5.3}_{-2.0} MeV and 84 +/- 6^{+10}_{-5} MeV, respectively. The final state fractions of the other resonant states are also reported.
Introduction
The study of B0s decays to J/ψh+h−, where h is either a pion or kaon, has been used to measure mixing-induced CP violation in B0s decays [1-7].‡ In order to best exploit these decays a better understanding of the final state composition is necessary. This study has been reported for the B0s → J/ψπ+π− channel [8]. Here we perform a similar analysis for B0s → J/ψK+K−. While a large φ(1020) contribution is well known [9] and the f′2(1525) component has been recently observed [10] and confirmed [11], other components have not heretofore been identified, including the source of S-wave contributions [12]. The tree-level Feynman diagram for the process is shown in Fig. 1. In this paper the J/ψK+ and K+K− mass spectra and decay angular distributions are used to study resonant and non-resonant structures. This differs from a classical "Dalitz plot" analysis [13] since the J/ψ meson has spin-1, and its three helicity amplitudes must be considered.
Data sample and detector
The event sample is obtained using 1.0 fb −1 of integrated luminosity collected with the LHCb detector [14] using pp collisions at a center-of-mass energy of 7 TeV.The detector is a single-arm forward spectrometer covering the pseudorapidity range 2 < η < 5, designed for the study of particles containing b or c quarks.Components include a high precision tracking system consisting of a silicon-strip vertex detector surrounding the pp interaction region, a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about 4 Tm, and three stations of silicon-strip detectors and straw drift-tubes placed downstream.The combined tracking system has momentum § resolution ∆p/p that varies from 0.4% at 5 GeV to 0.6% at 100 GeV.The impact parameter (IP) is defined as the minimum distance of approach of the track with respect to the primary vertex.For tracks with large transverse momentum with respect to the proton beam direction, the IP resolution is approximately 20 µm.Charged hadrons are identified using two ring-imaging Cherenkov detectors.Photon, electron and hadron candidates are ‡ Mention of a particular mode implies use of its charge conjugate throughout this paper.
§ We work in units where c = 1.
identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic calorimeter and a hadronic calorimeter.Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers.
The trigger [15] consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage that applies a full event reconstruction.Events selected for this analysis are triggered by a J/ψ → µ + µ − decay, where the J/ψ is required at the software level to be consistent with coming from the decay of a B 0 s meson by use either of IP requirements or detachment of the J/ψ from the primary vertex.Monte Carlo simulations are performed using Pythia [16] with the specific tuning given in Ref. [17], and the LHCb detector description based on Geant4 [18] described in Ref. [19].Decays of B mesons are based on EvtGen [20].
Signal selection and backgrounds
We select B0s → J/ψK+K− candidates trying to simultaneously maximize the signal yield and reduce the background. Candidate J/ψ → µ+µ− decays are combined with a pair of kaon candidates of opposite charge, requiring that all four tracks are consistent with coming from a common decay point. To be considered a J/ψ → µ+µ− candidate, particles identified as muons of opposite charge are required to have transverse momentum, pT, greater than 500 MeV, and to form a vertex with fit χ² per number of degrees of freedom (ndf) less than 11. These requirements give rise to a large J/ψ signal over a small background [21]. Only candidates with a dimuon invariant mass between −48 MeV and +43 MeV relative to the observed J/ψ mass peak are selected. The asymmetric requirement is due to final-state electromagnetic radiation. The two muons are subsequently kinematically constrained to the known J/ψ mass [9].
Our ring-imaging Cherenkov system allows for the possibility of positively identifying kaon candidates. Charged tracks produce Cherenkov photons whose emission angles are compared with those expected for electrons, pions, kaons or protons, and a likelihood for each species is then computed. To identify a particular species, the difference between the logarithm of the likelihoods for two particle hypotheses (DLL) is computed. There are two criteria used: loose corresponds to DLL(K−π) > 0, while tight has DLL(K−π) > 10 and DLL(K−p) > −3. Unless stated otherwise, we require the tight criterion for kaon selection.
We select candidate K+K− combinations if each particle is inconsistent with having been produced at the primary vertex. For this test we require that the χ² formed by using the hypothesis that the IP is zero be greater than 9 for each track. Furthermore, each kaon must have pT > 250 MeV and the scalar sum of the pT of the kaon candidates must be greater than 900 MeV. To select B0s candidates we further require that the two kaon candidates form a vertex with χ² < 10, and that they form a candidate B0s vertex with the J/ψ where the vertex fit χ²/ndf < 5. We require that this B0s vertex be more than 1.5 mm from the primary vertex, and the angle between the B0s momentum vector and the vector from the primary vertex to the B0s vertex must be less than 11.8 mrad.
The B0s candidate invariant mass distribution is shown in Fig. 2. The vertical lines indicate the signal and sideband regions, where the signal region extends to ±20 MeV around the nominal B0s mass [9] and the sidebands extend from 35 MeV to 60 MeV on either side of the peak. The small peak near 5280 MeV results from B0 decays, and will be subject to future investigation. The background consists of combinations of tracks, which have a smooth mass shape through the J/ψK+K− region, and peaking contributions caused by the reflection of specific decay modes where a pion is misidentified as a kaon. The reflection background that arises from the decay B0 → J/ψK−π+, where the π+ is misidentified as a K+, is determined from the number of B0 candidates in the control region 25 to 200 MeV above the B0s mass peak. For each of the candidates in the J/ψK+K− control region, we reassign each of the two kaons in turn to the pion mass hypothesis. The resulting J/ψKπ invariant mass distribution is shown in Fig. 3. The peak at the B0 mass has 906 ± 51 candidates, determined by fitting the data to a Gaussian function for the signal and a polynomial function for the background. From these events we estimate the number in the B0s signal region, based on a simulation of the shape of the reflected distribution as a function of J/ψK−K+ mass. Using simulated B0 → J/ψK*0(892) and B0 → J/ψK*2(1430) samples, we calculate 309 ± 17 reflection candidates within ±20 MeV of the B0s peak. This number is used as a constraint in the mass fit described below.
To determine the number of B0s signal candidates we perform a fit to the candidate J/ψK+K− invariant mass spectrum shown in Fig. 4. The fit function is the sum of the B0s signal component, combinatorial background, and the contribution from the B0 → J/ψK−π+ reflections. The signal is modeled by a double-Gaussian function with a common mean. The combinatorial background is described by a linear function. The reflection background is constrained as described above. The mass fit gives 19,195 ± 150 signal together with 894 ± 24 combinatorial background candidates within ±20 MeV of the B0s mass peak. We use the decay B− → J/ψK− as the normalization channel for branching fraction determinations. The selection criteria are similar to those used for J/ψK+K−, except for particle identification, as here a loose kaon identification criterion is used. Figure 5 shows the J/ψK− mass distribution. The signal is fit with a double-Gaussian function and a linear function is used to fit the combinatorial background. There are 342,786 ± 661 signal and 10,195 ± 134 background candidates within ±20 MeV of the B− peak.
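A minimal sketch of this kind of mass fit is shown below: a double Gaussian with a common mean for the signal plus a straight line for the combinatorial background, fitted to a binned mass spectrum. The binning, starting values, and the omission of the constrained reflection component are simplifications; this is not the fit code behind the numbers quoted above.

```python
import numpy as np
from scipy.optimize import curve_fit

BIN_W = 1.4  # MeV per bin, hypothetical binning

def mass_model(m, mu, sig1, sig2, f1, n_sig, b0, b1):
    """Counts per bin: double-Gaussian (common mean) signal + linear background.
    The B0 -> J/psi K pi reflection, constrained separately in the real fit,
    is omitted here."""
    g1 = np.exp(-0.5 * ((m - mu) / sig1) ** 2) / (sig1 * np.sqrt(2 * np.pi))
    g2 = np.exp(-0.5 * ((m - mu) / sig2) ** 2) / (sig2 * np.sqrt(2 * np.pi))
    signal = n_sig * (f1 * g1 + (1 - f1) * g2)
    background = b0 + b1 * (m - mu)
    return BIN_W * (signal + background)

# Hypothetical usage:
# counts, edges = np.histogram(m_jpsikk, bins=100, range=(5297.0, 5437.0))
# centres = 0.5 * (edges[:-1] + edges[1:])
# p0 = [5367.0, 6.0, 12.0, 0.7, 19000.0, 10.0, 0.0]
# popt, pcov = curve_fit(mass_model, centres, counts, p0=p0,
#                        sigma=np.sqrt(np.maximum(counts, 1)))
```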
Analysis formalism
One of the goals of this analysis is to determine the intermediate states in B0s → J/ψK+K− decay within the context of an isobar model [22, 23], where we sum the resonant and non-resonant components, testing whether they explain the invariant mass squared and angular distributions. We also determine the absolute branching fractions of the B0s → J/ψφ(1020) and B0s → J/ψf′2(1525) final states and the mass and width of the f′2(1525) resonance. Another important goal is to understand the S-wave content in the φ(1020) mass region.
Four variables completely describe the decay B0s → J/ψK+K− with J/ψ → µ+µ−. Two are the invariant mass squared of J/ψK+, s12 ≡ m²(J/ψK+), and the invariant mass squared of K+K−, s23 ≡ m²(K+K−). The other two are the J/ψ helicity angle, θJ/ψ, which is the angle of the µ+ in the J/ψ rest frame with respect to the J/ψ direction in the B0s rest frame, and the angle between the J/ψ and K+K− decay planes, χ, in the B0s rest frame. To simplify the probability density function (PDF), we analyze the decay process after integrating over the angular variable χ, which eliminates several interference terms.
The model for B0s → J/ψK+K−
In order to perform an amplitude analysis a PDF must be constructed that correctly models the dynamical and kinematic properties of the decay. The PDF is separated into two components, one describing signal, S, and the other background, B. The overall PDF is given by their sum, where ε is the detection efficiency. The background is described by the sum of combinatorial background, C, and reflection, R, functions, where f_com and f_refl are the fractions of the combinatorial background and reflection, respectively, in the fitted region. The fractions f_com and f_refl obtained from the mass fit are fixed for the subsequent analysis.
The normalization factors are obtained by integrating the corresponding terms over the fit variables. This formalism is similar to that used by Belle in their analysis of B0 → K−π+χc1 [24], and later used by LHCb for the analysis of B0s → J/ψπ+π− [8]. The invariant mass squared of J/ψK+ versus that of K+K− is shown in Fig. 6 for B0s → J/ψK+K− candidates. No structure is seen in m²(J/ψK+). There are, however, visible horizontal bands in the K+K− mass squared spectrum, the most prominent of which correspond to the φ(1020) and f′2(1525) resonances. These and other structures in m²(K+K−) are now examined.
The signal function is given by the coherent sum over the resonant states that decay into K+K−, where A^{Ri}_λ(s12, s23, θJ/ψ) describes the decay amplitude via an intermediate resonance state Ri with helicity λ. Note that the J/ψ has the same helicity as the intermediate K+K− resonance. Each Ri has an associated amplitude strength a^{Ri}_λ and a phase φ^{Ri}_λ for each helicity state λ. The amplitude for each resonance Ri is built from the following quantities: P_R is the momentum of either of the two kaons in the di-kaon rest frame, m_B is the B0s mass, P_B is the magnitude of the J/ψ three-momentum in the B0s rest frame, and F^(L_B)_B and F^(L_R)_R are the B0s meson and Ri resonance decay form factors. The orbital angular momentum between the J/ψ and the K+K− system is given by L_B, and the orbital angular momentum in the K+K− decay is given by L_R; the latter is the same as the spin of the K+K− system. Since the parent B0s has spin-0 and the J/ψ is a vector, when the K+K− system forms a spin-0 resonance, L_B = 1 and L_R = 0. For K+K− resonances with non-zero spin, L_B can be 0, 1 or 2 (1, 2 or 3) for L_R = 1 (2), and so on. We take the lowest L_B as the default value and consider the other possibilities in the systematic uncertainty.
The Blatt-Weisskopf barrier factors F^(L_B)_B and F^(L_R)_R [25] are functions of z and z0 (the conventional L = 0, 1, 2 forms are sketched below). For the B meson z = r²P_B², where r, the hadron scale, is taken as 5.0 GeV−1; for the R resonance z = r²P_R², and r is taken as 1.5 GeV−1 [26]. In both cases z0 = r²P0², where P0 is the decay daughter momentum at the pole mass; for the B0s decay the J/ψ momentum is used, while for the R resonances the kaon momentum is used.
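The conventional L = 0, 1, 2 Blatt-Weisskopf forms, written in terms of z = (r p)² at the running and pole momenta, are sketched below. These are the standard textbook expressions and are meant only to illustrate what the F factors look like; the exact normalization used in the analysis should be taken from Ref. [25].

```python
import numpy as np

def blatt_weisskopf(L, p, p0, r):
    """Conventional Blatt-Weisskopf barrier factors for L = 0, 1, 2.
    p, p0: daughter momentum at the current and pole mass (GeV);
    r: hadron scale in GeV^-1 (5.0 for the B, 1.5 for the resonance in the text)."""
    z, z0 = (r * p) ** 2, (r * p0) ** 2
    if L == 0:
        return 1.0
    if L == 1:
        return np.sqrt((1 + z0) / (1 + z))
    if L == 2:
        return np.sqrt((z0**2 + 3 * z0 + 9) / (z**2 + 3 * z + 9))
    raise ValueError("only L <= 2 implemented")
```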
In the helicity formalism, the angular term T_λ(θKK) is defined in terms of the Wigner d-function, where J is the resonance spin and θKK is the helicity angle of the K+ in the K+K− rest frame with respect to the K+K− direction in the B0s rest frame; θKK may be calculated directly from the other variables. There is also a J/ψ helicity-dependent term, Θ_λ(θJ/ψ). The mass squared shape of each resonance R is described by the function A_R(s23). In most cases this is a Breit-Wigner (BW) amplitude. When a decay channel opens close to the resonant mass, complications arise, since the proximity of the second threshold distorts the line shape of the amplitude. The f0(980) can decay to either ππ or KK. While the ππ channel opens at much lower masses, the K+K− decay channel opens near the resonance mass. Thus, for the f0(980) we use a Flatté model [27] that takes into account these coupled channels.
We describe the BW amplitude for a resonance decaying into two spin-0 particles, labeled as 2 and 3, in terms of the resonance mass m_R and its energy-dependent width Γ(s23), which is parametrized in terms of Γ0, the decay width when the invariant mass of the daughter combination is equal to m_R. The Flatté mass shape is parametrized in terms of the constants g_ππ and g_KK, the f0(980) couplings to the π+π− and K+K− final states, respectively; the ρ factors are given by Lorentz-invariant phase space. For non-resonant processes, the amplitude A(s12, s23, θJ/ψ) is constant over the variables s12 and s23, but has an angular dependence due to the J/ψ decay. The amplitude is derived from Eq. (5), assuming that the non-resonant K+K− contribution is an S-wave (i.e. L_R = 0, L_B = 1) and is uniform in phase space (i.e. A_R = 1).
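A sketch of the two mass shapes discussed above is given below: a relativistic Breit-Wigner with an energy-dependent width (including a barrier factor) and a Flatté shape for the f0(980) coupled to ππ and KK. The particle masses, the clamping of momenta to zero below threshold, and the compact barrier-factor helper are simplifications for illustration, not the analysis code.

```python
import numpy as np

M_PI, M_K = 0.13957, 0.493677  # GeV

def _bw_barrier(L, p, p0, r):
    """Compact Blatt-Weisskopf helper for L <= 2, as sketched earlier."""
    z, z0 = (r * p) ** 2, (r * p0) ** 2
    if L == 0:
        return 1.0
    if L == 1:
        return np.sqrt((1 + z0) / (1 + z))
    return np.sqrt((z0**2 + 3 * z0 + 9) / (z**2 + 3 * z + 9))

def breit_wigner(s23, m_R, Gamma0, L_R, r=1.5):
    """1 / (m_R^2 - s - i m_R Gamma(s)) with a momentum-dependent width."""
    m = np.sqrt(s23)
    p = np.sqrt(np.maximum(s23 / 4 - M_K**2, 0.0))   # kaon momentum in the KK frame
    p0 = np.sqrt(m_R**2 / 4 - M_K**2)                 # same, at the pole mass
    F = _bw_barrier(L_R, p, p0, r)
    Gamma = Gamma0 * (p / p0) ** (2 * L_R + 1) * (m_R / m) * F**2
    return 1.0 / (m_R**2 - s23 - 1j * m_R * Gamma)

def flatte(s23, m0, g_pipi, g_KK):
    """Flatte shape for the f0(980): width built from pi pi and K K phase space."""
    rho_pipi = np.sqrt(np.maximum(1 - 4 * M_PI**2 / s23, 0.0))
    rho_KK = np.sqrt(np.maximum(1 - 4 * M_K**2 / s23, 0.0))
    return 1.0 / (m0**2 - s23 - 1j * m0 * (g_pipi * rho_pipi + g_KK * rho_KK))
```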
Detection efficiency
The detection efficiency is determined from a phase space simulation sample containing 3.4 × 10^6 B0s → J/ψK+K− events with J/ψ → µ+µ−. We also use a separate sample of 1.3 × 10^6 B0s → J/ψφ events. The p and pT distributions of the generated B0s mesons are weighted to match the distributions found using J/ψφ data. The simulation is also corrected by weighting for differences between the simulated kaon detection efficiencies and the measured ones, determined using a dedicated calibration sample. Next we describe the efficiency in terms of the analysis variables. Both s12 and s13 range from 12.5 GeV² to 24.0 GeV², where s13 is defined below, and thus are centered at s0 = 18.25 GeV². We model the detection efficiency using the dimensionless symmetric Dalitz plot observables and the angular variable θJ/ψ; the observables s12 and s13 are related to s23 through the usual Dalitz-plot sum rule. To parametrize this efficiency, we fit the cos θJ/ψ distributions of the J/ψK+K− and J/ψφ simulation samples in bins of m²(K+K−), giving values of the parameter a as a function of m²(K+K−). The resulting distribution, shown in Fig. 7, is described by an exponential function with a1 = −0.76 ± 0.18 and a2 = (−1.02 ± 0.15) GeV−2. Equation (18) is normalized with respect to cos θJ/ψ. The efficiency in cos θJ/ψ depends on s23, and is observed to be independent of s12. Thus the detection efficiency can be factorized; after integrating over cos θJ/ψ, the remaining part is modeled by a symmetric fifth-order polynomial function whose coefficients are the fit parameters. The B0s → J/ψK+K− phase space simulation sample is fitted with this polynomial function. The fitted function is shown in Fig. 8, and the projections of the fit are shown in Fig. 9. The efficiency is well described by the parametrization.
For the region within ±20 MeV of the φ(1020) mass, the cos θKK acceptance is treated separately, due to the large number of signal events. Here the cos θKK distribution shows a variation in efficiency, which can be parametrized using an additional acceptance function A(θKK); the relevant parameter is measured from a fit to the simulated J/ψφ sample with ε1(x, y) × A(θKK), giving a value of −0.099 ± 0.010, as shown in Fig. 10. The mass resolution is ∼0.7 MeV at the φ(1020) mass peak, which is accounted for in the fit model by increasing the Breit-Wigner width of the φ(1020) to 4.59 MeV.
Background composition
The shape of the combinatorial background is modeled by a function whose mass-dependent part C1(s12, s23) is parametrized with ci, m0, Γ0 and α as the fit parameters. The variables x and y are defined in Eq. (16).
Figure 11 shows the mass squared projections from the B0s mass sidebands with the fit projections overlaid. The χ²/ndf of the fit is 291/305. The value of α is determined by fitting the cos θJ/ψ distribution of the background, as shown in Fig. 12, with a function of the form 1 + α cos²θJ/ψ, yielding α = −0.14 ± 0.08.
Final state composition
Resonance models
The resonances that are likely to contribute are produced from the s̄s system in Fig. 1, and thus are isoscalar (I = 0). The K+K− system in the decay B0s → J/ψK+K− can, in principle, have zero or any positive integer angular momentum. Both the P-parity and C-parity of a K+K− pair in a state of relative angular momentum L are given by (−1)^L. Therefore the allowed resonances decaying to K+K− are limited to J^PC = 0++, 1−−, 2++, ..., with isospin I = 0. In the kinematically accessible mass range up to 2 GeV, resonances with J^PC = 3−− or higher are not expected and thus the subsequent analysis only uses spins up to J = 2. Possible resonance candidates are listed in Table 1.
There could also be a contribution from non-resonant events, which we assume to be S-wave and evenly distributed over the available phase space. To study the resonant structures of the decay B0s → J/ψK+K− we use 20,425 candidates with an invariant mass within ±20 MeV of the observed B0s mass peak. This includes both signal and background, with 94% signal purity. We begin our analysis considering only the resonance components φ(1020), f′2(1525) and a non-resonant component, established in our earlier measurement [10], and add resonances until no others are found with more than two standard deviation (2σ) statistical significance. The significance is estimated from the fit fraction divided by its statistical uncertainty. Our best fit model includes a non-resonant component and 8 resonance states: φ(1020), f0(980), f0(1370), f′2(1525), f2(1640) (|λ| = 1), φ(1680) (|λ| = 1), f2(1750), and f2(1950). Most of the resonances considered here are well established except for the f2(1640), f2(1750), and f2(1950) modes. Although the existence of the f2(1640) is not yet confirmed [9], the right shoulder of the f′2(1525) fits better when we add this state. The f2(1640) (λ = 0) and φ(1680) (λ = 0) components have less than two standard deviation significance when added separately to the fit, and therefore are not included in the best fit model. The presence of multiple broad overlapping resonances in this region may indicate a failure of the isobar model used in this analysis, but with the present data sample alternative descriptions are not feasible. Indeed, the situation is not clear for the resonance states in the vicinity of 1750 MeV. The PDG lists a spin-0 resonance, the f0(1710), around 1.72 GeV of K+K− invariant mass [9]. The Belle collaboration observed a resonance in the vicinity of 1.75 GeV with J^PC = (even)++ in their study of γγ → K+K− [28], but could not establish its spin. A state of mass 1767 ± 14 MeV was seen by the L3 collaboration decaying into K0S K0S with J = 2 [29]. We find that our data are better fit including the f2(1750) mode. If we substitute either the f0(1710) or f0(1750) resonance the fit is worsened, as −lnL increases by 59 and 7 units, respectively.
In the same analysis of γγ → K + K − , Belle also observed the f 2 (1950) [28] resonance.We include this state in our best fit model.Furthermore, we do not expect significant contributions from the f 2 (1270) and f 0 (1500) resonances, since the PDG branching fractions are much larger in the π + π − final state than in K + K − [9] and we did not see significant contributions from these two resonances in the B 0 s → J/ψπ + π − final state [8].Therefore, these two resonances are not considered in the best fit model.However, we add these states, in turn, to the best fit model in order to test for their possible presence.
The masses and widths of the BW resonances are listed in Table 2.When used in the fit they are fixed to the central values, except for the f 2 (1525), whose mass and width are allowed to vary.
The f 0 (980) is described by a Flatté resonance shape, see Eq. (12). The parameters describing the function are the mass and the couplings g ππ and g KK , which are fixed in the fit from the previous analysis of B 0 s → J/ψπ + π − [8]. The parameters are m 0 = 939.9 ± 6.3 MeV, g ππ = 199 ± 30 MeV and g KK /g ππ = 3.0 ± 0.3. All background and efficiency parameters are fixed in the fit.
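As an illustration of the lineshape just described, the following is a minimal numerical sketch of a two-channel Flatté amplitude with the quoted parameters; the function name and the analytic continuation of the phase-space factor below threshold are illustrative choices, not taken from the analysis code.

```python
import numpy as np

def flatte_amplitude(m, m0=939.9, g_pipi=199.0, g_kk_over_pipi=3.0,
                     m_pi=139.57, m_k=493.68):
    """Two-channel Flatte amplitude for the f0(980) (units: MeV).

    A(m) ~ 1 / (m0^2 - m^2 - i*m0*(g_pipi*rho_pipi + g_KK*rho_KK)),
    where rho is the two-body phase-space factor for each channel,
    continued analytically below threshold.
    """
    def rho(m, m_d):
        # phase-space factor; becomes imaginary below the 2*m_d threshold
        return np.sqrt(1.0 - 4.0 * m_d**2 / m**2 + 0j)

    g_kk = g_kk_over_pipi * g_pipi
    denom = m0**2 - m**2 - 1j * m0 * (g_pipi * rho(m, m_pi) + g_kk * rho(m, m_k))
    return 1.0 / denom

# |A|^2 across the K+K- threshold region
masses = np.linspace(900.0, 1100.0, 201)
intensity = np.abs(flatte_amplitude(masses))**2
```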
To determine the complex amplitudes in a specific model, the data are fitted by maximizing the unbinned likelihood L = ∏ i=1..N F(x i ), where N is the total number of candidates, x i denotes the kinematic variables of candidate i, and F is the total PDF defined in Eq. (1). The PDF normalization is accomplished by first normalizing the J/ψ helicity dependent part by analytical integration, and then for the mass dependent part using numerical integration over 400×800 bins.
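The likelihood construction can be sketched schematically as follows, assuming a hypothetical unnormalized PDF `pdf_unnorm` and a grid of points standing in for the 400×800 normalization binning; this is an illustration of the method, not the analysis implementation.

```python
import numpy as np
from scipy.optimize import minimize

def nll(params, data, pdf_unnorm, grid):
    """Negative log-likelihood for an unnormalized PDF.

    data : (N, d) array of candidate kinematic variables
    grid : evenly spaced points used to estimate the normalization
           integral numerically (constant volume factor only shifts
           -lnL by a constant, so it is irrelevant for minimization).
    """
    norm = np.mean(pdf_unnorm(grid, params))
    vals = pdf_unnorm(data, params) / norm
    return -np.sum(np.log(np.clip(vals, 1e-300, None)))

# result = minimize(nll, x0, args=(data, pdf_unnorm, grid), method="BFGS")
```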
The fit determines the relative values of the amplitude strengths, a R i λ , and phases, φ R i λ , defined in Eq. (4). We choose to fix a φ(1020) 0 = 1. As only relative phases are physically meaningful, one phase in each helicity grouping must be fixed. In addition, because J/ψK + K − is a self-charge-conjugate mode and does not determine the initial B flavor, the signal function is an average of B 0 s and B 0 s . If we consider no K + K − partial-waves of a higher order than D-wave, then we can express the differential decay rate (dΓ/dm KK d cos θ KK d cos θ J/ψ ) derived from Eq. (4) in terms of S-, P-, and D-waves including helicity 0 and ±1 components, where A s k λ and φ s k λ are the sum of amplitudes and the reference phases for the spin-k resonance group, respectively. The decay rate for B 0 s is similar to that for B 0 s , except that θ K + K − and θ J/ψ are changed to π − θ K + K − and π − θ J/ψ , respectively, as a result of using K − and µ − to define the helicity angles; hence the signs change in front of the A s P 0 and A s D ±1 terms. Summing Eqs. (28) and (29) results in cancellation of the interference involving the λ = 0 terms for spin-1 and the λ = ±1 terms for spin-2, as they appear with opposite signs for B 0 s and B 0 s decays. Therefore we have to fix one phase in the spin-1 (λ = 0) group (φ s P 0 ) and one in the spin-2 (λ = ±1) group (φ s D ±1 ). The other phases in each corresponding group are determined relative to that of the fixed resonance.
Fit results
The goodness of fit is calculated from 3D partitions of s 12 , s 23 and cos θ J/ψ . We use the Poisson likelihood χ 2 [30], defined in terms of n i , the number of candidates in the three-dimensional bin i, and x i , the expected number of candidates in that bin according to the fitted likelihood function. An adaptive binning algorithm is used, requiring a minimum of 25 entries in each bin. The associated number of degrees of freedom (ndf) depends on k, the number of free parameters in the likelihood function. The χ 2 /ndf and the negative of the logarithm of the likelihood, −lnL, of the fits are given in Table 3. Starting values of parameters are varied in order to ensure that global likelihood minima are found rather than local ones. Attempts to add one more resonance, such as the f 2 (1270) or f 0 (1500), improve the −lnL marginally, but the χ 2 /ndf is worse than for the best fit model. We retain only those resonances that are more than 2σ significant, except for the f 2 (1750), where we allow the |λ| = 1 component, since the λ = 0 component is significant. For models with one more resonance, the additional components never have more than 2σ significance. Figure 15 shows the projection of m 2 (K + K − ) for the best fit model; the m 2 (J/ψK + ) and cos θ J/ψ projections are displayed in Fig. 16. The projection of the K + K − invariant mass spectrum is shown in Fig. 17.
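For concreteness, the sketch below assumes the Baker-Cousins form of the Poisson likelihood χ², which is the standard definition associated with this statistic; the exact expression and ndf convention used in the analysis follow the (elided) equations referenced in the text.

```python
import numpy as np

def poisson_chi2(n, x):
    """Poisson likelihood chi-square in the Baker-Cousins form,
    chi2 = 2 * sum( x_i - n_i + n_i * ln(n_i / x_i) ),
    where empty bins (n_i = 0) contribute only the x_i - n_i term.

    n : observed counts per 3D bin
    x : expected counts per bin from the fitted likelihood function
    """
    n = np.asarray(n, dtype=float)
    x = np.asarray(x, dtype=float)
    term = np.zeros_like(x)
    mask = n > 0
    term[mask] = n[mask] * np.log(n[mask] / x[mask])
    return 2.0 * np.sum(x - n + term)
```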
While a complete description of the B 0 s → J/ψK + K − decay is given in terms of the fitted amplitudes and phases, knowledge of the contribution of each component can be summarized by defining a fit fraction, F R λ . To determine F R λ we integrate the squared amplitude of R over the Dalitz plot. The yield is then normalized by integrating the entire signal function over the same area. Note that the sum of the fit fractions is not necessarily unity due to the potential presence of interference between two resonances. Interference term fractions are defined analogously from the cross terms between pairs of amplitudes. If the Dalitz plot has more destructive than constructive interference, the sum of the fit fractions will be greater than unity; conversely, the sum will be less than unity if the Dalitz plot exhibits net constructive interference. Note that interference between different spin-J states vanishes because the d J λ0 angular functions in A R λ are orthogonal. The determination of the statistical uncertainties of the fit fractions is difficult because they depend on the statistical uncertainty of every fitted magnitude and phase. Therefore we determine the uncertainties from simulated experiments. We perform 500 experiments: each sample is generated according to the model PDF, with input parameters taken from the fit to the data. The correlations of the fitted parameters are also taken into account. For each experiment the fit fractions are calculated. The distributions of the obtained fit fractions are described by Gaussian functions, and the r.m.s. widths of these Gaussians are taken as the statistical uncertainties on the corresponding parameters. The fit fractions and phases of the contributing components are given in Table 4, while the fit fractions of the interference terms are quoted in Table 5.
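The fit-fraction definition and the toy-experiment uncertainty estimate can be sketched as follows; `amp_R`, `amp_total` and `generate_fit_fractions` are hypothetical callables standing in for the fitted amplitude model and the generate-and-refit machinery, and the integration is approximated with uniformly drawn phase-space points rather than the analysis' actual integration.

```python
import numpy as np

def fit_fraction(amp_R, amp_total, dalitz_samples):
    """Fit fraction of component R: the integral of |A_R|^2 over the
    Dalitz plot divided by the integral of the full coherent sum,
    both estimated here from uniformly drawn phase-space points."""
    num = np.mean(np.abs(amp_R(dalitz_samples))**2)
    den = np.mean(np.abs(amp_total(dalitz_samples))**2)
    return num / den

def toy_uncertainty(fitted_params, generate_fit_fractions, n_toys=500):
    """Generate n_toys pseudo-experiments from the model PDF with the
    fitted parameters, refit each, and take the r.m.s. spread of the
    resulting fit fractions as their statistical uncertainty."""
    fractions = np.array([generate_fit_fractions(fitted_params)
                          for _ in range(n_toys)])
    return fractions.mean(axis=0), fractions.std(axis=0)
```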
Table 4: Fit fractions (%) and phases of contributing components. For P- and D-waves, λ represents the helicity.
The off-diagonal elements give the fit fractions of the interference. The null values originate from the fact that any interference contribution between different spin-J states integrates to zero. Here the resonances are labeled by their masses in MeV and the subscripts denote the helicities.
The largest contribution is from the φ(1020) resonance; the second largest is the f 2 (1525), and the third the f 0 (980). There are also significant contributions from the f 0 (1370), f 2 (1640), φ(1680), f 2 (1750), and f 2 (1950) resonances, and from non-resonant final states. The amount of f 0 (980) is strongly parametrization dependent, so we treat the three f 0 (980) models separately and do not assign any systematic uncertainty based on the use of these different shapes. Therefore we refrain from quoting a branching fraction measurement for the decay B 0 s → J/ψf 0 (980). The determination of the parameters of the f 2 (1525) resonance is not dependent on the f 0 (980) parametrization. The mass and width of the f 2 (1525) are determined by the fit. Whenever two or more uncertainties are quoted, the first is the statistical and the second the systematic; the latter will be discussed in Section 5.6. These values are the most accurate determinations of the f 2 (1525) resonant parameters [9]. Note that our determination of the mass has the same uncertainty as the current PDG average.
K + K − S-wave in the φ(1020) mass region
It was claimed by Stone and Zhang [12] that in the decay B 0 s → J/ψφ, the K + K − system can have S-wave contributions under the φ(1020) peak of order 7% of the total yield. In order to investigate this possibility we calculate the S-wave fractions as given by the fit in 4 MeV mass intervals between 990 < m(K + K − ) < 1050 MeV. The resulting behavior is shown in Fig. 18. Here we show the result from our preferred model and also from the alternative f 0 (980) parameterizations discussed above. The observation of significant S-wave fractions in this region means that this contribution must be taken into account when measuring CP violation in the φ mass region. The total S-wave fraction as a function of the mass interval around the φ mass is also shown in Fig. 19. Using a time dependent analysis of B 0 s → J/ψφ(1020), LHCb reported an S-wave fraction of (2.2 ± 1.2 ± 0.07)% [3] within ±12 MeV of the φ(1020) mass peak. We measure the S-wave fraction within the same mass window to be (1.1 ± 0.1 +0.2 −0.1 )%, consistent with and more precise than that result. CDF measured the S-wave fraction as (0.8 ± 0.2)% for m(K + K − ) within about ±9.5 MeV of the φ mass [6], while ATLAS quotes (2 ± 2)% for an 11 MeV interval [7]. These results are consistent with ours. The D0 collaboration, however, claimed a (14.7 ± 3.5)% S-wave fraction within approximately ±10 MeV of the φ meson mass [5], in disagreement with all of the other results.
Helicity angle distributions
The decay angular distributions, or helicity angle distributions, are already included in the signal model via Eqs. (8) and (9). In order to test the fit model we examine the cos θ J/ψ and cos θ KK distributions in two different K + K − mass regions: one is the φ(1020) region, defined within ±12 MeV of the φ(1020) mass peak, and the other is defined within one full width of the f 2 (1525) mass. The background-subtracted, efficiency-corrected distributions are shown in Figs. 20 and 21. The distributions are in good agreement with the fit model.
Angular moments
The angular moment distributions provide an additional way of visualizing the effects of different resonances and their interferences, similar to a partial wave analysis. This technique has been used in previous studies [8,34]. We define the angular moments Y 0 l as the efficiency-corrected and background-subtracted K + K − invariant mass distributions, weighted by the orthogonal and normalized spherical harmonic functions Y 0 l (cos θ KK ). If we assume that no K + K − partial-waves of a higher order than D-wave contribute, then we can express the differential decay rate, derived from Eq. (4), in terms of S-, P-, and D-waves including helicity 0 and ±1 components, where S λ , P λ , D λ and Φ k λ are real-valued functions of m KK and we have factored out the S-wave phase. We can then calculate the angular moments; those for l > 4 vanish. Figures 22 and 23 show the distributions of the angular moments for the fit model within ±30 MeV of the φ(1020) mass peak and above the φ(1020), respectively. In general the interpretation of these moments is that Y 0 0 is the efficiency-corrected and background-subtracted event distribution, Y 0 1 the sum of the interference between S-wave and P-wave, and P-wave and D-wave amplitudes, Y 0 2 the sum of the P-wave, the D-wave and the interference of S-wave and D-wave amplitudes, Y 0 3 the interference between P-wave and D-wave amplitudes, and Y 0 4 the D-wave. As discussed in Section 5.1, the average of B 0 s and B 0 s cancels the interference terms that involve P 0 and D ±1 . This causes the angular moments Y 0 1 and Y 0 3 to be zero when averaging over B 0 s and B 0 s decays. We observe that the fit results describe the moment distributions well, except for the Y 0 1 and Y 0 4 values below 1.2 GeV. This may be the result of statistical fluctuations or imperfect modeling.
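A hedged sketch of the moment computation follows, assuming hypothetical per-candidate arrays in which the weights already encode the efficiency correction and background subtraction; Y 0 l is written via Legendre polynomials, Y_l^0(cos θ) = sqrt((2l+1)/4π) P_l(cos θ).

```python
import numpy as np
from scipy.special import eval_legendre

def y_l0(l, cos_theta):
    """Spherical harmonic Y_l^0(cos theta) = sqrt((2l+1)/(4 pi)) * P_l(cos theta)."""
    return np.sqrt((2 * l + 1) / (4 * np.pi)) * eval_legendre(l, cos_theta)

def angular_moments(m_kk, cos_theta_kk, weights, mass_bin_edges, l_max=4):
    """Weighted <Y_l^0> moments versus m(K+K-).

    weights : per-candidate weights carrying the efficiency correction
              and background subtraction (e.g. signal minus sideband).
    Returns a dict mapping l to an array of per-mass-bin moment sums.
    """
    moments = {}
    idx = np.digitize(m_kk, mass_bin_edges)
    for l in range(l_max + 1):
        w = weights * y_l0(l, cos_theta_kk)
        moments[l] = np.array([w[idx == i].sum()
                               for i in range(1, len(mass_bin_edges))])
    return moments
```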
Systematic uncertainties
The sources of the systematic uncertainties on the results of the Dalitz plot analysis are summarized in Table 7. The uncertainties due to the background parametrization are estimated by comparing the results from the best fit model with those obtained when the background shape parameters are taken from a fit to the lower sideband region only. The uncertainties in the efficiency are estimated by comparing the fit results when the efficiency parameters are changed by their statistical uncertainties, and are added in quadrature. The effect on the fit fractions of changing the efficiency function is evaluated using a method similar to that used previously [8]. Briefly, we change the efficiency model by increasing the minimum IP χ 2 requirement from 9 to 12.5 on both of the kaon candidates. This has the effect of increasing the χ 2 of the fit to the angular distributions of B 0 s → J/ψφ data by 1 unit. The new efficiency function is then applied to the data with the original minimum IP χ 2 selection of 9, the likelihood is re-evaluated, and the uncertainties are estimated by comparing the results with the best fit model. The largest variations between these two efficiency estimates are included in the uncertainty. We estimate additional uncertainties by comparing the results when one more resonance is added to the best fit model. The uncertainties due to the line shapes of the contributing resonances with fixed mass and width parameters are estimated by varying these parameters individually in the fit according to their statistical and systematic uncertainties added in quadrature. We compare the results with the best fit and add the differences in quadrature to estimate the line-shape uncertainties.
Another source of systematic uncertainty is the value we choose for L B , the orbital angular momentum in the B 0 s decay. If L R equals zero then L B equals zero. If, however, L R is 1 then L B can be either 0 or 1, and if L R is 2, L B can be 1, 2 or 3. For our best fit we do not allow multiple values of L B , but choose the lowest allowed value. To estimate the systematic uncertainties due to the choice of L B , we repeat the fit changing the default value of L B , in turn, to each higher allowed value and compare the fit results with the best fit. The differences are grouped into the fit model category, and we assign the largest variations as the systematic uncertainties. These latter two categories often give asymmetric uncertainties.
Absolute branching fractions
Branching fractions are measured from ratios of the decay rates of interest normalized to the well established decay mode B − → J/ψK − . This decay mode, in addition to having a well measured branching fraction, has the advantage of having two muons in the final state and hence the same triggers as the B 0 s decay. However, we require knowledge of the B 0 s /B − production ratio. For this we assume isospin invariance and use the B 0 s /B 0 production ratio f s /f d = 0.256 ± 0.020, given in Ref. [35]. The branching fractions are calculated from the ratio of efficiency-corrected yields, where X indicates a specific K + K − state, N represents the yield of the decay of interest, and ε corresponds to the overall efficiency. We form an average of B(B − → J/ψK − ) = (10.18 ± 0.42) × 10 −4 using the recent Belle [36] and BaBar [37] measurements, corrected to take into account the different rates of B + B − and B 0 B 0 pair production from the Υ(4S). The detection efficiency is obtained from simulation and is a product of the geometrical acceptance of the detector, the combined reconstruction and selection efficiency, and the trigger efficiency. The efficiency also includes the efficiency of the Dalitz plot model for the case of B 0 s → J/ψK + K − , where the best fit model is used. The detection efficiencies and their various correction factors are given in Table 8. To ensure that the p and p T distributions of the generated B meson are correct, we weight the B 0 s simulations using B 0 s → J/ψφ(1020) data and the B − simulations using B − → J/ψK − data. Since the control channel has a different number of charged tracks than the decay channel, we weight the simulations with the tracking efficiency ratio obtained by comparing the data and simulations in bins of the track's p and p T . We further weight the B 0 s → J/ψK + K − simulation using the PDG value of the B 0 s lifetime, (1.497 ± 0.015) × 10 −12 s [9], as input. The branching fractions B(φ(1020) → K + K − ) = (48.9 ± 0.5)% and B(f 2 (1525) → K + K − ) = (44.4 ± 1.1)% are used [9]. Here the first uncertainty in each case is statistical, the second is systematic, and the third reflects the uncertainty due to f s /f d . Note that these are the time-integrated branching fractions. Results on the polarization fractions of B 0 s → J/ψφ(1020) from a time-dependent analysis will be presented in a separate publication [38]. The ratio B(B 0 s → J/ψf 2 (1525))/B(B 0 s → J/ψφ(1020)) is consistent with our previous result [10], with D0 [11], and with the Belle result [39]. The current PDG value of B(B 0 s → J/ψφ(1020)) = (1.4 ± 0.5) × 10 −3 is dominated by the CDF measurement [40]. Our measured value is in good agreement with this measurement and also with the most recent, as yet unpublished, values from CDF [41] and Belle [39]. The Belle collaboration has also recently reported the branching fraction B(B 0 s → J/ψK + K − ) [39], where B 0 s → J/ψφ(1020) is excluded.
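The ratio-based calculation just described can be sketched as follows; the exact formula is the (elided) equation in the text, and this version simply encodes an efficiency-corrected yield ratio scaled by B(B − → J/ψK − ), f s /f d and, where relevant, B(X → K + K − ). Function and argument names are illustrative.

```python
def branching_fraction(n_sig, eff_sig, n_norm, eff_norm,
                       br_norm=10.18e-4, fs_fd=0.256, br_x_to_kk=1.0):
    """Schematic ratio measurement:

    B(B0s -> J/psi X) = (N_sig/eff_sig) / (N_norm/eff_norm)
                        * B(B- -> J/psi K-) / (fs/fd) / B(X -> K+K-)
    """
    yield_ratio = (n_sig / eff_sig) / (n_norm / eff_norm)
    return yield_ratio * br_norm / fs_fd / br_x_to_kk

# e.g. for the phi(1020) component one would divide out
# br_x_to_kk = 0.489  # B(phi(1020) -> K+K-)
```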
The systematic uncertainty on the branching fraction has several contributions, listed in Table 9. Since the branching fractions are measured with respect to B − → J/ψK − , which has a different number of charged tracks than the decays of interest, a 1% systematic uncertainty is assigned due to differences in the tracking performance between data and simulation. Another 2% uncertainty is assigned for the additional kaon, accounting for decays in flight, large multiple scattering, and hadronic interactions along the track. Using the PDG value for the B 0 s lifetime [9] as input gives rise to an additional 1.5% systematic uncertainty. Small uncertainties are introduced if the simulation does not have the correct B meson kinematic distributions. We are relatively insensitive to any of these differences in the B meson p and p T distributions since we are measuring relative rates. By varying the p and p T distributions we see at most a change of 0.5%. There is a 1% systematic uncertainty assigned for the relative particle identification efficiencies. An uncertainty of 0.02% is included due to the change of the efficiency function, Eq. (20). Three additional uncertainties are considered in the branching fractions B(B 0 s → J/ψφ(1020)) and B(B 0 s → J/ψf 2 (1525)), as these are measured from the fit fractions of the Dalitz plot analysis. The total systematic uncertainty is obtained by adding each source of systematic uncertainty in quadrature, as they are uncorrelated.
Table 9 :
Relative systematic uncertainties on branching fractions (%).
Conclusions
We have determined the final state composition of the B 0 s → J/ψK + K − decay channel using a modified Dalitz plot analysis in which we include the decay angle of the J/ψ. The largest contribution is the φ(1020) resonance, along with other S-, P- and D-wave K + K − states and a non-resonant K + K − contribution. All of the components are listed in Table 4. The mass and width of the f 2 (1525) and the absolute branching fractions are also measured; in each case the first uncertainty is statistical, the second is systematic, and the third is due to f s /f d . These results provide a good understanding of the J/ψK + K − final state in B 0 s decays over the entire kinematically allowed region. The J/ψf 2 (1525) results supersede those of Ref. [10]. This decay mode offers the opportunity for additional measurements of CP violation [42].
Figure 3 :
Figure 3: Invariant mass distribution for J/ψK + K − candidates 25–200 MeV above the B 0 s mass, reinterpreted as B 0 → J/ψK ∓ π ± events. The fit includes a signal Gaussian, whose mass and width are allowed to vary, as well as a polynomial background.
Figure 4 :
Figure 4: Fit to the invariant mass spectrum of J/ψK + K − combinations. The dotted (black) line is the combinatorial background, the dashed (red) shape shows the misidentified B 0 → J/ψK − π + decays, and the solid (blue) curve shows the total. The vertical dashed lines indicate the signal region.
Figure 5 :
Figure 5: Fit to the invariant mass spectrum of J/ψK − candidates. The dotted line shows the combinatorial background and the solid (blue) curve is the total.
Figure 7 :
Figure 7: Exponential fit to the efficiency parameter a(s 23 ). The point near the φ(1020) meson mass is determined more precisely due to the use of a large simulation sample.
Figure 9 :
Figure 9: Projections of the invariant mass squared of (a) K + K − and (b) J/ψK + from the simulation used to measure the efficiency parameters. The points represent the generated event distributions and the curves the polynomial fit.
Figure 11 :
Figure 11: Invariant mass squared projections of (a) K + K − and (b) J/ψK + from the background Dalitz plot of candidates in the B 0 s mass sidebands.
Figure 15 :
Figure 15: Dalitz plot fit projection of m 2 (K + K − ) using a logarithmic scale. The points with error bars are data, the (black) dotted curve shows the combinatorial background, the (red) dashed curve indicates the reflection from the misidentified B 0 → J/ψK − π + decays, and the (blue) solid line represents the total.
Figure 16 :
Figure 16: Dalitz plot fit projections of (a) m 2 (J/ψK + ) and (b) cos θ J/ψ . The points with error bars are data, the (black) dotted curve shows the combinatorial background, the (red) dashed curve indicates the reflection from the misidentified B 0 → J/ψK − π + decays, and the (blue) solid line represents the total fit results.
Table 6 :
Comparison of the fit fractions (%) with the LHCb, BES and BaBar f 0 (980) parameterizations described in the text. For P- and D-waves, λ represents the helicity.
Figure 18 :
Figure 18: S-wave fraction as a function of m(K + K − ) from 990 MeV up to 1050 MeV in 4 MeV mass intervals. The squares (blue), triangles (red), and circles (green) represent the LHCb, BES and BaBar parameterizations of f 0 (980), respectively. The experimental statistical uncertainties are only shown for the LHCb model; they are almost identical for the other cases. The experimental mass resolution is not unfolded.
Figure 22 :
Figure 22: Dependence of the spherical harmonic moments of cos θ KK on the K + K − mass around the φ(1020) mass peak, after efficiency corrections and background subtraction. The points with error bars are the data and the solid curves are derived from the fit model.
Figure 23 :
Figure 23: Dependence of the spherical harmonic moments of cos θ KK on the K + K − mass above 1050 MeV, after efficiency corrections and background subtraction. The points with error bars are the data and the solid curves are derived from the fit model.
Table 1 :
Possible resonance candidates in the B 0 s → J/ψK + K − decay mode.
Table 7 :
Absolute systematic uncertainties on the fit results.
Table 8 :
Detector efficiencies determined from simulation and the correction factors.
Presynaptic Mechanisms of Lead Neurotoxicity: Effects on Vesicular Release, Vesicle Clustering and Mitochondria Number
Childhood lead (Pb2+) intoxication is a global public health problem and accounts for 0.6% of the global burden of disease associated with intellectual disabilities. Despite the recognition that childhood Pb2+ intoxication contributes significantly to intellectual disabilities, there is a fundamental lack of knowledge of the presynaptic mechanisms by which Pb2+ disrupts synaptic function. In this study, using a well-characterized rodent model of developmental Pb2+ neurotoxicity, we show that Pb2+ exposure markedly inhibits presynaptic vesicular release at hippocampal Schaffer collateral-CA1 synapses in young adult rats. This effect was associated with ultrastructural changes: a reduction in vesicle number in the readily releasable/docked vesicle pool, dispersed vesicle clusters in the resting pool, and a reduced number of presynaptic terminals with multiple mitochondria, with no change in presynaptic calcium influx. These studies provide fundamental knowledge of the mechanisms by which Pb2+ produces a profound inhibition of presynaptic vesicular release that contributes to deficits in synaptic plasticity and intellectual development.
Introduction
Childhood lead (Pb 2+ ) intoxication continues to be a public health problem of significant proportion not only in the United States, but also globally [1,2]. Many studies over several decades have consistently demonstrated that one of the most prominent effects of Pb 2+ in children is to decrease their capacity to learn with devastating effects on cognitive and intellectual development [3,4,5,6,7]. The consequences of childhood Pb 2+ intoxication on the intellectual capacity of children and society as a whole are immeasurable in a world dominated by an economy that rewards knowledge. Recent human studies have also shown that Pb 2+ exposure early in life is associated with longitudinal declines in cognitive function [8] and loss of brain volume [9,10] in aging individuals. Therefore, Pb 2+ exposure in early life has immediate and long-term consequences to neurological and mental health.
Neuronal chemical communication is mediated by synapses that undergo fast and efficient release of neurotransmitters. Neurotransmitter release occurs at presynaptic active zones (PAZ), which are specialized regions of the synapse juxtaposed to postsynaptic densities (PSD). In presynaptic terminals, neurotransmitters are packaged in synaptic vesicles that are organized in clusters or functional pools that include: 1) the readily-releasable/docked vesicle pool, 2) the rapidly recycling pool, from which vesicles undergo exo-endocytosis as a result of stimulation, and 3) the resting pool, which serves as a reservoir containing vesicles that are release-reluctant unless there is sustained and strong stimulation [11,12]. Disruption of synaptic function is known to result in neurological disease [13,14].
A number of studies have shown that acute and chronic exposure to Pb 2+ alters neurotransmitter release in both in vivo models and in vitro systems. Pb 2+ decreases evoked release of glutamate (Glu) and gamma-aminobutyric acid (GABA) in young adult rats developmentally exposed to Pb 2+ [15] and in hippocampal cultures and brain slices acutely exposed to Pb 2+ [16,17]. Both spontaneous and action potential-evoked release of Glu and GABA are affected by Pb 2+ exposure [18], but there is a paucity of knowledge of the mechanisms underlying these effects. Recent studies from our laboratory have provided the first working model by which exposure to Pb 2+ during the period of synaptogenesis affects synapse development and function, a model that can explain effects of Pb 2+ on both presynaptic and postsynaptic compartments of the synapse [19,20,21]. Using a Pb 2+ exposure paradigm during the period of synaptogenesis in primary hippocampal neuron cultures, we found that Pb 2+ inhibition of postsynaptic NMDA receptors (NMDAR) alters downstream calcium signaling, impairs the CREB-dependent transcription of activity-regulated genes such as brain-derived neurotrophic factor (BDNF), and alters the function of the BDNF cognate receptor TrkB and its downstream signaling, modifying synapsin I phosphorylation at serine sites involved in vesicle movement [19,20,21]. These studies also showed that the Pb 2+ -induced impairment of BDNF trans-synaptic retrograde signaling decreased the levels of the vesicular proteins synaptophysin and synaptobrevin and inhibited vesicular release [19]. Importantly, the addition of exogenous BDNF normalized synaptophysin and synaptobrevin protein levels and reversed the impairment in vesicular release, providing the first evidence of the beneficial effects of BDNF on Pb 2+ -induced synaptic dysfunction [19]. We also showed that the inhibition of vesicular release by Pb 2+ was specific to a fast-releasing pool of vesicles that we hypothesized to be the readily-releasable/docked vesicle pool [19].
To determine whether the effects of Pb 2+ exposure that we have observed using in vitro exposure of hippocampal neurons in culture are operational in the hippocampus of young adult rats chronically exposed to Pb 2+ in vivo, we performed electrophysiological and two-photon imaging studies in Schaffer collateral-CA1 synapses. Further, to identify the subcellular basis of the Pb 2+ -induced impairment of vesicular release, we used transmission electron microscopy (TEM) to measure vesicle number in the different vesicle pools of the presynaptic compartment as well as presynaptic mitochondria number and size. We report here that chronic in vivo exposure to Pb 2+ during development resulted in a marked inhibition of Schaffer collateral-CA1 synaptic transmission by inhibiting vesicular release of glutamate, an effect that was not associated with a persistent change in presynaptic calcium entry. On the other hand, we found a reduced number of synaptic vesicles in the readily-releasable/docked vesicle pool, confirming our hypothesis originating from previous in vitro studies [19] that the inhibitory effect of Pb 2+ on vesicular release was due, at least in part, to reductions in the number of vesicles in the readily-releasable/docked vesicle pool. Furthermore, we observed an increase in the dispersion of vesicles (nearest-neighbor distance) in the resting pool and a decrease in the number of presynaptic terminals containing multiple mitochondria. The latter suggests that energy availability in the form of ATP may also be compromised by Pb 2+ exposure and influence vesicular release.
Blood Pb 2+ analysis. Blood Pb 2+ analysis was conducted using the LeadCare system (ESA Laboratories, Inc., Chelmsford, MA) as described by the manufacturer.
Animals
Adult female Long-Evans rats were purchased from Charles River, Inc. (Wilmington, MA) and fed 0 (control) or 1500 ppm lead acetate (PbAc) in the diet (Dyets, Bethlehem, PA) for 10 days prior to breeding with non-exposed, normal Long-Evans males. Litters were culled to 10 pups on postnatal day 1 (PN1), and dams were maintained on their respective diet until weaning of the pups. After weaning, offspring remained on their respective maternal diet until PN50. At weaning, rats were housed in same-sex pairs in plastic cages at 22 ± 2°C on a 12/12 light:dark cycle. Food and water were allowed ad libitum. Each litter was considered a single experimental unit for statistical purposes, so that for each experiment one animal from a single litter contributed one data point. This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the Columbia University and New York Medical College Institutional Animal Care and Use Committees (AC-AAAF4810). All non-survival procedures were performed under sodium pentobarbital anesthesia, and all efforts were made to minimize suffering.
Whole cell patch-clamp recordings were performed in CA1 pyramidal neurons using standard techniques. Patch pipettes (R = 3–4 MΩ) were filled with recording solution containing (in mM): 135 CsMeSO 2 , 8 NaCl, 10 HEPES, 2 Mg-ATP, 0.3 Na-GTP, 0.5 EGTA, and 1 QX-314 (275 mOsm, pH = 7.25 adjusted with CsOH). Access resistance was carefully monitored, and only cells with stable access resistance (<5% change) were included in analyses. CA1 pyramidal cells were recorded under voltage clamp using a MultiClamp 700B (Axon Instruments) with Clampex (v9). Recording signals were filtered through an eight-pole Bessel lowpass filter with a 3 kHz cutoff frequency, digitized at 10 kHz, and sampled using Clampex (v9). Sampled data were analyzed offline with Clampfit (v9) and OriginPro (v6.1). Neurons were clamped at −60 mV, and Schaffer collateral-evoked EPSCs were triggered by MultiClamp and delivered by a bipolar stimulating electrode (FHC, USA) via a stimulus isolator (ISO-Flex, AMPI Instruments; 50–100 pA, 100 μs duration). EPSC slopes were calculated by linear interpolation of the initial downward current from 20% to 80% of the maximum EPSC amplitude. Paired-pulse facilitation was assessed by applying a pair of Schaffer collateral stimuli at intervals of 10–125 msec, and the ratio of the slope of the second response to that of the first was calculated, so that values greater than 1.0 represent facilitation and values less than 1.0 inhibition. Chemicals for making extra- and intracellular solutions were purchased from Sigma (USA) or Fluka (USA).
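A minimal sketch of the 20-80% slope measurement and the paired-pulse ratio described above, assuming EPSC traces stored as arrays of current samples with inward current negative; the names and the threshold-crossing logic are illustrative, not the analysis software used in the study.

```python
import numpy as np

def epsc_slope(trace, dt):
    """Slope of the initial downward EPSC between 20% and 80% of the
    peak (most negative) amplitude, from a linear fit.

    trace : 1D array of current samples (pA), baseline near zero
    dt    : sampling interval (ms)
    """
    peak = trace.min()                 # inward current is negative
    lo, hi = 0.2 * peak, 0.8 * peak
    onset = np.argmax(trace <= lo)     # first sample past 20% of peak
    end = np.argmax(trace <= hi)       # first sample past 80% of peak
    t = np.arange(onset, end + 1) * dt
    slope, _ = np.polyfit(t, trace[onset:end + 1], 1)
    return slope                       # pA/ms

def ppf_ratio(trace1, trace2, dt):
    """Paired-pulse ratio P2/P1 from the two EPSC slopes; values > 1.0
    indicate facilitation, values < 1.0 indicate depression."""
    return epsc_slope(trace2, dt) / epsc_slope(trace1, dt)
```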
Two-photon laser scanning microscopy
Fluorescence was visualized using a customized two-photon laser-scanning Olympus BX61WI microscope with a 60x/0.90W water immersion infrared objective lens and an Olympus multispectral confocal laser scan unit. The light source was a Mai-Tai™ laser (Solid-State Laser Co., Mountain View, CA) tuned to 820 nm for exciting Magnesium Green and FM1-43. Epifluorescence was detected with the photomultiplier tubes of the confocal laser scan head with the pinhole maximally opened and the emission spectral window optimized for signal over background. In the transfluorescent pathway, a 565 nm dichroic mirror was used to separate green and red fluorescence and to eliminate transmitted or reflected excitation light (Chroma Technology, Rockingham, VT). Depending on the nature of the fluorescent dyes, HQ525/50 and HQ610/50 or HQ710/50 filters were placed in the "green" and "red" pathways, respectively. Image acquisition was controlled by Fluoview FV300 software (Olympus America, Melville, NY). After confirming the presence of Schaffer collateral-evoked fEPSPs >1 mV in amplitude in CA1 stratum radiatum, and inducing LTP, 10 μM 6-cyano-7-nitroquinoxaline-2,3-dione (CNQX) was bath-applied throughout the rest of the experiment to prevent synaptically-driven action potentials in CA3 pyramidal neurons from accelerating dye release. Presynaptic boutons were loaded by bath-applying 5 μM FM1-43 (Molecular Probes) in hypertonic ACSF supplemented with sucrose to 800 mOsm for 25 sec to selectively load the rapidly-recycling pool (RRP) [24,25], and the slice was then returned to normal ACSF. Stimulus-induced destaining was measured after 30 min of perfusion with dye-free ACSF, using bursts of 10 Hz bipolar stimuli (150 μs DC pulses) for 2 sec applied once every 30 sec. We fitted a single exponential to the first 6 fluorescence time course values, and the taus between groups were compared by two-tailed Student's t-test, as we have shown previously that the early release reflects vesicular release from the RRP prior to recycling and reuse of vesicles [24,25]. At the end of each experiment, complete depolarization-induced destaining was evoked by bath-applying 85 mM [K + ] ACSF. Using established methods for measuring [Ca 2+ ] transients [26], we filled Schaffer collateral presynaptic fibers with Magnesium Green AM. Briefly, an ejection electrode (tip diameter, 5–10 μm) containing Magnesium Green AM (1 mM Magnesium Green AM, 10% DMSO, 1% pluronic acid in ACSF) was lowered into the Schaffer collateral pathway between the stimulating electrode and the presynaptic terminal field to be observed, and air pressure pulses (6–9 psi, 100–200 msec) controlled by a Picospritzer (General Valve Corp., USA) were applied to the electrode until a small bright spot (~10 μm in diameter) was observed. The slice was then maintained with a 3 ml/min flow of oxygenated ACSF for ~30 minutes to allow the dye to diffuse sufficiently into presynaptic boutons. To verify that Magnesium Green selectively loaded into presynaptic terminals, FM4-64 was loaded with high [K + ] o at the end of each experiment. To measure Ca 2+ dynamics, fluorescence was collected by scanning at 200 Hz in a surface-scanning mode (XYT). Baseline fluorescence (F 0 ) was the average of four images during the control period, and ΔF/F was calculated as (ΔF/F)(t) = (F(t) − F 0 )/F 0 .
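The destaining time constant and ΔF/F computations can be sketched as follows, assuming the first six fluorescence time points are fit with a single exponential as described; all names are illustrative, and the initial-guess heuristics are our own.

```python
import numpy as np
from scipy.optimize import curve_fit

def destain_tau(times, fluor):
    """Single-exponential fit F(t) = A*exp(-t/tau) + C to the first six
    fluorescence time points; returns the release time constant tau."""
    model = lambda t, A, tau, C: A * np.exp(-t / tau) + C
    p0 = (fluor[0] - fluor[5], times[5], fluor[5])   # rough initial guess
    popt, _ = curve_fit(model, times[:6], fluor[:6], p0=p0, maxfev=10000)
    return popt[1]

def delta_f_over_f(frames, n_baseline=4):
    """Ca2+ transient (dF/F)(t) = (F(t) - F0)/F0, with F0 the average
    of the baseline images acquired during the control period."""
    f0 = frames[:n_baseline].mean(axis=0)
    return (frames - f0) / f0
```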
Transmission electron microscopy specimen preparation
At PN50, rats were anesthetized with 20 mg/kg pentobarbital. Rats were perfused transcardially with 2.5% glutaraldehyde + 2% paraformaldehyde in 0.1 M phosphate-buffered saline (PBS). The brain was removed and post-fixed in the same solution overnight at room temperature (RT). Brains were sectioned into 500 μm slices with a vibratome. Tissue from the same CA1 region in which the electrophysiological studies were performed was dissected from the hippocampus using a 1.5 mm hole-punch. Dissected tissue was placed in the 2.5% glutaraldehyde + 2% paraformaldehyde in PBS mixture for 3 hr at RT and rinsed with PBS. Secondary fixation in 1% osmium tetroxide in PBS was done for 60 min at RT. Following osmium fixation, tissue was rinsed in PBS and then rinsed in water to remove all traces of phosphate from the samples. Tissue was subsequently dehydrated in 50% ethanol, a mixture of 70% ethanol + 1% uranyl acetate, 85% ethanol, and 2 changes of 100% ethanol (15 min per step). Tissue was then placed in the transition solvent propylene oxide twice (15 min per step) and left to infiltrate in a 1:1 mixture of propylene oxide and Spurr's resin overnight at RT. Steps involving osmium tetroxide and uranyl acetate were done in containers covered with foil to block light. Tissue was transferred to pure Spurr's resin for infiltration for 24 hours at RT. Tissue was then placed into BEEM capsules with fresh Spurr's resin, allowed to sit for 30 min, and then placed in a 70°C oven for 24 hours for polymerization. After polymerization, ultrathin sections (70 nm) were obtained using a Sorvall 5000 ultramicrotome and placed onto 200 mesh copper grids. 2 μm of tissue was cut between each collected section to prevent repeat analysis of any synapse. Sections on grids were stained with uranyl acetate for 45 min, rinsed with water, stained with lead citrate for 90 sec, rinsed again with water, and left to dry on clean filter paper.
Transmission electron microscopy
Tissue was examined under a Hitachi 7500 transmission electron microscope operated at 80 kV. Images were obtained at 100,000x magnification using an AMT digital camera and software. For each hippocampus under investigation (10 total: 5 control and 5 Pb 2+ ), a total of 40 images of simple, asymmetric synapses was obtained. Five synapses were imaged from each grid. Synapses were spaced by a minimum of one grid box to reduce bias. The microscopist was blinded to treatment conditions while imaging.
Transmission electron microscopy image analysis
The presynaptic active zone (PAZ) and the center of each presynaptic vesicle were marked using ImageTool (UTHSCSA ImageTool, Version 3.0). The distance between each vesicle and the PAZ, as well as the distance between each vesicle and its nearest neighbor, was calculated using ImageTool coordinates in LoClust [27]. The diameter of each vesicle was measured using ImageJ. The PAZ length was also measured using ImageJ; the PAZ membrane appears more electron dense after staining than surrounding membranes, which allows for measurement. The postsynaptic density (PSD) length was measured using ImageJ; the PSD is large and electron dense after staining, which facilitates measurement. Vesicles were classified as part of the readily-releasable/docked vesicle pool if they were physically contacting the PAZ. Vesicles were classified as belonging to the recycling pool if their center was within 200 nm of the PAZ. Vesicles were considered part of the reserve pool if their vesicular center was greater than 200 nm from the PAZ. The number and diameter of mitochondria in presynaptic terminals were also determined. All measurements were made by one individual (SRG) who was blinded to the treatment groups.
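A sketch of the pool classification and nearest-neighbor computation under the stated criteria; the inputs (vesicle centers in nm, distances to the PAZ, and docked flags) are hypothetical data structures standing in for the ImageTool/LoClust outputs.

```python
import numpy as np
from scipy.spatial.distance import cdist

def classify_pools(dist_to_paz_nm, docked_flags):
    """Pool labels per the criteria in the text: docked vesicles touch
    the PAZ; recycling-pool centers lie within 200 nm of the PAZ; the
    reserve pool is everything farther than 200 nm."""
    return np.where(docked_flags, "docked",
                    np.where(dist_to_paz_nm <= 200.0, "recycling", "reserve"))

def nearest_neighbor_distances(centers_nm):
    """Nearest-neighbor distance for each vesicle center (n x 2 array, nm)."""
    d = cdist(centers_nm, centers_nm)
    np.fill_diagonal(d, np.inf)   # exclude self-distances
    return d.min(axis=1)
```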
Statistics
Forty EM images were obtained from each Schaffer collateral-CA1 region. The average of each measurement made from the 40 images was used to represent the animal. A t-test with Welch correction was used to determine differences between the control and Pb 2+ -treated groups for each particular measure (GraphPad Prism); the Welch correction was used to account for differences in variance. In analyses requiring comparisons between multiple groups, an ANOVA with Sidak's multiple comparisons analysis was used to determine which groups were significantly different from one another. The significance level was preset to p < 0.05.
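A minimal example of the Welch-corrected t-test as implemented in SciPy; the numbers below are placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import ttest_ind

# Per-animal means (one value per litter, as described above);
# these values are placeholders for illustration only.
control = np.array([3.1, 2.8, 3.4, 3.0, 2.9])
pb_exposed = np.array([2.2, 2.5, 2.0, 2.4, 2.1])

# equal_var=False applies the Welch correction for unequal variances.
t_stat, p_value = ttest_ind(control, pb_exposed, equal_var=False)
```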
Blood lead levels and body weight of rats
The Pb 2+ exposure paradigm used in the present study does not produce any overt toxicity based on body weight gain. Body weights at postnatal day 50 (PN50) were 294.4 ± 4.8 grams (n = 24) for control animals and 281.6 ± 6.9 grams for Pb 2+ -exposed animals.
Blood Pb 2+ levels of littermates of the animals used in this study at PN50 were 0.8 ± 0.3 μg/dL (n = 11) for control animals and 21.1 ± 1.6 μg/dL (n = 15) for Pb 2+ -exposed animals. This level of Pb 2+ exposure is environmentally relevant, and previous studies using this animal model have shown that it produces deficits in synaptic plasticity in the form of long-term potentiation [28], decreased adult neurogenesis with alterations in the morphological development of newly born granule cells in the hippocampus [29], and impairments of spatial learning and contextual fear conditioning [28,30,31] in animals of similar age.
Chronic lead exposure persistently enhances paired-pulse facilitation at Schaffer collateral-CA1 synapses
Neuronal short-term presynaptic plasticity is classically assessed with "paired-pulse stimulation," two stimuli in close succession [32,33]. One form of paired-pulse modulation, paired-pulse facilitation (PPF), is typically attributed to an increase of release probability (Pr) during the second stimulus, arising from prior accumulation of residual Ca 2+ near active zones and/or lingering effects of Ca 2+ on a Ca 2+ sensor [33,34]. This residual Ca 2+ , when present at terminals that fail to release on the first stimulus, will cause them to release and increase the response amplitude to the second stimulus. Therefore, if initial Pr is reduced, as by manipulations such as reducing extracellular [Ca 2+ ], the magnitude of PPF (the ratio of second to first response amplitude) will be increased [33,34]. As Fig 1A illustrates, PPF in CA1 pyramidal neurons was readily elicited by two Schaffer collateral stimuli applied with a 30 ms interpulse interval. As summarized in Fig 1B, the ratio of excitatory postsynaptic current (EPSC) amplitude in response to the second stimulus versus the first stimulus (P2/P1) was significantly greater in slices from Pb 2+ -exposed animals compared to controls, consistent with a decrease in initial Pr. When paired-pulse stimuli were applied at intervals that varied from 20–300 msec, the increase in PPF in slices from Pb 2+ -exposed rats was statistically significant for interpulse intervals from 20–125 msec (Fig 1C and 1D; P < 0.05, repeated measures ANOVA with post hoc Student's t-test with Bonferroni correction).
Chronic lead exposure persistently reduces vesicular release from the rapidly-recycling vesicle pool at Schaffer collateral presynaptic terminals
To determine directly whether presynaptic vesicular release is altered by chronic in vivo Pb 2+ exposure, we used two-photon excitation to visualize release of the styryl dye FM1-43 from the rapidly-recycling pool of presynaptic vesicles after selective loading by hypertonic shock into Schaffer collateral-CA1 terminals in hippocampal slices. In these experiments, presynaptic vesicles in the rapidly-recycling pool are first stimulated by a brief hypertonic shock to fuse with the membrane and release their transmitter, whereupon they take up FM1-43 from the extracellular space and are endocytosed and recycled into the rapidly-recycling pool for the next evoked release. We have used this method previously to show that generation of LTP and LTD can be associated with persistent increases [24] and decreases [25] in the rate of stimulus-evoked FM1-43 destaining at Schaffer collateral terminals in field CA1; here, we used the same approach to assess the effect of chronic Pb 2+ exposure on vesicular release from Schaffer collateral presynaptic terminals. Fig 2A shows representative pseudocolor images of FM1-43 labelled Schaffer collateral terminals before (0 min) and after 8 minutes of 2 Hz stimulation in control slices versus slices from Pb 2+ -exposed rats. Fig 2B summarizes the time course for all slices, showing the markedly slower vesicular release evoked by 2 Hz stimulation of Schaffer collateral terminals in field CA1 of slices from Pb 2+ -exposed rats compared to control rats. The initial time constant of release, calculated from a single exponential fit of the first 6 time points [24,25], was significantly slower (Student's t-test, P<0.05), while the residual fluorescence at the end of the stimulus train was significantly higher in slices from chronically Pb 2+ -exposed rats, confirming a slower rate of vesicular release (Fig 2C).
While Schaffer collateral-CA1 synapses appear to exhibit a mixture of presynaptic and postsynaptic alterations in expressing long-term synaptic plasticity, mossy fiber-CA3 synapses have been suggested to express largely presynaptic long-term plasticity [35]. We used the same experimental protocol to study vesicular release rates from mossy fiber terminals in field CA3 to determine if the effect of Pb 2+ exposure on vesicular release was site specific. Fig 3B shows that, while the initial rate of FM1-43 release from mossy fiber terminals was not altered by chronic Pb 2+ exposure, mean destaining at the end of the 2 Hz stimulus train (grey bar indicates values averaged) was reduced (Student's t-test, P<0.05). This reduced destaining late in the stimulus train may indicate reduced rates of vesicle recycling in mossy fiber terminals. Comparison of vesicular release at Schaffer collateral-CA1 synapses versus mossy fiber synapses suggests that Schaffer collateral terminals are more susceptible to the effects of early developmental exposure to Pb 2+ than mossy fiber boutons. Further, the results also suggest that the molecular mechanisms by which Pb 2+ impairs vesicular release at Schaffer collateral-CA1 synapses differ from those at mossy fiber-CA3 synapses.
Variance-mean analysis confirms that chronic lead exposure persistently reduces presynaptic release probability at Schaffer collateral terminals
Variance-mean (VM) analysis according to a binomial model of synaptic transmission is a method that has been employed to study a variety of synapses [36,37]. It is mainly applied to steady-state sequences of evoked EPSCs recorded under a variety of conditions, by varying extracellular [Ca 2+ ] or delivering long repetitive trains of stimulation of different frequencies, each resulting in a range of mean response sizes with a variance that is a parabolic function of Pr [38,39,40,41]. To ensure that postsynaptic AMPA receptors were responding to a non-saturating concentration of glutamate, as required for VM analysis, experiments were conducted in a low concentration of the selective AMPA receptor antagonist CNQX (100 nM). Fig 4 illustrates experiments using VM analysis to determine the long-term effects of Pb 2+ exposure on Pr. Fig 4A shows individual CA1 pyramidal neuron membrane resistance and EPSC amplitudes recorded in representative slices from a control and a Pb 2+ -exposed rat, during the stable periods after each change of extracellular [Ca 2+ ] o . The stability of the recorded data was assessed by fitting a straight line to the amplitudes in each recording condition, plotted versus repetition number. Only data that displayed less than 20% change in the regression line over at least 30 data points were selected for further VM analysis [39]. Fig 4B shows representative envelopes of individual EPSCs at 1/4 mM (left), 2/2 mM (center) and 4/1 mM (right) [Ca 2+ ]/[Mg 2+ ] ratios in a slice from a control and a Pb 2+ -exposed rat. As shown in panel 4C, the VM relationship obtained by varying extracellular [Ca 2+ ] is parabolic, with maximum variance at the peak of the parabola. In hippocampal pyramidal neurons from Pb 2+ -exposed rats, individual slice data points (Fig 4F) and mean amplitudes (Fig 4D) were significantly reduced (P<0.01, Student's t-test), consistent with a reduction in presynaptic release probability compared to controls. Panel 4E shows a typical associated variance/mean versus mean linear plot, where the linear fits for pyramidal neurons from control and Pb 2+ -exposed rats significantly differed in slope (P<0.05, Student's t-test), consistent with a presynaptic site of reduced Pr. Across all experiments (N = 7 for each treatment group), Pr calculated by this method was significantly reduced in slices from Pb 2+ -exposed rats at low (1/4 mM; Pb 2+ = 0.037 ± 0.0013, control = 0.051 ± 0.0041, P<0.05, Student's t-test), medium (2/2 mM; Pb 2+ = 0.28 ± 0.028, control = 0.45 ± 0.02, P<0.05, Student's t-test) and high (4/1 mM; Pb 2+ = 0.60 ± 0.026, control = 0.72 ± 0.023, P<0.05, Student's t-test) [Ca 2+ ]/[Mg 2+ ] ratios. These reductions in Pr were not associated with significant changes in the number of release sites (N) or quantal size (Q).
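Under the simple binomial model described above (mean I = N·Pr·Q, variance = N·Pr·(1−Pr)·Q², neglecting quantal variance), the parabolic variance-mean relation reduces to the linear fit variance/mean = Q − mean/N, matching the panel 4E description. The sketch below assumes hypothetical per-condition EPSC statistics as inputs.

```python
import numpy as np

def vm_parameters(mean_epsc, variance):
    """Binomial-model variance-mean analysis.

    Fits variance/mean = Q - mean/N (equivalent to the parabola
    variance = Q*I - I^2/N). The y-intercept gives quantal size Q,
    the slope gives the number of release sites N, and release
    probability at each condition is Pr = mean/(N*Q).
    """
    ratio = variance / mean_epsc
    slope, intercept = np.polyfit(mean_epsc, ratio, 1)
    Q = intercept
    N = -1.0 / slope
    Pr = mean_epsc / (N * Q)
    return Q, N, Pr

# mean_epsc and variance would be the steady-state EPSC statistics
# measured at each [Ca2+]/[Mg2+] ratio (e.g., 1/4, 2/2, 4/1 mM).
```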
Chronic lead exposure does not produce marked changes in presynaptic calcium influx into Schaffer collateral terminals
Calcium channels (P/Q- and N-type) are the major source of action potential-mediated Ca 2+ influx into presynaptic terminals, and previous studies have shown that Pb 2+ inhibits calcium channels in cultured cells, an effect that is reversible by washing the cellular preparation [42]. Therefore, if Pb 2+ exposure chronically alters the activity of these channels, this could indirectly alter the Pr of synaptic vesicles. To directly test whether chronic Pb 2+ exposure produces a persistent inhibition of presynaptic Ca 2+ influx, we injected Mg 2+ Green-AM, a membrane-permeable calcium indicator dye [43], directly into stratum radiatum of field CA1 of hippocampal slices. Mg 2+ Green-positive fluorescent puncta were visualized in field CA1 using two-photon excitation (Fig 5 insets). Fig 5 demonstrates the kinetics of Mg 2+ Green fluorescence increases in response to a 100 Hz burst of four Schaffer collateral stimuli. We have shown previously that these responses persist in the presence of NMDA and AMPA receptor antagonists, despite the loss of fEPSPs, are blocked by cadmium and omega-conotoxin, and co-localize with FM4-64, confirming the presynaptic nature of these calcium transients [44].
Comparison of the mean fluorescence time courses of stimulus-evoked presynaptic Ca 2+ influx transients in Schaffer collateral terminals in slices from control (Fig 5A) versus Pb 2+ -exposed (Fig 5B) rats revealed that these transients were almost completely superimposable, with the exception of the first two peak fluorescence time points, which were significantly larger in amplitude in control slices in the first 400 milliseconds (repeated measures ANOVA, F(1,20) = 12.747; p = 0.0019), but not different for the rest of the transients (repeated measures ANOVA, F(1,20) = 0.097; p = 0.7588). Overall, the areas under the curves for these two sets of terminals were not significantly different (P > 0.05), suggesting that reductions in presynaptic Ca 2+ influx are not likely to be responsible for the marked reduction in vesicular release associated with chronic Pb 2+ exposure. Taken together, our data indicate that chronic exposure to Pb 2+ during early development results in a persistent reduction in presynaptic Pr that is likely due to mechanisms downstream of presynaptic Ca 2+ entry.
Figure 4 :
Data from control and Pb 2+ -exposed slices were both well fit by a single parabola forced to pass through 0,0, with grey circles shifted to the left, consistent with a presynaptic reduction in P r from chronic Pb 2+ exposure. (E) Plot of variance/mean ratio versus mean EPSC amplitude (pA) from a single representative slice, which converts the parabolic relationship between mean and variance to a linear one. The number of release sites (N) was derived by estimating the slope of the linear fit, while the y-intercept denotes the quantal size (Q) of the EPSC. The reduction in slope indicates that chronic Pb 2+ exposure was associated with a reduction in presynaptic P r . (F) Individual variance-mean data points for each control slice (black circles) and each Pb 2+ -exposed slice (red circles) at each [Ca 2+ ] o .
Chronic in vivo exposure to lead resulted in a reduction in the number of readily-releasable/docked vesicles and a dispersion of vesicles in the resting pool in Schaffer collateral terminals, as measured by transmission electron microscopy
Previous studies from our laboratory using hippocampal neuronal cultures have shown that the effect of Pb 2+ on vesicular release was specific to a fast-releasing pool of vesicles that we hypothesized to be associated with the readily-releasable vesicle pool [19]. Therefore, to directly confirm that Pb 2+ inhibition of vesicular release may be mediated by alterations in the readily-releasable/docked vesicle pool, we used TEM to visualize presynaptic axon terminals of control versus Pb 2+ -exposed rats and to analyze vesicle position (Fig 6). We classified vesicles as: 1) part of the readily-releasable/docked vesicle pool if they were physically contacting the presynaptic active zone (PAZ), 2) part of the rapidly recycling pool if their center was within 200 nm of the PAZ, or 3) part of the reserve pool if their vesicular center was greater than 200 nm from the PAZ (Fig 7), based on the previous demonstration of three pools of vesicles [11,12]. Our results indicate marked changes in the vesicular pools of presynaptic terminals in the CA1 region of the hippocampus from rats that were exposed to Pb 2+ relative to controls (Table 1). That is, Pb 2+ exposure resulted in a significant reduction in the number of readily-releasable/docked vesicles in Schaffer collateral terminals (p = 0.034, Table 1, IA). We also measured vesicle nearest-neighbor distance as a function of distance from the PAZ (Fig 7B). The center of each vesicle was marked and the nearest-neighbor distance between vesicles was measured (Fig 7C). The distance from the PAZ to each vesicle was binned and arranged into two clusters representing (1) the readily-releasable/docked plus recycling pools of vesicles (0–200 nm from the PAZ) and (2) the reserve vesicle pool (201–500 nm from the PAZ). Schaffer collateral terminals of Pb 2+ -exposed animals did not show significant changes in nearest-neighbor distance between vesicles of the readily-releasable/docked plus recycling pools, but did show a highly significant increase in nearest-neighbor distance between vesicles of the reserve pool (Table 1, IVB; p = 0.0017). These data suggest alterations in the clustering of vesicles in the reserve pool that may alter vesicle movement and availability, contributing to the significant reduction in functional release probability and synaptic transmission produced by chronic Pb 2+ exposure.
Effect of chronic lead exposure on presynaptic mitochondria number and size
To determine if changes in vesicular release and vesicle clustering were related to energy availability in axon terminals, we examined the number of mitochondria in presynaptic terminals in the TEM images. Exposure to Pb 2+ had no significant effect on the number of terminals containing at least one mitochondrion (Table 1, VA), but there was a marked decrease in the number of terminals that contained multiple (≥2) mitochondria in the Pb 2+ -exposed group relative to controls (Table 1, VB), suggesting that trafficking of mitochondria to presynaptic terminals may be altered by chronic Pb 2+ exposure. There were no significant differences in the total number of mitochondria counted (Table 1, VC), and the mean diameter of presynaptic terminal mitochondria was similar between the control and Pb 2+ -exposed groups (Table 1, VD), although there was a nearly significant increase in the number of mitochondria with diameter greater than 300 nm (Table 1, VD). These findings suggest that mitochondrial function and energy availability in the form of ATP in presynaptic terminals may be decreased by Pb 2+ exposure.
Discussion
Our present findings in hippocampal brain slices containing relatively intact local synaptic networks supply multiple lines of evidence that chronic in vivo exposure to Pb 2+ during development results in reductions in presynaptic vesicular release, an effect that is associated with ultrastructural changes in the presynaptic terminals of young adult rats. At Schaffer collateral-CA1 synapses in the hippocampus, paired-pulse facilitation, VM analysis and direct multiphoton imaging of vesicular release using FM1-43 all indicate a persistent reduction in presynaptic vesicular release probability in Pb 2+ -exposed young adult rats. These findings are consistent with our previous in vitro studies, which used FM1-43 to demonstrate a marked impairment in presynaptic vesicular release by Pb 2+ in hippocampal neuron cultures [19]. TEM analysis revealed that these functional impairments in vesicular release were associated with fewer vesicles in the readily-releasable/docked vesicle pool (Table 1, IA). In the CA1 region, there was a significantly lower number of presynaptic terminals from Pb 2+ -exposed rats containing two or more mitochondria, suggesting that the amount of ATP available for release mechanisms and vesicle movement may be significantly reduced by Pb 2+ exposure.
Synaptic mitochondria play an important role in synaptic transmission, in the organization and movement of vesicles from the reserve pool to the readily releasable pool, and in calcium buffering and homeostasis [42,14]. Therefore, it is likely that the impairment in vesicular release in Pb 2+ -exposed animals that we have identified in the present study may be due, at least in part, to reduced ATP availability resulting from a reduced number of presynaptic terminal mitochondria. Our TEM studies also showed that Pb 2+ exposure altered the clustering of vesicles in the resting pool by increasing the vesicle nearest-neighbor distance (Fig 7). A longer distance between vesicles in the reserve pool may be associated with reduced vesicular release. In mice deficient in the neural adhesion molecule L1, there was a marked increase in the nearest-neighbor distance between vesicles, and these mice had a higher number of failures in transmitter release [45]. Other TEM studies have shown that synaptic vesicle clustering occurs via connectors of different sizes, reflecting a diffuse intervesicular matrix [46,47]. Furthermore, the brain-specific phosphoprotein synapsin plays an important role in the clustering and movement of vesicles in the reserve pool in a phosphorylation-dependent manner [47,48,49]. We have previously shown that synapsin I phosphorylation at sites 4 (serine 62) and 5 (serine 67) was significantly decreased by Pb 2+ exposure, with no effect on total synapsin I protein levels [21]. Thus, it is possible that the increase in vesicle nearest-neighbor distance that we have found in the reserve vesicle pool of presynaptic CA1 terminals from Pb 2+ -exposed animals may be the result of lower levels of synapsin I phosphorylation leading to alterations in synapsin dimerization, increasing vesicle interconnector length and thus increasing vesicle nearest-neighbor distance in the resting pool. Consistent with this idea, synapsin I-deficient mice have been shown to exhibit dispersed vesicles [50], and decreased phosphorylation of synapsin I by cdk5 at site 7 (serine 551) has been shown to disrupt synaptic vesicle clustering and increase vesicle nearest-neighbor distance [51]. Therefore, although we have not assessed the effect of Pb 2+ exposure on synapsin serine 551 phosphorylation, it is clear that phosphorylation at this site is both necessary and sufficient to alter vesicle clustering in the reserve pool and to increase the nearest-neighbor distance.
What are the potential mechanisms of the Pb²⁺-induced impairment in vesicular release? Previous studies using acute exposure of cultured cells to Pb²⁺ have shown inhibition of Ca²⁺ channels by Pb²⁺, an effect that is reversible by washing the cellular preparation [41]. On the other hand, Pb²⁺ may alter vesicular release through additional mechanisms, such as changes at the level of SNARE proteins, as we have previously shown [19,20]. Therefore, we needed to assess whether the impairment of vesicular release documented here in chronically Pb²⁺-exposed animals could be directly due to chronic inhibition of Ca²⁺ channel function that persists upon Pb²⁺ washout, or whether chronic Pb²⁺ exposure results in additional changes downstream of Ca²⁺ influx that persist in the absence of extracellular Pb²⁺. To address this, we directly measured Ca²⁺ influx into Schaffer collateral-CA1 terminals. Fluorescence imaging of presynaptic Ca²⁺ influx showed little change in the amplitude or duration of voltage-dependent Ca²⁺ channel-mediated Ca²⁺ entry, although the reduction in the first two time points of the transients suggests that channel kinetics may be somewhat altered. Thus, our slice experiments support the presence of additional effects of chronic developmental Pb²⁺ exposure that are downstream of Ca²⁺ entry, at the level of vesicular SNARE protein-mediated docking, recycling, or even the long-term stability of the release complex, as we have previously shown in hippocampal neuron cultures [19].
The correlation between functional and ultrastructural alterations in Pb²⁺-exposed animals was striking and strongly suggests profound alterations in SNARE protein levels and function that result in both a functional impairment of the vesicular release process and a reduction in the number of readily-releasable/docked vesicles. While these changes at excitatory glutamatergic Schaffer collateral terminals are clear, it is by no means assured that the same pattern of alterations occurs at all presynaptic terminals in the brain. Indeed, a pressing question is whether inhibitory GABAergic terminals are affected similarly in magnitude and direction by chronic developmental Pb²⁺ exposure, since such alterations could have important effects on cognition and the propensity for seizures. In this regard, a recent study using stereological cell counting indicated that the number of parvalbumin-positive GABAergic interneurons was significantly reduced in the hippocampus of rats of similar age and Pb²⁺ exposure as in our present study [52].
In this study, rats were exposed to Pb²⁺ chronically during gestation and postnatal development, continuing through young adulthood. The artificial cerebrospinal fluid used to maintain slice viability during the experiments did not contain Pb²⁺, indicating that the observed effects were of a developmental nature. This raises the question of whether removal of Pb²⁺ in vivo could allow recovery of normal presynaptic function, and whether there are treatments that might protect against these effects of Pb²⁺ exposure. Neal et al. [19] have shown that BDNF synthesis and release are decreased by exposure of hippocampal neurons in culture to Pb²⁺, an effect associated with reductions in the levels of SNARE proteins and inhibition of vesicular release; these effects of chronic Pb²⁺ exposure were rescued by the exogenous addition of BDNF. Moreover, Stansfield et al. [21], using the same Pb²⁺ exposure paradigm in cultured hippocampal neurons, showed that Pb²⁺ also impairs the transport of BDNF-containing vesicles, possibly by altering Huntingtin phosphorylation at a site that promotes anterograde BDNF vesicle movement. This effect of Pb²⁺ results in impaired BDNF release, decreased TrkB activation, and decreased phosphorylation of synapsin I. Together, these studies strongly suggest that BDNF, and treatments such as enriched environments that increase BDNF levels and release, may rescue the effects of Pb²⁺ that we observed in intact hippocampal slice synaptic networks. In fact, previous studies from our laboratory have shown that environmental enrichment can reverse the Pb²⁺-induced impairment of spatial learning in rats of similar age and Pb²⁺ treatment [53]. Ongoing studies will investigate whether enhancing BDNF-TrkB signaling pharmacologically or with environmental enrichment paradigms can reverse the Pb²⁺-induced impairment of vesicular release and the presynaptic structural changes documented in the present study. | 8,919.4 | 2015-05-26T00:00:00.000 | [
"Biology"
] |
The Fields of Flow and Temperatures in the Chambers of Radiation of Tube Furnaces with Multi-tier Wall Burners of Two Types
In this article, the differential method of thermal calculation of a furnace is used to determine the aerodynamic and thermal characteristics in the radiation chambers of tube furnaces with wall burners of two types arranged in several tiers. In the methane steam reforming furnace, acoustic burners producing a near-wall flame of gaseous fuel are arranged in three tiers on the side walls of the radiation chamber. In the primary reforming furnace for ammonium nitrate production, wall-mounted burners are located in six tiers. The method involves the joint numerical solution of the 2D radiation transfer equations in the S2 approximation of the discrete ordinates method, the energy equation, the flow equations, the k-ε turbulence model, and a two-stage model of gas fuel combustion. A brief description of the boundary conditions for the differential equations and the method of their numerical solution is given. The temperature fields and the flow of combustion products in the radiation chamber of the furnace, calculated with a computer program implementing the described method, are presented.
Introduction
The petroleum refining and petrochemical industries operate tube furnaces in which the residence time of combustion products is short; therefore, a high heat flux to long tubes must be provided in the furnace design. Common practice is to use cup-shaped injection-type burners, surface panel burners, or burners designed for a near-wall flame; the latter option requires a multi-tier arrangement of burners on the lateral walls of the radiation section of a tube furnace. Recently, flat-flame gas burners of the acoustic type (Acoustic Gas Burner, AGB) have appeared on the market [1]. Figure 1 depicts a simplified view of a quadrant of the radiation chamber of a tube furnace equipped with AGBs arranged in three tiers on the lateral walls. A mixture of gaseous hydrocarbons and water steam is fed into the vertical tubes (one row) and is heated to the desired temperature by radiation from the combustion products and from the hot furnace walls (the arrangement is symmetric relative to the tubular screen). The composite flame from many burners covers the lining in a circular pattern, creating the temperature field on the radiating walls.
Figure 2 shows a simplified diagram of the tubular furnace for primary reforming in ammonium nitrate production, which is the main apparatus at the gas preparation stage and is designed to produce converted gas by steam catalytic conversion of natural gas hydrocarbons. The radiation chambers of the primary reforming furnace contain 288 reaction tubes filled with the nickel-containing catalyst R-67R-7H/R-67-7H. The furnace consists of two lined, heated radiation chambers with forced draft provided by an exhaust fan. The radiation chambers are equipped with horizontal wall burners (720 pieces in the two chambers) without forced air supply. To provide milder heating conditions at furnace start-up, the project provides for two types of burners of different power: Walard WA4 burners with a power of 266 W are located in the first tier (120 pcs, 30 on each side wall), and Walard WA5 burners with a power of 530 W are located in the remaining five tiers. A review of simulation methods for fluid dynamics and heat and mass transfer is available in [2]. The current state of numerical methods for studying the thermal radiation of combustion products was reviewed earlier [3], and a critical analysis of publications dealing with radiative energy transfer is available in the literature [4]. The methods for calculating heat transfer in screened furnaces are classified into three groups: integral, zonal, and differential methods. The integral methods are based on similarity theory and include the standard (norm-based) calculation technique for tube furnaces [3]. This approach does not yield local values of the heat flux at the irradiated surface or of the lining temperature, nor does it account for the contribution of the non-isothermal and optically non-uniform medium inside the furnace. In the zonal method, the furnace volume and the boundaries of the emissive system are split into several zones, each of which is optically and thermally uniform. All of these approaches are simplified versions of a differential scheme, relying on empirical laws and parameters [5]. For example, a zonal-method procedure (without calculation results for the type of furnace described in the present paper) was outlined in [6]. In the zonal approach, the temperature field and the coefficients of convective-turbulent transfer between the selected zones are taken from other sources: simulations and experiments on fluid dynamics and convective heat transfer in tube furnaces. More recently, researchers have started to apply new methods for the thermal calculation of furnaces based on the joint numerical solution of the radiative energy transfer and gas dynamics equations, all written in differential form. The paper [7] offered a general method for simulating a three-dimensional furnace volume accounting for return flow of combustion products, combustion, and combined heat transfer. The article [8] presents the results of a numerical study of unsteady MHD free-convection heat and mass transfer of a viscous, incompressible, electrically conducting fluid over an impulsively started infinite vertical plate in the presence of thermal radiation; thermal radiation is taken into account in a simplified, gray approximation.
A three-dimensional calculation of heat transfer in the chamber of a technological tube furnace with methane-air combustion and acoustic floor-flame burners was carried out in [9].
Several commercial software packages are available today, such as ANSYS FLUENT, CFX, and Flow Vision. In addition, code packages such as VP2/3 and σ-Flow [10] are adapted to simulating 3D flow with allowance for combustion of gaseous, liquid, or pulverized fuel and for convective-radiative heat transfer.
Mathematical Formulation
As can be seen from Figures 1 and 2, the radiation chambers of the tube furnaces under consideration are almost symmetrical with respect to the tubular reactors, so the calculations can be performed for only half of the chamber. The depth of the chamber along the z axis is 13-16 m. Thus, the width of the radiation chamber section (1-1.3 m) is much smaller than its height (12-13.6 m) and depth, which allows the problem of complex heat and mass transfer to be considered in a two-dimensional formulation.
This method is based on the concurrent integration of the following equations: the 2D radiative transfer equations in the discrete ordinates approximation (1), the energy equation (2), the equations of turbulent flow of the gas mixture (3), the two-parameter k-ε turbulence model (4), the continuity and gas state equations (5), and the convection-diffusion equations for the transport of air and fuel components (6). The following notation is used: I_k^m is the spectral radiation intensity for the selected directions s_m {m = 1, N_0}, assigned by the set of angular coordinates {µ_m, ξ_m}; I_bλ(T) is the spectral black-body intensity at temperature T; α_k and β_k are the mean spectral absorption and scattering coefficients; w_m are the weight coefficients [11]; u and υ are the components of the combustion-product velocity v along the x and y axes; ρ is the density of the combustion products; c_p is the isobaric heat capacity; λ_ef = λ + λ_t is the effective thermal conductivity; p is the pressure; µ_mix is the molar mass of the gas mixture; R is the universal gas constant; q_v is the volumetric density of heat sources; div q_p is the divergence of the radiant flux density; µ_ef = µ + µ_t is the effective viscosity. The turbulent viscosity and thermal conductivity are calculated from µ_t = c_µ·f_µ·ρk²/ε and λ_t = c_p·µ_t/Pr_t, where Pr_t is the turbulent Prandtl number; β is the volumetric expansion coefficient; g is the gravitational acceleration; T_∞ = 290 K is the reference temperature for calculating the buoyancy force; φ = {k, ε, m_CH4, m_CO, m_air}; k and ε are the kinetic energy of turbulent pulsations and its dissipation rate; S_φ is a source term [12]; Γ_φ = µ + µ_t/σ_φ is the transfer coefficient in equation (4); m_CH4 and m_CO are the mass concentrations of methane and CO; m_air is the mass concentration of air; S_f = 0.53·ρ·g_f^(1/2)·ε/k is the rate of the chemical combustion reaction defined by the "vortex break-up" model [13]; g_i = 2.27·(µ_t·k/(ρε))·(∂m_i/∂y)² is the root-mean-square pulsation component of the fuel concentration, i = CH4, CO; Γ_φ = µ/σ_t is the corresponding transfer coefficient in equation (4), where σ_t is the Schmidt number. The constants of the k-ε model and the expression for f_µ are taken according to the recommendations of [13]. An equation of type (4) is also written for m_air; the source term in the equation for the mass concentration of the oxidizer (air) is found from S_air = S_f·A, where A is the stoichiometric amount of air for combustion of 1 kg of fuel (Γ_f = Γ_air).
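As an illustration of how the turbulent transport coefficients enter the model, a minimal Python sketch is given below; the constants c_µ = 0.09, f_µ = 1, Pr_t = 0.9, and σ_φ = 1 are assumed typical values for a standard k-ε model, not necessarily those used in the article.

    def turbulent_transport(k, eps, rho, cp, mu, lam,
                            c_mu=0.09, f_mu=1.0, pr_t=0.9, sigma_phi=1.0):
        """Effective transport coefficients entering equations (2)-(4).
        The constants c_mu, f_mu, Pr_t, and sigma_phi are assumed typical values."""
        mu_t = c_mu * f_mu * rho * k ** 2 / eps     # turbulent viscosity
        lam_t = cp * mu_t / pr_t                    # turbulent thermal conductivity
        mu_ef = mu + mu_t                           # effective viscosity
        lam_ef = lam + lam_t                        # effective thermal conductivity
        gamma_phi = mu + mu_t / sigma_phi           # transfer coefficient in equation (4)
        return mu_t, lam_t, mu_ef, lam_ef, gamma_phi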
In this paper, complete combustion of methane is assumed, with CO2, H2O, N2, and O2 in the final products (written, for example, per 1 kg of CH4), where α is the excess-air factor. To calculate the spectral absorption coefficients of the gases, the distribution of the mole fractions of H2O, CO2, and CO over the combustion chamber volume must be known. To determine them, a two-stage methane combustion model was used: CH4 + 1.5 O2 → CO + 2 H2O and CO + 0.5 O2 → CO2 (7). Thus, the combustion model includes two equations of type (4), for m_CH4 and m_CO. The mass concentrations of the remaining components at the individual nodal points of the difference grid are determined from the mass concentration of the fuel (methane) according to the stoichiometric coefficients of equations (6) and (7). The flow in the tube furnace is subsonic, turbulent, and spatial in character; the Reynolds number calculated from the width of the radiation section in the combustion region is 6·10⁴. The specification of the boundary conditions for equations (1)-(5), the difference approximation of the differential equations, and the iterative methods for solving the resulting systems of algebraic equations are described in detail elsewhere [2,14].
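To make the stoichiometric bookkeeping of the two-stage scheme concrete, the following short Python sketch computes the oxygen demand and product masses per kilogram of methane from molar masses alone; the excess-air factor and the nitrogen ballast are deliberately omitted here for brevity.

    M = {"CH4": 16.04, "O2": 32.00, "CO": 28.01, "CO2": 44.01, "H2O": 18.02}  # kg/kmol

    def products_per_kg_ch4():
        """Masses of O2 consumed and CO/H2O/CO2 formed per kg of CH4 for the two-stage
        scheme CH4 + 1.5 O2 -> CO + 2 H2O and CO + 0.5 O2 -> CO2 (complete burnout)."""
        n_ch4 = 1.0 / M["CH4"]                        # kmol of CH4 per kg of CH4
        return {
            "O2":  (1.5 + 0.5) * n_ch4 * M["O2"],     # ~4.0 kg O2 per kg CH4
            "CO":  1.0 * n_ch4 * M["CO"],             # intermediate CO, ~1.75 kg
            "H2O": 2.0 * n_ch4 * M["H2O"],            # ~2.25 kg
            "CO2": 1.0 * n_ch4 * M["CO2"],            # ~2.74 kg
        }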
In technical applications, the Edwards wide-band model is often used to account for the selectivity of gas emission, as is done in this paper. In this model, nine spectral bands are identified: four bands corresponding to H2O, two bands corresponding to CO2, one band corresponding to the transparent spectral region, and two more bands resulting from the overlapping of two pairs of bands: 2.7 µm for H2O and CO2, 10 µm for H2O, and 15 µm for CO2. To reduce the number of absorption bands, the Planck method of overlapping bands was used, reducing the number of bands to six. A description of this model with references to the original works is given elsewhere [2].
The spectral absorption coefficient at a given concentration is expressed in terms of the partial absorption coefficient K_pi as a_λi = K_pi·p_i, where p_i is the partial pressure of the i-th component of the mixture and [K_pi] = (m·Pa)⁻¹.
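A minimal sketch of this band-wise mixture rule follows; the numerical values of K_p,i below are placeholders for illustration only, not tabulated Edwards-model band coefficients.

    def spectral_absorption(partial_pressures, k_p):
        """Band absorption coefficient of the mixture: a_lambda = sum_i K_p,i * p_i,
        with [K_p,i] = (m*Pa)^-1 and p_i in Pa."""
        return sum(k_p[name] * p for name, p in partial_pressures.items())

    # Illustrative placeholder values only (not tabulated band coefficients):
    p = {"H2O": 0.18e5, "CO2": 0.09e5}        # partial pressures, Pa
    k = {"H2O": 3.0e-6, "CO2": 5.0e-6}        # partial absorption coefficients, (m*Pa)^-1
    a_lambda = spectral_absorption(p, k)      # absorption coefficient in 1/m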
Since the content of soot particles in the combustion products of gaseous fuels is insignificant, their spectral absorption coefficient was calculated with an empirical formula [11] in which Φ(λ) is a function describing the dispersion of the optical constants of soot; in the spectral range up to 10 µm, the formula of [15] can be used for Φ(λ). The volume fraction of soot f_v in (12) is determined by the empirical formula of [16], where the ratio C_p/H_p characterizes the relative weight of carbon in the working mass of the fuel [16]. In modeling the operation of the acoustic burner, we assume that the gas (premixed with air) is fed into the radiation chamber through two narrow slots from opposite sides (see Figure 3). Secondary air enters through four additional slots (two on each side). The air-gas mixture is then ignited, and combustion takes place near the wall.
The radiation chambers of the primary reforming furnace (Figure 2) are equipped with wall-mounted horizontal burners (360 in each chamber) without forced air supply.
A simplified burner circuit is shown in figure 4a, and a diagram of how the burner is represented in the two-dimensional calculations is shown in figure 4b. The two burner types (Walard WA4 in the first tier and Walard WA5 in the remaining five tiers) were described above. A similar approach to the boundary conditions, the numerical methods for solving the equations, and the validation of the simulation against experiment have been presented previously [2,14]. The complex heat transfer problem is solved via an iteration scheme: at every iteration, the gas flow and heat transfer problems are solved in turn. However, the iterative process for the joint solution of the energy and transfer equations generates strong oscillations of the temperature and velocity fields during the first outer iterations. To suppress the oscillation magnitude, we used under-relaxation and linearization of the source terms in the equations.
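The under-relaxation step mentioned above can be summarized by the update φ = φ_old + ω(φ_new − φ_old); a minimal sketch follows, in which the relaxation factor ω and the solver routine names in the commented loop are assumptions for illustration, not the actual routines of the program.

    def under_relax(phi_old, phi_new, omega=0.3):
        """One under-relaxation step: phi = phi_old + omega * (phi_new - phi_old)."""
        return phi_old + omega * (phi_new - phi_old)

    # Skeleton of the outer iteration loop (solver routines are hypothetical placeholders):
    # for it in range(max_outer_iterations):
    #     T_star = solve_energy_and_radiation(T, v)
    #     v_star = solve_flow_and_turbulence(T, v)
    #     T, v = under_relax(T, T_star), under_relax(v, v_star)
    #     if converged(T, T_star) and converged(v, v_star):
    #         break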
Calculation Results
The simulation of external heat transfer and flue gas aerodynamics for near-wall combustion with acoustic-type burners (figure 1) was carried out for a tube furnace configuration with double-sided heating of the reaction tubes. In this type of furnace, the radiation chamber comprises two sections placed symmetrically relative to the single-row tubular coil (with vertical positioning of the tubes). The heated coil has 28 tubes with a diameter of 134×12 mm, a coil pitch of 300 mm, a total tube length of L = 10 m, and a section width of H = 1.5 m. For the simulated cases, the acoustic burners were arranged in three tiers: the bottom tier is 1.5 m above the hearth, and the height interval between tiers is 2.5 m. In the simulations, the fuel gas was methane. The gas flow rate for a half-chamber was B_f = 0.198 m³/s and the lower calorific value Q_l^p = 35818 kJ/m³. The temperature of the fuel at the nozzle inlet was 323 K, and the inlet air had the same temperature. The excess-air factor was α_t = 1.07, and the fuel gas was distributed evenly among all burners of all tiers. The effective emissivity factor was 0.79. The external surface of the tubes has a temperature in the range 1000-1200 K. We assume that the solid surfaces emit and reflect radiation diffusely. The emissivity factor of the lateral lining was ε = 0.42, and the thermal conductivity of the multi-layered walls was λ = 0.35 W/(m·K); thermal conduction through the walls is the main mechanism of heat loss. The outside temperature of the furnace was 300 K and the wall thickness 0.45 m. The emissivity factor was 0.67 for the ceiling and 0.69 for the furnace hearth. Figure 5 presents the temperature of the inside lining of a lateral wall as a function of chamber height, together with the outer temperature of the reactant-carrying tubes. The lining temperature is lowest near the burners, where the wall is blown by the gas-air mixture; it increases in the flame zone and then declines again away from the burner tiers. In our simulation, thermal conduction occurs only across the wall; if heat transfer along the wall were taken into account, the temperature variation along the x axis would be smoother. Figure 6 shows the coordinate system and the isotherms in the volume of the radiation chamber of the furnace with acoustic burners. For convenience, this plot of the isotherms is rotated clockwise by 90° (in reality, the x axis is directed upward, toward the chamber ceiling). The highest combustion-product temperatures occur near the lateral wall, close to the fuel combustion area. Most of the chamber volume is filled with combustion products whose temperature decreases gradually from 1500 K to 1420 K (near the tubular screen). The flue gas temperature declines as the gases flow toward the convection section and is about 1270 K at the bridgewall; this agrees with the experimental data for the furnace to within ±5 K (normal operating regime). Combustion of the fuel gas, consisting mainly of methane, in the burners of the primary reforming furnace is performed with an excess air ratio α = 1.1; the air is sucked from the atmosphere through the registers of the burner itself.
The flue gas temperature at the outlet of each chamber of the radiant zone of the primary reforming furnace (the entrance to the convection zone) should be no more than 1060 °C, as measured by instruments 1-TIAH-12025A and 1-TIAH-12025B; an alarm is triggered in the central control panel at a flue gas temperature of 1075 °C. Figure 8 shows the stream function ψ for a single section of the radiation chamber (figure 1); as noted above, in reality the x axis is directed upward. The flow pattern has six zones of direct flow alternating with recirculation zones. The first direct-flow zone is formed by the combustion products from the third tier of burners: the streamlines go along the upper part of the lateral-wall lining, then near the ceiling, and on to the flue gas tunnel. In this layer, the temperature gradually decreases from 2210 K (combustion zone of the upper tier of burners) to 1270 K (exit from the radiation chamber). The sixth direct-flow zone originates from combustion at the bottom tier of burners: the flow moves toward the tubular screen, then along the tubes, and reaches the bridgewall. For the second tier of burners the flow pattern is different: the flue gases from the first and second tiers intermingle and join the flue gases of the first zone. The streamlines of zones four and five originate from the burners of the second and third tiers: they travel near the lateral wall (heating the lining) and, above the second tier, join the general direct flow. Recirculation zones occur in the furnace volume near the burner tiers. The temperature of the recirculating flow is slightly lower (1450 K) owing to the cooling action of the heating surfaces and the greater distance from the heat sources. The combustion products of the recirculating flow are drawn back toward the burner heads, which stabilizes combustion of the fuel mixture.
Conclusion
1) These simulations demonstrate that the differential approach to furnace calculation is a good tool for determining the local temperatures and local velocities of combustion products inside the radiation chamber of a tube furnace equipped with burners arranged in tiers.
2) The design thermal calculations of the furnaces considered are made, according to the integral method, on the assumption that the flue gas temperature is uniform throughout the radiation chamber at 1200 °C (1473 K) and that the lining temperature of the side walls is likewise uniform. As can be seen from Figures 4, 5, 6 and 7, the actual temperature field is highly non-uniform; the assumed temperature level is reached only close to the tubular screen. At the same time, at the pass (the transition to the convection section) the gas temperature corresponds to the design value.
3) Even with a large number of wall burners, it is not possible to provide a uniform temperature field in the combustion chamber and a uniform distribution of heat fluxes along the reaction tubes. At the same time, acoustic burners arranged in three tiers give a more uniform temperature field near the reaction tubes than six tiers of horizontal wall burners. Nevertheless, even in the latter case, the transfer of the required amount of heat to the reaction tubes for steam reforming of hydrocarbons, and the required temperature of the reaction mixture at the outlet of the reaction tubes, are ensured.
4) As the calculations show, the maxima of temperature and heat flux are located at the level of the burner tiers. Therefore, to prevent local overheating of the tubes, it is necessary to control the uniform supply of fuel gas to the individual burner tiers and to monitor the combustion mode. The correct burner flame should be blue with a yellow (straw) tip; if the flame is yellow, combustion is incomplete and more air should be supplied through the registers. Excess air will give the burner flame a clear blue color without a yellow tip. | 5,119.8 | 2019-06-04T00:00:00.000 | [
"Physics",
"Engineering"
] |
Comments on the NEMA NU 4-2008 Standard on Performance Measurement of Small Animal Positron Emission Tomographs
The National Electrical Manufacturers Association's (NEMA) NU 4-2008 standard specifies the methodology for evaluating the performance of small-animal PET scanners. The standard's goal is to enable comparison of different PET scanners over a wide range of technologies and geometries. In this work, we discuss whether the NEMA standard meets these goals and we point out potential flaws and improvements to the standard. For the evaluation of spatial resolution, the NEMA standard mandates the use of filtered backprojection reconstruction. This reconstruction method can introduce star-like artifacts for detectors with an anisotropic spatial resolution, usually caused by parallax error. These artifacts can then cause a strong dependence of the resulting spatial resolution on the size of the projection window in image space, whose size is not fully specified in the NEMA standard. If the PET ring has detectors which are perpendicular to a Cartesian axis, then the resolution along this axis will typically improve with larger projection windows. We show that the standard's equations for the estimation of the random rate for PET systems with intrinsic radioactivity are circular and not satisfiable. However, a modified version can still be used to determine an approximation of the random rates under the assumption of negligible random rates for small activities and a constant scatter fraction. We compare the resulting estimated random rates to random rates obtained using a delayed coincidence window and two methods based on the singles rates. While these methods give similar estimates, the estimation method based on the NEMA equations overestimates the random rates. In the NEMA standard's protocol for the evaluation of the sensitivity, the standard specifies to axially step a point source through the scanner and to take a different scan for each source position. Later, in the data analysis section, the standard does not specify clearly how the different scans have to be incorporated into the analysis, which can lead to unclear interpretations of published results. The standard's definition of the recovery coefficients in the image quality phantom includes the maximum activity in a region of interest, which causes a positive correlation of noise and recovery coefficients. This leads to an unintended trade-off between desired uniformity, which is negatively correlated with variance (i.e., noise), and recovery. With this work, we want to start a discussion on possible improvements in a next version of the NEMA NU-4 standard.
Introduction
The National Electrical Manufacturers Association's (NEMA) NU 4-2008 standard on "Performance Measurements of Small Animal Positron Emission Tomography" specifies "standardized methodology for evaluating the performance of positron emission tomographs (PET) designed for animal imaging" [1]. The standard's goal is to enable comparison of the performance of different PET systems over a wide range of technologies and geometries used. Thus, the methods specified in the standard should not artificially favor or disfavor certain choices in scanner geometry and technology and the performance results should indicate the expected performance in real-world applications as closely as possible. Virtually all commercial small-animal PET systems and most research prototype PET systems have published performance evaluations based on the NEMA standard and Goertzen et al. [2] have published a review comparing small-animal PET systems based on the respective NEMA performance publications. These publications are an essential benchmark in the development of new PET systems and an important tool for the purchase decisions of potential buyers.
The NEMA standard specifies a set of measurements with their respective analyses: evaluation of spatial resolution; evaluation of total, true, scattered, random, and noise-equivalent count rates; evaluation of system sensitivity; and quantitative evaluation of image quality in a standardized imaging situation using a hot-rod phantom.
The standard was devised over 10 years ago, so it does not incorporate newer technological developments and paradigm shifts. For instance, data acquisition into sinograms and filtered backprojection reconstruction, as mandated in the standard, were more widespread then than they are today. Nowadays, these methods are often implemented only to evaluate PET performance based on NEMA but are never actually used for real-world applications. In this work, we examine whether the NEMA standard meets its goals of enabling a fair comparison of PET systems and we point out potential flaws and improvements in the standard. In our opinion, the standard is underspecified in several parts, limiting the comparability of different systems, since the investigators performing the performance evaluations are still free to choose parameters which significantly influence the results. The method specified for evaluating the spatial resolution disadvantages certain system geometries, even though those geometries do not exhibit the same reduction in spatial resolution in real-world applications. The definition of the random rates is circular and allows the use of very different methods that generate different results. The chapter on sensitivity is ambiguous, leading to publications using different or even unclear methods for the measurement of sensitivity and creating ambiguity in the interpretation of the sensitivity of different PET systems.
If applicable, we demonstrate the claimed issues with simple simulation studies. All discussions in this work should be universally applicable to any PET system. However, it is still helpful and instructive to support the claims in this work with real-world data. This is done using data obtained with the Hyperion II D PET/MRI scanner, which was developed by our group [3]. Using the same data, we already have published a performance evaluation based on the NEMA standard [4].
The goal of this work is to start a discussion on a revised version of the NEMA standard and to provide input for this discussion.
Spatial resolution
To evaluate the spatial resolution, the NEMA standard mandates the use of point source scans which are reconstructed using filtered backprojection. However, basically all modern PET scanners instead use an iterative maximum likelihood expectation maximization (MLEM) algorithm for reconstruction [4][5][6][7][8][9][10][11][12][13][14][15], so a scanner's spatial resolution using filtered backprojection is not necessarily indicative of its spatial resolution for applications. While the mandated filtered backprojection is intended to benchmark the detector performance alone, we will demonstrate in the following that it disadvantages certain scanner geometries. Furthermore, the NEMA standard specifies that the spatial resolution must be determined using the projections of the reconstructed point sources inside a window in image space, without strictly specifying the size of this projection window. We will demonstrate that this can lead to an ambiguous spatial resolution which depends on the size of the projection window and allows for artificially enhancing the spatial resolution by choosing a particularly large projection window for certain scanner geometries.
The main disadvantage of filtered backprojection is that it does not include any model of the detector and assumes an ideal, ring-like PET scanner, while the detectors in real-world PET scanners are usually arranged in a block geometry with anisotropic spatial resolution. Lines of response (LORs) perpendicular to the detector's front face are detected with the highest resolution, while tilted LORs have a parallax error in the detected position, which increases with the tilt of the LOR relative to the detector's front face, as illustrated in Fig. 1. In principle, this effect can be reduced by detectors which are able to determine the depth of interaction (DOI) of the gamma interaction, but in practice most PET systems do not employ detectors with DOI determination [4-6, 11, 12, 14, 16]. Additionally, PET rings have gaps between the detectors, where no LORs are detected at all.
These issues with filtered backprojection lead to artifacts in the reconstructed activity. For instance, each angle at which the PET ring has an enhanced spatial resolution creates an excess in reconstructed activity along the line connecting this position with the point source, and each angle with degraded spatial resolution creates a reduction in reconstructed activity along the respective line. Similarly, gaps between the detectors create a lack of reconstructed activity along these lines.
To understand and demonstrate this behavior, it is instructive to look at these effects in sinogram space. In sinogram space, the enhanced spatial resolution of perpendicular LORs manifests as hot spots, or rather peaks, at the center of each detector module, as Fig. 2g shows. With increasing distance from the center of the detector module, the spatial resolution degrades, blurring the line of the point source in sinogram space. We model this as the convolution of the sinogram of a Gaussian point source with the parallax error of the detector. The parallax error of the detector stack can be modeled as two triangles connected at their tips, as shown in Fig. 2d. The parallax error is proportional to sin ϕ, where ϕ is the angle to the normal of the block detector as defined in Fig. 1; the shape shown in Fig. 2d is a small-angle approximation of this.
[Fig. 1 caption: Ring geometry used for the simulations and the measurement. The blue bands show the parallax error of LORs, which increases approximately proportionally to the angle ϕ to the normal of the block detector.]
In addition to the inherent problems of mandating the use of filtered backprojection, the NEMA standard additionally mandates projecting the reconstructed three-dimensional activity onto different one-dimensional axes using a projection window. However, the size of the projection window is not fully specified: "The response function is formed by summing all one-dimensional profiles that are parallel to the direction of measurement and within at least two times the FWHM of the orthogonal direction" [1, p. 7]. The first issue is that this definition is circular, since the minimal size of the projection window needed to determine the FWHM is defined using the FWHM itself. One can easily fix this problem, either by using a sufficiently large projection window in the first place, or by reducing the size of the projection window iteratively depending on the FWHM determined in the previous iteration. However, the much bigger problem is that the size of the projection window can strongly influence the resulting spatial resolution. The cause of this is the integration of the star-like artifacts created by the anisotropic spatial resolution, as we demonstrate with the following simulation, shown in Fig. 2.
We created the activity distribution of an ideally reconstructed point source by assuming a rotationally symmetric two-dimensional normal distribution, shown in Fig. 2a. The position of the point source is off-center at a radial offset of 10 mm. To investigate the essence of the effects, we do not include noise in our simulation. From this ideally reconstructed point source, we create a sinogram by forward projection (i.e., by applying a Radon transformation). The resulting sinogram is shown in Fig. 2b.
We include the gaps between the detector stacks in our simulation by creating a sensitivity sinogram, in which all bins corresponding to gaps are 0 and all bins corresponding to sensitive detector area are 1, shown in Fig. 2c. The simulated geometry is depicted in Fig. 1 and follows the geometry of the Hyperion II D scanner to allow a comparison between simulation and measurement. [Fig. 2 caption: Visualization of the influence of anisotropic detector resolution on the filtered backprojection and the resulting spatial resolution along the two axes. e, h, k: simulation with only gaps. f, i, l: simulation with anisotropic detector resolution and gaps of 10 detector modules. g, j, m: measurement. The simulation with anisotropic detector resolution and the measurement exhibit a star-like artifact in the reconstruction, which leads to a split in spatial resolution along the x and y axes, as shown in the bottom row.] When we include this model of gaps in our simulation by multiplying the sensitivity sinogram with our point-source sinogram (Fig. 2e) and then performing a filtered backprojection (i.e., an inverse Radon transformation), we obtain a reconstructed point source with slight artifacts, shown in Fig. 2h. As stated above, the artifacts are a lack of reconstructed activity along the lines connecting the gaps and the point source. When analyzing the spatial resolution of the filtered backprojection with gaps, we observe little influence of the gaps compared to the filtered backprojection of an ideal sinogram without gaps. More importantly, the resulting spatial resolution of 1.2 mm FWHM is stable against changes in the size of the projection window, as shown in Fig. 2k. Thus, gaps between the detectors are not the cause of severe artifacts and, given the usually small gaps of PET scanners, have only a very minor influence on the resulting spatial resolution.
When we additionally include the effect of the anisotropic detector resolution due to parallax errors by convolving the point-source sinogram and the point spread function in Fig. 2d, the resulting filtered backprojection in Fig. 2i exhibits a star-like artifact, i.e., the lines connecting the center of each detector stack and the point source exhibit a visible excess in activity.
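The following Python sketch reproduces the spirit of this numerical experiment using the Radon transform from scikit-image: a Gaussian point source is forward projected, each angular view is blurred radially with a width that grows with the angular distance from the nearest module normal, and the filtered backprojection is then analyzed with projection windows of different sizes. The grid size, blur widths, and module count are assumptions for illustration, not the parameters of the simulation described here.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from skimage.transform import radon, iradon

    px = 0.1                                           # mm per pixel (assumed)
    n = 401
    x = (np.arange(n) - n // 2) * px
    X, Y = np.meshgrid(x, x, indexing="ij")
    point = np.exp(-((X - 10.0) ** 2 + Y ** 2) / (2 * 0.4 ** 2))   # point source, 10 mm offset

    theta = np.linspace(0.0, 180.0, 360, endpoint=False)
    sino = radon(point, theta=theta)                   # forward projection

    # Crude parallax model: blur each view radially, with a width proportional to |sin|
    # of the angle to the nearest module normal (10 modules assumed); the radial
    # dependence of the parallax error is ignored in this sketch.
    n_modules = 10
    for j, th in enumerate(np.deg2rad(theta)):
        w = np.abs(np.sin(n_modules / 2.0 * th))       # 0 at module centres, 1 in between
        sino[:, j] = gaussian_filter1d(sino[:, j], sigma=(0.2 + 1.5 * w) / px)

    recon = iradon(sino, theta=theta)                  # filtered backprojection (ramp filter)

    def profile_fwhm(img, axis, half_window):
        """Project img onto `axis` inside +/- half_window bins around the maximum."""
        cy, cx = np.unravel_index(np.argmax(img), img.shape)
        if axis == 0:                                  # profile along the first image axis
            prof = img[:, cx - half_window: cx + half_window + 1].sum(axis=1)
        else:                                          # profile along the second image axis
            prof = img[cy - half_window: cy + half_window + 1, :].sum(axis=0)
        prof = prof - prof.min()
        above = np.where(prof >= 0.5 * prof.max())[0]
        return (above[-1] - above[0]) * px

    for half_window in (10, 30, 60):                   # projection half-window in bins
        print(half_window, profile_fwhm(recon, 0, half_window),
              profile_fwhm(recon, 1, half_window))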
If one of these excesses aligns with one of the Cartesian projection axes, as it does for the x axis in the simulated geometry, the projection onto the axis perpendicular to it will result in a peaked excess at the maximum of the line profile, as shown in Fig. 3. A scanner's spatial resolution is defined by the FWHM and FWTM of this profile, which depend strongly on the height of the maximum. Therefore, a peaked excess at the maximum will significantly enhance the resulting spatial resolution. For our geometry, this enhancement is only observed for the y axis, because only the x axis has an excess in activity aligned with it, as there are no detector stacks perpendicular to the y axis. This difference between the resolutions in x and y is essentially an artifact and is basically non-existent in real-world applications using an iterative maximum likelihood expectation maximization (MLEM) reconstruction. More importantly, the extent of this effect depends strongly on the size of the projection window, as demonstrated in the bottom row of Fig. 2. Increasing the size of the projection window enhances the resulting spatial resolution in y (i.e., decreases FWHM and FWTM) while degrading the spatial resolution in x. This makes comparison of the spatial resolution of different PET systems difficult and maybe even impossible, as the NEMA standard neither specifies a clear projection window size nor mandates that the used window size be reported. Thus, most publications do not state the used projection window [5,7,14,16]. Other geometries may not exhibit this behavior at all, favoring or disfavoring systems which have detectors perpendicular to a Cartesian axis.
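One way to make the window handling at least reproducible is the iterative reduction mentioned earlier, shrinking the window to twice the FWHM found in the previous pass. A minimal sketch follows, assuming a 2D reconstructed point-source image and a simple half-maximum crossing estimate of the FWHM (without the sub-bin interpolation a full implementation would use).

    import numpy as np

    def fwhm(profile, bin_size):
        """FWHM of a 1D profile in physical units (half-maximum crossing count)."""
        p = profile - profile.min()
        above = np.where(p >= 0.5 * p.max())[0]
        return (above[-1] - above[0] + 1) * bin_size

    def nema_profile_fwhm(img2d, bin_size, axis=0, n_iter=5):
        """Iteratively shrink the projection window to 2x the FWHM of the previous pass."""
        centre = np.unravel_index(np.argmax(img2d), img2d.shape)
        half = img2d.shape[1 - axis] // 2              # start with (almost) the full image
        for _ in range(n_iter):
            c = centre[1 - axis]
            lo, hi = max(c - half, 0), min(c + half + 1, img2d.shape[1 - axis])
            sub = img2d[:, lo:hi] if axis == 0 else img2d[lo:hi, :]
            prof = sub.sum(axis=1 - axis)
            width = fwhm(prof, bin_size)
            half = max(int(round(width / bin_size)), 1)   # window = 2 x FWHM
        return width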
The measurement and filtered backprojection reconstruction of point sources with the Hyperion II D scanner, shown in Fig. 2g and j, look very similar to the simulation which includes parallax error and gaps: the sinogram has the same hot spots at the angles where the lines of response are perpendicular to the detector surface, and the reconstruction exhibits the same star-like artifact. The analysis of the reconstruction yields the same difference in spatial resolution between the x and y axes. Additionally, we observe the same strong dependence on the size of the projection window, shown in Fig. 2m.
An extreme example of a scanner geometry affected by this issue would be a box geometry instead of the conventional ring geometry, i.e., a PET scanner with 4 large perpendicular detector modules without DOI capabilities. With such a geometry, the filtered backprojection artifact would have the shape of a cross, with both lines of excessive activity aligned with the x and y axis. Thus, the artifact would enhance the resolution along both x and y axis by boosting the maximum of both projections. This scenario is not solely hypothetical, as small-animal PET scanners with the described box-like geometry exist such as PETbox 4 [17]. In PETbox's NEMA NU-4 performance evaluation they state that using FBP was not possible "since a FBP algorithm specific for the PETbox4 system with the unconventional geometry has not been developed" [17, p. 3797].
Other examples of published performance evaluations which omitted the filtered backprojection altogether when evaluating the spatial resolution are [8,18]. This indicates that these groups do not find the results based on filtered backprojection indicative of the performance of their systems.
Fixing the issues of this method and proposing a better method to evaluate the spatial resolution is challenging. The NEMA standards committee surely knew many of these issues and we believe most of the PET community will be aware of issues with filtered backprojection, as well. However, so far, none of the performance publications based on NEMA discussed the issues presented here, so we believe it is worthwhile to state them to start a discussion.
One obvious solution would be to simply not use filtered backprojection and instead perform the reconstruction with the default reconstruction method provided with the scanner, which is also used for the evaluation of the image quality phantom and for real-world applications. In modern scanners, this is usually an iterative reconstruction algorithm, e.g., ordered subset expectation maximization [19] or maximum likelihood expectation maximization [20,21]. However, these algorithms can artificially enhance the spatial resolution of point sources without background activity due to, e.g., the non-negativity constraint or resolution recovery [22-24]. Thus, the reconstruction of a point source would mostly be a benchmark of the reconstruction and not of the underlying detector performance. We suspect that these arguments were the main reason why the NEMA standards committee chose filtered backprojection instead.
One alternative could be the evaluation of spatial resolution using a Derenzo hot-rod phantom. The standard could specify the geometry of such a phantom, specify the activity and scan time, allow the use of the reconstruction method supplied by the manufacturer, and then define a quantitative analysis method. The Derenzo phantom is already well established in the community as a benchmark for spatial resolution; for instance, several NEMA performance publications already include such a measurement [5,7,12,15]. However, these results are not easily comparable, as there is currently no standardized quantitative analysis method to determine the spatial resolution from a measurement of a Derenzo phantom. Usually, the spatial resolution is estimated by judging qualitatively at which rod distance the hot rods are still discernible. In principle, such a definition of spatial resolution based on the ability to resolve two close points is very reasonable and is commonly used as a definition of spatial or angular resolution for telescopes and microscopes [25,26]. However, for a quantitative definition of spatial resolution, there must be a standardized limit on the valley-to-peak ratio between two resolvable point sources, i.e., on how much the intensity between the two peaks must dip to make them just resolvable. In a new standardized definition of PET spatial resolution, the PET community could follow the commonly used Rayleigh criterion with an intensity dip of 26.5% [27], or standardize a different limit.
For a scan of a Derenzo phantom, such a resolvability criterion requires determining the valley-to-peak ratios of the profile lines over the different regions of the phantom. To include anisotropies in the spatial resolution, the profile lines should be defined over multiple angles, as demonstrated in Fig. 4a. Figure 4b shows the resulting distribution of valley-to-peak ratios for the phantom's 0.9-mm region. We would recommend defining the spatial resolution as the hot-rod distance of the region in which at least 90% of the valley-to-peak ratios are below 0.735, i.e., the valley dips are more than 26.5%, for consistency with the Rayleigh criterion. Alternatively, one could define a limit based on the average valley-to-peak ratio of a region or use a different percentile than the suggested 90%. As shown in Fig. 4b, the region with rod distances of 0.9 mm has 100% of the valley-to-peak ratios below 0.735. For the 0.8-mm region, over half of the valley-to-peak ratios would be above 0.735 in our measurement. Thus, the resulting spatial resolution would be 0.9 mm.
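A minimal sketch of the suggested criterion, assuming that the profile lines across one Derenzo region have already been extracted as 1D arrays:

    import numpy as np

    def valley_to_peak_ratios(profile):
        """Valley-to-peak ratios along one profile line crossing a row of hot rods.
        Peaks are taken as simple local maxima; a full implementation would restrict
        them to the nominal rod spacing, as discussed in the text below."""
        peaks = [i for i in range(1, len(profile) - 1)
                 if profile[i] >= profile[i - 1] and profile[i] >= profile[i + 1]]
        ratios = []
        for a, b in zip(peaks[:-1], peaks[1:]):
            valley = profile[a:b + 1].min()
            ratios.append(valley / min(profile[a], profile[b]))
        return np.array(ratios)

    def region_is_resolved(profiles, limit=0.735, fraction=0.9):
        """Suggested criterion: at least `fraction` of all ratios in a region below `limit`."""
        ratios = np.concatenate([valley_to_peak_ratios(p) for p in profiles])
        return np.mean(ratios < limit) >= fraction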
To prevent arbitrary selection of peaks and valleys in a noisy reconstruction, the standard could specify a limit for the allowed deviation from the physical hot-rod distances when selecting the position of peak and valleys in the profiles of the Derenzo region.
To evaluate the influence of radial and axial offsets on the spatial resolution, the standard could specify different radial distances at which the Derenzo phantom should be placed. Similarly, the standard could also specify additional measurements of the rotated phantom to investigate the isotropy of the spatial resolution.
In our opinion, such a method would depend much less on the system's geometry and technology and would provide a much more realistic benchmark, closely mirroring real-world use of the system. One disadvantage is that the precision of this method would be limited by the differences in hot-rod distances between the phantom's regions. However, with commonly used Derenzo phantoms, one would achieve a precision in the determination of the spatial resolution of 0.1 mm, which is more than adequate to assess the scanner's viability for intended applications. [Fig. 4 caption: Evaluation of spatial resolution using a Derenzo phantom. a: Reconstruction of a Derenzo phantom scan; the labels indicate the diameters of and distances between the rods; the red lines show an example of profile lines used to determine valley-to-peak ratios. b: Distribution of valley-to-peak ratios for the region with a rod distance of 0.9 mm; all ratios are below 0.735, which is marked with a red vertical line.] Another drawback of the Derenzo phantom is that it is missing warm background activity, which could potentially lead to an artificial enhancement of spatial resolution with a high number of reconstruction iterations.
The outlined method is only intended as one possible first suggestion. We believe that developing a robust and objective method to benchmark the spatial resolution is a challenging and important research problem. One advantage of the current evaluation method is its simplicity, which simplifies Monte Carlo simulation and similar research.
As another alternative, Lodge et al. [28] have recently proposed a novel method for the measurement of clinical PET spatial resolution using a homogeneous cylinder phantom at an oblique angle. Another idea would be to use two adjacent point sources in a warm background, similar to the method described in [24].
Scatter fraction, count losses, and random coincidence measurements
The definitions of the randoms rate, scatter rate, and scatter fraction are not satisfiable and thus ill-defined for systems employing detector material containing intrinsic radioactivity, such as LYSO or LSO scintillators, as most modern PET systems do.
To explain this issue, we give a brief summary of the NEMA standard for the measurement of the scatter fraction, count losses, and random coincidence rate in the following. The measurement is specified as a scan of an FDG-filled line source inside a scatter phantom consisting of polyethylene. The rows of the measured sinogram are centered at their maxima and the sum of all rows is calculated. In the resulting radial profile of the phantom scan, the NEMA standard specifies a signal window of 7 mm around the maximum. All event counts outside this signal window are regarded as either scatter or randoms. It is assumed that the sum of scatter and random event counts is at the same level inside the signal window as on the edges of the signal window. The sum of random and scatter event counts is denoted as C r+s , and the sum of all event counts are denoted as the total event count C TOT .
For systems without intrinsic radioactivity, the scatter fraction is supposed to be determined by assuming that the contribution of the randoms rate to the combined scatter and random counts C r+s is negligible for measurements at a low activity. Then, the randoms rate is determined from the total event rate R TOT and true event rate R t .
For systems with intrinsic radioactivity, the sum of random and scatter event counts also includes the random events produced by the intrinsic radioactivity, and this contribution of the intrinsic randoms rate cannot be neglected at low measured activities [29]. The NEMA standard acknowledges this issue by specifying: "For systems employing detector material containing intrinsic radioactivity, the scatter fraction shall be evaluated by first evaluating the scattered event counting rate (see section 4.4.5 below)." [1, p. 13] Section 4.4.5 gives the following formula for the scattered event counting rate R_s, which already includes the randoms rate R_r [1, p. 14]:
R_s = R_TOT − R_t − R_r − R_int.    (1)
The formula for the randoms rate is given above, in section 4.4.4, and it includes the scatter fraction SF:
R_r = R_TOT − R_t / (1 − SF).    (2)
The scatter fraction SF, which is defined in the mentioned section 4.4.5, in turn includes the scattered count rate:
SF = R_s / (R_t + R_s).    (3)
These three equations are not satisfiable for R_int > 0, as shown in the following. Inserting the definition of SF (Eq. 3) into the definition of R_r (Eq. 2) gives R_r = R_TOT − R_t·(R_t + R_s)/R_t = R_TOT − R_t − R_s. Inserting this into the definition of R_s (Eq. 1) gives R_s = R_TOT − R_t − (R_TOT − R_t − R_s) − R_int = R_s − R_int, which implies R_int = 0. This is a contradiction, since the standard specifies these definitions of R_r and R_s precisely for scanners with intrinsic radioactivity, i.e., for R_int > 0.
We can speculate on the intended meaning of the NEMA standard's definitions. One simple explanation is that the term − R int was simply forgotten in Eq. 2 since subtracting R int from R r would remove the contradiction. However, that would still leave the definition circular and would thus require explicit instructions on how to solve this set of equations in practice. One sensible instruction could be to neglect the influence of the randoms rate R r (i.e., assume R r = 0) in Eq. 1 for measurements at low activities to determine R s and SF. We can then assume that SF is approximately constant with increasing activity and use SF determined at a low activity to calculate the randoms rates R r and scatter rates R s at higher activities.
The NEMA standard specifies the following lower activity threshold: "For scanner employing, radioactive scintillator material, measurements shall be performed until the single event rate is equal to twice intrinsic single event rate" [1, p. 11]. Our scanner has an intrinsic single event rate of 80 kcps and we reach a single event rate of 160 kcps at 430 kBq. Thus, we use this activity to estimate the scatter rate R s using Eq. 1 while neglecting the randoms rate. This scatter rate is then used with Eq. 3 to determine the scatter fraction SF. This scatter fraction is assumed to be constant with varying activity and we use this with Eq. 2 to determine the randoms rates R r at different activities. With these randoms rates we can evaluate Eqs. 1 and 3 again to determine the scatter rates and fractions at higher activities without neglecting the randoms rates.
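The procedure described in this paragraph can be written compactly as follows. This is a sketch of our reading of the equations, using the corrected form of Eq. 2 (with R_int subtracted) as speculated above and a scatter fraction assumed constant over activity; it is not an implementation prescribed by the standard.

    import numpy as np

    def nema_rates_with_intrinsic(r_tot, r_t, r_int, i_low):
        """Randoms and scatter rates for a scanner with intrinsic radioactivity.

        r_tot, r_t : arrays of total and true count rates for the different activities
        r_int      : intrinsic (background) count rate
        i_low      : index of the low-activity acquisition (singles rate ~ twice intrinsic)
        """
        r_tot = np.asarray(r_tot, dtype=float)
        r_t = np.asarray(r_t, dtype=float)

        # Step 1: at low activity, neglect randoms (R_r = 0) in Eq. 1 to obtain R_s and SF.
        r_s_low = r_tot[i_low] - r_t[i_low] - r_int
        sf = r_s_low / (r_t[i_low] + r_s_low)                  # Eq. 3

        # Step 2: assume SF constant and evaluate the (corrected) Eq. 2 at all activities.
        r_r = r_tot - r_int - r_t / (1.0 - sf)

        # Step 3: re-evaluate Eq. 1 with the randoms estimate included.
        r_s = r_tot - r_t - r_r - r_int
        return sf, r_r, r_s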
Alternatively, the NEMA standard allows the use of a randoms rate estimate supplied directly by the scanner. Such estimates usually use one of two techniques: one using a delayed coincidence window (DCW) [30,31] and one based on the singles rates [30]. The singles rate (SR) method infers the randoms rate R_ij between two detector elements i and j from the singles rates S_i and S_j using the formula R_ij = 2τ·S_i·S_j (4), with the coincidence time window τ. However, this method systematically overestimates the randoms rate [32,33]. Oliver et al. [34] proposed an improved method, "Singles Prompt" (SP), which includes corrections based on the coincidence (prompt) rate P_i to account for the contribution of true coincidences and pile-up events (Eq. 5), where λ is the solution of an auxiliary equation (Eq. 6) involving the total singles rate S = Σ_i S_i and the total prompt rate P = Σ_i P_i. We have implemented these methods on the Hyperion II D scanner and can compare them empirically with the modified method suggested by the NEMA standard. The NEMA standard specifies a cylindrical signal window of 8 mm around the phantom (i.e., a total diameter of 41 mm) in sinogram space. We applied an equivalent cylindrical signal window, i.e., we only determined the randoms rates for pairs of detector elements whose lines of response intersect the cylindrical signal window. Figure 5 shows the total randoms rates as a function of activity inside the scanner for the four different methods: NEMA, DCW, SR, and SP.
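For comparison, the SR estimate of Eq. 4, restricted to LORs inside a signal window, can be sketched as follows (the window mask is assumed to have been derived from the scanner geometry):

    import numpy as np

    def randoms_rate_sr(singles, tau, in_window):
        """Total singles-rate (SR) randoms estimate, Eq. 4: R_ij = 2 * tau * S_i * S_j.

        singles   : singles rates per detector element (cps)
        tau       : coincidence time window (s)
        in_window : boolean matrix, True where the LOR (i, j) intersects the signal window
        """
        s = np.asarray(singles, dtype=float)
        r_ij = 2.0 * tau * np.outer(s, s)                         # pairwise randoms rates
        mask = np.triu(np.asarray(in_window, dtype=bool), k=1)    # count each pair once
        return r_ij[mask].sum()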
As expected, the randoms estimate R_SR is larger than the randoms estimate R_SP: R_SR ≥ R_SP. The randoms estimate R_DCW is similar to R_SP, and the modified NEMA randoms estimate R_NEMA is similar to R_SR, which is known to be less precise than R_DCW and R_SP [34].
Oliver et al. [34] showed that randoms estimates R_DCW using a delayed coincidence window (DCW) are larger than or equal to the randoms estimates R_SP: R_SR ≥ R_DCW ≥ R_SP. There are many publications investigating the correctness of these methods, providing evidence from theory, simulations, and measurements. For the NEMA method, on the other hand, we are not aware of any publications investigating its correctness. Additionally, the verbatim definition of the NEMA method for systems with intrinsic radioactivity is contradictory, as shown above. However, we acknowledge the value of allowing a randoms estimation method which is independent of the ability to measure either delayed coincidences or singles rates. [Fig. 5 caption: Comparison of different methods for the determination of random event rates. NEMA denotes the method based on the NEMA standard using Eq. 1, DCW uses a delayed coincidence window, SR is based on the singles rates using Eq. 4, and SP incorporates additional corrections using Eq. 5.] Thus, one simple revision to the standard could be to correct the contradictions in the definition, possibly in the way described in this work.
All of these points also apply to the scatter rate R_s defined in Eq. 1 and to the noise-equivalent count rate, as the definitions of these observables depend on the randoms rate.
Sensitivity
We think the NEMA standard's protocol for the evaluation of the sensitivity is unclear. Section 5.3 of the NEMA standard specifies stepping a point source axially through the scanner. Further, Section 5.3.4 implies that a separate scan should be acquired for each source position. In Section 5.4, all of the data analysis is specified for single sinogram slices i. For instance, the sensitivity is defined as S_i = (R_i - R_B,i) / A_cal (Eq. 7), with the counting rate R_i and the background rate R_B,i of sinogram slice i. However, the NEMA standard only ever references sinogram slices and never different measurements. We have one measurement per source position, and each of these measurements has many sinogram slices. In other words, there are many measurements for each axial sinogram slice. Whenever the NEMA standard refers to sinogram slices, it remains unclear which measurement to consider. One possible intention could be to calculate the sum of all measurements; however, this is never explicitly stated. This would effectively create a sensitivity measurement with a virtual line source of activity n·A, where n is the number of measurements. Such a line source would be similar to the source distribution specified in the sensitivity protocol of the clinical NEMA NU 2-2012 standard. However, the sensitivity S_i is defined via the activity A_cal in Eq. 7, not the virtual activity n·A of the combined measurements. Unfortunately, the NEMA standard does not define A_cal in this equation; the only definition of A_cal is in Section 1.2 as the "activity at time T_cal". In conclusion, if this interpretation was the intention of the NEMA standard, multiple required instructions are missing.
Another possible interpretation could be to take, from each measurement, the slice i in which the point source is located at the centre. However, this interpretation is not consistent with the formulas given for the total system sensitivity, which lack a normalization by the total number of slices. With such a normalization, this interpretation would effectively amount to an additional axial signal window around the point source. However, the size of this axial signal window would depend on the scanner's slice thickness, giving an unfair disadvantage to high-resolution scanners: with a slice thickness of 1 mm, for instance, this axial signal window would cut into the point source. Additionally, this interpretation would not be realistic in the context of real-world applications, where the sensitivity is supposed to indicate how many true coincidences one can expect for a given activity inside the scanner's FoV.
In summary, the NEMA standard does not include any instructions on how to analyze the data of the multiple measurements it instructs the user to take. It only defines the sensitivity of sinogram slices, without specifying the relationship between the sinogram slices and the measurements with different source positions.
One consistent alternative definition of sensitivity could simply sum all sinogram slices and then divide the total coincidence counts by the acquisition time and activity for each measurement (i.e., source position). The sensitivity profile would then be this total sensitivity as a function of the source position. To calculate the mouse- and rat-equivalent sensitivities, one would average this sensitivity profile over the central 7 cm or 15 cm, respectively. Because the NEMA standard specifies a transverse signal window with a width of 20 mm in sinogram space, it would be consistent to apply the same 20-mm signal window around the point source in the axial direction. We believe that this method is already used in multiple performance evaluations based on NEMA [5,12,14,35], although the exact details of the methods are usually not explained; a sketch of such an analysis follows below.
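The proposed analysis can be written compactly as follows. The array shapes, function names, and the assumption that the slice counts are already background-corrected and axially windowed around the source are ours.

```python
import numpy as np

def sensitivity_profile(counts, t_acq, activity):
    """counts[k, i]: true coincidences in sinogram slice i of measurement k
    (one measurement per source position); returns one total sensitivity
    value (cps/Bq) per source position."""
    return counts.sum(axis=1) / (t_acq * activity)

def region_sensitivity(profile, positions_mm, span_mm):
    """Mouse-equivalent (span_mm = 70) or rat-equivalent (span_mm = 150)
    sensitivity: average of the profile over the central span."""
    sel = np.abs(np.asarray(positions_mm)) <= span_mm / 2.0
    return profile[sel].mean()
```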
Therefore, the ambiguity of the NEMA standard can lead to unclear and incomparable results in performance publications based on NEMA, impeding an objective comparison of different sensitivity results.
For instance, Prasad et al. [13] seem to follow the formulas given by NEMA quite closely, without clearly specifying how the data of the different measurements at different source positions are used in the data analysis. The reported sensitivity profile has data points above 1 cps/Bq for the central slices, i.e., an impossible sensitivity larger than 100%. They claim a total absolute sensitivity of 12.74%, which is implausibly large compared to the expected geometric sensitivity of 12.9%: the usual ratio between measured peak sensitivity and geometric sensitivity is between 0.3 and 0.5 [4]. We calculated this ideal geometric sensitivity using their scanner's diameter, axial length, and crystal thicknesses with the simple geometric model explained in [4].
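The model of [4] is not reproduced in this paper; a plausible back-of-envelope version, used here only to illustrate the consistency check, combines the solid-angle fraction seen from the scanner centre with the squared single-crystal stopping efficiency. The LSO attenuation coefficient of roughly 0.087/mm is an assumed value.

```python
import numpy as np

def geometric_sensitivity(diameter_mm, axial_len_mm, crystal_mm, mu_per_mm=0.087):
    # Solid-angle fraction of a detector band of length L at diameter D,
    # seen from the centre of the field of view: sin(arctan(L / D)).
    omega_frac = np.sin(np.arctan(axial_len_mm / diameter_mm))
    # Both 511-keV photons must interact in a crystal of thickness crystal_mm.
    eps = 1.0 - np.exp(-mu_per_mm * crystal_mm)
    return omega_frac * eps**2

# Consistency check: a measured peak sensitivity should be roughly
# 0.3-0.5 times the geometric value [4]; 12.74% / 12.9% is about 0.99.
print(0.1274 / 0.129)
```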
Image quality, accuracy of attenuation, and scatter corrections
The NEMA standard defines several observables for the quantitative analysis of the image quality phantom. The uniformity is defined as the relative standard deviation of all voxels in a large cylindrical volume of interest over the uniform region of the image quality phantom. For the determination of the recovery coefficients, the image slices along the central 10 mm of the hot rods are averaged. Then, the recovery coefficients are defined as the maximum values in circular regions of interest around the hot rods with different diameters, divided by the mean activity in the volume of interest over the uniform region. The issue with this definition is that the recovery coefficients are correlated with the uniformity: the maximum value of a randomly distributed sample increases with its variance, even if the mean of the distribution is constant. Thus, this definition of the recovery coefficients does not measure the mean recovery in the hot rods, but a combination of recovery and variance. With a high variance and a good recovery, the recovery coefficients can even reach values larger than 1.
We can demonstrate this behavior in a simple Monte Carlo simulation, in which we assume that the reconstructed activity in a voxel follows a normal distribution with the standard deviation given by the uniformity. The simulated geometry is the NEMA image quality phantom. Figure 6 shows the simulated recovery coefficients of the 5-mm rod as a function of the uniformity. The ground truth for the recovery coefficient of the activity in the rod was 0.95. The data analysis follows the NEMA standard, i.e., the recovery coefficient is defined by the maximum activity in the region of interest. The drawn error bars are calculated from the errors on the mean of the averaged pixels in the region of interest. The simulation demonstrates that the recovery coefficient is always overestimated compared to the ground truth and increases with increasing variance (i.e., larger uniformity values).
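The essence of this simulation can be reproduced in a few lines. The voxel count of the region of interest and the number of averaged slices below are hypothetical parameters, not the values used for Fig. 6.

```python
import numpy as np

rng = np.random.default_rng(0)

def nema_recovery(true_rc=0.95, uniformity=0.05, n_vox=100, n_avg=25,
                  n_trials=5000):
    """NEMA-style recovery coefficient: maximum voxel value in the ROI after
    averaging n_avg axial slices, with voxel noise N(true_rc, sigma)."""
    sigma = true_rc * uniformity / np.sqrt(n_avg)  # noise after slice averaging
    rois = rng.normal(true_rc, sigma, size=(n_trials, n_vox))
    return rois.max(axis=1).mean()                 # mean reported RC

for u in (0.02, 0.05, 0.10, 0.20):
    print(f"uniformity {u:.2f} -> mean NEMA RC {nema_recovery(uniformity=u):.3f}")
```

The maximum of many noisy voxels always exceeds the true mean, and the excess grows with the noise level, which is exactly the correlation between recovery and uniformity described above.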
Thus, the NEMA standard's definition of the recovery coefficients hampers an easy comparison of different scanners' recovery performance, because recovery and uniformity must be compared at the same time. In other words, the same scanner can achieve different recovery performance at different uniformity operating points. The user can influence the uniformity with parameters such as the amount of filtering during reconstruction. Figure 7 shows measured recovery coefficients as a function of varying uniformity. Each uniformity value corresponds to a different width of the Gaussian kernel used during reconstruction of a scan of the image quality phantom. We used the maximum likelihood expectation maximization reconstruction described in [36]. As predicted by the Monte Carlo simulation, the recovery coefficients are correlated with the relative standard deviation in the uniformity region: both values decrease with large filter widths, i.e., reduced variance in the image. Of course, it is not unexpected that the recovery decreases with stronger filtering during reconstruction. However, the observed effect comes on top of the expected decrease in recovery due to filtering. Using the NEMA standard's observables, improving the uniformity performance will always lead to a loss in observed recovery, regardless of whether the actual true recovery degraded or not. When conducting a NEMA performance evaluation, one has to choose an arbitrary point on the uniformity-recovery curve, resulting in one of many possible results, which are difficult to compare with the results of other scanners.

Fig. 7 Measured recovery coefficients as a function of uniformity. The curves correspond to rods with diameters of 5 to 1 mm, from top to bottom. Each uniformity value corresponds to a different filter width used during reconstruction. A larger filter reduces variance and therefore improves uniformity (i.e., decreases the relative standard deviation). The recovery coefficients increase with increasing uniformity values, so overall image quality performance is a trade-off between uniformity and recovery.

As another minor issue, the NEMA standard derives the standard deviation of the recovery coefficients from the standard deviations of the line profiles along the axial direction and the standard deviations of the uniform region using Gaussian error propagation. This is not the correct standard deviation of the recovery coefficient, because the standard deviation of the maximum of a set of random values is not the standard deviation of the underlying distribution.
Fixing the definition of the recovery coefficients is not trivial. The NEMA standard probably uses the maximum because of the small diameters of the hot rods: for the very small rods, few voxels, if any, lie clearly in the center of the rods. Alternative definitions using the mean in a volume of interest will therefore be biased by the smaller reconstructed activity in the border regions of the rods. However, with today's high-resolution PET scanners, we believe it would be possible for most scanners to define volumes of interest (VoIs) inside the hot rods and then define the recovery coefficients as the mean reconstructed activity inside the VoI. Even if these VoIs partially included the border regions of the rods, this would still be a comparable measure of recovery for every scanner. For the larger rods, it should be no problem to define VoIs that lie well inside the hot rods and contain a sufficient number of voxels. It is for these larger rods that the current definition of the recovery coefficients leads to recoveries of essentially 1 or larger for all current scanners, hindering a differentiation of subtle differences in recovery between the scanners.
Another addition to the NEMA standard could be a scan of the image quality phantom at low activities to evaluate the performance of the reconstruction under low statistics, because iterative reconstruction methods usually exhibit bias at low statistics [37,38]. Another research opportunity would be the development of a new phantom geometry using small hot spheres instead of axial hot rods. Such a geometry would be more similar to hot lesions in rodents and would thus provide a benchmark of contrast recovery closer to actual uptake in rodents. It would also be more directly comparable to the phantom used in the clinical NEMA NU-2 standard [39]. Ideally, such hot spheres would be situated in a warm background, although that would introduce the problem of cold sphere walls [40]. However, manufacturing a practical phantom with millimeter-sized fillable spheres is mechanically challenging.
General points
The NEMA standard does not explicitly mandate the use of the same settings for each measurement. Most scanners offer a multitude of settings for measurements and data processing, such as trigger settings, coincidence and energy window sizes, and quality filters for gamma interactions (e.g., detector scatter rejection [41,42]). The choice of settings often requires a trade-off between different performance parameters. For example, sensitivity benefits from wide energy and coincidence windows and no quality filters, while image quality and spatial resolution benefit from narrow windows and strict quality filters. One could report very misleading performance results by optimizing the settings for each performance measurement separately, thus achieving performance results that are unattainable simultaneously in real-world applications.
While following the standard, many performance publications based on NEMA neither state explicitly whether they used the same settings for every measurement nor report all settings used for each measurement. For example, Nagy et al. [5] use wide energy windows for the sensitivity and count rate measurements and a narrow energy window for the measurement of spatial resolution. They do not report any settings for the image quality measurement.
Another issue is the mandated use of sinograms. The data analysis for every measurement except the image quality measurement is described in terms of sinograms. However, most modern scanners store their data in listmode format and might implement sinogram support only to conduct the NEMA measurements. To our knowledge, all NEMA NU-4 measurements published in the last 5 years used listmode files for data acquisition and had to convert the listmode files to sinograms after the measurements [4-15, 43]. Spinks et al. [8] even mention that the calculation of scatter fractions was omitted due to missing sinogram support, so this performance evaluation apparently used only listmode data for the data analysis. The number of scintillator crystals is usually above 30 000 in modern small-animal PET systems; with roughly 4.5 × 10^8 crystal pairs, even a single 16-bit bin per pair already amounts to about 0.9 GB, so full 3D sinograms have file sizes of multiple gigabytes even for very short measurements. Listmode files, on the other hand, are usually much smaller, making sinograms much more unwieldy.
PET scanners with monolithic scintillator blocks [15,44] might not have discrete detector elements that map naturally to sinogram bins. For instance, such detectors might use continuous regression methods to determine the most likely position of a gamma interaction [45].
The data analyses in the NEMA standard could be specified without the use of sinograms, since most of the cuts defined in sinogram space can equivalently be expressed as cylindrical cuts in the scanner's field of view, as sketched below. The standard could still allow the use of sinograms as one possible implementation of the specified geometric cuts, for backwards compatibility.
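For illustration, the transverse cut could be expressed directly on listmode events. The representation of a line of response by its two interaction points is our assumption about the listmode format.

```python
import numpy as np

def lor_in_cylinder(p1, p2, radius_mm):
    """True if the line of response through the 3D interaction points p1, p2
    passes within radius_mm of the scanner axis (taken as the z-axis)."""
    a, b = np.asarray(p1[:2], float), np.asarray(p2[:2], float)
    d = b - a                                  # transverse direction of the LOR
    norm = np.linalg.norm(d)
    if norm == 0.0:                            # LOR parallel to the axis
        return np.linalg.norm(a) <= radius_mm
    # 2D point-line distance from the axis (origin) to the LOR.
    return abs(d[0] * a[1] - d[1] * a[0]) / norm <= radius_mm
```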
Conclusion
Eleven years after the publication of the NEMA NU-4 standard, we believe it is time for a revision of the standard. In this work, we have pointed out several flaws in the standard which should be addressed in the next version. Additionally, the technological developments of the last decade would by themselves warrant discussing an updated version. With this publication, we would like to open this discussion. | 10,197.6 | 2019-10-26T00:00:00.000 | [
"Medicine",
"Engineering",
"Physics"
] |
On various R-duals and the duality principle
The duality principle states that a Gabor system is a frame if and only if the corresponding adjoint Gabor system is a Riesz sequence. In general Hilbert spaces and without the assumption of any particular structure, Casazza, Kutyniok and Lammers have introduced the so-called R-duals that also lead to a characterization of frames in terms of associated Riesz sequences; however, it is still an open question whether this abstract theory is a generalization of the duality principle. In this paper we prove that a modified version of the R-duals leads to a generalization of the duality principle that keeps all the attractive properties of the R-duals. In order to provide extra insight into the relations between a given sequence and its R-duals, we characterize all the types of R-duals that are available in the literature for the special case where the underlying sequence is a Riesz basis.
Introduction
A countable collection of vectors {f_i}_{i∈I} in a separable Hilbert space H is a frame for H with (frame) bounds A, B if A and B are strictly positive constants and the inequalities

A ||f||^2 ≤ ∑_{i∈I} |⟨f, f_i⟩|^2 ≤ B ||f||^2    (1)

hold for all f ∈ H. Frames play an increasing role in analysis and applications, mainly due to the fact that frames yield expansions of the elements of the Hilbert space of a similar type as the one known for orthonormal bases. In fact, if {f_i}_{i∈I} is a frame for H, the frame operator S : H → H, Sf := ∑_{i∈I} ⟨f, f_i⟩ f_i, is known to be invertible, and

f = ∑_{i∈I} ⟨f, S^{-1}f_i⟩ f_i

for all f ∈ H. It is clear that it might be a nontrivial matter to verify the two inequalities in (1). For so-called Gabor systems in L^2(R) (see the description below), the duality principle [5,11,12] states that the frame condition is equivalent to a Riesz sequence condition on an associated sequence (the adjoint Gabor system), see Theorem 1.4; this leads to a method of checking the frame condition for Gabor systems in an (at least conceptually) easier way. In an attempt to extend this to general sequences in arbitrary Hilbert spaces, Casazza, Kutyniok and Lammers introduced the R-duals in the paper [1]. The R-duals also yield a method for checking the frame condition for a sequence of vectors by checking the Riesz basis condition for a related sequence. At present it is not known whether the theory of R-duals yields a generalization of the duality principle. In [13] the authors introduced certain variations of the R-duals (see Definition 1.1) and showed that R-duals of type II cover the duality principle for integer-oversampled Gabor systems, leaving open the general case, while R-duals of type III generalize the duality principle and keep some, but not all, of the attractive properties of the R-duals. In the current paper we show that R-duals of type II in fact do not generalize the duality principle for arbitrary Gabor frames. This brings the attention to the R-duals of type III, and we determine a subclass of the R-duals of type III which possesses the missing properties. We also provide further insight into the various R-duals by providing characterizations in the special case where the given frame is a basis.
In the rest of this introduction we state the key definitions and results from the literature concerning the R-duals. In Section 3 we introduce the modified R-duals of type III and prove that they generalize the duality principle and keep the main properties known from the Gabor case. The special case of Riesz bases is analysed in Section 4. We refer to the monographs [2,9,10] for detailed treatments of frames and further references.
For a sequence {f_i}_{i∈I} satisfying at least the upper frame condition, the analysis operator U : H → ℓ^2(I) is defined by Uf := {⟨f, f_i⟩}_{i∈I}, and the synthesis operator T : ℓ^2(I) → H by T{c_i}_{i∈I} := ∑_{i∈I} c_i f_i. In this case S is a bijection on span{f_i}_{i∈I}, and the sequence {S^{-1}f_i}_{i∈I} is a frame for span{f_i}_{i∈I} satisfying the representation formula f = ∑_{i∈I} ⟨f, S^{-1}f_i⟩ f_i for all f ∈ span{f_i}_{i∈I}. A sequence {f_i}_{i∈I} is a Riesz sequence with bounds A, B if A ∑ |c_i|^2 ≤ ||∑_{i∈I} c_i f_i||^2 ≤ B ∑ |c_i|^2 for all finite sequences {c_i}_{i∈I}. A Riesz sequence is called a Riesz basis for H if H = span{f_i}_{i∈I}. Recall that if {f_i}_{i∈I} is a frame for H, then ||S|| is the optimal upper frame bound and ||S^{-1}||^{-1} is the optimal lower frame bound.
Let us now collect the various definitions of R-duals that are available in the literature. The definition by Casazza et al. corresponds to what we call R-duals of type I.

Definition 1.1 Let {e_i}_{i∈I} and {h_i}_{i∈I} be sequences with elements in H, and let {f_i}_{i∈I} be a sequence in H for which ∑_{i∈I} |⟨f_i, e_j⟩|^2 < ∞ for all j ∈ I.
(i) [1] When {e_i}_{i∈I} and {h_i}_{i∈I} are orthonormal bases for H, the R-dual of type I of {f_i}_{i∈I} with respect to ({e_i}_{i∈I}, {h_i}_{i∈I}) is the sequence {ω_j}_{j∈I} given by ω_j := ∑_{i∈I} ⟨f_i, e_j⟩ h_i, j ∈ I.

(ii) [13] Let {e_i}_{i∈I} and {h_i}_{i∈I} be orthonormal bases for H. If {f_i}_{i∈I} is a frame for H with frame operator S, the R-dual of type II of {f_i}_{i∈I} with respect to ({e_i}_{i∈I}, {h_i}_{i∈I}) is the sequence {ω_j}_{j∈I} given by ω_j := ∑_{i∈I} ⟨S^{-1/2}f_i, e_j⟩ S^{1/2}h_i, j ∈ I.

(iii) [13] Let {e_i}_{i∈I} and {h_i}_{i∈I} be orthonormal bases for H. If {f_i}_{i∈I} is a frame sequence in H with frame operator S and Q : H → H is a bounded bijective operator with ||Q|| ≤ ||S||^{1/2} and ||Q^{-1}|| ≤ ||S^{-1}||^{1/2}, the R-dual of type III of {f_i}_{i∈I} with respect to the triplet ({e_i}_{i∈I}, {h_i}_{i∈I}, Q) is the sequence {ω_j}_{j∈I} given by ω_j := ∑_{i∈I} ⟨S^{-1/2}f_i, e_j⟩ Q h_i, j ∈ I.

(iv) [14] When {e_i}_{i∈I} and {h_i}_{i∈I} are Riesz bases for H, the R-dual of type IV of {f_i}_{i∈I} is defined in an analogous way; we refer to [14] for the details.

R-duals of type I are interesting because they form Riesz sequences if and only if the given sequence {f_i}_{i∈I} is a frame for H:

Theorem 1.2 Let {f_i}_{i∈I} be a sequence in H and let {ω_j}_{j∈I} be an R-dual of type I of {f_i}_{i∈I}. Then {f_i}_{i∈I} is a frame for H with bounds A, B if and only if {ω_j}_{j∈I} is a Riesz sequence with bounds A, B; in particular, the optimal bounds of {f_i}_{i∈I} and {ω_j}_{j∈I} coincide. (A finite-dimensional illustration is given after Lemma 1.3 below.)

In the literature, several characterizations of the various types of R-duals are formulated in terms of a condition (7) relating the synthesis operator T for the sequence {f_i}_{i∈I} to the sequence {ω_j}_{j∈I}. We collect these results here:

Lemma 1.3 Let {f_i}_{i∈I} be a frame for H and let {ω_j}_{j∈I} be a Riesz sequence in H. Denote the synthesis operator for {f_i}_{i∈I} by T, the frame operator of {f_i}_{i∈I} by S, and the frame operator of {ω_j}_{j∈I} by S_Ω. The following statements hold.
in the affirmative case (7) holds.
(ii) [13] If {f_i}_{i∈I} is tight and {ω_j}_{j∈I} is tight with the same bound, then {ω_j}_{j∈I} is an R-dual of type I of {f_i}_{i∈I} if and only if (7) holds.

(iii) [4] {ω_j}_{j∈I} is an R-dual of type I of {f_i}_{i∈I} if and only if (7) holds and there exists an antiunitary transformation Λ : H → span{ω_j}_{j∈I} so that S_Ω = ΛSΛ^{-1}.

(iv) [13] {ω_j}_{j∈I} is an R-dual of type II of {f_i}_{i∈I} if and only if (7) holds, {S^{-1/2}ω_j}_{j∈I} is orthonormal, and the frame bounds of {f_i}_{i∈I} are also bounds of {ω_j}_{j∈I}.

(v) [13] {ω_j}_{j∈I} is an R-dual of type III of {f_i}_{i∈I} if and only if (7) holds and the frame bounds of {f_i}_{i∈I} are also bounds of {ω_j}_{j∈I}.
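To make Theorem 1.2 concrete, the transfer of optimal bounds can be verified numerically in a finite-dimensional toy example; the matrices and dimensions below are our own choices, not taken from the paper. The key observation is that, with orthonormal bases as rows of E and H, the Gram matrix of the type I R-dual equals E S Eᵀ and is therefore similar to the frame operator S.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4                                    # H = R^4, index set I = {1, ..., 4}

F = rng.standard_normal((m, m))          # rows f_i: a frame for R^4 (a.s.)
E = np.linalg.qr(rng.standard_normal((m, m)))[0]  # rows e_j: orthonormal basis
H = np.linalg.qr(rng.standard_normal((m, m)))[0]  # rows h_i: orthonormal basis

C = F @ E.T                              # C[i, j] = <f_i, e_j>
Omega = C.T @ H                          # row j: omega_j = sum_i <f_i, e_j> h_i

S = F.T @ F                              # frame operator of {f_i}
G = Omega @ Omega.T                      # Gram matrix of {omega_j} = E S E^T

# The optimal Riesz bounds of the R-dual (extreme eigenvalues of G) coincide
# with the optimal frame bounds of {f_i} (extreme eigenvalues of S).
print(np.allclose(np.linalg.eigvalsh(G), np.linalg.eigvalsh(S)))  # True
```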
In the discussion of the duality principle we need the definition of a Gabor system. Consider the Hilbert space L^2(R). For p, q ∈ R, let T_p denote the translation operator, (T_p f)(x) := f(x - p), and E_q the modulation operator, (E_q f)(x) := e^{2πiqx} f(x). Given parameters a > 0, b > 0 and g ∈ L^2(R), the associated Gabor system is the sequence {E_{mb}T_{na} g}_{m,n∈Z}; the adjoint Gabor system is the sequence {E_{m/a}T_{n/b} g}_{m,n∈Z}.
The duality principle, due to Janssen [11], Daubechies, Landau, and Landau [5], and Ron and Shen [12], states the following:

Theorem 1.4 [5,11,12] Let g ∈ L^2(R) and a, b > 0 be given. Then the Gabor system {E_{mb}T_{na} g}_{m,n∈Z} is a frame for L^2(R) with bounds A, B if and only if {(1/√(ab)) E_{m/a}T_{n/b} g}_{m,n∈Z} is a Riesz sequence with bounds A, B.

It is well known that if a Gabor system {E_{mb}T_{na} g}_{m,n∈Z} is a Riesz basis, then ab = 1; via the duality principle this implies that such a Gabor frame is a Riesz basis for L^2(R). Thus, the relation between the Gabor system {E_{mb}T_{na} g}_{m,n∈Z} and the system {(1/√(ab)) E_{m/a}T_{n/b} g}_{m,n∈Z} corresponds exactly to the relation between the sequence {f_i}_{i∈I} and its R-duals, see Theorem 1.2. However, it is still not known whether this is a coincidence, or whether the R-duals of type I actually generalize the duality principle. That is, given a Gabor frame {E_{mb}T_{na} g}_{m,n∈Z} for L^2(R), we do not know whether the Gabor system {(1/√(ab)) E_{m/a}T_{n/b} g}_{m,n∈Z} can always be realized as an R-dual of type I of {E_{mb}T_{na} g}_{m,n∈Z} (by [1], the answer is affirmative for tight Gabor frames and Gabor Riesz bases). This is the motivation for the introduction of the other types of R-duals in [13]. In fact, in [13] it was shown that the R-duals of type III generalize the duality principle for all Gabor systems. However, R-duals of type III do not enjoy all of the attractive properties of the duality principle, so it is natural and necessary to search for subclasses which are in closer correspondence with the duality principle. In the present paper we determine a relevant subclass of the R-duals of type III which both extends the duality principle and has the desired properties as in Theorem 1.2.
Note that the duality principle and the R-duals by Casazza et al. have triggered a lot of research activity; we refer to the papers [3,4,6,7,8,14].
R-duals of type II
In this section we solve one of the remaining problems in [13] by showing that the R-duals of type II do not generalize the duality principle. This will motivate the analysis in the rest of the paper, where we focus on a subclass of the R-duals of type III having particular properties.
Denote ω_{m,n} := (1/√(ab)) E_{m/a}T_{n/b} B_2, m, n ∈ Z. We will show that {S^{-1/2}ω_{m,n}}_{m,n∈Z} is not an orthonormal sequence, which by Lemma 1.3(iv) implies that {ω_{m,n}}_{m,n∈Z} is not an R-dual of type II of {E_{mb}T_{na} g}_{m,n∈Z}. For m = 0 and n = 1, a direct computation on the interval [3/2, 7/2] shows that {S^{-1/2}ω_{m,n}}_{m,n∈Z} fails to be orthonormal. Therefore, {ω_{m,n}}_{m,n∈Z} is not an R-dual of type II of {E_{mb}T_{na} g}_{m,n∈Z}.
R-duals of type III
The key motivation behind the definition of the R-duals of type III is that they generalize the duality principle [13], in the sense that whenever {E_{mb}T_{na} g}_{m,n∈Z} is a frame for L^2(R), the system {(1/√(ab)) E_{m/a}T_{n/b} g}_{m,n∈Z} can be realized as an R-dual of type III of {E_{mb}T_{na} g}_{m,n∈Z}. However, not all R-duals of type III have exactly the same properties as encountered in the duality principle. For example, for a frame {f_i}_{i∈I} with frame operator S, the optimal frame bounds are 1/||S^{-1}|| and ||S||; these numbers are also bounds for the R-duals of type III, but not necessarily the optimal bounds. This calls for the identification of a subclass of the R-duals of type III with properties that better match what we know from the duality principle.
As a starting point we will now determine conditions on the operator Q in (5) which are necessary and sufficient for an R-dual of type III of {f_i}_{i∈I} to keep the optimal bounds of {f_i}_{i∈I}.

Proposition 3.1 Let {f_i}_{i∈I} be a frame for H (resp. a Riesz sequence in H) with frame operator S and analysis operator U, both considered as operators on span{f_i}_{i∈I}, and let {ω_j}_{j∈I} be an R-dual of type III of {f_i}_{i∈I} with respect to the triplet ({e_i}_{i∈I}, {h_i}_{i∈I}, Q). Then the following are equivalent:

(i) The Riesz sequence (resp. the frame) {ω_j}_{j∈I} has the same optimal bounds as {f_i}_{i∈I}.

(ii) The operator Q has the property

||S^{-1}||^{-1/2} ||{d_i}|| ≤ ||Q{d_i}|| ≤ ||S||^{1/2} ||{d_i}|| for all {d_i}_{i∈I} ∈ R(U),    (8)

with ||S||^{1/2} and ||S^{-1}||^{-1/2} being the optimal constants.

Proof. Notice that when {f_i}_{i∈I} is a frame for H (resp. a Riesz sequence), [13, Prop. 4.3] shows that {ω_j}_{j∈I} is a Riesz sequence (resp. a frame for H) with bounds 1/||S^{-1}||, ||S||. We first consider the case where {f_i}_{i∈I} is assumed to be a frame for H. (i) ⇒ (ii) Assume that 1/||S^{-1}|| and ||S|| are the optimal bounds of {ω_j}_{j∈I}. We will prove that (8) holds. Since ||Q|| ≤ ||S||^{1/2} and ||Q^{-1}|| ≤ ||S^{-1}||^{1/2}, it follows that the inequalities of (8) hold for all {d_i}_{i∈I} ∈ ℓ^2 and in particular whenever {d_i}_{i∈I} ∈ R(U); it remains to prove the optimality of the constants in (8). Assume that there exists B_1 < ||S||^{1/2} so that ||Q{d_i}|| ≤ B_1 ||{d_i}|| for all {d_i}_{i∈I} ∈ R(U). Then an estimate for every finite scalar sequence {c_j} implies that B_1^2 is an upper bound of the Riesz sequence {ω_j}_{j∈I}, which contradicts the assumptions. Therefore, ||S||^{1/2} is the optimal upper constant in (8). In a similar way, it follows that ||S^{-1}||^{-1/2} is the optimal lower constant in (8).
(ii) ⇒ (i) Now assume that (8) holds; we will prove that 1/||S^{-1}|| and ||S|| are the optimal bounds of the Riesz sequence {ω_j}_{j∈I}. As already mentioned, 1/||S^{-1}|| and ||S|| are bounds of the Riesz sequence {ω_j}_{j∈I}, so it remains to prove their optimality. Assume that the optimal upper bound of {ω_j}_{j∈I} is B_2 with B_2 < ||S||. Then an estimate for every finite scalar sequence {c_j}, together with the density of the finite linear combinations ∑_j c_j e_j in H, yields a corresponding bound for every y ∈ H. Since S^{-1/2} is bijective and self-adjoint, this bound contradicts (8). Now let {f_i}_{i∈I} be a Riesz sequence in H. In this case R(U) = ℓ^2; since the optimal constants in the inequalities C||x|| ≤ ||Qx|| ≤ D||x||, x ∈ H, are C = ||Q^{-1}||^{-1} and D = ||Q||, the condition (8) means precisely that ||Q|| = ||S||^{1/2} and ||Q^{-1}|| = ||S^{-1}||^{1/2}. An argument as in the proof of [13, Prop. 4.3(ii)] shows that {ω_j}_{j∈I} is a frame with optimal bounds 1/||Q^{-1}||^2, ||Q||^2. Therefore, {ω_j}_{j∈I} has optimal bounds 1/||S^{-1}||, ||S||.

Corollary 3.2 Let {E_{mb}T_{na} g}_{m,n∈Z} be a Gabor frame for L^2(R). Then {(1/√(ab)) E_{m/a}T_{n/b} g}_{m,n∈Z} can be realized as an R-dual of type III of {E_{mb}T_{na} g}_{m,n∈Z} with respect to a triplet ({e_i}_{i∈I}, {h_i}_{i∈I}, Q) in which Q satisfies the property (8).

Proof. By the duality principle (Theorem 1.4), {(1/√(ab)) E_{m/a}T_{n/b} g}_{m,n∈Z} is a Riesz sequence which has the same optimal bounds as {E_{mb}T_{na} g}_{m,n∈Z}. By [13, Corollary 4.5], {(1/√(ab)) E_{m/a}T_{n/b} g}_{m,n∈Z} can be realized as an R-dual of type III of {E_{mb}T_{na} g}_{m,n∈Z} with respect to some orthonormal bases {e_i}_{i∈I}, {h_i}_{i∈I}, and an appropriate operator Q. Now by Proposition 3.1, this operator Q must satisfy the property (8).
Furthermore, the following result (which is an immediate consequence of Proposition 3.1 and [13, Prop. 4.3]) shows that the class of R-duals of type III having the property (8) provides us with exactly the same frame bounds as the given frame:

Theorem 3.3 Let {f_i}_{i∈I} be a frame sequence in H and let {ω_i}_{i∈I} be an R-dual of {f_i}_{i∈I} of type III with the property (8). Then the following holds: (i) {ω_i}_{i∈I} has the same optimal bounds as {f_i}_{i∈I}.

We have now identified the correct subclass of the R-duals of type III. It has a compact characterization in terms of the condition (7):

Theorem 3.4 Let {f_i}_{i∈I} be a frame for H and let {ω_j}_{j∈I} be a Riesz sequence in H with the same optimal bounds. Then (7) holds if and only if {ω_j}_{j∈I} is an R-dual of type III of {f_i}_{i∈I} having the property (8).
Proof. First assume that {ω_j}_{j∈I} is an R-dual of type III of {f_i}_{i∈I} having the property (8). Then [13, Theorem 4.4] implies that (7) holds.
Conversely, assume that (7) holds. By [13, Theorem 4.4(ii)], {ω_j}_{j∈I} is an R-dual of type III of {f_i}_{i∈I} with respect to an appropriate triplet ({e_i}_{i∈I}, {h_i}_{i∈I}, Q). By Proposition 3.1, the operator Q must satisfy (8).
The next result relates the class of R-duals of type III having the property (8) with the R-duals of type I and III, respectively.

Proposition 3.5 Let {f_i}_{i∈I} be a frame for H. Then the following holds.
(i) The class of type I duals of {f_i}_{i∈I} is contained in the class of type III duals having the property (8).

(ii) When {f_i}_{i∈I} is tight, the classes mentioned in (i) coincide.

(iii) When {f_i}_{i∈I} is not tight, the class of R-duals of type III having the property (8) is a strict subset of the class of R-duals of type III.
Proof. (i) Let {ω_j}_{j∈I} be an R-dual of type I of {f_i}_{i∈I}. By Theorem 1.2, {ω_j}_{j∈I} is a Riesz sequence in H and the optimal bounds of {ω_j}_{j∈I} are the same as the optimal ones of {f_i}_{i∈I}. By [13, Theorem 4.4(iii)], {ω_j}_{j∈I} can be written as an R-dual of type III of {f_i}_{i∈I} with respect to some triplet ({e_i}_{i∈I}, {h_i}_{i∈I}, Q), and by Proposition 3.1, the property (8) must hold.
(ii) Assume that {f_i}_{i∈I} is tight. Then the classes of R-duals of type I and type III of {f_i}_{i∈I} coincide [13]. Now the statement follows from (i).
(iii) Assume that {f_i}_{i∈I} is not tight and let A and B denote the optimal bounds of {f_i}_{i∈I}, A < B. Take any constant C ∈ (√A, √B) and let Q := C·Id_H. Let {ω_j}_{j∈I} be an R-dual of type III with respect to some orthonormal bases {e_i}_{i∈I}, {h_i}_{i∈I} and the operator Q. Then {C^{-1}ω_j}_{j∈I} is an R-dual of type I of {S^{-1/2}f_i}_{i∈I}, which by Theorem 1.2 implies that {C^{-1}ω_j}_{j∈I} is an orthonormal sequence. Therefore, {ω_j}_{j∈I} is a tight Riesz sequence with bound C^2 ∈ (A, B), which by Theorem 3.3(i) implies that {ω_j}_{j∈I} cannot be written as an R-dual of type III with the property (8).
In [13] we proved that canonical dual frames lead to biorthogonality of appropriately determined R-duals of type III. Here we provide further insight into this relationship by considering the converse situation, namely, biorthogonality of appropriate R-duals of type III leading to canonical dual frames.

Proposition 3.6 Let {f_i}_{i∈I} be a frame for H with frame operator S and analysis operator U. The following holds.
(i) If {ω_j}_{j∈I} is an R-dual of type III of {f_i}_{i∈I}, then the biorthogonal sequence of {ω_j}_{j∈I} in span{ω_j}_{j∈I} is an R-dual of type III of the canonical dual frame {f̃_i}_{i∈I}.
(ii) If {ω_j}_{j∈I} is an R-dual of type III of {f_i}_{i∈I} with respect to ({e_i}_{i∈I}, {h_i}_{i∈I}, Q) having the property (8), then the biorthogonal sequence of {ω_j}_{j∈I} is an R-dual of type III of {f̃_i}_{i∈I} whose associated operator V satisfies the analogue of the property (8). Now assume in addition that Q satisfies the property (8). By Theorem 3.3, {ω_j}_{j∈I} has the same optimal bounds as {f_i}_{i∈I}. Then the optimal bounds of the biorthogonal sequence {ω̃_j}_{j∈I} are the same as the optimal bounds of {f̃_i}_{i∈I}, which by Proposition 3.1 implies that the operator V must satisfy a property analogous to (8), precisely as stated in the proposition (note that the ranges of the analysis operators of {f̃_i}_{i∈I} and {f_i}_{i∈I} are the same, and S_{f̃} = S^{-1}).
(iv) The R-duals of type III of {f_i}_{i∈I} are precisely the Riesz bases which have A, B as bounds.

(v) The R-duals of type IV of {f_i}_{i∈I} are precisely the Riesz bases for H.
Proof. (i) Let {z_i}_{i∈I} be the orthonormal basis {S^{-1/2}f_i}_{i∈I}.
First assume that {ω_j}_{j∈I} is an R-dual of type I of {f_i}_{i∈I} with respect to some orthonormal bases {e_i}_{i∈I}, {h_i}_{i∈I}. By Theorem 1.2, {ω_j}_{j∈I} is a Riesz sequence with optimal bounds A, B. Consider the mapping G determined by Gh := ∑_{i∈I} ⟨h_i, h⟩ z_i, h ∈ H. Then G is an antiunitary transformation of H, and for every j ∈ I, S^{-1/2}Gω_j = S^{-1/2} ∑_{i∈I} ⟨h_i, ω_j⟩ z_i = S^{-1/2} ∑_{i∈I} ⟨e_j, f_i⟩ z_i = S^{-1/2} ∑_{i∈I} ⟨S^{1/2}e_j, z_i⟩ z_i = e_j, which leads to the desired conclusion.
Conversely, assume that G : H → H is an antiunitary transformation on H and that {S^{-1/2}Gω_j}_{j∈I} is an orthonormal basis of H; denote this orthonormal basis by {e_j}_{j∈I}. The sequence {G^{-1}z_i}_{i∈I} is an orthonormal basis of H, and the mapping E given by Eh := ∑_{i∈I} ⟨h, z_i⟩ G^{-1}z_i is well defined from H into H. Furthermore, observe that E is a unitary operator and Ez_i = G^{-1}z_i, i ∈ I. Define h_i := Ez_i, i ∈ I. Then {h_i}_{i∈I} is an orthonormal basis of H, and for every j ∈ I, ∑_{i∈I} ⟨f_i, e_j⟩ h_i = ∑_{i∈I} ⟨f_i, S^{-1/2}Gω_j⟩ Ez_i = ∑_{i∈I} ⟨S^{-1/2}f_i, Gω_j⟩ Ez_i = ∑_{i∈I} ⟨z_i, Gω_j⟩ G^{-1}z_i = G^{-1}(∑_{i∈I} ⟨Gω_j, z_i⟩ z_i) = ω_j, which implies that {ω_j}_{j∈I} is an R-dual of type I of {f_i}_{i∈I}.
(ii) First assume that {ω_j}_{j∈I} is an R-dual of type II of {f_i}_{i∈I}. Then {ω_j}_{j∈I} is a Riesz basis for H and, by Lemma 1.3(iv), {S^{-1/2}ω_j}_{j∈I} is an orthonormal basis for H. Now [13, Lemma 1.2] implies that {ω_j}_{j∈I} has optimal bounds A, B.
For the converse, assume that {ω_i}_{i∈I} is a Riesz basis for H such that {S^{-1/2}ω_j}_{j∈I} is an orthonormal basis for H. By Lemma 1.3(ii), {S^{-1/2}ω_j}_{j∈I} | 5,765.2 | 2015-09-21T00:00:00.000 | [
"Mathematics"
] |
Transactional Distance Theory: A Critical View of the Theoretical and Pedagogical Underpinnings of E-Learning
This chapter provides a critical look at the literature surrounding distance education and targets Transactional Distance Theory. It will examine in detail the three components: structure, interaction (or dialogue) and autonomy. The chapter begins with the structure necessary for successful distance learning. Next, interaction (or dialogue) is introduced and the complexity of this in relation to the student experience is discussed. Finally, autonomy is explored in detail. This overview will relate specifically to the student perspective. Alternative approaches, links to seminal authors and a critical viewpoint are offered throughout.
Introduction
Within this chapter, the objective is: To review literature on the theoretical and pedagogical underpinnings of distance education, specifically transactional distance theory and the concepts of structure, interaction and autonomy.
Search strategies
Databases were searched, including Scopus, PsycINFO, Web of Knowledge, Medline, ERIC and CINAHL, to identify potentially relevant material using the following terms: (Effective or successful or valuable or useful) and (DL or distance learning or computer assisted learning or e-learning or elearning or online learning or online education or distance education or technology enhanced learning or computer mediated learning or computer based learning or ICT).
In Scopus alone, this yielded over 9000 results consisting of: • work on effective DL investigating specific media or resources; • undergraduate education; • editorial and opinion papers; • comparative studies (i.e., to traditional face-to-face teaching); • systematic reviews (few); • K-12 education; • an abundance of 'how-to' books; • reams of advocacy papers and success stories; and • anecdotal and promotional articles.
The choice of databases reflected the heterogeneous nature of the research in the areas of technology, education and the social sciences. Unless reviewing theoretical literature (learning or organisational theories), only technological literature published in the last 10 years was reviewed. Striving to strike a balance between comprehensiveness (or sensitivity) and precision, this date restriction was chosen, which is common practice in literature reviews. This time frame appears to be congruent with other literature reviews in this area, including 9 years [2] and 8 years [3]. The focus was specifically on higher education and, where possible, online courses (for example, blended learning was excluded). Both synchronous and asynchronous delivery were included. Abstracts of all identified papers were read, and full copies of articles that appeared relevant were saved as electronic files in EndNote. Duplicates were deleted. E-books, books and photocopied chapters of traditional books were used and organised manually by topic. Citation searches were done on all articles that related directly to transactional distance theory or reviews of DL. Searches were limited to English-language books and journals.
Overview
Distance education was first introduced into mainstream lexicon in the 1970s [4]. There were early attempts to define it, and controversies around what it actually was. One of the barriers (and 40 years on, the most revolutionary argument for me) was basically this: Is distance education a geographic separation of learners and teachers, or a pedagogical concept? Moore suggested the latter. He developed Transactional Distance Theory (TDT) in an attempt to demonstrate and explain that distance education was more concerned with pedagogy than geography [4,5].
Results
In 1973, Moore initially defined TDT as a psychological and communications gap that was a function of the interplay of structure and dialogue. It was the cognitive space between teachers and students that must be crossed, yet was a place of potential misunderstanding between the teacher and the learner. This space was continuous, relative and never exactly the same. Ideally, this distance or space needed to be minimised or shortened. Even in traditional education there was transactional distance, and therefore the actual theory was a subset, albeit a specialised one, of conventional teaching and learning [6]. However, in DL, due to the unique environment, teachers and learners experienced more of a gap because of the physical distance (and, if asynchronous, time) that separated these two groups. Therefore, the transactional distance between teacher and learner was potentially more problematic at a distance and may have contributed to students' feelings of isolation, reduced motivation and engagement, and eventually attrition in early DL [5]. Moore originally suggested that developers of DL must consider two variables that affect transactional distance: structure and dialogue [4]. Structure was the rigidity or flexibility of the instructional methods and strategies, whilst dialogue referred to the interaction between the instructor and learner during a DL experience. Transactional distance was a function of dialogue and structure: with less dialogue and more structure, the transactional distance was higher (Figure 1).
In a course with little transactional distance, learners have guidance through ongoing dialogue [7]. This would be more appropriate, or attractive, to learners who were less secure in managing their own learning. Moore later recognised that with minimal dialogue, students were forced to make decisions for themselves and generally exercise autonomy [5]. Working with Kearsley, he later identified three interactive components or constructs [8] that needed to be considered to shorten the transactional distance and provide a meaningful learning experience for students. These included the original two: • structure of the instructional programs; and • dialogue or interaction between learners and teachers; and the new addition: • autonomy, or the nature and degree of self-directedness of the learner.
This third hypothesised factor, autonomy, interacted with both structure and dialogue and the three together formed a model or theory [9] for understanding online learning [8] (Figure 2).
Structure was determined by the actual design of the activity, how the instruction was organised and the use of different media communications [8]. Dialogue could be synchronous, asynchronous or internalised within the student. Learner autonomy related to the individual learner's self-directedness or sense of personal responsibility. There appeared to be a relationship between structure, dialogue and autonomy: the greater the autonomy, the less teacher control there needed to be to decrease the transactional distance and have a successful distance module. Conversely, with less dialogue and more structure, the likelihood of an increased transactional distance, which in turn led to less successful online programmes, was greater [10]. Successful distance environments depended on the teacher providing opportunities for dialogue and 'appropriately' [10] structured learning materials. This became extremely complex. Identifying the level of structure required, facilitating dialogue and encouraging individual learner autonomy was demanding and multifaceted, as the greater the structure and the lower the dialogue, the more autonomy the student must demonstrate.

Figure 1: Relationship of structure and dialogue to transactional distance [4].
Deweyian link
These three complex factors relate to Dewey's seminal work. He suggested that the educational process is a collaborative reconstruction of experience and has two sides: one psychological (cognitive) and one sociological. He warned that neither could be subordinated to the other nor neglected without consequence.
Dialogue or interaction between learners and teachers: Dialogue, and engaging in interaction, forces individuals to construct ideas in a deep-learning sense [7]. Dewey [11] supported this constructivist approach to learning. He discussed the need to support learners in their construction of meaning and argued that only through social interaction and interaction with the environment could the learner construct conceptualisations and find solutions. He reasoned that through interpersonal, instructional dialogue the learner gains advantages in the pursuit of knowledge and understanding.
Structure of the instructional programs: Dewey described the function of education as improving the reasoning process [12]. Based on active experience, the role of the educator was to shape experience and structure the environment to promote experiences leading to growth. This role was one of a guide, or facilitator, encouraging creative interaction and emphasising the development of problem solving and the discovery of knowledge. These higher-order activities are encompassed in Dewey's practical inquiry model, which includes four phases: triggering event, exploration, integration and resolution.
Autonomy or the nature and degree of self-directedness of the learner: Autonomy, the third factor in TDT, is reflected in constructivist views encouraging active, collaborative and responsible learners [13]. The genesis of self-directed learning can be attributed to Dewey [7], who suggested that autonomy helped create the conditions that encourage individuals to exercise initiative, reflection and choice [11].
A critical view of transactional distance theory
Many researchers [1,14-16] identified transactional distance as important and viewed TDT as a basic analytical framework for understanding distance education systems.
'Transactional distance theory provides a useful conceptual framework for defining and understanding distance education in general and as a source of research hypotheses more specifically' ( [14], p. 527).
Despite the considerable time span over which this theory has evolved, there are critics, and little empirical research has been carried out to test the validity and relationships of the constructs [16,17].
TDT has been investigated from different perspectives. Two studies were found using questionnaires as data collection tools [18,19]. Bischoff et al. were interested in student perceptions of transactional distance, structure and dialogue [18]. Transactional distance, dialogue and structure were all related to certain 'items' (in reality, questions). Each variable was then measured using data generated from a fixed questionnaire. Transactional distance was measured by two items, dialogue by one item and structure by three. The results supported Moore's theory, showing dialogue and transactional distance were inversely proportional. However, dialogue (a complex variable) was measured by only one item, there was no discussion of the quality of dialogue (only quantity), and the actual items being measured were not clearly defined.
In an attempt to investigate TDT further and create a clear connection between dialogue, structure and autonomy as they related to learning outcomes, 121 learners were part of a study in a DL environment [19]. Operational definitions were given: dialogue was examined in terms of frequency and occurrence, structure in terms of delivery and implementation, and autonomy in terms of personal ratings of independence. These variables were compared to students' self-assessments. The results found only two variables had significant effects on perceived learning outcomes: the greater the perceived transactional distance, the lower the perceived outcomes; and the greater the frequency of discussion, the higher the perceived achievement of learning outcomes. The results support Moore's theory, although, as in [18], a simple questionnaire was used, data was collected only once and dialogue was measured only by frequency.
Two articles were found addressing TDT that measured observable behaviour as opposed to student perceptions [20,21]. Data was collected on 30 interactions between instructors and learners, and behaviours were measured using the 'systems dynamic model' [21]. Verbal behaviour was measured using discourse analysis and combined with a measure of the 'structure' of the programme to identify the variance. By measuring the rate of instructor and learner control, this variance (the ratio between the amount of dialogue and the extent of structure) was the transactional distance. The results demonstrated that transactional distance varied with dialogue and structure: as dialogue increased, distance decreased; as structure increased, transactional distance increased. This model produced values for transactional distance consistent with Moore's theory and suggested that transactional distance was inversely proportional to dialogue and directly proportional to structure. Although this supported Moore, the quantification of dialogue and the structure of a programme was problematic to me. They looked only at one-to-one synchronous communications between learner and teacher. Therefore, the generality of the study is limited and it is hardly representative of the majority of DL trends. The effects of a change in structure on dialogue were investigated during an audio-conferenced course [20]. Only structure and dialogue were compared. Over 100 students participated; dialogue was measured in frequency and duration, whilst structure was defined by one aspect of instructional design (the question-asking behaviour of the instructor). In support of TDT, different types of interactions and questions appeared to determine learner participation. According to the authors, of the four experimental procedures one was cancelled and one was biased. The instrument for measuring interaction was not shown to be reliable, the samples were not clearly described and the grouping was unclear. Again, dialogue was measured in terms of frequency and duration. However, the results suggested that certain types of question-asking behaviour by the instructor could predict dialogue in the student [20]. The authors claimed that both structure and dialogue were important to success and that, by increasing dialogue and structure, one could increase student participation and decrease transactional distance.
Two articles were found [22,23], from very different perspectives, using questionnaires to explore the influence of variables in DL and presenting conflicting results. The effects of course format, satisfaction and perceived knowledge gained were examined during an online programme. Satisfaction was broken down into different aspects to relate to the constructs set out by Moore in TDT. A questionnaire was used and the instrument was described. A very low response rate (17%) was not explained; however, there did appear to be a relationship between course design and satisfaction. The more satisfied the learners were with the structure and with interaction, the more satisfied they were with their perceived knowledge gained. This supported Moore's assertion that structure needed to be appropriate for the learner and that low structure and high dialogue could lessen transactional distance. An interesting article publishing negative findings investigated the impact of individual and instructional variables on 71 learners' (87% return rate) perceived transactional distance [22]. Once again, questionnaires were used to measure student perceptions (on a 23-item sliding scale) and the results were analysed against four variables. The results did show a high ratio of certain variables to perceived transactional distance. Although peripheral, their findings also included that neither face-to-face interaction during an online course nor previous experience changed transactional distance. Interestingly, some of the results suggested a negative effect between transactional distance and 'online tutoring' or interaction, although 'online tutoring' was not clearly described. Content validity of the survey was addressed in that 'experts' and 'educationalists' reviewed the tool, and there was a high response rate. The conclusions were that alternative measures of transactional distance (qualitative, observation, interviews) would help understand these phenomena. Published literature was predominantly biased towards positive results [24], so this article was a valuable alternative perspective.
In 2009, a review classifying 695 articles on DL was carried out. The focus was to identify gaps and priority areas in DL research. A panel of 25 experts reviewed research published between 2000 and 2008 [3]. The method and results were clearly described, and this was one of the only DL reviews found that included non-English journals. (One of the criticisms of distance education reviews is the focus on 'peer reviewed' English-language journals [2].) Fifteen main research areas and strong imbalances were described. They found research 'dreadfully neglected' on organisational change and development, costs and faculty support. These are all addressed in this submission and in my own review. However, closely related to TDT, they identified an imbalance, with over 50% of all articles focusing on: • instructional design; • interaction and communication in learner communities; and • learner characteristics (including motivation and autonomy). Although not highlighted by the authors of this review, these corresponded directly with Moore's three components of TDT. Admittedly, TDT appears to be a descriptive, rather than predictive, theory, but there is a clear correlation with outcome variables [9]. Furthermore, Moore's concept of transactional distance was a significant paradigm shift for educationalists, as it grounded the concept of distance in distance education in a social science framework and not in its usual physical science interpretations [7]. Whether there are strong empirical studies supporting Moore's theory or not, it is evident his three components continue to be a priority in research [2,3,16].
Summary of research on TDT
• TDT had roots in humanistic and behavioural ideologies.
• Structure and dialogue were the initial factors in Moore's TDT [4], and a third factor, autonomy, was later added [8].
• Structure, dialogue and autonomy were related, dynamic and necessary, in successful distance education [8].
• Moore did not define any of the constructs operationally [17], which has led to a lack of clarity in follow-up research.
• Studies investigating the complex constructs of autonomy and self-directedness using closed questionnaires and scales were common.
• The majority of published work investigating TDT has been approached from a positivist paradigm looking for correlation and statistically significant relationships between complex concepts (for example, autonomy and perceived learning outcomes).
• None of the studies found conclusively supported or totally negated the proposition of transactional distance.
Student experience: structure or design
'Educators must recognise that poorly designed educational programs…are not improved by being presented on a Web page' ( [26], p. s87).
Introduction
This section of the literature review addresses the three component parts of TDT separately.
Results
Formal 'instructional design' (ID) models, systematic approaches for developing educational products used liberally when designing web-based courses at the university level [16,27], all contained a number of key elements or components and have been widely adapted in e-learning [28]. The four core components of ID as they related to educational programmes are found in Table 1 [29]. Various models have adapted ID, but they are all based on the desire to provide guidance to designers as they aim to develop effective and consistent educational solutions on a reliable basis [27,28]. One of the most popular [30] and best documented models [31] was ADDIE, comprising five stages of instructional design: analysis, design, development, implementation and evaluation. The ADDIE model specifically [31-33] and ID in general [27,29,30,34,35] have been researched intensely in relating education to technology. This systematic approach to ID provides an empirical and replicable process when developing learning materials [31,33].
A critical view of instructional design
Although there was a plethora of research suggesting these models were the clear way to structure DL, there were critics as well. Much of what is termed 'e-learning' was still based on the recursive decomposition of knowledge and the skill principles of ID [28]. The supporters of rigid ID tended to be training organisations with a training philosophy whose intellectual base consisted of principles derived from behaviourism and associationism [28]. A well-known and published author in the field of ID in America looked critically at four different 'tools' based on ID, including the ADDIE model. He critiqued all four for the expertise they required, their lack of collaborative learning, their lack of authenticity and their linear nature [32].
Structure or instructional design and transactional distance theory
Instructional design seemed uniquely poised to bridge the knowledge gap in the provision of DL by identifying what historically had been done in education and describing new directions in course design and structure [7]. This gap in knowledge relative to course design was especially applicable in the area of medical and allied health education [27]. Forty years ago, Moore prophetically discussed design or structure as being imperative in successful DL environments [4]. In 2010, design was addressed again, and it was suggested that it was an ideal term to use as it bridged both theory and practice [36]. Using surveys only, the structural factors affecting DL were investigated, focusing on satisfaction, assessment of learning outcomes and perceived achievement of learning outcomes [37,38]. 38,000 students taking 264 online courses in New York were studied, analysing course documents and student questionnaires (38% return rate) [37]. In another study, 21 online courses were investigated using expert reviews of learning designs and student perception surveys [38]. Both studies demonstrated a correlation between greater structural consistency within the course, student satisfaction and perceived learning, and both used at least two methods of data collection and multiple raters for analysis of the data. However, the persistent attempt to quantify and measure people's perceptions of satisfaction and perceived learning is questionable given the complex nature of these constructs. Regardless, students were more satisfied with courses that had a defined structure, and they felt they had learned more than in totally open and flexible courses.
Components of instructional design
In a study using closed-question surveys followed by interviews, data were collected from 76 students who were asked to identify either challenges or useful components of their online experience [39]. The students were all undertaking a full degree using different technologies and structures, yet all from a distance. The closed-response questions were followed by nine semi-structured interviews. Two researchers conducted the interviews, and the data were thematically analysed and used to substantiate and extend the earlier results from the questionnaire. The results suggested (89% of respondents) that the design of the course was the most important component of a successful e-learning experience [39], which supported the necessity and importance of instructional design, regardless of the mode of delivery. The sample size was small; the response rate of the survey was not given, nor was the relationship of the interviewees to the students. However, this is one of the few studies using mixed methods that have approached instructional design and student learning or satisfaction from a less positivist approach. Multiple sources of data collection were used, which may have allowed the researchers to validate and crosscheck findings [40].
Two studies investigated structure in relation to student satisfaction and perceived learning [23,41]. One surveyed 6088 DL students in New York (31% return rate) and compared levels of structure and instructional design with student satisfaction [41]. The other surveyed 201 learners (17% response rate) at a Midwestern American university, comparing levels of structure and design with satisfaction and perceived knowledge gained [23]. Both of these studies used closed questions and rating scales, the questions were not made clear to the reader, and the response rates were low. However, in both studies the central role of structure in student satisfaction and perceived knowledge gained was supported.
In one of the few studies specifically addressing context, Benson and Samarawickrema [42] compared six case studies of 'successful' DL initiatives in Australia. Definitions and programmes were clarified and their focus was to illustrate how e-learning designs (specifically those using Web 2.0 technologies) were instrumental in increasing success and decreasing transactional distance. With a practical focus and rich contextual description, these cases suggested that by carefully structuring and designing a course, transactional distance can be decreased. They also highlighted that design must be variable and provide a clear strategy for an analytic approach that is responsive to both the learners and the context of their learning.
Summary of research on instructional design or structure
Formal instructional design, in its prescriptive and inflexible sense was the basis for most early DL initiatives. Although when subscribing to a learner centred perspective this seems problematic, more progressive models have been developed incorporating constructivist and interactive approaches to planning DL. The amount and type of structure necessary appears to be inconsistent. However, there does appear to be a relationship between the level of structure and student satisfaction and an increase in perceived learning.
Originally, ID was developed to emphasise 'learning by doing' with immediate feedback on success, careful analysis and atomisation of learning outcomes and, above all, the alignment of these learning outcomes with instructional strategies and with methods of assessing them.
The ID approach to e-learning has become widely, yet perhaps unfairly, discredited [28]. This may be because a number of terms and expressions are used synonymously with ID and, although its basis is behaviourism, or a teacher-centred model, this is often an unfair association [43].
Many models that are labelled as 'constructivist' are indistinguishable from those derived from the associationist perspective [28].
Recently, ID and general DL structure have moved towards creativity and interaction and away from low-level immediate responses [34].
Empirical and case study literature has repeatedly explored the relationship between (a) structure or design and (b) student satisfaction, transactional distance and learning.
There appears to be a close relationship between (a) structure and (b) transactional distance, student satisfaction and increase in perceived learning.
Introduction
The published research on DL is abundant; however, the actual student experiences have gone relatively undocumented [44,45] and are not fully understood [46]. The challenge was to understand students' use of technology to support higher-order learning, interaction and dialogue [7]. The second factor contributing to an understanding of TDT was interaction, communication or dialogue, and it is the focus of this section.
Results
Communication, interaction and support from faculty and peers are consistently rated as having a major influence on DL [16,39,[47][48][49][50][51][52][53]. However, our understanding of their use is seriously limited [7] by empirical research that has relied on rating scales and closed questionnaires to explore perceived support and perceived learning. With the exception of two papers [51,52], the papers above investigated student satisfaction and barriers or facilitators to DL. They were not directly focused on interaction or dialogue; they were exploring experiences generically. One paper specifically explored nurses' experiences. The findings supported the other studies: the interaction between the instructor and student, or student to student, was highlighted as integral to a positive learning experience or improved outcome [53].
A highly respected and well-published five-stage model illustrating online interaction or engagement (Figure 3) is found below [54].
This model is used as the basis for analysing and describing how the teacher or 'e-moderator' can support student learning. Other models and conversational frameworks for analysing online discourse [55-57] followed a relatively similar pattern of generating ideas, increasing interaction and information exchange, followed by divergent thinking and development. These models have been criticised as being artificial, prescriptive and based on personal experience rather than empirical research [9]. Salmon's work specifically has been criticised for its focus on the advancement of individual practitioners and the lack of attention paid to leadership and the institution as a whole. Successful initiatives must be scaffolded by dialogue and promote interaction and participation [54].
As discussed, the majority of the literature included interaction as one of the several factors affecting success in DL. A small amount of literature was found that addressed interaction, dialogue or engagement specifically.
Learner-learner and instructor-learner dialogue
Learner-learner and instructor-learner dialogue was the focus of a study of 38,000 students taking 264 online courses in New York [37]. Course documents and student questionnaires (38% return rate) were analysed. Student perceptions were explored based on learning, interaction with the instructor and classmates, and personal level of activity. The author found significant correlations between interaction with the instructor and both student satisfaction (r = 0.761, p = 0.01) and perceived learning (r = 0.707, p = 0.01). There were also significant correlations between interaction with other students and both course satisfaction (r = 0.440, p = 0.01) and perceived learning (r = 0.437, p = 0.01). These findings appeared consistent with the literature in that interaction with the instructor and amongst peers was consistently associated with the success of online courses [37]. Although this study was supported by research in a similar vein [7], there were some fundamental issues that were problematic. The survey consisted of multiple-choice and forced-answer questions investigating the 'dimensions' of satisfaction and perceived learning, with no explanation as to how these questions were developed. There was no justification for this quantitative attempt to measure the complex nature of satisfaction and learning.
Instructor-learner dialogue
One study examined instructor-learner dialogue specifically, exploring the relationships between verbal immediacy and affective and cognitive learning in DL. 145 post-graduate students involved in an asynchronous online course were surveyed using a questionnaire based on several verbal immediacy scales (described in detail) and both cognitive and affective learning scales [58]. The verbal immediacy scale consisted of 20 statements concerning instructor behaviour, the affective learning scale comprised six dimensions, and the cognitive learning scale was designed to produce a measure of learning loss. The hypothesis of a correlation between instructor immediacy and affective learning was supported (r = 0.73, p < 0.01). The hypothesis of a positive correlation between instructor immediacy and cognitive learning was supported (r = 0.054, p < 0.01). The verbal immediacy scale was based on other scales developed in a traditional face-to-face environment, yet their use in a non-traditional asynchronous environment was not justified. These students were all studying humanities and may not represent other postgraduates, as their requirement for instructor interaction may be unique. Regardless, the conclusion included a positive relationship between instructor immediacy and affective learning. Students who rated their instructors as more verbally immediate expressed improved affective and cognitive learning. Although immediacy of feedback was part of the original aim, it was not the focus of the review. The majority of the literature found investigated the value and necessity of speed in asynchronous interactions. Learner-learner and instructor-learner interaction has been shown to be effective in creating successful DL environments, but what has become key is timely interaction [7]. Timely interaction relates to Moore's [4] concept of TDT. This psychological separation was described as an interaction between levels of dialogue and levels of structure or autonomy. Therefore, the greater, faster and more involved the level of interaction or dialogue, the lower the psychological feeling of separation [7]. Timeliness of interactions, frequency, occurrence, type of interaction and immediacy are all areas that need to be examined further in distance education research [7].
Learner-learner dialogue
Learner-learner interaction is essential [10]. Two recent studies were found specifically addressing collaboration and peer interaction on performance in DL. One investigated social performance in computer supported collaborative learning [51], while another [52] analysed participants' experiences thematically in web conferences. In the first study, 39 undergraduate students were assigned to groups with either specialised collaborative activities and structure or none [51]. Data was collected on group performance using self and peer assessments and a rating scale for both behaviour and performance. These terms were all defined, although the rating scales were not validated or transparent. The group exposed to the specialised collaborative activities demonstrated a perceived increase in team development, ability to deal with team conflict and a more positive attitude towards collaborative problem solving [51]. The second study explored dialogue relating to learning in participants undertaking web conferences on leadership. Using data from two series of online seminars lasting over a year, the authors analysed all recorded 'text chat' data using thematic analysis. Validity was addressed by making the analysis process transparent, the analysis itself was done by three researchers and the final data was compared to the literature. Themes identified relating to learning were: social interaction, information giving, internalisation, co-construction of knowledge and multi-process learning. The results of both of these studies suggest that online activities that promote learner-learner interaction are important for effective team performance and collaborative learning [51,52].
Alternative approaches
Adults, as learners, need to see relevance or usefulness in their learning activities [59]. Therefore, these learners needed to see how interacting with their peers would benefit them and have relevance to their learning. Two slightly eclectic studies were found that addressed this from alternative viewpoints. One of the few longitudinal studies within this entire review followed groups of adult learners over 15 years [60]. This three-stage ethnographic-action research study tracked learners and their learning community at a virtual university in Australia as they undertook a Masters of Arts degree. The cycles, agents of change and staged findings were well explained. Conclusions suggested peer dialogue provided the mechanism for deep learning experiences and a sense of community. They related their findings to Bandura [61] suggesting that a community of learning requires:
• relevance-social and situational;
• involvement-reflective action and interpretive practice;
• technology-enabling and self-efficacy with ICT; and
• acceptance-recognition by peers.
The aim of this interpretive study was to explore how post-graduates could be guided to create conditions for effective peer discourse. In order to understand this, a study using traditional scientific methods would be inappropriate. Of the four concepts listed as necessary, social relevance or usefulness appeared to play the biggest role for students. This study was not addressing whether group interaction was valuable but what conditions were necessary for it to occur and be valuable for students. Supporting these findings, but from an alternative angle, a case study was presented in which the interaction between learners was a failure [62]. This empirical positivist study used a questionnaire survey and statistical analysis to address several hypotheses about why students did not participate in an online discussion forum at a university in West London. The hypotheses were that the low level of usage was due to either the attitudes of the students, low perceived usefulness of the discussion board or technological complexity. The results from the 24 questions were statistically significant, indicating that low perceived usefulness of the discussion board was the primary cause of its failure. The questionnaire consisted of scaled questions only, and the development of the tool itself was not discussed. Although not made explicit, it appears that only 10% of the potential students completed the questionnaire. However, the conclusions support another study [60] in that usefulness or relevance is necessary for successful learner-learner interactions. The approach of presenting findings from an unsuccessful initiative was unique. One of the general biases with published materials is the possibility of publication bias, where negative studies remain unpublished [24].
Summary of research on dialogue and interaction
• Interaction or dialogue was clearly related to student satisfaction and perceived learning whilst relevance, usefulness and immediacy of interactions appeared to be the most integral issues in decreasing TD and contributing to successful DL environments.
• Interaction/dialogue/engagement were terms used interchangeably in the literature, and there were three different divisions: instructor-learner, learner-learner and learner-content.
• Online 'community' or collaboration was an important variable in online classes. Without this online discourse, online courses became a mere transmission of information.
• Several frameworks for designing and analysing interaction in DL were found, all aimed at students' progression into higher levels of thinking [54][55][56][57].
• E-moderators took on multiple roles: they moderated or facilitated discussion, answered emails and managed the flow of content or responses. Their presence and immediacy impacted on student satisfaction.
• Students required usefulness, value or relevance in online interaction or discussion for it to be adopted successfully.
• The roles that interaction and dialogue play in DL are not well understood. Moore (1973) warned that this area should not be underestimated and argued that no other area of study would have a greater impact on the future of distance education.
Student experience: autonomy
Introduction
A hallmark of DL has been its reliance on learner autonomy [63] which was the third hypothesised element of TDT [8] and the focus of this section.
Results
Literature addressing autonomy in DL, unlike that on structure or dialogue, which was relatively straightforward, was complex and multi-faceted [1]. Major reviews were found discussing autonomy in learning [64] and specifically autonomy in DL [4,10]. One review of autonomy and learning investigated literature from the last two decades, describing various definitions and highlighting inconsistencies in the literature [64]. The review was divided into topics; however, there was no explanation of the search criteria or strategies. Autonomy was defined in terms of a redistribution of power concerning the construction of knowledge and the roles of participants. Although DL was not addressed explicitly, the paper claimed autonomy was '…a departure from education as a social process' (p. 116). Over 2000 pieces of literature concerning autonomy were reviewed [4]. This visionary work (pre-internet!) explained: 'The autonomous learner is not to be thought of as an intellectual Robinson Crusoe, castaway and shut-off in self sufficiency' ([4], p. 669).
In a later review, research on autonomous learning was examined [7], and it was further explained that there were two dimensions of autonomy in DL: self-management of pedagogy and self-monitoring of cognition, or metacognition. Both cognitive autonomy and taking responsibility for one's learning were essential. Focusing on the meta-cognitive aspects of learner autonomy, strategies were compared in classroom vs. DL settings [65]. Using questionnaires followed by verbal reports, the relationship was explored between autonomy and the instructional context of distance learners (n = 274) or classroom learners (n = 143) in a language programme. Variance analysis was applied to the questionnaire data to determine the relationship between learning strategies and context. The results showed that mode of study (distance vs. traditional) was the principal influence on the relationship between students and autonomy (more so than age, level, etc.). Distance learners made greater use of metacognitive strategies than classroom learners, especially relating to self-management. A further analysis was done using verbal reports (n = 37), and the data were classified from the transcripts by the researcher and an independent rater. A total of 836 instances of strategies relating to autonomous work were identified. The average instance of strategy use for distance learners was 26.6, whilst that for a traditional student was 10.2. Instances of using metacognitive strategies averaged four for classroom learners, whilst distance learners reported an average of 15. The results suggested distance learners used more metacognitive strategies than classroom learners [65]. Critically, the numbers in the two groups were uneven and the development of the questions was not well described. However, the dual nature of the study, independent raters, transparency of inter-rater reliability and clear analysis suggested rigour. This study suggested that learners either approach DL with, or develop very quickly, metacognitive and self-management skills.
In a later study, metacognitive knowledge and experiences in distance education were investigated [66]. Thirty-one students were interviewed, focusing on a model of metacognitive knowledge comprising self, task, strategy and goals. Content analysis was used to identify categories of metacognitive experiences. There was an average of 19.7 instances of metacognitive knowledge per student and, in descending order, the four dimensions of metacognition were: self-knowledge, strategy knowledge, task knowledge and knowledge of goals. Each student was able to recount at least one instance of a metacognitive experience. Conclusions included: students appeared to have had some, often extremely memorable, metacognitive experiences, and the metacognitive knowledge of distance students appeared to be primarily about self and strategy and less about tasks and goals. However, these dimensions were highly interactive and not distinct. The quantification of a complex concept such as metacognition, and the suggestion that students can identify a 'metacognitive experience', suggested a positivist approach to a subject containing multiple realities. However, the author attempted rigour in that the methods were clearly explained, two raters were used, and transcripts were revisited for further analysis with discussion to resolve differences. Overall, the metacognitive aspect of autonomy seemed to be occurring and to be important in these students' DL experiences [66]. Knowledge about oneself and about strategies was more important for successful learning than knowledge about tasks and goals. This, perhaps, suggested that self-monitoring is one of the keys to autonomy in DL.
Another study investigated how DL students conceptualised the three elements of TDT: structure, dialogue and autonomy [67]. Using a pre-tested and piloted questionnaire, 169 distance education students (72% response rate) were surveyed. Learner autonomy was measured by students indicating which of 11 statements described themselves (e.g. able to learn without lots of guidance, able to develop a personal plan, able to find resources, self-directed, prefer learning in a group, need collaborative learning). The results were analysed using factor analysis, which suggested a two-factor solution: independence and interdependence. Independence accounted for 29% of the total variance with a Cronbach's alpha of 0.82. Interdependence (interpersonal, interactive aspects) accounted for 26% of total variance with a Cronbach's alpha of 0.77. The results suggested that the concepts of dialogue, structure and autonomy were complex and that students tended to describe themselves as both independent and interdependent. The lack of correlation also suggested these features of autonomy were essential, but separate and distinct, attributes. Although the attempt to quantify something as complex as autonomy with statistical analysis was fundamentally flawed, this study provided a particularly interesting idea: an individual's autonomy as a distance learner should be understood as including their abilities to work with others, or be interdependent. Autonomy is multi-faceted, and interdependence appeared to be essential. These results suggested that there may be an attempt to move beyond the focus on independence in this environment and towards 'interdependence'. Other, earlier findings support this 'personal control' [68]. It was suggested that successful adult learners demonstrated appropriate dependency needs when participating in DL, including: help, approval and support, leadership of others, and sharing efforts and responsibility.
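For readers unfamiliar with the reliability statistic reported above, the following is a minimal Python sketch of how Cronbach's alpha is computed from a set of questionnaire items; the item scores are hypothetical and are not drawn from the study [67].

import numpy as np

def cronbach_alpha(item_scores):
    # item_scores: 2-D array, rows = respondents, columns = items on one scale
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                                 # number of items
    item_variances = items.var(axis=0, ddof=1)         # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical ratings from five respondents on four 'independence' items
ratings = [[4, 5, 4, 5],
           [3, 3, 4, 3],
           [5, 5, 5, 4],
           [2, 3, 2, 3],
           [4, 4, 5, 4]]
print(round(cronbach_alpha(ratings), 2))   # values around 0.8 indicate acceptable internal consistency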
Summary of research on autonomy
• Autonomy or self-directedness has been a core feature of adult learning for years and closely relates to TDT. DL, when considered as a social process, relates to this complex construct. Autonomy has been described as both self-management of pedagogy and metacognition. Furthermore, 'interdependence' in group activities in DL has been added to 'traditional' autonomy.
• Moore and Kearsley (1997) suggested autonomy, a third factor in TDT, influenced and interacted with dialogue and structure in transactional distance.
• Self-directed learning/autonomy/independent learning were all used with a considerable degree of equivalence in the literature and became popularised in the 1970s.
• The literature appeared to focus on measuring autonomy and the relationships of factors within TDT, attempting to quantify and compare a complex subject using statistical analysis, and was often lacking a theoretical framework.
• There appeared to be varying perspectives concerning autonomy and independence vs. interdependence. I disagreed with Thanasoulas [64] that autonomy was a departure from education as a social process. I supported Moore [4], Garland [68] and Chen and Willits [67]. An individual's ability to work online in groups was essential.
• Individual autonomy has been classified as self-management of pedagogy and metacognition. Both of these appeared to be important and occurring in DL. Studies exploring these involved constructs have attempted to quantify these complex subjects.
• Studies that have compared the different dimensions of autonomy suggested knowledge about oneself and self-strategies were more important than knowledge about tasks and goals, yet students must manage both 'academic' learning and the process of learning.
© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | 9,702 | 2018-11-05T00:00:00.000 | [
"Education",
"Computer Science"
] |
METHYLATION OF RIBOSOMAL PROTEIN L42 REGULATES RIBOSOMAL FUNCTION AND STRESS-ADAPTED CELL GROWTH
Lysine methylation is one of the most common protein modifications. Although lysine methylation of histones has been extensively studied and linked to gene regulation, that of non-histone proteins remains incompletely understood. Here, we show a novel regulatory role of ribosomal protein methylation. Using an in vitro methyltransferase assay, we found that Schizosaccharomyces pombe Set13, a SET domain protein encoded by SPAC688.14, specifically methylates lysine 55 of ribosomal protein L42 (Rpl42). Mass spectrometric analysis revealed that endogenous Rpl42 is monomethylated at lysine 55 in wild-type S. pombe cells and that the methylation is lost in Delta set13 mutant cells. Delta set13 and Rpl42 methylation-deficient mutant S. pombe cells showed higher cycloheximide sensitivity and defects in stress-responsive growth control compared with wild type. Genetic analyses suggested that the abnormal growth phenotype was distinct from the conserved stress-responsive pathway that modulates translation initiation. Furthermore, the Rpl42 methylation-deficient mutant cells showed a reduced ability to survive after entering stationary phase. These results suggest that Rpl42 methylation plays direct roles in ribosomal function and cell proliferation control independently of the general stress-response pathway.
Lysine methylation is catalyzed mainly by the SET domain-containing protein (SET protein) family (2). The SET domain was originally identified in three Drosophila proteins, Su(var)3-9, Enhancer of zeste, and Trithorax, and was later demonstrated to be the domain responsible for histone lysine methyltransferase function (3). The characterization of these enzymes has revealed the roles of histone lysine methylation in many different biological processes, including higher-order chromatin assembly and transcriptional regulation (4). Recent studies have shown that histone lysine methylation is directly reversed by several histone demethylase families (5).
Lysine methylation has also been identified in non-histone proteins that include Rubisco in plants (6), Cytochrome c in yeast (7), mammalian TAF10 and p53 (8)(9)(10), and ribosomal proteins in a diverse range of species (11). The methylation of ribosomal proteins has been observed in both prokaryotes and eukaryotes. In the budding yeast Saccharomyces cerevisiae (S. cerevisiae), a combination of in vivo labeling and direct mass spectrometric analysis of the ribosomal proteins revealed that six of them, Rpl1, Rpl3, Rpl12, Rpl23, Rpl42, and Rpl43, are post-translationally methylated (12,13). By analyzing the methylation state of S. cerevisiae mutant strains with deletions in candidate SET domain-containing genes, two SET proteins, Rkm1 and Rkm2, were identified as specific methyltransferases responsible for the dimethylation of Lys-105 and Lys-109 in Rpl23, and the trimethylation of Lys-3 in Rpl12 (14,15), respectively. A recent study further demonstrated that the monomethylation at Lys-40 and Lys-55 in Rpl42 is dependent on two other SET proteins, the Ybr030w gene product and Set7, respectively (16), although the direct enzymatic activity of these proteins has yet to be demonstrated. Several mass spectrometric studies have also identified methyl modifications on ribosomal proteins in plants and mammals (17)(18)(19). While ribosomal protein methylation appears to be conserved among different organisms, the physiological roles of these lysine methylations remain to be fully elucidated.
The ribosome plays a central role in a cell's adaptation to environmental stress, as a checkpoint for sensing shifts in temperature and nutrient levels (20,21). Global translation is reduced in response to these cellular stresses by triggering the phosphorylation of the eukaryotic initiation factor 2α (eIF2α) (22,23). This prevents the formation of the eIF2-methionine-initiator tRNA (Met-tRNAiMet)-GTP ternary complex and thus blocks translational initiation. The stress-induced attenuation of global translation is often accompanied by the selective translation of proteins that are required for cell survival under stress. A downshift in temperature, one of the most common environmental changes for microbial life, induces the expression of genes encoding a number of ribosomal proteins and proteins involved in ribosome biogenesis and assembly (24). Thus, it has been suggested that cells remodel the translational machinery and secondary structure of RNA for cold growth. Interestingly, ribosomal protein biogenesis and the stress-responsive signaling pathway are also linked with the life span in both yeast and C. elegans (25)(26)(27)(28).
In the fission yeast Schizosaccharomyces pombe (S. pombe), 13 SET proteins have been identified in the genome, four of which (Set1, Set2, Clr4, and Set9) are histone lysine methyltransferases and involved in transcriptional regulation (29)(30)(31)(32). Using an in vitro methyltransferase assay and genome-wide screen for methylated proteins, we previously demonstrated that Set5, Set10, and Set11 are specific methyltransferases for EF1α, Rpl23, and Rpl12, respectively (33,34). However, the roles played by other SET proteins in cellular processes and their physiological substrates remain unresolved.
In this study, we show that S. pombe Set13, a SET protein encoded by SPAC688.14, is a specific methyltransferase responsible for monomethylation at lysine-55 in Rpl42. We further demonstrate that this methyl modification is highly conserved from yeast to humans. Rpl42-methylation-deficient mutant S. pombe cells showed defects in stress-adapted growth control and reduced survival potential. Notably, Rpl42-methylation-mediated stress adaptation occurred independently of the general stress-response pathway. These results suggested that this ribosomal protein methylation is involved in global ribosomal function and cellular growth control.
EXPERIMENTAL PROCEDURES
Strains and media-The strains used in this study are listed in Supplemental Table S4. All of the yeast strains were grown at 30˚C or the indicated temperature in YEA (0.5% yeast extract, 3% glucose, 75 µg/ml adenine) or minimal medium (SD or EMM) supplemented with amino acids for auxotrophic markers and antibiotics. The deletion and tagging of endogenous genes were conducted using a PCR-based gene-targeting protocol (35). The deletion mutants for the set13 + , gcn5 + , or gcn2 + gene were made by replacing the gene with a kan r or ura4 + marker gene. The integrated ura4 + gene was then removed by homologous recombination of the flanking TEF terminator sequences to obtain cells lacking it. To obtain the rpl42 K55R and rpl42 P56Q mutant strains, the rpl42 + coding sequence was first cloned into pCRII-TOPO (Invitrogen), and each mutation was introduced by site-directed mutagenesis. After insertion of the ura4 + marker gene, the resultant plasmids were digested with MfeI and used to transform cells. Cells in which the plasmid was introduced into the original rpl42 + locus were isolated as ura4 + -expressing cells. The mutant S. pombe strains that lost the wild-type rpl42 + allele and ura4 + gene by internal homologous recombination were isolated using counter-selective media containing 5-fluoroorotic acid (FOA).
To obtain S. pombe cells expressing EGFP-fused Set13, the set13 + coding sequence was cloned into pREP1-EGFP, a pREP1 derivative containing the EGFP-coding sequence (36), and cells transformed with the resulting plasmid were isolated with minimal medium lacking leucine. To express EGFP-fused Rpl42, the rpl42 + coding sequence previously isolated as part of the ORFeome project (37) was transferred to a pDUAL-GFH1c vector (38) by the "LR" recombination reaction. The resultant plasmid, pDUAL-GFH1-rpl42, was digested with NotI and introduced into the leu1 locus of wild-type or ∆set13 cells. The transformed cells were selected on SD lacking leucine.
Expression and purification of recombinant proteins-To produce recombinant Set13 or Rpl42 proteins in E. coli, the coding sequence for set13 + or rpl42 + was amplified by PCR and cloned into pRSET (Invitrogen) for Set13, or pGEX6P-3 (GE Healthcare) or pTriEX-4 Hygro (Novagen) for Rpl42. To produce mutant Rpl42 proteins, the above plasmids were subjected to site-directed mutagenesis. Each expression vector was introduced into E. coli BL21 (DE3), and protein expression was induced by adding 1 mM isopropyl-β-D-thiogalactopyranoside. The culture was incubated for 2 h more at 37˚C before harvesting, and the cells were then lysed by sonication (for His-Set13 and GST-Rpl42, -Rpl42-N, -Rpl42-M, and -Rpl42-C) or with buffer containing guanidine hydrochloride (for Rpl42-His and its derivatives). The expressed proteins were purified using TALON metal affinity resin (Invitrogen) or Glutathione Sepharose (GE Healthcare), according to the manufacturer's instructions. The eluted materials were dialyzed against phosphate-buffered saline (PBS) or PBS with 10% glycerol, divided into aliquots, and stored at -80°C before use.
Antibodies-Anti-Rpl42 rabbit polyclonal antibodies were raised and affinity purified using recombinant Rpl42-His. The purified antibodies were used for western blot analyses. Other antibodies used in this study were: anti-eIF2α [pS52] (Invitrogen, 44728G) and anti-Tubulin (kindly provided by K. Gull).
In vitro methyltransferase assay-S. pombe nuclear extracts of wild-type and ∆set13 cells were prepared as described previously (39). The in vitro methyltransferase assay and the chromatographic fractionation of S. pombe nuclear extracts were performed as described previously (34).
Analysis of methylated peptide by nano-liquid chromatography tandem mass spectrometry (LC-MS/MS)-The LC-MS/MS analysis was performed as described previously (34).
Preparation of ribosomes from HEK293T cells-To prepare ribosomes from human cells, HEK293T cells were grown to ~80-90% confluence in 100-mm dishes, washed once with cold PBS, and harvested using a rubber scraper. The cells were pelleted, resuspended in 2 volumes of homogenization buffer (10 mM Tris-HCl, pH 7.5, 5 mM MgCl2, 10 mM KCl, 1 mM dithiothreitol), and lysed with a Dounce homogenizer. The lysate was freed of cell debris by centrifugation at 20,000 x g for 10 min at 4˚C. The supernatant was layered at a 1:1 ratio (v/v) over a sucrose cushion buffer (50 mM Tris-HCl, pH 7.5, 5 mM MgCl2, 25 mM KCl, 2 M sucrose) and centrifuged at 100,000 x g for 24 hr at 4˚C. The ribosome-enriched pellet was resuspended in homogenization buffer, and the proteins were resolved on 8-13% SDS-polyacrylamide gel electrophoresis (PAGE) gels.
Microscopy analysis-To analyze the localization of EGFP-fused Set13, wild-type S. pombe cells harboring the pREP1-EGFP-set13 plasmid were grown to early-log phase in liquid medium and washed twice with deionized H2O, and the DNA was visualized by incubation with 1 µg/ml Hoechst 33342. To analyze the localization of GFP-fused Rpl42, wild-type and ∆set13 mutant S. pombe cells that had integrated the pDUAL-GFH1-rpl42 construct were grown and treated as described above. Microscopic images were captured on a Zeiss Axioplan 2 imaging microscope and an ORCA-ER camera (Hamamatsu).
Spotting assay-Wild-type and mutant cells were grown in YEA medium. Five-fold serial dilutions were made from cultures of 1×10^7 cells/ml and spotted onto plates with YEA alone or YEA containing antibiotics of the indicated concentrations. To analyze the growth rate under different stress conditions, the spotted YEA plates were incubated at 30˚C for 2-4 days, 38˚C for 2-4 days, or 15˚C for 19 days.
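As a small aid for reproducing the dilution arithmetic, the Python sketch below lists the cell densities of a five-fold serial dilution series starting from 1×10^7 cells/ml; the number of dilution steps is an assumption, since the protocol does not state how many spots were made per row.

def dilution_series(start_cells_per_ml=1e7, fold=5, steps=5):
    # Cell density at each step of the serial dilution, starting with the undiluted culture
    return [start_cells_per_ml / fold ** i for i in range(steps)]

# [1e7, 2e6, 4e5, 8e4, 1.6e4] cells/ml for an assumed five-spot row
print(dilution_series())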
Two-dimensional electrophoretic analysis of proteins-The two-dimensional gel analysis of methylated proteins was performed as described previously (34).
Polysome analysis-S. pombe cells were grown in 100 ml YEA medium to mid-log phase (optical density at 595 nm = 0.8), harvested, resuspended in lysis buffer (containing proteinase inhibitor cocktail [EDTA-free; Roche Applied Science]), and lysed with Multi-Beads Shocker (Yasui Kikai). An aliquot of the cleared lysate was overlaid on top of a 5-45% (w/v) sucrose gradient containing 20 mM Tris-HCl (pH 7.5), 50 mM KCl, 10 mM MgCl2, 1 mM DTT, 100 µg/ml cycloheximide, 200 µg/ml heparin, and proteinase inhibitor cocktail, and centrifuged for 2 h at 36,000 rpm (222,000 x g max) at 4˚C in a Beckman SW41 Ti rotor. The gradients were then fractionated using a Piston Gradient Fractionator (Biocomp). Polysome profiles were generated by continuous absorbance measurement at 254 nm using a UV monitor (Econo UV-monitor, BioRad) connected to a chart recorder.
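For readers converting between rotor speed and centrifugal force in this protocol, the following is a minimal sketch of the standard conversion RCF = 1.118 × 10^-5 × r × rpm^2 (r in cm); the rotor radius used below is an assumed value for the SW41 Ti rotor and is not taken from the paper.

def rcf_from_rpm(rpm, radius_cm):
    # Relative centrifugal force (x g) from rotor speed (rpm) and radius (cm)
    return 1.118e-5 * radius_cm * rpm ** 2

# Assumed maximum radius of a Beckman SW41 Ti rotor (~15.3 cm); at 36,000 rpm this
# gives roughly the 222,000 x g quoted above for the sucrose-gradient spin.
print(round(rcf_from_rpm(36_000, 15.31)))   # ~221,800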
qRT-PCR analysis of cold-induced genes-Yeast cells were grown until they reached 1-2×10^7 cells/ml, and then 15 ml of the yeast culture was collected for a time 0 reference. The cells were harvested by centrifugation, flash-frozen in liquid nitrogen, and stored at -80˚C until RNA preparation. The remaining cultured cells were cold-shocked at 15˚C in a precooled water bath and were further aerobically cultured at 15˚C. Cells were collected at 1, 4, 8, 24, and 30 h after the cold shock by centrifugation. The harvested cells were also flash-frozen and stored as described above. The total RNA was prepared by a hot phenol method (40). Real-time PCR was performed using the Applied Biosystems 7300 Real-Time PCR system. The following components were mixed on ice: 10 µl of 2× one-step SYBR RT-PCR Buffer III, 0.4 µl of ROX reference Dye, 0.8 µl of PrimeScript RT enzyme Mix II, 8 µM of each primer, 50 ng of template RNA, and RNase-free distilled H2O to a total volume of 20 µl. The RT reaction (cDNA synthesis) was carried out at 42˚C for 5 min. The reaction mixture was then incubated at 95˚C for 10 s to inactivate the enzyme and denature the RNA/cDNA hybrid. The DNA amplification by PCR was next performed for 40 cycles, each cycle consisting of denaturation at 95˚C for 5 s, primer annealing and extension at 60˚C for 31 s. The PCR product was subjected to dissociation conditions to confirm that it consisted of a single component. The relative mRNA amount was calculated and normalized to the amount of act1.
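The protocol does not spell out how the act1-normalized relative mRNA amounts were computed; the sketch below shows the comparative Ct (2^-ddCt) method, which is the usual calculation for this kind of SYBR green data. The function name, the example Ct values, and the assumption of ~100% amplification efficiency are illustrative rather than the authors' exact procedure.

def relative_expression(ct_target, ct_act1, ct_target_t0, ct_act1_t0, efficiency=2.0):
    # Comparative Ct method: normalize the target gene to act1, then express it
    # relative to the time-0 (pre-cold-shock) calibrator sample.
    d_ct_sample = ct_target - ct_act1              # normalization within the sample
    d_ct_calibrator = ct_target_t0 - ct_act1_t0    # normalization within the calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return efficiency ** (-dd_ct)                  # fold change relative to time 0

# Hypothetical Ct values for a cold-induced gene 4 h after the shift to 15 degrees C
print(relative_expression(ct_target=22.1, ct_act1=18.0, ct_target_t0=25.3, ct_act1_t0=18.2))   # ~8-fold induction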
Survival in stationary phase-Yeast strains were streaked on YEA and grown for 5 days at 30˚C. From these plates a preculture was inoculated in YEA and allowed to grow until it reached 1-2×10^7 cells/ml. This preculture was then used to inoculate a 100-ml culture. This culture was grown until the end of the exponential phase, when the OD595 stopped increasing and the cells had reached their maximum density. At this point, we started monitoring cell survival by measuring the ability of individual yeast cells to form a colony (colony forming units [CFUs]). The cultures were serially diluted to reach a 1:2×10^4 dilution in YEA, and 100 µl of this dilution was plated in triplicate onto YPD plates. After 5-7 days, the total number of colonies was counted, with this number representing 100% survival and day 0 of the curve. The subsequent measurements were taken every day.
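As a rough illustration of the survival calculation described above, the sketch below converts triplicate colony counts into CFU/ml (using the 1:2×10^4 dilution and 100 µl plated volume from the protocol) and expresses a later day as a percentage of the day-0 value; the colony counts shown are hypothetical.

DILUTION_FACTOR = 2e4      # cultures diluted 1:2x10^4 before plating
PLATED_VOLUME_ML = 0.1     # 100 microliters plated per plate

def cfu_per_ml(colony_counts):
    # Mean CFU/ml of the original culture from triplicate plate counts
    mean_colonies = sum(colony_counts) / len(colony_counts)
    return mean_colonies * DILUTION_FACTOR / PLATED_VOLUME_ML

def percent_survival(cfu_day_n, cfu_day_0):
    # Survival relative to the day-0 reference, defined as 100%
    return 100.0 * cfu_day_n / cfu_day_0

# Hypothetical counts: day 0 vs. day 6 of stationary phase
day0 = cfu_per_ml([152, 147, 160])   # ~3.1 x 10^7 CFU/ml
day6 = cfu_per_ml([18, 22, 20])      # ~4.0 x 10^6 CFU/ml
print(round(percent_survival(day6, day0), 1))   # ~13% of the starting population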
RESULTS
S. pombe Set13 is an active methyltransferase that modifies ribosomal protein L42-Thirteen SET domain-containing proteins (SET proteins) have been identified in the S. pombe genome (Supplemental Table S1). Using an in vitro methyltransferase (MTase) assay, we previously demonstrated that one of these SET proteins, Set11, specifically methylates ribosomal protein L12 (34). Here we applied the same approach to investigate the cellular function of Set13, a previously uncharacterized SET protein encoded by SPAC688.14.
We prepared a recombinant His-tagged full-length Set13 (His-Set13) (Fig. 1A), and incubated it with acid-extracted nuclear extracts prepared from wild-type or ∆set13 S. pombe cells in the presence of 3H-labeled methyl donor ([3H]AdoMet). Although specific methylation signals were not detected in the assay using an extract from wild-type cells, a strongly methylated band with a molecular mass of ~15 kDa on SDS-PAGE was detected in the assay using the ∆set13 cell extract (Fig. 1B, p15me). This result suggested that Set13 is an active methyltransferase and that p15 is one of Set13's physiological targets. As was the case for Set11 (34), the methylation site(s) may have been modified already by endogenous Set13 in the wild-type cells.
To identify the target protein(s) of Set13, we separated the nuclear extracts of ∆set13 mutant cells by reverse-phase chromatography, and the eluted proteins were tested in the in vitro MTase assay. As shown in Figure 1C, the target protein(s) was eluted in several fractions, with a peak at fraction 17. The protein band showing the same elution profile in the chromatography (indicated by an arrowhead) was excised from the gel and subjected to LC-MS/MS analysis. In parallel with this chromatographic approach, we separated the methylated product(s) by two-dimensional (2D) acetic acid-urea-Triton X-100 (AUT) and acetic acid-urea-cetyltrimethylammonium bromide (AUC) gel analysis (34). After 2D separation, one discrete signal was detected in the autoradiograph (Supplemental Fig. S1). The protein spot corresponding to this signal was excised and subjected to LC-MS/MS analysis. From both the chromatographic (Fig. 1C) and 2D gel (Supplemental Fig. S1) approaches, we obtained a series of peptides that matched perfectly with the deduced amino acid sequence of the S. pombe ribosomal large subunit protein L42 (Rpl42) (Fig. 1D, indicated by underlines). Rpl42 is highly conserved from yeast to humans (Fig. 1D), and its structural homologue was also identified in Haloarcula (41).
Set13 modifies recombinant Rpl42 in vitro-To confirm that Rpl42 is a physiological substrate for Set13, an in vitro MTase assay was performed using recombinant full-length Rpl42 (GST-Rpl42-Full) (Fig. 2A). The full-length GST-Rpl42 was clearly methylated by Set13 (Fig. 2B), indicating that Rpl42 is indeed a substrate for Set13. To determine the methylation site(s) on Rpl42, we separated Rpl42 into three parts (N, N-terminus; M, middle part; and C, C-terminus) and prepared GST-fusion proteins containing each part. To identify the site of Rpl42 methylation, we introduced a series of alanine substitutions for the candidate lysine residues in Rpl42-M-His, either alone or in combination (Fig. 2C), and used these mutant proteins in the in vitro MTase assay. While the K60-67A mutation decreased the Set13 activity, the K40-55A mutation completely abolished it (Fig. 2D, Rpl42-M-His K40-55A). Further detailed mapping revealed that the single alanine substitution of lysine 55 (K55A) clearly blocked the Set13 MTase activity (Fig. 2D, Rpl42-M-His K55A). We also confirmed that the same K55A mutation in full-length Rpl42 (rRpl42-His) abolished Set13's activity (Fig. 2E). Together, these results indicate that Rpl42 is a physiological substrate of Set13 and that lysine 55 of Rpl42 is the candidate target residue for Set13. The reduced activity seen with the K60-67A mutant (Fig. 2D) may have been caused by poor substrate recognition by Set13, since these mutations could have altered the local structure of the protein.
Determination of the methylation sites of Rpl42 by LC-MS/MS-To determine the in vivo methylation site of Rpl42, the endogenous Rpl42 in wild-type or ∆set13 cells was isolated by reverse-phase chromatography (as shown in Fig. 1C) and analyzed using LC-MS/MS. While the overall elution profiles of the digested Rpl42 peptides in the nano-LC spectra were superimposable, the representative masses of several eluted peptides were different between the wild-type and ∆set13 cells (Fig. 3A,B, indicated by asterisks). The experimental mass of the faster-eluted peptides obtained from a linear ion-trap TOF system was 1532.04 (MH+) for wild-type cells and 1517.99 (MH+) for ∆set13 cells (Supplemental Table S2, NanoFrontierLD) and the mass difference was 14.05 Da, which corresponds to the mass of one methyl group. A similar mass difference was observed for a slower-eluted fragment at 8.12 min.
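As a small illustration of the arithmetic behind this assignment, the sketch below compares an observed peptide mass with its unmodified counterpart and counts how many methyl (CH2, monoisotopic mass 14.0157 Da) additions explain the shift; the tolerance value is an arbitrary choice for the example, not a parameter from the analysis.

METHYL_DA = 14.0157   # monoisotopic mass added per methyl group (CH2)

def methyl_count(mass_observed, mass_unmodified, tolerance=0.05):
    # Number of methyl groups that best explains the observed mass shift,
    # or None if the shift is not a clean multiple of 14.0157 Da.
    shift = mass_observed - mass_unmodified
    n = round(shift / METHYL_DA)
    return n if abs(shift - n * METHYL_DA) <= tolerance else None

# Rpl42 peptide (MH+) masses quoted in the text: wild type vs. delta set13
print(methyl_count(1532.04, 1517.99))   # -> 1, i.e. monomethylation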
MS/MS analysis revealed that the amino acid sequence of both the faster- and slower-eluted fragments matched residues 47-60 of Rpl42, and that the slower-eluted peptide was a deamidated derivative of the faster-eluted peptide (Fig. 3C,D, and Supplemental Table S2). Importantly, the peptide in wild-type cells was monomethylated at lysine 55 (Fig. 3C), whereas the corresponding residue in ∆set13 cells was unmodified (Fig. 3D). This result is consistent with our in vitro MTase assay results (Fig. 2E), and suggests that Set13 is a specific MTase that monomethylates lysine 55 of Rpl42.
By analyzing the MS/MS results, we were able to identify additional methylated peptides from the Rpl42 of wild-type cells (Supplemental Table S3). We found, however, that these modifications were present in the ∆set13 cells, and that the corresponding unmethylated peptides were frequently detected in both wild-type and ∆set13 cells. Thus, it is unlikely that these additional lysine residues are the physiological targets of Set13.
Rpl42 methylation at lysine 55 is conserved from yeast to humans-Rpl42 is evolutionarily conserved among eukaryotes, and methylation at lysine 55 has also been identified in other species, including budding yeast (16) and plants (19). To obtain further evidence for the importance of the Rpl42 methylation, we determined whether the methyl modification is present on human Rpl36a, the homologue of S. pombe Rpl42 (Fig. 1D).
Ribosomes were purified from HEK293T cell extracts by centrifugation, and the associated proteins were resolved by SDS-PAGE (Fig. 4A,B). Human Rpl36a, detected by an anti-SpRpl42 antibody (Fig. 4B, indicated by an arrowhead), was excised and subjected to the LC-MS/MS analysis. Mass spectrometric analysis revealed that human Rpl36a was also monomethylated at lysine 53, which corresponds to lysine 55 of S. pombe Rpl42 (Fig. 4C). The high conservation of Rpl42 methylation suggests that it has an important role in the ribosomal function.
Location of Rpl42 and lysine 55 on the large ribosomal subunit-To gain insight into the role of Rpl42 methylation, we examined the structure and relative position of Rpl42 on the large ribosomal subunit. Since structural information on the S. pombe ribosome is not yet available, we used the previously characterized cryo-EM structure of the S. cerevisiae 80S ribosome (42) to visualize the three-dimensional structure and relative location of Rpl42 in the large ribosomal subunit (Fig. 5A,B). Rpl42 has a large loop extension and is positioned between the central-and L1-protuberances of the large ribosomal subunit.
Within Rpl42, lysine 55 is located in this loop extension and lies close to the E site (Fig. 5A,B, indicated by yellow).
In Haloarcula marismortui, the L44e protein, the structural homolog of Rpl42, is located at the same position in the large ribosomal subunit (41) and interacts through its loop extension with an RNA oligonucleotide that mimics the CCA end of deacylated tRNA bound to the E site (43). Lysine 51, one of the L44e residues that make specific contact with the C75 of CCA (Fig. 5C, indicated by pink arrowheads), corresponds to lysine 55 of Rpl42. Thus, Rpl42 methylation may play a role in the recognition of deacylated tRNA bound to the E site, although this possibility needs to be clarified by further detailed structural analyses.
Set13 predominantly localizes to the nucleus-To gain further insight into the function of the Set13 methyltransferase, we examined its localization by expressing it as an EGFP-fusion protein. EGFP-Set13 predominantly localized to the nucleus, including both the nucleolus and the other DAPI-dense nuclear hemisphere (Fig. 5D) (37), suggesting that Set13 modifies Rpl42 in the nucleus, presumably prior to ribosome assembly. Since the localization of Rpl42-GFP was the same for wild-type and ∆set13 cells (Fig. 5E), it is likely that the assembly of Rpl42 into the ribosome is independent of its methylation.
Rpl42 methylation-defective mutants show cycloheximide sensitivity-Cycloheximide is a potent protein synthesis inhibitor that blocks translational elongation by interfering with the peptidyl transferase activity of the 60S ribosome. In several organisms, mutations in the large ribosomal subunit lead to a recessive cycloheximide-resistance phenotype (44). One of these mutations maps to proline 56 of Rpl42 (45), which is close to lysine 55 (Fig. 5A, indicated in blue). This result prompted us to investigate the role of Set13 and Rpl42 methylation in cycloheximide sensitivity.
The ∆set13 mutant cells were viable and showed no noticeable growth defects under normal culture conditions (e.g. see Fig. 6B, 0 µg/ml CYH). However, the ∆set13 cells displayed higher cycloheximide sensitivity than wild-type cells (Fig. 6B, 10 and 20 µg/ml CYH). The increased sensitivity was not attributable to a change in the Rpl42 protein level, which was comparable between wild-type and ∆set13 cells (Fig. 6A). To rule out the possibility that other Set13 substrates indirectly affected the cycloheximide sensitivity, we introduced two amino acid substitutions (K55R and P56Q) into the rpl42 + gene, and examined the cycloheximide sensitivity. As previously observed (45), the rpl42 P56Q mutation conferred a strong resistance to cycloheximide (Fig. 6A,B). In contrast, the rpl42 K55R mutant cells were even more sensitive to cycloheximide than the ∆set13 cells, although the Rpl42 protein level was the same (Fig. 6A,B). Taken together, these results demonstrated that Set13 and Rpl42 methylation play a direct role in cycloheximide sensitivity, which is tightly linked to ribosomal function. The rpl42 K55R cells' greater sensitivity to cycloheximide was presumably owing to the imperfect mimicry of the lysine residue by arginine.
To gain further insight into the roles of Set13 and the methylation of Rpl42 in ribosomal function, we examined whether the ∆set13 and rpl42 mutant cells were sensitive to other ribosome-targeting translational inhibitors, including anisomycin, paromomycin, G418, and hygromycin (46)(47)(48). The ∆set13 and rpl42 mutant cells showed no clear sensitivity to these translational inhibitors, and only the rpl42 K55R cells showed a weak resistance to hygromycin (Supplemental Fig. S2). These results suggested that methylated Rpl42 contributes to particular step(s) in the peptidyl transferase reaction rather than affecting overall ribosomal function. Considering its location and a previous observation that cycloheximide arrests the ribosome when the first deacylated tRNA reaches the E-site (49), it is likely that cycloheximide blocks elongation by interacting with the E-site when it contains deacylated tRNA, and that Rpl42 methylation-defective mutant cells affect the binding or affinity of cycloheximide.
Rpl42 methylation-defective mutants show abnormal cell growth under various environmental stresses-We next examined whether Set13 and Rpl42 methylation are involved in other cellular processes that are linked to ribosomal function. As described above, Rpl42 methylation-defective mutant cells showed no obvious growth defects under normal culture conditions (Fig. 6B). This finding was confirmed by a polysome analysis that showed similar levels of 40S, 60S, and 80S ribosomes, and comparable profiles of polyribosomal components (Supplemental Fig. 3) in the methylation-defective and wild-type cells. We noticed, however, that unlike the wild-type cells, the ∆set13 and the rpl42 K55R mutant cells showed robust growth at low temperature (Fig. 6C, 15°C). In addition, these mutant cells showed resistance, to different degrees, to other stress conditions such as high temperature (Fig. 6C, 38°C), high salt concentration (Fig. 6D), and glucose starvation (Fig. 6E). The rpl42 K55R cells showed stronger effects compared with the ∆set13 cells. The cycloheximide-resistant rpl42 P56Q mutant cells also displayed abnormal responses to these environmental stresses. Interestingly, the rpl42 P56Q mutant cells showed the opposite phenotype for high temperature (Fig. 6C, 38°C) and glucose starvation (Fig. 6E) to that observed for the ∆set13 and rpl42 K55R cells. Together, these results demonstrated that Set13 and Rpl42 methylation are required for proper growth control under various stress conditions, and suggest that the E-site configuration determined by Rpl42 may control ribosomal function under these environmental stresses.
Cold adaptation of Rpl42 methylation-defective mutants is independent of the general stress-response pathways-Among several stress conditions that we examined, the Rpl42 methylation-defective mutant cells displayed the greatest differences under the low-temperature condition (15°C) (Fig. 6C). While this temperature is far below that of laboratory culture conditions (~30°C), it commonly occurs in the natural world, and therefore the phenotype appears to be linked with important cell-survival responses. To investigate the role of Rpl42 methylation further, we focused on this cold-adaptive phenotype.
Cells first grown at 30˚C were shifted to 15˚C, and the cellular growth after the temperature shift was monitored (Fig. 7A). Upon the shift to the low temperature (Fig. 7A, 0 hr), the growth rate of the wild-type cells immediately slowed, although they continued to grow. While the ∆set13 and rpl42 mutant cells showed a similar growth arrest at the time of the temperature shift, their growth, following adaptation, was clearly different from that of wild-type cells (Fig. 7A). That is, the ∆set13 and rpl42 mutant cells showed a normal response to the temperature shift, but grew at an abnormal rate under cold-adapted conditions. The expression profiles of known cold-induced genes (Supplemental Fig. S4) supported the notion that the initial cold responses functioned properly in these mutant cells.
The rpl42 K55R mutant cells showed a higher growth rate than the wild-type cells, consistent with the spotting assay (Fig. 6C). In contrast, the ∆set13 and rpl42 P56Q mutant cells showed a lower growth rate (Fig. 7A). For ∆set13, this was seemingly in contrast to the results of the spotting assay (Fig. 6C). We think, however, that these mutants may simply have needed a longer time to recover from the cold-induced growth arrest, because once they recovered, their cold-adapted growth was faster than that of wild-type cells.
Translational control is tightly linked to stress responses. Many different types of stress reduce global translation by triggering the phosphorylation of eIF2α, which is mediated by stress-activated protein kinases, Gcn2 and Hri2, in fission yeast (50). To investigate the relationship between Rpl42 methylation and the global stress-response pathway, we combined ∆gcn2 with the ∆set13 or rpl42 mutations and examined the growth of these double-mutant cells under stress conditions (Fig. 7B). Although the ∆gcn2 mutation itself caused a weak growth defect or resistance under the stress conditions, it caused little or no change in the cold-adapted growth of ∆set13 or rpl42 mutant cells. The same was true for ∆hri2 mutant cells (data not shown). In addition, cold stress did not trigger the phosphorylation of eIF2α (Fig. 7C), whereas heat-shock stress did induce the phosphorylation. These results suggested that the cold-adapted growth of Rpl42 methylation-deficient cells was not coupled to the eIF2α-mediated stress-response pathway.
We also explored potential genetic links between the ∆set13 mutation and other stress-response pathways such as TOR (51) and the stress-activated protein kinase (SAPK) (52). However, we did not find any correlations with the cold-adapted growth of Rpl42 methylation-deficient cells (data not shown). Therefore, it is likely that the cold-adapted growth of ∆set13 and rpl42 K55R mutant cells was caused, at least in part, by an altered physical property of ribosomes rather than by defects in the stress-response pathways.
Rpl42 methylation is linked to chronological aging-The ∆set13 and rpl42 K55R mutant cells appeared to have a growth advantage under several stress conditions, while the global stress-response pathway was maintained. However, it is possible that proper growth suppression under such stress conditions is beneficial to the survival of the population. Consistent with this idea, we frequently observed that these mutant cells recovered poorly after prolonged storage in the laboratory refrigerator (data not shown). Thus, to determine the potential involvement of Rpl42 methylation in population survival, we examined cell viability after the cells entered the stationary phase.
The viability of individual yeast cells decreases with the time they spend in stationary phase, a phenomenon known as chronological aging (53,54). To study the role of Rpl42 methylation in this process, the ∆set13 and rpl42 mutant cells were grown until the stationary phase, and their survival rate was determined by counting the colony-forming units (CFUs). Under our experimental conditions using rich culture medium, 99.9% of the wild-type cells died after 9 days (Fig. 8A). Interestingly, the ∆set13 and rpl42 K55R mutant cells exhibited roughly half the life span of wild-type cells, with the rpl42 K55R mutant cells showing the more severe phenotype (Fig. 8A,B). In contrast, the cycloheximide-resistant rpl42 P56Q mutant cells exhibited a longer life span than wild-type cells (Fig. 8A). The survival potential appeared to correlate with cycloheximide sensitivity. Together, these results suggested that the Set13 activity and Rpl42 methylation correlate with the chronological lifespan.
DISCUSSION
Using an in vitro methyltransferase assay, we demonstrated that fission yeast Set13 is a methyltransferase for lysine 55 of the ribosomal protein Rpl42. Since under our assay conditions we could not detect any other proteins that were efficiently modified by Set13 (Fig. 1B), it is most likely that Rpl42 is a specific substrate for Set13.
Although its direct enzymatic activity has yet to be examined, the budding yeast SET7 gene was recently demonstrated to be required for the monomethylation at lysine 55 of Rpl42ab (16). In addition, we found that human Rpl36a/Rpl42 is also monomethylated at the corresponding lysine residue (Fig. 4). Together with a mass spectrometric analysis of plant ribosome proteins (19), these findings indicate that Rpl42/Rpl36a methylation and its responsible enzyme are highly conserved among a wide range of eukaryotic species.
In S. cerevisiae, Rpl42 is also monomethylated at lysine 40, and this methylation is dependent on the Ybr030w gene product (16). Although we could not obtain concrete evidence for a corresponding methylation in S. pombe Rpl42 or human Rpl36a (data not shown), a combination of methylation modifications at different residues on Rpl42 may modulate its function in certain species. Most of the methyl modifications on histones appear to be enzymatically reversed by a family of demethylases (5). It remains an open question whether a member of the demethylase family can target the methyl modifications on ribosomal proteins.
Because Rpl42 prepared from wild-type cells was not a good substrate for Set13 in vitro (Fig. 1B), it is likely that Rpl42 is predominantly methylated at lysine 55 in wild-type cells (16). In addition, Rpl42 is tightly associated with, or rather embedded in, the ribosomal RNA of the 60S subunit (Fig. 5B). Rpl42 methylation appears to occur during the ribosomal assembly process, as supported by the nuclear localization of Set13 (Fig. 5D), and, once Rpl42 is assembled into the 60S subunit, its methylation might be stably maintained and important for ribosomal function.
From the cryo-EM structure of the S. cerevisiae 80S ribosome (42), Rpl42 was determined to lie close to the E-site (Fig. 5B). In addition, the x-ray crystal structure of complexes between the Haloarcula marismortui 60S subunit and E-site substrates revealed that deacylated tRNA makes specific contacts with the loop extension of the L44e protein, the structural homolog of Rpl42, and the residues responsible for this interaction lie close to the region containing lysine 55 and proline 56 of Rpl42 (43) (Fig. 5C). Together, these observations support the possibility that the loop extension of Rpl42 and the methylation at lysine 55 play a critical role in an interaction with the deacylated tRNA positioned at the E-site. Intriguingly, human L36a-like, which is closely related to Rpl36a/Rpl42, has been demonstrated to make contact with the CCA end of P-site-bound tRNA (55). Therefore, it is also possible that Rpl42 plays a role in the recognition of tRNA positioned at the P-site.
The function of the E-site has been extensively studied for the E. coli ribosome. A specific interaction between the 3' end of deacylated tRNA and the E-site is required for an efficient translocation reaction (56). Furthermore, the mutation of a highly conserved residue in the 23S rRNA that interacts with the 3' end of deacylated tRNA at the E-site leads to a translocation defect and promotes frameshifting and misreading at stop codons in vivo (57). Based on these observations, we examined the efficiency of frameshifting events in the ∆set13 and rpl42 mutant cells. However, we could not obtain direct evidence for an effect on frameshifting (data not shown). The role of the E-site may be regulated differently in the eukaryotic ribosome.
The ∆set13 and rpl42 K55R mutant cells showed enhanced cycloheximide sensitivity, and in contrast, the rpl42 P56Q mutant cells showed enhanced resistance to cycloheximide. Although the exact mechanism and position of this antibiotic's action have yet to be determined, these two residues and the methyl modification may affect the binding affinity of cycloheximide for the 60S ribosome. This is quite consistent with a previous observation that cycloheximide arrests the ribosome when the first deacylated tRNA reaches the E-site (49).
It is noteworthy that the ∆set13 and rpl42 mutant cells also showed defects in stress-adapted growth control. Under stress or starvation conditions, translation initiation is blocked by eIF2α phosphorylation and eIF4F disassembly (22,23). In the present study, we demonstrated that the defects of the stress-induced growth control of ∆set13 and rpl42 mutant cells were distinct from the signaling pathways that control translation initiation.
Another intriguing finding was that the stress-adapted growth defect observed in these mutant cells appeared to be correlated, at least to some extent, with the cycloheximide sensitivity and chronological life span. As described above, a simple explanation for the cycloheximide sensitivity is that the loss of methylation or amino acid substitution at lysine 55 changes the structural conformation of Rpl42 in a way that affects the binding affinity of cycloheximide to the ribosome. However, this idea does not fully explain our observation that the cycloheximide sensitivity correlates with stress-adapted cell growth control.
It is conceivable that cycloheximide treatment mimics the condition of ribosomes under some sort of stress and that the methylation of Rpl42 at lysine 55 fine-tunes the ribosomal function rather than affects the ribosome's affinity for cycloheximide. The loss of the methylation may lead to a defect in this precision machinery that results in a defect in the stress-responsive growth control. An alternative is that the E-site configuration determined by Rpl42 functions as an intrinsic sensor of environmental conditions and modulates ribosomal function. In this scenario, the ∆set13 and rpl42 mutant cells would be defective in their ability to sense environmental conditions or to change the growth rate to the appropriate level.
Some yeast species possess a variant Rpl42 that confers cycloheximide resistance on the cell (58). It is possible that cells adapt to stress by preparing several subtypes of ribosomes that provide distinct responses to environmental stresses. It is interesting to imagine that a minor fraction of ribosomes that contains unmodified Rpl42 plays a role under stress conditions and affects cellular processes such as cancer development in humans (59). Further studies are necessary to elucidate how Rpl42's methylation regulates stress-adapted cell growth.
"Biology"
] |
How Well Can Quantum Embedding Method Predict the Reaction Profiles for Hydrogenation of Small Li Clusters?
Quantum computing leverages the principles of quantum mechanics in novel ways to tackle complex chemistry problems that cannot be accurately addressed using traditional quantum chemistry methods. However, the high computational cost and the limited number of high-fidelity physical qubits restrict its application to small chemical systems. This work employed a quantum-classical framework featuring a quantum active-space embedding approach to perform simulations of chemical reactions that require up to 14 qubits. This framework was applied to prototypical metal hydrogenation reactions: the coupling between hydrogen and Li2, Li3, and Li4 clusters. Particular attention was paid to the computation of barriers and reaction energies. The predicted reaction profiles compare well with advanced classical quantum chemistry methods, demonstrating the potential of the quantum embedding algorithm to map out reaction profiles of realistic gas-phase chemical reactions and to ascertain qualitative energetic trends. Additionally, the predicted potential energy curves provide a benchmark against which both current and future quantum embedding approaches can be compared.
Introduction
Theoretical chemistry is a key driver in the development of algorithms for quantum computers, which have the potential to solve the expectation value of the electronic Hamiltonian accurately and efficiently [1][2][3][4][5]. Using classical computers, the computational cost of performing electronic structure calculations that rely on solving the Schrödinger equation grows exponentially with respect to system size. Alternatively, quantum computing can potentially address this exponential cost problem by utilizing the collective properties of quantum states, including superposition, interference, and entanglement, to store the wave function using a linear number of qubits. In a seminal work, Peruzzo et al. reported the use of iterative quantum phase estimation (QPE) algorithms to handle electronic correlation effects given a numerically best wave function within the space spanned by the basis set [6]. While an advantage in computational scaling against its classical counterpart is achieved, the long circuit depths required to generate the wave function cause QPE's accuracy to be severely constrained by the physical characteristics of currently available noisy intermediate-scale quantum (NISQ) devices, such as coherence time issues and error rates. Though the mitigation of these physical limitations via the implementation of a fault-tolerant scheme has been reported, the resource requirements are still too large for practical deployment [7][8][9].
In contrast, the use of the hybrid quantum-classical variational quantum eigensolver (VQE) algorithm to solve the electronic structure problem is considered the most suitable model for chemical applications, as this algorithm mitigates the significant hardware demands QPE places on NISQ devices [10,11]. VQE is based on the Rayleigh-Ritz variational principle:

E ≤ ⟨Ψ|Ĥ|Ψ⟩ (1)

where Ψ is the molecular wave function, Ĥ is the electronic Hamiltonian, and E is the ground-state energy. The electronic Hamiltonian is written in second-quantized form:

Ĥ = Σ_{pq} h_{pq} â†_p â_q + (1/2) Σ_{pqrs} g_{pqrs} â†_p â†_q â_r â_s (2)

where h_{pq} and g_{pqrs} represent one- and two-body integrals, and â†_p and â_p are anti-commuting creation and annihilation operators, respectively. Equation (2) is mapped to a qubit Hamiltonian by using one of three encoding methods: Jordan-Wigner, parity, or Bravyi-Kitaev [12][13][14]. VQE uses a quantum processing unit (QPU) to prepare a parametric version of either a heuristic ansatz that is easily implemented on a real quantum device or a chemistry-inspired trial ansatz, Ψ(Θ). This trial ansatz depends on a vector of real-valued variational parameters, Θ = {θ_i}, which is variationally tuned to minimize the energy expectation value. A classical processing unit, in turn, collects the quantum computer data and optimizes the parameters within the variational loop. The number of qubits required for the quantum calculation depends on the number of spin orbitals considered within the chemical system in question. A pioneering application of VQE is its examination of the ground-state properties of H2 using 2 qubits [6]. The method has also been extended to other molecular applications (LiH, H2O, BeH2, NH3, CH4, and CO), with simulation scales ranging from 4 to 20 qubits [15][16][17][18]. In these studies, calculated ground-state energies achieve chemical accuracy when compared to results computed using a classical matrix eigenvalue decomposition and full-configuration-interaction (FCI) method. The simulation of an even larger C2H4 molecule requiring 28 qubits with reasonable computing resources has also been demonstrated by employing group symmetry to significantly reduce the circuit depth of the ansatz [19].
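To make the hybrid loop concrete, the minimal sketch below (illustrative C++ only) mimics the classical half of VQE: a parameter vector is repeatedly updated by a classical optimizer so as to minimize an energy expectation value. The energy_expectation function here is a mock surrogate with an arbitrary minimum, not an actual quantum evaluation of ⟨Ψ(Θ)|Ĥ|Ψ(Θ)⟩, and the plain finite-difference gradient step stands in for optimizers such as BFGS used in practice.

#include <cstddef>
#include <cmath>
#include <iostream>
#include <vector>

// Placeholder for the quantum step: in a real VQE run this value would come
// from preparing the ansatz |Psi(theta)> on the QPU and measuring <Psi|H|Psi>.
double energy_expectation(const std::vector<double>& theta) {
    // Toy surrogate with a minimum at theta = (0.3, -0.7); illustrative only.
    return std::pow(theta[0] - 0.3, 2) + std::pow(theta[1] + 0.7, 2) - 1.0;
}

int main() {
    std::vector<double> theta = {0.0, 0.0};   // initial variational parameters
    const double step = 0.1, eps = 1e-4;
    for (int iter = 0; iter < 200; ++iter) {
        // Classical optimizer step: finite-difference gradient descent.
        for (std::size_t i = 0; i < theta.size(); ++i) {
            std::vector<double> tp = theta, tm = theta;
            tp[i] += eps; tm[i] -= eps;
            double grad = (energy_expectation(tp) - energy_expectation(tm)) / (2 * eps);
            theta[i] -= step * grad;
        }
    }
    std::cout << "E_min ~ " << energy_expectation(theta) << "\n";
    return 0;
}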
Despite progress for simple molecular systems, the application of VQE to describe realistic chemical systems with technological applications remains challenging for the foreseeable future, as larger systems demand an impractical number of qubits, rendering the calculations prohibitive and inferior to classical methods. Although progress has been made in scaling up hardware with respect to the number of available qubits, short coherence times and sensitivity to the external environment are still limiting factors for performing quantum computing calculations on large chemical systems. The concomitant drastic accumulation of noise requires expensive mitigation protocols to reach the desired level of accuracy [8,20]. To make VQE practical in the near term, hardware-friendly multi-scale hybrid quantum-classical embedding algorithms have been proposed to partition large-scale systems into fragments, confining the application of quantum simulation to more manageable active regions [16,[21][22][23][24][25][26]. A particular category of embedding consists of reducing the molecular orbitals to a subset of active orbitals that provide a reliable description of the static electronic correlation; this quantum subsystem is then embedded in the mean field generated by the inactive electrons [27]. Within this scheme, the expectation value of the Schrödinger equation in the active space is evaluated over a quantum circuit using VQE on a quantum simulator or quantum device. The rest of the system is then treated with classical Hartree-Fock (HF) [28] or density functional theory (DFT) methods. Such an embedding approach has been employed previously to investigate the ground-state properties of H2O and the dissociation profiles of N2 and O2 [27] (see Supplementary Material). Additionally, within this study the energetics associated with the cleavage of the C-C bond in C2H4O were examined, a reaction system beyond the reach of all-electron VQE in the absence of hardware that provides the resources required to handle the large number of qubits and the circuit-depth requirements. This technique, in combination with algorithms to reduce circuit depth, has also been used to simulate the triple-bond-breaking process in butyronitrile [29]. From a chemistry perspective, technologically relevant reactions entail more than the simple stretching of covalent bonds or changing of bond angles. Recently, we utilized this embedding approach to examine the coupling of CO2 and NH3, a representative complex reaction that leads to the formation of the NH2-COOH species [30,31]. When referenced to results calculated using classical coupled cluster with single and double substitutions (CCSD), the embedding approach performs better than HF in calculating a reaction profile despite the drastic reduction in qubit resources.
Further application of this embedding algorithm to study complex reactions has received limited attention. This study aims to extend the embedding algorithm within the VQE framework to hydride complexes of Li. We also previously reported the first-of-its-kind all-electron VQE simulations of the ground-state properties of LiHn (n = 1-3) species, including their singly charged ions [31]. The present manuscript is an extension of our prior studies, describing the application of the embedding approach in the simulation of the hydrogenation of Lin (n = 2-4) clusters. This reaction system provides a simplified model for exploring the interaction between hydrogen atoms and metals, and can be applied to the chemisorption of hydrogen on metal surfaces [32]. This study could also serve as a baseline testbed for understanding previously reported results suggesting the potential of Li-based metal systems as hydrogen storage media [33][34][35]. An additional aim is to further benchmark the embedding approach with respect to classical multireference ab-initio methods for reactions requiring an accurate description of bond breaking and formation.
Methodology
In preparation for the embedding calculations, the geometries of all local minima (reactants and products) associated with the Li2 + H2, Li3 + H2, and Li4 + H2 reactions were optimized using classical spin-polarized plane-wave DFT, as implemented in the Vienna Ab-Initio Simulation Package (VASP) version 5.4.4 [36,37]. The Perdew-Burke-Ernzerhof (PBE) functional [38], plane-wave basis sets with a cutoff energy of 520 eV, and projector-augmented-wave [39] pseudopotentials [40] were employed. The valence electrons treated self-consistently were taken to be 1s¹ for H and 2s¹ for Li. A three-dimensional 30 Å × 30 Å × 30 Å periodic box was used for each simulation model to exclude artificial periodic interactions. The Brillouin zone was sampled with a Γ-point k-point mesh. The ionic and electronic convergence limits were set to 0.03 eV/Å and 1 × 10⁻⁵ eV, respectively. The Methfessel-Paxton scheme [41] was utilized with a modest smearing width of 0.1 eV, and the total energies were extrapolated to σ → 0.
To determine the reaction path and to locate the transition-state structures along the H2 dissociation routes, the climbing-image nudged elastic band (CI-NEB) method was employed [42]. Thirteen to nineteen images interpolated between the initial and final states were used. The reaction barriers were referenced to initial states constructed by placing a molecular H2 on the clusters, which was then fully relaxed using DFT-PBE to obtain weakly adsorbed H2 models. The geometries of the final states are based on previously reported most stable structures of Li2, Li3, Li4, Li2H2, Li3H2, and Li4H2, in which the adsorbed H atoms in the hydrogenated species are two-fold coordinated to two different cluster sites [32,43,44]. Geometry re-optimization using plane-wave DFT-PBE was conducted prior to the CI-NEB calculations. The use of plane-wave DFT in combination with CI-NEB for reaction-pathway preprocessing has been adopted in previous studies [45].
At each of the DFT-PBE-optimized structures along the reaction pathway, single-point quantum computing calculations were performed, and the results were compared to those obtained from classical HF and post-HF quantum chemistry methods. Here, we used a quantum embedding scheme which allows for the treatment of a select number of electrons and orbitals to facilitate the simulation. In this framework, the electronic structure of the full system is broken into fragments consisting of (i) the active orbitals, which define a subset of valence electrons and frontier orbitals, and (ii) the environment [27]. Each region is described quantum mechanically, with the environment treated using classical HF, while a post-HF quantum mechanical description of the active orbitals is carried out on the quantum simulator. The time-independent Schrödinger equation of the active space is connected to the exchange-correlation embedding potential of the environment, such that a new Hamiltonian is defined for the full system. A reduction in qubit resources is achieved since the VQE computation is restricted to the active orbitals.
The embedding calculations were carried out using the Qiskit Nature platform, a Python package that interfaces the quantum computing framework with the existing classical quantum chemistry software PySCF v2.6.2 to generate classical data such as electronic integrals in the atomic orbital basis [46,47]. The molecular orbitals were prepared by performing restricted open-shell HF calculations on the optimized geometries, and an active space with m electrons, n occupied orbitals, and p virtual orbitals, AS(m, n + p), was selected. The active space is identified by first looking at the active electrons involved in the reaction. The orbitals involved are two σ bonds, one each from H2 and the Li cluster considered. As seen below, these are converted into essentially two partial σ bonds between the two species in the transition state, prior to conversion into complete bonds in the product. Overall, this process involves at least a 4-electron, 4-orbital active space, AS(4e, 4o). Orbitals beyond this baseline are selected next, as the accuracy of quantum embedding generally improves with an increasingly larger active space [27]. We managed to push our hardware to use active spaces that require up to 14 qubits in a reasonable amount of time and memory. A unitary coupled cluster ansatz with single and double excitations (UCCSD) was used to represent the electronic wavefunction in the active space, and the Jordan-Wigner scheme was used to map the wavefunction onto qubits [3]. VQE simulations were carried out on the Qiskit statevector simulator along with the STO-6G basis set [28]. The gradient-based Broyden-Fletcher-Goldfarb-Shanno (BFGS) minimization algorithm was used for the energy minimization in the simulator calculations [48].
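As a sanity check on the qubit counts quoted above, note that the Jordan-Wigner mapping assigns one qubit per spin orbital, so an active space with n + p spatial orbitals requires 2(n + p) qubits irrespective of the electron count. The short snippet below (illustrative C++; this simple counting rule is the only assumption) reproduces the 14-qubit requirement of the largest active space used here.

#include <iostream>

// One qubit per spin orbital under the Jordan-Wigner mapping:
// an active space with `spatial_orbitals` orbitals needs twice that many qubits.
int qubits_for_active_space(int spatial_orbitals) {
    return 2 * spatial_orbitals;
}

int main() {
    // AS(4e, 7o), the active space used for Li2 + H2 in the text.
    std::cout << "AS(4e,7o): " << qubits_for_active_space(7) << " qubits\n"; // 14
    return 0;
}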
Results and Discussion
The structures of all reactant and product clusters considered in generating the reaction profiles are displayed in Figure 1. These geometries are taken from the lowest-energy stable configurations identified in previous exhaustive studies using classical DFT techniques [32,43,44]. For Li2, only one trivial linear configuration is possible. For Li3 and Li4, planar structures with C2v and D2h symmetries are reported, respectively. The geometry of the lowest-energy structure of Li2H2 is a rhombus, while that of Li3H2 can be viewed as a deformed trapezoid. For Li4H2, the lowest-energy predicted ground-state configuration is planar and two-dimensional, consisting of a distorted Li4 rhombus with the hydrogen atoms bonded on two adjacent sides of one Li atom site.

Spot checks were conducted to initially validate the embedding approach. In particular, the ground-state energies of Li4 and selected low-lying energy isomers of Li4H2 were calculated (see Figure 1). Predicted energy variations are compared using HF and post-HF methods such as CCSD, complete active space configuration interaction (CASCI), and complete active space self-consistent field (CASSCF), as implemented in PySCF. Table 1 summarizes the results calculated using the STO-6G basis set. In agreement with the post-HF methods, the embedding approach predicts that the modified version of Li4 (Li4(I); see Figure 1) is slightly less stable. For all Li4H2 isomers considered, the embedding method used here yields the correct stability trend: Li4H2 > Li4H2(I) > Li4H2(II). The Li4H2(I) isomer, whose metal atoms do not lie in the same plane, is a critical case since it exhibits quasi-degeneracy within less than 3 milli-Hartree (mHa) in the CCSD calculations. The embedding approach can unambiguously predict such a property. Additionally, it outperforms conventional HF for all considered isomers, despite the drastic reduction in the number of qubits. The predicted energies are within a few tens of mHa of the reference CCSD, and the discrepancy is much lower when compared to CASCI and CASSCF. In particular, the energy difference is not more than 1 mHa with respect to CASCI.

With geometries from classical computation in hand, we then turned to calculations using a quantum simulator to determine energies and reaction profiles. The potential energy curve for the Li2 + H2 reaction calculated from the embedding approach is shown in Figure 2. The relative energy is defined as E_INIT − E_TS, where the initial-state and transition-state energies are calculated using the respective methods. In the initial state, the bond axis of the hydrogen molecule is nearly perpendicular to Li-Li, with the dissociation initiated via H2 migration toward the cluster. At the transition state, the H-H bond distance is lengthened by ~0.2 Å relative to the initial state, indicating that the molecule starts dissociating over Li2. The process does not pass through a further stationary point, and, thus, the conversion to the Li2H2 product occurs in a single concerted step.

Figure 2 shows comparisons of the relative energies computed using classical multireference full-space FCI, as well as CASCI and CASSCF, in which active spaces similar to that of the embedding approach are used. All methods predict a kinetically activated and thermodynamically downhill process. The embedding method using an active space of 14 qubits (AS(4e, 7o)) is in good agreement with the FCI curve, albeit not within the desired chemical accuracy of 1 mHa. As the dissociating H2 moves toward Li2, the embedding curve is within 4 mHa of FCI. A larger deviation is observed at the transition state, where strong correlation effects are significant, with embedding slightly overestimating the barrier by 33 mHa. As the reaction proceeds to the product, the embedding curve remains mostly parallel to FCI with an energy difference in the 10-20 mHa range. With respect to CASCI, the embedding method displays a relatively lower discrepancy in energy across the entire reaction profile. The predicted barrier overestimates the CASCI value by a smaller amount of 26 mHa, while the energy deviation outside the transition state is not more than 2 mHa. We note that the CASSCF curve differs considerably from embedding, FCI, and CASCI. In particular, the method yields a visible discontinuity in the pre-transition-state region of the curve, where H2 begins to dissociate and each H atom begins to form a bond with Li2. Unlike CASCI, CASSCF orbitals tend to localize due to the additional optimization of orbital coefficients, leading to electrons preferentially correlated in one region of space over another [49,50]. While this results in a further lowering of the total energy, the unphysical localization causes the wavefunction to change discontinuously along the section of the reaction path where bond breaking and new bond formation begin. The resulting discontinuity in the energy is quite discernible due to the larger number of nearly degenerate virtual orbitals present in the active space. One can get around this issue by rotating the necessary orbitals with virtuals outside of the used active space; however, the same active space was used for the embedding, CASCI, and CASSCF calculations throughout this study.
The dissociation on Li3 starts with H2 physisorbed on a single Li site (Figure 3). The H-H bond length then expands as the molecule approaches the substrate. At the transition state, H2 is essentially cleaved, and both H atoms stay near the initial Li site, concurrently forming H-Li bonds with a bond length of 1.70 Å. Finally, both H atoms move away from each other and toward the adjacent bridge Li atoms. Embedding using an active space of AS(5e, 7o) predicts an exothermic process with a reaction energy of −43 mHa. The activation barrier of 99 mHa corresponds to the H-H bond cleavage and the formation of Li-H bonds, with subsequent steps not passing through a further stationary point. By comparison, FCI exhibits a transition state occurring one step earlier, with a lower barrier than the embedding approach. We note that the structural difference between the transition states is minimal in that the H2 molecule is cleaved, and the variation in the Li-H bond length is < 0.03 Å. Moreover, both the embedding and FCI methods show a qualitative consistency across the potential energy curve. Both the embedding and CASCI methods yield a similar location of the transition state and a similar calculated energy barrier, overestimating the FCI barrier by 20 mHa. The transition-state location predicted by CASSCF is more consistent with FCI, but it underestimates the FCI barrier by 20 mHa. The overall variation between the two curves is also more discernible. As in the Li2 + H2 case, the level of agreement between embedding and CASCI is relatively better.
The potential energy curve for H2 dissociation on Li4 with an irregular rhombic molecular geometry is shown in Figure 4a. Initially, H2 is weakly physisorbed at a single Li site. Breaking of the H-H bond then occurs as the molecule approaches the Li4 cluster. Simultaneously, Li4 folds along its short diagonal to facilitate the addition of H atoms. At the transition state, the two triangles sharing a common edge in Li4 are oriented perpendicular to each other, while the H atoms bind to adjacent bridge sites. Finally, Li4H2 reorients toward the most stable rhombic configuration. Here, embedding calculates an activation barrier of 62 mHa and a reaction energy of −58 mHa. Alternatively, H2 can also be weakly adsorbed along the short diagonal of Li4, parallel to the cluster plane (Figure 4b). Proceeding from the initial step, H2 gradually moves toward Li4, and the molecule interacts with adjacent Li atoms at the transition state. It then begins dissociating as it moves away from the short diagonal, forming the rhombic Li4H2 product. This path is also exothermic but leads to a barrier of 46 mHa, which is less kinetically activated than starting from H2 weakly physisorbed at a single Li site (Figure 4a).

The embedding scheme predicts potential energy surfaces with double wells due to the presence of an additional product channel. For the first pathway (Figure 4a), this additional channel involves H-Li bond formation as H2 approaches the cluster to produce a preliminary H2-Li4 pair. Embedding finds this process to be slightly downhill (−2 mHa), with a barrier of 31 mHa. This intermediate then undergoes an isomerization step in which the scission of the adsorbed H-H bond is followed by H-atom migration to the adjacent Li-Li sides to yield the final product. For the second pathway (Figure 4b), the additional product channel consists of each H atom three-fold bonded to the cluster. The barrier height associated with the subsequent isomerization step is 9 mHa, which is 37 mHa smaller than the transition state for the preceding step.
The potential energy curves obtained using the classical CCSD method are provided in Figure 4a,b. Due to the large computational cost, FCI could only be calculated for the relatively smaller H2 + Li2 and H2 + Li3 reaction systems. The embedding results generally compare well with the CCSD curves. The prediction that the hydrogenation of Li4 transits through an additional stationary point is consistent with CCSD, although the barriers are slightly overestimated. Both embedding and CCSD predict that the hydrogenation reactions considered are thermodynamically downhill and that the second pathway (Figure 4b) is more kinetically favorable. For the H2 + Li2 and H2 + Li3 reaction systems, we find that the embedding curves follow CASCI over the entire region and that the more expensive CASSCF treatment yields an inferior quantitative agreement. These trends are also observed for the hydrogenation of Li4.
The total numbers of qubits and excitation parameters (N_ex) utilized by the embedding method for the various reaction systems were compared to those of conventional VQE (see Table 2). Assuming a singlet state for Li2 + H2 and Li4 + H2, N_ex is calculated as the sum of single- and double-excitation terms: N_ex = n_occ n_vir + n_occ n_vir (n_occ n_vir + 1)/2, where n_occ and n_vir represent the numbers of occupied and unoccupied spatial orbitals [51]. For Li3 + H2, the corresponding excitation parameters are multiplied by 2, assuming a doublet state (spin projection ½ or −½). Table 2 shows a significant reduction in computational resources when embedding is used. Compared to conventional VQE, embedding uses about 40-70% fewer qubits, while the reduction is up to several orders of magnitude for the excitation parameters. The calculations presented here are performed on a quantum simulator, and calculations on quantum computers would need to be performed to further assess the viability of the embedding algorithm for modeling complex chemical reactions. The current hardware requires the use of error mitigation techniques, and as these techniques mature, the desired simulation accuracy could be achieved [52,53]. A viability assessment on quantum computers is beyond the scope of this work and will become the subject of future investigations.
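As a worked example of this scaling argument, the sketch below (illustrative C++) evaluates the excitation-parameter count exactly as defined above: singles = n_occ·n_vir, doubles = singles·(singles + 1)/2, with the total doubled for a doublet state. The occupied/virtual splits passed in are illustrative placeholders consistent with the active-space sizes named in the text; the per-system values actually used are those of Table 2.

#include <iostream>

// Excitation-parameter count as defined in the text:
// singles = n_occ * n_vir, doubles = singles * (singles + 1) / 2,
// and the total is doubled for a doublet state (as for Li3 + H2).
long long n_ex(int n_occ, int n_vir, bool doublet = false) {
    long long singles = static_cast<long long>(n_occ) * n_vir;
    long long total = singles + singles * (singles + 1) / 2;
    return doublet ? 2 * total : total;
}

int main() {
    // Example: an AS(4e, 7o) active space has 2 doubly occupied and 5 virtual
    // spatial orbitals; the split for the doublet AS(5e, 7o) case is assumed.
    std::cout << "Singlet AS(4e,7o): " << n_ex(2, 5) << " parameters\n";        // 65
    std::cout << "Doublet AS(5e,7o): " << n_ex(3, 4, true) << " parameters\n";  // illustrative
    return 0;
}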
Conclusions
While quantum computing is considered a paradigm shift in our basic understanding of physical computation, the effective implementation of quantum computing in practical applications also depends on progress in both quantum computing hardware and quantum computing algorithms. From the hardware perspective, the number of available qubits and the noise level of the qubits should be weighed, whereas from the algorithmic perspective, the error-tolerance capability of the algorithm and the speed-up gained relative to classical computing should be considered. In addition, current quantum processing devices and quantum computing algorithms may also require pre- and post-processing using classical computers for basic operation within a realistic architecture.
Quantum computers have already been used to model chemical reactions, and as this technology continues to develop, it will have transformative implications for material design and discovery. The ability to model large systems and rapidly screen material properties will significantly benefit many application fields. Due to the limitations of current quantum devices and the available qubits, the classical-quantum hybrid approach is, at this stage, a practical way to solve real problems. To build such hybrid solutions, in this work we presented some insights into the quantum active-space embedding approach, which seeks a balance between accuracy and computational cost. We particularly focused on its deployment in the calculation of the activation and reaction energies of the hydrogenation of Li2, Li3, and Li4 clusters as a testbed for performance evaluation. These are prototypical examples of a complex chemical reaction between two reactants reacting in a synchronous fashion. The significant structural reorganization that occurs during bond breaking and formation is a compelling testbed for validation, offering data for future refinement. The considered reactions involve a transition state, which is typically more sensitive to approximations in the solution of the Schrödinger equation than reactants and products due to the presence of partial bonds. The quantum calculations, using the statevector simulator, successfully map out each process over the entire potential energy curve. The predicted potential energy curves qualitatively reproduce the classical FCI results for the H2 + Li2 and H2 + Li3 reactions in both the bond-breaking and bond-formation regions. A similar trend is found for the corresponding hydrogenation of Li4 when comparing the curves acquired using the embedding and CCSD approaches. The error notably increases as the reaction proceeds to the transition state, which could be attributed to its far more complex electronic structure in contrast to the reactants and products. The embedding results show better accuracy when compared to the CASCI method, which deploys the same active orbitals. Similarly, the embedding approach shows qualitative advantages over CASSCF in the description of the reaction profiles of the hydrogenation of the considered Li clusters, though this is attributed to the active space used, as mentioned previously. Our results confirm the qualitative viability of the deployed embedding approach for mapping out reaction profiles for molecular systems, while providing data to compare against future calculations using different flavors of embedding schemes and positioning the approach as a focal point for further ongoing development. Our study also indicates that it is possible to perform quantum computing on large reaction systems for practical applications.
Figure 2. Potential energy curves for the hydrogenation reaction of Li2 with H2. The atom coloring scheme follows the one in Figure 1. CASCI and CASSCF calculations utilize the same active orbitals as in the embedding approach (AS(4e,7o)).

Figure 3. Potential energy curves for the hydrogenation reaction of Li3 with H2. The atom coloring scheme follows the one in Figure 1. CASCI and CASSCF calculations utilize the same active orbitals as in the embedding approach (AS(5e,7o)).

Figure 4. Potential energy curves for the hydrogenation reaction of Li4 with H2. (a) Path 1: H2 is initially physisorbed at a Li site. (b) Path 2: H2 is initially above the Li4 plane. The atom coloring scheme follows the one in Figure 1. CASCI and CASSCF calculations utilize the same active orbitals as in the embedding approach (AS(6e,7o)).
Table 2. Qubits and excitation parameters (N_ex) used by the quantum embedding method vis-à-vis conventional VQE for the reaction systems considered.
"Chemistry"
] |
Development of a Wi-Fi Controlled Mobile Video Device on the Arduino NANO Basis
The development of a Wi-Fi-controlled video machine using Arduino NANO is described. The connection diagram of the Arduino NANO and the additional modules is presented. The relevance of the topic is underlined by the growing demand for remotely controlled video devices.

The developed Wi-Fi-controlled video device (machine) is powered by a battery connected to the charge controller module through a microUSB connector. The possible battery life is 5-6 hours without recharging.

In the process of developing the Wi-Fi-controlled video machine, a large amount of work was carried out, including adding the libraries required for correctly writing the programs and determining the conditions necessary for the device to function. A program (sketch) for controlling the motors of the machine was also developed, and the main components for building the device were identified.

For the mobile camera application to work, it is necessary to download the JoyLite application from the AppStore or PlayMarket, after which the smartphone connects to the Wi-Fi network and to the SANNCE HD 720p camera.

In the software part of the development of the Wi-Fi-controlled video machine using Arduino NANO, a program (sketch) was written in the Arduino IDE software environment for the SANNCE HD 720p camera and the JoyLite mobile application. This program converts the signals from the camera's stepper motors into commands for the asynchronous motors of the machine and adjusts the speed of the wheels.

During testing of the device, it turned out that the Wi-Fi-controlled video machine is sensitive to the speed settings; namely, the speed value should be no more than 255.

The developed Wi-Fi-controlled video machine can be used in various fields. For example, the device can be used in systems such as "Smart Home" or in security systems, or it can be implemented as a training project in a robotics course.
Introduction
People have long needed a safe way to collect video information in difficult situations, for example in places with poisonous or contaminated air, in narrow technical openings, or along a security perimeter. Earlier, all available means were used for this: mirrors, binoculars, and the like. Later, camera-probe systems came into use. Note that in an urban environment there are many places that a person cannot enter directly but that a small wheeled platform carrying a camera can pass through. The problem lay in controlling the platform and receiving the information in a timely manner.
The advent of cheap and affordable microcontrollers has led to the use of the Internet for "human-device" and "device-device" communication [1]. Today, most residents of modern cities transmit or receive data daily. At the same time, data transmission channels are being modernized at an incredibly dynamic pace.
Wi-Fi wireless LAN technology is no longer an innovation. It is used in many areas of human activity, including in everyday life. An example is the use of tablets, smartphones, laptops, or other devices to work on the Internet remotely.
Let us note the development of the Wi-Fi wireless network [2] in the form of public and secure access points available in educational institutions, hotels, restaurants, and other public places. Let us also take into account the availability of free access points for everyone, without restrictions, installed at train stations, in subways, shopping centers, and universities, and, by decision of local authorities, in other public places. Taking the above into account, studies aimed at developing a device based on Wi-Fi technology and on the most widely used microcontroller brands are relevant.
Literature review and problem statement
One way to implement data transmission over a wireless local Wi-Fi network in the developed device is to use the Arduino platform. This platform is used both in educational developments [3] and in modern systems, for example, in solar power plant control systems [4] or security systems [5].
Arduino has established itself as an accessible platform in programming, software, and development methods [6].
In [7], a review of about 100 works published in the period 2006-2016 examined applications and experiments related to the use of Arduino boards. Based on this review, great interest in the design of robots on Arduino platforms was noted. The author of the study also notes that Arduino is an ideal platform for educational robotics. In addition, Arduino is the most popular board for amateur and educational electronics [8] and robotics [9], with many variations, open-source projects, textbooks, and forums for beginners [7][8][9].
In [10], a four-wheeled platform was used to develop an electromechanical robot controlled by a computer and electronic programming. The authors of [10] developed a robot that can be controlled using mobile applications for Android. The researchers also noted that the designed robot is unmatched in quality and repeatability. However, this robot does not transmit video information.
In [11], a robot on a two-wheeled platform was considered. The system architecture contains a pair of DC motors, an Arduino microcontroller board, a single-axis gyroscope, and two axial accelerometers used to determine orientation. However, such a system needs constant position correction.
In [12], a four-wheeled robot was developed for observation using the Arduino and Android APIs. But this robot is too large.
More recent studies by the authors of [13] use four-wheel DC motors to control the robot and an ultrasonic sensor to avoid obstacles. The camera is connected to the robot via Wi-Fi. However, this robot could also be made smaller.
Thus, an analysis of the literature shows that Arduino-based microcontroller applications are very successful in the development and implementation of robotics. However, among the solutions considered, there are none that would allow using standard Arduino sets to minimize the robot size while maintaining its stability in space without constantly adjusting its position.
The aim and objectives of research
The aim is to develop a robot in the form of a Wi-Fi-controlled video machine on an Arduino-based platform that is as small as possible when using Arduino components while still performing a fairly wide range of functions.
To achieve the aim, the following objectives are set:
- development of a project of a Wi-Fi-controlled video machine with a minimum size of the wheel platform that ensures stability of movement;
- writing a sketch in the Arduino IDE software environment that can decode the signals from the stepper motors of the camera, transmit signals to the asynchronous motors of the machine, and set the rotation speed of the machine's wheels;
- configuring the SANNCE HD 720p camera model I21AG to work with the Wi-Fi network and with robot motion control.
Materials and research methods in the design and development of Wi-Fi-controlled video machines
The following Arduino modules are available: Arduino Uno, Arduino Leonardo, Arduino Ethernet, Arduino Mega 2560, Arduino Mini, Arduino Micro, Arduino Due, LilyPad Arduino, Arduino Pro, Arduino Yún, Arduino NANO 2.x, and Arduino NANO 3.0. When designing the robot, Arduino NANO 3.0 was chosen. The advantages of this module are its small size and its ATmega328 microcontroller with 32 KB of memory (of which 2 KB is allocated for the bootloader).
Connection to a PC is via a mini USB cable. Among the Arduino components in Table 1, enclosure No. 4 was selected, whose miniature size allows a video camera to be mounted. The following components were used for the Wi-Fi-controlled video machine:
1. Arduino NANO ATmega328P and a USB cable.
2. SANNCE HD 720p Wi-Fi camera model I21AG with a charger.
3. Machine body kit (platform with two DC 3V-6V gear motors and three wheels: two main and one auxiliary).
4. Two ceramic capacitors, 0.1-10 µF.
5. L298N motor driver.
6. An 18650 battery and a compartment for it.
7. MT3608 step-up (boost) converter.
8. MH TP4056-PROTECT 5V charge controller module.
9. Connecting wires.
A more detailed image of the elements necessary for the development of the Wi-Fi-controlled video machine, with the selected SANNCE HD 720p Wi-Fi camera model I21AG, is presented in Fig. 1.
The machine body kit (a platform with two DC 3V-6V gear motors and three wheels) with a universal chassis configuration can be adapted to various machine assemblies.
Also, for the correct operation of two gear motors, two ceramic capacitors of 0.1-10 µF are installed [14].
The Arduino L298N motor driver is used to control two low-power brushed DC motors or a low-power 4-wire two-phase stepper motor [14]. In practice, it is used to control the motors of small wheeled robots or of mobile toys [15]. To fix the module on a flat surface, the board has one mounting hole.
Driver power is supplied either from the Arduino controller, from another microprocessor control device, or from an external power source (power supply, battery). The supply voltage is 2-9 V DC. The control signal is 1.8-7 V DC. The maximum current consumption of the connected motors is up to 1.5 A.
The MT3608 step-up voltage converter is designed to provide an output voltage of up to 28 V with a load current of up to 2 A from a low-voltage source. The regulator on the converter board allows the desired output voltage level to be selected.
The MH TP4056-PROTECT 5V charge controller module is based on the TP4056 chip, a charge controller for Li-Ion and Li-Po 3.7 V batteries with a built-in temperature sensor. The TP4056 automatically completes the charging cycle when the voltage reaches 4.2 V and the charge current drops to 1/10 of the programmed value. The module indicates the charging process: during charging the red LED lights up, and when the battery is fully charged the green LED lights up and the red one turns off. The project uses an 18650 battery and a battery compartment. The PV-3 connecting wire with a cross-sectional area of 0.75 mm² is a power wire consisting of a monolithic copper conductive core (the type is produced with cross sections from 2 to 95 mm²) and polyvinyl chloride insulation. For convenience in distinguishing the poles in the circuit, wires of different colors were used.
As a method of studying the operation of the finished video machine, it was tested in action; the test data are given in Table 2.
Research results for the development of Wi-Fi-controlled video machines
Arduino NANO is a small, breadboard-compatible device based on the ATmega328 microcontroller with a clock frequency of 16 MHz. The controller provides 32 KB of flash memory for storing firmware, 2 KB of RAM, and 1 KB of non-volatile EEPROM memory for data storage. To connect to the computer, the CH340G chip is used (the driver for it must be installed before starting work with the Arduino). Arduino NANO can be powered through the Mini-B USB connector, by an external 6-12 V power supply (pin "Vin"), or by a stable 5 V external supply (pin "5V"). The board automatically switches to the higher-voltage power source.
The purpose of the charge controller module in the Wi-Fi-controlled video machine is to connect the power via a standard micro USB connector. At the same time, powering the device through a micro USB cable is impractical, because freedom of movement is needed for the Wi-Fi-controlled video machine. To achieve this, the project uses an 18650 battery and a battery compartment. If the battery runs out, the charger is used.
When choosing a Wi-Fi camera, we were guided by its size and the list of functions it performs. The SANNCE HD 720p model I21AG Wi-Fi camera was selected (Fig. 1). The SANNCE HD 720p I21AG intelligent wireless Wi-Fi IP camcorder with a night motion sensor has a number of features:
- P2P technology: the ability to watch what is happening in a room or opening and to control the camera in real time (from anywhere in the world; only Internet access is required);
- camera rotation: 350° horizontally, 90° vertically;
- clear night shooting thanks to built-in IR illumination, with visibility up to 8 meters (daytime shooting up to 30 meters);
- recording of video and photos to a memory card, smartphone/tablet, or FTP server (support for memory cards up to 64 GB);
- two-channel audio communication with a built-in microphone and speaker;
- security device mode: activation of an alarm about movement in the camera's field of view (a time period can be set during which a notification will be sent to the smartphone if movement is recorded in the camera's field of view);
- dual-stream technology (local recording is carried out separately from remote viewing);
- special software (JoyLite) with which the camera can be controlled.
Fig. 1. SANNCE HD 720p Wi-Fi Camera Model I21AG
The Arduino NANO is connected to the SANNCE HD 720p Wi-Fi camera and to the L298N motor driver. The camera is connected through six wires on pins A0-A5, which are attached to the control lines that the camera's microprocessor uses to drive its motors. The L298N motor driver is connected to the Arduino NANO as INT1-D3, INT2-D9, INT3-D10, INT4-D11 (Fig. 2).
Fig. 2. ArduinoNANO connection diagram
To connect the motors, they were wired to the Motor-A (right motor) and Motor-B (left motor) ports of the L298N driver, with connecting wires running from the driver to the gearmotors. The standard AFMotor.h library [15] was used to control the motors (see the illustrative sketch below).
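For illustration, the short sketch below shows one way the pin assignments described above could be initialized and the two gearmotors driven through the L298N. It is a minimal example written for this article, not the authors' code (which, according to the text, used the AFMotor.h library); the pin numbers follow Fig. 2, and the helper function names are our own.
// Illustrative sketch (assumption): direct L298N control with the pin
// mapping from Fig. 2 (INT1=D3, INT2=D9, INT3=D10, INT4=D11).
const uint8_t IN1 = 3;   // Motor-A (right motor) input 1
const uint8_t IN2 = 9;   // Motor-A (right motor) input 2
const uint8_t IN3 = 10;  // Motor-B (left motor) input 1
const uint8_t IN4 = 11;  // Motor-B (left motor) input 2

void setup() {
  pinMode(IN1, OUTPUT);
  pinMode(IN2, OUTPUT);
  pinMode(IN3, OUTPUT);
  pinMode(IN4, OUTPUT);
}

// Drive both motors forward at a given PWM speed (0-255);
// D3, D9, D10 and D11 are all PWM-capable pins on the Arduino NANO.
void driveForward(uint8_t speed) {
  analogWrite(IN1, speed); digitalWrite(IN2, LOW);   // right motor forward
  analogWrite(IN3, speed); digitalWrite(IN4, LOW);   // left motor forward
}

// Stop both motors.
void driveStop() {
  digitalWrite(IN1, LOW); digitalWrite(IN2, LOW);
  digitalWrite(IN3, LOW); digitalWrite(IN4, LOW);
}

void loop() {
  driveForward(120);   // forward at the FRW_SPEED value used later in the text
  delay(3000);
  driveStop();
  delay(1000);
}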
The Wi-Fi-controlled video machine (Fig. 3) is powered by a battery connected to the micro-USB charge-controller module. The circuit also includes the adjustable 2 A / 28 V MT3608 step-up converter to raise the voltage, because the rotation speed of the DC motors differs significantly depending on whether the Arduino board is powered from a computer, a power supply, or batteries. From the boost converter, power goes to the SANNCE HD 720p Wi-Fi camera and the Arduino NANO. The L298N motor driver, which in turn supplies power to the motors through ceramic capacitors, is connected to the charge-controller module together with the «+» and «−» wires of the boost converter.
Once all the components had been selected, the most complex part of the development was the software, which had to decode the signals of the camera's stepper motors and transmit the corresponding commands to the asynchronous motors of the machine.
Since a vector of three Boolean signals is sufficient to encode the movements of the machine's motors, three wires for each of the camera's motors were connected to the camera's microcontroller (the six wires A0-A5 in total).
In the software part of the development of the Wi-Fi-controlled video machine, program code (a sketch) was created for the Arduino NANO in the Arduino IDE environment, to be used together with the SANNCE HD 720p camera's "JoyLite" mobile application. The sketch decodes the signals of the camera's stepper motors, transmits commands to the asynchronous motors of the machine, and adjusts the rotation speed of the machine's wheels.
The algorithm that decodes the camera motor signals runs the whole time the machine is receiving a signal from the smartphone.
When the algorithm detects movement of the camera motors, the received data is passed to the motorsTrick function in the program code. This function in turn transmits the signals to the machine's motors through the motor driver.
After the commands have been transmitted to the machine's motors, the wheel speed can be adjusted.
The block diagram for decoding the signals of the stepper motors and supplying the corresponding signals to the asynchronous motors is shown in Fig. 4, and some fragments of the sketch are shown below:
#define FRW_SPEED 120 // forward speed (0-255)
#define BKW_SPEED 90 // backward speed (0-255)
#define TURN_SPEED 95 // rotation speed (0-255)
#define MOVE_TIME 3 // time that the car drives after a command (seconds)
#define TURN_TIME 0.4 // time that the machine turns after a command (seconds)
#define TIMEOUT 700 // driver polling timeout (duration of signal sending to the camera motors)
#define START_DELAY 100 // turn-on delay, seconds (waiting for camera calibration)
Setting the speeds of forward movement, backward movement and rotation, together with the times for which the machine keeps driving or turning after a command, caused many problems during development. As it turned out, the Wi-Fi-controlled video machine is sensitive to its speed limits: the speed values must not exceed 255, the upper bound of the PWM range.
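To make the control flow concrete, the fragment below sketches how the decoded camera-motor state might be turned into commands for the machine's motors using the constants above. It is a hypothetical reconstruction written for illustration only: the signal-reading logic, the assumed encoding of the Boolean vectors, and the body of motorsTrick are our assumptions, since the full sketch is not reproduced in the article; only the constant names and the function name motorsTrick come from the text.
// Hypothetical reconstruction (not the authors' original sketch).
// Assumes the #define constants above and the IN1..IN4 pin constants
// from the wiring sketch given earlier; the camera's motor lines are
// assumed to be sampled on A0-A5, three Boolean signals per motor.
enum Command { CMD_STOP, CMD_FORWARD, CMD_BACKWARD, CMD_LEFT, CMD_RIGHT };

// Set the speed/direction of both gearmotors through the L298N.
void setMotors(int rightSpeed, int leftSpeed) {
  analogWrite(IN1, rightSpeed > 0 ?  rightSpeed : 0);
  analogWrite(IN2, rightSpeed < 0 ? -rightSpeed : 0);
  analogWrite(IN3, leftSpeed  > 0 ?  leftSpeed  : 0);
  analogWrite(IN4, leftSpeed  < 0 ? -leftSpeed  : 0);
}

// Translate a decoded command into motor signals (uses the #define values).
void motorsTrick(Command cmd) {
  switch (cmd) {
    case CMD_FORWARD:  setMotors( FRW_SPEED,  FRW_SPEED);   delay(MOVE_TIME * 1000); break;
    case CMD_BACKWARD: setMotors(-BKW_SPEED, -BKW_SPEED);   delay(MOVE_TIME * 1000); break;
    case CMD_LEFT:     setMotors( TURN_SPEED, -TURN_SPEED); delay(TURN_TIME * 1000); break;
    case CMD_RIGHT:    setMotors(-TURN_SPEED,  TURN_SPEED); delay(TURN_TIME * 1000); break;
    default:           setMotors(0, 0); break;
  }
}

void loop() {
  // Sample one camera motor's three control lines as a Boolean vector
  // and map it to a movement command (the mapping is an assumption).
  bool a = digitalRead(A0), b = digitalRead(A1), c = digitalRead(A2);
  Command cmd = CMD_STOP;
  if (a && !b)      cmd = CMD_LEFT;
  else if (!a && b) cmd = CMD_RIGHT;
  else if (c)       cmd = CMD_FORWARD;
  motorsTrick(cmd);
}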
In addition, with this connection to the camera's stepper motors, the camera's own rotation (pan/tilt) function is disabled. The appearance of the finished device is shown in Fig. 5; its overall dimensions are: width to the outer side of the wheel axle 155 mm, length 260 mm, height with wheels and camera 190 mm.
For the SANNCE HD 720p camera's JoyLite mobile application: first, the JoyLite application was downloaded from the AppStore or Play Market; after installing the application on the smartphone, the camera was paired with the known Wi-Fi network. Upon completion of the setup, the camera's live video is opened in the JoyLite application. The test results are given in Table 2, which reflects the effect of the type of surface on the operating time of the device, the effect of the type of lighting on the camera, and the type of control device. As can be seen from Table 2, the machine can be controlled both from a PC and from a smartphone. The image obtained from the video camera is sufficient for using the developed device for recording and observation.
A check of the camera's additional features, listed below, shows that they can be used in full:
- recording of video and photos to a memory card, smartphone/tablet, or FTP server (memory cards up to 64 GB are supported);
- two-way audio via the built-in microphone and speaker;
- security-device mode, i.e., activation of a motion alarm in the camera's field of view;
- dual-stream technology (local recording is carried out separately from remote viewing).
Discussion of research results on the development of a Wi-Fi-controlled mobile device on a wheeled platform (video machines)
The developed Wi-Fi-controlled video machine was tested. The results show high-quality image transmission (Table 2, HD = 1080p), which remains clear even in a dark room (720p) at distances of up to 8 meters, and in daylight at distances of up to 30 meters, in accordance with the camera's specifications. When the machine is connected to a smartphone or computer, it is easy to operate and requires no special skills, because its interface is similar to those widely used in computer games and radio-controlled toys. It can therefore be controlled even by a child, provided the necessary device has been configured for the machine.
It should be noted that the project's development plans include increasing the operating time of the Wi-Fi-controlled video machine without additional recharging to up to ten hours.
Using the developed sketch, the device allows the machine's motors to be controlled from the camera's mobile application via Wi-Fi on the basis of the Arduino NANO platform.
Disabling the camera's own rotation function is compensated by the turns of the robot itself.
Using the Arduino NANO platform makes it possible to place additional sensors on the same device and to add new functions in the future [7,13]; this, however, is the subject of further studies.
In addition, the further use of circuits already developed and tested during this research also increases the competitiveness of the product.
The tests performed demonstrate the operability and quality characteristics of the device, which are summarized in Table 2.
According to the results of the demand analysis, the developed Wi-Fi-controlled video machine can be used in various fields. For example, the device can be integrated into a Smart Home system [7] or a security system [12]. This project can also be used as, or carried out within, a training course in robotics [3,7].
Conclusions
1. The developed Wi-Fi-controlled mobile device is a video machine on a three-wheeled platform; using standard parts from Arduino kits, its dimensions were kept to the following values: width to the outer side of the wheel axle 155 mm, length 260 mm, height with wheels and camera 190 mm. In addition, the use of the Arduino NANO platform makes it possible to place additional sensors on the same device and to add new functions.
2. Connection over Wi-Fi networks showed high-quality image transmission: 1080p in daylight at up to 30 meters, and 720p in darkness or at night thanks to the built-in IR illumination, remaining clear even in a dark room at a distance of up to 8 meters. Thanks to the written sketch, controlling the movement of the Wi-Fi-controlled video machine is easy and requires no special skills, since its interface is similar to those widely used in computer games and radio-controlled toys.
3. The above characteristics of the device indicate that it can be used in Smart Home systems, security systems, etc. In addition, this project can be carried out as a training module within a robotics course. | 4,926.6 | 2020-06-26T00:00:00.000 | [
"Computer Science"
] |
Drawing Halin-graphs with small height
In this paper, we study how to draw Halin-graphs, i.e., planar graphs that consist of a tree $T$ and a cycle among the leaves of that tree. Based on tree-drawing algorithms and the pathwidth $ pw(T) $, a well-known graph parameter, we find poly-line drawings of height at most $6pw(T)+3\in O(\log n)$. We also give an algorithm for straight-line drawings, and achieve height at most $12pw(T)+1$ for Halin-graphs, and smaller if the Halin-graph is cubic. We show that the height achieved by our algorithms is optimal in the worst case (i.e. for some Halin-graphs).
Introduction
It is well-known that every planar graph has a planar straight-line drawing in an O(n) × O(n) grid [17,24] and that an Ω(n) × Ω(n) grid is required for some planar graphs [16] (definitions will be given in the following section). But for some subclasses of planar graphs, planar straight-line drawings of smaller area can be found. In particular, for any tree one can easily create a straight-line drawing of area O(n log n) [6]; the area can be improved to $n\,2^{O(\sqrt{\log\log n\,\log\log\log n})}$ [5], and to O(n) if the maximum degree is $O(n^{1-\varepsilon})$ [18]. Outerplanar graphs can be drawn with area $O(n^{1.48})$ [7], and with area O(n log n) if the maximum degree is bounded [15] or a constant number of bends per edge is allowed [2]. There are also some sub-quadratic area results for series-parallel graphs [2], though they require bends in the edges.
These existing results suggest that bounding the so-called treewidth of a graph may be helpful for obtaining better area bounds. In particular, trees have treewidth 1, and outerplanar and series-parallel graphs have treewidth 2. However, one can observe that the lower-bound graph from [16] can be modified to have treewidth 3, so we cannot hope to achieve subquadratic area for all planar graphs of constant treewidth. However, there are some subclasses of planar graphs that have treewidth 3 and a special structure that may make them amenable to be drawn with smaller area. This is the topic of the current paper.
Halin-graphs were originally introduced by Halin [20] during his study of graphs that are planar and 3-connected and minimal with this property. He showed that any such graph consists of a tree without vertices of degree 2 where a cycle has been added among the leaves of the tree. These graphs have attracted further interest in the literature, see for example [28,25,13,14,10]. It is folklore that they can be recognized in linear time since they are planar graphs and have treewidth 3, but a direct and simpler approach for this was recently given by Eppstein [10].
In this paper, we study how to create planar drawings of a Halin-graph that have small area. To our knowledge, no such algorithms have been given before, and the best previous result is to apply a general-purpose planar graph drawing algorithm that achieves area $O(n^2)$. In contrast to this, we exploit here that a Halin-graph consists of a tree T with a cycle C among its leaves, and give two results. The first one states that for any drawing of T, we can "fiddle in" the cycle C at the cost of increasing the height by a factor of 3. However, the resulting drawing has bends. For our second result, we take inspiration from one particular tree-drawing algorithm by Garg and Rusu [19] to create an algorithm that achieves straight-line drawings of area O(n log n). In fact, the height of our drawings, which is O(log n) in the worst case, can be bounded more tightly by O(pw(T)), where the pathwidth pw(T) is a well-known graph parameter. It is known that the pathwidth is a lower bound on the height of any planar graph drawing [12] and that the pathwidth of a Halin-graph is within a constant factor of the pathwidth of the tree T [13]. Therefore our algorithm gives an O(1)-approximation algorithm for the height of planar straight-line drawings of Halin-graphs if we ignore small constant terms. Similarly as was done for trees by Suderman [27] and Biedl and Batzill [1], we can also argue that the constant in front of "pw(T)" cannot be improved for some Halin-graphs. Our paper is structured as follows. After reviewing the necessary background in Section 2, we briefly argue in Section 3 how to use any tree-drawing algorithm to create (poly-line) drawings of Halin-graphs of asymptotically the same height. Section 4 gives the algorithm for straight-line drawings of small height, while Section 5 defines a class of Halin-graphs that have small pathwidth, yet require a large height in any (straight-line or poly-line) planar drawing. We conclude in Section 6.
Background and notations
We assume familiarity with graphs and basic graph-theoretic terms, see for example [8]. Throughout this paper, we use n for the number of vertices in a given graph G = (V, E). A tree is a connected graph without cycles. A leaf of a tree is a vertex of degree 1. A rooted tree is a tree together with one specified vertex (the root); this defines for any edge of the tree the parent-child relationship, with the parent being the endpoint that is closer to the root. In a rooted tree, the term leaf is used only for those vertices that have no children, i.e., the root is not considered a leaf unless n = 1.
Fix a rooted tree T. For any vertex v ∈ T, we use T_v to denote the subtree of T rooted at v, i.e., vertex v and all its descendants. We assume throughout that trees are ordered, i.e., come with a fixed cyclic order of neighbours around each vertex. In a rooted tree, this hence gives a left-to-right order of its children (starting in counter-clockwise direction after the parent). The leftmost leaf ℓ_L of T is the one reached by starting at the root and repeatedly taking the leftmost child until we reach a leaf. Define the rightmost leaf ℓ_R symmetrically. Note that ℓ_L = ℓ_R if T is a rooted path, i.e., a path with the root as one of its endpoints. If T consists of only one vertex (the root r), then ℓ_L = r = ℓ_R, but otherwise ℓ_L ≠ r ≠ ℓ_R.
Halin-graphs and skirted graphs: Let T be an (unrooted, ordered) tree without vertices of degree 2. To avoid trivialities, we assume that T has at least three leaves. Let H be the graph obtained by connecting the leaves of T in cyclic order; this is the Halin-graph formed by T (and sometimes denoted H(T)). Tree T is called the skeleton of Halin-graph H, and the edges of the cycle are called cycle-edges. See Figure 1.
Observe that any Halin-graph is planar, i.e., can be drawn without crossings in the plane. The condition 'no vertex has degree 2' is not crucial for our drawing algorithm (though it was crucial in the original study of Halin-graphs as minimal 3-connected planar graphs [20]). As in [13], we use the term extended Halin-graph for a graph H(T) obtained by taking an arbitrary tree T and connecting its leaves in a cycle in order, while a regular Halin-graph refers to a Halin-graph as above, i.e., the skeleton has no vertices of degree 2. Our drawing algorithms will be based on rooted, rather than unrooted, trees, and therefore exploit subgraphs of Halin-graphs formed by rooted trees. Let T be an (ordered) tree that has been rooted at vertex r. Let H be the graph obtained by connecting the leaves of T in order from left to right in a path; this is the skirted graph [25] formed by T (and sometimes denoted H−(T)). Graph H−(T) is a subgraph of H(T); it is missing either the edge (ℓ_L, ℓ_R) or (if the root r has degree 1) the path ℓ_L, r, ℓ_R.
Pathwidth and rooted pathwidth: The pathwidth of a graph G is defined as follows. A path decomposition is an ordered sequence X_1, ..., X_ξ of vertex-sets (bags) such that any vertex belongs to a non-empty consecutive subsequence of bags, and for any edge at least one bag contains both endpoints. The width of such a path decomposition is max_i {|X_i| − 1}, and the pathwidth pw(G) is the minimum width of a path decomposition of G. A graph consisting of a single vertex hence has pathwidth 0.
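For example, the path v_1, v_2, v_3, v_4 has the path decomposition {v_1, v_2}, {v_2, v_3}, {v_3, v_4} of width 1, and a star with center c and leaves a, b, d has the path decomposition {c, a}, {c, b}, {c, d}, also of width 1; both graphs therefore have pathwidth 1. (This small example is ours and is only meant to illustrate the definition.)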
We will in this paper almost only be concerned with the pathwidth of trees; here an equivalent and simpler definition is known. For a path P in a tree T, let T(T, P) denote the connected components of the graph obtained by removing the vertices of P. Suderman [27] showed that for any tree T we have $pw(T) = 1 + \min_P \max_{T' \in T(T,P)} pw(T')$, where the minimum is taken over all paths P in T (and the maximum over an empty collection is taken to be −1, so that a path has pathwidth 0). Our constructions will use a rooted tree T, and therefore consider width-parameters for rooted trees that are illustrated in Figure 2. Define as in [4] the rooted pathwidth rpw(T) as follows: rpw(T) = 1 if T is a rooted path, and otherwise $rpw(T) = 1 + \min_{P_r} \max_{T' \in T(T,P_r)} rpw(T')$, where the minimum is over all rooted paths P_r of T. (The recursive formula differs from the one for pathwidth only in that the path must end at the root; hence the name.) One can show that any tree T can be rooted at a leaf such that we have pw(T) ≤ rpw(T) ≤ 2pw(T) + 1 [4]. We call a path P_r that can be used to obtain the minimum a spine. The rooted pathwidth was actually used much earlier for the classification of the order of rivers and streams [21,26] and became known as the Horton-Strahler number: $HS(T) = \min_c \max_v \big(HS(T_v) + \chi(v \neq c)\big)$ if the root has at least one child (and HS(T) = 1 otherwise), where the minimum is over all children c of the root r, the maximum is over all children v of the root, and χ denotes the characteristic function. One can show [4] that the Horton-Strahler number and the rooted pathwidth are identical. We use the term spine-child for a child c where the minimum is achieved; this is the same as a child that maximizes the Horton-Strahler number among the children. (One can show that it belongs to a spine of T.)
Graph drawing: A poly-line is a polygonal curve, i.e., a curve that is the union of finitely many line segments; the transition between two such segments is called a bend. A planar poly-line drawing Γ of a graph G consists of assigning a point to each vertex and an (open) poly-line to each edge such that all points and poly-lines are disjoint, and the poly-line of an edge ends at the points of the endpoints of the edge. The drawing is called y-monotone if all poly-lines of edges are y-monotone, and straight-line if all poly-lines of edges are straight-line segments.
We assume throughout that identifying features (i.e., points of vertices and bends in poly-lines of edges) have integral y-coordinates. The layers of a drawing are the horizontal lines with integral y-coordinate that intersect the drawing; we usually enumerate them from top to bottom as 1, 2, ..., h. The number h of layers is called the height of the drawing (notice that this is one unit more than the height of the minimum enclosing box). Minimizing the height of drawings is the main objective in this paper. When constructing drawings, it will sometimes be expedient to use integral x-coordinates as well; we then use the term column for a vertical line with integral x-coordinate that intersects the drawing, and enumerate columns from left to right.
We usually identify the graph-theoretic object (vertex, edge) with the geometric object (point, poly-line) that corresponds to it in the drawing. All our drawings are required to be planar (i.e., without crossing edges) by definition. We often require that they are plane, i.e., that they reflect the given order of edges around every vertex and that (for a Halin-graph) the infinite region is adjacent to the cycle-edges.
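As an aside (not part of the original text), the Horton-Strahler recursion described above is straightforward to evaluate; the short C++ sketch below computes it for a rooted tree given as child lists, under the convention used in this paper that a single vertex has rooted pathwidth 1. The representation and function names are our own choices.
#include <vector>

// children[v] lists the children of vertex v in a rooted tree.
// Returns the Horton-Strahler number (= rooted pathwidth) of the subtree at v:
// 1 for a leaf; otherwise the maximum over the children, increased by one
// if that maximum is attained by at least two children.
int hortonStrahler(int v, const std::vector<std::vector<int>>& children) {
  if (children[v].empty()) return 1;            // a single vertex has value 1
  int best = 0, countBest = 0;
  for (int c : children[v]) {
    int h = hortonStrahler(c, children);
    if (h > best) { best = h; countBest = 1; }
    else if (h == best) { ++countBest; }
  }
  return (countBest >= 2) ? best + 1 : best;    // tie at the maximum -> +1
}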
Transforming tree drawings
In this section, we show that any order-preserving tree-drawing algorithm can be used to obtain poly-line drawings of Halin-graphs. Put differently, we can draw the skeleton-tree T and "fiddle in" the cycle-edges. As it will turn out, we do not need to use a drawing of T; it suffices to take a drawing of a suitably chosen subtree of T, which may make the height bound a bit smaller and (as we will see) give a tight bound.
The following defines the subtree of T that we draw; see also Figure 1. Let the inner skeleton of a Halin-graph be the tree obtained by deleting all leaves of the skeleton T. We say that a tree leaf-extends another tree if the former can be obtained from the latter by (possibly repeatedly) adding a leaf incident to a leaf of the previous tree. The leaf-reduced inner skeleton of a Halin-graph H(T) is the smallest subgraph of the inner skeleton that can be leaf-extended to the inner skeleton. We now have the following result:
Theorem 1. Let H(T) be an extended Halin-graph. If its leaf-reduced inner skeleton has an order-preserving poly-line drawing Γ of height h, then H(T) has a plane poly-line drawing of height 3h.
Proof. Figure 3 illustrates how to find this drawing, with the final result in Figure 1b.As a first step, insert a dummy-vertex at every bend of Γ to get a straight-line drawing Γ d of a tree T d that is tree T with some edges subdivided.Also subdivide the same edges in trees T and T (where T is the inner skeleton of H(T )) to get trees T d and T d .Next, convert Γ d into a flat visibility representation Γ vr of T d .This consists of assigning a horizontal segment s(v) to every vertex and a horizontal or vertical segment s(e) to every edge such that the segments are interior-disjoint and the segment of edge (v, w) ends at s(v) and s(w).We can always do such a conversion while giving integral y-coordinates to all segments and maintaining the same height and planar embedding [3].
We next convert visibility representation Γ vr of T d into a visibility representation Γ vr of T d .Recall that T is a leaf-extension of T , so we can obtain T d by repeatedly adding a leaf incident to a leaf p of the current tree.Since p is a leaf, there is no incident horizontal edge next to one end (left or right) of its segment s(p).We place a segment for at this end (inserting columns if needed to make space), and connect it horizontally.Repeating this gives a visibility representation Γ vr of T d .By inserting further columns, we may assume that any segment s(v) in Γ vr has at least one unit width and overhangs any incident vertical edge-segment by at least one unit.
Next, triple the grid, i.e., insert a new grid-line before and after each existing one.In consequence, we can surround the entire drawing of Γ vr with a cycle C that traces along all segments.Formally, C consists of all those points that are horizontally or vertically exactly one unit away from segments of Γ vr , and these points form a cycle since we tripled the grid.Let Γ C be the resulting drawing.Now we insert the leaves of the skeleton.Let an angle of a vertex v in T be any two consecutive edges e, e at v in T in the planar embedding.Because s(v) overhangs its incident vertical edges, cycle C has a segment s α of at least unit length for every angle α of v such that placing a leaf on s C and connecting it vertically puts edge (v, ) between e and e in the planar embedding.So for any v ∈ T and any angle α at v, insert Note that C runs within unit distance of s(e) and s(e ) at some point, and since e, e are consecutive at v, a part of C between this is within unit distance of s(v) throughout.Furthermore, since s(v) overhangs incident edge-segments, this part contains a horizontal segment s α .Insert as many leaves on s α as are required by the planar embedding of the skeleton (we can insert columns to widen s α if needed) and connect them vertically to s(v).This gives a flat orthogonal drawing Γ od : every vertex is represented by a horizontal segment, and every edge is a poly-line with only horizontal or vertical segments.Furthermore, the height is 3h and the drawing represents H(T d ) since we took care to re-insert the leaves exactly according to the planar embedding.Drawing Γ od can be converted to a poly-line drawing Γ d of H(T d ) of the same height [3].Finally by reverting dummy-vertices of T d back to bends, we obtain the desired poly-line drawing of H(T ).
Since every tree T has an order-preserving straight-line drawing Γ of height 2pw(T) + 1 [1], we get:
Corollary 1. Any extended Halin-graph H(T) has a plane poly-line drawing of height 6pw(T') + 3, where T' is the leaf-reduced inner skeleton of H(T).
Since every tree has pathwidth at most $\log_3(2n+1)$ [23], we can in particular draw extended Halin-graphs with height O(log n). The width can easily be seen to be O(n), so the area is O(n log n). Our construction may seem very wasteful (cycle C has many bends that could be removed with suitable post-processing steps), but as we shall see in Theorem 4, the height bound is tight, even for some regular Halin-graphs.
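In symbols, the height bound of Corollary 1 combined with the pathwidth bound just cited gives $6\,pw + 3 \le 6\log_3(2n+1) + 3 \in O(\log n)$, and multiplying by the O(n) width yields the stated O(n log n) area (a routine calculation, spelled out here only for convenience).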
Straight-line drawings
The transformation of Section 3 creates poly-line drawings, and it is not at all clear whether one could convert them into straight-line drawings without changing the height. We hence give a second, completely different algorithm that creates a straight-line plane drawing of a Halin-graph, at the cost of roughly doubling the height. (The width may be exponential, so this construction is of mostly theoretical interest.) Crucial for our result is that it suffices to construct poly-line drawings in which all edges are drawn as y-monotone curves; by the result of Pach and Tóth [22] or Eades et al. [9], such drawings can be converted into planar straight-line drawings of the same height.
The algorithm proceeds by considering an increasingly larger subtree T' of the skeleton T (rooted at an arbitrary leaf), and drawing the skirted graph H−(T'). There are three edges (called connector-edges) that connect H−(T') with the rest of H(T): they attach at the root and at the leftmost and rightmost leaf of T'. To be able to add them later with a y-monotone curve, we restrict the locations of their endpoints. So we specify below whether the leftmost and rightmost leaf should have empty rays towards west (W) or east (E). We also restrict the root to be in the leftmost column and either as far north (N) as possible or as far south (S) as possible; sometimes either placement is acceptable, and we use W to indicate this. The full set of restrictions is as follows:
Definition 1. Let T be a rooted tree with rpw(T) ≥ 2 (and therefore ℓ_L ≠ ℓ_R). Let Γ be a plane poly-line drawing of H−(T) in layers 1, ..., h (enumerated top to bottom), where h ≥ 2. We call Γ an α_L α_r α_R-drawing, for α_L, α_R ∈ {W, E} and α_r ∈ {N, W, S}, if it satisfies the following (see also Figure 4):
(d1) ℓ_R is in layer 1 and ℓ_L is in layer h. Root r is in the leftmost column and is the only element of Γ in that column.
(d2) For X ∈ {L, R}, if α_X = W, then the westward ray from ℓ_X is unobstructed (i.e., intersects no other element of Γ). Otherwise (α_X = E) the eastward ray from ℓ_X is unobstructed.
(d3) If α_r = N, then r is in layer 2; if α_r = S, then r is in layer h − 1; if α_r = W, then r may be in an arbitrary layer.
We assumed rpw(T) ≥ 2 in the above definition since otherwise ℓ_L = ℓ_R, and then condition (d1) cannot be satisfied for h > 1. We hence create drawings for trees T with rpw(T) ≥ 2 and deal with subtrees that do not satisfy this as special cases. The construction works for both regular and extended Halin-graphs, but the latter may require a bit more height. To express this succinctly, set χ_ext(T) to be 1 if T contains a degree-2 vertex that is not the root (this in particular implies that H(T) is not regular), and χ_ext(T) = 0 otherwise. Note that χ_ext(T') ≤ χ_ext(T) for any subtree T' of T.
The case rpw(T) = 2 and some useful observations: The drawing for T if rpw(T) = 2 is a bit special; we can save two rows (compared to drawings for higher rooted pathwidth) at the cost of no flexibility for the y-coordinate of the root.
Lemma 1. Let T be a rooted ordered tree with rpw(T) = 2. Then for any α_L, α_R ∈ {W, E}, graph H−(T) has a plane y-monotone α_L W α_R-drawing of height 3 + 2χ_ext(T).
Proof. See Figure 5a for the following construction. Fix a spine P that goes from the root to a leaf, and place P on one layer, with the root leftmost.
Any T ∈ T (T, P ) has rooted pathwidth 1 since P is a spine.If χ ext (T ) = 0, then T has no vertices of degree 2, so it is a single leaf.Place it in the layer above or below P depending on whether T is right or left of the spine P .The cycle-edges can now be completed along these layers.If χ ext (T ) = 1, then initially contract all vertices of degree 2 and draw the tree as above.Then insert extra layers before/after the spine-layer and place degree-2 vertices (or a bend, if there are none) within those layers.
So we have constructed a WWW-drawing of height 3 + 2χ ext (T ).Any of the other drawing-types is constructed by "turning rays" around.We describe this in a more general lemma below since it will be useful for later cases as well.
Proof.Leaf R is in the topmost layer, so its incident edges are routed y-monotonically and leave horizontally or downward from R .To achieve α R = E, add a new layer above Γ, move R into it, and extend its incident edges via a bend near where R used to be.See also Figure 5b.This gives a y-monotone drawing where the bottom layer is unchanged (in particular, it still contains L with its unobstructed ray).The root is no longer be in N-position if it was before, but this is not a problem since we only promised an α L Wα R -drawing.Similarly one achieves α L = E by adding a layer below Γ and moving L into it.
The following will be useful when merging a drawing of a subtree T that uses fewer layers than permitted (e.g. because χ ext (T ) = 0 while χ ext (T ) = 1): We can "pad" such a drawing by inserting empty layers suitably, even while maintaining the drawing type.
Claim 2. Assume that H − (T ) has a y-monotone α L α r α R -drawing of height h ≥ 3. Then for any h > h it also has a y-monotone α L α r α R -drawing of height h .
Proof.First insert bends whenever an edge crosses a layer without a bend; now all edgesegments are horizontal or connect adjacent layers.If α r = N then r is in layer 2. Insert h − h horizontal grid-lines between layer 2 and layer 3, and add bends to any edge that crosses the inserted lines.So edge-segments again are horizontal or connect adjacent grid-lines, which means that we can change the y-coordinates of grid-lines to be integers (i.e., stretch the drawing between layers 2 and 3) without affecting planarity or y-monotonicity.This gives the desired α L Nα R -drawing since r remains in layer 2, and no changes were made within the top or bottom layer.The construction is symmetric (inserting layers between h − 2 and h − 1) for α r = S, and either construction can be used for α r = W.
The induction hypothesis: We create drawings for arbitrarily large rooted pathwidth by induction; the following states the induction hypothesis. (It differs from Lemma 1 in that we sometimes permit α_r = N or α_r = S, while Lemma 1 only holds for α_r = W.)
Lemma 2. Let T be a rooted ordered tree with rpw(T) ≥ 3, and let α_L, α_r, α_R be any of the combinations WWE, EWW, EWE, WNW and WSW. Then H−(T) has a plane y-monotone α_L α_r α_R-drawing of height 6rpw(T) − 9 + 2χ_ext(T).
Before proving this lemma, we briefly argue why it suffices; specifically, the height is 6rpw(T) − 9 + 2χ_ext(T) for a suitable choice of root for T.
Theorem 2. Every extended Halin-graph H(T) has a plane straight-line drawing of height at most 12pw(T) − 3 + 2χ_ext(T).
Proof.Root the skeleton T at a leaf r such that r pw(T ) ≤ 2pw(T ) + 1 [4].Apply Lemma 1 or 2 to this rooted version of T to obtain a y-monotone WWW-drawing of H − (T ) of height 6r pw(T ) − 9 + 2χ ext (T ) ≤ 12pw(T ) − 3 + 2χ ext (T ).The westward ray from L is unobstructed; we can draw ( L , r) along this ray until the leftmost column and then go up to r. Likewise we can draw ( R , r) to obtain a y-monotone drawing of H(T ).This can be transformed into a straight-line drawing of the same height [22,9].
The height of Theorem 2 is (roughly) a factor 2 worse than the height in Corollary 1.However, in terms of rooted pathwidth, Theorem 2 is tight, see Theorem 5 and 6.
The rest of this section is dedicated to the proof of Lemma 2. It suffices to show how to construct a WNW-drawing of height h := 6r pw(T ) − 9 + 2χ ext (T ) ≥ 9; the construction of a WSW-drawing is symmetric and all other cases are covered by Claim 1.
We use the following notations throughout.Let r be the root of T , let d be its degree, and let c 1 , . . ., c d be the children of the root, in order.We use the notation i L and i R (for i = 1, . . ., d) for the leftmost and rightmost leaf of T c i .Recall that HS(T ) = r pw(T ) ≥ 3. Let c s be the spine-child of the root; by definition of Horton-Strahler number this is the only child whose subtree could have the same rooted pathwidth as T .If r pw(T cs ) < r pw(T cs ) then (to avoid some cases) we re-assign s = d.Whether or not we reassigned, we hence have r pw(T c i ) < r pw(T ) for all i = s.
We prove the lemma by induction on r pw(T ), with the base case at r pw(T ) = 3.We do an inner induction on the size of the tree, and use as base case the case r pw(T cs ) < 3 (this must occur since at the leaf of the spine the rooted pathwidth is 1).Much of the construction will be the same for base case and induction step, and we therefore prove them together.
Drawing subtrees up to T cs :
The following algorithm (illustrated in Figure 6) states which drawing to use for each subtree and how to combine them.We build the drawing left-to-right, beginning with the root and then adding the subtrees at the children.
Figure 6: The constructions if c s is the rightmost child.
1. Place the root r in the leftmost column of layer 2. We reserve the eastward ray from r for edge (r, c d ), i.e., we will make sure that nothing is added to intersect it until the edge-segment is completed.
2. For i = 1, . . ., d−1, if r pw(T c i ) ≥ 2, then recursively (or via Lemma 1) obtain an WWEdrawing Γ i .This has height at most 6r pw( If needed we can use Claim 2 to make Γ i have height exactly h − 5. Place Γ i in layers 6, . . ., h, to the right of everything drawn thus far. If r pw(T c i ) = 1, then T i is a rooted path.Place its leaf on layer h and its degree-2 vertices (if any) on layer h − 1, with the root leftmost.
We place (parts of) the connector-edges of T c i as follows: • Connect c i to r by going upward to layer 3 and then (via a bend) to layer 2.
• We draw part of the connector-edge ( i R , i+1 L ) by going eastward from i R (in its layer) beyond Γ i , and adding (if needed) a bend to go downward to layer h.The eastward ray in layer h from here is reserved for edge ( i R , i+1 L ).
L is the leftmost leaf of T ; its westward ray is unobstructed as required.For i > 1, leaf i L was placed on the ray reserved for edge ( i−1 R , i L ), which is hence completed.Since this edge receives no further bend at i L , and was drawn y-monotonically extending from i−1 R , it is drawn y-monotone.
3.
To handle the spine-child c s we have three cases.
Assume first that s = d and r pw(T cs ) ≥ 2. Recursively (or via Lemma 1) obtain a WWW-drawing Γ s of H − (T cs ) and increase its height (if needed) to be h.Place Γ s in layers 1, . . ., h, to the right of everything drawn thus far.Connect c s to r by going upward to layer 2 and then horizontally to r. Edge ( s−1 R , s L ) is completed automatically, and s R is the rightmost leaf and its eastward ray is unobstructed.Assume next that s = d and r pw(T cs ) = 1.(This can happen if we re-assigned s.) Place the leaf of T cs on layer 1 and all other vertices on layer 2, with the root leftmost.(If |T cs | = 1 then place a bend in row 2.) Edge (r, c s ) is completed automatically, and s R has an eastward unobstructed ray.To route connector-edge ( s−1 R , s L ), we undo the partial routing that we did earlier; instead we go eastward from d−1 R and then upward to d L = d R in row 1. Assume finally that s < d, i.e., c s is not the rightmost child.The drawing here is much more complicated and will be explained below.
Drawing the remaining subtrees if s < d − 2: Our construction is done if s = d, so assume not.By the re-assignment, this implies r pw(T cs ) = r pw(T ).In particular, we are not in the base case of the inner induction, and we know that r pw(T cs ) ≥ 3.This allows us (crucially) to choose an WSW-drawing for H − (T cs ), which in turn permits us to route (r, c s ) y-monotonically while leaving sufficiently much space for T c s+1 , . . ., T c d .
We assume for now that s ≤ d − 2; the case s = d has been dealt with above and the case s = d − 1 is not difficult but requires a variation that will be explained below.
reserved for (r, cs) 4. Draw parts of the edge (r, c s ), by going from r to a bend in layer 3 (to the right of everything drawn thus far), then down to another bend in layer h − 1.We reserve the eastward ray in layer h − 1 from this bend for edge (r, c s ).
5
. By s ≤ d − 2 child c s+1 exists and is not c d .
If r pw(T c s+1 ) ≥ 2, then let Γ s+1 be a recursively obtained EWE-drawing of T c s+1 ; since r pw(T c s+1 ) < r pw(T ) this has height at most h − 4. Increase its height (if needed) so that it has height exactly h − 4, and place Γ s+1 in layers 3, . . ., h − 2 to the right of everything drawn thus far.If T c s+1 is a rooted path, then place all its vertices in layer h − 2. Draw the connector-edges as follows: • Connect c s+1 to r by going upward to layer 3 and then (via a bend) to layer 2.
• Leaf s+1 L is placed in layer h − 2; we reserve its eastward ray for edge ( s+1 L , s R ). • If T c s+1 is not a rooted path, then leaf s+1 R has an unobstructed eastward ray; we begin drawing edge ( s+1 R , s+2 L ) by going eastward from s+1 R , then vertically to layer h − 3 and reserving the eastward ray in layer h − 3 for ( s+1 R , s+2 L ).If T c s+1 is a rooted path, then s+1 R is in layer h − 2. We go up one unit to layer h − 3 and reserve the eastward ray for ( s+1 L , s R ).
6.For i = s + 2, s + 3, . . ., d − 1, we process T c i and its connector-edges as we did in Step 2, only we put the drawing three levels higher.
was placed in layer h − 2, and the edge was routed by going upward to layer h − 3. We now go eastward from there and then upward to layer 1.This is the only situation where a connector-edge receives bends when placing both endpoints, but one verifies that this route is y-monotone.
Recall from
Step 5 that the eastward ray in layer h − 2 was reserved for connector-edge ( s+1 L , s R ).We now add a bend in it to the right of everything drawn thus far, then go vertically to layer 1 and reserve the eastward ray.9. Finally, recursively obtain an WSW-drawing Γ s of T cs of height h.(This exists since c s is not the rightmost child, hence r pw(T cs ) = r pw(T ) ≥ 3 and induction can be applied.)Place Γ s to the right of everything drawn thus far.Since s R , c s , s L are in layers 1, h − 1 and h, respectively, this completes the connector-edges of T cs .
The case s = d − 1: Previously, we used an EWE-drawing for c s+1 and an WNW-drawing for c d in Steps 5 and 7.If s = d − 1, then c s+1 = c d takes on the roles of both of these drawings.The following step (see Figure 7) replaces step 5 and 6 if s = d − 1.We have constructed a WNW-drawing in all cases, and one easily verifies that all edges are drawn y-monotonically, hence Lemma 2 and with it Theorem 2 holds.
It is worth mentioning that this poly-line drawing can easily be found in linear time, as long as coordinates of vertices are expressed initially with via offsets to their parents, and evaluated to their final value only after finishing the construction of the entire tree.
Halin-graphs with maximum degree 3
Observe that in Figures 6 and 8 (where s ∈ {d − 1, d}) we are "wasting" layers; the same construction could have been done with three fewer layers. This leads to the following.
Lemma 3. Let T be a rooted binary tree with rpw(T) ≥ 2, and let α_L, α_r, α_R be any of the combinations WWE, EWW, EWE, WNW and WSW. Then H−(T) has a plane y-monotone α_L α_r α_R-drawing of height 3rpw(T) − 3 + χ_ext(T).
Proof. We again proceed by induction and show that there exists an WNW-drawing of height 3rpw(T) − 3 + χ_ext(T) (all other drawing-types are symmetric or obtained with Claim 1). We only sketch the necessary changes to the previous algorithm here; the reader should be able to fill in the details using Figure 9. The previous base-case construction gives 3 + 2χ_ext(T) layers. We can also achieve at most 4 layers, by placing a spine-vertex on layer 2 if the spine-child is the right child and on layer 3 otherwise. Using the better of the two (depending on χ_ext(T)) we hence have 3 + χ_ext(T) = 3rpw(T) − 3 + χ_ext(T) layers. In the induction step, we have d ≤ 2 children and hence always either s = d or s = d − 1. So construct a drawing of H−(T) as in Figure 6 or Figure 8, except use h = 3rpw(T) − 3 + χ_ext(T) and place drawing Γ_i for i < s in layers 3, ..., h.
If an extended Halin-graph has maximum degree 3, then its skeleton T is binary when rooting it at a leaf. Since we can do so and achieve rpw(T) ≤ 2pw(T) + 1, this implies as in the proof of Theorem 2:
Theorem 3. Every extended Halin-graph with maximum degree 3 and skeleton T has a straight-line drawing of height at most 6pw(T) + χ_ext(T).
Lower bounds on the height
Both papers that gave approximation algorithms for the height on tree drawings [27,1] also constructed trees where this bound is tight.In particular, Batzill and Biedl showed that there exists an ordered tree that requires height 2pw(T ) + 1 in any ordered drawing [1].In the same spirit, we now construct Halin-graphs that need as much height as we achieve with our algorithms. 1 Definition 2. For w ≥ 1, define C w and F w as follows: • C 1 consists of a path r, c (where r is the root) with a leaf attached at each of them on each side of the path.See Figure 10a.
• F w is obtained from C w as follows.Let r be the root of C w .Add a parent p and a grand-parent g to r and make g the root.Attach a leaf on each side of path p, g, r at each of p, g.See Figure 10b. 1 The graphs were chosen as to keep the argument as simple as possible; like much smaller trees would do.
• C w+1 is obtained as follows.Start with a spine consisting of vertices s 1 , . . ., s S for some sufficiently large constant S that we will specify later, and make s 1 the root.At each spine-vertex except s S , attach on each side of the spine L copies of F w via its root, for some sufficiently large constant L that we will specify later.See Figure 10c.We prove Lemma 4 by induction on w.In the base case (w = 1) vertex c in C 1 is surrounded by a 5-cycle in H − (C 1 ).Since we need one layer for c, and two more layers to surround it, any plane drawing of H − (C 1 ) requires three layers as desired.The induction step will proved over the next four subsections, but we sketch here the main idea.Fix an arbitrary plane poly-line drawing Γ of H − (C w+1 ) for some w ≥ 1. Tree C w+1 contains lots of copies of F w , hence of C w .Therefore, Γ contains lots of copies of H − (C w ); each of them uses at least h(w) layers by induction.We can argue that some copy of H − (F w ) inside Γ actually requires h(w) + 1 layers; this is the most difficult part that we defer to last.Furthermore, there are 5 polylines inside Γ that are disjoint from this copy of H − (F w ) and that "bypass" it (defined below).It is known that 5 bypassing polylines need 5 additional layers.Therefore the height is at least h(w) + 1 + 5 ≥ h(w + 1).
Preliminaries and preprocessing
We first introduce some terms concerning the abstract tree C w+1 .Recall that C w+1 is rooted and has a total order among the children of every vertex.We therefore have a total order among the leaves, starting at the leftmost leaf and ending with the rightmost one.However, we will use "left" to refer to the order of vertices within one level of the drawing, which may or may not reflect the order in the tree.To avoid confusion, we will therefore treat the order of chidren/leaves as if it were time, and so speak of the "first"/"last" leaf and that a leaf comes earlier than another.
We distinguish leaves of C w+1 (other than s S ) by whether they are on the before-spine or after-spine, i.e., before or after s S in the enumeration of leaves.Likewise for a spine-vertex s i = s S we distinguish the non-spine children by whether they are before or after the spine.Any such non-spine child g is the root of a copy of F w which we denote by F (g).For any two leaves , of C w+1 , the cycle-path from to consists of the subpath of the cycle-edges between and .Now we introduce some terms concerning drawing Γ. Enumerate the layers of Γ, from top to bottom, as 1, 2, . . ., h.We are done if h ≥ h(w + 1), so assume for contradiction that h < h(w + 1) = h(w) + 6.In fact we may assume h = h(w) + 5 because we can add empty layers.For two points p, q, we write p ≺ q (or "p is left of q") if p and q are on the same layer and p has smaller x-coordinate.
A few minor modifications to drawing Γ will make later arguments easier and do not affect the height.First, insert a bend into any edge-segment that crosses a layer without having a bend there.(These new bends may not have integral x-coordinates, but integrality of x-coordinate is never used in the lower-bound proof.)Second, do the following for any spine-vertex s i (with i < S) of C w+1 , and any non-spine child g of s i .Recall that g had three children; one is vertex p while two are leaves.Delete the two edges to these leaves; their sole purpose was to ensure that the Halin-graph is regular and they will not be used in the proof.With this, g now has degree 2. For the third modification, if (s i , g) is not drawn as a straight-line, then move g to the bend on (s i , g) nearest to s i .This makes (s i , g) a straight-line and (by the first preprocessing step) puts g either on the same level as s i or one level above or below; this will be frequently used below.
Recall that F (g) denotes the copy of F w attached at g.We use Γ(g) for the drawing of H − (F (g)) as it appears after these modifications.Since Γ(g) contains a drawing of H − (C w ) within, it must use at least h(w) layers.
Finally we briefly review the concept of bypassing (see also Figure 11a); we use a version here that is 90 • rotated from the one in [4].Recall that bends of a polyline (like all bends and vertices of Γ) are required to have integral y-coordinates.Definition 3. Consider a set of poly-lines π 1 , . . ., π k that are disjoint except (perhaps) at their endpoints.Let π be a poly-line that is disjoint from π 1 , . . ., π k .We say that π 1 , . . ., π k bypass π if there exists a layer that intersects π, and for i = 1, . . ., k poly-line π i begins and ends in layer and all points in π ∩ are between the two ends of π i .Lemma 5. [4] If a planar poly-line drawing Γ contains k poly-lines that bypass a poly-line π, and if π intersects h layers, then Γ uses at least h + k layers.
The ideal case
We first argue that the height-bound holds in one special case; we will show later that this situation must occur somewhere in C w+1 (up to symmetry), as long as S and L are big enough.We assume that the following holds (see also Figure 11b): π F (C1) There are three spine-vertices s j 1 , s j 2 , s j 3 that are all located in one layer ≤ 5. Furthermore, 1 ≤ j 1 < j 2 < j 3 < S and s j 1 ≺ s j 2 ≺ s j 3 .
(C2) For k = 1, 2, 3, vertex s j k has an after-spine child g j k and a before-spine child g j k on layer +1.In fact, s j 2 has five after-spine children on layer +1.
Furthermore, one of the spine-edges incident to s j 2 has a bend or endpoint b on layer + 1.If b is on edge (s j 2 , s j 2 −1 ) then g (3) ≺ b, otherwise b ≺ g (1) .
We will later argue that the following property holds automatically, given (C1-C4).
(C5) There exists a path π within Γ(g (2) ) that connects g (2) (which is on layer + 1) to layer + h(w) + 1, and all points in π ∩ ( + 1) lie strictly between g (1) and g (3) .Now we define five interior-disjoint paths in C w+1 as follows: (see also Figure 11c): • π 1 : This path begins at g j 1 , continues within F (g j 1 ) to the last leaf, and from there along the cycle-path to the first leaf of F (g j 3 ).From there it goes upwards in the tree to g j 3 .This path uses only F (g j 1 ) and F (g j 3 ) and cycle-edges among leaves that are before the spine.
• π 2 : This path begins at g j 3 , continues within F (g j 3 ) to the last leaf, and from there along the cycle-path to the first leaf of F (g (1) ).From there it goes upwards in the tree to g (1) .This path uses only F (g j 3 ) and F (g (1) ) and cycle-edges among leaves that are between s S and the first leaf of F (g (1) ) in the total order of leaves.
• π 4 : This path is built symmetrically to π 2 : begin at g j 1 , go to the first leaf of F (g j 1 ), from there along the cycle-path (in reverse) to the last leaf of F (g (3) ), and from there to g (3) .This path uses only F (g j 1 ) and F (g (3) ) and cycle-edges among leaves that the last leaf of F (g (3) ) or later.
• π 5 : Recall that one bend b of a spine-edge incident to s j 2 lies on layer + 1. Path π 5 begins at b, and goes along spine-edges, away from s j 2 , until it reaches either s j 1 or s j 3 .
From there it goes to the after-spine child on layer + 1, i.e., either g j 1 or g j 3 .Except for this last edge, π 5 uses only spine-edges.
Proof.Directly from the edges that they use, one observes that the five paths are disjoint from π, and from each other except that they may have endpoints in common.(We use here that g (2) lies between g (1) and g (3) in the order of children at s j 2 by (C3).)Assume that b is right of g (3) , the other case is symmetric.Then all five paths begin at a point in {g j 1 , g j 1 , g (1) } and end at a point in {g (3) , b, g j 3 , g j 3 }.Observe that g j 1 is necessarily left of g (1) , otherwise the straight-line segments (s j 1 , g j 1 ) and (s j 2 , g (1) ) would intersect.Likewise g j 1 ≺ g (1) and g (3) ≺ g j 3 , g j 3 .So all five paths connect a point on layer + 1 that is at or to the left of g (1) with a point on layer + 1 that is at or to the right of g (3) .Since π uses only points on ( + 1) that are strictly between g (1) and g (3) by (C5), the claim holds.
Guaranteeing conditions (C1-C4)
Now we argue that conditions (C1-C4) are satisfied at some subtrees if S and L are big enough.Recall that we assumed (for contradiction) that h = h(w) + 5. Since each copy of H − (F w ) uses at least h(w) layers, we therefore have only 5 layers for bypassing any copy of H − (F w ).Roughly speaking, this forces spine-vertices to be in the top 5 or the bottom 5 layers.Therefore (C1) holds if S is big enough.Next we argue that of the L attached copies of F w at a spine-vertex s, only L − 72 can share a layer with s.This, plus the preprocessing, forces (C2) if L ≥ 81.It also implies that many non-spine children satisfy (C4), and an appropriate choice among them ensures (C3).
To give the details, we first study various properties of non-spine children of one fixed spine-vertex s i with i < S.
Proof.There are h = h(w) + 5 layers in total, and by induction Γ(g) intersects at least h(w) layers.It therefore can avoid only the top 5 and the bottom 5 layers.
We say that g is bad if the layer of s i intersects Γ(g), otherwise g is good.
Claim 4. At most 72 non-spine children of s_i are bad.
Proof. We say that a non-spine child g has type (t, b) if the topmost and bottommost layers used by Γ(g) are t and b. By Observation 1 we have 1 ≤ t ≤ 6 and h(w) ≤ b ≤ h(w) + 5, so there are at most 36 types. Assume for contradiction that there are 73 = 2·36 + 1 bad non-spine children of s_i; hence three of them (say g_1, g_2, g_3) have the same type (t, b).
For k = 1, 2, 3, let B_k be a poly-line within Γ(g_k) that begins in layer t and ends in layer b. Let Q_k be a poly-line that starts at s_i (which is within layers {t, ..., b} since g_k is bad), goes along the straight-line edge to g_k (also within {t, ..., b}), and continues within Γ(g_k) until it reaches B_k. Note that B_1 ∪ Q_1, B_2 ∪ Q_2 and B_3 ∪ Q_3 are disjoint except at s_i, and reside entirely within layers {t, ..., b}. See also Figure 12a.
Exactly as in the proof of Lemma 5 in [1], one argues that this is impossible. Consider the drawing induced by ∪_k (B_k ∪ Q_k). Add a vertex v in layer t − 1 and connect it to the top ends of B_1, B_2, B_3 (they are in layer t). Likewise add a vertex v' in layer b + 1 and connect it to the bottom ends of B_1, B_2, B_3 (they are in layer b). This gives a planar drawing of K_{3,3}, with {s_i, v, v'} as one side and the points B_k ∩ Q_k for k = 1, 2, 3 as the other side. Contradiction.
Corollary 2. If L ≥ 37 then the layer of s_i is in {1, ..., 5} ∪ {h(w)+1, ..., h(w)+5}.
Proof. If s_i were in any layer in {6, ..., h(w)}, then by Observation 1 all 2L ≥ 74 non-spine children of s_i would be bad.
Claim 5.If s i is on layer where ≤ 5 and ≤ h/2, and if L ≥ 81, then s i has at least 5 good after-spine children on layer + 1.
Proof.There are L after-spine children, hence at least L − 72 ≥ 9 that are good.Any such good child g cannot be on layer by definition of good, and it is at most one layer away by the preprocessing.So g is on layer − 1 or + 1. Assume for contradiction that there at most 4 good after-spine children on layer + 1.So at least 5 good after-spine children are on layer − 1, call them g 1 , . . ., g 5 , enumerated in left-to-right order along the layer.We now have two cases.In the first case, ≤ h(w) (which is always true for w ≥ 2 since then h(w) ≥ 9 while ≤ 5).Since g 1 is good, drawing Γ(g 1 ) cannot use layer , so it is contained within layer 1, . . ., − 1.So it uses at most h(w) − 1 layers, which is impossible.
Since edge (s i , g k ) (for k = 1, . . ., 5) is drawn straight-line by the pre-processing, and Γ respects the planar embedding, the cyclic order of neighbours of s i must contain g 1 , . . ., g 5 in this order.The spine-edges and before-spine children at s i may appear somewhere between g 1 and g 5 in the cyclic order, but regardless of where they are, either g 1 , g 2 , g 3 or g 3 , g 4 , g 5 are a subsequence of the linear order of children of s i .By Claim 6 (proved below, but there is no circularity) drawing Γ(g 2 ) or Γ(g 4 ) hence uses a point on layer + h(w) + 1 = 5 + 3 + 1 = 9.This gives the required contradiction of our assumption.Now we explain how to satisfy (C1)-(C4).Assuming S ≥ 42, we have 41 spine-vertices s i with i < S. Assuming L ≥ 37, each of them is on one of 10 possible layers by Corollary 2. By the pigeon-hole principle, therefore, at least 5 of these spine-vertices are on one layer .After a possible vertical flip of Γ, we may assume ≤ h/2, therefore ≤ 5 by Corollary 2. 4 Among the 5 spine-vertices on , we can (by the Erdős-Szekeres theorem [11]) find a subsequence of √ 5 = 3 spine-vertices s j 1 , s j 2 , s j 1 such that j 1 < j 2 < j 3 and either s j 1 ≺ s j 2 ≺ s j 3 or s j 3 ≺ s j 2 ≺ s j 1 .After a possible horizontal flip of Γ we have s j 1 ≺ s j 2 ≺ s j 3 and therefore (C1) holds.
(C2) holds (assuming L ≥ 81) due to Claim 5 and a symmetric lemma, proved exactly the same way, for before-spine children.
To argue (C3), let g 1 , . . ., g 5 be the 5 after-spine children of s j 2 that are good and on layer + 1, enumerated in left-to-right order along the layer.Let g be a before-spine child of s j 2 that is on layer + 1, and notice that the cyclic order of neighbours of s j 2 contains g , s j 2 +1 , g 1 , . . ., g 5 , s j 2 −1 =: ρ as subsequence.Since the edges from s j 2 to g , g 1 , . . ., g 5 are straight-line by the pre-processing, the x-coordinate order of g , g 1 , . . ., g 5 along layer + 1 must fit the (cyclic) order ρ.Depending on whether g 3 is right or left of g , therefore either g ≺ g 1 ≺ g 2 ≺ g 3 or g 3 ≺ g 4 ≺ g 5 ≺ g , See Figure 12b.
Arguing (C5)
So we have now found subtrees such that (C1-C4) hold.This always implies (C5), but the argument for this is lengthy.We also need to prove the missing piece for Claim 5.Both will be done with the same argument as follows.Claim 6.Let s i (for i < S) be a spine-vertex on layer that has three good after-spine children g (1) , g (2) , g (3) on layer + 1 and the order of children at s i contains g (1) , g (2) , g (3) as subsequence.Then there exists a path π within Γ(g (2) ) that connects g (2) to layer + h(w) + 1, and all points in π ∩ ( + 1) lie between g (1) and g (3) .Proof.Recall that tree F w is built by extending tree C w ; let C be the copy of C w that is inside F (g (2) ).Also let I be the open interval of points on layer + 1 between g (1) and g (3) , so path π should intersects layer + 1 only in I.We need an observation.
Observation 2. H − (C) uses no points in I.
Proof.Define a cycle Q in H − (C w+1 ) as follows.Start at the unique child p of g (2) , go to its last child R (which is a leaf) and from there along the cycle-path to the first leaf of F (g (3) ).Go upwards in tree F (g (3) ) to g (3) and from there to s i .Continue symmetrically through F (1) , i.e., go from s i to g (1) to the last leaf of F (g (1) ), then along the cycle-path to the first child L of p and then to p. See Figure 13a.This cycle separates g (2) from H − (C) in the planar embedding since g (2) is between g (1) and g (3) in the order of children of s i .Now study the corresponding poly-line Q in Γ.Since g (1) , s i , g (3) is drawn with straightline segments between layers + 1 and , and since g (2) ∈ I and Γ is plane, all of I is on or inside Q.On the other hand H − (C) is strictly outside Q and the claim holds.
Let the pocket P be defined as follows; see also Figure 13b. For k = 1, 3, let B_k be a poly-line within Γ(g^(k)) that connects g^(k) to a point b_k on layer ℓ + h(w); this exists since Γ(g^(k)) spans at least h(w) layers and contains no point in layer ℓ. We choose b_k such that B_k is minimal, i.e., contains no other point on layer ℓ + h(w); in particular, all its points are hence in layers ℓ + 1, …, ℓ + h(w). Let the lid σ be the line-segment b_1 b_3; note that σ is not necessarily a segment of Γ. Now define pocket P to be the set bounded by B_1 ∪ ⟨g^(1), s_i, g^(3)⟩ ∪ B_3 ∪ σ, where the lid σ is included in P while all other points on the boundary are excluded. Note that any point in (ℓ + 1) ∩ P is in I, because B_1 and B_3 contain no points on layer ℓ or above by (C4). Assume for contradiction that all of Γ(g^(2)) (and in particular therefore H^−(C)) resides within pocket P. Then H^−(C) uses no points on layer ℓ + 1, because it does not use points in I. Therefore H^−(C) fits within h(w) − 1 layers, a contradiction. So Γ(g^(2)) must use points outside the pocket. These cannot be on B_1 ∪ B_3 or ⟨g^(1), s_i, g^(3)⟩, since these paths do not belong to F(g^(2)). So to get to a point outside P, some poly-line of Γ(g^(2)) must contain a point q on σ ⊂ P from which it goes downward. Let q′ be the next bend of this poly-line, which is on layer ℓ + h(w) + 1 by the pre-processing. Let π be the poly-line from g^(2) (on layer ℓ + 1) to point q′ (on layer ℓ + h(w) + 1) that is within Γ(g^(2)). With the exception of the segment from q to q′, poly-line π was inside pocket P; in particular, it can use no points on layer ℓ + 1 except ones that are in I. This proves the claim. So we have proved Claim 6, which finishes the proof of Claim 5; hence (C2) holds. From this we derived (C3) and (C4), hence the precondition for Claim 6 holds for the three children g^(1), g^(2), g^(3) of s_{j_2} that we chose. Claim 6 hence implies (C5), and the proof of Lemma 4 is complete.
Proving the lower bounds
We now finally prove the lower bounds. To do so, we first bound the (rooted) pathwidth of F_w and trees derived from it. Observation 3. We have rpw(F_w) ≤ w + 1 and pw(F̂_w) ≤ w − 1, where F̂_w is the leaf-reduced inner skeleton of H(F_w).
Proof. We proceed by induction on w. Tree F_1 consists of a path g, p, r, c with leaves attached; this has rooted pathwidth 2. Also, F̂_1 consists only of g, since it is obtained from F_1 by first deleting all leaves (this gives a path) and then repeatedly doing leaf-reductions (this removes all but g). So pw(F̂_1) = 0. Now consider F_{w+1} for w ≥ 1. This consists of a path g, p, s_1, …, s_S with copies of F_w attached. Using this path as spine, we immediately get rpw(F_{w+1}) ≤ rpw(F_w) + 1 ≤ w + 2. Also, F̂_{w+1} consists of the same path with copies of F̂_w attached; therefore pw(F̂_{w+1}) ≤ pw(F̂_w) + 1 ≤ w.
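Since Figure 2 shows a skeleton-tree whose Horton–Strahler number coincides with its rooted pathwidth, the following minimal Python sketch (our illustration, not code from the paper) computes the Horton–Strahler number of a rooted tree. It assumes the convention HS(leaf) = 0, matching pw(F̂_1) = 0 above; some authors instead use HS(leaf) = 1.

```python
# Minimal sketch (not from the paper): Horton–Strahler number of a rooted
# tree, with HS(leaf) = 0.  A tree is given as an adjacency dict mapping
# each node to the list of its children.

def horton_strahler(tree, root):
    """Return the Horton–Strahler number of the subtree rooted at `root`."""
    children = tree.get(root, [])
    if not children:                      # a leaf has HS-number 0
        return 0
    hs = sorted((horton_strahler(tree, c) for c in children), reverse=True)
    # If the two largest child values tie, the number increases by one;
    # otherwise the maximum is simply inherited.
    if len(hs) >= 2 and hs[0] == hs[1]:
        return hs[0] + 1
    return hs[0]

# Example: a complete binary tree of depth 2 has HS-number 2.
t = {"r": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
assert horton_strahler(t, "r") == 2
print(horton_strahler(t, "r"))
```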
Thus far, all constructions and lower bounds have been for plane drawings (i.e., drawings that respect the embedding and have the cycle-edges on the infinite region). But we can easily prove lower bounds even for planar drawings, which have no requirement except to be crossing-free. Theorem 4. There exists a regular Halin-graph H(T) such that any planar poly-line drawing of H(T) requires at least 6 pw(T) + 3 layers, where T is the reduced tree of the inner skeleton of H(T).
Proof. For any w ≥ 2, consider the tree T obtained by taking two copies of F_w and combining them by adding an edge between the two copies of the root g. Fix an arbitrary planar poly-line drawing Γ of H(T). Since H(T) is 3-connected [20], the clockwise order of edges must be the same in H(T) and in Γ. But the infinite region of Γ could be incident to some face different from the one bounded by the cycle-edges. Tree T contains two copies of F_w, and the infinite region of Γ can be a face of H^−(F_w) for at most one of them. Therefore Γ contains a plane drawing of H^−(F_w), hence also one of H^−(C_w). By Lemma 4 this requires at least h(w) = 6w − 3 layers. The reduced inner skeleton of H(T) consists of two copies of F̂_w, each of which has pathwidth at most w − 1, and this bound is obtained with a main path that ends at g. Therefore we can use the two combined paths as main path for T, and so pw(T) ≤ w − 1 and the bound holds.
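To make the final arithmetic of this proof explicit (our own restatement of the bound just derived):

```latex
% Using pw(T) <= w - 1, i.e. w >= pw(T) + 1:
\[
  \text{height}(\Gamma) \;\ge\; h(w) \;=\; 6w - 3
  \;\ge\; 6\bigl(\mathrm{pw}(T) + 1\bigr) - 3
  \;=\; 6\,\mathrm{pw}(T) + 3 .
\]
```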
We note that this lower bound implies a lower bound of Ω(log n) on the height, since C_w contains c^w vertices for some (rather large) constant c. However, this bound is not new, since already using the Halin-graph of a complete ternary tree would give a lower bound of Ω(log n) on the height. The main contribution of our lower bound is that it matches the upper bound relative to pw(T) in Theorem 1. (This was also the reason why we used the leaf-reduced inner skeleton, rather than the skeleton, in Theorem 1.) We also promised a lower bound in terms of the rooted pathwidth. Note that the skeleton of a Halin-graph is an unrooted tree T; to be able to talk about rpw(T) we define this to be the minimum over all choices of the root. Theorem 5. There exists a regular Halin-graph H(T) such that any planar poly-line drawing of H(T) requires at least 6 rpw(T) − 9 layers. Proof. For any w ≥ 2, again let T be two copies of F_w, combined by adding an edge between the two roots. We know rpw(F_w) ≤ w + 1, and the same holds for T if we root it suitably. Namely, the spine of F_w is g-p-s_1-…-s_S; if we root T at one copy of s_S, then we can use as its spine the two combined spines of the two copies of F_w and have the same rooted pathwidth. H(T) is a regular Halin-graph, and since (as above) any planar drawing of it includes a plane drawing of H^−(C_w), by Lemma 4 it requires at least h(w) = 6w − 3 ≥ 6 rpw(T) − 9 layers.
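The two calculations implicit in this paragraph, written out (again our restatement, assuming H(C_w) has n = Θ(c^w) vertices):

```latex
% Theorem 5's arithmetic: rpw(T) <= w + 1 gives w >= rpw(T) - 1, so
\[
  h(w) \;=\; 6w - 3 \;\ge\; 6\bigl(\mathrm{rpw}(T) - 1\bigr) - 3
  \;=\; 6\,\mathrm{rpw}(T) - 9 .
\]
% The logarithmic bound: with n = \Theta(c^w) vertices we have
% w = \Theta(\log_c n), so the required height 6w - 3 is \Omega(\log n).
```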
Figure 1: (a) A regular Halin-graph. Cycle-edges are blue dashed/dotted, skeleton T is black/gray, and the skirted graph H^−(T) would omit the dotted edge if T were rooted at r. The inner skeleton is gray, the leaf-reduced inner skeleton is light gray. (b) A poly-line drawing obtained with the transformation in Section 3.
Figure 2: The skeleton-tree T of Figure 1 has HS(T) = rpw(T) = 3. Numbers indicate the Horton–Strahler number; thick paths (solid red for the whole tree, dashed blue for the subtrees) are possible spines.
Figure 3: Transform (a) a poly-line drawing of the leaf-reduced inner skeleton (with a white dummy-vertex inserted at a bend) into (b) a visibility representation. (c) Expand leaves and widen vertex-segments to overhang (x-coordinates are not to scale). Then (d) triple the grid and insert cycle C and the leaves to get a flat orthogonal drawing (inserted columns are not shown).
Theorem 2. Every regular Halin-graph H(T) has a straight-line drawing of height at most 12 pw(T) − 3, and every extended Halin-graph H(T) has a straight-line drawing of height at most 12 pw(T) − 1.
Figure 7: (Top) The construction if s ≤ d − 2. (Bottom) The construction again, with other subtrees as rooted paths.
We process T_{c_d} very similarly to Step 3. So assume first that rpw(T_{c_d}) ≥ 2. Recursively (or via Lemma 1) obtain a WWW-drawing Γ_d of H^−(T_{c_d}) of height at most h − 6. Increase its height to be h − 3. Place Γ_d in layers 1, …, h − 3, to the right of everything drawn thus far. Connect c_d to r by going upward to layer 2 and then horizontally to r. Edge (ℓ_{d−1}^R, ℓ_d^L) is completed automatically, and ℓ_d^R is the rightmost leaf and its eastward ray is unobstructed. Now assume that T_{c_d} is a rooted path. Place the leaf of T_{c_d} on layer 1 and all other vertices on layer 2, with the root leftmost (if |T_{c_s}| = 1, then place a bend in row 2). Edge (r, c_s) is completed automatically, and ℓ_d^R (which is the rightmost leaf) has an unobstructed eastward ray. To route connector-edge (ℓ_{d−1}^R, ℓ_d^L), we have two cases. If d > s + 2 and/or rpw(T_{c_{d−1}}) ≥ 2, then undo the partial routing that we did earlier; instead we go eastward from ℓ_{d−1}^R and then upward to ℓ_d^L = ℓ_d^R in row 1. If d = s + 2 and rpw(T_{c_{d−1}}) = 1, then the partial drawing of
Figure 9: The constructions if the maximum degree is 3.
Figure 10: A Halin-graph requiring much more height than its pathwidth. (a) Tree C_1 with cycle C (cycle-edges are dotted red) that encloses c. (b) Obtaining F_w from C_w. Dashed edges are not needed except to avoid degree-2 vertices in the trees. (c) Obtaining C_{w+1} using many copies of F_w. (d) Tree Ĉ_2 needed for Theorem 6.
Figure 12: (a) Three bad non-spine children of type (t, b) imply a planar drawing of K_{3,3}. (Picture based on [1].) (b) Possible arrangements of non-spine children of s_{j_2} on layer ℓ + 1.
Figure 13: For the proof of Claim 6. (a) Poly-line Q separates I from C. (b) The pocket P.
"Mathematics",
"Computer Science"
] |
Epigenetic age is a cell‐intrinsic property in transplanted human hematopoietic cells
Abstract The age of tissues and cells can be accurately estimated by DNA methylation analysis. The multitissue DNA methylation (DNAm) age predictor combines the DNAm levels of 353 CpG dinucleotides to arrive at an age estimate referred to as DNAm age. Recent studies based on short‐term observations showed that the DNAm age of reconstituted blood following allogeneic hematopoietic stem cell transplantation (HSCT) reflects the age of the donor. However, it is not known whether the DNAm age of donor blood remains independent of the recipient's age over the long term. Importantly, long‐term studies including child recipients have the potential to clearly reveal whether DNAm age is cell‐intrinsic or whether it is modulated by extracellular cues in vivo. Here, we address this question by analyzing blood methylation data from HSCT donor and recipient pairs who greatly differed in chronological age (age differences between 1 and 49 years). We found that the DNAm age of the reconstituted blood was not influenced by the recipient's age, even 17 years after HSCT, in individuals without relapse of their hematologic disorder. However, the DNAm age of recipients with relapse of leukemia was unstable. These data are consistent with our previous findings concerning the abnormal DNAm age of cancer cells, and it can potentially be exploited to monitor the health of HSCT recipients. Our data demonstrate that transplanted human hematopoietic stem cells have an intrinsic DNAm age that is unaffected by the environment in a recipient of a different age.
| INTRODUCTION
Several publications describe DNA methylation (DNAm)-based biomarkers of aging which can be used to estimate the age of a tissue (Hannum et al., 2013; Horvath, 2013; Spolnicka et al., 2016; Weidner & Wagner, 2014). For example, the multitissue age estimator utilizes the weighted average of 353 CpG sites to arrive at an age estimate that is referred to as DNAm age (Horvath, 2013). Age-adjusted measures of DNAm age are predictive of life span (Chen et al., 2016; Marioni et al., 2015) and relate to a host of conditions, including obesity (Horvath et al., 2014), HIV infection, Down syndrome, Parkinson's disease (Horvath & Ritz, 2015), Werner syndrome (Maierhofer et al., 2017), and menopause (Carroll et al., 2017). Lifestyle factors have only a weak effect on the DNAm age of blood (Quach et al., 2017), suggesting that DNAm age largely reflects cell-intrinsic properties. It is not yet known to what extent secreted factors (e.g., hormones, cytokines, growth factors, and metabolites) from other organs affect the DNAm age of blood, or whether DNAm age is a cell-intrinsic feature.
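As an aside for readers new to methylation clocks, the following minimal Python sketch (our illustration, not the published implementation) shows how a penalized-regression clock of the Horvath (2013) type turns CpG beta-values into a DNAm age. The coefficient values are placeholders, and the calibration assumes Horvath's published transform with adult.age = 20.

```python
# Minimal sketch (our illustration): DNAm age as a weighted sum of CpG
# beta-values followed by an inverse calibration to years.  `coefs` and
# `intercept` stand in for the published 353 CpG weights.
import math

ADULT_AGE = 20.0  # assumption: Horvath's adult.age parameter

def inverse_age_transform(m: float) -> float:
    """Map the linear predictor back to years (inverse of Horvath's F)."""
    if m <= 0:
        return (1 + ADULT_AGE) * math.exp(m) - 1
    return (1 + ADULT_AGE) * m + ADULT_AGE

def dnam_age(betas: dict, coefs: dict, intercept: float) -> float:
    """Weighted sum of CpG beta-values, then calibration to years."""
    m = intercept + sum(w * betas.get(cpg, 0.0) for cpg, w in coefs.items())
    return inverse_age_transform(m)

# Hypothetical toy example (coefficients are made up, not Horvath's):
toy_coefs = {"cg0001": 2.0, "cg0002": -1.5}
toy_betas = {"cg0001": 0.8, "cg0002": 0.3}
print(dnam_age(toy_betas, toy_coefs, intercept=0.7))
```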
To address this question, we analyzed blood samples from allogeneic hematopoietic stem cell transplantation (HSCT) recipients.
Allogeneic HSCT is an effective treatment for leukemia (Thomas et al., 1979; Thomas, Lochte, Lu, & Ferrebee, 1957). In HSCT, the patient's original hematopoietic stem cells (HSCs) are eradicated (using ablative chemotherapy and/or radiotherapy) and subsequently replaced by healthy HSCs from a donor (obtained via bone marrow (BM) aspiration or granulocyte-colony-stimulating factor (G-CSF)-stimulated leukapheresis; Dreger et al., 1993; Russell, Hunter, Rogers, Hanley, & Anderson, 1993). If the treatment is successful, the donor cells will engraft in the recipient's BM and go on to reconstitute the entire hematopoietic system, including white blood cells, red blood cells, and platelets. After transplantation, the donor cells will thus be exposed to the environment of the recipient for many years.
Experiments involving heterochronic parabiosis or the transfer of factors from human cord blood to old mice have demonstrated that factors present in the younger blood might rejuvenate older tissues (Castellano et al., 2017; Conboy, Conboy, & Rando, 2013; Eggel & Wyss-Coray, 2014). On the other hand, recently published work suggests that the DNAm age of transplanted blood cells is maintained at the donor's age, at least under short-term observations (Spolnicka et al., 2016; Stölzel et al., 2017; Weidner et al., 2015). However, it is not yet known if the DNAm age of the donor cells is affected by the recipient's age after prolonged exposure to the recipient's signaling environment. The aforementioned DNAm studies have limitations regarding their feasibility for identifying a potential rejuvenating effect on donor cells after long-term exposure to a younger environment. Therefore, in an ethically acceptable human in vivo setting, the key question regarding age rejuvenation has still not been adequately addressed. The previous studies have one or more of the following unexamined issues. First, most of the donor–recipient pairs were of roughly the same age or the donor was younger than the recipient. Second, HSC donors were mostly younger than the recipients as opposed to the other way around.
Third, Spolnicka et al. (2016) and Weidner et al. (2015) used blood cell-specific DNAm age estimators, but not the multitissue DNAm age estimator. Fourth, children or adolescents were not included in any of these studies. Finally, these studies had a short follow-up time, with a mean of 126 days, 1 year, and up to 8.8 years (Spolnicka et al., 2016;Stölzel et al., 2017;Weidner et al., 2015). The study by Stölzel et al. involved recipients who were followed up for more than 12 months; however, the age difference between donor and recipient was not reported. This is an important aspect, as factors in the plasma of human cord blood were reported to have a rejuvenating effect in a human-animal study (Castellano et al., 2017).
The present study was designed to test two competing hypotheses: (a) DNAm age of hematopoietic cells is a cell-intrinsic property that is not influenced by factors in the stem cell niche and non-hematopoietic tissues in the human body, and (b) DNAm age of hematopoietic cells is determined through interactions with the stem cell niche and other cell types in the human body. To this end, we overcame the key limitations of previous studies by (a) analyzing several donor-recipient pairs with a substantial age difference (1-49 years), (b) including young children, and (c) including long followup times.
Here, we report that, despite a substantial age difference between donor and recipient, the DNAm age of transplanted donor blood reflects the age of the donor, even after many years of exposure to the recipient's body. This observation was consistent for both adult and child recipients. Our data demonstrate that the DNAm age of transplanted blood cells is cell-intrinsic in the human body.
| Study population
In total, 31 HSC recipients aged 18-74 years were included in the study. Their blood was collected between 1 month and 17 years after HSCT (Figures 1 and 2). The recipients were 2-74 years old at the time of transplantation, and the HSC donors (n = 31) were 21-58 years old at the time of donation. Acute myeloid leukemia (AML) was the most common indication for HSCT (n = 26), but three recipients had other hematological cancers and two had other indications (Table 1).
Recipients were included in the study after giving written informed consent, and blood samples were obtained for determination of DNAm age and donor chimerism. Ten recipients contributed two blood samples each, whereas the remaining 21 recipients contributed one blood sample (Table 1; Figure 2). For the first blood sample of the twice-sampled recipients, donor chimerism was above 94% in all but one of the recipients (88%). For the second blood sample, six recipients had donor chimerism >97%, whereas four recipients showed low chimerism scores (as low as 24%, 12%, 12%, and 7%), indicating repopulation of the recipient's leukemic cells (Table 1). Therefore, the four samples with low chimerism were excluded from further analysis. In addition, a statistically extreme outlier sample (Sample ID 806 in Table 1; Supporting Information Figure S1a,b) with a DNAm age of 111 years and donor chimerism of 97% was identified. This was from a patient with relapse of leukemia, several viral infections, and acute GVHD grade 3 who died shortly after the sample was obtained. We excluded this sample from the presented statistical analysis, but included it where it was relevant as well as in the discussion, because we cannot exclude that it may represent a clinically meaningful rare case. After the above-listed exclusions, a total of 36 blood samples from 30 recipients were subjected to further analysis.
FIGURE 1 Schematic explanation of the study design. Blood was collected from recipients between 1 month and 17 years after HSC transplantation (HSCT). Donor chimerism and DNAm age were measured. Donor–recipient pairs with a large age difference (1–49 years) were included.
FIGURE 2 Study flow chart with exclusions. Ten recipients were sampled twice, and 21 others were sampled once. Five recipients were excluded because of low donor chimerism. *Three recipients had hematological cancer, while two had other indications. **In addition, one sample that was a statistical outlier (1.5 times the interquartile range above the third quartile) with a DNAm age of 111 years (donor chimerism was 97%) was excluded from the analyses. The rationale for this exclusion is supported by the fact that the recipient died due to relapse of leukemia within 1 month of blood sampling, implying the presence of an unrecognized health problem at the time of blood collection.
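The outlier rule quoted in the Figure 2 caption (1.5 times the interquartile range above the third quartile) is the standard Tukey fence; a minimal Python sketch of it, with hypothetical DNAm ages, is shown below. This is our illustration, not the authors' analysis code.

```python
# Minimal sketch (our illustration) of the outlier rule described above:
# a sample is flagged when its DNAm age exceeds Q3 + 1.5 * IQR.
import numpy as np

def iqr_outliers(values):
    """Return a boolean mask marking values above Q3 + 1.5 * IQR."""
    q1, q3 = np.percentile(values, [25, 75])
    upper = q3 + 1.5 * (q3 - q1)
    return np.asarray(values) > upper

# Hypothetical DNAm ages (years); 111 stands in for the excluded sample.
ages = [22, 31, 35, 40, 44, 52, 58, 63, 111]
print(iqr_outliers(ages))
```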
| Differences in DNAm age
The mean absolute difference between the DNAm age of the recipients' blood (1 month to 17 years after transplantation) and the chronological age of the recipient was 27 years (Figure 3). In comparison, the mean absolute difference between the DNAm age of the recipients' blood and the chronological age of the donor was 7.2 years, which is significantly less than the preceding value (paired t test and Wilcoxon test, p < 0.0001; Figure 3). This initial comparison suggests that the DNAm age of the recipients' blood more closely reflects the chronological age of the donor than the chronological age of the recipient.
(Table 1 notes: a — ages are at the time blood was drawn for our study, 1 month to 17 years after HSCT; b — sample excluded from statistical analyses as an outlier or due to low donor chimerism (<80%).)
To assess how DNAm age relates to the chronological age of recipient and donor, we carried out a correlation analysis. Pearson correlations were calculated and showed that the DNAm age of the recipients' blood post-transplantation did not correlate with the chronological age of the recipients (R = −0.14, p = 0.43, 36 samples from 30 recipients; Figure 4a). Instead, the DNAm age of the recipients' blood closely correlated with the donors' chronological age (R = 0.79 and p < 0.0001; Figure 4b). This correlation was even more pronounced in samples obtained from the 19 recipients who did not experience relapse of AML within the study period. The mean absolute difference between the DNAm age of the relapse samples (n = 12) and the donor age was 10 years, which was not significantly higher than that of the non-relapse samples (n = 24) at 5.8 years (t test, p = 0.04 and Mann-Whitney U (MW-U) test, p = 0.12; Figure 5a).
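The comparisons reported in this section can be reproduced in outline with standard SciPy routines; the sketch below is our illustration with hypothetical stand-in arrays, not the study's data or analysis script.

```python
# Minimal sketch (our illustration) of the comparisons reported above:
# paired tests on |DNAm age - donor age| vs |DNAm age - recipient age|,
# plus a Pearson correlation.  All arrays are hypothetical stand-ins.
import numpy as np
from scipy import stats

dnam_age = np.array([35.0, 42.0, 28.0, 55.0, 61.0])   # recipients' blood
donor_age = np.array([33.0, 45.0, 25.0, 58.0, 57.0])
recipient_age = np.array([10.0, 70.0, 52.0, 30.0, 74.0])

diff_donor = np.abs(dnam_age - donor_age)
diff_recipient = np.abs(dnam_age - recipient_age)

t_stat, p_t = stats.ttest_rel(diff_donor, diff_recipient)
w_stat, p_w = stats.wilcoxon(diff_donor, diff_recipient)
r_donor, p_r = stats.pearsonr(dnam_age, donor_age)

print(f"paired t: p={p_t:.3g}, Wilcoxon: p={p_w:.3g}, Pearson r={r_donor:.2f}")
```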
When we restricted the analysis to the 24 recipients who did not relapse, we found that samples obtained within a year of HSCT exhibited a statistically significant rejuvenation of DNAm age (4.7 years, t test: p = 0.003, Wilcoxon test: p < 0.004; Figure 5b). In the five participants with >1 year (4-17 years) of follow-up, donors were also included in the study and contributed blood samples. The DNAm age of the donors' blood was strongly correlated to both donor age (R = 0.84, p = 0.08) and the DNAm age of the recipients' blood (R = 0.76, p = 0.14; Figure 6a,b).
In theory, the HSC harvesting method might have influenced the DNAm age in the 24 recipients without relapse. In 20 cases, HSCs were harvested from peripheral blood ("PB" in Table 1) of G-CSF-treated donors. In the other four cases, HSCs were obtained directly from bone marrow ("BM" in Table 1). The type of age gap (i.e., positive or negative) between the donors' age (DNAm age or chronological age) and the recipients' DNAm age was different between the two methods (Figure 7a). On average, the difference was −5.0 years (standard error (SE) = 1.3) for the G-CSF method and +3.3 years (SE = 1.0) for the BM method (Figure 7b). This difference is statistically significant (t test, p = 0.01; and MW-U test, p = 0.01); however, this was an unplanned analysis from which we cannot draw strong conclusions.
| DISCUSSION
The present study shows that the DNAm age of donor blood is not influenced by the environment of the recipient's body, whether younger or older, and that the DNAm age continues to increase after transfer to the recipient's body as if the donor cells were still in the donor's body. This trait persisted even 17 years after the transfer to recipients who were 1 and 3 years old at the time of HSCT. This suggests that the DNAm age of human hematopoietic cells is not affected by BM niche cells or other factors in the recipient's body. We can therefore conclude that epigenetic age is a cell-intrinsic property in transplanted human hematopoietic cells.
Our observation is consistent with previous studies examining other types of age-dependent DNAm levels in hematopoietic cells (Spolnicka et al., 2016; Weidner et al., 2015). In these previous studies, three (Weidner et al., 2015) or five (Spolnicka et al., 2016) CpG sites were analyzed 4 months or 1 year after HSCT. Stölzel et al. also used the same multitissue DNAm age estimator that we used in the present study, but they did not report the age difference between donors and recipients and only analyzed blood samples collected within 8 years after HSCT (Stölzel et al., 2017). In contrast, we analyzed blood samples collected up to 17 years after HSCT from recipients who had much older or younger donors. Through access to the Norwegian national records of child HSCT, we were able to identify five pairs of pediatric patients (children and adolescents) and adult donors who were willing to participate in this study.
These patients received HSCT between 4 and 17 years before their blood was collected for our study. All of these recipients had received myeloablative conditioning regimens. Since these regimens will alter the physiology of the BM niche, we cannot exclude the possibility that these treatments influenced the progression of the DNAm age of the blood cells transplanted into the recipients (Hooper et al., 2009). It is also important to state as a possibility that the transplanted HSCs influenced the DNAm age of the recipient cells. Further studies are needed to examine these important remaining questions.
FIGURE 3 The DNAm age of the recipients' blood resembles the chronological age of the donors more closely than that of the recipients. The mean absolute difference between the DNAm age of the recipients' blood and chronological age of the recipients was 26 years, whereas the difference between the DNAm age of the recipients' blood and the age of the donor was 7.2 years. The difference between these averages is statistically significant (t test: p < 0.0001, n = 36). (e) In the 24 samples from non-relapsing recipients, the age difference between donor and recipient correlated strongly with the difference between the DNAm age of the recipients' blood and the recipients' chronological age (R = 0.98, p < 0.0001, n = 24). The correlation was indifferent to the size or direction of the age difference between recipient and donor. For example, DNAm age was much higher than the recipients' ages if they received HSCs from donors much older than themselves (upper left quadrant).
We monitored time-dependent changes of the DNAm age of blood after HSCT (Table 1; Figure 5). In some cases, especially in recipients who experienced relapse of leukemia, the DNAm age of the blood was unstable, probably due to the gradual repopulation by leukemia cells. For example, DNAm age was abnormally accelerated or rejuvenated in blood with low chimerism percentage scores; that is, the recipient's cancer cells repopulated in these patients (Table 1; e.g., Sample IDs 926, 950, 1021, and 1176). In contrast, the DNAm age of transplanted blood was maintained and aged normally for up to 17 years in the recipients who remained in remission (Table 1; e.g., Sample IDs 1-5; Figure 3). These results support those of other studies demonstrating that epigenetic age acceleration in blood is predictive of cancer (Ambatipudi et al., 2017; Dugué et al., 2018; Horvath, 2013; Levine et al., 2015). In our study, there was an outlier with high donor chimerism (97%) that showed an accelerated DNAm age of 111 years in a 69-year-old recipient with HSCs obtained from a 25-year-old donor. Although this was a rare case (1 outlier in 37 samples), it suggests that the mechanisms affecting DNAm age in the transplantation setting may not be the same in all donor/recipient settings. The outlier might also be an extreme example of the observed drift in DNAm age that was seen to affect other patients experiencing relapse of leukemia. Inclusion of this sample in the analyses did not change any conclusions in this study except for the correlation between donor age and DNAm age of recipients with relapse, which switched from being statistically significant to non-significant (R = 0.22, p = 0.47). Our limited data set does not allow us to conclude whether DNAm age can be used as a predictive biomarker for leukemia relapse or other health problems following HSCT; however, our data encourage further work to follow up on this.
FIGURE 5 Comparison of the differences between the DNAm age of the recipient blood and the donor's age among groups with different patient outcomes. (a) The bar graphs show the absolute difference between the donor's chronological age and the DNAm age of the recipient's blood in the "relapse" and "no relapse" groups. The difference is smaller in the "no relapse" patient group. Detailed patient information is listed in Table 1. (b) DNAm age rejuvenation was observed within 1 year of HSCT. The average donor age and the recipients' blood DNAm age are shown in three different groups: relapse, no relapse (0-1 years), and no relapse (4-17 years). There was a statistically significant rejuvenation of DNAm age (4.7 years, p < 0.003) in the "no relapse" (0-1 year) group in comparison with the donors' chronological age. Detailed patient information is listed in Table 1.
FIGURE 6 The DNAm age of the recipients' blood cells correlates with the DNAm age of the donors' blood cells in the five donor-recipient pairs with donors' blood available. In five HSCT donor-recipient pairs, blood from both the donor and the recipient was available. Blood from these pairs was obtained between 4 and 17 years after HSCT to treat childhood leukemia (n = 3) or other hematological disorders (n = 2). In these pairs, the DNAm age of the recipients' blood correlated with the chronological age of the donors (a) (R = 0.76, p = 0.14, n = 5) as well as with the DNAm age of the donors' blood (b) (R = 0.84, p = 0.076, n = 5) obtained at the same time.
Future prospective large-scale studies with detailed outcome data are warranted to determine whether DNAm age analysis is useful for prediction of leukemia relapse after HSCT.
We observed a rejuvenation of DNAm age in the blood of the "no relapse" recipients within 1 year after HSCT ( Figure 5), which is consistent with the study by Stölzel et al. (2017). Stölzel et al. also reported that accelerated epigenetic aging was observed more than 6 months after HSCT (2.4 years per chronological year up to 8 years after HSCT; Stölzel et al., 2017). However, we did not observe a significant DNAm age acceleration (or rejuvenation) in our long-term follow-up analysis ( Figure 5). This difference between the two studies may be due to the different treatment strategies (e.g., selection of chemotherapies) or reflect technical differences (e.g., DNA storage conditions, bisulfite conversion, or different DNAm normalization methods).
Our study advances understanding of the mechanisms of the epigenetic clock: it includes donor-recipient pairs with up to a 49-year difference in age, making it the largest study of its kind. In addition, the unplanned analyses shown in Figure 7 suggest that G-CSF may have an ability to rejuvenate the DNAm age of HSCs. G-CSF has a pronounced influence on cellular processes in HSCs and has been shown to selectively mobilize dormant HSCs to the bloodstream in mice (Bernitz, Daniel, Fstkchyan, & Moore, 2017; Panch, Szymanski, Savani, & Stroncek, 2017).
| DNAm analysis and epigenetic clock analysis
All DNAm analyses were performed with the Illumina Infinium 450 K platform in the core facility at UCLA as previously reported. Genomic DNA extraction and STR PCR were performed as reported (Thiede et al., 1999). CpG methylation analysis was performed using Illumina BeadChip arrays (Illumina, San Diego, USA). DNAm age was estimated using the published algorithm with Noob normalization (Horvath, 2013).
ACKNOWLEDGMENTS
The authors are grateful to all the patients and HSC donors who participated in this study.
CONFLICT OF INTEREST
The authors declare no conflicts of interest.
"Medicine",
"Biology"
] |
Three Decades of Research on Smart Cities: Mapping Knowledge Structure and Trends
The concept of smart cities has gained significant momentum in science and policy circles over the past decade. This study aims to provide an overview of the structure and trends in the literature on smart cities. Bibliometric analysis and science mapping techniques using VOSviewer and CiteSpace are used to identify the thematic focus of over 5000 articles indexed in the Web of Science since 1991. In addition to providing insights into the thematic evolution of the field, the three-decade study period is divided into two sub-periods (1991–2015 and 2016–2021). While splitting the dataset into more sub-periods would have been desirable, we decided to examine only two sub-periods, as very few papers were published before 2010. The annual number of publications has progressively increased since then, with a surge observable from 2015 onwards. The thematic analysis showed that the intellectual base of the field was very limited during the first period but has expanded significantly since 2015. Over time, some thematic evolution has occurred, such as further attention to linkages to climate change and resilience, and more emphasis on security and privacy issues. The thematic analysis shows that existing research on smart cities is dominated by either conceptual issues or underlying technical aspects. It is, therefore, essential to do more research on the implementation of smart cities and the actual and/or potential contributions of smart cities to solving societal issues. In addition to elaborating on thematic focus, the study also highlights major authors, journals, references, countries, and institutions that have contributed to the development of the smart cities literature.
Introduction
There are indications that the concept of smart cities emerged as early as 1974, when the city of Los Angeles attempted to create the world's first urban big data project [1]. However, it was 20 years later (1994) that a major milestone in the smart cities pursuit occurred in Amsterdam, when a virtual 'digital city' was created with the purpose of promoting internet usage among local populations [2]. Since then, there has been extensive research and many attempts to create smart city digital infrastructures, with large ICT corporations, such as Cisco and IBM, taking the lead [3,4], especially in research and development. For instance, in 2008, IBM launched the 'IBM Smarter Planet', with an aim to investigate and test how applying sensors, networks, and analytics to different urban fabrics can render more performance and, as a result, identify business opportunities. The success of this initiative has been followed by numerous others in varying geographies, which have already embraced and implemented some aspects of the smart city concept. Coupled with the development of new technologies supporting the concept, it is expected that more smart cities will continue to emerge in the coming years. In the academic sphere, both existing and emerging smart cities are spurring a surge in literature touching on the global discussion on smart cities across a range of themes including governance [22,23], liveability [24,25], safety [26][27][28], economic performance [29], mobility [30,31], health [32], culture [4], education [33], communication infrastructures [34], energy [35], and others.
Along with the increasing interest in smart city development, the number of academic articles published on smart cities has also grown rapidly over the past two decades. In the past few years, several review articles have been published that have improved our understanding of the state of development of the smart cities field and have highlighted key successes and challenges. According to these studies, while the smart city concept is hailed for its transformative prospects for the urban planning sphere, there are notable issues that must be streamlined. Camero and Alba [36] note that one such issue is the lack of a universally agreed definition and scope. They argue that the lack of unanimity in the definition has led to research on the concept being built in a wide array of silos depending on the understanding of those conducting the research. This argument is affirmed by Cocchia [37], who highlights that a number of terminologies such as the Intelligent City and Digital City have been used to depict the technological foundations of urban concepts, without properly linking them to the broader smart city concept. The point by Camero and Alba [36] on the creation of application silos aligns with the proposition by Ruhlandt [38] that research on smart cities is hampered by a lack of understanding of various components, such as smart governance and technological application, and of the metrics adopted as yardsticks for those components. Without any explicit understanding of the components and metrics, it becomes problematic to determine the expected outcomes, and, as expressed by Pereira et al. [39], this leads to researchers concentrating on specific components rather than focusing on the entirety of the smart city concept. To overcome these challenges, several efforts have been made to provide metrics for assessment of different smart city aspects and dimensions [40,41]. For their part, Talari et al. [42] note that concentrating on specific aspects of the smart city concept would have far-reaching implications in respect to philosophical approaches and perspectives about the concept's implementation, especially in different regions and in different sectors. Focusing on specifics in the smart city concept would also be impacted by time, as technological advancements are already emerging fast and in diverse sectors. This, then, has a bearing on the research in the present paper, especially in relation to increasing publications covering a diverse set of smart city dimensions, raising the challenge and need for regular reviews.
While traditional reviews are essential for detailed understanding of research fields, they are not always suitable to keep up with the rapid pace of scientific publishing, especially in popular fields such as sustainability or smart city development. This issue can be partially solved by using science mapping and bibliometric analysis techniques that allow an overall understanding of knowledge structure and trends using advanced text-mining methods [43]. In this way, bibliometric analysis studies can complement traditional systematic reviews. There are several generic bibliometric studies on smart cities research [44][45][46][47][48][49][50][51]. These studies have improved our understanding of the overall landscape of the field, and interestingly point out similarities between some concepts, particularly the 'intelligent city' and the 'smart city', where the former focuses on systems design without necessarily engaging in heavy utilization of technology. Smart cities, on the other hand, approach systemic design by supporting a technological foundation. In addition, smart cities follow a more comprehensive approach that, in addition to technological focus, acknowledges the significance of people, economy, and institutions. There are also several bibliometric studies focused on specific issues such as governance of smart cities [52], smart city applications in the building and construction sectors [53], the relationship between smart cities and migration [54], and smart city indicators [55].
Despite contributions of the systematic review and bibliometric analyses mentioned earlier, and while this field has been booming, there is a lack of literature examining the thematic evolution of the smart cities research over the past three decades. In addition, as will be discussed in Section 3.1, a large number of articles has been published over the past two years that warrants an updated analysis. Indeed, considering the rapid pace of scientific publishing on smart cities, regular scientific mapping and bibliometric analyses are necessary to keep up with the recent developments, identify emergent areas, and highlight gaps. Accordingly, the main objective of this study is to provide an updated understanding of the knowledge structure of smart cities research published over the past three decades. Other objectives are to identify major thematic areas and discuss their transition over time; to identify influential authors, sources, institutions, and references that have made relatively more contributions to the development of the field; and to highlight understudied themes that warrant further research. Overall, this bibliometric analysis builds on those mentioned earlier by providing period-based thematic analysis of the evolution of the field, and also by taking into account the large number of papers published recently. As this is a fast-growing area of research, results of this study can be used as a point of reference for researchers new to the field to gain a rather quick understanding of the intellectual base of smart cities research, its evolution, and gaps and emergent topics. This will allow them to develop future research ideas in a more effective way. Further, interested readers and those new to the field can refer to the influential sources and references highlighted in this study to gain further knowledge on specific topics.
Materials and methods for bibliometric analysis are presented in Section 2. Results are presented and discussed in Section 3. These include information on publication trends, thematic focus areas and their transition over time, and influential authors, references, sources, countries, and institutions. Finally, a summary of the findings and some recommendations for future research are provided in the final section.
Materials and Methods
Data for bibliometric analysis was obtained from the Web of Science (WoS). Several other databases, such as Scopus, exist that index and archive academic publications. We selected WoS for three main reasons: first, its reputation for indexing quality peer-reviewed research; second, it provides detailed bibliometric information, which allows researchers to obtain more accurate results using the bibliometric analysis software tools (i.e., VOSviewer and CiteSpace); third, a large number of publications on smart cities exist in the WoS, and this is enough for meeting our objective of understanding the overall structure and trends (in other words, even if some studies are missing, it will not affect the overall structure and trends). In order to include as many relevant papers as possible in the analysis, we used a broad-based search string that includes two keywords: smart city and smart cities. The specific search string was: TS = ("smart city" OR "smart cities") AND LANGUAGE:(English) Indexes = SCI-EXPANDED, SSCI, A&HCI, ESCI Timespan = 1900-2021. In other words, we searched for articles written in the English language that have mentioned 'smart city' and/or 'smart cities' in either the title, abstract, or keywords. The literature search was conducted on 24 April 2021, in all citation indexes of the WoS (i.e., SCI-EXPANDED, SSCI, A&HCI, and ESCI). All types of publications archived in the WoS until the search date were included in the analysis. The search returned 7228 documents, and after excluding studies that were out of scope (i.e., focused on fields such as physics, material sciences, and mathematics), 5722 articles were selected for analysis. The 'Full Record and Cited References' of these studies were downloaded from the WoS, in formats compatible with VOSviewer and CiteSpace.
Several software tools exist for science mapping and bibliometric analysis [56]. While these tools sometimes adopt different analysis and illustration approaches, they all contribute to understanding thematic focus and trends by providing details on the complex interactions between different components of academic publications (i.e., keywords, references, authors, journals, etc.). Further details about unique features of different tools can be found in Cobo, López-Herrera, Herrera-Viedma, and Herrera [56]. In this study, VOSviewer and CiteSpace were selected considering their abilities to meet the study objectives. Both tools are freely available Java applications (VOSviewer at: https://www.vosviewer.com (accessed on 10 May 2021); and CiteSpace at: http://cluster.cis.drexel.edu/~cchen/citespace/ (accessed on 10 May 2021)). The software developers also provide free access to user manuals and demo projects. Interested readers are referred to the tool manuals for detailed step-by-step descriptions of the analyses. We used VOSviewer for conducting different analyses, namely, term co-occurrence analysis (using the 'full counting' method and setting 'all keywords' as the unit of analysis), citation analysis (setting 'documents' as the unit of analysis), co-citation analysis (using the 'full counting' method, and setting 'cited references', 'cited sources', and 'cited authors' as units of analysis), and bibliographic coupling (using the 'full counting' method, and setting 'organizations' and 'countries' as units of analysis) [57]. Term co-occurrence analysis presents frequently occurring terms and the way they are connected to one another. This can be used to highlight major thematic areas. As a term may have different variants, before conducting the analysis, a thesaurus file was developed and added to the VOSviewer database to avoid separate counting of synonyms (e.g., Internet of Things and IoT). As can be seen in Section 3.2, results of the term co-occurrence analysis (and also other analyses done by VOSviewer) are presented as a network graph of nodes and links. The node size is proportional to the occurrence frequency, and the link width is proportional to the strength of connections between two nodes. Terms that co-occur more frequently form clusters that show different thematic areas.
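To make the term co-occurrence idea concrete, the following minimal Python sketch (our illustration, not VOSviewer's implementation) counts keyword co-occurrences across records and applies a thesaurus to merge synonyms first; the keyword lists and thesaurus entries are hypothetical.

```python
# Minimal sketch (our illustration) of term co-occurrence counting:
# keywords appearing together in a record are linked, and link strength
# is the number of co-occurrences.  A thesaurus merges synonyms first.
from itertools import combinations
from collections import Counter

thesaurus = {"iot": "internet of things"}  # e.g., merge 'IoT' variants

def co_occurrence(records):
    """records: list of keyword lists, one per article."""
    links = Counter()
    for keywords in records:
        terms = sorted({thesaurus.get(k.lower(), k.lower()) for k in keywords})
        links.update(combinations(terms, 2))
    return links

records = [["Smart city", "IoT", "Security"],
           ["Smart city", "Internet of Things"],
           ["Big data", "Smart city"]]
print(co_occurrence(records).most_common(3))
```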
To understand thematic transition over time, we divided the study period into two sub-periods (1991-2015 and 2016-2021). The beginning year was set as 1991, as this is the publication date of the first paper indexed in the WoS. Additionally, 2015 was selected as a milestone considering that different reports and policy documents that may have influenced smart city research were published in that year (e.g., Agenda 2030 and the New Urban Agenda). As the process of publishing academic papers sometimes takes more than one year, we have assumed that potential influences of such policy documents on academic research have been reflected in academic publications starting from 2016. It was also possible to introduce other sub-periods. However, as shown in Section 3.1, relatively fewer studies have been published before 2015, not warranting further sub-periods. To understand thematic focus during each period, separate analyses were conducted. Co-citation analysis was conducted to identify influential authors, journals, and references. Co-citation indicates the link between two documents that are both cited simultaneously by a third document [57]. In other words, it considers not only the documents in the database, but also their cited references. In contrast, citation analysis was conducted to understand highly cited papers in the database.
To understand which countries and institutions have contributed more to this field of research, we used the bibliographic coupling analysis. "A bibliographic coupling link is a link between two items that both cite the same document" [57]. Finally, the citation burst function of CiteSpace was used to better understand which subjects have received more attention at specific times during the study period. This can complement the findings of the term co-occurrence analysis and allow a better understanding of thematic transition and intellectual turning points over time [58]. accounting for less than 1% of the documents in the database. In the following 5 years, 429 papers have been published, indicating that 'smart cities' has become a mainstream research topic since 2010. From 2015, however, a rapid growth pattern can be observed. In particular, the growth pattern has been exponential since 2018. Interestingly, the number of articles published over the past three years is greater than the cumulative number of articles published between 1991 and 2018. As explained in the Introduction, this is a clear indication of the increasing recognition of the significance of smart cities for dealing with multiple challenges that cities around the world are facing and have been highlighted in major policy documents such as the New Urban Agenda and the Sustainable Development Goals (SDGs). Based on this, an upward trend in publications on smart cities is expected for the coming years. Figure 1 displays the distribution of the 5722 publications over the study period . The results show that there is an overall growth in the number of publications per year. It is clear that this is still a young field, as most papers have been published over the past 10 years or so. In fact, only 17 papers have been published between 1991-2010, accounting for less than 1% of the documents in the database. In the following 5 years, 429 papers have been published, indicating that 'smart cities' has become a mainstream research topic since 2010. From 2015, however, a rapid growth pattern can be observed. In particular, the growth pattern has been exponential since 2018. Interestingly, the number of articles published over the past three years is greater than the cumulative number of articles published between 1991 and 2018. As explained in the Introduction, this is a clear indication of the increasing recognition of the significance of smart cities for dealing with multiple challenges that cities around the world are facing and have been highlighted in major policy documents such as the New Urban Agenda and the Sustainable Development Goals (SDGs). Based on this, an upward trend in publications on smart cities is expected for the coming years.
Thematic Clusters
The output of the term co-occurrence analysis for the entire dataset is shown in Figure 2. Three major clusters can be identified from this figure. These are: (1) the smart city concept, depicted by the color red; (2) big data analytics, represented by the color blue; and (3) the technological aspects, especially in relation to the Internet of Things, depicted by the color green. The thickness of a link between nodes indicates the strength of the connection, while the size of a node is directly proportional to the term frequency. Therefore, as is clearly depicted in the diagram, the most dominant clusters were the smart city and the IoT. The other cluster, which is mainly focused on the application of big data analytics and other smart solutions (e.g., machine learning, deep learning, etc.) in the energy sector, has received relatively less attention.
The Smart City Concept Cluster
This co-occurrence analysis, shown in Figure 2, showcases that most of the literature in respect to the smart city is centered around the concept itself, with much attention given to the broader urban context [32,33,36,38,59] and how those have been influenced by technology application and adoption (red cluster). The literature on this concept also centered on how cities have been striving to achieve sustainability [25], more so after the high-level global meetings that culminated in diverse accords and agreements such as the Paris Agreement, SDGs, the New Urban Agenda, and others. The dominance of the term 'sustainability' in this cluster is not surprising considering that a lot of research on smart and sustainable cities has been published in the past few years, demonstrating how smart solutions can contribute to solving issues related to various social, economic, and environmental dimensions of sustainability (e.g., see [60,61]). The analysis also shows that, in the smart cities literature, the term 'cities' has been frequently used in conjunction with other terms such as 'innovation' [62][63][64], which relates closely with the application of technology, and in the pursuit of sustainability [8,12]. Other terms that were researched in conjunction with cities include 'policy', showcasing a drive from researchers towards understanding how policy frameworks impact issues such as urban planning, infrastructure development, and the initialization of innovative programs. Other terms that occurred in this cluster include 'mobility' and 'transport', showing that the transport sector is crucial in the whole agenda of making cities smarter. In fact, along with the energy sector, transportation has been one of the major sectors in which applications of smart city technologies have been studied [65]. Further, terms such as 'e-government' and 'citizen participation' have also appeared, especially in relation to 'innovation' and 'information', showing that researchers are keen to understand governance dimensions of smart cities, though not many recognize and appreciate the intricate roles and the dynamics of governments and citizens in the actualization of smart city agendas [66][67][68][69][70][71]. There are also some other terms, such as 'indicators' and 'Geographic Information System' (GIS), that, comparatively, have occurred less frequently. In recent years there has been an increasing focus on developing and implementing smart city assessment tools and indicator sets. Among other things, such tools and indicators contribute to better-informed decision making regarding smart cities and evaluate their contributions to other societal goals, such as sustainability and resilience [72]. Similarly, it is increasingly recognized that GIS technologies are essential for effective development and implementation of smart cities and, generally, for better-informed urban planning [73]. For instance, platforms enabled by real-time GIS enable acquiring, storing, processing, and visualizing large amounts of geospatial data in an efficient manner [74]. Such platforms can facilitate enhanced modeling of urban operations, enable better-informed and more timely decision making, and improve the efficiency and safety of various sectors such as urban transportation [74].
The Internet of Things Cluster
This cluster (depicted in Figure 2 in green) showcases a drive by researchers to understand the influence of technology at varying levels of the smart city concept. From Figure 2, it is clear that most of the research has been on the topic of 'IoT' [75][76][77], with most of the articles being centered on the influence of the 'internet' on the concept. The research on 'IoT' is interestingly seen to touch on different aspects of a city, such as financials [78,79], smart devices [80], the security of such devices [81][82][83], and many others. As far as the internet is concerned, it is clear that this term had two major thematic focus areas. One focuses on 'security' aspects and how they impact the adoption of smart devices in cities. Associated research is seen to be closely related to terms such as 'authentication' and 'surveillance'. It is apparent that there are notable 'security' concerns in regard to IoT, with themes linking to the issue of 'privacy' surging [69,71], generating substantial attention in terms of numbers of publications and citations.
The second thematic focus area that attracted publication traffic, as shown in Figure 2, is 'challenges' in the application of internet aspects in smart cities. This has been researched particularly in conjunction with internet security and privacy, indicating that it is a main concern and among the obstacles that researchers perceive as having the potential to derail the successful implementation of smart city concepts. The term 'infrastructure' also appears in this cluster, showing that researchers were interested in understanding how internet infrastructure is provided to facilitate the application of IoT technology [84][85][86]. Another term that has received considerable attention is 'blockchain', especially in regard to security [87]. The presence of this term in this cluster is timely, as this technology is seen as the future of security [88,89], especially in regard to issues such as contracts [90,91], computing [88,92], and enhancing privacy [88,93]. Other terms that appear in this cluster but have received relatively less attention include 'sensors', 'fog computing', 'authentication', and 'edge computing', among others.
The Big Data Analytics Cluster
This cluster (indicated by blue in Figure 2) is less dominant in respect to the smart city concept in this co-occurrence analysis. The term 'big data analytics', however, has co-occurred frequently with other terms and holds a central position in the figure. This is not surprising considering the significance of big data analytics for smart city operations [94]. This cluster seems to be specifically focused on the applications of smart city products in the energy sector. Terms such as 'networks', 'models', and 'design' appear to have been frequently researched in conjunction with 'big data analytics'. These terms also appear at the edges of both the smart city cluster and the IoT cluster, highlighting their nature as overarching terms [95,96]. Terms researched in respect to big data analytics also include 'smart grid', 'optimization', 'efficiency', and 'renewable energy'. These are not surprising, as the current, and probably future, debate and trends in the energy sector are expected to center on renewable energies [97,98] and how they can be enhanced through the use of technology to increase efficiency. In respect to 'optimization', terms such as 'machine learning', 'deep learning', 'data mining', and 'artificial intelligence' have received some research attention, but in a relatively limited way. While these sit at the edges of the three clusters, the relatively limited research and literature on them is surprising, as they are among the enablers of IoT technologies [99,100], showcasing a gap in the literature.
Thematic Focus Transition over Time
To explore the thematic focus transition, we divided the dataset into two sub-periods and conducted separate term co-occurrence analyses for each period.
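To illustrate the mechanics behind such an analysis, the following minimal Python sketch counts keyword pair co-occurrences per period. The records and field names are hypothetical placeholders for a WoS/Scopus keyword export, and the study itself relied on VOSviewer rather than custom code:

```python
from collections import Counter
from itertools import combinations

# Hypothetical records standing in for a WoS/Scopus keyword export;
# the field names and contents are illustrative only.
records = [
    {"year": 2014, "keywords": ["smart city", "ICT", "innovation"]},
    {"year": 2019, "keywords": ["smart city", "IoT", "machine learning"]},
    {"year": 2020, "keywords": ["IoT", "security", "blockchain"]},
]

def cooccurrence(records, year_from, year_to):
    """Count unordered keyword pairs appearing together in records
    published within the given period."""
    counts = Counter()
    for rec in records:
        if year_from <= rec["year"] <= year_to:
            for pair in combinations(sorted(set(rec["keywords"])), 2):
                counts[pair] += 1
    return counts

first_period = cooccurrence(records, 1991, 2015)
second_period = cooccurrence(records, 2016, 2021)
print(second_period.most_common(5))
```

Pairs passing a minimum occurrence threshold would then form the nodes and weighted links of maps such as those in Figures 3 and 5.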
First Period (1991-2015)
Although the concept of 'smart cities' can be traced as early as the 1980s, it did not gain considerable attention until after the fourth industrial revolution (early 2000s). This is evident in the co-occurrence analysis map (Figure 3) for the period between 1991 and 2015, where the frequency of many terms relevant to the smart city concept is low. Additionally, the list of keywords with citation bursts (Figure 4) shows that during this period, topics related to precedent concepts such as 'intelligent city' were still dominant. This indicates that, though there was some interest among researchers, the understanding of smart cities was still in its infancy. During this period, the main focus of publications was on the general application of smart technologies in cities, with most of the literature touching on issues such as policies, transport, innovation, performance, models, growth, institutions, and knowledge. Other terms appearing in respect to cities include climate change and Information and Communications Technology (ICT). This is understandable, especially in relation to climate change, as the importance of urban actions for tackling climate change was widely recognized in this period, following the publication of the 4th and 5th assessment reports of the Intergovernmental Panel on Climate Change (IPCC), in 2007 and 2014, respectively [101,102]. The importance of cities was further recognized with the ratification of the Paris Agreement [103] and the birth of the Sustainable Development Goals (SDGs) towards the end of this period [104]. In conjunction with climate change, research attention and publications were drawn to ICT and innovation as possible dimensions from which solutions, especially in regard to sustainability, could be drawn; hence, publications on these topics started to gain some prominence. In respect to technology as an enabler of smart cities, attention was consolidated on issues such as security, innovation, and policies. Within the period 1991-2015, the concept of the smart city started to attract attention in many intellectual quarters, though the employment of the internet in different urban fabrics was still in its infancy. Therefore, consideration of security issues, especially in regard to data and surveillance, is noted to be prevalent in the civic realm.
In addition, it is evident from the analysis that most technologies were directly focused on the energy sector, though the number of publications in this regard was still limited, and keywords such as 'smart grid' and 'renewable energy' were not attracting much attention from academia. Another area of noted interest during this period is the Internet of Things. This topic was very relevant during this period, as most of the emerging technologies that could help actualize the smart city concept hinged on it. For this reason, most of the publications on IoT are seen to have focused on other sub-cluster terms such as 'big data analytics', 'internet', 'management', and 'design'. During this period, considering that smart devices need to communicate with each other, terms such as 'information', 'architecture', and 'sensor network' started to gain some consideration amongst researchers, but on a limited scale. For instance, there were very few publications on issues such as 'cloud computing', 'sensors', and 'wireless sensor networks', which could be attributed to the fact that those technologies had not gained substantial traction by then.
Second Period (2016-2021)
This period, as depicted in Figure 5, has witnessed an explosion of publications in all three main clusters. As explained earlier, the publication of international policy frameworks and their emphasis on smart solutions may have contributed to this increased attention to smart cities research. The IoT cluster has seemingly gained more attention in this period. Attention in this cluster has focused especially on the sub-cluster 'internet', which has attracted further research into new terms such as 'challenges', 'protocol', and 'cloud'. These terminologies may have emerged with the realization that IoT-enabled devices need to communicate via standardized protocols to aid the seamless collection of data and the relaying of insights, after analysis, back to relevant parties for better urban management [95]. During this period, it is observable that the term 'energy' is still a key term that is strongly connected to the term 'optimization'. This may indicate the increasing use of smart solutions for enhancing the operational efficiency of energy systems. Attention to energy is not surprising, given that the energy sector plays an essential role in addressing key challenges highlighted in policy documents such as the SDGs and the New Urban Agenda. In addition, several other key terms such as 'machine learning' and 'deep learning' have appeared in this cluster and gained an even stronger position than 'energy'. This may indicate that, though energy-related issues are dominant in the global agenda, they have been overtaken by other research areas in respect to IoT technologies, suggesting that research interests and global conversations have moved towards how to utilize the concept across an array of other fields, hence the need for the development and deployment of data and technology infrastructure.
In respect to data management, it is clear that during the second period (Figure 5) there has been increased attention on areas such as big data [35,105,106] and computation [107,108]. Interestingly, a focus on addressing climate change also becomes apparent [8,10,[107][108][109]. The significance of 'big data' in the second period is also evident from the citation burst analysis (Figure S1 of the Supplementary Materials). The importance of data in the management of the smart city may also have triggered the explosion of research and publications in this cluster, with new terminologies such as 'artificial intelligence', 'optimization', 'simulation', 'efficiency', 'prediction', 'deep learning', 'machine learning', and 'performance' gaining traction. The emergence of these new terms shows how much attention has been given to data management, and to the advancement of technology for analyzing the data that is continuously being produced in, and by, smart cities. Data management and big data analytics are expected to gain further momentum in the post-COVID era. In fact, there are arguments that the recent pandemic has increased interest in big data analytics and smart city development [110].
In respect to the smart city cluster, this second period has seen numerous publications, touching not only on cities but also on the relationship between cities, data, and IoT. With the growing number of smart cities [111][112][113], it is evident that researchers were, and continue to be, interested in areas such as sustainability, governance, technology, policies, and their impacts. Those areas of interest are among the new frontiers that the literature has gained during this timeline. Some topics, such as institutions and their influence in achieving 'smartness', have continued to attract attention in the second period. Further, other areas that have remained pertinent since the emergence of the smart city concept, as captured in Figures 4 and 5 above, include the transport sector where, in the second period (Figure 5), researchers have concentrated more on mobility, especially due to the emergence of technology-enabled services such as ridesourcing [114], smart cars [31,114,115], and new concepts such as the 15-minute city [12], which emphasizes reducing vehicular use by giving precedence to cycling and walking, made possible by shorter distances between different urban essentials.
Influential Sources
The co-citation analysis was used to find out which journals have had the highest impact on the development of the field. Again, the size of the nodes is proportional to the number of citations, and link width is proportional to the strength of the connection between two nodes. Quantitative details related to the citation counts and total link strengths of the top 20 journals can be found in Table S1 of the Supplementary Materials. Results show that journals such as IEEE Access, Cities, IEEE Internet of Things, Lecture Notes in Computer Science, IEEE Communications Magazine, Future Generation Computer Systems, and Journal of Urban Technology have had the greatest influence. Three major clusters can be identified from the results of the co-citation sources analysis, as shown in Figure 6. The largest cluster (green) includes journals that are mainly focused on the Internet of Things (IoT) and other technical issues (e.g., related to the internet, cloud computing, the architecture of smart cities, and wireless networks). This relates to the green cluster (the Internet of Things cluster) in Figure 2. The second largest cluster (red in Figure 6) is primarily composed of urban planning and policy journals. This cluster relates to the red cluster (the smart cities topic cluster) in the thematic cluster analysis (Figure 2). Results show that urban planning and policy issues have mainly been addressed by journals such as Cities, Journal of Urban Technology, Sustainable Cities and Society, Sustainability, and Urban Studies.
The third cluster (depicted in blue in Figure 6) is dominated by journals focused on energy-related issues. This corresponds to the blue cluster in Figure 2, which is focused on topics such as 'energy', 'smart grid', 'optimization', 'efficiency', and 'renewable energy'. Once again, this shows the specific attention of the smart cities literature to energy-related applications. The most influential journals of this cluster include Journal of Cleaner Production, IEEE Transactions on Smart Grid, Renewable & Sustainable Energy Reviews, and Energy and Buildings.
Major Contributing Countries and Institutions
In order to identify the countries that have contributed the most to knowledge in the field, a bibliographic coupling analysis was conducted. The results are shown in Figure 7. The list of the top 20 most prominent countries, with the number of documents, number of citations, and total link strength, is presented in Table S2 of the Supplementary Materials.
It is noted that countries such as China, the USA, Italy, England, India, Spain, Australia, South Korea, and Canada have published more on this topic. These countries also rank high in terms of the total number of citations. Examples of highly cited papers from the top 10 countries are shown in Table 1. Interestingly, while developed countries have contributed more, several developing countries are also highlighted in the figure, showing that adoption of the concept is global, irrespective of GDP and development status. However, many countries, particularly from Africa and Asia, are missing. The clusters shown in Figure 7 indicate close collaboration among countries that are geographically proximate. For instance, the red cluster primarily includes European countries (England, France, Italy, Spain, Germany, and Greece), while the green cluster includes a broader range of countries, from North America (USA, Canada) to Asia (China, South Korea, India, Japan, Saudi Arabia, and Pakistan).
The bibliographic coupling analysis was also used to find out which institutions are at the forefront of the field. Figure 8 shows that universities from Italy, the US, China, and Saudi Arabia have contributed most to the development of discourses in the field. These include the Polytechnic University of Milan, the University of Naples Federico II, King Saud University, MIT, the Chinese Academy of Sciences, and the Huazhong University of Science and Technology. The list of the most prominent organizations, with the number of documents, number of citations, and total link strength, is presented in Table S3 of the Supplementary Materials.
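As a conceptual aside, bibliographic coupling (shared references between two citing documents) and co-citation (joint appearance of two references in later reference lists) both reduce to simple set operations. The Python sketch below illustrates the two measures on a hypothetical citing-document map; in this study, the actual computations were performed by VOSviewer:

```python
from collections import Counter
from itertools import combinations

# Hypothetical map from citing documents to the references they cite.
refs = {
    "docA": {"r1", "r2", "r3"},
    "docB": {"r2", "r3", "r4"},
    "docC": {"r1", "r3"},
}

# Bibliographic coupling: two documents are coupled through the
# references they share; strength = size of the intersection.
coupling = {
    (a, b): len(refs[a] & refs[b])
    for a, b in combinations(sorted(refs), 2)
}

# Co-citation: two references are related according to how many
# documents cite them together.
cocitation = Counter()
for cited in refs.values():
    for pair in combinations(sorted(cited), 2):
        cocitation[pair] += 1

print(coupling)                  # e.g., ('docA', 'docB') -> 2
print(cocitation.most_common(3))
```

The same co-citation count applies whether the items are references, journals, or authors, which is why the following subsections can reuse one analysis with different units.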
Influential Documents
The co-citation analysis was also used to identify the most prominent publications in this field. The output of the co-citation analysis by cited references is shown in Figure 9. The top 20 most cited references are shown in Table S4 of the Supplementary Materials. It should be noted that this list is not exhaustive; other influential documents exist that have not been included, in order to ensure readability of the figure (e.g., [126]). This analysis shows that there are three clusters of the most prominent publications (red, green, and blue). The red cluster includes studies primarily focusing on the core concepts, definitions, and trends of smart cities (e.g., [122,123,[127][128][129][130][131]). This cluster associates with the red cluster (the smart city concept cluster) identified in the thematic analysis in Section 3.2. The majority of the influential documents in this cluster are primary resources of the smart cities literature from the first period. Given their focus on fundamental smart city concepts, they have played important roles in guiding smart city research. The second largest cluster, depicted in blue, mainly comprises studies focused on the implementation and realization of smart cities. These cover various issues such as the success of smart city programs in achieving their (utopian) goals [132][133][134], the utility of big data analytics for improving socio-economic and governance mechanisms in cities and enhancing operational efficiency [135,136], weaknesses that need to be addressed [133], corporate smart cities [132,134], smart mentality [136], and the roles of different stakeholders and actors in the management and realization of smart cities [136]. This cluster is in close relationship with the blue cluster in Figure 2, which is focused on the applications of smart cities. The third largest cluster (green) primarily addresses issues related to the underlying technological foundations of smart cities, including the IoT, cloud computing, internet algorithms, and wireless networks [117,137,138]. This cluster is closely associated with the Internet of Things cluster identified in the thematic cluster analysis in Figure 2.
The results of the citation burst analysis also confirm those of the co-citation analysis and show that works such as [123,127,133,139] have been very influential in advancing smart city discourse in the literature (Figure S2 of the Supplementary Materials).
Influential Authors
To identify the most influential authors, it is also possible to set 'cited authors' as the unit of analysis in the co-citation analysis. Details of the top 20 most prominent authors (in terms of citations) are shown in Table S5 of the Supplementary Materials. As shown in Figure 10, three major clusters of influential authors can be identified that are, to a great extent, consistent with the results of the analyses in the previous sections (prominent publications and thematic cluster analysis). The red cluster includes authors who have mainly worked on the core concepts and definition of smart cities (such as Tan Yigitcanlar, Andrea Caragliu, Nicos Komninos, Taewoo Nam, Margarita Angelidou, Vito Albino, and Michael Batty). In other words, the authors of this cluster have primarily contributed to topics included in the smart city concept cluster of Figure 2. The second important cluster (blue) includes authors with expertise in big data who have worked on issues related to the implementation and the future of smart cities (e.g., Rob Kitchin, Anthony Townsend, Robert Hollands, and Alberto Vanolo). This cluster associates with the big data analytics and smart city applications cluster identified in the thematic analyses (blue cluster in Figure 2) and with the blue cluster identified in the previous section (influential documents in Figure 9).
The third cluster (green) includes authors whose expertise revolves around the Internet of Things and the underlying technical issues related to smart cities. Key authors in this cluster are Andrea Zanella and Luigi Atzori. This cluster is also in close connection with the green cluster (Internet of Things) identified in Figure 2 and the green cluster among the most influential documents (Figure 9).
Discussion
The concept of smart cities has gained significant momentum in science and policy circles over the past decade or so. In fact, many cities around the world have developed and/or are planning to develop smart city programs. In doing so, they hope to make great strides towards improving the quality of life of their citizens, enhancing the efficiency and effectiveness of urban operations, and developing solutions to overcome societal challenges such as climate change. Along with the growing interest in smart city agendas, the scientific literature on smart cities has also been expanding rapidly over the past few years. The main objectives of this study were to provide an overall understanding of the knowledge structure and trends in the smart cities literature, and to find out which sources, references, countries, institutions, and authors have made significant contributions to the development of the field. For this purpose, we relied on bibliometric analysis and science mapping tools (VOSviewer and CiteSpace) that allow for performance analysis and visualization of the knowledge domain.
Our results show that, although some research has been published on smart cities since the early 1990s, the field experienced a slow pace of growth until 2010. Interest then increased suddenly, and the number of publications has progressively risen ever since. This could be attributed to the extensive investment campaigns of companies such as IBM in the late 2000s [5], and to major advances in information and communication technologies and their widespread penetration of urban communities [6]. While the field grew considerably between 2010 and 2015, it was not until 2015 that the growth rate became exponential. Generally, 2015 is considered a milestone year for research related to cities, given the emphasis on the role of cities in major international policy documents released in or around 2015, such as the 2030 Agenda, the New Urban Agenda, the Sendai Framework for Disaster Risk Reduction, and the Paris Climate Agreement.
To better understand the thematic focus of the field over the past three decades, we first conducted a term co-occurrence analysis for the whole study period. This showed that the existing literature can be divided into three major clusters focused on smart city concepts, IoT, and big data analytics. As this is still a relatively new field of research, the literature is dominated by underlying conceptual and technical issues related to the planning and development of smart cities. In contrast, research on the implementation and applications of smart cities has been relatively limited. Regarding smart city concepts, existing research has mainly focused on issues such as linkages between smart and sustainable cities, and the use of smart solutions to transform urban governance. Linkages between smart cities and other important topics such as climate change and resilience seem to have received relatively less attention. As for the cluster focused on underlying technical issues, there has been a relatively balanced focus on various topics such as IoT, the architecture of internet networks, security/privacy issues, and cloud/fog/edge computing. The cluster on implementation is mainly focused on applications of big data analytics and machine learning techniques for enhancing efficiency and optimizing urban operations. These have mainly been applied to the energy and transportation sectors, and further research in other sectors is needed.
Dividing the study period into two sub-periods (1991-2015 and 2016-2021) provided more insight into the thematic structure of the field and how it has evolved over time. It was found that until 2015 the knowledge base was limited. However, the three major clusters discussed earlier could still be identified. As expected, during these initial years, concepts and fundamental technical issues were dominant. During the second period, the intellectual base grew significantly, but the focus remained on the three major clusters identified for the first period. In terms of concepts and planning-related issues, more attention to linkages between smart cities and sustainability/climate change can be observed in the second period. The term 'climate change', however, is still not a dominant term. Attention to sustainability and climate change is expected to continue in the coming years, as there are increasing hopes that smart cities will increase the capacity to overcome societal challenges. Additionally, more attention to issues related to decision making and governance in the second period indicates increasing recognition of the role that smart solutions can play in transforming urban governance. As for the cluster on technical issues, a clear transition is the attention to issues related to security and privacy. As discussed earlier, these have been discussed particularly in relation to blockchain, indicating increasing recognition of its importance for addressing security-related concerns that may grow further in the coming years. Regarding the cluster on applications, the noteworthy transitions include increased attention to data management and big data analytics, and further attention to the utility of smart technologies for enhancing operational optimization. As for the sectoral focus, energy and transportation are the two sectors that have been emphasized in both periods. Based on this, we argue that more research in other sectors is also needed as smart city initiatives continue to be rolled out around the world in the coming years. In fact, the COVID-19 pandemic has given even more momentum to smart city initiatives around the world, and many cities have relied on smart solutions to combat the pandemic [110]. An updated bibliometric analysis in the next 1-2 years is needed to understand the influence of the recent pandemic on the structure of the field.
Overall, what can be learned from the thematic analysis is that existing research on smart cities is dominated by either conceptual issues or underlying technical aspects. It is, therefore, essential to do more research on the implementation of smart cities and the actual and/or potential contributions of smart cities to solving societal issues. Terms related to energy and transportation were highlighted in the term co-occurrence analysis, indicating that issues related to the implementation of smart solutions in these sectors have received some attention. However, smart solutions can also be applied to other urban sectors. Another potential gap is the limited focus on people and governance as two major dimensions of smart city development [72]. Terms related to these dimensions (e.g., citizen participation and e-government) had a marginal position in the term maps and, therefore, warrant further research. This, however, does not mean that other dimensions do not deserve further research. For instance, while climate change has gained traction over time, it still does not have a central position in the term maps. Considering the significance of addressing climate change, more research is needed to better understand the actual and/or potential contributions of smart cities to achieving climate change adaptation and/or mitigation targets.
In addition to analyzing the thematic structure, this paper also identified key journals, authors, references, institutions, and countries that have made relatively more contributions to the development of the field. This information can be used by interested readers, especially those new to the field, to better understand the structure of the smart cities literature and make more informed decisions regarding their future research planning and design. Overall, this study has provided a better understanding of the overall structure of the smart cities literature. However, given the rapid pace of publications, regular updates are needed to keep up with the tremendous knowledge explosion on smart cities. Additionally, more specific analyses of each of the three clusters identified in this study are recommended, as they may provide more detailed information on how the clusters have evolved over the years. Finally, it is essential to mention that the approach taken in this study is not without limitations. The main limitation is that the existing software tools for bibliometric analysis can only process data related to publications indexed in scientific databases such as the WoS and Scopus. Accordingly, they cannot be used to analyze potentially influential documents not indexed in such databases (i.e., grey literature). Indeed, it is important to reiterate that bibliometric analyses cannot replace systematic reviews, which can be used to analyze both grey and peer-reviewed literature. Conducting more systematic reviews is also necessary to gain more granular information regarding, for example, the geographic focus of the research or the actual/potential contributions of smart cities to addressing societal challenges such as climate change.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/su13137140/s1. Figure S1: Top 25 keywords with the strongest citation bursts in the second period. Figure S2: Top 25 references with the strongest citation bursts. Table S1: Top 20 most influential journals. Table S2: Countries making more contribution to the smart cities literature. Data Availability Statement: The datasets generated during and analyzed in the current study are not publicly available due to further, ongoing research projects but are available from the corresponding author on reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest. | 14,053.2 | 2021-06-25T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Computer Science",
"Political Science"
] |
Cheminformatics and the Semantic Web: adding value with linked data and enhanced provenance.
Cheminformatics is evolving from being a field of study associated primarily with drug discovery into a discipline that embraces the distribution, management, access, and sharing of chemical data. The relationship with the related subject of bioinformatics is becoming stronger and better defined, owing to the influence of Semantic Web technologies, which enable researchers to integrate heterogeneous sources of chemical, biochemical, biological, and medical information. These developments depend on a range of factors: the principles of chemical identifiers and their role in relationships between chemical and biological entities; the importance of preserving provenance and properly curated metadata; and an understanding of the contribution that the Semantic Web can make at all stages of the research lifecycle. The movements toward open access, open source, and open collaboration all contribute to progress toward the goals of integration.
INTRODUCTION
Cheminformatics is usually defined in terms of the application of computer science and information technology to problems in the chemical sciences. Brown 1 introduced the term chemoinformatics in 1998, in the context of drug discovery, although informatics techniques have been applied in chemistry since the 1950s and cheminformatics now relates to a broader set of contexts. Willett, 2 who uses the name 'chemoinformatics', provides a brief history of the development of the discipline. Warr, 3 who parenthesizes the 'o' in the title of her article, gives a more comprehensive description. We follow the Journal of Cheminformatics 4 in adopting the shorter name. Both articles describe the application of cheminformatics to drug discovery and how the latter has influenced the development of cheminformatics. The allied discipline of bioinformatics evolved more recently, in response to the vast amount of data generated by molecular biology, applying mathematical and computational techniques not only to the management of that data but also to understanding the biological processes, pathways, and interactions involved. In his paper about the commercialization of bioinformatics, Jones 5 sums up the key factors that have influenced the development of the discipline. Sukumar et al. 6 have reviewed the interaction between cheminformatics and bioinformatics. They identify data transformation and data fusion as vital aspects on which further integration depends, noting the importance of semantics for achieving a more holistic approach. The goal is to establish systems chemical biology as a discipline, as outlined by Oprea et al. 7 Very recently, Wild et al. 8 have surveyed the current status of systems chemical biology, particularly with regard to the Semantic Web. Chepelev and Dumontier 9 refer to the emergence of systems chemistry, suggesting the development of a more systematic view of chemical experiments in an interdisciplinary context. However, they do not include among their references the 2008 review of systems chemistry by Ludlow and Otto, 10 which considers this emerging discipline from a complex systems perspective. Ludlow and Otto restrict themselves to synthetic systems in solution, for example, combinatorial chemistry, but also cover other multivariate systems, including models that might contribute to the understanding of biological systems.
With increases in computing power came not only a growth in capability but also a dramatic expansion of the volume of data produced and a demand for more sophisticated information technology to keep pace with the increased quantities of data. As chemistry and biology evolved, the greater information processing capacity stimulated differentiation and specialization within these disciplines, leading to subcategories within each field. At its most basic, chemometrics applies mathematical and statistical methods to the design of experiments with chemical systems, the analysis of the data obtained, and the understanding of those systems. As such, chemometrics clearly predates cheminformatics. Similarly, biostatistics, the application of statistical methods to biology, came before bioinformatics.
In general terms, chemometrics does not entail knowledge of chemical structure, being concerned mainly with obtaining information from data. The same might be said of biostatistics. Cheminformatics and bioinformatics seek to discern the patterns in the information, to elicit chemical and biological knowledge. Any distinction between these two branches of informatics relies mainly on the size and complexity of the molecules studied. Figure 1 shows the relationship between the four disciplines, but without clear divisions, owing to the potential overlaps. The two informatics disciplines take their respective sciences, distinguished here by the size and complexity of the molecules studied, further along the data-information-knowledge sequence. The scope for applying all four remains large, as demonstrated in the recent review of the enumeration of chemical space by Reymond et al. 11 Cheminformatics also embraces the distribution, management, access, and sharing of chemical data, and it is to these aspects of the discipline that the Semantic Web has so much to offer, by integrating heterogeneous sources of chemical, biochemical, biological, and medical information. The twenty-first-century e-Science and e-Research programs stimulated progress toward a more holistic and data-centric approach to the chemical sciences: Kim 12 writes of Semantic Chemistry, and Adams 13 describes chemistry as a 'conservative discipline', having noted its comparative reluctance to evolve a culture of data and knowledge sharing, but adds that chemistry is now participating in the Semantic Web.
Hawizy 14 discusses a 'semantification workflow' for exploiting the potential of linked data, which she argues will have a profound impact on the development of science in the twenty-first century. However, she acknowledges the inhibitors to accessing chemical information sources. Frey 15 discusses the significance of the support of virtual organizations and the need for the coordinated development of ontologies for chemistry and other nonbiological disciplines. A Semantic Science blog makes a plea that we do not forget the data from small projects, which can become big data when aggregated. 16 Semantic Web technologies can achieve that aim, even though the social and commercial aspects of using the Semantic Web remain areas in need of work. The linkage of data and resources is a recurrent theme in 'The Fourth Paradigm', a book about data-intensive scientific computing. 17 With regard to chemistry, Frey 15 stresses the importance of links between laboratory records and the computer systems that hold the data, but notes the need for better ways to maintain those links. Later in the same article, he says: 'It is the links that add value; but getting people to add them, or add sufficient information that they can be created automatically, is proving to be hard.' Links can reduce the time to data discovery, but the provenance of that data, and indeed of computational services, remains a concern. The outputs of one phase of the research lifecycle are often inputs to another phase: semantic links can help to ensure that the provenance trail remains intact. The so-called 'Duke University scandal' strongly endorses this point. Although not directly related to chemistry, the article by Ince 18 amply demonstrates the importance of provenance information for both audit and reproducibility. However, to reinforce the need to capture the relevant metadata, researchers must perceive advantages in terms of, for example, improved accuracy, easy record keeping, and less repetition 15 : the ultimate aim is Curation@Source. 19 This review shows how the Semantic Web is beginning to have an impact on cheminformatics by aiding the discovery and reliable reuse of data, facilitating automated processing of that data, and providing enhanced provenance.
We start our discussion by considering the generation of chemical data and the nature of this data in comparison to other related disciplines. This data needs to be managed, an increasingly difficult task given the quantities of data now available. To be useful, the data needs to be integrated, abstracted, and made discoverable and deliverable in an intelligent and intelligible manner to other chemists and researchers in general. We discuss the value of chemical identifiers, metadata, vocabularies, linked data, and provenance, and how these are being achieved with Semantic Web technologies and ontologies. We return to an overview of the application of these ideas to the overall research lifecycle, to place them more fully in context, and then discuss the deployment of the Semantic Web, workflows, open data and, more generally, interoperability and semantically enhanced provenance.
DATA MATTERS
Chemists have always generated data, and the chemical sciences have relied on data to advance understanding of the discipline. Vast quantities of experimental data are now available, owing to new spectroscopic and visualization techniques, combinatorial and high-throughput methodologies, and increasingly complex computational investigations: quantum mechanical structural determinations and simulation dynamics. Each year computing facilities become more powerful, and indeed have to do so, just to keep pace with the expanding volume of data. The imperative to make the best possible use of the data available, especially given the costs associated with its collection, raises issues of preservation, curation, discovery, and access. These issues are at the core of the Semantic Web vision. 20,21 Handling this data and extracting information and knowledge from it almost becomes a discipline in its own right, the science of informatics. Informatics depends on data, but it is essential that the data is reliable and of an assured quality; moreover, that quality must be capable of being assessed. This requirement is particularly pertinent to the drug discovery process, for which the emphasis of cheminformatics has shifted from techniques to the management, curation, and integration of the large amounts of potentially useful data, with increasing dependence on Web services (see Ref 22 and references therein). Drug discovery has evolved from being an essentially empirical process, through rational design and large-scale, high-throughput experiments, to approaches based on genomics, which generate large amounts of potentially useful data. 23 Drug discovery also relies on bioinformatics. Curcin, 24 reviewing Web services in the life sciences, acknowledges the potential importance of Semantic Web technologies, but remarks that a systematic and standardized approach is needed. Tetko 25 compares the adoption of Web services by the bioinformatics and cheminformatics communities, stressing that the differences arise from the quantity of data involved and the scale of public funding in the bioinformatics area. The complexity of ownership and the perceived potential to generate income, on top of the native complexity and scale inherent in descriptions of chemistry (chemical space), lead to fundamental problems in the management of the data. It is essential to address these problems if data-intensive chemistry is to realize its potential for integrating with other material and life science disciplines that are underpinned by chemistry.
Data Management and Integration
Frey notes a preference among laboratory scientists for storing data in flat files (on computers hidden under desks), which is not a good approach for curation, reuse, or preservation. 15 He examines alternatives for larger-scale preservation, such as relational databases and laboratory information management systems (LIMS), and discerns a need to cover 'the middle ground between the uncontrolled flat files and the rigid relational database'. Reese 31 suggests that relational databases are appropriate for data that changes frequently and for which maintaining integrity is important. He argues that data that does not change is best preserved in flat files, in tabular form wherever possible, and also proposes that, as well as the raw data, the archive should contain a codebook that records how the data is entered, together with the descriptive metadata. 31 The Semantic Web is also capable of covering the middle ground and capturing the same information, given sufficient attention to metadata descriptions.
BOX 1: WEB SERVICES
In the early days of scientific computing, researchers wrote their own, almost inevitably bespoke, code. Subsequently, application packages and software libraries were developed, enabling considerable efficiency gains. The next key evolutionary step was the service-oriented architecture (SOA) approach, with the sharing of functionality increasingly provided through Web-based resources. A measure of the extent of the services available in the bioinformatics area is provided by the BioCatalogue, 26 which maintains a list of these services and service providers. Web services can be used for functions ranging from information retrieval to performing calculations. These services offer well-defined programming interfaces that are essentially independent of the programming languages and platforms used to access them. The formal definitions of Web services interfaces, such as the WSDL 27 and SOAP 28 specifications, are beyond the scope of this review. However, the simpler REST (Representational State Transfer) architecture is now the preferred approach to implementing Web services, 29 a choice that presumably also influences the design of Web services deployed in drug discovery. Another design consideration is that of thin versus thick clients. 30 Thick clients employ a formal, machine-processable interface definition, whereas thin clients rely on the server to interpret each request. Enterprise applications require rigorous specifications of business requirements, and so prefer thick clients.
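As a concrete illustration of the REST style summarized in Box 1, the short Python sketch below retrieves an InChIKey from PubChem's public PUG REST interface. The URL pattern and response layout follow PubChem's published documentation, but should be checked against the current version before reuse:

```python
import requests

# Illustrative REST call to PubChem's public PUG REST interface;
# verify the URL pattern and response layout against the current
# PubChem documentation before relying on them.
name = "caffeine"
url = (
    "https://pubchem.ncbi.nlm.nih.gov/rest/pug/"
    f"compound/name/{name}/property/InChIKey/JSON"
)
response = requests.get(url, timeout=30)
response.raise_for_status()

# The property table holds one entry per matched compound.
properties = response.json()["PropertyTable"]["Properties"][0]
print(properties["InChIKey"])
```

The entire exchange is a single stateless HTTP request, which is precisely what makes the REST approach attractive for thin clients.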
In recent years, storage and computation 'in the cloud' have added a fresh dimension to the management of large volumes of data. Several of the references cited in this review mention cloud computing, but none cover it as a specific topic.
On a smaller scale, Alsberg and Clare 32 have used a wiki in conjunction with version control software to manage the data objects generated by their chemometric research projects, enabling them to integrate project information with data. They point to the advantages of flexibility and communication, but acknowledge a number of shortcomings, some of which are the undesirable consequences of flexibility. From the perspective of this review, the lack of semantic annotation is significant: the data is not curated for machine processing.
In 2006, Taylor 33 reviewed the use of electronic laboratory notebooks (ELNs). His focus was on commercial systems and the regulatory considerations for electronic laboratory records, remarking that academic researchers had shown little interest in ELNs. The two exceptions he noted were the CombeChem 34 and SmartTea 35 projects, to be discussed more fully in later sections of this review.
Considering the volume and complexity of the data available for pharmaceutical R&D, Slater et al. 36 argue that it is not enough to bring together data and information from multiple sources. Semantics are necessary to interpret the information and derive knowledge. They propose a knowledge representation scheme that corresponds to the Semantic Web vision of data and resources described for use by humans and machines. In 2009, Wild 37 reviewed the use of data mining, together with Semantic Web techniques, for achieving the semantics-based integration envisioned by Slater et al. 36 The following year, Guha et al. 38 reviewed advances in the data mining of large heterogeneous chemical datasets, noting throughout the influence of semantic technologies on infrastructures for processing chemical information. Stephens et al. 39 have used an RDF (Resource Description Framework) data model to aggregate the disparate data used for drug discovery. 40 McCusker et al. 41 have created a data warehouse based on Semantic Web technologies, as a tool for the caGrid developed by the US National Cancer Institute (NCI). The Chem2Bio2RDF project illustrates what can be achieved by using semantics to integrate data from multiple chemical and biological sources. 42 Chem2Bio2RDF demonstrates how the federation of resources can facilitate search.
The RDF data model describes entities in terms of subject-predicate-object expressions, commonly known as triples. These expressions are held in a triple store, which is a database optimized for the storage and retrieval of triples. 43 Frey 44 describes the choice of RDF for the CombeChem project, and considers the implications of using RDF.
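As a minimal illustration of the triple model, the Python sketch below uses the open-source rdflib library to assert a few statements about a compound and query them with SPARQL. The namespace and property names are hypothetical placeholders, not terms from any published vocabulary:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Hypothetical namespace for illustration; real datasets would use
# established vocabularies rather than this example one.
EX = Namespace("http://example.org/chem/")

g = Graph()

# Each add() asserts one subject-predicate-object triple.
g.add((EX.caffeine, RDF.type, EX.Compound))
g.add((EX.caffeine, EX.inchikey, Literal("RYYVLZVUVIJVGH-UHFFFAOYSA-N")))
g.add((EX.caffeine, EX.formula, Literal("C8H10N4O2")))

# A triple store answers graph-pattern queries over such statements;
# here a SPARQL query is evaluated in memory over the small graph.
results = g.query("""
    PREFIX ex: <http://example.org/chem/>
    SELECT ?compound ?key WHERE {
        ?compound a ex:Compound ;
                  ex:inchikey ?key .
    }
""")
for compound, key in results:
    print(compound, key)
```

A production triple store would hold many millions of such statements, but the query pattern remains the same, which is what makes federation across RDF sources tractable.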
Hastings et al. 45 assert that the application of cheminformatics is critically dependent on the data exchange process, and are developing the Chemical Information Ontology (CHEMINF) to facilitate the precise description of chemical entities. Their motivation is twofold: (1) to provide a common reference point for interrelating terminology developed independently; and (2) to enable Semantic Web tools to integrate data from disparate sources for reuse in data-driven research. They state their aim to be the adoption of CHEMINF as a standard by the cheminformatics community.
Two of the coauthors of the CHEMINF paper, Chepelev and Dumontier, 9 report related activities intended to improve the ability of Semantic Web tools to federate chemical data and information. SADI (Semantic Automated Discovery and Integration) is a framework that deploys RESTful Semantic Web Services. The novel feature is that SADI services generate an output class by annotating the input class, thus preserving the provenance of the service explicitly. They also implement CHESS (Chemical Entity Semantic Specification) for representing chemical entities and their descriptors. 46 A key aim for CHESS is to enable the integration of data derived from various sources, thereby facilitating better use of Semantic Web methodologies.
The integration and aggregation of data from multiple sources reaches a zenith in drug discovery research. Blomberg et al. 47 consider a range of initiatives aimed at increasing the interoperability of data and information, paying particular attention to semantic approaches and the use of Semantic Web technologies. They describe the formation and objectives of the Open PHACTS consortium, which will adopt a Semantic Web approach to address the bottlenecks in small molecule drug discovery.
Discovery and Access
Discovery techniques that exploit the semantics of document content were in use well before the Semantic Web concept emerged. Jiao and Wild 48 have applied text-mining techniques to biomedical literature, identifying characteristic data that enables them to extract information about chemical interactions. The SPECTRa-T project has used text-mining tools to extract chemical objects from electronic theses. 49 A key difference is that SPECTRa-T stores the extraction results as RDF triples, allowing subsequent reuse and analysis with Semantic Web tools. Correspondingly, raw data, if sufficiently well described, should be susceptible to data mining techniques.
A recent example of the application of such techniques is the Collaborative Chemistry Database Tool (CCDBT), 50 which is a repository for the raw data generated by computational chemistry packages. The authors recognize the vital importance of extracting metadata from the raw data, thereby enabling other computational chemists to reuse the data and/or the results derived. A sequence of parsers extracts metadata from the raw data and populates a database for subsequent query based on the metadata model.
However, text mining is retrospective discovery. Frey 15 argues for a prospective approach to discovery, advocating the use of systems compatible with the Semantic Web in the laboratory, thus facilitating at source any subsequent discovery process. He warns, however, 'it is crucial to appreciate that the researcher's view of the content of an information system can be, and usually is, quite different from the "view" required by a computer system attempting to act for, or with, that human.' With both retrospective and prospective approaches to gathering machine-readable and processable data, the metadata is essential, and it is in handling this aspect that Semantic Web technologies come to the fore.
Taylor et al. 51 demonstrate how Semantic Web technologies can be deployed in the storage and access of molecular structures and properties. Using unique identifiers and relationships, represented as RDF triples, they create a semantic database with the potential to enrich the exploitation of the data therein. One aspect of structure searching that has yet to feel the influence of the Semantic Web is that of finding chemical structures in patents, an area recently reviewed by Downs and Barnard. 52 Frey 15 also draws attention to the need for access control, in particular to protect intellectual property rights. He suggests that security models need to be rich but not overwhelming. Park has considered the requirements for secure collaborative work on the Semantic Web, including the need for efficient access control. 53 The issues that arise are clearly generic and not confined to any specific application areas.
DESCRIBING CHEMICAL DATA
A key and essential part of making data available via the Semantic Web is the existence of unique identifiers. In this requirement, the Semantic Web lines up with a considerable volume of work on chemical nomenclature as a way to create systematic (if not always unique) identifiers. Identifiers are the keys to the description of chemical structures and data although, of necessity, chemical identifiers should relate uniquely to a single structure. The chemical names used in publications are unique, but are not suitable for machine manipulation. Historically, the Wiswesser Line Notation 54 gave way to SMILES (Simplified Molecular-Input Line-Entry Specification). 55 Owing to some limitations with SMILES representations, IUPAC introduced the International Chemical Identifier (InChI) and its derivative, the InChIKey, which is a fixed-length hash code representation of the InChI itself. 56 With the notable exception of polymers, the great majority of compounds, including organometallics, can be represented with InChI identifiers.
Williams 57 notes the importance of the InChI for the Semantic Web in chemistry. Taylor et al. 51 highlight the unique nature of the InChI and consider the construction of a uniform resource identifier (URI) from an InChIKey. Such URIs enable links between chemical properties, data, and publications, or entries in an ELN. Coles et al. 58 have investigated the potential of the InChI for chemical information retrieval. Using the InChI strings for a corpus of 104 molecules whose crystal structures were published under the eCrystals/eBank project, they obtained high values for both precision and recall. Tests with other corpora were similarly encouraging.
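As a simple illustration of these identifiers, the open-source RDKit toolkit (not discussed in the text) can generate the InChI and InChIKey for a structure supplied as SMILES; the URI pattern shown is purely hypothetical, by analogy with the approach of Taylor et al.

```python
from rdkit import Chem

mol = Chem.MolFromSmiles("c1ccccc1")        # benzene, given as a SMILES string
inchi = Chem.MolToInchi(mol)                # 'InChI=1S/C6H6/c1-2-4-6-5-3-1/h1-6H'
inchikey = Chem.MolToInchiKey(mol)          # fixed-length hash of the InChI

# A hypothetical URI built from the InChIKey, usable as an RDF subject.
uri = f"http://example.org/inchikey/{inchikey}"
print(inchi, inchikey, uri)
```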
Bhat 59 discusses some potential difficulties with integrating the information needed for AIDS research and proposes methods and procedures to prepare data for a Chemical Semantic Web. He identifies as a specific challenge the unique naming of each substructure of a given compound and aims to build an ontology for the formal description of these components. Describing the relationships between chemical and biological entities can be of equal importance, especially for drug discovery. Guha et al. 38 suggest that the aim should be a holistic view of the relationships between small molecules and biological systems. Although Williams praises the quality of the chemical information provided by Wikipedia, 57 he points out that such descriptions are not machine-readable. However, DBpedia Live specifically aims to extract structured information from Wikipedia and convert it to RDF. 60 Kohler, 61 reviewing the three-volume set 'Chemical Biology: From Small Molecules to Systems Biology and Drug Design', emphasizes the importance of integrating chemical and systems biology. 62 Describing the relationships between small molecules and biological entities will be key to that integration. The Semantic Web offers a formal mechanism for representing those relationships. For example, the ChEBI ontology 63 captures the role of a chemical entity in a biological context. PubChem 64 provides full descriptions of an extensive range of molecules and a chemical identifier with associated Web services (the identifier is not unique, in that a PubChem identifier points to only one molecule but many molecules have more than one PubChem identifier); however, it does not include the semantic descriptions needed for machine reasoning.
Metadata
Discussing the gap between bioinformatics and cheminformatics that existed in 2005, Curcin et al. 24 attribute the lack of integration to differences in databases and tools and to a shortage of cross-domain expertise, but do not highlight the importance of metadata, which now plays a vital role in achieving interoperation between these disciplines. Metadata is crucial for realizing the vision of the Semantic Web and enabling machines to perform the essential steps of integration: discovering data, interrelating data, and initiating cheminformatics tasks that act upon that data.
The commonly cited description of metadata as 'data about data' runs into difficulties even in basic situations. Pancerella et al. 65 give the example of a chemical formula, which can be metadata itself or be the object of other metadata, pointing out that the 'about' view can depend on perspective. Metadata is at the heart of their collaboratory for the multiscale chemical sciences (CMCS). They attach particular importance not only to discovering data across scales but also to preserving its provenance, goals that nearly 10 years later are regarded as essential. Moreover, the concerns they expressed about enforcing metadata standards across communities are in many ways alleviated by the tools of the Semantic Web, which provide, and work with, semantic metadata.
The formal recording of semantic metadata relies on ontologies, which are discussed in a later section. Ontology development is a rapidly evolving area and there has been a tendency for each group to create an ontology that meets its own needs. Although a set of standard chemical ontologies might seem desirable, the concern about alienation expressed by Pancerella et al. 65 remains pertinent. Fortunately, infrastructures based on RDF, for example, do permit interoperation. The reuse of parts of existing ontologies is becoming more common and systems are becoming available for recording metadata, for example, the Investigation/Study/Assay (ISA) infrastructure. 66 ISA assists with the reporting of experimental data, using community-agreed minimum metadata descriptions, thus ensuring that the metadata is sufficient to provide confidence in the data.
The reliability of metadata depends strongly on its capture as early as possible in the research lifecycle. Frey 19 makes a strong case for designing curation into research practices, which would require metadata to be captured in context, as the data itself is generated. Capture at source requires a combination of manual and automatic recording: for manual recording, it is essential that recording is easy and, insofar as is possible, places no additional burdens on researchers; automatic data acquisition should capture context as well as data. Frey 34 provides several examples of projects that have tackled the issues of curation, notably CombeChem. However, with regard to automatic data capture from networked instruments, Frey 15 also sounds a cautionary note. There are still issues with regard to ensuring that the data produced by such instruments conforms to international standards and has high quality metadata in a form that is usable by Semantic Web technologies. In an editorial for Drug Discovery Today, Williams and Ekins 67 express more general concern about the quality of much of the structure-based chemical data in the public domain, and make a case for government funding to support data curation. Previously, Williams 68 had emphasized the similar need for careful curation to ensure data quality in his review of Public Compound Databases. In former times, this was the role of national standards organizations and the international professional scientific bodies (ICSU, IUPAC, IUPAP, etc.), but funding has not been available to keep pace with the validation needs of the growing data volumes.
Vocabularies
A common vocabulary is fundamental to understanding and communication in cheminformatics and the Semantic Web, just as it is in most other spheres of human activity. Bhat 59 sees the development of common vocabularies and general ontologies, amongst other technologies, as research directions for the chemical Semantic Web. However, for a vocabulary to be common, the terms it contains must be agreed and workable in practice. Moreover, the vocabulary must be in a form that is readable by Semantic Web tools. Frey 15 notes that the capture of semantic relationships can lead to tension between freedom and control, in that controlled vocabularies inhibit the free text annotation with which researchers often feel more comfortable.
Many cheminformatics tools depend on metadata constructs that provide formal data descriptions by means of controlled vocabularies. Prominent among such constructs is the Chemical Markup Language (CML) for describing molecular species, first proposed in 1995. Since then, Murray-Rust and Rzepa 69 have continued to develop and extend CML.
Linked Data
Linked data, although generically an established concept, is fundamental to the Semantic Web. Tim Berners-Lee 72 has published a range of notes concerning Web design issues, including four principles for putting linked data on the Web. The InChI and InChiKey, discussed in an earlier section, are very important for linking both raw and processed data that relates to molecules. The eCrystals archive 73 uses InChI identifiers for linking to the data resulting from a single crystal X-ray structure determination, produced, for example, by the UK National Crystallography Service (NCS). 74 The significant aspect of this service (both the NCS and eCrystals) is its preservation of links to all the raw and processed data, thus exposing the details of the structure refinement to scrutiny. This approach is not only interesting and useful but also provides a good exemplar for provenance conservation and a route to unconventional dissemination with accepted provenance.
To enable either a human user or a software agent to access linked data, URIs must be dereferenceable, by one of the variations described by Berners-Lee. 72 The number and range of compliant datasets is growing, as shown by the W3C page that lists sources with dereferenceable URIs, 75 describing them as 'part of the emerging Web of Linked Data'. However, a search for the stem 'chem' produces only two matches, suggesting that the Semantic Web has much further to emerge if cheminformatics is to benefit from linked data. Curiously, the Linking Open Drug Data (LODD) Web site 76 does not appear in the list of sources, despite being under the auspices of the W3C. The LODD Web site lists several interesting resources, available in a number of formats including RDF, and Samwald et al. 77 describe the work of the LODD task force. They note that some of the LODD datasets are not fully open, owing to considerations that the task force is actively exploring (e.g., patient confidentiality).
ChemCloud 78 adopts the linked data initiative in providing an infrastructure to integrate a range of chemical, biochemical, and pharmaceutical databases. This project recognizes that the formats in these sources present a challenge to semantic integration. Given the prevalent use of XML formats in these databases, ChemCloud has developed tools for converting the XML data to RDF.
In 2004, Murray-Rust and Rzepa 79 published an article challenging the transclusion model on integrity grounds. They admit that their message is 'slightly tongue-in-cheek' but go on to propose a datument model, in which publications contain all the relevant parts, incorporated as the datument is published. Berners-Lee published his principles of linked data two years later, but it is perhaps notable that a search of all his design issues produces no matches for the stem 'integr' (to cover variants of 'integrity'). Although capturing links is likely to remain a challenge in the context of chemical experiments, it is perhaps fortunate that ensuring that laboratory data is linked to at least some of its related information should suffice to prevent that data becoming isolated.
Provenance
Provenance is of central importance to the union of cheminformatics and the Semantic Web. Borkum et al., 80 describing the oreChem project, point out the importance of the relationship between the level of trust in reported results and the provenance, or pedigree, of the data from which those results were derived. Their words echo the earlier observations of Pancerella et al., 65 regarding the importance of provenance for the accuracy and currency of scientific data. To ease the checking of provenance and validity, repositories need as much information as possible about the data they contain, and Semantic Web technologies offer the means for capturing and preserving that information.
In 2005, Simmhan et al. 81 published a survey of data provenance in e-Science. Although the CMCS is the only chemistry project they examine, they raise several general issues that remain pertinent today, including, but not limited to: rich provenance information can become larger than the data it describes; provenance usability depends on federating descriptive information; and coping with missing or deleted data requires further consideration.
To some extent, these issues can be addressed by the use of inference techniques, which is a natural step, given the enabling technologies of the Semantic Web. Provenance Explorer generates graphical views of scientific data provenance by using rule-based methods to infer provenance relationships automatically. 82,83 The system comprises a knowledge base of Web Ontology Language (OWL) files with relationships defined in the Semantic Web Rule Language (SWRL), an inference engine (Algernon), and a provenance visualizer.
The CombeChem project is an exemplar for capturing provenance information at source. 34,51,84 This project also recognized the need for the descriptive information to be pervasive, for example, including units. The ChemAxiom set of ontologies includes ChemAxiomMeta, which is intended to allow the provenance of data to be specified. 85 The need for provenance information to be reliable has potential significance for drug discovery, when molecular properties are computed: the provenance should show clearly the method of performing calculations. The Blue Obelisk Movement makes a similar point in the general cheminformatics context. 86 Its members urge that chemical computations should satisfy the scientific tenet of reproducibility, but note the surprising difficulty of ensuring the reproducibility of a calculation. They go on to argue that a global chemical Semantic Web will be difficult to implement without the processes necessary for validating resources and methods. Hastings et al. 45 also consider the provenance of calculated data to be particularly important, and use their Chemical Information Ontology (CHEMINF) to capture that information, for example, the parameters and the version of the code used to compute chemical properties.
SEMANTIC WEB TECHNOLOGY
Maximizing the value of the Semantic Web to cheminformatics depends in part on the availability of good tools. Murray-Rust et al., 87 in a perspective article, published in 2004 and entitled 'Representation and use of Chemistry in the Global Electronic Age', discuss the importance of appropriate tools for all aspects of the Chemical Semantic Web. A 2006 survey of the technologies comprising the Semantic Web and its architecture provides a comprehensive set of references. 88 This survey acknowledges the wide range of application areas without mentioning any specifically. Two years later, a survey of semantic e-Science applications describes chemistry as a 'hot field'. 89 The authors look forward to a promising future but note among the challenges two that remain pertinent today: existing data and social issues. Of the former, they say: 'providing structured data already existing in legacy database according to an agreed ontology can be a very labor-intensive task'. The social issues relate essentially to willingness to contribute to the creation of the Semantic Web.
In their book Introduction to Pharmaceutical Bioinformatics, Wikberg et al. 90 include a chapter about the Semantic Web that describes the standards and technologies in the context of cheminformatics and bioinformatics. Of all the Semantic Web technologies, arguably the most significant in terms of dependencies is RDF, the Resource Description Framework. In 2010, the Journal of Cheminformatics devoted a Thematic Series to 'RDF technologies in chemistry'. 91 Two of the papers in this series, about SADI 9 and CHESS, 46 have been covered in Data Management and Integration; the article by Samwald et al. 77 about LODD has been covered in Linked Data. Another article in the series, by Willighagen and Brändle, 92 addresses the use of RDF in chemistry specifically. The authors are generally optimistic about the future value of RDF technologies for chemistry, although they do question the usefulness of RDF for data in tabular forms and also sound a cautionary note about the inability of RDF to provide guarantees about data quality or data availability, for example.
Adams 13 published an overview in 2009 that considered semantic markup languages for chemistry, such as CML, as well as Semantic Web technologies.
Notably, he raises issues similar to those discussed by Chen et al. 89 in 2006: the processing of existing data, which Adams refers to as 'semantification'; and the sociocultural challenges. He observes that chemistry has lagged behind other disciplines in evolving a culture of data and knowledge sharing. As Frey 34 noted when describing the CombeChem project: 'All progress depends on individual scientists building on the results already produced by others'. Adams warns of the risk to progress in the biosciences in particular if chemistry continues to be reluctant to share its data.
The SPECTRa-T project has demonstrated the use of text-mining tools to extract semantic information from theses stored in legacy document formats, generating an RDF representation of the chemically relevant content. 49 It is self-evident that the issues related to data extraction and sharing would be mitigated by publishing open access data together with the article to which the data relates, as advocated by Bachrach. 93 This is an interesting development on a scheme that he and colleagues proposed a decade earlier, for journal articles to be marked up for reuse by readers. 94 Bachrach suggests the use of Web 2.0 tools to assist with peer review in an open environment. Fox et al. 95 envisage a wider use for Web 2.0 technologies, including SOAs for cheminformatics.
Storage and retrieval tools are essential, with an extensive range of triplestore implementations providing databases for persisting Semantic Web relationships, which consist of subject-predicate-object triples. The W3C standard for retrieving triples is SPARQL (SPARQL Protocol and RDF Query Language). 96 Willighagen and Brändle 92 discuss the use of SPARQL in cheminformatics, as do Chen et al., 42 when describing the Chem2Bio2RDF framework: these are just two examples. SemanticEye is a system intended to improve the accessibility of electronic publications and associated data, 97 along similar lines to those discussed above. The architecture of SemanticEye is based on the digital music model and relies on descriptive metadata that it stores as RDF. The original implementation used the Sesame framework 71; subsequently, Casher and Rzepa 98 have integrated SemanticEye with SPARQL.
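By way of illustration only, the sketch below shows how a SPARQL query can be issued against a small RDF graph with the Python rdflib library; the file name, namespace, and predicate are hypothetical and not drawn from any of the systems discussed.

```python
from rdflib import Graph

g = Graph()
g.parse("compounds.ttl", format="turtle")   # hypothetical local triplestore export

query = """
PREFIX ex: <http://example.org/chem/>
SELECT ?compound ?inchikey
WHERE { ?compound ex:hasInChIKey ?inchikey . }
"""

# Each result row binds the variables selected in the query.
for row in g.query(query):
    print(row.compound, row.inchikey)
```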
Ontologies
Ontologies for chemistry are not yet as well developed as those in the life sciences, but several initiatives are making encouraging progress. The first Casher and Rzepa 97 paper describes SemanticEye as an ontology with associated tools. Other groups have also created formal semantic descriptions as taxonomies and ontologies, in many cases to meet their own needs. The ChemCloud initiative is, to some extent, an attempt to contain this proliferation, but it still requires new ontologies to represent the information in existing databases. 78 Currently, ChEBI (Chemical Entities of Biological Interest) 63 is the most established ontology in chemistry, as described by Adams et al. 99 with a subsequent update by de Matos et al. 100 Adams 85 is also one of the originators of the ChemAxiom set of ontologies, which aims to provide a framework for the formal description of chemistry, in the form of a set of interoperable ontologies that describe both chemical concepts and chemical data.
The CHEMINF ontology, as described in Data Management and Integration, is particularly concerned to cater for the exchange of data about chemical entities with biological and bioinformatics applications. 45 As covered fully in the paper, CHEMINF extends several ontologies that are important in the biological context. Although the authors acknowledge the influence of CombeChem 34 they do not refer to the development of ChemAxiom, 85 possibly owing to concerns about the ChemAxiom approach, for example, that it does not provide dereferenceable URIs. All three are domain-specific ontologies that aspire to integrate with upper ontologies, particularly those in the Open Biomedical Ontologies (OBO) format. 101 CHEMINF also provides mappings to the Blue Obelisk Descriptor Ontology (BODO), which is covered in the 2011 review of the Blue Obelisk movement five years after its inception. 102 Choi et al. 103 have generated a small molecule ontology (SMO) to address the problem of integrating the properties of small molecules with data relating to biological activity. They emphasize the importance of Semantic Web technologies for both the development and exploitation of their SMO. On a broader level, Chen and Xie 104 have surveyed the use of Web ontologies in drug discovery, which is an activity that manifestly depends on the integration of chemical and biological data. One rather specific example of the use of ontologies in this respect is the semantic mining of patents. 105 Under the auspices of the CombeChem project, Frey et al. 35 adopted a human computer interaction (HCI) approach to designing an information system for capturing the data and metadata recorded by chemists during an experiment. From a Smart Lab perspective, CombeChem used RDF to classify chemical descriptors and demonstrated the explicit capture of the provenance of an experiment. 34 The Smart Tea project developed an ontology to model the Materials and Processes comprising the experiment, as one part of a system to support the experimental process from planning through to publication (at source).
Representations of experiments at both the planning and enactment stage are at the core of the oreChem infrastructure: the model enables researchers to describe both the prospective and retrospective provenance of a chemistry experiment. 80
THE RESEARCH LIFECYCLE
All scientific investigations generate a much wider range of material than just the results obtained, whether they are numbers or recorded observations. If such investigations are to benefit the wider science community, care is needed in the capture, preservation, and description of all of the material. Equal care is required in recording the subsequent stages of analysis and dissemination. This section examines how Semantic Web technologies can assist the cheminformatics community to achieve what the authors of this review refer to as continuous curation, throughout the research lifecycle.
Borkum et al. 80 highlight the need for 'collaboration between chemistry scholars and computer and information scientists to develop and deploy the infrastructure, services, and applications that are necessary to enable new models for research and dissemination of the scholarly results of chemistry research'. Frey 15 identifies three main phases in the research lifecycle: planning, execution, and dissemination. He contends that Semantic Web technology can speed up the planning phase by enhancing the discovery process, not only of relevant information, including publications, but also of people with similar interests and required skills. The e-Science community has encouraged the necessary collaboration by forming virtual organizations, but support for formal virtual organizations (VOs) has waned in favor of groups set up around social networking tools such as LinkedIn, Facebook, and Google circles.
The execution phase involves the capture of both data and observations in context and, importantly, the curation of that information. Chin and Lansing 106 set out the basic principles of capture in context, albeit for a biosciences collaboratory but one developed from the CMCS. 65 They note that context is both physical and scientific and is captured as metadata. They also discuss the importance of data provenance for tracing the evolution of datasets, to which contextual information can also be relevant. To apply these principles in an environment that exploits semantics, it is important to capture information in machine-processable formats. Frey 19 argues for curation to be an indispensable part of the experimental process, to be designed into every experiment: curation at source. The UK has established a national organization, the Digital Curation Centre, for tackling the challenges of preserving and managing research data. 107 The ELN is now essential to good practice in capture and curation. 'ELN and the Paperless Lab' is a selective compilation of articles written about ELNs in recent years. 108 This eBook provides a broad range of insights into the evolution of ELNs and the motivations of the experimenters who use them. Previously, Taylor 33 had reviewed the use of ELNs specifically for chemistry and biology: at that time (2006) he predicted that increased adoption would depend on the technology becoming proven and affordable. More recently, Quinnell et al. 109,110 have reported trials of an ELN with selected undergraduate and postgraduate chemistry students at the University of New South Wales, Australia.
The dissemination phase is, in a sense, recursive, in that collaboration pervades the research lifecycle. Williams reviewed the use of Internet-based tools, including Semantic Web tools, for drug discovery, 57 concluding that, for commercial organizations, blogs and wikis are more likely to be adopted internally than for external collaboration. Academic institutions are likely to be significantly less inhibited. However, it might be necessary to distinguish between the informal sharing of ideas and the more formal exchange of structured information. Several authors have commented on the antipathy of chemists toward data sharing. In 2008, Downing et al. 111 conducted a survey of all research chemists at both Cambridge and Imperial College to determine data preservation practices and needs. They found a tendency to store data as hard copy, and where data was preserved electronically, a range of formats were in use. The attitude to storing data in an open repository depended in part on a reluctance to make data available prior to publication, with access restricted to other group members until then.
For scientists, publication is the ultimate form of dissemination, so researchers with an interest in semantic and Web 2.0 technologies have been drawn toward approaches that go beyond the traditional paper publishing. Marking up text with a language that conforms to a publicly known schema is one approach, leading Murray-Rust and Rzepa 112 to propose CML for this purpose. At the same time, Frey et al. 113 presented a case for publication at source, using Grid technology to disseminate information about the conduct of experiments as well as the resulting data: Figure 1 in their paper is an early depiction of the linked data concept. Shotton 114 has reviewed progress toward semantic publishing, in which he cites journals published by the Royal Society of Chemistry and particularly the RSC Project Prospect as an exemplar of semantic publishing. The RSC has made significant advances in this area, with RSC Semantic publishing 115 (as Project Prospect is now known), which is linked to the RSC ChemSpider database. 116 Manuscripts submitted to the RSC are annotated with semantic markup to highlight the important chemical data, particularly the structures. The data markup includes links to the relevant text and additional property data. Subsequently, search engines can exploit the annotations, for instance to discover papers that relate to a particular structure. The approach taken by this RSC project demonstrates the advantages of publication in a format that is compatible with Semantic Web technologies, which can in turn generate further insights from such semantically enriched information. RDF functionality has recently been added to the ChemSpider interface, enabling Richard Kidd, Informatics Manager at the RSC, to blog about what might be possible with semantic chemistry. 117 Martinsen 118 refers to the RSC project when discussing semantic tagging in his report on the Evolving Network of Scientific Communication session at the 223rd meeting of the American Chemical Society. His report notes the increasing impact of Web 2.0 technologies, a theme taken up by Bachrach, 93 as discussed in the Semantic Web Technology section of this review.
DEPLOYING THE SEMANTIC WEB
The design and discovery of new drugs is the most prominent application of cheminformatics and therefore the natural area for deploying Semantic Web technologies. Willett 2 identifies structure search and property modeling as two related areas at the foundations of modern cheminformatics. The eMolecules database provides for substructure and molecular similarity searches, but does not currently exploit semantic labelling. 119 ChemSpider provides equivalent facilities and also provides Web services for querying and accessing its database. 116 Although ChemSpider is moving toward including semantic methods, 117 these are not yet evident on its Web site. The Crystal-Eye database accumulates crystallographic structures, to which it can add semantic markup when converting the data to CML. 120 Richard et al. 121 have discussed the value of semantic markup in associating structures with important properties, in their case toxicity data. However, the overall message is that structure search has been notably slow to adopt Semantic Web technology. The issue is potentially quite fundamental: structure search is mostly substructure search, for which efficient algorithms already exist, and it is not clear that this substructure view of the world is actually compatible with the semantics of the whole structure.
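For contrast with the semantic approaches discussed above, the following sketch shows the conventional substructure search that the paragraph refers to, using the open-source RDKit toolkit; the molecules and the SMARTS pattern are illustrative only.

```python
from rdkit import Chem

molecules = [Chem.MolFromSmiles(s) for s in ["c1ccccc1O", "CCO", "c1ccccc1N"]]
pattern = Chem.MolFromSmarts("c1ccccc1")    # query: an aromatic six-membered ring

matches = [Chem.MolToSmiles(m) for m in molecules if m.HasSubstructMatch(pattern)]
print(matches)   # phenol and aniline match; ethanol does not
```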
Quantitative structure activity relationships (QSAR) are the established basis for deriving structure property relationships that can be used in drug design to predict the chemical properties of new structures. QSAR modeling has made reasonable progress in using Semantic Web technologies, such as RDF: Willighagen et al. 122 give a number of examples of linking RDF and QSAR modeling; Chepelev and Dumontier 9 use SADI to link to QSAR functionality in the CDK (Chemistry Development Kit).
As well as investing in the discovery of new drugs, the pharmaceutical industry also devotes resources to finding new uses for known drugs. Oprea et al. 123 have recently reviewed the techniques used to find new uses. They argue that Semantic Web technologies could contribute to an integrated approach to discovering the associations on which drug-repurposing efforts depend.
The Indiana University School of Informatics has developed a variety of tools that deploy the Semantic Web for drug discovery. The best known is arguably Chem2Bio2RDF, 29 but Wild 124 describes the full range of tools on his home page. WENDI looks particularly interesting in that it uses an RDF inference engine to reveal potential but not otherwise obvious biological applications for chemical compounds. 125
Workflows, Web Services, and Interoperability
The authors have recently reviewed the deployment of workflows and Web services for drug design and discovery 22 and concluded that the increasing use of Web services means that it is becoming easier to use workflows and workflow systems to provide assemblies of services that are useful in drug design and discovery. Kuhn et al. 126 have developed CDK-Taverna to provide a workflow engine specifically for cheminformatics by developing a Taverna plugin to integrate CDK: in their article, they provide six scenarios as examples of the use of CDK-Taverna. 'Web 2.0 for Grids and e-Science' is the subject of a book chapter by Fox et al. 127 Previously, Curcin et al. 24 had paid particular attention to the role of semantics in their review of Web services for the life sciences.
Although workflows can use Semantic Web technologies to communicate the characteristics of data in a precise manner, cheminformatics applications have to maintain that precision when interfacing with semantic methods. Willighagen et al. 122 examine the interoperation of a range of molecular chemometrics applications and conclude that these techniques can integrate successfully with RDF data. The OpenTox project 128 aims to provide semantic services to assist integration of toxicology information with the rest of the drug discovery process. The Chem2Bio2RDF repository exploits semantics to facilitate interoperation between chemistry and biology by integrating chemogenomics repositories with other chemical biology resources. 42 In the context of managing research projects, Alsberg and Clare 32 demonstrate the use of MediaWiki for handling the interoperation of the various aspects of chemometric research projects. However, among the shortcomings that they point out are the lack of semantic annotation and an outstanding issue with integrating large amounts of structured data: clearly there is scope for introducing further semantic technology.
Open Data
The activities of the Linking Open Drug Data task force 77 have been described in an earlier section, as has the formation of the Open PHACTS consortium. 47 The consortium will use trusted third parties to resolve security issues related to proprietary data. Hohman et al. 129 foresee open access, open source, and open collaboration as the future for drug discovery. They argue that a growing community of networked scientists, sharing data and expertise, can achieve more efficient discovery of new candidate drug molecules. However, if their vision is to be realized, collaborating researchers will need to be sure of the semantics of the data they access 'out in the open'.
The ChemCloud infrastructure, discussed above, is based on linked open data principles. 78 The Blue Obelisk movement 86 was founded specifically to promote open source, open standards, and open data: the members of the group continue to do so. 102 Jean-Claude Bradley is a leading exponent of open science: he provides all the experimental results from his work on antimalarial compounds online. 130 Neylon and Todd have also made some of their laboratory notebooks available and in the latter case a whole research project is coordinated in public view as Project Lab Books on the ourexperiment.org site; for example, the Pictet-Spengler route to Praziquantel. 131 Todor 132 surveys a range of use cases in his presentation: 'Semantic Linked Data Integration for Chemical eScience'. Hunter et al. 133 have focused on the annotation of 3D crystallographic models, essentially a form of curation. The main tool they use for their AnnoCryst system is Annotea, which is a W3C Semantic Web project that uses RDF schema. 134 Adams and Murray-Rust 135 published an early example of deploying semantic technologies for a specific application, polymer informatics, in 2008.
CONCLUSION
Rajarshi Guha's blog 136 illustrates that applications of Semantic Web technologies in cheminformatics are still the subject of active discussion. It has become clear that the role of the Semantic Web in promoting systematic use of agreed metadata for integration of data is currently the most powerful driving force in the development of Semantic Web tools. The possibilities for reasoning over the semantically rich data produced are still in their infancy. The major advances that have been made in the Chemical Semantic Web in the last few years have brought chemical informatics into closer alignment and integration with bioinformatics. The RDF description works best in an 'open world', in both the technical and administrative senses of the term. Developments have been faster where data was easily available, but other routes to accessing the necessary data are increasingly possible and will ensure that the exciting demonstrations based on freely available data can spread to environments where the data is necessarily more controlled and restricted.
"Biology",
"Chemistry",
"Computer Science"
] |
A Machine Learning Model for the Prediction of COVID-19 Severity Using RNA-Seq, Clinical, and Co-Morbidity Data
The premise for this study emanated from the need to understand SARS-CoV-2 infections at the molecular level and to develop predictive tools for managing COVID-19 severity. With the varied clinical outcomes observed among infected individuals, creating a reliable machine learning (ML) model for predicting the severity of COVID-19 became paramount. Despite the availability of large-scale genomic and clinical data, previous studies have not effectively utilized multi-modality data for disease severity prediction using data-driven approaches. Our primary goal is to predict COVID-19 severity using a machine-learning model trained on a combination of patients’ gene expression, clinical features, and co-morbidity data. Employing various ML algorithms, including Logistic Regression (LR), XGBoost (XG), Naïve Bayes (NB), and Support Vector Machine (SVM), alongside feature selection methods, we sought to identify the best-performing model for disease severity prediction. The results highlighted XG as the superior classifier, with 95% accuracy and a 0.99 AUC (Area Under the Curve), for distinguishing severity groups. Additionally, the SHAP analysis revealed vital features contributing to prediction, including several genes such as COX14, LAMB2, DOLK, SDCBP2, RHBDL1, and IER3-AS1. Notably, two clinical features, the absolute neutrophil count and Viremia Categories, emerged as top contributors. Integrating multiple data modalities has significantly improved the accuracy of disease severity prediction compared to using any single modality. The identified features could serve as biomarkers for COVID-19 prognosis and patient care, allowing clinicians to optimize treatment strategies and refine clinical decision-making processes for enhanced patient outcomes.
Introduction
The global impact of the COVID-19 pandemic has warranted a robust and nuanced understanding of the factors influencing disease severity to improve clinical decision support and patient outcomes. With the emergence of advanced technologies, particularly in artificial intelligence (AI) and ML, a growing opportunity exists to harness the available data for predictive modeling and disease management. Previous studies have demonstrated the efficacy of these technologies in diagnosing and managing viral diseases, including COVID-19 [1,2].
The unique nature of COVID-19 infection and disease progression poses challenges for treatment development. While SARS-CoV-2 RNA tests diagnose infections qualitatively, the early determination of disease severity is crucial for devising an appropriate treatment strategy. Although CT scans and conventional laboratory procedures are helpful, they may not capture lung alterations in 20% of COVID-19 cases [3]. On the other hand, lab tests like blood cell counts offer practical alternatives, revealing reduced white blood cell and platelet counts alongside elevated serum ferritin and C-reactive protein levels in COVID-19 patients [4]. Clinical characteristics like the C-reactive protein amount, gender, age, lactic dehydrogenase, and lymphocyte count correlate significantly with COVID-19 severity [5]. RNA-based assessments, applicable across healthcare, are crucial in COVID-19 diagnosis and prognosis [6]. Gene expression patterns across patient populations, identified through RNA-seq data, can be explored to identify potential biomarkers for COVID-19 progression and severity [6,7]. On this front, ML emerges as a promising tool for precise and rapid disease severity assessment. ML algorithms, designed to uncover hidden patterns and intricate correlations, have been employed in various studies predicting contributing factors for COVID-19 severity [8][9][10].
Despite the efforts to leverage clinical and gene expression data for predicting COVID-19 severity, the current challenge lies in integrating genomic and clinical data to develop accurate prognostic models for effective disease management.
In this study, we developed machine-learning models to predict COVID-19 severity by incorporating three data modalities: RNA-seq-based gene expression, diverse clinical features, and co-morbidity information. Combining these three data types aims to capture the correlations among the three modalities, enhancing disease severity prediction accuracy and offering accurate clinical decision support. Further, our study employs SHAP analysis and pathway enrichment techniques to unravel the contributing factors for prediction and the biological pathways involved in disease severity.
Datasets and Preprocessing
We obtained the GSE212041 dataset from the GEO database [11]. The dataset comprised 392 patients: 306 hospitalized COVID-19 patients, 78 symptomatic controls, and 8 healthy controls. From these patients, a total of 722 blood samples were collected at different time points: 374 samples on day 0 (D0), 212 samples on day 3 (D3), and 136 on day 7 (D7) from the COVID-19-positive patients admitted to the Massachusetts General Hospital Emergency Department (ED).
In the present study, we used data from only 299 of the 306 COVID-19 patients because metadata was missing for the remaining seven patients who provided samples at D0. The original research classified patients into five classes (A1-A5) based on the severity of the disease (Table 1). Classes A1 and A2 included patients recognized as dead within 28 days and those who survived but required mechanical ventilation and intubation, respectively. We regrouped patients from these classes into a single group termed 'severe'. Patients in the A3 class were placed in the 'moderate' group, while patients originally in A4 and A5 were placed in the 'mild' group (Table 1).
Gene expression data
All patients' raw read count data underwent initial filtration, removing genes with expression values as zeros or NaN in over 20% of the samples. The total number of gene features after preprocessing was 5293 (Supplementary Table S1). Subsequently, the DESeq2 package was applied to normalize raw read counts, and FPKM values were computed using the FPKM function [12,13]. We also used an independent dataset (GSE172114), comprising exclusively blood gene expression profiles (FPKM values) of 69 COVID-19 patients (46 critical and 23 non-critical), to test the performance of models.
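A minimal sketch of the filtration step is given below, assuming the raw counts are held in a genes-by-samples table; the file name is hypothetical, and the DESeq2/FPKM normalization itself is performed separately in R.

```python
import pandas as pd

# Hypothetical file: genes (rows) x samples (columns) raw read counts.
counts = pd.read_csv("GSE212041_raw_counts.csv", index_col=0)

# Fraction of samples in which each gene is zero or missing.
bad_fraction = ((counts == 0) | counts.isna()).mean(axis=1)

# Keep genes that are expressed (non-zero, non-NaN) in at least 80% of samples.
filtered = counts.loc[bad_fraction <= 0.20]
print(filtered.shape)
```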
Data Augmentation
Data augmentation artificially increases the size or diversity of a dataset used for biological analysis. This technique is commonly employed in biological research, particularly in genomics, bioinformatics, and image analysis, where the control sample size is very low compared to the treatment sample size [14,15]. In the present study, we needed to balance the sample size for the 'mild' and 'severe' classes to be on par with that of the 'moderate' class (Table 1). We used Adaptive Synthetic Sampling (ADASYN) to oversample the minority class and address the class imbalance problem [16]. ADASYN mitigates this issue by adaptively generating synthetic samples for the minority class based on the local density distribution of existing instances [17]. The algorithm works mainly in four steps: (1) the data distribution analysis of all the classes, (2) the density estimation and identification of k-nearest neighbors of all instances in the minority classes, (3) the difficulty level measurement of minority and majority class instances, and (4) adaptive sampling based on the difficulty ratio to determine the number of synthetic samples needed for each minority class instance. In our experiments, we used the default values of all parameters and hyperparameters, such as sampling_strategy: 'auto', n_neighbors: 3, n_jobs: 1, and random_state: None.
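The parameters listed above match the imbalanced-learn implementation of ADASYN, which we assume was the library used (it is not named in the text); a minimal sketch follows, with X and y standing for the feature matrix and severity labels.

```python
from imblearn.over_sampling import ADASYN

# X: samples x features matrix; y: severity labels ('mild', 'moderate', 'severe').
ada = ADASYN(sampling_strategy="auto", n_neighbors=3, n_jobs=1, random_state=None)
X_resampled, y_resampled = ada.fit_resample(X, y)
```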
The Determination of Feature Weights and Integration
In disease severity prediction, implementing feature weights plays a crucial role in enhancing the accuracy and interpretability of ML models. It assigns different levels of importance to various features within each data type, allowing the model to focus on the most influential factors in predicting disease severity. Below, we describe strategies for assigning and utilizing feature weights for each data modality before model training and severity prediction, as depicted in Figure 1.
Weights to Gene Features
A LASSO (Least Absolute Shrinkage and Selection Operator) regularization approach was implemented for gene expression data to ascertain the correlation coefficients for each gene with the severity of COVID-19 [18]. All parameters were set as defaults with an alpha value of 1.0. This technique aids in identifying and emphasizing the genes that exhibit a significant impact on predicting disease severity. The model can prioritize their influence by assigning weights to these genes based on these expression values, contributing to a more refined and accurate prediction (Supplementary Table S4).
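A sketch of the gene-weighting step is shown below, assuming that the absolute LASSO coefficients serve as weights and are applied by scaling each gene's expression values; the exact weighting arithmetic is our reading of the scheme rather than something stated explicitly in the text.

```python
import numpy as np
from sklearn.linear_model import Lasso

# X_genes: samples x genes expression matrix; y: numerically encoded severity labels.
lasso = Lasso(alpha=1.0)                    # default parameters, alpha = 1.0 as stated
lasso.fit(X_genes, y)

gene_weights = np.abs(lasso.coef_)          # one weight per gene
X_genes_weighted = X_genes * gene_weights   # weighted gene-expression matrix
```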
Weights to Clinical Features
In this case, we calculated the Gini index, representing the importance of each clinical feature. This index, integrated with the Random Forest Classifier module, assigned weights to clinical features based on their predictive power [19]. Features deemed to be more critical in determining disease severity were assigned higher weights, ensuring that the model prioritizes these influential factors during prediction. Finally, a weighted clinical feature matrix was generated, as illustrated in Figure 1. The clinical features and their corresponding weights are provided in Supplementary Table S5.
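A corresponding sketch for the clinical features is given below, assuming that the Gini-based feature_importances_ of a fitted Random Forest are used as the weights; the element-wise scaling is again an assumption about how the weights are applied.

```python
from sklearn.ensemble import RandomForestClassifier

# X_clin: samples x clinical features; y: severity labels.
rf = RandomForestClassifier(random_state=0)
rf.fit(X_clin, y)

clin_weights = rf.feature_importances_      # Gini (mean decrease in impurity) importance
X_clin_weighted = X_clin * clin_weights     # weighted clinical feature matrix
```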
Weights to Co-Morbidity Features
The impact of pre-existing conditions on COVID-19 severity was assessed using the Python library Lifelines, which calculated the concordance index (CI) [20]. The CI, representing the weight of each pre-existing condition, was then integrated into the original matrix to create a general final weighted co-morbidity matrix (Figure 1, Supplementary Table S6). By assigning weights to different medical conditions, the model could discern their relative contributions to the overall prediction of COVID-19 severity.
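A sketch of the concordance-index calculation with Lifelines follows; the survival times, event indicator, and per-condition scoring shown are hypothetical placeholders, since the text does not specify which outcome variables were supplied.

```python
from lifelines.utils import concordance_index

# Hypothetical inputs: follow-up times, a binary indicator for one pre-existing
# condition (used here as the risk score), and an event flag (e.g., severe outcome).
ci = concordance_index(event_times, comorbidity_indicator, event_observed)
print(f"concordance index: {ci:.3f}")
```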
Integration of Weighted Feature Matrices
The weighted gene expression, clinical, and co-morbidity data were concatenated to generate a final integrated matrix, which was used as the input for the ML model, as shown in Figure 1. Including feature weights ensured that the model considered the varying importance of genes, clinical indicators, and pre-existing conditions when predicting disease severity. This approach allowed for more refined and accurate prediction, as the model assigned higher importance to features with greater predictive power.
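Assuming the three weighted matrices are pandas DataFrames sharing the same patient index, the integration step amounts to a column-wise concatenation, as sketched below.

```python
import pandas as pd

# Column-wise concatenation of the weighted matrices (shared patient index assumed).
X_integrated = pd.concat(
    [X_genes_weighted, X_clin_weighted, X_comorb_weighted], axis=1
)
print(X_integrated.shape)   # patients x (gene + clinical + co-morbidity features)
```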
Machine Learning Model
Four distinct ML algorithms, including LR, XG, NB, and SVMs, were employed to identify a robust prediction model for disease severity [21][22][23][24]. These are among the most widely used algorithms for classification problems due to their strengths and adaptability to different data types. LR is well-suited for binary or multiclass classification with interpretable results, while XG excels in boosting decision trees for improved predictive performance. NB is effective in probabilistic classification, particularly with relatively simple and independent features. On the other hand, an SVM is powerful for finding optimal hyperplanes in high-dimensional spaces and is useful in scenarios where complex decision boundaries are needed. An ANN, conversely, can capture intricate patterns and non-linear relationships in data, making it suitable for tasks demanding high complexity and abstraction. Exploring these diverse algorithms allows for a comprehensive exploration of the data's characteristics and the potential to achieve better overall model performance. Ten-fold cross-validation was used for all models.
The Scikit-learn libraries were employed to import these classifiers (Scikit-learn Machine Learning in Python) [25]. At first, we applied LR, recognized as a heuristic method for multi-class classification. The LR algorithm was implemented using the Scikit-learn library's Logistic Regression module, utilizing default parameters while specifying the 'OvR' mode (One-vs-Rest) for the multiclass parameter. The algorithm XG was executed through the XG Python library. The algorithm was configured with a learning rate of 0.5, a maximum tree depth of 3, and 800 runs (n-estimators) for learning. The NB was implemented with its default parameters of class_count as three and class_prior as 'none'. The SVM classifier algorithm was also applied with all default settings (C = 1.0, kernel = 'rbf', degree = 3). Finally, an ANN was implemented with three layers, 100 epochs, ReLU (Rectified Linear Unit) and SoftMax as activation layers, Adam as the optimizer, and Categorical Cross-Entropy set as the loss function.
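The configurations described above can be reproduced roughly as follows; GaussianNB is assumed for the NB classifier (the text names only 'NB'), and probability estimates are enabled on the SVM so that AUC can be computed later.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from xgboost import XGBClassifier

models = {
    "LR": LogisticRegression(multi_class="ovr"),                       # One-vs-Rest
    "XG": XGBClassifier(learning_rate=0.5, max_depth=3, n_estimators=800),
    "NB": GaussianNB(),                                                # default priors
    "SVM": SVC(C=1.0, kernel="rbf", degree=3, probability=True),
}
```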
Evaluation of Model Performance and Comparison
We evaluated the model's performance by measuring the accuracy, F1 score, and the AUC. We used the cross_val_score function from Scikit-learn in Python to calculate the evaluation metrics.
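A sketch of the 10-fold evaluation is shown below, reusing the models dictionary from the previous sketch; the macro-averaged F1 and one-vs-rest AUC scorers are assumptions, since the text does not specify the averaging scheme.

```python
from sklearn.model_selection import cross_val_score

# models: dict of classifiers from the previous sketch; X_integrated, y as before.
for name, model in models.items():
    acc = cross_val_score(model, X_integrated, y, cv=10, scoring="accuracy")
    f1 = cross_val_score(model, X_integrated, y, cv=10, scoring="f1_macro")
    auc = cross_val_score(model, X_integrated, y, cv=10, scoring="roc_auc_ovr")
    print(f"{name}: acc={acc.mean():.2f} f1={f1.mean():.2f} auc={auc.mean():.2f}")
```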
Feature Importance and Contribution Analyses
We adopted SHapley Additive exPlanations (SHAP), commonly used to explain the output of any ML model in the context of the feature's contributions. Because of the different combinations of input features, SHAP was utilized to find features with high classification power between COVID-19 severity groups [26]. In the context of gene expression data, SHAP helps discern the impact of individual genes on predicting disease severity. For clinical features, the impact of variables such as age, neutrophil count, and other clinical indicators on prediction can be identified. Similarly, it elucidates the influence of pre-existing conditions on the overall severity prediction. We used a combined (gene-expression, co-morbidity, and clinical feature matrix) input matrix in SHAP with 299 rows (patients) and 294 columns (features). By integrating SHAP values across these three different data types, a comprehensive understanding of feature contributions is attained, facilitating the interpretation of ML model predictions and enhancing the model's transparency and interpretability.
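A minimal SHAP sketch for the fitted XGBoost model follows; the use of TreeExplainer and the summary plot are assumptions about the workflow, which the text does not spell out.

```python
import shap

# XGBoost classifier from the earlier sketch, fitted on the integrated matrix.
xgb_model = models["XG"].fit(X_integrated, y)

# TreeExplainer is suited to gradient-boosted tree models such as XGBoost.
explainer = shap.TreeExplainer(xgb_model)
shap_values = explainer.shap_values(X_integrated)

# Ranks gene, clinical, and co-morbidity features by mean absolute SHAP value.
shap.summary_plot(shap_values, X_integrated)
```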
Downstream Analysis of Significant Gene Features
We performed pathway enrichment analysis using 2753 significant gene features obtained after applying feature selection using LASSO regression. All the significant genes were used as input for Ingenuity Pathway Analysis (IPA) with default parameters [27]. Enriched biological pathways were observed to understand their associations with the severity of COVID-19.
Results
This study seeks to employ ML models to predict disease severity and identify the associated clinicogenomic features in COVID-19 patients. We analyzed the gene expression data and the clinical and co-morbidity information of 299 hospitalized COVID-19 patients. After preprocessing the data, we had 253 gene features, 11 clinical features, and 9 comorbidity features for all the patients, as mentioned in Supplementary Tables S3 and S4. In the gene expression dataset, our feature selection strategy identified 2753 genes that were most relevant and highly associated with disease severity. These genes and the clinical and co-morbidity features were further used as input in model training. Multiple machine learning algorithms, including LR, NB, XG, and SVM models, were trained to classify the severity classes of 'severe', 'moderate', and 'mild'. We used F1 and accuracy metrics to evaluate each model's performance. The schematic workflow of the data integration approach, feature selection, and model development is provided in Figure 1.
Effects of Data Augmentation on Model Performance
As described in the Methods, ADASYN oversamples the 'severe' and 'mild' groups to address the class imbalance. This experiment used only gene expression data due to its rich feature size. As a result, the number of samples was increased from 76 to 120 in the 'severe' class and from 74 to 134 in the 'mild' class after augmentation (Table 2). ADASYN automatically determines the augmentation size of the minority classes to bring them up to par with the majority class. We evaluated LR, XG, NB, and SVM performances before and after augmentation. As shown in Table 3, the augmented models demonstrate a noticeable improvement in accuracy and AUC compared to the original models. XG achieved a remarkable enhancement from a 40% accuracy and an AUC of 0.47 to a 95% accuracy and a 0.99 AUC after data augmentation. In comparison, LR showed an increase in accuracy from 43% to 81% and in AUC from 0.56 to 0.93. Similarly, NB and the SVM showed slight improvement after data augmentation (Table 3). Here, we observed that increasing the size and diversity of the training dataset allowed the models to encounter more feature variation and generalize better to test data. More specifically, the strategy introduced noise and variation in the 'severe' and 'mild' classes, which helped prevent the models from fitting to the noise in the training data and improved their ability to generalize to new and unseen examples.
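A minimal sketch of the oversampling step with imbalanced-learn's ADASYN is shown below; `X_genes` and `y` are placeholders for the gene expression matrix and severity labels.

```python
from imblearn.over_sampling import ADASYN

# Oversample the minority classes ('severe' and 'mild') toward the majority class size
X_balanced, y_balanced = ADASYN(random_state=42).fit_resample(X_genes, y)
```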
The Evaluation of ML Models with Single- and Multi-Modality Data
In earlier stages, data augmentation only contributed to marginal improvements in class predictions for a limited number of models. This raised concerns about the potential misallocation of feature weights during model training, leading to suboptimal performance even after oversampling. Therefore, we calculated weights for each feature and generated individually weighted matrices for each data type (i.e., gene expression, clinical, and co-morbidity), which were subsequently used as input for the models. As mentioned in the Methods, the Gini index score, the concordance index, and the R-squared score from LASSO regression were used to calculate weights for the corresponding features in each data matrix, i.e., the clinical, co-morbidity, and gene expression data matrices. The assignment of weights to feature matrices is a critical aspect influencing the performance of predictive models. By assigning different weights to individual feature matrices, the model learns to prioritize and emphasize specific types of information. The complete set of utilized clinical and co-morbidity data can be found in Supplementary Table S7.
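A sketch of the feature-weighting and concatenation step is given below; the weight vectors (`w_gene`, `w_clin`, `w_com`) are assumed to be pre-computed from the Gini index, concordance index, and LASSO R-squared scores described above, and their exact derivation is not reproduced here.

```python
import numpy as np

def weight_matrix(X, feature_weights):
    # Scale each feature column by its pre-computed weight
    return X * np.asarray(feature_weights)[np.newaxis, :]

# Concatenate the individually weighted modality matrices into one input matrix
X_weighted = np.hstack([
    weight_matrix(X_gene, w_gene),
    weight_matrix(X_clin, w_clin),
    weight_matrix(X_com, w_com),
])
```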
As shown in Figure 2, the 10-fold accuracies for ML models generated from single-modality-weighted matrices are low for all algorithms, indicating that the features were insufficient for the ML models to distinguish between the three COVID-19 groups. Additionally, we evaluated our model using an independent dataset (GSE172114), consisting solely of blood gene expression profiles from 69 COVID-19 patients (46 critical and 23 non-critical). The preprocessing procedure mirrored that of GSE212041. In this experiment, XG demonstrated superior performance, achieving a peak accuracy/AUC of 75%/0.87. In comparison, the original XG model trained on dataset GSE212041 (gene expression only) achieved a lower accuracy and AUC of 41% and 0.54, respectively (Figure 2). Other classifiers, such as Naive Bayes, exhibited the lowest accuracy and AUC of 46% and 0.51, respectively, in identifying 'critical' and 'non-critical' cases. The LR and SVM models yielded accuracy/AUC values of 50%/0.64 and 57%/0.71, respectively. We further utilized different combinations of the multi-modality weighted matrices as input for the ML models, which showed increased prediction accuracies across the board (Figure 3). Combining two data modalities significantly improved the accuracy of all ML models except for the SVM, and combining all three data modalities substantially increased the accuracy in all cases except for the SVM. Specifically, the XG algorithm attained an accuracy of 95% and an AUC of 0.99, making it the top-performing algorithm for distinguishing between the three severity groups ('severe', 'moderate', and 'mild') of COVID-19 patients (Figure 3).
The Evaluation of Model Performance Using Different Weight Combinations for Data Modalities
To investigate the optimal combination of weights for each data modality, we assigned different weights to each data matrix, followed by concatenation to generate an integrated matrix used as input for the model. Gene expression, clinical features, and co-morbidity matrices were weighted at 1:1:1, 2:1:1, 1:2:1, and 1:1:2 proportions to build the corresponding models. Interestingly, the model with an equal weightage (1:1:1) for all data modalities produced the highest accuracy of 95% and an AUC of 0.99 using XG (Figure 4). A similar trend was observed with LR and NB models with corresponding weight combinations; however, the SVM models showed a different trend, with the highest AUC observed in the 1:1:2 model. The comparison of predictive performance among these models reveals the impact of different combinations of feature matrices on the overall model effectiveness. Models with various combinations of weights for each data modality unveil the relative importance of molecular, clinical, and co-morbidity data in the overall performance of the models and help optimize the ML models for the best performance.
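A sketch of how the modality-level weight ratios could be applied is shown below; the matrices `Xg_w`, `Xc_w`, and `Xm_w` stand for the already feature-weighted gene expression, clinical, and co-morbidity matrices, and the training loop is only indicated.

```python
import numpy as np

ratios = {"1:1:1": (1, 1, 1), "2:1:1": (2, 1, 1),
          "1:2:1": (1, 2, 1), "1:1:2": (1, 1, 2)}

for name, (a, b, c) in ratios.items():
    # Scale each modality block and concatenate into one integrated matrix
    X_input = np.hstack([a * Xg_w, b * Xc_w, c * Xm_w])
    # ... train and cross-validate LR, XG, NB, and SVM on X_input ...
```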
Feature Importance Analyses
After determining XG to be the best-performing model and optimizing the weight combination for the different data modalities (1:1:1), we sought to identify the contributions of individual features to predicting disease severity. We used the SHAP method, which provided a SHAP score for each feature used in the model training [26]. This score ranges from −1 to +1 and represents the significance of each feature and its effect on the model's performance for predicting COVID-19 severity. The beeswarm plot shows how each feature contributes positively or negatively to the model prediction (Figure 5). The points are distributed horizontally along the x-axis according to their SHAP value, reflecting the strength of a feature's impact on the model's output. The color of each dot represents the original value of the feature for that instance, with red indicating a high value and blue a low value. The points are stacked vertically in places with a high density of SHAP values. Examining the color distribution horizontally along the x-axis for each variable provides insights into the general relationship between a variable's original value and its SHAP value. The topmost gene expression features significantly affecting the model's accuracy are the COX14, LAMB2, DOLK, SDCBP2, RHBDL1, and IER3-AS1 genes from the RNA-seq data. The absolute neutrophil count and Viremia were identified among the clinical features, but no co-morbidity features stood out in the SHAP analysis (Figure 5). For DOLK, we see a dense cluster of points with small but positive SHAP values. LAMB2 extends further towards the left, suggesting that LAMB2 has a stronger negative impact on the predicted COVID-19 severity. The top gene features from SHAP can be further analyzed to understand the enriched pathways associated with the top contributing genes.
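A short sketch of ranking features by mean absolute SHAP value and drawing the beeswarm plot is given below; the class index used to pick one output of the multiclass model and the placeholder `feature_names` are assumptions, and `shap_values` is the per-class output from the earlier SHAP computation.

```python
import numpy as np
import shap

# Rank features by mean absolute SHAP value for one class output
# (index 2 is assumed to correspond to 'severe')
mean_abs_shap = np.abs(shap_values[2]).mean(axis=0)
top_features = np.argsort(mean_abs_shap)[::-1][:20]

# Beeswarm-style summary plot of the top 20 features
shap.summary_plot(shap_values[2], X_combined,
                  feature_names=feature_names, max_display=20)
```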
The Pathway Enrichment Analysis of Top Contributing Genes
Based on SHAP scores, we selected the top 25% (1324) of contributing genes (Supplementary Table S8) and subjected them to pathway enrichment analysis using IPA. This analysis revealed several significantly enriched pathways, shedding light on key molecular processes associated with COVID-19 severity. The top five canonical pathways are shown in Table 4. The generic transcription pathway is the topmost pathway. Several biochemical pathways, such as the generic transcription pathway, are key to understanding host-pathogen interactions in the nucleoplasm during a SARS-CoV-2 infection, impacting etiology, pathogenesis, or prognosis (Figure 6). The assembly involving nuclear receptor (NR) proteins, CDK8, and MED proteins, forming the TRAP coactivator complex, may modulate transcription factors and other proteins that are vital to the host's immune response, potentially affecting the prognosis of COVID-19 [28] (Table 4). The second pathway is 'immunoregulatory interactions between a lymphoid and a non-lymphoid cell', which may involve interactions between SARS-CoV-2 and immune cells during COVID-19 pathogenesis. This pathway triggers HLA interactions with the KLRC1 complex and KLRF interactions with the CLEC2B dimer [29]. The virus then infects various immune cells, including lymphoid cells such as T lymphocytes, leading to the dysregulation of immune responses [30] (Supplementary Figure S1). The next is the 'mitotic prometaphase pathway', where the dysregulation of mitosis can lead to cellular stress and affect tissue homeostasis. In this pathway, the phosphorylated p-T2055-NUMA1 homodimer binds to nucleated microtubules in the cytoplasm. The mitotic kinase CCNB1:CDK1 phosphorylates the Condensin I complex, forming phosphorylated Condensin I. PLK1 catalyzes the phosphorylation of STAG2 in the RAD21-Ac-Cohesin:PDS5:CDCA5:WAPAL complex at centromeres, affecting sister-centromere and microtubule interactions, which in turn contributes to the pathophysiology of COVID-19 in various organs [31] (Supplementary Figure S2). The fourth pathway is FCGR-dependent phagocytosis, reflecting the role of Fc-gamma receptors (FCGR) in mediating phagocytosis by binding to antibodies and opsonizing viral particles. The phosphorylated clustered PLCG complex in the plasma membrane yields the PI(3,4,5)P3 and p-PLCG complex. Moreover, the branching complex in the cytoplasm forms the ARP2/3:actin:ADP complex and activates the WAVE2, WASP, and N-WASP proteins [32] (Supplementary Figure S3). The last is the 'cilium assembly pathway', which COVID-19 may impact in respiratory epithelial cells. Multiple proteins in cilia form the IFT-B complex for intraflagellar transport, and the BBS/CCT complex catalyzes the assembly of the BBSome complex in the cytoplasm for ciliary function, affecting the clearance of mucus and pathogens from the airways [33] (Supplementary Figure S4). Overall, COVID-19's impact on these pathways and processes reflects its complex interactions with host cells and the immune system, contributing to the diverse clinical manifestations and outcomes observed in infected individuals. Understanding these connections is critical for developing targeted therapies and interventions against the virus.
Discussion
ML models have been widely used on COVID-19 data to improve risk predictions for hospitalization and critical disease outcomes [34-36]. Despite the numerous ML models that have been built, very few studies have used both clinical and genomic data to predict the severity of COVID-19 [37,38]. Hence, this project aimed to develop a prognostic ML model to predict the severity of COVID-19 based on gene expression, clinical, and co-morbidity data. We used data augmentation to balance the class sample sizes, explored various ML models to identify the best-performing one, and optimized the model's performance using different weights. In addition, we used SHAP scores to find the features that contribute the most to the model's performance (Figure 5).
Four machine learning algorithms, LR, XG, NB, and SVM, were initially used to build a classification model based only on the normalized gene expression data from COVID-19 patients belonging to the three severity groups, 'mild', 'moderate', and 'severe' (Table 1). Because the 'moderate' group contained as many samples as the other two groups combined, we augmented and balanced the sample sizes of the minority classes using ADASYN to avoid overfitting (Table 2). Models built from the balanced dataset showed significantly improved performance (accuracy and AUC) for all ML methods compared to those using the unbalanced dataset (Table 3). Only gene expression features were used for the initial testing of the ML models, as this data modality has thousands of data points compared to merely twelve and nine features in the clinical and co-morbidity modalities, respectively.
We built separate models for each data modality, their pair-wise combinations, and all three combined. The integration of the three data modalities showed a significant improvement in the predictive power of the ML models compared to those using a single modality or pair-wise combinations (Figures 2 and 3), with the accuracy reaching 95% and the AUC reaching 0.99 for the XG model trained with all three modalities. Our results align with other studies highlighting the importance of using integrated multi-omics data in predictive models to leverage the synergistic effect of combining different data modalities. For example, ML models integrating transcriptomic and clinical data for predicting the clinical outcomes of COVID-19 patients showed enhanced accuracy [39]. In addition, the XG algorithm outperformed the other classifiers because it implements a gradient-boosting framework, allowing it to build decision trees sequentially and optimize for bias and variance. Incorporating regularization techniques, such as L1 and L2 regularization, effectively prevents overfitting [40].
Furthermore, the most important features with the highest predictive power in the integrated model were identified using SHAP. The COX14 gene was identified as the top feature, significantly contributing to the model's predictive power. COX14 encodes a protein required for the assembly of complex IV (cytochrome c oxidase; COX) of the mitochondrial electron transport chain, a vital component of the respiratory chain essential for electron transport [41]. A recent proteomic study suggested elevated levels of the components of cytochrome c electron transport complexes in the plasma of COVID-19 patients compared to normal controls [42]. The second most important feature from the SHAP analysis, the absolute neutrophil count, emerged from the clinical feature set. Several studies have reported elevated levels of neutrophils and neutrophil-related cytokines, such as IL-8 and IL-6, in severe COVID-19 patients [43-45]. Neutrophils detect single-stranded RNA viruses like SARS-CoV-2 because they express multiple Toll-like receptors: TLR7, TLR8, and TLR9. Once these TLRs are activated, downstream processes such as NF-κB signaling and interferon regulatory factors (e.g., IRF7) are activated [46]. The latter activation produces chemokines and pro-inflammatory cytokines in neutrophils that induce pulmonary infiltration and hyperinflammation in COVID-19 patients [47].
Furthermore, the LAMB2 gene was also identified among the top three features in our SHAP analysis. This gene encodes the basement membrane protein laminin β2, part of the heterotrimeric laminin isoforms [48]. LAMB2 was identified as a diagnostic biomarker for COVID-19 based on a bioinformatics analysis of a gene expression dataset of COVID-19 patients [49]. Moreover, our findings underscore the significance of specific pathways enriched in the top 25% of genes identified through SHAP values. These pathways include generic transcription, immunoregulatory interactions between a lymphoid and a non-lymphoid cell, mitotic prometaphase, FCGR-dependent phagocytosis, and cilium assembly. In a SARS-CoV-2 infection, fundamental host cellular processes such as generic transcription and immune responses are expected to be perturbed. Some of the genes involved in these processes could indicate disease progression and severity.
The Superpathway of Inositol Phosphate Compounds involves genes responsible for inositol production, which is essential for generating the phosphatidylinositol (PtdIns) needed to maintain signaling pathways. A prior study found that SARS-CoV-2 also affects metabolic pathways such as inositol phosphate metabolism, glycolysis, and oxidative phosphorylation [50]. The dysregulation of these pathways blocks surfactant secretion and alveolar epithelial differentiation. In addition, disrupting inositol phosphate metabolism may induce neutrophil infiltration and disrupt the lung barrier [50].
In this study, we demonstrated that integrating genomic and clinical features improved the performance of the ML models, and that implementing the data augmentation approach addressed the data imbalance issue to further enhance model performance. Similarly, the SHAP analysis helped identify the top contributing factors (genes and clinical features) to the model performance, which could serve as biomarkers for predicting disease severity.
Conclusions
Our study significantly enhances the predictive capabilities for COVID-19 severity by integrating genomic and clinical data. We identified the key contributors to severity prediction by leveraging a workflow involving ML techniques, feature selection, data augmentation, and SHAP analysis. We also demonstrated the importance of integrating multi-modality data, rather than singular modalities, to improve the performance of prediction models. The observed correlations between pre-existing conditions, such as heart disease, lung disease, and hypertension, and the severity of COVID-19 underscore the clinical relevance of our integrative approach. The superior performance of XG in classifying severity groups further validates the efficacy of our predictive models.
The application of SHAP analysis pinpointed specific genes, including COX14, LAMB2, DOLK, SDCBP2, RHBDL1, and IER3-AS1, along with critical clinical features such as the absolute neutrophil count and Viremia category, as influential factors in severity prediction. These identified biomarkers offer valuable insights for clinicians for early disease prognosis.
Our study contributes to the evolving understanding of COVID-19 prognosis and provides a foundation for refining clinical decision-making processes. Integrating clinical and genomic data in predictive models holds promise for personalized and timely interventions, ultimately leading to improved patient outcomes. As we continue to navigate the complexities of the pandemic, our findings pave the way for future research and clinical applications aimed at advancing precision medicine in the context of COVID-19 severity prediction.
Figure 1. The workflow of preprocessing various data types, generating individual feature weight matrices and integrations, and machine learning model training for COVID-19 severity prediction.
Figure 2. The evaluation of ML models with 10-fold cross-validation when individual data types are used as input. LR: Logistic Regression, XG: XGBoost, NB: Naïve Bayes, SVM: Support Vector Machine.
Figure 3. The evaluation of ML models with 10-fold cross-validation when different combinations of data types are used as input. LR: Logistic Regression, XG: XGBoost, NB: Naïve Bayes, SVM: Support Vector Machine.
Figure 4. The evaluation of machine learning models using different combinations of weights for the three data modalities. The numbers in parentheses represent the proportions of weights used for each modality in the data matrices used for model building. LR: Logistic Regression, XG: XGBoost, NB: Naïve Bayes, SVM: Support Vector Machine.
Figure 5. A beeswarm plot, ranked by mean absolute SHAP value. This provides a rich overview of how the variables impact the model's predictions across all data. The input variables are ranked from top to bottom by their mean absolute SHAP values.
Figure 6. The canonical generic transcription pathway was enriched in the top 25% highest-scoring features based on SHAP scores.
Table 1. The number of samples in each original class and our class definitions.
Table 2. The number of samples in each class ('severe', 'moderate', and 'mild') before and after data augmentation using ADASYN.
Table 3. The evaluation of ML models with 10-fold cross-validation before and after data augmentation for predicting COVID-19 severity. LR: Logistic Regression, XG: XGBoost, NB: Naïve Bayes, SVM: Support Vector Machine.
Table 4. The top canonical pathways from the Ingenuity Pathway Analysis of the top 25% of genes (1324) with the highest SHAP scores.
"Computer Science",
"Medicine"
] |