Ischemic Preconditioning Promotes Autophagy and Alleviates Renal Ischemia/Reperfusion Injury

Autophagy is important for cellular survival during renal ischemia/reperfusion (I/R) injury. Ischemic preconditioning (IPC) has a strong renoprotective effect during renal I/R. Our study aimed to explore the effect of IPC on autophagy during renal I/R injury. Rats were subjected to unilateral renal ischemia with or without prior IPC. Hypoxia/reoxygenation (H/R) injury was induced in HK-2 cells with or without prior hypoxic preconditioning (HPC). Autophagy and apoptosis were detected after reperfusion or reoxygenation for different durations. The results showed that the levels of LC3II, Beclin-1, SQSTM1/p62, and cleaved caspase-3 were altered in a time-dependent manner during renal I/R. IPC further induced autophagy, as indicated by increased levels of LC3II and Beclin-1, a decreased level of SQSTM1/p62, and accumulation of autophagosomes compared to the I/R groups at the corresponding reperfusion time points. In addition, IPC reduced the expression of cleaved caspase-3 and alleviated renal cell injury, as evaluated by the levels of serum creatinine (Scr), neutrophil gelatinase-associated lipocalin (NGAL), and kidney injury molecule-1 (KIM-1) in renal tissues. In conclusion, autophagy and apoptosis are dynamically altered during renal I/R. IPC protects against renal I/R injury and upregulates autophagic flux, thus increasing the possibility of a novel therapy to alleviate I/R-induced acute kidney injury (AKI).

Introduction Acute kidney injury (AKI) is a common kidney disorder characterized by a rapid loss of renal function resulting in an accumulation of metabolic waste and an imbalance of electrolytes and body fluid [1]. Renal ischemia/reperfusion (I/R) injury is the most common risk factor for AKI. The tubular segments located within the outer stripe of the outer medulla, including the proximal tubules, are particularly sensitive to hypoxia and I/R injury because of their high rates of adenosine triphosphate consumption. In response to injury, renal tubular cells activate a myriad of defense mechanisms. Studies have shown that autophagy is upregulated and plays an important role in AKI in vitro and in vivo [2]. Autophagy is an intracellular lysosomal degradation process involving the formation of a double-membrane structure known as an autophagosome, which sequesters long-lived proteins, damaged organelles, and malformed proteins and transfers them to lysosomes to be degraded by proteases, a process that yields nutrients that can be reused and energy that can be recycled [3]. Autophagy is induced under various stressful conditions including oxidant injury, cell starvation, growth factor deprivation, and I/R injury [4], and enhanced autophagy has been shown to protect renal tubular epithelial cells from I/R injury in vitro by attenuating apoptosis. Renal proximal tubule-specific autophagy-associated gene (Atg) knockout mice demonstrate autophagy inhibition and I/R injury sensitization compared to their wild-type littermates [5], confirming the renoprotective effect of autophagy in I/R-induced AKI. Ischemic preconditioning (IPC), which was first described in the heart by Murry et al. [6], consists of brief periods of vascular occlusion. It has been reported to confer protection against subsequent I/R injury via endogenous protective mechanisms in various organs.
This protection has been linked to decreased apoptosis [7], preservation of the ATP content, and overproduction of prosurvival and anti-inflammatory proteins [8,9]. However, little is known about the effect of IPC on autophagy during renal I/R. In this regard, we monitored dynamic changes of autophagic flux in IPC-induced renoprotection utilizing the tandem RFP-GFP-LC3 adenovirus construct. Understanding the role of autophagy in IPC during renal I/R injury may provide a potential strategy for the clinical treatment of AKI in the future.

Design and Procedures of Animal Experiments. Male Sprague-Dawley rats (8-9 weeks old; 230-270 g) were purchased from the Animal Center of Fudan University, Shanghai, China. Prior to the start of the experiment, the rats were kept at a constant temperature of 23 ± 2°C with 40%-60% relative humidity on a 12/12 h light/dark cycle and were given access to standard rodent chow and tap water ad libitum. All animal experiments described below were conducted in compliance with the Guide for the Care and Use of Laboratory Animals. Rats were randomized into the sham group, I/R group, IPC group, and IPC + I/R group, with eight rats in each group. The rats were anesthetized with 2% pentobarbital sodium (50 mg/kg, i.p.) and were kept on a heating pad to maintain their body temperature at 37°C. A midline laparotomy was then performed in which the abdominal cavity was fully exposed. In the I/R group, the left renal artery was isolated and clamped for 40 min using a nontraumatic artery clamp after a right nephrectomy. In the IPC + I/R group, rats were subjected to IPC prior to I/R, which consisted of four cycles of clamping the left renal artery for 8 min, separated by 5 min of reperfusion. The rats were sacrificed at 2 h, 6 h, 12 h, and 24 h after reperfusion to collect the blood and renal tissues for analysis. The rats in the IPC group underwent surgical manipulation and IPC without I/R. The rats in the sham group were subjected to surgical manipulation with no intervention.

Renal Function. Renal function was assessed by measuring serum creatinine (Scr) levels using a commercially available kit (Wako, Osaka, Japan).

2.5. Immunohistochemistry. Formalin-fixed kidney sections (4 µm) were deparaffinized in xylene and rehydrated through a graded ethanol series to water. After blocking with 10% normal horse serum in PBS, the tissue was stained with rat NGAL and KIM-1 polyclonal antibodies overnight at 4°C, then stained with the Anti-Goat HRP-DAB Cell & Tissue Staining Kit (R&D Systems, Minneapolis, MN, USA) and counterstained with hematoxylin.

2.6. TUNEL Staining. Apoptotic tubular cells in renal I/R injury were detected using the terminal deoxynucleotidyl transferase-mediated deoxyuridine triphosphate (dUTP) nick-end labeling (TUNEL) method following the manufacturer's protocol (Roche Diagnostics, Mannheim, Germany). To calculate the apoptotic index (AI), the number of TUNEL-positive versus total cell nuclei was counted in the renal cortex and cortical-medullary junction in a blinded manner using 10 sequentially selected fields for each section containing at least 500 cells at 400x magnification. TUNEL-positive apoptotic cells exhibited brown nuclear staining. The AI was calculated by dividing the number of apoptotic cells by the total number of cells and multiplying by 100.
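As a minimal illustration of the apoptotic index calculation just described (a Python sketch with hypothetical counts, not data or code from the study):

```python
def apoptotic_index(tunel_positive: int, total_nuclei: int) -> float:
    """AI = (TUNEL-positive nuclei / total nuclei) * 100, per the text."""
    if total_nuclei <= 0:
        raise ValueError("total_nuclei must be positive")
    return 100.0 * tunel_positive / total_nuclei

# Hypothetical counts from one 400x field (each section holds >= 500 cells).
print(apoptotic_index(tunel_positive=37, total_nuclei=512))  # ~7.2
```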
Electron Micrographs. The tissue samples (∼1 mm thick) were fixed with 1 ml of 2.5% glutaraldehyde in phosphate buffer (pH 7.4) for 4 h and then were immersed in 1% osmium tetroxide for 2 h. The samples were washed in PBS three times for 10 min each, dehydrated in a graded series of ethanol, and embedded in epoxy resin. Ultrathin sections (50-90 nm) were obtained using an ultramicrotome and were then poststained with uranyl acetate and lead citrate. The grids were examined using transmission electron microscopy (Tecnai G2 Spirit; Hillsboro, OR, USA). For quantification, 10 fields at low magnification (×1000) were randomly obtained for each renal sample, and the total number of autophagic vacuoles was counted.

Cell Culture and H/R Model. HK-2 cells (a human renal proximal tubular epithelial cell line) were cultured in Dulbecco's modified Eagle's growth medium (DMEM; ThermoFisher, Hudson, NH, USA) containing 10% fetal bovine serum under 5% CO2 and 95% air at 37°C. To establish a hypoxia/reoxygenation (H/R) model in vitro, renal cells were subjected to oxygen and glucose deprivation (OGD) by changing the medium to serum/glucose-free DMEM and then were incubated in the Modular Incubator Chamber (MIC-101; Billups-Rothenberg, Del Mar, CA, USA) for 15 h in an atmosphere of 1% O2, 5% CO2, and 94% N2 at 37°C, followed by reoxygenation in normal complete medium under normoxic conditions for 30 min, 1 h, 2 h, 6 h, or 12 h. To achieve HPC, transient OGD for 6 h followed by reoxygenation for 2 h was implemented before the prolonged H/R injury. Control cells were cultured under normal conditions. The experimental design is detailed in Figure 1.

Cell Viability Assay. Cell viability was detected with a Cell Counting Kit-8 (CCK-8, Dojindo, Kumamoto, Japan) according to the manufacturer's instructions. The absorbance was measured using a microplate reader at a wavelength of 450 nm. The percentage of living cells was calculated based on the ratio of the absorbance of the experimental group to that of the normoxic group.

Annexin V-FITC/PI Staining. The cells were collected and incubated with 100 µl of 1× binding buffer containing 5 µl of Annexin V-FITC and 5 µl of PI (ThermoFisher, Hudson, NH, USA). After a 30 min incubation in the dark at room temperature, the cells were analyzed by flow cytometry (BD FACSCalibur, Becton Dickinson, NJ, USA). Annexin V+/PI− cells were considered early apoptotic cells.

Evaluation of Autophagy Flux. Primary HK-2 cells were infected with adenovirus encoding mRFP-GFP-LC3 (Hanbio, Shanghai, China) at a multiplicity of infection of 20. Twelve hours after adenovirus transduction, the cells were subjected to the different interventions described above. The cells were then visualized using a confocal microscope (Olympus, Tokyo, Japan). The processing and analysis of images were done using MetaMorph (Molecular Devices) and the NIH ImageJ software. The numbers of GFP and mRFP dots were determined by manual counting of fluorescent puncta. At least 40 cells were scored in each group. The number of dots/cell was obtained by dividing the total number of dots by the number of nuclei in each microscopic field.

Statistical Analyses. The results are expressed as the mean ± standard deviation (SD). Two-tailed unpaired Student's t-tests were used to evaluate the data, and multiple comparisons among groups were performed using one-way analysis of variance (ANOVA) with post hoc Bonferroni correction (SPSS, version 17.0). The level of statistical significance was set at P < 0.05.
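To make the statistical procedure concrete, the following Python sketch illustrates a two-tailed unpaired t-test and a one-way ANOVA with Bonferroni-corrected pairwise comparisons on purely illustrative numbers; the study itself performed these tests in SPSS 17.0, so this is only a conceptual analogue.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-animal measurements for four groups (n = 8 each); these are
# illustrative numbers, not the study's data.
sham, ir, ipc, ipc_ir = (rng.normal(loc=m, scale=1.0, size=8) for m in (2, 6, 3, 9))

# Two-tailed unpaired Student's t-test between two groups.
t_stat, p_two_group = stats.ttest_ind(ir, ipc_ir)

# One-way ANOVA across all groups, then Bonferroni-corrected pairwise t-tests
# (each pairwise p-value multiplied by the number of comparisons, capped at 1).
f_stat, p_anova = stats.f_oneway(sham, ir, ipc, ipc_ir)
pairs = [("I/R vs IPC+I/R", ir, ipc_ir), ("sham vs I/R", sham, ir)]
for label, a, b in pairs:
    p_corr = min(1.0, stats.ttest_ind(a, b).pvalue * len(pairs))
    print(label, "Bonferroni-corrected p =", p_corr)
```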
Ischemic Preconditioning Affects Autophagic Activity during Renal Ischemia/Reperfusion Injury. Autophagy is a dynamic cellular catabolic process during renal I/R that is accompanied by apoptotic changes. Apoptosis during renal I/R was notably increased, as indicated by a gradual increase in cleaved caspase-3 (Figure 2(a)), which reached a maximum at 24 h after reperfusion. As indicated by the expression of LC3II and Beclin-1 in Figure 2(a), a basal level of autophagy was observed in the sham group, and it increased remarkably between 2 h and 6 h after reperfusion, peaked at 6 h, and then began to decline. Autophagy is thought to be influenced by IPC in AKI. As shown in Figure 2(b), although autophagic vacuoles were rarely seen in the untreated sham kidneys (panel (A)), typical autophagosomes containing intracellular organelles (black arrowheads) were apparent after I/R and were increased compared to the sham kidneys at the different reperfusion time points, reaching a maximum at 6 h after reperfusion (panels (B)-(E)). Moreover, the number of autophagosomes was further increased in the IPC-treated animals after I/R injury compared with that in the animals that underwent I/R injury alone at the corresponding reperfusion time points (P < 0.05, panels (F)-(J)). The quantitative analysis of the autophagic vacuoles confirmed the effects of IPC. In addition, the immunoblots of autophagy-related (Atg) proteins showed a dynamic change in autophagosome formation. The levels of LC3II and Beclin-1 were significantly increased in the IPC + I/R groups compared with those in the I/R groups after 6 h, 12 h, and 24 h of reperfusion (P < 0.05). Along with these changes, SQSTM1/p62 levels reached their nadir at 6 h after I/R and then rose as reperfusion was extended, a pattern opposite to that of LC3II and Beclin-1. IPC pretreatment increased SQSTM1/p62 degradation at 6 h and 12 h after reperfusion (P < 0.05). In contrast, the expression of cleaved caspase-3 was higher at 6 h and 24 h after reperfusion in the animals that did not undergo IPC (P < 0.05, Figure 2(c)).

Ischemic Preconditioning Protects Renal Tissues from Ischemia/Reperfusion Injury. To determine whether IPC protects renal tissues from I/R injury, we assessed the severity of renal damage using serum creatinine levels and renal pathology. Serum creatinine in the sham group remained at basal levels. IPC alone increased serum creatinine slightly; however, prior IPC ameliorated I/R injury as determined by the serum creatinine levels after 6 h, 12 h, and 24 h compared to the I/R-only group (P < 0.05, Figure 3(a)). Pretreatment with IPC also significantly alleviated the observed pathological changes (P < 0.05, Figure 3(b)). TUNEL staining confirmed that there was no apparent apoptosis in the sham group, whereas the I/R and IPC + I/R groups showed increased apoptosis. However, the level of apoptosis was much lower in the IPC + I/R group than in the I/R-only group (P < 0.05, Figure 3(c)). Moreover, immunohistochemistry for NGAL and KIM-1, early biomarkers of AKI, revealed more intense KIM-1- and NGAL-positive areas in the renal tubules of the I/R group than in the IPC + I/R group (Figure 3(d)).

Hypoxic Preconditioning Affects Time-Dependent Autophagy and Apoptosis in Renal Hypoxia/Reoxygenation Injury. Different durations of reoxygenation led to time-dependent changes in autophagy and apoptosis in the HK-2 cells exposed to H/R, which peaked at 2 h of reoxygenation, as indicated by the expression of LC3II, Beclin-1, and cleaved caspase-3 (P < 0.05, Figure 4(a)).
HPC-pretreated tubular cells showed a significant time-dependent increase in the expression of LC3II and Beclin-1 and a decrease in the level of cleaved caspase-3 compared to the H/R group (P < 0.05, Figure 4(b)), suggesting that HPC promotes autophagic activity and inhibits apoptosis in renal cells exposed to H/R injury in a time-dependent manner. Apoptosis was then measured by Annexin V-fluorescein isothiocyanate (FITC)/propidium iodide (PI) staining and flow cytometric analysis. As shown in Figure 4(c), H/R resulted in an increased percentage of apoptotic cells (P < 0.05), which was suppressed by HPC pretreatment (P < 0.05). The CCK-8 assay showed that HK-2 cell viability was also diminished after H/R injury (P < 0.05, Figure 4(d)), whereas HPC increased cell viability at H/R 2 h (P < 0.05 versus H/R), indicating that H/R causes renal cell injury and that HPC protects against renal H/R injury and apoptosis. Given that autophagic flux is a dynamic process, autophagosome formation alone cannot fully reflect the extent of autophagic activity. We utilized the tandem RFP-GFP-LC3 adenovirus construct to monitor autophagic flux, according to previously described methods [11]; this technique is based on the pH difference between the acidic autolysosome and the neutral autophagosome. The green fluorescent protein (GFP) signal is quenched in the acidic and/or proteolytic conditions of the lysosome lumen, whereas the red fluorescent protein (RFP) is relatively stable. Therefore, colocalization of GFP and RFP fluorescence implies a compartment that has not fused with a lysosome, such as a phagophore or autophagosome (green and red dots overlaid in merged images appear yellow), whereas autolysosomes are labeled red only. Thus, autophagic flux can be monitored by detecting the different fluorescence signals. Renal cells cultured in the normoxic environment displayed minimal autophagic flux. H/R increased the number of autophagic structures, with an increase in both autophagosomes and autolysosomes (P < 0.05, Figure 5). Treatment with HPC increased the number of red and yellow puncta compared to the H/R group (P < 0.05), further confirming that HPC promoted the whole autophagic flux during renal H/R injury rather than blocking lysosomal degradation.

Discussion Although a large number of studies have confirmed autophagy induction during renal I/R, studies investigating the role of autophagy in AKI have reported both beneficial and detrimental effects [12]. It is generally recognized that autophagy can not only protect cells but also promote cell death. On the one hand, it is involved in degrading abnormal proteins, helping to prevent the accumulation of harmful substances and acting as a cellular scavenger. On the other hand, excessive or insufficient autophagy can damage organelles and drive cells toward autophagic cell death; autophagy thus regulates cell survival or death depending on its threshold level [13,14]. The regulatory mechanism of autophagy during AKI is still not clear; it has been reported that signaling molecules such as Draper, JNK, and DAPK can switch autophagy from promoting cell survival to promoting cell death [15]. This may be related to the nonspecific destruction of macromolecular materials in cells or the selective degradation of cytoprotective factors, leading to an irreversible loss of cell viability [16].
Of note, Kimura and colleagues demonstrated heightened renal I/R injury in proximal tubule-specific Atg5 knockout mice, providing the first in vivo genetic evidence for a renoprotective role of autophagy in this AKI model [5]. While both autophagy and apoptosis can be induced in response to a common stimulus, autophagy contributes to cell survival by inhibiting necrosis or apoptosis in renal I/R. The activation of autophagy inhibits apoptosis by clearing misfolded/unfolded proteins, damaged organelles, and mitochondria, by inhibiting caspase-8 activation, and by eliminating SQSTM1/p62 [17]. Caspases also play important roles in the regulation of autophagy that are separate from their roles in apoptosis. The activation of autophagy is an immediate response, whereas caspases are subsequently activated [18,19]. Studies have demonstrated that autophagosomal membranes serve as a platform for intracellular death-inducing signaling complex (iDISC)-mediated caspase-8 activation and apoptosis [17]. Activated caspases cleave several key autophagy proteins, including Beclin-1, Atg5, VPS34, ATG3, ATG4D, and Atg16L, resulting in the suppression of autophagy [20]. It is known that autophagy is a double-edged sword in cell injury. On the one hand, it is involved in the degradation of abnormal proteins and prevents the accumulation of harmful substances, acting as a cellular scavenger. On the other hand, it acts as a switch that regulates cell survival and death through changes in its threshold level; if the level of autophagy is too high or too low, organelles are damaged and cells undergo autophagic cell death (AuCD), a form of type II programmed cell death [13]. As there are common signaling pathways and interacting regulatory mechanisms between autophagy and apoptosis, the duality of autophagy may be related to the regulation of apoptosis under stress conditions. In our study, autophagy was observed in a dynamic manner both in vivo with renal I/R and in vitro with H/R. The expression of LC3II and Beclin-1 gradually increased, peaking at 6 h after reperfusion, and then declined as the duration of renal I/R increased. In addition, the expression of cleaved caspase-3 was low at the beginning of the reperfusion phase but increased as the reperfusion time was extended. At the later reperfusion times, decreased autophagy might limit the ability of cells to remove damaged organelles and proteins, further aggravating apoptosis and slowing recovery from I/R injury, although the expression of cleaved caspase-3 was notably decreased at 12 h after reoxygenation in vitro, which may be due to a lack of biosynthetic materials (Figure 4(a)). These results implied that early activation of autophagy inhibits the occurrence of apoptosis. As the apoptotic process progressed, it in turn inhibited autophagic activity, suggesting that the autophagy and apoptotic cell death signaling pathways may interact with each other during AKI. Autophagy is induced under stressful conditions to ensure cell survival by limiting necrosis or apoptosis. Similarly, decreased apoptosis has been reported as a biologically protective mechanism triggered by IPC [21,22]. Recently, growing evidence has indicated that autophagy partly mediates IPC-induced organ protection by inhibiting apoptosis [22]. The cytoprotective role of IPC-induced autophagy has been reported in chemotherapy-treated livers, neuroblastoma cells, and human recipients of fatty liver grafts.
Autophagy is also known to render cells resistant to apoptosis induced by topoisomerase II inhibitors [3,23]. Zhang et al. [24] recently reported that HPC reduced myocardial I/R injury by inducing FUNDC1-dependent mitophagy in platelets. In this study, we clearly demonstrated that HPC induced autophagic flux and reduced apoptosis at different time points after renal H/R in vitro. In addition, in separate experiments we confirmed that 3-MA inhibited IPC-induced protection against renal cell apoptosis and injury in renal I/R in vitro (data not shown), validating that IPC-mediated activation of autophagy is crucial in affording protection in renal I/R-induced AKI. Although the exact mechanism by which IPC-induced autophagy protects from oxidants, ATP depletion, or I/R injury is unclear, it is likely that autophagy activation meets necessary bioenergetic needs and eliminates protein aggregates and damaged organelles formed during the injury. Thus, a normal flux of autophagic activity is important for suppressing cell death [3]. Autophagic flux refers to complete autophagic activity with the degradation and removal of autophagic cargo. To monitor the various stages of autophagy during renal I/R, the tandem GFP-RFP-LC3 adenovirus construct was used in this study, confirming the effect of IPC on promoting the whole autophagic flux in renal I/R injury. Some situations, including impairment of the autophagy process or overdigestion of cytoplasmic contents due to excessive autophagy, may result in autophagic cell death [25]. IPC was reported to inhibit excessive autophagic cell death in a rat spinal cord I/R model [26]. Other studies have reported that autophagic cell death is triggered by prolonged IPC induced by cigarette smoke extract in human umbilical vein endothelial cells [27]. This discrepancy could result from differences in the methods used and the degree of autophagy activation, the methods of implementing IPC, or the experimental subjects. In summary, we demonstrated that autophagy and apoptosis were altered in a time-dependent manner both in cultured renal tubular epithelial cells (RTECs) and in kidney tissues. IPC could protect against I/R-induced renal cell apoptosis and injury by increasing autophagy. Further studies are needed to gain insights into the specific molecular mechanism whereby IPC mediates autophagy and protects cells from AKI.

Figure 5. LC3 staining in renal tubular cells of the different groups infected with the mRFP-GFP-LC3 adenovirus for 12 h and then exposed to H/R injury with or without prior HPC. Autophagosomes (APs) are represented by yellow puncta and autolysosomes (ALs) by red puncta in the merged images. The results were obtained from three independent experiments with at least 100 cells analyzed. The data are expressed as the mean ± SD. *P < 0.05 versus the number of APs in the H/R 2 h group; #P < 0.05 versus the number of ALs in the H/R 2 h group.

Conflicts of Interest The authors declare that they have no conflicts of interest.
Constraints on the Spacetime Variation of the Fine-structure Constant Using DESI Emission-line Galaxies

We present strong constraints on the spacetime variation of the fine-structure constant α using the Dark Energy Spectroscopic Instrument (DESI). In this pilot work, we utilize ∼110,000 galaxies with strong and narrow [O III] λλ4959, 5007 emission lines to measure the relative variation Δα/α in space and time. The [O III] doublet is arguably the best choice for this purpose owing to its wide wavelength separation between the two lines and its strong emission in many galaxies. Our galaxy sample spans a redshift range of 0 < z < 0.95, covering half of all cosmic time. We divide the sample into subsamples in 10 redshift bins (Δz = 0.1), and calculate Δα/α for the individual subsamples. The uncertainties of the measured Δα/α are roughly between 2 × 10⁻⁶ and 2 × 10⁻⁵. We find an apparent α variation with redshift at a level of Δα/α = (2–3) × 10⁻⁵. This is highly likely to be caused by systematics associated with wavelength calibration, since such small systematics can be caused by a wavelength distortion of 0.002–0.003 Å, which is beyond the accuracy that the current DESI data can achieve. We refine the wavelength calibration using sky lines for a small fraction of the galaxies, but this does not change our main results. We further probe the spatial variation of α in small redshift ranges, and do not find obvious, large-scale structures in the spatial distribution of Δα/α. As DESI is ongoing, we will include more galaxies, and by improving the wavelength calibration, we expect to obtain a better constraint that is comparable to the strongest current constraint.

INTRODUCTION The Standard Model of particle physics assumes that fundamental physical constants are universal and constant. They do not vary in space and time, and their values can only be obtained from experiments. On the other hand, modern theories beyond the Standard Model predict or even require the variation of the fundamental constants (Martins 2017). In the past several decades, different methods, including laboratory experiments and astrophysical observations, have been developed to search for possible variations of these constants (see Uzan 2003, 2011, for a review). A particularly interesting constant is the fine-structure constant, denoted by α ≡ e²/ℏc or α ≡ e²/(4πϵ₀ℏc), where e, ℏ, c, and ϵ₀ are the elementary charge, reduced Planck constant, speed of light in vacuum, and electric constant of free space, respectively. It is a dimensionless quantity and characterizes the strength of the electromagnetic interaction between elementary charged particles. Its possible spacetime variation can be measured from observations of distant astrophysical objects or radiation. Probing the α variation has a long history (e.g., Savedoff 1956; Bahcall & Schmidt 1967). It is now clear that any relative α variation ∆α/α in the local Universe or time variation α̇/α (α̇ ≡ dα/dt) at the present time, if it exists, must be extremely small (e.g., Damour & Dyson 1996; Petrov et al. 2006; Rosenband et al. 2008; Murphy et al. 2022). Therefore, astrophysical observations mostly focus on high-redshift objects and explore the α variation at early times, when a variable α is theoretically more possible (e.g., Barrow et al. 2002; Alves et al. 2018).
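For orientation only (not part of the paper), the SI form of α quoted above can be evaluated from CODATA constants; the Python snippet below uses scipy.constants and recovers the familiar value α ≈ 1/137.

```python
import math
from scipy.constants import e, hbar, c, epsilon_0, fine_structure

# alpha = e^2 / (4 * pi * epsilon_0 * hbar * c), dimensionless in SI units.
alpha = e**2 / (4 * math.pi * epsilon_0 * hbar * c)
print(alpha)            # ~7.2974e-3
print(1 / alpha)        # ~137.036
print(fine_structure)   # CODATA reference value carried by scipy, for comparison
```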
The majority of the observational studies in the past three decades have utilized quasar absorption lines based on high-resolution spectral observations. This powerful approach uses very bright quasars as background sources and analyzes absorption lines from intervening gas clouds. The absorption lines used in early works were mainly fine-structure doublet lines such as C IV, N V, Mg II, and Si IV (e.g., Potekhin & Varshalovich 1994; Cowie & Songaila 1995; Murphy et al. 2001c; Chand et al. 2005). In this so-called alkali-doublet (AD) method, the wavelength separation of each doublet is directly related to α (particularly for these relatively light elements) and thus provides an excellent tracer of the α variation. While the AD method is relatively clean and straightforward, it does not use all the information in quasar spectra, since one absorption system usually contains a series of absorption lines. To make use of all (or most) absorption lines in an absorption system, the many-multiplet (MM) method was introduced (e.g., Dzuba et al. 1999a; Webb et al. 1999). This method takes advantage of different relativistic effects on the α measurement for different elements. In particular, heavy elements usually have strong relativistic effects, and some of them show opposite relativistic corrections (Dzuba et al. 1999b). By comparing the absorption lines from these heavy atoms (commonly used lines include Fe II, Zn II, Ni II, Cr II, etc.) and light atoms, the MM method can achieve a much greater accuracy than the AD method. Therefore, later studies are mostly based on the MM method (e.g., Molaro et al. 2013; Songaila & Cowie 2014; Wilczynska et al. 2015; Kotuš et al. 2017; Milaković et al. 2021), and so far the strongest constraints (null results) on the α variation are also from this method. As the MM method uses all absorption lines in quasar spectra, it requires not only accurate laboratory wavelengths for all atomic transitions involved, but also accurate measurements of the line wavelengths in the observed spectra. There could be a number of systematic uncertainties, including isotope effects, slit effects, kinematic effects, velocity or wavelength distortions, etc., that may cause false detections, and these systematics have been extensively discussed in the literature (e.g., Murphy et al. 2001a). Some systematic uncertainties can be suppressed by large samples. For example, the absorption lines of different atoms from one absorption system may originate from different parts of a gas cloud and thus have different velocities. This is quite common in astrophysical objects, and this effect can be reduced using a large number of absorption systems. One intriguing result from previous studies of quasar absorption lines was tentative evidence for both temporal and spatial variation of α at a level of ∆α/α ∼ (1−10) × 10⁻⁶ (e.g., Murphy et al. 2001b; Webb et al. 2001; Murphy et al. 2003; Webb et al. 2011; King et al. 2012). Long-range wavelength distortions were later found to be the dominant source of systematic uncertainty in those measurements (e.g., Whitmore & Murphy 2015), but their overall impact may not be as severe as initially thought (e.g., Dumont & Webb 2017). With the advances of precision radial velocity measurements on large telescopes, such as the VLT ESPRESSO, and new analysis methods such as artificial intelligence, the wavelength measurement is being improved and systematics associated with the wavelength calibration are being alleviated (e.g., Milaković et al. 2020; Lee et al. 2021a,b; Schmidt et al. 2021; Milaković & Jethwa 2024; Schmidt & Bouchy 2024).
2021;Milaković & Jethwa 2024;Schmidt & Bouchy 2024). In addition to the lines in the optical, atomic finestructure lines and molecular lines in the sub-millimeter and radio bands have also been applied to probe the α variation (e.g., Levshakov et al. 2017;Kanekar et al. 2018), but they have not been widely used.There are other astronomical methods, including galaxy clusters, type Ia supernovae, and cosmic microwave background (e.g., de Martino et al. 2016;Holanda et al. 2016;Hart & Chluba 2018;Smith et al. 2019).These methods are usually model dependent with astronomical assumptions, and have not yet achieved stringent constraints on the α variation. Despite the fact that quasar absorption lines are currently the most widely-used method, the earliest astrophysical observations actually used emission lines, particularly quasar/AGN emission lines (e.g., Savedoff 1956;Bahcall & Schmidt 1967).Emission lines in astronomy provide a great advantage compared to absorption lines: they are much easier to observe with high signal-to-noise ratios (S/Ns).The emission lines of the SDSS quasars have been used to constrain the α variation (Bahcall et al. 2004;Albareti et al. 2015), though the constraint is not very stringent.The main emission line in this work is the [O III] λλ4959,5007 doublet (hereafter [O III]).The [O III] doublet is arguably the best emission line to probe the α variation (see Section 2 for a detailed explanation).The problem is that quasar emission lines are broad, and the [O III] doublet lines are often affected by the Hβ emission line.They also suffer from other issues, such as Fe II contamination (see Section 2).In emission-line galaxies (ELGs), however, the [O III] doublet lines are much narrower and cleaner, and thus offer a great tool in searches for a variable α.On the other hand, they require relatively higher spectral resolution.Because there is a lack of large samples with medium to high resolution spectra, ELGs have been rarely used so far. We are carrying out a program to probe the α variation using massive galaxy spectroscopic surveys from the Dark Energy Spectroscopic Instrument (DESI; Levi et al. 2013;DESI Collaboration et al. 2016a,b, 2022).In this paper we present our first constraint using a large sample of > 100, 000 ELGs at z ≤ 0.95.At z = 0.95, the Universe was roughly half the current age.We primarily rely on the [O III] doublet in this work, and will include the [Ne III] λλ3869,3967 (hereafter [Ne III]) doublet and possibly other doublets in the next work.The layout of the paper is as follows.In Section 2, we will introduce our galaxy sample and spectroscopic data.In Section 3, we will measure ∆α/α from the galaxies and present our main results.We will discuss our results in Section 4 and summarize the paper in Section 5. Throughout the paper, all magnitudes are expressed on the AB system.We use a Λ-dominated flat cosmology with H 0 = 69 km s −1 Mpc −1 , Ω m = 0.3, and Ω Λ = 0.7. DATA AND WAVELENGTH CALIBRATION In this section, we will first explain the [O III] doublet as an excellent probe of the α variation (Figure 1).We will then introduce the DESI spectroscopic survey and our galaxy sample selection from DESI.A key step in all previous studies is the wavelength calibration.We will use sky lines to refine wavelength calibration for a small sample of our galaxies. 
[O III] Doublet as a Probe of Varying α As for the alkali-like doublets used in the AD method mentioned in Section 1, the wavelength separation of the [O III] doublet lines ∆λ = λ₂ − λ₁ (where λ₁ and λ₂ are the line wavelengths) is directly related to α. With a reasonable, non-relativistic approximation, ∆λ/λ̄ ∝ α², where λ̄ is the average of λ₁ and λ₂. In this work, we replace λ̄ by λ₂, because the [O III] λ5007 line is about three times stronger than the [O III] λ4959 line and thus has a much higher S/N. This does not affect the calculation of ∆α/α. Then ∆α/α is calculated by

∆α/α ≈ (1/2) [ (∆λ(z)/λ₂(z)) / (∆λ(0)/λ₂(0)) − 1 ],   (1)

where 0 and z label two redshifts. Any uniform shift of a spectrum caused by either a cosmological redshift or a Doppler effect does not influence the measurement. The present-day vacuum wavelengths of the doublet lines, typically used as the reference values, are λ₁(0) = 4960.295 Å and λ₂(0) = 5008.240 Å, respectively. The uncertainties of the two values are unclear. They are barely sufficient for this work and should be improved in the future. On the other hand, the absolute values are not critical if we have a range of redshifts (Bahcall et al. 2004), because one can use Equation 1 to compare any two redshifts or two systems. Among all UV and optical emission lines, the [O III] doublet lines are the best choice for the purpose of detecting a varying α. The reason is twofold. The first one is that the doublet lines have a very wide wavelength separation ∆λ (nearly 50 Å), roughly one order of magnitude larger than most fine-structure doublet lines. This wavelength separation is directly proportional to the sensitivity of the ∆α/α measurement: this is because Equation 1 can be rewritten as ∆α/α ≈ 0.5 × δ(∆λ)/∆λ, where δ(∆λ) = ∆λ(z) − ∆λ(0). From this formula, a ∆λ change of 0.01 Å for [O III] means ∆α/α ≈ 10⁻⁴. In other words, a systematic or measurement uncertainty of 0.01 Å sets a detection limit of ∆α/α ≈ 10⁻⁴. As we will see in Section 3, an accuracy of 0.01 Å is roughly the best that we can achieve for individual [O III] doublet systems in the DESI data. The other reason is that the [O III] doublet lines are very strong in many galaxies. In some galaxies they are the strongest emission lines in the UV and optical range, far stronger than any other doublets. Therefore, it is easy to obtain high S/N spectra for [O III], which is critical for the ∆α/α measurement (see Figure 1).
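A minimal numerical sketch of Equation 1 and of the sensitivity estimate above (the reference wavelengths are those quoted in the text; the "measured" wavelengths are hypothetical):

```python
LAM1_0, LAM2_0 = 4960.295, 5008.240   # present-day vacuum wavelengths (Angstrom)

def delta_alpha_over_alpha(lam1_z: float, lam2_z: float) -> float:
    """Equation 1: Delta(alpha)/alpha ~ 0.5 * [ (dlam/lam2)_z / (dlam/lam2)_0 - 1 ],
    with dlam = lam2 - lam1. Uniform shifts (redshift or Doppler) cancel out."""
    r_z = (lam2_z - lam1_z) / lam2_z
    r_0 = (LAM2_0 - LAM1_0) / LAM2_0
    return 0.5 * (r_z / r_0 - 1.0)

# Hypothetical measurement: a 0.01 A change in the doublet separation
# corresponds to Delta(alpha)/alpha ~ 1e-4, as stated in the text.
print(delta_alpha_over_alpha(4960.295, 5008.240 + 0.01))
```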
As mentioned in Section 1, currently the most widely used method is based on quasar absorption lines. Emission lines have rarely been used in previous work. Quasar emission lines (mainly [O III]) were used before (e.g., Bahcall et al. 2004; Albareti et al. 2015). While the [O III] emission lines are usually strong in quasars, they are difficult to use for an accurate measurement of the variation in α for the following reasons. The first reason is that quasar emission lines are broad, and broad lines are not optimal for an accurate radial velocity measurement. Although the [O III] lines are not typical broad emission lines such as C IV and Mg II in quasars, they are generally much broader than those in galaxies. Because the lines are broad, the [O III] lines are often blended with the broad Hβ emission line, which can affect the measurement of the line centers. Furthermore, there are plenty of Fe II emission lines that form a pseudo continuum in quasar UV and optical spectra (e.g., Vanden Berk et al. 2001; Vestergaard & Wilkes 2001; Tsuzuki et al. 2006; Jiang et al. 2007; Wang et al. 2022). The strength of the Fe II emission varies from one quasar to another. The existence of the Fe II emission affects both flux and wavelength measurements for other emission lines. In quasar studies, the contaminant Hβ and Fe II emission can be modeled and subtracted, but the accuracy requirement is far lower than the level that we demand in this work.

DESI Survey and Sample Selection ELGs are a much better probe of α compared to quasars, as their lines are much narrower and cleaner. The SDSS BOSS and eBOSS surveys (Dawson et al. 2013, 2016) produced a large number of ELGs. These galaxies were primarily used to study the expansion history of the Universe via the measurement of the baryon acoustic oscillations. The resolving power of the SDSS spectra is about 2000, barely enough to resolve individual, narrow emission lines from most galaxies. Therefore, SDSS galaxies have not been widely used to study the α variation. Now DESI is accumulating the largest number of galaxy spectra. Its resolving power reaches ≥ 5000 in the long wavelength range. A higher wavelength resolution means not only a better wavelength determination, but also less contamination from sky lines. DESI is a robotic, fiber-fed, highly multiplexed spectroscopic surveyor that operates on the Mayall 4-meter telescope at Kitt Peak National Observatory (DESI Collaboration et al. 2022, 2023a,b; Schlafly et al. 2023). The primary goal of DESI is similar to that of BOSS and eBOSS, i.e., to study the expansion history of the Universe and the growth of the large-scale structure. DESI selected targets from an imaging survey that covers more than 14,000 deg² of the sky (Zou et al. 2017; Dey et al. 2019). Its main extragalactic targets (Myers et al. 2023) include luminous red galaxies (LRGs; Zhou et al. 2023), ELGs (Raichoor et al. 2023), and quasars (Alexander et al. 2023; Chaussidon et al. 2023). These targets are used as the tracers of the large-scale structure. In addition, DESI also carries out a Bright Galaxy Survey (BGS; Hahn et al. 2023; Lan et al. 2023; Juneau et al. 2024) and a Milky Way Survey (Cooper et al. 2023). When we select our galaxies for this work below, we do not discriminate between the different samples. We simply select galaxies with narrow and strong [O III] lines from all extragalactic samples. DESI has a large field of view of 8 deg², equipped with 5000 fibers for science targets in the focal plane. Light is split into three arms and recorded by three different cameras. The three arms are denoted as blue "B" (3600-5800 Å), red "R" (5760-7620 Å), and near-IR "Z" (7520-9820 Å), respectively. The wavelength ranges of adjacent arms slightly overlap for the purpose of calibration. The overlap regions have a much lower throughput, so we exclude galaxies whose [O III] lines are located in the central parts of the overlap regions. We select galaxies from a DESI internal value-added catalog (v2.0) generated by FastSpecFit (https://fastspecfit.readthedocs.org/; J. Moustakas et al., in preparation; Moustakas et al. 2023).
We focus on strong and narrow [O III] emitters, since the accuracy of the wavelength measurement depends on both line strength and line width. We define a line quality parameter Q ≡ f/√σ, where f is the flux of the [O III] λ4959 line in units of 10⁻¹⁷ erg s⁻¹ cm⁻² and σ is the velocity dispersion in units of km s⁻¹. If a line profile is a Gaussian, the full width at half maximum (FWHM) of the line is 2.35σ. Note that the [O III] λ5007 line is three times stronger than the [O III] λ4959 line, and they have the same intrinsic line profile. The redshift range considered in this work is 0 < z < 0.95, and the upper limit of the redshifts is determined by the wavelength coverage of the Z arm. Our initial selection in this work includes galaxies with Q > 10 and σ < 170 km s⁻¹ (FWHM < 400 km s⁻¹) and galaxies with 5 < Q < 10 and σ < 130 km s⁻¹ (FWHM < 300 km s⁻¹). Our final sample contains ∼110,000 galaxies after we remove a small fraction of the sources for a variety of reasons (Section 3). The redshift distribution of the galaxies is shown in Figure 2. Most of them are at relatively low redshifts.
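A hedged sketch of the line-quality selection described above; the array names are illustrative and do not correspond to actual FastSpecFit catalog columns.

```python
import numpy as np

def select_oiii_emitters(flux_4959, sigma_kms):
    """Selection cut from the text: Q = f / sqrt(sigma); keep galaxies with
    (Q > 10 and sigma < 170 km/s) or (5 < Q < 10 and sigma < 130 km/s)."""
    q = flux_4959 / np.sqrt(sigma_kms)
    return ((q > 10) & (sigma_kms < 170)) | ((q > 5) & (q < 10) & (sigma_kms < 130))

flux = np.array([50.0, 8.0, 120.0])     # [O III] 4959 flux, units as in the text
sigma = np.array([90.0, 140.0, 200.0])  # velocity dispersion in km/s
print(select_oiii_emitters(flux, sigma))  # [ True False False]
```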
Wavelength Calibration Wavelength calibration is a critical step for the study of the variation in α. It is particularly important for the MM method, as that method deals with absorption lines over a wide wavelength range. It is often less important in the AD method, because the wavelength separations of most doublets are very small (a few Å in the rest frame). However, the [O III] doublet lines have a wide wavelength separation of nearly 50 Å, which reaches nearly 100 Å in the observed frame for galaxies at z ∼ 1, the highest redshift in this sample. While the wide wavelength separation greatly improves the detection sensitivity, a wavelength or velocity distortion can be significant over such a wide wavelength range. DESI performs its wavelength calibration in three steps, including an initial guess, a calibration with arc lamp lines, and a calibration with sky lines. It reaches a high accuracy of rms ∼ 0.02 Å (Guy et al. 2023), which is good enough for galaxy and cosmology science programs. We test the wavelength calibration using some of the brightest [O III] emitters based on the method that will be addressed in Section 3. Specifically, we calculate ∆α/α for these galaxies. Because previous studies have demonstrated a non-variation of α at a level of ∼ 5 × 10⁻⁶ or ∼ 5 ppm (ppm: one part per million) (e.g., Lee et al. 2023; Webb et al. 2023), we do not expect to detect a varying α from individual galaxies here. From the above test we find that some galaxies show a significant α variation, which should be caused by a wavelength distortion. We try to refine the wavelength calibration using sky lines (particularly OH lines) if an [O III] doublet is located in a wavelength range with sufficient sky lines. We do not try to derive a new global wavelength solution for each galaxy. Instead, we calculate the wavelengths of the sky lines and estimate their deviations from their theoretical or laboratory values.

For each galaxy with an [O III] doublet surrounded by sky lines, we start with its two intermediate files, the "sframe-CAMERA-EXPID.fits" file ("sframe" file for short) and the "sky-CAMERA-EXPID.fits" file ("sky" file for short), where CAMERA is one of the spectrograph cameras and EXPID is an 8-digit exposure ID. The "sky" file contains the sky model for the galaxy, and the "sframe" file contains the sky-subtracted galaxy spectrum before the flux calibration step. We combine the "sframe" and "sky" files to produce a galaxy spectrum with sky lines. We then select isolated and strong sky lines in the spectrum to calculate their wavelengths. A Gaussian profile is fitted to each line. A single Gaussian usually works well. Figure 3 shows an example. Sky lines mostly arise from OH vibration-rotation transitions, and they are doublets with equal fluxes owing to Λ-splitting. The separations of the doublets are typically a few tenths of an Å, so the individual components of the doublets are not resolved by DESI. Therefore, a more complex sky line model is not needed. We perform a series of tests regarding how to select sky lines. First of all, we only consider strong and isolated lines. Weak lines produce large uncertainties and do not help us improve the wavelength calibration. Second, some galaxies were observed on bright nights and/or with short exposure times (much shorter than the nominal time of 1000 seconds). In this case, some strong sky lines appear weak in the spectra and are excluded from our analyses. Finally, we note that we can directly fit the lines in the sky model spectrum rather than the combined spectrum. We do a test and find that the results are well consistent. Since the sky model spectrum usually has a higher S/N, we decide to use the sky model spectrum to derive the sky line wavelengths, and the whole procedure is otherwise the same. After we obtain the wavelengths of the sky lines, we compare them with laboratory wavelengths. Abrams et al. (1994) measured OH line wavelengths between 1850 and 9000 cm⁻¹ in the lab to great accuracy (better than 0.00001 Å), and their result is one of the calibration standards. Their measurements were done under conditions similar to those in the upper atmosphere where OH lines are produced. We do not directly use the result of Abrams et al. (1994), because most of their measured lines are very weak (or not detected) in the DESI spectra. Instead, we select strong lines from the list of Rousselot et al. (2000), who measured the wavelengths of OH lines using spectra taken with the VLT, with their wavelengths calibrated to match those of Abrams et al. (1994). In addition to the OH lines, there are some other lines (such as O2 and O I) in the wavelength range that we consider. These lines were not included in the above sky line list. We add several of these isolated and strong lines using the results of Osterbrock et al. (1996, 1997). Osterbrock et al. (1996) measured sky lines using spectra taken with the Keck telescope, and their wavelength calibration was also done by matching the result of Abrams et al. (1994). Note that DESI uses the wavelength measurements of Hanuschik (2003) in the last step of its wavelength calibration. The result of Hanuschik (2003) is consistent with the previous studies mentioned above. We also use Hanuschik (2003) to cross-check our result. Section 3.2 will describe how the new wavelengths are applied to our analyses.
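A minimal sketch (not the DESI pipeline code) of the sky-line centroid measurement described above, fitting a single Gaussian to an isolated line in the sky model spectrum:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

def fit_sky_line(wave, sky_flux, guess_center, window=5.0):
    """Fit one isolated, strong sky line and return its centroid (Angstrom).
    'wave' and 'sky_flux' are numpy arrays from the sky model spectrum."""
    sel = np.abs(wave - guess_center) < window
    p0 = [sky_flux[sel].max(), guess_center, 1.0, np.median(sky_flux[sel])]
    popt, _ = curve_fit(gaussian, wave[sel], sky_flux[sel], p0=p0)
    return popt[1]

# The measured centroid would then be compared with the reference wavelength
# (Rousselot et al. 2000 / Osterbrock et al. 1996) to obtain the local offset.
```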
It is worth mentioning that wavelength values in the literature are given as either air or vacuum wavelengths. Different works used different conversion formulas between air and vacuum wavelengths (e.g., Edlén 1953, 1966; Ciddor 1996). For example, Abrams et al. (1994) claimed that they used the Edlén (1966) conversion. Later, Osterbrock et al. (1996) found that Abrams et al. (1994) actually used the Edlén (1953) conversion. Rousselot et al. (2000) stated that they used the Allen (1973) conversion, which is actually based on the Edlén (1953) conversion. Hanuschik (2003) used the Edlén (1966) conversion. We use a relatively new conversion by Ciddor (1996). Nevertheless, the differences among these conversions are tiny; see Murphy et al. (2001b) for a detailed comparison. These differences can be completely ignored in this work.

In this section, we measure ∆α/α values for our galaxy sample and present our main results. We use Equation 1 to calculate ∆α/α, meaning that we need to measure the separation of the [O III] doublet lines ∆λ(z) and the wavelength of the [O III] λ5007 line λ₂(z). The two doublet emission lines intrinsically have identical line profiles, as both transitions arise from the same excited energy level of doubly ionized oxygen. However, this does not mean that the two lines have the same profiles in an observed spectrum, because the spectral resolution and dispersion could be slightly different at the two different wavelengths. In addition, a doublet line could be the superposition of multiple components, as we will see in Section 3.1. In this case, the two line profiles are still intrinsically identical, but the wavelength measurement becomes complex. A direct cross correlation is not necessarily a good method to find the separation of the doublet lines. Bahcall et al. (2004) used a modified cross-correlation method, i.e., the relative strength and width of the two lines were allowed to vary. This method worked for quasar emission lines that are broad and cover many pixels. It does not work well for our narrow emission lines that cover only several pixels (see Galaxy 1 in Figures 4 and 5), because the fine resampling algorithm for narrow lines during the cross correlation relies strongly on the assumption of intrinsic line profiles. Therefore, we prefer direct model fitting to the lines. We will first use a single Gaussian to fit the individual [O III] doublet lines and then use double Gaussian profiles to improve the line fit. We will finally use sky lines to refine our measurements when possible. In this work we will focus on the time variation of α, but will also briefly estimate its spatial variation.

Single Gaussian Fit and Double Gaussian Fit For each emission line, we first subtract its underlying continuum emission, which is obtained by fitting a second-order polynomial curve to a short spectral range around the line. In this work we only selected strong [O III] emitters as described in the previous section, and the majority of the galaxies in our sample have relatively weak continuum emission and thus large equivalent widths (EWs). After the continuum subtraction, we fit a Gaussian to the line. The two [O III] doublet lines are fitted independently. Figure 4 shows an example of our best single Gaussian fits to two galaxies, including a good fit to a narrow [O III] doublet and a relatively poor fit to a broader [O III] doublet. The catalog that we use contains a small number of contaminants. In addition, some galaxies or emission lines were misclassified. We remove them in this step.
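The per-line fitting step just described can be sketched as follows (an illustrative implementation, not the authors' code): fit a second-order polynomial continuum to a short spectral range around the line, subtract it, and then fit an independent Gaussian to each doublet line.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_line(wave, flux, line_guess, fit_half_width=8.0, cont_half_width=25.0):
    """Single Gaussian fit to one [O III] line after local continuum subtraction."""
    near = np.abs(wave - line_guess) < cont_half_width      # local spectral window
    core = np.abs(wave - line_guess) < fit_half_width       # region containing the line
    # 2nd-order polynomial continuum fitted to the sidebands only.
    cont = np.polyfit(wave[near & ~core], flux[near & ~core], 2)
    resid = flux - np.polyval(cont, wave)
    p0 = [resid[core].max(), line_guess, 1.5]
    popt, pcov = curve_fit(gauss, wave[core], resid[core], p0=p0)
    return popt, np.sqrt(np.diag(pcov))   # [amp, center, sigma] and 1-sigma errors
```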
A single Gaussian works reasonably well for narrow [O III] emitters. We also try to fit a Voigt profile, which has one more free parameter, and the results are generally poorer. As the line width increases, a single Gaussian works less well, mainly because the lines show more asymmetric profiles, i.e., a line is presumably a superposition of more than one component at the DESI spectral resolution. Galaxy 2 in Figure 4 clearly shows an asymmetric profile. We will see in Figure 5 that this galaxy can be well fitted by two Gaussian components. In order to improve the performance of the line fitting, we add one more Gaussian component to the line model. For each emission line, we subtract its underlying continuum emission as we did above. We first perform a double Gaussian fit to the [O III] λ5007 line, since it is three times stronger than the other doublet line. A double Gaussian fit requires at least 6 effective data points. As we can see from Figures 4 and 5, narrow lines in the DESI spectra barely meet this requirement. After we obtain the best fit for the [O III] λ5007 line, we fit the [O III] λ4959 line using the [O III] λ5007 result as a prior. During the fit, we fix the separation, the relative strength, and the relative line width of the two Gaussian components of the [O III] λ4959 line as determined from the [O III] λ5007 line, given that the two doublet lines are intrinsically identical. Therefore, the number of free parameters in this fit is only 3. Figure 5 illustrates our double Gaussian fit to the same two galaxies displayed in Figure 4. The fitting result is significantly improved for broad lines.

Figure 5. Example of a double Gaussian fit to [O III] in two galaxies. The two galaxies are the same as shown in Figure 4. The diamonds represent the data points, and the underlying continuum emission has been subtracted. The red profiles are the best Gaussian fits. For the two [O III] λ5007 lines, we also show the individual Gaussian components in green and blue. A double Gaussian usually provides a good fit to our sample galaxies.

We carefully evaluate the single and double Gaussian fitting results. For each galaxy, we prefer its single Gaussian result unless the single Gaussian fit is poor as judged by its χ² value. As expected, a single Gaussian profile works well for narrow lines (roughly FWHM ≤ 130 km s⁻¹), and a double Gaussian model is not needed in most cases. For broader lines, a double Gaussian model works better. For the broadest lines in our sample with FWHM > 250 km s⁻¹, a double Gaussian is often far better than a single Gaussian. As a result, we usually adopt the single Gaussian results for narrow lines and the double Gaussian results for broad lines.
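A hedged sketch of the tied double Gaussian fit described above (illustrative only): the [O III] λ5007 line is fitted with two free Gaussian components, and the λ4959 line is then refitted with the component separation, amplitude ratio, and widths fixed from the λ5007 result, leaving only three free parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(x, a1, mu1, s1, a2, dmu, s2):
    """Two Gaussian components; the second is offset by dmu from the first."""
    g1 = a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((x - mu1 - dmu) / s2) ** 2)
    return g1 + g2

def fit_doublet(wave_5007, flux_5007, wave_4959, flux_4959, p0_5007, p0_4959):
    # Step 1: free double Gaussian fit to the stronger [O III] 5007 line.
    p5007, _ = curve_fit(two_gauss, wave_5007, flux_5007, p0=p0_5007)
    a1, mu1, s1, a2, dmu, s2 = p5007

    # Step 2: tied fit to 4959 with only 3 free parameters (scale, center,
    # overall width factor); component separation, amplitude ratio, and the
    # relative widths are fixed from the 5007 result.
    def tied_4959(x, scale, mu, wfac):
        return two_gauss(x, scale * a1, mu, wfac * s1, scale * a2, dmu, wfac * s2)

    p4959, _ = curve_fit(tied_4959, wave_4959, flux_4959, p0=p0_4959)
    return p5007, p4959
```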
A small fraction of the galaxies are poorly fitted by either a single Gaussian or a double Gaussian. We visually inspect many of these galaxies and find that most of them have very broad and complex line profiles that cannot be described by a double Gaussian. The remainder of these galaxies includes different types of objects. Here are two typical examples. In the first example, the λ4959 and λ5007 lines have apparently different line profiles. In the second example, the λ4959 line shows strong and abnormal emission or absorption features for unknown reasons. We remove these poorly fitted lines/galaxies. For very broad and complex lines, one may in principle add one or more Gaussian components to achieve a better fit. We do not do this because the resultant uncertainties would be much larger. In addition, the overall fraction of these galaxies is small in our sample, so we simply exclude them.

Wavelength Refinement We refine the measured wavelengths of the [O III] line centers using sky lines. This can only be done for the small fraction of galaxies or lines with plenty of nearby sky lines in the "R" and "Z" arms (corresponding to z ≥ 0.2). It cannot be done for [O III] lines in the "B" arm, since there are no bright sky lines there. The details of how we calculated and calibrated the wavelengths of the sky lines are given in Section 2.3. We do not try to find a new global wavelength solution for each galaxy, because the DESI pipeline has already done this. Instead, we refine the wavelength calibration for each pair of [O III] doublet lines with four or more well measured nearby sky lines. We first compute the wavelength differences between the measured values and the reference values for the sky lines, as stated in Section 2.3. We then fit a second-order polynomial curve to the wavelength differences as a function of wavelength. From the best-fit curve, we estimate the wavelength differences for the [O III] doublet lines and apply the differences to the wavelengths of the doublet lines.

Figure 6. Examples of ∆α/α distributions before and after the wavelength refinement (blue and red histograms, respectively). The four redshifts correspond to four wavelength ranges (for [O III]) with plenty of sky lines. Each histogram has been normalized so that the total number is 10. The wavelength refinement moderately improves the results, because only a fraction of the galaxies can be improved in this step.

Figure 6 compares the results for four galaxy samples before and after the wavelength refinement by showing their distributions of the measured relative α variation ∆α/α. The four redshifts used in the figure correspond to four wavelength ranges (for [O III]) with a few or more usable sky lines. The blue and red histograms represent the ∆α/α distributions before and after the wavelength refinement. The figure shows that the wavelength refinement only moderately improves the results, and the difference between the distributions is small. This is because only a fraction of the galaxies in each sample can be improved in this step. We will further discuss the wavelength calibration and refinement in Section 4.
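The refinement step can be sketched as follows (illustrative, with hypothetical inputs): fit a second-order polynomial to the measured-minus-reference offsets of the nearby sky lines and evaluate it at the two [O III] line centers.

```python
import numpy as np

def refine_wavelengths(sky_meas, sky_ref, lam_4959, lam_5007, order=2):
    """Refine the [O III] line centers using >= 4 nearby sky lines.
    sky_meas / sky_ref: measured and reference sky-line wavelengths (Angstrom)."""
    sky_meas = np.asarray(sky_meas, dtype=float)
    offsets = sky_meas - np.asarray(sky_ref, dtype=float)
    coeffs = np.polyfit(sky_meas, offsets, order)          # offset vs. wavelength
    corr = np.polyval(coeffs, [lam_4959, lam_5007])        # offsets at the line centers
    return lam_4959 - corr[0], lam_5007 - corr[1]
```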
Figure 7 shows the ∆α/α distributions for the whole galaxy sample in four redshift bins. We have removed a tiny fraction of galaxies with ∆α/α > 10⁻³. We visually inspected these galaxies, and their lines suffer from the serious issues mentioned in the above section. On close inspection, the distributions in Figure 7 are not symmetric. They are slightly skewed, with higher numbers at lower values. This is particularly obvious in the first panel for the sample at z < 0.2. The asymmetric distributions naturally lead to non-zero detections of ∆α/α, as we will see in Section 3.4.

Error Analyses For each galaxy, the uncertainty in the ∆α/α measurement is the combination in quadrature of a measurement error and a systematic error from the wavelength calibration. The measurement error is estimated via Equation 1 from the propagation of the errors of the two line centers. It is dominated by the uncertainty of the λ4959 line, which is usually smaller than 0.05 Å. In a tiny fraction of galaxies that have narrow lines with very high S/Ns, the measurement uncertainties of the line centers can be smaller than 0.01 Å. Most of these galaxies are at z < 0.1 and come from the BGS sample. In this case, the ∆α/α measurement is dominated by the systematic error. It is not straightforward to estimate the systematic error. Guy et al. (2023) claimed that the wavelength calibration of DESI reaches an accuracy of rms ∼ 0.02 Å. As we will see in Section 4, the absolute calibration is actually worse than rms ∼ 0.02 Å. This has a small impact on our ∆α/α measurement, since ∆α/α is much more sensitive to the relative calibration in the wavelength range between the two doublet lines. However, the relative wavelength calibration is complex. In fact, this was one of the major challenges in the past. We estimate the relative wavelength calibration using the wavelength ranges with plenty of sky lines, and find rms ∼ 0.02−0.03 Å. We suspect that the accuracy of the wavelength calibration mentioned by Guy et al. (2023) is actually for the relative wavelength calibration. Therefore, we apply a uniform systematic error of 0.02 Å in our wavelength calibration. In the next two subsections, we will estimate the α variation in space and time. We divide the whole sample into different subsamples and measure ∆α/α for the individual subsamples. For each subsample, we calculate its ∆α/α and related error using two methods. In the first method, we take a weighted average. This is a straightforward approach, assuming that the individual input errors are reliable. The second method is the bootstrap estimate (e.g., Bahcall et al. 2004). For a subsample with n galaxies, its ∆α/α and error are estimated from 10,000 simulated samples. To generate a simulated sample, we randomly draw n galaxies with replacement from the real subsample, and calculate the weighted average of ∆α/α for the simulated sample. We obtain 10,000 weighted averages from the 10,000 simulated samples, and they obey a Gaussian distribution. Finally, the ∆α/α and error values of the subsample are the mean and standard deviation of this Gaussian distribution. We will see in the next subsection that the results from the two methods are well consistent.
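The two estimators described above can be sketched in a few lines (illustrative implementation, not the authors' code):

```python
import numpy as np

def weighted_mean(values, errors):
    """Inverse-variance weighted average."""
    w = 1.0 / np.asarray(errors) ** 2
    return np.sum(w * np.asarray(values)) / np.sum(w)

def bootstrap_mean(values, errors, n_boot=10_000, seed=0):
    """Bootstrap estimate as described in the text: resample the subsample with
    replacement 10,000 times, take the weighted average of each realization,
    and report the mean and standard deviation of the resulting distribution."""
    rng = np.random.default_rng(seed)
    values, errors = np.asarray(values), np.asarray(errors)
    idx = rng.integers(0, len(values), size=(n_boot, len(values)))
    means = np.array([weighted_mean(values[i], errors[i]) for i in idx])
    return means.mean(), means.std()
```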
Time Variation of α

We estimate ∆α/α (relative to the z = 0 value) at different redshifts. The whole galaxy sample is divided into 10 redshift bins from z = 0 to 1.0, with a bin size of 0.1. The ∆α/α values are calculated using the two methods described above. The color-coded symbols in the upper panel of Figure 8 show the results for the whole sample from the two methods; they are consistent, except that the uncertainties from the bootstrap estimate are slightly smaller. The sizes of the subsamples decrease towards higher redshifts, as seen from Figure 2, so the ∆α/α uncertainties increase towards higher redshifts. The uncertainties are roughly between 2 × 10^−6 and 2 × 10^−5. Since the two methods provide nearly the same results, we use only the weighted-average method in the following analyses. In the lower panel of Figure 8, we divide the whole sample into two subsamples with relatively high (Q > 10) and low (Q < 10) line quality parameters. The results are indicated in green and magenta, and they are also consistent.

Figure 8. Relative α variation ∆α/α with redshift. The blue and red circles in the upper panel represent the results of the whole sample from the weighted average and bootstrap estimate methods, respectively. The circles are slightly shifted along the x-axis for clarity. The two results are consistent. In the lower panel, we show the results for two subsamples with Q > 10 (green circles) and Q < 10 (magenta circles), respectively, and they are also consistent. The figure shows an apparent α variation with time. This should be caused by systematics associated with the wavelength measurement (see Section 3.4). Note that the zero point is only relative.

Figure 8 shows an apparent variation of α in most redshift bins. Specifically, ∆α/α is below zero (at the ≥ 2σ level) at z < 0.5 and z ∼ 0.75. As mentioned in Section 1, previous studies based on observations of bright quasars have indicated ∆α/α < 10 ppm, so the α variation seen in Figure 8 should be caused by systematics associated with the wavelength measurement. If it is caused by a wavelength distortion between the two doublet lines, we can estimate the level of the distortion as follows. From Figure 8, the variation of ∆α/α is roughly (2−3) × 10^−5. Based on our estimate in Section 2.1 that an uncertainty of 0.01 Å sets a detection limit of ∆α/α ≈ 10^−4, ∆α/α = (2−3) × 10^−5 corresponds to a wavelength distortion of 0.002−0.003 Å. This is beyond the detection capability of the DESI wavelength calibration. Therefore, we have reached a regime in which the constraint on the α variation is completely dominated by systematics. If there were no wavelength-related systematics, we would be able to achieve an accuracy of several ppm on the ∆α/α constraint in most redshift bins, and a stronger constraint could be obtained when all the data are combined.
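The quoted scaling between a wavelength distortion and ∆α/α follows from the standard alkali-doublet relation, in which the doublet separation divided by the mean wavelength scales as α². The short check below uses the [O III] rest wavelengths and is our own back-of-the-envelope verification, not code from the paper.

```python
# Alkali-doublet (AD) relation: (dlam/lam)_obs / (dlam/lam)_lab ~ (alpha_z / alpha_0)^2,
# so for a small change of the doublet separation, d(alpha)/alpha ~ 0.5 * d(sep) / sep.
LAM_4959 = 4958.91                     # [O III] rest wavelengths in Angstrom
LAM_5007 = 5006.84
SEPARATION = LAM_5007 - LAM_4959       # ~47.9 Angstrom

def dalpha_over_alpha(delta_separation):
    """Fractional change in alpha implied by a change of the doublet separation (Angstrom)."""
    return 0.5 * delta_separation / SEPARATION

print(dalpha_over_alpha(0.01))    # ~1.0e-4, the detection limit quoted from Section 2.1
print(dalpha_over_alpha(0.003))   # ~3e-5: a 0.002-0.003 Angstrom distortion mimics the observed signal
```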
It is worth noting that the zero point of ∆α/α is only relative. It is relative to the z = 0 value, but the uncertainty of the z = 0 value is unclear. As mentioned in Section 2.1, the wavelength measurement of the [O III] doublet lines at z = 0 is accurate to a few times 0.001 Å. This translates to an uncertainty of a few times 10^−5 for ∆α/α, which is close to the uncertainties shown in Figure 8. If the z = 0 value is very accurate, ∆α/α is consistent with zero at z ∼ 0.65 and z ∼ 0.85. We notice that these two redshift ranges place the [O III] doublets in wavelength ranges with plenty of sky lines, which allows a robust wavelength calibration. Nevertheless, we focus on the relative α variation in this work.

Spatial Variation of α

Our large galaxy sample provides a great advantage that allows us to measure the spatial variation of the fine-structure constant. The current DESI data already cover a large portion of the sky. In order to estimate the spatial variation, we divide the whole sample into smaller subsamples based on redshift. The reason is twofold. The first reason is to estimate the spatial variation at different redshifts. The second reason is that this can partially remove the effect of the wavelength calibration that probably causes the time variation seen in Figure 8, if the calibration has a strong wavelength dependence. To efficiently remove the potential wavelength dependence, we first estimate ∆α/α in narrow redshift slices (narrower than those shown in Figure 8) and find larger redshift ranges in which the measured ∆α/α values do not vary significantly. We then build subsamples for these larger redshift ranges. Figure 9 shows our ∆α/α measurements in 6 redshift slices.

Figure 9. Spatial variation of ∆α/α. We show our ∆α/α measurements for 6 redshift slices in the 6 panels. In each panel, ∆α/α is calculated in a grid of coordinates. The color-coded squares indicate the positions of the grid cells. Each cell contains at least 15 galaxies (otherwise the cell is not used), and its position is the median position of the galaxies in the cell. The figure does not show an obvious, large-scale variation of α.

For a subsample in a redshift slice, we calculate ∆α/α in a grid of coordinates. The grid is made as follows. The declination range of −15° < Decl. < 75° is divided into 6 bins with a bin size of 15°. The R.A. range is also divided into bins, with bin sizes that depend on declination. From Decl. = −15° to 75°, the bin sizes are 12°, 12°, 15°, 18°, 20°, and 30°, respectively. This ensures that different grid cells have similar areas. We require that each cell contains at least 15 galaxies; otherwise the cell is not used. The central position of a cell in Figure 9 is the median position of the galaxies in that cell. The color-coded squares represent our ∆α/α measurements. The ∆α/α values are relative within each redshift range (i.e., within each panel).
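A minimal sketch of this sky-grid construction is given below; the exact bin edges, the R.A. binning origin, and the cell bookkeeping are assumptions made for illustration.

```python
import numpy as np

# Declination bands of 15 deg from -15 to +75 deg, with R.A. bin sizes chosen per band
# so that cells have roughly similar areas (values taken from the text).
DEC_EDGES = np.arange(-15, 76, 15)          # -15, 0, 15, 30, 45, 60, 75
RA_BIN_SIZE = [12, 12, 15, 18, 20, 30]      # degrees, from low to high declination

def assign_cells(ra, dec, min_galaxies=15):
    """Group galaxies into (dec band, RA bin) cells; return the median position and the
    galaxy indices for every cell with at least `min_galaxies` members."""
    ra, dec = np.asarray(ra), np.asarray(dec)
    cells = {}
    for i in range(len(DEC_EDGES) - 1):
        in_band = (dec >= DEC_EDGES[i]) & (dec < DEC_EDGES[i + 1])
        band_idx = np.flatnonzero(in_band)
        ra_bin = (ra[in_band] // RA_BIN_SIZE[i]).astype(int)
        for b in np.unique(ra_bin):
            idx = band_idx[ra_bin == b]
            if len(idx) >= min_galaxies:
                center = (np.median(ra[idx]), np.median(dec[idx]))
                cells[(i, int(b))] = (center, idx)
    return cells
```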
We can see that the ∆α/α distribution in each panel is quite random, with a typical amplitude smaller than 10^−4. The rms values of the ∆α/α distributions from the top to the bottom panels are 0.44, 0.29, 0.51, 0.74, 0.70, and 0.62 × 10^−4, respectively. The differences are mainly caused by the different subsample sizes. We do not see obvious, large-scale structures of ∆α/α.

Discussion

Figure 8 in Section 3.4 shows an apparent variation of α with redshift. As we have briefly discussed, this is most likely caused by systematics associated with the wavelength calibration. We have two reasons for this claim. The first is that the α variation seen in the figure can be caused by a very small wavelength distortion of 0.002−0.003 Å, and the DESI wavelength calibration cannot reach such an accuracy. The second reason is that previous studies have put a strong upper limit on ∆α/α, roughly 5 ppm. These previous constraints were mostly from measurements of quasar absorption lines that are produced by intervening gas. The [O III] emission lines in this work are produced by the interstellar medium in star-forming galaxies. Although the absorption and emission lines come from different environments, they should obey the same laws of physics when they are at the same redshift. Because of these systematics, the apparent redshift dependence in Section 3.4 cannot be interpreted as a genuine time variation of α.

From the wavelength refinement procedure in Section 3.2, we find that the absolute wavelength calibration, as described by the wavelength differences between the measured values and reference values of sky lines, is poorer than we expected. The wavelength differences sometimes reach a few tenths of an Å. The ∆α/α measurement is not sensitive to the absolute wavelength calibration; instead, it mainly relies on the relative calibration. In order to quantify the relative calibration, we first calculate the above wavelength differences in small wavelength ranges (50−100 Å). Each wavelength range covers the [O III] doublet lines of one galaxy. The wavelength differences in each range are then reduced by their median value, i.e., the absolute wavelength deviation is removed. Figure 10 shows the distributions of these wavelength differences in two redshift ranges. We can see that the relative calibration is generally good, with an rms of 0.02−0.03 Å.
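The relative-calibration estimate described above amounts to median-subtracting the sky-line offsets within each local window and taking the rms of what remains. A small sketch of that bookkeeping follows, with the window definition and the input layout chosen for illustration.

```python
import numpy as np

def relative_calibration_rms(sky_ref, sky_meas, window_centers, half_width=50.0):
    """Estimate the *relative* wavelength calibration from sky-line offsets.

    For each local window (roughly the 50-100 Angstrom range covering one galaxy's
    [O III] doublet), the median offset is subtracted so that only the relative
    deviations remain; the rms of these residuals is returned."""
    sky_ref = np.asarray(sky_ref)
    offsets = np.asarray(sky_meas) - sky_ref          # measured - reference, per sky line
    residuals = []
    for center in window_centers:
        in_window = np.abs(sky_ref - center) < half_width
        if in_window.sum() < 2:
            continue
        local = offsets[in_window]
        residuals.append(local - np.median(local))    # remove the absolute deviation
    residuals = np.concatenate(residuals)
    return np.sqrt(np.mean(residuals**2))             # text: rms ~ 0.02-0.03 Angstrom
```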
In Section 3.1 we mentioned that narrower lines usually provide better wavelength measurements and thus better constraints on the α variation. We further examine the effect of the line widths on our results. The galaxy sample is divided into subsamples with FWHM < 150 km s^−1 and FWHM > 150 km s^−1, respectively. The upper panel of Figure 11 shows the ∆α/α distributions of the two subsamples in two redshift ranges. It is clear that narrow lines (in red) provide slightly better results. The lower panel of Figure 11 shows the redshift evolution of the ∆α/α measurements for the two subsamples. In the first redshift bin, we only show the narrow-line sample because the size of the other sample is too small. The results for the two subsamples are consistent, suggesting that the effect of the line widths on our results is minor.

Future Prospects

We have used about 110,000 strong [O III]-emitting galaxies to constrain the α variation in Section 3. When the galaxies are grouped into 10 redshift bins, the statistical uncertainties in the ∆α/α measurements are between 2 × 10^−6 and 2 × 10^−5. The uncertainty in the lowest-redshift bin reaches 2 × 10^−6, owing to the largest number of galaxies and the highest average S/N in this bin. The uncertainties in the other redshift ranges are mostly around (0.5−1.0) × 10^−5. On the other hand, we have reached the regime in which our results are actually limited by the systematics related to the wavelength calibration.

There are more systematics that may have affected our results, but their impact should be much weaker than that of the wavelength calibration. Among them, the most important one is probably line shapes. As mentioned earlier, the two doublet lines intrinsically have identical line profiles. For example, if one line shows an asymmetric feature caused by inflows, outflows, or other reasons, the other line will have the same feature. In the observed spectra, however, the two lines could have slightly different shapes due to instrumental effects and telescope optics. In particular, the [O III] λ5007 line is three times stronger than the [O III] λ4959 line and thus has a much higher S/N. The combination of asymmetric line shapes and different S/Ns will likely introduce systematics. Furthermore, line emission from background or foreground objects may contaminate our line shapes. In this work we only use narrow and strong emission lines, and we have removed doublet lines with apparently different line shapes. Much higher-resolution observations in the future are needed to investigate these systematics.

As the DESI survey is ongoing and accumulating more data, we will improve our results as follows.

• We will improve the wavelength calibration by cooperating with the DESI pipeline team. In particular, we will improve the calibration for the wavelength ranges with plenty of sky lines. Currently, there are three steps of wavelength calibration after the initial step: the calibration with arc-lamp lines and with sky lines within the official pipeline, and our own wavelength refinement using sky lines. Our hope is to combine these three steps into one and improve the wavelength calibration.

• We will focus on relatively high-redshift (z ≥ 0.2) galaxies. Although we have a large number of galaxies with high S/N ratios at low redshift, we do not expect to achieve a sufficiently accurate wavelength calibration for our purpose there, due to the lack of strong sky lines. Strong, isolated (at the DESI resolution) sky lines in the "R" arm are also sparse, so we will mainly focus on the redshift range of z ≥ 0.4, particularly on wavelength ranges (redshift slices) with plenty of strong sky lines.

• We will use fainter galaxies. In this pilot study, our galaxies are required to have Q > 5. We have checked a small sample of fainter galaxies with 3 < Q < 5 and FWHM < 250 km s^−1 and found that many of them are quite good. Their spectral quality is generally poorer, meaning larger uncertainties on the ∆α/α measurements. One problem is that their spectral shapes could be severely affected by strong sky lines, so we need to deal with these cases properly. Note that the number of faint galaxies is much higher than that of brighter galaxies, particularly at z > 0.6, which is the redshift range of the DESI ELG targets.
• We will use a much larger sample to achieve a more stringent constraint. With the combination of new DESI data and fainter galaxies, we will significantly increase our sample size at z ≥ 0.4. The current uncertainties of ∆α/α in the ∆z = 0.1 bins at z ≥ 0.4 are about (0.5−1.0) × 10^−5. We hope to improve them to (0.2−0.3) × 10^−5. When all data at z ≥ 0.4 are combined, we expect to reach a better statistical constraint, close to or even comparable to the strongest constraint from previous quasar observations.

• We will consider more emission lines in the next work, particularly the [Ne III] doublet emission lines. The [Ne III] doublet has a wider wavelength separation (nearly 100 Å), so wavelength calibration is also critical for [Ne III] to measure ∆α/α. The [Ne III] emission is generally much weaker than the [O III] emission in galaxies, and thus we do not expect to obtain a stronger constraint on ∆α/α. Compared to [O III], however, [Ne III] allows us to probe higher redshifts (up to z ≈ 1.45).

SUMMARY

We have used a large sample of roughly 110,000 [O III] emission-line galaxies from DESI to constrain the variation of α in space and time. The galaxy sample was drawn from an internal data release. A quality parameter Q was defined to select strong and narrow [O III] emitters, since our primary goal is to precisely measure the wavelengths of the [O III] doublet lines and the wavelength measurement depends on both line strength and line width. In this pilot work, we required Q > 5; this requirement may be relaxed in future work. Each emission line was fitted by a single Gaussian or double Gaussian profile and its wavelength was obtained from the best fit. We used the traditional AD method to measure the relative α variation ∆α/α. A great advantage of [O III] is the wide wavelength separation between the two doublet lines, which makes it very sensitive for measuring ∆α/α. On the other hand, this also makes it sensitive to the (relative) wavelength calibration.

Our galaxy sample spans a redshift range of 0 < z < 0.95, which covers roughly half of all cosmic time. To estimate the variation in α, we grouped the sample into 10 redshift bins with a bin size of ∆z = 0.1. We then calculated ∆α/α for individual bins using two methods, a weighted average and a bootstrap estimate. The two methods provided consistent results. The statistical uncertainties of the 10 measured ∆α/α values are roughly between 2 × 10^−6 and 2 × 10^−5. The measured ∆α/α values show an apparent variation with redshift, with an amplitude of about (2−3) × 10^−5. This variation should be caused by systematics associated with the wavelength calibration: a small wavelength distortion of 0.002−0.003 Å can cause such a variation, and this is beyond the accuracy that the current DESI data can achieve. We have tried to refine the wavelength calibration using sky lines. This can only be done for the small fraction of galaxies that have strong and isolated sky lines around their [O III] lines, so it does not change the above results.

The large galaxy sample with large sky coverage has allowed us to probe the possible variation of α in space. We did this in narrow redshift ranges (Figure 9); if the wavelength calibration has a wavelength dependence, as we suspect above, this approach partially suppresses that dependence. We did not find obvious, large-scale structures in the spatial distribution of ∆α/α in grids of ∼15° × 15°. The distribution is quite random at a level of < 10^−4 in individual redshift ranges.
DESI is still accumulating data, so we expect to have a much larger dataset soon. This work focused on bright galaxies (in terms of the [O III] emission), but we will consider fainter galaxies in the future. We only used [O III] in this work, and we will use more emission lines (e.g., [Ne III]) in the next work. In addition, we plan to improve the wavelength calibration using sky lines. With these improvements, we expect to achieve a better constraint on ∆α/α that is statistically comparable to the best constraints from previous quasar observations.

Figure 1. Power of the [O III] doublet by comparison with [Ne III] (magenta) and two doublet absorption lines, C IV (blue) and Mg II (cyan). The [O III] and [Ne III] lines are from the same low-redshift galaxy. Note that the [Ne III] λ3967 line is blended with Hϵ. We have assumed a resolving power of R = 20,000 for C IV and Mg II for the purpose of clarity. C IV and Mg II are the most abundant (and often the strongest) metal absorption lines in quasar spectra. Compared to [O III], however, they are usually weaker by orders of magnitude.

Figure 2. Redshift distribution of our galaxy sample. Most galaxies are at relatively low redshifts. The highest redshifts in the sample correspond to roughly half the cosmic time.

Figure 3. Example of Gaussian fits to sky lines in a short wavelength range. The diamonds represent data points and the red profiles are the best Gaussian fits. A single Gaussian usually works well. Weak lines or blended lines are not used. Note that OH vibration-rotation lines are doublets owing to Λ-splitting, but the separations of the doublets are small and are not resolved in the DESI spectra.

Figure 4. Example of a single Gaussian fit to [O III] in two galaxies, including a good fit to a narrow [O III] doublet and a poor fit to a broader [O III] doublet (due to its asymmetric line profile). The diamonds represent the data points and the underlying continuum emission has been subtracted. The red and blue profiles are the best Gaussian fits.

Figure 7. Distributions of ∆α/α for the whole galaxy sample in four redshift ranges. Each histogram has been normalized so that the peak value is 1. The distributions are slightly skewed with higher numbers at lower values, which leads to non-zero detections of ∆α/α.

Figure 10. Relative wavelength calibration in two redshift ranges. The wavelength differences shown on the x-axis are the differences between the measured wavelengths and reference values of sky lines in small wavelength ranges (50−100 Å). Each wavelength range covers the [O III] doublet lines of one galaxy. A median difference value for each galaxy has been subtracted from each range. The distributions indicate that the relative calibration is generally good, with an rms of 0.02−0.03 Å.
Figure 11. Effect of line widths on the ∆α/α measurements. The upper panel shows the ∆α/α distributions in two redshift ranges for two galaxy samples with FWHM < 150 km s^−1 (in red) and FWHM > 150 km s^−1 (in blue), respectively. The lower panel shows the redshift evolution of the ∆α/α measurements for the two samples.
12,888.4
2024-04-04T00:00:00.000
[ "Physics" ]
How the formation of interfacial charge causes hysteresis in perovskite solar cells

In this study, we discuss the underlying mechanism of the current–voltage hysteresis in a hybrid lead-halide perovskite solar cell. We have developed a method based on Kelvin probe force microscopy that enables mapping charge redistribution in an operating device upon a voltage or light pulse with sub-millisecond resolution. We observed the formation of a localized interfacial charge at the anode interface, which screened most of the electric field in the cell. The formation of this charge happened within 10 ms after applying a forward voltage to the device. After switching off the forward voltage, however, these interfacial charges were stable for over 500 ms and created a reverse electric field in the cell. This reverse electric field directly explains higher photocurrents during reverse bias scans by electric field-assisted charge carrier extraction. Although we found evidence for the presence of mobile ions in the perovskite layer during the voltage pulse, the corresponding ionic field contributed only less than 10% to the screening. Our observation of a time-dependent ion concentration in the perovskite layer suggests that iodide ions adsorbed and became neutralized at the hole-selective spiro-OMeTAD electrode. We thereby show that instead of the slow migration of mobile ions, the formation and the release of interfacial charges is the dominating factor for current–voltage hysteresis.

Introduction

Herein we report on experiments to visualize the local charge re-distribution in lead halide perovskites that is responsible for current–voltage hysteresis. We found that interface charges localized at the electron-selective contact generated an electric field that modified the charge extraction in the device. Thus, the formation and release of these ionic interface charges, rather than a slow ionic charge transport, determine the time scales for current–voltage hysteresis in perovskite solar cells.

Recently, consensus has been reached that current density–voltage (J-V) hysteresis in perovskite solar cells is connected to ion migration in the perovskite layer. 1 In this picture, space charge layers of mobile ions at the electrodes shield the electric field in the perovskite layer on timescales of milliseconds to seconds. 2 Such a screening of the electric field in the perovskite layer has been observed both at short- and open-circuit conditions. 3,4 This modification of the internal electric field influences the charge extraction efficiency and recombination losses inside the perovskite layer, and the photocurrent becomes dependent on the voltage scan direction. 5,6 However, until now it has been unclear whether the timescales of the ion migration itself are the sole reason for the slow response of the electric field, or whether other factors such as interface dipoles 7-9 or chemical reactions of ionic species at the interfaces 10 play a role, too.

In the commonly used model that explains hysteresis by ion migration, the ionic species are evenly distributed and free to move throughout the entire perovskite film 1,5,6,11,12 (Fig. 1a). Experimentally, the lateral migration of methylammonium and iodide species has been observed in perovskite films, proving that mobile ionic species exist. 13-16 The exact mechanism of the migration process and how it affects the charge extraction perpendicular to the perovskite film, however, remain unclear.
In particular, it is still unclear (i) what the exact mechanism of migration is (hopping via vacancies or via interstitials 17), (ii) where exactly the mobile ions are located (across the film, at grain boundaries, or at interfaces) and (iii) what the migration paths are (bulk transport 15 or grain boundaries, 18 Fig. 1a and b). The simple bulk ion migration model is not sufficient to explain the observation that the incorporation of a suitable material in the electron transport layer (ETL), such as mesoporous TiO2, 19 PCBM, 25 or SnO2, 24 in perovskite solar cells results in almost hysteresis-free devices. In the case of SnO2, the reduction in hysteresis was attributed to a more efficient electron extraction, reducing recombination losses. 24 In the case of PCBM, the ion migration was suggested to be suppressed by passivating interfacial defects. 25 The strong influence of the ETL material demonstrates the crucial role of interfaces for hysteresis. Nevertheless, the exact interplay between ion migration and interface effects remains unexplored.

In order to solve the puzzle, methods that can map the spatial and temporal evolution of the electric field in the device are required. We developed such a time-resolved method based on Kelvin probe force microscopy (KPFM) 3,4,26-31 by decoupling the spatial mapping from the recording of the temporal evolution of the local contact potential difference (CPD). Thereby, we achieved sub-ms KPFM time resolution (Fig. 1c and Methods section). We used this time-resolved (tr-)KPFM to follow the charge re-distribution in a hysteretic methylammonium/formamidinium lead iodide/bromide perovskite solar cell device (details on the composition in the Methods section) upon applying an external voltage or a light pulse. We found that the electric field across the perovskite layer was screened via a combination of a localized positive interface charge at the ETL and a negative ionic space charge across the entire perovskite layer. Our experiments demonstrate that the electric field generated by the surface charge was the dominating factor for screening electric fields in the perovskite layer.

Dynamics with pulsed voltage

For our study, we chose a perovskite solar cell with particularly pronounced hysteresis, as we expected to observe significant effects of internal charge re-arrangement in such a device. To gain access to the cross section, the device was cleaved and a 50 µm wide region was polished with a focused ion beam procedure (see also Methods and ref. 3 and 26). The procedure had only a minor influence on the performance and the hysteresis of the device, with efficiencies of 12.2% (upward scan) and 16.5% (downward scan, Fig. 2a and Fig. S2, ESI†). On the cross section, we identified the different layers of the perovskite solar cell from the scanning force microscopy (SFM) phase signal of the mechanical tip oscillation (Fig. 2b). For more clarity, we have marked the interfaces to the active perovskite layer in the phase image and in all following images with dashed lines. In the CPD map recorded before the voltage pulse, the potential in the perovskite layer was flat, i.e. a built-in potential, which might have been introduced by the difference in work functions of the contact layers, was shielded by the internal charge distribution (Fig. 2c and Fig. S2, ESI†). This CPD distribution represents the equilibrium potential distribution that we refer to as the static CPD.
Using tr-KPFM, we were able to reconstruct CPD maps with a temporal resolution of 0.5 ms (see ESI† and Methods for more details). Movies generated from these KPFM snapshots are provided as ESI.† For the pulsed voltage experiments, we applied a 750 ms long voltage pulse of −0.5 V to the FTO electrode while the gold electrode was grounded. The voltage pulse simulates a forward (0 V → −0.5 V) and backward (−0.5 V → 0 V) voltage scan when applying a positive voltage to the gold cathode during a J-V scan.

Fig. 1. Illustration of the potential (U), electric field (black arrows) and ion distribution for different migration paths of mobile ions: (a) if ions can migrate through the bulk of the perovskite layer, uniform electrostatic double layers (EDLs) will form at either electrode and screen the external device potential. (b) If ions migrate preferentially at grain boundaries, the EDLs and the electric field distribution will be heterogeneous. (c) In time-resolved KPFM, the CPD response to a voltage or light pulse is recorded in a pointwise spectroscopy approach, subsequently at different positions on the sample.

To distinguish between transient and static charge distributions, we subtracted the static CPD (Fig. 2c) from the CPD maps at later times. These transient ΔCPD maps contain information about the charge re-distribution dynamics caused by the external voltage pulse at a given time after applying the pulse. In the ΔCPD map, a potential gradient across the perovskite layer was present 2.5 ms after switching on the external voltage (Fig. 3a). The corresponding section graph of the potential distribution (black curve in Fig. 3b) shows a strong asymmetry: at the spiro-OMeTAD/perovskite interface, the potential bends downwards, becoming flatter towards the perovskite/SnO2 interface. At the interface to the SnO2/FTO electrode, the ΔCPD drops by more than 200 mV within less than 100 nm. Section graphs from the same position recorded at 10 ms and 700 ms after the switching (red and blue curves in Fig. 3b) show that the magnitude of the potential decay across the perovskite layer decreased over time. Now, the majority of the potential (∼400 mV) drops at the perovskite/SnO2 interface, effectively screening the external potential from the perovskite layer. The sharp potential step at the perovskite/SnO2/FTO interface corresponds to a dipolar charge with positive charges on the perovskite side and negative charges on the SnO2/FTO side (red and blue shaded areas at the perovskite/SnO2 interface in Fig. 3b; see the Methods section for a note on how to identify charges from the potential distribution). Comparing the CPD distributions at 10 ms and 700 ms shows that it took hundreds of milliseconds to reach an equilibrium potential distribution across the perovskite layer (Fig. 3b). Relaxation times on millisecond timescales have been attributed to the slow migration of ionic species inside the perovskite layer, 2,5,6,32 suggesting ion migration as the underlying mechanism for the observed CPD dynamics.
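The transient maps discussed above (ΔCPD = CPD(t) minus the static CPD) amount to a per-pixel subtraction over the point-wise transients; a minimal sketch of that step is given below. The array layout, the pre-pulse averaging window, and the function name are our own assumptions for illustration.

```python
import numpy as np

def build_delta_cpd_maps(cpd_traces, t, t_pulse_on):
    """Build transient dCPD(t) maps from point-wise tr-KPFM transients.

    cpd_traces : array of shape (ny, nx, nt), the CPD transient recorded at each grid point
    t          : array of shape (nt,), time axis of the transients
    t_pulse_on : time at which the voltage (or light) pulse starts

    The static CPD of each pixel is the average over the pre-pulse interval;
    dCPD is the deviation from it at every later time step."""
    pre_pulse = t < t_pulse_on
    static_cpd = cpd_traces[..., pre_pulse].mean(axis=-1)       # (ny, nx) static map
    delta_cpd = cpd_traces - static_cpd[..., None]               # (ny, nx, nt) transient maps
    return static_cpd, delta_cpd
```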
Fig. 3 (caption excerpt): During the first 2.5 ms, the local potential decreased both on the FTO and inside the perovskite layer (gray arrows). Afterwards, the potential on the FTO electrode decreased and the potential in the perovskite layer increased (green arrows). The increase was caused by a slow screening of the internal electric field, mainly by positive interface charges at the perovskite/SnO2 interface that were compensated by electrons on the FTO side. Here, regions of positive charge (convex curvature) are shaded in red and negative regions (concave curvature) in blue. (c) To explain the asymmetry in the potential distribution, we propose that positive charges are localized at the perovskite/SnO2 interface. A possible scenario for this localization is the formation of an interlayer with a locally lower activation barrier for iodide/iodide-vacancy formation. Thereby, the migration of iodide into the perovskite is possible (A), but the migration of vacancies is blocked (B).

The dimensions of the imaging area were larger than the typical grain size of ∼100 nm for this solar cell type. Nevertheless, both the potential distribution in the perovskite layer and the surface charge at the perovskite/SnO2 interface showed a homogeneous CPD contrast along the perovskite layer. A CPD contrast could be an indication of faster ion migration along grain boundaries, 16,33 for example (Fig. 1b). The absence of a CPD contrast is therefore an indication of bulk ion transport (Fig. 1a).

To quantify the charge at different positions in the cell, we calculated the charge carrier densities based on the one-dimensional Maxwell equation that connects the electric field, E, with the charge carrier density, ρ:

dE/dx = ρ/(εε0)   (1)

(εε0: vacuum and relative dielectric permittivity, respectively). Assuming that the charges are localized within a thin layer of thickness Δx with a surface charge density σ, we can write the volume charge density as ρ = σ/Δx and obtain for the change in the electric field over Δx:

ΔE = σ/(εε0)   (2)

From the ΔCPD trace 2.5 ms after switching on the voltage, we calculated via linear fits to the potential that the electric field changed from 0.4 kV cm^−1 in the perovskite bulk to 24 kV cm^−1 directly at the perovskite/SnO2 interface. Assuming a homogeneous relative permittivity 34 of ε = 62 across the perovskite layer, we obtain a positive charge density of 1.1 × 10^−7 C cm^−2, corresponding to a capacitance of ∼220 nF cm^−2. Towards the end of the voltage pulse, the interface charge further increased and finally reached 2.6 × 10^−7 C cm^−2 (∼500 nF cm^−2). However, impedance spectroscopy studies have found higher capacitances 9,23,35 on the order of µF cm^−2 to mF cm^−2. These higher capacitances indicate that the surface charges could be confined to an even thinner region of a couple of nanometers, which is currently below the lateral resolution of KPFM.

Inside the perovskite layer, the potential decayed exponentially towards the FTO side with a decay length of (161 ± 15) nm, corresponding to a negative space charge density (dashed gray fit in Fig. 3b). We assume that this charge is caused by ionic space charge. To estimate the ion concentration, we assume that the mobile ions in the perovskite form an electrostatic double layer (EDL) towards the spiro-OMeTAD interface, similar to a solid-state electrolyte. 6,11,32 The screening length of the EDL in an electrolyte with charge carrier concentration c0 and dielectric permittivity ε can be estimated by the Debye screening length: 36

λD = sqrt(εε0 kB T / (2 e^2 c0))   (3)

(e: elemental charge; kB T: thermal energy at temperature T). The only free parameter in eqn (3) is the charge carrier concentration c0. Using a dielectric permittivity 34 of ε = 62, we calculate the ion concentration in the perovskite layer as (1.7 ± 0.3) × 10^15 cm^−3. This value is four orders of magnitude lower than the ion concentrations of around 10^19 cm^−3 calculated from measurements of activation energies for defect formation. 32,37
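The quoted numbers can be reproduced from eqns (2) and (3), and the same inputs also give the total ionic charge discussed next. The short check below is our own illustration (it assumes T ≈ 300 K and a symmetric 1:1 electrolyte for the Debye length), not the authors' analysis code.

```python
import numpy as np

EPS0 = 8.854e-12          # vacuum permittivity, F/m
E_CHARGE = 1.602e-19      # elementary charge, C
KB_T = 1.381e-23 * 300    # thermal energy at ~300 K, J (temperature assumed)

eps_r = 62                # relative permittivity of the perovskite layer (ref. 34)

# Interface charge from eqn (2): sigma = eps*eps0*dE, with the field change taken from the
# linear fits (0.4 kV/cm in the bulk vs. 24 kV/cm at the perovskite/SnO2 interface).
dE = (24e3 - 0.4e3) * 1e2                  # kV/cm -> V/m
sigma = eps_r * EPS0 * dE                  # C/m^2
print(sigma * 1e-4)                        # ~1.3e-7 C/cm^2, close to the quoted 1.1e-7 C/cm^2
print(sigma * 1e-4 / 0.5 * 1e9)            # ~260 nF/cm^2 at 0.5 V (text: ~220 nF/cm^2)

# Ion concentration from the Debye length, eqn (3): c0 = eps*eps0*kB*T / (2 e^2 lambda_D^2)
lambda_D = 161e-9                          # measured decay length, m
c0 = eps_r * EPS0 * KB_T / (2 * E_CHARGE**2 * lambda_D**2)
print(c0 * 1e-6)                           # ~1.7e15 cm^-3

# Total ionic charge in a ~400 nm thick perovskite layer
print(E_CHARGE * c0 * 400e-9 * 1e-4)       # ~1.1e-8 C/cm^2
```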
Using our results, the total ionic charge in a ∼400 nm thick perovskite layer is (1.1 ± 0.2) × 10^−8 C cm^−2. Thus, the amount of negative ionic charge in the perovskite layer only accounts for ∼10% of the localized positive charge at the perovskite/SnO2 interface. This discrepancy shows that additional processes take place that either increase the positive interface charge or neutralize part of the negative space charge.

The commonly used model for ion migration in perovskite solar cells assumes that mobile ions and ion vacancies are evenly distributed across the perovskite layer. Our observations of a purely negative diffuse space charge in the perovskite and a highly localized positive surface charge at the perovskite/SnO2 interface clearly contradict this solid-state electrolyte model. Here, we would expect the EDLs to form on either side of the perovskite layer. 5,6,32 Due to symmetry, the EDLs should, once equilibrated, generate potential steps of the same magnitude and with the same screening length at either side of the perovskite layer. 11,32 However, what we observed here and in our previous KPFM cross section study 3 is a strong potential drop at the perovskite/ETL interface and a much weaker potential screening towards the perovskite/HTL interface. This asymmetric screening indicates that different screening mechanisms act at these two interfaces. Richardson et al. have found an asymmetric potential distribution due to a complete depletion of the mobile ionic charge in a 100 nm wide region at one interface and a high concentration of ions at the other interface. 12 In such a depletion region, the potential would deviate from the exponential behavior of an EDL. In our experiments, we did not observe such deviations over the duration of the voltage pulse (Fig. S3, ESI†).

As a model to explain the observed voltage transients and potential distributions, we suggest that free iodide/iodide-vacancy pairs are generated upon application of an electric field in a thin interlayer, thinner than the lateral KPFM resolution of 50 nm, close to the perovskite/SnO2 interface (Fig. 3c). The iodide species generated here are free to move across the entire perovskite layer, whereas the vacancies remain localized at the perovskite/SnO2 interface (A in Fig. 3c). The assumption of immobile positive ions is necessary to keep the positive charges localized at the perovskite/SnO2 interface. Here, we can only speculate about the physical reason for the formation of the interlayer: one scenario could be a field-induced destabilization of the perovskite crystal structure close to the interface. Thereby, the activation energy for the formation of an iodide defect could be locally lowered. 17 Such spatial variations in the activation energies could also explain the differences in the ion densities between our estimate and the literature values, 32,37 as macroscopic measurements cannot account for spatial variations in the activation energy. The interfacial iodide vacancies would be immobile because they cannot be filled by iodide from outside the surface region (B in Fig. 3c). In our model, iodide would therefore migrate via interstitial sites/Frenkel defects. One explanation for the surplus of surface charge at the perovskite/ETL interface would be that this interlayer of destabilized perovskite is strongly polarizable, inducing the formation of a surface dipole (higher local ε).
In this case, the polarization at the interface would follow almost instantaneously any change in the external voltage. To test whether this is the case, we next analysed the CPD dynamics after the voltage pulse. Upon switching off the external voltage, the ΔCPD jumped within 6 ms to a positive value inside the perovskite layer (Fig. 4a, t = 6 ms). The distribution of this overshoot voltage is again homogeneous along the perovskite layer, and we did not observe strong fluctuations in the ΔCPD signal, e.g. caused by grain boundaries. The section graphs in Fig. 4b reveal that the ΔCPD increases linearly from the spiro-OMeTAD side towards the SnO2 side. During the voltage decay, up to 500 ms after switching off the external voltage, the potential remained linear within the perovskite layer. Such a linear ΔCPD distribution corresponds to a homogeneous electric field inside the perovskite layer with no space charges in between the electrodes. The electric field is generated by negative charges localized in a 50 nm wide region at the spiro-OMeTAD/perovskite interface and a dipolar surface charge in a 170 nm wide region at the perovskite/SnO2 interface, respectively (red and red/blue shaded areas in Fig. 4b).

To follow the decay of the electric field over time, we analysed the ΔCPD maps over 500 ms after the voltage pulse (Fig. 4b). The section graphs show that although the magnitude of the positive ΔCPD decreased, the overall triangular shape of the section curve remained unchanged. Over the measurement time, the space charge regions remained confined to regions at the selective contacts. To obtain the timescale of the CPD decay after the switching, we analysed a local CPD transient curve recorded close to the perovskite/SnO2 interface. We fitted the decay with an exponential curve with a decay time of (125 ± 1.4) ms (Fig. 4c). This timescale for the surface charge relaxation is a factor of 40 slower than the formation time. This stabilization indicates that the surface charge does not simply originate from a region of increased polarizability. Rather, the charge is stabilized directly at the perovskite/SnO2 interface, e.g. by the molecular interface structure 38 or chemical reactions. 10,13 The absence of negative space charges in the perovskite layer suggests that the dynamics of the mobile ions are much faster than we could resolve in our measurements and thus play no role in the observed decay dynamics. This is a further confirmation of our earlier conclusion that the observed ion migration and the surface charge are two separate phenomena.
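The decay-time extraction is a standard single-exponential fit to the local CPD transient; a sketch is given below, with scipy's curve_fit used as a generic fitter and the initial guesses and names chosen as assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, amplitude, tau, offset):
    """Single-exponential relaxation of the local CPD after the pulse."""
    return amplitude * np.exp(-t / tau) + offset

def fit_decay_time(t_ms, cpd_mv):
    """Fit the CPD transient recorded near the perovskite/SnO2 interface after switching
    off the voltage; returns the decay time and its 1-sigma error in ms."""
    t_ms, cpd_mv = np.asarray(t_ms), np.asarray(cpd_mv)
    p0 = (cpd_mv[0] - cpd_mv[-1], 100.0, cpd_mv[-1])   # rough initial guesses
    popt, pcov = curve_fit(exp_decay, t_ms, cpd_mv, p0=p0)
    tau, tau_err = popt[1], np.sqrt(pcov[1, 1])
    return tau, tau_err                                # the text reports (125 +/- 1.4) ms
```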
Model for hysteresis

The results on the dynamics of the internal electric field so far can directly explain the strong hysteresis in the investigated device: during the upward scan of the voltage, the electric field in the device drives electrons and holes to the opposite electrode and the extraction efficiency is lower (Fig. 5a and b). During the relaxation of the electric field in the perovskite, we propose that ions adsorb at the spiro-OMeTAD/perovskite interface and "freeze" the charge distribution (Fig. 5c). During the downward scan, the strong reverse field induced by the stabilized charge distribution supports the charge extraction and a higher photocurrent is measured (Fig. 5d). In addition to the influence on the electronic drift, the presence of interfacial charges could also influence the charge extraction via energetic barriers or recombination at the interfaces. 5,39,40

To search for a possible stabilization mechanism of the charge distribution after switching off the external voltage, we further analysed the dynamics of the negative space charges in the perovskite layer after switching on the voltage. Here, we found that the EDL decay lengths varied for different times after the voltage pulse. In particular, the decay length reached a minimum of (128 ± 13) nm at 5.5 ms and recovered to a value of ∼200 nm at 10 ms after switching on the voltage (Fig. 6a). Naively, we could assign the variation in the screening length to a non-equilibrium ion distribution after a change in the electrode voltage. However, in numerical simulations of the ion dynamics upon a change in the applied voltage, the screening length remained constant. 41 We can therefore conclude that the observed ion distributions are in a quasi-equilibrium at all measuring times. Using eqn (3), we again calculated the time-dependent ion concentration inside the perovskite layer from the measured screening length. Starting from a value of (1.2 ± 0.5) × 10^15 cm^−3 (0.5 ms), the charge density increased to (2.7 ± 0.6) × 10^15 cm^−3 at 5.5 ms after the switching, and decreased back to 0.3−1 × 10^15 cm^−3 within the following 10−20 ms (Fig. 6b). We observed that all potential curves overlap at a point ∼50 nm away from the spiro-OMeTAD layer (grey horizontal bar in Fig. 6a). The formation of a potential elbow at this point indicates a higher charge density, e.g. connected to the formation of a Helmholtz layer of localized ions at the interface to the spiro-OMeTAD (Fig. 5c).

Contreras et al. 42 have suggested that a specific interaction between the methylammonium cation and the TiO2 interface is responsible for slow (0.1−10 Hz) dynamics in perovskite solar cells. From impedance spectroscopy and bias-dependent capacitance measurements, they concluded that methylammonium cations accumulate at the TiO2 interface. Although we found an accumulation of positive charge at the anode interface as well, our observation of a negative space charge in the perovskite, together with the strong localization of the positive charge, contradicts this interpretation. Furthermore, Contreras et al. found reduced slow dynamics in mixed methylammonium/formamidinium (60/40) lead iodide devices. Our devices were made with a (1/5) methylammonium/formamidinium ratio, making a specific cation interaction of methylammonium as the sole reason for hysteresis unlikely.

Carrillo et al. have discussed possible chemical reactions of mobile iodide with TiO2 and spiro-OMeTAD in the context of hysteresis and device degradation. 10 They suggested that weak Ti-I-Pb bonds could "facilitate interfacial accommodation of moving iodide ions". In this model, the release of iodide from the interfacial layer should be slower, as it is an endothermic process. However, we observed that the formation of positive charge at the anode interface (corresponding to the release of iodide) was much faster than the reverse process. Here, a reversible chemical reaction or complexation of the iodide with positively charged spiro-OMeTAD (Fig. 5c, also suggested by Carrillo et al. 10) could explain both the observed time-dependent negative charge density and the asymmetry in the charge distribution with a surplus of positive charges at the perovskite/SnO2 interface. Nevertheless, hysteresis has been observed in devices without any hole transport layer, 43 suggesting that the observed effects are not specific to spiro-OMeTAD.
Recent KPFM results suggest that the position of this interface charge, i.e. at the anode or cathode interface, can be influenced by the composition of the perovskite, namely by the iodide excess in the precursor solution, 4 suggesting that the interface charge is located within and influenced by the perovskite crystal. Independent of the stabilization mechanism, the slow release of bound ions from the perovskite/hole transporter interface explains the slower relaxation dynamics upon switching off the external voltage. The strong electric field will make the ion transport across the perovskite layer very fast, explaining the absence of space charge (Fig. 5d).

Dynamics with pulsed illumination

To test whether similar effects are present in an illuminated device, we performed experiments with light pulses generated by a white light source. The device was set to open circuit conditions with the gold cathode grounded and the FTO anode floating. This situation simulates the so-called light-soaking conditions, when the device is illuminated under open circuit for a couple of minutes before recording a J-V curve. Upon switching on the light, the electrode potential on the FTO jumped to ∼−170 mV within ∼2 ms and slowly decreased over the following three seconds by another ∼150 mV (see ESI†). In the ΔCPD map 2.5 ms after the switching, we again found a strong interface charge at the perovskite/SnO2 interface. However, almost no potential gradient in the ∼400 nm thick perovskite layer could be observed (Fig. 7a). Only when averaging over the entire ΔCPD map (Fig. 7b) did we find a weak electric field of ∼870 V cm^−1 over the perovskite layer. At later stages, the potential in the perovskite layer became flat again, with a negative offset of 30−40 mV (orange curve in Fig. 7b), comparable to the offset that we observed in the experiments with pulsed voltage (Fig. 6a). Possible reasons for the observation of a much weaker electric field and the absence of curvature in the potential are the higher noise level due to intensity fluctuations in the light source or the relatively slow switching time of ∼10 ms of its mechanical shutter. Moreover, the screening of the field could be stronger due to a higher photo-induced ion concentration 44 or due to photo-excited electronic carriers, although we would expect symmetric screening at both electrodes in this case. The observation of a ΔCPD offset within the perovskite layer at later times indicates that negative ionic charges accumulated at the spiro-OMeTAD/perovskite interface (Fig. 5c). The absence of an exponentially decaying potential in the perovskite layer indicates again that mobile ions play a minor role in the electric field screening compared to the perovskite/SnO2 interface charge.

After switching off the illumination, we observed a similar ΔCPD distribution (Fig. 7c) and similar dynamics (Fig. 7d) as after the voltage pulse (Fig. 4). The maximum ΔCPD values at the perovskite/SnO2 interface were ∼180 mV, which corresponds to ∼60% of the externally measured photovoltage. After the external voltage pulse, we had observed a maximum ΔCPD signal of ∼300 mV at an external voltage of 500 mV, which corresponds to roughly the same voltage ratio of 60%. The observation of an identical charge decay behavior suggests that the same charge re-distribution processes occurred during the illumination and during the application of the external voltage.
Conclusion/summary

With tr-KPFM, we established a method that is able to map and track the potential distribution in perovskite solar cells with sub-ms resolution. Our results demonstrate that current–voltage hysteresis in perovskite solar cells is dominated by the dynamics of the formation and release of ionic charges at the interfaces. This interface charge formed within ∼3 ms after switching on an electric field across the device and decayed 40 times more slowly than it formed. This asymmetry in the kinetics indicates that the surface charges are stabilized at the interfaces of the perovskite towards the electrodes. One possible scenario for a stabilization of the negative charge at the cathode interface would be a chemical binding or complexation of ions at the interfaces. 10,14 Although we found that the interface charges clearly dominated the electric field dynamics, we measured an additional negative ionic space charge in the perovskite layer upon switching on the electric field. The absence of strong contrast in the direction parallel to the electrodes shows that the observed ion migration was not localized at grain boundaries. The asymmetric potential distribution suggests that a thin layer of destabilized perovskite exists at the interface to SnO2, where mobile iodide and immobile iodide vacancy pairs are generated (Fig. 5b). Furthermore, we found a time-dependent ion concentration that indicates chemical binding of iodide at the spiro-OMeTAD interface (Fig. 5c). Our observations underpin the importance of the interfaces for the device behaviour of a perovskite solar cell. We have demonstrated that tr-KPFM can provide a comprehensive understanding of the exact role that ion migration plays in hysteresis.

Methods

Planar perovskite solar cells with mixed formamidinium/methylammonium cations and mixed Br/I halides were prepared using an antisolvent method.

Perovskite deposition. The perovskite solution was spin coated in a two-step program (10 s at 1000 rpm and 20 s at 6000 rpm). 20 s prior to the end of the second step, 100 µL of chlorobenzene was poured onto the spinning substrate. After spin coating, the substrates were annealed at 100 °C for 1 h in a nitrogen glove box.

Cross section preparation. The cells were cleaved and polished with a focused ion beam (FEI Nova 600 Nanolab). 3,26 Thereby, the effect of the focused ion beam polishing is minimized by using a 2 µm thick protective layer of platinum and by milling deep into the substrate. In an earlier study using this preparation method, no traces of Ga were found in the active layers of the solar cell. 26 Furthermore, the efficiency and hysteretic behaviour (measured with an ABET Class ABA solar simulator and a Keithley 2100 source meter) remained almost unchanged (Fig. S1, ESI†). To further minimize the effect of ion contamination, we only investigated changes in the potential distribution, where any influence on the static CPD, i.e. changes in the work function due to ion doping etc., is subtracted. Nevertheless, cleaving the cell and exposing the cross section creates an additional grain boundary, which could have an influence on the charging dynamics. Such an influence, however, should be homogeneous along the cross section, because the impact of a grain boundary or residual ions should be the same at the top and at the bottom of the cell. What we observed, however, was an asymmetric charging with stronger ΔCPD signals towards the perovskite/SnO2 interface.
While we cannot entirely exclude an influence of the cross section, these considerations assure us that the observed effects represent the charging dynamics in the bulk of the solar cell.

Time-resolved KPFM

The tr-KPFM measurements were conducted on an MFP3D scanning force microscope (Asylum Research/Oxford Instruments) in a glovebox filled with dry nitrogen. We used a PtIr coated cantilever (Bruker SCM-PIT-V2) with a resonance frequency of 75.9 kHz, a spring constant of 3.6 N m^−1 and a nominal tip radius of 25 nm. The KPFM detection was done in frequency modulation (FM) mode on an external HF2 Lock-In amplifier (Zurich Instruments) at a drive amplitude of 2 V using the so-called heterodyne KPFM mode. This KPFM mode enables higher detection bandwidths with a better signal-to-noise ratio and thereby faster measurements. 45 As electrostatic forces are long-ranged, the resolution of standard amplitude-modulated (AM) KPFM is prone to crosstalk from adjacent structures with different surface potentials. 46 In comparison, FM KPFM offers quantitative surface potential measurements with a lateral resolution of 10−50 nm.

To completely decouple the measurement of the charging dynamics from the scanning motion, we implemented time-resolved KPFM (tr-KPFM), where the local CPD response to a light or voltage pulse is recorded sequentially at each position of a predefined grid across the cross section. In particular, the tip was approached until the amplitude decreased to a setpoint of 87% of the free amplitude of 21.4 nm. During this "surface dwell", the vertical position was regulated by a feedback loop to keep the amplitude constant ("tapping mode"). From the simultaneously recorded z-position of the tip and the phase lag of the cantilever oscillation, a map of average height (ESI†) and average phase can be calculated (Fig. 2b). The CPD value recorded in the first 100 ms before the voltage pulse represents the static CPD, i.e. the CPD at an equilibrium ion distribution (Fig. 2c). The topography and static CPD distribution obtained from tr-KPFM are identical to conventional KPFM mapping (Fig. S2, ESI†), demonstrating that both methods provide the same information. For the present study, the time resolution was limited by the sampling rate of the SFM controller (2 kHz). Using better sampling and more sophisticated KPFM methods, the temporal resolution can easily be extended into the µs range. 29,47

Interpretation of tr-KPFM maps

To translate the potential distribution U(x) into a charge carrier distribution, ρ(x), we can use the 1D Poisson equation (εε0 is the dielectric constant of the perovskite):

ρ(x) = −εε0 d^2U(x)/dx^2

Essentially, the Poisson equation states that a concave CPD curve corresponds to a negative and a convex CPD curve corresponds to a positive charge density. The corresponding positive and negative space charge regions are marked in red and blue, respectively, in the ΔCPD maps throughout this paper.

Conflicts of interest

There are no conflicts to declare.
7,610.4
2018-09-12T00:00:00.000
[ "Physics" ]
Survival Comparison between Melanoma Patients Treated with Patient-Specific Dendritic Cell Vaccines and Other Immunotherapies Based on Extent of Disease at the Time of Treatment

Encouraging survival was observed in single-arm and randomized phase 2 trials of patient-specific dendritic cell vaccines presenting autologous tumor antigens from autologous cancer cells that were derived from surgically resected metastases whose cells were self-renewing in vitro. Based on the most advanced clinical stage and extent of tumor at the time of treatment, survival was best in patients classified as recurrent stage 3 without measurable disease. Next best was in stage 4 without measurable disease, and the worst survival was for measurable stage 4 disease. In this study, the survival of these patients was compared to the best contemporary controls that were gleaned from the clinical trial literature. The most comparable controls typically were from clinical trials testing other immunotherapy approaches. Even though contemporary controls typically had better prognostic features, median and/or long-term survival was consistently better in patients treated with this dendritic cell vaccine, except when compared to anti-programmed death molecule 1 (anti-PD-1). The clinical benefit of this patient-specific vaccine appears superior to a number of other immunotherapy approaches, but it is more complex to deliver than anti-PD-1 while equally effective. However, there is a strong rationale for combining such a product with anti-PD-1 in the treatment of patients with metastatic melanoma.

Introduction

The introduction of monoclonal antibody checkpoint inhibitors, especially the anti-programmed death molecule-1 (anti-PD-1) agents nivolumab and pembrolizumab, and anti-BRAF/MEK agents for patients with BRAF mutations, has revolutionized the treatment of metastatic melanoma. Anti-PD-1 agents have become the treatment of choice for the primary treatment of distant metastatic melanoma, and for the adjuvant treatment of high-risk surgically resected stage 3 and stage 4 melanoma, due to their curative potential [1]. However, there remains an unmet need because long-term disease control is still achieved in only a minority of patients. For this reason, there is a need for additional therapies, especially those that may be additive or synergistic with anti-PD-1 therapy without added toxicity [2-4]. In terms of mechanism of action, monoclonal antibodies to PD-1 and monoclonal antibodies to programmed death molecule ligand (PDL-1) remove the enervating effects that result from the intercellular interaction of PD-1 and PDL-1 on cytotoxic T lymphocytes and other immune cells, thereby releasing suppressed immune responses that already existed in the host. In contrast, the mechanism of action of therapeutic vaccines is to induce new immune responses to tumor antigens, or to enhance weak existing immune responses to such antigens. For more than two decades, there has been great interest in the potential therapeutic application of dendritic cell vaccines (DCV) for patients with metastatic melanoma [5-8]. There are some commonly used approaches for generating dendritic cells from the peripheral blood and cryopreserving them [7-10], but there is tremendous variation in the sources of antigens for DCV [8,11-13].
Various investigators have consistently reported that such vaccines are well-tolerated and associated with desired antigen-specific immune responses, but rarely associated with significant clinical benefit [3,7,8]. Some of the most encouraging clinical results have been reported for a DCV consisting of autologous dendritic cells (DC) that were loaded with autologous tumor antigens (ATA) from autologous tumor cells that were self-renewing in tissue culture, and administered in granulocyte-macrophage colony stimulating factor (GM-CSF) [14][15][16][17][18]. Unlike most clinical investigations of DCV, the clinical trials with this DC-ATA vaccine have been associated with survival benefit. In a 54-patient single-arm phase 2 trial, the projected five-year survival was 54% at a time when median follow up was 4.5 years [16], and the eventual actual observed five-year survival was 50% with no patients lost to follow up. In a subsequent randomized phase 2 trial, the DC-ATA was superior to an irradiated autologous tumor cell vaccine that was also admixed with GM-CSF [17]. Long-term follow-up confirmed a doubling of median survival from 20.5 to 43.4 months, a higher observed survival rate at three years of 61% vs. 25%, and a 70% reduction in the risk of death [18]. Two of the major differences between these trials and most cancer vaccine trials is that the starting point for the preparation of the vaccine was surgical resection of tumor, and a short-term cell line had to be established as the source of ATA. Patients were typically referred for possible vaccine because they had surgically-resectable regionally recurrent stage 3 or distant stage 4 oligometastatic disease, or they were undergoing resection of a metastatic lesion for diagnostic or palliative reasons [19]. It was often several months later before the autologous cell line was available, and/or the patient was referred by their managing physician for treatment. As a result, in the interval from tumor collection to referral for treatment, many patients experienced recurrence and/or progression of disease, while others remained disease free, or they were rendered free of measurable metastatic disease by stereotactic radiation to the brain or resection of new metastases, or in rare cases by combination chemotherapy and biotherapy [19]. Survival varied depending on patients' disease status at the time DC-ATA was initiated. The survival curves for each of these cohorts and their clinical characteristics were recently published [19]. Patients whose most advanced stage of melanoma was recurrent stage 3 and had no measurable disease at the time of DC-ATA treatment had a 72% survival rate at five years; patients whose most advanced stage of melanoma was stage 4, but had no measurable disease at the time of DC-ATA treatment had a 53% five-year survival; patients with measurable stage 4 disease had a median survival of 18.5 months and a two-year survival of 46% [19]. In the current study, efforts were made to identify comparable control groups from the literature for the purpose of survival comparisons between patients that were treated with DC-ATA and patients treated with other forms of immunotherapy. Melanoma Patients Treated with Patient-Specific Dendritic Cell Vaccines The DCV-treated patients had metastatic melanoma and were enrolled in either a single-arm phase II trial (clinicaltrials.gov NCT00436930), or a randomized phase II trial (clinicaltrials.gov NCT00948480). 
Both clinical trials were conducted per the principles outlined in the Declaration of Helsinki and were approved by appropriate institutional review boards including the Hoag Hospital Institutional Review Committee for the Protection of Human Subjects (first approved 12 January 2000), the U.S. Food and Drug Administration (FDA; BBIND 8554), and reviewers for the National Cancer Institute's Physician Data Query (clinical trial number: NCI-V01-1646) and the Western Institutional Review Board (Seattle, WA.) (first approved 01 February 2006, WIRB ® Protocol #20090753). All patients provided written informed consent prior to treatment. Details regarding these trials and the 72-DCV-treated patients were previously published [16][17][18][19]. Patient-Specific Dendritic Cell Vaccines The manufacturing of the DCV-ATA product was previously described in detail [16,18,19]. Briefly, DC were derived from peripheral blood mononuclear cells that were obtained during a leukapheresis procedure and then cultured in interleukin-4 and GM-CSF. DC-ATA consisted of autologous DC that had been co-cultured with irradiated self-renewing autologous cancer cells for phagocytosis and loading of ATA. The cancer cells were grown in short-term cell cultures and they had characteristics and features of tumor initiating cells, including cancer stem cells and progenitor cells [11,20]. DC-ATA were suspended in GM-CSF at the time of subcutaneous injections intended for weeks 1, 2, 3, 8, 12, 16, 20, and 24. Comparator Populations of Melanoma Patients Treated with Immunotherapy The identification of comparable patient cohorts was achieved by a review of clinical trials in clinical trials.gov and articles that were identified in PubMed. An emphasis was placed on randomized trials that tested immunotherapy treatments. Preference was for studies conducted during 2000-2011, to coincide with the time when the patients of interest were treated with DCV. Statistical Methods There were no comparative statistical tests performed on the available data, because there were no analyses that were appropriate to perform. Stage 3 with No Measurable Disease All of the stage 3 patients that were treated with DC-ATA had recurrent stage 3 disease, and many had recurred despite prior therapy. A comparable control group could not be identified from other trials, but some trials were limited to patients with stage 3 disease that had recently been eliminated by surgery. The comparative results for patients whose most advanced stage was 3 and had no measurable disease at the time of immunotherapy treatment, is shown in Table 1. One of the trials for comparison randomized 1,160 patients to either an allogeneic tumor cell vaccine + Bacillus Calmette-Guerin (BCG) that was called Canvaxin, or to BCG alone [21]. The majority of patients had primary microscopic stage 3, N1a and N2a microscopic disease involving small numbers of lymph nodes. In contrast, the DC-ATA-treated patients had recurrent, clinical stage 3, visible/palpable disease that was resected, N1b, N2b, N2c, or N3, which is associated with a worse prognosis [22]. The ECOG 1684 trial of interferon alpha was conducted in 280 patients who had primary stage 3 melanoma or recurrent stage 3 melanoma [23]; this trial was the basis for approval of that immune cytokine as the standard treatment of such patients in 1995. 
This trial was not limited to patients with stage 3 disease; it also included stage 2 patients with deep melanomas (>4 mm) and negative lymph nodes, in addition to patients with microscopic or clinically evident primary stage 3 melanoma, and recurrent stage 3 melanoma patients who had not received prior systemic treatment. Subsequent trials of adjuvant interferon included even higher proportions of patients with stage 2 disease and decreased proportions with stage 3 or recurrent stage 3 [24,25]; so, these trials were not included for comparison. Even though staging suggested that the comparator groups had a better prognosis, the rate of five-year survival was higher for the patients treated with DC-ATA. Table 2 shows the comparative results for patients whose most advanced stage was stage 4 but had no measurable disease at the time of immunotherapy treatment. In one report, data were pooled from five separate trials that were conducted at one institution [26]. These patients were comparable to the DC-ATA-treated patients in terms of the inclusion of patients who had been rendered surgically free of disease for stages M1a, M1b, and M1c, and both trials included some patients who had been treated for brain metastases, but all of the patients had HLA A-2 disease. Antigens in these vaccines included MAGE-3, MART-1, gp100, and tyrosinase. A few of these patients also received anti-CTLA-4. The other comparators were from arms of the Canvaxin trials for patients with resected stage 4 melanoma [21,27]. The design was similar to the trial for patients with resected stage 3 disease, in that the randomized trial tested the allogeneic tumor cell vaccine + BCG versus BCG. This trial was not restricted by HLA type. As shown in Table 2, even though the patients treated in the Canvaxin trial had less advanced disease, median survival was longer, and the percentage surviving more than five years was higher for the patients that were treated with DC-ATA. Stage 3 or 4 with No Measurable Disease There was one large placebo-controlled randomized trial that explored the use of HLA-2 restricted peptides as a vaccine, with or without GM-CSF in patients who had been rendered free of disease by surgery [28]. One of the advantages of a patient-specific vaccine that utilizes autologous tumor antigens is that one does not have to rely on models that predict which HLA types will or will not recognize a specific antigen. An autologous vaccine is applicable to each individual patient, while an HLA-specific peptide vaccine is limited to patients of a specific HLA type. The results for these stage 3 and 4 patients were reported as a single pooled cohort. Table 3 shows the comparative results for patients whose most advanced stage was stage 3 or 4 and had no measurable disease at the time of immunotherapy treatment. The comparator study tested vaccines that included the HLA-2-restricted peptides MAGE-3, MART-1, gp100, and tyrosinase, which were given with or without GM-CSF; patients who were HLA-A2 negative were randomized to GM-CSF versus placebo [28]. Patients in the comparator arms had better prognostic features when compared to patients that were treated with DC-ATA [22]. About 60% of patients in the comparator studies had stage 3 disease, but this was mostly primary microscopic stage 3: N1a and N2a disease involving small numbers of lymph nodes. 
In contrast, among DC-ATA-treated patients, only 38% had stage 3, and all had recurrent clinical stage 3 disease rather than primary stage 3, and they all had visible and/or palpable N1b, N2b, N2c, or N3 disease. Furthermore, the majority of patients treated with DC-ATA had experienced stage 4 disease, while the majority of patients treated in the ECOG trial had stage 3 disease. Despite this, the rate of five-year survival was higher for patients that were treated with DC-ATA.
Table 3. Survival associated with immunotherapies for patients with prior stage 3 or 4 melanoma, but no measurable disease at the time of treatment.
Stage 3 or 4 Non-Measurable | DC-ATA Vaccine [19] | Multiple Peptides [28] | GM-CSF [28] | Placebo [28]
Median OS | >60 mos | >60 mos | >60 mos | >60 mos
5-yr OS | 60% | 54% | 52% | 51%
OS = overall survival; GM-CSF = granulocyte macrophage colony stimulating factor; mos = months; DC-ATA = dendritic cell-autologous tumor antigens.
Table 4 shows the comparative results for patients whose most advanced stage was 4 and had measurable disease at the time of immunotherapy treatment. The survival data for interleukin-2 (IL-2) are from a randomized trial, in which all patients had to be healthy enough to receive IL-2, and were limited to patients who were HLA*A0201 positive, because the trial was testing the benefit of adding the gp100 peptide vaccine to IL-2 [29]. The survival data for anti-CTLA-4 come from a report that pooled data from several trials conducted in previously treated patients [30]. The nivolumab anti-PD-1 data are from a phase II trial that tested the anti-PD-1 nivolumab in patients with metastatic melanoma who had previously been treated with one to five systemic treatments [31]. This patient population was similar to those treated with DC-ATA. There were no comparable data for pembrolizumab in a pretreated patient population. IL-2 = interleukin-2; CTLA-4 = cytotoxic T lymphocyte antigen-4; PD-1 = programmed death protein-1. Results for Immunotherapies Recently Approved for the Treatment of Melanoma There were no survival comparisons among patients with minimal or no prior treatment for metastatic melanoma because DC-ATA was only administered to patients who had failed previous systemic therapies. However, in the past few years, numerous trials were conducted that tested anti-PD-1 in previously untreated or minimally treated melanoma patients, rather than heavily treated patients, and anti-PD-1 agents have become the treatment of choice for first-line treatment. The results from these trials, as summarized in Table 5, demonstrate that, even in the first-line setting, objective tumor response and long-term survival are still not being experienced by most patients who have distant metastatic melanoma at the time of treatment. In the KEYNOTE-001 trial, the anti-PD-1 pembrolizumab was tested in 655 patients, 496 of whom had been previously treated (the breakdown of the numbers of patients who had 1, 2, or ≥3 prior treatments is shown in Table 5) [32]. One article reported the response rates for all patients, regardless of whether they had measurable disease [32], while the other report focused on response only for patients who had measurable disease [33]. They reported results for the treatment-naïve subset and for the whole group, but did not report specific data for the subset of patients who had been previously treated.
However, based on the data provided in another analysis of this trial [33], one can determine that there were 134 objective responders among 448 previously treated patients with measurable disease; accordingly, the response rate among previously treated patients was 134/448 = 29.9% (30%). Unfortunately, the specific progression-free and overall survival data for this subset were not included in any of these reports. In the CheckMate 066 trial, nivolumab was compared to chemotherapy [34]. In the KEYNOTE-006 trial, pembrolizumab was compared to the anti-CTLA-4 ipilimumab [35,36]. In the CheckMate 067 trial, nivolumab alone and nivolumab plus ipilimumab were both tested [37]. These studies excluded patients who had brain metastases, but patients with treated brain metastases were included in the DC-ATA trials. The results for anti-PD-1 in patients with surgically resected stage 3 or stage 4 were not included in Tables 1-3 because of the limited long-term follow-up data for such patients. However, there have been two large trials of adjuvant anti-PD-1 in high-risk melanoma. Table 6 summarizes the results from these trials. In the CheckMate 238 trial, nivolumab was compared to ipilimumab in patients with resected stage 3B, 3C, or stage 4 disease [37]. The other trial compared pembrolizumab to placebo in patients with resected stage 3A, 3B, or 3C [38]. Both trials were clearly positive, with similar one-year recurrence-free or progression-free survival results. However, it should be noted that the patient population in the nivolumab trial had somewhat worse prognostic features, in that about 18% of patients had resected stage 4 disease and the trial did not include stage 3A (clinically occult microscopic disease), while about 16% of patients in the pembrolizumab trial had stage 3A, and there were no patients with stage 4 disease. Furthermore, in both trials a significant proportion of patients with stage 3B disease only had microscopic metastases, while all of the stage 3 patients that were treated with DC-ATA had macroscopic metastases. Therefore, neither of these trials has a patient population comparable to those treated with the dendritic cell vaccine, and neither trial has reported long-term survival data. Discussion This study shows that the survival results for DC-ATA compare quite favorably with other immunotherapies that were available or being tested during 2000-2011. However, DC-ATA was clinically tested in melanoma patients in an era before the widespread availability of anti-PD-1 monoclonal antibodies or BRAF/MEK inhibitors. In fact, only one of the 72 DC-ATA-treated patients was ever treated with an anti-PD-1 during the five years of follow-up. In more recent years, anti-PD-1 for metastatic melanoma patients [31][32][33][34][35][36], and anti-BRAF/MEK for those with BRAF mutations [39][40][41][42], have had a profound effect on the survival of patients with advanced melanoma. The checkpoint and BRAF/MEK inhibitors are immediately available for administration to patients, as they are "off-the-shelf" products, while DC-ATA requires surgical resection of a lesion, and even with current methods, it requires about eight weeks to manufacture the treatment product [20].
Furthermore, the anti-PD-1 and anti-BRAF/MEK products are associated with a high rapid objective response that is based on RECIST, while there were no objective responses in DC-ATA-treated patients per RECIST, although at least one patient did experience durable delayed complete regression of all the measurable sites of disease [18,43]. For these reasons, it does not appear that DC-ATA would displace any of these therapies in the treatment of stage 4 metastatic melanoma, but there is a strong rationale for combining DC-ATA with checkpoint inhibitors, especially in patients who lack an underlying anti-tumor immune response [3,4]. The mechanism of action of anti-PD-1 is the removal of immunosuppression of existing anti-tumor immune responses, while DC-ATA induces new anti-tumor immune responses or enhances the weak existing responses that are not immunosuppressed. Potential strategies for combining DC-ATA with checkpoint inhibitors include concurrent therapy with anti-CTLA-4 in patients who have progressed on anti-PD-1, or concurrent anti-PD-1 plus DC-ATA in patients who received anti-PD-1 alone while DC-ATA was being manufactured. In animal models, the best results were observed when anti-tumor vaccine was concurrently administered with anti-PD-1 therapy [44]. In addition to the checkpoint inhibitors, in recent years talimogene laherparepvec was approved for intralesional injections of metastatic melanoma [45]. Most of the tumor responses seen with this product appear to be the direct result of the cytolytic Herpes simplex virus in the construct, but it also contains GM-CSF that could help to promote a systemic immune response against the tumor antigens released by the cytolytic virus. The potential advantage of DC-ATA over such intralesional vaccines is that the latter are being injected into an immunosuppressive microenvironment that might inhibit antigen-loading of endogenous dendritic cells. With the DCV approach, antigens are only derived from self-renewing cancer cells rather than all cells in a tumor and they are loaded ex vivo away from such in vivo immunosuppression. The population treated with the intralesional cytolytic virus and GM-CSF was primarily patients with regional cutaneous metastases; so, there is no comparable cohort of DCV-treated patients with which to compare outcomes. Unfortunately, in the trial design that led to the approval of talimogene laherparepvec, the control arm was the same GM-CSF schedule of subcutaneous administration used in the ECOG 4697 trial rather than the more appropriate intralesional injection of GM-CSF as a control arm. The best results with talimogene laherparepvec were in patients with 3B, 3C, and M1a stage 4 disease (especially subcutaneous nodules). For this combination of measurable stage 3 and stage 4 patients, the objective response rate was 26%, the time to treatment failure was 8.2 months, median survival 23.3 months, and estimated two-year and three-year overall survivals were 50% and 39%, respectively. The rationale for combining DC-ATA with checkpoint inhibitors also applies to this cytolytic virus product, namely the potential immune recognition of additional ATA. Encouraging results have been reported for combining this agent with anti-CTLA-4 and anti-PD-1 agents [46,47]. Combining DCV with intralesional cytolytic virus injection may be additive or synergistic because of the direct anti-tumor effects of the cytolytic virus. 
The major strength of this analysis is the focus on patient cohorts defined by stage and disease measurability at the time of treatment with immunotherapies. The most obvious weakness of this analysis is that it relies on comparisons to historical controls that are not perfectly matched, rather than direct comparison in randomized trials. However, an effort was made to match patients as closely as possible by stage and by whether the disease was measurable at the time of treatment. Another weakness is the reliance on specific points such as median survival and two-year or five-year survival rates rather than a direct comparison of survival curves. Finally, the numbers of DC-ATA-treated patients in each of the subsets are rather small. Conclusions The survival outcomes for melanoma patients that were treated with the DC-ATA vaccine compared favorably to other immunotherapies available or being tested at that time. In terms of convenience and rapidity of clinical benefit, this product does not appear to be as active as anti-PD-1 agents. There is a strong rationale for combining the DC-ATA vaccine with anti-PD-1 agents in concurrent or sequential treatment strategies, due to their different mechanisms of action and toxicity profiles.
5,269.6
2019-10-11T00:00:00.000
[ "Medicine", "Biology" ]
The STROBE: a system for closed-looped optogenetic control of freely feeding flies Manipulating feeding circuits in freely moving animals is challenging, in part because the timing of sensory inputs is affected by the animal’s behavior. To address this challenge in Drosophila, we developed the Sip-Triggered Optogenetic Behavior Enclosure (“STROBE”). The STROBE is a closed-looped system for real-time optogenetic activation of feeding flies, designed to evoke neural excitation coincident with food contact. We demonstrate that optogenetic stimulation of sweet sensory neurons in the STROBE drives attraction to tasteless food, while activation of bitter sensory neurons promotes avoidance. Moreover, feeding behavior in the STROBE is modified by the fly’s internal state, as well as the presence of chemical taste ligands. We also find that mushroom body dopaminergic neurons and their respective post-synaptic partners drive opposing feeding behaviors following activation. Together, these results establish the STROBE as a new tool for dissecting fly feeding circuits and suggest a role for mushroom body circuits in processing naïve taste responses. Introduction Drosophila melanogaster has emerged as a leading model for understanding sensory processing related to food approach, avoidance, and consumption behaviors. However, although the gustatory system is recognized as mediating a critical final checkpoint in determining food suitability, much remains to be learned about the neural circuits that process taste information in the fly brain. Taste processing is not only involved in acute feeding events, but also in the formation of associative memories, which are aversive following exposure to bitter taste (Masek et al., 2015) or positive following sugar consumption (Tempel et al., 1983). Memory formation occurs mainly in a central brain structure called the Mushroom body (MB), composed of ~2000 Kenyon cells per hemisphere (Heisenberg et al., 1985). The MBs receive sensory information that is assigned a positive or negative output valence via coincident input from dopaminergic neurons (DANs) (Perisse et al., 2013;Waddell, 2010). Little is known about how taste information is relayed to the MBs, but Taste Projection neurons (TPNs) connected to bitter GRNs indirectly drive activation of the Paired Posterior Lateral cluster 1 (PPL1) DANs (Kim et al., 2017). PPL1 neurons signal punishment to MBs 2010;Claridge-Chang et al., 2009) and are required for aversive taste memory formation (Kim et al., 2017;Kirkhart and Scott, 2015;Masek et al., 2015). Conversely, the Protocerebrum Anterior Medial cluster (PAM) cluster of DANs signals rewarding information and is involved in the formation of appetitive memories (Burke et al., 2012;Huetteroth et al., 2015;Liu et al., 2012;Yamagata et al., 2015). Although they have well-established roles in memory formation, PPL1 and PAM involvement in feeding has not been extensively investigated. Manipulating neuron activity has become a powerful means of assessing neural circuit function. Silencing neuron populations in freely behaving flies is a straightforward way to determine their necessity in feeding, forcing the neurons in a chronic 'off' state to mimic a situation where the fly never encounters an activating stimulus (Fischler et al., 2007;Gordon and Scott, 2009;LeDue et al., 2015;Mann et al., 2013;Marella et al., 2012;Pool et al., 2014). However, gain-of-function experiments for feeding and taste, or any other actively sensed stimulus, are more complicated. 
Forcing a neuron into a stimulus-and behavior-independent "on" state can be difficult to interpret. The possible exception is activation of a neuron that elicits a stereotyped motor program, but even these situations are more easily interpreted in a harnessed fly where the effect of a single activation can be monitored (Chen and Dahanukar, 2017;Flood et al., 2013;Gordon and Scott, 2009;Marella et al., 2006;Masek et al., 2015). To effectively probe the sufficiency of neuron activation during feeding events, it would be ideal to temporally couple activation with feeding. In a previous study, we harnessed the FlyPAD technology (Itskov et al., 2014) to develop a closed-loop system for real-time optogenetic activation of neurons during feeding behavior (Jaeger et al., 2018). This system, which we call the Sip-Triggered Optogenetic Behavior Enclosure (STROBE), triggers illumination of a red LED immediately upon detecting a fly's interaction with one of two food sources in a FlyPAD arena. Here, we provide a more extensive characterization of the STROBE and its utility. Coincident activation of sweet GRNs with sipping on tasteless agar drives appetitive behavior, and bitter GRN activation elicits aversion. We show that these effects are modulated by starvation and can be inhibited by the presence of chemical taste ligands of the same modality. Activation of central feeding circuit neurons produces repetitive, uncontrolled sipping, demonstrating the ability of the STROBE to manipulate both peripheral and central neurons. We then demonstrate that activation of PPL1 neurons negatively impact feeding, while activating PAM neurons promotes it. Finally, in agreement with current olfactory memory models, activating DAN/MBON pairs from the same compartment drives opposing feeding behaviors. Results The STROBE triggers light activation temporally coupled with sipping. The FlyPAD produces a capacitance signal that increases when a fly physically interacts with the food on either of two sensors ("channels") in a small arena (Figure 1a, Supplementary Figure 1a) (Itskov et al., 2014). This signal is then decoded post hoc by an algorithm designed to identify sipping events. The STROBE was designed to track the raw capacitance signal in real-time and trigger lighting within the arena during sips (Figure 1b). To achieve this, we built arena attachments that consist of a lighting PCB carrying two LEDs of desired colors positioned above the channels of the FlyPAD arenas. Each PCB is surrounded by a lightproof housing to isolate the arenas from other light sources ( Supplementary Figure 1b-c). In order to trigger optical stimulation with short latency upon sip initiation, we designed an algorithm that applies a running minima filter to the capacitance signal to detect when a fly is feeding. When a fly feeds, its contact with the capacitance plate generates a 'step', or rising edge in the capacitance signal. If this change surpasses a given threshold, then lighting is triggered. Because this algorithm is run on a field-programmable gate array (FPGA), the signal to lighting response transition times are on the order of tens of milliseconds, providing a nearly instantaneous response following the initiation of a sip. The system then records the state of the lighting activation system (on/off) and transmits this information through USB to the PC, where it is received and interpreted by a custom end-user program. This program displays the capacitance signals each fly arena in real-time, as well as its lighting state. 
It also counts the number of 'sips' (lighting changes) over the course of the experiment. To confirm that the STROBE algorithm detects sips in line with those detected by the original post hoc FlyPAD algorithm, we first used both algorithms to analyze the capacitance signal from a short (~11 s) feeding bout (Figure 1d). Visually, this showed us that each time a sip is detected with the FlyPAD algorithm, there is a sip detected at a similar time by the STROBE algorithm. However, we also noted that the STROBE algorithm called more sips than the FlyPAD algorithm. We confirmed these observations on a larger scale by examining the correlation between sip numbers detected by each algorithm in 1-minute bins across a full 1-hour experiment (Figure 1e). Here, we observed a strong correlation between the two (R² = 0.963), with the STROBE algorithm detecting about 1.4 times the number of sips detected by the FlyPAD algorithm. This increased sip number is likely the consequence of the FlyPAD algorithm filtering out capacitance changes not adhering to certain criteria of shape and duration (Itskov et al., 2014). Since these parameters are, by definition, unknown at sip onset, the STROBE cannot use them as criteria. Thus, we expect that a fraction of "sips" detected by the STROBE are actually more fleeting interactions with the food. Indeed, video of flies in the STROBE confirmed that leg touches also triggered light activation (Supplementary Movie 1). However, since flies detect tastes on multiple body parts, including the legs, these interactions are likely still relevant to taste processing and feeding initiation. Activation of GRNs modifies feeding behavior. To validate the utility of the STROBE, we first tested flies expressing CsChrimson, a red-light-activated channel, in either sweet or bitter GRNs (Klapoetke et al., 2014). Flies were given the choice between two identical tasteless food options, one of which triggered light activation. Under these conditions, flies expressing functional CsChrimson in sweet neurons under control of Gr64f-Gal4 showed a dramatic preference towards feeding on the light-triggering food (Figure 2a-e). Control flies of the same genotype that were not pre-fed all-trans-retinal, and thus carried non-functional CsChrimson, displayed no preference, demonstrating that the attraction to light is dependent on Gr64f neuron activation (Figure 2a-e). The increased sipping on the light-triggering side of the chamber was dependent on light intensity, with maximum sipping observed at 6.5 mW (Figure 2d). As expected, flies expressing CsChrimson in bitter-sensing neurons under the control of Gr66a-Gal4 strongly avoided neuronal activation in the STROBE by sipping less on the light-triggering food source (Figure 2f-j). Once again, the behavioral response was intensity-dependent, with maximum suppression of sips occurring at the highest intensity tested (16.4 mW). Since the light intensities that gave the maximal effect for Gr64f and Gr66a activation were 6.5 mW and 16.4 mW, respectively, we decided to use 11.2 mW as the light intensity for all further experiments. Figure 1. (a) Concept of the FlyPAD: The interaction between the fly's proboscis and the food is detected as a change in capacitance between two electrodes: electrode 1, on which the fly stands, and electrode 2, on which the food is placed. (b) Concept of the STROBE: when the fly is not sipping, no change of capacitance is detected and the LED is OFF (left); when the fly sips, changes in capacitance turn the LED ON (right).
(c) Flowchart of the STROBE signal processing algorithm. (d) Example of capacitance changes during a feeding bout, and the associated sips called by the FlyPAD (blue) and STROBE (red) algorithms. (e) Comparison of the sip numbers called by the FlyPAD and STROBE algorithms. Sips were counted in 1-minute bins across a 1-hour experiment for 10 different channels (5 arenas). Bins with no sips called by either algorithm were excluded from analysis. Behavioral impact of GRN activation is modulated by starvation. Starvation duration is well known to be a crucial parameter for feeding in flies: the longer flies are food deprived, the more they will accept sweet food and ingest it (Dus et al., 2013; Inagaki et al., 2012; Scheiner et al., 2004; Stafford et al., 2012). To determine if the STROBE could detect the physiological modulation of feeding behavior dependent on the internal state of the fly, we performed experiments in which starvation was manipulated (Figure 3a,b). In line with their behavior towards sugar, fed flies expressing CsChrimson in sweet neurons did not show a significant preference for the light-triggering food (Figure 3c). However, 12 hrs or more of starvation produced a clear preference toward the light side. This increased preference index is driven by a dramatic increase in sip number (Figure 3d). In contrast to its impact on sweet sensory neurons, starvation had no significant effect on the avoidance of light driven by bitter neuron activation (Figure 3e,f). Chemical taste ligands suppress the impact of light-induced attraction and avoidance. We next asked whether the presence of sweet or bitter ligands would interfere with light-driven behavior in the STROBE. Can the STROBE affect feeding behavior through the activation of central neurons, in addition to those in the periphery? Although the precise nature of higher-order taste circuits is still unclear, several neurons have been identified in the SEZ that influence feeding behavior (Chu et al., 2014; Inagaki et al., 2014; Jourjine et al., 2016; Kain and Dahanukar, 2015; LeDue et al., 2016; Marella et al., 2012; Pool et al., 2014; Yapici et al., 2016). One of them, the "feeding neuron" (Fdg), acts as a command neuron for the proboscis extension response, and shows activity in response to food stimulation only following starvation (Flood et al., 2013). Strikingly, Fdg-GAL4>CsChrimson flies placed in the STROBE with plain agar show an extremely high number of sips on the light side (Figure 5a-d; Supplementary Movie 2), resulting in a nearly complete preference for that side (Figure 5e-g), a preference retained when 100 mM sucrose is present (Figure 5h). Thus, the STROBE can effectively modulate feeding behavior via the activation of either peripheral or central neurons. Manipulation of mushroom body extrinsic neurons modifies feeding behavior. The PAM and PPL1 clusters of dopaminergic extrinsic mushroom body neurons (DANs) are known to respond to taste inputs and mediate positive and negative reinforcement during learning (Burke et al., 2012; Das et al., 2014; Huetteroth et al., 2015; Kirkhart and Scott, 2015; Liu et al., 2012; Mao and Davis, 2009; Masek et al., 2015; Yamagata et al., 2015). Each DAN sends projections to a strikingly discrete compartment of the mushroom body, which contains the dendrites of specific Mushroom Body Output neurons (MBONs) (Aso et al., 2014b; 2014a).
An emerging model is that the DAN/MBON pairs that innervate a specific MB compartment are opposite in valence, and the KC-MBON synapses in that compartment are depressed upon DAN activation (Aso et al., 2014b; Cohn et al., 2015; Felsenberg et al., 2017; Perisse et al., 2016; Séjourné et al., 2011; Takemura et al., 2017) (Figure 6a). We next asked whether activating DANs or MBONs coincident with sipping behavior would modulate feeding. Since PPL1 neurons signal punishment or aversive information to the MBs (Figure 6a; 2010; Das et al., 2014; Kirkhart and Scott, 2015; Masek et al., 2015), their activation in the STROBE is predicted to drive avoidance of the light-triggering food. Another subset of PAM neurons, targeting the γ3 compartment, has recently been shown to be postsynaptic to neuropeptidergic Allatostatin-A neurons, which signal satiety state through inhibition (Hergarden et al., 2012; Yamagata et al., 2016). Interestingly, activation of PAM γ3 neurons (MB441B-GAL4 and MB195B-GAL4) in the STROBE is aversive (Figure 6i), while activation of the corresponding MBONs (MB110C-GAL4) is attractive (Figure 6j; Supplementary Figure 4a-c). Thus, PAM neurons targeting different MB compartments can be either attractive or aversive in the context of feeding. Discussion Leveraging real-time data from the FlyPAD, we built the STROBE to tightly couple LED lighting with sipping events, thereby allowing us to optogenetically activate specific neurons during feeding. To demonstrate the STROBE's utility, we showed that flies expressing CsChrimson in sweet or bitter GRNs display attraction and aversion, respectively, toward food that triggers LED lighting. Activation of central feeding command neurons also produces a dramatic enhancement of feeding behavior. Finally, we probed the effects of manipulating mushroom body input and output neurons, and demonstrated that activating DANs and MBONs within the same MB compartment generally produced opposing effects on feeding. The primary advantage of the STROBE over existing systems for neural activation during fly feeding is its temporal resolution, which provides two important benefits. First, activating neurons while the fly is choosing to interact with one of two available food sources allows us to explore the impact of neural activation on food selection in a way that is impossible with chronic activation mediated by either temperature or light. Second, by tightly coupling activation with food interaction events, light-driven activity from the STROBE should more closely mimic the temporal dynamics of taste input. Conceptually, these advantages are similar to those achieved by expression of the mammalian TRPV1 in taste sensory neurons and lacing food with capsaicin (Caterina et al., 1997; Chen and Dahanukar, 2017; Marella et al., 2006). Importantly, however, the STROBE allows activation of either peripheral or central neurons. Timing of activation also distinguishes the STROBE from another recently described optogenetic FlyPAD (Steck et al., 2018). The implementation of sip detection and light triggering on the FPGA allows the STROBE to trigger LED activation with minimal latency. Thus, neural activation is tightly locked to sip onset and offset, allowing the manipulation of circuits during active food detection. In contrast, the system described by Steck and colleagues (2018) carries out sip detection and light control on the USB-connected computer. The benefits of this strategy are the ability to implement a more complex feeding detection algorithm, and more flexible control of the lighting response timing.
However, the tradeoff is longer and more variable latencies of LED activation. Each of these systems may have specific advantages, depending on the application. While they have not been directly compared, it is likely that tight temporal coupling of the STROBE to sips will be more useful for studying the effects of acutely activating core taste and feeding circuit neurons, while the longer, adjustable, light pulse from the optogenetic FlyPAD may be better for silencing neurons or activating reinforcement circuits. Interestingly, a similar closed-loop optogenetic setup has recently been developed for mice. Lick-triggered blue light stimulation of the tongue enhanced licking in water-deprived mice expressing Channelrhodopsin2 in acid sensing taste receptor cells (Zocchi et al., 2017). Thus, the same principle is able to reveal important insight into consummatory behaviors in multiple animals. Although optogenetic neuronal activation is artificial, light-driven behavior in the STROBE shows some important properties that mimic natural feeding. Starvation is known to directly increase feeding behavior on sweet food in flies (Dus et al., 2013;Inagaki et al., 2014;Scheiner et al., 2004;Stafford et al., 2012). As expected, increasing starvation led to an increased sipping on the light side for flies expressing CsChrimson in sweet sensory neurons, demonstrating behavior related to artificial activation of sweet sensory neurons is regulated by the flies' internal state. We also showed that the behavioral impact of sweet and bitter GRN activation in the STROBE could be abolished by the presence of natural taste ligands. Interestingly, this property did not hold true for attraction mediated by PAM or appetitive MBON activation, which was similar in the presence or absence of sugar. This may suggest that sweet taste input and PAM or MBON activation drive attraction via parallel circuits, producing an additive effect when both are present. It is also notable that that flies preferred 1 M sucrose alone over 1M sucrose coupled to optogenetic activation of sweet GRNs (Figure 4b). We suspect that optogenetic activation of sweet GRNs in the STROBE plateaus below the excitation achieved with 1 M sucrose, and somehow prevents further activation by very high sugar concentrations. One interesting question is whether the valence observed from GRN activation in the STROBE is mediated by hedonics or effects on the feeding program itself. For example, sweet neuron activation is thought to carry appetitive hedonics, and therefore the flies may continue feeding because consequent light activation of Gr64f neurons is somehow pleasurable. On the other hand, these neurons also initiate feeding (and conversely, activation of Gr66a neurons terminates it). Thus, it is possible that each sip evokes light-driven activation of a subsequent sip, and so on, creating a positive feedback loop. This is undoubtedly true of Fdg neuron activation, which is known to initiate a complete feeding sequence, likely downstream of any hedonic effects (Flood et al., 2013). Flies appear to become "trapped" in a feeding loop until the end of the experiment, suggested by the very high number of sips evoked (see Figure 5). We observed that activation of high order neurons such as Mushroom body DANs and MBONs also modulate sipping events in the STROBE. 
PPL1 DANs project mainly onto the vertical lobes of MBs have been described as signaling punishment information to the MBs for memory formation 2010;Das et al., 2014;Kirkhart and Scott, 2015;Masek et al., 2015). In the present study, we observe that paired activation of PPL1 with food contact leads to an acute avoidance of the food. On the other hand, PAM DANs that signal reward information to the MBs (Burke et al., 2012;Huetteroth et al., 2015;Lin et al., 2014;Liu et al., 2012;Yamagata et al., 2015) lead to increased interactions with the light-triggering food source. Moreover, MBON activation from the same compartment drives opposing feeding behavior when compared to corresponding DAN stimulation (see Figure 6; Supplementary Figures 3-5). This relationship supports the current model that DAN activity depresses KC to MBON synapses in their respective compartment (Cohn et al., 2015;Felsenberg et al., 2017;Perisse et al., 2016;Séjourné et al., 2011;Takemura et al., 2017). It is also interesting to note that the PAM neurons are not universally positive. PAM γ3 neurons display activity in response to electric shocks (Cohn et al., 2015) and are silenced upon sucrose stimulation (Cohn et al., 2015;Yamagata et al., 2016). Our findings that PAM γ3 activation drives aversive feeding behavior supports these neurons conveying a negative valence onto the MB (see Figure 6, Supplementary Figure 5). Although the role of DANs in feeding behavior remains unclear, accumulating evidence suggests that MBONs can modulate innate behavior such as taste sensitivity (Masek et al., 2015), naïve response to odors (Owald et al., 2015), place preference (Aso et al., 2014b) or food seeking behavior (Tsao et al., 2018). Could the effect of DANs in the STROBE be mediated by a learning-driven process? This seems unlikely given our current experimental design as both food options were always identical, and thus there would be no cues to associate with appetitive or aversive DAN stimulation. We think it is more likely that the same reward or punishment signals that underlie memory formation also acutely modify feeding behavior. However, the possibility of pairing circuit activation with specific food cues may offer a new paradigm for studying food memories, and neuronal activation by self-administration opens a potential new avenue for the study of addiction. STROBE System The STROBE system consists of a field programmable gate array (FPGA) controller attached to a multiplexor board, adaptor boards, fly arenas equipped with capacitive sensors and lighting circuits. The hardware, with the exception of the lighting circuit units, is based on the FlyPAD design (Itskov et al., 2014). Each fly arena is paired with a lighting circuit and an opaque curtain (to prevent interference from external light). This pair will be referred to as a fly chamber unit. The entire system accommodates 16 fly chamber units (16 fly arenas and 16 lighting circuits), through 8 adaptor boards. The FPGA used is a Terasic DEV0-Nano mounted onto a custom-made multiplexor board. The multiplexor board is one of the intermediate connection components between the fly chambers and the FPGA controller. The multiplexor board has eight 10-pin ports each of which facilitate communications between two fly chambers and the FPGA controller. The board also has a FTDI module allowing data transfer over serial communications with a computer. 
The other intermediate connection component is the adaptor board which connects on one side to the multiplexor board via a 10-pin line, and splits the 10-pin line from the multiplexor board into four 10-pin ports which connects to two fly arenas and two lighting circuits. The fly arena consists of two annulus shaped capacitive sensors and a CAPDAC chip (AD7150BRMZ) that the main multiplexer board communicates with to initiate and collect data (and ultimately to stop collecting data). The CAPDAC interprets and converts capacitance data from the two sensors on the fly-arena to a digital signal for the FPGA to process (Itskov et al., 2014). The lighting circuit consists of a two-pin connector to receive power from an external power supply, a 10-pin connector to receive signals from the FPGA controller via the intermediate components, a 617 nm light emitting diode (LUXEON Rebel LED -127lm @ 700mA; Luxeon Star LEDs #LXM2-PH01-0060), two power resistors (TE Connectivity Passive Product SMW24R7JT) for LED current protection, and two metal oxide semiconductor field effect transistors (MOSFETs; from Infineon Technologies, Neubiberg, Germany, IRLML0060TRPBF) allowing for voltage signal switching of the LEDs. When a fly performs a sip and triggers a high signal on a capacitive sensor, the CAPDAC on the fly arena propagates a signal via the multiplexor to the FPGA controller. The FPGA processes the capacitive sensor signal, decides a legitimate sip was detected and sends a high signal through the multiplexor to the MOSFET of the lighting circuit. The MOSFET then switches its lighting circuit on, allowing current to flow and turning on the monocolor LED positioned directly above the capacitive sensor. The process for determining a legitimate sip is described next. In order to trigger optical stimulation with short latency upon sip initiation, we designed a running minima filter that operates in real-time to detect when a fly is feeding. We implemented this filter by modifying the state machine on the FPGA. When a fly feeds, its contact with the capacitance plate generates a 'step', or rising edge in the capacitance signal. Our filter determines the minimum signal value in the last 100 ms and checks whether the current signal value is greater than this minimum. It further determines whether the current value is sufficiently large to be considered a rising edge relative to this minimum, based on a threshold set to exceed noise. If both these conditions are true, then the filter will prompt the lighting activation system to activate the LED (or keep it on if it is already on). By design, this means that the control system will send a signal to deactivate the lighting upon the falling edge of the capacitance signal, or if the capacitance signal has plateaued for 100ms, whichever comes sooner. At this point, a low signal is sent to the MOSFET which pinches off the current flowing through the lighting circuit, turning off the light. The signal to lighting response transition times are on the order of tens of milliseconds, providing a nearly instantaneous response. After each lighting decision (on/off/no change), the system will then automatically record the state of the lighting activation system (on/off) and transmit this information through USB to the computer, where it is received and interpreted by a custom end-user program (built using Qt framework in C++) which can display and record both the activation state and signal measured by the STROBE system for each channel of every fly arena, in real-time. 
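As a rough software illustration of the detection logic just described, the sketch below tests each incoming sample against the running minimum of the preceding window and switches a virtual LED on when a rising edge exceeds the threshold. It is not the actual FPGA state machine; the sample rate, window length, threshold, and the synthetic trace are placeholder values used only for demonstration.

```python
import random
from collections import deque

def strobe_led_states(capacitance, sample_rate_hz=2000, window_ms=100, threshold=5.0):
    """Simulate the STROBE lighting decision for a stream of capacitance samples.

    The LED is on whenever the current sample exceeds the minimum of the previous
    `window_ms` milliseconds by more than `threshold` (a rising edge). It therefore
    switches off on the falling edge, or once the signal has plateaued for the full
    window length, whichever comes first. Parameter values are placeholders.
    """
    window = deque(maxlen=int(sample_rate_hz * window_ms / 1000))
    states = []
    for sample in capacitance:
        rising_edge = bool(window) and (sample - min(window)) > threshold
        states.append(rising_edge)   # True = LED on for this sample
        window.append(sample)
    return states

# Example: a square 'sip' step on top of a noisy baseline triggers the LED only
# around the step, then the LED turns off once the signal has plateaued.
trace = [random.gauss(0, 0.5) for _ in range(200)] + \
        [20 + random.gauss(0, 0.5) for _ in range(200)] + \
        [random.gauss(0, 0.5) for _ in range(200)]
print(sum(strobe_led_states(trace)))  # number of samples during which the LED was on
```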
All STROBE design materials are available as a supplemental download. Fly preparation and STROBE experiments After eclosion, adult female flies were kept for several days in fresh vials containing standard medium, and were then transferred at 25 °C into vials covered with aluminum foil containing 1 ml standard medium (control flies) or into vials containing 1 ml standard medium mixed with 1 mM all-trans-retinal (retinal flies) for 2 days. Then flies were transferred to vials covered with aluminum foil containing 1 ml of 1% agar (control flies) or into vials containing 1 ml of 1% agar mixed with 1 mM all-trans-retinal (retinal flies) for 24 hours. All flies were 5-9 days old at the time of the assay. Both channels of STROBE chambers were loaded with 4 μl of 1% agar with or without sucrose (0, 1, 10, 100, 1000 mM) or denatonium (0, 0.1, 1, 10 mM). For aversive assays using denatonium, 50 mM sucrose was also added to increase the basal sip number. Acquisition on the STROBE software was started and then single flies were transferred into each arena by mouth aspiration. Experiments were run for 60 minutes, and the preference index for each fly was calculated as: (sips from Food 1 - sips from Food 2)/(sips from Food 1 + sips from Food 2). The red LED was always associated with the left side (Food 1). For temporal curves, data were pooled into 1-s bins. Sucrose, denatonium, agar and all-trans-retinal were obtained from Sigma-Aldrich. For the experiments shown in Figure 2, the light intensities used were 0, 0.12, 1.85, 6.56, 11.26 and 16.44 mW. All other experiments were performed with a light intensity of 11.2 mW. All images were acquired using a Leica SP5 II confocal microscope with a 25x water immersion objective. All images were taken sequentially with a z-stack step size of 2 μm, a line average of 2, a speed of 200 Hz, and a resolution of 1024 x 1024 pixels. Statistical analysis Statistical tests were performed using GraphPad Prism 6 software. Descriptions and results of each test are provided in the figure legends. Sample sizes are indicated in the figure legends. Sample sizes were determined prior to experimentation based on the variance and effect sizes seen in prior experiments of similar types. All experimental conditions were run in parallel and therefore have the same or similar sample sizes. All replicates were biological replicates using different flies. Behavioral experiments were performed with flies from at least two independent crosses. Data were excluded under criteria that were determined prior to experimentation and applied uniformly throughout: the data from individual flies were removed if the fly did not pass a set minimum threshold of sips (15), or if the data showed hallmarks of a technical malfunction (rare).
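For reference, the preference index defined above is straightforward to compute from the per-fly sip counts. The following minimal helper simply reproduces the published formula; the function and variable names are illustrative and not taken from the original analysis code.

```python
def preference_index(sips_food1, sips_food2):
    """Preference index = (sips Food 1 - sips Food 2) / (sips Food 1 + sips Food 2).

    Food 1 is the light-triggering (left) side, so +1 means exclusive feeding on the
    light-paired food, -1 exclusive feeding on the other food, and 0 no preference.
    Returns None if the fly made no sips at all.
    """
    total = sips_food1 + sips_food2
    if total == 0:
        return None
    return (sips_food1 - sips_food2) / total

# Example: a fly that made 120 sips on the light-triggering food and 30 on the other
print(preference_index(120, 30))  # 0.6
```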
6,641.6
2018-12-03T00:00:00.000
[ "Biology" ]
Iris Recognition Using Gauss Laplace Filter : Biometrics deals with recognition of individuals based on their behavioral or biological features. The recognition of IRIS is one of the newer techniques of biometrics used for personal identification. It is one of the most widely used and reliable technique of biometrics. In this study a novel approach is presented for IRIS recognition. The proposed approach uses Gauss Laplace filter to recognize IRIS. The proposed approach decreases noise to the maximum extent possible, retrieves essential characteristics from image and matches those characteristics with data in a database. This method will be effective and simple and can be implemented in real time. The experiments are carried out using the images of IRIS acquired from a database and MATLAB application has been applied for its effective and simple manipulation of IRIS image. It was observed that developed approach has more accuracy and a relatively quicker time of execution than that of the existing approaches. Introduction The IRIS is internal organ of the body that is readily visible from outside. Its main aim is to manage the amount of light that enters eye through pupil using dilator and sphincter muscles to manage the size of pupil. It is comprised of elastic fibrous tissue that provides it a very complex pattern of texture. The pattern of texture has no connection with individual's genetic structure and is produced by chaotic processes. The IRIS recognition technique is an important solution of biometrics for people identification. Unlike fingerprint identification, IRIS is more secure (Alotaibi and Hebaishy, 2014). In the recent years several researchers have come up with several techniques for IRIS detection. This research will make use of Gauss Laplace algorithm for accomplishing the same. The study of Chouhan and Shukla (2010) presented a novel statistical feature approach by using Gaussian filter laplacian to recognition of iris. Their aim is to evolve good algorithm that develops the images of iris and matches those characteristics with data in the database of iris. In the study of Savithiri and Murugan (2010) Gaussian pyramid compression technique is used to compress the image of eye and this compressed eye is used for outer and inner boundaries localization of iris region. Located iris is retrieved from the compressed image of eye and after enhancement and normalization it is indicated by a set of data. With Gaussian pyramid compression improved performance of matching is examined down to 0.25bpp attributed to reduction of noise without an essential texture loss. Schmid et al. (2006) has stated that the algorithm proposed in finds the biometric system optimization on a larger data set on Gaussian model basis acquired from a little data set. Ma et al. (2002) have mentioned that in the Gauss Laplace algorithm the pupil locations and the lower and upper eyelids are predicted using edge detection. This is carried out after the actual image of IRIS has been down sampled by a two factor in every direction which makes it one of the most superior techniques for IRIS detection. Tisse et al. (2003) has mentioned that there are four steps involved in recognition of IRIS. The first step comprises of preprocessing. Then the type and size of the picture is estimated in order to be consequently process them. After that the IRIS texture is retrieved. Lastly the coded image is compared with already coded IRIS to predict the most accurate match. 
In this study the researcher will apply a combination of the best approaches at every stage of IRIS detection to obtain the most accurate outcome possible. Literature Review The study of Miyazawa et al. (2008) presented an effective algorithm for IRIS recognition using phase-based image matching, which uses the phase components of the two-dimensional Discrete Fourier Transform (DFT) of the given images. Experimental evaluation using the CASIA IRIS image database and the ICE 2005 (IRIS Challenge Evaluation) database clearly showed that the use of the phase components of IRIS images made it feasible to accomplish highly accurate IRIS recognition with a simple matching algorithm. In order to decrease the data size and to increase the IRIS image visibility, the authors introduced the notion of a two-dimensional Fourier Phase Code (FPC) for representing the IRIS information. Durai and Karnan (2010) proposed a study on IRIS recognition using a modified hierarchical phase-based matching (HPM) technique. The suggested system was configured to recognize the IRIS using a training database as input, which contains one IRIS for each individual. The final decision was made by HPM as a matching score level architecture, in which feature vectors were made independently for query images and then compared to enrollment templates, where each subsystem evaluated its own matching score. In the proposed technique, the use of phase components in the 2D Discrete Fourier Transform (DFT) of IRIS images made it feasible to achieve highly resilient IRIS recognition in a distinct way with a simple matching algorithm. According to the study of Verma et al. (2012), IRIS recognition is an accurate and reliable system for biometric identification. In that study the authors used Daugman's algorithm segmentation process for IRIS recognition. The IRIS images are chosen from the CASIA database, then the pupil and IRIS boundary are predicted from the remaining eye image, removing noise. The segmented IRIS region was normalized to reduce the dimensional inconsistencies between IRIS areas by using Daugman's rubber sheet model. Then the IRIS characteristics were encoded by convolving the normalized IRIS region with a one-dimensional log-Gabor filter and phase-quantizing the outcomes to produce a bit-wise biometric template. In the study of Chitte et al. (2012), an IRIS image synthesis process based on principal component analysis, independent component analysis and Daugman's rubber sheet structure was proposed. The aim of their study was to implement a working prototype of the techniques and methods used for IRIS recognition. Their algorithm was proven to be highly reliable and accurate over 200 billion comparisons. In the study the authors compared the outcomes of all 3 algorithms and identified the best technology for IRIS recognition. The study of Jayachandra and Reddy (2013) focused mainly on the pupil to identify the eye. To find the edges of the image, this study proposed the canny edge detection process to reduce the noisy data in the image and find the edges. After finding the edges, those images were stored in the CASIA database. Secondly, the k-means algorithm was used in this study to recognize the edge image of the nearest pupil from the database for the given input image. The outcomes of this study revealed that recognizing the pupil was a better process for identifying the eye and raising the recognition accuracy. Yao et al.
(2014) proposed a study on IRIS feature extraction based on Haar Wavelet transform. In order to, increase the IRIS recognition system accuracy, this study suggested an effective algorithm for feature extraction of IRIS based on two dimensional Haar wavelet transformations. Firstly the image of IRIS was decomposed by two dimensional Haar wavelet 3 times and then a 375 bit code of IRIS was acquired by quantizing entire high frequency co-efficient at 3rd lever. Lastly the authors used a function of similarity degree as the scheme of matching. The outcomes on IRIS database of CASIA revealed that their algorithm has motivating Correct Recognition Rate (CRR) which was nearly 93.18% accompanying with reduced Equal Error Rate (EER) with 0.54%. Design and Implementation The design and implementation section provides the design of IRIS detection system using Gaussian Laplace filters. Database used for this particular research is CASIA-Iris-Thousand. Information for the database can be seen in this website (http://biometrics.idealtest.org/dbDetailForUser.do?id=4). The goal of this IRIS recognition system is to acquire HD images of IRIS either from pre-collected images or IRIS scanner. These images clearly reveal the complete eye particularly the pupil and IRIS section. In this particular research the IRIS HD images are acquired using pre collected images that are stored in a database. The Fig. 1 shows the flowchart of the implementation steps. Segmentation Iris is an eye part that encloses the pupil. There are many ways to isolate the IRIS from the eye image. This study uses Hough transform for localization and model of Dugman rubber sheet for normalization then this study uses two varied methods of feature extraction one is log Gabor wavelet for analysis of phase and second is the Gaussian filter Laplacian for statistical analysis. The Hough transform is considered as a strong component in edge linking for line extraction. Its major advantages are its ability to extract lines and insensitivity to noise in areas with pixel gaps. The Hough transform can be used to predict circles, lines or other parametric curves. The purpose is to predict the lines location in images. The benefits of Hough transform are simple implementation, simple conceptually, can be adapted to several forms and manage occluded and missing information easily. For instance in hough transform a circle in the ab-plane is presented by (Joshi, 2004): Data was filtered through gabor convolution. Then data is implemented through linear hough transform. Iris is detected through canny edge detection. Then image is compared with the help of phase demodulator and finally calculation is made through calculation of hamming distance. Thus there are three dimension spaces of parameter. The simple process is mentioned below: The segmentation of image is used to reside boundaries and objects in images. The process of segmentation is a difficult and essential step in the system of image processing. Segmentation is a technique needed to exclude and separate the artifacts as well as residing the region of circular IRIS. The outer and inner boundaries of IRIS are estimated using segmentation (Samarati et al., 2009). The segmented image is shown in Fig. 2. Canny Edge Detection The canny edge detector is one of the most commonly used tools of image processing predicting edges in a robust way (Zhu et al., 2000). The canny edge detection algorithm is known too much as an optimal edge detector. 
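As a concrete illustration of the segmentation stage just described (edge detection followed by a circular Hough transform to locate the pupil and outer iris boundary), a minimal Python/OpenCV sketch is given below. The file name, blur kernel and Hough thresholds are illustrative assumptions, not the settings used in this study, which was implemented in MATLAB.

```python
# Minimal sketch of iris/pupil localization with Canny edges and a circular
# Hough transform (OpenCV). File name and parameter values are illustrative.
import cv2
import numpy as np

eye = cv2.imread("casia_eye.bmp", cv2.IMREAD_GRAYSCALE)   # hypothetical CASIA eye image
blurred = cv2.GaussianBlur(eye, (7, 7), 2)                 # suppress noise before edge detection

edges = cv2.Canny(blurred, 30, 70)                         # Canny edge map (low/high thresholds)

# Circular Hough transform: the small dark circle is taken as the pupil and a
# larger, roughly concentric circle as the outer iris boundary.
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=60,
                           param1=70, param2=40, minRadius=20, maxRadius=120)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)          # strongest circle candidate
    print(f"candidate boundary: centre=({x},{y}), radius={r}")
```

The five internal steps of the Canny detector itself are summarised next.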
The algorithm has five steps namely smoothing, predicting gradients, non-maximum suppression, double thresholding and tracking edge by hysteresis. The Fig. 3 shows the phase demodulator which is a 4 quadrant plane that represents the resulting matrix of feature from using the normalized image to Gabor filter, to binary code. This representation relies on the sign of both the imaginary and real part of feature matrix. The vertical axis indicates the imaginary tool with positive to up whereas the horizontal axis indicates the real tool with the positive to right (Biswas and Sil, 2012). Normalization After the segmentation of image and deciding the area of IRIS must be isolated from total picture. The process of normalization will generate regions of IRIS which have similar constant dimensions so that two pictures of similar IRIS under varied conditions will have feature characteristics at same spatial location. To recognize and contrast the area of IRIS the circular IRIS needed to be transformed to coordinate that have a fixed dimensions. This feature makes the comparison practically. The two main techniques used to retrieve features from iris image are Gabor filter and Daugman method. The Gabor filter is Gaussian's modulated by complex function of sinusoid (Palmer-Brown et al., 2009). The below figure shows the Gabor filter equation: The Generator of Gabor filter is depicted in Fig. 4. The Daugman method is used for normalization stage. Daugman process performs in a way that offers points in the IRIS transformed to pair with corresponding points in polar coordinates in which θ is the angular field and r is the radial distance (Vargahan et al., 2011). The normalized image is shown in Fig. 5. Denoising After normalization stage this study denoised the normalized image to develop the quality of it and also manage with possible noise that is added to image. In this study the normalized images are denoised using Contourlet transform. Contourlet is an isolated unidirectional 2D transform that is used to explain delicate details and curves in images. Contourlet transform effectively indicates those flat contours that are the major tools of every usual image (Zali- Vargahan et al., 2012). The Fig. 6 depicts the denoised image: Feature Extraction The essential IRIS features must be encoded as that contrast between templates can be made. Most systems of IRIS recognition make use of a band pass decomposition of the IRIS image to create a biometric template. Iris offers abundant information of texture a feature vector is formed which comprises of ordered feature sequences retrieved from different representation ofIRIS images. The Fig. 7 depicts the Laplacian of Gaussian image: The Laplacian is a two dimensional isotropic estimation of image's second spatial derivative (Hussain and Agarwal, 2015). The image Laplacian highlights the rapid intensity change areas and is always used for edge detection. The Laplacian is always applied to an image that has been smoothed with approximating a Gaussian smoothing filter to decrease its sensitivity to noise. Iris Code Matching The last stage is to match the IRIS code. The templates of two IRIS code are compared by computing the hamming distance. Hamming distance is a fractional estimation of the number of bits disagreeing between two binary patterns.The hamming distance is a number used to represent the difference between two strings of binary numbers. It is a small part of a wider set of formulas used in analysis of information. 
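A minimal sketch of the Laplacian-of-Gaussian (Gauss Laplace) feature step follows, assuming a normalized iris strip from the rubber-sheet stage is already available as a 2-D array. The sign-based binarization used here to turn the filter response into a bit template is an illustrative choice, not necessarily the exact encoding used in this study.

```python
# Minimal sketch: Laplacian-of-Gaussian (Gauss Laplace) response of a normalized
# iris strip, binarized by sign into a bit template. The sign-based encoding is
# an illustrative choice, not necessarily the scheme used in the study.
import numpy as np
from scipy import ndimage

def log_iris_code(normalized_strip, sigma=2.0):
    """normalized_strip: 2-D float array (radial x angular) from the rubber-sheet step."""
    response = ndimage.gaussian_laplace(normalized_strip, sigma=sigma)   # LoG filtering
    return (response > 0).astype(np.uint8)                              # 1 where the response is positive

# Example with a synthetic strip (64 radial x 512 angular samples)
strip = np.random.default_rng(0).random((64, 512))
code = log_iris_code(strip)
print(code.shape, code.dtype)   # (64, 512) uint8 bit template
```

Once two such bit templates exist, matching reduces to the fractional Hamming distance discussed above and formalised next.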
Hamming's formulas permit PC to correct and detect mistakes on their own. For example the Hamming distance d (a, ̱ b) between two vectors a ̱ , ̱ b Є f (c) is the coefficient number in which they vary for instance: Results and Discussion Iris recognition is a quickly developing biometric authentication method that uses techniques of pattern recognition on IRIS images to distinctly recognize an individual. The results of this study are depicted separately. The Fig. 8 shows the Gabor conversion and Gabor lines. The Fig. 9 is the test image and recognized Fig. 8. The next resultant Fig. 10 is the after segmentation test and noise test. The Table 1 summarizes various existing approaches for IRIS detection and their corresponding accuracy rates. There is no comparison of the common IRIS recognition metrics. As discussed in the earlier sections, in the present approach Gabor convolution has been used to filter the images. Linear Houghtransforms and canny edge detection has been used to segment the IRIS figures. Further the features have been subtracted from the figures and phase demodulator has been used encode the IRIS and save them as a template in a database file for IRIS recognition. The Hamming distance of the encoded images is calculated to while doing the IRIS recognition to retrieve the IRIS related information in a faster pace thereby finding the most similar IRIS figure from the database accurately. It can be very clearly seen that the approach adapted in this study yields results that are far better than that of the existing techniques. Conclusion In this study the author has proposed as well as implemented an effective and rapid real time Gauss Laplace algorithm for segmenting and localizing the pupil and IRIS boundaries of eye from database images. The algorithm predicts the boundaries and center reliably and accurately despite of the presence of eyelashes under reduced interface of contrast and in the occurrence of excess illumination. This study has proposed an IRIS recognition system that exhibits greater performance when compared with that of the previously available techniques. The author has also compared the accuracy of the proposed approach based on Gauss Laplace filters with different existing approaches. The outcomes have shown that the proposed approach has relatively quicker time of execution than that of the existing approaches. Recommendations for Future In future the researchers can propose some other new algorithms for recognition of IRIS. The future work would be to recognize IRIS from a larger database that has huge volume of information. Further new techniques can be proposed and tested for different variations of the input from size and illumination. The author also suggests that the future researchers can come up with algorithms for recognition of IRIS using minimal hardware and less expensive cameras such that IRIS recognition is done in a cost effective manner.
3,774.4
2016-09-15T00:00:00.000
[ "Computer Science" ]
An Improved Animal Model of Multiple Myeloma Bone Disease Simple Summary Multiple myeloma is a plasma cell cancer involving bone destruction and is considered an incurable disease despite significant improvements in therapeutic strategies. During myeloma progression, over 90% of patients develop a bone disease that causes patient injury and death. There are limited animal models available to demonstrate multiple myeloma bone disease (MMBD). The current study identifies and validates the newly developed MMBD models with uniformity of tumor burden and severe bone lesions. This model will help study the biology of MMBD and serve as a valuable tool for screening therapeutic candidates by monitoring their response to disease progression. Abstract Multiple myeloma (MM) is a plasma cell malignancy that causes an accumulation of terminally differentiated monoclonal plasma cells in the bone marrow, accompanied by multiple myeloma bone disease (MMBD). MM animal models have been developed and enable to interrogate the mechanism of MM tumorigenesis. However, these models demonstrate little or no evidence of MMBD. We try to establish the MMBD model with severe bone lesions and easily accessible MM progression. 1 × 106 luciferase-expressing 5TGM1 cells were injected into 8–12 week-old NOD SCID gamma mouse (NSG) and C57BL/KaLwRij mouse via the tail vein. Myeloma progression was assessed weekly via in vivo bioluminescence (BL) imaging using IVIS-200. The spine and femur/tibia were extracted and scanned by the micro-computer tomography for bone histo-morphometric analyses at the postmortem. The median survivals were 56 days in NSG while 44.5 days in C57BL/KaLwRij agreed with the BL imaging results. Histomorphic and DEXA analyses demonstrated that NSG mice have severe bone resorption that occurred at the lumbar spine but no significance at the femur compared to C57BL/KaLwRij mice. Based on these, we conclude that the systemic 5TGM1 injected NSG mouse slowly progresses myeloma and develops more severe MMBD than the C57BL/KaLwRij model. Introduction Multiple myeloma (MM) is the second most common hematological threat in the USA, characterized by the accumulation of terminally differentiated monoclonal plasma cells (PCs) in the bone marrow (BM) [1,2]. MM is characterized by hypercalcemia, renal failure, anemia, and lytic bone lesions. Over 90% of patients develop one or more bone lesions during the disease, called MM-associated bone disease (MMBD) [3][4][5][6][7]. MMBD develops when MM cells disrupt the balance of bone absorption and formation, resulting in activation of osteolytic devastation [8]. MM progression and osteolytic lesion development are highly linked, demonstrating the interaction of myeloma cells with the bone marrow microenvironment. Furthermore, osteoimmunology describes interdisciplinary mechanisms of bone and immune cells in both normal and pathophysiology [9]. A number of factors, including Receptor Activator of NF-κB (RANK), RANK ligand, osteoprotegerin, play roles in osteolytic lesions and MM progression. Some of these factors are produced from various immune cells. These findings suggest that crosstalk between the immune system and bone cells may affect cancer growth [10]. Various murine models have been developed to study the biology of MM. These models describe the essential tool to explore and envisage the efficacy of innovative therapeutic approaches [11]. The 5TGM1 transplanting C57BL/KaLwRij mouse via the tail vein develops MMBD features close to human MMBD [12][13][14][15][16]. 
This model has a short latency period to develop MM and significant osteolytic bone lesions [12]. However, we found significant variations of MMBD levels per individual mouse [17][18][19]. The mouse/human myeloma cells (5TGM1, U266, or JJN3) were intratibially injected into severe combined immunodeficient (SCID) or non-obese diabetic (NOD)/SCID. They developed MMBD but progression was limited to the tibia where tumor cells were injected [20][21][22]. The human GFP-expressing RPMI-8226/S MM cells were intravenously injected into NOD/SCID mice and demonstrated that the human MM cells infiltrate into mouse bone and sequentially develop MMBD [23]. The primary human MM cells were orthotopically injected into human fetal or rabbit bones implanted SCID mice and developed bone lesions on implanted bones [24][25][26][27]. Later, the NOD/SCID-GAMMA (NSG) mice were used to study various levels of BM infiltration and overall survival [28][29][30][31], which also have limited information on the developments of MMBD and osteolytic lesions [32][33][34]. While the orthotopic transplanting models showed extensive MMBD at the injected bone site, systemic injection models have various tumor burden and MMBD levels. In addition, the early/quantifiable detection of myeloma and myeloma progression in the murine models will be useful in the therapeutic evaluation. In the current study, we establish the MMBD model with severe bone lesions and easily accessible MM progression. In addition, we evaluated the times of disease detection, the progression of the tumor, and the extent of bone lesions in systemically injected 5TGM1 (5TGM1-Luc) cells in the NSG or C57BL/KaLwRij mice. Cell Culture 5TGM1 cells were received from the University of Texas Health Science Center, San Antonio, and maintained in RPMI 1640 medium + 15% fetal bovine serum (FBS) + 1 × penicillin/streptomycin + 1 × Glutmax (Gibco, Life Technologies, Carlsbad, CA, USA) at 37 • C in an atmosphere of 5% CO 2 /95% air. 5TGM1-Luc cells were maintained in a range of 0.5-2 × 10 6 cells/mL. Before the transplant, cells were washed with PBS three times and counted by Cellometer Mini (Nexcelom, Lawrence, MA, USA) using the trypan blue exclusion method. Intravenous Inoculation of 5TGM1-Luc Cells NSG and C57BL/KaLwRij mice were housed and bred at the University of Arkansas for Medical Sciences (UAMS) Animal Facility. All animal procedures were reviewed and approved by the UAMS IACUC and were conducted as per the National Institutes of Health (NIH) Guide for the Care and Use of Laboratory Animals. 1 × 10 6 cells (>90% viability) in 100 µL PBS or PBS alone were injected via the tail vein into NSG and C57BL/KaLwRij mice, [8][9][10][11][12] week-old mice (both male and female). Myeloma cell engraftment was confirmed by third week bioluminescence imaging. The mouse with no signal until third week post-injection is removed from the study. In Vivo Bioluminescence Imaging Mice were imaged from first week to eighth week post-inoculation to assess tumor burden. Mice were anesthetized using isoflurane and imaged after 10 min of D-luciferin (1.5 mg/mouse, Perkin Elmer, Waltham, MA, USA) I.P. injection using an IVIS Imaging System 200 Series (Perkin Elmer, Waltham, MA, USA). All the images were acquired by auto exposure. Using Living Image 4.7.4 software, region of interest (ROI) was generated to cover whole body, and the total flux (p/s) was obtained. 
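The ROI quantification step amounts to summing the photon-flux image over the whole-body region of interest. A schematic NumPy sketch is shown below with synthetic arrays standing in for the exported radiance image and ROI mask; the actual values in this study were produced by Living Image 4.7.4 rather than by code like this.

```python
# Schematic sketch of the ROI quantification: sum the photon-flux image over a
# whole-body region of interest to obtain total flux (p/s). The arrays below are
# synthetic stand-ins for an exported radiance image and ROI mask.
import numpy as np

flux_image = np.random.default_rng(2).random((256, 256)) * 1e6   # photons/sec per pixel (synthetic)
roi_mask = np.zeros_like(flux_image, dtype=bool)
roi_mask[40:220, 60:200] = True                                   # whole-body ROI box

total_flux = flux_image[roi_mask].sum()
print(f"total flux in ROI: {total_flux:.3e} p/s")
```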
Blood Sampling and Serum IgG2b Level Measurement Blood samples (50 µL) were collected by retro-orbital bleeding and separated by centrifugation at 8000 rpm for 5 min. The plasma was transferred into separate vials and stored at −20 • C for the assay. According to the manufacturer's instructions, serum IgG2 levels were measured using a mouse IgG2b Uncoated ELISA Kit (Invitrogen, Carlsbad, CA, USA). BMD Measurement and Bone Histomorphometric Analyses on the Spine When the mouse shows endpoint criteria, it is sacrificed and stored at −20 • C (>2 days). Once the mouse is thawed at Room Temperature, the vertebra on the spine is extracted from the mouse. The vertebrae's bone mineral content and bone mineral density were measured using a PIXI-mus bone densitometer with on-board PIXI-mus software (G.E. Lunar, Madison, WI, USA) adjusted with body weight. Lumbar vertebrae (L1-L6) from the control and MM mice of NSG and C57/KaLwRij mice were scanned using a microCT40 (SCANCO Medical, Bassersdorf, Switzerland). For histomorphometric analyses, the trabecular bone regions were selected to assess the complete volume and structural appearances at each lumbar vertebrae. The analyses provided information regarding the main histomorphometry parameters such as bone volume (BV/TV, %), Trabecular thickness (Tb. Th, µm), number (Tb. N, n), and space (Tb. Sp, µm). Statistical Analysis Statistical analyses were conducted with a Student's t-test. A p value of <0.05 was considered significant. Graph Prism7 (GraphPad Software Inc, San Diego, CA, USA) and SigmaPlot 13.0 (Systat Software Inc, San Jose, CA, USA) were used for all statistical analyses. 5TGM1-Luc Transplanted NSG Mouse Shows Early BL Signs, but Longer Survival than C57BL/KaLwRij Mouse We have previously monitored MM progression in 5TGM1 intravenously transplanted C57BL/KaLwRij by evaluating serum immunoglobulin (Ig) changes and other phenotypic changes such as hindlimb paralysis, significant weight loss, and moribund state [17][18][19]. Both parameters become apparent at the terminal stage of MM progression. For easy and early accessing of MM progression and MM onset in the mouse, we obtained the Luciferase gene-expressing 5TGM1 cells (5TGM1-Luc) from Dr. Oyajobi via collaboration and injected 1 × 10 6 5TGM1-Luc cells into C57BL/KaLwRij or NOD-scidIL2R null (NSG; Jackson Laboratory, Bar Harbor, ME, USA) mouse via tail vein. Each week we performed in vivo bioluminescence (BL) imaging and collected blood (50 µL) for the serum Ig assay. We found the first signs of MM engraftment and growth at the second week of post-injection in NSG mice compared to the fourth week in C57BL/KaLwRij mice demonstrated by a focal signal on BL Image analyses ( Figure 1A). As MM progressed, the BL signals increased and the focal BL positive areas spread throughout the body ( Figure 1A). Like 5TGM1 transplanted C57BL/KaLwRij mice, the endpoint phenotypes (i.e., hindlimb paralysis, significant reductions of activity, and weight loss) were seen in NSG mice at either seventh or eighth-week post-transplantation. Engraftments of 5TGM1-Luc cells via tail vein in both C57BL/KaLwRij mice and NSG mice resulted in MM progression with typical MM phenotypes. Furthermore, we lost C57BL/KaLwRij mice in the fifth to seventh week of post-injection while NSG mice in the seventh to ninth week ( Figure 1A). 
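The group comparisons described in the Statistical Analysis paragraph above (control versus MM, Student's t-test, significance at p < 0.05) reduce to a single SciPy call; a minimal sketch with placeholder values, not the study's data, is given below.

```python
# Minimal sketch of the two-sample Student's t-test used for control vs. MM
# comparisons (e.g., BMD or BV/TV). The numbers below are placeholders only.
from scipy import stats

bmd_control = [0.052, 0.055, 0.050, 0.054, 0.053, 0.051, 0.056]   # hypothetical values
bmd_mm      = [0.047, 0.044, 0.046, 0.043, 0.048, 0.045, 0.044]

t_stat, p_value = stats.ttest_ind(bmd_control, bmd_mm)            # unpaired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```

For the survival times themselves, a log-rank comparison rather than a t-test is the appropriate tool, as reported next.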
Comparison of the mouse survival by the log-rank (Mantel-Cox) test revealed that the survivals of the two models are significantly different (p = 0.0062), with a median survival of 44.5 days in the C57BL/KaLwRij model versus 56 days in the NSG model (Figure 1B). Tumor Burdens Are Detected as Early as Week 2 of Post-Injection by BL Analysis Since serum Ig analysis is a commonly used method in preclinical models to assess MM burden, we compared the sensitivities of tumor burden detection by serum Ig and BL imaging in both C57BL/KaLwRij and NSG mice. As mentioned in 3.1, all transplanted mice showed hindlimb paralysis, a typical sign of MM progression at the terminal stage. For serum Ig analysis, the plasma was isolated from the sera collected each week and stored at −20 °C as described in the Materials and Methods section. Serum IgG2b was measured by ELISA kit (Invitrogen Co., Carlsbad, CA, USA). Due to the difference in survival times, serum collection ended at week 5 in C57BL/KaLwRij mice while continuing until week 6 in NSG mice. As expected, control NSG mice had an undetectable level of IgG2b, which was expressed as 0 mg/mL. As shown in Figure 2A, a significant increase of the serum IgG2b level in NSG mice was noticed from the 4th week, and it continued to increase in a range of 0.287-2.564 mg/mL. In contrast, no significant change of the IgG2b level in C57BL/KaLwRij mice was seen on weeks 3 and 4, with averages ranging between 0.267 and 0.363 mg/mL from 0.206 mg/mL at week 0. At week 5, serum IgG2b sharply increased to 1.303 mg/mL, suggesting that the tumor burden in C57BL/KaLwRij mice had become apparent, resulting in death. Apparent differences of serum Ig levels from the baseline appeared in the fourth to fifth week post-injection in C57BL/KaLwRij mice, while appearing in the third to sixth week in NSG mice. For quantitative BL signals, we created a region of interest (ROI) box to cover the whole body and obtained total flux values (photons/sec) for each mouse from each week's images (Figure 1A). The total flux values per mouse in the weekly images were plotted to express tumor burden (Figure 2B). In this graph, the tumor burdens were detected from the second week post-injection and increased over time in both NSG and C57BL/KaLwRij mice. All NSG mice showed more uniform tumor burden at each timepoint than the C57BL/KaLwRij mice, except at the first week. However, such variations in the C57BL/KaLwRij model may be partially due to signal interruption by the black color and spots of the mouse. These findings indicate that the NSG model provides a better window to monitor MM progression quantitatively. Both Models Demonstrate Loss of Bone Mineral Using the Ex Vivo DEXA Scan As the 5TGM1-transplanted C57BL/KaLwRij mouse develops severe MMBD [35], we were interested in how severely MMBD develops in these models. We measured the bone minerals in the spines of MM-developing NSG or C57BL/KaLwRij mice compared to controls using a PIXImus Densitometer (G.E. Lunar, Madison, WI, USA). At postmortem, we performed an ex vivo DEXA scan.
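The survival comparison reported at the beginning of this section (log-rank/Mantel-Cox test, median survivals of 44.5 versus 56 days) is a standard Kaplan-Meier analysis. A minimal sketch using the lifelines package is shown below, with made-up survival times rather than the study's raw data.

```python
# Minimal sketch of the Kaplan-Meier / log-rank (Mantel-Cox) survival comparison.
# Survival times below are placeholders, not the study's data.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

days_kalwrij = [38, 41, 44, 45, 47, 49, 42, 46]      # hypothetical days to endpoint
days_nsg     = [52, 55, 56, 58, 60, 61, 54, 63]
events = [1] * 8                                      # 1 = endpoint reached (no censoring)

kmf = KaplanMeierFitter()
kmf.fit(days_nsg, event_observed=events, label="NSG")
print("NSG median survival:", kmf.median_survival_time_)

result = logrank_test(days_kalwrij, days_nsg,
                      event_observed_A=events, event_observed_B=events)
print("log-rank p-value:", result.p_value)
```

The DEXA measurements introduced above then quantify how this difference in disease course translates into bone loss.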
As shown in Figure 3, ex vivo DEXA analysis reveals that MM-induced NSG mice showed a significant reduction of bone mineral density (BMD) in the spine compared to their controls (p = 0.0043). In contrast, MM-induced C57BL/KaLwRij mice showed a reduction of BMD compared to controls with p = 0.0197. Although the MM versus control differences in both models demonstrated significant bone mineral loss, the smaller within-group variation in the NSG model yielded a stronger level of significance than in the C57BL/KaLwRij model. Figure 3. Bone mineral density changes in 5TGM1-bearing mice versus the control mice. The mouse was sacrificed and kept in the freezer for >2 days. Once the carcass was thawed, the vertebra was extracted from the frozen carcass and scanned using PIXI-mus. BMD values are expressed for control versus 5TGM1-bearing mice in each model and tested using Student's t-tests. * denotes p < 0.05, ** denotes p < 0.01. More Severe MMBD Developed in NSG Mice Than C57BL/KaLwRij Since severe bone mineral loss occurred in these models, we further evaluated the myeloma-induced bone lesions. After the DEXA scan, the spine and femur/tibia were extracted from each carcass. Thirteen spines from MM C57BL/KaLwRij mice and 12 spines from MM NSG mice, with seven spines from control mice per strain, were scanned by micro-computed tomography (microCT; Scanco Medical AG, Switzerland). For histomorphometric analyses, the trabecular regions on the lumbar vertebrae (L1-L6) were analyzed by drawing contours.
The bone volume density (BV/TV) and trabecular thickness (Tb.Th) from whole vertebrae were expressed, but further significances in NSG showed than C57BL/KaLwRij ( Figure 4A). In addition, the trabecular numbers and spaces were examined, but no statistical difference was seen. The BV/TV and Tb.Th were significantly reduced in MM NSG mice from control mice (no-MM) (p < 0.0001), while these parameters were also considerably reduced (p = 0.005 and 0.009) in C57BL/KaLwRij mice ( Figure 4A). The p-value differences come from variations of C57BL/KaLwRij since its standard errors were two to three folds higher than NSG mice. The percent reduction rates were calculated against the control and expressed in Figure 4B to compare each lumber. Both graphs showed unequal bone lesions in each lumbar. In the BV/TV, L2 of C57BL/KaLwRij and L2/3 of NSG showed higher reduction rates. In Figure 4C, the representative transverse microCT images and 3D-reconstructed images of the trabecular region in the vertebral body of corresponding lumbar (L3) from Control and MM mice are shown here. Although both images demonstrated significant bone loss on the trabecular region of the lumbar, 3D-reconstructed images showed further bone loss in the NSG model, in agreement with Figure 4B. These results demonstrate that both models induce significant bone loss, but the NSG model has more severe bone loss than C57BL/KaLwRij model. In the current study, we present an improved MMBD preclinical model with uniformity of tumor burden. This MM model was established by intra-venous engraftment of luciferase-transduced 5TGM1 MM cells into NSG. The tumor engraftment in this model was observed as early as the second week of post-injection using BL imaging and later confirmed by serum IgG2b surge. The BL imaging showed myeloma cell's dissemination and growth patterns similar to human myeloma. In addition, this model demonstrated typical MM pathological phenotypes, such as hindlimb paralysis, significant weight loss, and moribund state. Compared to C57BL/KaLwRij model, we found that i) the 5TGM1-Luc engrafted NSG model survives longer; ii) tumor burdens are apparent from the control at an earlier timepoint; and iii) it showed a uniform tumor burden throughout the testing period. Since the BL signals are interrupted by the color of the skin and fur, BL has not been used extensively in C57BL derived strains. Although we cream-shaved the back of C57BL/KaLwRij mice before BL imaging, the tumor burden variations may be partially contributed by the BL signal hindrance. However, our results clearly showed that the 5TGM1-Luc engrafted NSG model has slow MM progression and uniform tumor burden that benefit from evaluating novel myeloma therapeutics. Delaying MM progression in the NSG model from the C57BL/KaLwRij model demonstrated that the lack of immune cells causes this delay, supporting that the interactions of immune cells/MM cells/bone cells promote MM progression [10]. Human MMBD features bone lesions that can be found in various forms (i.e., classic discrete lytic lesion, diffused osteopenia, or multiple lytic lesions) at many parts of the skeleton, preferably spine, skull, and long bones [49]. However, many studies of MMBD were limited to the tibia and femur due to orthotopic injection [13,35,[50][51][52][53] or the vertebral bone lesion study was not thoroughly evaluated [54]. 
Furthermore, human MMBD causes several complications (i.e., bone pain, fractures, hypercalcemia, and spinal cord compression), resulting in decreased quality of life and poor mobility [55]. These bone lesions in MMBD are always lytic and promote the processes of bone resorption. The mechanism associated is the displacement of the RANKL/OPG ratio, which affects the process of osteoclastogenesis [10,56]. Another exciting feature in 5TGM1-Luc engrafted NSG model is severe osteolytic bone lesions of the lumbar spine. In ex vivo DEXA analysis, we found a significant reduction of BMD in the spine of NSG mice from non-MM mice (p = 0.0043), whereas a less significant change in the C57BL/KaLwRij model (p = 0.0197). In microCT analyses, both models (NSG and C57BL/KaLwRij) show substantial bone loss (BV/TV and Th.Th) compared to non-MM mice. However, the NSG model showed further significance. These findings aim that the osteolytic lesions observed in our study may be due to aberrant RANK/RANKL signaling along with subsequent failure of immune system in MM [10] which needs to be studied and explored in the further study. Our results here provide first insights into quantitative data of the myeloma progression in the NSG mouse after systemic 5TGM1-Luc injection and qualitative data of the myeloma-induced bone lesions, as proof of concept for detecting advanced ultrastructural deviations in the spine. These quantitative and qualitative assessments of MMBD will be useful to test various anti-MMBD drugs. Conclusions We improved the MM animal model which recapitulates human myeloma with secretion of paraprotein, lytic bone lesions and spinal compression. Our newly developed
4,862.2
2020-11-05T00:00:00.000
[ "Medicine", "Biology" ]
On the stability and Hopf bifurcation of the non-zero uniform endemic equilibrium of a time-delayed malaria model This article considers a time-delayed mathematical model of immune response to Plasmodium falciparum (Pf) malaria. Infected red blood cells display a wide variety of surface antigens to which the body in turn responds by mounting specific immune responses as well as cross-reactive immune responses. The model studied here tracks these infected red blood cells as well as the two types of immune responses. It is assumed that the immune responses are timedelayed, and hence a system of nonlinear delay differential equations is considered. The goal of the paper is to provide a vigorous analysis of the stability and Hopf bifurcation of the non-zero uniform endemic equilibrium of the mathematical model. Introduction The present paper is a development and refinement of some of the ideas first presented in [15] and [4].For the sake of completeness, we reproduce here some portions of the introduction of [15] -which provides the background of the mathematical model under consideration.We consider a time-delayed modification of the seminal Recker et al. [18] intra-host mathematical model of immune response to Plasmodium falciparum (Pf), a species of parasites that cause malaria in humans.The model incorporates the effects of time-delayed immune response (IR) mounted by the human host and was first introduced in the work of Mitchell et al. [12][13][14] as a natural development of the model proposed by Recker et al. [18].Over the years, there has been considerable work conducted on this type of model in the research literature (see [1][2][3][4][5][6][10][11][12][13][14]17] and [18] for example).In the original work of [18], the authors proposed a mathematical model of immune response to Plasmodium falciparum predicated on the hypothesis that a given antigenic variant experiences two different types of immune response.These are the long-lasting variantspecific immune response and the short-lived cross-reactive immune response mounted c Vilnius University, 2016 I. Ncube against a set of epitopes shared with other antigenic variants.The main achievement of the model is its ability to replicate the sequential appearance of dominant antigenic variants, each of which is the most immunologically distinct from its preceding types [18] -this being a strategy employed by the parasite population to evade the host's immune system [14]. The time-delayed modification of the Recker et al. [18] is expressible in the form [13,14] where Y i denotes the amount of antigenic variant i, Z i and W i denote variant-specific and cross-reactive immune responses, respectively, φ is the intrinsic parasite growth rate, α and α are the removal rates associated with specific and cross-reactive immune responses, respectively, β and β are the proliferation rates of immune responses, µ and µ are the decay rates of variant-specific and cross-reactive immune responses, and T d is the discrete time-delay of the IR.The coefficients c ij of the connectivity matrix characterise cross-reactive inter-variant interactions [2,6,18].Following in the footsteps of [13] and [14], we assume that all of the variants have identical temporal dynamics, namely, Y i (T ) = Y (T ), Z i (T ) = Z(T ), and W i (T ) = W (T ) for all i. 
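For the reader's convenience, a plausible form of the delayed system, reconstructed from the parameter definitions above and the structure of the Recker-type models cited, is sketched below; the primes distinguish the cross-reactive rates from the variant-specific ones. It is offered only as an orientation aid and should be checked against [13, 14] rather than taken as the exact system analysed here.

```latex
% Hedged reconstruction from the stated parameter definitions; verify against [13,14].
\begin{aligned}
\frac{dY_i}{dT} &= Y_i\bigl(\phi - \alpha Z_i - \alpha' W_i\bigr),\\
\frac{dZ_i}{dT} &= \beta\, Y_i(T - T_d) - \mu\, Z_i,\\
\frac{dW_i}{dT} &= \beta' \sum_{j} c_{ij}\, Y_j(T - T_d) - \mu'\, W_i .
\end{aligned}
```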
Let us define dimensionless variables y, z, and w as deviations from the endemic steady state and a new dimensionless time [14]: We now recast (1) in the new dimensionless variables to obtain the system where y| τ := y(t − τ ), τ is the rescaled time-delay, and http://www.mii.lt/NAHere, the term nβ denotes the growth rate of the cross-reactive immune response, and n denotes the number of variants that share minor epitopes.In an effort to facilitate the analysis, [14] introduced the new variable x = z + w representing the total IR.Equation (3) thus becomes [14] x System (5) has two different types of equilibria, namely: 1) a disease-free equilibrium E 0 := (x 0 , y 0 , z 0 ) = (−(1 + qb)/(ab), −1, −q/a) and 2) a non-zero uniform (endemic) equilibrium E 1 := (x 1 , y 1 , z 1 ) = (0, 0, 0).We must comment that system (5) describes the dynamics of deviations from the uniform endemic equilibrium.The details of this derivation can be found in [15,18] and [4].Thus, even though the equilibrium E 1 of system ( 5) has all its components identical to zero, it infact corresponds to the non-zero uniform endemic equilibrium of the original system (1) [4].Likewise, the equilibrium E 0 is the genuine disease-free equilibrium of (1) [4].It is worth noting at the onset that the studies presented in [12,13] and [14] primarily focus on the dynamics of the non-zero uniform endemic equilibrium.Employing asymptotics and perturbation analysis techniques, the authors show that a range of interesting dynamics result as a consequence of the time delay.The current paper seeks to investigate the stability and Hopf bifurcation of the non-zero uniform endemic equilibrium from the viewpoint of the vigorous direct analysis of the associated characteristic equation. In this article, we study a mathematical model of Plasmodium falciparum malaria [12][13][14]17,18], which is given by the system of DDEs given in (5).As established in [15], when τ = 0, the linearisation of ( 5) about the non-zero uniform endemic equilibrium E 1 = (0, 0, 0) is given by Consider solutions of (5) of the form where c 1 , c 2 , c 3 ∈ R and λ ∈ C. Substituting ( 7) into (6) yields the system Nonlinear Anal.Model.Control, 21(6):851-860 which can be re-arranged and simplified to give the matrix equation whose solutions c 1 , c 2 , and c 3 are non-trivial if, and only if, Denoting the above determinant by D(λ), we see that c 1 , c 2 , and c 3 are non-trivial if, and only if, which defines the characteristic equation of ( 6) about the non-zero uniform endemic equilibrium point E 1 = (x 1 , y 1 , z 1 ) = (0, 0, 0), and where λ ∈ C is the characteristic exponent. Equilibria and Hopf bifurcation Linear stability of the non-zero uniform (endemic) equilibrium in the case of instantaneous IR was studied in [12,13] and [14].In [12,13] and [14], the authors also studied the linear stability of the non-zero uniform (endemic) equilibrium in the case of delayed IR, where the IR delay was taken to be discrete, very small, and fixed for all episodes of infection.In the cited studies, the authors relied heavily on perturbation and asymptotic analysis techniques.Their calculations were somewhat simplified by their assumption that the IR delay τ is very small.In Blyuss et al. 
[3], the authors did consider the case of an arbitrary immune response time delay and subsequently studied the phenomenon of symmetry breaking.In this article, we systematically investigate the local stability and Hopf bifurcation of the equilibrium E 1 = (0, 0, 0).Our approach differs from previous studies primarily because we rely on the direct analysis of the associated characteristic equation (8).We begin our study by noting that the characteristic equation ( 8) can be expressed in the form Setting τ = 0 in (9) leads to the equation Using the well-known Routh-Hurwitz criterion, we conclude that polynomial (10) is stable if, and only if, http://www.mii.lt/NABecause of the fact that all of the parameters in the Recker et al. model [18] are strictly positive, it follows that conditions (11) are always valid.Let λ = iω (ω > 0) in ( 9), and decompose into real and imaginary components to get We obtain from (12) that Using the identity sin 2 (ωτ ) + cos 2 (ωτ ) = 1, equation ( 13) yields the polynomial where When the Recker et al. model [18] is derived, all parameters are taken to be positive with a very specific biological meaning.The implication of this fact is that the parameters a, b, and q defined in equation ( 4) are positive.As a consequence of this, q 5 defined in equation ( 15) is always negative.Let us introduce the change of variable z = ω 2 in ( 14) to obtain H(z) := z 5 + q 1 z 4 + q 2 z 3 + q 3 z 2 + q 4 z + q 5 = 0. Using MAPLE, we find that the transversality condition is satisfied if, and only if, where In order to simplify (17), one would typically substitute (13) into the expression.However, in this case, the resulting expression is too lengthy to include here.In the light of the foregoing discussion and the general Hopf bifurcation theorem of functional differential equations [8], we arrive at the following result concerning the stability of the non-zero uniform endemic equilibrium of system (5) when τ > 0. Stability region in an appropriate two-parameter space In this section, we discuss the asymptotic stability boundary of the equilibrium E 1 in the (b, q)-parameter space.We note at the onset that the parameters b and q appear directly in system (5).To achieve this goal, we attempt to go via an intermediate step in which we may conduct our analysis in an intermediary two-parameter space [7].This approach is along the lines of the work of [7], who did a similar analysis for a nonlinear scalar delay differential equation arising in the work of [16].Unfortunately, the generalisation of their technique to systems of delay differential equations remains an open problemthis article is a modest attempt to address this problem in the particular case of a system of three nonlinear delay differential equations. 
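Since q5 < 0, we have H(0) < 0 while H(z) tends to +∞ as z grows, so H(z) has at least one positive real root z* = ω², which is what guarantees a candidate Hopf frequency. A short numerical sketch of this root-finding step is given below; the coefficients are placeholders, not the q1, ..., q5 of equation (15).

```python
# Minimal sketch of the root-finding step behind the Hopf analysis: since q5 < 0,
# H(0) < 0 while H(z) -> +inf, so H has at least one positive real root z* = omega^2.
# Coefficients below are placeholders, not the paper's q1..q5.
import numpy as np

q = [1.0, 2.3, 1.7, 0.9, 0.4, -0.6]             # [1, q1, q2, q3, q4, q5], with q5 < 0
roots = np.roots(q)                              # all complex roots of H(z)
positive = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
omegas = np.sqrt(positive)                       # candidate Hopf frequencies omega = sqrt(z*)
print("positive roots z*:", positive)
print("candidate omega:", omegas)
```

The corresponding critical delay values would then follow from the sine and cosine relations in (13), which are not reproduced here.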
To begin, we now prepare system (5) for the analysis to come by rescaling some of its parameters.In particular, let t := t/τ , q := qτ , a := aτ , and b := bτ .As a consequence of this, we obtain the following rescaled system: For notational convenience, we now drop the tilde's from system (18), leading to its less cluttered counterpart The linearisation of (19) about the equilibrium E 1 = (0, 0, 0) is given by and the corresponding characteristic equation is Instead of directly analysing the characteristic equation ( 21), we shall now digress and investigate whether we can, instead, work with an intermediary characteristic equation, as suggested and discussed in [7].To begin, let us express (20) using matrix notation to obtain where Equation ( 22) admits non-trivial solutions of the form χ(t) = ce λt , λ ∈ C, and c = (c 1 , c 2 c 3 ) T = 0, with c 1 , c 2 , c 3 ∈ R, if, and only if, λ satisfies the characteristic equation where I is the 3 × 3 identity matrix.Equation ( 10) in [7] is a scalar analogue of (22). Comparison of these two equations makes it abundantly clear that the two-parameter technique championed in [7] may not be possible in dealing with characteristic equations associated with systems of dimension greater than one.We must remark that equation ( 23) is equivalent to (21).It is evident that (21) is complicated, and any attempt aimed at studying the distribution of its zeroes directly is bound to be laborious.The idea behind the derivation of an equivalent and 'simpler' intermediate characteristic equation such as (23) (or equation ( 2) in [7]) is so that we may be able to obtain some insight about (21) by studying (23) in some appropriate 'intermediate' two-parameter space.In the one dimensional case, the intermediate parameters used in the two-parameter analysis would typically be selected to be the coefficients of χ(t) and χ(t − 1) in ( 22).This idea is well-articulated for the one-dimensional case studied in [7], and where the intermediate characteristic equation takes a very simple and prototypical form.The current effort investigates whether idea of a two-parameter analysis promulgated in [7] is extensible to systems of dimension higher than one.It is now clear that analysis of (23) in an appropriate intermediary two-parameter space as posited in [7] is not straightforward in the three-dimensional case, primarily because our two intermediate 'parameters' A and B are matrices (not scalars as in equations ( 2) and (10) derived in [7]), and computing the determinant in (23) will not preserve the matrix structure of these parameters -they will simply disappear during the ensuing calculation.At this point, we turn to looking at (21) directly.Let λ = iω, ω > 0, in (21) and decompose the resulting expression into its real and imaginary components, thus: We then solve (24) for the parameters (b, q) as functions of the frequency ω.We mention that two sets of values of the 2-tuple (b, q) are obtained, as clearly b and q occur nonlinearly in (24).Let us denote the pair of solutions so obtained as (b 1 (ω), q 1 (ω)) and (b 2 (ω), q 2 (ω)).Furthermore, let us denote http://www.mii.lt/NAwhere ξ 1 = 4ω 4 cos 2 ω + 4aω 3 sin(2ω) + 4a 2 ω 2 sin 2 ω, Then, we have the following solutions (b(ω), q(ω)) of ( 24 When b = 0 = q, the characteristic equation (21) simplifies to (λ + a) λ 2 + τ 2 e −λ = 0, whose only root is λ = −a < 0. 
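The boundary computation described above (substitute λ = iω into the characteristic equation, then solve the two real equations Re = 0 and Im = 0 for (b, q) at each ω) can be sketched numerically as follows. The characteristic function in this sketch is only a placeholder standing in for equation (21), which is not reproduced in the text; the actual expression must be substituted before use, and since b and q enter it nonlinearly there, the initial guess selects between the two solution branches.

```python
# Sketch of tracing the stability boundary (b(omega), q(omega)): solve the two real
# equations Re = 0 and Im = 0 at each omega. char_eq is a PLACEHOLDER for equation
# (21); replace it with the actual characteristic function before use.
import numpy as np
from scipy.optimize import fsolve

a, tau = 1.0, 1.0                                    # illustrative parameter values

def char_eq(lam, b, q):
    # Placeholder only; affine in (b, q), unlike the genuine equation (21).
    return (lam + a) * (lam**2 + b * lam + q) + tau**2 * np.exp(-lam) * (lam + b + q)

def boundary_point(omega, guess=(0.5, 0.5)):
    def real_imag(p):
        b, q = p
        val = char_eq(1j * omega, b, q)
        return [val.real, val.imag]
    return fsolve(real_imag, guess)

for omega in np.linspace(0.5, 2.0, 4):
    b_c, q_c = boundary_point(omega)
    print(f"omega = {omega:.2f}  ->  boundary point (b, q) ~ ({b_c:.3f}, {q_c:.3f})")
```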
This implies that any region containing (0, 0) in the (b, q)-parameter space is necessarily the asymptotic stability region.In such a region, the quasi-polynomial (21) has no roots whose real part is located in the right half-plane.As one crosses the boundary of the stability region, two roots whose real part is in the right half-plane appear via a Hopf bifurcation of the equilibrium E 1 = (0, 0, 0). Conclusion This article has focussed on the non-zero uniform endemic equilibrium E 1 of the celebrated Recker et al. model [18] endowed with an arbitrary immune response time delay.In the process, we were able to establish concrete conditions under which a Hopf bifurcation occurs at the equilibrium E 1 .In Section 3 of the article, we attempted to give a description of the asymptotic stability boundary of E 1 in an appropriate two-parameter space.In carrying out this analysis, we also showed that the technique advertised vigorously in the work of [7] may not be extensible to higher dimensional systems, unfortunately.
3,285.8
2016-12-18T00:00:00.000
[ "Mathematics", "Medicine" ]
A Multiagent and Machine Learning based Hybrid NIDS for Known and Unknown Cyber-Attacks The objective of this paper is to propose a hybrid Network Intrusion Detection System (NIDS) for the detection of cyber-attacks that may target modern computer networks. Indeed, in the era of technological evolution that the world is currently experiencing, hackers are constantly inventing new attack mechanisms that can bypass traditional security systems. Thus, NIDS are now an essential security brick to be deployed in corporate networks to detect known and zero-day attacks. In this research work, we propose a hybrid NIDS model based on the use of both a signature-based NIDS and an anomaly detection NIDS. The proposed system is based on agent technology, SNORT signature-based NIDS, machine learning techniques and the CICIDS2017 dataset is used for training and evaluation purposes. Thus, the CICIDS2017 dataset has undergone several pre-processing actions, namely, dataset cleaning, and dataset balancing as well as reducing the number of attributes (from 79 to 33 attributes). In addition, a set of machine learning algorithms are used, such as decision tree, random forest, Naive Bayes and multilayer perceptron, and are evaluated using some metrics, such as recall, precision, F-measure and accuracy. The detection methods used give very satisfactory results in terms of modeling benign network traffic and the accuracy reaches 99.9% for some algorithms. Keywords—Intrusion detection; zero-day attacks; machine learning; multi-agent systems; security I. INTRODUCTION The Global Internet Usage Statistics report confirms a growth of 1,114% and more than 2 quintillion bytes of data are generated every day. Along with this growth, cybercrime is becoming more sophisticated and continues to grow day by day [1,2,3]. As a result, the risks of being attacked and targeted by the hacker community remain more likely and could be costly for victims of cyber-attacks. Thus, the importance of Network Intrusion Detection Systems (NIDS) continues to grow and attract the interest of researchers [4] and NIDSs have become indispensable for securing network infrastructures against cyber-attacks [5]. However, the evolution of NIDSs is slowed down due to several challenges that are mainly related to the volume of network data, the emergence of increasingly sophisticated attacks [6] and unbalanced learning datasets [42]. In addition, real-time processing of network traffic is a very important feature of an effective NIDS to monitor all network events [8]. Not to mention that network traffic is continuously changing and therefore, the training datasets need to be updated regularly to effectively evaluate the detection models [5]. According to [22] and [42], the lack of more adequate datasets for anomaly detection-based intrusion detection has caused intrusion detection methods to suffer in analysis and deployment. The authors of [7] confirm that all these challenges remain a blocking obstacle against the evolution of the IDS domain in terms of performance, accuracy, and execution time during the learning and detection phases. Furthermore, the approaches proposed in the literature are not clear in terms of architecture and do not opt for hybrid architectures adopting, both, signature-based and anomaly detection-based NIDS. Most of the research works, carried out in this sense, remain theoretical and do not propose more efficient mechanisms capable of detecting known and unknown attacks. 
In this research work, we will propose an effective intrusion detection approach to detect known and unknown cyber-attacks. Our approach consists of a Snort-based intrusion detection model to detect known intrusions and then machine learning techniques to detect any suspicious deviation from the baseline profile of benign network traffic. This baseline is designed by regularly training the system on normal network events using machine learning methods. The selection of the research works carried out by the scientific community working on cybersecurity was done using a database of 17 journals (Q1 and Q2) and the used search terms are presented in Fig. 1 according to the methodology of [44]. The remainder of this paper is structured as follows. Section II highlights some related works conducted by scientific community. Section III highlights gives some basics related to our theme of research. Section IV presents our proposed approach and finally Section V handles the conducted tests and experiments to validate the classification of benign network traffic. A. Related Work In this section, we will highlight some of the research works that have been carried out by researchers to ensure a quick advance of intrusion detection mechanisms based mainly on Machine Learning, Data Mining and Deep Learning techniques. Since the beginning, researchers started to propose various approaches to effectively deal with the problem of Intrusion Detection. Notably, the Table I below summarizes some of the research works carried out by the scientific community to contribute in enhancing NIDS. B. Discussion It is true that several research works have been conducted by researchers to develop the field of intrusion detection systems. However, most of the aforementioned works have shortcomings in terms of architecture, datasets used as well as the machine learning methods used and each research work addresses a specific problem. For example, in the paper [25], the researcher limited himself to intrusion detection in wireless networks, in [39], the author proposed an IDS for SDN-based networks etc. In our research work, we will propose a universal NIDS, capable of being deployed in any type of computer networks. Our NIDS model will be based on a multi-layer architecture with the use of the multi-agent paradigm and will also be based on a hybrid detection mechanism combining a Signature-based NIDS (SNIDS) and an Anomaly-based NIDS (ADNIDS). 376 | P a g e www.ijacsa.thesai.org Our NIDS model will be based on mutli-agent technology in order to make the system modular and distributed. Thus, the proposed system will be extensible and capable of adding other components to perform large-scale detection missions in huge networks. Moreover, as we have already said, our system combines both detection mechanisms (SNIDS and ADNIDS) in order to detect all types of attacks (known and unknown). The used SNIDS is based on the famous open source NIDS SNORT and allows the detection of known intrusions. Moreover, ADNIDS intervenes when the packet is not recognized by the SNIDS and compares the packet's characteristics against the baseline patterns (benign traffic) modelled by supervised machine learning techniques applied to the cleaned and optimized CICIDS2017 dataset. In order to improve the accuracy and precision of the used detection mechanism to model the benign network traffic, we opted for cleaning and reducing the dimension of the CICIDS2017 dataset. 
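To make the two-stage mechanism explicit, the decision flow can be sketched as follows. Every name in the sketch is a placeholder rather than a real Snort or model API, and in the actual system these stages are distributed across separate agents.

```python
# Schematic sketch of the hybrid decision flow: a signature check (SNIDS) first,
# then an anomaly check against the learned benign baseline (ADNIDS). All names
# below are placeholders, not a real Snort or model API.
def inspect_packet(packet, snort_rules, baseline_model, extract_features):
    if snort_rules.matches(packet):                    # known attack signature
        return "alert: known attack"
    features = extract_features(packet)                # flow statistics -> feature vector
    if baseline_model.predict([features])[0] == "BENIGN":
        return "pass: matches benign baseline"
    return "alert: deviation from baseline (possible zero-day)"
```

The anomaly stage is only as reliable as the benign traffic behind the baseline model, which is why the preparation of the training dataset matters.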
Thus, the used training dataset is devoid of any unnecessary information that could falsify the classification results. A. Cybersecurity Cybersecurity is a discipline that has been evolving exponentially over the past decade [9]. It refers to the set of practices to protect the cyberspace environment against suspicious activities that may affect its security principles [10]. Among the security principles, we have the integrity of the data that aims to prevent the alteration of the information by unauthorized persons. The second principle is confidentiality, which confirms that the data should not be accessible by malicious people and finally the principle of high availability which ensures that the computer assets are available at any time to serve legitimate requests [11,12]. B. Intrusion Detection Systems The Intrusion Detection System (IDS) is one of the key components for ensuring the security of mobile clouds [13]. IDSs are classified according to the data source and the used detection method. Based on the nature of data sources, we can distinguish between two types of IDS: Host-based IDS and Network-based IDS. Furthermore, based on the used analysis method, we have two types of IDS: Signature-based IDS and Anomaly-based IDS [2]. NIDS analyses network traffic passing through computer environments [14,3]. Its role is mainly to monitor the network events against suspicious activities that may violate or bypass security policies of security components such as firewalls, Web Application Firewalls and proxies. A NIDS usually consists of three main modules which are Monitoring, Analysis and detection, and Alarm modules. The Monitoring module observes network traffic, resource usage and patterns. The Analysis and Detection module is the key part of the system; it identifies cyber-attacks based on specific algorithms. Finally, the Alarm module is responsible for notifying the security administrators in case of possible intrusions [15]. Furthermore, conventional security mechanisms cannot detect unknown zero-day attacks [16] that have no signature or whose patterns are not yet known to security experts. Another issue that modern computer networks are facing is that network traffic is responding to Big Data issues (volume, variety and velocity). As a result, network traffic processing must make use of Big Data technologies to improve the quality of analysis and to reduce execution time during learning phases [17]. C. Snort: Open source Network Intrusion Detection Snort is an open source IDS developed by Sourcefire in 1998 and has gained a very good reputation over the past decade due to its frequent use by researchers. Snort is structured in a TCP/IP stack architecture to capture and inspect network packets. This IDS is in its version 3.0 just released to overcome the single-thread limitation to support by default multithreading [18]. D. CICIDS2017 Dataset A dataset for intrusion detection is developed by collecting network traffic events from heterogeneous sources. These events can describe system, user and configuration behaviors [19]. These datasets do not include network events that can represent zero-day attacks [20]. The CICIDS2017 dataset is one of the most modern datasets [21]. IV. PROPOSED APPROACH A. Proposed Model 1) Architecture of the proposed model: Fig. 2 presents the proposed intrusion detection system model to ensure the detection of known and unknown attacks (0 Day) within any type of computer network. 
The proposed architecture is mainly based on three layers that collaborate together to perform cyber-attack detection missions. 2) Components of the proposed model The system has three main layers: • Data Acquisition Layer (DAL): This layer is responsible for data capture and pre-processing of network traffic. It also performs feature extraction to transform the captured network packets into data vectors to be used by machine learning methods. The DAL includes Snort Agent, a small component responsible for pre-processing tasks and an agent responsible for feature extraction. • Detection Layer (DL): This component is responsible for detecting deviations from a network baseline. It is based on a machine learning model developed after training the system on a training dataset containing benign network traffic. The DL also sends alerts when an intrusion is detected and allows the security administrator to generate reports and take actions on the network and system infrastructure in case of a security incident. • Machine Learning Layer: This part allows the NIDS system to perform training tasks on normal network behavior. Using supervised machine learning techniques on a dataset including benign network traffic, a model is developed that will check the fit to detect deviations from the designed baseline. B. Operating Principle Our system must be trained regularly on benign network traffic devoid of any type of cyber-attacks. Thus, datasets like CICIDS2017 are used to develop and design a baseline identifying the normal operation of a computer network. The training process of the proposed NIDS is mainly done in six steps: • Data acquisition: The system collects data to train itself and to obtain the network baseline describing normal network behaviors. We used the CICIDS2017 dataset (Benign traffic) devoid of any kind of cyber-attacks. • Pre-processing: In order for the data to be exploitable by machine learning based classification techniques, data preprocessing actions must be undertaken. Thus, missing value removal, scaling and partitioning techniques are all used to improve the quality of the training dataset. • Classification: In this step, machine leaning based classification techniques are used to model the normal behavior of the network based on the benign dataset. Several machine learning algorithms are used to select the one with the highest accuracy, with very low false alarm rates and with an increased processing speed. • Testing and validation: After using a set of machine learning techniques, it is now time to evaluate these algorithms based on specific metrics that address intrusion detection issues. From there, the most efficient machine learning technique is chosen to model the normal network traffic. • Use of the model "Baseline": After modeling the baseline of the network during normal operation based on the CICIDS2017 dataset, the generated model will be used to identify any deviation from normal behavior. Thus, unknown 0day attacks can be easily identified. Fig. 3 shows the detection principle of our NIDS model. Indeed, our system is supposed to train beforehand on benign network traffic that does not include any trace of cyber-attack, so the generated model will be considered as the network baseline to which the system will compare the real network packets. C. 
Real Time Detection Flowchart Our system ensures the detection of intrusions in the networks according to the following steps: • Step 1 - Sniffing and gathering: During this step, the NIDS listens to the network to collect all the packets that are passing through it. To do this, the proposed model relies on the Snort agent to capture the network traffic. • Step 2 - Matching check: During this step, the Snort agent compares the patterns of the network packets it receives against a signature database describing all known cyber-attacks (Snort DB). Based on the result of the matching check, the Snort agent notifies the NIDS administrator if there is a known attack in the network. • Step 3 - Data preprocessing: At this point, the captured packet is not recognized by Snort's knowledge base. Therefore, the network traffic must undergo preprocessing operations so that it can be consumed by machine learning algorithms. Thus, feature extraction techniques are applied to the captured network traffic in order to transform the data streams into data vectors that can be exploited by machine learning models. • Step 4 - Filtering and matching check: After transforming the network flows into data vectors, the Filtering Agent checks the match between the data it receives and the "Network Baseline" model previously generated after training the system on benign network traffic. Depending on the result of the matching verification, two scenarios can arise: if the network packet is normal, no alert is generated; if the packet does not match the network baseline, the NIDS administrator must be informed in time to analyse the event. • Step 5 - Enrichment of the Snort knowledge base: In case an event deviates from the network baseline, the NIDS system must notify the administrator. The administrator must then intervene to diagnose and analyse the suspicious event, and can also contact security vendors and publishers to identify the nature of the suspicious network event. The security administrator can then create rules in Snort to intercept similar events that may occur in the future. The detected suspicious network event may be a zero-day attack for which the security vendors have not yet developed a patch or signature. V. EXPERIMENTATION AND TESTS This section focuses on the experiments and tests performed to evaluate the performance of the different algorithms used for benign traffic modeling. For this purpose, the CICIDS2017 dataset is used, and it is therefore necessary to analyze and clean it before it is used by machine learning algorithms. A. Composition of the used Dataset We analyzed the CICIDS2017 dataset published by the Canadian Institute for Cybersecurity using the Pandas framework in Python. Pandas allowed us to analyze the content of the various CSV files constituting CICIDS2017, a dataset dedicated to research in the field of intrusion detection systems based on machine learning and deep learning. The CICIDS2017 dataset consists of a set of eight files in CSV format; these files include data about network traffic captured during five days, from Monday to Friday. After analyzing the content of the set of CSV files using Pandas, we were able to identify the composition of the CICIDS2017 dataset, and Table II summarizes the obtained results. From these statistics, it appears that the dataset is unbalanced due to the abundance of normal traffic compared to attack traffic, in addition to the existence of few records of certain types of attacks.
This imbalance in the traffic classes automatically implies a biased machine learning model. Knowing that the class with a lot of traffic will be favored over the others with less records during the learning stage. As a result, the classes with few records make the machine learning model learn nothing about them and consequently have a biased detection model towards attacks with few records in the learning dataset. B. Cleaning and Pre-processing of the Training Dataset As we already said, the CICIDS2017 dataset dedicated to researchers operating in the field of intrusion detection is composed of eight files. Hence, these files need to be merged into one more comprehensive, one including all the labelled network traffic. The concat() function in Pandas was used to concatenate the set of CSV files and then the to_csv() command could then be used to export the concatenated dataset in CSV format. Fig. 4 shows the workflow adopted to clean, balance and reduce the size of the CICIDS2017 dataset. C. Experimenting with Machine Learning Techniques to Model benign Traffic In this part, we will see some machine learning algorithms that we applied on the optimized training dataset CICIDS2017. This experimentation consists in trying a set of algorithms that we will compare between them in order to retain only those effective and efficient that allow us to better modeling a network baseline during its normal operation (benign traffic). Throughout this phase, the Knime tool is used to evaluate the performance of the machine learning algorithms applied on the optimized dataset. Table III shows the confusion matrix and the Table IV summarizes the obtained results after applying DT algorithm on the optimized CICIDS2017 dataset. The obtained results are conclusive and highlight the efficiency of the DT algorithm. We are interested in the accuracy of the algorithm with respect to the recognition of benign traffic, especially since our intrusion detection system relies on a baseline of the network during its normal operation. Thus, the Decision Tree was able to detect benign traffic with an accuracy of 99.99% and this, with a total number of false alarms equal to 229 (135 False Negatives (FN) and 94 False Positives (FP)). b) Random Forest: The Random Forest is used to make the NIDS learn the normal behavior of the network. This algorithm performed very well in classifying the different classes of network traffic. As can be seen in Table VI, the detection accuracy reaches 99.8% for benign traffic using Random Forest classifier. RF is very effective in identifying benign traffic and thus designing the network baseline during its normal operation, knowing that the number of false alarms does not exceed 353 (FP: 75 and FN: 278) and with a number of TP equal to 74561 (see Table V). c) Naïve Bayes: The Naive Bayes (NB) was also tested and unfortunately gave poor detection results for most classes of the dataset. For example, the correct detection of benign traffic is almost zero (accuracy reaches 100% for misclassified instances). Tables VII and VIII below show the statistics related to the use of NB algorithm. The classification of benign traffic is very low compared to other algorithms, as the accuracy does not exceed 70%. D. Summary of benign Traffic Classification Results This section summarizes the obtained results after applying the classification algorithms on the optimized CICIDS2017 dataset. We emphasize that we are interested in modeling the network baseline in the absence of any suspicious activity. 
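As an illustration of the workflow described above, the sketch below merges the CSV files with pandas, caps the dominant benign class, and compares Decision Tree, Random Forest and Naive Bayes classifiers on benign-traffic recognition with scikit-learn. The file paths, the "Label" column name, the down-sampling cap and the split ratio are assumptions, and the paper's own classification experiments were run in the Knime tool rather than scikit-learn.

```python
# Illustrative sketch: merge the eight CICIDS2017 CSV files, clean them,
# and compare classifiers on benign-traffic recognition.
import glob
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix, accuracy_score

# Merge and export the combined dataset (concat() and to_csv(), as in the text).
files = sorted(glob.glob("CICIDS2017/*.csv"))          # hypothetical directory
merged = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)
merged.columns = merged.columns.str.strip()
merged = merged.replace([np.inf, -np.inf], np.nan).dropna().drop_duplicates()
merged.to_csv("cicids2017_merged.csv", index=False)

# Ease the class imbalance by capping the dominant BENIGN class (arbitrary cap).
benign = merged[merged["Label"] == "BENIGN"]
benign = benign.sample(n=min(200_000, len(benign)), random_state=0)
optimized = pd.concat([benign, merged[merged["Label"] != "BENIGN"]], ignore_index=True)

# Binary target: benign vs. attack, since the baseline models normal traffic.
X = optimized.select_dtypes(include=[np.number])
y = (optimized["Label"] == "BENIGN").astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

for name, clf in [("Decision Tree", DecisionTreeClassifier()),
                  ("Random Forest", RandomForestClassifier(n_estimators=100)),
                  ("Naive Bayes", GaussianNB())]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name, "accuracy:", round(accuracy_score(y_te, pred), 4))
    print(confusion_matrix(y_te, pred))   # FP/FN counts for false-alarm analysis
```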
As a result, the different algorithms used at training time are evaluated based on their ability to classify benign traffic. Thus, Table XI summarizes the results obtained after applying the set of learning algorithms presented in the previous section. From the summary table, it appears that most of the techniques were able to model normal traffic. However, Naive Bayes did not perform well in classifying benign traffic. In addition, the Decision Tree and Random Forest are very efficient in terms of accuracy during training. However, the time complexity of the used algorithms is unfortunately not given in this work and will be the subject of our next article. For example, according to [43], the Decision Tree has a time complexity equal to O(mn²), where n is the number of instances and m represents the number of attributes. The time complexity metric allows for a better evaluation of machine learning methods. VI. CONCLUSION Many approaches based on machine learning techniques have been proposed to develop more effective and efficient NIDS. However, existing intrusion detection systems are still not able to detect unknown cyber-attacks effectively. In this research work, we proposed a new approach based on a multi-agent model, the Snort IDS and machine learning techniques. The proposed NIDS is capable of handling network traffic that raises big data issues in terms of volume and velocity. First, we analysed the CICIDS2017 dataset with the aim of gaining more visibility into its composition, cleaned it up and removed unnecessary attributes. Then, we tried a set of classifiers on the optimized dataset in order to choose the most efficient algorithm in terms of detection and execution time. The Decision Tree and Random Forest algorithms give a detection accuracy of more than 99.8% for the detection of benign traffic. However, the work does not end here, and the following tasks remain to be accomplished in future work: • Defining how to create Snort rules when a deviation from the baseline is detected, • Using the benign traffic model to recognize normal packets in a production environment, • Using a redundant and powerful module for processing and storing network traffic, • Testing and validating the NIDS in a real environment.
5,075
2021-01-01T00:00:00.000
[ "Computer Science" ]
Pneumonia and Eye Disease Detection using Convolutional Neural Networks —Automatic disease detection systems based on Convolutional Neural Networks (CNNs) are proposed in this paper for helping the medical professionals in the detection of diseases from scan and X-ray images. CNN based classification helps decision making in a prompt manner with high precision. CNNs are a subset of deep learning which is a branch of Artificial Intelligence. The main advantage of CNNs compared to other deep learning algorithms is that they require minimal pre-processing. In the proposed disease detection system, two medical image datasets consisting of Optical Coherence Tomography (OCT) and chest X-ray images of 1-5 year-old children are considered and used as inputs. The medical images are processed and classified using CNN and various performance measuring parameters such as accuracy, loss, and training time are measured. The system is then implemented in hardware, where the testing is done using the trained models. The result shows that the validation accuracy obtained in the case of the eye dataset is around 90% whereas in the case of lung dataset it is around 63%. The proposed system aims to help medical professionals to provide a diagnosis with better accuracy thus helping in reducing infant mortality due to pneumonia and allowing finding the severity of eye disease at an earlier stage. INTRODUCTION A medical image based disease detection system using CNN is proposed in this paper. The suggested system has the ability of detecting pneumonia and eye disease from X-rays and scan images respectively. The novel feature of the proposed system is that it has been implemented using low cost hardware. In [1], a diagnostic system is proposed for detecting retinal diseases. The result shows that the performance of the proposed method is comparable to that of human experts. However, the implementation of the system using hardware is not suggested. A computationally efficient algorithm is introduced in [2]. Adam stochastic optimization method is used to train the neural network. Empirical results demonstrate that Adam works well in practice and compares favourably to other stochastic optimization methods. In [3], the effect of the convolutional network depth on its accuracy is investigated and changes in architectural configuration which improve the accuracy of the algorithm are proposed. A deep-learning-based approach to detect diseases and pests in tomato plants using images is presented in [4]. The images are captured in-place by camera devices with various resolutions and are processed. The experimental results show that the proposed system can effectively recognize nine different types of diseases and pests in tomato plants. In [5], the Face Detection and Face Recognition pipeline framework (FDREnet) is proposed which involves face detection through histograms of oriented gradients and uses Siamese technique and contrastive loss to train a deep learning architecture. However, disease detection is not investigated in this paper. On the other hand, a review of the applications of AI in soil management, crop management, weed management and disease management can be seen in [6], but disease management and disease detection in humans using AI are not investigated. II. DATASETS USED In order to test the proposed idea, two datasets were considered. The Lung dataset consisted from images from [7] and the eye dataset from images from [8]. Data are essential to train any neural network. 
The neural network, apart from other parameters, is only as good as the data it is trained on. For training the CNN, medical image data are used. Two different kinds of publicly available medical image datasets are considered for training two convolutional neural networks. OCT images in the iris region of the eye are considered for eye disease detection. OCT is a non-invasive method of capturing biological tissues using low-coherence light. It can capture two dimensional and three dimensional images of micro meter level. The images of the OCT scan are classified under 4 categories: i) choroidal neovascularization, ii) diabetic macular edema, iii) multiple drusen, and iv) normal. Choroidal neovascularization is the creation of new blood vessels in the choroid region of the eye. This problem is a major cause of vision loss. Macular edema is build-up of fluid in an area in the center of the retina. This build up causes the macular to thicken, distorting vision. Drusen consists of multiple deposits under retina. Drusen is a fatty protein made up of lipids. Having drusen may increase the possibility of age-related macular degeneration. The dataset contains normal/healthy iris scan images too. The images are collected from [8] dataset which contains more than 5GB of 84438 images from [9,10] which are classified on the above mentioned categories. Chest x-ray images of children belonging to 3 classifications: i) viral pneumonia, ii) bacterial pneumonia, and iii) normal were taken from [7] and are considered in this study. Pneumonia is an infection that accumulates in the lung's air sacs causing hindrance for breathing. The lung image dataset contains 1GB of 5238 images belonging to the 3 above mentioned categories. Both datasets were split into three sets: train, test and validation. Figure 1 describes the training of a neural network. The given data is split into training, validation and testing data with each utilizing 70%, 20% and 10% of the data respectively. After each iteration of training, the neural network is tested with the validation data to see its performance at that instant. After completing the whole training process, the performance is evaluated using the testing data. This proposed method is heavily inspired from [11] and a similar neural network with the lung data which was presented in [12]. The image datasets [7][8][9][10] are first collected and annotated or labeled in order to distinguish the normal images from images with diseases. To generate the training dataset the existing labeled data are further used to generate a new dataset using a technique called augmentation. Annotated and augmented data are used for training the proposed neural network. Block diagram of the proposed system III. CONVOLUTIONAL NEURAL NETWORKS CNNs [3] are a type of deep artificial neural networks, used mainly to identify and cluster images, and perform object recognition. A CNN consists of image processing layers and neural network layers namely: (a) convolutional layer, (b) pooling layer, (c) flattening layer, (d) ReLU layer, and (e) Softmax layer. These layers are described briefly below. A. Convolutional Layer The convolutional layer is the main building block of a CNN. The layer's parameters consist of a set of user-defined learnable filters (or kernels), which is generally a 3×3 matrix, and iterates through each submatrix of the input. The number of input filters used is generally of the order of 2N. 
During a forward pass, each filter is convolved across the dimensions of the input image matrix, the mathematical operation carried out being a dot product, thus producing a 2-dimensional feature-extracted matrix for that filter. This reveals various details, such as vertical or horizontal edges of the images, which are extracted and fed into the next layer. The weights that are used are generated randomly using the Glorot uniform distribution function. Figure 2 shows the 32 filters used in the proposed CNN. Figure 3(c) demonstrates the output image when an input image, shown in Figure 3(a), is convolved with one of the displayed filters. B. Pooling Layer Another important concept used in CNNs is pooling, which is a form of non-linear down-sampling. Out of the several pooling functions analyzed in [13], max-pooling is the most effective. Max-pooling partitions the input image into a set of (n×n) (generally 2×2) sub-matrices and the output is the maximum value of each. The convolved image is first converted into arrays and then maxpooling is performed. Figure 3 displays the convolution and maxpooling steps. In maxpooling, the dimensions of the image are reduced from a 50×50 matrix to a 24×24 matrix. C. Flattening Layer The output from the pooling layer will be in a matrix form, which cannot be fed into the neural network. The flattening layer converts the n×n matrix from the pooling layer into an $n^2 \times 1$ matrix, which is a compatible format to be fed into the neural network. D. ReLU Layer ReLU is the abbreviation of Rectified Linear Unit, which applies a non-saturating activation function. These functions remove negative values by replacing them with zero. It increases the nonlinear properties of the decision function. This activation function is used in the input and hidden layers of the neural network. The type of ReLU used is leaky ReLU. ReLU as explained in [14] is used in the neural network layers. Figure 4 shows a graphical representation of the leaky ReLU activation function. Mathematically, the Leaky ReLU can be defined as $f(x) = x$ for $x \ge 0$ and $f(x) = \alpha x$ for $x < 0$, where $\alpha$ is a small positive constant. E. Softmax Layer This layer is predominantly used when the neural network solves multiclass-classification problems. It usually consists of a number of output nodes with softmax as the activation function. The softmax function assigns a probability to each node in the output layer. These probability values are normalized to one. The node with the highest value is the prediction of the neural network. The ReLU layer and the softmax layer both use backpropagation [15] and forward propagation to train the CNN. Figure 5 shows the softmax function. Mathematically, the softmax function can be defined as $\sigma(\mathbf{z})_i = e^{z_i} / \sum_{j=1}^{k} e^{z_j}$ (3), where $i = 1, 2, \dots, k$ and $\mathbf{z} = (z_1, z_2, \dots, z_k)$. Equation (3) applies the standard exponential function to each element $z_i$ of the input vector $\mathbf{z}$ and normalizes these values by dividing by their sum. This normalization ensures that the sum of the components of the output vector $\sigma(\mathbf{z})$ is 1. 1) Loss Function The loss function, or cost function, is generally the difference between the actual output and the predicted output. The main aim of the loss function is to reduce error, i.e., to minimize the difference between the predicted value and the actual value. The loss function predominantly used for both datasets is the mean squared error. In this method, the difference between the predicted and the actual output is squared. It is better than the gradient descent methods for decreasing loss [16]. The sum of all these squares is divided by their total number.
Mathematically, this can be represented as $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2$, where n is the number of inputs, $Y_i$ is the actual output and $\hat{Y}_i$ is the predicted output. 2) Optimizer The optimizer is a function which is guided by the loss function to update the weights so that the loss is minimized. It does so by changing the learning rate after every iteration in accordance with the calculated loss. The weights of each node change based on the learning rate. If the learning rate is too high, the neural network may not learn enough to generalize. If the learning rate is too low, the neural network may learn very slowly. The neural network needs to learn at an optimum speed and in an optimum manner, and that is helped by the optimizer function. The optimizers used were the Adam optimizer and the Root Mean Square Propagation optimizer. 3) Adam Optimizer It is one of the best optimizers available. It is computationally efficient, it augments optimized learning and has very little memory requirements. Adam [2] stands for adaptive moment estimation. Instead of changing the weights based on the first moment (mean) alone or on the second moment (variance) alone, it uses both the first and the second moment to update the learning parameters: $\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon}\,\hat{m}_t$ (5), where $\hat{m}_t$ and $\hat{v}_t$ are the first and second moment estimates respectively, $\eta$ is the learning rate and $\epsilon$ is a small smoothing constant. Table I shows the results for the lung dataset [7]. The architecture comprises an input layer, multiple hidden layers and an output layer. The training accuracy, training losses, validation accuracy and validation losses with respect to the number of iterations used for simulation are listed. The simulation is performed using the Python Integrated Development Environment (IDE) Spyder. The maximum validation accuracy obtained in the case of the lung dataset is only around 63% with 10 epochs/iterations and 5215 steps per epoch. This result can be further improved with a larger dataset. In Table II, the complete architecture consisting of two pairs of convolution layers (named conv2d_13 and conv2d_14), maxpooling layers (named max_pooling2d_13 and max_pooling2d_14), and a flattening layer is shown. The optimal artificial neural network consists of an input layer of 7 nodes and an output layer of 3 nodes, one for each class. Tables II and IV indicate the number of input weights processed through each given layer. Total params is the sum of all the input weights in the entire architecture of the neural network. The output shape denotes the number of inputs at a time (given by None) followed by the expected shape of the input. Table III shows the observations for the eye dataset [8][9][10]. The maximum validation accuracy obtained in the case of the eye dataset is around 90%, which can be further improved with a larger dataset. Table IV shows the summary of the neural network model which yielded the best parameters during training and validation for eye disease detection. The number of epochs used for the eye dataset ranged from 5 to 64, and the maximum validation accuracy was obtained for 15 epochs. It can be seen from Table III that for the eye dataset the optimizer predominantly used was the Adam optimizer. In the 5th trial of Table III, the RMS Prop optimizer was used. The loss was the mean squared error loss function, with the exception of the 8th trial where the categorical cross entropy loss function was used.
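The architecture summarized in Tables II and IV (two convolution/max-pooling pairs, a flattening layer and dense layers, trained with the Adam optimizer and a mean squared error loss) could be reconstructed in Keras roughly as follows. The input size, kernel size and activation choices are assumptions where the text does not state them; this is a sketch, not the authors' exact model.

```python
# Rough Keras reconstruction of the described architecture (assumptions noted inline).
from tensorflow.keras import layers, models

def build_model(num_classes):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(50, 50, 1)),  # 32 filters of 3x3; 50x50 grayscale input assumed
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(7, activation="relu"),              # 7-node layer mentioned for the lung model
        layers.Dense(num_classes, activation="softmax"), # 3 classes (lung) or 4 classes (eye)
    ])
    # Adam optimizer and mean squared error loss, as reported in the tables.
    model.compile(optimizer="adam", loss="mean_squared_error", metrics=["accuracy"])
    return model

lung_model = build_model(3)   # viral pneumonia, bacterial pneumonia, normal
eye_model = build_model(4)    # CNV, DME, drusen, normal
```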
The complete architecture consists of two pairs of convolution layers (named conv2d and conv2d_1) and maxpooling layers (named max_pooling2d and max_pooling2d_1), and a flattening layer named Flatten. B. Eye Dataset The ANN has an output layer consisting of four nodes, one for each class. The output shape consists of a 4-dimensional array for the 2 pairs of convolutional and maxpooling layers. The 1st dimension (denoted by None in all the given pairs) is the number of inputs that will be fed into that layer at a given time. It is shown as None because these observations were taken after training, when there was no input being fed in at that instant. The remaining 3 dimensions give the dimensions of a single input unit. The same holds true for the remaining flattening and neural network layers. V. DEPLOYMENT The neural networks which yielded the best parameters were saved in h5 format and were deployed on a Raspberry Pi running Raspbian with Python 3.5.3. A simple Graphical User Interface (GUI) was made where the user is asked to enter the directory of the image, and the neural network makes the prediction and displays the result. Snapshots of the results and of the GUI output for both datasets can be seen in Figures 6-11. The complete setup used to implement the proposed system in hardware is shown in Figure 12. The hardware part includes the Raspberry Pi board, which interfaces with the GUI built using the Tkinter library in the Python IDE. Further, there are two ways to connect the LCD to the Raspberry Pi board: 4-bit mode and 8-bit mode. In this work, 4-bit mode was used, in which the byte to be sent is split into two sets of 4 bits each (upper bits and lower bits) that are sent one after the other over 4 data wires. Figures 13 and 14 show the eye disease detection system and the pneumonia detection system implemented in hardware. VI. CONCLUSIONS AND FUTURE WORK A medical image based disease detection system using Convolutional Neural Networks is proposed and developed. The eye disease detection system effectively classifies normal eye images and eye images with diseases like choroidal neovascularization, diabetic macular edema, and multiple drusen. The lung image dataset [7] consisted of bacterial pneumonia, viral pneumonia, and normal lung X-ray images of children in the age group of 1-5 years. The training model was simulated using Python libraries like Tensorflow, Keras, Skimage, etc. to improve training speed. The enhanced speed of the training model made the real-time implementation of the systems more feasible. The proposed system has the potential to be used in generalized high-end applications in biomedical imaging and provides a cost-effective solution on a single-board computer (Raspberry Pi). Regarding future work, focus will be given to improving the current results. Another promising application will be to extend the idea to the identification of various diseases not only in humans but also in plants and crops.
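A minimal sketch of the deployment step described above (loading a saved .h5 model on the Raspberry Pi and classifying a user-supplied image) might look as follows. The file name, class-label ordering and the console prompt standing in for the Tkinter GUI are assumptions.

```python
# Sketch: load an exported model and classify one image chosen by the user.
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

LABELS = ["bacterial pneumonia", "normal", "viral pneumonia"]  # assumed ordering

model = load_model("lung_model.h5")          # hypothetical file exported after training

def predict(path):
    img = image.load_img(path, target_size=(50, 50), color_mode="grayscale")
    arr = image.img_to_array(img) / 255.0
    probs = model.predict(np.expand_dims(arr, axis=0))[0]
    return LABELS[int(np.argmax(probs))]

if __name__ == "__main__":
    # Simple console stand-in for the Tkinter GUI described in the text.
    print(predict(input("Path to X-ray image: ")))
```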
3,948
2020-06-07T00:00:00.000
[ "Medicine", "Computer Science" ]
Estimation of Relationship Between Aerosol Optical Depth, PM 10 and Visibility in Separation of Synoptic Codes, As Important Parameters in Researches Connected to Aerosols; Using Genetic Algorithm in Aerosol Optical Depth (AOD) is closely related to PM 10 (mass concentration of particulate matter with aero dynamical diameter less than 10 µm) and visibility; and all of these three parameters are so important and useful to studies connected to aerosols, troposphere dust, air pollution and atmospheric radiation budget. This study analyzed the mathematic relations between AOD, PM 10 and visibility whit separation of 05, 06 and 07 synoptic conditions; whit using evolutional Genetic Algorithm. The area’s case study has been Yazd city as representative of central of Iran for 5 years (2011-2015). The aim of this analysis has been to reach relations that can estimate lack quantities of mentions data parameters from another existence data whit the least error. To attain these mathematic relations, liner regression equation and several kind of famous function has been comparison; which the Polynomial function selected as the best fitness function. The conclusion of this study was four function based on polynomial liner model with 95% confidence bounds that presented. These presented equations are for estimate AOD from PM 10 and visibility quantities in general condition; and in 05, 06 and 07 synoptic codes separations. Introduction Troposphere aerosols which are well known as particulate matter (PM) consist one of the regulatory parameters in the atmosphere by changing the Earth's radiation budget and thus aerosols have an extensive impact on our climate and our environment. The PM concentration has become an important index of air pollution, and gained more and more attention from the organizations and administrations of environmental protection, public health and science all over the world. Short and long term exposure to PM causes ascending mortality rates, and morbidities such as variety of cardiovascular diseases [1][2][3][4]. PMs with aerodynamic diameters less than 10 microns (μm) known as PM 10 ; that is a major component of air pollution that threatens both our health and our environment. Visibility is defined as the greatest distance in a given direction at which an object can be visually identified whit unaided eyesight [5]. Horizontal visibility is an indicator of air quality, and PM adversely associated with visibility impairment. Also, 05 (Haze), 06 (widespread dust in suspension not raised by wind) and 07 (dust or sand raised by wind) are some of the most important synoptic condition effective to horizontal visibility. As reported by previous studies, aerosol concentration variability can change particle extinction properties and thus affect visibility [6,7]. On the other hand, Aerosol Optical Depth (AOD) is defined for the entire column of atmosphere and is a measure of the extinction of light from the surface to the top of the atmosphere. AOD is an aerosol optical property which is now well retrieved from numerous sensors such as Moderate Resolution Imaging Spectro radiometer (MODIS) [8,9]. It can be measured by MODIS as well as by ground based instruments (sun photometers) at multiple wavelengths [10]. Therefore, it seems that we can establish some equations to estimate these parameters to define some lack data from other existence. 
Previous researches have examined the correlation between visibility and the aerosol characteristic, and have made it clear that the heterogeneous concentration of aerosols in the atmosphere, changes atmospheric optical conditions and consequently, visibility. Various studies have been carried out using the AOD to estimate the troposphere particle matters concentration in the world. The earlier studies suggested simple linear regression as the simplest type of statistical models resulting in a wide range of performance for PM predictions (correlation coefficients of 0.2 ~ 0.6) [11,12]. Some researchers have used linear models and some have also used non-linear models. Some researchers like also estimate the concentration of suspended particles using AOD based on empirical physical relationships [13][14][15][16][17][18]. In 1986, a two-year study on a turbidity network covering 11 stations on the coastline of southern Sahara in Africa, using a regression model, which uses the following equation to estimate the horizontal visibility of the aerosol mass concentration [19,20]. This equation permits the horizontal visibility VV in km to be expressed in terms of aerosol mass concentration C in µg m -3 and vice versa. Even in some cases, the greatest change in horizontal visibility is attributed to the PM 10 range [21]. In Shao et al. [22] in a study on North East Asian dust storms, found two equations for estimating dust concentration from visibility with following separation. 0.84 3802. 29 3 In these equations, V is the horizontal visibility in km, and C is the dust concentration in µg m-3. In 2006, during a study in the Athens metropolitan area, hourly PM 10 concentration was predicted by using the genetic algorithm on meteorological parameters [23,24]. In this research, the efficiency of neural network models was evaluated more than linear regression models (r 2 in the multiple liner regression models was between 0.29 and 0.35, but in the neural network models it was estimated between 0.50 and 0.70). In 2008, a study that sought to find the relation between PM 10 concentrations and visibility in Israel used the Ln function [25]. That function is as follows: where y is the PM 10 concentration, in μg m −3 , and x is the visibility, in 100 m units. Studied the relationship between AOD from Tera and Aqua sensors from the MODIS, and the PM 10 surveyed the ground at 12 Croatian air quality control stations over a five-year period [26]. In this study, AOD and PM 10 have been considered as independent and dependent sequences; and the linear multivariate model and artificial neural network have been diagnosed for estimating the relationship between these two parameters. Some Chinese researchers also presented an experimental non-linear model for estimating PM 10 concentration in 2015 using AOD from MODIS [27]. This study was based on three years daily PM 10 concentration from 13 air quality control stations and ground meteorological measurements in northwestern China the results of this study shown that there is almost a threefold improvement from 0.28 to 0.78 in the correlation coefficient when using the nonlinear model compared to using a linear regression model of AOD and PM 10 . The root-mean-square error (RMSE) is reduced from 34.42 to 21.33 μg/m 3 using the nonlinear model over the linear model. 
In 2017, several Malaysian and Greek researchers, using the artificial neural network and multiple linear regressions, tried to estimate PM 10 from the 550 nm AOD data from the MODIS and some meteorological parameters [28]. Their relationship is as follows: Where RH is relative humidity in percent; and k index is atmospheric stability index. The aim of our research is analyzing and finding the mathematic relationship between AOD obtained from Aqua and Terra, PM 10 obtained from ground air quality stations, and visibility in separation of 05, 06 and 07 synoptic conditions; using Genetic Algorithm in Yazd in representation of Central Iran. Data Collection and Reprocessing The present study was focused on Yazd province located in unique arid area in center of Iran. The reason for choosing a city in the central part of Iran is that this part of Iran is affected by dust and aerosols both from the outside (atmospheric conditions recorded with 06 code at synoptic stations) and from the local origin (atmospheric conditions recorded with 07 code); and in a urban area, the occurrence of the Haze or the air darkness caused by air pollution (05 synoptic code) will be more noticeable than an ineffective area of human occupation. A number of data sets from various sources were collected for this research; including five years (2011-2015) of hourly PM 10 mass concentration from 2 ground air quality monitoring stations in Yazd, Terra and Aqua-MODIS AOD at 0.55 μm, and local ground-based meteorological station in Yazd. Table 1 provides main information on different data sets and below Sections describes each data set in more detail. MODIS Satellite Data Over the past few years, the algorithm for data retrieval has continued to evolve to achieve better accuracy, and various studies have shown that the MODIS AOD products are quite accurate when compared to ground-based AOD such as the Aerosol Robotic Network AOD. MODIS-derived AOD represents the extinction of incoming solar radiation by aerosols over the whole atmospheric column [29][30][31]. Two MODIS instruments were put onboard the EOS-Terra satellite in December 1999 and the EOS-Aqua satellite in May 2002, respectively. Both instruments collect AOD data. International Journal of Environmental Sciences & Natural Resources AOD represents columnar aerosol loading of the atmosphere, and is retrieved as a level-2 product (5 minute swath granules) at a spatial resolution of 10 km at nadir. The ambient AOD is obtained through the MODIS aerosol algorithm over the oceans and over dark land surfaces In this study, we use the values of both MOD04 and MYD04 AOD, which were extracted at 550 nm (MODIS parameter name: Optical_Depth_Land_And_Ocean). The data acquired during the daytime passes of both MODIS instruments are used. Ground-Based Synoptic Meteorological Data Visibility is an important factor of air masses movement as well as mixing and thus affects PM 10 concentration. Between all of meteorological data related to Visibility that recorded from Yazd meteorological organization (31˚ 54̍ 18̎ N, 54˚ 16̍ 35̎ E), we use 3-hourly horizontal visibility observations by separation 05, 06 and 07 synoptic codes. Data Preprocessing and Integration Since the data from the three sources have different temporal and spatial resolution, all the data sets were re-processed to be consistent in space and time to form a complete data set that can be used as the basis for the following analyses. 
For the retrieved AOD data from both Terra and Aqua satellites, we have tested the impacts of three different window sizes (1×1, 3×3 and 5×5 pixels) of AOD values on PM 10 . We found that the nearest of AOD pixel over a window size of 1×1 pixels (~110 km) centered at a given PM 10 station is appropriate for our analysis. We used a combination of the AOD data retrieved using dark-target and deep blue algorithms. To avoid possible cloud contamination, we eliminated all the AOD-PM 10 pairs where the number of pixels is less than two. As the Terra and Aqua satellites cross Yazd near 07:30 and 10:00 Coordinated Universal Time (UTC) respectively, we use Terra's AOD data at 07:30 and Aqua's data at 10:00 UTC. The data acquired during the daytime passes of both MODIS instruments are used. The surface synoptically data from the closest distance between the meteorological station and the monitoring station were used to represent the meteorological condition for each PM 10 monitoring station. The tow per day (11:00 and 13:30 LST, corresponding to the satellite overpass times) PM 10 for each air-quality monitoring station, AOD, and ground-based meteorological values are matches together. In this manner that, at the first, the days that happens 05, 06 and 07 synoptic code observations around the 11:00 until 13:30 LST selected. Then, AOD and PM 10 values at the same time from Aqua, Terra and airquality monitoring stations respectively matches together. Modeling Techniques Artificial neural networks: Artificial neural network (ANN) is a modeling technique which can determine non-linear relationships between variables in input datasets and variables in output datasets. ANNs modeling is based on a learning (training; calibration) process, after which the ANN network can estimate values of output variables for input datasets. ANNs need a considerable amount of historical data to be trained; upon satisfactory training, an ANN should be able to provide output for previously "unseen" inputs. Often, there can be some uncertainty about precisely which input variables to use. The selection of input variables for an ANN forecasting model is a key issue, since irrelevant or noisy variables may have negative effects on the training process, resulting unnecessarily complex model structure and poor generalization power [32,33]. Genetic Algorithm: The Genetic Algorithm (GA) is a method for solving both constrained and unconstrained optimization problems that is based on natural selection, the process that drives biological evolution. The genetic algorithm repeatedly modifies a population of individual solutions. At each step, the genetic algorithm selects individuals at random from the current population to be parents and uses them produce the children for the next generation. Over successive generations, the population "evolves" toward an optimal solution. You can apply the genetic algorithm to solve a variety of optimization problems that are not well suited for standard optimization algorithms, including problems in which the objective function is discontinuous, non differentiable, stochastic, or highly nonlinear (GA and Direct Search Toolbox). GA is one of the advanced problem solving tools and the best selection of fitness functionality in MATLAB software. Information on various surface and satellite data sets used for estimating equations are given in (Table 1). Also, statistical analysis of AODs, PM 10 s and visibilities; whit their normal frequency distribution graphs are shown in sections of ( Figure 1). 
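A rough pandas sketch of the matching step described above is shown below: same-day AOD, PM10 and synoptic records around the satellite overpass times are paired, keeping only observations reported with codes 05, 06 or 07. File names, column names and the aggregation choices are assumptions for illustration, not the authors' exact procedure.

```python
# Illustrative pairing of AOD, PM10 and synoptic observations by day.
import pandas as pd

aod = pd.read_csv("modis_aod.csv", parse_dates=["time"])        # Terra 07:30 / Aqua 10:00 UTC retrievals
pm10 = pd.read_csv("pm10_stations.csv", parse_dates=["time"])   # 11:00 and 13:30 LST records
synop = pd.read_csv("yazd_synoptic.csv", parse_dates=["time"])  # 3-hourly visibility + present-weather code

# Keep only observations reported with the dust-related synoptic codes.
synop = synop[synop["ww_code"].isin([5, 6, 7])]

# Reduce each source to one record per day near the overpass window, then pair them.
for df in (aod, pm10, synop):
    df["date"] = df["time"].dt.date
daily = (aod.groupby("date")["aod_550"].mean().to_frame()
            .join(pm10.groupby("date")["pm10"].mean(), how="inner")
            .join(synop.groupby("date")[["visibility_m", "ww_code"]].first(), how="inner"))
print(daily.head())
```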
Pierson correlation between PM 10 _ AOD, PM 10 _ visibility, and visibility _ AOD data in 95% confidence level was 0.741, -0.58 and -0.773 respectively. The most connection and relation had seen between PM10, AOD and visibility data whit 05 and 07 synoptic codes; whit 0.87 and 0.76 correlation respectively. Also, Comparison of the trend of PM 10 variations and horizontal visibility compared with the AOD values (Figure 2), shown a very high connection especially in less than 500 µm/m 3 PM 10 quantities and from 4000 -9000 m visibility quantities. Additionally, in comparison of frequency and distribution quality of synoptic codes and visibility quantities, great frequency of 05 synoptic code especially in 5, 6, 6.5, 7 and 8 km visibility quantities, proportional equality to happen of 06 and 07 synoptic conditions; and low frequency of less than 2.5 km visibility quantities are seeable. We use the least square graph for simple two parameter regression between visibility and AOD parameters ( Figure 3) and between visibility and PM 10 parameters (Figures 4). These graphs are appearance of correlation and regression line equation between mentioned parameters. This is right that by estimate of regression line equation coefficients (X and Y coefficients) you can assessment the action of each parameter on other (in liner connections), but in here, whit due attention to the number of parameters used in this research and prospected accuracy from mathematic connections between parameters, has been used from genetic algorithm After the PM 10 data, visibility and AOD data were identified in the three-matrix formulation with the names X, Y and Z in the genetic algorithm respectively, 70% of these data were considered for the training transaction and 30% for the test transaction. Because the most accurate collection of dust and aerosol data is done with air quality control devices in the environmental organization (due to their presence in the International Journal of Environmental Sciences & Natural Resources context of this air pollution -contrary to satellite sensors -and their remoteness from errors caused by The involvement of humans in the data acquisition -contrary to the visibility data at the Yazd synoptic station -) It was decided to introduce Z and Y values which are AOD and visibility respectively, as dependent variables and PM 10 as an independent variable to the genetic algorithm. Additionally, synoptic codes should also be presented as conditions for the algorithm, which was done by programming. In optimization problems that are implemented in the context of the genetic algorithm, the goal is to minimize the error or minimize the cost function. In most optimization problems, the cost function is defined as the Sum Square Errors (SSE), and Relative Sum Square Error (RSSE). These functions are represented in relations (Function 6,7): i actual F is the function of the estimated values based on the optimization parameters displayed by the genetic algorithm. When the SSE is less than 0.0001 or the RSSE is less than 0.01, this defined as the convergence criterion. After testing various functions such as the Wei bull, Gaussian, Power, Binomial, Exponential, Linear, Furrier and Gaussian functions, according to SSE, RMSE and r 2 , the best function and model for optimization The math transitions between PM 10 , AOD and visibility were evaluated based on the 05, 06 and 07 synoptic codes, the linear model of the binomial function was evaluated. 
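The genetic-algorithm fitting itself is done by the authors in MATLAB. As a hedged illustration of the idea only, the toy Python sketch below evolves the coefficients of a first-order two-variable polynomial AOD = f(PM10, visibility) by minimizing the SSE cost, using truncation selection, arithmetic crossover, Gaussian mutation and elitism. The data are synthetic placeholders and the GA settings are arbitrary; the SSE convergence threshold follows the value quoted in the text.

```python
# Toy genetic-algorithm fit of AOD against PM10 and visibility (synthetic data).
import numpy as np

rng = np.random.default_rng(0)

# Placeholder observations standing in for matched PM10 (X), visibility (Y) and AOD (Z).
X = rng.uniform(20, 500, 200)                                   # PM10, ug/m3
Y = rng.uniform(1000, 10000, 200)                               # visibility, m
Z = 0.1 + 0.002 * X - 0.00002 * Y + rng.normal(0, 0.02, 200)    # synthetic AOD

# Scale predictors to comparable ranges so one mutation step suits all coefficients.
Xs, Ys = X / X.max(), Y / Y.max()

def model(coef, x, y):              # first-order polynomial in two variables
    a, b, c = coef
    return a + b * x + c * y

def sse(coef):                      # cost used for the convergence criterion
    return float(np.sum((Z - model(coef, Xs, Ys)) ** 2))

pop = rng.normal(0, 1, size=(60, 3))                     # initial population of coefficient triples
for generation in range(300):
    scores = np.array([sse(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:20]]               # truncation selection of the fittest
    idx_a, idx_b = rng.integers(0, 20, 60), rng.integers(0, 20, 60)
    pop = (parents[idx_a] + parents[idx_b]) / 2          # arithmetic crossover
    pop += rng.normal(0, 0.02, pop.shape)                # Gaussian mutation
    pop[:5] = parents[:5]                                # elitism
    if sse(parents[0]) < 1e-4:                           # SSE threshold mentioned in the text
        break

best = min(pop, key=sse)
print("best coefficients:", best, "SSE:", round(sse(best), 4))
```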
Table 2 presents the statistical characteristics of the paired parameters in this paper. Figure 5 shows the binomial fitness graph between PM10, AOD and visibility in general, without regard to the synoptic codes, as depicted by the MATLAB software. The color range shows the boundary of the best response of the mathematical relations presented in this study, based on the coefficients derived from the application of the genetic algorithm. The binomial fitness graphs between PM10, AOD and visibility during the occurrence of synoptic codes 05, 06 and 07 are also given in Figures 6-8. Conclusion After the calculations described above, four functions and mathematical formulas based on the binomial linear model with a 95% confidence level are presented, for the first time, to researchers in the field of dust and aerosols. According to the similarity of climate and topography between the natural and urban parts of central Iran and eastern and southeastern Iran, it seems that the relations presented in this study can be generalized to all of these regions and used in these areas. Finally, given that geographical phenomena such as dust and aerosols are so important and widespread across a vast section of Iran, and are among the most harmful natural hazards in the country, it is hoped that the relationships offered in this collection will be useful to researchers interested in studying these geographic features.
4,148.2
2017-12-21T00:00:00.000
[ "Computer Science", "Environmental Science" ]
A corpus for mining drug-related knowledge from Twitter chatter: Language models and their utilities In this data article, we present to the data science, natural language processing and public health communities an unlabeled corpus and a set of language models. We collected the data from Twitter using drug names as keywords, including their common misspelled forms. Using this data, which is rich in drug-related chatter, we developed language models to aid the development of data mining tools and methods in this domain. We generated several models that capture (i) distributed word representations and (ii) probabilities of n-gram sequences. The data set we are releasing consists of 267,215 Twitter posts made during the four-month period (November 2014 to February 2015). The posts mention over 250 drug-related keywords. The language models encapsulate semantic and sequential properties of the texts. Subject area: Biomedical informatics, data mining, natural language processing. How data was acquired: Drug-related chatter was directly collected from Twitter using the Twitter Streaming API. Posts were retrieved using drug names as keywords. To address the issue of common misspellings in social media, we employed a phonetic spelling variant generator that automatically generates common misspellings for the drug names. Data format: Raw, processed. Experimental factors: Twitter posts were collected in raw format. Only basic preprocessing such as lowercasing was performed prior to the generation of the language models. Experimental features: The language models were generated from the raw Twitter data after basic preprocessing. A neural network based technique is used to learn the distributed word representation models. The sequential language model is learned by computing probabilities of word n-gram sequences. The utilities of the two sets of models were verified via preliminary experiments of adverse drug reaction detection and text classification, respectively. Data accessibility: Data is within this article. Value of the data: The raw data containing drug-related chatter can be used to build prototype systems for a range of tasks in the domain of pharmacovigilance and toxicovigilance from social media, to assess user sentiments about the drugs, to estimate the effectiveness of distinct drugs and for other tasks important to the broader public health community.
The distributed word representations were generated by varying multiple parameter combinations, thus ensuring that the different models capture different types of semantic information. Therefore, these models can be used for developing systems focused on mining knowledge associated with prescription medications. The n-gram language models capture sequential word occurrence probabilities and can be used for tasks such as text classification and text normalization. Python scripts are provided for downloading the tweets and for loading the two different sets of models. Data The data set consists of 267,215 Twitter posts, each of which contains at least one drug-related keyword. Two sets of language models accompany the raw data-the first is a set of models based on distributional semantics, which encapsulate semantic properties by representing word tokens as dense vectors, while the second set of models is based on n-gram sequences, capturing sequential patterns. All the data are available via our webpage, along with download/usage instructions: http:// diego.asu.edu/Publications/Drugchatter.html. We will release more data and resources in the future via this link. Characteristics of the data The posts were collected over a four-month period-November, 2014 to February, 2015 and they contain over 250 mentions of unique drug-related keywords. The monthly distribution of the data is shown in Fig. 1. The figure suggests that the numbers of drug-related tweets collected are fairly consistent over the four months, with December seeing the highest number of drug-related posts, perhaps because it is holiday season. Fig. 1 also presents the frequencies of the top 10 drug-related keywords in the data sample we are releasing. 'adderall' is by far the most frequently found drug-related keyword. Many other drug-related keywords are mentioned in the data set, presenting information about an assortment of classes of drugs such as antipsychotics, narcotics, stimulants, antidepressants, pain medications, sedatives and antivirals, to name a few. Data collection The data was collected from Twitter as part of a large-scale project on pharmacovigilance from social media [1]. To commence the collection of data, we first identified a set of drugs of interest. Our goal was to incorporate drugs used for a diverse set of conditions and those with high rates of usage. Therefore, in consultation with the pharmacology expert for the project, we focused on two criteria: drugs associated with chronic conditions and drugs with high prevalence of use. For the former criterion we initially chose drugs prescribed for conditions including but not limited to nicotine addiction, coronary vascular disease, depression, Alzheimer's disease, chronic obstructive pulmonary disease, hypertension, asthma, osteoporosis and type 2 diabetes. We later complemented our initial list with drugs prone to abuse, antivirals and biologics. For the second criterion regarding prevalence of use, we selected drugs from the IMS Health's top 100 drugs by sales volume for the years 2013, 2014 and 2015. Further descriptions of our drug selection process can be found in our past publication [2]. A major problem associated with collecting drug-related data from Twitter is that drug names are often wrongly spelled by the users. Fig. 2 shows tweets mentioning the two drugs Seroquel s and Adderall s via a variety of spellings, including the correct ones. 
As the figure suggests, using the correct spelling for a drug as the only keyword may fail to retrieve important tweets associated with that drug, leading to a lower recall than desired. Therefore, we used a misspelling generator [3] that, given the correct spelling for a drug, generates common phonetically similar misspellings for that drug. The misspelling generator runs in three steps. First, all lexical variants of a keyword within one Levenshtein distance (i.e., difference of a single character insertion, deletion or substitution) are generated. Since these variants can be phonetically very dissimilar from the original keyword, a filter is applied which only keeps misspellings that have the same phonetic representations as the original keyword. Finally, to obtain a smaller set of misspellings, the Google custom search API is used to identify those that are frequently used as input to the search engine. The generator, which we have also made publicly available, can be used for other targeted data collection tasks from Twitter. For our work, we used the correct drug spellings and all the selected spelling variants, and the Twitter Streaming API to collect user posts. Fig. 3 presents a random sample of tweets associated with a number of drugs. The tweets appear to present a number of types of information about the drugs. Depending on the intent, distinct types of drug-related information can be mined from this data source, making it a valuable resource for the previously mentioned communities. Language model based on distributional semantics Using the word2vec 1 tool, we prepared a number of phrase-level models. The word2vec tool is the current state-of-the-art in generating distributed word representations from natural language data. The algorithm applied by this tool constructs a vocabulary from an unlabeled data set and learns vector representations of the words by training a shallow neural network with two layers. Formally, given a set of terms t and their context windows w, the objective function is to set the parameters H that maximize Pðwjt; HÞ. The context, which is a symmetric window of terms around an input word, can be varied to modify properties of the distributed models (e.g., varying window sizes may impact the capture of syntactic and semantic properties [4]). Similarly, the vector sizes and various other parameters influence the qualities of the models. For the generation of the models, we varied several parameters including vector sizes and context window sizes. For the vector sizes, we generated models between the sizes 200 and 400. For the different vector sizes, we generated models using context windows within the range [2,9]. These models can be used to explore a variety of research tasks that can benefit from drug related knowledge generated directly from the users, such as exploring associations between drugs and adverse reactions, identifying drug abuse related signals, and, from a more NLP/data science perspective, the effects of different parameters such as context window and vector sizes in capturing semantic and syntactic properties. Such distributed word representation models are already being applied for research utilizing other sources of noisy health-related data, such as clinical reports [5] and such models have been generated from texts from other domains such as published literature [6] and generic social media [7]. 
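As a sketch of how distributed representations like those described above can be produced, the snippet below trains word2vec models over the stated parameter ranges using gensim; the released models were built with the original word2vec tool, so this only illustrates the configuration sweep. The corpus file name, min_count, epoch count and the specific window values chosen are assumptions.

```python
# Illustrative gensim sweep over vector sizes (200-400) and context windows (2-9).
from gensim.models import Word2Vec

# One lowercased tweet per line (minimal preprocessing, as described for the corpus).
with open("drug_tweets.txt", encoding="utf-8") as f:      # hypothetical corpus file
    sentences = [line.lower().split() for line in f]

models = {}
for vector_size in (200, 300, 400):          # vector sizes within the released range
    for window in (2, 5, 9):                 # context windows within [2, 9]
        m = Word2Vec(sentences, vector_size=vector_size, window=window,
                     min_count=5, workers=4, epochs=5)
        models[(vector_size, window)] = m
        m.save(f"w2v_{vector_size}_{window}.model")
```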
However, perhaps due to the absence of available drug-related chatter data from social media, there are currently no such models available for this domain. We, therefore, believe that these models, along with the set of unlabeled data, will aid drug-related data mining tasks and will complement the resources generated from other domains (e.g., clinical reports). For example, while clinical reports hold information discovered in clinical settings, social media chatter may hold information that are expressed by users in non-clinical settings. We present sample utilities of the models in the next section. Language model based on n-gram sequences We also generated sequential, n-gram language models. These language models capture the probabilities of n-gram sequences and such models have been applied in the past for tasks such as lexical normalization [8]. However, to the best of our knowledge, no such sequential probabilistic models are currently available for drug-related social media posts. Given a sequence of terms w n 1 ¼ w 1 …w n the model learns n-gram sequence probabilities, such that approximations can be made for a sequence w 1 …w m via the expression P w m To generate the n-gram language models, we used the KenLM n-gram, language modeling tool [9]. We have made available a set of n-gram language models (n ¼2-4) from this data. Utility of language models and data We now briefly discuss two sample utilities of our two sets of models. Note that the qualitative results presented here were obtained using minimal settings, without any optimization. Extracting adverse drug reaction signals using distributed word representations We tested the possibility of utilizing our distributional semantic language models for exploring associations between drugs and adverse reactions. Past research has explored co-occurrence based techniques for identifying drug-adverse reaction associations. These early approaches primarily relied on pattern generation and detection [10] or handcrafted rules [11] for discovering drug-adverse reaction associations and are limited in multiple ways, particularly for unwieldy social media data, such as the ability to handle large volumes of text and discovering associations that do not follow specific rules. One of the properties of the distributional semantics model is the ability to capture semantic associations between terms based on co-occurrence in a large-sized corpus. Words/phrases learned by the models are represented as vectors in high-dimensional spaces, with contextually similar vectors appearing closer in semantic space. Therefore, we employed a simple cosine similarity measure to compare the similarities of drug keywords with sets of terms representing adverse reactions. We chose three drugs for this experiment: two from past work on this problem-Trazodone and Aspirin [12]; while for the third drug, we randomly chose Xanax from the list of top 10 most common drug keywords from the data set, as shown in Fig. 1. Using one of our distributed representation models (vector size ¼400, context window size ¼ 9), we compared the cosine similarity values between a drug keyword and a set of adverse drug reaction terms. For simplicity, in the case of multiword adverse reactions, we computed the average similarities for all the terms. 2 In particular, we wanted to compare known adverse reaction representing keywords for each of the three drugs with adverse reactions that are not known to be associated with the drugs. 
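A small gensim sketch of the similarity check just described is given below: the cosine similarity between a drug keyword and an adverse-reaction phrase, with multi-word phrases scored as the average of their per-token similarities, mirroring the simplification in the text. The model path, its binary format and the example reaction terms are placeholders.

```python
# Drug-to-adverse-reaction similarity using one of the released word2vec models.
from gensim.models import KeyedVectors
import numpy as np

# Assumed path/format; the authors built their models with the original word2vec
# tool, whose binary output gensim can read.
kv = KeyedVectors.load_word2vec_format("drug_chatter_w2v_400_9.bin", binary=True)

def drug_adr_similarity(drug, adr_phrase):
    # Multi-word reactions are scored as the mean of per-token similarities.
    tokens = [t for t in adr_phrase.lower().split() if t in kv]
    if drug not in kv or not tokens:
        return float("nan")          # term missing from the corpus vocabulary
    return float(np.mean([kv.similarity(drug, t) for t in tokens]))

for adr in ["nausea", "dizziness", "dry mouth", "hair loss"]:
    print("xanax vs", adr, "->", round(drug_adr_similarity("xanax", adr), 3))
```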
We focused on non-serious adverse reactions, as we expected, based on recent research reports [13], that they occur more frequently in social media chatter. Fig. 4 shows the cosine similarity distributions for the three chosen drugs and ten adverse reaction terms for each. For Trazodone, the first four adverse reaction terms (fatigue, headache, dizziness, and nausea) are known adverse reactions, as reported by [12], while the other six are randomly chosen adverse drug reactions not known to be associated with the drug. Even without any fine tuning, simple cosine similarity measures between the drug term vector and the adverse reactions clearly represent potential associations. For Aspirin, the five bars to the left in the figure represent known adverse reactions and the five to the right represent adverse reactions for which associations are not known. Although the associations are not as strong as in the case of Trazodone, there is a clear trend of lower values for the five associations to the right. For Xanax, we chose a set of known adverse reactions and lesser known adverse reactions from Drugs.com. Fig. 4 illustrates similar findings for this drug as well, with strong signals for the five known adverse reaction terms to the left. The results obtained from the distributional semantic model are very promising, especially considering that these results were obtained without any pre/postprocessing of the data, tuning of the signals, or any form of normalization of terms. We have also not explored how the window sizes and vector sizes affect the accuracies of these signals. We leave these exploration possibilities to the research community.
Sequential language models for text classification
The n-gram language models may also be applied to an array of tasks; since such models have been available for a long time, they have been utilized for lexical normalization [14], machine translation [15] and text classification [16], to name a few. We experimented with the possibility of utilizing our n-gram models for health-related text classification. All the tweets collected by us are intended to be related to prescription drugs. Thus, the tweets are a subset of all tweets that discuss health-related topics, and it is likely that they will help identify a broader set of health-related tweets from a collection. We first obtained a set of manually annotated health-related tweets from [17]. The data set contains 5128 tweets, with about 35% of the tweets tagged as health-related. Our intent for this experiment was not to quantitatively analyze the effect of the language model in text classification. Instead, we focused on analyzing whether these sequential models may be useful for the task. Given a sequence of tokens in a string, our model estimates the probability of that sequence based on the equation mentioned earlier. Because of the nature of our data, our intuition is that health-related tweets are likely to have higher probabilities under this model. To test this, we chose a small sample of annotated tweets and computed scores for them using our n-gram model. We used a simple scoring mechanism for this small sample: we first generated absolute value scores for each sample tweet using the tetra-gram language model and then scaled the values to fit in the range [0,1]. Fig. 5 presents a sample of probabilities obtained for a set of tweets from the health-related data set in [15].
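The scoring step can be approximated as in the sketch below. The released models were built with KenLM on four-grams; here, purely for illustration, a self-contained add-one-smoothed trigram model is estimated from a toy corpus, each tweet is given a length-normalized log-probability, and the scores are min-max scaled to [0,1] so they can be used as a classification feature. Corpus and tweets are invented.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    padded = ["<s>"] * (n - 1) + tokens + ["</s>"]
    return [tuple(padded[i:i + n]) for i in range(len(padded) - n + 1)]

class TrigramLM:
    """Add-one-smoothed trigram model; a toy stand-in for the released KenLM
    models, used only to show how sequence scores are obtained."""
    def __init__(self, corpus):
        self.tri, self.bi, self.vocab = Counter(), Counter(), set()
        for sent in corpus:
            toks = sent.lower().split()
            self.vocab.update(toks)
            self.tri.update(ngrams(toks, 3))
            self.bi.update(ngrams(toks, 2))
        self.V = len(self.vocab) + 2  # include sentence markers

    def logprob(self, sentence):
        toks = sentence.lower().split()
        tris = ngrams(toks, 3)
        lp = 0.0
        for w1, w2, w3 in tris:
            num = self.tri[(w1, w2, w3)] + 1
            den = self.bi[(w1, w2)] + self.V
            lp += math.log(num / den)
        return lp / len(tris)  # length-normalized so short tweets are not favoured

def minmax_scale(scores):
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.5 for s in scores]

corpus = [
    "started adderall today and my headache is gone",
    "cannot sleep on this seroquel dose",
    "the dose of tramadol makes me dizzy",
]
lm = TrigramLM(corpus)
tweets = ["this seroquel dose makes me dizzy", "great game last night"]
scaled = minmax_scale([lm.logprob(t) for t in tweets])
print(list(zip(tweets, [round(s, 2) for s in scaled])))
```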
Although we evaluated the impact of the language model on only a small number of cases, the figure suggests that health-related tweets do tend to have higher scores under our model. Such a feature can easily be integrated into text classification problems, and we suspect it will improve performance.
Other utilities of data and language models
The previous subsections present only two sample utilities of the models among many. The task of mining prescription drug-related knowledge is currently of high interest, and resources from several domains (e.g., published literature, electronic health records and social media) are being utilized, including combinations of resources from multiple domains (e.g., [18]). Unlike other domains, social media has emerged relatively recently as a domain of interest, and the need for generating and making available social media based resources has been discussed in the recent literature [19]. Therefore, we believe that the unlabeled corpus and language models will be valuable to the research community. As Figs. 2 and 3 depict, drug-related chatter from social media can be used to mine knowledge about drug effectiveness, adverse reactions and misuse/abuse. Additionally, the data can be mined to obtain information about popular off-label uses of, and user sentiments towards, certain drugs. The data may also be used for the relative comparison of two or more similar drugs. From the perspective of the data science and natural language processing communities, the data and the language models provide opportunities for developing and testing novel text mining algorithms. The distributional semantic models have been generated by varying important parameter values, so interested researchers can readily evaluate the performance of different parameter combinations for their chosen tasks. The models may also be used for generating visualizations of interesting components (e.g., sentiments and adverse reactions) of drug-related chatter in semantic space via dimensionality reduction techniques, such as those presented in [20].
Funding
This work was supported by National Institutes of Health (NIH) National Library of Medicine (NLM) grant number NIH NLM 5R01LM011176. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NLM or NIH.
Transparency document. Supplementary material
Transparency data associated with this article can be found in the online version at http://dx.doi.org/10.1016/j.dib.2016.11.056.
4,232.2
2016-11-23T00:00:00.000
[ "Computer Science", "Medicine" ]
Purification and characterisation of glutathione reductase from scorpionfish (Scorpaena porcus) and investigation of heavy metal ion inhibition
Abstract
In the current study, glutathione reductase was purified from scorpionfish (Scorpaena porcus) liver tissue and the effects of heavy metal ions on the enzyme activity were determined. The purification process consisted of three stages: preparation of the homogenate, ammonium sulphate precipitation and affinity chromatography. At the end of these steps, the enzyme was purified 25.9-fold with a specific activity of 10.479 EU/mg and a yield of 28.3%. The optimum pH was found to be 6.5, the optimum substrate concentration was 2 mM NADPH and the optimum buffer was 300 mM KH2PO4. After purification, the inhibitory effects of Mn2+, Cd2+, Ni2+, and Cr3+ as heavy metal ions were investigated. IC50 values of the heavy metals were calculated as 2.4 µM, 30 µM, 135 µM and 206 µM, respectively.
Introduction
Scorpionfish (Scorpaena porcus), a member of the Scorpaenidae family, is most prevalent in the Mediterranean and Black Sea regions, preferring shallow, algae-covered areas at depths of up to 1000 m [1]. Pollution in the sea mainly accumulates in marine organisms and sediments. Therefore, it is transmitted to humans through the food chain [2]. In the aquatic ecology, fish occupy the highest trophic level [3]. Pollutants such as heavy metals and environmental and industrial wastes mix with the waters and may accumulate in fish. Heavy metals, which are among the most significant polluting factors, have negative impacts on human health and may induce the production of reactive oxygen species (ROS), resulting in oxidative stress. Reactive oxygen species cause damage to cell components such as lipids, proteins, and DNA [4]. As a result, antioxidants are the most significant agents in removing oxidative damage generated by ROS in the living body [5].
Antioxidants are known as important members of the defense system, preventing the formation of reactive oxygen species and repairing the damage they cause [6]. The continuity of life of living cells depends on the balance of complex biochemical reactions. Endogenous or exogenous compounds arising from factors that disrupt this balance cause cell destruction [7].
Glutathione, a natural reducing molecule, may be readily employed by cells to protect themselves against oxidative stress. This protective effect against ROS is provided by interaction with enzymes such as glutathione peroxidase and glutathione reductase [8]. In addition to being an antioxidant, GSH has a role in the detoxification system of the cell, gene expression and regulation [9].
Glutathione reductase (EC 1.8.1.7; GR), a major enzyme in glutathione metabolism, is required for the maintenance of the reduced form of cellular glutathione, which is strongly nucleophilic towards many reactive electrophiles [10,11]. The flavin enzyme GR acts as an antioxidant, protecting cells from oxidative stress by reducing glutathione disulphide (GSSG) to its reduced form (GSH) [12]. It has an important role in drug metabolism and detoxification, especially in the liver. This is due to the cytochrome P-450 system found in liver microsomes, which mediates detoxification events [13]. Maintaining the GSH/GSSG ratio in the cellular environment is one of the most important known targets of GR-catalysed reactions [14]. Glutathione reductase is involved in the reduction-oxidation of intracellular glutathione via GSSG, which is generated through the detoxification of hydroperoxides and the reduction of some other chemicals catalysed by glutathione peroxidase [15]. The NADP+-dependent malate dehydrogenase and pentose phosphate pathways provide the NADPH needed in this catalytic process [16,17]. NADPH, a key product of the pentose phosphate cycle, is employed extensively in reductive biosynthesis. Furthermore, it aids in the protection of the cell against oxidative damage [9].
A lack of GR and GSH causes oxidative damage to the cell. Many disorders are caused by GR and GSH deficiencies, including Alzheimer's, Parkinson's, liver and lung diseases, sickle cell anaemia, HIV/AIDS, cancer, stroke, schizophrenia, and diabetes [9,18].
Metals are found naturally in the environment and in water, originating from natural and anthropogenic sources. Several metals and chemicals have been tested on various enzymes for their inhibitory effects [19]. Owing to many industrial problems, heavy metal accumulation in the environment has become an important issue [20]. Heavy metals, which can be harmful even at low quantities, enter the body through the mouth, breathing and skin. They cannot be eliminated through the excretory organs such as the kidney, liver, intestine, lung and skin without special intervention. Consequently, almost all heavy metals accumulate in biological organisms. These metals build up in living things and cause major disorders such as thyroid and neurological diseases, autism and infertility.
They also trigger and enhance the generation of free radicals in aquatic species. Therefore, they should be kept under control in order to prevent the damage caused by free radicals resulting from heavy metals or chemicals [21-25]. Heavy metals and metal ions affect the variables that govern enzyme-substrate and cofactor affinity. Metal ions, which induce transient depletion of GSH and inhibition of antioxidant enzymes, are known to have an effect on enzyme function [26]. It is also known that the glutathione reductase enzyme is highly sensitive to metal ions when the GSSG concentration is low [27].
Therefore, we aimed in this study to purify and characterise the GR enzyme from the liver tissue of scorpionfish for the first time and to evaluate the inhibitory effects of the heavy metals Mn2+, Cd2+, Ni2+, and Cr3+. The scorpionfish enzyme was investigated because the fish is commonly found in our region, and these metals were selected because they are among the most common metals found in our seas and rivers as industrial and environmental waste. Thus, the risk posed by these metal ions should be well characterised, as our seas and water sources are at great risk of metal pollution.
Chemicals
All chemicals used for the purification process were obtained from Sigma-Aldrich. All other analytical grade chemicals were obtained from Merck.
Preparation of the homogenate
Scorpionfish (Scorpaena porcus) were obtained from a fisherman in Ordu's Fatsa region, Turkey. 7.5 g of liver tissue was weighed and homogenised in liquid nitrogen in a mortar. 40 ml of 50 mM KH2PO4 + 1 mM EDTA + 1 mM DTT + 1 mM PMSF buffer was added in a 50 ml falcon tube. The samples were then centrifuged for 60 min at +4 °C at 27,000 g. The supernatant and precipitate were separated through filter paper after centrifugation, and the enzyme activity was measured.
Ammonium sulphate precipitation and dialysis
Ammonium sulphate ((NH4)2SO4) precipitation was performed to separate the relevant protein from other proteins and to concentrate the protein solution. Scorpionfish liver tissue extract was subjected to a precipitation range of 0-100%. As a result of the process, the enzyme's active range was determined by achieving saturation in the 60-80% range. The precipitate was dissolved in KH2PO4 buffer (300 mM; pH 6.5). After the specified interval, dialysis was performed three times to desalinate the protein solution. It was then dialysed for 2 h in a solution of 30 mM KH2PO4 (pH 6.5).
2′,5′-ADP Sepharose-4B affinity chromatography
2 grams of dried 2′,5′-ADP Sepharose-4B was used for a 10 ml bed volume column (1 × 10 cm). To eliminate foreign bodies and air, the gel was rinsed with 300 ml of distilled water. It was packed into the column after being suspended in an equilibration solution of 50 mM K-phosphate + 1 mM EDTA + 1 mM DTT, pH 7.3. The column was considered equilibrated when the absorbance at 280 nm and the pH of the eluate matched those of the buffer. The column was loaded with the sample that had been precipitated in the indicated ammonium sulphate saturation range and dialysed. The column was then washed with 0.1 M K-acetate + 0.1 M K-phosphate, pH 7.85 and 0.1 M K-phosphate + 0.1 M KCl, pH 7.85 buffers. 50 mM K-phosphate + 1 mM EDTA + 1 mM GSH + 0.5 mM NADPH, pH 7.3 was used to elute the enzyme. Eluates were collected, and the column was regenerated with 0.1 M Na-acetate + 0.5 M NaCl, pH 4.5 and 0.1 M Tris + 0.5 M NaCl, pH 8.5 regeneration buffers. All these procedures were performed at 4 °C.
Protein determination
Protein content was determined spectrophotometrically at 595 nm using Bradford's technique, with bovine serum albumin as the reference for all samples [28]. The mg protein values corresponding to the measured absorbance values were read from a standard graph.
SDS polyacrylamide gel electrophoresis (SDS-PAGE)
The Laemmli technique was used to assess the purity of the enzyme [29]. For the separating gel and stacking gel, a 30% acrylamide-0.8% bisacrylamide stock was used in the gel technique. 10% SDS (final concentration 0.01%) was added to the gel solution. The gel was fixed in a solution containing 40% methanol + 10% acetic acid + 80% distilled water for 1 h. Staining was performed for around 45 min in a 40% methanol + 10% acetic acid + 50% distilled water + 0.25% Coomassie Brilliant Blue R-250 solution. The gel was rinsed in 10% methanol + 10% acetic acid + 80% distilled water at the end of the staining procedure. The gel was left in the wash solution for one day to clear the protein bands.
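For readers unfamiliar with how the purification table values reported in this study (fold purification, yield, specific activity) are derived, the sketch below reproduces the standard calculations from total activity and total protein at each step. The step values are placeholders chosen only so that the last row reproduces the reported ~10.48 EU/mg, ~25.9-fold and ~28.3% figures; they are not the study's raw data.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    total_activity_eu: float   # enzyme units recovered at this step
    total_protein_mg: float    # protein recovered at this step

def purification_table(steps):
    """Specific activity, fold purification, and yield relative to the homogenate."""
    base = steps[0]
    base_sa = base.total_activity_eu / base.total_protein_mg
    rows = []
    for s in steps:
        sa = s.total_activity_eu / s.total_protein_mg
        rows.append({
            "step": s.name,
            "specific_activity_EU_per_mg": round(sa, 3),
            "fold": round(sa / base_sa, 1),
            "yield_percent": round(100 * s.total_activity_eu / base.total_activity_eu, 1),
        })
    return rows

# Placeholder (hypothetical) activity/protein amounts, not measured values.
steps = [
    Step("homogenate", 100.0, 247.2),
    Step("ammonium sulphate (60-80%)", 62.0, 88.0),
    Step("2',5'-ADP Sepharose-4B", 28.3, 2.70),
]
for row in purification_table(steps):
    print(row)
```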
In vitro effects of metal ions
To assess the impact of metal ions on scorpionfish liver GR, an inhibition study was carried out. The effects of heavy metals on the GR enzyme were studied at Ni2+ (0.001 M), Mn2+ (0.001 M), Cr3+ (0.001 M) and Cd2+ (0.001 M) concentrations using the salts Ni(NO3)2, Mn(NO3)2, Cr(NO3)3 and Cd(NO3)2. Activity values were plotted as % activity-[metal ion] graphs (Figures 2, 3 and 4, Supplementary Material). Metal concentrations producing 50% inhibition of enzyme activity (IC50) were determined. The IC50 values of the heavy metals were calculated as 2.4 µM, 30 µM, 135 µM and 206 µM for Mn2+, Cd2+, Ni2+, and Cr3+, respectively (Table 1).
Results and discussion
Glutathione (GSH), a tripeptide that is synthesised in the liver, is present in the cytosol, nucleus and mitochondria of cells and plays vital roles in the cell [30,31]. Glutathione reductase catalyses electron transfer between low or high molecular weight disulphide substrates and reduced pyridine nucleotides and is one of the essential enzymes of the intracellular antioxidant system that protects cells from the detrimental effects of free radicals [15].
The reduction of glutathione disulphide (GSSG) by means of NADPH is catalysed by GR [32]. The most important goal of the GR-catalysed reaction is to keep the GSH/GSSG ratio in the cellular environment stable [14]. It not only maintains the GSH/GSSG ratio but also supports the continuance of the cell's important tasks, such as the detoxification of ROS [32]. There is a balance between free radicals and the antioxidant defense system. Overproduction of free radicals harms antioxidant defense mechanisms. Therefore, the cell is exposed to oxidative stress when the balance between free radicals and the antioxidant system is disrupted [9]. GSH, which preserves proteins in their reduced state, contributes to the spherical structure of erythrocytes. Erythrocytes that are susceptible to oxidative damage have a shorter lifespan as a result of GSH depletion, resulting in haemolytic anaemia [33]. Asnis was the first to purify the glutathione reductase enzyme, from Escherichia coli [34]. In numerous other studies it has been purified and characterised from mammalian tissues such as human erythrocytes, bovine erythrocytes, bovine liver, turtle, chicken liver, rat liver and bovine brain, from plants such as pea leaves, and from many bacterial species [10,35-41].
Heavy metals are released today as a result of rapid population expansion, urban garbage, industrial waste, and carelessly used fertilisers and pesticides in agriculture. These heavy metals, which mix with soil, sea and rivers, cause problems in organisms through the inhibition of antioxidant enzymes in the cell and the oxidative damage to which living things are exposed.
In this study, the glutathione reductase enzyme was isolated from the liver tissue of scorpionfish (Scorpaena porcus) and some of its kinetic features were investigated. Purification was performed by homogenate preparation, ammonium sulphate precipitation, dialysis, 2′,5′-ADP Sepharose-4B affinity chromatography and SDS polyacrylamide gel electrophoresis (SDS-PAGE), respectively.
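The IC50 values in Table 1 are read from the % activity versus [inhibitor] plots described above. A minimal way to do this computationally is to interpolate the activity-concentration relationship and solve for the concentration giving 50% residual activity; the sketch below uses simple log-linear interpolation between measured points. The concentrations and activities are invented for illustration (roughly consistent with a low-micromolar IC50 such as the 2.4 µM reported for Mn2+) and are not the study's raw data.

```python
import math

def ic50_from_points(concs_um, activities_pct):
    """Estimate IC50 by linear interpolation of % activity against log10[inhibitor].
    Assumes activity decreases with concentration and crosses 50% within the range."""
    pairs = sorted(zip(concs_um, activities_pct))
    for (c1, a1), (c2, a2) in zip(pairs, pairs[1:]):
        if a1 >= 50.0 >= a2:
            x1, x2 = math.log10(c1), math.log10(c2)
            frac = (a1 - 50.0) / (a1 - a2)
            return 10 ** (x1 + frac * (x2 - x1))
    raise ValueError("50% activity is not bracketed by the measured points")

# Hypothetical Mn2+ inhibition data (percent of control activity).
concs = [0.5, 1.0, 2.0, 4.0, 8.0]          # µM
activity = [88.0, 72.0, 55.0, 38.0, 21.0]  # %
print(f"IC50 ~ {ic50_from_points(concs, activity):.1f} µM")
```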
The purification process began with the homogenate preparation step. Ammonium sulphate precipitations in the range of 0-100% were performed on the prepared liver tissue extract. The GR enzyme precipitated in the 60-80% interval during the precipitation procedure. After precipitation, dialysis was performed to remove ions from the medium before affinity chromatography. Following dialysis, purification was performed on a 2′,5′-ADP Sepharose-4B affinity column, and the molecular weight of GR was determined using SDS polyacrylamide gel electrophoresis. The liver enzyme was purified 25.96-fold with a yield of 28.277%, and its molecular weight was determined to be 25 kDa. Protein content was determined quantitatively by the Bradford method. The heavy metals Ni2+, Mn2+, Cr3+, and Cd2+ were applied to the purified enzyme. IC50 values of the heavy metals were calculated as 2.4 µM, 30 µM, 135 µM and 206 µM for Mn2+, Cd2+, Ni2+, and Cr3+, respectively.
The characterisation process was also carried out in the study. For this purpose, the optimum pH, optimum substrate and optimum buffer values of the GR enzyme from scorpionfish liver were determined. The optimum pH of the GR enzyme was found to be 6.5, the optimum substrate was 2 mM NADPH and the optimum buffer was 300 mM KH2PO4.
Many studies have been carried out on the glutathione reductase enzyme, including purification from many tissues, characterisation and determination of biochemical properties. Erat and Çiftçi (2006) purified the glutathione reductase enzyme 5.823-fold from human erythrocytes with a 24% yield [42]. Şentürk et al. (2008) isolated the human erythrocyte glutathione reductase enzyme 2555.56-fold with a 29.74% yield [43]. Akkemik et al. (2011) analysed the effects of some drugs on glutathione reductase from human erythrocytes; they purified the GR enzyme 3333-fold with a yield of 44.44% [44]. Ulusu and Tandoğan (2007) purified the GR enzyme 5456-fold from bovine liver with a 38.4% yield [40]. Şentürk et al. (2009) investigated the effects of some analgesic and anaesthetic drugs on the glutathione reductase enzyme purified from human erythrocytes 2139-fold with a yield of 29% [45]. In another study, Taşer and Çiftçi (2012) purified the enzyme from turkey liver 2476-fold with a yield of 10.75% [46]. Ekinci and Şentürk (2013) isolated the enzyme from rainbow trout liver and investigated the effects of Co2+, Zn2+, Ca2+, Fe2+, Mn2+, Cr3+, Sn2+ and Mg2+; the heavy metals had IC50 values of 42.2, 63.1, 357, 486, 508, 592, and 657, respectively [47]. Tekman et al. (2008) also evaluated the influence of Cd2+, Cu2+, Pb2+, Hg2+, Fe3+ and Al3+ on the rainbow trout GR enzyme; the IC50 values were found to be 65.5, 82, 122, 509, 797 and 804 µM, respectively [48]. Karagözoğlu and Çiftçi investigated heavy metal inhibition of the GR enzyme purified from chicken kidney with a 57% yield. They determined the inhibitory effects of the heavy metals Ni2+, Zn2+, Pb2+, Hg2+, Ag+ and Al3+ on the GR enzyme; the IC50 values were found to be 337, 191, 168, 187 and 289, respectively [49].
The influence of different metal ions on the GR enzyme isolated from scorpionfish liver tissue has been analysed in our work. Scorpionfish liver GR with a specific activity of 10.479 EU/mg was purified 25.96-fold with a yield of 28.277%. When the obtained data were compared with the literature, our results are in accordance with previous works. SDS-PAGE revealed the molecular weight of the enzyme as 25 kDa, which is also similar to the literature data.
The relevance of the study is demonstrated by the damage caused by heavy metals in nature: heavy metals, which arise as a result of environmental problems, mix with soil, water and seas and damage the living ecosystem. Heavy metals accumulating in seas and rivers expose aquatic organisms in particular, and through the consumption of these organisms the metals are transmitted to humans.
In conclusion, in this study it was determined that liver tissue GR was inhibited by metal ions at micromolar levels and that the strongest inhibitor was Mn2+. The inhibition of GR by various heavy metals has a negative effect on the organism. Therefore, specific care should be taken in the use of these metals, which can disrupt the GSH/GSSG balance by inhibiting the GR enzyme. This is the first study dealing with the purification and characterisation of the GR enzyme from scorpionfish liver tissue. We hope the findings of our study will be helpful to researchers interested in toxicology and enzymology.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Figure 1. Activity (%) versus inhibitor concentration regression graph for scorpionfish GR in the presence of different Mn2+ concentrations.
Table 1. Scorpionfish GR enzyme inhibition data for heavy metal ions.
3,992.8
2023-03-20T00:00:00.000
[ "Biology", "Environmental Science" ]
Epoxy- versus Glutaraldehyde-Treated Bovine Jugular Vein Conduit for Pulmonary Valve Replacement: A Comparison of Morphological Changes in a Pig Model Valved conduits are often required to replace pulmonary arteries (PA). A widely used Contegra device is made of bovine jugular vein (BJV), preserved with glutaraldehyde (GA) and iso-propanol. However, it has several drawbacks that may be attributed to its chemical treatment. We hypothesized that the use of an alternative preservation compound may significantly improve BJV conduit performance. This study aimed to compare the macroscopic and microscopic properties of the BJV treated with diepoxide (DE) and GA in a porcine model. Twelve DE-BJVs and four Contegra conduits were used for PA replacement in minipigs. To assess the isolated influence of GA, we included an additional control group—BJV treated with 0.625% GA (n = 4). The animals were withdrawn after 6 months of follow-up and the conduits were examined. Explanted DE-BJV had a soft elastic wall with no signs of thrombosis or calcification and good conduit integration, including myofibroblast germination, an ingrowth of soft connective tissue formations and remarkable neoangiogenesis. The inner surface of DE-BJVs was covered by a thin neointimal layer with a solid endothelium. Contegra grafts had a stiffer wall with thrombosis on the leaflets. Calcified foci, chondroid metaplasia, and hyalinosis were observed within the wall. The distal anastomotic sites had hyperplastic neointima, partially covered with the endothelium. The wall of GA-BJV was stiff and rigid with degenerative changes, a substantial amount of calcium deposits and dense fibrotic formations in adventitia. An irregular neointimal layer was presented in the anastomotic sites without endothelial cover in the GA BJV wall. These results demonstrate that DE treatment improves conduit integration and the endothelialization of the inner surface while preventing the mineralization of the BJV, which may reduce the risk of early conduit dysfunction. Introduction Valved conduits are often applied for right ventricular outflow tract (RVOT) reconstruction, particularly in children with congenital heart disease (CHD).A Contegra ® conduit is widely used as the right ventricle-pulmonary artery (RV-PA) conduit.The Contegra pulmonary valved conduit is made of bovine jugular vein (BJV) with a venous valve, preserved and stored in 1% glutaraldehyde (GA) and 20% isopropyl alcohol.The benefit of such a xenograft is a natural three-leaflet valve, while the venous wall conduit is sufficient for low-pressure flow conditions.Another advantage of the Contegra conduit is The local ethics committee approved this study.All experimental procedures were performed in compliance with the local and international regulations and standards of animal care. This study included 20 healthy animals (ICG minipigs [23], Institute of Cytology and Genetics, Novosibirsk, Russia) weighing 33-72 kg (Table 1).Twelve pigs underwent implantation of the DE-BJV conduits (Figure 1).The eight remaining pigs were equally divided into groups with implanted Contegra conduits (Contegra pulmonary valved conduit; Medtronic, Minneapolis, MN, USA) and with an implanted BJV treated with 0.625% GA (GA-BJV).In all animals, the main pulmonary artery (PA) was grafted with a BJV conduit under cardiopulmonary bypass (CPB).The follow-up period was 6 months.The animals were euthanized from the experiment at the end of the 6-month observation period. 
Preparation of the Conduit
As a cross-linking agent, we used 25% GA (catalog No. 253857, Panreac Quimica SLU, Barcelona, Spain) and 97% DE, commercially available from the N. Vorozhtsov Novosibirsk Institute of Organic Chemistry, SB RAS (Novosibirsk, Russian Federation).
Fresh BJVs with natural valves were extracted from healthy animals immediately after slaughter and rinsed several times with 0.9% NaCl. Hydraulic tests and endoscopic examinations (Hopkins endoscope; 30°; diameter, 4 mm; Karl Storz SE & Co. KG, Tuttlingen, Germany) were used to assess venous valve competence.
Selected BJVs with a competent valve were preserved in one of two ways: (1) for 14 days at room temperature in 5% buffered (0.05 M phosphate buffer, pH 7.4) DE solution, changed once, on day 2; or (2) for 14 days at room temperature in 0.625% buffered (0.05 M phosphate buffer, pH 7.4) GA solution, with the solution replaced on days 2, 4, and 8. After preservation, the conduits were stored individually in sealed, sterile packages in a biocidal solution of the original composition [24].
The control group was a Contegra graft (Contegra pulmonary valved conduit; Medtronic, Minneapolis, MN, USA), a product commercially available for clinical use. According to the manufacturer's instructions, it consists of a xenogeneic (bovine) jugular vein with a trileaflet venous valve and a natural sinus slightly larger in diameter than its lumen. A final sterilization step was performed using a proprietary sterilizing solution containing 1% GA and 20% isopropyl alcohol, in which the conduit was preserved and packaged until use.
Anesthesia and Mechanical Ventilation An anesthesiologist and veterinarian with good laboratory practice qualifications jointly carried out anesthetic management at all stages.Before the operation, the animals were examined by a veterinarian.Twelve hours before surgery, the animals were given access only to water, and were deprived of food.An hour before the start of the intervention, after confirmation by the veterinarian of a satisfactory condition, the animal was premedicated with Zoletil-100 (Virbac Sante Animale, France) 5-7 mg/kg intramuscularly.Upon reaching the target level of sedation and after the scrubbing of surgical sites, the animal was transported to the operating room and fixed on the operating table in the supine position.After preoxygenation, anesthesia was supplemented with fentanyl (6-8 µg/kg), propofol (4-6 mg/kg) and pipecuronium bromide (0.1 mg/kg).The animal was intubated and ventilated at O 2 50% vol., at a tidal volume of 6-8 mL/kg, with a positive end-expiratory pressure of 8 cm H 2 O, maintaining PaCO 2 at 36-43 mmHg (JulianPlus ventilator, Draege, Germany).In the ventilated animal, anesthesia was maintained using sevoflurane 2-4 vol% and intravenous fentanyl 1-2 µg/kg every 20 min.An invasive arterial blood line was placed in the femoral artery, and a central venous catheter was placed into the femoral vein for additional infusions and blood sampling.A urinary catheter was installed to control urine output.During cardiopulmonary bypass, anesthesia was maintained via the constant IV infusion of propofol (6-10 mg/kg/h) and the bolus administration of fentanyl 1-4 µg/kg every 20-30 min.Cefazolin (2.0 g) was used as the antibiotic prophylaxis.In the postperfusion period, in the presence of signs of heart failure, inotropic support with dopamine infusion was utilized.In our study, dopamine infusion (5 µg/kg/min) was required in one animal during CPB weaning and decannulation.In the postoperative period and restoration of spontaneous breathing, in the absence of signs of heart failure and satisfactory oxygenation (saturation > 90%, PaO 2 > 70 mmHg, PaCO 2 33-45 mmHg), the animal was extubated, observed for 30-60 min and transported to the vivarium in an individual aviary. Cardiopulmonary Bypass Before the cannulation of the great vessels, the pigs were heparinized (300 IU/kg).During cardiopulmonary bypass, the activated clotting time (ACT) was maintained at 500-600 s.A MAQUET HL 20 heart-lung machine (MAQUET Cardiopulmonary AG, Rastatt, Germany), a HCU 30 thermoregulator (MAQUET Cardiopulmonary AG, Germany), and Quadrox oxygenators (MAQUET Cardiopulmonary AG, Germany) were used during the experiment.The volumetric perfusion rate was calculated as 60-70 mL per kg of the animal's body weight.Systemic arterial pressure was maintained at 90-110 mmHg.To ensure adequate venous flow, we used a VAVD MAQUET HL 20 vacuum venous outflow controller (MAQUET Cardiopulmonary AG, Germany) with a negative pressure of 30-50 mmHg.During perfusion, body temperature was maintained at 37-37.5 • C. The hematocrit level was in the range of 24-30%.After venous and arterial decannulation, protamine (300 IU/kg) was administered. 
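As a small illustration of the weight-based settings described above, the sketch below computes the per-animal pump flow and drug doses from body weight using the ranges given in the text; it is for illustration only and is not part of the study protocol.

```python
def cpb_settings(weight_kg: float) -> dict:
    """Weight-based targets from the text: pump flow 60-70 mL/kg/min, tidal volume
    6-8 mL/kg, heparin 300 IU/kg before cannulation, protamine 300 IU/kg after."""
    return {
        "pump_flow_mL_per_min": (round(60 * weight_kg), round(70 * weight_kg)),
        "tidal_volume_mL": (round(6 * weight_kg), round(8 * weight_kg)),
        "heparin_IU": round(300 * weight_kg),
        "protamine_IU": round(300 * weight_kg),
    }

# Example for a 50 kg minipig (the animals weighed 33-72 kg).
print(cpb_settings(50))
```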
Operative Techniques
The surgical approach was a left lateral thoracotomy via the 4th intercostal space. After systemic heparinization, the great vessels were cannulated, and CPB was initiated. Cannulation was performed centrally through the thoracotomy (n = 12) or via peripheral vessels (n = 8), depending on surgical preference and the weight of the animal. The size of the conduit was selected according to the weight of the animal and the diameter of the native main PA. Before implantation, the conduit was washed free of the storage solution in a sterile 0.9% sodium chloride solution, with three changes of the solution every 20 min for DE-BJV or GA-BJV and every 5 min for Contegra (in accordance with the manufacturer's recommendations). Conduit implantation was performed on the beating heart under normothermic full-flow CPB. After the transection of the native PA (Figure 2A), the conduit was implanted orthotopically, just above the native pulmonary valve, after the leaflets were excised. Continuous sutures were used for both the proximal and distal anastomoses (Figure 2B,C). In all cases, double sterility control was performed before implantation.
Postoperative Management
In the first 7 days of the postoperative period, the animals received nadroparin calcium (0.3 mL twice a day), followed by antiplatelet therapy (clopidogrel 75 mg once a day and acetylsalicylic acid 75 mg once a day) during the entire observation period. Amoxicillin clavulanate (8.75 mg/kg; Synulox, Haupt Pharma Latina S.R.L., Borgo San Michele, Latina, Italy) was used for antibiotic prophylaxis. At the end of the 6-month observation period, the animals were withdrawn from the experiment via the application of super-therapeutic doses of sodium thiopental after preliminary sedation (Zoletil-100 5-7 mg/kg).
Macroscopic and Microscopic Study of the Conduits The BJV conduits were excised from the adjacent native tissues of the right ventricle (RV) and PA.Macroscopic assessments of the conduit wall, cusps, sinuses of the valve, and adjacent fragments of the native PA were performed to determine thrombus formation, calcification, obvious stenosis, or aneurysm formation.Histological specimens were obtained by dissecting the conduit fragments with valves and both anastomoses along the entire length, with subsequent preservation in a 10% buffered formalin solution.Before examination, the tissues were dehydrated and embedded in paraffin; 6-micron sections were stained with hematoxylin and eosin (H&E), von Kossa, Russel-Movat pentachrome stains and an immunohistochemical (IHC) stain for S 100 proteins. Scanning Electronic Microscopy (SEM) Briefly, 6 µm sections of each sample were dried at room temperature, straightened, and fixed on the specimen stub.Before the study, the samples were covered with a 25-30 nm thick conductive carbon layer on GVC-3000 Thermal Evaporation Carbon Plating Instrument (KYKY TECHNOLOGY Co., Ltd., Beijing, China).SEM and energy-dispersive X-ray spectroscopy (EDS) analyses of the cell-containing samples and an elemental mapping of the chosen areas were performed using a WIN SEM A6000LV scanning electron microscope (KYKY TECHNOLOGY Co., Ltd., Beijing, China) equipped with an AzTec One EDX system (Oxford Instruments, High Wycombe, UK).Sample observation was conducted using a secondary electron detector at an electron high tension of 20 keV and with an electron beam setting of 120 µA.Ten observation fields were selected for each specimen and examined at a 100×, 250×, 450× and 700× magnification.The sizes of the observed objects were measured using the Microsoft KYKY SEM software (version 1.8.1.2). Statistical Analyses Continuous data are reported as median (Me) and interquartile range (IQR), and categorical data are reported as rates and percentages.Descriptive statistics were obtained using STATA version 13.0 (StataCorp LP, College Station, TX, USA). Results All conduits were successfully implanted, without any surgical complications.All animals survived the procedure and were extubated 1.5-3 h postoperatively.Four pigs did not survive for 6 months.One animal from the Contegra group died 1.6 months after the procedure due to an unrelated cause (Dilatation ventriculi acuta).Three animals had prosthetic endocarditis (DE-BJV-1; Contegra-1; GA-BJV-1) and all of them were euthanized after 5.8, 4.8, and 3.2 months, respectively.The other animals survived until the end of the follow-up period. Macroscopic Findings The DE-BJV conduits were well incorporated at both the proximal and distal ends, without distortion or aneurysmal formation.All grafts maintained their extensibility, elasticity, and softness.Clean, smooth, white luminal surfaces without thrombus deposition or calcified lesions were observed (Figure 3A).A thin neointimal layer was observed on the inner surface of the conduit walls.The leaflets were intact, soft, and mobile without calcification, fenestration, or tears.In two grafts, intimal hyperplasia was observed along the suture line of the distal anastomosis (Figure 3B).Neointimal proliferation was most pronounced along the lesser curvature of the conduit.One DE-BJV conduit (12 mm) showed a significant size mismatch between the main PA and the RVOT due to the fast growth of the animal during the follow-up period. 
Contegra conduits had more rigid walls; however, they showed sufficient pliability. All the conduits had patent lumens (Figure 3C). The wall of the Contegra graft was covered by a fibrotic layer. Two conduits had neointimal hyperplasia at the distal anastomotic site, with the greatest thickness along the smaller curve (the first was explanted after 1.6 months, and the second after 6 months of follow-up). Thick calcified areas were found at both the distal and proximal anastomotic sites. The cusps were thin, mobile, and uncalcified. A thrombotic mass and fibrin deposition were found on the inner surface of the cusps and sinuses of two conduits.
The wall of the GA-BJV was stiff and rigid, and the inner surface was covered with fibrous tissue. In one case, the wall was loose and wrinkled, with a partially exfoliated neointimal layer. Two conduits had deformed cusps that were partially fused to the conduit wall (Figure 3D). Neointimal hyperplasia and areas of calcified lesions were found at both the proximal and distal anastomotic sites.
In cases of infectious endocarditis, the process was localized strictly at the level of the conduit, without extending beyond the suture margins.
Microscopic Examination
In DE-BJV conduits, the wall preserved its own structure. The elastic and collagen fibers were correctly oriented, and small foci of fragmentation and disorganization of the collagen fibers were observed in some areas with no specific localization. We observed remodeling of the conduit tissue in the form of wall germination by myofibroblast cells from the adventitial side into the medial layer, the replacement of collagen fibers with soft, non-deforming connective tissue, and areas of neoangiogenesis (Figure 4A,D). In some conduits, infiltration of inflammatory cells (lymphocytes; granulocytes) was observed in various areas. No signs of tissue mineralization or calcium deposition in the elastin or collagen fibers were found in the DE-BJV conduits (Figure 4G). Small dense calcium clusters were present around some stitches along the suture line, both in the graft tissue and in the PA tissue, in some conduits (n = 4) (Figure 5B).
The structure of the cusps was preserved in all cases. Collagen and elastin fibers were well preserved in the leaflets. The migration of myofibroblastic cells into the cusps was observed in some conduits (Figure 5A). Calcification was not observed in the leaflets. No endothelial cells were present on either the inflow or outflow valve surfaces.
A uniform neointimal layer covering the inner surface of the DE-BJV originated from the side of both the proximal and distal anastomoses and did not involve the cusps. The thickness of the neointimal layer was in the range of ~110-320 µm in the central part of the conduit and ~130-620 µm near the anastomoses. Fibroblasts were equally distributed in the fibrous sheath. A well-developed endothelial monolayer extended from the PA and fully covered the fibrous tissue layer (Figure 4A,J).
In two DE-BJV conduits, dense hyperplastic neointima was found in the area of the distal suture line (thickness range, ~1250-1800 µm). A large number of multinucleated macrophages forming giant-cell granulomas were present in the adventitial layer at the distal anastomotic site of the conduit (Figure 5C). Neovascularization and dense fibrous tissue with inflammatory cells consisting of lymphocytes and histiocytes were observed around the giant-cell granulomas. At the sites of adventitial fibrosis, deformation of the conduit wall occurred due to the formation of an adventitial scar of dense fibrous tissue. The accumulation of giant-cell macrophages in the subneointimal region of these grafts was also observed.
The explanted Contegra grafts maintained their native layer structure; collagen and elastin fibers retained the correct orientation. A focal fragmentation of the collagen fibers was observed. Large calcium deposits and areas of calcification were revealed at the anastomotic sites. A significant accumulation of calcium in the elastin fibers was found throughout the wall of the conduit, with a predominance in the subintimal layer (Figure 4H,K). Calcified elastin fibers were also present in the conduit explanted after 1.6 months of follow-up (Figure 4E). Three out of four Contegra conduits showed hyalinosis with signs of mineralization and subintimal sites of chondroid metaplasia in the graft wall (Figure 4B). Signs of chondroid metaplasia, hyalinosis and mineralization were also revealed in the PA tissue at the anastomotic site. Cells demonstrating chondroid metaplasia showed positive IHC staining for S100 proteins (Figure 6). Inflammatory cells, mainly lymphocytes, neutrophils, and single multinucleated macrophages, were present in the tunica adventitia of the conduits. Inflammatory cell infiltration of the graft wall was also revealed near the suture lines. Strands of myofibroblastic cells occasionally spread deep into the conduit wall from the adventitial side.
The luminal surface of Contegra was covered by an uneven neointimal layer without the involvement of the cusps. The maximum thickness of the neointima was found in the area of the distal anastomosis (~1070-1740 µm), and elsewhere it was in the range of ~130-760 µm. In the conduit explanted after 1.6 months of follow-up (from the pig that died of non-conduit-related causes), a spread of the neointimal layer was noted in the area of both anastomoses, with the central part of the conduit devoid of fibrous tissue. Fibroblasts were present in the neointimal layer in all grafts, predominantly in the anastomotic area. A monolayer of endothelial cells partially covered the neointima at the distal and proximal anastomotic sites. The adjacent native PA preserved its endothelial layer. The Contegra cusps appeared preserved, with the correct fiber direction. In two cases, fibrin deposits and thrombi were revealed on both inflow and outflow surfaces of the valves. Cusp calcification was not observed.
The GA-BJV wall showed signs of edema with fragmented fibers. The wall was infiltrated with inflammatory cells (lymphocytes, neutrophils, granulocytes, multinuclear macrophages, and a small number of eosinophils). Inflammation was most pronounced at the anastomotic sites and on the border of the neointima. Substantial calcification was revealed in the GA-BJV wall, with the largest calcium deposits near the suture lines (Figure 4C). The area of calcification was related to the elastin fibers (Figure 4F,I). Neointimal tissue covered the inner surface of the GA-BJV unevenly; it was absent in the central area and well defined at the anastomotic sites (~770-1380 µm). Fibroblasts were observed in the neointima near the luminal surface and suture lines. The endothelium was absent from the inner surface of the GA-BJV grafts (Figure 4L). The leaflet structure was preserved; however, the fibers were fragmented. Mineralization foci were observed near the bases of the valve cusps. Foci of resolving inflammation with the formation of a fibrous capsule and neoangiogenesis were revealed in the adventitia. The native PA was infiltrated with inflammatory cells; it had a preserved endothelial layer, and calcifications were present at the anastomotic sites.
Three conduits from the animals with endocarditis were found to contain numerous bacterial or fungal colonies. In the conduit wall, degenerative changes were observed, characterized by a reduction in collagen and elastic fibers and productive inflammation with an abundance of neutrophils and macrophages. Granulation tissue with neovascularization, sclerosis, lysis, and lymphocytic and histiocytic infiltration was revealed in the anastomotic zones. Thrombi with widespread inflammatory infiltration and colonies of bacteria and fungi were adjacent to the luminal surfaces of the grafts and leaflets, on both inflow and outflow surfaces. The valve matrix showed degenerative changes and lymphocytic cell infiltration. Necrotic foci were observed at the bases of the cusps.
Scanning Electron Microscopy
The SEM imaging and elemental mapping of the observed fields revealed that the DE-treated samples were free from mineral deposits throughout the section. The detected calcium level matched the baseline level, whereas the signal corresponding to phosphorus was too weak to be accurately reflected on the map (Figure 7). Large crystal clusters with high calcium and phosphorus contents were found in the GA-treated samples. Although these conglomerates had hard mineral structures, their shapes resembled the direction of the elastin fibers in the replaced tissue (Figure 7). A similar picture was seen in the Contegra conduits. According to the EDS maps and reference scan images, the sites of mineralization were spread along the elastin fibers (Figure 7). Calcium to phosphorus ratio values for clusters in Contegra and GA-BJV were >1.5:1, which defines them as apatites.
Discussion
We assessed the structural changes in DE-BJV grafts and GA-cross-linked conduits implanted in the PA position in the pig model. In this study, the DE-BJV showed acceptable performance during the 6-month follow-up period. These conduits retained a smooth and clean inner surface without thrombosis, significant degeneration, or calcification of the graft walls and leaflets. The DE-BJV leaflets were not deformed and their structures were preserved.
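As a side note on the SEM/EDS observation above (a Ca/P ratio above ~1.5 taken as indicating apatite), the conversion from weight-percent readings to an atomic Ca/P ratio is a simple molar-mass calculation. The sketch below shows that calculation; the example weight percents are invented for illustration and are not measured values from this study.

```python
CA_MOLAR_MASS = 40.078   # g/mol
P_MOLAR_MASS = 30.974    # g/mol

def ca_p_atomic_ratio(ca_wt_pct: float, p_wt_pct: float) -> float:
    """Convert EDS weight-percent readings to an atomic Ca/P ratio."""
    return (ca_wt_pct / CA_MOLAR_MASS) / (p_wt_pct / P_MOLAR_MASS)

def looks_like_apatite(ratio: float) -> bool:
    """Crude screen: apatitic calcium phosphates cluster around Ca/P ~1.5-1.67."""
    return ratio > 1.5

# Hypothetical reading from a mineralized cluster in a GA-treated wall.
ratio = ca_p_atomic_ratio(ca_wt_pct=24.0, p_wt_pct=11.0)
print(round(ratio, 2), looks_like_apatite(ratio))
```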
In our experimental series, DE-BJV conduits showed good graft integration. All DE-BJV grafts were characterized by remodeling of the prosthesis tissue with even germination of the wall by myofibroblastic cells, the replacement of collagen fibers with soft, non-deforming connective tissue, and active angiogenesis. Moreover, the migration of fibroblasts into the leaflet was noted in some grafts. In Contegra conduits, the germination of myofibroblasts extending from the adventitia into the depth of the wall was less pronounced. A dense fibrous capsule with remaining foci of resolving inflammation was detected in the adventitial layer of the GA-BJV; however, there were few myofibroblasts in the graft walls. We believe that these findings are due to DE toxicity being lower than GA toxicity [17,19,25]. The cytotoxic properties of GA have been previously described [11,12,19,26]. The formation of a fibrous capsule around the GA-BJV may play a role in isolating the graft from the surrounding tissues.
Cytotoxic properties of the treatment may also be reflected in the degree of endothelialization of the luminal surface of the graft.In our study, a well-developed continuous endothelial layer was found in all DE-BJVs (Figure 4J) but not in the GA-preserved conduits.Endothelial cells were differentiated via Russel-Movat pentacrome staining.In the Contegra conduits, the endothelial single layer was fragmentarily present in the proximal and distal anastomotic sites, which is consistent with previous findings [27].Endothelial cells were completely absent from the luminal surface of the GA-BJV conduits (Figure 4L).GA released from the prosthesis tissues may inhibit the growth and metabolism of endothelial cells [13,14,28,29].Simultaneously, DE had no cytotoxic effects on endothelial cell cultures, corroborating the results obtained [25].Similar results have been demonstrated in studies on biomaterials cross-linked with polyepoxy compounds [30,31].A normally developed and well-functioning endothelium prevents neointimal hyperplasia by suppressing its initial triggers, inhibiting the proliferation and migration of intimal smooth muscle cells, and decreasing the inflammatory response and clot formation [32]. The tendency toward neointimal hyperplasia was reduced in the DE-BJV conduits, compared to that in the GA-BJV, in our cohort.Most DE-BJVs had an even neointimal layer, with slight thickening at the suture lines.Neointimal hyperplasia at the distal anastomosis was noted in only two DE-BJV grafts.In both conduits, we observed the accumulation of foreign-body giant cells in the adventitia near the distal anastomosis and in the subneointimal region.In our opinion, the reaction to a foreign body in both conduits indirectly provoked neointimal proliferation due to the deformation of the conduit wall in the areas of giant-cell granulomas, and dense fibrous tissue formed along their periphery.We believe that the signs of reaction to a foreign body observed in both cases were a consequence of individual immune responses in these animals.Moreover, we did not observe any signs of reaction to a foreign body in other explanted DE-BJV conduits, which supports the impact of the immune response characteristics of each animal. The explanted Contegra was characterized by a less even neointimal layer, deforming the wall of the conduit, and hyperplasia at the distal anastomotic site.Manifestations of neointimal hyperplasia were detected as early as 1.6 months after Contegra implantation.In GA-BJV conduits, the neointima did not completely cover the inner surface, hyperplasia was noted at the anastomotic sites, and the neointimal layer was completely absent in the middle part of the conduit.We attributed the neointimal hyperplasia in these groups to another manifestation of the cytotoxic effects of GA.On the one hand, a thicker intimal layer seems to be required to isolate the biomaterial containing residual GA [33].On the other hand, GA cytotoxicity significantly inhibits cell migration and viability [11,26], preventing the formation of a continuous neointimal layer in the GA-BJV during follow-up. Furthermore, the mechanical and hydrodynamic mechanisms at the anastomotic sites of the conduits may provoke neointimal hyperplasia [34,35].This assumption was supported by the uneven thickness of the neointimal ridge along the distal anastomosis.The hyperplastic neointima was most pronounced along the lesser curvature of the distal part of the conduit.Peivandi et al. 
reported on the relationship between hyperplasia of the neointimal layer and the degree of elastin degradation, without any correlation with the duration of Contegra implantation in the PA position [36].The authors concluded that the increasing degeneration of the elastin fiber network led to the progressive stiffness and rigidity of the graft, which contributed to neointimal hyperplasia.We did not obtain similar results in our experiments.Explanted BJV conduits, with and without neointimal hyperplasia, generally demonstrated retained elastin fibers. Our previous experimental studies using a subcutaneous rat model showed that the degree of calcification of the BJV wall can be reduced by substituting GA with DE [15].According to previous studies, GA-treated tissues are prone to early calcification [6,12,15].The GA-BJV has demonstrated extensive calcium deposition in the wall as early as 3 months after implantation in sheep [37,38].In clinical practice, over 60% of the explanted Contegra conduits had a pronounced calcification of its structures, including diffuse lesions [36].Simultaneously, epoxy compounds significantly reduce the degree of calcium accumulation in biomaterials [10,15,16,30].This experimental series confirmed a significantly lower tendency of mineralization in DE-BJV conduits in either Contegra or GA-BJV grafts.In our series, the DE-BJV wall was free of calcified lesions, and only a few prostheses had small calcium deposits at the suture sites.At the same time, calcium deposition was observed in the PA wall along the suture lines.We concluded that the suture material triggered calcification in this area.The GA-treated conduits were characterized by significantly more pronounced mineralization.In our series, Contegra and GA-BJV grafts had both large mature calcium deposits and foci of calcification along the course of elastin fibers (Figure 4).Moreover, calcified elastin fibers were detected in the Contegra conduit explanted after 1.6 months of follow-up.Our data further confirm that the mineralization of a BJV treated with GA begins with elastin fibers [15].This finding was further supported by the SEM images and elemental analysis.The elemental maps gathered from the surface of the dried samples showed that the sites of calcium accumulation resemble the structure of elastin fibers, which could be differentiated on the scan images.SEM analysis also revealed the presence of large amounts of phosphorus in these mineral clusters, suggesting that they are apatite crystals (Figure 7).In addition, in our series, the mineralization of the GA-BJV was more extensive than that of Contegra, which may reflect the effect of a commercial storage solution [18,39]. Our previous study applying a subcutaneous rat model demonstrated that the DE treatment of a BJV inhibits the calcification of collagen, but does not affect elastin mineralization [15].In this study, we found no evidence of calcium accumulation in either collagen or elastin in the DE-BJV wall after a 6-month follow-up period.This finding suggests that DE nevertheless contributes to slower elastin calcification compared to GA.The discrepancy between the results of the aforementioned studies is due to the different animal models. 
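The elemental analysis mentioned above, where calcium and phosphorus co-localize in the mineral clusters, is usually interpreted by checking the Ca/P atomic ratio against that of stoichiometric hydroxyapatite (Ca10(PO4)6(OH)2, Ca/P ≈ 1.67). The short sketch below only illustrates that arithmetic; the EDS atomic percentages in it are hypothetical and are not data from this study.

```python
# Illustrative only: hypothetical EDS atomic percentages, not measured data.
HYDROXYAPATITE_CA_P = 10 / 6  # ~1.67, stoichiometric Ca/P of Ca10(PO4)6(OH)2

def ca_p_ratio(ca_atomic_pct: float, p_atomic_pct: float) -> float:
    """Return the Ca/P atomic ratio from EDS atomic percentages."""
    return ca_atomic_pct / p_atomic_pct

# Example with made-up numbers for one mineral cluster:
ratio = ca_p_ratio(ca_atomic_pct=21.5, p_atomic_pct=13.0)
print(f"Ca/P = {ratio:.2f} (hydroxyapatite ~ {HYDROXYAPATITE_CA_P:.2f})")
if abs(ratio - HYDROXYAPATITE_CA_P) < 0.2:
    print("Ratio is consistent with an apatite-like mineral phase.")
```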
Thrombosis of the DE-BJV conduit was not observed.Contegra conduits were characterized by the presence of thrombotic deposits, but this did not compromise the performance of the prosthesis.In an experimental series of GA-BJV conduits, thrombosis was detected in 18% of experimental animals after 8 months [40].Our data support the antithrombogenicity of the DE-BJV.Epoxy compounds impart greater hydrophilicity to biomaterials, which leads to a decrease in tissue thrombogenicity [10,30,41].In addition, DE-BJV conduits achieved good endothelialization in our series [32].However, we preventively prescribed anticoagulants and antiplatelet therapy to all animals because of the tendency of pigs to have hypercoagulability [42,43], which also reduced the risk of thrombosis in our series. We noted that the Contegra group was characterized by hyalinosis, and subintimal chondroid metaplasia with signs of calcification both in the graft wall and PA in the anastomotic area (Figures 4B and 6).These transformations were confirmed via Von Kossa staining and IHC staining for S100 protein [44][45][46].Previously, Peivandi et al. described foci of heterotopic ossification in Contegra conduits 9 years after implantation [36].Chondroids and bone metaplasia have also been detected in other biomaterials treated in various ways [33,47], such as aortic allografts [48], stenotic native heart valves [49,50], and biodegradable polymeric vascular grafts [51][52][53].Myofibroblasts, which differentiate into chondrocytes, may play a key role in the phenomenon of chondroid metaplasia and subsequent heterotopic ossification [33,50].However, the reasons for this differentiation remain unclear.Inducing factors include local hypoxemia, due to the increased stability of hypoxia-inducible factor-1α, the expression of bone morphogenetic proteins, vascular endothelial growth factor and neuropilin-1, and inflammation with the release of cytokines that promote cartilage proliferation, as well as mechanical cues [33,49,54,55].Subsequently, cartilage tissue can be replaced with heterotopic bone [49,50,55].The pathogenesis of hyaline production entails the infiltration of plasma proteins like apolipoprotein E, Ig G, α2-macroglobulin and fibrinogen [56], which accumulate within the vascular wall, and induce the formation of fibrinoid conglomerates with subsequent hyalinization.These processes represent the consequence of the increased permeability of the vascular wall.Aseptic inflammation, attendant to the implantation of a sterile xenogenic graft, further increases the probability of the aforementioned scenario.Herein, hyalinosis and chondroid metaplasia were found only in animals with implanted Contegra grafts.Simultaneously, pathological foci were localized subintimally in the conduit, and at the anastomotic site in the PA.No similar changes were observed in other BJV conduits.Considering the present and previous findings, we propose that the possible inducing factors were a combination of material processing components.However, further studies are required to elucidate these mechanisms and their role. 
Previous studies have reported a high incidence of endocarditis in BJV conduits treated with GA, which limits their use [2,8,9]. In our experiment, endocarditis developed in three animals during the observation period (one each in the DE-BJV, Contegra, and GA-BJV groups). We cannot extrapolate the incidence of endocarditis in our animal series to that associated with the use of DE-BJVs in humans, because the animal model has hygiene and wound-care limitations: it is difficult to keep the postoperative suture sterile in an awake animal and to prevent damage to it in the postoperative enclosure. Endocarditis has occurred in most experimental series of conduit implantations in the pulmonary position in animal models [37,38,57]. However, good endothelialization of the conduit may reduce the risk of subsequent endocarditis in humans.

This was an experimental animal study, and its findings cannot be fully extrapolated to humans. However, the similarity of cardiac anatomy and hemodynamics allows us to build a concept of how the DE-BJV conduit would function in humans. It should be noted that we cannot fully project our results onto patients with high pulmonary hypertension or anatomical anomalies of the pulmonary arteries, because we implanted the conduit in healthy animals with well-developed pulmonary vessels. Another limitation of our study was the 6-month follow-up, but structural differences between the conduits were already identified by the end of this follow-up period.

Conclusions

The DE-BJV conduits showed encouraging results in a porcine model for up to 6 months and are an acceptable alternative to GA-treated grafts. DE-BJV conduits demonstrated good integration without the development of significant degenerative changes. Good endothelialization of the inner surface and a low tendency toward thrombosis and calcium accumulation in the wall and leaflets are the advantages of these grafts. Meanwhile, Contegra and the GA-BJV showed a tendency toward mineralization, with foci of calcification mainly along the course of elastin fibers. In GA-treated conduits, the endothelial layer was either fragmented or absent. In addition, foci of hyalinosis and chondroid metaplasia were found in the Contegra wall and adjacent PA; further studies on this phenomenon will help elucidate these processes.

The present findings suggest that the treatment of the BJV with DE reduces the risk of dysfunction and increases the durability of the conduit for RVOT reconstruction. Thus, DE-BJV conduits may improve the outcomes of surgeries for complex CHD in children. However, further experimental and clinical studies are required to comprehensively evaluate the performance of the DE-BJV.

Figure 2. Implantation of a conduit in the pulmonary position: (A) left thoracotomy and pulmonary trunk mobilization (the green arrow); (B) graft implantation and the creation of the proximal anastomosis between the PA and the BJV conduit, where the blue arrow indicates the BJV conduit; (C) a BJV conduit in the pulmonary position, where the blue arrow indicates the BJV conduit.
Figure 3. Macrophotographs of explanted conduits after 6 months of follow-up. (A) A DE-BJV; the black arrow indicates a thin neointimal layer, and the green arrow indicates intact valve leaflets. (B) A DE-BJV; the black arrow indicates neointimal hyperplasia. (C) A Contegra graft; the black arrow indicates thick neointima, and red arrows indicate fibrin and thrombotic masses. (D) A GA-BJV; the black arrow indicates substantial neointimal hyperplasia, and the purple arrow indicates a deformed leaflet, partially fused with the conduit wall.

Figure 5. Microphotographs of a DE-BJV graft; H&E staining: (A) valve leaflet; green arrows indicate migrating fibroblasts. (B) Site of mineralization; black arrows indicate mineral clusters and the blue arrow indicates a suture hole. (C) A giant-cell granuloma; orange arrows indicate giant cells, and the yellow asterisk indicates granuloma tissue. Magnification, ×400; bar, 10 µm.

Table 1. Baseline and intraoperative characteristics.
9,714.6
2023-11-01T00:00:00.000
[ "Medicine", "Engineering" ]
An experimental investigation into enhancing oil recovery using combination of new green surfactant with smart water in oil-wet carbonate reservoir

Enhancing oil recovery from oil-wet carbonate reservoirs is an important challenge worldwide, especially in Middle East oil fields. Surfactants and smart water can change the interfacial tension and the wettability of this type of rock from oil-wet to water-wet. The present study reports experimental work on the combination of a new green surfactant with smart water to enhance oil recovery from an oil-wet carbonate rock. Wettability alteration and IFT reduction by surfactant, smart water, and the combination of surfactant with smart water were investigated experimentally. The results show that preparing the surfactant solution with smart water can reduce residual oil saturation by reducing the IFT and altering the wettability. The oil recovery factor at the end of water, surfactant, and surfactant–smart water flooding was 36, 52, and 66%, respectively, which shows that combining the surfactant with smart water makes the surfactant more effective.

Introduction

Industrial development increases world energy demand to keep growing industries running. To meet this demand, it is essential to grow hydrocarbon reserves and production. Oil production can be increased by developing mature reservoirs or by discovering new ones; in most cases it is more economical to develop mature reservoirs (Mohsenatabar Firozjaii et al. 2019). Increasing oil production from mature reservoirs is therefore of great interest to petroleum industry researchers. Several methods exist to enhance oil recovery from mature reservoirs after the primary and secondary production stages (Onyekonwu and Ogolo 2010). These methods are called enhanced oil recovery (EOR) processes. EOR processes are classified into four main groups: thermal, chemical, miscible gas, and microbial (Sheng 2010). Although these methods are expensive and not always effective, researchers pay particular attention to increasing oil production with these techniques, and tertiary recovery can still be profitable if oil prices are sufficiently high (Hirasaki and Zhang 2004). Studies of EOR show that 11% of EOR projects worldwide are focused on chemical enhanced oil recovery (Rellegadla et al. 2017). Chemical enhanced oil recovery (CEOR) methods are beneficial for light oil reservoirs (Alvarado and Manrique 2010). Generally, CEOR is applied in the formation with two main goals: reducing the interfacial tension (IFT) between the reservoir oil and the injected fluid, and improving the sweep efficiency of the injected fluid by decreasing its mobility (Shah 2012). IFT reduction and wettability alteration are achieved by adding surface-active materials (Xie et al. 2005). Optimizing oil recovery strongly depends on the wettability of the reservoir formation and on the IFT between oil and water on the rock surface. When an enhanced oil recovery method, particularly one related to water flooding, is considered for a reservoir, the surface wetting condition of the formation influences the performance of the process and determines the final recovery and cost (Hirasaki and Zhang 2004). When two immiscible fluids are present on a solid surface, one of them tends to spread on the surface in preference to the other. When a solid–oil–water system is created in porous media, the two liquid phases are in balance with the solid surface (Mittal 2009).
As shown in Fig. 1, the balance of the solid–oil–water system is described by Young's equation:

σso = σsw + σwo cos θ

where σsw is the IFT between the water and the solid, σso is the IFT between the oil and the solid, and σwo is the interfacial tension between the oil and the water. θ is the contact angle measured through the water phase. In a reservoir where oil and water coexist, the system may be water-wet or oil-wet. In an oil-wet system, the oil phase spreads on the grain surfaces while the water phase occupies the pore bodies (Mittal 2009). A reservoir may have a mixed-wet condition when smaller pores are filled with water and are water-wet, whereas larger pores are filled with oil and are oil-wet (Kathel and Mohanty 2013). As shown in Fig. 2, the wettability of a solid–liquid system is often characterized using the contact angle of the liquid phase on the solid surface (Sun et al. 2017). Wettability has a significant effect on the production of oil and gas. By changing the wettability from oil-wet to water-wet, or by reducing the IFT, the residual oil saturation decreases. Adding a surface-active material such as a surfactant, or changing the surface condition of the rock, provides more oil recovery (Sun et al. 2017). When a surfactant is dissolved in water and injected into the reservoir, the interfacial tension (IFT) is decreased. As shown in the equation below, IFT reduction increases the capillary number:

Nc = μ u / σ

where σ is the IFT, μ is the fluid viscosity, and u is the fluid velocity. Surfactant flooding is a popular chemical enhanced oil recovery method, and several researchers have studied it for oil-wet rock (Strand 2003; Seethepalli et al. 2004; SayedAkram and Mamora 2011; Firozjaii et al. 2018). Nowadays, natural surfactants have become a substitute for industrial surfactants because they are environmentally friendly and cheap to produce (Ghahfarokhi et al. 2015; Mehdi et al. 2015). Recently, some researchers have focused on combining nanoparticles with surfactants to improve oil recovery and surfactant flooding efficiency (Bagrezaie and Pourafshary 2015; Emadi et al. 2017). The use of surfactants is not limited to CEOR processes: such surfactants can also be used in hydraulic fracturing or completion fluids owing to their wettability alteration and IFT reduction (Zhang et al. 2018). Recently, more attention has been paid to advanced water floods for oil-recovery enhancement in reservoir engineering. Brine flooding can produce oil from porous media (Ligthelm et al. 2009), and adding certain ions can improve the efficiency of water flooding (Tang and Morrow 1999). The salinity of the injection water can change the rock surface wettability, as a surfactant does (RezaeiDoust et al. 2009). Low-salinity and smart water flooding were introduced more than two decades ago. In this case, the presence of divalent ions (Ca2+, Mg2+, and SO4 2−) changes the wettability of a carbonate rock from oil-wet to water-wet (Fathi et al. 2010). Based on experimental results, wettability alteration has been proposed as a key reason for the improvement in oil recovery (Strand et al. 2008; Yousef et al. 2011; Fathi et al. 2012; Puntervold et al. 2015). Figure 4 shows the chemical mechanism of wettability modification during seawater injection. Therefore, choosing the best concentration of ions in the water can control the wettability alteration, or contact angle (Yousef et al. 2011). Smart water is essentially seawater whose composition has been optimized in terms of ionic composition and salinity (Awolayo et al. 2014).
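As a rough numerical illustration of the two relations above (Young's equation and the capillary number), the following sketch computes a contact angle from assumed interfacial tensions and a capillary number from assumed flooding conditions. All input values are hypothetical; the IFT values only loosely mirror the roughly 30 to 11 mN/m reduction reported later in the paper.

```python
import math

def contact_angle_deg(sigma_so: float, sigma_sw: float, sigma_wo: float) -> float:
    """Young's equation: sigma_so = sigma_sw + sigma_wo*cos(theta),
    with theta measured through the water phase. Tensions in mN/m."""
    cos_theta = (sigma_so - sigma_sw) / sigma_wo
    cos_theta = max(-1.0, min(1.0, cos_theta))  # guard against rounding
    return math.degrees(math.acos(cos_theta))

def capillary_number(viscosity_pa_s: float, velocity_m_s: float, ift_n_m: float) -> float:
    """Capillary number Nc = mu*u/sigma (dimensionless)."""
    return viscosity_pa_s * velocity_m_s / ift_n_m

# Hypothetical example values:
theta = contact_angle_deg(sigma_so=25.0, sigma_sw=10.0, sigma_wo=30.0)
nc_before = capillary_number(viscosity_pa_s=1e-3, velocity_m_s=1e-5, ift_n_m=30e-3)
nc_after = capillary_number(viscosity_pa_s=1e-3, velocity_m_s=1e-5, ift_n_m=11e-3)
print(f"theta = {theta:.1f} deg, Nc rises from {nc_before:.2e} to {nc_after:.2e} as IFT drops")
```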
Some research has addressed modified seawater as smart water for EOR (RezaeiDoust et al. 2009; SayedAkram and Mamora 2011; Seethepalli et al. 2004; Shah 2012). For example, Webb et al. (2005) carried out a comparative experiment on oil recovery from a carbonate core using seawater containing SO4 2− at reservoir conditions and a simulated SO4 2−-free brine. It was determined that wettability alteration of the carbonate rock by the SO4 2− ion is responsible for the saturation changes (Webb et al. 2005). Extensive laboratory research has been carried out to investigate EOR from chalk using surfactant solutions and, later, modified seawater. The sulfate ion must act together with Ca2+ and Mg2+, because sulfate alone is not able to increase spontaneous imbibition. The results showed that wettability alteration toward more water-wet conditions is the main reason for EOR with these seawater ions (Fathi et al. 2010). In the present study, wettability alteration and IFT reduction in an oil-wet carbonate rock are investigated using a combination of smart water and a new green surfactant. The contact angle measurement method is used to detect wettability alteration, and the pendant drop method is employed to measure the change in IFT. Core flooding is then applied to a rock plug at the optimum concentrations of surfactant and smart water.

Materials

The crude oil used in this study was provided from the Ahwaz oilfield in Iran. This oil is classified as a light crude with an API gravity of approximately 32.6. Table 1 shows the properties of this crude oil. The synthetic brine was composed of a mixture of salt (NaCl) and de-ionized water at 100,000 ppm. A new type of nonionic surfactant called dodecanoylglucosamine, which also has medical applications, was considered for wettability alteration and IFT reduction. This surfactant was synthesized by Mosalman Haghighi et al. (2018) at the Petroleum University of Technology (Abadan, Iran) from a mixture of glucosamine, methanol, and dodecanoyl chloride (Omid et al. 2018). Figure 5 shows the structure of this green surfactant. For preparing the smart water, the salts NaCl, MgCl2·6H2O, NaHCO3, CaCl2·2H2O, and Na2SO4, with a purity higher than 99%, were provided by the German company Merck. The rock plug sample was obtained from the Asmari formation. Although most carbonate rocks have low permeability, the sample used in this project is porous and permeable. The properties of the plug are summarized in Table 2.

Surfactant solution preparation

Solutions of the surfactant in distilled water were prepared in the range of 100 to 10,000 ppm. The pH and conductivity of the solutions were measured using pH and conductivity meters; these measurements were used to determine the CMC of the surfactant. Interfacial tension measurements were performed to obtain the tension between the two liquid phases at ambient temperature and pressure; all IFT tests were performed at an ambient temperature of 25 °C. The pellets were placed in oil at 85 °C and aged with the oil for 2 weeks. The wettability of each pellet was then measured by contact angle measurement using a VIT 6000. The pellets were then placed in different concentrations of the surfactant solutions for 10 days, and the wettability alteration on the rock pellets caused by the surfactant solution was measured using contact angle measurement.
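For the concentration series described above (100 to 10,000 ppm surfactant in distilled water), a simple mass-based calculation is usually sufficient, since for dilute aqueous solutions 1 ppm corresponds approximately to 1 mg per litre. The sketch below is only an illustrative helper; the listed concentrations and batch volume are assumptions, not the exact set used in the study.

```python
# Approximate mass of surfactant needed for each target concentration,
# assuming dilute aqueous solutions where 1 ppm ~ 1 mg/L.
targets_ppm = [100, 500, 800, 1000, 2000, 6000, 10000]  # assumed series
batch_volume_l = 0.5  # assumed batch size per solution

for ppm in targets_ppm:
    mass_mg = ppm * batch_volume_l  # mg of surfactant per batch
    print(f"{ppm:>6} ppm -> weigh {mass_mg / 1000:.3f} g into {batch_volume_l} L of water")
```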
Smart water preparation

The synthetic brine solutions were prepared by adding different amounts of MgCl2·6H2O, NaCl, NaHCO3, CaCl2·2H2O, and Na2SO4 to de-ionized water and mixing them in proportions calculated from stoichiometry. First, synthetic seawater (SW) was prepared with a total dissolved solids (TDS) content of 43,091 ppm. Table 3 shows the ion concentrations in the seawater. Then, the concentration of each of the ions sulfate, magnesium, and calcium was modified to 0, 2, 4, and 6 times its seawater value. The contact angle of oil-aged pellets was measured at the different ion concentrations of the modified SW.

Surfactant solution with smart water

After determining the CMC of the surfactant, new surfactant solutions were prepared using modified SW in which the concentration of each of the ions sulfate, magnesium, and calcium was changed to 0, 2, 4, and 6 times its seawater value. The contact angle and IFT of these solutions were measured to detect the effect of the ions on the behavior of the surfactant solution.

Core flooding

The core flooding experiment was performed to evaluate the effectiveness of surfactant, smart water, and the combination of smart water and surfactant in altering the wettability of carbonate reservoirs and improving oil recovery. First, the prepared and cleaned rock plug was mounted in the core holder. An overburden pressure of 2500 psi was set by a hand pump. Brine was injected into the core at different injection rates to determine the absolute permeability. Then, crude oil was injected into the core at various rates to reach connate water saturation. The connate water and initial oil saturations of the rock plug are presented in Table 2. Three flooding scenarios were applied to the core. First, water flooding using brine was employed to recover oil as secondary oil recovery. Then, the core was removed from the core holder, cleaned, and washed, and the core flooding setup was prepared again to flood surfactant as tertiary oil recovery. The core was then prepared once more, and finally the combination of surfactant with smart water was applied to the core. All floods were performed at a constant rate of 0.2 cc/min.

Surfactant behavior

The CMC of the surfactant solution was obtained from the pH and conductivity measurements. As shown in Fig. 6, the pH of the surfactant solution decreased with increasing surfactant concentration, indicating the acidic behavior of the surfactant. On the other hand, the conductivity of the solution increased with increasing surfactant concentration (Fig. 7). This behavior depends on the chemical structure of dodecanoylglucosamine. The CMC obtained from the pH and conductivity data was 800 ppm. As shown in Fig. 8, the IFT of the surfactant solutions decreased with increasing dodecanoylglucosamine concentration. The results show that this surfactant can reduce the IFT to below 15 mN/m when the concentration is increased beyond 6000 ppm. However, high surfactant concentrations are not applicable under field conditions because of high adsorption on the rock surface and high cost. Therefore, the optimum concentration of this surfactant is taken as the 800 ppm obtained from pH and conductivity, which reduced the IFT to 19 mN/m. Moreover, the effect of surfactant concentration on the wettability was detected by measuring the contact angle. As shown in Fig. 9, the aged pellet was initially oil-wet. As shown in Fig. 10, with increasing surfactant concentration the contact angle decreased and the wettability changed from oil-wet to water-wet. Therefore, the results show that dodecanoylglucosamine can reduce the IFT and change the wettability.
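The CMC determination described above (a break in the conductivity–concentration trend near 800 ppm) can be automated by fitting two straight lines below and above a candidate break point and keeping the split that minimizes the combined fitting error. The sketch below illustrates this approach with synthetic data; it is not the measured data set of this study.

```python
import numpy as np

def cmc_from_conductivity(conc_ppm, conductivity):
    """Estimate the CMC as the concentration where two piecewise linear fits
    (below/above the break) give the smallest combined residual error.
    Returns the first concentration of the upper branch (a coarse estimate)."""
    conc = np.asarray(conc_ppm, dtype=float)
    cond = np.asarray(conductivity, dtype=float)
    best_err, best_cmc = np.inf, None
    for i in range(2, len(conc) - 2):           # need at least 2 points per segment
        err = 0.0
        for seg_x, seg_y in ((conc[:i], cond[:i]), (conc[i:], cond[i:])):
            slope, intercept = np.polyfit(seg_x, seg_y, 1)
            err += np.sum((seg_y - (slope * seg_x + intercept)) ** 2)
        if err < best_err:
            best_err, best_cmc = err, conc[i]
    return best_cmc

# Synthetic example with a slope change around 800 ppm (illustrative only).
c = np.array([100, 200, 400, 600, 800, 1000, 2000, 4000, 6000, 10000], float)
k = np.where(c <= 800, 0.10 * c, 80 + 0.03 * (c - 800))
k = k + np.random.default_rng(0).normal(0, 1, c.size)
print("Estimated CMC ~", cmc_from_conductivity(c, k), "ppm")
```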
Smart water behavior

The contact angle changes were measured for modified ion concentrations of the seawater. As shown in Fig. 11, the contact angle decreased as the concentration of Mg2+ was increased from 0 to 4 times its value in SW, but increased again when it reached 6 times. On the other hand, increasing the concentration of the sulfate ion (SO4 2−) decreased the contact angle continuously. Modifying the concentration of Ca2+ had a different effect on the contact angle compared with the other ions. As shown in Fig. 12, the lowest contact angle occurred at concentrations of 6 SO4 2−, 4 Ca2+, and 4 Mg2+. Moreover, the results show that SO4 2− has a strong effect on the contact angle; this phenomenon is described in Fig. 4. Therefore, the smart water obtained by modifying the seawater ions contains 6 SO4 2−, 4 Ca2+, and 4 Mg2+.

Surfactant combination with smart water

The surfactant solution at a concentration of 800 ppm was combined with the modified seawater. The contact angle and IFT of the solutions were measured to detect the influence of the SO4 2−, Ca2+, and Mg2+ ions on the contact angle when combined with the surfactant. As shown in Fig. 13, the IFT of the surfactant solutions decreased with increasing ion concentration. When the surfactant solution alone was used for IFT reduction, the IFT was 19 mN/m at 800 ppm of dodecanoylglucosamine, whereas the combination of this surfactant with 6 SO4 2− reduced the IFT to 11 mN/m. Moreover, the contact angle decreased with increasing ion concentration. As shown in Fig. 14, the lowest contact angle occurred at ion concentrations of 6 SO4 2−, 6 Ca2+, and 6 Mg2+. Comparing these results with the smart water results of the previous section, it appears that the optimum ion concentrations when combined with the surfactant differ from those of the smart water alone, because in this case the surfactant and the ions change the rock surface wettability together. As shown in Fig. 15, the lowest IFT and contact angle were obtained at this combination; Fig. 16 shows the contact angle of a water drop on a rock pellet at this concentration. It is important to note that when the surfactant was used individually for wettability alteration, the contact angle at 800 ppm was 128°. When this surfactant was combined with modified seawater in which the SO4 2− concentration was increased to 6 times, the contact angle reached 48.52°. Therefore, it can be concluded that the synergy of dodecanoylglucosamine as surfactant with smart water containing a 6-fold SO4 2− concentration is the best choice for enhancing oil recovery, owing to the greater IFT reduction and wettability alteration.

Core flooding

Core flooding was employed to illustrate the effect of surfactant, smart water, and their combination on wettability alteration and IFT reduction. As shown in Fig. 17, water flooding had the lowest oil recovery and the highest residual oil saturation. In an oil-wet carbonate rock, water flooding has low recovery because the water leaves the porous medium while the oil remains in the pores. A surface-active material such as dodecanoylglucosamine reduces the IFT and changes the wettability from oil-wet to water-wet; therefore, the residual oil saturation was decreased by increasing the capillary number, and oil recovery increased compared with water flooding. Furthermore, surfactant flooding using smart water with a 6-fold SO4 2− concentration increased oil recovery compared with water and surfactant flooding, because the synergy of the surfactant and SO4 2− changed the rock surface to a more water-wet condition. As shown in Fig.
17, oil recovery after 3 pore volumes of injection was 36, 52, and 66% at the end of water, surfactant, and surfactant–smart water flooding, respectively. Therefore, it can be concluded that dodecanoylglucosamine improved oil recovery by 16% over water flooding, while the combination of dodecanoylglucosamine with smart water enhanced oil recovery by 30% over water flooding and by 14% over surfactant flooding. Moreover, the pressure drop during water, surfactant, and surfactant–smart water flooding is presented in Fig. 18, which shows that breakthrough occurred close to 0.5 pore volumes injected.

Conclusion

Enhancing oil recovery from mature oil reservoirs is an important challenge for the petroleum industry. Chemical enhanced oil recovery methods such as surfactant and smart water flooding can be applied for EOR in mature reservoirs. In the present study, the feasibility of dodecanoylglucosamine as a new surfactant and its combination with smart water was discussed. The main conclusions of this study can be summarized as follows:

1. The CMC of dodecanoylglucosamine based on pH and conductivity was 800 ppm. The results show that dodecanoylglucosamine can be a good choice for surfactant flooding owing to the IFT reduction from 30.36 to 19 mN/m as the concentration increases from 0 to 800 ppm. Moreover, it can change the wettability, modifying the contact angle from 148.93° to 128.87° over the same concentration range.

2. The results show that modifying the ion concentrations of seawater is effective and can reduce the contact angle. The smart water was therefore obtained by modifying the seawater ions to concentrations of 6 SO4 2−, 4 Ca2+, and 4 Mg2+. The results show that SO4 2− has a large effect on the contact angle compared with the other ions.

3. Preparing the surfactant solution with smart water is more effective than preparing it with brine. The results show that adding ions to the surfactant solution improves IFT reduction and wettability alteration: preparing the surfactant solution with 6 SO4 2− reduced the contact angle from 148.9° to 36.7° and the IFT from 30.36 to 11 mN/m. Therefore, it can be concluded that the combination of surfactant with smart water makes the surfactant more effective, as confirmed by the core flooding test. The core flooding results show that the combination of surfactant with smart water gives higher oil recovery than surfactant or water flooding alone.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
4,573.8
2019-07-23T00:00:00.000
[ "Environmental Science", "Engineering" ]
Detecting Resemblances in Anti-pattern Ideologies using Social Networks A key argument for modelling knowledge in ideologies is the simple reuse of the facts. However, nearby reliability checking, current ideology engineering tools give only essential functionalities for analyzing ideologies. Since ideologies can be considered as graphs, graph analysis techniques are an apt answer for this necessity. The anti-pattern ideology has been recently proposed as a knowledge base for SPARSE, an intelligent system that can detect the anti-patterns that exist in a software project. However, apart from the excess of anti-patterns that are intrinsically informal and vague, the data used in the anti-pattern ideology itself is many times inexactly defined. We exemplify in this paper the benefits of applying social networks to ontologies and the Semantic Web and discuss which research themes happen on the edge between the two particular fields. Particularly, we confer how different ideas of centrality portray the core content and structure of ontology. Overview Anti-patterns symbolize the latest concept in a series of radical changes in computer science and software engineering thinking.As we move towards the 50-year mark in developing programmable digital systems, the software industry has yet to determine some basic problems in how humans interpret business concepts into software applications. An assessment of hundreds of corporate software development projects showed that eight out of ten software projects are considered ineffective.About one third of software projects are cancelled.The remaining projects delivered software that was normally double the expected budget and obtained twice as long to build up as initially planned 9 .These repeated failures is highly valuable, however in that they offer us with practical knowledge of what does not work and during study: why.Such study, in the dialect of Design Patterns can be classified as the study of Anti-patterns. Design Patterns present the most successful form of software control yet available, and the whole patterns progress has gone a better way in codifying a brief terminology for transmitting complicated computer science thinking.Anti-Patterns are a natural addition to design patterns, concentrating on the broad and ever-growing assortment of repeated software problems in an effort to understand, avoid and recuperate from them.Anti-Patterns are new tools that bridge the gap between architectural theories and real-world executions 7 . The design of Anti-patterns is that there are various fruitless behavior modes in software engineering which result in similar problems 9 , and can be treated in various and possible ways.as nonliteral, allegoric, case-based reasoning and may be a documented machine learning technique employed in computer science systems.This approach in drawback finding needs expertise on previous issues and each success and failure should be experienced so as to collect the desired experience for future drawback finding.Moreover, whereas it should be fruitful to check prospering ways that developers solve issues mistreatment style patterns these mechanisms haven't been employed in project management as they will not accommodate such advanced descriptions. 
The most common mistake created with style patterns, is that the application of a specific style pattern within the wrong context or setting.Antipatterns redefine the idea of style patterns in a very new type that makes an attempt to resolve this downside by providing careful templates 6 that state the causes, symptoms and consequences of an anti-pattern.Finding out failures and learning from mistakes could be a much more appropriate approach for project management.Antipatterns are the primary mechanisms that adopt a "negative solutions" perspective at software package development and are the primary to simply accept and handle the potential of a software package project for failure 7 .The subsequent parts during this section discuss the motivation behind the study of antipatterns, describe the contribution of the paper and at last present the organization of this paper. The motivation behind why Software Project Anti-pattern Knowledge Management is the subject of this proposition is on account of it remains logically unexplored, as well as on the grounds that anti-patterns can encode and oversee venture administration, as well as any sort of programming advancement information that prompts reoccurring tricky practices or dangerous results.It makes considering the route with which managers and programming experts can adequately utilize anti-patterns an important scholastic wander.Helpful research in this bearing is significant in giving a specialized answer for the issues that as of now torment anti-pattern research. An alternate predominating reason is that few issues identified with anti-patterns have not been dissected. Notwithstanding of the extensive number of accessible antipatterns, utilizing anti-patterns as a part of programming task administration remains tricky because of an arrangement of issues that distress the antipattern research.The primary issue that forbids the more extensive reception of anti-patterns is the absence of a regular vocabulary 4 of terms that might be utilized between individuals and programming equipment.The absence of formalization does not permit the configuration of models, architectures and programming frameworks that could profit the engineering of anti-patterns. Anti-Pattern Models By creating anti-pattern formalisms, the learning encoded in antipatterns could be formalized and handled by programming tools.At a learning representation level, the anti-pattern ideology can focus the classifications of things that exist in an anti-pattern and set the ontological duties of the task supervisor, framework planner or requisition architect 1 .Both of the recommended formalisms offer a medium for proficient reckoning on the grounds that they don't just speak to learning; additionally encode data in a structure that could be transformed productively by programming.Moreover both BBNs and ideology offer a medium for human representation 3 that might be utilized by project managers within request to have a regular vocabulary of terms with a strong scientific establishment. The anti-pattern ideology encodes implied software project management information into a computer readable structure and permits the offering and reuse of this knowledge by programming tools.Moreover, the issue of catching and quantifying doubt in the antipattern ideology is tended to by including the ideas of anti-pattern BBN models and their corresponding OWL ontology in the outline of the non-specific antipattern ideology. 
Related Literature Expert systems have been extensively used in most dissimilar settings including teaching 12 , medicine 7 and security decisive systems 4 .The World Wide Web contains plentiful examples of expert systems.However, expert systems are not a solution and can be wrong 11 .Adams 2 has presented some thoughts for the expert system design that want to be addressed when the system is used via the web.The author concludes that the viability of giving expert system capabilities over the web depends upon the exact situation for which the expert system is developed. Four primary classes of antipatterns have been recognized in the writing: • Development Anti-patterns: Describe specialized issues and results that are experienced by programmers.Software project management anti-patterns can deal with all parts of a software project more successfully by bringing understanding into the reasons, manifestations, results, and by giving great repeatable results 8 . Four group 11 formats is a more official layout that further incorporates different components, for example, the inspiration, members, related examples, known utilization and coordinated efforts. In any case, it is impossible that each of the three events could be determined in the same way 5 .A more detailed report is the Mini-Antipattern template, which explains the two solutions of the anti-pattern, the difficult answer and the re-factored result 13 . Anti-pattern Interrelationship Anti-patterns can seem separated yet can likewise be connected with different anti-patterns.The later sort is alluded to as cooperating anti-patterns 9 and is clear when a project management anti-pattern causes a software advancement anti-pattern or a construction modelling anti-pattern.It is vital to comprehend the extravagance of anti-pattern interrelationships so as to have the ability to determination anti-patterns exclusively as well as location them as an assembly of interrelated anti-patterns.This proposition keeps tabs on software project management anti-patterns however considers the way that these anti-patterns could be connected with different sorts of anti-patterns.Anti-patterns could be connected through the properties of following Table 1. Causes A list which recognizes the causes of this antipattern. Symptoms A list which comprises the observable symptoms of this antipattern. Consequences A list which comprises the penalties that result from this antipattern.Figure 1 outlines a sample relationship between 3 separate anti-patterns, through their reasons, manifestations and outcomes.These are the primary qualities of an anti-pattern that characterize how an anti-pattern is connected with different anti-patterns. 
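One way to operationalize the interrelationships sketched above (anti-patterns linked through shared causes, symptoms, and consequences, as in Figure 1) is to build a graph whose nodes are anti-patterns and attributes, and then use centrality measures, as the abstract suggests, to see which attributes tie the anti-patterns together. The snippet below is a minimal, hypothetical sketch using networkx; the anti-pattern names are well-known examples and the attribute links are invented, not the ontology's actual content.

```python
import networkx as nx

# Hypothetical bipartite graph: anti-patterns <-> shared causes/symptoms/consequences.
edges = [
    ("Death March", "unrealistic schedule"),        # cause (invented link)
    ("Death March", "team burnout"),                # consequence (invented link)
    ("Mushroom Management", "poor communication"),  # cause (invented link)
    ("Mushroom Management", "team burnout"),        # consequence (invented link)
    ("Analysis Paralysis", "unrealistic schedule"), # consequence feeding back as cause
    ("Analysis Paralysis", "endless revisions"),    # symptom (invented link)
]

g = nx.Graph()
g.add_edges_from(edges)

# Betweenness centrality highlights attributes that connect several anti-patterns.
for name, score in sorted(nx.betweenness_centrality(g).items(), key=lambda kv: -kv[1]):
    print(f"{name:25s} betweenness = {score:.3f}")

# Anti-patterns sharing at least one attribute are two hops apart in this bipartite graph.
related = nx.single_source_shortest_path_length(g, "Death March", cutoff=2)
print("Related to Death March:", [n for n, d in related.items() if d == 2])
```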
Methodology Here our methodology is that ideology based method to characterize and apply the properties of hostile to examples, bad code smells, refactoring (OABR), and the relations between them.We have gathered, sorted out and arranged the properties of the related ideas and we extended the properties for bad code smell by making a quality list used to prioritize bad code smells with the objective of giving backing to recognizing which bad code smells ought to be evacuated, or endured.We then made layouts dependent upon properties for extra bad code smells and refactoring examination.We likewise created scientific categorizations for against examples, refactoring and bad code smells to give progressive characterizations.Here additionally we have demonstrated the phrasings and relations spoke to by the essential Descriptive Logics (DL) used to characterize ideology dialect.We created an OABR base including hostile to examples, related software issues and location dependent upon the properties, taxonomy and non-taxonomy relations.At last, we portray making, gaining entrance to, saving, questioning and mapping of OABR with the ontological devices, stages and ideology registries/ repositories. Antipattern Properties Numerous properties of existing anti-patterns were characterized by 2 and 10 independently.In this exploration, we chose properties, for example, name, causes, results, symptoms and refactoring.Different properties, for example, root causes, variations, background and general structures are not included as they are dependent on the developer's personal experiences.Additionally, an excess of properties for a particular idea will expand the multifaceted nature as per the essential standard that -the more expressive the language, the harder the thinking 11 . Refactoring Properties We characterized the refactoring properties as name, situation, and mechanics.The name of a refactoring generally comprises of an operation and an object.Case in point, for the 'Remove Middle Man refactoring' , Remove is an operation while -Middle Man is an item.The situation property gives portrayal to each one refactoring about when it will be connected.The mechanics property depicts how to apply methods step by step for every refactoring to tackle the related issue. Bad Code Smell Properties A goal of this research is to give a more formal and predictable documentation of properties to make every bad code smell less demanding to recognize and distinguish.Current terms of bad code smells are depicted and sorted out in a fairly casual and conflicting way.We analysed initial properties of bad code smells as name, symptoms, measurements and refactoring.Symptoms property depicts how to search a bad code smell. 
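To make the property lists above concrete (anti-patterns with causes, symptoms, and consequences; bad code smells with symptoms and metrics; refactorings with name, situation, and mechanics), a small RDF/OWL-style sketch can encode them as classes and properties. The snippet below uses rdflib and is purely illustrative: the namespace, class names, property names, and the single example individual are assumptions, not the actual OABR ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

OABR = Namespace("http://example.org/oabr#")  # hypothetical namespace
g = Graph()
g.bind("oabr", OABR)

# Core classes (assumed structure, mirroring the properties discussed in the text).
for cls in (OABR.AntiPattern, OABR.BadCodeSmell, OABR.Refactoring):
    g.add((cls, RDF.type, RDFS.Class))

# Properties linking the concepts: causes, symptoms, consequences, resolution.
for prop in (OABR.hasCause, OABR.hasSymptom, OABR.hasConsequence, OABR.resolvedBy):
    g.add((prop, RDF.type, RDF.Property))

# One invented individual to show the pattern.
g.add((OABR.GodClass, RDF.type, OABR.BadCodeSmell))
g.add((OABR.GodClass, OABR.hasSymptom, Literal("one class concentrates most behaviour")))
g.add((OABR.GodClass, OABR.resolvedBy, OABR.ExtractClass))
g.add((OABR.ExtractClass, RDF.type, OABR.Refactoring))

print(g.serialize(format="turtle"))
```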
Software product metrics measure software artifacts at different development stages, ranging from the complexity of the software design to the size of the final source code. The measurability of a bad code smell depends on its size, complexity, and structure. Some bad code smells, such as Long Method, can easily be recognized by accepted software metrics such as cyclomatic complexity and the Halstead measures.

Tools and Platforms

A software application allows research problems to be resolved using software solutions. This level of research also provides the opportunity to test the previously defined theory and methods for their applicability and significance. SPARSE is web-based collaborative ontology management software in which the anti-pattern ontology can be augmented. Web-Protege has been used to provide a web-based interface built on the Protege platform, permitting collaborative ontology editing as well as annotation and selection of both ontology components and ontology changes.

Technologies for Communities

An ontology is an open system that encourages broad use and sharing. Its growth and validation depend on contributions from the users of the related community. The standard approach is to list the ontology with an ontology search engine, or with a repository, to make the ontology visible to the community. Feedback from the community makes the ontology more reliable and consistent.

Conclusion and Future Works

Anti-patterns and bad code smells describe recurring problems that affect software quality, and refactoring can help resolve them. In this work, we created an ontological base, OABR, specifying the relations between anti-patterns, bad code smells, and refactoring to help in the identification and resolution of their associated problems.

The paper draws information from different sources to represent, model, and analyze anti-pattern knowledge in order to resolve issues that surround anti-pattern technology. The results of the work of the author and his associates have gone a long way toward creating theoretical models, applying methods, and creating a software tool to aid the anti-pattern detection process. However, the work also raises many other important issues that need to be explored further; these must be addressed, though not necessarily in order of importance.

The Dependency Structure Matrix (DSM) has been proposed as a strategy that visualizes and analyzes the dependencies between related attributes of software project management anti-patterns. The approach was exemplified through a DSM of 50 attributes of 25 related software project management anti-patterns that appear in the literature and on the Web. A good set of mixed anti-pattern information, including development, architecture, and managerial anti-patterns, will reveal the applicability of the technique in such settings.

The ongoing experiments and future work of this research include the following:

• Obtain more inputs from the software community to grow OABR and set requirements for the class properties, given that the development of OABR is an iterative process.

Figure 1. Dataset of anti-pattern relations through their attributes.
• Develop OABR registries and related web services, making it easier for users to recognize and test new bad code smells, anti-patterns, and refactorings.
• Extend or create another ontology by combining or aligning OABR with other ontologies about software development, for example design patterns, software metrics, and software quality attributes.
3,246.6
2015-09-14T00:00:00.000
[ "Computer Science" ]
Simultaneously realizing thermal and electromagnetic cloaking by multi-physical null medium Simultaneously manipulating multiple physical fields plays an important role in the increasingly complex integrated systems, aerospace equipment, biochemical productions, etc. For on-chip systems with high integration level (e.g., electronic/photonic chips and radio-frequency/microwave circuits), where both electromagnetic information/energy transporting and heat dissipation/recovery need to be considered, the precise and efficient control of the propagation of electromagnetic waves and heat fluxes simultaneously is particularly important. In this study, we propose a graphical designing method based on thermal-electromagnetic null medium to simultaneously control the propagation of electromagnetic waves and thermal fields according to the pre-designed paths. A thermal-electromagnetic cloak, which can create a cloaking effect on both electromagnetic waves and thermal fields simultaneously, is designed by thermal-electromagnetic surface transformation and verified by both numerical simulations and experimental measurements. The thermal-electromagnetic surface transformation proposed in this study provides a new methodology for simultaneous controlling on electromagnetic and temperature fields, which can be used to realize a series of novel thermal-electromagnetic devices such as thermal-electromagnetic shifter, splitter, bender, multiplexer and mode converter. The designed thermal-electromagnetic cloak opens up new ways to create a concealed region where any object within it does not create any disturbance to the external electromagnetic waves and temperature fields simultaneously, which may have significant applications in improving thermal-electromagnetic compatibility problem, protecting of thermal-electromagnetic sensitive components, and improving efficiency of energy usage for complex on-chip systems. and simultaneous improvement of electromagnetic harvesting and surface heat collection in solar cells 6 .Especially for electronic/photonic on-chip systems [7][8][9][10][11][12] , different modules are highly integrated in the same area to provide more functionality, faster processing power, lower cost and energy consumption, which will inevitably bring some other problems, such as the electromagnetic compatibility of different modules and the heat dissipation of resistive elements or high-speed computing modules.However, there is still a lack of effective method about the multi-physics control of heat flow and electromagnetic waves on chips, partially due to the lack of proper multi-physics metamaterials that can perform as desired function for both thermal and electromagnetic fields, and partly due to the lack of rigorous and systematic theories to guide how to control of thermal-electromagnetic fields simultaneously.Therefore, a rigorous and effective method that can control thermalelectromagnetic fields simultaneously for on-chip systems are highly required, which can simultaneously solve the problems of electromagnetic compatibility and heat dissipation caused by the increased chip integration level. 
Transformation optics/thermodynamics [13][14][15] is a rigorous theory and powerful design method [16][17][18] for a single physical field (i.e., EM field or thermal field).Some novel on-chip optical components, e.g., multimode waveguide bender and multimode waveguide crossing, have been theoretically studied by transformation optics 19 and experimentally demonstrated [20][21][22] .However, all these novel on-chip components are confined in a single physical field, which cannot solve the problem of heat dissipation and electromagnetic compatibility on highly integrated chips simultaneously.With the development of metamaterials 23 and transformation optics, simultaneously manipulating two physical fields with a single device becomes feasible, most of which are for the simultaneously control on static electric fields and temperature fields, e.g., thermal-electrostatic cloaking 24,25 , thermal-electrostatic concentrator 26 , thermal-electrostatic camouflage 27,28 , and bi-functional thermal-electrostatic devices 29,30 .In recent years, researches on multi-physical field control are gradually expanding from thermal-electrostatic fields to other physical fields, such as carpet cloak for electromagnetic, acoustic and water waves simultaneously 31 , magnetostaticacoustic cloak 32 , and electromagnetic-acoustic stealth coats 33,34 .However, current multi-physics devices for other physical fields are mainly developed by designing and optimizing metamaterials/metasurfaces for a specific function case by case [31][32][33][34] , therefore it still lacks a general theory (e.g., transformation optics 24,29 or solving Laplace's equation 25,27,30 in the design of thermal-electrostatic devices) to achieve multi-physical field control on other physical fields.Recently, a surfacetransformation method is proposed for the design on various acoustic-electromagnetic devices that can be implemented using copper plate array in air 35 , which may provide new perspectives on theoretical approaches to design multi-physical devices for other physical fields.However, there is still no general theory that can be utilized to control EM waves and temperature fields simultaneously, nor is there any relevant report that can produce cloaking effect for EM waves and temperature fields simultaneously. 
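Transformation optics and transformation thermodynamics, as referenced above, rest on the same tensor transformation rule: a material parameter ψ (permittivity, permeability, or thermal conductivity) in the reference space maps to J ψ J^T / det(J) in the physical space, where J is the Jacobian of the coordinate map (this rule reappears as Eq. (1) below). The sketch is only a minimal numerical illustration of that standard rule for an isotropic starting material under a simple axial stretch; the stretch factors are arbitrary and are not taken from this paper's design.

```python
import numpy as np

def transform_parameter(psi_ref: np.ndarray, jacobian: np.ndarray) -> np.ndarray:
    """Standard transformation-media rule: psi_phys = J psi_ref J^T / det(J).
    The same rule applies to permittivity, permeability, and thermal conductivity."""
    return jacobian @ psi_ref @ jacobian.T / np.linalg.det(jacobian)

# Isotropic reference material (relative value 1 in every direction).
psi_ref = np.eye(3)

# Arbitrary illustration: stretch a thin reference slab along x by a large factor M,
# i.e. x_phys = M * x_ref (mimicking the extreme stretching used for null media).
for M in (10.0, 100.0, 1000.0):
    J = np.diag([M, 1.0, 1.0])
    psi_phys = transform_parameter(psi_ref, J)
    print(f"M = {M:6.0f} -> principal axis {psi_phys[0, 0]:8.1f}, transverse {psi_phys[1, 1]:.4f}")

# The principal-axis component grows without bound while the transverse ones vanish,
# which is the qualitative behaviour expected of a (thermal-electromagnetic) null medium.
```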
In this study, we propose a thermal-electromagnetic null medium (TENM) theoretically, which performs as a perfect 'endoscope' for EM waves and thermal fields simultaneously.Then, a surface-designing method, i.e., thermal-electromagnetic surface transformation, is proposed based on the ideal projection feature of TENM.Many thermal-electromagnetic devices of various functions (e.g., thermal/EM splitting, bending, and expanding) can be designed by a graphical way.Thereafter, the method of implementing reduced TENM, which retains the same directional projection properties as ideal TENM, is proposed by staggered copper and expanded polystyrene (EPS) boards.As an example, a thermal-electromagnetic on-chip cloak that can work for both EM waves and thermal fields simultaneously is designed based on proposed thermal-electromagnetic surface transformation, which can protect thermal sensitive electrical components on the chip from the surrounding heat flows while not affecting the EM radiation patterns from radiating components.Finally, a thermal-electromagnetic on-chip cloak is fabricated by staggered copper and EPS on a thin foam sheet covered with thermal pads, whose thermal and electromagnetic properties are experimentally measured by IR camera and vector network analyzer (VNA), respectively, which show expected cloaking effect for both electromagnetic waves and heat fluxes. Figure 1 illustrates the role of the thermal-electromagnetic cloak designed in this study for an on-chip system with high integration level.As shown in Fig. 1(a), a processing unit may be influenced (or even burned) by the waste heats from the surrounding resistive elements on chip, which may also interfere EM radiation from the surrounding radiating components on chip.Removing the resistive elements and radiating components away from the processing unit can solve this problem, however it reduces integration level of the on-chip system.A thermal-electromagnetic on-chip cloak in Fig. 1(b) can solve the above problem without reducing the integration level: the EM signals and waste heats can be simultaneously guided around the processing unit by the designed cloak.As a result, the waste heats are collected by cooling/recovery units and the radiation pattern produced by radiating components is not affected by other neighboring components on the chip, which can effectively solve both electromagnetic compatibility problem and heat dissipation/recovery problem associated with increased integration level for on-chip systems.Fig. 
1. Schematic diagram of an on-chip system consisting of multiple functional modules (a) without the cloak and (b) with the designed thermal-electromagnetic cloak, respectively. (a) A central processing unit (CPU, i.e., the thermally sensitive electrical element in the center region) will be affected by the gathered waste heat (indicated by red arrows) from surrounding resistive elements (blue blocks) on the chip, which may raise the temperature around the processing unit above its rated temperature or generate thermal stress/deformation due to the gradient temperature field, and thereby reduce its working efficiency and accelerate its aging. At the same time, the CPU will disturb the EM signals (represented by yellow curves) from surrounding radiating components (e.g., the cyan antenna). (b) The designed thermal-electromagnetic cloak (colored orange) is set around the CPU in the same on-chip system. In this case, the EM signals and waste heat can be simultaneously guided around the thermally sensitive CPU. As a result, the CPU is not affected by the waste heat (or the gradient temperature field from the surrounding resistive elements) and does not influence the radiation pattern of EM signals from the surrounding radiating components. Meanwhile, the waste heat can be effectively collected by downstream cooling/recovery units.

Multi-physical null medium for both electromagnetic waves and thermal fields

Mono-physical null media (also referred to as nihility/void media, or transformation-invariant materials), such as the optic-null medium for EM waves [36-40], the acoustic-null medium for acoustic waves [41-43], the magnetic hose for magnetostatic fields [44], and the thermal hose for thermal fields [45,46], provide a flexible and simple way to control a single physical field owing to their perfectly oriented projection. For example, the optic-null medium is a highly anisotropic EM medium whose permittivity and permeability are infinitely large along its principal axes and close to zero in other directions, which can guide EM fields along its principal axes and perform as a perfect 'endoscope' for the EM field [36-40]. However, there is still a lack of theoretical and experimental studies on multi-physical null media, e.g., a thermal-electromagnetic null medium (TENM) that performs as a perfect 'endoscope' for EM waves and thermal fields simultaneously. In this study, we show how to derive the required parameters of the TENM by an extreme stretching in transformation optics/thermodynamics, and how to realize the reduced TENM with natural materials.

Using transformation optics and transformation thermodynamics, the three material parameters (i.e., permittivity, permeability, and thermal conductivity) in the physical space (without prime) and the reference space (with prime) are related through the following relationship [13-15]:

ψ = J ψ′ J^T / det(J)    (1)

where ψ (and ψ′) can be the relative permittivity ε, the relative permeability μ, or the thermal conductivity κ, and J = ∂(x, y, z)/∂(x′, y′, z′) is the Jacobian matrix representing the coordinate transformation between the two spaces. The TENM can have flexible shapes (see Fig. 2(a)) and performs as a perfect 'endoscope' linking two arbitrarily shaped surfaces S1 and S2. To derive the material parameters of the TENM, the whole region of arbitrary shape filled with TENM can always be divided into many consecutive small trapezoids (some of them are indicated by small orange trapezoids in Fig.
once the two end faces S1 and S2 of an arbitrarily shaped TENM are determined, a one-to-one correspondence can be established by geometrically projecting the points on S1 and S2 through the blue curve segments in Fig. 2(a), with adjacent curve segments separated by a sub-wavelength distance H; the region enclosed by adjacent blue curve segments can then be further decomposed into several small trapezoids whose width D is much smaller than the wavelength (see Fig. 2(b)). Next, we only need to calculate the material parameters of each small trapezoid by transformation optics/thermodynamics. The top and bottom sides of each small trapezoid are parallel to the local principal axis of the TENM, which is labeled as the x-axis of the local coordinate system in Fig. 2(b).

Firstly, we consider the coordinate transformation, given in Eq. (2), that maps a thin slab in the reference space (see Fig. 2(c)) onto a small trapezoid in the physical space (see Fig. 2(b)); the transformation is characterized by a stretching factor M, where D and Δ are the lengths of the bottom side of the trapezoid in the physical space and of the thin slab in the reference space, respectively, and θ1 (θ2) is the angle between the x-axis and the left (right) side of the trapezoid in the physical space, as shown in Fig. 2(b). The Jacobian matrix for the transformation in Eq. (2) is given in Eq. (3). Through the coordinate transformation in Eq. (3), together with the transformation optics/thermodynamics relation in Eq. (1), the material parameters (ε, μ, and κ) in the physical and reference spaces are related as in Eq. (4).

Secondly, we consider the case in which each small slab in the reference space is extremely thin (i.e., Δ→0). In this case, each small trapezoid in the physical space corresponds to a surface with null volume in the reference space, and hence the corresponding material parameters in each small trapezoid reduce to those of the TENM (i.e., a null medium). Since the two sides of each small trapezoid filled with TENM in the real space correspond to the same surface in the reference space, the thermal/EM fields on the two sides of each small trapezoid must be identical, which means that the function of each small trapezoid filled with TENM is to ideally project thermal/EM fields from its front side to its back side. Taking the limit Δ→0 (note 0 ≤ x ≤ Δ) in Eq. (4), the required material parameters of the ideal TENM are obtained:

εx = μx = κx → ∞,  εy,z = μy,z = κy,z → 0.   (5)

As shown in Eq. (5), the ideal TENM in each small trapezoid in the physical space is an extremely anisotropic medium whose permittivity/permeability/thermal conductivity are all infinitely large along its local principal axis (i.e., the tangential direction of the blue curve segments in Fig. 2(a)) and nearly zero in the other directions. A region of arbitrary shape filled with TENM in Fig. 2(a) can always be decomposed into consecutive local small trapezoids, each of which ideally projects thermal/EM fields from its front side to its back side, and the material parameters in each small trapezoid can be expressed by Eq.
( 5) in its local principal coordinate system.The small trapezoids are continuously distributed inside the TENM, i.e., the end of the previous small trapezoid continuously connects to the front of the latter small trapezoid, which means the principal axes of each local TENM are also continuously connected.Therefore, the whole thermal/EM fields on an arbitrarily shaped surfaces S1 will be ideally projected onto another arbitrarily shaped surfaces S2, once the region between these two surfaces are filled with TENM whose principal axes link continuously from S1 to S2. The ideal TENM can theoretically perform as a perfect 'endoscope' for EM waves and thermal fields simultaneously, whose performance is also verified by numerical simulations in Figs.2(d) and 2(e).As shown in Figs.2(d) and 2(e), when an EM/hot line source are placed on one surface S1 of a tubular TENM, both EM wave and heat flux can be directionally guided along the principal axes of the TENM from S1 to S2, which in turn will produce an EM/hot image spot on the other surface S2.The TENM can have a variety of shapes for different applications, such as bifurcated structures for simultaneous splitting of electromagnetic waves and heat flows (see a fractal tree structure in Figs.2(f) and 2(g)), or porous structures for simultaneous shielding of electromagnetic waves and heat flows (see a 'Tai Chi' shaped structure with a hole in Fig. 2(h) and 2(i)).Note that the required parameters of ideal TENM need infinite and zero values, which can hardly be realized in practice.Later, we will show how to design the reduced TENM that still retains the same directional projection performance as the ideal TENM and can be implemented by natural materials.curves).An arbitrary trapezoid element of a null medium in the (b) physical space is transformed to a compressed thin slab in the (c) reference space.The simulated results when a line current EM source and high temperature source are on the input surface S1 of the TENM, where the normalized magnetic field (d) and the temperature field (e) are plotted, respectively.The simulated normalized magnetic field (f) and temperature field (g) for a fractal-tree shaped TENM with principal axes along the trunk, respectively, which can split both EM fields (f) and heat flux (g) from the root to the top branches.The simulated normalized magnetic field (h) and temperature field (i) for a 'Tai Chi' shaped TENM, respectively, which can guide both EM fields (h) and heat flux (i) around the center concealed hole.The black regions in (h) and (i) represent areas with perfect electric conductor and thermal insulation boundaries that EM fields and heat flux never touch.Details of the numerical setting are given in Supplementary Note 1. Thermal-electromagnetic surface transformation by TENM Since TENM can project EM and thermal fields simultaneously along its principal axis, we can further propose a surface-designing method, namely thermalelectromagnetic surface transformation, which provide a graphical way to achieve simultaneous control of electromagnetic and thermal fields.With the help of thermalelectromagnetic surface transformation based on TENM, the design problems of thermal-electromagnetic devices can be regarded as a black box problem shown in Fig. 3(a).Once the distribution of the thermal/electromagnetic fields incident to and exiting from the black box are given (e.g., the incoming/outgoing EM wavefront or isothermal surface indicated by the black and red dashed curves in Fig. 
3(a)), the process of designing thermal-electromagnetic devices is converted into the problem of graphically determining the shape and principal axis of the TENM inside the black box.The whole graphic design process can be summarized as two steps (see Movie 1): the first step is to determine the shape of the input/output boundaries of the TENM (i.e., black and red solid curves in Fig. 3(a)) according to the incoming/outgoing thermal-electromagnetic fields, which can be designed to be the same as the shape of incoming/outgoing EM wavefront or isothermal surface (e.g., black/red dashed curves have the same shapes as black/red solid curves in Fig. 3(a)).The second step is to determine the principal axes of the TENM inside the black box, which corresponds to the problem of geometrically finding a proper one-to-one projection that can project the points on the TENM's input boundary to the points on the TENM's output boundary (e.g., the blue arrow lines in Fig. 3(a)).In this case, the directions of the geometric projection are exactly the principal axes of the TENM.Note the geometric projection between TENM's input/output boundaries is not unique, which can usually be chosen as a simple linear projection. A thermal-electromagnetic cloak is designed as an example to illustrate this graphical method as shown in Fig. 3(b).Considering one function of thermalelectromagnetic cloak is to keep the input and output EM wavefront or isothermal surface the same, thus if the input EM wavefront or isothermal surface is planar, the output from the black box should also be a plane.Once the input and output EM wavefront (and isothermal surface) are designed as planes (indicated by the black and red dashed lines, respectively, in Fig. 3(b)), the input boundary (black solid line) and output boundary (red solid line) of the black box can be correspondingly determined as two planes.Considering the other function of thermal-electromagnetic cloak is to create a concealed region where both electromagnetic waves and temperature fields are not subject to any "scattering" by the objects placed inside, as an example, the simplest shape of the concealed region can be designed as a square (indicated by the green region in Fig. 3(b)).The next step to design a thermal-electromagnetic cloak by thermal-electromagnetic surface transformation is "how to project the points from the black solid line segment geometrically one-by-one onto the red solid line segment without touching the green region". The most convenient geometrical projection is using segmented polylines to connect the input and output surfaces of the cloak, which are indicated by the blue arrowed lines in Fig. 
3(b).Note that the directions of arrowed segmented polylines are the same as the principal axes of the TENM filled within the cloak.Once the input and output surfaces of the cloak and the principal axes of the TENM inside the cloak have been determined inside the black box geometrically, the thermal-electromagnetic cloak, which can achieve simultaneous thermal-electromagnetic cloaking of objects in the green region for plane wave incidence, can be obtained by simply filling the region (indicated in yellow) with the TENM whose principal axes are the same as the directions of the blue arrowed segmented polylines.The concealed green region can also be designed as any other shape, just make sure that the projected line segments from the points on the input surface do not touch the concealed region of any other shapes in the projection onto the output surface (i.e., the blue arrowed segmented polylines do not touch the green region; and see examples in Fig. S5 of the Supplementary Note 2). Figs. 3(e) and 3(f) show the simulated cloaking effect of the thermalelectromagnetic cloak (designed by thermal-electromagnetic surface transformation in Fig. 3(b)) for electromagnetic wave and thermal field, respectively, when the detecting wavefront/isotherm are planes.Considering the perfect projecting feature of the TENM, whatever the form of the detecting wavefronts/isotherms incident on the input surface of the designed cloak, they will be perfectly projected along the principal axis of the TENM onto the output surface of the cloak.Therefore, the designed cloak can still work for any other kinds of detecting wavefronts/isotherms, e.g., a line source in Figs.3(g) and 3(h).To further compare the cloaking effect, the comparative simulations without the designed thermal-electromagnetic cloak in Figs. This graphical method can also be used efficiently in the design for other thermalelectromagnetic devices, such as a thermal-electromagnetic shifter in Fig. 3(c) and a thermal-electromagnetic divider-deflector in Fig. 3(d).The thermal-electromagnetic shifter in Fig. 3(c) can shift the incident thermal-electromagnetic fields simultaneously at a pre-designed distance without changing the direction of the incident thermal-electromagnetic fields.Assuming the incident wavefront/isothermal is a plane (indicated by the black dashed line in Fig. 3(c)) and the outgoing wavefront/isothermal is another plane (indicated by the red dashed line in Fig. 3(c)), which is parallel to the incident plane by a fixed displacement along the y direction.Based on the surface transformation method, the input and output surfaces of the thermal-electromagnetic shifter can be correspondingly determined by using the black and red solid lines in Fig. 3(c), respectively.Then, the principal axes of TENM can be chosen as linear segments that link the input and output surfaces of the designed shifter (indicated by the blue arrowed lines in Fig. 3(c)).Figs.3(i) and 3(j) show the simulated results for the thermal-electromagnetic shifter in Fig. 3(c), which can shift both thermal-electromagnetic fields by the pre-designed distance along the direction perpendicular to its propagation/diffusion.With the help of the surface transformation method, a structure that can divide and deflect both electromagnetic and thermal fields is shown in Fig. 3(d).Figs.3(k) and 3(l) show the simulated results for the thermal-electromagnetic divider-deflector in Fig. 
3(d), which can divide and deflect both electromagnetic and thermal fields into two parts that propagate/diffuse in different directions.

With the help of the thermal-electromagnetic surface transformation, a series of novel thermal-electromagnetic devices, e.g., thermal-electromagnetic splitters, benders, multiplexers, and mode converters, can be designed through the standardized black-box designing steps in Movie 1. More detailed designs, potential applications, and simulation results for other thermal-electromagnetic devices can be found in Supplementary Note 2.

A thermal-electromagnetic cloak designed by the graphical method described in (a). The input/output boundaries are designed as straight lines (conformal to the straight input/output EM wavefront and isothermal surface). The yellow region is the TENM with its principal axes along the blue arrowed lines. The green region is the concealed region. (c) A thermal-electromagnetic shifter and (d) a thermal-electromagnetic divider-deflector designed by the same graphical method described in (a). Simulated magnetic field distributions (e, g) and temperature distributions (f, h) for the thermal-electromagnetic cloak under a plane detecting wave/isotherm incidence (e, f) and a cylindrical detecting wave/isotherm incidence (g, h). Simulated magnetic field distributions (i, k) and temperature distributions (j, l) for the thermal-electromagnetic shifter (i, j) and divider-deflector (k, l) under a plane wave/isotherm incidence.

Realizing reduced 2D TENM by staggered copper and EPS boards

To realize the ideal parameters of the TENM in Eq. (5), one effective way is to use a staggered structure composed of two isotropic media with subwavelength separations. Based on effective medium theory, the effective electromagnetic/thermal parameters of two staggered isotropic media can be expressed as (see Supplementary Note 3)

σ∥ = f1σ1 + f2σ2,  σ⊥ = (f1/σ1 + f2/σ2)⁻¹,   (6)

where σ can be the relative permittivity ε, the relative permeability μ, or the thermal conductivity κ, fi (i = 1, 2) is the filling factor of the i-th isotropic medium, and ∥ and ⊥ indicate the directions parallel and orthogonal to the interface of the two media, respectively. However, to realize the ideal TENM in Eq. (5) with the staggered isotropic media in Eq. (6), one medium would need vanishing parameters (e.g., ε1 = μ1 = κ1 = 0) and the other infinite parameters (e.g., ε2 = μ2 = κ2 → ∞), which cannot be achieved with natural materials. Since the key to designing various thermal-electromagnetic devices by thermal-electromagnetic surface transformation is to create a perfect directional projection of the electromagnetic wave and the temperature field simultaneously based on the property of the TENM, media with simplified material parameters (referred to as reduced TENM), which produce the same directional projection of the electromagnetic wave and temperature field simultaneously (but with small scattering), can also be used as basic elements in thermal-electromagnetic surface transformation to design various thermal-electromagnetic devices.

Considering the electromagnetic parameters and thermal conductivities of natural materials from materials handbooks [47-49], we find that staggered copper plates and EPS boards with subwavelength separations, as in Fig. 4(a), can perform as a reduced 2D TENM that realizes the simultaneous projection of TM-polarized electromagnetic waves and in-plane temperature fields.
In this case, the two staggered media in Eq. (6) are copper (with ε1 → ∞, μ1 = 1, κ1 = 400 W/(m·K), and f1 = (p − w)/p) and EPS (with ε2 = 1, μ2 = 1, κ2 = 0.04 W/(m·K), and f2 = w/p), respectively, whose effective electromagnetic/thermal parameters can be calculated by Eq. (6) [50,51], where the filling factor is chosen as f1 = f2 = 0.5. Since the staggered medium performs as a better thermal null medium but a poorer EM null medium for a larger copper filling factor f1, and as a better EM null medium but a poorer thermal null medium for a smaller f1, we keep f1 = f2 = 0.5 in the later designs to balance the performance of the reduced TENM for both EM and temperature fields.

To verify the performance of the reduced TENM in Eq. (7), the 2D thermal-electromagnetic shifter designed by thermal-electromagnetic surface transformation in Fig. 3(c), which shifts the distribution of thermal-electromagnetic fields from its input surface to its output surface by a pre-designed distance, is realized by the staggered copper plates and EPS boards in Fig. 4(b), whose arrangement direction makes a fixed angle of 45° with the x-axis and whose plate separation is w = p/2 = λ0/10 (λ0 is the wavelength of the incident EM wave). Compared with the ideal thermal-electromagnetic shifter in Figs. 3(i) and 3(j), the simulated results verify that the 2D thermal-electromagnetic shifter realized by the reduced TENM still produces a satisfactory shifting effect for both EM waves (e.g., a plane TM-polarized EM wave and a line EM source in Figs. 4(c) and 4(e), respectively) and heat flux (e.g., a plane isotherm and a line heat source in Figs. 4(d) and 4(f), respectively).

The 2D thermal-electromagnetic cloak designed by thermal-electromagnetic surface transformation in Fig. 3(b) can also be realized by the reduced TENM, where the separation between the copper plates and EPS boards is chosen as w = p/2 = λ0/11. The performance of the thermal-electromagnetic cloak made of the reduced TENM is verified by the numerical simulations in Figs. 4(g)-4(j) for various thermal-electromagnetic detecting sources. Compared with the ideal thermal-electromagnetic cloak in Figs. 3(e)-(h), the cloak made of the reduced TENM in Figs. 4(g)-4(j) still provides a satisfactory thermal-electromagnetic cloaking effect. There are some small EM reflections/scatterings around the shifter/cloak made of the reduced TENM due to the impedance mismatch between air and the reduced TENM, which can be reduced by choosing a smaller separation p between the two staggered isotropic media or a smaller copper filling factor f1. More details on the selection of the filling factor f1 and a bandwidth analysis of the reduced TENM can be found in Supplementary Note 3.

Various other 2D thermal-electromagnetic devices designed by thermal-electromagnetic surface transformation can be realized by the reduced TENM in a similar way. For different applications, 2D thermal-electromagnetic devices can be fabricated with a large height (as long as the incident TM-polarized wave/isotherm lies within the same 2D cross-section of the device) [40] or within a surface (as long as the incident TM-polarized wave/isotherm is confined to the same surface) for on-chip thermal-electromagnetic control as in Fig. 1. Next, a 2D thermal-electromagnetic cloak made of reduced TENM within a surface is fabricated, and its on-chip thermal-EM cloaking performance is verified experimentally.
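To make the effective-medium estimate above concrete, the short Python sketch below evaluates the mixing rules of Eq. (6) for the copper/EPS stack with f1 = f2 = 0.5, using the thermal conductivities quoted in the text (400 and 0.04 W/(m·K)). It is an illustrative check of our own, not code from the paper; the strongly anisotropic ratio κ∥/κ⊥ is what approximates the thermal null-medium behavior.

```python
# Illustrative sketch (not from the paper): effective parameters of a
# copper/EPS laminate via the layered-medium mixing rules of Eq. (6).
# sigma_par  = f1*s1 + f2*s2          (parallel to the layer interfaces)
# sigma_perp = 1 / (f1/s1 + f2/s2)    (perpendicular to the interfaces)

def mix_parallel(s1, s2, f1):
    f2 = 1.0 - f1
    return f1 * s1 + f2 * s2

def mix_perpendicular(s1, s2, f1):
    f2 = 1.0 - f1
    return 1.0 / (f1 / s1 + f2 / s2)

if __name__ == "__main__":
    f1 = 0.5                      # copper filling factor used in the paper
    k_cu, k_eps = 400.0, 0.04     # thermal conductivities, W/(m K)

    k_par = mix_parallel(k_cu, k_eps, f1)        # ~200 W/(m K)
    k_perp = mix_perpendicular(k_cu, k_eps, f1)  # ~0.08 W/(m K)
    print(f"kappa_parallel      = {k_par:.2f} W/(m K)")
    print(f"kappa_perpendicular = {k_perp:.4f} W/(m K)")
    print(f"anisotropy ratio    = {k_par / k_perp:.0f}")  # ~2500
```

The same two functions can be reused for the permittivity and permeability entries of Eq. (6) by substituting the corresponding material values.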
Fig. 4. (a) Reduced TENM made of staggered copper and EPS boards. (b) A thermal-electromagnetic shifter realized by the reduced TENM. Simulated 2D magnetic/temperature field distributions for a thermal-electromagnetic shifter with reduced TENM under (c, d) a plane TM-polarized-wave/isotherm incidence and (e, f) a line thermal/electromagnetic source. Simulated 2D magnetic/temperature field distributions for a thermal-electromagnetic cloak with reduced TENM under (g, h) a plane TM-polarized-wave/isotherm incidence and (i, j) a line thermal/electromagnetic source.

Experimental design and measurements

To experimentally verify the on-chip cloaking effect of the above-designed thermal-electromagnetic cloak made of staggered copper plates and EPS boards, a sample cloak (Fig. 5) is fabricated. The experimental setup used to verify the on-chip thermal-electromagnetic cloaking effect of the fabricated sample is shown in Fig. 5(a). For the thermal experiment, a heating plate with a power supply (a silicone rubber heater, colored yellow in Fig. 5(a)) is attached at the boundary of one thermal pad; it provides a constant heating power of about 5.7 W and creates the incident heat flux onto the sample. A cooling plate (Peltier cooler) with a power supply is placed at the boundary of the other thermal pad, which acts as a constant low-temperature source maintained at 0 °C during the experiment. The fabricated cloak is fixed at the center of the on-chip structure (i.e., a foam sheet covered by thermal pads) during the measurement. Another foam board of the same size covers the whole structure during the measurement to suppress thermal convection at the top boundary, which also mimics the package cover of the chip. An IR camera is used to monitor the temperature changes in real time. After about 4 hours, the temperature distribution captured by the IR camera no longer changes (i.e., the whole system reaches a thermally stable state), and the measured temperature distribution is then recorded and shown in Fig. 5(b). The measured result in Fig. 5(b) agrees well with the 3D simulated result in Fig. 5(d), which shows that the fabricated sample can guide the heat flux smoothly around the concealed region and keep a low temperature contrast (ΔTmax < 0.5 °C in the experiment) inside the concealed region. Both the simulated and measured results verify that the designed thermal-electromagnetic cloak protects the concealed region from an extreme temperature contrast and avoids affecting the temperature field distribution outside the cloak (i.e., the isotherms in the thermal pads remain straight).

As a reference, the case in which the cloak is removed and replaced by the background thermal pad is also measured in Fig. 5(c) and simulated in Fig. 5(e), which show that the heat flux cannot be smoothly guided away from the black protruding square and that a large temperature contrast appears inside it (ΔTmax ≈ 4.6 °C in both the 3D simulation and the experiment). In this case, if any thermally sensitive element is placed inside the black protruding region, no thermal protection is obtained (the large temperature gradient in this region would greatly affect the performance of the thermally sensitive element), and the surrounding temperature field is also significantly affected (i.e., the isotherms in the thermal pads are distorted). More details about the experimental setup and measurement for thermal fields can be found in Supplementary Note 5.
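The cloaking figure of merit quoted above (ΔTmax inside the concealed region) can be read straight off an IR temperature map. The Python sketch below illustrates that post-processing step under our own assumptions (a 2D temperature array and a boolean mask marking the concealed square); it is not the authors' analysis code, and the numbers are synthetic.

```python
# Illustrative post-processing sketch (assumed workflow, not the authors' code):
# given a temperature map T (deg C) from the IR camera and a boolean mask
# selecting the concealed square, report the temperature contrast inside it.
import numpy as np

def max_contrast(temperature_map: np.ndarray, region_mask: np.ndarray) -> float:
    """Return max(T) - min(T) over the masked region."""
    region = temperature_map[region_mask]
    return float(region.max() - region.min())

if __name__ == "__main__":
    # Synthetic example: a linear temperature gradient across a 100x100 frame.
    y, x = np.mgrid[0:100, 0:100]
    T_no_cloak = 40.0 - 0.25 * x              # strong gradient, no cloak
    T_cloak = np.full_like(T_no_cloak, 25.0)  # nearly uniform inside the cloak

    square = np.zeros(T_no_cloak.shape, dtype=bool)
    square[40:60, 40:60] = True               # concealed square (indices assumed)

    print("dT_max without cloak:", max_contrast(T_no_cloak, square))  # ~4.75 C
    print("dT_max with cloak:   ", max_contrast(T_cloak, square))     # 0.0 C
```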
For the electromagnetic experiment, the sample and on-chip structure are the same as in the thermal experiment. The experimental setup, also shown in Fig. 5(a), is placed in a small anechoic chamber lined with EM pyramid absorbers (QYH-J200). A VNA (ROHDE&SCHWARZ ZVL13, working frequency from 5 kHz to 13.6 GHz), which serves as both the microwave signal source and the detector at the design frequency f0 = 9 GHz, outputs an electromagnetic wave from one port (Port 1) and detects the electromagnetic wave at the other port (Port 2) during the electromagnetic measurement. A source loop antenna connected to Port 1 of the VNA by a coaxial cable, which acts as a 2D magnetic dipole and mimics a radiating component of a microwave on-chip system, is fixed at a distance L = 30 mm from the front side of the black protruding region on the chip. A detecting loop antenna, connected to Port 2 of the VNA by a coaxial cable, is movable along the y-axis during the measurement and serves as a field detector. Both the source and detecting antennas are centrally aligned with the cloak along the z-direction and are 5 mm above the surface of the on-chip structure. During the measurement, the detecting loop antenna is kept at the distance L = 30 mm from the back side of the black protruding region and moves along the y-direction in steps of 0.255λ0 (= 8.5 mm); the S21 parameter of the VNA at the different locations (marked by the dots in Fig. 5(f)) is then measured to obtain the relative magnetic field distribution (the red dots in Fig. 5(h)). The measured magnetic fields in Fig. 5(h) agree well with the 3D simulated result in Fig. 5(f), which verifies that the fabricated cloak can guide the TM-polarized electromagnetic waves radiated from an on-chip radiating component (i.e., the source loop antenna here) smoothly around the black concealed region on the chip without disturbing the EM radiation pattern on the other side of the cloak.

As a reference, we also measure and simulate the case in which the cloak is removed and only the object in the concealed region (i.e., a square EPS region enclosed by four copper plates) is kept, shown in Figs. 5(h) and 5(g) for the measured and 3D simulated cases, respectively. In this case, most of the electromagnetic waves are strongly scattered by the object in the concealed region, which clearly disturbs the original EM radiation pattern on the other side of the cloak. More details about the measurement of the electromagnetic fields can be found in Supplementary Note 6. Both the simulated and measured results verify that the designed thermal-electromagnetic cloak can guide EM waves and heat fluxes around the concealed region in an on-chip structure (see Movie 2), which may be an effective way to simultaneously solve the electromagnetic compatibility and heat dissipation problems of highly integrated on-chip systems. Fig.
5 (a) Schematic of the experimental setup for measuring thermal-electromagnetic cloak.Orange and green slits represent copper and EPS, respectively.Two blue thin thermal pads on the surface of a gray foam board together can simulate an on-chip environment.The black area is the concealed region for both TM-polarized EM waves and heat fluxes.Measured temperature distributions show the black protruding square on the chip will not and will influence the incident plane isothermal with cloak (b) and without cloak (c), respectively.3D Simulated temperature distributions with cloak (d) and without cloak (e) are well matched with the measurement results in (b) and (c), respectively.3D Simulated normalized z-component magnetic fields when a radiating component (i.e., a loop antenna) is in front of the black protruding square on the chip with cloak (f) and without cloak (g), respectively.(h) Simulated and measured normalized amplitude of magnetic fields along the marked dotted lines in (f) and (g). Conclusion: In conclusion, we propose a graphical designing method, i.e., thermal-electromagnetic surface transformation, to simultaneously control the electromagnetic waves and thermal fields.With the help of the proposed method, the design of thermalelectromagnetic device is converted into a design of geometrical projecting problem, and many thermal-electromagnetic devices of various functions can be simply designed through standardized black-box designing steps.In addition, all EM-thermal devices designed by the proposed method, can be realized by staggered copper plates and EPS boards without metamaterials/metasurfaces. As an illustrative case, a thermal-electromagnetic on-chip cloak, which can protect thermal sensitive electrical components on the chip from the surrounding heat flows while not affecting the EM radiation patterns from radiating components, is designed and experimentally demonstrated.The simulated and measured results consist very well, which verify that the designed thermal-electromagnetic cloak can guide both EM waves from on-chip antenna and heat fluxes from surrounding thermal components around the concealed region efficiently.The proposed method can also be used to design some thermal-electromagnetic devices of other functions, e.g., simultaneously thermal-electromagnetic field-splitting/multiplexing and modeconverting, which may be used in smart micro controlling system and on-chip system with high integration level. Fig. 2 . Fig. 2. (a) basic schematic diagram of a tubular TENM that connects two arbitrarily shaped surfaces S1 and S2, which can simultaneously project the thermalelectromagnetic field distribution from S1 onto S2 along its principal axes (bluecurves).An arbitrary trapezoid element of a null medium in the (b) physical space is transformed to a compressed thin slab in the (c) reference space.The simulated results when a line current EM source and high temperature source are on the input surface S1 of the TENM, where the normalized magnetic field (d) and the temperature field (e) are plotted, respectively.The simulated normalized magnetic field (f) and temperature field (g) for a fractal-tree shaped TENM with principal axes along the trunk, respectively, which can split both EM fields (f) and heat flux (g) from the root to the top branches.The simulated normalized magnetic field (h) and temperature field (i) for a 'Tai Chi' shaped TENM, respectively, which can guide both EM fields (h) and Fig. 3 . Fig. 3. 
(a) Schematic diagram of the graphical design method based on the directional projection property of the TENM. The black and red arrows indicate the input and output fields, and the corresponding wavefronts (or isothermal surfaces) are represented by the black/red dashed curves. The black and red solid curves inside the box are the input/output boundaries of the TENM. The blue arrowed curves indicate one possible projection. (b) A thermal-electromagnetic cloak designed by the graphical method described in (a). The input/output boundaries are designed as straight lines (conformal to the straight input/output EM wavefront and isothermal surface). The yellow region is the TENM with its principal axes along the blue arrowed lines. The green region is the concealed region. (c) A thermal-electromagnetic shifter and (d) a thermal-electromagnetic divider-deflector designed by the same graphical method described in (a). Simulated magnetic field distributions (e, g) and temperature distributions (f, h) for the thermal-electromagnetic cloak under a plane detecting wave/isotherm incidence (e, f) and a cylindrical detecting wave/isotherm incidence (g, h). Simulated magnetic field distributions (i, k) and temperature distributions (j, l) for the thermal-electromagnetic shifter (i, j) and divider-deflector (k, l) under a plane wave/isotherm incidence.

The sample cloak in Fig. 5(a) is fabricated from 96 copper plates (colored orange) and 92 EPS boards. The concealed region (i.e., the black protruding square) has a square cross-section in the x-y plane with diagonal length 2a = 200 mm and height h = 10 mm, and is surrounded by copper plates and filled with EPS. A foam sheet (colored gray, with dimensions 440 mm × 616 mm × 100 mm and thermal conductivity 0.04 W/(m·K)) covered by thin thermal pads (colored blue, with a thickness of 2 mm and thermal conductivity 13 W/(m·K)) is used to mimic an on-chip operating environment. More details about the sample fabrication can be found in Supplementary Note 4.
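For orientation, the subwavelength strip dimensions quoted for the reduced TENM follow directly from the design frequency. The Python sketch below is our own back-of-the-envelope check, not material from the paper: it converts f0 = 9 GHz into the free-space wavelength and the corresponding strip width w and period p = 2w for the shifter (w = λ0/10) and cloak (w = λ0/11) designs, and also reproduces the 0.255λ0 detector step.

```python
# Back-of-the-envelope check (not from the paper): strip width w and period
# p = 2w of the staggered copper/EPS structure at the design frequency.
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def strip_geometry(f0_hz: float, fraction: float):
    """Return (w, p) in millimetres for w = lambda0/fraction and p = 2w."""
    lam0_mm = C0 / f0_hz * 1e3
    w = lam0_mm / fraction
    return w, 2.0 * w

if __name__ == "__main__":
    f0 = 9e9  # design frequency, Hz
    lam0 = C0 / f0 * 1e3
    print(f"lambda0 = {lam0:.1f} mm")                     # ~33.3 mm
    for name, frac in [("shifter", 10), ("cloak", 11)]:
        w, p = strip_geometry(f0, frac)
        print(f"{name}: w = {w:.2f} mm, p = {p:.2f} mm")
    # Detector step quoted in the text: 0.255*lambda0 ~ 8.5 mm
    print(f"detector step 0.255*lambda0 = {0.255 * lam0:.1f} mm")
```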
8,748.6
2023-01-02T00:00:00.000
[ "Physics", "Engineering" ]
APPLICATION OF WSN IN CROCKERY MANUFACTURING INDUSTRY This paper illustrates the usage of WSN in a crockery manufacturing unit. The paper aims to improve the production process with the help of wireless sensors in such a way that the system will not only be fault tolerant but will also be measured against the financial gains within the industry after the inclusion of such sensors in the production unit. The research focuses on countering even the slightest negligence by the human workforce that can result in the loss of a produced item. The methodology explained will also provide mental relief to the worker, who otherwise must keep the produced items under surveillance at all times during the manufacturing process in order to save them from overheating, which may result in item loss. Another important feature of this research is to present WSN in such a way that the machinery used for crockery production becomes completely automatic, i.e., operates without human interference, which has not been practiced previously in this domain. The manufacturing unit is based in Pakistan, and all the data and information presented in the paper are taken from there.

INTRODUCTION Wireless sensor networks hold the promise of many new applications in the area of monitoring and control. Examples include target tracking, intrusion detection, wildlife habitat monitoring, climate control, and disaster management. In industry, WSNs can be used to monitor the manufacturing process or the condition of manufacturing equipment. WSNs are used in a wide range of applications in the industrial domain and eliminate the requirement of human presence in various places, including dangerous areas, to obtain sensory information and actuation control. The paper describes the use of WSN in a local crockery manufacturing unit in Pakistan. The objective of this research is to provide ease in the manufacturing process and also to eliminate the chance of financial loss that can occur without the use of WSN. The proposed model eliminates the chance of items being spoiled due to overheating, which eventually results in financial gain. The proposed method also eliminates the need for a well-trained worker who has complete command of the manufacturing unit and knows exactly when to open the dyes. Because all these tricky tasks are handled by the wireless sensor network, the proposed method also provides mental relief to the worker, who otherwise has to keep his eyes on the unit and stay attentive at all times in order to save the item from overheating.
Wireless sensor network (WSN) is a network of micro electro-mechanical systems (MEMS) [1] called sensor devices deployed to gather sensory information from an area of interest.Sensor devices (nodes) have the ability to sense, process, and communicate data.These devices typically have limited computing power and are designed to operate on batteries.Initially, these sensor devices were used in military applications [2].Industrial Wireless Sensor Networks (IWSN) is an emerging class of WSN that faces specific constraints linked to the particularities of the industrial production.In these terms, IWSNs faces several challenges such as the reliability and robustness in harsh environments, as well as the ability to properly execute and achieve the goal in parallel with all the other industrial processes.Furthermore, IWSN solutions should be versatile, simple to use and install, long lifetime and low-cost devices indeed, the combination of requirements hard to meet. LITERATURE REVIEW An earlier research that focused on preventive equipment maintenance, in which vibration signatures are gathered to predict equipment failure.They analyzed the application of vibration analysis for equipment health monitoring in a central utility support building at a semiconductor fabrication plant that houses machinery to produce pure water, handle gases and process waste water for fabrication lines.Furthermore, they deployed the same sensor network on an oil tanker in order to monitor the onboard machinery.In the end, they discuss design guidelines for an ideal platform and industrial applications, a study of the impact of the platform on the architecture, the comparison of two aforementioned deployments and a demonstration of application return on investment [5]. In another application, a wireless network system developed for a team of underwater collaborative autonomous agents that are capable of locating and repairing scale formations in tanks and pipes within inaccessible environments.They described in detail ad-hoc network hardware used in their deployments that comprises the pH, proximity, pressure sensors and repair actuator.Furthermore, they described the communication protocol and sensor/actuator feedback loop algorithms implemented on the nodes.[6] Another work that focused on WSN for pipeline monitoring is [7], which described the WSN Applications of Industrial Wireless Sensor Networks 15 whose aim is to detect, localize and quantify bursts, leaks and other anomalies (blockages or malfunctioning control valves) in water transmission pipelines.The research reported the results and experiences from real deployment and provided algorithms for detecting and localizing the exact position of leaks that is tested in laboratory conditions.The system presented in this work is also used for monitoring water quality in transmission and distribution water systems and water level in sewer collectors.In this context, the work can be classified as the process evaluation group of works (Section 1.5.2.1) as well. 
Another study developed a WSN for machinery condition-based maintenance and presented design requirements, limitations, and guidelines for this type of WSN application. Furthermore, the authors implemented their condition monitoring system in the Heating & Air Conditioning Plant of the Automation and Robotics Research Institute at the University of Texas [8]. Another work proposed the use of accelerometer-based monitoring of machine vibrations and tackled the problem of predictive maintenance and condition-based monitoring of factory machinery in general. The authors demonstrated a linear relationship between surface finish, tool wear, and machine vibrations, thus proving the usability of the proposed system in equipment monitoring [9].

RESEARCH CONTEXT The research is based on a local crockery manufacturing unit at Gujranwala (Pakistan). Gujranwala is well known for crockery manufacturing and is considered the crockery manufacturing hub of Pakistan. The crockery manufacturing unit consists of an automatic compression molding machine with an electro-hydraulic press control system, an integral power pack, and motor control equipment together with a temperature controller. The manufacturing process is completed in two phases, and each phase has several steps.

First Step In the first step, a pair of dyes is fixed on the molding machine. The upper part of the machine is fixed while the lower one is moveable.

Fourth Step Now it is heated at a specific temperature until a small amount of material comes out from the little spillways.

Crucial Moment This is a crucial moment: when the material starts to come out, the worker has to push the gear forward immediately in order to disjoin both parts of the dyes; otherwise, the temperature will spoil the item.

Step 1 In this step, the foil of overlay is placed with the partially cured piece.

Crucial Moment Now it is again heated at a specific temperature until the material comes out again. This is again a crucial point in the manufacturing process. The lower and upper parts of the dye must be disjoined immediately; otherwise, the temperature will spoil the item.

Final Product The previously mentioned process is totally human dependent and requires the complete attention of the worker, especially at the two crucial moments described above. Even slight negligence by the worker can spoil the item, causing financial loss. At the same time, the need to remain fully attentive at all times places the worker under continuous mental stress.

Wireless Sensor for Crockery Manufacturing Measurement Specialties M7100 Industrial Pressure Transducers The Measurement Specialties M7100 industrial pressure transducer from the Microfused™ line of MEAS from TE Connectivity is able to measure liquid or gas pressure, even for media such as contaminated water, steam, and corrosive fluids. The transducer pressure cavity is machined from a solid piece of 17-4 PH stainless steel. The standard version includes a 1/4 NPT pipe thread allowing a leak-proof, all-metal sealed system. Durability is high because no O-rings or organics are exposed to the pressure media. This automotive-grade pressure transducer with stainless steel hermetic pressure ports and an integral electrical connector handles up to 43,000 psi (3000 bar). The M7100 exceeds industrial CE requirements, including surge protection, and is overvoltage protected to 16 Vdc in both positive and reverse polarity. The motor is an industrial-grade 10 RPM high-torque motor with a massive torque of 120 kg·cm in a small size. The motor has a metal gearbox with all high-quality metal gears and an off-centered shaft.
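To make the sensing side concrete, the self-contained Python sketch below outlines how a spillway node might sample the pressure transducer and report a threshold crossing to the central control unit. The threshold value, node ID, and the read/send functions are our own placeholders for illustration; they are not values or APIs from the paper or from the M7100 datasheet.

```python
# Illustrative sketch (hypothetical names and thresholds, not from the paper):
# a spillway sensor node samples the pressure transducer and notifies the
# central control unit once material pressing on the spillway is detected.
import random
import time

NODE_ID = "spillway-01"          # each node is assigned a unique name
PRESSURE_THRESHOLD = 5.0         # placeholder threshold, arbitrary units
SAMPLE_PERIOD_S = 0.1

def read_pressure() -> float:
    """Placeholder for reading the M7100 transducer via the node's ADC."""
    return random.uniform(0.0, 10.0)

def send_event(node_id: str, pressure: float) -> None:
    """Placeholder for the radio call that notifies the controlling node."""
    print(f"[{node_id}] pressure event: {pressure:.2f}")

def monitor_spillway(max_samples: int = 50) -> None:
    for _ in range(max_samples):
        p = read_pressure()
        if p >= PRESSURE_THRESHOLD:
            send_event(NODE_ID, p)   # waste material has reached the spillway
            break
        time.sleep(SAMPLE_PERIOD_S)

if __name__ == "__main__":
    monitor_spillway()
```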
WIRELESS SENSOR PROTOCOL Currently, sensor networks are widely gaining ground because of their numerous applications, ranging from environment monitoring to the monitoring of industrial machines and home appliances. Such a network is best described as one consisting of sensor nodes assigned to a specific function [3]. A sensor node has full computational ability for sensing and transmitting data in a wireless communication model. An inter-connective protocol is considered suitable for wireless sensor networks, because it can detect a damaged node in time with the help of another node close to the faulty one. For the purpose of this project, the 802.15.4 and ZigBee standards are discussed for their peculiarity and reliability. The 802.15.4 and ZigBee standards are considered a wireless technology with an open global standard, addressing the areas of power, cost, and high radio frequency [4].

Proposed Method As described in the previous section, the main objective of our research is to save the item from overheating. To achieve this objective, we have to sense when the material comes out from the spillways and open the dyes automatically. In order to open or close the dyes, the gear lever must be pulled or pushed accordingly. The gear lever has three states: in the center it is considered to be in the neutral state, when it is pushed forward it closes the dyes, and when it is pushed backward it opens the dyes. To move the gear lever back and forth, we require an electric motor that can be triggered by the central control unit. For this purpose we need: 1. a wireless sensor to sense the pressure on the spillway, 2. a wireless micro controller, and 3. an electric motor. The wireless sensor senses the pressure on the spillway that is caused by the material coming out of the spillway. The electric motor is used to move the gear lever into the three states stated above. The proposed electric motor can move to 0°, 90°, and 180°. At 90° the machine is in the neutral state, at 0° the machine is in the state that opens the dyes, and 180° is the state that closes the dyes. The wireless micro controller is used to trigger and control the movement of the electric motor, which results in the movement of the gear lever to open and close the dyes.

Working The lower part of the dye is filled with raw material, and with the help of hydraulic pressure both parts of the dyes, i.e., the lower and upper parts, join together. When the material is heated, it is converted from powder to solid form and takes the shape of the dye. A small amount of the solid material's waste comes out from the spillway indicated in the figure above.
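The mapping between motor angle and dye state described in the Proposed Method can be expressed as a small state machine. The Python sketch below is only an illustration of that logic: the angle constants follow the text, while the motor-driver call and the roughly 8-10 s timer are placeholders based on the working sequence described next.

```python
# Illustrative controller sketch (placeholder motor driver and timings):
# maps the three gear-lever states to motor angles and runs the open/close
# sequence triggered by a spillway pressure event, as described in the text.
import time
from enum import Enum

class DyeState(Enum):
    OPEN = 0       # motor at 0 degrees   -> dyes opened
    NEUTRAL = 90   # motor at 90 degrees  -> neutral gear position
    CLOSED = 180   # motor at 180 degrees -> dyes closed

def move_motor(angle_deg: int) -> None:
    """Placeholder for the micro-controller command to the geared motor."""
    print(f"motor -> {angle_deg} deg")

def set_state(state: DyeState) -> None:
    move_motor(state.value)

def on_spillway_pressure(release_delay_s: float = 9.0) -> None:
    """Sequence run by the central unit when waste reaches the spillway."""
    # Briefly open and re-close to release trapped air (avoids bubbles),
    # then wait the set time before fully opening the dyes.
    set_state(DyeState.OPEN)
    set_state(DyeState.CLOSED)
    time.sleep(release_delay_s)   # approx. 8-10 s per the description
    set_state(DyeState.OPEN)      # worker can now place the overlay foil

if __name__ == "__main__":
    set_state(DyeState.NEUTRAL)
    on_spillway_pressure(release_delay_s=1.0)  # shortened for the demo run
```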
As soon as the waste comes out, the pressure sensor placed at the spillway senses the pressure and sends a signal to the controlling node. The controlling node sends this information to the central control unit, which identifies the node that signaled because each node is assigned a unique name. The central unit then triggers the respective micro controller, which instantly opens and closes the dyes to release the air between them that could otherwise cause a bubble in the item; a timer now starts in the control unit, and after the set time (approximately 8-10 seconds) the control unit again triggers the micro controller to move the electric motor to the 0° state, which opens the dyes, and a second timer starts. Now the worker places the foil of overlay with the partially cured piece. After the second timer reaches a specific set time, the central control unit again triggers the micro controller to move the electric motor to 180° in order to close the dyes. The material is again heated at a specific temperature until it comes out again. As before, the sensor senses the pressure at the spillway and sends the information to the central control unit via the controlling node, which in turn again triggers the micro controller to open the dyes, i.e., move the electric motor to 0°. Hence the process completes and the worker takes the item out of the dyes. In this way, we improve the crockery manufacturing process and reduce to zero the chance of financial loss that can occur if an item spoils due to overheating. At the same time, another benefit of using this technology is that we ease the worker's mental stress, because without this technology the worker has to be attentive and keep his eyes on the spillway at all times in order to open the dyes immediately as soon as the waste comes out and save the item from being spoiled.

FINANCIAL BENEFITS The author visited several crockery manufacturing units and found that 5-7 pieces are spoiled daily due to overheating. 1 piece cost = 200 approx.; 5 pieces cost = 200 x 5 = 1000; annual loss due to overheating = 1000 x 365 = 365,000 (for only one manufacturing machine). Most manufacturing units have no fewer than approximately 10 manufacturing machines, so the annual loss of a manufacturing unit with 10 machines = 365,000 x 10 = 3,650,000. This is a significant saving attainable through the use of the proposed methodology, as depicted in the graph.

CONCLUSION AND FUTURE DIRECTIONS WSN is a technology with a promising future, and it is presently used in a wide range of applications to offer significant advantages over current systems. The methodology explained in the paper provides significant financial benefits as well as mental relief to the worker. Another benefit of using WSN in this industry is that the owner of the manufacturing unit no longer needs to worry about finding an experienced and efficient worker, because all the crucial moments are handled by the WSN. The research can be extended to make this production process completely automatic, i.e., without human interference.

Figure 2 The lower part is filled with raw material. Figure 2A A worker pulls the gear of the hydraulic part and, as a result, the moveable part of the dye moves upward and joins the fixed part.
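The annual-loss arithmetic in the FINANCIAL BENEFITS section above can be reproduced with a few lines; the sketch below simply restates the paper's own figures (5 spoiled pieces per day at roughly 200 per piece, 365 days, 10 machines) as a check.

```python
# Reproduces the loss estimate from the FINANCIAL BENEFITS section.
pieces_per_day = 5          # lower end of the 5-7 spoiled pieces reported
cost_per_piece = 200        # approximate cost per piece
machines = 10               # typical number of machines per unit

daily_loss_per_machine = pieces_per_day * cost_per_piece       # 1,000
annual_loss_per_machine = daily_loss_per_machine * 365         # 365,000
annual_loss_per_unit = annual_loss_per_machine * machines      # 3,650,000

print(f"annual loss per machine: {annual_loss_per_machine:,}")
print(f"annual loss per unit (10 machines): {annual_loss_per_unit:,}")
```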
Figure 5 Again the moveable part joins the fixed one. TI's SimpleLink™ CC3x family of wireless MCUs offers the next generation of TI's embedded Wi-Fi® that enables easier development of Internet of Things designs. This Internet-on-a-chip™ portfolio is based on the industry's first user-dedicated Wi-Fi microcontroller. It supports the Wi-Fi CERTIFIED™ standard, is designed for easy integration of sensors, and provides low power consumption for battery-operated applications.
3,507
2016-08-26T00:00:00.000
[ "Computer Science" ]
Analysis of Surface Properties of Nickel Alloy Elements Exposed to Impulse Shot Peening with the Use of Positron Annihilation The paper presents the results of experimental studies on the impact of impulse shot peening parameters on surface roughness (Sa, Sz, Sp, Sv), surface layer microhardness, and the mean positron lifetime (τmean). In the study, samples made of the Inconel 718 nickel alloy were subjected to impulse shot peening on an originally designed stand. The variable factors of the experiment included the impact energy, the diameter of the peening element, and the number of impacts per unit area. The impulse shot peening resulted in changes in the surface structure and an increase in surface layer microhardness. After the application of impulse shot peening, the analyzed roughness parameters increased in relation to post-milling values. An increase in microhardness was obtained, i.e., from 27 HV 0.05 to 108 HV 0.05 at the surface, while the maximum increase in microhardness occurred at depths from 0.04 mm to 0.08 mm. The changes in the physical properties of the surface layer were accompanied by an increase in the mean positron lifetime τmean. This is probably related to the increased positron annihilation in point defects. In the case of small surface deformations, the increase in microhardness was accompanied by a much lower increase in τmean, which may indicate a different course of changes in the defect structure, consisting mainly in modification of the dislocation system. The dependent variables were subjected to analysis of variance (ANOVA; a one-factor analysis), and the effect of the independent variables was evaluated using post-hoc tests (Tukey test). Introduction Shot peening is a finishing method in which the treated surface is hit with small balls or cut wire shot. As a result of the process, the geometrical surface structure is altered, the surface layer is hardened, and compressive residual stresses are generated [1][2][3]. The surface layer formed during the shot peening process, and in particular the compressive residual stresses, increase the fatigue strength and life of elements subjected to this process [4][5][6]. Shot peening may also increase the wear resistance of the elements [7,8]. The beneficial effect of shot peening on fatigue life has also been observed in elements subjected to the wear process after shot peening [9]. The effect of shot peening on corrosion resistance is unclear. Investigations of the effect of very intensive shot peening on the intergranular corrosion of 304H steel yielded a negative result [10]. In turn, the shot peening process considerably reduced the corrosion of 904L steel welding joints [11]. Shot peening exerts an effect on the adhesive properties of surfaces, which may influence the strength of adhesive joints in elements subjected to shot peening [12]. Shot peening is mainly applied to elements that are exposed to variable loads during use. This process is applied to elements made of a variety of metal alloys, especially steel, titanium alloys, and aluminum alloys. There are relatively few publications on the application of the shot peening process to nickel alloys. The results of studies on the effect of broaching and shot peening on the microstructure and properties of the surface layer of Inconel 718 gas turbine discs were presented in ref. [13]. Broaching resulted in the generation of tensile residual stresses and induced material cracking and plucking.
The shot peening process contributed to generation of compressive residual stresses at a depth of approx. 300 µm and an increase in the microhardness of the surface layer. Investigations of shot peening and vibro-peening of a nickel-based superalloy showed higher compressive residual stresses and a greater increase in surface microhardness after the shot peening process than after vibro-peening [14]. As demonstrated by Ortiz, the value and depth of compressive residual stresses in the surface layer of the C-2000 nickel alloy depend on the medium used in the shot peening process and surface mechanical attrition treatment [15]. As a result of shot-peening objects made of the Inconel 718 nickel alloy, compressive stresses arise and microstructure changes [16,17]. Shot peening also improves the fatigue properties of nickel alloys. It has been found that shot peening of the Inconel 718 alloy can result in a 2-20-fold increase in the fatigue life depending on the conditions of the process [18]. The fatigue strength largely depends on the condition of the edges of the tested objects. The research on shot peening of the edges of Inconel 718 samples showed an increase in fatigue strength only in certain conditions [19]. Shot peening increases the fatigue strength of elements made of nickel alloys working at elevated temperatures [20]. Nickel alloys are treated with laser shot peening as well [21]. The process of shot peening of additively manufactured nickel alloys has been investigated as well. The analysis of the nucleation and propagation of fatigue cracks in non-peened and peened Inconel 718 nickel alloy samples produced with the additive manufacturing technique was described in ref. [22]. Shot peening is successfully used to process stents made of the intermetallic Nitinol (NiTi) alloy in order to increase their wear resistance [23]. The results of shot peening are assessed with various methods. The most common approaches include measurements of the shot peening intensity with Almen plates, assessment of the coverage of the shot-peened surface, analysis of the geometric surface structure, assessment of changes in the microstructure and microhardness of the surface layer, and analysis of the residual stress distribution. Analyses of shot-peened surface layers using positron annihilation-based techniques, which are successfully used to detect defects in metals, have also been carried out [24]. Previous studies have shown that annihilation techniques can be successfully used to analyze the surface layer of shot-peened unalloyed steel, titanium, and aluminum alloys [25], and stainless steel [26]. Additionally, nickel superalloys similar to that analyzed in this study have been investigated with these techniques [27][28][29][30][31][32], and interesting results were reported. Contrary to the other methods mentioned above, positron annihilation provides information about the structure of sample defects (both their size and concentration) at the atomic level. It facilitates correlation of the macroscopic properties of tested materials with their microscopic structure and, consequently, elucidates processes taking place during shot peening. This ensures more effective designs of effective finishing processes. The literature review indicates that there are only few studies on the physical properties of the surface layer and functional properties of shot-peened nickel alloy elements. 
There are also no studies on the properties of shot-peened nickel alloys carried out with annihilation techniques or on the influence of technological parameters of impulse shot peening on the properties of the surface layer of nickel alloy elements. Therefore, it seems advisable to determine the impact of the technological parameters of impulse shot peening on surface layer properties with the use of positron annihilation. The aim of the study was to identify technological parameters of impulse shot peening that contribute to low surface roughness and a high increase in microhardness. In turn, detection of changes in the sample defect structure accompanying the improvement in sample properties may elucidate processes involved in impulse shot peening, which may contribute to further improvement of the procedure.

Materials and Methods The experiment was carried out using samples made of nickel alloy from the HRSA (Heat Resistant Super Alloys) group, i.e., Inconel 718. This material is a precipitation-hardened nickel alloy with excellent corrosion resistance in many environments, creep resistance, susceptibility to forging and casting, as well as good weldability [33,34]. With its favorable properties, the Inconel 718 nickel alloy is used in such machine parts as turbines, discs, shafts, compressor blades, exhaust outlets, and combustion chambers [35]. It constitutes over half the mass of a conventional turbojet engine. The drawback of the nickel alloy is its low resistance to frictional wear, which nevertheless can be eliminated by surface processing [35]. Table 1 presents the chemical composition and properties of the Inconel 718 nickel alloy. Figure 1 shows the experimental design consisting of a set of variables and output data. The Inconel 718 nickel alloy samples were milled prior to the impulse shot peening process. The six-blade milling head with an outer diameter of Dg = 40 mm was equipped with round cemented carbide plates covered with a TiAl coating.
The following technological parameters were employed in the milling process: depth of cut ap = 0.5 mm, cutting speed vc = 40 m/min, and feed per tooth fz = 0.08 mm/blade. Abundant cooling with the Mobilcut cooling-lubricant fluid was applied in the process. Impulse shot peening was performed on an originally designed shot peening stand (Figure 2a). The indentations were produced in a regular manner (Figure 2c), i.e., one next to another at a step of "x". The schema of the device is presented in Figure 2b. During shot peening, the work surface of the workpiece (2) was subjected to hits of the beater (5) of a known mass, raised to the height "h" by the cam (8). The beater with a ball of diameter d (3), which is an interchangeable element, moves in the ball guides (4). The CNC table, on which the workpiece is attached, performs a feed movement, and the speed of this movement may be controlled with a guide screw (9) and the stepper motor (10, 12). The speed of the feed motion affects the value of the shot peening density (number of impacts per unit area). The impact energy E depends on the mass of the beater, the mass of the weight, and the height of the peening ball drop. The impulse shot peening process was applied at variable technological parameters, which were selected on the basis of preliminary research and are presented in Table 2. The peening density j, i.e., the number of impacts per unit area, is given by Equation (1): j = 1/x² (1). The T800RC 120-140 Hommel-Etamic device (Jenoptik, Villingen-Schwenningen, Germany) was used for the analysis of the 3D surface topography. The area of the scanned surface was 4.0 mm × 4.0 mm. The Hommel Map Basic v. 6.2 software (Jenoptik, Villingen-Schwenningen, Germany) was used for determination of the parameters of 3D surface roughness.
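As a rough illustration of the relations described above, the following minimal Python sketch computes the peening density j from the indentation step x (assuming a square grid of indentations, consistent with Equation (1)) and estimates the impact energy from a free drop of the beater-weight assembly, i.e., E ≈ (m_beater + m_weight)·g·h. The masses, drop heights, and steps used here are hypothetical values chosen for illustration, not the actual settings of the test stand.

```python
# Minimal sketch (hypothetical values): peening density and impact energy
# for an impulse shot peening stand with a gravity-driven beater.

G = 9.81  # gravitational acceleration, m/s^2

def peening_density(step_mm: float) -> float:
    """Impacts per unit area for a square grid of indentations, j = 1/x^2 (mm^-2)."""
    return 1.0 / step_mm ** 2

def impact_energy_mj(m_beater_kg: float, m_weight_kg: float, drop_height_mm: float) -> float:
    """Free-drop estimate E = (m_beater + m_weight) * g * h, returned in mJ.
    Assumes all potential energy reaches the ball; losses are neglected."""
    h_m = drop_height_mm / 1000.0
    return (m_beater_kg + m_weight_kg) * G * h_m * 1000.0

if __name__ == "__main__":
    for x in (0.15, 0.2, 0.3, 0.4):  # indentation steps in mm (hypothetical)
        print(f"x = {x} mm -> j = {peening_density(x):.2f} mm^-2")
    # hypothetical beater/weight masses and drop height
    print(f"E = {impact_energy_mj(0.5, 1.0, 12.0):.0f} mJ")
```

With these placeholder steps, the computed densities fall in the same range as the values quoted later in the text (approx. 6-44 mm−2), which is the only purpose of the example.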
Positron annihilation lifetime spectroscopy (PALS) measurements were conducted using a digital positron lifetime spectrometer with an Agilent U1065A digitizer (Acqiris, currently supported by Keysight Technologies, Santa Rosa, CA, USA) (sampling rate of 4 GS/s) and a specialized program [36] for determination of the interval between signals with amplitudes corresponding to the energy of radiation emitted during positron formation (1274 keV) and annihilation (511 keV). The pulses were generated by two scintillation detectors equipped with BaF2 scintillators, placed in the immediate vicinity of the samples. Each of the detectors was used to detect both radiation quanta with energies of 1274 keV and 511 keV. Positrons were produced by 22NaCl with an activity of 0.3 MBq deposited on an 8 µm thick Kapton envelope between two identical samples, which were mounted in a dedicated holder. To avoid the coincidence of the 511-511 keV annihilation quanta, which are antiparallel and emitted along a single line, the sample with the positron source was placed in such a way that no line passed through the source and both scintillators simultaneously. The positron lifetime spectrum for each sample was collected with a total number of counts of approx. 2.2 × 10⁷. The chosen measurements were repeated with the positron source placed in other areas of the sample to verify whether the result depended on the position of the source on a potentially non-homogeneous surface. The preliminary analysis of the PALS measurement results was carried out with the use of the MELT software (Université de Genève, Genève, Switzerland) [37]. It revealed the presence of two dominant components in the spectra, but this method of analysis yields large statistical dispersions (Figure 3). The main analysis of the data, performed using the PALSfit program (Technical University of Denmark, Roskilde, Denmark) [38], showed that, in addition to these components, it is necessary to assume the existence of a long-lived component with a lifetime of approx. 1.75 ns and a negligible intensity of <0.2% in order to obtain a good fit. This was necessary due to the very low background level (approx. 35 counts/channel at 8000 channels covering a timebase of 50 ns). During the analysis, a correction for annihilation in the source envelope, with a lifetime of 382 ps and a contribution of 14.44% determined in the Positron Fraction program [39], was assumed. The resolution curve was approximated by a single Gaussian with FWHM ≈ 206 ps. The effect of the shot peening conditions on the surface roughness parameters Sa and Sz (the most frequently analyzed parameters in engineering practice), the increase in microhardness ∆HV 0.05 (caused by impulse shot peening compared to the post-milling value), and the mean positron lifetime τmean was determined using the analysis of variance (ANOVA) performed in the Statistica version 13 program. Before the ANOVA analysis, the normality of the data distribution was examined with the Shapiro-Wilk test and the homogeneity of variance was assessed using the Levene test. The significance level α = 0.05 was assumed in all the analyses. The results of the statistical analysis of variance F were compared with the critical value Fα for the adopted significance level and degrees of freedom. The effect of the independent variables was verified by means of post-hoc tests (Tukey test). Detailed results of the analyses are presented in the Supplementary Materials.
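The statistical workflow described above (normality check, homogeneity of variance, one-way ANOVA, Tukey post-hoc test) can be sketched in Python as follows. The groups and the Sa values below are fabricated placeholders for illustration only and do not reproduce the actual data set analyzed in Statistica.

```python
# Sketch of the ANOVA workflow (Shapiro-Wilk, Levene, one-way ANOVA, Tukey HSD).
# The Sa values below are fabricated placeholders, not measured data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

ALPHA = 0.05
groups = {                      # surface roughness Sa (µm) per impact energy (hypothetical)
    "E=40 mJ":  [0.92, 0.95, 0.90, 0.97, 0.93],
    "E=120 mJ": [1.10, 1.15, 1.08, 1.12, 1.11],
    "E=240 mJ": [1.30, 1.27, 1.33, 1.29, 1.31],
}

# 1. Normality of each group (Shapiro-Wilk).
for name, vals in groups.items():
    _, p = stats.shapiro(vals)
    print(f"Shapiro-Wilk {name}: p = {p:.3f} ({'normal' if p > ALPHA else 'non-normal'})")

# 2. Homogeneity of variance (Levene).
_, p_levene = stats.levene(*groups.values())
print(f"Levene: p = {p_levene:.3f}")

# 3. One-way ANOVA: does the factor (impact energy) affect Sa?
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# 4. Post-hoc Tukey HSD to see which levels of the factor differ.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=ALPHA))
```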
Surface Roughness An increase in the impact energy resulted in higher values of the analyzed surface roughness parameters (Figure 4b). This is related to the formation of deeper depressions on the shot-peened surface. The application of high impact energies caused substantial deformation of the post-milling micro-roughness (an increased value of Sp) and the formation of numerous depressions (increased Sv). In the range of impact energies used in the experiment, the analyzed parameters of the 3D surface roughness increased in comparison with the values obtained after the milling process. At the constant impact energy and shot peening ball diameter, the growth of the distance between the indentations reduced the degree of coverage.
This contributed to uneven deformation of the shot-peened surface and, consequently, increased the roughness parameters (Figure 5). The effect of the impact density j on the surface roughness parameters was more noticeable in the range of j = 6 ÷ 25 mm−2. In the process of impulse shot peening with a ball of a small diameter, the contact surface of the ball with the workpiece is relatively small; hence, the more intense plastic-elastic deformations result in an increase in surface roughness. In turn, the impact of a ball with a larger diameter produced shallower indentations on the surface, which caused more intensive smoothing of the post-milling micro-roughness and a decrease in the surface roughness parameters Sa, Sz, Sp, and Sv (Figure 6).
At a ball diameter of d = 12.45 mm, the analyzed parameters of 3D surface roughness reached values similar to or lower than those obtained for the milled surface. The statistical analysis confirmed the significant effect of the technological parameters on the values of the 3D roughness parameters. Surface Topography Figure 7 shows the topography and 3D surface roughness parameters prior to impulse shot peening. The surface topography after milling was characterized by an even distribution of micro-roughness with clearly visible elevations and depressions resulting from the geometric-kinematic mapping of the tool in the workpiece. The elevations and depressions represent similar proportions in the total surface profile. The surface topography should be classified as a directed structure. The surface topography was altered after the impulse shot peening (Figure 8). Numerous indentations induced by the impact of the ball were visible on the shot-peened surface. The depth of the indentations depended on the technological parameters of the process (the maximum Sv was obtained for d = 6.00 mm, j = 11 mm−2, E = 240 mJ). They were arranged in successive rows corresponding to the course of the impulse shot peening process. The analysis of the topography (Figure 8e,f) revealed that the deformation of the sample surface was more complete at the higher shot peening density of j = 44 mm−2 than at j = 6 mm−2, which was reflected in an over two-fold reduction of the surface roughness. The impulse shot peening process resulted in changes in the skewness parameter Ssk, which suggests that the material was concentrated around the peaks of the profile; thus, the surface can be regarded as a good bearing surface [40]. The lower values of the skewness parameter allow an assumption of a greater ability to transfer contact loads and lower tribological wear of the surface in the presence of a lubricant [41].
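To make the areal roughness parameters discussed in this section concrete, the sketch below computes Sa, Sq, Sp, Sv, Sz, and Ssk from a height map according to their standard (ISO 25178-type) definitions. The synthetic surface is only a stand-in for real profilometer data and is not meant to reproduce the measured values.

```python
# Sketch: areal surface roughness parameters from a height map z(x, y) in µm.
# The synthetic surface below only illustrates the definitions; real data would
# come from the profilometer scan (e.g., a 4.0 mm x 4.0 mm area).
import numpy as np

def areal_roughness(z: np.ndarray) -> dict:
    z = z - z.mean()                      # heights relative to the mean plane
    sa = np.mean(np.abs(z))               # Sa: arithmetic mean height
    sq = np.sqrt(np.mean(z ** 2))         # Sq: root-mean-square height
    sp = z.max()                          # Sp: maximum peak height
    sv = -z.min()                         # Sv: maximum pit depth
    sz = sp + sv                          # Sz: maximum height (Sp + Sv)
    ssk = np.mean(z ** 3) / sq ** 3       # Ssk: skewness of the height distribution
    return {"Sa": sa, "Sq": sq, "Sp": sp, "Sv": sv, "Sz": sz, "Ssk": ssk}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 4.0, 400)        # mm
    y = np.linspace(0.0, 4.0, 400)
    X, Y = np.meshgrid(x, y)
    # synthetic milled-like surface: periodic tool marks plus noise (µm)
    z = 0.8 * np.sin(2 * np.pi * X / 0.5) + 0.1 * rng.standard_normal(X.shape)
    for name, value in areal_roughness(z).items():
        print(f"{name} = {value:.3f}")
```

A negative Ssk of the real shot-peened surfaces, as discussed above, corresponds to material concentrated around the profile peaks, i.e., a plateau-like bearing surface.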
Microhardness The impulse shot peening process resulted in changes in the surface microhardness (Figure 9). Impulse shot peening contributed to a growth in the density of dislocations, which propagated and were halted when they encountered other dislocations, grain boundaries, or precipitates. The "blockage" of the arising dislocations contributed to an increase in the microhardness parameters. The increase in microhardness close to the surface ranged from 27 HV 0.05 to 108 HV 0.05, and the maximum changes in microhardness reached a depth of 0.04 mm to 0.08 mm. Figure 9 presents the distribution of the microhardness of the Inconel 718 nickel alloy surface before and after the impulse shot peening process with the parameters E = 180 mJ, d = 3.95 mm, j = 11 mm−2. The effect of the impulse shot peening parameters on the microhardness of the Inconel 718 nickel alloy surface is shown in Figures 10-12. The rising impact energy resulted in a greater increase in the microhardness of the surface layers and in the depth of the hardened layer (Figure 10). The growth of the impact energy above E = 120 mJ did not produce a significant increase in the surface layer microhardness, but there were evident differences in the depth of the changes.
The decrease in the shot peening density j, corresponding to an increased distance between traces x, contributed to a reduction of the microhardness of the surface layer (Figure 11). An increase in the diameter of the shot peening ball d contributed to an enlargement of the impact-induced indentation surface area. This led to a reduction of the concentration of energy transferred to the workpiece, thereby lowering the microhardness of the surface layer of the Inconel 718 samples (Figure 12). It is noteworthy that, while approximately parallel shifts of the microhardness distributions with a similar slope were observed in the case of changes in E and j, the distributions became flatter with the increase in d. This indicates that an increase in the ball diameter is accompanied by a decrease in the microhardness at the surface and a rise in the depth of the surface layer hardening. The statistical analysis confirmed the significant effect of the technological parameters on the values of the ∆HV 0.05 microhardness increase (Supplementary Material). Positron Lifetime Spectroscopy Averaging the results of all measurements gives mean values of the lifetimes of the two dominant components in the positron lifetime spectra of <τ1> = 137 ps and <τ2> = 197 ps. These values are too close to allow determination of the parameters (lifetime and intensity) of each component with acceptable accuracy. Consequently, lifetimes in the ranges τ1 = 103 ÷ 160 ps and τ2 = 165 ÷ 246 ps are observed. This impedes unequivocal interpretation of the origin of the components. Moreover, it should be expected that they may have several sources of origin with similar lifetimes. The lifetime of the first component for most of the samples is too long to originate from annihilation in undefected material, where lifetimes in the range of 110 ÷ 120 ps [42] are expected, which should additionally be reduced due to positron trapping [43]. The observed lifetimes of the first component and the intensities of the second component, which lie in the range of 9 ÷ 79%, would instead imply bulk lifetimes in the range of 146 ÷ 166 ps, which are clearly too large. Therefore, it is most probable that the first component originates from the annihilation of positrons trapped in dislocations. These are shallow traps with a positron binding energy below 0.1 eV. Positrons can relatively easily leave the traps and then annihilate in adjacent point defects, thus contributing to the second component [44]. This, in turn, may modify the lifetime of the first component. The lifetime of the second component indicates that it may be a result of positron localization in Ni, Fe, or Cr monovacancies. Incoherent δ phase precipitates (Ni3Nb) or niobium carbide at the grain boundaries may also contribute to this interphase annihilation component. Due to the aforementioned low accuracy of determination of the lifetime values and intensities of each of the components, the mean positron lifetime (τmean) is the most reliable parameter. It can be determined using Equation (2): τmean = Σi τi·Ii / Σi Ii, (2) where τi and Ii are the lifetime and intensity of the ith component of the positron lifetime spectrum.
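A minimal sketch of the mean-lifetime calculation in Equation (2) is given below; the component lifetimes and intensities are placeholder numbers chosen only to lie in the ranges quoted above, not fitted values from the PALSfit analysis.

```python
# Sketch: mean positron lifetime from the components of a lifetime spectrum,
# tau_mean = sum(tau_i * I_i) / sum(I_i).  Placeholder components, not fit results.

def mean_lifetime(components):
    """components: iterable of (lifetime_ps, intensity) pairs; intensities need not sum to 1."""
    total_intensity = sum(i for _, i in components)
    return sum(tau * i for tau, i in components) / total_intensity

if __name__ == "__main__":
    # two dominant components plus a weak long-lived one (illustrative values)
    spectrum = [(137.0, 0.60), (197.0, 0.398), (1750.0, 0.002)]
    print(f"tau_mean = {mean_lifetime(spectrum):.1f} ps")
```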
A low value of τmean indicates a dominant role of dislocations in positron trapping, and its increase is associated with a greater relative contribution of point defects and, probably, grain boundaries. To compare τmean with the microhardness measurements, the weighted mean of microhardness HVw was determined for each sample according to Equation (3): HVw = Σi ∆Ni·HV0.05i / Σi ∆Ni, (3) where ∆Ni is the number of positrons, determined based on the positron implantation profile [45], annihilating in the ith layer, for which the microhardness is HV0.05i (Figures 10-12). This allows taking into account the uneven contribution to τmean of the individual material layers for which microhardness was measured (analogous to the residual stresses in ref. [46]). The comparison of τmean and HVw for different impact energy values is shown in Figure 13. In the untreated sample, τmean ≈ 157 ps corresponded to HVw ≈ 470. The growth in the impact energy to 60 mJ was accompanied by an approximately linear increase to τmean ≈ 168 ps and HVw ≈ 540. This evidences an increase in the concentration of point defects. Importantly, τmean indicates the ratio of point defects to dislocations; nevertheless, a decrease in the concentration of dislocations caused by the shot peening seems unlikely. In turn, changes in their structure cannot be excluded, i.e., the dislocations became shorter, which resulted in faster de-trapping of positrons propagating along the dislocation line and trapping in point defects often accompanying dislocations [47]. Above an energy of 60 mJ, τmean had a constant value within measurement errors, and the increase in the HVw value was inconsiderable. This indicates saturation of the alterations in the samples. The upper sensitivity limit for PALS is known, i.e., the absence of an increase in the mean lifetime despite an increase in the defect concentration when all positrons are trapped in a given type of defect. However, the absence of an increase in HVw suggests that this was not the case here. The increase in the shot peening density resulted in a rapid rise in HVw to 510 at j = 6.25 mm−2, followed by a much milder HVw(j) relationship reaching 540 at the maximum peening density used in the study (Figure 14). In this case, the changes in the τmean value did not follow the same trend as HVw but exhibited a nearly linear increase over the entire range with no initial sharp rise. It is noteworthy that, with the same scale of the τmean and HVw axes (0.12 ps/HVw), the rate of changes in both values in the range above j = 6 mm−2 was similar. For the investigation of the effect of shot peening density with PALS, the shot peening parameters E = 40 mJ and d = 10.00 mm were chosen. This was done to avoid the influence of surface non-homogeneity, which caused a dependence of the result on the location of the positron source, especially when smaller balls and higher energy values were used (Supplementary Material, Figure S1). Choosing these particular parameters resulted in a smaller surface deformation at a single impact (Table 3).
Most probably, few point defects (observed at larger deformations) were created in the sample in this case, but the dislocations were modified (e.g., shortened), which had a positive effect on microhardness but was not detected by PALS. Similar differences between τmean and HVw are shown in Figures 13 and 15 for the smallest surface deformations. Interestingly, there is no analogous dependence on the energy density (the quotient of the impact energy and the indentation area), which indicates that not all of the energy is transferred to non-elastic deformations when different shot peening parameters are used. The change in the diameter of the peening ball had a weak effect on HVw (Figure 15), which at the larger ball sizes (6-13 mm) rose slightly within the range of 540-560 along with the decrease in the ball diameter (i.e., an increase in the transferred energy per unit area). This was related to the increase in the slope of the HV 0.05 distributions accompanying the decrease in the size of the ball (Figure 12), which compensated for their increasingly shallower penetration into the sample. This trend seemed to be halted in the case of the ball with a diameter of 3.95 mm, for which the HVw value declined slightly to 530. The τmean dependence agreed well with the HVw(d) dependence at the same scale for both quantities as that used for the dependence on impact energy. An exception was the ball with a diameter of 12.45 mm, for which τmean was clearly lower than the value expected from HVw. This discrepancy may have had the same background as the dependence on the shot peening density, i.e., the formation of defects that were not detected by PALS at the small surface deformations induced by a single impact. In this case, the strong effect may have resulted from the low curvature of the indentation, which resulted in smaller deformations than in the case of the high curvature of the overlapping indentations. The statistical analysis confirmed the significant effect of the technological parameters on the values of the mean positron lifetimes. Conclusions The impulse shot peening process resulted in changes in the geometrical structure of the surface. In comparison with the pre-treatment state, shot peening contributed to a decrease in the value of the skewness parameter Ssk. This allows an assumption of lower tribological wear in the presence of a lubricant on the surface of the Inconel 718 subjected to the impulse shot peening procedure. This is confirmed by the presence of numerous depressions on the surface, which may serve as lubrication pockets.
The occurrence of lubrication pockets on the surface allows the coefficient of friction to be reduced during the cooperation of two elements. Unfortunately, the other surface roughness parameters (Sa, Sz, Sv, and Sp) increased in relation to the post-milling values. Impulse shot peening carried out at E = 180 mJ, j = 11 mm−2, and d = 12.45 mm yields lower values of the surface roughness parameters Sa, Sz, Sp, and Sv than the post-milling values and a large depth of strengthening (z = 80 µm). This suggests an increase in the wear resistance of the surface of the Inconel 718 workpiece subjected to shot peening with these parameters, which should therefore be considered the best among those used in the research. The positron lifetimes indicate annihilation of two groups of positrons: those trapped in dislocations and those trapped in various types of point defects (vacancies, grain boundaries). The changes in the ratio of the concentrations of defects of both types can be estimated based on the changes in the mean lifetime. There is a good correlation between the mean lifetime and the weighted mean microhardness HVw (with the positron implantation profile as the weight) for HVw > 500. This implies that an increase in microhardness is associated with a growth in the probability of positron trapping in point defects. This may result from an increase in their concentration and a shortening of the dislocation lines, which increases the probability of de-trapping from dislocations and trapping in point defects. In the case of HVw values in the range of 470-500, the increase in the mean lifetime is lower than the 0.12 ps/HVw observed for higher microhardness values. This is probably due to a different modification of the defect structure responsible for the increase in microhardness at smaller deformations per impact, which results in less intense formation of point defects. It mainly consists in the modification of dislocations, i.e., shortening thereof or formation of a larger number of mutually blocking dislocations. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ma14237328/s1. Table S1. ANOVA analysis of variance for the surface roughness parameters Sa and Sz in the applied conditions of impulse shot peening of samples made of the Inconel 718 nickel alloy, where: DF-number of degrees of freedom, SS-sum of squares between groups, MS-mean sum of squares between groups, F-value of the test statistic, p-probability level; Table S2. ANOVA analysis of variance for the mean positron lifetime τmean in the applied conditions of impulse shot peening of samples made of the Inconel 718 nickel alloy, where: DF-number of degrees of freedom, SS-sum of squares between groups, F-value of the test statistic, p-probability level; Table S3. ANOVA analysis of variance for the relative increase in microhardness ∆HV 0.05 in the applied conditions of impulse shot peening of samples made of the Inconel 718 nickel alloy, where: DF-number of degrees of freedom, SS-sum of squares between groups, MS-mean sum of squares between groups, F-value of the test statistic, p-probability level; Table S4. Comparative analysis of the significance of differences (post-hoc Tukey test) between the mean values of the Sa roughness parameter after the impulse shot peening treatment with the use of different parameters. The red color indicates the level of probability for which there are no statistically significant differences; Table S5.
Comparative analysis of the significance of differences (post-hoc Tukey test) between the mean values of the Sz roughness parameter after the impulse shot peening treatment with the use of different parameters. The red color indicates the level of probability for which there are no statistically significant differences; Table S6. Comparative analysis of the significance of differences (post-hoc Tukey test) between the mean values of the increase in microhardness after the impulse shot peening treatment with the use of different parameters. The red color indicates the level of probability for which there are no statistically significant differences; Table S7. Comparative analysis of the significance of differences (post-hoc Tukey test) between the mean values of the mean positron lifetimes τmean after the impulse shot peening treatment with the use of different parameters. The red color indicates the level of probability for which there are no statistically significant differences; Figure S1. Dependence of the mean positron lifetime τmean on the shot peening density j (E = 180 mJ, d = 6.00 mm). Funding: The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).
10,556
2021-11-30T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
Active and Passive Mediated Social Touch with Vibrotactile Stimuli in Mobile Communication: Tactile technology in mobile devices makes mediated social touch (MST) a possibility. MST with vibrotactile stimuli can be applied in future online social communication applications. There may be different gestures to trigger vibrotactile stimuli for senders and receivers. In this study, we compared senders with gestures and receivers without gestures to identify the differences in perceiving MST with vibrotactile stimuli. We conducted a user study to explore differences in the likelihood to be understood as a social touch with vibrotactile stimuli between senders and receivers. The results showed that for most MST, when participants acted as senders and receivers, there were no differences in understanding MST with vibrotactile stimuli when actively perceiving with gestures or passively perceiving without gestures. Researchers or designers could apply the same vibrotactile stimuli to senders' and receivers' phones in future designs. Introduction Mediated social touch (MST) is a new form of remote communication [1]. Recent haptic technology helps to transmit MST over mobile devices. For example, ComTouch [2], POKE [3], CheekTouch [4], and the KUSUGURI interface [5] have applied haptic actuators to design vibrations to transmit MST such as patting, poking, tapping, and tickling. Bendi [6] has taken advantage of its material and the Internet to transmit MST with tactile feedback. The abovementioned studies mainly developed prototypes for MST. They presented how MST could be transmitted between senders and receivers and described what haptic feedback was provided on the receivers' or the senders' devices. However, they did not test whether senders and receivers could both understand the haptic feedback, since senders and receivers might perform different gestures when triggering the haptic feedback on mobile devices. For example, Rantala et al. [1] mentioned that during remote communication with mobile devices, a sender actively manipulated the device while a receiver held the device passively. In this study, we aim to test the differences between a sender and a receiver in perceiving MST with vibrotactile stimuli. The conditions are: 1. The sender has specific gestures to trigger and perceive the vibrotactile stimuli, while the receiver just presses the touchscreen one time to trigger the vibrotactile stimuli. 2. For the sender, we test the likelihood to be understood as a social touch (LUMST) with vibrotactile stimuli when actively perceiving with gestures, while for the receiver, we test the LUMST when passively perceiving without gestures. Vibrotactile stimuli were tested for receivers in our previous study [7]. In addition to the receiver, some researchers have demonstrated that it is also necessary for the sender to confirm the expected vibrotactile stimuli before sending. For example, Ramos et al. [8] mentioned that it was convenient for a sender to achieve specific target forces through local feedback. Our research questions are: 1. Are there significant differences between a sender with gestures and a receiver without gestures in perceiving MST with vibrotactile stimuli? 2. What are the implications for designing and applying MST with vibrotactile stimuli for mobile communication?
Haptic Feedback on Receivers' Mobile Devices When Transmitting MST Many researchers have mentioned that a sender's finger motion or gestures could be captured by sensors and transmitted through the Internet to activate haptic feedback on a receiver's device. For example, Park et al. [4] provided CheekTouch, a bidirectional communication prototype. One user's finger motion could be rendered on the other user's mobile device with vibrotactile stimuli via the prototype in real time. Furukawa et al. [5] provided the KUSUGURI interface, which could offer tickling feelings for dyads. The interface was designed based on the theory that "Prediction of one's behavior suppresses the perception brought about by the behavior" [12]. The vibrations were mainly described for receivers' devices. Hashimoto et al. [13] provided a novel tactile display for emotional and tactile experiences in remote communication on mobile devices. Through the novel tactile display, receivers could feel different touch gestures, such as tapping, tickling, pushing, and caressing, via vibrations. Due to the limitation of the prototype, only the receivers' devices could trigger vibrations. Hemmert et al. [14] proposed intimate mobiles, which could transmit grasping through remote communication on mobile phones. A sender performed grasping on the device, and the force detected by embedded sensors could be transmitted to a receiver's device. The receiver could feel the touch by sensing the pressure through the tightness actuation on their device. Based on the above, we found that: 1. Most studies have mainly provided haptic feedback to receivers. There is a lack of consideration of applying haptic feedback to senders. This may make user interaction with the touchscreen less precise [15]. Meanwhile, Ramos et al. [8] showed that it was necessary to apply local feedback on a sender's phone when transmitting MST, since the local feedback could help the sender to control their expected force. 2. Most studies have mainly designed and provided haptic feedback for prototypes. Their research has not considered whether senders and receivers could both understand the designed haptic feedback. Therefore, in this study, we consider testing whether senders and receivers can both understand the vibrotactile stimuli for MST. Haptic Feedback on Senders' Mobile Devices When Transmitting MST In addition to haptic feedback on receivers' phones, some studies have considered applying local feedback on senders' phones. For example, Hoggan et al. [16] proposed Pressages, which transmitted a user's input pressure to a receiver via vibrations during phone calls. The researchers considered local feedback for pressure input on a sender's phone. Therefore, senders and receivers could both feel the vibrations on their phones during message transmission. Rantala et al. [1] provided a mobile prototype to communicate emotional intention via vibrotactile stimuli. Senders could sense the vibrotactile stimuli on their devices while manipulating them. The vibrotactile stimuli could be triggered on senders' and receivers' devices simultaneously. Therefore, senders could feel the exact vibrotactile stimuli that they wanted to transmit to receivers. Chang et al. [2] designed ComTouch, which was a vibrotactile communication device. Local feedback on the sender's device helped users to estimate the signal intensity to be transmitted. Therefore, users could be aware of the vibrations they send during remote communication. Park et al.
[6] provided Bendi, which was a shape-changing device for tactile-visual phone conversations. When senders moved a joystick to transmit social touch, their devices would also bend up or down similar to the receivers' devices. Therefore, a sender could see what had been sent to a receiver. Some prototypes have provided self-checking functions for a sender to check the MST they would send, to ensure that the haptic feedback on the sender's and the receiver's devices was the same. For example, Park et al. [3] provided POKE. The inflatable surface of POKE could send social touch to a receiver. A sender could provide index finger pressure input on the back of their device during a phone call. POKE had a self-checking function, which helped the sender check whether the tactile feeling they wanted to send was correct. Based on the above, we observed that: 1. Some researchers have checked the feedback before sending it, but this may cause unnecessary delay and workload in the communication. It is a step in the communication process that may not be convenient for some people. Therefore, we want to take this checking step in the design and research process, rather than in the application, to ensure that both senders and receivers can understand the haptic feedback before applying it in real applications. 2. The way a receiver receives haptic feedback differs from a sender's self-checking of haptic feedback. A receiver feels the haptic feedback without gestures. However, in a sender's self-checking process, the sender feels the haptic feedback along with their gestures, since the gestures trigger the haptic feedback. Therefore, differences in perceiving MST with vibrotactile stimuli may occur when the gestures are different, especially when complex gestures are applied. For example, in repetitive gestures such as "shake" [9], a sender moves their fingers back and forth on the touchscreen to send the touch [9]. The vibrotactile stimuli accompany their finger movements. A receiver, in contrast, may press the button one time to trigger the vibrotactile stimuli, with no finger movements while they feel the stimuli. In this study, we check whether senders and receivers can both understand the vibrotactile stimuli for MST. The senders actively perceive with gestures, while the receivers passively perceive without gestures. The Differences between Actively and Passively Perceiving Haptic Feedback Many engineering studies have explored the differences between actively and passively perceiving haptic feedback. For example, perceived roughness is an interesting topic in the active and passive touch of surface texture. Lederman [17] investigated perceived roughness in active and passive touch. The study found no significant differences in the perceived magnitude of surface roughness or the consistency of such judgments between the two conditions. Hatzfeld [18] also mentioned a duplex theory of roughness perception, according to which active and passive touch conditions do not affect the perceived roughness. Other haptic devices have also been used in different applications to explore the perception differences between active and passive touch. For example, Symmons et al. [19] used a Phantom force-feedback device to explore virtual three-dimensional geometric shapes. The results showed that, compared with passive exploration, active exploration had significantly shorter latencies. Vitello et al.
[20] examined tactile suppression in active and passive touch. The results indicated that active movements lead to a significant decrease in tactile sensibility, while passive movements seem to have a minor effect on tactile performance [20]. Ahmaniemi et al. [21] summarized that it is more suitable to apply passive tactile messages in system alerts and discrete user action feedback. Active feedback is more natural for interacting with the physical interface between a user and a system, which helps to increase the user's feeling of control. Based on the above, we found that the differences between actively and passively perceiving MST are an interesting research field for haptic stimuli. However, there were some gaps, as follows: 1. The above studies explored physical perceptions, such as the perceived roughness of a surface, with vibrotactile stimuli. There was a gap in the field of MST with vibrotactile stimuli. 2. For actively perceiving vibrotactile stimuli, in addition to dynamically stroking a surface or object [18], more gestures could also be considered, such as pressing the touchscreen with different repeat times, rhythms, or speeds. Therefore, in this study, we compare the differences between actively and passively perceiving MST, considering more types of MST in active sensation. Gesture Data Collected from Our Previous Study We explored user-defined gestures for MST on the touchscreen of smartphones in our previous study [9]. In [9], we proposed classifications based on movement forms. We mentioned in [9] that "Movement forms indicate the trajectory and dynamics of hands/fingers movement" [22]. We also described the spatial relations between the hands/fingers and the touchscreen. In this study, the movement forms applied are straight gestures on the touchscreen (SOT), straight gestures from the air (SFA), and repetitive gestures (RPT) on the touchscreen [9]. The detailed definitions of each movement form are given in [9]. We chose related gestures to trigger the vibrotactile stimuli for MST based on the movement forms. We also recorded the pressure and duration of the user-defined gestures for each MST from [9]. These data were applied in the design of vibrotactile stimuli in [7]. Design of MST with Vibrotactile Stimuli In this study, we compare the differences between the sender with gestures and the receiver without gestures in perceiving MST with vibrotactile stimuli. We apply the typical vibrotactile stimuli designed in [7] and further analyze the comparison between the two conditions. We designed the vibrotactile stimuli based on movement groups [7,9]. Table 1 shows the typical recorded accelerations of the vibrotactile stimuli for MST in each group (four types of "tap", two types of "shake", and one type each of "pull" and "toss"). The detailed design process of the vibrotactile stimuli for MST is described in [7]. The forms of the vibrotactile stimuli for MST in each group are similar; therefore, only typical examples of recorded accelerations are listed in Table 1. In the SFA group, all the vibrotactile stimuli for MST are similar to "tap" in Table 1, with different durations and frequencies. In the RPT group, all the vibrotactile stimuli for MST are similar to "shake", with different numbers of repeats, durations, and frequencies. Table 1 also shows the vibrotactile stimuli for "pull" and "toss" in the SOT group. The detailed values of the physical parameters and accelerations of the vibrotactile stimuli are in [7].
Table 1. Movement form groups [9], typical accelerations of vibrotactile stimuli of MST [7], and MST in each movement group [9]: the SFA group - Tap 1, Tap 2, Tap 3, Tap 4 (poke, press, tap, tickle); the RPT group - Shake 1, Shake 2 (nuzzle, rub, rock, shake, tremble); the SOT group - pull and toss.
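The grouping in Table 1 can be illustrated with a small parameterization sketch: each MST group is reduced to a pulse duration, a pause, and a repeat count, from which a simple on/off vibration timing pattern is generated. The concrete millisecond values below are invented for illustration; the actual durations, repeats, and frequencies were taken from the user-defined gesture data in [9] and the stimulus designs in [7].

```python
# Sketch: parameterizing vibrotactile stimuli per MST group as on/off timing patterns.
# The millisecond values are illustrative placeholders, not the designs from [7].
from dataclasses import dataclass

@dataclass
class StimulusSpec:
    pulse_ms: int    # duration of a single vibration burst
    pause_ms: int    # silence between bursts
    repeats: int     # number of bursts

GROUP_SPECS = {
    "SFA (poke, press, tap, tickle)":          StimulusSpec(pulse_ms=60,  pause_ms=0,  repeats=1),
    "RPT (nuzzle, rub, rock, shake, tremble)": StimulusSpec(pulse_ms=80,  pause_ms=70, repeats=5),
    "SOT pull":                                StimulusSpec(pulse_ms=800, pause_ms=0,  repeats=1),
    "SOT toss":                                StimulusSpec(pulse_ms=150, pause_ms=0,  repeats=1),
}

def timing_pattern(spec: StimulusSpec) -> list[int]:
    """Alternating on/off durations in ms, e.g., [80, 70, 80, 70, 80] for 3 repeats."""
    pattern = []
    for i in range(spec.repeats):
        pattern.append(spec.pulse_ms)
        if i < spec.repeats - 1 and spec.pause_ms:
            pattern.append(spec.pause_ms)
    return pattern

if __name__ == "__main__":
    for group, spec in GROUP_SPECS.items():
        print(group, "->", timing_pattern(spec))
```

Such a timing pattern could then be handed to whatever vibration API the target platform offers; that mapping is outside the scope of this sketch.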
user-defined gestures of MST [9]. We considered the user-defined gestures here to mimic the real situation of sending MST. Based on [9], we told participants what each motion should be like when they acted as senders. The detailed gesture information is as follows:
1. Participants were asked to press a button for MST in the SFA group (i.e., poke, press, tap, tickle). a. For "poke", "tap", and "tickle", participants were asked to use their right index finger to press a button to trigger the vibrotactile stimuli. A participant's left hand held the phone, and their right hand acted as shown in Figure 1a. Participants could press the button with different rhythms, speeds, or numbers of repeats. For example, when senders "poke" others, they could "poke" many times rather than just once, based on their habits. The sender could feel the vibrotactile stimuli each time they "poke" others. b. For "press", participants were asked to use their right thumb to press the button to trigger the vibrotactile stimuli (Figure 1b).
2. For gestures in the RPT group (nuzzle, rub, rock, shake, tremble), participants were asked to start touching the button to activate the vibrotactile stimuli, move their right index fingers back and forth (Figure 1c), and actively sense the repetitive change of the vibrotactile stimuli until the stimuli stopped.
3. For gestures in the SOT group (pull, toss), we considered different directions. Therefore, we described the gestures as follows: a. For "pull", participants moved their right index finger from top to bottom with a strong force, acting as if they were pulling someone closer (Figure 1d). b. For "toss", participants moved their right index finger from bottom to top and then let the finger fly off the touchscreen, similar to the movement of tossing something away (Figure 1e).
The grey square in Figure 1 represents a graphic button on the touchscreen. Participants started their gestures by pressing the button to trigger the vibrotactile stimuli and continued their gestures until the vibrotactile stimuli stopped.

[Figure 1. (a) The gesture for "poke", "tap", and "tickle"; (b) the gesture for "press"; (c) the gesture for "nuzzle", "rub", "rock", "shake", and "tremble"; (d) the gesture for "pull"; (e) the gesture for "toss". These gesture figures were selected from [9].]

Gestures and Displayed Vibrotactile Stimuli
To make a sender's gesture match the displayed vibrotactile stimuli, we explain the matching based on the following gesture groups: 1. Gestures in the SFA group, such as poke, press, tap, and tickle, had a short duration [9]. The vibrotactile stimuli of these MST were also very short [7] and finished when participants finished the quick pressing process, so users' gestures could easily catch the displayed vibrotactile stimuli. 2. For repetitive gestures such as nuzzle, rub, rock, shake, and tremble, it seemed not easy to catch the vibrotactile stimuli, since the durations of these gestures were long and the vibrotactile stimuli varied (Table 1). The rhythms of the vibrotactile stimuli in this group were extracted from our previous study on user-defined gestures for MST [9]: we explored how people performed these repetitive gestures and recorded the average numbers of repeats, durations, and frequencies [9], and we designed the vibrotactile stimuli based on those user-defined gestures [7]. Because these parameters came from users, it was not difficult for users to understand the numbers of repeats, durations, and frequencies of the vibrotactile stimuli. In the user study, we told participants how to perform the repetitive gestures and asked them to catch the vibrotactile stimuli. Participants were allowed to feel the vibrotactile stimuli in this group several times before filling in the questionnaire. 3. Gestures in the SOT group included pull and toss. "Pull" had a long duration [9], and the vibrotactile stimuli of "pull" were long and constant [7]. Participants touched the button to trigger the vibrotactile stimuli, and their fingers left the touchscreen when the stimuli stopped, so it was easy for participants to catch the displayed vibrotactile stimuli for "pull". For "toss", the duration was not long [9].
If participants touched the button to trigger the vibrotactile stimuli and performed the gestures immediately, they could catch the displayed vibrotactile stimuli. The duration and the changing trend of the vibrotactile stimuli were also set based on user-defined gestures [9], with the related parameters averaged from values collected from users, so it was easy for participants to understand the duration and changing trend of the vibrotactile stimuli. In the user study, participants were also allowed to feel the vibrotactile stimuli several times before they filled in the questionnaire.

Experiment Setup
We used the same experimental smartphone setup as the one used in [7,9,23]. The smartphone had a wideband linear resonant actuator (LRA) motor to display vibrotactile stimuli [23]. The detailed technical information about this smartphone is in [23]. Figure 2 shows the test interface; the grey squares in Figure 2 represent the graphic buttons.

Participants
Twenty participants (eight males and twelve females) between the ages of 23 and 36 participated in this study. Participants were recruited from the local university. All participants had experience with smartphones and online social communication, and none had physical constraints on sensing touch [7,23]. Noise-canceling headphones were provided to block out the sound effects of the vibrotactile stimuli [7,23]. Figure 3 shows the test environment.

Procedure
We introduced the test and handed out the consent forms and questionnaires before the experiment.
Table 2 shows the variables and test conditions. The descriptions of the two conditions were as follows: 1. Participants were told to act as a receiver to feel the MST with vibrotactile stimuli. In this condition, participants only pressed the button on the touchscreen once. 2. Participants were told to act as a sender to feel the MST with vibrotactile stimuli. In this condition, participants pressed the button with gestures and were asked to use the gestures described above (Section 5). Participants were allowed to try the vibrotactile stimuli several times before filling in the questionnaire. As a receiver or a sender (within-group), participants were asked to feel and rate the LUMST with vibrotactile stimuli on a 7-point Likert scale from 1 (very unlikely) to 7 (very likely). We delivered randomized orders of MST to participants on the questionnaire before testing, and participants followed the order of MST they received and felt them one by one. We obtained the randomized orders with the random function in Python [7,9,23].
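As a concrete illustration of the order randomization mentioned above, the following is a minimal sketch of how per-participant MST orders could be generated with Python's random module; the MST names are taken from the movement groups above, and the function name and seeding scheme are illustrative rather than the scripts actually used in [7,9,23].

```python
import random

# MST drawn from the three movement form groups described above
MST = ["poke", "press", "tap", "tickle",             # SFA group
       "nuzzle", "rub", "rock", "shake", "tremble",  # RPT group
       "pull", "toss"]                               # SOT group

def randomized_order(seed=None):
    """Return one randomized presentation order of all MST."""
    rng = random.Random(seed)
    order = MST[:]          # copy so the master list stays unchanged
    rng.shuffle(order)      # in-place shuffle
    return order

# Example: one order per participant, reproducible via the participant id
for participant_id in range(1, 21):
    print(participant_id, randomized_order(seed=participant_id))
```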
Results
We used SPSS 23.0 to conduct a one-way ANOVA. The descriptive data are in Table 3, and more detailed results on receivers can be seen in [7]. Table 4 shows the comparison data between the senders and the receivers. Our experiment did not find significant differences in perceiving vibrotactile stimuli between senders and receivers for most MST (p > 0.05). These results suggest that participants had no different understanding of the MST with vibrotactile stimuli, whether they acted as a sender actively perceiving with gestures or as a receiver passively perceiving without gestures. For actively perceived MST such as "poke", "press", "tap", and "tickle", participants' different rhythms, speeds, or numbers of repeats might have affected the results. We provided different types of vibrotactile stimuli for these MST, and Table 4 shows no significant differences for these vibrotactile stimuli (p > 0.05), which means that a user's active behavior, within a certain range, did not lead to significant perceptual changes compared with passive perceiving of the vibrotactile stimuli. The results of this study can guide application design: researchers or designers could apply the same vibrotactile stimuli for MST on both senders' and receivers' phones, and the same vibrotactile stimuli could be used for both active perceiving with gestures and passive perceiving without gestures.
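The statistical comparison itself was run in SPSS, but an equivalent one-way ANOVA per MST can be sketched with standard Python tooling. The snippet below is only an illustrative equivalent under the assumption that sender and receiver Likert ratings are available as two arrays per MST; the variable names and example values are hypothetical, not the study's data.

```python
from scipy import stats

# Hypothetical 7-point Likert ratings for one MST ("tap"),
# one value per participant in each role (sender vs. receiver).
sender_ratings   = [6, 5, 7, 6, 6, 5, 7, 6, 5, 6, 7, 6, 5, 6, 6, 7, 5, 6, 6, 5]
receiver_ratings = [6, 6, 5, 7, 6, 5, 6, 6, 7, 5, 6, 6, 5, 7, 6, 6, 5, 6, 7, 6]

# One-way ANOVA with two groups (an F-test of sender vs. receiver ratings)
f_stat, p_value = stats.f_oneway(sender_ratings, receiver_ratings)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")

# Interpretation used in the study: p > 0.05 -> no significant difference
if p_value > 0.05:
    print("No significant sender/receiver difference for this MST.")
```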
Considering Specific Demands and Context in Future Designs
Based on existing studies, the effects of active and passive perceiving differ across contexts. For example, no significant differences were found in perceived roughness [17,18], active perception had shorter latencies in the exploration of virtual three-dimensional geometric shapes [19], and active movements led to a significant decrease in tactile sensibility [20]. This study focused on a different context, i.e., MST with vibrotactile stimuli, and considered more gestures, such as pressing the touchscreen with different numbers of repeats, rhythms, or speeds. This study showed no significant differences between the two conditions in the LUMST with vibrotactile stimuli. The results indicate that senders and receivers could understand the vibrotactile stimuli we designed for MST under the two conditions, so the same vibrotactile stimuli could be applied for senders and receivers in the future. However, the effects may vary in different contexts, and we therefore need to consider specific demands and context in future designs. Although the LUMST showed no differences between the two conditions for the selected input gestures, other factors may affect these two conditions, such as preference, complexity [8], or the feeling of control [21]. These factors are also important in applications. For future applications, we should consider the interface demands in a specific context and try to make MST with vibrotactile stimuli more suitable for the interface and the user's needs.

Multimodal MST for Online Social Applications
Haptic feedback is a compensation channel for information transmission [24], and it is not easy for users to recognize vibrotactile stimuli when no other information is available [25]. We told participants what the vibrotactile stimuli represented in the user study [7], and an application should likewise indicate what the vibrotactile stimuli represent. This study observed no significant differences in perception when participants actively or passively perceived the vibrotactile stimuli for most MST, so researchers or designers could apply the same vibrotactile stimuli for both senders and receivers. However, users may not be able to tell the difference when similar vibrotactile stimuli are delivered: the vibrotactile stimuli within each movement form group have similar forms [7]; for example, the vibrotactile stimuli of "poke" and "press" are both short pulses. Multimodal MST could help to differentiate them, for instance with visual stickers or emoji for "poke" and "press". In addition, if multi-point gestures need to be considered in the future, the visual channel should be applied to compensate for the original complex gestures [26] so that MST with vibrotactile stimuli can be better understood.

Limitations
The vibrotactile stimuli in this study are fixed, and in the RPT group the rhythm of the vibrotactile stimuli is also fixed. When participants tried to imagine themselves as senders and performed repetitive gestures to feel the vibrotactile stimuli, they sometimes could not catch up with the rhythm, and this first impression of the rhythm may affect the results of the LUMST. In the user study, participants could try the vibrotactile stimuli several times to catch the rhythm better; in the future, real-time transmission would help to solve this problem. Another limitation is that we did not consider MST with multi-point touch. Although there are many challenges in designing user experiences for multi-touch interfaces [11], users may have different insights into MST with multi-point touch.
In the future, considering MST with multi-point touch may help to thoroughly understand active and passive MST with vibrotactile stimuli.

Conclusions and Future Work
In this study, we compared senders with gestures and receivers without gestures to identify differences in perceiving MST with vibrotactile stimuli. We introduced vibrotactile stimuli for selected MST based on [7]. In [7], we mainly provided the design process of MST with vibrotactile stimuli and tested whether the designed vibrotactile stimuli could be understood by receivers; however, we did not discuss whether senders could understand them. The way a sender actively perceives may affect the LUMST, and we therefore conducted a user study to check and compare the LUMST. This study showed that when participants acted as senders and receivers, they did not have a different understanding of the MST with vibrotactile stimuli, whether actively perceiving with gestures or passively perceiving without gestures. Future studies should focus on applications, in which receivers and senders are considered together. This study connects the previous study [7] with future applications, and, combined with [7], its results provide implications for future application designs: researchers or designers could apply the same vibrotactile stimuli on both senders' and receivers' phones. In the future, we plan to consider the specific context and multimodal interfaces when applying MST with vibrotactile stimuli, and, for a better understanding, we also plan to consider multimodal stimuli when designing MST.
9,207.8
2022-01-27T00:00:00.000
[ "Psychology", "Computer Science" ]
Research on the Preparation of Graphene Quantum Dots/SBS Composite-Modified Asphalt and Its Application Performance: This study aims to prepare a graphene quantum dots (GQDs)/styrene-butadiene segmented copolymer composite (GQDs/SBS) as an asphalt modifier using the Pickering emulsion polymerization method. The physicochemical properties of the GQDs/SBS modifier and their effects on asphalt modification were investigated. In addition, the GQDs/SBS modifier was compared with the pure SBS modifier. The results demonstrated that GQDs could be evenly dispersed into the SBS phase to form a uniform composite. Adding GQDs brings more oxygen-containing functional groups into the GQDs/SBS modifier, thus strengthening its polarity and making it disperse into the asphalt better. Compared with the SBS modifier, the GQDs/SBS modifier presents better thermostability. Moreover, GQDs/SBS composite-modified asphalt achieves better high-temperature performance than SBS-modified asphalt, which is manifested by the increased softening points, complex shear modulus and rutting factors. However, the low-temperature performance decreases, which is manifested by reductions in cone penetration, viscosity and ductility as well as an increased ratio between creep stiffness (S) and creep rate (m), that is, S/m. Furthermore, adding GQDs can improve the high-temperature performance of the asphalt mixture, while it influences the low-temperature performance and water stability only slightly. The GQDs/SBS modifier also has the advantages of a simple preparation technique, low cost and environmental friendliness. Therefore, it is a beneficial choice as a modifier for asphalt cementing materials.

Introduction
Among all pavement forms, asphalt pavement accounts for a very high proportion of road engineering around the world due to its remarkable advantages such as high riding comfort and convenient maintenance. Recently, transportation industries around the world have been booming. Increasing traffic loads, especially the growth in heavy-loaded and overloaded vehicles, intensify damage to existing roads. As a result, asphalt pavements may develop different types of early distress soon after opening to traffic, such as ruts, pavement subsidence, upheaval, etc. [1,2]. These distresses significantly affect the performance and service life of pavements. Improving the durability and prolonging the service life of asphalt pavement is therefore a key problem that has to be solved in the road field at present. Furthermore, developers often choose superior-performance materials for asphalt pavements, since excellent pavement structural performance is closely related to material performance. In particular, it is very important to choose a good-performance asphalt binder, because its quality is directly related to the performance of the asphalt pavement [3]. To obtain ideal performances from asphalt under various climatic and traffic conditions, modifiers such as SBS and carbon nanomaterials have been widely studied. Nevertheless, carbon nanomaterials must overcome considerable surface tension in order to disperse into asphalt, since they have a great specific surface area [4]. As a result, the dispersion problem of carbon nanomaterials in SBS-modified asphalt is a key constraint against their development at present. With abundant carboxyl and hydroxyl functional groups on the surface, GQDs show some surface activity, and they can be used as a nano-surfactant to prepare Pickering emulsions [28]. Further, the polymerization of Pickering emulsion is expected to be a new choice to prepare GQDs/SBS composite-modified materials [4,29].
This study aims to discuss the application of GQDs as an asphalt modifier. For this purpose, asphalt-based GQDs and SBS were used as the main raw materials, and a new GQDs/SBS composite material prepared by the Pickering emulsion polymerization method was used as the asphalt modifier. A series of chemical analyses were used to characterize the functional group structures and thermostability of the GQDs/SBS and SBS modifiers. On this basis, the conventional physical properties and rheological properties of the GQDs/SBS composite-modified asphalt and SBS-modified asphalt were compared. Moreover, the pavement performances of asphalt mixtures prepared using GQDs/SBS composite-modified asphalt and SBS-modified asphalt as binders were characterized.

Materials
The GQDs/SBS modifier was prepared using asphalt-based GQDs and linear SBS as the raw materials. Modified asphalt was prepared using Qinhuangdao AH-70 asphalt (PetroChina Fuel Asphalt Co., Ltd., Qinhuangdao, China) and the GQDs/SBS modifier as raw materials. The conventional performances and four-component compositions of Qinhuangdao AH-70 asphalt are listed in Table 1. SBS, a white fluffy rod-like solid, was provided by Yueyang Baling Petrochemical Corp. (Sinopec, Yueyang, China); it is a linear molecule with an average molecular weight of 100,000 g/mol. The deoiled asphalt (DOA, asphaltene content 20%) from the SINOPEC Jiujiang company, Jiujiang, China, was used as the raw material, and GQDs with an asphaltene polycyclic aromatic hydrocarbon nucleus were prepared by nitric acid oxidation. The specific manufacturing procedure was as follows: 10 g of DOA powder was added into a 250 mL flask, and 150 mL of 65% concentrated nitric acid was added slowly under continuous stirring. Under strong stirring, the temperature increased gradually, and the mixture was heated under reflux. The reaction lasted for 4 h at 90 °C. After the reaction, the mixture was cooled to room temperature and then diluted with distilled water. It was filtered directly through a 0.2 µm Millipore filter rather than neutralized with sodium hydroxide. The residual nitric acid in the filtrate was eliminated through reduced-pressure distillation, and the product was then dried, thus obtaining nitric acid-oxidized GQDs.

Preparation of GQDs/SBS Modifier
The GQDs/SBS modifier was prepared by the Pickering emulsion polymerization method. The preparation process is shown in Figure 1. Firstly, a certain mass of GQDs was dispersed into pure water, and a 5% (mass concentration) GQDs solution was obtained through ultrasonic dispersion for 30 min. Meanwhile, SBS particles were dissolved in methylbenzene (toluene), and a 20 wt% SBS methylbenzene solution was prepared. The SBS methylbenzene solution was added to the GQDs solution at a mass ratio of 1:1, and a Pickering emulsion was acquired through 5 min of high-speed shearing using a BME shearing machine at 4000 r/min. The Pickering emulsion was poured into a clean glass tray with a flat bottom. Finally, the tray was placed in a vacuum drying box at 80 °C for 12 h. The Pickering emulsion underwent auto-polymerization under these conditions, thus yielding the GQDs/SBS modifier. It can be seen from Figure 2 that the GQDs/SBS modifier is a black solid at room temperature.
FT-IR Spectral Analysis
The functional groups and material structures of the SBS modifier and the GQDs/SBS modifier were characterized using Fourier transform infrared (FT-IR) spectroscopy, and their chemical compositions were analyzed. A Nicolet iS5 infrared spectrometer (Thermo Scientific, Waltham, MA, USA) was used in the experiment. All tests were performed at room temperature. The resolution was 4 cm−1, the scanning frequency was 32 times/min, and the spectral wavenumber ranged between 4000 and 500 cm−1. The samples were prepared by casting a film onto a potassium bromide (KBr) window from a 5% by weight solution in carbon tetrachloride (CCl4).
Thermogravimetric Analysis (TGA)
A TGA-100 A thermal gravimetric analyzer (Shanghai All Instrument Equipment Co., Ltd., Shanghai, China) was applied in the experiment for the TGA of the SBS modifier and the GQDs/SBS modifier. Under a nitrogen atmosphere, samples of about 7 mg were heated from 30 to 600 °C at a constant heating rate of 10 °C/min. The thermostability of the two modifiers was evaluated by the TG and DTG curves. To assure accuracy and decrease errors, all experiments were performed three times.

Preparation of Modified Asphalt
This study prepared GQDs/SBS composite-modified asphalt and SBS-modified asphalt (control group) using the melting-thawing mixing method. The melted AH-70 asphalt was collected and poured into a cylindrical container, which was then heated to 180 °C. Subsequently, 3 wt% (by asphalt mass) compatilizer (extract oil) and 4 wt% modifier were added successively. Next, the mixture was processed by high-speed shearing for 30 min at 4000 r/min. Later, the temperature was lowered to 170 °C and the stirring rate to 750 r/min; 0.25 wt% stabilizer was added, and the mixture was stirred continuously for 3 h. After full development, modified asphalt with stable performance was obtained.

Rheological Test
The rheological properties of the modified asphalt samples were characterized using a dynamic shear rheometer (DSR, TA, New Castle, DE, USA) with parallel plates of 8 mm and 25 mm diameter. Firstly, the linear viscoelastic interval of the samples was determined through a stress and strain scan. Secondly, a small-angle oscillatory shear test was carried out within the determined linear viscoelastic interval. Isothermal frequency scans (0.1-50 rad/s) were acquired at 30, 45, 60, and 75 °C. The specific operating process was as follows: first, put about 0.1 g of sample on the lower plate of the parallel plates; second, install the parallel plates on the rheometer and set the initial temperature; after the sample has softened, lower the upper plate to squeeze the sample; finally, set the gap between the parallel plates to 1 mm (25 mm plate) or 1.5 mm (8 mm plate). The temperature scan ranged from 58 to 95 °C, with a heating rate of 1 °C/min and a frequency of 10 rad/s. The multi-stress repeated creep test was carried out under 100 and 3200 Pa; the number of cycles at each stress was set to 10, and each cycle had 1 s of stress loading and 9 s of relaxation. A bending beam rheometer (BBR, ATS, Butler, PA, USA) was used to measure the creep properties of the asphalt at low temperatures. The combination of BBR and DSR can present relatively comprehensive rheological information on asphalt over the service temperature range. The BBR uses the engineering principle of a small beam to characterize the cracking tendency of asphalt upon a temperature drop, from which two indexes are obtained: the creep stiffness (S) and the variation rate of stiffness with time (m). To avoid cracking of the asphalt at low temperatures, the Performance Grade (PG) classification norms require that S at 60 s of loading in the BBR test should be no higher than 300 MPa and that the m value should be no smaller than 0.3. The temperature of the BBR test ranged between −18 and −24 °C.
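As a small worked example of these acceptance criteria, the sketch below checks whether a BBR measurement at 60 s loading satisfies the PG low-temperature requirements quoted above (S ≤ 300 MPa and m ≥ 0.3); the numeric values are hypothetical, not measurements from this study.

```python
def passes_pg_low_temp(stiffness_mpa: float, m_value: float) -> bool:
    """PG low-temperature criteria at 60 s loading: S <= 300 MPa and m >= 0.3."""
    return stiffness_mpa <= 300.0 and m_value >= 0.3

# Hypothetical BBR results at the two test temperatures
bbr_results = {
    -18: {"S": 185.0, "m": 0.34},
    -24: {"S": 410.0, "m": 0.27},
}

for temp_c, r in bbr_results.items():
    ok = passes_pg_low_temp(r["S"], r["m"])
    print(f"{temp_c} C: S = {r['S']} MPa, m = {r['m']} -> {'pass' if ok else 'fail'}")
```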
In this study, the viscoelasticity within a wide frequency and temperature range was obtained by the time-temperature equivalence principle. Such viscoelasticity, spanning a very large range in orders of magnitude, can hardly be measured directly. The time-temperature equivalence principle states that the influence of extended time (or decreased frequency) on the mechanical properties of materials is equivalent to that of a temperature rise. Under conditions meeting the time-temperature equivalence principle, the viscoelastic parameters measured in experiments can be used to synthesize master curves using translocation factors.
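To illustrate how isothermal measurements can be combined into one master curve using such translocation (shift) factors, the following is a minimal sketch that shifts each isotherm onto a 30 °C reference in reduced frequency; the moduli and shift-factor values are placeholders, not the data measured in this study.

```python
import numpy as np

# Placeholder isothermal frequency sweeps: angular frequency (rad/s) -> |G*| (Pa)
omega = np.logspace(-1, np.log10(50), 10)   # 0.1-50 rad/s, as in the DSR tests
sweeps = {
    30: 1e6 * (omega / 50) ** 0.6,          # assumed data, reference temperature
    45: 2e5 * (omega / 50) ** 0.7,
    60: 4e4 * (omega / 50) ** 0.8,
    75: 8e3 * (omega / 50) ** 0.9,
}

# Assumed horizontal translocation (shift) factors aT, reference T = 30 °C
shift_factors = {30: 1.0, 45: 2.1e-2, 60: 6.5e-4, 75: 3.0e-5}

# Build the master curve: move each isotherm to reduced frequency omega * aT
reduced_freq, modulus = [], []
for temp, g_star in sweeps.items():
    reduced_freq.extend(omega * shift_factors[temp])
    modulus.extend(g_star)

order = np.argsort(reduced_freq)
master_curve = np.column_stack((np.array(reduced_freq)[order], np.array(modulus)[order]))
print(master_curve[:5])   # lowest reduced frequencies correspond to the highest temperatures
```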
Performance Characterization of Asphalt Mixture
The SBS-modified asphalt (control group) and the prepared GQDs/SBS composite-modified asphalt were used as binders, respectively. The AC-20 asphalt mixture, which is commonly used in asphalt pavement surface courses, was chosen, and the mixture was designed by the Marshall design method according to China's Construction Technological Norms on Highway Asphalt Pavement (JTG F40-2004). The grading curve is shown in Figure 3. Combined with engineering experience, the optimal oil-stone ratio was determined to be 4.5 based on the target voidage of 4.0%. In this study, all evaluated asphalt mixtures had the same grading and optimal asphalt content. The two asphalt mixtures were molded into specimens according to the Test Regulations on Highway Engineering and Asphalt Mixture (JTG E 20-2011) of China. The properties of the asphalt mixtures, including the high-temperature performance, low-temperature performance and water stability, were analyzed. Since the grading and asphalt consumption of the two asphalt mixtures were the same, their volume indexes were similar, and their differences in performance indexes were mainly determined by the different performances of the asphalt cements.

FTIR Functional Group Analysis
The FT-IR spectra of the SBS modifier and the GQDs/SBS composite modifier are shown in Figure 4. The IR region (wavenumber from 4000 to 400 cm−1) was divided into a functional group zone (4000 to 1330 cm−1) and a fingerprint zone (1330 to 400 cm−1) [30]. It can be seen from Figure 4 that SBS shows obvious methylene C-H asymmetric and symmetric stretching vibration peaks at 2917 and 2848 cm−1, as well as multiple absorption peaks between 3100 and 2950 cm−1, which are stretching vibration absorption peaks of unsaturated hydrocarbons. The absorption peaks occurring simultaneously at 1630, 1600, 1560, and 1422 cm−1 correspond to the stretching vibration of the aromatic ring skeleton (-CH2-). The vibrations within 1390~1000 cm−1 are the stretching vibration of the -C-O bond and the single-bond skeleton vibration of C-C. In addition, the absorption peaks near 697, 730, and 749 cm−1 are caused by the vibration absorption of monosubstituted benzene. The absorption peak near 972 cm−1 is caused by the twisting vibration of the C=C bond, while the absorption peak near 915 cm−1 is the infrared characteristic absorption peak of polybutadiene, caused by the out-of-plane swinging vibration of =CH2. When the petroleum asphalt-based GQDs are added, the peaks of the GQDs/SBS composite modifier at these positions are all strengthened. Moreover, a wide absorption peak occurs at 3307 cm−1, which is a combined peak of the hydroxyl and amidogen stretching vibrations of the petroleum asphalt-based GQDs. Meanwhile, there are obvious shoulder peaks at 1650-1580 cm−1, which are stretching vibration peaks of the benzene ring. This reflects that GQDs and SBS undergo polymerization reactions to form stable covalent bonds. These covalent bonds are enough to avoid the phase separation that might occur during the simple physical mixing manufacturing of nanocomposites. In addition, the modified asphalt contains more oxygen-containing functional groups (e.g., -C=O and -C-O). These functional groups mainly come from GQDs, and the increased oxygen content can also improve the polarity of the GQDs/SBS composite modifier, thus increasing its compatibility with asphalt [31].
TGA
The thermostability of the modifier is an important property that has to be considered when analyzing the structural characteristics of asphalt binders. In this study, the thermal stability of the GQDs/SBS composite modifier and the SBS modifier was investigated by TGA. It can be seen from Figure 5 and Table 2 that the TGA curves of the GQDs/SBS composite modifier and the SBS modifier present the same trend, and both experienced two major stages of mass loss. However, the thermodynamic behaviors of the GQDs/SBS composite modifier and the SBS modifier are significantly different. Both modifiers enter the first stage of mass loss before 340 °C. In this stage, mass loss is mainly attributed to the volatilization of crystal water adsorbed onto the sample surface as well as the decomposition of some oxygen-containing functional groups in the molecules (-OH and -COOH). Since the GQDs surface has a lot of oxygen-containing functional groups, the mass loss rate of the GQDs/SBS composite modifier is far higher than that of the SBS modifier in the first stage.
The second stage of mass loss occurs in the temperature range of 340~490 °C. The mass loss of the modifiers in this stage is mainly attributed to the decomposition of SBS into small molecules and their volatilization; this is the major stage of mass loss. It can be seen from the TG curves that the initial decomposition temperatures of the GQDs/SBS composite modifier and the SBS modifier are both at about 416 °C (the tangent initial point of the TG curve is about 416 °C), and the pyrolysis termination temperature is about 480 °C. The pyrolysis termination temperature of the GQDs/SBS composite modifier (478.8 °C) is slightly lower than that of the SBS modifier (479.9 °C), showing a very small difference. After finishing the pyrolysis, the residual mass of the GQDs/SBS composite modifier is 1.78%, and the mass change is 98.22%. The residual mass of the SBS modifier is only 0.05%, and the mass change is 99.95%. This demonstrates that within this temperature range, the SBS modifier loses weight almost completely under the N2 atmosphere; it is decomposed into small molecules and then volatilized without producing any residual carbon. However, the GQDs/SBS composite modifier is decomposed incompletely under the N2 atmosphere and produces some residual carbon. This is because the carbon nucleus is the major structural unit of GQDs in the GQDs/SBS composite modifier. After the surface oxygen-containing functional groups are lost in the first stage of mass loss, the residual carbon nucleus has very good thermostability and will not be decomposed again, thus resulting in a higher residual mass. The maximum mass loss rate points (DTG peak values) of the GQDs/SBS composite modifier and the SBS modifier are both at 460 °C. The mass-loss rate of the SBS modifier (17.08%/min) is 0.85%/min higher than that of the GQDs/SBS composite modifier (16.23%/min). To sum up, the GQDs/SBS composite modifier has better thermostability than the SBS modifier; in other words, adding GQDs improves the thermostability of the SBS modifier.
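To make the DTG quantities quoted above concrete, the following is a minimal sketch of how a DTG curve and its peak mass-loss rate could be derived from TG data (remaining mass versus temperature at a constant heating rate); the mass curve here is a synthetic placeholder, not the measured data of this study.

```python
import numpy as np

# Placeholder TG data: temperature (°C) and remaining mass (%)
temperature = np.linspace(30, 600, 572)                               # ~1 °C steps
mass_pct = 100 - 99.95 / (1 + np.exp(-(temperature - 460) / 15))      # synthetic decay curve

heating_rate = 10.0  # °C/min, as in the experiment

# DTG: mass-loss rate in %/min = -(d(mass)/dT) * (dT/dt)
dtg = -np.gradient(mass_pct, temperature) * heating_rate

peak_idx = np.argmax(dtg)
print(f"Peak mass-loss rate: {dtg[peak_idx]:.2f} %/min at {temperature[peak_idx]:.0f} C")
print(f"Residual mass at 600 C: {mass_pct[-1]:.2f} %")
```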
Conventional Physical Properties of Modified Asphalt
The physical properties of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt are listed in Table 3. Clearly, the cone penetration and ductility of the GQDs/SBS composite-modified asphalt decrease, while the softening point increases, compared with those of the SBS-modified asphalt. This implies that adding GQDs can improve the high-temperature performance of SBS-modified asphalt but decreases the low-temperature performance to some extent. In addition, temperature influences the high-temperature flow characteristics of asphalt. The flow characteristics of different samples show different degrees of sensitivity to temperature changes; in other words, asphalts have different temperature sensitivities, and there are high-temperature and low-temperature zones. The temperature sensitivity of the high-temperature zone is closely related to the construction of the asphalt mixture, the pumping of asphalt and other construction characteristics. In this study, a Brookfield rotary viscosimeter with the 27# rotor was used to measure the viscosity of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt within the temperature range of 110-175 °C. The variation curves of viscosity with temperature are shown in Figure 6a. With the increase in temperature, the viscosity values of both the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt decline sharply in the beginning and then become stable. This is because modified asphalt changes gradually from a non-Newtonian fluid to a Newtonian fluid at high temperatures. At the same temperature, the viscosity of the GQDs/SBS composite-modified asphalt is lower than that of the SBS-modified asphalt, and with the increase in temperature the differences between the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt decrease. This reflects that the chemical crosslinking between GQDs and SBS is disadvantageous to the strength of the polymers.
The Saal model (Equation (1)) proposed in ASTM D2493 was further applied to process the viscosity-temperature curves; it can characterize the temperature sensitivities of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt.

lg(lg(η × 1000)) = n + m·lg(T + 273.13)    (1)

where m refers to the slope of the regression line; n denotes the intercept of the regression line on the lg(lg(η × 1000)) axis; η is the viscosity (Pa·s); and T is the temperature (°C). The Saal fitting curves of the viscosity-temperature data of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt are shown in Figure 6b, and the parameters of the corresponding Saal model are listed in Table 4. Moreover, m in the Saal model is defined as the viscosity-temperature sensitivity (VTS). A smaller absolute value of VTS indicates that the viscosity changes more slowly with temperature, i.e., a better temperature sensitivity. It can be seen from Figure 6 and Table 4 that the absolute value of the VTS of the GQDs/SBS composite-modified asphalt is higher than that of the SBS-modified asphalt, indicating that adding GQDs is disadvantageous for the temperature sensitivity of SBS-modified asphalt.
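The Saal regression in Equation (1) is a straight-line fit in double-logarithmic coordinates, so the VTS (m) and intercept (n) can be obtained by ordinary least squares. The sketch below is a minimal illustration under assumed viscosity-temperature data; the numbers are placeholders, not the measured values behind Table 4.

```python
import numpy as np

# Placeholder rotational viscosity data (Pa·s) between 110 and 175 °C
temperature_c = np.array([110, 120, 135, 150, 160, 175], dtype=float)
viscosity_pa_s = np.array([6.5, 4.2, 2.3, 1.3, 0.95, 0.60])

# Saal transform: lg(lg(eta * 1000)) versus lg(T + 273.13)
x = np.log10(temperature_c + 273.13)
y = np.log10(np.log10(viscosity_pa_s * 1000.0))

# Linear regression y = n + m * x, where the slope m is the VTS
m, n = np.polyfit(x, y, 1)
print(f"VTS (m) = {m:.3f}, intercept (n) = {n:.3f}")
```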
Rheological Properties of Modified Asphalt

The usability of asphalt pavement is determined, to a very large extent, by the viscoelastic properties of the modified asphalt binder. The linear viscoelasticity of modified asphalt is very sensitive to the motion and interaction of the polymer molecular chains. Moreover, the complexity of different polymer modification systems may influence the internal structure of modified asphalt, thereby influencing its rheological characteristics. Rheological parameters within the linear viscoelastic interval are independent of changes in stress and strain and are related only to the properties of the materials [32]. Therefore, linear viscoelasticity and dynamic rheological tests are very effective methods to elaborate on the influences of modifiers on the performance of modified asphalt and to study the influence of polymers on the viscoelasticity of asphalt.

Frequency Scanning under Middle and High Temperature

The master curve of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt at a reference temperature of 30 °C is shown in Figure 7. This curve was obtained by shifting the frequency-sweep curves measured at 30, 45, 60 and 75 °C. It can be seen from Figure 7 that, over the whole frequency range and at the same frequency, the complex modulus (G*) of the GQDs/SBS composite-modified asphalt is higher than that of the SBS-modified asphalt. Moreover, the master curves of the two binders differ significantly in the low-frequency (low-ω) zone, and this difference decreases with increasing frequency. According to the time-temperature equivalence principle, the low-frequency zone corresponds to the high-temperature zone. Hence, the GQDs/SBS composite-modified asphalt has better high-temperature performance than the SBS-modified asphalt; in other words, adding GQDs improves the rutting resistance of the SBS-modified asphalt. Additionally, the G* values of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt converge in the high-frequency (high-ω) zone, indicating that GQDs have little influence on the viscoelastic behaviour of the SBS-modified asphalt in this zone. The master curves in Figure 7a also show that the time-temperature equivalence principle is highly applicable to both the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt. The variations of the shift factor with temperature are shown in Figure 7b; the shift factors of the two binders are clearly different. The variation of the shift factor with temperature was fitted using an Arrhenius-like equation (Figure 8), so that the differences between samples can be distinguished quantitatively through the activation energy, which is related to the temperature sensitivity of the material. This further proves that adding GQDs improves the high-temperature performance of the SBS-modified asphalt.
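As a sketch of how an apparent activation energy can be extracted from the temperature dependence of the shift factors (the Arrhenius-like fit mentioned above), the following Python snippet regresses ln(aT) against 1/T. The shift-factor values are hypothetical placeholders, not those measured in this study.

import numpy as np

R = 8.314  # gas constant, J/(mol·K)

# Hypothetical shift factors a_T from time-temperature superposition (reference 30 °C).
T_C = np.array([30.0, 45.0, 60.0, 75.0])         # test temperatures, °C
a_T = np.array([1.0, 8.0e-2, 9.0e-3, 1.5e-3])    # placeholder shift factors

T_K = T_C + 273.15
Tref = 30.0 + 273.15

# Arrhenius-like form: ln(a_T) = (Ea/R) * (1/T - 1/Tref)
x = 1.0 / T_K - 1.0 / Tref
slope, _ = np.polyfit(x, np.log(a_T), 1)
Ea = slope * R
print(f"Apparent activation energy Ea = {Ea/1000:.1f} kJ/mol")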
Temperature Scanning

In this study, the asphalt samples were scanned over a wide temperature range (58-95 °C). The variations of the storage modulus (G′) and loss modulus (G″) with temperature are shown in Figure 9. Both G′ and G″ decrease dramatically with increasing temperature, at rates that differ between the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt, before finally levelling off. Over this wide temperature range, the reduction rates of G′ and G″ of the GQDs/SBS composite-modified asphalt with rising temperature are lower than those of the SBS-modified asphalt. This reflects that, compared with the SBS-modified asphalt, the GQDs/SBS composite-modified asphalt has better (lower) temperature sensitivity over a wide range.

The rutting resistance of asphalt can be characterized by the rutting factor G*/sinδ and the failure temperature, defined as the temperature at which G*/sinδ = 1.0 kPa. The higher the G*/sinδ and the failure temperature, the better the high-temperature stability of the asphalt. It can be seen from Figure 10 that, in the middle- and high-temperature intervals, the asphalt is mainly in the viscous-flow state, and the elasticity and strength of the system are provided by the polymers. The variation of G*/sinδ with temperature for the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt is consistent with the variation of G′ and G″. Their failure temperatures at G*/sinδ = 1.0 kPa are 84.82 and 86.20 °C, respectively. This further demonstrates that GQDs stiffen the SBS-modified asphalt, giving it better mechanical properties and greater resistance to deformation.
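As an illustrative sketch of how the failure temperature can be obtained from a temperature sweep, the following Python snippet interpolates the temperature at which G*/sinδ drops to 1.0 kPa. The complex modulus and phase angle values are placeholders rather than the measured data.

import numpy as np

# Hypothetical temperature-sweep data (placeholders).
T_C = np.array([58, 64, 70, 76, 82, 88, 94], dtype=float)                      # °C
Gstar = np.array([22e3, 11e3, 5.5e3, 2.8e3, 1.4e3, 0.7e3, 0.35e3])             # complex modulus, Pa
delta = np.deg2rad(np.array([72, 74, 76, 78, 80, 82, 84], dtype=float))        # phase angle, rad

rut = Gstar / np.sin(delta)            # rutting factor G*/sin(delta), Pa

# Failure temperature: where G*/sin(delta) = 1.0 kPa (interpolated on a log scale).
fail_T = np.interp(np.log10(1000.0), np.log10(rut[::-1]), T_C[::-1])
print(f"Failure temperature ≈ {fail_T:.1f} °C")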
MSCR

After the repeated action of vehicle loads over a long period, asphalt pavement may develop shear creep deformation and form ruts. The multiple stress creep recovery (MSCR) test has been used in recent years as an index to evaluate the high-temperature performance of modified asphalt. The MSCR test applies 10 loading cycles to the sample; in each cycle, the load is applied for 1 s and then removed for 9 s to allow recovery. In this study, MSCR tests were carried out at 60 °C under two stress levels (100 Pa and 3200 Pa). From the MSCR test, the recovery rate (R) and the non-recoverable creep compliance (Jnr) can be calculated from the recoverable and unrecoverable strains, respectively. The strain responses of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt over 10 cycles at 60 °C and the two stress levels are shown in Figure 11. Within one creep-recovery cycle, the strain of the GQDs/SBS composite-modified asphalt at the end of the creep stage and at the end of the recovery stage is smaller than that of the SBS-modified asphalt, which is attributed to the added GQDs.

For a quantitative comparison of the high-temperature performance of the two binders, the R and Jnr values at 60 °C under the two stress levels (100 Pa and 3200 Pa) are shown in Figure 12. With the addition of GQDs, R increases while Jnr decreases: under the same conditions, the R of the GQDs/SBS composite-modified asphalt is higher than that of the SBS-modified asphalt, while its Jnr is smaller. This implies that adding GQDs increases the high-temperature rutting resistance of the asphalt. In addition, the R values of both binders decrease, and their Jnr values increase to some extent, as the stress level increases. In short, increasing vehicle loads may significantly weaken the recovery capacity of asphalt pavement under high summer temperatures, thus causing rutting damage.
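A minimal sketch of how R and Jnr are typically computed for one MSCR creep-recovery cycle, following the usual definitions (recovered strain fraction and unrecovered strain per unit stress); the strain values below are placeholders, and a full analysis would average over all cycles.

# Per-cycle MSCR calculations (placeholder strain values).
def mscr_cycle(strain_start, strain_end_creep, strain_end_recovery, stress_pa):
    """Return recovery rate R (%) and non-recoverable compliance Jnr (1/kPa)."""
    eps_c = strain_end_creep - strain_start       # strain accumulated during the 1 s creep stage
    eps_u = strain_end_recovery - strain_start    # strain remaining after the 9 s recovery stage
    R = 100.0 * (eps_c - eps_u) / eps_c           # percentage of the creep strain recovered
    Jnr = eps_u / (stress_pa / 1000.0)            # unrecovered strain per kPa of applied stress
    return R, Jnr

# Example with hypothetical strains for one cycle at 3200 Pa.
R, Jnr = mscr_cycle(strain_start=0.00, strain_end_creep=0.012,
                    strain_end_recovery=0.004, stress_pa=3200.0)
print(f"R = {R:.1f} %, Jnr = {Jnr:.4f} 1/kPa")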
Low-Temperature Creep Properties

Asphalt pavement may crack at low temperatures. Because of the bond between the asphalt mixture layer and the underlying layer, thermal shrinkage is restrained, and the restrained displacement generates tensile stress. Cracks occur when this tensile stress exceeds the tensile strength of the asphalt mixture. The asphalt is therefore required to have a high creep rate so that the stresses generated at low temperatures, or during cooling, can be relaxed. Since the DSR cannot test asphalt of such high stiffness at low temperatures, the bending beam rheometer (BBR) is usually applied to measure the creep properties of asphalt at very low temperatures.

The BBR uses the small-beam principle to characterize the cracking tendency of asphalt as the temperature declines. Two indexes can be obtained from the BBR: the creep stiffness (S) and the creep rate (m). These two indexes characterize the load resistance and the stress-relaxation ability of the asphalt, respectively. If S is too large, the probability of cracking is high; if m is relatively low, the relaxation ability is insufficient to release the stress produced by the drop in temperature, and the probability of cracking increases.

The variations of S and m of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt with loading time, measured by BBR at −18 and −24 °C, are shown in Figure 13. S drops quickly with increasing loading time, while m increases significantly, and the rates of change of S and m differ between the two binders. At the same loading time, the m of the GQDs/SBS composite-modified asphalt at −18 °C is smaller than that of the SBS-modified asphalt, whereas at −24 °C it is higher; S shows the opposite trend. To further compare the low-temperature performance of the two binders, S/m at 60 s was used to characterize the low-temperature crack resistance of the asphalt: the lower the S/m, the stronger the crack resistance and the better the low-temperature performance. It can be seen from Figure 14 that, at the same temperature, the S/m of the GQDs/SBS composite-modified asphalt is higher than that of the SBS-modified asphalt, indicating that the GQDs/SBS composite-modified asphalt has poorer low-temperature crack resistance.
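The following Python sketch illustrates how the BBR creep stiffness S(t) and the creep rate m (the slope of log S versus log t) are commonly evaluated at 60 s. The beam geometry and load follow the standard BBR specimen dimensions (an assumption here), and the deflection data are placeholders.

import numpy as np

# BBR beam geometry and load (standard specimen assumed); deflection data are placeholders.
P = 0.98       # applied load, N
L = 0.102      # support span, m
b = 0.0127     # beam width, m
h = 0.00635    # beam thickness, m

t = np.array([8.0, 15.0, 30.0, 60.0, 120.0, 240.0])             # loading time, s
defl = np.array([0.12, 0.16, 0.22, 0.30, 0.41, 0.57]) * 1e-3    # mid-span deflection, m

S = P * L**3 / (4.0 * b * h**3 * defl)     # creep stiffness, Pa

# m-value: slope of log S vs log t, from a quadratic fit evaluated at 60 s.
A, B, C = np.polyfit(np.log10(t), np.log10(S), 2)[::-1]
m60 = abs(B + 2.0 * C * np.log10(60.0))
S60 = 10 ** (A + B * np.log10(60.0) + C * np.log10(60.0) ** 2)
print(f"S(60 s) ≈ {S60/1e6:.0f} MPa, m(60 s) ≈ {m60:.3f}, S/m ≈ {S60/1e6/m60:.0f} MPa")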
High-Temperature Stability

In the present study, the high-temperature stabilities of the SBS-modified asphalt mixture and the GQDs/SBS composite-modified asphalt mixture were evaluated by the dynamic stability measured in the high-temperature (60 °C) rutting test. The high-temperature stability test results are shown in Figure 15. It can be seen from Figure 15 that the dynamic stability of the GQDs/SBS composite-modified asphalt mixture is significantly higher than that of the SBS-modified asphalt mixture, indicating that adding GQDs increases the cohesive force of the asphalt. Moreover, more compact structures are formed by adjusting the skeleton of the asphalt mixture during compaction, thus increasing the internal friction angle. With the increase in cohesive force and internal friction angle, the shear strength of the asphalt mixture increases, giving it good high-temperature stability.

Low-Temperature Crack Resistance

The resistance of the asphalt mixture to low-temperature cracking was evaluated through a low-temperature small-beam bending test. Small beam specimens (250 mm long × 30 mm wide × 35 mm high) were used, and the loading rate and temperature were set to 50 mm/min and −10 °C, respectively. The low-temperature crack resistance test results are shown in Figure 16. It can be seen from Figure 16 that the maximum bending strain of the GQDs/SBS composite-modified asphalt mixture at low-temperature failure is 14.5% lower than that of the SBS-modified asphalt mixture. Nevertheless, all test results meet the standard requirements, indicating that adding GQDs decreases the tenacity and the temperature sensitivity of the asphalt mixture in the low-temperature state; as a result, the low-temperature crack resistance declines accordingly.
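As a brief sketch of the two mixture indexes discussed above, the snippet below uses the formulations commonly adopted in Chinese test specifications for the wheel-tracking dynamic stability (deformation between 45 and 60 min at 42 passes/min) and the small-beam maximum bending strain; treating these as the formulas used here is an assumption, and all input values are placeholders.

def dynamic_stability(d45_mm, d60_mm, passes_per_min=42.0):
    """Rutting-test dynamic stability (passes/mm) between 45 and 60 min (assumed formulation)."""
    return (60.0 - 45.0) * passes_per_min / (d60_mm - d45_mm)

def max_bending_strain(deflection_mm, span_mm=200.0, height_mm=35.0):
    """Maximum bending strain of the small beam at failure (assumed formulation)."""
    return 6.0 * height_mm * deflection_mm / span_mm ** 2

print(dynamic_stability(d45_mm=2.10, d60_mm=2.25))     # e.g. 4200 passes/mm (placeholder inputs)
print(max_bending_strain(deflection_mm=0.55))           # e.g. ~2.9e-3 strain (placeholder input)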
Water Stability

The water stabilities of the two modified asphalt mixtures were evaluated by the freeze-thaw splitting test. After the specimens prepared on the basis of the Marshall test were subjected to freeze-thaw cycles, the freeze-thaw splitting residual strength was measured, enabling analysis of the resistance of the asphalt mixture to water damage under harsh environments. The water stability test results are shown in Figure 17. It can be seen from Figure 17 that the strengths of the GQDs/SBS composite-modified asphalt mixture before and after freezing and thawing decrease compared with those of the SBS-modified asphalt mixture. This reveals that adding GQDs decreases the adhesion between the asphalt and the aggregate, thus reducing the resistance of the asphalt mixture to water damage. However, the residual strength still meets the requirements of the technical specifications, implying that the prepared GQDs/SBS composite modifier only slightly influences the water stability of the mixture.
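A brief sketch of how the freeze-thaw splitting residual strength ratio, the usual index reported for this test, is computed; the strength values are placeholders, and the 75% threshold quoted in the comment is a typical specification limit assumed here rather than a value taken from this paper.

def freeze_thaw_splitting_ratio(strength_conditioned_mpa, strength_unconditioned_mpa):
    """Freeze-thaw splitting strength ratio (%), i.e. the residual-strength index."""
    return 100.0 * strength_conditioned_mpa / strength_unconditioned_mpa

tsr = freeze_thaw_splitting_ratio(strength_conditioned_mpa=0.82,
                                  strength_unconditioned_mpa=1.01)
print(f"Residual strength ratio = {tsr:.1f} %  (>= 75 % is a typical specification limit)")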
Conclusions

In this study, a GQDs/SBS composite modifier was prepared using the Pickering emulsion polymerization method. The physical and chemical properties of the GQDs/SBS composite modifier, the physical and rheological properties of the binders, and the pavement performance of the GQDs/SBS composite-modified asphalt mixture were investigated. Based on the results and discussion, the following conclusions can be drawn:

(1) The GQDs/SBS composite modifier can be prepared by the simple Pickering emulsion polymerization method, with the GQDs dispersing evenly in the SBS modifier to form a uniform composite. The GQDs/SBS composite modifier contains more oxygen-containing functional groups than the SBS modifier. Furthermore, the pyrolysis rate of the GQDs/SBS composite modifier is lower than that of the SBS modifier and its residual mass is higher, indicating better thermostability.

(2) The conventional physical properties and rheological properties of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt were compared. The GQDs/SBS composite-modified asphalt shows a higher softening point, complex shear modulus, activation energy, rutting factor and recovery rate than the SBS-modified asphalt, and therefore better high-temperature performance. However, the cone penetration and ductility of the GQDs/SBS composite-modified asphalt decrease while S/m increases, indicating that its low-temperature performance is worsened.

(3) The pavement performance of the GQDs/SBS composite-modified asphalt mixture and the SBS-modified asphalt mixture were compared. The high-temperature stability of the GQDs/SBS composite-modified asphalt mixture is improved to some extent compared with that of the SBS-modified asphalt mixture, while its water stability changes only slightly and its low-temperature performance declines to some extent.
11,818.8
2022-04-11T00:00:00.000
[ "Materials Science", "Engineering" ]
Metal Peptide Conjugates in Cell and Tissue Imaging and Biosensing

Metal complex luminophores have seen dramatic expansion in application as imaging probes over the past decade. This has been enabled by a growing understanding of methods to promote their cell permeation and intracellular targeting. Amongst the successful approaches that have been applied in this regard is peptide-facilitated delivery. Cell-permeating or signal peptides can be readily conjugated to metal complex luminophores and have proven highly effective in carrying such cargo through the cell membrane. In this article, we describe the rationale behind applying metal complexes as probes and sensors in cell imaging and outline the advantages to be gained by applying peptides as the carrier for complex luminophores. We describe some of the progress that has been made in applying metal complex-peptide conjugates as a strategy for cell permeation and targeting of transition metal luminophores. Finally, we provide key examples of their application and outline areas for future progress.

Introduction

Fluorescence microscopy is one of the most important and ubiquitous tools in the life sciences. Its applications vary from the simple visualisation of fixed samples to quantitative and dynamic determination of biological processes in living cells and tissues. Luminescent metal complexes are emerging as highly useful probes for fluorescence microscopy, competing with more traditional organic fluorophores due to their excellent, tuneable photophysical properties and their amenability to sensing applications. Indeed, many metal complex luminophores are addressable through multimodal methods and offer prospects for applications in imaging, sensing and theranostics. Probes with visible to near-infrared (NIR) excitation and emission are required in bioimaging. In particular, emission that coincides with the biological optical window (650-1000 nm) is preferable because light in this spectral range is scattered relatively weakly by tissue and is not absorbed by biomolecules. It is therefore more penetrative through biological tissue; moreover, autofluorescence from endogenous sources upon NIR excitation is minimal. As luminescence from most transition metal complexes is formally phosphorescence, their emission exhibits a large Stokes shift (the energy difference between the absorption and emission maxima). This is advantageous as it avoids artefacts from inner filter effects or self-quenching, which may be more prevalent when the probe is localised at high concentrations. Another rarely considered advantage of the large Stokes shift is that it facilitates dual use of such complexes as probes in tandem luminescence and resonance Raman measurements under resonant excitation, since the Stokes shift enables excitation and detection of the resonance Raman signature away from the overwhelming emission signature [1,2]. The long-lived and triplet nature of the excited state of many metal complex luminophores, notably those of ruthenium(II) and iridium(III), renders them susceptible to quenching by analytes such as molecular oxygen (O2) and reactive redox species, or to modulation by pH. The characteristic luminescence lifetime- or intensity-based response typically reflects the interaction of the metal complex with these species within a cellular or tissue environment. Luminescence intensity-based sensing can be performed using conventional instrumentation such as a fluorescence microscope or plate reader.
An important limitation is that the absolute signal intensity alone cannot typically be used as a reliable quantitative marker for a single target analyte, because intensity in cellulo is influenced by many factors. Most notably, it will vary with the distribution of the probe in the cell, which is rarely uniform, and physicochemical issues such as photodamage, probe leaching and interaction with species such as proteins or lipid membranes within the cellular environment can influence emission intensity. Additionally, intensity can be affected by the excitation source or by detector drift and sensitivity. A practical approach to facilitate the use of emission intensity for sensing is to apply ratiometric sensing. The ratiometric approach involves referencing the sensor probe emission signal to a stable emission signal from a dye that does not respond to the analyte or species of interest, but is subject to the same instrumental fluctuations that influence the intensity of the analytical signal. An alternative way to obtain insight into a particular analyte that is not dependent on dye concentration or instrumental fluctuations is to apply fluorescence lifetime imaging microscopy (FLIM), or phosphorescence lifetime imaging in the case of most metal luminophores. It is a quantitative imaging technique that can be used for real-time mapping of the cellular and tissue microenvironment, including cell functions and metabolic changes, where the lifetime of a fluorophore is influenced by its local environment. As indicated, unlike intensity-based methods such as confocal fluorescence imaging, the image is independent of luminophore concentration, reflecting only the emission lifetime distribution of the probe.

A challenge that has traditionally impeded the application of metal complexes as bioimaging and biosensing probes is the poor uptake of such materials into cells. This has been widely overcome in recent years with a variety of strategies involving modification of the physicochemical properties of the complex or bioconjugation [3]. Conjugation of complexes to cell-penetrating and signal peptides specifically has proven to be a particularly attractive and reliable method for achieving efficient cellular uptake without the use of permeabilisation agents. In particular, in the context of metal complex luminophores, this approach has the potential to very specifically drive the probe to target organelles with complex membrane structures such as the mitochondria or nucleus. This review focuses on the more commonly studied luminescent transition row complexes of Ru(II), Ir(III), Os(II) and Re(I), with some examples from less well studied transition metal complexes such as Pt(II), Pd(II), Rh(III) and Zn(II).
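To make the lifetime-based approach described above concrete, the following Python sketch fits a mono-exponential phosphorescence decay to recover the emission lifetime, which is the operation carried out for the decay recorded at each pixel in lifetime imaging; the decay trace here is a simulated placeholder, not a measurement.

import numpy as np
from scipy.optimize import curve_fit

# Simulated phosphorescence decay (placeholder): true lifetime 600 ns plus noise.
t = np.linspace(0, 4000, 200)                       # time after excitation pulse, ns
true_tau, I0 = 600.0, 1000.0
counts = I0 * np.exp(-t / true_tau) + np.random.normal(0, 5, t.size)

def decay(t, I0, tau):
    """Mono-exponential emission decay model."""
    return I0 * np.exp(-t / tau)

popt, _ = curve_fit(decay, t, counts, p0=[counts.max(), 500.0])
print(f"Fitted lifetime ≈ {popt[1]:.0f} ns")
# In lifetime imaging this fit is repeated pixel by pixel, so the resulting image
# maps the emission lifetime rather than the intensity.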
Photophysical Profile of an Ideal Chromophore for Bioimaging

Luminescence imaging methods, particularly confocal fluorescence and luminescence lifetime imaging, are widely used techniques in biochemistry and molecular biology as they offer high contrast, sensitivity, good resolution and flexibility in the choice of luminophore probe. In addition, with the commercialisation of more advanced imaging methods, including super-resolution and multiphoton methods, there is a growing need for probes that meet the demands of these methods, including robust photostability, sensitive environmental responsivity, high membrane permeability and targeted localisation. Indeed, studies to date have demonstrated that metal complexes can be applied in interrogating the cell environment and studying dynamic processes in vivo via a variety of imaging methods, and that they have the synthetic versatility to be tuned to the desirable photophysical properties while maintaining biocompatibility and low cytotoxicity.

Favourable Properties of Metal Complexes

The ideal photophysical/optical characteristics of a luminescent imaging probe vary depending on the imaging methodology, although a number of characteristics are common to all, including the need for high molecular brightness (the product of the molar extinction coefficient and the quantum yield) and photostability. A diverse range of probes have been developed for fluorescence/luminescence imaging, including fluorescent proteins expressed in situ in the cell, and exogenously applied probes, including organic fluorophores, nanoparticles, quantum dots and metal complexes. Organic fluorophores such as rhodamine, the cyanine dyes, and the Alexa Fluor and Atto dyes have been widely used as contrast agents in fluorescence microscopy to date, as they exhibit high molecular brightness and, in the case of the Atto and Alexa Fluor probes, good photostability. However, intrinsic drawbacks of organic fluorophores include a narrow Stokes shift, which leads to inner filter effects and self-quenching at high optical densities, and, in many cases, limited photostability. They also frequently show poor solubility in aqueous media, and so application in cells often requires pre-dissolution in an organic solvent that promotes cellular permeation, but often through damage to the membrane. Finally, the short emission lifetime of organic fluorophores (usually in the range of 1-5 ns) is typically too short to enable time gating as a method to discriminate probe emission from background autofluorescence and, in sensing applications, limits the quenching capability for diffusing species (the singlet states of such dyes also limit oxygen sensing). Aside from time gating, another approach to avoiding autofluorescence interference in cellular or tissue imaging is to use a probe that emits in the red or NIR spectral range. Autofluorescence, excited at short excitation wavelengths, arises from naturally fluorescent molecules within the cell, tissue or medium and usually decays on the nanosecond timescale. Reduced nicotinamide adenine dinucleotide (NADH) and flavin adenine dinucleotide (FAD) are examples of intrinsic biological fluorophores whose fluorescence properties have been studied in detail. Even if the probe is excited in the blue visible spectral range, which excites autofluorescence, detection of this autofluorescence can be avoided if the probe emission is in the red region, owing to the large Stokes shift. In the context of luminescence imaging, but also therapy, a probe absorbing in the low-energy visible or NIR region is also desirable, as this allows for deeper light-tissue penetration and avoids biological damage from continuous photo-irradiation in spectral regions where tissue absorbs [4][5][6][7][8][9]. This is illustrated in Fig. 1, where emission from an Os(II) polypyridyl complex in the 650-800 nm region avoids any significant background signal from biological autofluorescence of a multicellular spheroid [10]. Transition metal complexes have also shown good photostability, which is particularly robust in the case of osmium(II) polypyridyl luminophores, where photodecomposition and photobleaching can be completely avoided.
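Since molecular brightness (ε × φ) recurs as a selection criterion in what follows, a small Python sketch comparing representative values is given below; the numbers are illustrative order-of-magnitude assumptions, not measured data for specific probes.

# Molecular brightness = molar extinction coefficient (epsilon) x emission quantum yield (phi).
# Values below are illustrative orders of magnitude only (assumptions, not measured data).
probes = {
    "typical organic dye":            {"epsilon_M_cm": 80_000, "phi": 0.90},
    "typical Ru(II) polypyridyl":     {"epsilon_M_cm": 15_000, "phi": 0.04},
    "typical Ir(III) cyclometalated": {"epsilon_M_cm": 10_000, "phi": 0.30},
}

for name, p in probes.items():
    brightness = p["epsilon_M_cm"] * p["phi"]
    print(f"{name:32s} brightness ≈ {brightness:,.0f} M^-1 cm^-1")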
To date, the coordination compounds of the (second and third row) d 6 metals Ru(II), Os(II) or Ir(III) are amongst the most widely studied transition metal imaging probes. Figure 2 shows examples of luminescent metal complexes discussed in this section, highlighting key ligands used as building blocks for the design and development of metal complex luminophores. Aside from their favourable photophysical properties, which are highly tunable due to the synthetic versatility of transition metal luminophores, metal complexes can also show good and also tuneable aqueous solubility, cell permeability and uptake and can be driven to subcellular structures through a range of approaches, in particular, as discussed in this review, by bioconjugation to peptides. Tuning of Photophysical Properties The photophysics and photochemistry of the prototype metal complex [Ru(bpy) 3 ] 2+ has been very thoroughly studied, and it is often used as an example to describe the photophysical activity of Ru(II) complexes [11][12][13][14]. The ultraviolet spectrum of [Ru(bpy) 3 ] 2+ is dominated by intense π-π* ligand bands and the broad metalto-ligand charge transfer (MLCT) transitions in the visible region. Spin-forbidden transitions are facilitated by spin orbit coupling which can be very large for second and third row transition metals such as Ru(II) and Os(II) complexes. Upon photon absorption, the singlet 1 MLCT excited state is populated and undergoes rapid intersystem crossing ( k ISC ), populating a triplet MLCT ( 3 MLCT) excited state with unity quantum yield. In the case of [Ru(bpy) 3 ] 2+ , deactivation from the lowest excited MLCT state to the ground state ( 1 A 1g in O h symmetry) is observed through emission or non-radiative decay via thermally activated (E a ) population of the 3 MC state ( 3 T 1g in O h symmetry). This latter process can lead to ligand dissociation. Indeed, enhanced ligand dissociation following 3 MC population is observed for sterically strained complexes or for complexes coordinated to ligands with a weak σ donor such as in Ru(II) 2,2′-biquinoline (biq) complexes [15], where reduced ligand field splitting capacity reduces the energy of the dissociative 3 MC, facilitating its thermal population from the 3 MLCT state. This is, for example, observed in [Ru(tpy) 2 ] 2+ , where tpy is terpyridine, which exhibits a weak short-lived emission at room temperature [16]. The rigid tpy ligands cause geometric distortion from the ideal O h geometry and smaller N-Ru-N trans angles (158.6°) that give rise to a weaker ligand field reducing the energy of the 3 MC state, thus facilitating radiationless deactivation [17]. Substitution at the 4′ position of the terpyridine ligands in [Ru(tpy) 2 ] 2+ with electron donor or acceptor moieties can enhance the excited state lifetime [18] by destabilising the metal-based highest occupied molecular orbital (HOMO) or stabilising the ligand-based lowest unoccupied molecular orbital (LUMO), respectively. A review by Medlycott and Hanan reports on the various synthetic strategies used to enhance the room-temperature photophysical properties of Ru(II) complexes of tridentate ligands [17]. Emission from Ru(II) complexes typically occurs in the wavelength range of 580-800 nm with λ exc ≈ 400 to 550 nm. Luminescence lifetimes are typically on the order of hundreds of nanoseconds with quantum yields of 1-5% (e.g. [Ru(bpy) 3 ] 2+ ; φ air = 0.04 in water [19]). 
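As a side note on how these headline numbers relate to the underlying excited-state kinetics, the short Python sketch below computes the radiative and non-radiative decay constants from a quantum yield and lifetime, assuming a single emitting state; the input values are illustrative, in the range quoted above, and are not data for a specific complex.

# Radiative (kr) and non-radiative (knr) decay constants from quantum yield and lifetime.
# Assumes a single emitting state, so phi = kr * tau and 1/tau = kr + knr.
phi = 0.04        # emission quantum yield (illustrative value)
tau_ns = 400.0    # emission lifetime in ns (illustrative value)

tau_s = tau_ns * 1e-9
kr = phi / tau_s                  # radiative rate constant, s^-1
knr = (1.0 - phi) / tau_s         # non-radiative (plus quenching) rate constant, s^-1
print(f"kr ≈ {kr:.2e} s^-1, knr ≈ {knr:.2e} s^-1")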
Molecular brightness, which is defined as the product of the molar extinction coefficient (ε) and the quantum yield (φ), is an important photophysical characteristic of an imaging probe, as it can determine the sensitivity and signal-to-noise ratio for luminescence detection. Molar extinction coefficients for Ru(II) complexes are in the range of 5000-20,000 M−1 cm−1, meaning their molecular brightness is moderate when compared to organic dyes such as fluorescein. Although Ru(II) and Os(II) complexes are less optically tuneable than Ir(III), modification of their σ-donor or π-acceptor properties can also be used to tune their photophysics. For example, coordination of strong π-acceptor ligands, such as 2,2′-biquinoline (biq), decreases the ligand field strength and stabilises the dπ orbitals, leading to red shifts in the absorption and emission of Ru(II) complexes [20]. While, as described above, this can promote population of the 3MC state, simultaneous co-coordination of a strong σ-donor ligand such as pyridyl-1,2,4-triazolate (trz) will promote photostability by raising the energy of the 3MC state, thus preventing both thermal population of this state and potential photodecomposition. Thus, strategic co-mixing of ligands can promote red emission whilst impeding photoinstability [20,21]. While molecular brightness is modest for most Ru(II) complexes, this is less of an issue for Ir(III) complexes, and they are also inherently more sensitive to the impact of ligand modification due to mixing of ligand and metal states. Factors such as the absorbance and emission maxima and the molecular brightness are relatively easily modified through ligand modification. The excited states of Ir(III) complexes frequently contain mixed contributions from both 3LC and 3MLCT states and permit greater photophysical tuning, leading to complexes with a diverse range of emission properties across the visible to NIR spectrum. The photophysical properties of such complexes can be tuned via a number of strategies: π-extension of the coordinated ligands, modification of the cyclometalated ligands with electron-donating/withdrawing substituents, or introduction of an ancillary ligand, e.g. N,N-coordinating ligands that have σ-donating or π-accepting properties [24][25][26][27]. A recent review on NIR-emitting Ir(III) complexes discusses in detail the different methods that can be utilised to tune the photophysics of Ir(III) complexes [28]. Of note, although Ru(II) complexes are typically weaker emitters and less amenable to photophysical tuning than Ir(III) complexes, an advantage is that they tend to exhibit lower cytotoxicity upon uptake into cells [29,30]. Shifting the emission maxima toward the NIR region can also be achieved by selecting an alternative metal centre. Os(II) polypyridyl complexes exhibit emission typically centred in the NIR region (> 730 nm), which is advantageous in the context of bioimaging, including cellular and tissue imaging [31][32][33][34][35]. Os(II) complexes share many of the same photophysical properties with their ruthenium analogues, with some key differences: the 3MC state is higher in energy in Os(II) complexes due to increased crystal field splitting that raises the energy of the anti-bonding eg* levels, making it thermally inaccessible from the emitting 3MLCT state. Thus, Os(II) complexes are extremely photostable, and their photophysics tend to show weak temperature dependence compared to their ruthenium analogues [16].
However, in comparison to [Ru(bpy)3]2+, the 3MLCT excited state of Os(II) is much shorter-lived, and quantum yields are lower. This is a feature of the energy gap law, which comes into play for red to NIR emission. It predicts that the non-radiative decay rate increases as the energy gap between the excited and ground states decreases [36]. Therefore, the low-energy MLCT state of Os complexes leads to efficient non-radiative decay. Amongst the d6 complexes, rhenium(I) complexes, typically rhenium fac-tricarbonyl polypyridyls, also exhibit attractive photophysical properties, including large Stokes shifts, long-lived oxygen-sensitive emission and high photostability. Thus, they have also been applied as bioimaging agents. Photophysical tuning of rhenium complexes is more challenging compared to complexes of Ru(II), Ir(III) and Os(II). In particular, the absorption of such complexes tends to be toward the UV or blue spectral range, which limits their suitability for imaging applications, especially in tissues. Nonetheless, NIR emission can be achieved by incorporating the complex into a D-π-A system [37]. Due to the isostructural relationship between rhenium and technetium-99m, and their characteristic infrared absorption bands, complexes of rhenium(I) have been applied as probes for radioimaging and vibrational imaging, respectively [38,39]. Furthermore, rhenium(I) tricarbonyl complexes have been developed as agents for photodynamic therapy, as they tend to be strong photosensitisers for singlet oxygen generation [40,41]. Pt(II)/Pt(IV) compounds have historically found application mainly in therapy as anticancer agents [42,43], but have also been studied more recently in the context of imaging [44][45][46]. Luminescence and biocompatibility are prerequisites for use in imaging, and numerous kinetically stable and emissive Pt(II) complexes have been reported. Pt(II) (d8) luminophores are distinct from the complexes discussed above because of their square planar geometries, and they have been based mainly on the general structure [Pt(C^N^N)(L)]n (C^N^N = aryl-substituted N^N ligand, L = monodentate ligand and n = 0 or +1) and cyclometalated tridentate derivatives (e.g. [Pt(C^N^C)(Cl)]) [47]. Tetradentate-ligand and π-conjugated porphyrin-coordinated Pt(II) luminophores have also been reported [48,49]. Emission from cyclometalated tridentate complexes is usually attributed to a triplet intra-ligand charge transfer excited state (3ILCT), and so their photophysical properties are tuneable through ligand modification [44]. π-Conjugated Pt(II) porphyrin complexes, in particular, can exhibit high quantum yields and NIR emission, but efficient and uniform cellular uptake can be problematic due to the large size of porphyrins. In addition, the square planar coordination geometry of platinum(II) complexes can permit self-assembly by non-covalent π-π and/or Pt(II)-Pt(II) interactions, with the prospect of triplet metal-metal-to-ligand charge transfer (3MMLCT) excited state emission [50,51]. Complexes of Rh(III) [52] and Zn(II) [53,54] have also been applied in bioimaging, but to date to a lesser extent than the above metals.

Reducing Toxicity by Ligand Modification

It is important to consider potential cytotoxicity, both dark and photo-induced, when designing a metal complex luminophore for bioimaging and sensing.
The metal centre and coordinated ligands dictate the excited state and redox properties of a complex, and these features, along with size, lipophilicity and overall charge, can generally influence cytotoxicity. Owing to their long-lived triplet excited states, Ir(III), Re(I) and Ru(II) complexes can induce cellular toxicity via a number of photochemical and photophysical routes. For example, incorporation of tap or hat ligands (tap = 1,4,5,8-tetraazaphenanthrene, hat = 1,4,5,8,9,12-hexaazatriphenylene) in complexes of Ru(II) permits efficient proton-coupled electron transfer (PCET) reactions with bio-relevant molecules such as DNA, which can lead to cytotoxic effects. For example, the complexes [(N^N)2Ru(tatpp)]2+ (where N^N = bpy or phen and tatpp = 9,11,20,22-tetraazatetrapyrido[3,2-a:2′,3′-c:3′′,2′′-l:2′′′,3′′′-n]pentacene) were shown to cleave DNA through a redox-mediated mechanism [55]. Sensitisation of reactive oxygen species (ROS), such as singlet oxygen, is another important route to photo-induced toxicity that can be exploited in photodynamic therapy applications [56,57]. A further strategy for phototherapy, and a route to cytotoxicity, is photo-induced ligand dissociation or substitution which, as previously mentioned, is observed mainly in Ru(II) complexes upon thermal population of the 3MC states [15,58]. Turro et al. investigated in detail the factors that affect ligand dissociation and singlet oxygen generation and demonstrated how ligand tuning can be used to promote both reactions. As the ligand dissociation is thermally driven, it can be quite prevalent under cellular imaging conditions and can be exploited to apply such complexes as prodrugs [15,[59][60][61][62]. For example, Glazer et al. have described a series of sterically strained Ru(II) complexes that exhibit dramatically increased ligand photorelease, which results in a reactive ruthenium-aquo complex that can photo-bind DNA and other biomolecules and trigger cellular apoptosis [63]. Conversely, osmium luminophores tend to be very photostable and relatively inert towards ligand substitution, thus reducing cytotoxic effects that occur through ligand dissociation routes [64,65]. While the nature of the ancillary ligands is important, chemical modifications to the ligand itself can also influence the lipophilicity and consequently the cellular uptake, localisation and cytotoxicity of the complex [71][72][73]. Increased lipophilicity and dark cytotoxicity were observed for Ru(II) bis-phen and bis-TAP complexes coordinated to a hydrophobic alkylamide phen ligand [74]. Glazer et al. reported on the uptake of two Ru(II) complexes differing in their charge but coordinated to the highly lipophilic dip (or dpp) ligand [75]. The complexes were successfully internalised by A549 cells, where the lipophilic [Ru(dip)3]2+ (log P = +1.8) accumulated at the mitochondria and lysosomes, while the anionic and less lipophilic [Ru((SO3)2-dip)3]4− (log P = −2.2) localised in the cytosol and was excluded from the mitochondria. Both complexes showed photoinduced toxicity, but interestingly, the mitochondria-accumulating complex also showed dark toxicity with an IC50 between 0.62 and 3.75 μM. This study highlights the importance of balancing charge and lipophilicity in order to modulate accumulation and limit cytotoxicity (Table 1).
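Since log P values such as those quoted above are obtained from octanol/water partitioning, a brief Python sketch of the calculation follows; the equilibrium concentrations are placeholders chosen only to reproduce log P values of roughly +1.8 and −2.2, and do not correspond to any actual measurement.

import math

def log_p(conc_octanol, conc_water):
    """log10 partition coefficient between the octanol and water phases."""
    return math.log10(conc_octanol / conc_water)

# Placeholder equilibrium concentrations (same arbitrary units in both phases).
print(f"lipophilic complex:  log P = {log_p(63.0, 1.0):+.1f}")   # ~ +1.8
print(f"sulfonated complex:  log P = {log_p(1.0, 158.0):+.1f}")  # ~ -2.2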
Recently, Finn et al. reported on functionalised Ru(II) complexes with pendant, lipophilic alkyl-acetylthio chains of varying lengths [80]. The complexes were capable of self-assembling into micelles under aqueous conditions and could traverse the cell membrane. Polyethylene glycol has been conjugated to metal complexes to increase aqueous solubility and reduce dark cytotoxicity [81]. Reduced cytotoxicity was observed for cell-permeable Ir(III)-poly(ethylene glycol) (PEG) conjugates in comparison to the PEG-free counterparts [82,83]. The long PEG chains likely protect the Ir(III) complexes from non-specific interactions with proteins, DNA and membranes within the cell.

Table 1. Lipophilicity and cytotoxicity of selected metal complexes upon synthetic modification. (Lipophilicity, log P o/w, was estimated from the partition coefficient of each compound in octanol/water. Propidium iodide and Hoechst are both commercially available organic nucleic acid markers; the former is permeant only to damaged/dead cells, and the latter is cell-permeable. IC50: half-maximal inhibitory concentration.)

Rationale for Peptide Conjugation to Transition Row Metal Complexes

Cell membrane permeability is a key barrier to the widespread application of metal complexes in cellular and tissue imaging. One widely used approach to overcome this challenge is the use of organic solvents such as dimethyl sulfoxide (DMSO) or detergents such as Triton-X to permeabilise the cell membrane of mammalian cells. Permeabilising agents act by disrupting the integrity of the membrane bilayer, thus promoting entry of the compound into the cell [84]. This approach is widely used for both organic fluorophores [85][86][87] and metal complexes [88][89][90][91], though it is not always explicitly explained. A key drawback is that, above relatively low volume percentages (e.g. for DMSO > 5% vol/vol), solvent permeabilisation can cause irreversible damage to the cell membrane [92], so the approach should be used with care in the study of cultured cells, and an organic solvent as a permeant is of limited use in tissue or in vivo applications [93,94]. Other approaches to improving permeability have focused on tuning the lipophilicity, charge and solubility of the complex, which in turn can influence cellular uptake and accumulation, as mentioned previously. The use of nanocarriers [95][96][97][98], liposomes [99], dendrimers [100], sugars/carbohydrates [101,102], polyethylene glycol (PEG) chains [81,82], vitamins [103][104][105], antibodies [106], lipophilic moieties such as triphenylphosphonium (TPP) [107], amino acids [108] and cell-penetrating peptides (CPPs) [81,[109][110][111] has also been shown to increase solubility and improve membrane permeability, facilitating reliable uptake of complexes within cells for a range of applications. Recent reviews describe the preparation and application of ruthenium bioconjugates [112] and vectorisation strategies for metal complex luminophores [3]. Following cellular uptake, subcellular targeting of organelles, such as the mitochondria or nucleus, is typically of interest in the context of bioimaging/sensing and therapy. The nuclear envelope is a double membrane comprising inner and outer nuclear membranes that converge at several sites, generating nuclear pores [113]. Uptake of ions and small molecules is mediated through the nuclear pores via a channel (~30 nm in diameter) by passive diffusion. In contrast, uptake of larger molecules is mediated through transport receptors [114]. The mitochondria also feature a double-membrane boundary, though one structurally different to that of the nucleus.
The inner mitochondrial membrane is far less permeable than the outer, allowing only very small molecules to cross into the matrix, where mitochondrial DNA (mtDNA) and other molecules of analytical interest are contained. Peptide conjugation has emerged in recent years as a key enabling tool to promote cell uptake, particularly of non-membrane-permeable metal complexes, or to enhance the uptake and targeting of permeable complexes [115]. The mechanism by which peptides facilitate transport across the cell membrane is often linked to an energy-dependent process such as endocytosis. For example, the recognition of molecules by specific receptors located on the surface of the cell membrane can lead to a receptor-mediated endocytic pathway of uptake. In principle, it is possible that the peptide may lower the uptake efficiency in comparison to the peptide-free complex, for example in instances where the complex is already highly membrane-permeable by passive diffusion owing to its lipophilic character, or in comparison to membrane modification methods such as the use of permeabilisation agents. Lower uptake efficiency, however, is often balanced by improved precision in intracellular localisation and decreased cytotoxicity. Peptide conjugation to metal complexes has been facilitated by the plethora of peptide coupling reactions available, including, but not limited to, amine/carboxyl coupling reactions, "click" chemistry and Sonogashira coupling reactions. Cell-penetrating and signal peptides specifically have proven to be reliable vectors for the efficient intracellular delivery of different metal complexes and for targeting organelles with complex membrane structures such as the mitochondria or the nucleus [10,109,[116][117][118][119].

Peptides

Peptides, short sequences (< 50) of amino acids linked by amide bonds, are physiologically important biomolecules that serve in signalling processes and are ligands for many proteins. In the body, they function as hormones, inhibitors, antibiotics and anti-inflammatories, and both natural and synthetic peptides are finding increasing use in therapeutic applications [120]. Peptides have been widely applied in the pharmaceutical industry to promote the permeation of drugs across the membrane; in particular, cell-penetrating peptides (CPPs) have been very effective in this regard and have been applied both as appendages to therapeutic molecules and incorporated into nanocarriers [121]. One of the reasons that peptides have become so important in pharmaceuticals as cargo carriers is that they can be readily accessed, including linear, cyclic and branched peptides, through chemical synthesis. The most important route is solid-phase peptide synthesis (SPPS), and for many peptide sequences the synthesis can be automated, with continuous improvements to protocols reported that lead to gains in speed, purity and yield. Furthermore, functional terminal groups can be readily appended in the synthesis protocol to facilitate conjugation [122]. The success of peptides in the pharmaceutical industry, and also their application in driving organic imaging agents into cells, has led to their application in recent years as conjugates to metal complexes to promote their cellular access and targeting.
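As a small illustration of why the sequences discussed in the next section are such effective carriers, the following Python sketch estimates the nominal positive charge of a few CPP sequences from their basic residues; this simple residue count is an approximation that ignores pKa shifts, histidine and the termini.

# Rough net positive charge of cell-penetrating peptides from basic residue counts.
# Simplification: counts Arg (R) and Lys (K) as +1 each at physiological pH.
def cationic_charge(sequence):
    return sum(1 for aa in sequence.upper() if aa in "RK")

peptides = {
    "HIV-Tat peptide":   "RKKRRQRRR",
    "octaarginine (R8)": "RRRRRRRR",
    "penetratin":        "RQIKIWFQNRRMKWKK",
}
for name, seq in peptides.items():
    print(f"{name:20s} length {len(seq):2d}, ~+{cationic_charge(seq)} charge")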
Cell-Penetrating Peptides (CPPs)

The ability of cationic peptide sequences to cross the cell membrane and facilitate the uptake of small molecules was first demonstrated in 1965 by Ryser and Hancock, who showed cationic amino acid-mediated enhancement of albumin uptake, followed by studies on the conjugation of poly-L-lysine to albumin and horseradish peroxidase [123,124]. The most studied cell-penetrating peptide is likely the arginine-rich transduction domain of the HIV-Tat protein (RKKRRQRRR) from the human immunodeficiency virus [125,126]. Homopolymers of arginine (polyarginines) have shown superior cellular uptake compared to other cationic analogues such as ornithine and histidine [127]. With studies showing no strict requirement for side chain length or backbone chirality (D-Arg vs L-Arg), it was concluded that the guanidinium head groups of the arginine units are the structural feature crucial to cellular uptake. Barton et al. first reported peptide-facilitated cellular uptake of rhodium complexes [52]. In order to reduce nonspecific DNA binding owing to the highly charged R8, a shorter peptide sequence, RrRK (where r = D-arginine), was conjugated to a Ru(dppz) complex, achieving cellular uptake and nuclear accumulation above a threshold concentration of 100 μM in complete media [128]. Cargo transduction occurs for arginine sequences Argn or Rn, where n = 6-11 residues, with octaarginine (Arg8) and nonaarginine (Arg9) being most efficiently transported. Our group has reported the efficient transport of an otherwise cell-impermeable Ru(II) polypyridyl complex, [Ru(bpy)2(pic)]2+, via conjugation to octaarginine [88]. The conjugate was found to passively transport into myeloma cells within 12 min. In addition, studies showed that Arg5 or shorter conjugates are not effective in promoting metal complex permeation. Wender et al. reported a similar decrease in uptake efficiency for shorter polyarginines [129]. Polyarginine sequences have been extensively explored for promoting or enhancing the cellular uptake of metal complexes, with applications ranging from bioimaging to medicinal chemistry [29,88,116,[130][131][132][133]. More specifically, octaarginine-driven cellular uptake has been reported for a range of otherwise impermeable luminescent complexes differing in their metal centre (e.g. Ru(II), Os(II), Ir(III)) and coordinated ligands [10,88,116,117,131]. The effect of conjugation to peptides on the DNA recognition properties of Ru(II) and Ir(III) complexes has also been explored [30,116,134]. Recent studies showed that appending an R8 tail to a Ru(II)-dppz complex increased its affinity for G-quadruplexes and that both the ancillary ligand and the octaarginine tail were key to controlling the selectivity between quadruplexes [134]. We have also demonstrated that two tetraarginine sequences across a linear osmium(II) complex promote cellular uptake, whereas the analogue containing R8 at each terminus was membrane-impermeable. Our data indicated that a contiguous structure may not be required for octaarginine-facilitated transport and that there is an upper limit to the arginine chain length effective in promoting membrane transport of the metal complex [10]. There have been multiple pathways and mechanisms proposed to explain polyarginine CPP behaviour. Although there are a number of studies that report that polyarginines can promote permeation through a passive mechanism [135] or through local changes at the membrane [136], the key pathway in live cells appears to be ATP-activated endocytosis [137].
Polyarginine interactions with cell surface lipids and the formation of neutral complexes that transport across the bilayer have also been reported, as well as surface attachment through interactions with heparan sulfate proteoglycans (HSPG) [138][139][140][141][142]. Penetratin, a cationic peptide sequence (RQIKIWFQNRRMKWKK) corresponding to the α-helix of the Antennapedia homeodomain, is capable of crossing lipid bilayers and is also a widely applied cell-penetrating peptide [143]. Studies have shown that the uptake mechanism involves direct interaction of the peptide with membrane lipids and does not involve vesicle disruption or pore formation [144,145]. Conjugation of penetratin to [Ru(bpy)2-phen-Ar-COOH]2+ (Fig. 3) allowed for delivery of the complex to the endoplasmic reticulum in live HeLa cells [117].

Signal Peptides

Although cell-penetrating peptides such as polyarginines can facilitate efficient cell permeability, more precise subcellular organelle targeting of imaging probes or theranostic agents can be achieved using signal peptides. Natural signal peptides are amino acid sequences appended to the N termini of newly synthesised proteins in the ribosome that direct the protein from the ribosome along its secretory pathway to its destination. Such signal peptides can provide a powerful means of directing exogenous probes to their target within the cell; naturally derived peptides have been applied in this regard, and designed sequences have been shown to be recognised by proteins in organelle membranes [146]. In a recent study by Pope et al., an alternative nuclear localisation sequence (NLS), the c-Myc NLS PAAKRVKLD (Fig. 3), was conjugated to a cyclometalated iridium(III) complex [153]. This sequence is derived from the human c-Myc protein and is essential for its nuclear localisation [154]. The Ir-CMYC conjugate was efficiently delivered to the nucleus of human fibroblast cells and was essentially non-toxic, in contrast to the peptide-free parent complex [153]. Cellular uptake of cyclometalated iridium(III) complexes upon conjugation to an endoplasmic reticulum (ER)-targeting sequence (KDEL) and the NLS PKKKRKV (derived from the SV40 large T antigen) has also been explored [155]. Interestingly, although the ER-targeting conjugate accumulated at the endoplasmic reticulum, the NLS conjugate showed non-specific staining attributed to endosomal trapping upon uptake. Ypsilantis et al. presented a detailed study of the interaction of diruthenium complex-peptide conjugates with an oligonucleotide duplex and found that the tethered peptide Gly1-Gly2-Gly3-Lys1-CONH2 hindered complex binding [156]. Mitochondria-penetrating peptides (MPPs) have been employed for the specific targeting of mitochondria for imaging and therapy. Kelley et al. carried out a detailed iterative study on synthetic peptide sequences, related to signal sequences, that are effective in promoting mitochondrial targeting of fluorescent probes/drug analogues [157]. Amongst the most effective of the sequences studied was an 8-amino-acid sequence, FrFKFrFK, containing D-arginine and hydrophobic residues [157]. Keyes et al. exploited this sequence and the acetyl-blocked sequence, FrFKFrFK(Ac), to effectively and selectively drive mono- and dinuclear Ru(II) complexes to the mitochondria of mammalian cells [118,119]. As discussed in detail in later sections, such MPP-driven complexes have been applied as bioimaging and sensing tools in live mammalian cells.
For example, [(Ru(bpy) 2 phen-Ar) 2 -MPP] 7+ showed dynamic response to variations in oxygen and ROS levels [118], whereas [Ru(dppz)(bpy) (bpy-Ar-MPP)] 5+ was used as a light switch probe for mitochondrial nucleoid imaging [119]. Bis-conjugation of the MPP sequence to an achiral Os(II) complex generated a NIR probe showing concentration-dependent cell death that could be tracked on the basis of probe localisation using confocal microscopy, offering a potential theranostic probe [158]. Receptor-Targeting Peptides The peptide sequence Arg-Gly-Asp (RGD) has been applied to mediate specific binding with integrin receptors and has been extensively used in cancer drug research because integrin receptors, such as α v β 3 , are overexpressed in certain tumour cells [159][160][161]. Adamson et al. first reported on RGD-labelled luminescent metal polypyridyl complexes [110]. Complexes of ruthenium(II) were conjugated to a linear RGD peptide with the objective of targeting the platelet integrin α IIb β 3 in order to report on integrin conformation status through emission anisotropy. Integrins are adhesion receptors and transmembrane proteins that undergo large conformational changes and clustering on activation, which alters their affinity for their ligands, and RGD is a peptide motif recognised by all integrins [162]. The resulting [Ru(N^N)(pic-RGD)] 2+ (where N^N = bpy or dpp) conjugates showed high binding affinity and specificity for α IIb β 3 , and through alterations in metal complex photophysical behaviour and anisotropy, it was possible to distinguish between different activation states of integrin. A two-step binding was determined for [Ru(dpp) 2 (pic-RGD)] 2+ with K d1 = 0.25 ± 0.29 μM and K d2 = 4.37 ± 0.82 μM. Additionally, confocal imaging revealed that both bpy-RGD and dpp-RGD conjugates selectively bind to CHO cells expressing the resting form of α IIb β 3 . A zinc phthalocyanine complex conjugated to a cyclic RGD peptide displayed dramatically higher cellular uptake in α v β 3 + U87-MG cells compared with the α v β 3 − MCF-7 cells [168]. A recently reported Ru-cRGD (cyclic RGD) conjugate exhibited strong two-photon luminescence and showed preferential accumulation in malignant cells with promising potential as a theranostic agent [167]. In an alternative system, dual-imaging nanoprobes were prepared by conjugating iridium(III), gadolinium(III) and RGD onto silica nanoparticles [169]. The water-soluble particles permitted in vitro and in vivo studies using confocal luminescence imaging and magnetic resonance imaging. Certain vectors can be used to target cells that overexpress key receptors such as folate, transferrin and somatostatin at the membrane surface in different disease states [170][171][172][173]. For example, enhanced uptake of a somatostatin-targeting Ru(II) conjugate was achieved in A549 cells overexpressing somatostatin receptors [173]. Although the conjugate did not act as a bioimaging probe, it showed excellent photosensitised toxicity with an IC 50 of 300 μM in the absence of light versus an IC 50 of 13 μM upon irradiation (PI > 23). Similarly, a redox active Pt(IV) complex was coordinated to the tumour-penetrating sequence (TKDNNLLGRFELSG) that targets membrane-bound heat shock protein 70 (memHSP70+), which is upregulated in colorectal cancer cells but is not usually found in healthy tissues [174]. The Pt(IV) complex is reduced in the cell to Pt(II), releasing the axial ligands and leading to cytotoxicity [175].
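For context, the phototoxicity index (PI) quoted for the somatostatin-targeting conjugate above is simply the ratio of the dark to light IC 50 values, i.e. PI = IC 50 (dark)/IC 50 (light) ≈ 300 μM / 13 μM ≈ 23, consistent with the reported PI > 23; larger values indicate a wider window between non-irradiated (imaging) conditions and photoactivated toxicity.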
This strategy of conjugation to tumour recognition or penetrating sequences can also be exploited in the design of targeted probes for bioimaging. For example, C-X-C chemokine receptor 4 (CXCR4) is overexpressed in over 23 different types of cancer and is more prevalent in malignant cancer tissue [176]. With this consideration, a rhenium(I) tricarbonyl complex was conjugated to a derivative of T140 (14 amino acid sequence), a known antagonist of CXCR4, and showed potential as an imaging agent for CXCR4 expression that was capable of differentiation between cancerous and healthy tissue [177,178]. Kuil and co-workers had previously presented an iridium(III)-peptide conjugate for FLIM-based visualisation of CXCR4 expression in cells by conjugating the complex to a series of Ac-TZ14011 peptides [179]. Vallaisamy et al. reported on an iridium(III) complex conjugated to the hexapeptide MKYMVm, the peptide agonist, which selectively targeted formyl peptide receptor 2 (FPR2) in live cells [180]. Formyl peptide receptor plays an important role in chemotactic signals and modulation of host defence and inflammation [181]. Gasser et al. have functionalised Ru(II) and Re(I) complexes with the peptide bombesin (BBN, 14 amino acid sequence) in order to enhance uptake and accumulation of the complexes selectively in cancerous cell lines [151,152]. The peptide BBN, Fig. 4, is structurally similar to the human gastrin-releasing peptide (GRP) and is recognised and internalised by GRP receptors that are overexpressed in some cancer cell lines [182,183]. A series of zinc phthalocyanine peptide conjugates were synthesised to target gastrin-releasing peptide (GRP) and integrin receptors [184] in order to initiate a targeted therapeutic effect. Agorastos et al. reported a rhenium tricarbonyl complex functionalised with acridine orange that could selectively stain the cell nucleus of both mouse melanoma (B16-F1) and human prostate adenocarcinoma cell line (PC-3) cells [185]. Conjugation of the complex to the bombesin peptide resulted in cell-specific uptake. The peptide conjugate was membrane-impermeable to B16F1 cells, but readily permeated into PC-3 cells. Cell-specific uptake was achieved by exploiting the lack of gastrin-releasing peptide (GRP) receptors in B16F1 cells, but which are expressed in PC-3 cells. Interestingly, conjugation to the peptide prevented the ability of the complex to enter the nucleus. A short peptide based on the endogenous opioid pentapeptide ligands was chosen for the preparation of luminescent heterobimetallic Ir(III)/Au(I) conjugates that were found to be membrane-permeable, and localise in the lysosomes of A549 cells [186]. A rather different type of recognition occurs in the case of peptide nucleic acids (PNAs). PNAs are non-natural DNA/RNA analogues that consist of N-(2-aminoethyl)glycine units which form a pseudopeptide backbone bearing the four nucleobases. They thus exhibit strong affinity for nucleic acid strands [187]. PNA conjugation has been explored in the development of luminescent rhenium-PNA conjugates for cell imaging and DNA targeting [188][189][190][191]. 
(Figure caption: Structure of a Re(I) complex conjugated to a nuclear localisation signalling (NLS) peptide or a bombesin (BBN) derivative peptide sequence, used to improve uptake of the complex by cancer cells overexpressing the gastrin-releasing peptide receptor (GRPR) [152].) Although it is outside of the scope of this review on peptide-driven luminescent metal complexes, it is important to note that there are also several reports on metal polypyridyl complexes conjugated to proteins. For example, Ru(II) complexes have been conjugated to protein G [192], cytochrome c [193] and human serum albumin (HSA) protein [194] for various applications. Chakrabortty et al. presented a protein-Ru(II) hybrid with photosensitising properties which targets the mitochondria [194]. In this case, the Ru(II) complex was conjugated to HSA protein and covalently decorated with mitochondria-directing triphenylphosphine groups, thus achieving cellular uptake and specific subcellular accumulation. Luminescent Metal Complex Peptide Conjugates Applied in Bioimaging Early examples of peptide-conjugated metal complexes in confocal imaging are described in studies carried out by Barton et al. Conjugation to octaarginine enhanced cellular uptake of rhodium(III) 5,6-chrysenequinone diimine (chrysi) and ruthenium(II) dipyrido-phenazine (dppz) complexes, and interestingly, attachment of a fluorescein moiety, in the case of the Ru(II)-dppz complex, led to nuclear localisation [52,130]. Our group has focused extensively on the design and development of peptide metal complex conjugates for bioimaging, sensing and theranostics. A series of otherwise cell-impermeable ruthenium(II), iridium(III) and osmium(II) complexes have been conjugated to cell-penetrating and signal peptides and have been studied using confocal microscopy, lifetime imaging and resonance Raman spectroscopy. Recently, a polyarginine Os(II) probe was used in imaging of pancreatic multicellular tumour spheroids, marking the first step towards the application of such luminescent peptide probes in tissue imaging [10]. A fused peptide consisting of a nona-arginine fragment attached to a sequence with anti-amyloid activity (RHVLPKVQA, an Aβ aggregation inhibitor) was attached via a histidine residue to a platinum(II) complex [196]. The resulting luminescent conjugate was studied in cells and was shown to stain the cytoplasm of HeLa cells. In vivo studies in Drosophila melanogaster showed that the luminescent platinum conjugate could permeate the blood brain barrier of these organisms and evenly distribute in the brain. This work highlighted the use of fused peptides as vectors to penetrate the blood brain barrier while also selectively targeting biorelevant molecules or, in this case, inhibiting the formation of amyloids. Using "click" chemistry, a rhenium(I) tricarbonyl complex was attached to a lipopeptide known to increase cell permeability [197]. The addition of the myristoylated HIV-1 TAT (myr-Tat) peptide to the rhenium complex substantially enhanced uptake in cells compared to the peptide-free complex and showed cytoplasmic accumulation with partial nucleoli staining. Nucleus, DNA/RNA The interaction of metal complex luminophores with nucleic acid materials has been the subject of extensive study since the 1980s. This has led to deep insight into the nature of metal complex-DNA interactions, expanding the prospects for both intracellular sensing and phototherapy by these species.
Increased understanding of the factors that can be used to promote metal complex permeation and organelle targeting has led to the application of such complexes to study nucleic acid materials in cells, with several studies now reporting on the nuclear uptake and staining of metal peptide conjugates used for imaging or sensing of DNA within live cells [198][199][200]. One of the earliest such studies was reported by Brunner and Barton, who functionalised rhodium complexes with octaarginine peptides to study DNA mismatches [52]. The rhodium complexes were capable of binding specifically to DNA mismatches, where they can photocleave the DNA adjacent to the mismatch. The rhodium complex conjugated to an octaarginine peptide appended to fluorescein was rapidly internalised within the cell and localised in the nucleus. It was noted that the presence of the peptide led to binding of matched DNA (electrostatic interaction between peptide and DNA), although the photocleavage only occurred at DNA mismatches as desired. High-resolution imaging of chromosomal DNA was achieved using a Ru-dppz NLS conjugate, [Ru(dppz)(bpy)(bpy-Ar-NLS)] 6+ , which also allowed tracking of the different stages of mitosis in HeLa cells using STED [117]. In a separate study, [Ru(tap) 2 (bpy-Ar-NLS)] 6+ showed nuclear penetration and DNA binding indicated by the extinguished complex emission [2]. In an example of the multimodal addressability of such complexes, they were confirmed to remain present in the nucleus after emission extinction by resonance Raman microscopy. With the aim of extending the application of Ru-NLS conjugates toward theranostics, it was shown that upon in situ photoirradiation, cellular destruction is accomplished, attributed to DNA oxidation by photo-induced electron transfer from a guanine base to the Ru(II) complex, analogous to a mechanism reported for related tap complexes in solution [201]. A rhenium complex conjugated to an NLS peptide was reported to exhibit nucleolar localisation and efficient singlet oxygen generation under light irradiation in polar (Φ = 0.25) or lipophilic (Φ = 0.75) environments [202]. This luminescent probe is attractive for dual application in both imaging and photodynamic therapy as it exhibits low dark toxicity (IC 50 = 35 µM), but enhanced toxicity under UV irradiation. In a separate study, the derivatised and caged Re(I) complex, Re-PLPG, was coupled to an NLS peptide; the conjugate showed penetration into sub-cellular compartments such as the nucleoli, thus allowing interaction of the complex with nucleic acids [152]. Metal complex peptide nucleic acid (PNA) conjugates are a useful approach to probe different nucleic acid strands due to their ability to hybridise to their complementary oligonucleotide strands with high specificity, which is advantageous in sensing and therapy. For example, the Re(I)-PNA conjugate, [(CO) 3 Re(pyridazine-PNA)(Cl) 2 Re(CO) 3 ], suitable for two-photon excitation (λ exc 750 nm), revealed cytoplasmic and nuclear staining in HEK-293 cells attributed to PNA-nucleic acid binding [188]. Notably, small concentrations of DMSO were required for uptake of the conjugate. The emission wavelength was substantially altered depending on sub-cellular localisation and could be used to differentiate between the cytoplasm and the nucleus. The difference in emission energy is attributed to the difference in polarity/rigidity between the different locales.
A follow-on study by the Licandro research group on related rhenium complexes conjugated to different PNA sequences revealed difficulties in cell studies, including poor solubility and endosomal entrapment [189]. These issues are frequently encountered in biological studies of metal complexes and can hinder sensing and imaging applications due to a low rate of cell uptake and off-target localisation. Mitochondria The nucleus is the primary location of DNA within the cell, but the mitochondrion is also an important repository. Although it contains much less DNA, the mitochondrial genome comprises 37 genes in total that encode proteins and RNAs critical for energy transduction. A histidine-binding Ir(III) complex was bis-conjugated to an HIV-Tat sequence and a mitochondrial targeting sequence derived from the mitochondrial protein cytochrome P450 [195]. The conjugate was membrane-permeable and efficiently targeted the mitochondria. In a recent publication, precision targeting of mitochondrial DNA in live HeLa cells was achieved using an MPP-driven light-switching Ru II -dppz complex [119]. Confocal laser scanning microscopy showed rapid cellular uptake of [Ru(dppz)(bpy) (bpy-Ar-MPP)] 5+ in live HeLa cells, and localisation to mitochondrial sub-structures was confirmed using luminescence lifetime imaging (Fig. 7). Solution titration with ctDNA showed that the DNA binding ability of the parent complex, mediated by dppz intercalation, is retained for the Ru II -dppz MPP conjugate. Additionally, an increased binding constant was reported, which was attributed to electrostatic interactions between the polycationic sequence of MPP and the anionic DNA backbone. The conjugate showed low cytotoxicity in the dark and under imaging conditions, thus facilitating mtDNA visualisation. Photo-induced toxicity was observed only under continuous and intense irradiation, enabling controllable initiation of cell death, making it an interesting prospect for theranostic applications. Recently, the successful conjugation of an osmium(II) complex to two mitochondrial-penetrating peptides was reported [158]. The bis-MPP conjugate was strongly confined to the mitochondria at and below concentrations of 30 μM but leached out of the organelles and into the cytoplasm over time. At increased concentrations, it showed cytoplasmic and even nucleoli staining, leading to cell death. This localisation switch was also reflected in the cell death mechanism: at 30 μM, loss of the membrane potential was observed, whereas at increased probe concentrations a more moderate effect on depolarisation and greater caspase activity were observed instead. Endoplasmic Reticulum (ER) The endoplasmic reticulum (ER) in eukaryotic cells is the site of synthesis and processing of many transmembrane and secretory proteins, of lipid synthesis and of calcium regulation. Accumulation of unfolded or misfolded proteins triggers an ER stress response, which regulates cell functions to either restore ER homeostasis or induce apoptosis of damaged cells. Complexes that target the ER may be used as imaging tools to study the endoplasmic reticulum and its processes, such as ER stress, or as therapeutic tools, as ER signalling pathways have been linked to various diseases, including cancer. Lysosome Lysosomes are subcellular organelles that are surrounded by a single membrane and are characterised by an acidic interior environment (pH ∼ 4.5 to 5), in contrast to mitochondria, for example, which are alkaline (pH ∼ 8).
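As a rough sense of scale (using only the definition pH = −log10[H+]), this lysosome-mitochondria difference of roughly 3 to 3.5 pH units corresponds to a proton concentration ratio of about 10^(8 − 4.5) ≈ 3 × 10^3, i.e. the lysosomal interior is on the order of a thousand-fold richer in protons, which is the contrast that the pH-responsive probes discussed later in this review exploit.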
Lysosomes play an important role in cellular processes, including homeostasis [203], energy metabolism [204], enzymatic activity [205] and autophagy [206], processes that are implicated in, but not limited to, inflammatory diseases. Monitoring changes in the lysosomal environment, including pH variations, can aid understanding of lysosomal function and dysfunction. Furthermore, lysosomes are emerging as attractive therapeutic targets [203]. For example, cyclometalated iridium(III) complexes were applied as pH-activatable cell imaging agents and photosensitisers, highlighting the dual application of such probes for photodynamic therapy and real-time therapeutic monitoring [207]. With the aim of developing potential theranostic agents, Fernández-Moreira et al. reported the preparation of peptide-linked bimetallic Ir(III)/Au(I) conjugates [186]. The luminescent properties of the iridium moiety permitted confocal imaging and tracking of the conjugates upon cellular uptake and lysosomal accumulation, and the coordination sphere of the gold appeared to influence cytotoxic activity. The cysteine-containing conjugate showed antiproliferative activity, which is thought to arise from the readily cleaved Au-S(cysteine) bond. A Ru(II)-cyclodextrin-RGD nanoassembly reported by Mao et al. was found to accumulate in lysosomes of integrin-rich tumour cells and trigger apoptosis through lysosomal damage, ROS elevation and caspase activation [166]. Uptake of nanoparticles via an endocytic mechanism frequently results in lysosomal accumulation. Endocytosis is generally associated with endosomal entrapment in early or late endosomes, which then fuse with lysosomes at a later stage if the cargo is not released to the cytoplasm. Recently, the bridged octaarginine conjugate, [Os-(R 4 ) 2 ] 10+ , was found to be taken up initially into the cytoplasm of A549 cells prior to accumulating in lysosomal structures at 30 μM/48 h [10]. This permitted both confocal imaging and luminescence lifetime mapping of the intracellular environment, including potential response to redox species as discussed later. Although lysosomal accumulation is desired in the context of therapy or redox and pH sensing, endosomal entrapment can hinder delivery of a luminophore probe or nanoparticle to the desired intracellular destination. The recently reported RuBDP nanoparticles were found to localise in late endosomes and lysosomes of A549 cells [208]. The particles ratiometrically responded to fluctuations in oxygen concentration and, interestingly, exhibited emission enhancement within 4 h following initial uptake, the origin of which is thought to reflect endosomal escape. Endosomal escape is a topic that is particularly important in drug delivery, and approaches addressing this challenge have focused on promoting endosomal membrane fusion and destabilisation or pore formation in the endosomal membrane [209,210]. In addition, several endosomal escape agents have been identified, such as chemical agents or viral- and bacterial-derived proteins and peptides [211]. Following protocols emerging in this domain, e.g. through modifying the particle composition to achieve pH-induced release [211,212], efficient endosomal escape and specific organelle targeting may further expand the application of nanoparticles and probes.
Sensing Capabilities of Peptide Metal Complex Conjugates Taking advantage of the excellent targeting capability of peptides, there are several examples of emissive metal complex conjugates that have been applied for sensing of bio-relevant species such as oxygen, as well as molecules including DNA and proteins. The characteristic luminescence lifetime- or intensity-based response typically reflects the interaction of the metal complex with these species within a cellular environment. Coordination of responsive ligands allows for the design and preparation of complexes with a responsive luminescence. For example, complexes of dipyrido[3,2-a:2′,3′-c]phenazine (dppz) and its derivatives exhibit no luminescence in aqueous solution, but emission is switched on in hydrophobic environments, such as upon DNA binding, leading to the design and development of a range of DNA "light-switch" dppz complexes [213,214]. Sensing of important molecular structures can be enhanced via coordination of the lipophilic diphenyl phenanthroline (dpp) ligand, for example, which also allows for cellular uptake and targeting of lipid-rich regions [75,110,150]. Incorporation of a targeting vector allows for targeted sensing. For example, conjugation of a Ru(II) sensor to a mitochondrial-penetrating peptide enables monitoring of local oxygen fluctuations in live cells using either emission intensity or lifetime imaging. Oxygen There are several methods applied traditionally for monitoring and measuring dissolved oxygen in biological systems, for example, Clark-type O 2 electrodes [215], electron paramagnetic resonance (EPR) probes [216] and microelectrodes or needle probes [216][217][218][219][220]. However, there is a demand for less invasive techniques for O 2 sensing and particularly for sensing modalities that can be readily followed dynamically intracellularly with as little interference with the cell as possible. For this reason, quenched phosphorescence-based O 2 sensors are particularly attractive for intracellular oxygen (icO 2 ) sensing. Ideal characteristics of an intracellular oxygen sensor include high oxygen responsivity, photostability, cell uptake efficacy, molecular brightness, biocompatibility, low cytotoxicity and subcellular targeting ability where desired. There are numerous examples of emissive probes that have been applied for oxygen sensing using lifetime- or intensity-based methods. As mentioned, a key advantage of lifetime sensing is that emission lifetime is largely independent of probe concentration. A drawback, though, is that phosphorescence lifetime imaging/sensing requires a microscope coupled with a lifetime/FLIM unit, which is rather a specialist technique and not a routine tool in many bio-laboratories, whereas intensity-based sensing can be performed using conventional instrumentation such as a fluorescence microscope or plate reader. Intensity-based measurements are best applied where the probe species is combined with a reference, which overcomes concentration and other artefacts inherent to an intensity-only readout. The choice of modality for O 2 sensing will also depend on the sample for analysis. For example, using a conventional plate reader, intensity-based measurements permit parallel analysis of monolayer cells exposed to various conditions in a single experiment while also carrying out the measurement in multiplicate.
For three-dimensional cell models or tissue samples, PLIM is typically the method of choice as it allows visualisation of spatial O 2 distribution throughout the sample. Complexes of ruthenium(II), iridium(III) and Pt(II) or Pd(II) porphyrins are amongst the most widely studied oxygen sensors owing to their long-lived triplet excited states which are highly susceptible to oxygen quenching, giving a characteristic lifetime-and intensity-based response to oxygen concentration, as described by the Stern-Volmer equation. Phosphorescent Pt(II) and Pd(II) porphyrins exhibit phosphorescence lifetimes ranging from 40 to 100 μs and 400 to 1000 μs, respectively [221]. Papkovsky and co-workers have worked extensively on Pt(II) and Pd(II) porphyrin probes which show good photostability and efficient quenching by oxygen [222][223][224][225]. The solubility of such probes in water and targeting ability can be improved by conjugation to protein cargos, PEG chains or cell-penetrating peptides [224,[226][227][228][229]. Dmitriev et al. presented a Pt(II) coproporphyrin conjugated to a peptide fragment derived from the antimicrobial bactenecin 7 peptide [227]. The conjugate showed efficient cellular uptake across several cell lines, cytoplasmic and mitochondrial accumulation and was used for monitoring intracellular O 2 levels upon exposure to metabolic stimuli reagents. The application of this conjugate and similar Pt(II) coproporphyrins in cells can be hindered by poor photostability and potential photocytotoxic effects. Ir(III) dyads such as the iridium-coumarin ratiometric probe, C343-Pro 4 -BTP reported by Yoshihara et al., have been developed for ratiometric intensity-based O 2 sensing [236]. In this report, the coumarin moiety (C343) is linked to the iridium (BTP) complex through a tetraproline amino acid linker, and upon excitation at 405 nm, energy transfer from C343 to BTP yields emission from both dyad components at 480 nm and > 610 nm, respectively. The phosphorescence emission signal of the iridium is quenched by oxygen, and the ratio of the emission from the dyad moieties exhibits an O 2 -dependent response both in solution and in live HeLa cells. In later studies, octaproline [234] and octa-and dodecaarginine [237] linkers were utilised in coumarin-iridium(III) dyads in order to enhance cellular uptake to enable ratiometric imaging of the oxygen gradient in HeLa cells. Several complexes of ruthenium(II) have demonstrated in cellulo oxygen response as molecular probes or part of a dual emissive dyad [238][239][240]. In the context of peptide conjugates, Keyes' group presented the octaarginine conjugate, [Ru(bpy) 2 (pic-R 8 )] 10+ [1,88], whose luminescence lifetime, similar to the parent complex, was oxygen-sensitive. Confocal imaging revealed rapid uptake of [Ru(bpy) 2 (pic-R 8 )] 10+ in myeloma cells and human blood platelets, and lifetime imaging was used for cellular oxygen mapping where, for example, the probe lifetime was shortest (~ 400 ns) when the conjugate localised in the cell membrane. This agrees with the increased solubility of oxygen in the cellular membrane. Although the emission lifetime of this complex is strongly oxygen-dependent, the quenching constant by O 2 is largely pH-independent. The advantage of peptide vectorisation is that it may enable real-time monitoring of local oxygen fluctuations at a specific cellular region or organelle, and cross-reactivity may be minimised if other parameters remain unchanged while the analyte of interest is varied. 
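To make the Stern-Volmer relation invoked above concrete, the short Python sketch below back-calculates an oxygen concentration from quenched and unquenched lifetimes; the values of tau0 and kq used here are placeholders chosen only to illustrate the arithmetic and are not calibration parameters for any of the probes discussed in this review.

# Stern-Volmer quenching: tau0/tau = 1 + K_SV*[O2], with K_SV = kq*tau0
# (the same functional form holds for intensities, I0/I)
def o2_from_lifetime(tau_ns, tau0_ns, kq_per_M_per_s):
    """Return the O2 concentration (M) implied by a measured lifetime."""
    tau0_s = tau0_ns * 1e-9
    tau_s = tau_ns * 1e-9
    k_sv = kq_per_M_per_s * tau0_s          # Stern-Volmer constant, M^-1
    return (tau0_s / tau_s - 1.0) / k_sv

# Hypothetical parameters, for illustration only:
tau0_ns = 1000.0    # unquenched lifetime, ns
kq = 1.0e9          # bimolecular quenching constant, M^-1 s^-1
print(o2_from_lifetime(400.0, tau0_ns, kq))   # -> 0.0015 M for these made-up numbers

In practice, intracellular measurements of this kind are usually interpreted as relative changes in oxygenation rather than absolute concentrations, since tau0 and kq inside a cell are not known precisely.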
For example, the mitochondrial-targeted Ru(II) conjugate reported by Keyes et al. showed dynamic response to changes in local oxygen concentrations and to elevated levels of reactive oxygen species using luminescence intensity and lifetime imaging [118]. The dinuclear ruthenium(II) probe was bridged across a mitochondrial-penetrating peptide yielding [(Ru(bpy) 2 ) 2 (phen-MPP-phen)] 7+ . Following exposure of HeLa cells to antimycin A, an inhibitor of the mitochondrial electron transport chain, PLIM studies showed that the average emission lifetime of [(Ru(bpy) 2 phen-Ar) 2 -MPP] 7+ in live HeLa cells was quenched from approximately 525 ns to 228 ns, as shown in Fig. 9. Cross-responsiveness with other environmental factors such as pH, proteins/enzymes and lipidic environments is a key limitation of lifetime imaging for single probes in the cell environment, and indeed of molecular probes in general, as it is difficult to decouple or calibrate the analytical probe and/or reference from other potential environmental influences in the complex cell matrix. However, in studies like these, where relative changes to analyte are followed rather than studies of absolute concentration, this may be less of an issue. Nanoparticle probes offer some advantages in this regard. Nanoparticle (NP)-based systems have inherent advantages in terms of their cell permeability provided they are between 50 and 200 nm and have appropriate lipophilicity and surface charge. Several NP O 2 sensor formats have been reported; they comprise either (1) an O 2 probe alone or (2) a probe reference pair that generates a ratiometric signal through Förster resonance energy transfer (FRET) or (3) a probe reference pair that has independent emission signals that do not require cross-communication but can be excited at a single wavelength [241][242][243][244][245][246][247][248][249][250]. Of the latter type, our group has reported a ratiometric core-shell nanosensor, Ru-BODIPY NP, where the BODIPY reference probe was confined to a polystyrene core, offering it protection from the environment and so a stable reference signal, and a ruthenium probe was conjugated to the poly-lysine shell exterior, allowing direct exposure to the environment and O 2 accessibility [241]. In the Ru-BODIPY nanoparticle system, the O 2 indicator and reference dye are spatially separated into a particle core and shell, thus limiting any potential cross-communication, and they are simultaneously excited at a single wavelength. The nanoparticles showed good ratiometric response to oxygen in aqueous media with a rate of quenching of 7.52 × 10 8 M −1 s −1 . The emission intensity ratiometric data showed moderately good linearity (R 2 = 0.9525) over a biologically relevant O 2 range. Following surfactant-mediated uptake of RuBODIPY NPs in CHO cells, lifetime imaging studies showed that the emission lifetime of the BODIPY dye, as expected, was unaffected by the surrounding intracellular environment in contrast to the ruthenium probe, demonstrating the potential of the core-shell approach to designing new ratiometric nanotools. Aiming to overcome the need for a membrane permeabilising agent, a ratiometric sensor was then developed where the Ru(II) oxygen sensor and reference BODIPY dye are co-encapsulated within the particle core, which is permeable to oxygen, and the particle exterior is decorated solely with a poly-l-lysine shell [208].
This approach indeed permitted uptake of the self-referenced O 2 nanoparticles in live mammalian cells, demonstrating the impact of even relatively modest surface modification of the particle on uptake. Importantly, the particles were suitable for both non-invasive hypoxia imaging using confocal microscopy (xyλ scanning) and for quantitative ratiometric intensity-based measurements of oxygen in cellulo using a plate reader assay. The isolation of the probe to the particle core protected it from environmental factors other than oxygen, but may impact dynamic response as the oxygen must diffuse through the particle matrix to reach the probe. pH pH is an important regulator of metabolic processes in the cell and is believed to play a role in signalling. pH varies across the cell organelles, and its homeostasis may be a marker of cell health. Therefore, intracellular and organelle pH is an important target analyte and has been the focus of a number of studies using metal luminophore probes. A pH- and oxygen-sensitive iridium(III) complex was prepared by coordinating two cyclometalated ligands [2-(2,4-difluorophenyl)pyridine; dfpp] to an Ir(III) centre along with the pic(COOH) ligand, 2-(4-carboxyphenyl)imidazo[4,5-f][1,10]phenanthroline, carrying a terminal carboxyl moiety, thus permitting amide coupling to an octaarginine sequence in order to improve aqueous solubility [29]. The parent complex exhibited a lifetime of approximately 674 ns in degassed organic media which was reduced to 200 ns in degassed aqueous media at pH 6.9. Cytotoxicity studies showed that both the Ir(III) parent complex and conjugate were cytotoxic towards SP2 and CHO cell lines. The cytotoxic character of iridium complexes has been reported in a number of studies [69,251], and it is likely that increased cytotoxicity compared to other transition metal luminophores is the result of its lipophilic nature inducing rapid uptake and wide distribution of the conjugate within cells. Chao et al. reported an iridium(III) pH sensor that was coordinated to ligands containing morpholine groups [252]. They observed that morpholine promoted mitochondrial targeting, and the pH dependence of the emission intensity of the probes was explored in HeLa cells where the extracellular pH was adjusted from 6.0 to 8.0 in high-K + media; equilibration with the cell interior was achieved by application of nigericin, a membrane-associating antiporter ionophore for K + and H + . The emission intensities from the complexes within the cell were observed to respond to pH in the range of 6.0-8.0. On stimulation of apoptosis in the cells, using the mitochondrial uncoupler carbonyl cyanide m-chlorophenylhydrazone (CCCP), the emission intensity of the probes in the mitochondria was also observed to modulate; however, although this was attributed to pH change, other quenching species may also evolve in the mitochondria as a consequence of uncoupling. As previously described, the octaarginine-driven conjugate [Ru(bpy) 2 (pic-R 8 )] 10+ was applied as a probe for oxygen mapping using lifetime imaging [1,88]. The emission lifetime of this complex is strongly oxygen-dependent, but the quenching constant by O 2 is largely pH-independent, which supports the use of the probe in O 2 mapping. Conversely, resonance Raman spectroscopy, and therefore the Raman signature signal of the probe, is strongly pH-dependent and is insensitive to O 2 , thus enabling use of the probe in pH mapping using resonance Raman spectroscopy.
The probe permits multi-parameter monitoring and mapping of the intracellular environment using a single probe, single excitation and two imaging techniques, enabled by the large Stokes shift of Ru(II) polypyridyl complexes. Ligands such as pic in this complex, or dppz and bpy, exhibit signature Raman signals when they participate in the MLCT transition with which the excitation is resonant; in the case of the ionisable pic ligand, its Raman signature grows into resonance depending on the pH of the environment and the ionisation state of the pic imidazole residue, thus providing a distinctive pH marker. Biorelevant Molecules: Receptors, Proteins, Enzymes Lo et al. presented a series of cyclometalated Ir(III) complexes containing a perfluorobiphenyl (PFBP) moiety and their respective conjugates, afforded through reaction of PFBP with the cysteine moiety in a four-amino-acid sequence (FCPF, known as "π-clamp") [155]. Following this π-clamp-mediated cysteine conjugation, novel Re(I) conjugates were prepared and applied as imaging agents but also as enzyme sensors [253]. An early study, presented by Stephenson et al., described conjugation of a rhenium complex to the peptide fMLF, which is known to target the formyl peptide receptor (FPR) [254]. A qualitative comparison of the cell uptake and distribution between a known FPR-targeting fluorescent probe (fluorescein-labelled fNLFNTK) and the rhenium-fMLF conjugate suggested that the rhenium probe successfully targeted the FPR. As mentioned earlier, a rhenium(I) tricarbonyl complex was conjugated to a derivative of T140, a known antagonist of CXCR4, a chemokine receptor which is overexpressed in cancer cells [177]. The rhenium conjugate was successful in sensing CXCR4, evident from the strong luminescence signal detected from cells expressing CXCR4, whereas no luminescence was detected from cells lacking the receptor. Although rhenium peptide conjugates have been exploited as luminescent probes for interrogating different cell receptors, there has been an alternative motivation for synthesising rhenium peptide conjugates as structural analogues of "hot" technetium complex conjugates. In several such studies, the radioactive technetium peptide conjugate was utilised to study specific cell receptors via radioimaging techniques, for instance, SPECT [255][256][257][258]. Although not exploited for its sensing capabilities, a zinc phthalocyanine complex conjugated to a receptor-targeting peptide, LARLLT, was reported to exhibit high selectivity for the epidermal growth factor receptor (EGFR), which tends to be overexpressed on the surface of cancerous cells [259]. Conjugation to the peptide increased the photodynamic efficacy and selectivity of the complex against cancer cells with different receptor expression levels. Reactive Oxygen and Nitrogen Species (ROS/RNS) Reactive oxygen species and reactive nitrogen species are highly reactive, often radical, species generated as part of metabolic processes within the cell. They are potentially injurious to the cell if not regulated. ROS and RNS are numerous and include, for example, superoxide (O 2 ·− ), hydroxyl ( · OH), peroxyl (RO 2 · ) and alkoxyl (RO · ) radicals, as well as nitric oxide, peroxynitrite, nitrate, nitrite and nitrogen dioxide. RNS and ROS generation pathways are inter-dependent. For example, peroxynitrite (ONOO − ) is the product of a reaction between nitric oxide (NO) free radicals and superoxide and, at abnormal levels, can induce oxidative changes in intracellular molecules, including DNA and proteins [260].
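Written explicitly, the peroxynitrite-forming step described above is the radical-radical combination ·NO + O 2 ·− → ONOO − ; it is this product whose abnormal accumulation drives the oxidative modification of DNA and proteins noted here.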
Monitoring changes in oxygen, RNS and ROS levels within the cell and particularly within the mitochondria, one of the key cellular sources of such species, is invaluable in understanding both normal physiology and disease, and also in understanding toxicity and therapeutic response. As mentioned earlier, the mitochondria-targeting Ru(II) probe, [(Ru(bpy) 2 ) 2 (phen-MPP-phen] 7+ , is capable of responding to changes in local O 2 concentrations and also to elevated ROS levels [118]. Recent studies have highlighted the potential of Os(II) polypyridyl complexes for detection of oxidative damage and intracellular reactive oxygen species [10,261,262]. The absence of oxygen sensitivity in the case of Os(II) complexes but potential redox sensitivity offers an advantage in their application as intracellular sensors over complexes of ruthenium or iridium. For example, phosphorescence lifetime imaging studies revealed that the emission lifetime of the polyarginine Os(II) conjugate, [Os-(R 4 ) 2 ] 10+ , was found to vary with intracellular localisation [10]. When confined to the lysosomes and surrounding cytoplasm, [Os-(R 4 ) 2 ] 10+ exhibited reduced lifetimes in comparison to when it was initially taken up into the cytoplasm of cells. For example, the dominant amplitude component of the decay was measured as 92.2 ± 2.9 ns upon cytoplasmic uptake and 37 ± 1.8 ns upon lysosomal accumulation. This lifetime quenching is likely due to the presence of redox-active species as the probe luminescence was not sensitive to oxygen or pH changes [10]. Figure 10 shows the confocal and lifetime imaging of [Os-(R 4 ) 2 ] 10+ upon lysosome localisation at 30 μM/48 h. Lifetime imaging studies were also carried out following uptake and accumulation of the probe within pancreatic 3D multicellular tumour spheroids, thus highlighting the suitability of such probes for monitoring metabolic changes in cells, spheroids or tissues without interference from oxygen. The mitochondria-localised ruthenium(II) complex-cyanine (Ru-Cy5) scaffold, although peptide-free, is a good example of the application of such transition metal probes for in cellulo sensing and imaging, in this instance, of peroxynitrite in cells [263]. This energy transfer-based probe constituted a Ru(II) complex as the energy transfer donor and Cy5 as energy transfer acceptor. Following cellular uptake and mitochondrial localisation in HeLa cells, the emission of Cy5 was decreased in the presence of ONOO − as a result of oxidative cleavage of the polymethine bridge which interrupts the energy transfer between Ru(II) and Cy5. The Ru-Cy5 system showed low cytotoxicity, efficient mitochondrial accumulation and good selectivity for ONOO − (over other reactive species). Although there are some additional examples of non-peptide metal complexes which exhibit a luminescence response to NO [264,265] or radical species such as hypochlorite [266], there are to date limited reports of peptide metal conjugates which have been applied for monitoring intracellular redox species. Cell Membrane Markers Again reflecting the versatility of Ru(II) complexes as multiparameter and multimodal imaging tools, [Ru(dppz) 2 (pic-Arg 8 )] 10+ was used in confocal luminescence and resonance Raman imaging [109]. Owing to the dppz ligands, the complex behaved as a molecular light switch where the luminescence of the complex is extinguished in aqueous solution, due to hydrogen bonding to the phenazine nitrogens, but switched on in lipid vesicles. 
Resonance Raman intensity mapping revealed that the octaarginine conjugate crossed the membrane and distributed throughout the cell, whereas the parent complex accumulated in the cell outer membrane. PLIM mapping of a related osmium octaarginine conjugate, [Os(bpy) 2 (pic-Arg 8 )] 10+ , revealed that the emission lifetime of the complex changed in response to the intracellular environment [131]. For example, the average lifetime was found to be 11.6 ± 0.4 ns in the cytoplasm of CHO cells and 14.5 ± 0.5 ns in SP2 cells. Additionally, a lifetime of 13 ± 1.5 ns and 18.8 ± 0.6 ns was observed for the membrane of CHO and SP2 cells, respectively. As the lifetime of the complex is oxygen-independent, this response may be due to differences in the lipid packing of the cell membrane of each cell line. As mentioned earlier, a variation in lifetime was observed for the Ru(II) analogue, [Ru(bpy) 2 (pic-Arg 8 )] 10+ , conjugate which was attributed to the increased solubility of O 2 in the cellular membrane, thus reflecting cell membrane oxygenation [88]. Figure 11 illustrates the PLIM mapping of [Os(bpy) 2 (pic-Arg 8 )] 10+ and [Ru(bpy) 2 (pic-Arg 8 )] 10+ in SP2 cells. Conclusions Metal complex luminophores have been widely explored as probes of biological molecules, particularly of DNA in its variety of motifs, since the 1980s. In the past decade, their putative application as probes within the cellular environment has been realised, and metal complex luminophores are now rapidly extending beyond the scope of in vitro studies to diverse applications in cellular imaging, intracellular sensing and theranostics. The beauty of metal complexes is that they are highly synthetically versatile and can be tailored to their application by tuning their photophysical properties (e.g., molecular brightness, NIR emission), through modification of the metal centre and/ or the coordinated ligands, through relatively facile synthetic methods. They can be tailored to meet the demands of the technique used for imaging or sensing whether that is phosphorescence lifetime imaging microscopy, luminescence intensity-based sensing using a plate reader or vibrational imaging such as Raman mapping. Critically, metal complex luminophores can be readily conjugated to vectorising functionalities to facilitate cellular uptake and intracellular targeting. Peptides are a valuable means of promoting such permeation and targeting within cells, and their efficacy in this regard has been known for many years in drug delivery. As signal molecules, they can drive their cargo very precisely within the cell. Peptides can confer improved solubility and lower cytotoxicity on their cargo and can be synthesised readily via high throughput and often automated solid-state synthesisers. Conjugation of metal complexes to cell-penetrating and signal peptides has been used for the efficient delivery of complexes in cell monolayers, but also recently in multicellular spheroids. Peptides, to date, have been very effective in driving complex cargo to the cell and targeting, but the mechanism is not fully elucidated; for example, the role of the counter-anions accompanying the charged conjugate seem to have a strong effect on membrane permeability, but remains to be fully explored and exploited. The possibilities are extensive, and metal complex luminophore peptide conjugates are likely to find increasing application in theranostic applications. Although peptides can promote fairly rapid cellular uptake (i.e. 
< 1-24 h) required for imaging, the application of the metallo-peptide conjugates in bioassay experiments has been less explored. In terms of the metal complex photophysics, key future challenges lie in maximising probe brightness, as inorganic luminophores do not compete well with organic fluorophores in this regard. Maximising the absorbance cross section and tuning analytically relevant responsive luminescence will ensure that, even if they do not match organic probes in terms of emission quantum yield, they offer versatile alternatives. Another challenge is the in cellulo limit of detection and limit of quantification of the species that the probe is sensing. The influence of the incredibly complex cytosol composition or organelle environment complicates photophysical effects and sensing capability. Metal complex luminophores translate well to confocal luminescence imaging techniques that permit focused interrogation of a specific cell region to yield site-specific biorelevant information with good optical sectioning, and indeed they look likely to provide important solutions in super-resolution methods too. In particular, lifetime imaging is the method of choice for visualising spatial oxygen distribution in cells and particularly in multicellular spheroid samples. However, metal complex luminophores are less widely tested in the more common and conventional methods used in biological labs, in particular as probes for plate reader-based bioassays in 96-well format, a widely used and high-throughput approach in which emission, lifetime or absorbance is collected from many cells across each well. For such applications, highly selective probe targeting or switchable probes would be required, with limited contribution from emission of the complex outside the region of interest. The targeting ability of the conjugated dye would also have to be extremely precise. Few of the dyes described above yet meet this criterion of highly specific localisation required for a well-format assay. Nonetheless, they have proven to be effective imaging probes for investigating cellular membrane dynamics, sensing receptor expression and monitoring redox species in mitochondria, pH fluctuations and DNA interactions under imaging conditions in cellulo, and, with further advances in precision targeting, they look likely to emerge as powerful tools for diverse bioanalysis.
On duality of color and kinematics in (A)dS momentum space We explore color-kinematic duality for tree-level AdS/CFT correlators in momentum space. We start by studying the bi-adjoint scalar in AdS at tree-level as an illustrative example. We follow this by investigating two forms of color-kinematic duality in Yang-Mills theory, the first for the integrated correlator in AdS$_4$ and the second for the integrand in general AdS$_{d+1}$. For the integrated correlator, we find color-kinematics does not yield additional relations among $n$-point, color-ordered correlators. To study color-kinematics for the AdS$_{d+1}$ Yang-Mills integrand, we use a spectral representation of the bulk-to-bulk propagator so that AdS diagrams are similar in structure to their flat space counterparts. Finally, we study color KLT relations for the integrated correlator and double-copy relations for the AdS integrand. We find that double-copy in AdS naturally relates the bi-adjoint theory in AdS$_{d+3}$ to Yang-Mills in AdS$_{d+1}$. We also find a double-copy relation at three-points between Yang-Mills in AdS$_{d+1}$ and gravity in AdS$_{d-1}$ and comment on the higher-point generalization. By analytic continuation, these results on AdS/CFT correlators can be translated into statements about the wave function of the universe in de Sitter. I. INTRODUCTION The study of scattering amplitudes and on-shell observables in quantum field theory has revealed new mathematical structures and symmetries which are obscured in off-shell, Lagrangian formulations [1]. The duality between color and kinematics, and the associated double-copy relations, are prominent examples that give fundamentally new insights into the perturbative structure of quantum field theory [2]. These ideas indicate that the dynamics of gauge theories and gravity, when they are both weakly coupled, are governed by the same kinematical building blocks. Additionally, these ideas have led to novel computations for loop-level graviton amplitudes, gravitational wave patterns, and string theory amplitudes. Color-kinematic duality and double-copy have also been applied to theories with seemingly no relation to gauge or gravity theories. We refer the reader to [3] for a recent, comprehensive review of these topics. Concurrent to these advances in flat space scattering amplitudes, there has also been an intense focus on the study of holographic correlators. The most concrete example of holography has been formulated in asymptotically anti-de Sitter (AdS) spacetimes [4]. While there is no notion of an S-matrix in AdS, one can consider AdS scattering experiments that are dual to the correlation functions of a conformal field theory (CFT) living on its boundary. In this context, the terms "AdS amplitude" and "CFT correlator" are typically used interchangeably. There are now a variety of methods to compute holographic correlators, including ideas from Mellin space [5][6][7], the conformal bootstrap [8,9], and harmonic analysis in AdS [10,11]. In this work, we will build on recent developments concerning CFT correlation functions in momentum space in order to understand how ideas from flat space amplitudes can be imported to curved spacetimes. One motivation to study holographic correlators in momentum space comes from their close connection to the wave function of the universe [37][38][39], which can be used to compute late-time cosmological correlators.
Inspired by the modern amplitudes program, there is an ongoing systematic program to compute de Sitter invariant correlators, which is known as the cosmological bootstrap [40][41][42][43][44][45][46]. Interestingly, holographic and cosmological correlators possess a total energy singularity when the norms of all the momenta sum to zero. The coefficient of this singularity is exactly the scattering amplitude for the same process in flat space [29,47]. In other words, holographic and cosmological correlators contain within them information about flat space amplitudes. It is then natural to wonder if one can generalize the rich structure of color-kinematic duality and double-copy to AdS and cosmological correlators. Color-kinematic duality implies that flat space scattering amplitudes can be arranged in such a way that the kinematic numerators of the scattering amplitude have the same algebraic properties as the color factors. That is, whenever the color factors of an amplitude obey a Jacobi identity, the corresponding kinematic factors can also be chosen such that they obey the same relation. The double-copy construction, which relates Yang-Mills amplitudes to gravity, then corresponds to replacing the color factors of a Yang-Mills amplitude with the corresponding color-kinematics-obeying numerators. Since color-kinematics has led to computational and conceptual advances in flat space scattering amplitudes, it is natural to hope that similar advances can be made for curved space correlators. In this work, we take the initial steps in generalizing and testing color-kinematic duality for AdS/CFT correlators, or equivalently for the cosmological wave function. We will propose two different formulations of color-kinematics in AdS momentum space. The first method corresponds to imposing this duality on the full, integrated correlator. In the examples considered here, we find it is always possible to choose a set of numerators such that color-kinematics holds for the integrated correlator. The second method corresponds to imposing color-kinematics directly on the AdS integrand. This is inspired by recent work on scattering in a plane-wave background [48] and on the study of celestial amplitudes [49]. We also comment on its connection to double-copy. This note is organized as follows. In section II we will study the scalar bi-adjoint theory in AdS at tree-level. This will serve as a simple example to illustrate how color-kinematics works in AdS and how the BCJ relations are modified. In section III, we study color-kinematics for Yang-Mills four-point functions in AdS, both at the level of the integrated correlator and the integrand. Finally, in section IV we comment on the color KLT relations, which connect the bi-adjoint and Yang-Mills theories, and the double-copy relations between Yang-Mills and gravity at three and four-points in AdS. Note: After this work was completed, [50] appeared which partially overlaps with our results. II. BI-ADJOINT SCALAR To start, let us recall that the bi-adjoint scalar theory consists of scalars φ aA which are charged under two different SU (N ) global symmetries (see [51][52][53] and references therein). We will use lowercase and capital Latin letters to distinguish the two groups. The action for this theory is simple and takes the following form: where R is the Ricci scalar. In AdS 1 , the metric g for the Poincaré patch is Here we take η µν to be the mostly plus, flat space metric.
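The displayed form of the Poincaré-patch metric is not reproduced in this extraction; for orientation, its standard form is given below, where L denotes the AdS radius (the original may set it to one) and η_{µν} is the mostly plus boundary metric referred to in the text:

$$ ds^2 = \frac{L^2}{z^2}\left(dz^2 + \eta_{\mu\nu}\, dx^{\mu} dx^{\nu}\right), \qquad z > 0 . $$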
The scalar φ aA is dual to a boundary operator 2 O aA with 1 We will focus on AdS computations for concreteness in the rest of the note, although our results can equally be interpreted in terms of the dS wave function with suitable analytic continuation [37]. 2 The bulk theory does not include gravity, so the boundary theory does not have a stress-tensor and will be non-local. A particularly simple case is when the scalar is conformally coupled, which corresponds to, 4) or ν = 1/2. Next, we need the bulk-to-bulk propagator for scalars in AdS: (2.5) where J is the modified Bessel function of the first kind and k ≡ |k| = √ k 2 is the norm of the boundary momenta k. When computing correlators, we will always take the external momenta to be spacelike. When we set ν = 1/2, the p integral can be computed in closed form, but we find the p integral representation of the propagator simplest for practical computations. By taking one point to the boundary, we find the scalar bulk-to-boundary propagator: where K is the modified Bessel function of the second kind. Finally, to each interaction vertex we have the factor iλf abc f ABC . A. Test case: AdS6 In this section we will compute tree-level Witten diagrams for the bi-adjoint theory in AdS 6 . For simplicity we will study the conformally-coupled scalar, i.e. we set ν = 1/2. The conformally-coupled, bi-adjoint theory in AdS 6 is particularly simple because at tree-level it is conformally invariant. It is convenient to introduce a set of AdS Mandelstamlike invariants. First, from Lorentz invariance we know a general n-point correlation function will depend on n(n − 1)/2 dot products of the momenta, k i · k j , where i, j = 1, ..., n − 1. 3 We then define the AdS Mandelstam invariants to be: Our definition of the AdS Mandelstams is motivated by their connection to the usual Mandelstam invariants in the flat space limit. To see this, we first construct a null, (d + 1)-dimensional momentum by appending the norm k to the vector itself, k ≡ (ik, k). (2.8) We then define the flat space invariants, At four-points we use the standard notation s = s 12 , t = s 23 , and u = s 13 . Finally, we recall that the flat space limit in AdS momentum space [29] is defined by analytically continuing in the norms k i such that the total "energy" E T → 0. Here E T is the sum of all the norms, Now we turn to computing Witten diagrams for the full, color-dressed correlator. To set the notation, we will use M (1, 2, 3, 4) to denote the color-dressed correlator and A(1, 2, 3, 4) for the color-ordered correlator. To keep the expressions compact, we will suppress the global symmetry indices. The exchange diagram for conformally coupled scalars in Figure 2 has been computed -in [54] for the dS wave function and in [35] for an AdS/CFT correlator -so we will quote the final answer here: where we use similar conventions as in flat space: s =s 12 ,t =s 23 ,ũ =s 13 . (2.13) The s-channel color and "kinematic" factors are with repeated indices summed. The t and u-channel factors are defined by performing the following replacements: As a consistency check, the AdS four-point function reduces to the familiar flat space amplitude when we take the residue at E T = 0: (2.16) For the bi-adjoint theory, we have made an arbitrary split between the color and kinematic factors. 
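Several of the displayed equations in this passage are likewise missing from the extraction. As a hedged restatement of what the surrounding prose defines in words (the overall sign conventions below are our assumption), the null momenta, flat space invariants and total energy are

$$ \hat{k}_i \equiv \left(i k_i,\ \vec{k}_i\right), \qquad \hat{k}_i^2 = 0, \qquad s_{ij} \equiv -(\hat{k}_i + \hat{k}_j)^2, \qquad E_T \equiv \sum_i k_i , $$

with the flat space limit reached by analytically continuing to $E_T \to 0$; the AdS Mandelstam invariants $\tilde{s}_{ij}$ are defined in a displayed equation that is not reproduced here.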
Both n i and c i are by definition the group theory factors for SU (N ) and therefore obey the Jacobi identities: (2.17b) While this example is trivial in terms of deriving a color-kinematics duality, it does demonstrate in a simple way how this duality will differ between AdS and flat space. To see this, we use the color relation c t = −c s −c u to write the color-dressed correlator as a sum of colorordered correlators: We can further reduce this expression by using the identity n t = −n s − n u to find the following linear relations: In flat space it is impossible to invert these equations and solve for the numerators directly in terms of the colorordered amplitudes. This degeneracy implies a further identity among the color-ordered amplitudes, which are known as the BCJ relations. In flat space this degeneracy follows from the fact s + t + u = 0 for massless scalars. However, in AdS we haves +t +ũ = 0 and we find: (2.21b) In the limit E T → 0 the explicit factor of E T cancels against the pole in the color-ordered correlator, while the sum in the denominator vanishes,s +t +ũ → 0. Therefore, the kinematic numerators naïvely diverge in the flat space limit. 4 To avoid this, we impose the following relation on the flat space amplitude: If we take the same combination of color-ordered AdS correlators as what appears in the BCJ relation, we see the right hand side is non-zero, but vanishes in the flat space limit: (2.23) The observation that color-kinematics does not always yield BCJ relations has also been made in flat space. For example, color-kinematic duality for massive, flat space amplitudes does not necessarily imply additional linear relations among the color-ordered amplitudes [55]. Instead, requiring that the BCJ relations hold -or that there are only (n − 3)! linearly independent, colorordered amplitudes at n-points -imposes constraints on the masses of the particles. These constraints proved important in constructing valid examples of massive double-copy. Similar observations about color-kinematics and BCJ relations have also been made for ABJM [56,57] and for amplitudes in the flat space bi-adjoint theory with off-shell momenta [51]. Finally, it is straightforward to generalize our results to higher points. For example, using the results of [35], we find that the five-point, color-dressed correlator is: where for brevity we defined These objects are related to Mandelstam variables by and c (5) s are contractions of color structures which are defined explicitly in Appendix A. One can also see that the AdS expression has the correct pole structure in the flat space limit: In Appendix A, we give the expansion of the five-point, color-dressed correlator in terms of the color-ordered correlators. Color-kinematics in the bi-adjoint theory is trivial at all points and we find again that there is a square, non-degenerate matrix relating the color-ordered AdS correlators and the numerators. That is, if we organize the correlators and numerators into vectors, A α and n β , we have the linear relation, where the matrix S is invertible. Therefore, the 5-point numerators can be written as a linear combination of color-ordered correlators. In the flat space limit, det S → 0 and one instead finds BCJ relations among the colorordered, flat space amplitudes. B. Generalization to AdS d+1 In this section we will study the bi-adjoint theory in general dimensions. 
The immediate difficulty one faces is that in generic dimensions, the exchange Witten diagrams for φ 3 theory do not take a simple form, even for conformally coupled scalars. For example, in AdS 4 the exchange Witten diagrams already involve dilogarithms [35,58,59]. On the other hand, in flat space the tree-level amplitudes take a simple form in all dimensions, with poles corresponding to particle exchange. Therefore, for general AdS d+1 , it may not be clear how to identify the "numerators" which should obey the color-kinematic relations. One remedy for this is to simply define a numerator n i as the overall coefficient of a tree-level exchange diagram whose color-factors have been removed. If we consider conformally-coupled, bi-adjoint scalars in AdS 6 , this gives the same definition of the numerators as before, up to a factor of E T . Since we will not need the explicit form of the integrated diagram, in this section we will let the boundary scalars have a generic conformal dimension. The color-dressed correlator for bi-adjoint scalars is where the s-channel exchange Witten diagram, with the color-factors removed, is: The t and u-channel diagrams are defined by the same permutations as before. At this point we can repeat the analysis of the previous section with minor changes. Everywhere we see a factor of (E Ts ) −1 we replace it with W s (k i ), and similarly for the t and u-channel exchanges. Once again, we can use the color and kinematic relations to rewrite the d-dimensional color-ordered correlators in terms of the numerators, and then invert this relation. Suppressing the momentum arguments for compactness, we find: and Similarly, the BCJ relation in AdS becomes: In general dimensions, we do not have the explicit form of W s,t,u in AdS momentum space, although there do exist results in Mellin space [5,6]. In our context, all we need is that in the flat space limit each Witten diagram becomes a flat space exchange diagram, e.g. W −1 s → s. We therefore see that the AdS BCJ relation has a nonzero right hand side which vanishes in the flat space limit. Alternatively, we can define numerators by using the p integral representation of the bulk-to-bulk propagator inside the Witten diagram, With this representation, we can use the flat space language and identify the numerators n i as multiplying certain poles in the momenta. The only difference is that in AdS we have a continuum of poles which depend on p. This reflects the well-known fact that in CFTs the generator of time translations has a continuous spectrum [60]. For the bi-adjoint theory, the introduction of an integrand is not necessary since by definition the kinematic factors are independent of p. However, this representation will be useful once we turn to Yang-Mills in general dimensions. III. YANG-MILLS THEORY In this section we will study color-kinematics for Yang-Mills in AdS. The study of Yang-Mills in AdS 4 will mirror exactly the analysis of the bi-adjoint scalar theory in AdS 6 . In both cases the theories are conformal at treelevel, and the correlators take similar forms. Then we will study Yang-Mills in general AdS d+1 and propose how color-kinematics is manifested at the integrand level. Throughout this section, we will take the axial gauge, A z = 0. With this choice, the propagators are [28,61]: and 2) where the tensor T µν (k, p) = η µν + kµkν p 2 . 
Technically, the bulk-to-boundary propagator also comes with the tensor structure T µν (k, √ −k 2 ), but this simply projects onto polarization vectors transverse to k. Throughout this section we assume the polarizations are transverse, and therefore drop the projector. The interaction terms in the axial gauge have the same momentum dependence as in flat space: where the color factors are defined as before. To keep expressions compact, we will define: for the transverse polarization vectors µi i , and similarly for the quartic interaction. We also raise and lower the µ, ν indices using the flat space metric η µν . To take into account that we are studying a theory in the Poincaré patch, we also need a factor of z 4 for each interaction vertex. We summarize the Feynman rules in Figure 3. A. Test case: AdS4 As we mentioned above, the advantage of studying Yang-Mills in AdS 4 is that the theory is conformal at tree-level. For example, the s-channel exchange diagram in Figure 4 has been computed in both AdS [32,33] and dS [42] and takes the following simple form: 5 4) and the other exchange diagrams are found by permutation. The contact diagram is even simpler and is given by the flat space vertex times the total energy pole: Next, we want to rearrange the full color-dressed, Yang-Mills result into the form In order to do this, we follow the flat space prescription and split the contact diagram into three pieces, corresponding to the color structures c i . For example, the s-channel piece of the contact diagram is where we defined Then to bring this term into the form (3.6) we multiply W YM cont,s bys/s, and similarly for the t and u-channel pieces of the contact diagram. With these manipulations, we can bring the AdS correlator into the standard form (3.6), for which n s reads The term proportional tos comes from the quartic interaction. As a reminder, the k are the null flat space momenta, which we have used to make the expression more compact. In this form the AdS correlator does not obey the color-kinematic relations. One can check that n s + n t + n u = 0 but that we have n s + n t + n u → 0 in the flat space limit. The fact we have color-kinematics in this limit follows from the fact that the individual Witten diagrams have the correct flat space limit. Since color-kinematics holds automatically for flat space, fourpoint, Yang-Mills amplitudes, the corresponding AdS numerators must also obey color-kinematics in the limit E T → 0. However, the AdS numerators are not unique and it is possible to define a new set of numerators related by a generalized gauge transformation: (3.10c) With these new numerators, the full correlator is unchanged: where we used the color Jacobi identity c s + c t + c u = 0. Therefore, if we choose: the new numerators n s automatically satisfy colorkinematic duality. Here we see that our freedom in choosing a generalized gauge transformation such that the duality holds relies on havings +t +ũ = 0. A similar feature at four-points was also seen for massive, flat space amplitudes [55]. With this generalized gauge transformation, we can now repeat exactly the analysis we did for the conformal bi-adjoint scalar in AdS 6 . By imposing both the color and kinematic identities, c s +c t +c u = 0 and n s +n t +n u = 0, we can express the numerators in terms of the color- The only difference in comparison to the bi-adjoint scalar is that we had to shift the numerators in order for colorkinematics to hold. B. 
Generalization to AdS d+1 In this section we will study Yang-Mills in AdS d+1 . As with the bi-adjoint scalar, we face the problem that the gauge theory Witten diagrams are not known in closed form for general dimensions. For the bi-adjoint scalar, one solution was to simply express the full correlator as a sum of exchange diagrams. In the study of Yang-Mills in AdS, we face the additional challenge of the contact interaction, which we need to re-express such that it looks like a sum of exchange diagrams. Without a closed form expression in general dimensions, we can not simply multiply bys/s to rewrite the contact diagram in this way. Our resolution for this problem is to study the AdS Witten diagrams under the p and z integrals. We then want to understand color-kinematics at the level of the AdS integrand, which looks similar in structure to a flat space amplitude. For example, using the explicit form of the cubic vertices, we find the exchange diagram is: where t µν is a product of three-point vertices, and Φ s is the product of s-channel bulk-to-boundary propagators, In analogy to flat space, these can be thought of as our external wavefunctions. One difference in comparison to flat space is that our bulk-to-bulk propagator G YM µν (k, z 1 , z 2 ) is not proportional to the metric η µν . Instead, we have extra factors which comes from our choice of axial gauge. The expression for the Witten diagram is simplest if we use the p integral representation of the propagator: The way to interpret this expression, when making the analogy with flat space, is that the term is the scalar piece of our propagator. Heuristically, we can think of p as the radial momentum, although it only becomes a true component of the momentum in the flat space limit. For the s-channel piece of the contact diagram, we only have z-integrals: (3.20) To make this look like an exchange diagram, we introduce p-integrals via the following identity: The same identity was used in [28] to prove the validity of the BCFW recursion relations in AdS. We then find It is now clear how to rewrite the contact diagram such that it looks like an exchange diagram, we multiply by (k1+k2) 2 +p 2 (k1+k2) 2 +p 2 under the integral. We now have the full s-channel piece of the color-dressed correlator: where the s-channel numerator is, Finally, the color-dressed correlator is the sum over the three-channels: In order to bring all the numerators under one integral, one can switch to the position-space representation for the propagator and add plane-wave factors to the external wavefunctions Φ s,t,u . Then the full, color-dressed correlator is: While this representation nicely groups different terms together, we will find it convenient to work with the momentum space representation for the propagators. In general, one can also consider external wavefunctions which do not have translation invariance in the d flat directions, in which case the x-integral representation is more useful. Now we want to find a generalized gauge transformation such that the numerators obey color-kinematics duality, but the full correlator is left invariant. To do this, we define the following "scalar" exchange diagram for the s-channel, (3.28) and similarly for the t and u-channels. If we define the shifted numerators as: then M s shifts as: Therefore, as long as ∞ 0 dp 2 2 Ω(k i , p) is finite, the color Jacobi identity guarantees this redefinition leaves the full correlator invariant. 
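The p-integral identity invoked above (its displayed form is missing from this record) is presumably, up to normalization, the standard trick of trading an energy denominator for an integral over a radial momentum. Whatever the paper's exact conventions, the following numerical check confirms the mathematically exact relation 1/(a + b) = (2/π) ∫₀^∞ dp p² / ((a² + p²)(b² + p²)), which is the kind of identity that lets a contact term be dressed with the same p-dependent factor as an exchange diagram:

```python
import numpy as np
from scipy.integrate import quad

def energy_via_p_integral(a, b):
    """Numerically evaluate (2/pi) * Int_0^inf dp p^2 / ((a^2+p^2)(b^2+p^2))."""
    integrand = lambda p: p**2 / ((a**2 + p**2) * (b**2 + p**2))
    val, _err = quad(integrand, 0.0, np.inf)
    return 2.0 / np.pi * val

# Each pair should reproduce 1/(a + b).
for a, b in [(1.0, 2.0), (0.3, 5.0), (2.7, 2.7)]:
    print(energy_via_p_integral(a, b), 1.0 / (a + b))
```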
We find that the shifted numerators n i satisfy color-kinematics duality if we set: . (3.31) As an example, in AdS 4 we have: , (3.32) and one can check that ∞ 0 dp 2 2 Ω(k i , p) is finite. We should emphasize, unlike in our previous analysis for Yang-Mills in AdS 4 or the bi-adjoint scalar in AdS d+1 , here the color-kinematic numerators are functions of both p and k. Therefore, we cannot directly express the numerators in terms of the integrated, color-ordered correlators. It would be interesting if there is another formulation of AdS color-kinematics where such relations hold in general d. It would also naturally be interesting to find a representation, e.g., in Mellin or position space, where the numerators are not directly expressible in terms of the AdS color-ordered correlators, and instead there are new BCJ-like relations for AdS correlators. IV. COLOR KLT AND DOUBLE COPY In this section we will study simple examples of the KLT and double-copy relations in AdS. The simplest double-copy and KLT relations roughly state that YM=YM ⊗ bi-adjoint. More precisely, the color-dressed Yang-Mills correlator can be expressed as a product of color-ordered Yang-Mills and bi-adjoint correlators [51,62]. 7 We then discuss double-copy for gravity at 3-and 4-points. To start, we can consider Yang-Mills in AdS 4 , where the color-dressed correlator has the form: This is a simple example of double-copy because we can think of c i as the numerators for the bi-adjoint theory and n i are of course the numerators of the Yang-Mills theory. Assuming color-kinematics holds for the Yang-Mills numerators, we can directly express them in terms of the color-ordered Yang-Mills correlators. Similarly, we can also express the color-factors c i in terms of the colorordered, bi-adjoint correlators. To find a KLT relation, we then write each of the numerators in terms of the corresponding color-ordered correlators. 7 For an example of KLT in cosmology, see [63]. There are two important differences in comparison to the flat space, color KLT relations. The first is that here the AdS KLT matrix has rank two. This is expected because in AdS we have two linearly independent colorordered correlators at four-points, while in flat space we only have one independent amplitude. The second is that we have an extra degree of freedom: when writing the color factors c i in terms of the bi-adjoint correlators, we are free to choose the spacetime dimension d. In flat space we have a similar freedom, but there the scalar amplitudes look the same in all dimensions, while in AdS the form can change dramatically. For example, while we take the Yang-Mills theory to live in AdS 4 , we are free to express c i in terms of the conformally-coupled, bi-adjoint scalar theory in AdS 6 . With this choice the color KLT relation takes the form: Here the superscripts in K (d1,d2) give us the dimension of the AdS di+1 spacetime in which the bi-adjoint scalar and the Yang-Mills theory live, respectively. 8 The AdS KLT matrix becomes singular in the flat space limit, reflecting the additional linear relations for flat space amplitudes. Here we restricted Yang-Mills to AdS 4 , so that its integrated correlator took a simple form, but it would be interesting to extend this discussion to integrated correlators in general dimensions. Next, we will study how double-copy may work at the integrand level for general dimensions and for gravity. 
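For orientation, the flat-space relations that the AdS color KLT construction above deforms are, schematically and up to convention-dependent normalizations (this is standard flat-space lore, not a statement about the AdS kernel K^(d1,d2)): the four-point gravity amplitude is a product of color-ordered Yang-Mills amplitudes weighted by a Mandelstam invariant, and the KLT kernel can be identified with the inverse of the matrix of bi-adjoint partial amplitudes,

```latex
% Flat-space reminders (conventions suppressed):
M_4^{\rm grav}(1,2,3,4) \;\propto\; s_{12}\, A_4^{\rm YM}(1,2,3,4)\, \tilde A_4^{\rm YM}(1,2,4,3)\,,
\qquad
K[\alpha|\beta] \;\sim\; \big(m[\alpha|\beta]\big)^{-1}\,,
```

where m[α|β] denotes the doubly color-ordered partial amplitudes of the bi-adjoint scalar.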
Below, we reproduce the s-channel piece of the AdS Yang-Mills integrand: Given this expression, we can double-copy down to the bi-adjoint scalar by taking n s and replacing it with a SU (N ) color factor c s . This yields an exchange Witten diagram for a scalar in AdS d+3 dual to a boundary scalar 8 For d 1 = 5 and d 2 = 3, we have T t(s +t +ũ) 2 s(t +ũ)(2sũ +t(t +ũ))sũ(2s + 3t)(t +ũ) sũ(s(t + 2ũ) + 2t(t +ũ))ũ ũ(2s +t)(s +t) +stũ +t(s +t) 2 . The shift d → d + 2 and the identification ∆ = d follows from matching this expression with the integrand for a scalar exchange diagram in AdS. Specifically, to find the dimension of the AdS spacetime and the conformal dimension of the scalar, we match the arguments of the Bessel functions and the overall powers of z. As a consistency check, if we set d = 3 we find a scalar of dimension ∆ = 3 in AdS 6 , i.e. the conformallycoupled scalar. By comparing eqn. (2.12) and eqn. (4.1), we see explicitly that making the replacement n s → c s for the AdS 4 Yang-Mills correlator gives the AdS 6 biadjoint scalar correlator, up to overall factors such as the couplings. It is tempting to conjecture that if we replace c s in the Yang-Mills integrand with n s we get the schannel contribution to graviton four-point scattering in The shift from d → d − 2 is once again found by comparing the z dependence of the resulting expression to that of the graviton propagators, which we will give explicitly in a moment. Here we assume that the AdS theory is given by Einstein gravity. We should emphasize though, it is not clear that the AdS graviton amplitude necessarily takes this form. We have already seen that at tree-level the AdS gauge-boson exchange diagram is more complex than the corresponding flat space one and there is a similar increase in complexity for graviton diagrams [30,34,42]. Therefore, it is possible that we need a more complicated gluing of the numerators, rather than a simple squaring. We hope to come back to this problem in future work. Furthermore, in flat space we know that squaring colorkinematic Yang-Mills numerators is guaranteed to give a graviton amplitude that obeys the correct Ward identities [3]. Ward identities in AdS/CFT are more complicated, owing to contact terms in the CFT correlator, and it would be interesting to understand what relations have to be imposed on gauge-theory numerators such that the double-copied correlator obeys the graviton, or stresstensor, Ward identities. With these caveats in mind, the motivation for this double-copy comes from studying AdS gauge and graviton scattering at three-points. The three-point correlator for Yang-Mills in AdS d+1 is while the three-point correlator for Einstein gravity in AdS d+1 is [28,29,34] Here we wrote the graviton polarization tensor as a product of null polarization vectors, µν = µ ν . We also need the graviton bulk-to-boundary propagator: (4.9) As with the Yang-Mills bulk-to-boundary propagator, we have dropped an overall tensor structure which projects out polarizations along the momenta k. Then, if we define the three-point numerator to be we find In other words, squaring the three-point numerator for Yang-Mills in AdS d+1 yields the three-point correlator for Einstein gravity in AdS d−1 , up to some overall convention dependent factors. Alternatively, one can square the numerator and modify by hand the z-dependence so that the double-copied correlator also comes from gravity in AdS d+1 . V. 
CONCLUSION In this work we explored the viability of colorkinematics and double-copy in AdS momentum space. We found that color-kinematics for AdS four-point functions appears trivial, one can always perform a generalized gauge transformation such that the duality is valid. We also found that it is possible to express the numerators directly in terms of the color-ordered correlators and that the BCJ relations are modified by an extra term which vanishes in the flat space limit. We used the relation between the numerators and integrated correlators to find the AdS color KLT relation and discuss how double-copy in AdS may work at the integrand level. There is clearly more work that needs to be done on this subject. In this note we focused on AdS momentum space because it has a natural connection to the wave function of the universe in cosmological spacetimes. There has also been recent beautiful work on the relation between momentum space correlators in AdS and dS and a new set of cosmological polytopes [54,[64][65][66][67]. For color-kinematics however, it could turn out that another representation is more useful, including twistor formulations [68][69][70][71][72], spinor-helicity in stereographic coordinates [24,25,73], Mellin space [5], or of course position space. Recent work on the scattering equation formalism [52,74,75] generalized to AdS [76,77] will also prove invaluable in studying color-kinematics and double-copy in AdS. Based on related results for massive scattering amplitudes [55], we expect it is important to find a representation of AdS/CFT correlators such that color-kinematics, plus some possible assumptions on the spectrum, implies additional relations for the colorordered correlators. In flat space, color-kinematic duality and the doublecopy relations extend to theories other than gauge or gravity theories. For instance, the nonlinear sigma model has been studied in [78,79]. Also, it was shown that the Lagrangian of the nonlinear sigma model exhibits a manifest duality between color and kinematics [80]. It would be interesting to study these theories in AdS and see if color-kinematics can be understood at the Lagrangian level. Finally, there has been progress in computing loop level AdS correlators through bulk and boundary unitarity methods [31,[81][82][83][84][85][86]. 10 In flat space, generalized unitarity and double-copy relations can be systematically used to study higher-loop graviton amplitudes and reveal new ultraviolet cancellations [89]. Loop computations in AdS is in its infancy in comparison to its flat space counterpart and it is conceivable that color-kinematics and double-copy could present a new way to study AdS loops. Here we will study the five-point color-dressed correlator for the conformally coupled, bi-adjoint scalar in AdS 6 . 11 The color-dressed correlator can be written as a sum over 15 exchange diagrams: M (1, 2, 3, 4, 5) = c 12345 n 12345 W 12345 + crossed-channels , (A.1) where the color factors are defined as The n ijk m are defined in the same way, but for the second SU (N ) global symmetry. W ijk m is the Witten diagram in the corresponding channel with the color and kinematic factors removed. We follow the same ordering as in figure 2. The explicit expression for the five-point Witten diagram is where ω ± are defined in eqn. (2.25). For completeness, the 15 diagrams are given by: There are nine independent Jacobi identities among the color factors: Similar relations can be found for the other 5 colorordered correlators by comparing eqn. 
(A.1) and eqn. (A.5), or equivalently by using the color-ordered Feynman rules [3]. Using that the numerators n ijk m also obey the Jacobi relations, we can relate the 6 independent, color-ordered correlators to 6 independent numerators. If we organize the numerators and colorordered correlators into vectors, for some matrix S αβ . In flat space, the corresponding matrix is degenerate due to the flat space BCJ relations. In AdS, S αβ instead is a full-rank matrix, which can be checked using the explicit form of the five-point Witten diagram. We then find, This is the generalization of eqn. (3.13) to 5−point amplitudes. The explicit expression for S αβ reads as
8,176
2020-12-18T00:00:00.000
[ "Physics" ]
Advanced Controllers Implementation for Speed Control Analysis of Two Mass Drive System Article History:Received:11 november 2020; Accepted: 27 December 2020; Published online: 05 April 2021 Abstract:The speed control of two mass drive systems is presented in this paper. For the purpose of analysis, PI, Linear Matrix Inequality controller and Internal Model Controller are chosen. Tuning of PI controller is achieved by Zeigler Nichols method, whereas the LMI and IMC are adapted with the conventional method of tuning. The various simulations are carried out in order to analyze the performance index of these controllers and to fix the best controller, and the results are compared. I. INTRODUCTION The idea of drive systems is used by most processing industries, such as paper mills, textile mills and elevator control mechanisms, which means that a motor has been linked via a shaft to a load. These implementations are commonly known as electro-mechanical system [1]. It is noted that the system's electrical energy will rotate the mechanical shaft parts that will drive the attached load at the end of the shaft. In the case of two mechanical coupling mass drive systems, the two individuals, motor and load, are connected, so the load will appear to rotate with the motor. But, because of its non-linear behaviour, it is basically extremely difficult to achieve this synchronization [2] - [3]. Because of the long shaft between the motor and load for its mechanical coupling, this form of nonlinearity will occur in two mass drive systems, there is sure to be a speed difference between them. The speed oscillations between the motor and the load result in a decrease in the efficiency of the output and in the stability of the system. And because of the coupling shaft stresses, it also negatively influences the coupling. For nonlinear systems, stability needs to be precisely described, [4]. Therefore, special control techniques have to be investigated and improved to obtain the stable system and to improve performance. The use of the Linear Matrix Inequality Controller is part of the implementation process. LMI's controller architecture is based on mathematical derivation. This requires the conversion process of the transfer function to variables of the state space. Then the ultimate process for the controller is to find the benefit. The Algebraic Riccati Equation (ARE) could be used to measure this benefit. In the LMI controller, the PI controller is used. With the assistance of the Zeigler Nichols system, this PI controller is tuned. The mathematical derivation also describes the Internal Model Controller. The mathematical derivation involves the method of decreasing the function of higher order transfer to the function of first order transfer. This process is called as half rule method. The conversion of the domain is then done with the help of a reduced first order equation using the pulse transfer function. The s-domain transfer function is converted to z-domain transfer function. For the purposes of selecting the best controller for the two mass drive systems, the simulated results are then determined. This is chosen by means of index of the controllers. The organization of the rest of the paper can be described as follows in [II] process dynamics, and [III] controller implementation, continues with [IV] simulation results and [V] conclusion of the work. II. PROCESS DYNAMICS The characteristics and functions of the plant actually depends on the process's dynamics. 
A proper analysis of plant dynamics can lead to the rise of a better model for the process in concern [2] - [3]. The schematic representation of two mass systems has been given in Fig. 1. The load (M2) that also has the same M1 specification is driven by Motor 1 (M1). The parameters of the process involved in two mass drive systems are given as notation and described below. Jm -Moment of inertia of motor (kg-m 2 ) Jd -Moment of inertia of load (kg-m 2 ) ω1 = W1 -motor speed (rad/s) ω2 = W2 -load speed (rad/s) τe = Te -motor torque (N-m) τs = Ts -shaft torque (N-m) τL = TL -load torque (N-m) Kmd -stiffness constant From above equations it is clearly mentioned that the state of the system depends on motor speed, load speed, motor torque, shaft ( with its dynamics, the block diagram representation of two mass systems. As the control input to the system, the electromagnetic torque of the motor is used and the angular velocity of the motor is taken from the system as the output[2]. The electromagnetic torque of the motor drives the shaft-linked load. Generally, this mixture is known as the electromechanical system[3]. As damping losses are typically considered to be relatively low, they are ignored without affecting the simulation analysis significantly [12]. Furthermore, in the simulation analysis, nonlinear parts of the actual drive system, such as backlash, mechanical hysteresis and nonlinear friction, were not taken into account. For further simulations, the arrival state model is known as a plant model. Implementation and review of the controller was performed on the related plant model. III. CONTROLLER IMPLEMENTATION In process sector applications, the controllers play the main role. In the device, the controllers will make the process run in the anticipated area. The deviation occurs in the process due to external and internal disturbances of intent and non-intention, causing the system to have different problems and sometimes making the environment treacherous. Different controllers are introduced to the given electromechanical system[12]-[13] to avoid this uncertainty and make the system operate in stable conditions. Not only does the implementation of the controller end with the selection of the correct controller, but it is based on the tuning method for the chosen controller. In this work, PI, LMI and IMC controllers were introduced by adapting Ziegler Nichols and traditional tuning methods to two mass drive systems. A generic control structure that obeys the loop feedback control mechanism is a proportional integral controller [12] and is often followed in industrial control systems. The structure of PI control is given in the equation below (4) As the difference between a measured process variable and a target fixed point, a PI controller calculates a "error" value. By changing the process control inputs, the controller attempts to mitigate the error. The Ziegler Nichols tuning method has set optimized controller gain values such as Kp and Ki Clearly, in process industry applications, the PI controller offers better control efficiency and is used as a common benchmark controller to analyze various control systems. LMI and IMC controllers have been introduced and results will be compared with benchmark controllers in order to achieve maximum control efficiency greater than the PID. Fig. 3. Block Diagram of Two-Mass Drive System using PI controller The Linear Matrix Inequality Controller has been introduced and the benchmark controller will be compared with the results. 
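To make the model and controllers above concrete, the following Python sketch simulates the standard damping-free two-mass model implied by the parameters listed in Section II (Jm, Jd, Kmd; load torque taken as zero) under a PI speed loop, and computes a state-feedback gain from the Algebraic Riccati Equation along the lines described for the LMI controller. All numerical values (inertias, stiffness, PI gains, Q and R weights, the 1000 rpm reference) are illustrative placeholders rather than the paper's tuned values:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative parameters (not taken from the paper).
Jm, Jd, Kmd = 0.5, 0.5, 30.0      # motor inertia, load inertia, shaft stiffness

# Standard damping-free two-mass model, states x = [w1, w2, Ts]:
#   Jm*dw1/dt = Te - Ts
#   Jd*dw2/dt = Ts - TL   (load torque TL set to zero here)
#   dTs/dt    = Kmd*(w1 - w2)
A = np.array([[0.0, 0.0, -1.0 / Jm],
              [0.0, 0.0,  1.0 / Jd],
              [Kmd, -Kmd, 0.0]])
B = np.array([[1.0 / Jm], [0.0], [0.0]])          # input: motor torque Te

# --- PI speed control of w1 (placeholder gains, not Ziegler-Nichols values) ---
Kp, Ki = 8.0, 20.0
w_ref = 1000.0 * 2 * np.pi / 60.0                 # 1000 rpm reference in rad/s
dt, T = 1e-4, 2.0

x, integral = np.zeros(3), 0.0
for _ in range(int(T / dt)):
    e = w_ref - x[0]
    integral += e * dt
    Te = Kp * e + Ki * integral
    x = x + dt * (A @ x + B.flatten() * Te)       # forward-Euler integration
print("PI: final motor/load speed [rad/s]:", x[0], x[1])

# --- State-feedback gain from the Algebraic Riccati Equation ---
# K = R^{-1} B^T P, with P solving  A^T P + P A - P B R^{-1} B^T P + Q = 0.
Q, R = np.eye(3), np.array([[1.0]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("State-feedback gain K:", K)
```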
LMI has less peak overshoot as an advanced controller and hence becomes a benefit for the controller. The improvement of the proposed controller, namely the LMI controller, is given with the help of feedback gain in the generalization and accompanied by the tuning of the PI controller [5]. The gain in feedback is calculated by the aid of the Algebraic Riccati Equation (ARE) and the gain measurement. The ARE is conferred by A T P + PA -PAR -1 B T The required matrix could be found in the implementation of this Algebraic Riccati Equation and after this the gain is determined by the following, Then the formula calculates the gain and then the PI controller is tuned with the help of the Ziegler Nichols method and the response is obtained for the LMI controller. The Internal Model Controller is designed with the help of the principle of the internal model, which states that control can only be achieved if the control system en-capsulate some representation of the process to be controlled, either implicitly or explicitly. The design process begins with the process of deriving the pulse transfer function of the two mass drive systems for the transfer function. The pulse transfer function is then factorized into two distinct parts, i.e., invertible and non-invertible parts. With the help of the first order filter tuning parameter, this is followed by deriving the Gimc(z). The simulation is then completed and the reaction is collected. The PI controller is tuned to give the best output here. The optimized values of Kp, Ki gain were determined using the tuning technique of the closed loop Ziegler-Nichols. The efficiency of two PI controller mass drive systems is shown in Fig. 6. It is important to remember that the synchronization between the engine and the load is perfectly accomplished using the PI controller. The conventional tuning method of the LMI controller has been tuned and the result is shown in Fig. 7. From the answer, it is clear that in the case of two mass drive systems, the maximum peak overshoot is less, which is the key index to be discussed. Fig. 7. Response of two mass drive system with LMI controller Using the conventional tuning method, the IMC controller was tuned and the result is shown in Fig. 8. It is clear from the reply that the controller is better than the LMI controller and the PI controller. Therefore, the IMC's response using the two mass drive systems shows no overshoot. Table 1 shows that the values for the different performance indices are 1000 rpm for the PI, IMC and LMI controllers when the set point reference is set. By identifying the appropriate controller for the process, the goal of this work converges to one point. This can be achieved by comparing the IMC and the controller with the standard PI and LMI controller benchmarks. IMC is satisfied with the results. In this respect, the LMI has only 10 percent peak overshoot and IMC has no peak overshoot. The peak overshoot is certainly the main index to be addressed in the two mass drive system. IMC is better than the other controllers with reference to the other parameter specifications. The Internal Model Controller is therefore the most appropriate one for the two mass drive systems. V. CONCLUSION This paper focused on the implementation of controllers for two mass drive systems. The plant model was obtained and a number of simulations were performed using PI, LMI and IMC controllers on that model. For PI, LMI and IMC, Zeigler Nichols tuning and conventional tuning methods were respectively implemented. 
In order to identify and fix better controllers for two mass drive systems, the objective then converges. Compared to PI and IMC controllers, the controller performance indices and the performance index of the LMI controller were then calculated. It is concluded that LMI is a deserving controller that provides two mass drive systems with satisfied performance in both transient and steady state behavior. In the manufacturing and real-time implementation of two mass drive systems, this modeling study and its simulation results may be used. In the two mass drive systems, the study can be extended to design controllers with various disturbances such as shocks, vibrations, torsion, etc.
2,564.8
2021-04-10T00:00:00.000
[ "Engineering" ]
Forensic Linguistic Inquiry into the Validity of F0 as Discriminatory Potential in the System of Forensic Speaker Verification

Introduction

In Indonesia, some court cases involve speech recordings of suspects as legal evidence. In the courts, an expert is invited to explain, for the verification, whether the speech recordings were spoken by the suspects or not. This task is called Forensic Speaker Verification (FSV). It is one of the areas in the application of Forensic Linguistics as the provision of linguistic evidence [1]. An FSV system includes an analysis of speech recordings to verify the voice of a criminal. In the system, fundamental frequency (F0) is one of the acoustic features extracted from the speech data [2]. These features are then analyzed as the discriminatory potential. It is important to note that there is always a question, in the context of human speech sound and its forensic relevance, as an inquiry into its validity [3]. This critical question brings the requirement to keep reviewing any available system for forensic speaker verification or identification; the review can later be used to improve the system. In line with that, this paper aims at reviewing the method of the Indonesian FSV system in terms of the extracted acoustic feature F0, which is used as the discriminatory potential.
Method The data are derived from Indonesian speech sounds of two (2) telephone conversations with Speakers LR (f;21) and MR (m;23) in the first conversation; Speakers DS (f;22) and RD (m;22) in the second.The data were recorded in Centre for Studies in Linguistics, Bandar Lampung University.The conversation is designed as a simulation for a corruption case.The speech data are categorized as Unknown (Uk), following the scenario used in Indonesian FSV system [4].For Known (K) category, twenty (20) words spoken by each speaker are recorded to be paired with the same words in Uk sample.Praat [5] is used for the acoustic analysis of K and Uk samples.Oneway analysis of variance (ANOVA) and Likelihood Ratio (LR) approach are used to evaluate the findings statistically.sample, it is already known who the speaker is.Meanwhile, the Uk sample is derived from speech data of a recorded telephone conversation which is not known yet who is speaking.The main purpose of the comparison is to find out who is really the speaker in the recorded telephone conversation.The evaluation provides some evidence if the speaker is the suspected person or not.In presenting the evidence, there are four main steps to conduct data analysis such as pairing, tagging, acoustic features extraction, and statistical analysis.In the analysis, F0, F1, and F2 are observed to find out the patterns of habitual pitch range, minimum-maximum pitch, first-second formant, and speaking style for pitch and formant.As the evaluation of the current Indonesian FSV system, there is a claim that it "meets the demand for presenting legal evidence in Indonesian court" [6]. To review the method in Indonesian FSV system, we scrutinize F0 in the data which are used as the discriminatory potential. For each speaker, we paired up twenty (20) words in Uk samples with those in K samples (Tables 1 & 2).Therefore, in both K and Uk samples, there are a total of one hundred sixty (160) words., there are a total of one hundred sixty (160) words.Among the four speakers participating in the telephone conversations in the simulation for a corruption case, RD is treated as a suspect.Then, each word is analyzed in terms of its acoustic feature -F0. The following figures exemplify the acoustic features of the word rancangan 'design' spoken by Speaker RD in K (Figure 1) and Uk samples (Figure 2).It is in default pitch setting: 75 -500 Hz.F0 contours as the physical correlates to the speaker's pitch are represented in blue lines in the second window in Praat display.The red contours, in the same window, represent the speaker's formant frequencies.Since the speech sounds are spoken by the same speaker, we presume that pitch values of RD's speech in K and Uk samples will match.However, it is found that in the pitch analysis of its mean and standard deviation (SD) of 20 words spoken by RD in K and Uk samples, only few values match (Figure 3). In the pitch analysis of minimum and maximum values, it is also found that the maximum values in the Uk samples do not match their K counterparts (Figure 4).Meanwhile, the minimum values in K and Uk samples only match at several points.In addition, in one-way analysis of variance (ANOVA), it is also found that the pitch of each word spoken by RD in K and Uk samples is significantly different (p<0.05).RD's F0s are significantly different in both K and Uk samples (Figure 5). 
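As an illustration of the statistics described above, the sketch below runs a one-way ANOVA on per-word mean F0 values from the K and Uk samples and then forms a deliberately simplified univariate-Gaussian likelihood ratio. The F0 values and the assumed background population distribution are synthetic placeholders (the paper's measurements appear only in its figures), and real forensic LR systems use considerably richer models than this caricature:

```python
import numpy as np
from scipy.stats import f_oneway, norm

# Synthetic per-word mean F0 values (Hz); placeholders, not the paper's data.
f0_known   = np.array([118, 124, 121, 130, 127, 119, 125, 122, 128, 126])  # K sample
f0_unknown = np.array([142, 150, 139, 155, 147, 151, 144, 149, 153, 146])  # Uk sample

# One-way ANOVA: is mean F0 significantly different between K and Uk?
F, p = f_oneway(f0_known, f0_unknown)
print(f"ANOVA: F = {F:.2f}, p = {p:.4g}")   # p < 0.05 -> significantly different

# A simplified univariate-Gaussian likelihood ratio:
# LR = p(Uk | same speaker as K) / p(Uk | background population)
mu_k,  sd_k  = f0_known.mean(), f0_known.std(ddof=1)
mu_bg, sd_bg = 130.0, 25.0      # assumed background F0 distribution (placeholder)
log_lr = (norm.logpdf(f0_unknown, mu_k, sd_k)
          - norm.logpdf(f0_unknown, mu_bg, sd_bg)).sum()
verdict = "supports same speaker" if log_lr > 0 else "supports different speakers"
print("log LR =", round(log_lr, 2), "->", verdict)
```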
Journal of Forensic Sciences & Criminal Investigation Further, for the evidence evaluation using Likelihood Ratio (LR) approach [2], we analyze the probability of the samples.The result indicates that the pitch in the data can be categorized as 'very strong evidence against' the fact that the K and Uk samples are derived from the same speaker (LR<0.0001). From the results in one-way ANOVA and LR approach, it can be inferred that F0 cannot be used as a discriminatory potential in the experimental data.ANOVA says that the pitch in K and Uk samples is significantly different.And LR also indicates that the sounds are derived from different speakers.In the contrary, they are from the same speaker, i.e.RD.We highlight three main problems that may arise in terms of fundamental frequency (F0) used as the discriminatory potential based on the experimental data following Indonesian FSV system.The first problem is about the default setting in pitch range for analysing connected speech [7].The F0 reading with the default setting may not show the actual value of the speaker's F0 [Figure 6].The second problem is about the telephone transmission [8].The transmission could have effects [9], especially on the vowel quality [10] that may result in the discrepancy in values of the speaker's F0.The third problem is about the lack of theoretical background for the Indonesian FSV system which uses F0 as one of its discriminatory potentials. Conclusion F0 as the physical correlates to a speaker's pitch is analyzed to review the method in Indonesian FSV system.In the experimental data, although the speech data are derived from the same speaker (RD), only few values in pitch analysis of its mean and SD in K and Uk samples match.Maximum and minimum pitch values also show the same result.Furthermore, using one-way ANOVA and LR approach, the study proves that it fails in providing the evidence for F0s derived from the same speaker.Therefore, it is suggested that more studies should be proposed to look at another strategy if F0 is still used for Indonesian FSV system, e.g. 
using pitch alignment features [11], adjusting advanced pitch settings and framing sentences by using the intonation system [7], and considering the effect of pitch span on intonational plateau [12].Highlighting some functional aspects in the conversational structure in spontaneous dialogue [13] is also necessary to consider in getting the required K and Uk samples.Moreover, insights on phonological variation for discriminatory aspects in forensic speaker verification [14] and other related aspects in forensic phonetics [15,16] and forensic linguistics [17] are suggested to the system as some of theoretical backgrounds to provide linguistic evidence in legal settings.The experimental study on Indonesian FSV system leads us to propose a scenario for forensic speaker verification [Figure 7].In the system, K and Uk samples are paired for the same words.For tagging, syllables are derived from the paired words.Starting from pairing to the end of tagging, a control is conducted to scrutinize the effects of telephone transmission.Then, it moves forward to the acoustic feature extraction.Starting from the acoustic feature extraction to the end of statistical analysis, a filter is implemented to get high qualified performance.The filter is in terms of what acoustic features will be analyzed, what the theoretical backgrounds are for the analysis, and how the factors of reliability and validity can be achieved.Finally, the result is ready to present as legal evidence. Figure 1 : Figure 1 : Acoustic features of RD's word rancangan 'design' in K sample. Figure 2 : Figure 2 : Acoustic features of RD's word rancangan 'design' in Uk sample. Figure 3 : Figure 3 : Mean pitch and its standard deviation (SD) of RD's speech in K and Uk samples. Figure 4 : Figure 4 : Maximum and minimum pitch of RD's speech in K and Uk samples. Figure 5 : Figure 5 : F0s of RD's speech rancangan in both K and Uk samples. Figure 6 : Figure 6: F0 reading in default pitch setting and the actual value of the speaker's F0. Figure 7 : Figure 7: Steps in forensic speaker verification system. Table 1 : Target words for K and Uk samples in telephone conversation 1. Table 2 : Target words for K and Uk samples in telephone conversation 2.
2,342.2
2017-01-01T00:00:00.000
[ "Computer Science" ]
The Collaborative Cross-Mouse Population for Studying Genetic Determinants Underlying Alveolar Bone Loss Due to Polymicrobial Synergy and Dysbiosis Dysbiosis of oral microbiota is associated with the initiation and progression of periodontitis. The cause-and-effect relationship between genetics, periodontitis, and oral microbiome dysbiosis is poorly understood. Here, we demonstrate the power of the collaborative cross (CC) mice model to assess the effect of the genetic background on microbiome diversity shifts during periodontal infection and host suitability status. We examined the bacterial composition in plaque samples from seven different CC lines using 16s rRNA sequencing before and during periodontal infection. The susceptibility/resistance of the CC lines to alveolar bone loss was determined using the micro-CT technique. A total of 53 samples (7 lines) were collected before and after oral infection using oral swaps followed by DNA extraction and 16 s rRNA sequencing analysis. CC lines showed a significant variation in response to the co-infection (p < 0.05). Microbiome compositions were significantly different before and after infection and between resistant and susceptible lines to periodontitis (p < 0.05). Gram-positive taxa were significantly higher at the resistant lines compared to susceptible lines (p < 0.05). Gram-positive bacteria were reduced after infection, and gram-negative bacteria, specifically anaerobic groups, increased after infection. Our results demonstrate the utility of the CC mice in exploring the interrelationship between genetic background, microbiome composition, and periodontitis. Introduction Periodontitis is one of the most common chronic infectious diseases worldwide [1].Untreated severe periodontitis cases may cause tooth loss.It is evident that susceptibility to periodontitis is attributed to an interplay of the host genetic background with environmental factors and disturbances of the oral microbiome [1][2][3][4][5][6].Accordingly, periodontitis is regarded as a "dysbiositic disease".Dysbiosis is a condition in which the balanced state of the ecosystem is disturbed.The hypothesis suggests that the transition from periodontal health to disease reflects a significant alteration in the number and community organization of the oral commensal bacteria in the periodontal pocket.This shift in the microbial community composition leads to alterations in the host-microbe crosstalk sufficient to mediate destructive inflammation and bone loss [7,8].Different studies suggested that this shift may be mediated by environmental factors and/or keystone pathogens such as Porphyromonas gingivalis (P.g.) [9][10][11][12][13][14]. In the last decades, numerous studies evaluated the alterations in oral microbiota composition across different stages of periodontal disease.However, only a few studies have assessed the cause-and-effect relationship between periodontal disease severity, the genetic constitution of the host, and alterations in the composition of the oral microbiota.Using appropriate animal models could overcome these limitations and contribute significantly to an improved understanding of the relationship between alterations in the composition of the oral microbiota and the genetic architecture of periodontitis susceptibility. 
The ideal mouse model to study complex diseases with complex etiologies mimics the complex genetic diversity among populations and enables the fine mapping of quantitative trait loci (QTLs) underlying defined complex phenotypes [15].In recombinant inbred line (RIL) crosses of genetically defined strains, chromosomal regions responsible for the genetic variance of complex traits can be mapped as QTLs under defined conditions [16][17][18][19].To address these requirements, a new genetic resource population, named the collaborative cross (CC) mouse model, was proposed by community efforts. The CC is a RIL mouse population that was specifically designed for high-resolution QTL mapping.It was created from a full reciprocal mating of five classical inbred strains (A/J, C57BL/6J, 129S1/SvImJ, NOD/ShiLtJ, and NZO/HlLtJ) and three wild-derived strains (CAST/EiJ, PWK/PhJ, and WSB/EiJ) to capture a much greater level of genetic diversity than existing mouse genetic reference populations (GRPs) (Collaborative Cross Consortium 2012).Recently, we showed that CC lines respond differently to experimental periodontitis 42 d after mixed infection with P.g. and Fusobacterium nucleatum (F.n.) and enabled the mapping of confined QTLs that confer susceptibility to alveolar bone loss [3,20]. In the current study, we used the oral mixed-infection model of two periodontitisrelated anaerobic bacteria, P.g. and F.n., to induce dysbiosis of the oral microbiota.We subsequently explored differences in microbiome composition and periodontal disease development in different CC lines with different genetic backgrounds. Susceptibility of CC Lines to Alveolar Bone Volume Affected by Oral-Mixed Infection CC lines showed a significant variation in their response to the co-infection.Based on the one-way ANOVA, two lines (IL72, IL2513) showed a significant reduction in the mean of bone volume level after infection compared to the control group and were considered susceptible lines (p < 0.05), while four lines (IL211, IL188, IL1912, and IL3912) did not show significant bone loss after infection and were considered to be resistant lines (Figure 1).Noteworthy, line IL2126 showed a significant bone gain following the infection and is considered an over-resistant line. Challenge Effect on Microbiome Compositional Shift Alpha diversity was calculated for different variables at different levels, including susceptible and resistant lines, and at different time points (day 0, day 14, and day 42), including before and after infection.All of these comparisons were not significant (Figure 2).Beta diversity was calculated to assess the change in the diversity of species after infection for each line.A significant change in the species diversity was observed among the seven CC lines after infection (between time point one and the others time points), regardless of their susceptibility status (i.e., susceptible and resistant).However, no significant change was observed between day 14 and day 42 (Figures 2B and 3A). 
Challenge Effect on Microbiome Compositional Shift Alpha diversity was calculated for different variables at different levels, including susceptible and resistant lines, and at different time points (day 0, day 14, and day 42), including before and after infection.All of these comparisons were not significant (Figure 2).Beta diversity was calculated to assess the change in the diversity of species after infection for each line.A significant change in the species diversity was observed among the seven CC lines after infection (between time point one and the others time points), regardless of their susceptibility status (i.e., susceptible and resistant).However, no significant change was observed between day 14 and day 42 (Figures 2B and 3A).NMDS plots on rank order distances were used to assess the significance of bacterial community composition between different time points.Significant changes (p < 0.001) in the species diversity were observed among the seven CC lines due to infection (between time point 1 and the other time points), regardless of their susceptibility status (i.e., susceptible (A) and resistant (B)).However, no significant change was observed between day 14 and day 42. Microbiome Composition in Control Mice Bacterial composition before infection (point time 1): Pasteurella and Streptococcus were the majority of the microbiome taxa for both susceptible and resistant lines.In addition, gram-positive taxa were significantly higher at the resistant lines compared to susceptible lines.In addition, the Streptococcus genus represented ~50% of the microbiome composition compared to ~25%.Some of the taxa were exclusive for susceptible lines (i.e., Saguibacteraceae) and were not shown in the flora of resistant lines, while others (i.e., Bradyrhizohiaceae and Bradyrhizobium) were present only in the resistant lines (Figures 3B and 4A). Figure 3. Beta-diversity visualized using a non-metric multidimensional scaling (NMDS) plot.NMDS plots on rank order distances were used to assess the significance of bacterial community composition between different time points.Significant changes (p < 0.001) in the species diversity were observed among the seven CC lines due to infection (between time point 1 and the other time points), regardless of their susceptibility status (i.e., susceptible (A) and resistant (B)).However, no significant change was observed between day 14 and day 42. Microbiome Composition in Control Mice Bacterial composition before infection (point time 1): Pasteurella and Streptococcus were the majority of the microbiome taxa for both susceptible and resistant lines.In addition, gram-positive taxa were significantly higher at the resistant lines compared to susceptible lines.In addition, the Streptococcus genus represented ~50% of the microbiome composition compared to ~25%.Some of the taxa were exclusive for susceptible lines (i.e., Saguibacteraceae) and were not shown in the flora of resistant lines, while others) i.e., Bradyrhizohiaceae and Bradyrhizobium) were present only in the resistant lines (Figures 3B and 4A).A total of 14 days after the mixed infection (point time 2), the microbiome composition for both susceptible and resistant mice was similar for the two major bacterial groups (Pasteurella and Streptococcus) but with significant proportional changes compared to time point 1.In addition, the bacteria are mainly gram-negative for both susceptible and resistant lines.Interestingly, the mixed cultures of P.g. and F.n. 
were not shown at this stage of post-infection status (Figures 4B and 5A).A total of 14 days after the mixed infection (point time 2), the microbiome composition for both susceptible and resistant mice was similar for the two major bacterial groups (Pasteurella and Streptococcus) but with significant proportional changes compared to time point 1.In addition, the bacteria are mainly gram-negative for both susceptible and resistant lines.Interestingly, the mixed cultures of P.g. and F.n. were not shown at this stage of post-infection status (Figures 4B and 5A).After 42 days of infection (point time 3), there was no significant change compared to 14 days after infection (time 2 point).Microbiome compositional changes were shown After 42 days of infection (point time 3), there was no significant change compared to 14 days after infection (time 2 point).Microbiome compositional changes were shown between resistant and susceptible lines; while some bacteria showed mainly in the susceptible lines, others were mainly present in the resistant ones.However, these changes were not significant (Figures 5B and 6A). between resistant and susceptible lines; while some bacteria showed mainly in the susceptible lines, others were mainly present in the resistant ones.However, these changes were not significant (Figures 5B and 6A). Correlation between Alveolar Bone Loss and Dysbiosis The abundance of Pasteuerlla and Bergeyella was positively correlated with bone loss in the susceptible lines (r = 0.78, 0.75, p < 0.0.5),suggesting a role in bone loss severity after infection, while Streptococcus and Pseudomonas show a negative correlation with bone loss (r= −0.79, −0.84, p < 0.0.5); the lower the number of bacteria is, the worse the disease severity was observed.This suggests the role of these bacteria in keeping bone volume levels healthy. Discussion The micro-CT results showed variations in the response of different CC lines to infection.Because the RILs were kept within the same controlled environment, we consider that this is related to the genetic differences between the RILs, which we assume are responsible for individual differences of the RILs in susceptibility to periodontal disease.Likewise, a previous study of our lab showed that the heritability tests for 23 CC lines that were challenged using the oral mixed infection model were 0.2.This implies that the variation in host susceptibility to the disease is influenced by genetic factors, as shown by the different phenotypes among the different RILs.For humans, twin studies estimated a heritability of 50% for periodontitis [21].Each CC line responded differently to the infection according to the type and number of the SNPs in multiple candidate genes.Therefore, the phenotypes showed variation from mild to severe periodontitis or bone formation, which appeared in two CC lines instead of bone decreases.As we know, bone loss is a feature of the disease, but this new phenotype shows how genetic variability affects different genetic pathways, resulting in increased bone formation or resorption in response to bacterial challenge [3,22].These results validate the power of the CC lines population for studying host genetic susceptibility to complex diseases with complex etiologies. 
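Returning briefly to the analysis reported under "Correlation between Alveolar Bone Loss and Dysbiosis", a minimal sketch of that computation is shown below. The abundance and bone-loss values are invented placeholders; scipy.stats.pearsonr returns the correlation coefficient r and p-value in the form quoted above:

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative values only (per susceptible-line mouse): relative abundance vs. bone loss.
bone_loss     = np.array([0.10, 0.18, 0.25, 0.31, 0.40, 0.47])   # arbitrary units
pasteurella   = np.array([0.22, 0.28, 0.35, 0.38, 0.47, 0.52])   # rises with bone loss
streptococcus = np.array([0.45, 0.40, 0.33, 0.30, 0.22, 0.18])   # falls with bone loss

for name, abundance in [("Pasteurella", pasteurella), ("Streptococcus", streptococcus)]:
    r, p = pearsonr(abundance, bone_loss)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```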
The 16S microbiome sequencing showed that the beta diversity was distinctly different before and 42 days after infection. We also observed that resistant lines had a different microbiome compared to susceptible lines before infection. We consider this a valuable result because it gives some insight into how genetic predisposition shapes the microbiome and possibly indicates microbial compositions associated with resistance to infection-induced bone loss. In resistant lines, gram-positive bacteria formed more than half of the microbiome composition. This flora is associated with healthy periodontal tissues. It can help prevent pathogenic colonization in different ways, e.g., by limiting the ability of pathogenic bacteria to adhere to appropriate tissue surfaces or by producing metabolic factors that are adverse to periodontal pathogens. After infection, gram-negative bacteria became dominant relative to the gram-positive bacteria, which may increase susceptibility to periodontitis through specific virulence factors, such as proteolytic enzymes that break down host tissue and may result in gingival inflammation, loss of gingival attachment, periodontal pocket formation, and destruction of alveolar bone and teeth [23,24]. These results may support previous reports showing that the composition and function of the indigenous oral microbiome may determine an alteration of the symbiotic interaction between the oral microbial community and the host, with consequences for the oral and general health of the individual. The alteration of this finely tuned equilibrium between host and hosted microbes allows pathogenic bacteria to manifest their disease-promoting potential and determine pathological conditions. Moreover, our results may agree with previous studies, which showed that the dynamic interactions between the various microbial and host factors that drive periodontal tissue destruction are not related to a limited number of periodontopathogenic species but are the outcome of the synergic action of dysbiotic microbial communities in the specific individual [10]. For example, P.g., one of the major etiologic microbial agents of periodontitis that we used in the current study and a member of the Socransky Red Complex, requires iron and protoporphyrin IX from heme to survive and to support dysbiosis initiation and development and the consequent onset of periodontal disease [25]. However, the changes in microbial diversity between health (eubiosis) and periodontal disease (dysbiosis) remain controversial, since some researchers reported a loss of microbial diversity, others indicated an increasing level of microbial diversity, and still others did not report significant differences [26][27][28].
Interestingly, gram-positive bacteria were abundant in resistant lines before infection, and the Streptococcus genus represented half of the microbiome composition. Notably, this genus is well known for its critical role in preventing colonization by pathogenic periodontal bacteria (colonization resistance) and was previously proposed as a guided pocket recolonization approach, an alternative to the usual armamentarium of treatment options for periodontitis [29,30]. After infection, gram-positive bacteria decreased while gram-negative bacteria increased. These bacteria were not associated with alveolar bone loss, which may be explained in several ways: the bacteria did not provoke a destructive host response, or the microorganisms may lack some of the virulence factors responsible for periodontal tissue destruction. Notably, we observed that after infection, the genus Pseudomonas was reduced in susceptible but not resistant lines. Although speculative, this may indicate a protective role against periodontal tissue destruction. While this study demonstrates how the specific genetic background of the RIL shapes characteristics of the immune system and microbiome composition, and may be used as a valuable tool for the dissection of this complex relationship, some limitations should be pointed out. First, the microbiome composition in the current mouse population may not resemble the "natural" relationship of periodontitis to the oral microbiome. This is because the dental plaque microbial community that forms on the supragingival area, which was collected in the present study, differs from the subgingival community, which is considered more relevant to periodontal disease. However, collecting microbial samples from the periodontal pocket in the mouse model is almost unachievable. Second, during the sampling process, the samples were not always taken from the same mouse at the three time points but from the same status and line, and were not taken immediately after antibiotic treatment, which may have led to the loss of important information. Finally, in the current study, a small number of mice/lines were used to assess the utility of this model for exploring such a complex phenotype; to dissect such a complex cause-effect relationship, we need to assess many more lines and more mice per line. In summary, microbiome analysis provides information regarding the differences in microbiome composition between resistant and susceptible RILs in health and disease, independent of environmental factors. These data highlight the role of the genetic constitution in shaping the microbiome, thereby contributing to increased protection from, or risk of, periodontal destruction. Future studies will characterize the underlying causal genetic variants and provide proof of the interrelations with specific bacterial taxa that influence oral health and disease. This may guide important preventive and therapeutic strategies for dental care. Materials and Methods All experiments were conducted at the Department of Clinical Microbiology and Immunology, Sackler Faculty of Medicine, Tel-Aviv University (TAU), Israel. All experimental mice and protocols were approved by the Institutional Animal Care and Use Committee of TAU (approval number: M-11-026), which adheres to the Israeli guidelines, which are in accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals, National Institutes of Health (NIH), USA (National Academies Press, 8th edition). Full details of the CC lines were reported previously [31].
Assessment of Line Susceptibility to Alveolar Bone Loss To assess the susceptibility of the CC lines to experimental periodontitis, we tested the alveolar bone changes among seven different lines (IL72, IL211, IL188, IL1912, IL2126, IL2513, and IL3912) after oral mixed infection by quantifying the alveolar bone volume using the micro-CT (µCT) technique. Figure 7 demonstrates the µCT assessment of two hemi-maxillae of two mice with different susceptibility statuses. In total, we used 87 mice: 43 mice for the control and 44 mice for the challenge (a minimum of 5 mice were used per line in each group). Details are shown in Table 1. Oral Mixed Infection Model and Micro-Computerized Tomography (µCT) Analysis Mice were treated with sulfamethoxazole (0.8 mg/mL) in drinking water for a continuous period of 10 days, followed by an antibiotic-free period of three days. Mixed cultures of P.g. and F.n. (400 µL of 10^9 bacteria/mL for each mouse) were prepared. Next, the infected groups were treated with the bacteria, while the control groups were treated with PBS and 2% carboxymethyl cellulose, at days 0, 2, and 4. Forty-two days post-infection, mice were euthanized after complete anesthesia using xylazine (Sedaxylan) and ketamine (Clorketam). The infection challenge of the CC lines was carried out at the small animal facility at Tel-Aviv University (TAU). Susceptibility to alveolar bone loss induced by mixed infection among the selected lines was determined as previously described by our group. Briefly, maxillary jaws were harvested for micro-computed tomography (µCT) analysis, and alveolar bone loss was calculated proportionally to the alveolar bone volume of a control (non-infected) group of the same line [22]. Oral Microbiome Collection Of the 87 scanned mice, 53 were used in the microbiome analysis: 53 oral swab samples from 7 different CC lines were collected and used for the microbiome assessment based on their susceptibility status (i.e., susceptible vs.
resistant). Briefly, the biological samples for each status were collected at three points during the experiment and pooled for analysis. The oral samples were collected from mice at three groups/time points (point 1, day 0, without infection; point 2, 14 days after infection; and point 3, 42 days after infection). Samples were collected using dry fine-tipped swabs and sterile paper points. The oral cavity of each mouse was swabbed for ~50 s (starting on the tongue, followed by the buccal mucosa and gingiva). The swab was placed in an empty Eppendorf tube, swab handles were cut with sterile scissors, and samples were stored at −80 °C for further processing. DNA Extraction and 16S rRNA Sequencing The control (time point 1) and infected oral samples (time points 2 and 3) were used for DNA extraction using the GeneAll Exgene™ Cell SV mini kit (100p; Catalog No. 106-101; Lot No. 10618A26008). In total, 53 samples (15 samples at point 1, 17 at point 2, and 21 at point 3; details are shown in Table 2) were included and sequenced with the 16S sequencing library preparation protocol [34]. Briefly, after extracting pure DNA from the different samples, published primers were used to amplify the V3-V4 region of the bacterial 16S rRNA gene. PCR products were visualized following electrophoresis in agarose and staining with Gel Red™ to confirm a positive yield for each sample. Library preparation followed the Illumina library preparation protocol, with the following primers: forward CCTACGGGNGGCWGCAG and reverse GACTACHVGGGTATCTAATCC. Sequencing was carried out using an Illumina MiSeq at the Hadassah Medical School facility. Microbiome Analysis Reads with poor quality were filtered out, and after two filtration steps, the data were reduced from 5194 to 67 taxa. Sequencing adapters and barcodes were trimmed, and each set of paired-end reads was merged into a single sequence (based on overlap). The sequences were assigned to individual samples with barcodes. A metagenomics workflow classified organisms from the V3-V4 amplicon using a database of 16S rRNA data. The classification is based on the Greengenes database (http://greengenes.lbl.gov/; accessed on 20 May 2020). Data analysis was performed using the statistical software package MICC, v2 to produce operational taxonomic units (OTUs) at the genus level. Next, the phyloseq package of R was used to compute the number of OTUs and the Shannon diversity index, which accounts for both the number and evenness of species-level phylotypes. A one-way analysis of variance was used to test the significance of the difference between the treated groups. After significance was established, inter-group differences in the Shannon index were tested for significance using the t-test, with correction for multiple testing. The level of significance was determined at a corrected significance threshold of p < 0.05. Statistical Analysis µCT data were analyzed using the statistical software package SPSS version 21. An analysis of variance (ANOVA) was performed to test the differences in response between the different CC lines. The level of significance was determined at p < 0.05. All results were presented as mean values and standard error of the mean.
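As a minimal illustration of the diversity workflow described above (the study itself used phyloseq in R; the sketch below is a Python analogue, and the genus-level counts are hypothetical rather than study data), the following computes per-sample Shannon diversity and compares the three time points with a one-way ANOVA followed by Bonferroni-corrected t-tests.

```python
# Illustrative sketch (not the original R/phyloseq pipeline): Shannon diversity
# per sample from a genus-level OTU count table, then a one-way ANOVA across
# the three time points and pairwise t-tests with Bonferroni correction.
# All count values below are hypothetical.
import numpy as np
from scipy import stats

def shannon(counts):
    """Shannon diversity index H = -sum(p_i * ln(p_i)) over nonzero proportions."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# Hypothetical OTU counts (rows = samples, columns = genera) for each time point.
otu_tables = {
    "day0":  [[120, 300, 10, 5], [100, 280, 20, 8], [130, 310, 12, 4]],
    "day14": [[400, 80, 60, 30], [380, 90, 55, 25], [420, 70, 65, 35]],
    "day42": [[390, 85, 70, 28], [410, 75, 60, 32], [400, 95, 58, 30]],
}

diversity = {tp: [shannon(s) for s in samples] for tp, samples in otu_tables.items()}

# One-way ANOVA across time points on the Shannon index.
f_stat, p_anova = stats.f_oneway(*diversity.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise t-tests, Bonferroni-corrected for the three comparisons.
pairs = [("day0", "day14"), ("day0", "day42"), ("day14", "day42")]
for a, b in pairs:
    t, p = stats.ttest_ind(diversity[a], diversity[b])
    print(f"{a} vs {b}: t = {t:.2f}, corrected p = {min(p * len(pairs), 1.0):.4f}")
```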
Figure 1. Alveolar bone volume parameters of seven different CC RILs. The figure shows a comparison between the means of the alveolar bone volume in each line (±SEM) evaluated using µCT. Gray columns relate to the mean of the control bone volume (CBV), whereas black columns relate to the residual bone volume (RBV). Figure 5. The figure shows the microbiome composition in susceptible (A) and resistant (B) RILs 14 days after oral mixed infection. Figure 6. The figure shows the microbiome composition in susceptible (A) and resistant (B) RILs 42 days after oral mixed infection. Figure 7. Micro-computed tomography (µCT). Left frame, expanded view of an uninfected left hemi-maxilla. Right frame, expanded view of a post-infection left hemi-maxilla. White, enamel; yellow, dentin and cementum; gray, alveolar bone. Horizontal resorption is measured as the distance from the cementoenamel junction (CEJ, the line between the yellow and gray colors) to the alveolar bone crest. Table 1. List of CC mice used in the experiment. Table 2. The table lists the details of the collected samples for 16S rRNA sequencing.
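As a minimal illustration of the abundance-bone-loss correlation analysis reported in the Results above (the numbers below are hypothetical and are not the study data), the following Python sketch computes a Pearson correlation between a taxon's relative abundance and alveolar bone loss across mice.

```python
# Illustrative sketch of the taxon-abundance vs. bone-loss correlation reported
# in the Results. All values are hypothetical; the study used its own per-mouse
# abundances and micro-CT bone volume measurements.
from scipy import stats

# Hypothetical relative abundance of a genus (e.g., Pasteurella) per mouse.
abundance = [0.05, 0.12, 0.20, 0.28, 0.35, 0.41, 0.52, 0.60]
# Hypothetical alveolar bone loss per mouse (fraction of control bone volume lost).
bone_loss = [0.02, 0.06, 0.09, 0.15, 0.18, 0.22, 0.27, 0.31]

r, p = stats.pearsonr(abundance, bone_loss)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
# A positive r (as reported for Pasteurella and Bergeyella) means higher abundance
# tracks more severe bone loss; a negative r (as for Streptococcus and Pseudomonas)
# means the opposite.
```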
5,608.4
2023-12-29T00:00:00.000
[ "Medicine", "Biology" ]
The Nature of Stability in Replicating Systems We review the concept of dynamic kinetic stability, a type of stability associated specifically with replicating entities, and show how it differs from the well-known and established (static) kinetic and thermodynamic stabilities associated with regular chemical systems. In the process we demonstrate how the concept can help bridge the conceptual chasm that continues to separate the physical and biological sciences by relating the nature of stability in the animate and inanimate worlds, and by providing additional insights into the physicochemical nature of abiogenesis. Introduction Although we tend to imagine that all systems converge towards equilibrium as embodied in the theory of equilibrium statistical thermodynamics [1], the notion of non-equilibrium steady-state (NESS) behavior is widely recognized [2,3].Physical examples of such systems abound.Thus whirlpools form and maintain themselves as long as some energy gradient is present, tops and rotors spin around, exhibiting a stability unachievable at lower speeds, moving bicycles remain upright.Rivers and fountains flow, new water replacing old, yet those rivers and fountains appear unchanged.In this review we describe how living organisms, and replicators in general, can display non-equilibrium characteristics, thereby manifesting low stability in a thermodynamic sense, yet exhibit high stability of a different type, a stability that actually derives from their underlying dynamic OPEN ACCESS character.We will argue that it is this type of stability, one which we have termed dynamic kinetic stability [4,5], that is the key to understanding many of life's key features, including the process by which it emerged. Kinetic Stability vs. Thermodynamic Stability The second law of thermodynamics teaches us that closed systems tend to converge towards their equilibrium state and that the irreversible processes that lead to the equilibrium state result in an increase in global entropy.In chemistry, this is often expressed in terms of the minimization of the system's Gibbs energy, G.When the equilibrium state is reached, the closed system ceases to undergo further change, and at that point the system is termed thermodynamically stable [1]. The second law, however, does not predict how rapidly a closed system will reach its equilibrium state.For example, a H 2 -O 2 mixture, under appropriate conditions, can be extremely persistent and maintained almost indefinitely, despite its non-equilibrium state.In order for reaction to take place, some form of activation, provided by a spark or appropriate catalyst, is necessary.Thus, we term a H 2 -O 2 mixture (under appropriate conditions) to be kinetically stable due to the high kinetic barrier separating reactants from products. 
This well-known kinetic-thermodynamic dichotomy leads to the concepts of kinetic and thermodynamic control, whereby a substance A can react by two competing pathways-a kinetically preferred lower free energy pathway that leads to a thermodynamically less stable product, X, or a higher free energy pathway that leads to a thermodynamically more stable product, Y (Figure 1).For such a system the preferred product will depend on the particular reaction conditions.Thus when the system is maintained under conditions of kinetic control, the kinetically preferred product X will be favored, while under conditions where the reaction barrier is readily overcome, the thermodynamically preferred product Y is favored.Accordingly, actual product formation is governed by a combination of thermodynamic and kinetic factors. Figure 1. Gibbs energy as a function of the reaction coordinate of A. Under conditions of kinetic control, product X will be favored, while under conditions of thermodynamic control, product Y will be favored. The kinetic stability associated with the H 2 -O 2 mixture mentioned above is a static one, i.e., no change takes place within that system over time.However, within the category of kinetic stability there is a distinct class of systems whose stability is of a dynamic type, rather than of the more familiar static type.As its name implies, dynamic kinetically stable systems are dynamic, constantly in motion.A river or fountain provides a physical example of the dynamic aspect.A river's stability is of a dynamic type in that the water that constitutes the river is continually changing-the rate of water flow into the river from its sources equaling the rate of flow out into the sea.Yet the river's appearance remains constant over time, thereby manifesting stability.As long as the water supply is unimpeded, the river (or fountain) as an entity remains stable.Thus that river exemplifies a physical non-equilibrium steady state.Its stability, a dynamic stability, is based on change, as opposed to lack of change. 
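To make the notion of kinetic versus thermodynamic control sketched above (Figure 1) more concrete, a small worked example may help; the 10 kJ/mol barrier difference used here is an illustrative number, not one taken from the text. By transition state theory, the ratio of the rates of formation of X and Y depends exponentially on the difference in activation free energies:

\[
\frac{k_X}{k_Y} = \exp\!\left(\frac{\Delta G^{\ddagger}_Y - \Delta G^{\ddagger}_X}{RT}\right)
\approx \exp\!\left(\frac{10{,}000\ \mathrm{J\,mol^{-1}}}{8.314\ \mathrm{J\,mol^{-1}\,K^{-1}} \times 298\ \mathrm{K}}\right) \approx 57
\]

So under kinetic control the less stable product X can dominate by a factor of roughly fifty, whereas under thermodynamic control, when both barriers are readily and repeatedly crossed, the product distribution relaxes toward the more stable Y.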
Dynamic Kinetic Stability of Replicating Systems A stable population of replicating entities, whether molecular in nature or composed from a complex assembly, constitutes a chemical example of a non-equilibrium steady state, so its stability is also of a dynamic type. The kinetics of the replication reaction, a form of autocatalysis that can result in exponential growth, was appreciated by Lotka [6] almost a century ago. Since exponential growth is inherently unsustainable, a replicating system can only maintain a stable population if a balance between the rates of replicator formation and replicator decay has been established. This can be expressed by a simple differential kinetic equation, such as Equation (1):

dX/dt = kMX − gX (1)

where X is the replicator concentration, M is the concentration of building blocks from which X is composed, and k and g are rate constants for replicator formation and decay, respectively. A steady state population, a state that is effectively 'stable', is achieved and maintained as long as dX/dt remains close to zero. Accordingly, the stability type is a dynamic one: it is the population of replicators that is stable even though the individual members are constantly being turned over. It is this type of stability, one that is solely associated with persistent replicating systems, that we term dynamic kinetic stability. The term 'dynamic' reflects the continual turnover of the population members; the term 'kinetic' reflects the fact that the stability of the replicating system is based on kinetic parameters, such as k and g of Equation (1), i.e., on reaction rate constants, rather than on thermodynamic parameters. Let us now specify some characteristics that distinguish these two types of stability, dynamic kinetic and thermodynamic, and demonstrate how the classification can be useful. (i) Circumstantial vs. Inherent. Kinetic stability (static or dynamic), in contrast to thermodynamic stability, is not an intrinsic function of the system alone, but also depends on its surroundings. So, whereas thermodynamic stability is inherent, kinetic stability is circumstantial. A hydrogen-oxygen mixture in a glass container is kinetically stable, although that same mixture in the presence of a platinum catalyst becomes kinetically unstable. Dynamic kinetic stability, as manifest in replicating systems, follows the same pattern. A particular bacterial population in a pool of water might be kinetically stable, whereas that same population in a chlorinated pool would be unstable. Clearly, the dynamic kinetic stability of physical, chemical and biological systems may be dramatically affected by changing circumstances. In contrast, thermodynamic stability, being a state function, is independent of factors extraneous to the system. This difference is significant: it enables thermodynamic stability to be quantified and, accordingly, state functions such as G, H and S are applicable. In comparison, dynamic kinetic stability, being circumstantial, cannot be formally quantified and can only be assessed in some qualitative way. Two parameters that can give some indication of the dynamic kinetic stability of a replicating system are its size (the larger the population, the more stable it is likely to be) and the length of time the replicating system has been able to maintain itself. Clearly, long-lived replicating populations are stable by definition, having proved stable by virtue of their persistence [4,5].
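Before moving to the next characteristic, a minimal numerical sketch may help illustrate the steady state described by Equation (1). The rate constants, the constant building-block inflow term, and the initial values below are hypothetical choices for illustration only, not values from the text; with the building-block pool M resupplied, the replicator population X settles into a non-equilibrium steady state in which formation and decay balance.

```python
# Minimal numerical illustration of Equation (1), dX/dt = k*M*X - g*X, with the
# building-block pool M resupplied at a constant (hypothetical) inflow rate.
# All parameter values are illustrative, not taken from the text.

def simulate(k=2.0, g=0.5, inflow=1.0, m0=1.0, x0=0.1, dt=0.01, t_end=100.0):
    m, x = m0, x0
    for _ in range(int(t_end / dt)):
        dm = inflow - k * m * x      # building blocks supplied and consumed
        dx = k * m * x - g * x       # replicator formation minus decay
        m += dm * dt
        x += dx * dt
    return m, x

m_ss, x_ss = simulate()
# At steady state dX/dt = 0 gives M* = g/k, and dM/dt = 0 then gives X* = inflow/g.
print(f"simulated M = {m_ss:.3f} (expected {0.5/2.0}), X = {x_ss:.3f} (expected {1.0/0.5})")
```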
(ii) Replicator Space vs.'Regular' Chemical Space.A direct consequence of the division of stability into two discrete types is that chemical systems, when undergoing transformations, can be classified as belonging to two discrete chemical spaces-'regular' chemical space and replicator space.Replicator space is the all-encompassing grouping of persistent replicating systems, simple or complex.Thus molecular replicators, bacterial cells, birds, bees and camels all exemplify entities within replicator space.'Regular' chemical space effectively incorporates all other chemical entities, non-replicating in their character.The utility of this classification is that entities within each of the two spaces follow different selection rules because of the different type of stability in operation.For 'regular' chemical systems the selection rule is the well-known thermodynamic one.However, it turns out that within replicator space the selection rule is primarily kinetic [4,5].In order to understand the basis for that kinetic selection rule, consider the in vitro molecular replication reaction such as the one described by Spiegelman in the late 1960s [7].In that study Spiegelman took an RNA strand (isolated from the Qβ virus), activated nucleotides (the building blocks that make up an RNA oligomer) and the Qβ replicating enzyme, and demonstrated that in vitro replication of the RNA strand took place.However, since the replication reaction on occasion occurred imperfectly, mutated replicators were formed as well.Such transformations can be viewed as transitions in replicator space, from one replicating system to another.What was striking, however, was the fact that successful transitions in replicator space would only be those that lead to the formation of replicators of higher kinetic stability (faster replicators).As pointed out by Lifson [8] some years ago, a mutation leading to the formation of a kinetically less stable replicator would likely be driven to extinction, i.e., it would simply decay and disappear with time.Thus the transition between two connected elements in replicator space would effectively take place in just one direction-the direction based on kinetic selection, one that leads to the formation of kinetically more stable replicators.Indeed, Spiegelman observed that the initially extended and slow replicating RNA oligonucleotide ended up evolving into a much shorter and rapidly replicating entity [7]. (iii) Complexification vs. 
Thermodynamic Aggregation.The different selection rules in replicator and 'regular' chemical spaces discussed above is of considerable significance since they induce different chemical behavior.When transformations within the two spaces lead to a process of aggregation, it turns out that the aggregation patterns within the two spaces are different.Within 'regular' space, aggregation is primarily thermodynamically-driven as exemplified by solid and liquid thermodynamic states of matter.However, within replicator space, aggregation is not primarily thermodynamic in character.In fact, within replicator space the process is better termed complexification since the aggregation process is highly organized, rather than ordered [9].The aggregation process is kinetically-driven, leading to complex assemblies that are kinetically stable, though thermodynamically unstable.Indeed, the idea of a kinetically-driven hypercyclic network-a form of complexification-obtained through cooperative molecular behavior, was theoretically predicted by Eigen and Schuster in the late 1970s [10].But it was only some 15 years later that the newly emergent area of Systems Chemistry [11] provided striking empirical evidence for these ideas.Studies by von Kiedrowski et al. [12,13], Ghadiri et al. [14], Chmielewski et al. [15], and Kassianidis and Philp [16] all suggest that small replicating networks, based on the cross-catalysis of the network components, are feasible.Joyce's studies on RNA enzymes, in particular, have demonstrated the importance of complexification on the efficacy of the replication process.Whereas a single RNA autocatalyst was a relatively ineffective replicator, incapable of more than two successive doublings, conversion of that RNA ribozyme into a small cross-catalytic network based on two RNA ribozymes resulted in the formation of a rapidly replicating system which could be sustained indefinitely [17].Thus the on-going process of complexification appears to be not only a biological (evolutionary) property, but a kinetically-driven chemical property.The simple fact that complex (biological) replicators are kinetically more stable than simpler ones reaffirms the idea that there is a kinetic driving force that tends to transform less stable replicators, replicators that are simpler, into more stable ones that are complex.(However, that not all transformations in replicator space necessarily involve complexification.If a specific process of simplification were to lead to an increase in kinetic stability, then clearly that process would be kinetically selected for.Our point is that, in general, enhanced dynamic kinetic stability is achieved through complexification, not simplification.)The importance of network formation in replicating systems has recently been discussed by Ashkenasy et al. [18], Ludlow and Otto [19] and, in a more general context, by Kauffman [20]. (iv) Chiral Stability in Regular Space vs. 
Replicative Space. The stability of chiral systems in 'regular' and replicator space is strikingly different. In regular chemical space a racemic mixture is clearly the more stable one. Chiral excess is inherently thermodynamically unstable, and with time all homochiral systems will tend to become transformed into the more stable racemic system (if aggregation effects are ignored). Within the replicative world, however, the reverse pattern is observed. Chiral recognition is crucial in biological processes, particularly in the process of replication, so that in a replicative context homochirality is the preferred stereochemical outcome. Thus the tendency of regular chemical systems toward racemization, and of replicating systems toward homochirality, becomes understandable in terms of the stability types in the two chemical spaces: homochiral systems exhibit greater dynamic kinetic stability than racemic ones. Interestingly, the importance of autocatalysis is not just manifest in maintaining that homochiral dynamic kinetic state, but is also considered instrumental in generating that state. The symmetry-breaking Soai reaction [21,22], in which a chiral product is formed and maintained in almost 100% enantiomeric excess from an achiral substrate, also derives from the predominant influence of kinetic, as opposed to thermodynamic, forces. (v) Convergence vs. Divergence in Chemical Space. We have discussed replicating and 'regular' chemical systems as occupying different spaces based on their differing selection rules. Interestingly, those two spaces also exhibit different topologies [5]. 'Regular' space is convergent while replicator space is divergent. Transitions within 'regular' space are convergent, as all isomeric systems are directed toward their common thermodynamic sink. That convergent pattern is represented schematically in Figure 2a. In contrast, transitions in replicator space, being kinetically directed, are divergent in character, as illustrated in Figure 2b. For transformations in replicator space there is no specific target of maximal kinetic stability because kinetic stability is not a state function. Kinetic stability depends on factors external to the system, and, accordingly, there is no single unique pathway to higher values. Accordingly, each system within replicator space becomes a potential branching point for other kinetically stable systems, with the result being that within replicator space we observe a pattern of diverging pathways, as opposed to the pattern of converging pathways associated with transformations in 'regular' chemical space [5]. This different topology of the two spaces has interesting consequences. The patterns of convergence in 'regular' chemical space, as well as the patterns of divergence in replicator space, manifest themselves through the progress of time. This means that if we trace reaction pathways back in time, the patterns of convergence and divergence become reversed; a path that is convergent in the forward direction is necessarily divergent in the backward direction, and vice versa. This topological characteristic impacts on our ability to make both predictive and historical statements regarding systems in the two spaces. In replicator space, the convergence going back in time means it is easier to access historical information regarding precursor replicating systems, because a converging path by definition is directed toward a limited number of primal points. Indeed, given the fossil and genetic record, the current evidence supports the view that all living
systems on earth descended from just one such primal system.In other words, inspection of the fossil and genetic record has enabled us to explore our evolutionary history with considerable success, but this success has depended crucially on the convergent nature of replicator space as we go back in time.However, when we attempt to make predictive statements about replicator space the situation is reversed.The question as to where the future exploration of replicator space is likely to lead us is one which we cannot even begin to address; a diverging path, by definition, does not go anywhere in particular.The evolutionary future of replicating systems is effectively unknowable. Applying the same thinking to the consideration of transformations in 'regular' space leads to the opposite pattern.We can make reasonable predictive statements as to where a regular chemical system is directed (i.e., in a convergent direction toward its thermodynamic sink).However, making reliable statements regarding the identity of historical precursor systems in regular space is much more problematic, since in a backward direction the space is divergent.In sum, the different patterns of the two spaces-replicator and 'regular' suggest that our ability to make either predictive statements regarding future transformations or historical statements regarding the nature of past transformations is greatly influenced by the topology of the two spaces.A convergent topology facilitates prediction, a divergent one does not. Interplay between Dynamic Kinetic Stability and Thermodynamic Stability Life's far-from-equilibrium state, one that is maintained over time, has troubled physicists for over a century.Thus Niels Bohr, one of the fathers of atomic theory, in a well-known "Light and Life" lecture in 1933, proposed "that life is consistent with, but undecidable or unknowable by, human reasoning from physics and chemistry" [23] and justified that conclusion with the following reasoning: "The existence of life must be considered as an elementary fact that can not be explained, but must be taken as a starting point in biology, in a similar way as the quantum of action, which appears as an irrational element from the point of view of classical mechanical physics, taken together with the existence of elementary particles, forms the foundation of atomic physics.The asserted impossibility of a physical or chemical explanation of the function peculiar to life would in this sense be analogous to the insufficiency of the mechanical analysis for the understanding of the stability of atoms." Simply put, Bohr believed that the animate-inanimate dichotomy was unbridgeable and could be compared to the classical physics-quantum physics divide!And Erwin Schrödinger, following that general line of reasoning, enigmatically concluded some years later in his classic "What is life?" 
book, that living matter, while not eluding the established laws of physics, was likely to involve "other laws of physics" hitherto unknown [24].Even after major discoveries in molecular biology during the 1950s and 1960s, the belief that the origin of life problem was unresolvable continued to be expressed by leading scientists and philosophers of science.Thus in 1974, twenty years after the discovery of DNA, Karl Popper, the iconic philosopher of science, supported the physicist position noted above with his assertion that the origin of life problem was "an impenetrable barrier to science and a residue to all attempts to reduce biology to chemistry and physics" [25]. Clearly life's far-from-equilibrium state is central to the dilemma of how biology and physics inter-relate.How can living systems be stable, in the sense of persistent, yet maintain a far-from-equilibrium state?The more recent development of non-equilibrium thermodynamics [26], though throwing light on "dissipative structures" such as whirlpool, heated liquids, etc, has done little to resolve the puzzle of biological systems [9].Of course, from a purely thermodynamic perspective there is no contradiction-living systems undergo continual material and energy exchange with their environment.So just as a refrigerator can transfer heat from cold to hot, in the reverse direction to the natural one, and can do so through the consumption of energy, a living system can maintain its far-from-equilibrium state (like the refrigerator) through the continual utilization of energy.But how could such a highly organized energy-gathering entity have come about?That, in essence, was the issue that troubled those great physicists.So within the context of this thermodynamic overview of living systems, the issue that needs to be resolved is how do dynamic kinetic stability and thermodynamic stability relate to one another?In what manner do they coexist? Even though we are claiming that dynamic kinetic stability governs the stability of replicating systems, it is clear that the drive toward greater dynamic kinetic stability must be consistent with the requirements of the second law.Initially that may not pose a problem, as the constraints of the second law permit many kinetically-allowed pathways.However, it is also clear that certain pathways leading to enhanced dynamic kinetic stability may not be allowed.In fact, one can presume that the greater complexity associated with the drive toward enhanced dynamic kinetic stability is likely to be thermodynamically unfavorable, so that such pathways will be at some point become effectively blocked.Simply, highly complex systems that exhibit high dynamic kinetic stability are likely to be thermodynamically unstable.However, a way to resolve the apparent conflict between these two types of stability is possible through the emergence of a metabolic (in the energy-gathering sense) capability.Let us consider this point in some detail. 
In a theoretical simulation of molecular replication, we have recently demonstrated that a metabolic capability, given sufficient time, is likely to become incorporated into a replicating system [27].If through some chance mutation a replicating molecule acquires, for example, a photoactive site, a kinetic analysis of the competing replication reactions of the original non-metabolic replicator and the mutated metabolic replicator indicates that the metabolic replicator, the one with the energy-gathering photoactive site, will drive the non-metabolic replicator into extinction.In other words, the metabolic replicator exhibits greater dynamic kinetic stability than the non-metabolic replicator and as a result the less stable non-metabolic replicator is transformed into the more stable metabolic replicator.The significant point here is that once a replicator has acquired a metabolic capability it is now in some sense "freed" from the thermodynamic constraints associated with the directives of the second law.From that moment on, kinetic considerations, rather than thermodynamic ones, govern the continued evolution of that replicating system.Thus the theoretical simulation demonstrates that the incorporation of an energy-gathering metabolic capability into a non-metabolic system, once acquired through a chance mutation, will, through a process of kinetic selection, lead to a more effective metabolic replicator, i.e., a replicator of greater dynamic kinetic stability.In fact the point at which a down-hill (thermodynamic) replicator acquired a metabolic capability can be viewed as a critical one in the emergence of life.One might even say that at that point life began.That was the moment that an objective (to use the Monod terminology [28]) replicator was transformed into a projective (teleonomic) replicator, and (to a degree) cut loose from its thermodynamic chains.That was the moment that the system began to follow its kinetic "agenda" [29], the moment, as Kauffman put it, it began "to act on its own behalf" [20]. Summary In this review we have attempted to describe the concept of dynamic kinetic stability and how it relates to the traditional and well-established concepts of (static) kinetic stability and thermodynamic stability.We believe that recognizing the existence and nature of this quite distinct stability type can assist in further bridging the physics-biology gap that has troubled physicists for the past century, and assist in placing Darwinian thinking within a broader physicochemical framework.We believe that in doing so one can obtain greater insight into central questions in biology, including the most enduring and controversial one-the nature of the physicochemical principles that could help explain the emergence of life from inanimate matter.The profound lesson that can be learned from recent studies on replicating systems [12][13][14][15][16][17] is that the physics-biology gap can be bridged, and surprisingly, that the bridge for this merging can be achieved by clarifying the dominant role of kinetic factors, as opposed to thermodynamic ones, in both the generation and the maintenance of all persistent replicating systems.
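As a rough illustration of the kinetic selection between competing replicators invoked above, two replicators drawing on the same building-block pool can be integrated numerically; the one with the larger effective formation rate constant displaces the other. The rate constants, inflow, and initial values below are hypothetical, and the model deliberately omits the photochemical (metabolic) detail of the cited simulation.

```python
# Rough sketch of kinetic selection between two replicators X1 and X2 competing
# for the same building-block pool M:
#   dM/dt  = inflow - (k1*X1 + k2*X2)*M
#   dX1/dt = k1*M*X1 - g*X1
#   dX2/dt = k2*M*X2 - g*X2
# All parameters are hypothetical; this is not the published simulation.

def compete(k1=2.0, k2=2.4, g=0.5, inflow=1.0, dt=0.01, t_end=400.0):
    m, x1, x2 = 1.0, 0.5, 0.01   # the "mutant" X2 starts as a tiny minority
    for _ in range(int(t_end / dt)):
        dm = inflow - (k1 * x1 + k2 * x2) * m
        dx1 = k1 * m * x1 - g * x1
        dx2 = k2 * m * x2 - g * x2
        m, x1, x2 = m + dm * dt, x1 + dx1 * dt, x2 + dx2 * dt
    return x1, x2

x1, x2 = compete()
print(f"slower replicator X1 = {x1:.4f}, faster replicator X2 = {x2:.4f}")
# The replicator with the larger k drives M below the level (g/k) needed by its
# competitor, so the slower replicator decays toward extinction.
```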
5,310.2
2011-02-15T00:00:00.000
[ "Biology" ]
Centering and marginalization in introductory university physics courses Research-based instructional strategies in physics promote active participation in collaborative activities as a primary means for students to construct understanding. This emphasis is in line with situated learning theory, in which learning is indicated by a student’s increasing centrality in a community. In both perspectives, to learn more is to engage more centrally: to start discussions, conduct experiments, write on the board, decide when a question has been answered, and so on. In a study of small-group collaborative learning activities in introductory physics classrooms at three different universities, we observe that as students engage with one another and with instructors, they are not only negotiating physics concepts, but also negotiating social positioning. Some students are centered (and their contributions are valued), while others are marginalized (and their contributions are neglected). The aim of this research is to become conscious of how centering and marginalization shape the way physics is taught and learned. I. INTRODUCTION Research-based instructional strategies in physics promote active participation in collaborative activities as a primary means for students to construct understanding [1]- [3]. This emphasis is in line with situated learning theory (SLT), in which learning is indicated by a student's increasing centrality in a community [4]- [6]. This centrality may be physical, as when everyone in a group orients to a specific person, or symbolic, as when everyone in a group orients to an idea that is considered to be at the heart of some matter of consequence. People in a central position have access to full participation in the activity; in a small-group collaborative physics activity, for example, they start discussions, conduct experiments, write on the board, or decide when a question has been answered. Others, whose position is more peripheral, may be physically located on the margins of the activity or may participate less fully in it. In addition to indicating learning, movement from periphery to center is also associated with identity development: as participants move toward full participation in a physics activity, they increasingly identify as physics students. Overall, this model contrasts with others in which learning is seen as an individual cognitive achievement and knowledge is a psychological or volitional reality of an individual. In a study of small-group collaborative learning activities in introductory physics classrooms at four different universities, we observe that discussions of physics content are also negotiations of social position, and may not reflect the learning of physics concepts in a cognitive sense [7]. Some students are centered (and their contributions are valued), while others are marginalized (and their contributions are neglected). The aim of this research is to become conscious of how centering and marginalization shape the way physics is taught and learned. Our contribution is to add critical consciousness to the SLT perspective, in a physics learning context. II. THEORETICAL FRAMEWORK Situated learning theory is a framework that defines learning in terms of the practices of a community [4]- [6]. In this framework, learners move from legitimate peripheral participation into more central positions as they are acculturated through knowledge acquisition and identity development [4]- [6], [8], [9]. 
SLT is often used to describe learning in the community or the workplace, but also provides a framework for analyzing school learning [10]. SLT's emphasis on centrality aligns with common values in physics education research: in both perspectives, a key component of learning is increasingly engaged participation in learning activities (e.g., starting discussions, conducting experiments, writing on the board, deciding when a question has been answered). In physics classrooms as in other contexts, a participant's mastery of the situated practices, knowledge, and skills of the community determines centrality and sustains the practices of the community [4]- [6]. SLT has been criticized by critical scholars such as Esmonde and Booker for paying little attention to the power relationships that are negotiated in communities of practice, for not considering race, gender, or other social identities to be barriers to participation, and for treating students who remain peripheral as failing to identify as expert learners, regardless of social identity [8]- [10]. Critical scholars have pointed out that power relationships are key to learning in the SLT model because access to the cultural system, and its associated identity development, is determined by those positioned at the center [8]- [11]. Power relationships are not static, but negotiated [10]: power is (or is not) shared by the expert with the novice in a legitimate negotiation of shared identity [8], [12]. Critiques of SLT argue that full participation within a community of practice, such as a classroom, creates inequitable opportunities when power relationships and member identity are negotiated inconsistently, often based on sociocultural identity [10], [13]- [15]. For example, a learner's sociocultural identity may conflict with their community of practice identity, thus preventing full participation: e.g., a women of color may be questioned because her body does not conform to prevalent images of the "ordinary" white male physicist. [16]- [19]. Social relations ascribed by the broader sociocultural context may not value the membership of diverse others, in which case the learning of those diverse others will be limited [12]. For example, in a year-long ethnographic study of female Japanese graduate students in a Canadian university, students negotiated and constructed identity in an attempt to gain a more central position in the academic community of practice: the movement to a more central position required identity formation and learning, but the students did not always get access to necessary cultural resources [12]. Identity formation as a full participating member therefore may not be accessible to all students regardless of the learner's willingness to participate. In such circumstances, structural inequities and epistemological hierarchy are reinforced [10], [11]. In physics, which has among the lowest representation of women, gender-diverse people, and people of color of any academic discipline [20], [21], theories of learning need to address the lack of mutuality that people in underrepresented groups experience. When students from underrepresented groups do not move from peripheral to central positions in a learning community, it may be because the community of practice does not allow them to learn. Without a critical lens to account for the situatedness of race, gender, or even patriarchy within the power relations, physics education research risks reproducing structural inequities. 
Acculturation with no assessment of the power relationship, or attention to sociocultural identities, ensures learning as identity construction is a type of colonization [10], [11]. In this study, we analyze physics learning interactions from a SLT perspective that includes critical consciousness [8]- [10], [16], [22], [23]. We characterize interactions among students in terms of centrality and marginalization, and attend to the sociocultural identities that may be barriers to participation. III. METHODS We video recorded 1-2 weeks of introductory physics classes at three different universities, totaling approximately 10-20 hours of video for each site. The project team and the study site leader selected the classes to be videotaped by mutual agreement, with attention to classes that feature (1) frequent or in-depth interactions among students and between students and instructors and (2) racial, ethnic, or gender diversity. From the body of video data for each site, the project selected sequences of events in which centering and/or marginalization was visibly sustained or contested. Evidence of centrality comes from both non-verbal cuese.g., multiple people in the video turn toward a speaker -and discourse analytic markers indicating that an idea or person is being elevated -e.g., "that's a great idea" [7], [24]- [26]. Such interactions, when they occur repeatedly, consistently, or in a sustained manner over time, indicate that someone is being sustained in a central position. We also selected episodes in which centrality was contested: when an otherwise peripheral participant tried to gain the floor [7], contribute knowledge, or direct the group's activity. Five to eight episodes were selected from each university. To inform and triangulate the analysis of selected classroom video episodes, the project team typically conducted 2-3 stimulated recall interviews [27]- [29] per site, in which faculty or students who appear in a classroom video episode offer their perspective on those specific events. IV. EXAMPLE OF CENTERING AND MARGINALIZATION The following episode takes place in an introductory physics class at a small private university in the western United States. The class uses evidence-based practices including tutorials [2] to support small-group collaborative activities. The project videotaped two weeks in the middle of the autumn quarter course in mechanics, recording two out of six groups four times a week, for one to two hours each class session. In the following episode, students are working on a tutorial activity about acceleration in one dimension. The students, pseudonymed Andy, Bill, Cindy, and Dan, sit together for each class session (see Fig. 1). Andy, Bill, and Dan present as white men and Cindy presents as a white woman. We selected this group particularly because all of the students in it present as white, which is the dominant racial culture represented in physics: as such, it tends to be treated as the norm and therefore ignored. By analyzing the interactions of white students, we analyze a case of the white culture that is pervasive in physics. Overall, the positioning for this group is highly stable: the same individuals are centered or marginalized to approximately the same extent in all of the class sessions observed for the study. In SLT terms, the legitimate peripheral participants are kept at the periphery, and thus reified as novices. A. 
Episode In this episode, Andy, Bill, Cindy, and Dan are predicting how the acceleration of a cart on a ramp will change when they change the mass of the cart. The cart, ramp, and motion sensor are set up at their table so that they can run experiments to test their predictions. Andy asks if the weight of the cart is 700 g, and Bill says it is 0.8 kg. Bill sets up the cart, Dan starts the motion detector, and Cindy stops the cart. Andy states that the acceleration would decrease when the mass is decreased, according to Newton's Second Law (in which he presumes, incorrectly, that the net force is constant): Andy: So the mass of the base cart is 700 grams, right? Bill: Uh, it's about 0.8 kilograms. [...] Bill then says it "shouldn't do anything. Thank you," seeming to agree with Andy, while also seeming to state that the acceleration would stay the same (which is not what Andy said). Andy next says, "Acceleration is still just due to gravity," now seeming to agree with Bill that the acceleration of the cart is independent of its mass. Once Dan has finished assisting with the experiment, he and Cindy do not attend to Andy and Bill's interchange. Instead, they focus on a graph on the computer screen, which likely represents the output of the motion sensor. Dan has the mouse and manipulates the graph, appearing unsatisfied with the result. Cindy makes a technical suggestion ("Put a highlighter on it"), and then says "Hello?", suggesting she feels Dan is not listening to her. She requests the mouse by wiggling her fingers. Dan throws up his hands, releasing the mouse to her, and she uses the mouse to make an adjustment to the graph (see Fig. 2). As Cindy gains the mouse, Andy and Bill look down at their papers and write. B. Centering and marginalization This episode shows several behavioral patterns that we observe to be typical for this group over the two-week period of video recording. One student, Bill, is centered: he directs the group's action (such as deciding when to run the experiment) and makes intellectual decisions for the group (what the mass of the cart is, and whether the acceleration is independent of mass). In a stimulated recall interview, Andy affirms Bill's centering as typical: he says, "Usually I deferred to his opinions and his reasoning because it was usually right." Andy described this episode as unusual in that Andy made a bid to correct Bill's reasoning; Andy says he "phrased it hesitantly because [Bill is] really smart, he's in his third year doing a master's in physics, just retaking this class because he transferred," suggesting that Bill has unusually high expertise for an introductory physics class. Andy adds that "usually when I ask something like that, about half the time I'm probably wrong, if I'm trying to correct him, because he's really good at physics." In other words, Bill is centered partly because other students attribute academic merit to him. In this episode, Andy makes a bid to move toward the center (by "trying to correct" Bill), but perceives this action as unusual (thus he "phrased it hesitantly"). In the end, Bill's centralized position is sustained: Andy's contribution is absorbed and changed into a different statement made by Bill. Dan appears close to the center occupied by Bill in this episode: he carries out Bill's suggestion to run the experiment, and he takes the lead in manipulating the graphical output of the motion sensor (he initially holds the mouse).
Cindy makes a bid to move toward the center by making a technical suggestion about how to manipulate the graph ("Put a highlighter on it"), but her move is ignored. Cindy's "Hello?" shows her own recognition that she is on the periphery of the action: that others are not listening to her, and that if she is going to be heard, she needs to take more action. She does so by requesting the mouse; her move is treated as unusual (Dan throws up his hands to release the mouse to her, as if doing so was a major event). None of the other group members attend to her manipulation of the graph. In an interview, Andy described Cindy's low profile as typical for their group: he says Cindy is "usually a little more quiet. She and Dan are good friends and she mostly defers to what he says." Overall, two students in this group are centered: primarily Bill, and secondarily Dan. In this episode, Andy and Cindy both make bids for centrality: these moves are seen as non-normative (Andy describes his move as rare, and Dan throws up his hands), and do not disrupt the overall structure of the group. Situated learning theory would pose Bill and Dan as being more expert-like and having a secure identity as physics learners, as evidenced by their central positions. Physics education research would see Bill and Dan as successful learners in that they are actively engaged with the material: they are "hands-on," they direct action, they are visibly engaged in the instructional activities, they share their ideas, and they pursue answers to physics questions. Andy and Cindy, meanwhile, would be understood by SLT to be legitimate peripheral participants in the group: they engage in simpler, lower-risk tasks, such as asking about the mass of the cart or making suggestions about the computer display. Overall, SLT and PER would tend to judge Bill and Dan as more successful learners than Andy and Cindy, in this episode. C. Critical perspective Missing from the above analysis is the critical perspective that to move to the center in a group is to negotiate power, sometimes inequitably. Bill and Dan appear to be white men, a social identity that is greatly overrepresented in physics at all levels [20], and display certain characteristics that U.S. culture typically associates with white masculine behavior, including control, independence, and decisiveness [30]. Although Andy also presents as a white man, he does not exhibit many characteristics typically associated with white masculine behavior. Bill's decisionmaking (about the mass of the cart, the timing of the experiment, and the group's prediction) has the effect of marginalizing Andy: Andy's "hesitant" suggestions are not taken up by the team, and are even rendered less visible by Bill's discourse (e.g., when Bill reframes Andy's prediction as something other than what Andy said). In other words, the centering of Bill produces the marginalization of Andy. Meanwhile, on the other side of the table, Dan and Cindy's interaction typifies some patterns of gendered interaction in physics, including men getting more attention and more access to equipment [14], [15], [18], [31]- [33]. For example, Dan's possession of the mouse makes the mouse initially inaccessible to Cindy. When Cindy obtains the mouse, she does not gain the group's attention: at that moment, Andy and Bill look away from the computer display, down to their individual worksheets. This marginalization, though contested (e.g., by Cindy obtaining the mouse), is not significantly disrupted during this episode. 
Overall, Andy and Cindy are not only on the periphery, but are kept to the periphery with behavioral patterns that typify racialized and gendered interactions. IV. DISCUSSION This analysis describes a physics learning interaction from a SLT perspective that includes critical consciousness [8]- [10], [16], [22], [23]. Interactions among students are characterized in terms of centrality and marginalization, with attention to the sociocultural identities that shape participation. We argue, in line with work by Esmonde and Booker, that research using SLT must attend to the power structures associated with sociocultural identities in order to properly characterize students' learning [10]. The power structures and discourse norms of the dominant (white masculine) culture act on students' social identities like a kind of gravity, producing centering and marginalization and thereby distorting opportunities for learning. Future work will examine these dynamics in greater depth, with other groups (including mixed-race groups), and with instructor interactions. ACKNOWLEDGMENTS We are grateful to Amy D. Robertson for significant contributions to this analysis. This work was supported in part by the National Science Foundation under grant number 1760761.
4,085.2
2020-09-01T00:00:00.000
[ "Education", "Physics" ]
Highly Loaded Independent Pt0 Atoms on Graphdiyne for pH‐General Methanol Oxidation Reaction Abstract The emergence of platinum‐based catalysts promotes efficient methanol oxidation reactions (MOR). However, the defects of such noble metal catalysts are high cost, easy poisoning, and limited commercial applications. The efficient utilization of a low‐cost, anti‐poisoning catalyst has been expected. Here, it is skillfully used N‐doped graphdiyne (NGDY) to prepare a zero‐valent platinum atomic catalyst (Pt/NGDY), which shows excellent activity, high pH adaptability, and high CO tolerance for MOR. The Pt/NGDY electrocatalysts for MOR with specific activity 154.2 mA cm−2 (1449.3 mA mgPt −1), 29 mA cm−2 (296 mA mgPt −1) and 22 mA cm−2 (110 mA mgPt −1) in alkaline, acid, and neutral solutions. The specific activity of Pt/NGDY is 9 times larger than Pt/C in alkaline solution. Density functional theory (DFT) calculations confirm that the incorporation of electronegativity nitrogen atoms can increase the high coverage of Pt to achieve a unique atomic state, in which the shared contributions of different Pt sites reach the balance between the electroactivity and the stability to guarantee the higher performance of MOR and durability with superior anti‐poisoning effect. Introduction Because of its unique and superior characteristics, atomic catalyst (ACs) has become one of the hottest research frontier fields in renewable energy conversion. This is mainly due to its ≈100% metal atom utilization, infinitely distributed and uniform active sites to achieve high catalytic selectivity, activity and stability in various sustainable energy technologies (e.g., fuel cells, batteries, and hydrogen production devices, etc.). [1][2][3][4][5][6][7][8][9] Previous reports have shown that the unique atomic environments of the active sites (for example, geometric construction, coordination, and electronic structure) in ACs are decisive in determining the catalytic efficiency. [10][11][12] More importantly, the special geometric and electronic structures of ACs allow for the modulation of the binding behaviors of reaction intermediates, which can lead to different reaction selectivity, activity, and stability in catalytic processes. [3,4,13] Accordingly, the configurations of ACs can be further exploited to tune the catalytic performances toward target reactions. A very important issue is to rationally design and synthesize ACs with desired atomic environments aiming to enhance the conversion efficiencies toward practical applications. Direct methanol fuel cells (DMFC) have been considered one of the most promising energy conversions with high energy density, [14][15][16] in which Platinum-based catalysts are employed as the most promising electrocatalysts for efficient methanol oxidation reaction. [17,18] Previously, great efforts have been devoted to enhancing the MOR performance through (1) fabricating the Pt-M (M = Ni, Pd, Co, Ru) alloying with special structure [17][18][19][20] ; (2) designing Pt-C (C = rGO, CNTs, NGO) catalysts dispersed on carbon support [19,21,22] ; (3) Anchoring single-atom on Pt nanostructure or metal oxidation. [23] These strategies exhibit enhanced electrocatalytic activity and durability for MOR by tunning electronic structure, improving the dispersing on supports, or fabricating the atomic vacancies. [14,19,23] However, the low kinetic activity and self-poisoning are due to the intermediate CO adsorption on Pt leads to the rapid decrease in the catalytic performance. 
This is a difficult issue in the catalytic field. Well solved, it can well guide us to develop new and general electrocatalysts with high reaction efficiency and realize the rapid development of the energy industry. Hence, the challenge we face is how to develop www.advancedsciencenews.com www.advancedscience.com efficient and general electrocatalysts with high resistance to CO poisoning, and high pH adaptability for MOR. Graphdiyne, a new carbon allotrope comprising of sp/sp 2hybridized carbon atoms, affords unique opportunities for rational elemental doping, [24] for instance, sp-for nitrogen [25] and sp 2hybridized carbon atoms for hydrogen [26] -, fluoro [27] -, chlorine [28] -, and boron. [29] These new materials, developed on the basis of graphdiyne, endow graphdiyne with special properties for various applications including catalysis, [24,[30][31][32] energy storage and conversion. [33][34][35] N-doped carbon supports have made a great contribution to the development of ACs due to their defect-engineering to stronger chemical bonds, offer and stabilize more atomically sites. [36][37][38] The N-doped carbon SACs catalysts applied for different electrocatalytic including formic acid oxidation reaction, hydrogen evolution reaction and oxygen reduction reaction with high activity and stability in the nearest reports. [39][40][41] The high electronegativity of N atoms induces more charge transfer between GDY support and low coordination metal ACs in this process, which results in the ACs on N-doped GDY supports more stable and active. [25] In addition, the N-doping induced intrinsic defects, large surface area and abundant porous size of GDY supports favor anchoring more metal sites. [42] Importantly, N-doped GDY supports can exhibit high electric conductivity, excellent stability in acid and alkaline electrolytes. [43] Encouraged by these excellent advantages of Ndoped GDY, we speculated the single atom Pt on NGDY with notable properties can enhance catalyze the MOR. Although the atomic catalysts have been widely applied in many other electrochemical reactions, Pt single-atom catalyst on N-doped GDY applied in the MOR have rarely been discussed. Herein, we report the facile anchoring of zero-valent Pt atoms on N-doped graphdiyne obtained by selective cycloaddition of sphybridized carbon atoms in GDY with hydrazine (Pt/NGDY). Experimental results showed that the Pt/NGDY has excellent activity, CO anti-poisoning ability and durability for efficient MOR over a wide pH range from acidic to alkaline conditions. Our results revealed that the inhomogeneous electronic distribution induced by N dopants allows the dispersive and high coverage of Pt atoms, which boosts up the electroactivity based on more positively charged Pt to enhance the fixation of intermediates and electron transfer for MOR. This work paves a new direction for the atomic catalyst to achieve comparable performance with nanoparticles in the MOR through the high-loading strategy. Results and Discussion Benefiting from the rich in diacetylene units, GDY allows for the precise and controllable cycloaddition reaction, which can result in a new type of pyrazole-nitrogen doped GDY with accurate Ndoping sites. The Cope-type hydroamination of diacetylenes in GDY with hydrazine was performed to form NGDY, as shown in Figure 1a. 
In brief, the NGDY was synthesized by the selective cycloaddition of diacetylene in GDY with hydrazine, including the Cope-type hydroamination of diacetylenes with hydrazine occurred together with a proton-transfer, followed by a fast isomerization and an intramolecular electrophilic addition. [5] Electrochemical in situ anchoring was performed through a chronopotentiometry method at the current density of 5 mA cm -2 for 10 s by a three-electrode system, in which the self-supported three dimension (3D) NGDY nanosheets array was used as the working electrode. The Pt atoms were seized and anchored on the NGDY nanosheets, achieving the single Pt atom catalysts, whereas longer deposition time (e.g., above 20 s) would dramatically result in Pt nanoparticles on NGDY nanosheets ( Figure S1, Supporting Information). Density functional theory (DFT) calculations were performed to explore the origins of high coverage of Pt atoms on the NGDY and their high performance in MOR. For the structure of NGDY, we have constructed the model with N dopants of different distributions in GDY. The bonding and antibonding orbitals near the Fermi level (E F ) are demonstrated for the NGDY, which supports that the electron-rich feature of the C sites and the N dopants on the chains of GDY (Figure 1b). This indicates both the structural symmetry and the electronic distribution has been affected by the N dopants, which creates more potential sites for the anchoring of Pt atoms to achieve high coverage and loading. With the sufficient coverage (≈70%) of Pt atoms on the NGDY, we notice the evident distortion of the local structure induced by the substantial p-d couplings between the NGDY and Pt atoms ( Figure 1c). More importantly, the local electronic structure has been further perturbed, leading to highly uneven electronic distribution, where different Pt sites have displayed different electronic contributions. For high coverage, we notice that the Pt atoms preferred to distribute in the lattice to reach high stability ( Figure 1d). Regarding the energy cost for the anchoring, we have classified the various Pt sites into three different types based on the energy. Notably, the Pt atom anchored on the GDY chain without N dopants still demonstrates the highest stability with the lowest energy cost. As the neighboring N dopants become more, the anchored Pt atoms are less stable with a high energy barrier to be stabilized. To reveal the electronic structure induced by the N dopants and Pt, we have compared the projected partial density of states (PDOS) of NGDY and Pt/NGDY (Figure 1e). For the NGDY, we notice the higher position of N-s,p orbitals than the C-s,p orbitals. A minor gap is still noticed between the conduction band (CB) and valence band (VB), indicating the barrier for electron transfer. With the high coverage of Pt atoms, the electronic structure has been evidently changed ( Figure 1f). The Pt-5d orbitals dominate the electronic states near E F . Notably, the N-s,p orbitals have been significantly suppressed from E V −1.94 eV in NGDY to the slightly deeper position at E V −2.90 eV (E V = 0 eV). The s,p orbitals of GDY show the broadband feature and cross the Fermi level, indicating the improved electronic conductivity. This is attributed to the p-d coupling by the high coverage of Pt. Since we have noticed different anchoring sites of Pt, the corresponding site-dependent electronic structures of anchored Pt atoms have been investigated (Figure 1g). 
For those highly stable anchoring sites on the GDY chain without N dopants involvement, the dominant peak of 5d orbital in Pt is located near E V −1.94 eV. The consistent results with XPS data of experiments confirm that the high coverage of Pt atoms in NGDY consists of varied types of anchoring sites, which achieves the balance between stability and electroactivity. To evaluate the MOR performance, the PDOS of the key intermediates has been demonstrated (Figure 1h). Owing to the contribution of different Pt sites, the PDOS of the intermediates displays the linear correlation, which supports the efficient electron transfer during the MOR process. The much-increased active Pt sites on Figure S2, Supporting Information) images, we can see that the Pt/NGDY still retains the honeycomb-type porous structure which is beneficial for the catalytic reaction process. Energy-dispersive X-ray spectroscopy (EDS) mapping (Figure 2l) showed the homogeneous distribution of Pt, N and C elements. X-ray photoelectron spectroscopy (XPS, Figure 2m) revealed the successful anchoring of Pt and on NGDY. High-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) was employed to characterize the atomic structure of the catalysts. The HAADF-STEM images display large numbers of bright dots isolatedly and highly dispersed on the substrate surface and no peak of Pt was observed in XRD (Figure 3a-h, Figure S3, Supporting Information), X-ray absorption near-edge structure (XANES), extended X-ray absorption fine structure spectroscopy (EXAFS) and XPS analysis provide the information of the electronic states of Pt atoms in the catalysts. Figure 3i displays the XANES spectra of Pt/NGDY and Pt foil at Pt L3 edge. Compared with Pt foil, Pt/NGDY gives an obvious negative shift in the energy, indicating that Pt atoms in Pt/NGDY are zero-valent. This was also confirmed by the derivative XANES results ( Figure S5, Supporting Information). The local structure environment and atomic dispersion of Pt was examined by EXAFS Fourier transforms (Figure 3j). For Pt foil, the peak at ≈2.6 Å was observed in Fourier-transformed EXAFS (FT-EXAFS), which attributed to the Pt-Pt bonding and were not detected in Pt/NGDY. This corresponds to HAADF-STEM results and verifying the isolated existence of Pt single atoms in Pt/NGDY. While, in Pt/NGDY sample, the two peaks at around 1.5 Å correspond to the scattering interaction between Pt atoms and the first coordination shell of Pt-N. The peak appears at 2.0 Å for Pt/NGDY, which is larger than Pt-N distance, corresponds to the Pt-C bond length. The subsequent quantitative EXAFS curve-fitting analysis shows that the coordination number of N atoms in the first coordination sphere was estimated to be 3 at the distance of 1.6 Å, indicating a square-pyramidal configuration for the Pt-N/O bonding. In addition, the coordination sphere of C atoms exhibits the coordination number of 7 at 2.2 Å (Table S1, Supporting Information). These results solidly demonstrate the success fully anchoring of isolated zero-valent Pt atoms on NGDY. [8,9] The chemical states of N and C in Pt/NGDY were studied by XPS spectra. Pt 4f spectrum of Pt/NGDY (Figure 3k) located at 71.6 (Pt 4f 7/2 ) and 74.9 eV (Pt 4f 5/2 ), respectively, confirm that the Pt atoms on NGDY are mainly in zero-valence, [6] consistent with the XANES results (Figure 3i). In N 1s spectra of NGDY shows two different peaks (Figure 3l), at 400.28 and 399.05 eV, indicating the formation of aromatic pyrazole units in GDY. 
These types of N offer new anchoring sites for Pt atoms. [10,11] After the Pt atom anchoring, the N 1s binding energy shifts positively by 0.15 eV and the C 1s binding energy of Pt/NGDY shows a negative shift by 0.3 eV (Figure S6, Supporting Information). The D/G band ratio of Pt/NGDY in the Raman spectra increased from 0.77 to 0.85, indicating that Pt/NGDY has more defects (Figure S7, Supporting Information). These results confirm the presence of interactions between NGDY and Pt atoms, as well as obvious electron transfer between Pt atoms and C/N atoms, which are critical and beneficial for enhancing the catalytic activity, long-term stability, and carbon monoxide resistance. MOR Performances of Pt/NGDY The electrocatalytic MOR performances of Pt/NGDY, NGDY, and commercial 20 wt% Pt/C were evaluated by using the cyclic voltammetry (CV) method at a sweep rate of 50 mV s−1 in a typical three-electrode system at room temperature. The electrochemical surface area (ECSA), determined from the charge involved in hydrogen desorption, was 69.5 m2 gPt−1 for Pt/NGDY, much larger than that of previously reported electrocatalysts [14] (Figure S8, Supporting Information). The MOR tests were first performed in an aqueous solution of 1 m methanol and 1 m KOH. The specific activity curves normalized by the corresponding surface area (Figure 4a) reveal that Pt/NGDY has better MOR activity, with larger current densities over the whole range of oxidation potentials, than Pt/C. For example, Pt/NGDY exhibits the largest current density of 154.2 mA cm−2, which is about 67 and 9 times larger than that of NGDY (2.3 mA cm−2) and commercial 20 wt% Pt/C (17.1 mA cm−2), respectively. Besides, Pt/NGDY exhibited a more negative onset potential (−0.6 V) than Pt/C (−0.4 V), indicating that Pt/NGDY needs less energy to drive the MOR than Pt/C. These results demonstrate the high intrinsic activity of Pt/NGDY. Such excellent specific activity is even better than that of previously reported Pt-based and other electrocatalysts (Figure 4b, Table S2, Supporting Information). The mass activities (MOR current normalized by the loading mass of Pt) of the samples were further obtained and are shown in Figure 4c. As expected, Pt/NGDY possesses a significantly higher mass activity of 1449 mA mgPt−1 than commercial 20 wt% Pt/C (300 mA mgPt−1) and previously reported Pt-based electrocatalysts in alkaline conditions (Figure 4d). The operational stability of Pt/NGDY was determined by the chronoamperometry test (CAT) at room temperature in 1 m CH3OH + 1 m KOH aqueous solution. As shown in Figure 4e, Pt/NGDY shows much better stability, with a current density retention of >74% over 50 000 s of continuous operation, than 20 wt% Pt/C, which retains only 43% (Figure S9, Supporting Information). The current loss of Pt/NGDY may be due to the adsorption of methanol molecules, which leads to poisoning of the catalyst. HAADF characterizations of the Pt/NGDY samples after the long-term stability test show that the Pt atoms remain isolated and dispersed on the surface of NGDY without any aggregation, and all elements are uniformly dispersed in NGDY (Figures S10 and S11, Supporting Information). The specific activity of Pt/NGDY was found to change with the variation of the Pt loading. As the deposition time was further increased, Pt clusters and Pt nanoparticles could be observed, and the specific activity of Pt/NGDY decreased (Figure S12, Supporting Information). 
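As a rough illustration of the two normalizations quoted above (specific activity per electrode area, mass activity per milligram of Pt), the following minimal Python sketch converts a measured CV peak current into both figures of merit. All numerical values are hypothetical placeholders, not the measured data of this work.

```python
# Hypothetical example of the two MOR activity normalizations referred to above.
# The input numbers are illustrative placeholders only.

peak_current_mA = 30.8        # hypothetical CV peak current, mA
electrode_area_cm2 = 0.20     # hypothetical geometric electrode area, cm^2
pt_loading_mg = 0.021         # hypothetical Pt mass on the electrode, mg

specific_activity = peak_current_mA / electrode_area_cm2   # mA cm^-2
mass_activity = peak_current_mA / pt_loading_mg            # mA mg_Pt^-1

print(f"specific activity: {specific_activity:.1f} mA cm^-2")
print(f"mass activity:     {mass_activity:.0f} mA mg_Pt^-1")
```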
These results reveal that a suitable Pt loading in Pt/NGDY can lead to greater enhancement of activity and durability. The MOR activities of Pt/NGDY were also determined in 0.5 m H2SO4 and 1 m methanol electrolytes. As shown in Figure 4f, Pt/NGDY displays a higher peak current density (29 mA cm−2) than Pt/C, NGDY, and CC. The stability of Pt/NGDY was examined at −0.65 V. It was observed that the activity still retains 21 mA mgPt−1 even after 7600 s (Figure 4g). The mass activity of Pt/NGDY (296 mA mgPt−1) is larger than that of Pt/C (10 mA mgPt−1) (Figure S13, Supporting Information). We next investigated the MOR performance and long-term stability of the samples in 1 m Na2SO4 and 1 m methanol electrolytes. As expected, Pt/NGDY still possesses a higher MOR activity, with a current density of 22 mA cm−2 (110 mA mgPt−1), than Pt/C (current density of 11 mA cm−2 and mass activity of 14 mA mgPt−1) (Figure 4h and Figure S14, Supporting Information). After continuous electrocatalysis, the mass activity of Pt/NGDY retained 48 mA mgPt−1 (Figure 4i). These findings demonstrate that the N-doped GDY-based zero-valent Pt atomic catalyst can not only maximize the catalytic activity but also improve the long-term stability. The anti-CO poisoning ability is another important indicator for evaluating the performance of a catalyst. The anti-poisoning ability toward carbonaceous species can be assessed by quantitatively comparing the ratio of the peak current densities in the forward scan (j f ) and the backward scan (j b ). [12][13][14] As expected, the j f /j b ratio values of Pt/NGDY are 4.2-, 1.63-, and 1.26-fold larger than those of commercial 20 wt% Pt/C in acidic, alkaline, and neutral solutions, revealing the better anti-CO poisoning ability of Pt/NGDY compared to commercial 20 wt% Pt/C and implying that methanol is effectively oxidized. This was also confirmed by CO stripping voltammetry experiments. As shown in Figure S15 (Supporting Information), commercial 20 wt% Pt/C presents an oxidation peak at −0.51 V (vs SCE) due to the electrooxidation of CO on Pt/C. [14] Remarkably, the oxidation peak of Pt/NGDY starts at a more negative potential of around −0.64 V (Figure 5j), a 130 mV decrease in the onset potential compared to Pt/C. These results demonstrate that the excellent MOR performance of Pt/NGDY over a wide pH range can be attributed to the catalyst's ability to efficiently reduce the CO binding strength, consistent with the DFT calculations. We further studied the high coverage of Pt/NGDY from the energetic mapping. For the initial coverage of Pt on NGDY, the energy variation is very large due to the distinct energy costs of the many potential anchoring sites (Figure 5a). As the coverage increases, the energy variation becomes more stable and reaches its smallest point at a coverage of ≈70%, indicating the limit for the most stable structures of high Pt coverage. This is because higher coverage leads to a more even distribution of Pt atoms, lowering the overall energy. When the coverage increases beyond 70%, the additional Pt atoms occupy more unstable sites, leading to a holistically unstable structure of Pt/NGDY. For the different Pt coverage structures, the ratio of the most energetically preferred anchoring sites also correlates with the stability of the catalyst (Figure 5b). 
As the Pt atoms start covering the NGDY, Pt atoms prefer to occupy the most stable anchoring site first. When the most stable positions are occupied, the occupation of Pt on other less stable anchoring sites results in the overall increases of the structure. Thus, the high coverage of the Pt on NGDY is achieved based on the balance between the Pt atoms distribution and stability. In addition, the energetic reaction pathways have been demonstrated for both MOR and CO poisoning (Figure 5c). For the MOR process, the overall reaction process exhibits the exothermal trend with an energy release of 3.52 eV, which supports a much stronger thermodynamic trend than the pristine Pt (111) surface. Such an energetically favorable trend is ascribed to the enhanced Pt coverage with high electroactivity to facilitate the electron transfer efficiency and intermediate reactions. Meanwhile, the Pt (111) surface demonstrates an evident barrier of 0.52 eV for the initial cleavage of the C-H bond, which significantly lowers the MOR efficiency. Meanwhile, the pristine Pt catalyst usually suffers from CO poisoning, which is a key factor to influencing the MOR performances (Figure 5d). Although the initial dehydrogenation of methanol requires a subtle barrier of 0.13 eV, the Pt (111) surface display an energetically favorable trend for the CO formation, leading to the inevitable occurrence of catalyst poisoning. In comparison, the high coverage of Pt atom on NGDY substantially suppresses the CO poisoning effect. Attributed to the varied Pt active sites on Pt/NGDY, the multi-Pt sites are able to display stronger suppression effect to prevent the CO poisoning than the mono Pt atom anchoring on GDY or the pristine Pt catalyst. Such an integrated structure causes a high energy barrier of 0.74 eV to form CO from dehydrogenation of CHO* on Pt/NGDY, determining the high performance of MOR and durability of the Pt/NGDY. Conclusion In summary, we have demonstrated the atomic catalyst of Pt/NGDY is an almost perfect catalyst, especially as an efficient catalyst in MOR reaction. The dependent Pt/NGDY catalyst exhibits superior performance both high specific activity and mass activity in wide pH conditions. For example, the mass activity (specific activity) of Pt/NGDY toward MOR in alkaline, acidic, and neutral conditions are 1449 mA mg Pt −1 (154 mA cm −2 ), 296 mA mg Pt -1 (29 mA cm −2 ) and 110 mA mg Pt -1 (22 mA cm −2 ), which are 5, 29, and 8 times higher than Pt/C, respectively. And the Pt/NGDY catalysts also exhibited robust stability and resistant CO poisoning in MOR. Experimental and theoretical results reveal that the high Pt loading on NGDY is achieved due to the loss of symmetry and electronic homogenous in NGDY, which allows more Pt anchoring on different sites. The synergistic contributions of different Pt sites with varied electroactivity not only promote the electron transfer towards intermediates for MOR but also improve the suppression of the CO poisoning for enhanced durability. This strategy provides an effective way for the synthesis of robust atom catalysts with high catalytic performance for MOR. Supporting Information Supporting Information is available from the Wiley Online Library or from the author.
5,321.4
2022-04-07T00:00:00.000
[ "Chemistry", "Materials Science" ]
Prolotherapy agent P2G is associated with upregulation of fibroblast growth factor-2 genetic expression in vitro Purpose Osteoarthritis (OA) is a prevalent, progressively degenerative disease. Researchers have rigorously documented clinical improvement in participants receiving prolotherapy for OA. The mechanism of action is unknown; therefore, basic science studies are required. One hypothesized mechanism is that prolotherapy stimulates tissue proliferation, including that of cartilage. Accordingly, this in vitro study examines whether the prolotherapy agent phenol-glycerin-glucose (P2G) is associated with upregulation of proliferation-enhancing cytokines, primarily fibroblast growth factor-2 (FGF-2). Methods Murine MC3T3-E1 cells were cultured in a nonconfluent state to retain an undifferentiated osteochondroprogenic status. A limitation of MC3T3-E1 cells is that they do not fully reproduce primary human chondrocyte phenotypes; however, they are useful for modeling cartilage regeneration in vitro due to their greater phenotypic stability than primary cells. Two experiments were conducted: one in duplicate and one in triplicate. Treatment consisted of phenol-glycerin-glucose (P2G, final concentration of 1.5%). The results were assessed by quantitative Reverse Transcriptase-Polymerase Chain Reaction (qRT-PCR) to detect mRNA expression of the FGF-2, IGF-1, CCND-1 (Cyclin-D), TGF-β1, AKT, STAT1, and BMP2 genes. Results P2G - treated preosteoblasts expressed higher levels of FGF-2 than water controls (hour 24, p < 0.001; hour 30, p < 0.05; hour 38, p < 0.01). Additionally, CCND-1 upregulation was observed (p < 0.05), possibly as a cellular response to FGF-2 upregulation. Conclusions The prolotherapy agent P2G appears to be associated with upregulation of the cartilage cell proliferation enhancer cytokine FGF-2, suggesting an independent effect of P2G consistent with clinical evidence. Further study investigating the effect of prolotherapy agents on cellular proliferation and cartilage regeneration is warranted. The development of therapy that stimulates cartilage regeneration and controls pain is the subject of active research. A growing number of clinicians across several specialties carry out an injection therapy known as prolotherapy, a term coined from "proliferative" and "therapy" [7]. The current protocols, which were developed in the 1950s [29], comprise multiple small-volume injections of therapeutic solution, usually either hypertonic dextrose (D-glucose) or phenol-glucose-glycerin (P2G), at ligament and tendon entheses and in adjacent joint spaces [51]. Early clinical data [50] and recent clinical trials and meta-analysis data [53] support reduced pain and stiffness and improved function in patients undergoing this treatment. However, the mechanism of action is not well understood. Early researchers observed that animal tissue was hypertrophied following prolotherapy [29]. Physician scientists hypothesize a multifactorial mechanism of action [51], with one specific hypothesis positing that prolotherapy slows OA progression by stimulating cartilage regeneration [31]. This hypothesis is supported by a study of 6 OA patients that used pre-and postarthroscopic imaging and histological staining to show clinical evidence suggesting that HD stimulates joints to regrow cartilage [55]. Clinical researchers have called for more basic science studies on prolotherapy, especially regarding potential cellular and molecular mechanisms of action [51,53]. Freeman et al. 
[25] established the field of in vitro prolotherapy with a viability assay and found that P2G induces the proliferation of MC3T3-E1 cells. Another research team used flow cytometry to reproduce the finding that P2G induces the proliferation of MC3T3-E1 cells [34]. Consistent with previous in vitro research on prolotherapy, our study utilized the MC3T3-E1 cell line. Established in 1981, this is a murine nontransformed cell line derived from newborn mouse calvaria [17,39,44,49,54]. In addition to the specific study of prolotherapy [25,34], the MC3T3-E1 cell line has been used more generally to study skeletal tissue regeneration [5,38,42,58]. The present study expands the newly emerging field of in vitro prolotherapy by being the first to investigate the molecular mechanisms by which P2G activates cell proliferation, as shown in previous research [25,34]. The primary focus is fibroblast growth factor-2 (FGF-2) because it facilitates cell proliferation [56]. Using an in vitro model, Chien and colleagues showed that murine cells synthesize FGF-2 [11]. Researchers have further shown in rabbits [15,36], rats [59], and mice [33] that FGF-2 changes a cell's gene expression profile from a state of low/nonproliferation to one of increased proliferation. As a downstream marker for proliferation, the current study quantifies mRNA expression of the cell cycle gene Cyclin D1 (CCND-1), which promotes transition from G1 to S stage of the cell cycle [1]. Given the existing evidence for FGF-2 as a factor involved in proliferation, we hypothesize that P2G upregulates FGF-2 and subsequently Cyclin D1. For a broader understanding of the possible mechanisms of P2G as a prolotherapy agent, we also investigated additional genes related to proliferation and regeneration (IGF-1, TGF-B1, BMP-2 and STAT-1). Experiments To identify the molecular mechanisms of P2G-induced cell proliferation, two experiments were conducted. Genes targeted in the experiments were identified via a systematic MEDLINE search. The list was narrowed to a primary candidate (FGF-2), a downstream indicator (CCND-1), and four exploratory genes based on published literature and expert recommendations on the subject matter (see Table 1). RPL13A, rather than GAPDH and beta-actin, was utilized as the reference gene for normalizing quantitative Reverse Transcriptase-Polymerase Chain Reaction (qRT-PCR) gene expression data. This choice is supported by several criteria, including (1) a potential effect of experimental treatment (P2G) on housekeeping gene mRNA expression levels [40], (2) an algorithmic analysis of RPL13A, GAPDH, and beta-actin sample performance [4], and (3) published literature indicating that RPL13A is one of the best reference genes for cartilage [6]. We conducted a preliminary experiment in duplicate that demonstrated the usefulness of an experimental protocol from an existing in vitro prolotherapy study [20] and provided independent results. Our primary experiment, conducted in triplicate, utilized a similar approach. The hour 0 measurement of mRNA expression served as a baseline control. Cells were treated for hour 24 with either P2G or cell culture grade water. Cellular mRNA expression was measured at the hour 24 treatment conclusion and then again at hours 30 and 38. mRNA expression of water-treated control cells was also measured in triplicate at hours 0, 24, 30, and 38. The two experiments provided very similar results, and this manuscript only reports the results from the primary watercontrolled experiment. 
Cell line MC3T3-E1 (ATCC Cat #CRL-2594, Subclone 14), a murine nontransformed cell line, was used to study P2G-induced cell proliferation in vitro. The cells were grown as previously reported [25] in a nonconfluent state to allow them to remain undifferentiated osteochondroprogenitors [49]. Cell culture Following Freeman [ Messenger RNA extraction and measurement To examine mRNA levels, cells were lysed, RNA was extracted (1 μg), cDNA was synthesized, and quantitative PCR was carried out using equal amounts of cDNA per sample to measure the expression levels of genes potentially involved in cartilage anabolism. The Qiagen RNeasy Mini kit was used to isolate the mRNA, and DNA levels were quantified using Applied Biosystems High Capacity cDNA Reverse Transcription Kit for cDNA synthesis and SYBR Green. cDNA was amplified by polymerase chain reaction using specific primers (Table 1), and cDNA levels were quantified using a Roche LightCycler 480 II. Statistical analysis Mean differences were compared utilizing statistical techniques in accord with the distributional characteristics of the data. For FGF-2, Welch's t-tests were employed to compare treatment and control groups at each time point because the data were approximately normally distributed (Kolmogorov-Smirnov test with pvalue = 0.98) and the equality of variance assumption was not reasonable. For IGF-1, two-way ANOVA was employed because the data were approximately normally distributed (Kolmogorov-Smirnov test with p-value = 0.6028) and showed relatively equal variances across groups. The preliminary experiment suggested that P2G treatment is associated with upregulation of FGF-2 in osteochondroprogenitors as early as hour 24. Accordingly, a directional test was performed in the primary watercontrolled experiment at hour 30 to investigate whether P2G-treated osteochondroprogenitors exhibit upregulation of a downstream gene regulating cell proliferation (CCND-1) relative to the control. Welch's t-test was employed for CCND-1 because the data were approximately normally distributed (Kolmogorov-Smirnov test with p-value = 0.8531) and the equality of variance assumption was not reasonable. Exploratory analyses of TGF-β1, BMP-2, and STAT-1 using two-tailed Welch's t-tests were also conducted to detect whether treated cells display higher or lower gene expression at any time point. In all cases, the level of significance, 0.05, refers to two-sided probability except for the prespecified directional test of CCND-1 at hour 30. Study statistics were conducted with RStudio (version 1.2.1335) and the Windows (10, version 1903) platform. RStudio was also used to generate graphics and Adobe Illustrator was applied to layer in legends and demarcations of statistical significance. Results P2G-induced stress is associated with increased FGF-2 mRNA expression and Cyclin D upregulation Figure 1a shows that in Experiment 2, P2G-treated osteochondroprogenitors exhibited higher levels of FGF-2 gene expression relative to the water control at hour 24 with a fold ratio of 4.63 (p < 0.001), at hour 30 with a fold ratio of 2.74 (p < 0.05), and at hour 38 with a fold ratio of 5.33 (p < 0.01). The hour 30 treatment/control 95% confidence interval error bars overlapped, but the difference continued to be statistically significant (p < 0.05) [16,41]. Figure 1b presents evidence that osteochondroprogenitors treated with P2G display upregulation of mRNA expression of CCND-1, also known as Cyclin D (p < 0.05). 
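To make the quantification and testing steps described in the methods above concrete (relative expression normalized to the RPL13A reference gene, then Welch's t-tests between P2G-treated and water-control samples), here is a minimal Python sketch. The Ct values are hypothetical placeholders, not the study's data, and the variable names are illustrative only.

```python
# Minimal sketch of 2^-dCt relative expression against a reference gene and a
# Welch's t-test between treatment and control, as described in the methods.
# All Ct values below are hypothetical placeholders.
import numpy as np
from scipy import stats

def rel_expression(ct_target, ct_reference):
    """Relative expression via 2^-(Ct_target - Ct_reference)."""
    return 2.0 ** -(np.asarray(ct_target) - np.asarray(ct_reference))

# Hypothetical triplicate Ct values at a single time point (target vs RPL13A)
fgf2_p2g   = rel_expression([22.1, 22.3, 22.0], [17.5, 17.6, 17.4])
fgf2_water = rel_expression([24.3, 24.5, 24.2], [17.5, 17.7, 17.6])

fold_change = fgf2_p2g.mean() / fgf2_water.mean()
t_stat, p_value = stats.ttest_ind(fgf2_p2g, fgf2_water, equal_var=False)  # Welch

print(f"fold change (P2G vs water): {fold_change:.2f}, Welch p = {p_value:.3g}")
```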
Although P2G-treated osteochondroprogenitors did not exhibit an upregulation of CCND-1 at hour 24, by hour 30, higher levels of CCND-1 relative to the control were detected, with a fold ratio of 2.23 (p < 0.05). CCND-1 gene expression returned to normal levels by hour 38. P2G-induced stress is associated with changes in IGF-1 mRNA expression As illustrated in Fig. 2, P2G-treated osteochondroprogenitors expressed lower mean relative IGF-1 mRNA levels than hour 0 untreated baseline cells (hour 24, p < 0.01; hour 30, p < 0.01; hour 38, p < 0.001). Additionally, Fig. 2 shows diminished IGF-1 expression in water-treated cells across all time points (hour 24, p < 0.001; hour 30, p < 0.001; hour 38, p < 0.001). Finally, the size of the error bars in Fig. 2 is relatively consistent, which favors pooled testing for a more reliable and precise test. Additionally, two-way ANOVA with interaction terms revealed no significant interaction between hour and treatment (data not shown). In other words, a constant treatment effect over time, starting at hour 24 and persisting through hours 30 and 38, was observed. Accordingly, the more appropriate statistical test is a two-way ANOVA without a main effect for time [hour]. This test indicated a highly significant effect for treatment (1.82-fold increase; p < 0.001, not shown). When the preplanned, more fully specified two-way ANOVA with an interaction term for time-specific comparisons between P2G and water control was fit to the data, the model produced estimates that included a 2.47-fold increase of IGF-1 expression for P2G relative to the water control. P2G-induced stress possibly upregulates TGF-β1 but not BMP-2 or STAT-1 gene expression Figure 3 indicates that at hour 30, P2G-treated osteochondroprogenitors exhibited higher levels of TGF-β1 gene expression relative to the water control, with a fold ratio of 1.26 (p < 0.001). In contrast, at hours 24 and 38, P2G-treated osteochondroprogenitors exhibited expression levels of TGF-β1 similar to those in the water control. Moreover, the water-controlled experiment did not yield any evidence of a significant difference in BMP-2 or STAT-1 gene expression between the treatment and control at 24, 30, or 38 h (data not shown). Discussion This study provides evidence that when P2G is applied to MC3T3-E1 cells, the treatment activates FGF-2-specific proliferation-related gene expression, changes neither BMP-2 nor STAT-1 expression, and produces time-dependent activation of IGF-1 and TGF-β1 gene expression patterns. The finding that P2G upregulates FGF-2 is directly supported by both our preliminary and primary experiments, each of which shows that P2G treatment is followed by increased levels of FGF-2 mRNA expression. Further supporting this finding is the experimental result indicating that CCND-1 mRNA levels are increased in P2G-treated osteochondroprogenitors relative to a water control. CCND-1 expression advances cells through the G1 checkpoint of the cell cycle, accelerating cell proliferation [47]. Researchers have shown that direct FGF-2 application to cells increases CCND-1 expression through the MAPK pathway [23,24]. Others have shown, both in vitro [56] and in vivo [37], that FGF-2 induces cell proliferation, suggesting that P2G-induced upregulation of FGF-2 mRNA may lead to cell proliferation. The finding of CCND-1 upregulation after FGF-2 upregulation at 24 h aligns with the following previously published research. 
CCND-1 upregulation suggests that P2G-treated osteochondroprogenitors proliferate between 33 and 45 h after treatment initiation, first through FGF-2 and then CCND-1 [34]. The return of CCND-1 levels to normal at hour 38 is consistent with the long-established finding that CCND-1 is highly regulated to prevent uncontrolled cell division. A clinical study showing that prolotherapy stimulates cartilage growth [55] highlights its potential value for future research regarding the role of FGF-2 and CCND-1 in inducing proliferating cells to deposit ECM to heal OA.

Fig. 2 Experimental data indicate that preosteoblasts' mean relative IGF-1 expression at the pre-treatment baseline is higher than either P2G- or water-treated preosteoblasts' mean relative IGF-1 expression at each time point (regression with indicator variables). Experimental data indicate that treated preosteoblasts have a higher mean relative IGF-1 expression than water-treated controls (fold increase of 1.82, two-way ANOVA that aggregates the three biological replicates from each time point into an overall study arm and as a consequence precisely estimates the standard error). Significance of each group, as shown with asterisks, refers to the comparison with the hour 0 control. The solid line is the normalized mean of hour 0 pre-treatment baseline measurements. Control refers to study arms treated with water. The graph displays qRT-PCR mRNA expression. NS p > 0.05, * p < 0.05, ** p < 0.01, *** p < 0.001

Fig. 3 Exploratory investigations suggest that P2G possibly affects preosteoblasts' TGF-β1 mRNA expression. Experimental data, which include water controls, suggest that P2G may upregulate TGF-β1 gene expression at hour 30 (Welch's t-tests). The solid line is the normalized mean of hour 0 pre-treatment baseline measurements. Control refers to study arms treated with water. The graph displays qRT-PCR mRNA expression. NS p > 0.05, * p < 0.05, ** p < 0.01, *** p < 0.001

The prolotherapy agent P2G may induce chondrocytes to upregulate FGF-2, which leads to downstream upregulation of CCND-1, inducing cells to proliferate, a finding previously reported in two independent studies [25,34]. Overall, these findings suggest that FGF-2-mediated activation of CCND-1 is a biological mechanism by which a prolotherapy agent induces cell proliferation. This basic science finding provides evidence to support preclinical prolotherapy research that explores potential processes by which a prolotherapy agent may induce cell proliferation and cartilage regeneration in models that are physiologically closer to humans [48]. The study results also suggest that P2G induces an early response and time-dependent effect on IGF-1 gene expression. The effect occurs within the context that osteochondroprogenitors, regardless of treatment with P2G or water, exhibit decreased levels of IGF-1 compared to the untreated baseline. Understanding this finding of attenuated IGF-1 expression may require a different research design with multiple controls at each time point. At the 24-, 30-, and 38-h time points, P2G-treated cells expressed more FGF-2 and IGF-1 mRNA than water-treated (control) cells (hour 24: p < 0.01, hour 30: p = 0.0576, hour 38: p = 0.0977). Moreover, IGF-1 mRNA expression levels at hour 38 were lower than those at hour 24, which is consistent with prior literature showing that IGF-1 acts as an immediate early gene in its osteogenic role [43]. 
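For reference, the pooled IGF-1 comparison discussed above (a two-way ANOVA with treatment and time factors plus their interaction) can be sketched minimally as follows. The expression values are hypothetical placeholders, not the study data, and the model call is only an illustration of the analysis type named in the text.

```python
# Minimal sketch (hypothetical data) of a two-way ANOVA of the kind used for
# IGF-1: relative expression modeled with treatment and hour factors and their
# interaction term.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "expression": [0.52, 0.48, 0.55, 0.30, 0.28, 0.33,    # hour 24
                   0.50, 0.47, 0.51, 0.27, 0.29, 0.31],   # hour 30
    "treatment":  (["P2G"] * 3 + ["water"] * 3) * 2,
    "hour":       [24] * 6 + [30] * 6,
})

model = ols("expression ~ C(treatment) * C(hour)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction term
```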
Furthermore, Hughes-Fulford and Li [33], who also used the MC3T3-E1 cell line, found that direct FGF-2 treatment suppresses IGF-1 mRNA expression. This suggests that P2G-induced FGF-2 upregulation may be responsible for the suppression of IGF-1 mRNA expression at hours 30 and 38. In future studies, knocking down FGF-2 mRNA with RNA interference and assaying changes in IGF-1 may be of value to determine the effect of P2G treatment on IGF-1 expression. The evidence from our current study likely indicates that FGF-2, rather than IGF-1, is the more important contributor to cell proliferation. The results of this study indicate that P2G may induce a very short period of increased TGF-β1 gene expression in osteochondroprogenitors. Ekwueme and colleagues [20] studied TGF-β1 protein expression, suggesting that P2G negatively regulates TGF-β1 signaling. The difference in findings may be the result of a timing/sampling difference in protocols. This current study does not provide evidence that P2G affects expression of BMP-2, a cytokine known to increase cartilage repair under certain conditions and increase ossification under others [52]. STAT-1, which is known to be involved in the global immune response [28], does not seem to be affected by P2G treatment under the study conditions which are focused on the local environment. This study has limitations. The most relevant is the use of the murine MC3T3-E1 cell line, which is not a human primary chondrogenic cell line. Nonetheless, MC3T3-E1 cells are used for modeling cartilage regeneration and are considered reliable because of their greater phenotypic stability compared to primary cells [17] and retention of an osteochondroprogenitor phenotype in culture [30,54]. The use of the MC3T3-E1 cell line to study the direct effect of P2G on the expression of proliferation-related genes aligns current results to earlier in vitro prolotherapy studies that utilized MC3T3-E1 cells to directly study proliferation [25,34]. Examples of recently published articles using the MC3T3-E1 cell line for research on cartilage include those by Li et al. [44], Kang et al. [35], and Cai et al. [10]. As an osteochondroprogenitor, MC3T3-E1 cells represent an earlier developmental stage than chondrocytes, the unique cellular component of cartilage [2,26]. As chondrocytes are more fully differentiated, the environment may not be as influential in inducing chondrocytes to proliferate; therefore, additional experiments are required to definitively confirm that P2G upregulates FGF-2 in chondrocytes. Prolotherapy studies in vitro also do not entirely reproduce the entire joint environment in a tissue culture dish. For example, MC3T3-E1 cells do not involve any inflammatory stimuli. For this reason and others, in vitro prolotherapy studies will not be able to fully reproduce the in situ environment of an osteoarthritic joint [18,27,57]. Regardless, in vitro studies play a vital role in demonstrating cell-type-specific responses. Indeed, the current study on murine cells is an important precursor to mechanistic research with complementary transgenic, knockin, and knockout murine models, preferably humanized, which can help to elucidate the mechanisms by which prolotherapy agents affect gene expression in a complex immune-mediated cellular environment [12]. Conclusions The standard of care for OA is supportive and focuses on symptomatic relief [18,27,57] rather than slowing or reversing cartilage degradation. 
This study found that P2G is associated with upregulation of FGF-2 mRNA in osteochondroprogenitors. This is consistent with clinical studies suggesting that prolotherapy stimulates the regeneration of cartilage [55]. Further analyses investigating the effect of prolotherapy agents on cellular proliferation and cartilage regeneration in different cell types and model systems are warranted.
4,354
2020-12-01T00:00:00.000
[ "Biology", "Medicine" ]
Fast and Efficient Image Novelty Detection Based on Mean-Shifts Image novelty detection is a repeating task in computer vision and describes the detection of anomalous images based on a training dataset consisting solely of normal reference data. It has been found that, in particular, neural networks are well-suited for the task. Our approach first transforms the training and test images into ensembles of patches, which enables the assessment of mean-shifts between normal data and outliers. As mean-shifts are only detectable when the outlier ensemble and inlier distribution are spatially separate from each other, a rich feature space, such as a pre-trained neural network, needs to be chosen to represent the extracted patches. For mean-shift estimation, the Hotelling T2 test is used. The size of the patches turned out to be a crucial hyperparameter that needs additional domain knowledge about the spatial size of the expected anomalies (local vs. global). This also affects model selection and the chosen feature space, as commonly used Convolutional Neural Networks or Vision Image Transformers have very different receptive field sizes. To showcase the state-of-the-art capabilities of our approach, we compare results with classical and deep learning methods on the popular dataset CIFAR-10, and demonstrate its real-world applicability in a large-scale industrial inspection scenario using the MVTec dataset. Because of the inexpensive design, our method can be implemented by a single additional 2D-convolution and pooling layer and allows particularly fast prediction times while being very data-efficient. Introduction The ability to detect unusual patterns in images is an important capability of the human vision system. Humans can differentiate between expected variance in the data and outliers after having only seen examples of normal instances. In this work, we address the computer vision approach to this problem, usually known as image novelty detection. Novelty detection is related to outlier detection in the sense that both methods try to detect anomalies. However, while the latter is totally unsupervised, novelty detection has access to a training dataset consisting of clean normal reference data, and, hence, is an instance of weakly-supervised learning. The output of such an algorithm is a scoring function (anomaly score) that can be used to grade test data from inlier (normal) to outlier (novel) (e.g., [1]). Since the anomaly score is computed for a single input example, it can also be used for binary classification tasks. The major difficulty of such a model is that the decision boundary is not robust against overlapping between inlier and outlier distributions. This motivates the main idea of the ensemble approach to novelty detection: representing both training and test images as ensembles of image patches [2]. Instead of scoring a single test example with respect to the normal distribution, the ensemble approach first transforms the test example into a ensemble of patches and checks the test and training ensemble against each other, which improves the robustness of the decision process. There is a wide range of methods for testing if two samples originate from the same distribution. Here, we follow our previous work [2] and use the the Hotelling T 2 test [3] for assessing the mean-shift • Global novelty is spread across the entire image, e.g., when separating dog images from cat images. 
• Local novelty appears only in some parts of the image whereas the other parts of the image are totally normal, e.g., detecting tiny manufacturing defects in industrial visual inspection systems. (b) Figure 1. Detection of (a) locally concentrated novelty and localization on the MVTec dataset and (b) global anomalies on the CIFAR-10 dataset. All shown examples are from the outlier test set and the red color highlights the location of the novelty in the images. The overlayed red score map is computed using the µshift anomaly score (cf. Equation (11)) without applying the spatial max-operator, such that the model output is a 2D grid of anomaly scores. These scores are mapped to a red heat map and resized to match the input resolution using bilinear interpolation. Hence, red areas correspond to potentially anomalous regions. We identified the following practical principles for successful image novelty detection using mean-shifts: (1) First, as anomalies mostly consist of patterns not available in the normal class, a rich feature space, such as a pre-trained neural network, needs to be used. (2) No dimension reduction based on the inlier data should be applied, as the inlier data occupies only a small portion of the feature space, and projecting onto its subspaces causes the anomalies to overlap with the normal data. Additionally, (3), the spatial size of the expected anomalies needs to be correctly expressed in terms of the hyperparameters, i.e., patch size and local mean-shift region, as a small local novelty cannot influence the mean shift sufficiently in too large averaging areas, mainly, because the distributions overlap only in insufficiently small regions. Contributions In this work, we propose a non-expensive algorithm (https://github.com/matherm/ deep-mean-shift, 4 September 2022) based on the Hotelling T 2 test for image novelty detection that is stacked on top of a standard pre-trained neural network, such as EfficientNet [5] or Vision Image Transformer (ViT) [6]. Using an upstream pre-trained neural network induces a rich feature space with a diverse set of pre-learnt patterns and accommodates the previously mentioned principle (1). In contrast to our previous work [2], we follow principle (2) and use a full-rank covariance matrix for modelling the neural network features, instead of relying on a compressed low-rank approximation which improves performance significantly. Further, to fulfil principle (3), we generalize the existing ensemble approach to novelty localization and add a hyperparameter that controls the expected spatial size of the anomalies which has a strong impact on overall performance in practical applications. We show in extensive experiments that our approach not only achieves comparable results to existing state-of-the-art approaches, but is also applicable to a large-scale industrial inspection scenario. Further, due to its simple architecture, the model has faster prediction times compared to existing approaches. Lastly, because we only need to estimate the mean of the training dataset, our method is very data-efficient and reaches 90% AUC with only 10 non-defective examples of the MVTec dataset [7]. Figure 1 shows examples from the evaluated datasets. Related Work The use of limited supervision for image classification has been studied extensively [8,9]. Some approaches (e.g., [10]) consider the unbalanced setting where a small number of anomalous examples is given, but many examples are given from the normal class. 
However, these approaches use additional supervision that is not used in our method. Our work relates more closely to anomaly detection approaches that use limited to weak supervision [1]. During training, we only use examples from the normal class and, therefore, consider our method an instance of novelty detection, a semi-supervised version of anomaly detection, sometimes also referred to as one-class classification [11]. There are different approaches to the problem in general and we, therefore, group the related methods into the categories reconstruction-, classification-, distribution-based, and self-supervised methods. Reconstruction-based methods. These methods derive a data-driven encoder and decoder from the reference data and expect the anomalous data to have a higher reconstruction error compared to normal data. However, such models are mostly based on unconstrained compression and, therefore, often oversee novel patterns, resulting in poor performance in practice [12]. Classification-based methods. These methods attempt to model a discriminating hyperplane between data regions of normal data and those of anomalous data [13], without necessarily using compression. Such methods often perform well in practice. However, their main limitation arises from the fact that the hyperplane can only be estimated accurately in regions occupied by the training examples [14]. The recently proposed Mahalanobis method [12] tries to heal the problem by negating the estimation process by using the null space of a pre-trained neural network feature space instead. Distribution-based methods. These methods are another branch of novelty detection that model the distribution of the normal data. Such methods are often built around autoencoders [15] or normalizing flows [16]. However, it has been argued and empirically found that distribution-based methods that fit a flexible parametric distribution with the maximum likelihood objective may not be well-suited for detecting out-of-distribution data [17]. Self-supervised methods. These methods try to improve distribution-based methods by replacing the data likelihood with a proxy classification objective, such that classifying normal data based on that objective allows for a good separation of normal and anomalous data. These techniques are related to non-linear Independent Component Analysis (ICA) using an auxiliary variable, such as a time segment, a generalized non-stationary variable, or synthetic labels [18,19]. A successful application of this theory to images is to predict image rotations [20,21]. The proxy objective is given by first rotating the image by an arbitrary angle and then trying to predict that angle using a deep convolutional neural network. However, this strategy only works well for aligned objects with a natural orientation, where the rotation dependence is strong enough to learn a good rotation predictor. In the literature, there are mostly specialized algorithms for either global or local novelty detection, and, hence, there are different methods superior within each scenario. For global novelty, particular rotation prediction [20], Deep SVD [13], and Deep Robust One Class Classifier (DROCC [14]) excel. The rotation prediction method is a self-supervised schemes that solve a proxy classification problem for feature learning and uses a softmaxbased anomaly score. 
While the Deep SVD is a deep-learning-based version of the singular value decomposition (SVD), the DROCC method uses a nearest-neighbor approach on pre-trained neural network features. For local defect detection, a recent method named PatchCore [22] achieves almost total recall on the MVTec challenge [7] using a greedy algorithm for dataset reduction based on coreset theory [23]. It is also based on an underlying nearest-neighbor search in the feature space of a pre-trained EfficientNet-B4, but uses a modified distance as anomaly score. CutPaste [24] is a self-supervised method specially designed for local novelties. It is similar to rotation prediction [20] as it also solves a self-supervised surrogate classification problem. However, instead of predicting the rotation of the input example, it cuts out small patches and pastes them at another image location to create the contrastive dataset. The anomaly score is the class probability of being an altered image.

Our work is most related to recent works that model the internal distribution of images [16]. However, unlike these approaches, our model benefits from the estimation of only low-order cumulants of image patches, i.e., mean and covariance, instead of a parametric model of the full density, which is a simpler task in general. By choosing non-linear basis functions for representing the patches, here a pre-trained deep neural network, our method adds a relatively small computational overhead compared to the method based on raw pixel mean-shifts [2], but improves the detection and localization performance effectively. There are two similar approaches named PatchSVDD [25] and PaDiM [26] that we want to relate to briefly. PatchSVDD optimizes a deep spherical embedding for extracted image patches. As this is based solely on the reference data, it implicitly reduces dimensionality, and novelties with orthogonal patterns are projected onto the null space of the normal distribution. This decreases the performance of the method compared to our approach, which does not involve any dimension reduction. PaDiM is similar to ours and also computes full-rank Mahalanobis distances. However, it does not benefit from extracted patch ensembles, which turned out to be performance-critical in our tests. Therefore, it is a special case of ours where the size of the extracted patch ensemble is one and the covariance matrix is not shared across locations.

Method

The central part of our algorithm is the µshift(x) anomaly score [2], which is based on the Hotelling T² test [3]. This test is formulated on the basis of two samples of two distributions and measures the mean-shift between them. Therefore, it is a multivariate extension of the well-known Student's t-test. Here, we use ensembles of image patches as samples and measure their mean-shift in some specified feature space Φ. In this section, we first describe the data representation and the mean-shift detection in its classical form. On a high level, the first step is the extraction of patches {I_0(s), . . ., I_N(s)} from the normal training images {I_0, . . ., I_N}. These patches are transformed by a feature map Φ, typically parametrized by a pre-trained neural network. For indexing the ensemble where necessary, we introduce the indexing variable s. Using this notation, a single extracted patch of the i-th example in feature space is denoted by x_i and the corresponding patch ensemble by x_i(s).
Based on all available transformed training patches X, the required statistics, i.e., mean vector and covariance matrix, are computed. For a given test image I*, the same preparatory steps are applied: we extract an ensemble of patches I*(s) and compute the features x*(s). We then evaluate the mean-shift of the test example by comparing the test ensemble mean µ(x*(s)) with the mean of the entire training dataset µ(X).

Data Representation

The input examples I_i ∈ [0, 1]^(3×H×W), with i = 1, ..., N, are square-sized RGB images, i.e., H = W. The distinctive property of our algorithm is to generate patch ensembles instead of processing the full image. For patch extraction, we tested several sampling strategies without noticing performance-critical differences. Therefore, we extract all valid patches of size R inside the image. The term valid is used in accordance with the neural network literature and means that all extracted patches must be entirely contained within the image borders. This is equivalent to cropping patches by a sliding window without applying image padding or crossing the border. As the input images are potentially large, the horizontal and vertical stride τ of the sliding window allows limiting the total number of cropped image patches S. We fix this parameter to τ = 2 for small images and τ = 16 for larger ones. Hence, the maximum number S of distinct image patches per input image depends only on the size of the image and the patch size R, i.e., the larger the image relative to the patch size, the more patches can be extracted. We do not apply any pre-processing and compute a feature representation Φ for the extracted patches I_i(s) using a pre-trained neural network, given by

x_i(s) = Φ(I_i(s)) ∈ R^D, (1)

where D is the number of features after flattening the computed feature map. Flattening is needed since some feature maps Φ, e.g., Convolutional Neural Networks (CNN), retain the spatial dimensions of the input patches. We organize the flattened feature vectors of all available extracted normal training patches in a long concatenated design matrix X ∈ R^(NS×D).

Mean-Shift Detection

We follow [2] and also perform mean-shift detection with the Hotelling T² test [3]. Since this test is a generalization of Student's t-test, it estimates the significance of mean-shifts between two populations. In this section, we introduce the Hotelling T² test with its required statistics in the original form. In the second part of the paper, in Section 4, we derive a generalized version that is able to smoothly transition between global and local population mean-shifts. We discuss relevant hyperparameters that are needed for model selection in Sections 3.4 and 4.1. For detecting anomalies, we first compute the feature-wise mean over all extracted and flattened feature maps of the training dataset X,

µ = µ(X) = (1/(NS)) Σ_{i,s} x_i(s). (2)

Note that µ has the same dimension as x_i. We then compare this reference mean with the mean of the transformed patches x*(s) extracted from a single test example I*,

µ* = µ(x*(s)) = (1/S) Σ_s x*(s). (3)

Given the two estimated mean vectors µ* and µ, the unnormalized Hotelling T² test statistic for a dependent test sample is computed by

T̃² = (µ* − µ)ᵀ Σ̂⁻¹ (µ* − µ), (4)

where

Σ̂ = (1/(NS)) Σ_{i,s} (x_i(s) − µ)(x_i(s) − µ)ᵀ (5)

is the empirical covariance matrix of the training dataset X. There is an intuitive geometric interpretation of the T̃² statistic available: it can be interpreted as the squared Mahalanobis distance [27] between the two estimated mean vectors.
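To make the data representation concrete, the following is a minimal sketch of the patch extraction, the feature map of Equation (1), and the training mean of Equation (2). It assumes a PyTorch/torchvision backbone; the chosen network, the truncation point, and all helper names are our own illustrative assumptions, not the reference implementation.

```python
# Hedged sketch of the data representation (patch ensembles, Equations (1)-(2)).
import torch
import torchvision.models as models

def extract_patches(img, R, tau):
    """All 'valid' RxR patches of a (3, H, W) image tensor with stride tau (no padding)."""
    p = img.unfold(1, R, tau).unfold(2, R, tau)        # (3, n_h, n_w, R, R)
    p = p.permute(1, 2, 0, 3, 4).reshape(-1, 3, R, R)  # (S, 3, R, R)
    return p

# Feature map Phi: a pre-trained CNN truncated after some block (truncation point is arbitrary here).
backbone = models.vgg19(weights="IMAGENET1K_V1").features[:27].eval()

@torch.no_grad()
def phi(patches):
    # Flattened per-patch features, Equation (1); D depends on R and the truncation point.
    return backbone(patches).flatten(start_dim=1)      # (S, D)

def training_statistics(train_images, R=32, tau=2):
    """Design matrix X of all N*S training-patch features and its feature-wise mean mu."""
    feats = [phi(extract_patches(img, R, tau)) for img in train_images]  # batch in chunks for large S
    X = torch.cat(feats, dim=0)                        # (N*S, D)
    mu = X.mean(dim=0)                                 # Equation (2)
    return X, mu
```

Note that D depends on the chosen block and the patch size R, so the same truncation and patch size must be used for training and test images.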
For completeness, we want to highlight that we discarded the constant normalization factor NS²/(NS + S) that appears in the original formula and, hence, denote our unnormalized version of the statistic by T̃² instead. In principle, there are several options for defining the mean µ of the reference data, e.g., by clustering or partitioning. Here, we choose the simplest option and compute the feature-wise mean over all patches of the training examples. This gives a single µ-vector for the entire dataset as denoted in Equation (2). A naive global anomaly score is simply defined as the unnormalized T² test statistic over the entire image, given by

score(I*) = T̃²(µ(x*(s)), µ(X)),

i.e., Equation (4) evaluated with the ensemble mean over all patches of the test image. The entire pipeline of this global method is illustrated at the top in Figure 2. For illustration of the mean-shift, Figure 3 shows a scatter plot of test examples in feature space.

Figure 2. Schematic illustration of our mean-shift method. (a) First, the feature map Φ is computed per extracted image patch, then the mean statistic is computed over all training patches. Together with the empirical covariance matrix of the normal data, the Hotelling T² test is used as anomaly score. (b) The local mean-shift variant applies the global mean-shift method to a local region A of the image and, hence, yields a field of mean vectors µ(x, y). The final score is computed by taking the maximum over the field of local scores. The covariance matrix is shared across all local regions.

Covariance Shrinkage

The empirical covariance matrix in Equation (5) cannot be robustly estimated for high-dimensional data, as most of the eigenvalues are close to zero and, hence, the estimates are very unstable. This is especially an issue for small datasets, where the number of patches is equal to or smaller than the covariance matrix dimension D, but it is potentially also a problem for highly redundant datasets that occupy only a small subspace. To mitigate this, we use the Ledoit-Wolf shrinkage estimator [4]. This estimator is given by a convex combination between a scaled identity matrix and the empirical covariance matrix,

Σ̂_α = (1 − α) Σ̂ + α (tr(Σ̂)/D) I,

where the so-called shrinkage factor α ∈ [0, 1] is given analytically by minimizing the quadratic loss between the true and estimated covariance matrix. The exact formula for α is a bit cumbersome; we, therefore, refer the reader to Equation (5) in the original paper for it [4]. Loosely speaking, the shrinkage factor α is an analytic function of the empirical covariance matrix and the number of data points. A useful property of the estimator is that the shrinkage factor is close to one for small numbers of data points and reduces to zero with increasing dataset size. Therefore, in the limit of an infinite number of data points, the shrunk covariance matrix converges to the true covariance matrix. Our experiments show that the chosen estimator is crucial and responsible for almost 5% of the overall performance. Figure 4 shows the impact of the shrinkage factor on novelty detection performance for different values of α and different dataset sizes. For the experiment, we used 64 × 64 patches and the EfficientNet-B4 feature space, which has dimensionality D = 6800 after the flattening operation (cf. Equation (1)). Therefore, the estimation of a covariance matrix with shape 6800 × 6800 is required. As already noted, outliers are projected onto the subspace spanned by the normal data, which makes a robust estimation of the full covariance matrix necessary, without the possibility of using low-rank approximations [12].
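As a concrete illustration, the following hedged sketch combines scikit-learn's Ledoit-Wolf estimator with the unnormalized T̃² score of Equation (4); the function names and the NumPy-based layout are our own assumptions rather than the authors' reference code.

```python
# Hedged sketch of covariance shrinkage and the unnormalized Hotelling T^2 score.
import numpy as np
from sklearn.covariance import LedoitWolf

def fit_scorer(X):
    """X: (N*S, D) array of flattened training-patch features."""
    mu = X.mean(axis=0)
    lw = LedoitWolf().fit(X)          # shrunk covariance: (1 - a) * S + a * (tr(S)/D) * I
    prec = lw.precision_              # inverse of the shrunk covariance matrix
    return mu, prec

def t2_score(test_feats, mu, prec):
    """Unnormalized T^2 (Equation (4)): squared Mahalanobis distance between ensemble means."""
    mu_star = test_feats.mean(axis=0)  # mean over the S patches of the test image
    d = mu_star - mu
    return float(d @ prec @ d)

# usage sketch:
# mu, prec = fit_scorer(X_train)                      # X_train: (N*S, D) numpy array
# score = t2_score(test_patch_features, mu, prec)     # test_patch_features: (S, D)
```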
Such a robust estimate of the full covariance matrix is especially important when the expected anomalies are small and characterized by patterns that are not present in the training set.

Hyperparameter Selection

There are two main hyperparameters that need to be set for the global method. The first is selecting the feature space Φ, and the second is choosing the patch size R. In the following, we first discuss the feature spaces and how the choice of neural network architecture impacts the model; throughout, L indicates the feature block (layer) of the deep neural network. The commonly used block convention wraps several adjacent layers of a deep neural network into blocks, such that the architectures become handier and easier to compare (e.g., [29]). We follow this convention and the hyperparameter L indexes entire blocks of the architecture. The superscript indicates the used network architecture, e.g., Φ^vgg_3 for the third feature block of VGG-19. Additionally, we analyzed the recently presented Vision Image Transformer Model (ViT) Φ^vit_L [6]. The required pre-training of the networks is always done using the well-known ImageNet dataset, where all architectures reach a test set accuracy of around 85%. Note that we concatenated two adjacent layers [L, L + 1] of EfficientNet-B4, as the layers are relatively low-dimensional, which improves the performance slightly (see, e.g., [22]). Due to the pooling layers, the spatial resolution of the feature map decreases with the depth of the network, i.e., deeper layers have lower spatial resolution. To enable channel-wise concatenation of differently sized feature maps in the first place, we match the spatial resolution of the feature maps by spatially resizing the smaller downstream feature map to the size of its larger predecessor feature map using bilinear interpolation.

Patch Size R

The most crucial hyperparameter is the chosen image patch size R. Generally, the deeper the convolutional neural network, the smaller the spatial resolution of the resulting feature map, which is mainly caused by 2D-pooling operations [30]. While the receptive field grows, more global information is carried by the pixels of the feature maps. Importantly, for successful feature extraction, one needs to choose a layer that retains enough spatial resolution for the problem at hand and an appropriate receptive field to capture the anomalies. Hence, to obtain sufficiently separated inlier and outlier distributions, the layer selection L and the patch size R depend on the size of the input images and the expected size of the anomalies. Note that the Vision Image Transformer (ViT) is different, as its receptive field size is implicitly learnt by the model, in principle equal to the size of the entire input, and hence independent of the layer L. We did not notice a performance-critical impact of the stride parameter and keep it fixed to τ = 2 for smaller inputs and τ = 16 for larger ones.

Evaluating Global Novelty Detection

We compare our method with methods that particularly excel in global novelty detection. As a baseline we use the well-known OC-SVM [31] with an RBF kernel and flattened CNN feature maps. We reproduced all experiments by either using implementations provided by the authors or re-implementing the models using available information and hyper-parameters. For testing global novelty detection we use the CIFAR-10 dataset [32], which consists of 32 × 32 RGB images, and test the methods in a one-vs-all procedure.
This means we use the 5000 available training examples of a single class as the normal class and afterwards classify the entire test dataset, consisting of 10 classes with 1000 examples each. We use the area under the ROC curve (AUC) as performance measure (see, e.g., [12,20]). The ROC curve plots the true positive rate (TPR) against the false positive rate (FPR) at various thresholds and hence measures the overall discrimination performance of a binary classifier. In terms of hyperparameters for µshift, we selected θ = {L = 5, R = 32, τ = 2}. This is the maximum patch size possible and a special case of the method. However, it is also optimal for the chosen scenario: changing the parameter L decreases the AUC significantly for most of the network architectures (see Figure 5). The same applies to reducing the patch size R, which we verified in Figure 6.

Figure 6. Varying the patch size R and the averaging region A impacts the detection rate significantly. Tests were conducted using the critical pill, capsule, and screw classes from the MVTec dataset, and the entire CIFAR-10 dataset. The MVTec tests used the EfficientNet-B4 architecture. For CIFAR-10 we tested several popular architectures.

Table 1 shows the results averaged across five folds of cross-validation using varying training and test splits. It is interesting to note how the different CNN architectures have a strong influence on the performance and how the Vision Image Transformer (ViT), with its large receptive field, is able to separate the inliers from the outliers almost entirely. Using EfficientNet-B4 in particular is problematic in the special case of global novelties, as it possesses the smallest receptive field among the tested architectures and is not able to capture the entire image context in a single feature variable. We also evaluated the original mean-shift method [2], which is based on raw pixel values, without using pre-trained features or transfer learning. For completeness, we report its low average AUC of only 0.67 using raw pixel values in Figure 6. This lack of performance emphasizes the requirement for a rich feature space, such that the inlier distribution does not overlap with novelties through its null space, causing a large blind spot for novelty detection. Such an overlap happens naturally when the anomalous patterns are projected onto the subspace of the normal data and the corresponding features are not present in the given training data. Consequently, anomalous patterns cannot be detected, as they are mapped to the null space. A rich feature space with a diverse set of pre-learnt patterns mitigates that effect. With RotationNet, we could achieve the reported AUC of 0.86 only when the internal network was pre-trained on ImageNet [20], but not when initialized randomly. Despite that, however, for general unsupervised feature learning the method remains extremely powerful on CIFAR-10.
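For reference, a minimal sketch of the one-vs-all evaluation described at the beginning of this section is given below. The helper only wraps the AUC computation; producing the anomaly scores themselves is assumed to happen with the µshift pipeline sketched earlier, and the variable names are illustrative.

```python
# Hedged sketch of the one-vs-all AUC evaluation on CIFAR-10.
import numpy as np
from sklearn.metrics import roc_auc_score

def one_vs_all_auc(scores, labels, normal_class):
    """AUC for the one-vs-all protocol: inliers = `normal_class`, all other classes are outliers.

    scores: anomaly score per test image (higher = more anomalous)
    labels: true CIFAR-10 class of each test image
    """
    y_true = (np.asarray(labels) != normal_class).astype(int)   # 1 = outlier
    return roc_auc_score(y_true, np.asarray(scores))

# usage sketch: fit mu/precision on the 5000 training images of `normal_class` only,
# score all 10,000 test images, then average the AUC over the ten possible normal
# classes (and over cross-validation folds, as in the paper).
```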
As shown on the right-hand side in Figure 2, this is equivalent to computing the global anomaly score only for local parts of the image, with a shared covariance matrix across all locations. To leverage the spatial structure, we organize the S extracted patches x(s) as a √S × √S feature map x̃(x, y), where the positions (x, y) correspond to their relative locations in the input image, i.e., the order of the patches and their relative spatial position is unchanged. Note that S is a square number, because we assume that the input images are square-sized RGB images, i.e., H = W. Next, we compute the µ-vectors by averaging across a local neighborhood whose size is given by A: the resulting local mean field µ(x, y) is the feature map x̃ average-pooled over the A-sized neighborhood around each position (x, y). As in the global case, the µ(x, y) of the training data is computed by averaging the pooled feature maps over all available training examples, where x̃(x, y) ∈ R^(D×√S×√S) is the reshaped version of x(s) ∈ R^(S×D). The generalized anomaly score is then computed by taking the maximum over the field of local mean-shifts, given by

µshift(I*) = max_(x,y) (µ*(x, y) − µ(x, y))ᵀ Σ̂⁻¹ (µ*(x, y) − µ(x, y)). (11)

Note that the covariance matrix Σ̂ is exactly the same as in the global case and just the mean estimates are computed differently. In fact, for ρ = √S, i.e., when the local averaging region spans the entire feature map, the global case appears as a special case. A second special case appears when R = H and, hence, ρ = 1. Here, the extracted patches represent entire images.

Local Mean-Shift Region A

Generally, we differentiate between two extreme cases for novelty detection in this work: (1) global novelty and (2) local novelty. To cover both cases, we found that it is important to first adjust the patch size R to match the desired anomaly fraction of the given input resolution, such that the anomaly falls into the receptive field of the computed feature map. In simple terms, for globally distributed novelties select a large patch size, and for local anomalies a small one. As already noticed by others, EfficientNet-B4 works very well for local novelty detection [12], whereas VGG-19 is better suited for the global case [20]. We argue that the main reason for this is that EfficientNet-B4 uses almost solely 1 × 1 convolutions, which retain less blurry, more local features. To validate this assumption, we computed the receptive field sizes for different selected CNN architectures. Table 2 shows the results. Note that the receptive field can be computed by varying the input size R until the feature map Φ_L of the desired block L has size 1 × 1. (Note that for some architectures, there are analytical formulas available, e.g., [33].) The stride of the receptive field τ is then the remaining input size H divided by the remaining feature map size H̃_L, given by τ = H / H̃_L. It can be seen that EfficientNet-B4 has a significantly smaller and non-overlapping receptive field compared to, e.g., VGG-19. Finally, the last remaining hyperparameter is the local averaging region A for computing the local mean-shift statistics. Note that the parameter A is similar to the patch size R, as it also impacts the effective receptive field of the entire model and hence the sensitivity for globally distributed and locally concentrated novelty. Our final architecture-independent parameter vector is denoted by θ = {L, R, τ, A}. We visualized the local mean-shift for the bottle class in Figure 3 as an example.
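A minimal sketch of the local µshift score of Equation (11) is given below. It assumes the per-patch features have already been reshaped into the √S × √S grid described above and that the shared precision matrix comes from the shrinkage estimator; the pooling size `rho` (the averaging region A expressed in feature-map cells) and all names are our own illustrative assumptions.

```python
# Hedged sketch of the local mean-shift score (Equation (11)).
import torch
import torch.nn.functional as F

def local_mushift(x_grid, mu_grid, prec, rho):
    """x_grid, mu_grid: (D, sqrtS, sqrtS) test / training feature grids (torch tensors);
    prec: (D, D) inverse shrunk covariance, shared across all locations."""
    pool = lambda g: F.avg_pool2d(g.unsqueeze(0), kernel_size=rho, stride=1).squeeze(0)
    diff = pool(x_grid) - pool(mu_grid)               # (D, h, w) field of local mean-shifts
    d = diff.flatten(1).T                             # (h*w, D)
    scores = torch.einsum('nd,de,ne->n', d, prec, d)  # per-location squared Mahalanobis distance
    score_map = scores.reshape(diff.shape[1:])        # 2D grid of anomaly scores
    return score_map.max(), score_map
```

The returned score map, resized to the input resolution with bilinear interpolation, corresponds to the heat maps overlaid in Figure 1; its spatial maximum is the image-level anomaly score.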
Efficient Feature Computation

The computation of the feature map per extracted image patch is quite expensive in practice. For mitigation, we propose computing the features of all patches simultaneously by computing the feature map of the entire image in a single forward pass through the neural network. However, one needs to be careful, as the patch size R and the stride τ are now restricted by the internal details of the selected architecture and their corresponding receptive fields, as shown in Table 2. For example, for block 5 of EfficientNet-B4, a single pixel in the feature map corresponds to 16 pixels in the input space and the stride is fixed to τ = 16. This also limits the freedom of the local averaging region A to a multiple of 16. Table 3 shows the results of the experiment averaged across five folds of cross-validation, again using varying training and test splits. Because there are no defective images in the training set of MVTec, we only swapped the non-defective training and test data during cross-validation. In other words, the defective examples of the test dataset were kept constant and only the non-defective examples of the test and training datasets were varied. In terms of hyperparameters for µshift, we selected θ = {L = 5, R = 64, τ = 16, A = 80}. In order to find the best hyperparameters, we performed a grid search over L ∈ [2, 6], R ∈ [48, 144], and A ∈ [64, 164], evaluating the average performance across all classes (cf. Figures 5 and 6). The sensitivity of the detection performance to the effective mean-shift region A can be seen in Figure 6. For EfficientNet-B4, for instance, A = 96 gives an AUC of 98.5 and A = 64 an AUC of 98.3. We also tested the method with the commonly used WideResnet-50 features and achieved 98.1 AUC on average for the same set of hyperparameters. Generally, the average AUC depends strongly on the average anomaly sizes: for instance, while the pill class benefits from a small averaging region, the screw class performs significantly better with a larger one.

Evaluating Local Novelty Detection

Note that we could reach the reported 99.0 AUC of PatchCore only in a single fold of cross-validation, but not on average over different folds of cross-validation, and we therefore report slightly lower average scores than in [22]. The same appears to be the case for CutPaste, where we could only approach the reported 90.9 AUC. However, this is still impressive for a self-supervised scheme that does not rely on pre-training or transfer learning. We also tested the related methods PatchSVDD [25] and PaDiM [26]. PatchSVDD reached 92.1 AUC on average, while PaDiM scored 97.9 AUC using the EfficientNet-B5 feature space.

Complexity, Runtime, and Data Efficiency

The complexity and runtime differ heavily between training and test time. For training, the most expensive part is computing the D × D covariance matrix in Equation (5). With an efficient estimation algorithm, this can be achieved in O(min{(NS)²D, (NS)D²}). The mean estimation itself is linear, O(NSD). At test time, the most expensive computation is the T² statistic in Equation (4), which requires S D-dimensional matrix-vector multiplications, i.e., O(SD²). We noticed that, depending on the dimensionality D and the chosen feature space, the computational cost of computing the feature maps quickly exceeds the cost of our algorithm. There is a fixed overhead that depends on the size of the input, O(HW). For example, for EfficientNet-B4, in our experiments the computation of a single MVTec example took 36 ms for the feature map and 30 ms for the anomaly score. The estimation of the covariance matrix took 20 s for a single class.
The runtime was measured on standard CPU hardware (Intel i7-6700) without using GPU acceleration. By using a single GPU (GTX 3080Ti), the runtime could be reduced to 1 ms for the feature map and 1 ms for the anomaly score. Computing the covariance matrix took 3 s. Therefore, implementing the mean-shift detection by an additional CNN block consisting of a 2D-convolution for the Mahalanobis distance and 2D-pooling for the averaging region, the model reached about 500 FPS for the entire pipeline on our hardware. Note that this is much faster than, e.g., the 7 FPS of the PatchCore GPU model (using the default 0.1 sampling ratio) [22]. The increased frame rate is mainly caused by avoiding the k-nearest-neighbor search across the entire patch database for every prediction. As already mentioned, we also evaluated the data efficiency of the models with respect to performance in Figure 7. Again, PatchCore and µshift perform similarly with respect to the number of training examples needed to reach a particular performance level. For example, 90% AUC could be achieved with only 10 non-defective examples of the MVTec dataset [7]. In this scenario, we also tested the recently proposed hierarchical method for few-shot anomaly detection (HTDGM) [34].

Figure 7. Data efficiency on the MVTec dataset across several analyzed architectures (compared methods: µshift_eff, HTDGM [34], Mahalanobis [12], OC-SVM). With more than 10 non-defective training examples, an average AUC above 90% could be achieved using the EfficientNet-B4 or the Vision Transformer Network (ViT) as feature space.

Discussion

We found that the success of our approach critically depends on the details of how the patch ensemble is extracted from the input images. The most important parameters are the number and the size of the patches and by which features the patches are represented. Since a mean-shift can only be detected when the outlier ensemble is sufficiently separated from the inlier distribution, the overlap acts as a blind spot for novelty detection. In the earlier version of the algorithm [2], we proposed using a hyperparameter selection rule based on a negentropy approximation [35] to minimize the overlap of the distributions. However, further experiments showed that such an approach does not generalize well to arbitrary datasets and deep features. One reason is that the method prefers larger patch sizes, which is beneficial for global novelty, but not for industry-relevant local novelty detection. Another interesting point appears when comparing our method to PaDiM [26]. As already mentioned, this method is similar to ours when the mean-shift region is equal to the patch size, i.e., A = R, and hence the ensemble size S is one. The advantage of our ensemble approach is manifested by the zigzag pattern in Figure 6. Here, increasing the local mean-shift area A just slightly over the patch size R increases the detection performance significantly, regardless of the actual chosen patch size.

Conclusions

For the task of novelty detection, we proposed a method that is capable of detecting novelties effectively using deep mean-shifts. By attaching our method on top of a pre-trained neural network, we were able to achieve state-of-the-art performance on standard benchmarks, such as the MVTec defect detection and CIFAR-10 one-class classification challenges. Because of the simple design, the method is easy to implement and provides a fast execution time. By using a GPU we could reach 500 FPS in our tests.
Additionally, because the model only relies on low-order statistics, it is very data-efficient and achieves 90% AUC on the MVTec challenge with only 10 non-defective examples. The main drawback of the method is that the model accuracy heavily depends on the specific problem at hand and the available knowledge about the expected anomalies and their sizes. As shown in Table 1, swapping the feature space can cause a significant change in performance. Second, not setting the correct patch size reduces the performance quickly. However, the same limitations also appear in other methods, such as RotationNet or PatchCore. For practitioners it is of great importance to use domain knowledge and to set the hyperparameters accordingly. A central open question is how to derive those hyperparameters directly from data, which we leave for future work.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:
Dataset of Multi-aspect Integrated Migration Indicators

Nowadays, new branches of research are proposing the use of non-traditional data sources for the study of migration trends in order to find an original methodology to answer open questions about the human mobility framework. In this context we present the Multi-aspect Integrated Migration Indicators (MIMI) dataset, a new dataset of migration drivers resulting from the process of acquisition, transformation and merging of both official data about international flows and stocks and original indicators not typically used in migration studies, such as online social networks. This work describes the process of gathering, embedding and merging traditional and novel features, resulting in a new multidisciplinary dataset that we believe could significantly contribute to nowcasting and forecasting both present and future bilateral migration trends.

Introduction

In recent years, the pursuit of original drivers and measures has become an increasing requirement in migration studies, considering the new methods and technologies used to characterize and understand the human migration phenomenon. Many researchers [35,7,2,10] have proposed to employ non-traditional data sources to study migration trends, including so-called social Big Data such as online social networks. The usefulness of exploiting unconventional data sources for better understanding migration patterns, as well as the benefits of merging knowledge from both traditional and novel datasets, have already been proven [35]. This unconventional approach is intended to find an alternative methodology to ultimately answer open questions about the human mobility framework (i.e., nowcasting flows and stocks, studying the integration of multiple sources and knowledge, and investigating migration drivers). Nevertheless, in this context of meaningful combination of the conventional and the original, many types of data exist, still very scattered and heterogeneous: in the variety of this background, integration is not straightforward.
For this purpose we propose a tool to be exploited in migration studies as a concrete example of this new integration-oriented approach: the Multi-aspect Integrated Migration Indicators (MIMI) dataset.It includes both official data about bidirectional human mobility (traditional flow and stock data) with multidisciplinary features and original indicators, including the Facebook Social Connectedness Index (SCI), which measures the relative probability that two individuals across two countries are friends with each other on Facebook.The inclusion of SCI in the dataset enables it to be exploited as a non-traditional way to describe, understand and nowcast international migration.The combination of this index with socioeconomic variables measuring the similarity of two locations (such as per capita income, religiosity and language) already appeared in [4,5] where it has been shown that pairs of locations that are more similar on these dimensions share more friendship links.Nevertheless a similar approach on country level is still missing; moreover, such observations and conclusions about SCI have never been exploited in migration studies.For this reason, our aim is to use this "homophily" concept (defined as the empirical regularity with which individuals are more likely to be associated with other individuals of similar characteristic) [28], that literature has already linked to Facebook social connectedness [4,5], to present a new dataset useful for better understanding country-to-country human mobility trends. Motivation MIMI is an open dataset that provides multidimensional information about several traditional and non-traditional aspects related to human mobility phenomenon.Thanks to this variety of knowledge, experts from several research fields (demographers, sociologists, economists) could exploit MIMI to investigate the behavior of many drivers and relate it to migration trends, so as to build a comprehensive overview and understanding of them. As an example, it could be possible to access existing correlations between original sources of data and traditional migration measures, explore and investigate them and try to identify any possible causal relationship.Moreover, it could be possible to develop complex models able to assess human mobility framework by evaluating related interdisciplinary drivers, as well as models able to nowcast and predict traditional migration indicators in accordance with original features, such as the strength of social connectivity.By means of these algorithms, companies and researchers could find an alternative methodology to answer open questions about emerging mobility trends. 
Human migration is a complex phenomenon characterized by several related factors. It is as ancient as human history, and it has been widely studied, explored and described over time. However, the technological advancements and the rapid and drastic changes that society has faced in the 21st century have had an impact on the human mobility phenomenon, which consequently has undergone radical modifications. We believe that taking into account this same information about societal changes and technological progress (such as economic, cultural and social big data) can be an effective strategy nowadays to detect new trends in bilateral migration and to better understand and nowcast it. The motivations for building and releasing the MIMI dataset lie precisely in this need for new perspectives, methods and analyses that can no longer avoid taking into account a variety of new factors. The heterogeneous and multidimensional sets of data present in MIMI offer an all-encompassing overview of the characteristics of international human mobility, enabling a better understanding and an original potential exploration of the relationships between migration and non-traditional sources of data.

Data description

The MIMI [21] dataset version 1 (March 15, 2022) was released under the Creative Commons Attribution 4.0 International Public License (CC BY 4.0) and is publicly available on Zenodo (10.5281/zenodo.6360651). It consists of a single file containing more than 28,000 entries (records) and 480 different features. In this section we provide all the dataset specifications and describe the structure of the CSV file in detail, as well as how each feature was built.

Data files and format

The MIMI dataset is made up of one single CSV file that includes 28,725 rows and 485 columns. The index consists of uniquely identified pairs of countries, built from the join of the two ISO-3166 alpha-2 codes of the origin and destination country, respectively. Indeed, the dataset contains as main features country-to-country bilateral migration flows and stocks, together with the Facebook strength of connectedness of each pair.

Geographical coverage

The dataset comprises migration features and the social strength of Facebook connectedness for 254 different countries belonging to the following macro-areas: North America, South America, Europe, Asia, Africa, Oceania, Antarctica.

Temporal coverage

Since our work does not focus on the study of the migration phenomenon per se but on its possible relationship with social networks, in particular with the use of Facebook, the choice of the time range has been calculated accordingly. Therefore, the initial decision was not to select migration data prior to 2004. However, our intention was to make available a tool that could also be useful for studying the differences between contemporary and past trends (e.g., alterations of some phenomena, consistent changes of values compared to the past, consequences of previous data on the last few years, etc.): for this reason some features have been selected starting from 2000. Certainly, data selection according to predetermined temporal ranges always depends on the availability of sources: for example, during our data collection phase, Eurostat was not providing information about the population density of countries before 2008. Table 1 provides the detailed temporal coverage of each time-related feature, apart from SCI, for which we included the only one made available (the latest, which refers to October 13, 2021, updated on December 15, 2021).
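As a brief usage illustration of the file structure just described, the following sketch loads the CSV and splits the country-pair index back into origin and destination codes. The file name and the derived column names are our own assumptions; the authoritative file is the one published on Zenodo.

```python
# Hedged sketch: loading MIMI and recovering the origin/destination codes from the pair index.
import pandas as pd

mimi = pd.read_csv("mimi_v1.csv", index_col=0)      # ~28,725 rows x 485 columns

# The index is "<ISO2 origin>-<ISO2 destination>", e.g. "AL-FI"; split it into two
# helper columns so records can be grouped by origin or destination country.
pairs = mimi.index.to_series().str.split("-", n=1, expand=True)
mimi["origin_iso2"], mimi["destination_iso2"] = pairs[0], pairs[1]

# Records with identical codes (e.g. "BE-BE") are the "returners" described in Section 3.2.2.
returners = mimi[mimi["origin_iso2"] == mimi["destination_iso2"]]
```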
Features definition In this section we are going to list all the indicators included in the MIMI dataset, then we will describe them in detail in the following section.Table 2 contains a complete declaration of all drivers, grouped and categorized by context ("feature area").The column "Name" contains the identifier of each feature: since it would not be possible to list all features, a more compact replacement rule is presented in order to include them all in the table.From this simple rule it is possible to derive the exact name of each single indicator.The column "Name" should be read as follows: the invariant part of the identifier is static, while the interchangeable part must be substituted as explained below in order to obtain the exact name of the feature. • country should be replaced with origin or destination. • year and start-end should be replaced, respectively, with the reference year (in case of annual feature) or reference year range (for NET migration and NET migration rate features).Substituted values should be consistent with the temporal coverage available for each indicator, which can be found in Table 1.• source allows UN and ESTAT as replacement values. • sex should be substituted with F, M or T (respectively, female, male or both). • age allows only T as replacement value for data obtained from UN (both flows and stocks), while it can take four different values for ESTAT flows: T (total), <15 (less than 15 years), 15-64 (from 15 to 64 years), >65 (65 years or over). Some examples are provided in Table 2 footnotes. Features description and sources In this section we are going to describe in detail each single feature listed in the previous section, also reporting all the data sources: some indicators may have multiple sources since they were necessary to better integrate missing values. As stated in Section 2, the purpose of the integration of all these different drivers in the MIMI dataset is to allow the exploration of any of their possible connections with the international migration phenomenon, and eventually exploit them to better understand and nowcast it. • Index (feature 1 in Table 2).The index consists in uniquely identified pairs of countries, built as follows: ISO2 code of origin country -ISO2 code of destination country (e.g.AL-FI index indicates records related to migration from Albania to Finland).Pairs having the same country codes for origin and destination indicate the so-called "returners" (e.g.BE-BE record represents people that were born or have citizenship in Belgium which moved their residence in Belgium in the reference year). • Facebook data (feature 2 in Table 2).This indicator represents one of the most non-traditional feature (i.e.social media data) within the context of migration studies that we included.It consists in the so-called Facebook Social Connectedness Index (bit.ly/Facebook_SCI)publicly provided by "Data for Good at Meta"2 organisation on "Humanitarian Data Exchange, Data for Good" platform 3 .Country-to-country values of SCI are available in TSV format for more that 34,000 pairs, updated to December 2021 [32]. 
This indicator uses anonymized insights of active Facebook users and their friendship networks to measure the intensity of connectedness between locations [4].In this way, the resulting formulation in Equation 1 is a measure of the social connectedness between the two locations i and j, that is representative of the relative probability that two individuals across the two locations are friends with each other on Facebook: if SocialConnectednessIndex i,j is twice as large, a Facebook user in country i is about twice as likely to be connected with a given Facebook user in country j. Specifically, in this work the concept of "locations" coincides with NUTS0 areas since our dataset only focuses on country-to-country bilateral migration.Nevertheless, SCI is also provided with respect to narrower geographical granularities, (e.g.NUTS2, NUTS3): we do not exclude future works focused on the study of migration trends at a smaller resolution (country-to-county, or county-to-county). The SCI has a symmetric structure by definition of the concept of "friendship" and has been re-scaled to have a maximum value of 1,000,000,000 and a minimum value of 1.In our dataset, the minimum possible value was originally 0 (indicating pairs of countries for which the index was not available), subsequently replaced with an arbitrarily small value (chosen as half of the minimum available) in order to fix problems when computing Pearson correlation of the logarithmic SCI. • Geographic features (features 3-15 in Table 2).These features portray and contextualize both origin and destination countries at geographical level providing all the necessary information to describe them, starting from the official codes and names, up to their land extent and how far they are.Specifically: features 5, 6, 7, 8 are ISO-3166 standards nomenclatures for country identification, retrieved from PyCountry Python module 4 and ISAN (International Standard Audiovisual Number) [23]. feature 3 consists in the pair code of origin continentcode of destination continent.Its functionality can be fully appreciated in chord diagrams of Section 5. features 11, 12, 13 locate the position of the centroids of both origin and destination countries in a classic geographic coordinate system.They are gathered and integrated from Google DSPL [11] and from latlng() method of CountryInfo Python library 5 , and then merged together in a tuple (feature 13) built as a specific GeoPandas data structure called "geometry array" 6 . feature 4 is the measure of distance between origin and destination, computed starting from the tuple in feature 13 of both countries and using the geodesic formulation7 [27] provided by GeoPy Python library 8 .It has already been observed in [4] that, at county level, much of the estimated effect of distance on migration might be coming from the relationship between distance and social connectedness: therefore the use SCI indicator could better explain the variation of migration flows than geographic distance alone can. 
feature 14 consists in the list of countries that share a border with the given country.The utility of this feature is to find out if the two countries of origin and destination share a border, using a straightforward function to check if a country name (feature 6) is contained into the list of neighbors of the other, and vice versa.An additional binary feature (e.g."neighbors", having value True or False) could be derived from this method.Countries having empty list are islands.The corresponding sources for this feature are the following: GitHub repository in [20], borders() method of CountryInfo Python module9 and Wikipedia [46]. feature 15 is the measure of the area extension of the country in squared kilometers.It is gathered from The World Bank [37] and integrated with area() method of CountryInfo Python module 10 . • Interdisciplinary indicators (features 16-25 in Table 2).Some of these drivers are considered non-traditional data in the context of migration studies since their use in migration understanding and nowcasting is poorly documented in literature.Despite this, most of the available studies consider these features as relevant in such context, as they are related to the behavior of international migration trends. feature 17 is an indicator that provides per capita 11 annual values for gross domestic product (GDP) of a country, expressed in current international dollars and converted by purchasing power parity (PPP) 12 conversion factor.Data is retrieved from The World Bank [36].The gross domestic product is one of the "Development Indicators", already widely used in literature in combination with global migration. features 16, 18 correspond to two lists containing, respectively, the most practiced religions, and the most spoken languages in the country (both including official ones and minorities). The benefit of including these columns would be to discover if the two countries of origin and destination share some languages or religions (or both), since this could favor a migratory exchange between the two.Rare languages and religions used only in one country and not shared with any other have been removed as meaningless for our purposes. Languages have been gathered from Wikipedia [47] while religions comes from DataHub [8] and have been integrated with Wikipedia data [48]. features 19, 20 indicates the quantity (respectively, as absolute number and as percentage of the total population) of Facebook users that a given country has.The source is World Population review [50], which refers to the latest available measure for each country (oldest date back to December 2020). features 21-25 represents Cultural Indices of a location, intended as dimensions along which cultural values of that location can be analyzed [26].Their origin dates back to the work of [22] although, over the decades, independent research branches led to the creation and addition of new ones [49].Our work includes five of these indicators, of which we provide a brief individual description.Their applications in literature have been several (e.g.cross-cultural studies using Twitter data [3]), but the purpose of their inclusion in the MIMI dataset is to use them in an original way: our intention is to explore and understand their possible relation with international migration trends.Data about cultural indicators are available in different NUTS levels but in our work they only appear related to NUT0 (country) level since it is the only one that fits our geographic viewpoint. 
Features 21-25 are the result of the integration of the two different datasets [24,9].Unfortunately, they are provided only for 66 of the more than 250 available countries but, despite this, most of them have already shown to be strongly involved in migration trends (see the behavior of their correlation values with the absolute number of migrants of a country, in Section 5.1). Starting from cultural dimensions of both countries of origin destination, a new feature about cultural distance could be obtained: datasets with this configuration already exist [26,25] despite, at the moment, data is available only for a third of the countries (22 in total).* feature 21 is Power distance indicator (PDI) which is defined as "the extent to which the less powerful members of organizations and institutions (like the family) accept and expect that power is distributed unequally" [49].This index describes the extent to which hierarchical relations and unequal distribution of power in organisations and societal institutions are accepted in a culture. * feature 22: Individualism indicator (IDV) 13 (as opposed to collectivism) explores the "degree to which people in a society are integrated into groups" [49]: it reflects the extent to which people prefer to act as individuals rather than as members of a community. * feature 23 is Masculinity indicator (MAS), defined as "a preference in society for achievement, heroism, assertiveness and material rewards for success" [49]: as opposed to femininity, this dimension reveals to what degree traditionally masculine societal values, such as orientation towards accomplishment, prevail over values such as modesty, solidarity or tolerance. * feature 24 is Uncertainty avoidance indicator (UAI) defined as "a society's tolerance for ambiguity", in which people embrace or avert an event of something unexpected, unknown, or away from the status quo [49]. * feature 25: Long-term orientation indicator (LTO) associates the connection of the past with the current and future actions/challenges.A lower degree of this index (short-term orientation) indicates that traditions are honored and kept [49]. • Demographic features (features 26-33 in Table 2).These features correspond to traditional migration and population measures obtained from official statistics, either from national censuses or from the population registries. feature 26: annual population stocks, defined as the number of persons having their usual residence in a country in a given year, are gathered both from UN Population Division [45] (from which only records with "Zero migration" variant were selected) and EUROSTAT [19]: these two sources often refer to different groups of countries so their mutual integration allowed to cover most of the countries of the dataset.Where both measurements were available for the same country, both were reported.The two sources refer to different methodologies, since the annual total population measurement is performed on July 1st by UN, while on Dataset of Multi-aspect Integrated Migration Indicators D. Goglia et al. January 1st by EUROSTAT.However, their ∼1 correlation value proves that the two measures, related to the same year, are well compatible and almost interchangeable: indeed missing values related to the former have been replaced with the latter, and vice versa. 
feature 27 represents annual population density, defined as the ratio between the annual average population and the land area.Therefore, its unit of measure correponds to "persons per square kilometre".Data has been retrieved from ESTAT [18]. feature 28, 29: absolute number of migrants (respectively, immigrants and emigrants) per country.Data was taken from ESTAT [14,12] and from UN datasets on flows (see below feature 32) selecting, from these latters, records having "Total" as country (respectively, origin and destination country). features 30, 31 indicate quinquennial NET migration and NET migration rate of each country. The former is the difference between the number of immigrants and the number of emigrants in a given area during the reference year, while the latter is defined as the NET migration per 1,000 persons and so it indicates the contribution of migration to the overall level of population change.A positive value for them indicates that there are more migrants entering than leaving a country (NET immigration), while a negative one means that emigrants are more than immigrants (NET emigration). Values have been taken from UN Population Division [41,40]: note that they apply also for EUROSTAT countries, and they have been widely used in literature in combination with them, even if NET migration rate calculation is based on midyear population (as required by the standard UN methodology). feature 32: yearly migration flows for each pair of countries are defined as the number of people that have moved the country (i.e. that changed residence).Unlike a static stock measure, flow data are dynamic, summarising movements over defined period and consequently allow for a better understanding of past patterns and the prediction of future trends [1]. Both EUROSTAT and UN divide migration flows into three categories: by residence [44,42,13,17], by citizenship [43,15] and by country of birth [38,16].This is true in EUROSTAT for both inflows and outflows, while in UN only for inflows, as UN outflows exist only by residence.For our purposes, however, we selected EUROSTAT outflows only by residence, since the ones by citizenship and by country of birth cannot properly be defined "flows", having missing destination country. feature 33: quinquennial migration stocks for each pair of countries consist in the absolute number of migrants residing in the destination country at given time.Data is obtained from UN [39] and includes stocks by sex and age. Methods The entire work was performed in Python 3.8 language, with the aid of Jupyter software 14 . The initial phase consisted in data collection and acquisition, starting from the exploration of open source portals and proceeding with data selection and download.Initially, only migration flows data were imported. Then a pre-processing phase started, where we carried out data understanding, cleaning and preparation.This has been managed by defining some functions that automatically clean and prepare source datasets. Here our data was subjected to various computational standard processes (such as outliers detection, duplicates handling, uniforming notation, etc. . .).Some of the operations that have been performed at this level included the selection of task-relevant data (detection of country-to-country valid records, aggregation removal, and non-bilateral flows elimination). 
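As an illustration of the selection step just described, the following hedged sketch drops duplicates, aggregate reporting units and non-bilateral records from a long-format flow table before it is reshaped. The column names and the list of aggregate codes are purely illustrative assumptions, not the exact EUROSTAT/UN field codes.

```python
# Hedged sketch of the pre-processing/selection of country-to-country valid records.
import pandas as pd

AGGREGATES = {"TOTAL", "EU27_2020", "EU28", "WORLD", "UNK"}   # assumed reporting aggregates

def select_bilateral(records: pd.DataFrame) -> pd.DataFrame:
    """records columns (assumed): origin, destination, year, value."""
    out = records.drop_duplicates()
    out = out[~out["origin"].isin(AGGREGATES) & ~out["destination"].isin(AGGREGATES)]
    out = out.dropna(subset=["origin", "destination"])         # remove non-bilateral flows
    return out
```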
The data transformation phase was fundamental to reshape the data in order to resemble the final structure (previously established by our design choices), so as to have a huge matrix with pairs of countries as rows. Concretely, this meant converting, grouping, and unstacking records of the source datasets in order to transform them into features (columns). We continued shaping this framework by working on indexing: to obtain the dataset index described in Section 3.2.2, duplicates of pairs of countries were not admissible. For this reason, specifically with respect to EUROSTAT flows, we established a priority for the selection of pairs: the union of keys (pairs) was taken by first selecting migration by citizenship, then by residence, and lastly by country of birth.

The following step was data integration, where we collected, included and computed all the other indicators. Geographic and interdisciplinary features related to single countries (5-25 in Table 2) have been processed in a separate dataset since, containing neither demographic data nor information about couples of countries, it can be reused in different contexts where needed. This countries.csv dataset has undergone the same pre-processing pipeline, but not the transformation one, since it has its own structure and design: it was then merged with the MIMI prototype previously obtained (already structured according to our needs) by matching both countries of origin and destination. Finally, the latest features (2-4 and demographic 26-33, in Table 2) were integrated by computing them or by following the previously described merging process, matching single countries or pairs when needed. Once integration was completed, it was helpful to check the data semantics and statistics of the resulting dataset and make some random inspections in order to verify the need for a further cleaning step. The final data quality assessment phase was one of the longest and most delicate, since many values were missing and this could have had a negative impact on the quality of the desired resulting knowledge. They have been integrated from additional sources reported, for each feature, in Section 3.2.2.

Usage notes

In this section our focus is on documenting and describing salient patterns in distributions and correlations of the data. We do not seek to provide causal analyses, nor do we want to imply causal relationships at this stage: however, we believe it can be useful to analyze the obtained numerical results since they may guide possible future research and lead to some interesting progress in human mobility studies. Unless otherwise specified, correlation values have been computed as simple Pearson's correlation [34], measuring the linear relationship between two variables: values of -1 or +1 imply an exact linear relationship, while 0 implies no correlation. P-values have been computed in order to confirm or refute the relevance of each correlation value; results are indicated in the heatmaps with a number of asterisks proportional to the relevance obtained:
• no asterisks: no relevance (p-value ≥ 0.5)
• *: little relevance (0.1 ≤ p-value < 0.5)
• **: medium relevance (0.01 ≤ p-value < 0.1)
• ***: high relevance (p-value < 0.01)
When no asterisks are reported for any of the values in a matrix, all the computed correlations are highly relevant, i.e., the asterisks are omitted because the p-values are always below the threshold of 0.01.
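A minimal sketch of this correlation analysis, mapping SciPy's Pearson p-values to the asterisk notation defined above, is given below; the column names in the usage comment are illustrative, not the exact MIMI identifiers.

```python
# Hedged sketch: Pearson correlation with the asterisk relevance notation.
from scipy.stats import pearsonr

def correlate(x, y):
    r, p = pearsonr(x, y)
    if p < 0.01:
        stars = "***"   # high relevance
    elif p < 0.1:
        stars = "**"    # medium relevance
    elif p < 0.5:
        stars = "*"     # little relevance
    else:
        stars = ""      # no relevance
    return r, p, stars

# usage sketch (illustrative column names):
# sub = mimi[["gdp_per_capita_destination_2015", "net_migration_rate_destination_2015-2020"]].dropna()
# r, p, stars = correlate(sub.iloc[:, 0], sub.iloc[:, 1])
```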
Data statistics, distributions and correlations

In this section we provide some practical examples of how to explore the data. Despite the impact of the COVID-19 pandemic on international human mobility, mostly related to travel restrictions and "stay-at-home" measures which reduced internal movements within a country [31], Figures 9 and 12 confirm that the numbers in the migration flow statistics did not suffer. However, a consistent flow of returners can be noticed for Thailand, probably due to COVID-19 itself, since in 2020 the pandemic prompted the return of hundreds of thousands of migrants to their countries of origin [33]. Regarding the migration stocks in Figure 10, the impact of COVID-19 on the global population of international migrants is difficult to assess, since the latest available data refer to mid-2020, fairly early in the pandemic. However, it is estimated that the pandemic may have reduced the growth in the stock of international migrants by around two million [31,29]. Moreover, almost all of these pairs of countries are included in the "top 20 international migration country-to-country corridors, 2020" list in the World Migration Report 2022 [31] (e.g., Mexico - United States, Syria - Turkey, India - Saudi Arabia, United Arab Emirates and United States, Afghanistan - Iran, Myanmar - Thailand), meaning that the greatest communities of permanently residing migrants in a host country have developed over the years for safety reasons. Boxplots in Figures 11, 12 and 13 display the statistical distribution of migration flow and stock values over the years, divided by sex. Increasing trends and regular patterns over time are well recognizable from the plotted time-series data, as is the statistical evidence that male migration shows larger numbers than female migration (on gender dimensions of human mobility, refer to [30]). The heatmap in Figure 14 shows correlations between the computed ratio of total migrants to the total population of a country and its cultural indicators, while Figures 15 and 16 correspond to the outcome of splitting that heatmap into immigration and emigration, with the mapping of annual correlation values onto a two-dimensional plane. The values of almost all indicators seem to initially lie mostly in the upper zone of the plane, showing a quite strong positive correlation with emigration, until some breakpoint years occur and the correlation value becomes henceforth highly negative. This radical change in trend cannot yet be supported and explained by a causal relation, so we limit ourselves to reporting its behavior. The correlations related to immigration lie in the middle region of the plot, quite far from the upper and lower extremities, and therefore assume less polarized values. Besides, there is no trend reversal for them. The correlation between the NET migration rate and the GDP of a country, shown in Figure 17, confirms the existing relation, well documented in the literature, between these two variables. The correlation is always positive, meaning that countries with a high GDP face a NET immigration trend, confirming that high per capita incomes are conducive to mobility [6]. Specifically, human mobility is influenced by GDP values up to more than 10 years back.
The heatmaps in Figure 18 illustrate the trends over the years in Spearman correlations between EUROSTAT migration flows and UN migration stocks. Although the correlation between stocks at a given time t and flows relative to previous years is self-evident (as those same flows are included in the total count of stocks), it is interesting to notice that quite strong positive correlations also propagate forward in time: this could mean that the higher the stock count at a given time t, the more migration flows will be shared by the pair of countries. Finally, Figure 19 explores the changes in trend of the NET migration rate for a small sample of countries. Figures 1, 2, 3 and 4 show some interesting insights about the distribution of values and the global coverage of the Social Connectedness Index. Figure 2: Density plot of SCI with logarithmic x axis. It shows a strongly right-skewed distribution, meaning that the smallest values of the indicator are the most frequent. Figure 3: Sample of the highest values of SCI, above the 99th percentile. It displays the country pairs with the highest strength of Facebook connectivity. Figure 4: Facebook strength of connectedness among continents (averaged aggregation of SCI for each pair of countries in the continents). Very high values of connectivity can be noticed in Oceania, Africa and South America. Intra-continental connections are much stronger than inter-continental connections, confirming that the intensity of friendship links declines strongly with geographic distance [4]. Figure 5: Inter-continental migration flows from UN (left) and EUROSTAT (right) in 2010. We point out a strong intra-continental mobility for Asia and Europe, but also relevant immigration trends in Oceania and Europe. In contrast, Asia and South America are subject to almost only emigration. Figure 6: Inter-continental migration flows from UN (left) and EUROSTAT (right) in 2014. Asia and South America remain continents with strong emigration, bound for the same destinations as in previous years, which appears even to increase in inter-continental trends. Figure 7: Inter-continental migration flows from UN (left) and EUROSTAT (right) in 2018. Mobility is declining with respect to previous years: Oceania experiences far fewer incoming migrants, as does Europe with outgoing ones. Figure 8: EUROSTAT bilateral migration flows in the most recent year available (2019): pairs of countries sharing the highest numbers of migrants. Figure 9: UN bilateral migration flows in the most recent year available (2020): pairs of countries sharing the highest numbers of migrants. Figure 10: UN bilateral migration stocks in the most recent year available (2020): pairs of countries with the highest numbers of permanently residing migrants. Figure 11: Distribution of migration flows from EUROSTAT. Male migration is always higher than female migration for each annual measurement, while the general trend over time is a slight increase of the migration phenomenon. Two drops in the progressive growth of values can be identified, corresponding to the 2012-2014 triennium and to 2017.
Figure 12: Distribution of migration flows from UN. The increasing trend encountered in the previous chart is not present for these distributions, where instead it is possible to notice a regularity in the behavior over time: a gradual descent takes a few years (whose ends coincide with the drops in the previous plot) and is then followed by a sudden peak of ascent. The discrepancy between male and female migration is sharper here. Figure 13: Distribution of migration stocks. The five-year measurement interval prevents a more detailed look such as the one available for flows; nevertheless, an increase in the general trend over the years is quite evident. Figure 14: Correlation between immigrants / emigrants over the years and cultural indicators of a country. Absolute numbers of total migrants have been divided by the annual total population of the country. Strong positive values are indicated in red, strong negative values in blue. Asterisks indicate the relevance of the p-values obtained, as described in Section 5. Figure 15: Distribution of correlation between total immigration and cultural indicators. Immigrants for each year are expressed as a ratio with respect to the total population of the country for the same year. Figure 16: Distribution of correlation between total emigration and cultural indicators. Emigrants for each year are expressed as a ratio with respect to the total population of the country for the same year. Figure 17: Correlation matrix between annual GDP per capita and five-year NET migration rate of a country. Figure 18: Spearman correlation between migration flows and stocks, divided by sex. Figure 19: Evolution of the five-year NET migration rate over time, for a sample of countries. Table 1: Temporal coverage of each time-related feature. "End" always refers to the latest available measure. For all the abbreviations refer to Section 5.1. Table 2: Features list. The exact name of each single indicator can be retrieved by following the rule explained in Section 3.2.1.
7,747.6
2022-04-26T00:00:00.000
[ "Geography", "Sociology", "Economics" ]
Potential technique for improving the survival of victims of tsunamis We investigated a method for surviving tsunamis that involved the use of personal flotation devices (PFDs). In our work, we succeeded in numerically demonstrating that the heads of all the dummies wearing PFDs remained on the surface and were not dragged underwater after the artificial tsunami wave hit them. In contrast, the heads of all the dummies not wearing PFDs were drawn underwater immediately; these dummies were subsequently entrapped in a vortex. The results of our series of experiments are important as a first step to preventing the tragedies caused by tsunamis. Introduction According to the definition given by the World Health Organization (WHO), "tsunamis are giant sea waves that are produced by submarine earthquake or slope collapse into the seabed" [1]. During the 21st century, the world has already experienced at least two tremendous earthquakes, namely, the Sumatra-Andaman Earthquake (Mw 9.1) on December 26, 2004, and the Tohoku-Pacific Earthquake (Mw 9.0) on March 11, 2011 [2,3]. Both of these earthquakes caused devastating tsunamis that claimed the lives of approximately 230,000 and 18,000 people, respectively. It is very important to study the mechanisms of tsunamis and how they impact people in order to prevent heavy casualties during the next huge tsunami, which is expected to occur in the near future [4][5][6][7][8][9]. While there are papers and websites that discuss how to evacuate safely during a tsunami [10,11], to the best of our knowledge, there are no papers describing how people move in the water when engulfed in a tsunami. Some websites say that personal flotation devices (PFDs) are effective when dealing with a tsunami, but the grounds for this statement are not clear [12]. Past tsunami disasters have shown us that it is extremely difficult even for good swimmers to escape from drowning. According to the new definition adopted by the WHO in 2002, "drowning is the process of experiencing respiratory impairment from submersion/immersion in liquid" [13]. If a victim cannot rise to the surface of the water, the victim will lose consciousness within a short period of time, and then, breathing will stop and cardiac arrest will follow within 4-5 min [14]. To study why this is so, it is necessary to survey the movements of people after they are swallowed by a tsunami. Furthermore, it would be of great interest to investigate whether a person who wears a PFD would have a greater chance of survival. When people without PFDs are dragged under the water and cannot resurface, they will not be able to hold their breath and will immediately start gasping for air, which will be followed by rapid, deep breaths and an increase in breathing volume to about five times that of the normal resting level [15,16]. In our work, we have conducted a series of experiments in a large flume and observed the movements of dummies with and without PFDs attached to them during a simulated tsunami. Here, we demonstrate numerically that the heads of all the dummies wearing PFDs remained on the surface and were never dragged down into the water after being struck by artificial tsunami waves. In contrast, the heads of all the dummies not wearing PFDs were drawn underwater and entrapped in a vortex immediately. The results of this series of experiments are an important first step for improving tsunami survivorship during future disasters.
Large Hydro Geo Flume Experiments were conducted at the Large Hydro Geo Flume in the Port and Airport Research Institute, Nagase, Yokosuka, Kanagawa Prefecture, Japan. This flume is 184 m long, 3.5 m wide, and 12 m deep, and it has two side windows (every window is 2.7 m in length and 2.5 m in height) that enable observers to view the sides of the pool water (Fig 1A and 1B) [17]. Simulated tsunami waves (0.59 ± 0.13 m high) hit the dummies within the flume. Dummies The dummies employed in this study were Simulaids's1 Water Rescue Manikin (Item number 1328). In every experiment, the internal cavity of the dummy was filled with water to maintain its weight in air at about 48 kg and its specific gravity at 1.05, which is almost equal to a human's specific gravity. A light-emitting diode was installed on the head of every dummy to facilitate tracking of the positions of the dummy. Every dummy was lowered down to the concrete block on the floor of the flume. Then, the dummy was positioned on the concrete block at right angles to the long axis of the flume in a supine position. The top of the block was 20 cm below the water surface (Fig 2). Personal flotation devices (PFDs) The Calculation of the positions of the heads of dummies Video sequences of the dummies carried by tsunami waves were recorded by using two synchronized cameras. The first camera was used to capture a side view, and the other one was used to capture a top view. Recorded video sequences were split into separate image frames, where the frame rate was reduced to 1/10 of each original sequence. Next, the position of the head of the dummy was manually marked in each image frame, and the pixel coordinates of the marks were extracted by using an image processing method. Extracted pixel coordinates were then used to estimate the position of the dummy's head in the real coordinate system. The horizontal position of the dummy's head was calculated from the pixel position in each top-view image, where the perspective distortion was not taken into consideration. On the other hand, the water depth above the head, i.e., vertical distance between water level and head position, was calculated from the pixel position in each side-view image. These calculations were performed with respect to the perspective distortion, i.e., the distance from the side-view camera was used to correct the perspective distortion of the water depth above the dummy's head. The distance from the side-view camera was acquired from the top-view images [18]. This series of experiments revealed several key findings. Example side-view images of the movement of dummies with and without PFDs are shown in Figs 3 and 4, respectively. First, after the tsunami struck, the heads of all the dummies wearing PFDs remained afloat; the heads were higher than the water level and were never dragged underwater. The dummies with PFDs were carried in motions similar to that of surfers during wave surfing (Figs 3 and 5, No. 1, 2, 3, 4). The movements of the dummies' heads in Experiment No. 1 and 6 are shown three-dimensionally in Fig 9. In Experiment No. 1 (with a PFD), the dummy moved on the surface of the water. Conversely, in Experiment No. 6 (without a PFD), the dummy was whirling in the water. Discussion A typical tsunami has a very high speed of roughly 700 km/h as it emerges from the deep sea, after which it suddenly slows down when it reaches shallow coastal regions where it may retain a relatively high speed of about 40 km/h. 
At these moments, tsunami waves rear up precipitously and one can realize the Japanese meaning of the word "tsunami" (wave "nami" in a harbor "tsu"). This is when people witness giant surges; tsunami waves will often overcome a majority of witnesses, even if they run away to save their lives. In addition, when a tsunami recedes, the water will sweep away victims and debris from the land to the sea [19]. As recommended by the Japanese Sanriku coast's old adage "Tsunami tendenko" ("Run uphill on your own will when a tsunami comes"), the first defensive action against any tsunami is to act quickly and seek higher ground immediately. However, since a highly reliable tsunami detection and warning system is still in the developmental stages, people are apt to delay or even ignore evacuations [20][21][22]. Some of them even deliberately take the opposite direction and head towards the beach to watch the incoming tsunami [23]. Findings from the systematic literature review indicate that the primary cause of tsunami-related mortality is drowning [24]. During the 2004 Sumatra-Andaman Earthquake, the main cause of death was drowning due to the tsunami [25][26][27]. According to the Japanese National Police Agency's report based on data obtained from post-mortem certificates after the Tohoku-Pacific Earthquake, the cause of death for 14,308 of the 15,786 fatalities (90.64%) was drowning, while 667 (4.23%) died from severe impact injuries [28]. People might be crushed to death by various debris such as that from destroyed houses and buildings, wrecked boats, and cars, which would be whirling in the water, or they might be crushed against a wharf or breakwater and suffer fatal injuries. Even so, we have to admit that large numbers of people were engulfed in the tsunami waves and drowned in the Tohoku-Pacific Earthquake on March 11, 2011. Therefore, there is an urgent need to find a technique that can prevent drowning. With such a technique, it would be possible to reduce the number of victims who drown and die in tsunamis. Unfortunately, however, there is a lack of information on the cause of drowning during tsunami disasters. During the Tohoku-Pacific Earthquake, the body of every victim of the tsunami was examined by a forensic doctor or a medical coroner, and through this examination, it was confirmed that the cause of death of almost all the victims was drowning. However, there were no detailed descriptions beyond drowning in the post-mortem certificates. Why could the tsunami victims not swim? Why were they not able to cling firmly to floating objects on the surface of the water? We searched the literature and could not find answers to these important questions. Hence, we conducted a series of experiments to analyze the cause of drowning during a tsunami. In our experiments, all the heads of dummies not wearing PFDs were entrapped in vortices after the tsunami wave hit them. They continued whirling intensely up and down in the water but never came up to the water surface. When tsunamis engulf people below the water surface and the rate as well as the depth of their breathing increase dramatically, people have no other choice but to inhale water, since it would be nearly impossible to swim up to the water surface. This greatly increases the risk of drowning.
Since tsunamis, whose wavelengths are very long, are generated by the displacement of huge volumes of water, the whirlpools created by them are extremely powerful and continue for a long time. Therefore, once people without PFDs are caught up in a tsunami, it is very difficult for them to escape from such whirlpools. Even skilled swimmers without PFDs would not be able to resurface quickly and remain afloat. This severe whirling of tsunami waves is likely one of the main factors that cause the overwhelming majority of tsunami victims to drown. The buoyancy of widely popular PFDs is 7.031 kg (15.5 lbs) [29], and the effectiveness of PFDs has been studied for recreational swimming and fishing [30][31][32]. However, to the best of the authors' knowledge, there is a lack of information on the effectiveness of PFDs during catastrophic tsunami disasters. In our experiments that employed widely popular PFDs, the dummies wearing the PFDs were not dragged underwater. They remained afloat, and their heads were higher than the water level. As our experiments demonstrated, it can be concluded that when people are engulfed within tsunami waves, PFDs will provide them with a higher chance of survival because they will remain on the surface of the tsunami waves and will still be able to breathe. In other words, a PFD is a critical piece of equipment for surviving tsunamis. In critical situations, when a tsunami wave is already visible to people and there are no PFDs around, they might be able to put empty plastic bottles between their skin and clothing, hold on to garbage cans, or wear helmets as substitutes for PFDs and other protective gear. These actions represent the second-best tsunami survival technique. It is reasonable to assume that hypothermia victims were included in the 15,786 fatalities (90.64%) of the Tohoku-Pacific Earthquake because the seawater temperature was very low (5 to 7˚C) [33]. If the entire human body is immersed in water at such low temperatures, its core temperature will decrease to a critical level within 2 h [34]. People with PFDs might be transported far from the coast; therefore, it will be necessary to establish a reliable rescue system to save them before their core temperature drops to a severe level where hypothermia can set in. People lose consciousness when their core temperature drops to 30˚C. However, if people wear PFDs, they would be able to avoid drowning even if they lose consciousness, as they would merely float on the water surface while still retaining their ability to breathe [34][35][36]. Tsunamis might kill people in multiple ways, as mentioned above. In the next series of experiments, we plan on evaluating whether a dummy wearing a PFD would be able to overcome a crash with debris swept by the water or a crash against a concrete block. Notably, the wave heights of our artificial tsunamis were much lower than those of natural tsunamis, which often exceed 10 m. Regardless, our experiments demonstrated that dummies without PFDs were caught up in the vortex and could not resurface after they were hit by the artificial tsunami. On the other hand, dummies wearing PFDs were not drawn under and were able to continue to float on the water surface. Based on these results, we are planning to carry out further experiments with a 1.5 m high artificial tsunami and simulations of 10 m high tsunami waves using computer software [37,38].
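As a rough plausibility check of the figures quoted in this discussion, the short calculation below compares the net downward force on a dummy with the stated mass and specific gravity against the rated buoyancy of a widely popular PFD. This is only a back-of-envelope sketch based on the numbers reported in this paper (48 kg, specific gravity 1.05, 7.031 kgf of buoyancy); it is not part of the original analysis.

mass_kg = 48.0            # dummy mass reported in the Methods
specific_gravity = 1.05   # dummy specific gravity reported in the Methods
pfd_buoyancy_kgf = 7.031  # rated PFD buoyancy (15.5 lbs) quoted above

# A fully submerged body of specific gravity s displaces mass/s kilograms of water,
# so its net negative buoyancy is mass * (1 - 1/s).
net_sink_kgf = mass_kg * (1.0 - 1.0 / specific_gravity)
margin_kgf = pfd_buoyancy_kgf - net_sink_kgf
print(f"Net downward force on the dummy: {net_sink_kgf:.2f} kgf")  # about 2.29 kgf
print(f"Spare buoyancy with the PFD:     {margin_kgf:.2f} kgf")     # about 4.75 kgf

Under these assumptions the PFD's rated buoyancy exceeds the dummy's net negative buoyancy roughly threefold, which is consistent with the observation that the dummies wearing PFDs stayed at the surface.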
The results of our series of experiments are important as a first step to improve survivorship during tsunami disasters, and application of the results could likely save numerous lives. Conclusion Our experiments with approximately 50 cm high artificial tsunami waves demonstrated that PFD use is an effective technique to prevent drowning during a tsunami. Specifically, the heads of all the dummies not wearing PFDs were entrapped in a vortex and drawn underwater immediately. In contrast, the heads of all the dummies wearing standard PFDs remained on the surface and were not dragged underwater. These findings were obtained through video images, which were taken from the side window of a large flume. Drowning is the main cause of death during a tsunami. Thus, use of PFDs during a tsunami could potentially save numerous lives.
3,416
2018-05-23T00:00:00.000
[ "Physics" ]
A New Method for Evaluating Pelvic and Trunk Rotational Pitching Mechanics: From Qualitative to Quantitative Approaches The purpose of this study was to build on existing qualitative to quantitative approaches to develop a new quantitative method for evaluating pelvic and trunk rotational pitching mechanics. Thirty pitchers were divided into two groups (“Pattern1”: closed “hip-to-shoulder separation”; “Pattern2”: open “hip-to-shoulder separation”). Several parameters were analyzed. Higher ball speeds were found in group of Pattern1, four key characteristics of which were identified. Based on the results, a new evaluation method was developed. Pelvic and trunk rotational mechanics were classified into four types. Type1 (proper mechanics) enabled significantly higher ball speed than the other three types and was thought to involve proper energy transfer from the stride foot to the throwing upper limb. Types 2–4, however, were regarded as “improper mechanics”, which could result in slower ball speeds and less efficient energy transfer. A qualitative approach, based on “expert opinion”, can specify optimal pelvis and trunk rotational mechanics. However, quantitative analysis is more precise in identifying three improper types of pelvis and trunk rotational mechanics. Furthermore, special programs, such as core strengthening and flexibility training, can be developed for various improper practices in order to improve pitching mechanics. Introduction Pitching mechanics make up part of a kinetic chain in which energy or momentum is transferred from the striding foot through to the throwing hand [1,2]. Suboptimal pitching mechanics are thought to diminish parameters of performance such as ball speed and to increase the risk of injuries such as overload in joints [3,4]. Various studies have focused on improper pitching mechanics in the extremities [5][6][7][8]. Pelvis and trunk, which are also described as "runners in this relay race of energy transfer", play important roles. The literature features two common approaches to describing pelvic and trunk rotational mechanics. The qualitative approach is to identify certain features such as "early trunk rotation" [9] and "hip-to-shoulder separation" [5,[10][11][12]. The quantitative method draws on kinematic data regarding the pelvis and trunk, such as the pelvic/trunk rotation angle, pelvic/trunk orientation, trunk separation, and spinal rotation [13][14][15][16][17][18]. Although there are various descriptions in the literature, there have yet to be any studies that approach pelvic and trunk rotational mechanics in highly systematic way. First, we seek to rectify that by starting from a qualitative perspective. Hip-to-shoulder separation refers to the position of the hips relative to the shoulder just prior to foot contact being made. It is a commonly-employed qualitative method and is described in detail by Erickson et al. [10]. We compared the parameters that differentiated the closed and open hip-to-shoulder separation patterns and hypothesized that a closed pattern produced a higher ball speed. Moreover, we aimed to determine the parameters that significantly differed between these two patterns. These parameters can be further utilized for quantitative analysis. Second, pelvis and trunk are linkage parts of axial body structure. In most of the quantitative kinematic studies in the literature [13][14][15][16][17][18], features of pelvic and trunk rotation were analyzed separately. 
We thought it might be useful to understand more about the characteristics of pelvic and trunk rotational mechanics by placing the kinematic curves of the pelvis and trunk together. Finally, the aim of these procedures was to develop a new quantitative method, based on the results of quantitative analysis, which constituted a more precise way to evaluate pelvic and trunk rotational pitching mechanics. Participants Thirty adult male elite pitchers were enrolled for this study. All of them were right-hand dominant and without a history of surgery due to injuries sustained while pitching. Informed consent documents, which were approved by the Institutional Review Board of the affiliated institutions (KMUHIRB-SV(I)-20180022), were signed by all of the participants. Procedures This study was conducted outdoors, in a baseball stadium. Following stretching and warm-up, each participant threw ten overhand fastballs with maximum effort from the pitching mound toward a catcher at the home plate. Protocols of the warm-up, marker placement and experimental setup were similar to those cited in our previous studies [19][20][21]. For each pitching task, reflective markers were attached to the participants and tracked by a motion capture system (Motion Analysis Corporation, Santa Rosa, CA, USA) that comprised eight charge-coupled device cameras with a sampling frequency of 300 Hz. The ball speed in each pitch was measured using a radar gun (Jugs Sports International Distributors, Tualatin, OR, USA). The position of the tracked markers was then used for the estimation of joint centers, three-dimensional body-segment locations, and kinematics during each pitching task. All pitches were divided into two patterns by one expert (a pitching coach from a professional team) in accordance with a closed and open pattern of hip-to-shoulder separation [10]. The expert watched videos of each pitch from near the catcher's viewpoint. At the moment of foot contact, pitches in which a complete "arm-elbow" structure of the throwing arm was not visible were identified as a closed pattern of hip-to-shoulder separation (Figure 1A). At the moment of foot contact, pitches in which a complete arm-elbow structure of the throwing arm was visible were identified as conforming to the open pattern of hip-to-shoulder separation (Figure 1B). Pitches that were ambiguous were excluded. The participants who had more than five pitches of the closed pattern were assigned to the group of Pattern1. The participants who had more than five pitches of the open pattern were assigned to the group of Pattern2. For each participant in Pattern1, his top five fastest closed pattern pitches were selected for analysis. Similarly, the top five fastest open pattern pitches were selected for analysis from the group of Pattern2. Ultimately, 30 participants with 150 fastballs were enrolled. Demographics of the two groups are presented in Table 1. The kinematic definition of pelvic and trunk axial rotation is in relation to the global coordinate system. In the transverse plane, rotation towards the home plate is defined as 0°, whereas that towards third base is defined as −90° (Figure 2).
Parameters The parameters derived from the kinematic data (Table 2) were used in the approach, including: (1) parameters of the timing of events; (2) parameters of the angle at the time of the events; (3) parameters of trunk-pelvis separation (TPS) at the time of the events; (4) parameters associated with special time events and intervals; (5) parameters that are important in the stride phase; and (6) the ball speed. The kinematic definition of pelvic and trunk axial rotation is in relation to the global coordinate system. In the transverse plane, rotation towards the home plate is defined as 0°, whereas that towards third base is defined as −90° (Figure 2). MKU = maximum-knee-up; FC = foot contact; MER = maximum-shoulder-external-rotation; BR = ball-release. Parameters on the Timing of Events There are five hallmark timing events associated with a baseball pitch, including maximum-knee-up (MKU), foot contact (FC), maximum shoulder external rotation (MER), ball-release (BR) and maximum shoulder internal rotation (MIR) [6]. The time interval between FC and BR, abbreviated as "BRt-FCt", was used for the normalization. The time of the FC was set as 0, and that of BR as 100%. The "time ratio" was calculated to normalize the timing of events between pitches. The formula of the time ratio of an event is: time ratio = (event time − FC time) / (BR time − FC time) × 100%. "MKUr" is the abbreviated form of the "time ratio of MKU", "MERr" is the abbreviated form of the "time ratio of MER", and "MIRr" is the abbreviated form of the "time ratio of MIR". The parameters of the timing of events include: "BRt-FCt", "MKUr", "MERr" and "MIRr". Parameters of Trunk-Pelvis Separation at the Time of Events Trunk-pelvis separation (TPS) is defined as the trunk angle minus the pelvic angle at the same moment. The TPS is similar to the trunk axial rotation angle expressed in pelvic coordinates, and in the literature the value of TPS is equivalent to "trunk separation", "trunk twist" and "spinal rotation" [14,[16][17][18]. "TPSoMKU" is the abbreviated form of "TPS at the moment of MKU", etc. The parameters of the TPS at the time of events include: "TPSoMKU", "TPSoFC", "TPSoMER", "TPSoBR", and "TPSoMIR". Parameters Associated with Special Time Events and Intervals Two special time events are acknowledged when plotting the curves of the pelvic angle and trunk angle together (Figure 3A). One event is the first crossing of the curves of the pelvic and trunk angles (the first angle crossing is abbreviated as "AC1").
Another event is the second crossing of the curves of the pelvic and trunk angles (the second angle crossing, abbreviated as "AC2"). "AC1r" is the abbreviated form of the "time ratio of AC1". "AC2r" is the abbreviated form of the "time ratio of AC2". "AC2r-AC1r" is the abbreviated form of the "time ratio interval between AC1 and AC2". The parameters associated with special time events and intervals include "AC1r", "AC2r" and "AC2r-AC1r". Parameters that are Important during the Stride Phase Several studies have noted that some parameters during the stride phase, such as stride length, maximum knee height and stride foot contact direction, are important and may affect the ball speed or some of the pitching mechanics [4,22]. "StrideL/BH" is the abbreviated form of the "percentage of stride length normalized with body height". "MKH/BH" is the abbreviated form of the "percentage of maximal knee height normalized with body height". "SFCD" is the abbreviated form of "stride foot contact direction". The definition of "SFCD" is the same as the kinematic definition of this study. In the transverse plane, the SFCD towards home plate is defined as 0°, whereas that towards third base is defined as −90°. The parameters that are important in the stride phase include "StrideL/BH", "MKH/BH" and "SFCD". Finally, the resulting parameter is "Ball Speed". Statistical Analyses Statistical analyses were conducted using SPSS 12.0 (SPSS Inc., Chicago, IL, USA) software. The mean data from five pitches by each subject were used for the analysis. The independent t test was used for comparison of each of the parameters between the two groups. A statistically-significant α level was set a priori to 0.05. Cohen's d effect sizes were calculated. Then, the Pearson r correlation between each parameter and the ball speed was calculated. For multiple comparisons, the false discovery rate (FDR)-adjusted p-values were calculated using the R package "qvalue". If a parameter had an FDR-adjusted p-value smaller than 0.05, it was considered related to ball speed and adopted for further analysis. A receiver operating characteristic (ROC) curve analysis was employed to determine the best cut-off value of these parameters in order to discriminate between low and high ball speeds. Results A schematic diagram comparing Pattern1 and Pattern2 viewed in the transverse plane at different time ratios is presented in Figure 2. The curves of the mean rotational angle of the pelvis and trunk related to time ratio from MKU to MIR of Pattern1 and Pattern2 are shown in Figure 3A,B. The curves of the mean pelvic angle of Pattern1 and Pattern2 related to time ratio from MKU to MIR and those of the trunk are shown in Figure 4A,B. The curves of the mean TPS of Pattern1 and Pattern2 related to time ratio from MKU to MIR are shown in Figure 4C. It can be noted that Pattern1 and Pattern2 have different mean time ratios of MKU, MIR, AC1, and AC2.
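To make the parameter definitions above concrete, the sketch below shows how the time ratios, the TPS curve, and the two angle crossings AC1 and AC2 could be computed from synchronized pelvis and trunk axial-rotation time series. It is an illustrative outline only, not the authors' processing code; the event times and angle curves are made-up values.

import numpy as np

def time_ratio(t_event, t_fc, t_br):
    # Normalize an event time so that FC maps to 0% and BR to 100%.
    return (t_event - t_fc) / (t_br - t_fc) * 100.0

def angle_crossings(time_axis, pelvis_deg, trunk_deg):
    # TPS = trunk - pelvis, so crossings of the two curves are sign changes of TPS
    # (the first crossing is AC1, the second is AC2).
    tps = trunk_deg - pelvis_deg
    idx = np.where(np.diff(np.sign(tps)) != 0)[0]
    crossings = []
    for i in idx:
        frac = tps[i] / (tps[i] - tps[i + 1])  # linear interpolation between samples
        crossings.append(time_axis[i] + frac * (time_axis[i + 1] - time_axis[i]))
    return crossings

# Hypothetical event times in seconds; MKU precedes FC, so its time ratio is negative.
t_fc, t_br, t_mku = 1.20, 1.35, 0.80
print(f"MKUr = {time_ratio(t_mku, t_fc, t_br):.1f}%")

# Toy angle curves (degrees) on a time-ratio axis, just to exercise the function.
t = np.array([0.0, 10, 20, 30, 40, 50, 60])
pelvis = np.array([-80.0, -70, -40, -15, -5, 0, 0])
trunk = np.array([-75.0, -72, -60, -40, -10, 2, 5])
ac1, ac2 = angle_crossings(t, pelvis, trunk)
print(f"AC1r = {ac1:.1f}%, AC2r = {ac2:.1f}%, Phase2 length = {ac2 - ac1:.1f}%")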
The definition of "SFCD" is the same as the kinematic definition of this study. In the transverse plane, the SFCD towards home plate is defined as 0°, whereas that towards third base is defined as −90°. The parameters that are important in the stride phase include, "StrideL/BH", "MKH/BH" and "SFCD". Finally, the resulting parameter is "Ball Speed". Statistical Analyses Statistical analyses were conducted using SPSS 12.0 (SPSS Inc., Chicago, IL, USA) software. The mean data from five pitches by each subject were used for the analysis. The independent t test was used for comparison of each of the parameters between the two groups. A statistically-significant α level was set a priori to 0.05. Cohen's d effect sizes were calculated. Then, the Pearson r correlation between each parameter and the ball speed was calculated. For multiple comparisons, the false discovery rate (FDR)-adjusted p-values were calculated using the R package "qvalue". If a parameter with an FDR-adjusted p-value was smaller than 0.05, this was related to ball speed and adopted for further analysis. A receiver operating characteristic (ROC) curve analysis was employed to determine the best cut-off value of these parameters in order to discriminate between low and high ball speeds. Results A schematic diagram comparing Pattern1 and Pattern2 viewing in the transverse plane at different time ratios is presented in Figure 2. The curves of mean rotational angle of pelvis and trunk related to time ratio from MKU to MIR of Pattern1 and Pattern2 are shown in Figure 3A,B. The curves of mean pelvic angle of Pattern1 and Pattern2 related to time ratio from MKU to MIR and those of trunk are shown in Figure 4A The results of the parameters of the timing of events are shown in Table 3. The "MKUr" of Pattern1 is significantly earlier than that of Pattern2. The results of the parameters of the angle at time events are shown in Table 3. The "TAoMKU", "TAoMER", "TAoBR" and "TAoMIR" of Pattern1 significantly trail behind Pattern2. However, the "PAoFC", "PAoMER" and "PAoBR" of Pattern1 are leading. The results of the parameters of the TPS at time events are shown in Table 3. All of the parameters listed here significantly differ between Pattern1 and Pattern2. The results of the parameters associated with special time events and intervals are shown in Table 3. The "AC1r" of Pattern1 is significantly earlier than in Pattern2. In contrast, the "AC2r" of Pattern1 is later than in the case of Pattern2. Additionally, the intervals between AC1 and AC2 ("AC2r-AC1r") are significantly longer in Pattern1 than in Pattern2. The results of parameters that are important in stride phase are displayed in Table 3. No significant differences were identified between Pattern1 and Pattern2. The results of the ball speed are shown in Table 3. The "Ball Speed" was significantly higher in Pattern1 than in Pattern2. Table 4 presents the correlation coefficients and cut-off values from the ROC of each parameter with statistically-significant correlations with ball speed. Cohen's d effect sizes of these parameters are also shown in Table 4. The parameter with the highest correlation with ball speed is "PAoFC". It also has the largest Cohen's d effect size. The cut-off value of "PAoFC" from the ROC can be used as a reference to discriminate pitches between low ball speed (with "PAoFC" < −69.95 • ) and high ball speed (with "PAoFC" > −69.95 • ). Other parameters with statistically-significant correlations with the ball speed are listed in Table 4. 
Discussion The rotation of the pelvis and trunk is an area that merits analysis and may be key factors in pitching mechanics. Therefore, it is important to understand and follow a scientific method to evaluate these mechanics. This study seeks to systematically incorporate both qualitative and quantitative approaches to this. The goal of the former is to identify features between different patterns of "hip-to-shoulder separation". The novelty of this study is placing the curves of the pelvis and trunk together. By doing so, the characteristics of pelvic and trunk rotational mechanics can be more clearly seen. Three phases of pelvic and trunk rotation during pitching are introduced herein. Closed pattern of "hip-to-shoulder separation" was found to have a higher ball speed. Four characteristics of pelvic and trunk rotation of pitches with closed "hip-to-shoulder separation" were also identified. These lend fresh perspective to pitching mechanics. Three Novel Phases of Pelvic and Trunk Rotation during Pitching As was noted above, there were two special time events, the first angle crossing (AC1) and the second angle crossing (AC2), which were noted while plotting the curves of the pelvic and trunk angles together. We divided a pitching cycle involving the pelvis and trunk into three distinct phases based on these time events ( Figure 3A). The definition of Phase1 is the interval from MKU to AC1. In Phase1, the angle of the pelvis is more backward behind the trunk, and so the TPS is positive. The definition of Phase2 is the interval from AC1 to AC2. In Phase2, the angle of the pelvis goes beyond the trunk, and the TPS becomes negative. The definition of Phase3 is the interval from AC2 to MIR. In Phase3, the trunk rotates over the pelvis (positive TPS) again. The results show that Pattern1 has a significantly faster ball speed than Pattern2, which implies that Pattern1 has better rotational mechanics of the pelvis and trunk for faster ball speeds. We will discuss and explain the characteristics of Pattern1 according to the stated three phases of pelvic and trunk rotation during pitching. Phase1 The pelvis functions like a rocket booster, and the trunk, proceeding with the metaphor, is the spacecraft. In Phase1, the goal is pre-tension of the pelvis for elastic energy storage, followed by pelvic run-up for the preparation of boosting. The MKU is the initial moment of the stride phase and can be thought of as the timing of the start-up of the pelvic backward rotation for elastic energy storage [22]. In plots of the pelvic rotational angle, the trunk rotational angle and TPS, Pattern1 has an earlier MKU ("MKUr") than Pattern2 (Figure 4A, Arrow1; Figure 4B, Arrow1; Figure 4C, Arrow1). In addition, Pattern1 exhibits backward rotation of the trunk following the pelvis ( Figure 4B, Arrow2), whereas Pattern2 does not feature this behavior. The backward rotation of the trunk results in a decrease in the maximum TPS in Phase1 ( Figure 4C, Arrow1). In summary, during Phase1, Pattern1 shows characteristics with the curve of the pelvic rotational angle being shifted earlier ( Figure 4A) and that of the trunk rotational angle being pressed down ( Figure 4B). Phase2 In Phase2, the goal is to boost the pelvis first, followed by an acceleration of the trunk. In the plot of the TPS, Pattern1 had an earlier AC1 ("AC1r") and later AC2 ("AC2r") than Pattern2 ( Figure 4C Arrow2, Arrow4). In other words, Pattern1 represented a longer period of Phase2 ("AC2r-AC1r"). Luera et al. 
studied the kinematic characteristics of high school pitchers versus professionals [13] and found that high school pitchers were incapable of rotating their trunks and pelvises to aid in pitching. Therefore, high school pitchers primarily threw hard by generating larger forces in their elbows and shoulders, which may increase their risk of injury. In the study, the figure on upper trunk rotation was similar to the plot of the TPS in this work. This was because definition of trunk rotation in their study was in relation to pelvic coordinate system, rather than global one. The time period between the first and second zero-crossing of the mean upper trunk rotational angle was longer in the group of professional pitchers, who had a correspondingly faster ball speed. This accords with the result of our study that Pattern1 (with faster ball speeds) has a longer period of Phase2 ("AC2r-AC1r"). In the plot of the pelvic rotational angle, Pattern1 demonstrates more leading pelvic rotational angle at the moment of FC ("PAoFC") than Pattern2 does ( Figure 4A, Arrow2). Oi et al. compared the difference between Japanese and American pitchers [15]. American pitchers threw with a higher ball velocity than their Japanese counterparts. The American group exhibited more leading pelvic rotation angle at the instant of lead foot contact than the Japanese one. Wright et al. noted that pitchers who were defined as "early pelvis rotators" (more leading "PAoFC") displayed greater shoulder external rotation at the moment of FC and the earlier occurrence of maximal pelvic rotation angular velocity [23]. In the plot of the TPS, Pattern1 shows more of an absolute value of the negative TPS (trunk trailing behind the pelvis) at the moment of FC ("TPSoFC") than Pattern2 ( Figure 4C, Arrow3). This result was also reported in the study by Luera et al., with the finding that professional pitchers with a higher pitch velocity had significantly greater upper trunk rotation (equivalent to "TPSoFC") [13]. Nissen et al. noted that the relative difference in rotation between the pelvis and trunk at FC (equivalent to "TPSoFC") was 28 • with greater external rotation of the trunk in relation to pelvis [17]. They assumed that this difference in rotation enabled "coiling" whereby potential energy was built up and subsequently transferred to the arm. Fleisig et al. observed the biomechanical changes in youth pitchers between the ages of nine to 15. They noted that trunk separation and ball velocity both increased with age [16]. In the plot of the trunk rotational angle, Pattern2 features a turning-back of trunk prior to FC ( Figure 4B, Arrow4), whereas Pattern1 does not exhibit this behavior ( Figure 4B, Arrow3). In summary, during Phase2, Pattern1 shows characteristics in terms of the curve of the pelvic rotational angle being pulled up far from the curve of the trunk rotational angle ( Figure 4A). This results in the expanding and shifting down of the entire TPS curve ( Figure 4C). Moreover, Pattern1 displays characteristics with no turning-back of the trunk rotational angle prior to the FC ( Figure 4B). Phase3 In Phase3, the goal is slowing down the pelvis and trunk to achieve a sudden "stop" for a relatively stable condition. In this condition, energy can be more efficiently transferred to the throwing arm. This was consistent with a study by Dun et al. [14], who noted that energy can be transferred in a more effective way if the lower body segment was stabilized while the upper one was in movement or rotation. 
After the ball has been released, which means the task is complete, the pelvis and trunk continue to rotate forward for the follow-through and unloading, decreasing the risk of injury. Characteristics of Pattern1 (Closed Hip-to-Shoulder Separation) In summary, the Pattern1 pitchers display several characteristics (described below and illustrated in Figure 4A-C) that distinguish them from their counterparts in Pattern2, who have slower ball speeds. These are as follows: (1) They rotate their trunks backwards following the pelvises in Phase1 (Figure 4B). (2) They commence rotation of their pelvises (backwards and then forwards) earlier in Phase1 (Figure 4A). (3) They achieve a more leading pelvic angle (Figure 4A) and gain more angle between the pelvis and trunk around the moment of foot contact in Phase2 (Figure 4C). (4) They do not rotate their trunks backwards in Phase2 just before foot contact, while pitchers in Pattern2 rotate their trunks backwards (Figure 4B). These four characteristics may be critical to proper pelvic and trunk rotational mechanics for faster ball speeds. They can be taken as references to help pitchers and coaches evaluate and improve their pitching mechanics. Evaluation of Pelvic and Trunk Rotational Mechanics Using "PAoFC" Accompanied by "TPSoFC" For sports scientists or coaches/pitchers who wish to understand pitching mechanics more precisely, the recognition of closed and open 'hip-to-shoulder separation' by experts is insufficient. In accordance with the results and characteristics displayed by Pattern1 in this study, we develop a more objective method for evaluating pelvic and trunk rotational pitching mechanics. "PAoFC", which is the parameter with the highest correlation with ball speed and the largest effect size, is used as the primary component of this method, accompanied by "TPSoFC". A "PAoFC" of −70° and a "TPSoFC" of −25° are used here for classification, based on the cut-off values from the ROC. Thus, pitches can be classified in terms of four types of pelvic and trunk rotational mechanics (Figure 5). Type1 represents a leading pelvic rotational angle at the moment of FC and is followed by the trunk with enough separation between the pelvis and trunk. This is regarded as "proper mechanics". Type2 represents a leading pelvic rotational angle at the moment of FC and is followed by the trunk, but with insufficient separation between the pelvis and trunk. This is considered "early trunk rotation". Type3 represents a pelvic rotational angle that falls behind at the moment of FC and is followed by the trunk with insufficient separation between the pelvis and trunk. This is referred to as "delayed pelvic rotation". Type4 constitutes a pelvic rotational angle that falls behind at the moment of FC and is followed by the trunk with enough separation between the pelvis and trunk. Typically, the separation of Type4 results from the turning-back of the trunk's rotation prior to FC. Pitchers who feature Type4 mechanics may attempt to increase the angle between the pelvis and trunk by backward trunk rotation in order to gain more "coiling" for potential energy. This is defined as "delayed pelvic rotation with trunk turning-back". Types 2, 3, and 4 are all regarded as "improper mechanics". As is shown in Table 5, Type1 (proper mechanics) features significantly higher ball speed than the other three types, whereas Types 2-4 (improper mechanics) exhibit no significant difference in ball speed between one another.
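A minimal sketch of the resulting decision rule is given below. The cut-off values (−70° for "PAoFC" and −25° for "TPSoFC") are the ROC-derived values reported above; the function name and the illustrative inputs are our own and are not taken from the study.

def classify_mechanics(pao_fc_deg, tps_o_fc_deg, pao_cut=-70.0, tps_cut=-25.0):
    # pao_fc_deg: pelvic rotation angle at foot contact (0 deg = facing home plate,
    #             -90 deg = facing third base); values above the cut-off are "leading".
    # tps_o_fc_deg: trunk minus pelvis angle at foot contact; values at or below the
    #             cut-off indicate enough hip-to-shoulder separation.
    leading_pelvis = pao_fc_deg > pao_cut
    enough_separation = tps_o_fc_deg <= tps_cut
    if leading_pelvis and enough_separation:
        return "Type1: proper mechanics"
    if leading_pelvis:
        return "Type2: early trunk rotation"
    if not enough_separation:
        return "Type3: delayed pelvic rotation"
    return "Type4: delayed pelvic rotation with trunk turning-back"

print(classify_mechanics(-60.0, -30.0))  # Type1
print(classify_mechanics(-80.0, -10.0))  # Type3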
Figure 5. Four types of pelvic and trunk rotational pitching mechanics (viewed in the transverse plane). A "PAoFC" (pelvic angle at the moment of foot contact) of −70° and a "TPSoFC" (trunk-pelvis separation at the moment of foot contact) of −25° were used for classification based on the cut-off values from the ROC (receiver operating characteristic) analysis. Type1 (proper mechanics) represents a leading "PAoFC" and is followed by the trunk with enough separation between the pelvis and trunk. Type2 (early trunk rotation) represents a leading "PAoFC" and is followed by the trunk with insufficient separation between the pelvis and trunk. Type3 (delayed pelvic rotation) represents a fallen-behind "PAoFC" and is followed by the trunk with insufficient separation between the pelvis and trunk. Type4 (delayed pelvic rotation with trunk turning-back) represents a fallen-behind "PAoFC" and is followed by the trunk with enough separation between the pelvis and trunk. Typically, the adequate separation of Type4 results from a turning-back of trunk rotation prior to foot contact. Types 2-4 are regarded as being characterized by "improper mechanics". For the "expert decision", as is shown in Table 5, 53 of 58 pitches of Type1 (proper mechanics) belong to Pattern1. This means that the "expert decision" has a sensitivity of 91.4% in detecting proper mechanics. However, Types 2-4 (improper mechanics) account for 37 pitches in Pattern1 and 55 in Pattern2, with a specificity of 59.8%. Although the "expert decision" is good for identifying proper mechanics, it is not good enough to consistently identify improper ones. The human eye has inherent limitations and sometimes makes mistakes. For instance, trunk rotation is easier to detect with the human eye than pelvic rotation. Thus, Type3 (delayed pelvic rotation) with "on-time" trunk rotation looks similar to Type1 (proper mechanics). In addition, variation between experts is a drawback. Therefore, the new method is more precise in distinguishing between different mechanics with less variation. Strategies for correcting improper mechanics can be determined clearly.
For Type2, it is important to improve trunk flexibility. For Type3 and Type4, the strength and power of the pelvis/hips and core must be enhanced. Moreover, it is necessary to train for proper placement and angle of the leading foot and knee. Additionally, for Type4, turning-back of the trunk just before FC should be avoided. Limitations This study carries some limitations. We focused on the rotational kinematics of the pelvis and trunk. However, linear kinematics, which might also affect ball speed, were neglected. Confounders in other rotational axes of the pelvis and trunk or in other parts of the body, such as the lower and upper extremities, which might relate to pelvis and trunk rotation, could also affect ball speed and were not studied here. Small sample size is also a weakness. Further studies to improve these limitations will be conducted in the future. In addition to ball speed, the studies of kinetic effects and special programs for practical applications will also be carried out at a later point in time. Conclusions In accordance with the results and characteristics identified in this study, a new quantitative method with the use of "PAoFC" and "TPSoFC", instead of "expert decision", was developed. Pelvic and trunk rotational pitching mechanics can be classified into four types. Type1 (proper mechanics) yields significantly higher ball speeds than the other three types and is thought to feature adequate energy transfer, from the stride foot to the throwing upper limb. Types2-4 are regarded as "improper mechanics" that result in slower ball speeds and less efficient energy transfer. The qualitative approach based on "expert decision" can identify proper pelvis and trunk rotational mechanics. However, quantitative analysis is more precise in identifying three improper types of pelvis and trunk rotational mechanics. Furthermore, special programs, such as core strengthening and flexibility training, can be adapted to address different improper types in order to improve pitching mechanics. We hope that this new method of evaluation can help coaches, pitchers and sports scientists recognize and improve on pitching mechanics and performance.
7,445.2
2021-01-21T00:00:00.000
[ "Engineering" ]
Learning to Follow Object-Centric Image Editing Instructions Faithfully Natural language instructions are a powerful interface for editing the outputs of text-to-image diffusion models. However, several challenges need to be addressed: 1) underspecification (the need to model the implicit meaning of instructions), 2) grounding (the need to localize where the edit has to be performed), 3) faithfulness (the need to preserve the elements of the image not affected by the edit instruction). Current approaches focusing on image editing with natural language instructions rely on automatically generated paired data, which, as shown in our investigation, is noisy and sometimes nonsensical, exacerbating the above issues. Building on recent advances in segmentation, Chain-of-Thought prompting, and visual question answering, we significantly improve the quality of the paired data. In addition, we enhance the supervision signal by highlighting parts of the image that need to be changed by the instruction. The model fine-tuned on the improved data is capable of performing fine-grained object-centric edits better than state-of-the-art baselines, mitigating the problems outlined above, as shown by automatic and human evaluations. Moreover, our model is capable of generalizing to domains unseen during training, such as visual metaphors. Introduction Frameworks for large-scale text-conditional image synthesis, which rely on diffusion processes (Saharia et al., 2022; Rombach et al., 2022a; Ramesh et al., 2021; Nichol et al., 2021), have shown impressive generative capabilities and practical uses. Notably, image editing guided by text has garnered considerable attention due to its ease of use and seemingly high-quality results (Avrahami et al., 2022b,a; Hertz et al., 2022a; Kawar et al., 2023). These advances are now utilized in leading industry tools such as Adobe Photoshop, bridging the gap between technology and content creators. While natural language instructions act as a powerful interface for editing images, following them remains a challenging task for several reasons. First, these instructions are often underspecified, requiring models to uncover their implicit meaning. For example, in Figure 1 for the input image (a), the user provides a prompt Add a lighthouse. The model needs to understand how a lighthouse looks, that only one lighthouse needs to be added, and that it needs to be placed on land and not in the water. Second, models must be able to localize where the "background" is in the image so that the lighthouse can be added appropriately (grounding). Finally, models must follow instructions faithfully, i.e. preserve the elements of the image not affected by the edit instruction (e.g., both houses in Figure 1 (a)). One of the key challenges for making progress on the task of image editing via natural language instructions is the lack of high-quality annotated or naturally occurring data. Recently, Brooks et al.
(2023) proposed a way to handle this by automatically creating a paired dataset utilizing large language models and text-to-image models. They further train a conditional diffusion model on this paired data of synthetic examples and show that their model is capable of editing images conditioned on natural language instructions at run-time. However, closer inspection reveals that the data is noisy and sometimes nonsensical. As can be seen in Figure 1 (b), the model adds three lighthouses instead of one, likely due to the underspecification of the instruction and the lack of grounding. Furthermore, it compromises faithfulness by removing both houses from the input image. To tackle these challenges we first create a high-quality corpus starting from the parallel data released by Brooks et al. (2023). For example, given the caption of the input image in Figure 1, Ocean Cottage by Todd Baxter, and an edit instruction Add a lighthouse, we use recent advances in Chain-of-Thought prompting (Wei et al., 2022) to identify whether the transformation can be performed in the context of the input image and what entity should be transformed. The entity and the input image are fed to an object detection model, GroundingDINO (Liu et al., 2023). As can be seen in Figure 2 (b), DINO draws a bounding box to denote "background". However, grounding in the form of bounding boxes cannot entirely disentangle the entity of interest. Thus, we use recent advances in image segmentation (Kirillov et al., 2023) to identify the exact segment containing the entity that needs to be masked (see Figure 2 (c)) and use Stable Diffusion+Inpainting (Rombach et al., 2022a) with the masked image (c) to obtain the edited output (Figure 2 (d)). To further account for faithfulness in our paired training data we leverage techniques from VQA (Antol et al., 2015) to ensure that the edited images remain faithful to the entities in the original image. For this, we generate relevant questions with 'Yes/No' answers regarding the unmodified elements in the image, such as "Does this image contain a cottage", or questions regarding objects in the instruction, such as "Is there a lighthouse in the image", using the Vicuna-13B model (Chiang et al., 2023). Given an image-question pair, we then use the BLIP-2 VQA model (Li et al., 2023) to collect responses, re-rank the images generated by Stable Diffusion+Inpainting by faithfulness scores in terms of correct answers, and select the best one for higher quality. We then fine-tune the model on this newly created parallel data. In addition, both during fine-tuning and inference, we enhance the supervision signal by denoising parts of the image that need to be changed by the instruction, as shown in Figure 5. To summarize, our contributions are: • Improving the quality of existing paired datasets used for image editing via natural language instructions with the help of recent advances in segmentation, Chain-of-Thought prompting, and visual question answering. • Curating a test set of diverse non-noisy instructions, consisting of both in-domain and out-of-domain examples, and conducting a thorough evaluation across SOTA baselines and our model ablations.
• Demonstrating that fine-tuning a diffusion model on our parallel data enhanced with a supervision signal leads to a significant improvement over several compelling baselines in terms of faithfulness using TIFA scores (Hu et al., 2023) as well as instruction satisfiability. Our human evaluation corroborates these findings. We release our code, data and pretrained models. 2 Data Problems with Existing Datasets The dataset introduced by Brooks et al. (2023) marked a significant step towards enabling diffusion models to comprehend instructions for image editing. However, since GPT-3 was used to generate captions and edit instructions (Figure 9 in Appendix A), the constructed image-edit pairs suffered from various limitations that make the editing process less precise and efficient. One frequent concern is that many of the edit instructions are vague or incomprehensible, making a successful edit unlikely. Furthermore, many of the edit instructions suffer from language model hallucinations. For instance, in Figure 3 (b), the instruction "Make the rocky mountains look like a chessboard" is not sensible in the context of the input image. In Figure 3 (c), the gold image does not actually represent the instruction, which creates problems for instruction following in the fine-tuned model. Furthermore, object-centric instructions are inherently harder for diffusion models. Models that cannot localize the corresponding entity or region in the image end up performing incorrect or excessive modifications, resulting in images that are not faithful to the instruction or input image, as can be seen in Figures 3 (c) and (d). This undermines the quality of the training set and leads to non-faithful edits by fine-tuned diffusion models. High-Quality Training Dataset Curation We describe our approach for curating the dataset with a focus on addressing the above-mentioned challenges, including underspecification, grounding, and faithfulness. Filtering noise and handling under-specification To improve the noisy synthetically generated edit instructions from Brooks et al. (2023), we leverage the reasoning capabilities of large language models through Chain-of-Thought prompting (Wei et al., 2022) to jointly predict whether the instruction is appropriate w.r.t. the context present in the original caption and to generate the entity/region that needs to be grounded and changed for performing the edit. Table 3 in Appendix A shows how we jointly elicit the edit entity as well as a verdict on whether the transformation is possible. Large language models can uncover implicit entities in image editing tasks based on their own commonsense knowledge, even without explicit mentions in the input caption or edit instruction. For instance, in Figure 4, given the original image with the caption Buttermere Lake District and the edit instruction Add an aurora borealis, the model outputs that the entity to which the edit is applied is sky, using the implicit commonsense knowledge that an aurora borealis has to appear in the sky. We retain object-centric image-instruction pairs that align with the original caption, discarding examples lacking a specific segment for editing. This ensures high-quality, precise, and contextually appropriate entity-centric image editing.
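A minimal sketch of this filtering step is given below; the prompt wording, helper names, and the use of the OpenAI Python client are illustrative assumptions rather than the paper's exact setup.

```python
# Hypothetical sketch of the Chain-of-Thought filtering step: ask gpt-3.5-turbo
# whether an edit instruction makes sense for a caption and, if so, which entity
# it targets. Prompt wording and function names are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM = (
    "You will be given an image caption and an edit instruction. Reason about "
    "how the edited image would look, then return JSON with a 'verdict' "
    "('true' if the edit is sensible) and the 'entity' that must be changed."
)

def verdict_and_entity(caption: str, instruction: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": f"Caption: {caption}\nInstruction: {instruction}"},
        ],
        temperature=0,
    )
    text = response.choices[0].message.content
    # keep only the JSON part of the chain-of-thought answer
    json_part = text[text.find("{"): text.rfind("}") + 1]
    return json.loads(json_part)

result = verdict_and_entity("Buttermere Lake District", "Add an aurora borealis")
# expected shape: {"verdict": "true", "entity": "sky"}
```

Pairs whose verdict is false are discarded, as described above.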
Incorporating Grounding We utilize object detection and segmentation to provide additional supervision signals for grounded image editing. The edit entity generated by ChatGPT (gpt-3.5-turbo) in the previous step is given as an input to the open-set object detection model GroundingDINO (Liu et al., 2023), which generates a rectangular box around the region/entity that needs to be altered. After getting the box coordinates from GroundingDINO, we further perform image segmentation using the SOTA model SAM (Segment Anything Model) (Kirillov et al., 2023), which takes point inputs for generating a segmentation mask over the entity. These steps can be seen in Figure 4: ChatGPT outputs the entity to which the edit has to be applied given the caption and the instruction (sky), GroundingDINO locates the entity (red rectangular box), and the SAM model further disentangles the sky from the mountains. Generating Images using Stable Diffusion Inpainting Stable Diffusion Inpainting is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input, with the extra capability of inpainting a specified area of an image given a mask. However, this model does not understand implicit instructions and requires captions of the original image, so we use the "edited caption" (see Figure 9 in Appendix A) present in the existing dataset to perform the edit operation on the image. We feed the binary image-segmentation mask generated by the SAM model, as seen in Figure 4, along with the input image and the edited caption to perform the required entity-centric edit on the image. Images generated by Stable Diffusion Inpainting might not always be faithful, so we generate three images per input and re-rank them based on faithfulness, as described below. Ensuring Faithfulness To further account for faithfulness, we leverage techniques from Visual Question Answering (Antol et al., 2015) to ensure that the edited images are faithful to the instruction as well as to the entities in the original image. We formulate relevant questions regarding the unmodified elements in the image, such as the presence of specific objects or contextual information. We generate these questions using the Vicuna-13B model (Chiang et al., 2023), a LLaMA model (Touvron et al., 2023) fine-tuned for instruction following. We use the edited caption available in the dataset to extract noun phrases and drop entities which are either locations or names of individuals. For example, in Figure 4, given the edited caption "Buttermere Lake District with Aurora Borealis" and the extracted entities Buttermere, Lake District and Aurora Borealis, we drop Buttermere and pass the other two entities to the Vicuna model to generate question-answer pairs. Figure 4 shows the generated questions corresponding to the individual entities. For Remove or Delete instructions, we need to ensure that the edit entity is not present in the resulting image. Along with the edited caption and the extracted entities, we also pass the edit instruction to generate a question ensuring a successful edit. For instance, suppose the original caption is Buttermere Lake District with Aurora Borealis and the edit instruction is Remove Aurora Borealis. We input the edited caption Buttermere Lake District, the entity lake district, and the edit instruction to generate the following questions: Is there a lake district in the picture? and Does the picture contain Aurora Borealis?, along with the corresponding correct answers Yes and No.
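The grounding, segmentation, and inpainting stages described above can be sketched roughly as follows; the GroundingDINO call is replaced by a hypothetical helper, and the segment-anything and diffusers calls follow their public APIs to the best of our knowledge. The VQA-based re-ranking of the three candidates is described next.

```python
# Rough sketch of the grounding -> segmentation -> inpainting stages.
# detect_entity_box() stands in for a GroundingDINO call (hypothetical helper);
# the SAM and diffusers calls follow their public APIs as we understand them.
import numpy as np
import torch
from PIL import Image
from segment_anything import SamPredictor, sam_model_registry
from diffusers import StableDiffusionInpaintPipeline

def detect_entity_box(image: np.ndarray, entity: str) -> np.ndarray:
    """Placeholder for GroundingDINO: return [x0, y0, x1, y1] for `entity`."""
    raise NotImplementedError

input_image = Image.open("input.png").convert("RGB")
image = np.array(input_image)
box = detect_entity_box(image, "sky")  # entity predicted by the LLM step

# SAM turns the box into a pixel-accurate mask of the entity
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)
masks, _, _ = predictor.predict(box=box, multimask_output=False)
mask = Image.fromarray((masks[0] * 255).astype(np.uint8))

# Stable Diffusion Inpainting regenerates only the masked region,
# conditioned on the edited caption rather than the raw instruction
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
candidates = [
    pipe(prompt="Buttermere Lake District with Aurora Borealis",
         image=input_image.resize((512, 512)),
         mask_image=mask.resize((512, 512))).images[0]
    for _ in range(3)  # three candidates, re-ranked by VQA afterwards
]
```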
The generated questions and each image produced by Stable Diffusion Inpainting are fed into the SOTA VQA model BLIP-2 (Li et al., 2023) to extract whether the entities are present in the edited image. We count the number of correct answers (No for remove instructions, Yes for other instructions) for each of the three images generated using Stable Diffusion Inpainting and select the one with the largest number of correct answers. Test Data Creation Our test set of 465 <image, instruction> pairs is curated carefully across both in-domain and out-of-domain images to account for model robustness and generalization capabilities. To create an in-domain test set, we deployed a filtering strategy where we only retained those examples having a CLIP-similarity score (Radford et al., 2021) of less than 0.7 with every other training image. We further focus on multiple carefully chosen verbs such as Replace, Swap, Add, Turn, and Change, generating 20 images per verb and leading to 200 diverse <image, instruction> pairs. For creating an out-of-domain test set, we use two sources of examples. First, we consider the recently released MagicBrush dataset (Zhang et al., 2023). This is a manually annotated dataset for instruction-guided image editing that covers multiple scenarios including single-turn and multi-turn edits. They sample real-world images from the MS-COCO (Lin et al., 2015) dataset, ask annotators to write instructions, and use the DALLE-2 (Ramesh et al., 2022) image editing platform to interactively synthesize target images. For our experiments, we only consider single-turn edits. We discard the Change Action instructions, such as Make the person jump or Make the dog look away, as they were not present in the original InstructPix2Pix data. We finally select 200 random images from the MagicBrush test set after the above considerations. Second, we use a dataset of 100 DALLE-generated imperfect visual metaphor images paired with expert-written natural language instructions to improve them (Chakrabarty et al., 2023). We select 65 single-turn images with verbs that are already present in our in-domain test set. Fine-tuning with Additional Supervision Our training set curation pipeline led to a set of 52,208 high-quality instruction-image pairs. We split this data 80:10:10 into training, validation, and test sets, respectively. We follow the same protocol as InstructPix2Pix training and use their codebase. For an image x, the diffusion process adds noise to the encoded latent z = E(x), producing a noisy latent z_t whose noise level increases over timesteps t ∈ T. We learn a network θ that predicts the noise added to the noisy latent z_t given image conditioning c_I and text instruction conditioning c_T. We minimize the latent diffusion objective and initialize the weights of our model with the InstructPix2Pix checkpoint. To support image conditioning, we add input channels to the first convolutional layer, concatenating z_t and E(c_I). All available weights of the diffusion model are initialized from the pre-trained checkpoints, and weights that operate on the newly added input channels are initialized to zero. We reuse the same text conditioning mechanism that was originally intended for captions to instead take as input the text edit instruction c_T. We experiment with two different training strategies with additional supervision on top of the vanilla InstructPix2Pix. We initialized our model weights from the InstructPix2Pix model and fine-tuned for an additional 8k steps on NVIDIA A100 GPUs. We adopt the rest of the hyperparameters from the public InstructPix2Pix repository. Further details on hyperparameters during inference are mentioned in Appendix A.
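The channel-concatenation step described above can be illustrated with a short sketch that widens the first convolution of a diffusers UNet and zero-initializes the new weights; this mirrors the InstructPix2Pix-style initialization as we understand it and is not the authors' exact training code.

```python
# Sketch: widen the first convolution of a diffusers UNet so the noisy latent
# z_t and the encoded input image E(c_I) can be concatenated channel-wise.
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

old_conv = unet.conv_in  # originally 4 latent channels
new_conv = torch.nn.Conv2d(
    in_channels=8,  # 4 channels for z_t + 4 channels for E(c_I)
    out_channels=old_conv.out_channels,
    kernel_size=old_conv.kernel_size,
    padding=old_conv.padding,
)
with torch.no_grad():
    new_conv.weight.zero_()                   # new channels start at zero...
    new_conv.weight[:, :4] = old_conv.weight  # ...pretrained weights are kept
    new_conv.bias.copy_(old_conv.bias)
unet.conv_in = new_conv
unet.register_to_config(in_channels=8)
```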
Models Below we describe how we use some of the state-of-the-art baselines as well as ablations of our best model to generate outputs based on the test set instruction-image pairs. Instruct-X-Decoder Zou et al. (2023) released the X-Decoder model, which provides a unified way to support all types of image segmentation and a variety of vision-language (VL) tasks. Given the high-quality referring segmentation results with X-Decoder, they further combine it with an off-the-shelf Stable Diffusion image inpainting model and perform zero-shot referring image editing. Grounded Inpainting The Grounded-Inpainting module proposed by Liu et al. (2023) is deployed with the gpt-3.5-turbo language model to identify the region/object of interest from the corresponding edit instruction. The pipeline then includes identifying the bounding box for the region of interest and generating a binary segmentation mask before passing it to the Stable Diffusion Inpainting model (Rombach et al., 2022b). We only provide the instruction to the inpainting module, as opposed to the complete caption used during training dataset curation, for fairness to the other baselines. InstructPix2Pix+BoundingBox We use the model fine-tuned with a bounding box supervision signal described in Section 3. During inference, the bounding box is constructed with the GroundingDINO model (Liu et al., 2023) using the entity extracted from the edit instruction by gpt-3.5-turbo. InstructPix2Pix+EntityMask We use the model fine-tuned with a segmentation mask supervision signal described in Section 3. During inference, the segmentation mask is constructed with the Segment Anything Model (SAM) (Kirillov et al., 2023) using the entity extracted from the edit instruction by gpt-3.5-turbo. Evaluation Automatic Evaluation using TIFA-Score We use a recent metric proposed by Hu et al. (2023), TIFA (Text-to-image Faithfulness evaluation with question-answering), for evaluating the faithfulness of the generated images to their text inputs. TIFA evaluates a generated image using a two-stage pipeline: it first generates a list of question-answer pairs that cover various aspects of the contextual information provided in the given caption, and then uses SOTA VQA models such as BLIP-2 to answer these questions and match the correct answers for faithfulness. This framework was proposed for benchmarking diffusion models. It generates 7-10 question-answer pairs per image caption (the modified caption, which covers the instructional edit aspects too). We average the score for individual images over all these question pairs. The final score is reported as the average of the TIFA score over all the images in the test set.
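A rough sketch of this QA-based scoring loop is given below; question generation is omitted, the answer matching is a simplified string comparison, and the BLIP-2 calls assume the Hugging Face transformers API.

```python
# Rough sketch of QA-based faithfulness scoring in the spirit of TIFA: answer
# pre-generated questions about the edited image with BLIP-2 and report the
# fraction answered correctly. Matching and question generation are simplified.
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-flan-t5-xl", torch_dtype=torch.float16
).to("cuda")

def faithfulness_score(image: Image.Image, qa_pairs: list[tuple[str, str]]) -> float:
    correct = 0
    for question, expected in qa_pairs:
        inputs = processor(images=image, text=f"Question: {question} Answer:",
                           return_tensors="pt").to("cuda", torch.float16)
        out = model.generate(**inputs, max_new_tokens=5)
        answer = processor.decode(out[0], skip_special_tokens=True).strip().lower()
        correct += int(expected.lower() in answer)  # naive string match
    return correct / len(qa_pairs)

score = faithfulness_score(
    Image.open("edited.png").convert("RGB"),
    [("Is there a red skirt?", "yes"), ("What color is the skirt?", "red")],
)
```

The per-image scores are then averaged over the test set to obtain the reported TIFA score.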
Human evaluation We randomly sample 100 examples from our test set, with 50 in-domain images and 50 out-of-domain images. We recruit 31 annotators on Amazon Mechanical Turk through a rigorous qualification test. We require three distinct workers to do one HIT. Given an input image and an edit instruction, they are asked to judge if the output images from five systems satisfy the edit by choosing between Yes, Partially Yes, and No. Following prior work (Kayser et al., 2021; Majumder et al., 2021; Chakrabarty et al., 2022), we map these to 1, 1/2, and 0, respectively, and report the average as the H score. Additionally, we ask the annotators to consider how faithful the output image is to the input: if the output image changes several elements that were beyond the scope of the edit instruction, they are asked to respond No. We require them to provide a justification for their choice (see Figure 6) to prevent random guessing. Table 2 shows the performance of our model and the baselines on the in-domain and out-of-domain parts of the test set in terms of both automatic and human evaluation. InstructPix2Pix+EntityMask appears to be the winning system, with InstructPix2Pix+BoundingBox coming second. These results indicate that our improved data combined with the additional supervision signal leads to better edits. TIFA acts well as a reference-free automatic evaluation metric and is useful in real-world settings where gold images are not necessarily available. For instance, given the edited caption Red skirt ballerina for the edit Make it a red skirt (see top row in Figure 7), TIFA generates several <question, choices, answer> tuples such as <Is there a red skirt?, [Yes, No], Yes>, <Who is wearing the red skirt?, [Ballerina, Singer, Actress, Model], Ballerina>, <What color is the skirt?, [Red, Blue, Green, Yellow], Red>, which are then scored by a VQA model for correctness. For human evaluation, as mentioned earlier, three MTurkers were recruited for each instance. The inter-annotator agreement (IAA) using Krippendorff's α (Krippendorff, 2011) is 0.58, indicating moderate agreement. Human judges consistently preferred our InstructPix2Pix+EntityMask model, and the gap between our best model and the other baselines is substantial. All judges unanimously agreed that the original InstructPix2Pix changes the images following the instructions; however, the resulting images exhibit excessive modification and lack photorealism. For instance, for image (iv) in the top row in Figure 7, human judges rate the edit as bad and give the following justification: The entire dress has been changed to red and the background is a different color. These explanations shed light on user preferences, corroborating the fact that humans like the edits to be precise and faithful. Qualitative examples are shown in Figure 8. Prior work (2019) argues towards an approach to edit images via language instructions without explicitly mentioning the contextual information. Recent works from Liu et al. (2023) explore a strong object-detection model coupled with a strong segmentation model (Kirillov et al., 2023) to edit via the Stable Diffusion model (Rombach et al., 2022a). Further work (Saharia et al., 2022; Richardson et al., 2021; Fu et al., 2020) explores stylistic image editing, including via generative models. Unlike several existing prior works, we focus on the faithfulness and specificity of object-centric edits. Like Zhang et al. (2023), we argue that high-quality training data and incorporating grounding are the key to achieving high-quality edits.
Recently there has been a growing interest in using text-to-image diffusion models for creative tasks such as creating illustrations or abstract art. Akula et al. (2023) release MetaCLUE, consisting of four interesting tasks (Classification, Understanding, Localization, and Generation) related to metaphorical interpretation and generation of images. Chakrabarty et al. (2023) release a dataset of visual metaphors created through collaboration between large language models and text-to-image models. These model-generated outputs, while impressive, are often imperfect and require further edits. Our results on editing imperfect visual metaphors open up opportunities for content creators, who can simply use natural language instructions to steer AI-generated images to their liking. Conclusion We address challenges in natural language-based image editing tasks and provide a novel approach to enhance the quality of the training data. We improve the supervision signal and tackle the issues of underspecification, grounding, and faithfulness by leveraging advancements in segmentation, Chain-of-Thought prompting, and VQA. Our models fine-tuned on the improved dataset with an enhanced supervision signal outperform the existing baselines on object-centric image editing, both in terms of automatic and human evaluation. Moreover, our models showcase the capability to edit faithfully on out-of-domain datasets. Overall, our findings highlight the significance of high-quality annotation and grounded supervision signals for precise and faithful image editing. Limitations While our best-performing model supports various edit types on real images, we do not benchmark global editing (e.g., style transfer). Our method can greatly enhance text-guided image editing, making it accessible to more users without professional knowledge and boosting their efficiency. However, the risk of misuse for creating fake or harmful content is a concern. Therefore, implementing robust safeguards and responsible AI protocols is critical. Finally, while our data creation uses a pipeline of best-performing state-of-the-art models, there is still potential for error in our training data. Additionally, while the TIFA score acts as a good reference-free metric for automatic evaluation, it relies on the VQA model's answers, which may be incorrect. Thus, we corroborate our results with human evaluation. Our study focuses on single-turn atomic instructions and does not show results on multi-turn instructional edits. Additionally, our method only works for natural language instructions in English and does not handle instructions in other languages. Ethics Statement The use of text-to-image generation models is subject to concerns about the intellectual property and copyright of the generated images, since the models are trained on web-crawled images. We use the LAION-Aesthetics dataset, which primarily consists of images from a variety of mediums (photographs, paintings, digital artwork). To ensure the collection of high-quality human annotations and fair treatment of our crowd workers, we implemented a meticulous payment plan for the AMT task. We conducted a pilot study to estimate the average time required to complete a session. We pay our workers $18/hr, which is above minimum wage. All data collected from human respondents were anonymized and only pertained to the data they were being shown. We do not report demographic or geographic information so as to maintain full anonymity. Workers were paid their wages in full immediately upon the completion of their work.
A Prompts, hyperparameters, annotation A.1 Few-shot prompts for filtering noise and handling under-specification We describe the few-shot prompts given to the ChatGPT model (gpt-3.5-turbo) for our dataset-generation pipeline. You will be given an input caption of an image and an instruction to transform it by an image editor. Sometimes, however, the instruction does not make sense, as the resulting transformation would result in a nonsensical image. { "verdict": "true", "entity": "barn" } Caption: Sligachan Bridge by English Landscape Prints, Instruction: change the bridge to a wooden ship. The resulting image would show a ship up in the air, which does not make logical sense. { "verdict": "false", "entity": "none" } Caption: ... Caption: ... Caption: ... For question-answer pair generation to evaluate faithfulness during the dataset curation, we provide a three-shot prompt to the Vicuna model, as shown in Table 4. A.3 Hyperparameters for training and inference We fine-tuned the InstructPix2Pix checkpoint from their official repository for 8k steps on NVIDIA A100 GPUs with a learning rate of 1e-4. We take the rest of the hyperparameters from the official InstructPix2Pix implementation. For generation during inference, we set cfg-text=7.5 and cfg-image=1.5 for the InstructPix2Pix baselines and their ablations. For evaluation using TIFA, we use GPT-4 for generating questions and the flan-t5-xxl version of the BLIP-2 model for visual question answering. A.4 InstructPix2Pix training data creation Figure 9 shows the training data creation pipeline for InstructPix2Pix by Brooks et al. (2023). A.5 Human annotation and explanations Table 5 shows the rationales chosen by human judges for edits from different baselines. The image in row 1 (+ Entity Mask) is deemed perfect, as can be seen in the written explanation. The vanilla model edits the image far beyond the intended scope, as can be seen in the images in rows 3 and 4 of Table 5. Figure 1: (a) Input Image (b) Edited image from InstructPix2Pix (Brooks et al., 2023) with the instruction Add a lighthouse. Figure 2: Steps to create parallel data: (a) Input Image + Edit Instruction; (b) Image with grounding in the form of a bounding box for the entity where the transformation has to be made; (c) Masked localized segment of the grounded image where the transformation has to be made; (d) Final output. Figure 3: Examples of noisy edit instructions and image pairs: (a) Make her look like an android; (b) Make the rocky mountains look like a chessboard; (c) Replace her with a bird; (d) Have it be a lighthouse. ["Is there a lake district in the painting?", "Does the image contain an aurora borealis?"] Figure 4: Steps to create high-quality training parallel data: Given an input image, caption, and edit instruction, we first use Chain-of-Thought (CoT) prompting with ChatGPT to identify whether the edit instruction is sensible and, if it is, what is the entity that needs to be transformed. Using the LLM-generated edit entity, we use GroundingDINO to localize it and SAM (Segment Anything Model) to segment it. We then use Stable Diffusion Inpainting to generate 3 images and filter out the best image with the help of VQA.
Figure 6: Human evaluation framework (a) Input Image with Edit Instruction (b) Example of a bad edit. Table 1: Test split and number of image-caption pairs. Table 2: Automatic evaluation using TIFA scores on the entire test set and human evaluation on the in-domain (Hscore-I) and out-of-domain (Hscore-O) parts of the test set. Table 3: Prompt used to jointly elicit the edit entity and a verdict on whether the transformation is possible: Based on the input caption and instruction, reason how the resulting image would look and whether the resulting image would be possible to imagine. Provide your verdict on whether the transformation is possible. If the verdict is true, also state the entity. First, provide your reasoning, starting with the words "The resulting image would show...". Then, return the verdict and the entity in JSON format. Table 4: Three-shot prompt given to Vicuna-13B to generate questions for ensuring faithfulness w.r.t. the original caption and instruction: You are given an image description and the corresponding entities present in the caption. Generate a question per entity to check whether the image aligns with the text.
6,393.8
2023-10-29T00:00:00.000
[ "Computer Science" ]
Corrosion Testing of a Heat Treated 316 L Functional Part Produced by Selective Laser Melting Selective Laser Melting (SLM) shows a big potential among metal additive manufacturing (AM) technologies. However, the large thermal gradients and the local melting and solidification processes of SLM result in the presence of a significant amount of residual stresses in the as-built parts. These internal stresses will not only affect mechanical properties, but also increase the risk of Stress Corrosion Cracking (SCC). A twister used in an air extraction pump of a condenser to create a swirl in the water was chosen as a candidate component to be produced by SLM in 316 L stainless steel. Since the main expected damage mechanism of this component in service is corrosion, corrosion tests were carried out on an as-built twister as well as on heat treated components. It was shown that a low temperature heat treatment at 450°C had only a limited effect on the residual stress reduction and concomitant corrosion properties, while the internal stresses were significantly reduced when a high temperature heat treatment at 950°C was applied. Furthermore, a specific stress corrosion sensitivity test proved to be a useful tool to evaluate the internal stress distribution in a specific component. Introduction Selective Laser Melting (SLM) is an Additive Manufacturing (AM) process which locally melts a metallic powder bed using a highly focussed laser beam. Complex functional metallic parts with competitive mechanical properties can be built in a layer-by-layer manner. The high flexibility in design, low material waste and fast production of near-net-shape parts are the main advantages compared to conventional processing routes. The SLM process has been widely used for the production of 316 L parts, and the optimization of its laser scan parameters has been widely reported [1] [2]. Regarding the microstructure after SLM, a cellular-dendritic microstructure is observed at the micrometer level and elongated grains across several layers at the macro level [3]. The mechanical properties of the SLM processed parts show an increased yield strength compared to wrought 316 L [4] [5]. The effect of heat treatments has been reported by Riemer et al. [6] and Montero Sistiaga et al. [7]. Both works show that stress relieving treatments result in no significant change in grain size and mechanical properties. Hot Isostatic Pressing (HIP) and annealing treatments show a decrease in yield strength while maintaining the ultimate tensile strength, due to complete dissolution of the cellular dendrites and a maintained grain size [3] [7]. 316 L stainless steel is characterized by a high corrosion resistance thanks to the combination of chromium, nickel and molybdenum [8]. However, in SLM components high thermal gradients are obtained due to local melting and a fast solidification process, which can result in residual stresses in the as-built condition [9] [10]. These internal stresses will not only affect mechanical properties, but also increase the risk of Stress Corrosion Cracking (SCC). Hence, heat treatments to relieve these stresses are normally applied after the SLM process.
A so-called twister component, used in an air extraction pump of a combined cycle gas turbine plant condenser to create a swirl in the water (Figure 1), proved to be a good candidate to be produced by SLM.Producing a spare twister by conventional casting would require making a casting die and would prove to be slower and more expensive than creating the part by AM.Also, since the original ex-service part shows a lot of erosion and impact damage on the surface, surface roughness is not an issue for this part.In addition, the consequences of a component failure are only minor, since a second extractor is present to take over. Since the twister is a static part and the extraction pump operates at room temperature, the main in-service damage mechanism for this component is corrosion and possibly SCC.Since the risk of SCC can be reduced by reducing the internal stresses, different heat treatments were carried out on the components manufactured by SLM.Subsequently, immersion corrosion tests and SCC tests were carried out on as-built and heat treated components.This paper describes the results of these tests. Experimental Set-Up The material used in this study was 316 L stainless steel provided by SLM Solutions Group AG with powder particle sizes ranging from 10 to 45 μm and spherical in shape.The 316 L powder composition is shown in Table 1 as defined in ASTM B243.A SLM 500 machine from SLM Solutions Group AG, Germany, was used to build 4 twisters and two cylinders with roughly the same dimensions as the axis of the twister (11.5 cm long with a diameter of 1.6 cm).The SLM500 machine provides a build envelope of 500 × 280 × 320 mm 3 and is equipped with four 400 W fibre lasers. Twister 1, cylinder 1 and twister 4 were kept in the as-built condition.Twister 2 and cylinder 2 were submitted to a high temperature heat treatment (HT-HT, 2 h at 950˚C, air cooling) and twister 3 to a low temperature heat treatment (LT-HT, 2 h at 450˚C, air cooling). The twisters were slightly sandblasted before the corrosion tests. For the immersion corrosion tests, samples were immersed for 33 days in demineralised water with 200 ppm chlorides and 450 ppm sulphates, which represents the most severe condition the actual component could experience in reality. To further evaluate the internal stresses in the different components ASTM G36 [1] SCC tests were carried out in boiling MgCl2 solution at 155˚C ± 1˚C.A first visual inspection was carried out after 3 h, a second one after 5.5 h, followed by daily visual inspections.The test was stopped when cracks were found or after a maximum of 77.5 h. Fluorescent penetrant testing was carried out to better identify the cracks. Both cylinders were cut in half in the longitudinal direction.One half was used to study the microstructure after HT-HT and compare it with the as-built microstructure.The microstructure and crack propagation of the twisters was also analysed.All the metallographic samples were polished and etched using oxalic acid and examined using an Axiocam Leica optical microscope.Micro Vickers hardness tests were performed using a Future Tech FV-700 hardness tester with an indentation load of 0.5 kg during 15 s. The geometry of the twisters has been captured before and after heat treatment using a HANDI SCAN 700 (with a resolution of 0.2 mm).The comparison of geometries did not reveal major deformations after heat treatment (Figure 2). 
Microstructural Investigation The microstructures of cross-sectioned samples are shown in Figure 3 for Twister 1, Twister 2 and Twister 3. For Twister 1 and Twister 3 no significant difference in the microstructure can be observed (Figure 3(a) and Figure 3(c)). Micro Vickers hardness measurements were used to evaluate the effect of the applied heat treatments on the hardness. The results are depicted in Table 2. For the as-built condition 240 ± 6 HV is obtained, which is comparable to the values seen in literature [3] [4] [7]. The LT-HT condition has the same hardness as the as-built condition, thanks to the similar microstructure. These values are in accordance with the 235 HV found in other works [3] [4]. On the other hand, for HT-HT a decrease in hardness is observed. This can be attributed to the dissolution of the cellular dendritic substructure and hence the loss of strengthening sites compared to the as-built and LT-HT conditions, as observed in Figure 3. Immersion Corrosion Tests and Field Test Twister 1 and one half of cylinder 1, both in the as-built condition, were subjected to the immersion test for 33 days. No cracks were observed on any of the components. Only a mild discoloration on the rougher edges of twister 1 (former location of the support structure) was observed (Figure 4). Since the as-built components, containing the largest amount of internal stresses, did not show any cracking in the representative immersion test, it was not necessary to subject the heat treated components to an immersion corrosion test. It was also decided that the as-built twister 4 could be used for an operational test. Twister 4 was put into service in October 2016 and removed after 2000 h in operation. No cracking was observed visually (Figure 5). This confirms the validity of the immersion corrosion test and also the effective performance of the SLM processed twister in service. SCC Tests Twister 1, half of cylinder 1, twister 2 and twister 3 were subjected to the ASTM G36 SCC test. The results are summarised in Table 3. During the first visual inspection after 3 h, only the as-built twister 1 and cylinder 1 showed multiple cracks. Cracks oriented both parallel and perpendicular to the build direction were observed on twister 1 (Figure 6(a)). The cracks parallel to the build direction were situated at the top and bottom edges of the blades, while the cracks perpendicular to the build direction, and hence parallel to the build layers, were not only present at the edges of the blades, but also on the blades' surfaces and on the tip of the twister. For cylinder 1, only cracks perpendicular to the build direction were observed. During the second visual inspection after 5.5 h, multiple cracks were observed on the LT-HT twister 3, although the total amount was still less than that observed on twister 1 after 3 h. The orientation of the cracks was again both parallel and perpendicular to the build direction, as described for twister 1 (Figure 6(b)). The HT-HT twister 2 was kept immersed in the ASTM G36 test for an additional 72 h (3 days), with daily visual inspections. It did not show any sign of cracking (Figure 6(c)). The results of these SCC tests allow a rough estimate of the magnitude of the internal stresses inside the components, based on the graph in the ASTM G36 standard (Figure 7). The internal stresses in the as-built condition are probably higher than 250 MPa. After the low temperature heat treatment they are slightly decreased and will be somewhere in between 150 and 350 MPa. After the high temperature heat treatment, the internal stresses have dropped significantly and will be below 110 MPa, significantly reducing the SCC risk.
These tests highlight the positive effect of heat treating SLM processed 316 L parts at a temperature of 950°C compared to the as-built and low temperature heat treatment conditions, as it results in a significant reduction of the internal stresses. These results confirm those obtained in a previous work [7], where heat treatment at 950°C resulted in high energy absorption values and tensile values above the minimum required by the EN 10216-5:2013 standard. On the other hand, in order to fully understand the effect of heat treatments on SCC performance, more heat treatment temperatures should be tested. Characterization of the Cracks Fluorescent penetrant testing was used to highlight the cracks on twister 1 and twister 3, and some typical cracks in both directions were selected for more detailed microscopic inspection (Figure 8). SCC is usually characterized by extensive branching and crack growth perpendicular to the stress direction. All observed cracks on the 3 samples had the typical branched aspect of stress corrosion cracks (Figure 9). Most of the cracks initiate at the edge of the blades, where the stresses are the highest. For the cracks at the edge of the blades, parallel to the build direction (samples 1-A and 3-A), apart from the cracks that were visible with the naked eye, some smaller cracks were present (Figure 9(a)). The crack lengths were measured for all sections mentioned in Figure 8 and are depicted in Table 4. The largest observed crack on sample 1-A was about 3.75 mm and on sample 3-A about 2.5 mm. For the cracks perpendicular to the build direction, several cracks starting from the blade surface, and not necessarily from the blade edge, could be observed. The largest one on sample 1-B was about 2.5 mm. The crack density found for all cases corresponds with the results obtained from the fluorescent penetrant inspection. The samples from Twister 1 have a higher crack density than the sample from Twister 3. In addition, it has to be kept in mind that Twister 3 was subjected to the SCC test for 5.5 h and Twister 1 only for 3 h. In Figure 9, the crack morphology of samples 1-A and 3-A can be observed. Figure 1. The original twister component which was selected for AM: (a) twister inside the air extraction pump; (b) ex-service twister; (c) schematic drawing of the twister component. Figure 2. Surface comparison of the 3D scan before and after heat treatment. Figure 3. Secondary electron micrographs parallel to the building direction of SLM produced 316 L: (a) Twister 1 in the as-built condition, (b) HT-HT twister 2 and (c) LT-HT twister 3. Table 2. Micro Vickers hardness (HV0.5) of SLM 316 L in the as-built condition and after low temperature (LT-HT) and high temperature (HT-HT) heat treatment. Figure 7. Effect of applied stress on the times to failure of various alloys tested in a magnesium chloride solution boiling at 154°C [12]. Figure 8. Fluorescent penetrant inspection on twister 1 and twister 3 to highlight the cracks. Red lines on the pictures indicate where the component was cut in order to examine some of the cracks in more detail. Figure 9. Light optical micrographs of the samples showing the typical SCC aspect of the observed cracks: (a) sample 3-A without etching; (b) sample 3-A etched and (c) zoomed area of the intergranular/transgranular crack; (d) sample 1-A etched, crack perpendicular to the building direction. Table 1. Chemical composition in weight % of the 316 L powder provided by SLM Solutions AG.
Table 3. Results of the ASTM G36 SCC tests on the twisters. Table 4. Crack lengths of the twisters after the SCC test, measured on the polished cross sections.
3,197.6
2017-03-08T00:00:00.000
[ "Materials Science" ]
Evaluating molecular representations in machine learning models for drug response prediction and interpretability Abstract Machine learning (ML) is increasingly being used to guide drug discovery processes. When applying ML approaches to chemical datasets, molecular descriptors and fingerprints are typically used to represent compounds as numerical vectors. However, in recent years, end-to-end deep learning (DL) methods that can learn feature representations directly from line notations or molecular graphs have been proposed as alternatives to using precomputed features. This study set out to investigate which compound representation methods are the most suitable for drug sensitivity prediction in cancer cell lines. Twelve different representations were benchmarked on 5 compound screening datasets, using DeepMol, a new chemoinformatics package developed by our research group, to perform these analyses. The results of this study show that the predictive performance of end-to-end DL models is comparable to, and at times surpasses, that of models trained on molecular fingerprints, even when less training data is available. This study also found that combining several compound representation methods into an ensemble can improve performance. Finally, we show that a post hoc feature attribution method can boost the explainability of the DL models. Introduction ML has been widely used in the pharmaceutical industry for rational drug discovery. Quantitative structureactivity relationship (QSAR) models, for example, typically use ML algorithms to learn the relationship between the structures or properties of compounds and their biological activity. ML can help guide the discovery process by identifying the most promising drug candidates before experimental work is carried out. The development of predictive models of drug response in cancer is a particularly important application of ML in this field [1,2]. The first step in an ML workflow for drug discovery is usually the calculation of molecular descriptors or fingerprints ( Figure 1). Molecular descriptors are the experimental or theoretical physicochemical properties of a compound. Molecular fingerprints encode molecules as bit or count vectors. The information that is encoded depends on the type of fingerprint. Substructure key-based fingerprints set the bits of the bit vector to one or zero depending on the presence or absence in the compound of certain substructures or features from a list of predefined structural keys [3]. Path-based fingerprints enumerate the different linear paths between atoms in a molecule to determine which types of fragments are present. Circular fingerprints describe the surrounding environment of each atom in the molecule up to a predefined radius [3]. The use of end-to-end DL approaches ( Figure 2) that can learn relevant features directly from raw input data may eliminate the need for precomputed descriptors and fingerprints. Graph neural networks (GNNs) can learn directly from molecular graphs [4,5]. Certain types of DL algorithms, such as recurrent neural networks (RNNs) and 1D convolutional neural networks (CNNs), are able to learn from line notations like Simplified Molecular-Input Line-Entry System (SMILES) strings, and DL-based natural language processing methods can be applied to line notations to create continuous embeddings of molecules [6]. Several recently published benchmarking studies have analyzed whether learned representations of compounds can outperform molecular descriptors and fingerprints. 
The authors of one such study compared different representation methods across several drug target prediction tasks and reached the conclusion that end-to-end DL methods tended to perform worse than DL models trained using precomputed chemical features [7]. Another study benchmarked a variety of molecular descriptor-based ML models and GNN models on several molecular property prediction datasets, and also concluded that descriptor-based models usually outperformed the GNN models [8]. A different research group found that GNNs performed better than fullyconnected neural networks (FCNNs) trained with molecular fingerprints on most of the benchmarking tasks considered in the study [9]. Another recent study evaluated the performance of several molecular fingerprints and DL-based representations on drug combination sensitivity and synergy prediction tasks using a large drug combination dataset. The authors found that several DL-based representations outperformed traditional fingerprints, but they also noted that the differences in performance between traditional and learned representations were small [10]. A large-scale benchmarking study using MoleculeNet datasets concluded that learned representations perform worse when training data is scarce or very imbalanced [11], and several other studies have also observed that traditional fingerprints tend to outperform learned representations in low data scenarios [12,13]. Therefore, the most suitable representation for a given prediction problem probably depends on the type of problem itself, as well as other factors such as dataset size, making it essential to evaluate this for each specific application. In this study, a variety of molecular representation methods and DL algorithms were benchmarked to determine which representations are the most suitable for predicting drug sensitivity in cancer cell lines. A new Python package, developed in-house, was used to perform this analysis. All of the datasets and scripts used in this study are available online at https://github.com/BioSystemsUM/DeepMol/tree/pacbb21/pacbb21_paper. Datasets A selection of DL models were benchmarked on several human cancer cell line drug screening datasets (Table 1). Single-cell line datasets were used so that it would be possible to study the effect of different compound representations without having to take cell line features into account. The NCI 1 and NCI 109 human tumor cell line growth inhibition datasets were used to develop binary classification models. These datasets were chosen because they have been widely used in the literature to validate graph classification algorithms. In this study, balanced versions of these datasets [14] were used (available from https://github.com/shiruipan/graph_datasets). In each dataset, the output variable indicates whether a given compound is active or inactive in a specific cell line. To develop and evaluate regression models, two single-cell line cytotoxicity datasets (PC-3 and CCRF-CEM) from a recent drug sensitivity prediction study [15] were used. The authors of this study obtained the original datasets from ChEMBL [16], performed some filtering and data cleaning steps, and transformed the original half maximal inhibitory concentration (IC 50 ) values into −log(IC 50 ) values (pIC 50 ). These pIC 50 values were used as the output variable for the regression models. Regression The previously mentioned datasets are all relatively small, each comprised of less than 5000 compounds. 
However, these smaller datasets were preferred because it was assumed that they would more closely reflect the behavior of the compound representation methods when used in drug sensitivity prediction models trained on publicly available anti-cancer screening datasets, which usually have data for even fewer compounds. Indeed, the original version of the popular Genomics of Drug Sensitivity in Cancer (GDSC) resource [17], for example, provides access to a dataset (GDSC1) containing screening data for only 367 compounds, while the Cancer Therapeutics Response Portal (CTRPv2)v2 [18] dataset has data for only 481 compounds. Nevertheless, a larger dataset derived from the National Cancer Institute 60 Human Cancer Cell Line Screen (NCI-60) dataset was also used for benchmarking. A single cell line, A549/ATCC, was selected from this dataset. After removing low quality experiments, compounds without sensitivity data, and compounds that could not be mapped to SMILES strings using the files provided by the Developmental Therapeutics Program (DTP), the final dataset contained 20,730 compounds. Sensitivity was measured as −log(GI 50 (half maximal growth inhibition concentration)) in this dataset. Prior to modeling, all SMILES strings were preprocessed using the ChEMBL Structure Pipeline [19]. Pre-computed features: Six different types of molecular fingerprints were evaluated in this work: extended connectivity fingerprint (ECFP) (ECFP4 and ECFP6), Molecular ACCess System (MACCS) keys, atom pair fingerprints (AtomPair), RDKit fingerprints (RDKitFP) and RDKit layered fingerprints (LayeredFP). ECFP fingerprints [20] are a popular circular fingerprint based on the Morgan algorithm [21]. ECFP4 fingerprints use a radius of 2 to define the circular neighborhood surrounding each atom, while ECFP6 fingerprints use a radius of 3. MACCS is a type of substructure key-based fingerprint which uses 166 predefined keys [22]. The AtomPair fingerprint is a topological fingerprint based on determining the shortest distance between all pairs of atoms within a molecule [23]. The RDKitFP is another topological fingerprint that was developed by the RDKit [24] project. The algorithm finds all subgraphs in a molecule containing a number of bonds within a predefined range, hashes the subgraphs, and then uses these hashes to generate a bit vector of fixed length. The LayeredFP [24] uses the same algorithm as the RDKitFP to identify subgraphs, but different bits are set in the final fingerprint based on different "layers" (different atom and bond type definitions). Compound structures were encoded as bit vectors using each fingerprinting algorithm and the resulting fingerprints were used as inputs to FCNNs. With the exception of MACCS fingerprints, which have a fixed length, the size of all fingerprints was limited to 1024 bits. Mol2vec embeddings: Mol2vec is an unsupervised method that generates continuous vectors representing molecules using the Word2vec word embedding algorithm [6]. Each molecule is considered a "sentence" and molecular substructures (calculated using the Morgan algorithm) are considered "words". In this work, a pre-trained Mol2vec model was used to generate 300-dimensional embeddings for the molecules in each dataset, which were fed into FCNNs. TextCNN: TextCNN is a 1D CNN that was originally developed for sentence classification [25]. A modified version of this algorithm (implemented in DeepChem [26]), which uses tokenized and one-hot encoded SMILES strings as inputs instead of words, was used in this study. 
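For reference, the precomputed fingerprint featurization described above can be sketched directly with RDKit; the bit sizes follow the 1024-bit setting mentioned above, MACCS keeps its fixed length, and the function name is illustrative rather than DeepMol's actual API.

```python
# Sketch of the precomputed fingerprint featurization used in this study,
# written directly against RDKit. Names are illustrative, not DeepMol's API.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem, MACCSkeys, rdMolDescriptors

def featurize(smiles: str) -> dict[str, np.ndarray]:
    mol = Chem.MolFromSmiles(smiles)
    return {
        "ECFP4": np.array(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)),
        "ECFP6": np.array(AllChem.GetMorganFingerprintAsBitVect(mol, 3, nBits=1024)),
        "MACCS": np.array(MACCSkeys.GenMACCSKeys(mol)),
        "AtomPair": np.array(rdMolDescriptors.GetHashedAtomPairFingerprintAsBitVect(mol, nBits=1024)),
        "RDKitFP": np.array(Chem.RDKFingerprint(mol, fpSize=1024)),
        "LayeredFP": np.array(Chem.LayeredFingerprint(mol, fpSize=1024)),
    }

features = featurize("CC(=O)Oc1ccccc1C(=O)O")  # aspirin as a toy example
```

The TextCNN model, in contrast, does not rely on such precomputed vectors and instead learns features from the one-hot encoded SMILES strings.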
It applies several 1D convolutional filters, followed by a max-over-time pooling operation, which summarizes each filter using its maximum value. These learned features are then fed into fully-connected layers to predict the output. Graph neural networks: The structure of a chemical compound can be represented as a molecular graph, where nodes are atoms and edges represent bonds. Conventional neural network architectures, such as FCNNs or CNNs, are unable to learn directly from this type of data. GNNs generalize deep neural networks to graph-structured data. Inputs to a GNN are usually node features (e.g. atom type) and adjacency matrices encoding the structure of the graph, and sometimes can also include edge features as well. The node and edge features are used to initialize the graph. In general, GNNs apply learnable functions to update the node-level representations, progressively incorporating information about the neighborhood of a node into its representation. After several rounds of updates, a pooling operation can be used to obtain a graph-level (molecular-level) representation. The graph-level representations can then be fed into fully-connected layers to predict a given output. Model training and evaluation The modeling workflow that was followed in this study is shown in Figure 3. Each dataset was split into a training set (70%) and a test set (30%). All models were trained and evaluated using the same splits for each dataset. All models were trained for 100 epochs with a batch size of 256 samples, and used the Adam [29] optimization algorithm. Cross-entropy loss was used as the loss function for all classification models, while the mean squared error was used for regression models. Other model-specific hyperparameters were tuned using a 5-fold cross-validated randomized search, in which 30 different hyperparameter combinations were tested. These included the number of hidden layers and hidden units and the use of regularization methods, such as L2 weight regularization and dropout [30], among others. The best model that was found for each algorithm was then refit on the entire training set and evaluated on the held-out test set. Additional details on the models (including the full search space and the best hyperparameters found for each type of model) are available online at https://github.com/BioSystemsUM/DeepMol/blob/pacbb21/pacbb21_paper/supplementary_material.pdf. Model ensembling Besides evaluating individual models, simple voting ensembles were also built to determine if combining predictions from multiple models could further improve performance. For classification tasks, majority voting ensembles were built, where the final prediction is the label predicted by the majority of the individual classifiers. For regression tasks, the ensembles simply averaged the individual predictions to obtain the final prediction. The hyperparameter values that were used for each individual model were the optimal values that had been previously tuned using randomized search. Feature importance Feature importance was determined using the Deep SHapley Additive exPlanations (SHAP) method implemented in the SHAP Python package (version 0.39.0) [31]. Deep SHAP approximates SHAP values for DL models by using a modified version of the DeepLIFT [32] algorithm. DeepMol All preprocessing, featurization and modeling steps were implemented using DeepMol, a newly developed chemoinformatics package from our host group. 
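As an illustration of the feature-attribution step just described, a minimal Deep SHAP sketch for a fingerprint-based Keras model might look as follows; the toy data and model are placeholders, not the study's actual objects.

```python
# Minimal Deep SHAP sketch for a fingerprint-based fully-connected network.
# The random bit matrix and tiny FCNN stand in for the study's real data/model.
import numpy as np
import shap
import tensorflow as tf

X_train = np.random.randint(0, 2, size=(500, 1024)).astype("float32")
y_train = np.random.randint(0, 2, size=(500, 1))
X_test = np.random.randint(0, 2, size=(50, 1024)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(1024,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_train, y_train, epochs=1, verbose=0)

background = X_train[np.random.choice(X_train.shape[0], 100, replace=False)]
explainer = shap.DeepExplainer(model, background)   # DeepLIFT-based approximation
shap_values = explainer.shap_values(X_test)         # one array per model output
shap.summary_plot(shap_values[0], X_test)           # global view of the top bits
```

In this study, such featurization, modeling and attribution steps were carried out through DeepMol.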
It is a python-based machine and deep learning framework for drug discovery, offering a variety of functionalities that enable a smoother approach to many drug discovery and chemoinformatics problems. This framework uses Tensorflow [33], Keras [34], Scikit-learn [35] and DeepChem [26] to either build custom ML and DL models or make use of pre-built models. It also uses the RDKit [24] framework to perform operations on molecular data. Regarding compound standardization, it allows users to use the ChEMBL Structure Pipeline [19] or apply custom standardization steps using RDKit [24] standardization methods. Some of these steps include standardization of some non-standard valence states, molecule sanitization, charge neutralization, stereochemistry removal, the removal of smaller fragments, kekulization, among others. The package also offers several featurization methods including molecular fingerprints, molecular embeddings and graphbased featurizers. In summary, the package offers a complete workflow to perform machine and deep learning tasks for molecules represented as SMILES strings. It has modules that perform standard tasks, such as loading and standardizing data, computing molecular features, performing feature selection and data splitting. It also provides methods to deal with unbalanced datasets and to do unsupervised exploration of the data. This way, DeepMol provides a common platform to treat the data and build, train, optimize and evaluate ML and DL models using different ML frameworks. Individual models In this section, we report and discuss the performance of 12 DL algorithms benchmarked on 5 drug response datasets of variable size and with different output variables. Figure 4 reports the results for classification tasks, with model performance quantified using the area under the receiver operating characteristic curve (ROC-AUC). For regression problems, model performance scores are reported in Figure 5, using the root mean squared error (RMSE) values. The full results tables and plots for additional scoring metrics are available from https://github.com/BioSystemsUM/DeepMol/tree/pacbb21/pacbb21_paper/results. ECFP4 fingerprints outperformed other fingerprints and end-to-end DL models on the NCI 1 classification task, having achieved a ROC-AUC score of 0.83. The LayeredFP model achieved a very similar ROC-AUC score (also 0.83, when rounded), while the best end-to-end DL model (TextCNN) reached a score of 0.81. On the NCI 109 dataset, the GCN algorithm ranked first in terms of performance, but other methods such as LayeredFP, TextCNN and GraphConv were not far behind (Figure 4). With the exception of the LayeredFP model, fingerprint-based methods generally performed worse than most of the end-to-end DL models on this dataset. On the PC-3 dataset, the TextCNN model achieved the lowest RMSE (0.61), followed by the LayeredFP model and the GraphConv model, both with an RMSE of 0.65 ( Figure 5). Other GNNs did not perform as well as GraphConv, having been surpassed by most of the fingerprint-based models. In the CCRF-CEM regression task, several fingerprint-based models (AtomPair, RDKitFP and MACCS) outperformed the best end-to-end DL model, which was once again the TextCNN model (RMSE = 0.76) ( Figure 5). On the larger A549 dataset ( Figure 5), TextCNN was the best model (RMSE = 0.79), followed by AtomPair and GCN which both reached an RMSE value of 0.81. The increase in dataset size did not benefit all of the end-to-end DL models, however. 
In general, performance scores were usually similar between many of the models. This finding is in agreement with the MoleculeNet benchmarking study, which was also unable to find clear differences between models when benchmarking on smaller datasets (less than 3000 compounds) [11]. The best type of compound representation method seems to depend on the dataset itself, even when the prediction tasks are similar. Other factors, such as the particular data split that was used or the limited number of hyperparameter combinations that were explored using randomized search, could also have influenced the results. Surprisingly, results were consistently worse when using Mol2vec embeddings, which were generated using a model that had been pre-trained on a dataset with over 19 million molecules [6]. The Mol2vec models performed poorly on the training sets as well, indicating that these models were underfitting. Although the original Mol2vec study does not mention the need for scaling the embeddings before using them as inputs to ML models, some of the values in the embeddings are outside the ideal range (between 0 and 1) for DL models. Scaling the embeddings prior to learning might be necessary to improve learning. In addition, dataset-specific fine-tuning of the embedding model might also improve the performance of FCNNs trained on these embeddings. End-to-end DL models performed as well as, and at times even surpassed, models trained on precomputed features. TextCNN models, in particular, ranked highly across all datasets. GNNs also performed relatively well on some of the prediction tasks. This is contrary to what would be expected given the limited number of compounds and the fact that these models were not pre-trained on larger chemical datasets beforehand. However, more complex GNNs with attention mechanisms (GAT and AttentiveFP) did not always perform as well as other end-to-end methods on the datasets that were used in this study, indicating that applying self-attention to the molecular graph nodes does not seem to be particularly advantageous in this case. Regarding molecular fingerprints, LayeredFP consistently performed well across all datasets, and models trained using atom pair fingerprints also performed relatively well, both outperforming the more popular ECFP fingerprints in 4 out of 5 datasets. LayeredFPs are able to encode information about larger subsets of the molecular graphs than ECFPs, capturing more information on the global structure of the molecules. AtomPair fingerprints also encode more global features since all pairs of atoms are taken into account. The results suggest that global molecular features might be important for the prediction of drug sensitivity. Therefore, these less well-known fingerprints might be interesting alternatives to some of the more commonly used options, at least for drug response prediction tasks. Despite the use of regularization methods, most models still had a tendency to overfit. In the future, an early stopping mechanism similar to the Keras EarlyStopping callback could be implemented for all models to try to mitigate this. Ensemble models versus individual models For each prediction task, the best individual models were also compared to two different ensembles: an 11-model ensemble comprising all models except Mol2Vec models, and an ensemble comprising only the 5 best individual models for a specific task. Results for the classification tasks and the regression tasks are provided in Tables 2 and 3, respectively. 
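A minimal sketch of the voting and averaging ensembles evaluated here is shown below; `models` stands for a list of already-fitted models with scikit-learn-style predict methods, and the helper names are illustrative.

```python
# Sketch of the simple ensembling schemes compared in this study: majority
# voting for classification and prediction averaging for regression.
import numpy as np

def majority_vote(models, X) -> np.ndarray:
    """Binary label predicted by most of the individual classifiers."""
    votes = np.stack([m.predict(X) for m in models])  # shape: (n_models, n_samples)
    return (votes.mean(axis=0) >= 0.5).astype(int)

def average_predictions(models, X) -> np.ndarray:
    """Mean of the individual regression predictions."""
    preds = np.stack([m.predict(X) for m in models])
    return preds.mean(axis=0)
```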
Using the 11-model ensembles improved performance scores across all prediction tasks relative to the best individual models for nearly all of the scoring metrics considered. The 5-model ensembles also outperformed single models on all tasks, and performed better than the larger ensembles on the regression tasks, having achieved lower RMSE scores. The improvement in performance when using ensembles instead of single models is likely due to the fact that different representations capture different molecular features. Therefore, combining several drug representations is a strategy that should be considered when developing drug response prediction models in the future.

Feature importance

SHAP values were used to determine which features were the most important for the test set predictions. The top 20 most important features (calculated for the test sets) for the best individual models for the NCI 1 classification task (ECFP4 model) and the CCRF-CEM regression task (AtomPair model) are shown in Figures 6 and 7, respectively. Features with greater absolute SHAP values are the most important, and global feature importance can be determined by averaging the absolute SHAP values across all samples in the dataset. Through the use of color, these plots also show how the value of a given feature impacts the model prediction. For NCI 1, the SHAP values can be used to determine which fingerprint bits, when set to 1, generally lead the model to predict that the cell line will be sensitive to a given compound or not. For example, ECFP4 bits 724, 947, 453, 353, 324, 832, 434, 695, 856, 302, and 272 have a positive effect on the model output, indicating that the presence of these fragments in a compound is associated with drug sensitivity. Likewise, in the CCRF-CEM task, fingerprint bits that have higher SHAP values are compound fragments that, when present, contribute to increasing the pIC50 values. This indicates that the sensitivity of the CCRF-CEM cell line to treatment is greater when these substructures are present in the screened compounds. SHAP values can also be analyzed at the sample level, allowing one to explain how specific features affect a single prediction. Figure 8 shows how each feature contributes to increasing or decreasing the model prediction from a base value of 0.46 for sample 537 from the NCI 1 test set, which had been correctly predicted as "sensitive" (class label = 1) by the ECFP4 model. Fingerprint bits that are set to 1 and move the predicted value towards 1 are features present in the compound that explain the sensitivity of the cell line used in the NCI 1 assay towards this compound. Figure 9 depicts the top ten ECFP4 bits and the corresponding substructures that most influenced the model prediction for sample 537. The compound in question is (7-Acetamido-1,2,10-trimethoxy-9-oxo-6,7-dihydro-5H-benzo[a]heptalen-3-yl) ethyl carbonate, a colchicine analog. Colchicine and its analogs bind to tubulin [36]. The top 10 most important ECFP4 bits identified for sample 537 correspond to substructures that have been identified as essential for the bioactivity of colchicine binding site inhibitors [36,37]. Bit 900 corresponds to the trimethoxyphenyl ring that is essential for interaction with the binding site [36]. Bits 576 and 706 also represent parts of the trimethoxyphenyl moiety, and are centered on atoms that are part of the methoxy groups where drug-protein interactions are more likely to occur [36].
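The mapping from ECFP4 bits to the substructures discussed here can be reproduced with RDKit's bitInfo mechanism. A minimal sketch; the molecule is a placeholder, not the actual compound of sample 537:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles("COc1ccc(CC(=O)N)cc1OC")  # placeholder molecule
bit_info = {}
fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=1024, bitInfo=bit_info)

for bit, environments in bit_info.items():
    atom_idx, radius = environments[0]
    if radius == 0:
        # Radius-0 bits describe a single atom; report its symbol directly.
        frag = mol.GetAtomWithIdx(atom_idx).GetSymbol()
    else:
        env = Chem.FindAtomEnvironmentOfRadiusN(mol, radius, atom_idx)
        frag = Chem.MolToSmiles(Chem.PathToSubmol(mol, env))
    print(f"bit {bit}: radius {radius}, fragment {frag}")
```

For each bit flagged as important by SHAP, the corresponding fragment SMILES can then be inspected in the same way as in Figure 9.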
Bits 813, 724 and 334 are centered on atoms belonging to the phenyl ring that is part of the trimethoxyphenyl group as well, which has been shown to be essential for maintaining the geometry of the molecule required at the binding site [36]. Bit 907 is centered on a keto group where an important drug-tubulin interaction can occur in the form of a hydrogen bond [36,37], and bit 947 is centered on a carbon atom that is linked to a methoxy group, which might also be involved in drug-protein interactions [36]. Bit 302 is more difficult to interpret, but it appears to represent part of the central 7-membered ring. These results indicate that the DL model was capable of learning which molecular features are the most relevant for predicting drug response in this particular case. It would also be possible to determine which AtomPair fingerprint bits are most predictive of drug sensitivity for specific samples in the CCRF-CEM test set using the calculated SHAP values. However, due to the use of a hashed version of AtomPair fingerprints with a fixed length, obtaining more information on the specific fragments that are associated with each fingerprint bit is not possible, making these fingerprints more difficult to interpret than ECFP4 fingerprints. SHAP values were only calculated for the NCI 1 and CCRF-CEM tasks because we were unable to apply SHAP directly to TextCNN (the best type of model for the PC-3 and A549/ATCC datasets) and GCN (the best model on the NCI 109 task). More explanation methods, including graph-specific methods such as GNNExplainer [38], should be explored in the future. Although SHAP helped to identify important features and explain how individual features affect the output of some of the DL models developed in this study, it is still difficult to explain how the interactions between multiple drug features influence drug response. The use of alternative ML methods that produce human-readable models, such as Decision Trees or other rule induction algorithms, or relational learning algorithms (e.g. inductive logic programming), may be preferable when a greater degree of interpretability is required.

Conclusions

Most compound representation benchmarking studies are focused on molecular property prediction or drug-target interaction prediction tasks. The aim of this study was to determine which types of representations perform best for the specific problem of drug sensitivity prediction in cancer cell lines. Both traditional molecular fingerprints and DL-based representation learning methods were compared. Our findings show that most compound representations perform similarly. Nevertheless, this comparison of methods allowed the identification of representation strategies that consistently performed well across different drug response datasets. Some end-to-end DL models were capable of performing as well as or even better than traditional fingerprint-based models, even on smaller datasets. Additionally, less well-known molecular fingerprints may be interesting alternatives to some of the more popular types of molecular fingerprints. The effect of combining multiple compound representation methods into ensembles of models was also evaluated in this study. The results of this analysis show that this strategy led to improved predictive performance. Finally, this study has also shown that SHAP can be used to increase the interpretability of fingerprint-based DL models.
These findings can help guide the development of new DL-based drug response prediction models trained on screening data from other screening projects. The chemoinformatics software package that was used to carry out these analyses is under constant development and it is currently at a pre-release version. New models and features will be added in the future. Author contribution: All authors have accepted responsibility for the entire content of this manuscript and approved its submission. Research funding: This study was supported by the Portuguese Foundation for Science and Technology (FCT) under the scope of the strategic funding of UIDB/04469/2020 unit and through a PhD scholarship (SFRH/BD/130913/2017) awarded to Delora Baptista. This research has also been supported by the DeepBio project -ref. NORTE-01-0247-FEDER-039831, funded by Lisboa 2020, Norte 2020, Portugal 2020 and FEDER -Fundo Europeu de Desenvolvimento Regional. Conflict of interest statement: Authors state no conflict of interest. All authors have read the journal's Publication ethics and publication malpractice statement available at the journal's website and hereby confirm that they comply with all its parts applicable to the present scientific work.
6,074.6
2022-08-26T00:00:00.000
[ "Computer Science" ]
Unsupervised Domain Adaptation Using Exemplar-SVMs with Adaptation Regularization 1School of Computer and Control Engineering, University of Chinese Academy of Sciences, Beijing 100049, China 2Research Center on Fictitious Economy and Data Science, Chinese Academy of Sciences, Beijing 100190, China 3Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing 100190, China 4School of Economics and Management, University of Chinese Academy of Sciences, Beijing 100190, China 5School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China Introduction Over the past decades, machine learning technologies have achieved significant success in various areas, such as computer vision [1], natural language processing [2], and video detection [3].However, traditional machine learning methods assume that training and testing data come from the same domain, which implies that training or testing data are drawn from the same distribution and represented in the same feature spaces.This assumption is too violated to be held in the real world as collecting suitable and enough labeled data is time consuming and an expensive manual effort.Lacking labeled data, most of traditional machine learning methods always lose their generalization performance in reality.Therefore, it is desired to utilize the data of the relational domain to help training a robust learner for target domains.Driven by this requirement, transfer learning has rapidly developed in recent years [4].Transfer learning slacks the assumption of the traditional machine learning in which data or labels are drawn from the same distribution and represented in the same feature space.In the transfer learning settings, it is always assumed that domains are similar or related, with even no relationships, which is instead of i.i.d.assumption.Thus, transfer learning has a strong motivation when developing the classical machine learning functions or applying the functions to real-world applications.Besides, transfer learning can be regarded as a supplement of classical machine learning methods.One is the problem of covariate shift or sample selection bias.Another motivation is that we want to train a universal or general model as a predictor for all the tasks, viewed as the parameter or learner shared.It is also considered as a goal of Artificial General Intelligence.Transfer learning aims to utilize source or related domains to help target domain tasks.It has achieved significant success Complexity in various practical applications, such as face recognition [5], natural language processing [6], cross-language text classification [7], WiFi localization [8], or medicine image [9].Domain adaptation is a subproblem of transfer learning which assumes that source and target domain data are generated from the same feature and label space but different margin probability distributions.It aims to solve the problems that there is none or less labeled data in the target domain and usually use labeled data in the source domain to assist the training of target domain tasks.Massive works focus on the domain adaptation problems, and they also extend to some applications, such as WiFi location, text sentiment analysis, and image classification for multidomains.Since distribution mismatch generally exists in the real-world applications, there is also some other research area concern about domain adaptation.For example, extreme learning machine (ELM) is an efficient model for training single-hidden 
layer networks [10].There are also some ELM works in a domain adaptation setting [11,12].They utilize most previous domain adaptation classifiers that have added constraint term which is based on using instance reweighting to minimize Maximum Mean Discrepancy (MMD) [13].However, these methods need to assume that the difference between the source and target domain is not too large.Namely, this idea requires that different domains are similar. Most pattern recognition problems can be transformed into several basic classification tasks.Generally speaking, classification tasks assume that a category can be represented by a hyperplane [14,15], and most of the machine learning algorithms aim to learn hyperplanes to predict for unseen instances.Meanwhile, to improve the ability of representation by a hyperplane, there are some works which cluster the samples first and then solve the classification tasks on the clusters.In contrast to the category classification tasks, a cluster classifier can include more information about the positive category, but the more risks of overfitting.Motivated by the object detection, [16] proposed an extreme classification model training the classifiers for every positive instance and all the negative instances named exemplar support vector machines (E-SVMs).In fact, exemplar-SVMs can be viewed as an extreme situation of cluster-level SVM, in which every positive sample is regarded as a cluster.There are two viewpoints about the reason why the exemplar-SVM achieves a surprising generalization performance.One of the viewpoints is taking the exemplar-SVMs as a representation with complete details of positive instances.In other words, every classifier captures details of the positive instance like background, corner, color, or orientations and most of the classifiers can describe the category more intrinsically.From transfer learning viewpoint, training data cannot satisfy the underlying assumption of i.i.d., as every instance in the training set may be different from each other, namely, sample selection bias [17].Each exemplar-SVMs classifier is trained on a high weight positive sample and other negative samples; it can represent the positive sample well in the same distribution.Recently, [18] extends exemplar-SVMs into a transfer learning form which uses loss function reweighting and adds a low-rank regularization item for classifiers. In this work, we propose a novel model to address unsupervised domain adaptation problems that there is no label on target domain data.Furthermore, it permits distribution mismatch among instances.In our model, we train kernel exemplar classifiers for every positive instance and then integrate the classifier to make a prediction for target domain data.To align the distribution mismatch, we embed the regularization item based on TCA in our classifiers.In our opinion, the model constructs the bridge to transfer the knowledge, and we use the information in the kernel matrix which includes the instances representation in the highdimension space to assist classifier training across domains.For the problem of sample selection bias, we integrate the classifiers to make a prediction.Basically, the step of integration is to expand the representation of hyperplanes that entirely take advantage of details learned before. Our contributions are as follows. 
(1) We propose a novel unsupervised domain adaptation model based on exemplar-SVMs, named Domain Adaptation Exemplar Support Vector Machines (DAESVMs), which improves standard domain adaptation prediction accuracy by transferring knowledge across domains.

(2) Every DAESVM classifier constructs a bridge that transmits knowledge from the source domain to the target domain. Compared with the traditional two-step methods, this strategy searches for the optimum of the model as a whole, which makes the classification hyperplane more precise across domains.

(3) To address the problem of sample selection bias, we use ensemble methods to integrate the classifiers. The ensemble process is similar to relaxing the classification hyperplane: it drops some unreliable classification results and uses the reliable parts to make a prediction.

(4) We bring the pseudo label method into DAESVMs, inspired by [19], to supplement the information of the target domain, and the experiments verify the effectiveness of the pseudo label.

(5) We push a step further and extend DAESVMs to multidomain adaptation.

The rest of this paper is organized as follows. We first introduce the notation of the problem and review the related works on domain adaptation, exemplar-SVM, and Transfer Component Analysis (TCA). We then present the derivation and formulation of the DAESVM model, propose the optimization algorithm for the model, and describe how all the DAESVM classifiers are integrated to make a prediction. Finally, we analyze experiments on transfer learning datasets to verify the effectiveness of DAESVMs and conclude the work with an outlook.

Notation and Related Works

This section introduces the notation and the related works of this paper.

Notation. In this paper, we follow the notation of [4]. It is agreed that the approaches to domain adaptation can be divided into three parts: reweighting approaches, feature transfer approaches, and parameter-sharing approaches.
(1) Reweighting Approaches.In the transfer learning tasks, the basic idea of utilizing the source data to help training target predictor is to reduce the discrepancy between the source and target data as far as possible.Under the assumption that source and target domains have a lot of overlapping features, a conventional method is reweighting or selecting the source domain instances to correct the marginal probability distribution mismatch.Based on the metric distance method between distributions named Maximum Mean Discrepancy (MMD), [20] proposed a technique called Kernel Mean Minimum (KMM) revising the weight of every instance to minimize MMD between the source and target domain.Being similar to KMM, [21] used the same idea but a different metric method to adjust the discrepancy of domains.Reference [22] used the strategy of AdaBoost to update the weights of source domain data, which improved the weight of instances in favor of classification task.It also introduced the generalization error bounds of model based on the PAC learning theory.In recent years, [23] used a two-step approach; first is sampling the instances which are similar with other domains as landmarks, and then use these landmarks to map the data into a high-dimension space, after which it is more overlapping.Reference [24] solved the same problem but slacked the similarity assumption; it assumes that there are no relationships between the source and target domain.The model named Selective Transfer Machine (STM) reweights the instance of personal faces to train a generic classifier.Most of instance-based transfer learning techniques use KMM to measure the difference of the distributions, and these methods are applied in many areas, such as facial action unit detection [25] and prostate cancer mapping [26]. 
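The MMD criterion on which KMM and several of the methods above rely has a simple empirical form. A minimal sketch of the (biased) kernel MMD estimate between a source and a target sample with an RBF kernel; the arrays are random stand-ins for real domain data:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2(Xs, Xt, gamma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy."""
    Kss = rbf_kernel(Xs, Xs, gamma)
    Ktt = rbf_kernel(Xt, Xt, gamma)
    Kst = rbf_kernel(Xs, Xt, gamma)
    return Kss.mean() + Ktt.mean() - 2.0 * Kst.mean()

Xs = np.random.randn(100, 20)        # stand-in source domain features
Xt = np.random.randn(80, 20) + 0.5   # stand-in target domain features (shifted)
print("MMD^2 estimate:", mmd2(Xs, Xt))
```

Reweighting methods such as KMM choose instance weights for the source data so that a weighted version of this quantity becomes small.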
(2) Feature Transfer Approaches.Compared with instancebased approaches, feature-based approaches slack the similarity assumption.It assumes that source and target domain share some features named shared features, and domains have their own features named spec-features [27].For example, when we train a task that uses movie critical to help sofa critical sentiment analysis classification task.The word "comfortable" is always nonzero in the sofa domain features but always zero in the movie domain features.This word is the spec-feature of sofa domain feature.Feature transfer approaches aim to find a shared latent subspace where the distance between the source and target domain is minimized.Reference [28] proposed an unsupervised domain adaptation approach named Geodesic Flow Kernel (GFK) based on kernel method.GFK maps data into Grassmann manifolds and constructs geodesic flows to reduce the mismatch among domains.It effectively exploits intrinsic low-dimensional structures of data in domains.To solve problems of crossdomain natural language processing (NLP), [29] proposed a general method structural correspondence learning (SCL) to learn a discriminative predictor by identifying correspondences from features in domains.Primarily, SCL finds the pivot features and then links the shared features with each other.Reference [7] learned a predictor by mapping the target kernel matrix to a submatrix of the source kernel matrix.The deep neural network is used not for learning essential features but also for domain adaptation.Reference [30] proposed a neural network architecture for domain adaptation named Deep Adaptation Network (DAN) and extended it to joint adaptation networks (JAN) [31].Reference [32] discussed the transferable domain features on the deep neural network. (3) Parameter-Based Approaches.The core idea of parameterbased approaches aims to transfer parameters from source to target domain tasks.It assumes that different domains share some parameters and these parameters could be utilized for domains.Reference [33] proposed Adaptive Support Vector Machine (A-SVM) as a general method to adopt new domains.A-SVM trains an auxiliary classifier firstly and then learns the target predictor based on the original parameters.Reference [34] reweighted prediction of the source classifier on target domain by signing distance between domains. Exemplar Support Vector Machines. 
Reference [16] proposed exemplar-SVMs for object detection, where they achieve high performance. The method trains a classifier for every positive instance against all negative instances. Every positive instance is an exemplar, and the classifier corresponding to it can be viewed as a representation of that positive instance. At prediction time, every classifier produces a score for the test instance, a calibration function maps the scores onto a common scale, and the class of the highest-scoring classifiers is returned as the prediction. Exemplar-SVMs address the problem that a single hyperplane can hardly represent a whole category, and they use an extreme strategy to train the predictor. In [35], the training procedure is gathered into one model and a nuclear norm regularization is introduced for the setting of domain generalization, which assumes the target domain is unseen. The model is also extended to the problems of domain generalization and multiview learning [36,37]. In [38], the two hyperparameters are reduced to one and exemplar-SVMs are extended to a kernel form.

2.4. Transfer Component Analysis. Reference [39] proposed a dimension reduction method called maximum mean discrepancy embedding (MMDE). By minimizing the distance between the source and target domain data distributions in a shared latent space, the source domain data are utilized to assist training a classifier for the target domain. MMDE not only minimizes the distance between the domains in the latent space but also preserves the properties of the data by maximizing the data variance. Based on MMDE, [40] extended it to handle unseen instances and to reduce the computational complexity of MMDE. Essentially, TCA replaces the expensive learning of a kernel matrix by a transformation of an initial kernel matrix. The optimization problem reduces to finding the leading eigenvectors of the objective matrix.

Domain Adaptation Exemplar Support Vector Machine

In this section, we present the formulation of the Domain Adaptation Exemplar Support Vector Machine (DAESVM). In the remainder of this paper, we use a lowercase letter in boldface to represent a column vector and an uppercase letter in boldface to represent a matrix. The notation introduced above is extended as follows. We use x_i^+, i ∈ {1, ..., n_S^+}, where n_S^+ is the number of positive instances, to represent a positive instance, and x_j^-, j ∈ {1, ..., n_S^-}, where n_S^- is the number of negative instances, to represent a negative instance. The set of negative samples is written as N^-. This section introduces the formulation of one exemplar classifier; in fact, we need to train as many exemplar classifiers as there are source domain instances, and the method that integrates these classifiers is proposed in a later section.

Exemplar-SVM. The exemplar-SVM follows the extreme idea of training a classifier for one positive instance against all the negative instances and then calibrating the outputs of the classifiers into a probability distribution to separate the samples. The model therefore trains as many classifiers as there are positive instances. Learning a classifier that separates a positive instance x_E^+ from all the negative instances can be modeled as

min_{w,b}  ‖w‖² + C₁ h(wᵀ x_E^+ + b) + C₂ Σ_{x^- ∈ N^-} h(−wᵀ x^- − b),   (1)

where ‖·‖ is the 2-norm of a vector, C₁ and C₂ are the tradeoff parameters (corresponding to C in the standard SVM) balancing the positive and negative error costs, and h(x) = max(0, 1 − x) is the hinge loss function.
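Before moving to the dual form, a minimal sketch of this per-exemplar training scheme is given below using scikit-learn's kernel SVM; the single positive is up-weighted relative to the negatives through sample weights, playing the role of C₁ and C₂. The data and the weight ratio are illustrative stand-ins, not the values used in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def train_exemplar_svms(X_pos, X_neg, c1=100.0, c2=1.0, gamma=1.0):
    """Train one RBF-kernel SVM per positive instance against all negatives."""
    classifiers = []
    for x in X_pos:
        X = np.vstack([x[None, :], X_neg])
        y = np.hstack([[1], -np.ones(len(X_neg))])
        # sample_weight plays the role of the C1/C2 tradeoff between the single
        # positive exemplar and the negative set.
        w = np.hstack([[c1], c2 * np.ones(len(X_neg))])
        clf = SVC(kernel="rbf", gamma=gamma)
        clf.fit(X, y, sample_weight=w)
        classifiers.append(clf)
    return classifiers

X_pos = np.random.randn(5, 10) + 2.0   # stand-in positive instances
X_neg = np.random.randn(50, 10)        # stand-in negative instances
exemplars = train_exemplar_svms(X_pos, X_neg)
print(len(exemplars), "exemplar classifiers trained")
```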
The formulation (1) is the primal problem of the exemplar-SVM, and we can derive the dual problem in order to use the kernel method. Following [38], the dual formulation can be written as

min_α  (1/2) αᵀ (y yᵀ ∘ K) α − eᵀ α,  subject to  0 ≤ α₁ ≤ C₁,  0 ≤ α_j ≤ C₂  (j = 2, ..., 1 + n_S^-),   (2)

where the α are the Lagrangian multipliers, e is the all-ones vector, y collects the labels (+1 for the exemplar and −1 for the negatives), and ∘ denotes the element-wise product. We take this model as an exemplar learner. The matrix K is the kernel (Gram) matrix with entries

K_{ij} = k(x_i, x_j).   (3)

Pseudo Label for Kernel Matrix. To make the best use of the samples in both source and target, we construct the kernel matrix on the data of both domains. However, in the dual problem of the SVM, the kernel matrix K needs to be supplied with labeled data. Our model addresses the unsupervised domain adaptation problem, in which only source domain data are labeled. Motivated by [19], we use pseudo labels to help train the model. Pseudo labels are predicted by a classical classifier, an SVM in our model, trained on the labeled source data. Due to the distribution mismatch between the source and target domain, many of these labels may be incorrect. Following [19], we assume that the pseudo class centroids computed from them do not reside far from the true class centroids. Thus, we use the data of both domains to supplement the kernel matrix K with label information. Our experiments verify that this method is effective.

Exemplar Learner in Domain Adaptation Form. Each exemplar learner is a kernel SVM trained on one positive instance and all the negative instances. In the view of [16], a discriminative exemplar classifier can be taken as a representation of a positive instance. In tasks such as object detection or image classification, this parametric representation is useful because samples have characteristics, such as angle, color, orientation, and background, that are hard to represent explicitly; the instance-based parametric discriminative classifier can capture more of this information about the positive samples. Similarly, with the motivation of transfer learning, we can view a positive instance as a domain, and there is some mismatch among domains. Our model aims to correct this mismatch and reduce the distance to the target domain. We construct an exemplar learner distance metric between domains from the MMD, which can be written as

dist(X_S, X_T) = ‖ (1/n_S) Σ_{i=1}^{n_S} x_i^S − (1/n_T) Σ_{j=1}^{n_T} x_j^T ‖².   (4)

However, this is only a distance metric; our requirement is to minimize this distance by some transformation. Motivated by Transfer Component Analysis (TCA), we want to map the instances into a latent space in which the instances from the source and target domain are more similar, and we denote this mapping by φ(·). Namely, we aim to minimize the MMD distance between domains after mapping the instances into another space, and we extend the distance function to

dist(X_S, X_T) = ‖ (1/n_S) Σ_{i=1}^{n_S} φ(x_i^S) − (1/n_T) Σ_{j=1}^{n_T} φ(x_j^T) ‖².

Following the usual approach, (4) is reformulated in kernel matrix form. We define the Gram matrices on the source positive domain, the source negative domain, and the target domain; the kernel matrix K is composed of the corresponding nine submatrices, and the MMD coefficient matrix L has entries L_{ij} = 1/n_S² if x_i and x_j are both source instances, L_{ij} = 1/n_T² if both are target instances, and L_{ij} = −1/(n_S n_T) otherwise. Thus, the primal distance function is represented by tr(KL). Motivated by TCA [40], mapping the primal data is equivalent to transforming the kernel matrix generated from the source and target domain data. A low-dimensional transformation matrix M ∈ R^{(1+n_S^- +n_T)×m} reduces the dimension of the primal kernel matrix: it maps the empirical kernel map K = (K K^{−1/2})(K^{−1/2} K) into an m-dimensional shared space. Accordingly, we replace the distance function tr(KL) by tr(K M Mᵀ K L). In our case, we follow [40] and minimize this trace of the distance. To control the complexity of M and preserve the data characteristics, we add a regularization term and a constraint.
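Before formulating the domain adaptation term, the pseudo-labeling step described above can be sketched in a few lines: a plain SVM fitted on the labeled source data provides provisional labels for the target data, which are then used when building the label-aware kernel matrix. The data here are random stand-ins.

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in source (labeled) and target (unlabeled) domain data.
Xs = np.random.randn(120, 10)
ys = np.random.randint(0, 2, size=120)
Xt = np.random.randn(80, 10) + 0.3           # shifted target distribution

base = SVC(kernel="rbf", gamma=0.5).fit(Xs, ys)
y_pseudo = base.predict(Xt)                  # pseudo labels for the target domain

# Both domains, with true source labels and target pseudo labels, can now be
# used to supplement the kernel matrix with label information.
y_all = np.hstack([ys, y_pseudo])
print("pseudo-labeled target samples:", len(y_pseudo))
```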
The domain adaptation term follows TCA and is written as

Ω(M) = tr(Mᵀ K L K M) + μ tr(Mᵀ M),  subject to  Mᵀ K H K M = I_m,

where μ > 0 is a tradeoff parameter, H is the centering matrix, and I_m ∈ R^{m×m} is the identity matrix. Furthermore, the objective function of the dual SVM needs the training label information, and the same holds for our model. Thus, we construct the training label vector y = [y_S^+; y_S^-; y_T], where y_S^+ is the label of the positive instance, y_S^- is the label vector of the negative source instances, and y_T contains the pseudo labels of the target instances predicted by the SVM described before. It can be rewritten in matrix form as U = diag(y). The label matrix U provides the information of the source domain labels and the target domain pseudo labels. The matrix K in the dual problem of the exemplar-SVM (2) is the kernel matrix of the primal data. We want to replace it by the kernel matrix mapped into the latent subspace, that is, we replace K by K̃ = U K M Mᵀ K U, and the final objective function of each DAESVM model is formulated as

min_{α, M}  (1/2) αᵀ K̃ α − eᵀ α + λ [ tr(Mᵀ K L K M) + μ tr(Mᵀ M) ],
subject to  0 ≤ α₁ ≤ C₁,  0 ≤ α_j ≤ C₂,  Mᵀ K H K M = I_m,   (12)

with λ > 0 a tradeoff parameter weighting the domain adaptation term.

Optimization Algorithm

To minimize problem (12), we adopt an alternating optimization method that alternates between two subproblems, one over the multipliers α and one over the mapping matrix M. Under this scheme, the alternating optimization approach is guaranteed to decrease the objective function. Algorithm 1 summarizes the optimization procedure for problem (12). With M fixed, the subproblem in α is

min_α  (1/2) αᵀ K̃ α − eᵀ α,  with  K̃ = U K M Mᵀ K U,

which represents the kernel matrix transformed by the transformation matrix M. It is obvious that this subproblem is a QP problem, and it can be solved efficiently using interior point methods or other successive optimization procedures such as the Alternating Direction Method of Multipliers (ADMM).

Ensemble Domain Adaptation Exemplar Classifiers

In this section, we introduce the method for integrating the exemplar classifiers. As mentioned before, we obtain as many classifiers as there are source domain instances, and this section aims to predict labels for the target domain instances. In our view, the classification hyperplane of an exemplar classifier is a representation of a source domain positive instance. However, most of the hyperplanes contain information that comes from various samples, such as images with different backgrounds or sources. In fact, we aim to select the exemplar classifiers that come from instances similar to the testing sample. Thus, we use the integration method to filter out classifiers that include details different from the testing sample. Another view of the integration method is that it relaxes part of the hyperplanes; namely, it removes some exemplar classifiers that were trained on instances with a large distribution mismatch.

In our method, we first construct the classifiers from the Lagrange multipliers α. The weight of a classifier is recovered as w = Σ_i α_i y_i φ(x_i), the bias b is obtained from the support vectors, and the classifier is given by f(x) = Σ_i α_i y_i k(x_i, x) + b. We then compute the scores of the testing instance under every classifier. Second, we find the top P scores among the classifiers of each class and compute the sum of those scores. Finally, we obtain a score for each class, and the class with the highest score is the predicted category. The prediction method is described in Algorithm 2.

Experiments

In this section, we conduct experiments on four domains, Amazon, DSLR, Caltech, and Webcam, to evaluate the performance of the proposed Domain Adaptation Exemplar Support Vector Machines. We first compare our method to baselines and other domain adaptation methods. Next, we analyze the effectiveness of our approach. Finally, we discuss parameter sensitivity.

Data Preparation.
We run the experiments on Office and Office Caltech datasets.Office dataset contains three domains Amazon, Webcam, and DSLR.Each of them includes images from amazon.com or Office environment images taken with varying lighting and pose changes using a Webcam or a DSLR camera.Office Caltech dataset contains the ten overlapping categories between the Office dataset and Caltech-256 dataset.By the standard transfer learning experiment method, we merge two datasets; it entirely includes four domains Amazon, DSLR, Caltech, and Webcam which are studied in [41].The dataset of Amazon is the images downloaded from Amazon merchants.The images in the Webcam also come from the online web page, but they are of low quality as they are taken by web camera.The domain of DSLR is photographed by the digital SLR camera by which the images are of high quality.Caltech is always added to domain adaptation experiments, and it is collected by object detection tasks.Each domain has its characteristic.Compared to the other domains, the quality of images in the DSLR is higher than others and the influence factors such as object detection and background are less than images downloaded from the web.Amazon and Webcam come from the web, and images in the domains are of low quality and more complexity.However, there are some different details on each of them.Instances in the Webcam are object alone, but the composition of samples in Amazon is more complex including background and other goods.Figure 1 shows the example of the backpack from four domain samples.In the view of transfer learning, the datasets come from different domains and the different margin probabilities for the images.In our model, we aim to solve this problem and get a robust classifier for the cross-domain. We chose ten common categories among all four datasets: backpack, bike, bike helmet, bookcase, bottle, calculator, desk chair, desk lamp, desktop computer, and file cabinet.There are 8 to 151 samples per category in a domain: 958 images in Amazon, 295 images in Webcam, 157 images in DSLR, 1123 images in Caltech, and 2533 images total in the dataset.Figure 1 shows examples for datasets. We follow both SURF and DeCAF features extraction in the experiments.First, we use SURF features encoding the images into 800-bin histograms.Next, we use DeCAF feature which is extracted by 7 layers of Alex-net [42] into 4096-bin histograms.At last, we normalized the histograms and then -scored to have zero mean and unit standard deviation in each dimension. We run our experiments on a standard way for visual domain adaptation.It always uses one of four datasets as source domain and another one as target domain.Each dataset provides same ten categories and uses the same representation of images which is considered as the problem of homogeneous domain adaptation.For example, we choose images taken by the set of DSLR (denoted by ) as source domain data and use images in Amazon (denoted by ) as target domain data.This problem is denoted as D → A. Using this method, we can compose 12 domain adaptation subproblems from four domains. 
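The feature preprocessing described above (histogram normalization followed by z-scoring each dimension to zero mean and unit standard deviation) can be sketched as follows; the matrix is a random stand-in for the 800-bin SURF or 4096-bin DeCAF histograms:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.random.rand(958, 800)                  # stand-in 800-bin SURF histograms
X = X / X.sum(axis=1, keepdims=True)          # normalize each histogram
X = StandardScaler().fit_transform(X)         # z-score every dimension
print(X.mean(axis=0)[:3], X.std(axis=0)[:3])  # close to 0 and 1
```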
Experiment Setup (1) Baseline Method.We compare our DAESVM method with three kinds of classical approaches: one is classified without regularization of transfer learning, the second is conventional transfer learning methods, and the last one is the foundation model, which is low-rank exemplar support vector machine.The methods are listed as follows: (1) Transfer Component Analysis (TCA) [40] (2) Support Vector Machine (SVM) [43] (3) Geodesic Flow Kernel (GFK) [28] (4) Landmarks Selection-based Subspace Alignment (LSSA) [23] (5) Kernel Mean Maximum (KMM) [20] (6) Subspace Alignment (SA) [44] (7) Joint Matching Transfer (TJM) [45] (8) Low-Rank Exemplar-SVMs (LRESVMs) [18] TCA, GFK, and KMM are the classical transfer learning methods.We compare our model with these methods.Besides, we prove our method is more robust than models without domain adaptation items in the transfer learning scenery.TCA is the foundation of our model, and it is similar to GFK and SFA which are based on the idea of feature transfer.KMM transfer knowledge by instance reweighting. TJM is a popular model utilizing the problem of unsupervised domain adaptation.SA and LSSA are the models using landmarks to transfer knowledge. (2) Implementation Details.For baseline method, SVM is trained on the source data and tested on the target data [46].TCA, SA, LSSA, TJM, and GFK are first viewed as dimension reduction process and then train a classifier on the source data and make a prediction for the target domain [19].Being similar to dimension reduction, KMM is first to compute the weight of each instance and then train predictor on the reweighting source data.Under the assumption of unsupervised domain adaptation, it is impossible to tune the optimal parameters for the target domain task by cross validation, since there exists distribution mismatch between domains.Therefore, in the experiments, we adopt the strategy of Grid Search to obtain the best parameters and report the best results.Our method involves five tunable parameters: tradeoff in ESVM 1 and 2 , tradeoff in regularization items and , and parameter of dimension reduction .The parameters of tradeoff in ESVM 1 and 2 are selected over {10 −3 , 10 −2 , 10 −1 , 10 −0 , 10 1 , 10 2 , 10 3 }.We fix = 1, = 1, = 40 empirically and select radial basic function (RBF) as the kernel function.In fact, our model is relatively stable under a wide range of parameter values.We train a classifier for every positive instance in the source domain data and then we put them into a probability distribution.We deal with the multiclass classifier in a one versus the others way.To measure the performance of our method, we use the average accuracy and the standard deviation over ten repetitions.The average testing accuracies and standard errors for all 12 tasks of our methods are reported in Table 1.For the rest of baseline experiments, most of them are cited by the papers which are published before. Experiments Results. In this section, we compare our DAESVM with baseline methods regarding classification accuracy. 
Table 1 summarizes the classification accuracy obtained by all the 10 categories and generates 12 tasks in 4 domains.The highest accuracy is in a bold font which indicates that the performance of this task is better than others.First, we implement the traditional classifiers without domain adaptation items that we train the predictors on the source domain data and make a prediction for target domain dataset.Second, we compared our DAESVM with unsupervised domain adaptation methods, such as TCA or GFK, implemented to use the same dimension reduction with the parameter in our model.At last, we also compared DAESVM with newly transfer learning models, like low-rank ESVMs [18].Overall, in a usual transfer learning way, we run datasets across different pairs of source and target domain.The accuracy of DAESVM for the adaptation from DSLR to Webcam can achieve 92.1% which make the improvement over LRESVM by 1.2%.Compared with TCA, DAESVMs make a consideration about the distribution mismatch among instances or different domains.For the adaptation from Webcam to DSLR, this task can get the accuracy of 91.8%.For the domain datasets Amazon and Caltech which are more significant than DSLR and Webcam, DAESVM gets the accuracy of 77.5% which improves about 36.2% compared to the method of TJM.For the ability which transfers knowledge from large dataset to small domain dataset, from Amazon to DSLR, we get the accuracy of 76.8%.Contrarily, from DSLR to Amazon, the prediction accuracy is 83.4%.Totally speaking, our DAESVM trained on one domain has good performance and will also have robust performance on multidomain.We also complement tasks of multidomains adaptation, which utilized one or more domains as source domain data and made an adaptation to other domains.The results are shown in Table 2.The accuracy of DAEVM for the adaptation from Amazon, DSLR, and Webcam to Caltech achieves 90.1% which get the improvement over LERSVM.For the task of adaptation from Amazon and Caltech to Webcam, DSLR can get the accuracy of 92.4%.The experiments prove that our models are effective not only for single domain adaptation but also for multidomain adaptation. Two key factors may contribute to the superiority of our method: The feature transfer regularization item is utilized to slack the similarity assumption.It just assumes that there are some shared features in different domains instead of the assumption that different domains are similar to each other.This factor makes the model more robust than models with reweighting item.The second factor is the exemplar-SVMs which are proposed from a motivation of transfer learning which makes a consideration that instances are distribution mismatch from each other.Our model combines these two factors to resist the problem of distribution mismatch among domains and sample selection bias among instances.6.4.Pseudo Label Effectiveness.Following [19], we use pseudo labels to supplement training model.In our experiments, we test the prediction results which are influenced by the accuracy rate of pseudo labels.As a result, described by Figure 2, the prediction accuracy is improved following the increasing accuracy of pseudo labels.It is proved that the method of the pseudo label is effective and we can do the iteration by using the labels predicted by the DAESVM as the pseudo labels.The iteration step can efficiently enhance the performance of the classifiers. Parameter Sensitivity. 
There are five parameters in our model, and we conduct a parameter sensitivity analysis, showing that the model can achieve optimal performance under a wide range of parameter values, and discuss the results.

(1) MMD tradeoff. This parameter controls the weight of the MMD term, which aims to minimize the distribution mismatch between the source and target domain. Theoretically, we want this term to be equal to zero. However, if we set this parameter to infinity, the model may lose the data properties when the source and target domain data are transformed into the high-dimensional space. Conversely, if we set it to zero, the model loses the ability to correct the distribution mismatch.

(2) Variance tradeoff. This parameter controls the weight of the data variance term, which aims to preserve the data properties. Theoretically, we want this term to be equal to zero. However, if we set this parameter to infinity, it may augment the data distribution mismatch among different domains; namely, the transformation matrix M cannot utilize the source data to assist the target task. Conversely, if we set it to zero, the model cannot preserve the properties of the original data.

(3) Dimension reduction m. This is the dimension of the transformation matrix, namely, the dimension of the subspace into which we want to map the samples. If m is too small, the properties of the data may be lost, which can lead to classifier failure. If m is too large, the effectiveness of correcting the distribution mismatch may be lost. We evaluate the classification results as influenced by the dimension m, and the results are displayed in Figure 3.

(4) Tradeoffs C₁ and C₂ in ESVM. Parameters C₁ and C₂ are the upper bounds of the Lagrangian variables. In the standard SVM, positive and negative instances share the same value of these two parameters. In our models, we expect the weights of the positive samples to be higher than those of the negative samples. In our experiments, the value of C₁ is one hundred times C₂, which yields a high-performance predictor. The visual analysis of these two parameters is given in Figure 4.

Conclusion

In this paper, we have proposed an effective method for domain adaptation problems with a regularization term that reduces the data distribution mismatch between domains and preserves the properties of the original data. Furthermore, the method of integrating classifiers can predict target domain data with high accuracy. The proposed method mainly aims to solve problems in which domain or instance distribution mismatch occurs. Meanwhile, we extend DAESVMs to multiple source or target domains. The experiments conducted on the transfer learning datasets transfer knowledge from image to image.

Our future works are as follows. First, we will integrate the training of all the classifiers in an ensemble way. It would be better to accelerate the training process by rewriting all the weights in a matrix form; this strategy can omit the matrix inversion in the optimization. Second, we want to add a constraint that enforces sparsity. Finally, we will extend DAESVMs to the problem of transferring knowledge among domains that have few relationships, such as transferring knowledge from image to video or text.

Notations and Descriptions

D_S, D_T: source/target domain
T_S, T_T: source/target task
d: dimension of the feature space
X_S, X_T: source/target sample matrix
y_S, y_T: source/target sample label matrix
K: kernel matrix without label information
α: Lagrange multipliers vector
n_S, n_T: number of source/target domain instances
e: all-ones vector
I: identity matrix
Algorithm 2 (Ensemble Domain Adaptation Exemplar Classifiers). Input: source labels y_S, Lagrange multipliers α, test data X_te; parameter P. Output: prediction labels y.
(1) Compute the weights w of the classifiers.
(2) Construct the weight matrix W and bias b of the predictors based on α.
(3) repeat
(4) Compute the scores of each classifier in this category.
(5) Find the top P scores.
(6) Compute the sum of these top scores.
(7) until all categories have been processed
(8) Choose the category with the maximum score as the prediction label y.

Figure 1: Example images from the backpack category in Amazon, DSLR ((a), from left to right), Webcam, and Caltech-256 ((b), from left to right). The images from different domains vary in style, background, and source.

Figure 2: The accuracy of DAESVMs improves as the pseudo label accuracy improves. The results verify the effectiveness of the pseudo label method.

Figure 3: When the dimension m is 20 or 40, the prediction accuracy is higher than for other values. Figure 4: Visual analysis of the tradeoff parameters C₁ and C₂.

(Displaced fragment of the notation section:) ... definition in transfer learning, and the definition considers only the condition of one source domain and one target domain. First, we need to define the Domain and the Task. A domain D is composed of a feature space X and a marginal probability distribution P(x), namely, D = {X, P(x)}, x ∈ X. A task T is composed of a label space Y and a prediction model f(·), namely, T = {Y, f(·)}, y ∈ Y. From the view of probability, f(x) = P(y | x). Notations frequently used in this paper are summarized in the Notations and Descriptions section. The definition of transfer learning is as follows: given source domain data D_S = {(x_1^S, y_1^S), ..., (x_ ...

Table 1: Classification accuracies of different methods for different domain adaptation tasks. We conduct the experiments against conventional transfer learning methods. Compared with traditional methods, DAESVMs gain a large improvement in prediction accuracy. They also improve on the recently proposed LRESVM approach [average ± standard error of accuracy (%)].

Table 2: We also conduct our experiments for multidomain tasks and gain an improvement compared with previously proposed methods. The experiments adopt the same strategy as the single domain adaptation. We treat the multidomain data as one source or target to find the shared features in a latent space. However, the complexity of the multidomain shared features limits the accuracy of the tasks [average ± standard error of accuracy (%)].
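The top-P scoring rule of Algorithm 2 can be sketched as follows, assuming the exemplar classifiers are fitted scikit-learn SVMs whose decision_function is used as the score; the class assignment of each exemplar and the parameter P are inputs:

```python
import numpy as np

def predict_top_p(classifiers, exemplar_classes, X_test, p=3):
    """Predict labels by summing the top-P exemplar scores of each class.

    classifiers      : list of fitted exemplar SVMs (one per source positive)
    exemplar_classes : class label of the positive exemplar behind each classifier
    """
    exemplar_classes = np.asarray(exemplar_classes)
    scores = np.stack([clf.decision_function(X_test) for clf in classifiers], axis=1)
    labels = []
    for row in scores:                                   # one row per test instance
        class_scores = {}
        for c in np.unique(exemplar_classes):
            s = np.sort(row[exemplar_classes == c])[::-1]
            class_scores[c] = s[:p].sum()                # sum of the top-P scores
        labels.append(max(class_scores, key=class_scores.get))
    return np.array(labels)
```

Keeping only the top-P scores per class is what filters out exemplar classifiers that were trained on instances dissimilar to the test sample.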
8,419.6
2018-04-22T00:00:00.000
[ "Computer Science" ]
Emergence of Non-Fourier Hierarchies

The non-Fourier heat conduction phenomenon at room temperature is analyzed from various aspects. The first one shows its experimental side, in what form it occurs, and how we treated it. It is demonstrated that the Guyer-Krumhansl equation can be the next appropriate extension of Fourier's law for room-temperature phenomena in the modeling of heterogeneous materials. The second approach provides an interpretation of generalized heat conduction equations using a simple thermo-mechanical background. Here, Fourier heat conduction is coupled to elasticity via thermal expansion, resulting in a particular generalized heat equation for the temperature field. Both aforementioned approaches show the size dependency of non-Fourier heat conduction. Finally, a third approach is presented, called pseudo-temperature modeling. It is shown that a non-Fourier temperature history can be produced by mixing different solutions of Fourier's law. That kind of explanation points to the underlying heat conduction mechanisms behind non-Fourier phenomena.

Introduction

Fourier's law [1] is one of the most widely applied, well-known elementary physical laws in engineering practice:

q = −k ∇T.   (1)

Here, q is the heat flux vector, T is the absolute temperature, and k is the thermal conductivity. However, as with all constitutive equations, it also has limits of validity. Phenomena that do not fit into these limits, called non-Fourier heat conduction, appear in many different forms. Some of them occur at low temperature, such as the so-called second sound and ballistic (thermal expansion induced) propagation [2][3][4][5][6][7]. These phenomena have been experimentally measured several times [8][9][10][11] and many generalized heat equations exist to simulate them [12][13][14][15][16][17][18][19][20]. The success in low-temperature experiments resulted in the extension of this research field to find the deviation at room temperature as well. One of the most celebrated results is related to Mitra et al. [21,22], where the measured temperature history was very similar to a wave-like propagation. However, these results have not been reproduced by anyone and undoubtedly demand further investigation. In most of the room-temperature measurements, attempts were made to prove the existence of Maxwell-Cattaneo-Vernotte (MCV) type behavior [23,24]. It is this MCV equation that is used to model the aforementioned second sound, the dissipative wave propagation form of heat [3,25,26]. The validity of the MCV equation for room-temperature behavior has not yet been justified, despite the numerous experiments. It is important to note that many other extensions of the Fourier equation exist beyond the MCV one, such as the Guyer-Krumhansl (GK) equation [27][28][29][30][31][32], the dual-phase-lag model [33], and their modifications, too [7,34,35]. Some of these possess a stronger physical background, some others do not [36][37][38]. Here we would like to emphasize that we restrict ourselves to the GK equation, which shows the simplest hierarchical arrangement of Fourier's law and is applicable to room-temperature problems. The simplest extension of the MCV equation is the GK model, which reads

τ q̇ + q + k ∇T − κ² Δq = 0,   (2)

where the coefficient τ is called the relaxation time, κ² is regarded as a dissipation parameter, and the dot denotes the time derivative. This GK-type constitutive equation contains the MCV type by considering κ² = 0 and the Fourier equation by taking τ = κ² = 0.
This feature of the GK equation allows modeling both wave-like temperature histories and over-diffusive ones. This is more apparent when one applies the balance equation of internal energy,

ρc Ṫ + ∇·q = 0,   (3)

with mass density ρ, specific heat c, and the volumetric source neglected, to eliminate q. One obtains

τ T̈ + Ṫ = a ΔT + κ² ΔṪ,   (4)

with thermal diffusivity a = k/(ρc). One can realize that Equation (4) contains the Fourier heat equation, Ṫ = a ΔT, as well as its time derivative, with different coefficients. This becomes more visible after rearranging Equation (4):

τ ∂_t (Ṫ − (κ²/τ) ΔT) + (Ṫ − a ΔT) = 0.   (5)

When the so-called Fourier resonance condition [39,40], κ²/τ = a, holds, the solutions of the Fourier equation are covered by the solutions of (4). Meanwhile, when κ² < aτ, the wave-like behavior is recovered, and this domain is known as the under-damped region. In the opposite case (κ² > aτ), there is no visible wave propagation, and it is called the over-diffusive (or over-damped) region. We measured the corresponding over-diffusive effect several times in various materials such as metal foams, rocks, and in a capacitor, too [39,40]. Furthermore, a similar temperature history was observed in a biological material [38]. It is also important to note that originally the GK equation was derived from the Boltzmann equation with phonon hydrodynamics in the background. Here, we would like to emphasize that in non-equilibrium thermodynamics it can also be derived without assuming any phonon interaction in the material [6,7], keeping the GK equation applicable to room-temperature heat conduction. In this paper, further aspects of over-diffusive propagation are discussed. In the following sections, the size dependence of the observed over-damped phenomenon is discussed both experimentally and theoretically. Moreover, the approach of pseudo-temperature is presented to provide one concrete possible interpretation of non-Fourier heat conduction.

Size Dependence

Our measurements reported here were performed on basalt rock samples with three different thicknesses, 1.86, 2.75 and 3.84 mm, respectively. We applied the same heat pulse apparatus as described in [39,40], schematically depicted in Figure 1 below. In each case, the rear-side temperature history was measured and numerically evaluated by solving the GK equation with constant coefficients, i.e., the coefficients do not depend on temperature due to its small change. It is also assumed that the GK equation characterizes the whole sample. We chose the GK equation as the simplest thermodynamically consistent one that can predict the signal shapes observed in room-temperature measurements. (The heat pulse setup, a widely used one for transient heat conduction measurements, is not capable of resolving the space dependence of temperature along the sample, but even such measurement data would be insufficient to determine an underlying partial differential equation: any experimental data can only refute or support an equation, at some confidence level.) The GK coefficients used below are best fits. The recorded dimensionless temperature signals are plotted in the corresponding figures. In these figures, the dashed line shows the solution of the Fourier equation using the thermal diffusivity corresponding to the initial part of the temperature rise on the rear side. The measured signal deviates from the Fourier-predicted one even when considering a non-adiabatic (cooling) boundary condition. That deviation weakens with increasing sample thickness; for the thickest one it is hardly visible, and the prediction of Fourier's law is almost acceptable.
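In the evaluations that follow, the fitted GK coefficients are judged against the Fourier resonance condition. A small sketch of this check, computing the ratio aτ/κ² and labeling the regime; the numerical values in the example call are placeholders, not fitted values:

```python
def gk_regime(a, tau, kappa2):
    """Classify the Guyer-Krumhansl regime from fitted coefficients.

    a      : thermal diffusivity [m^2/s]
    tau    : relaxation time [s]
    kappa2 : dissipation parameter [m^2]
    """
    ratio = a * tau / kappa2
    if abs(ratio - 1.0) < 1e-3:
        regime = "Fourier (resonance condition holds)"
    elif ratio > 1.0:
        regime = "under-damped (wave-like), kappa^2 < a*tau"
    else:
        regime = "over-diffusive (over-damped), kappa^2 > a*tau"
    return ratio, regime

print(gk_regime(a=1.0e-6, tau=0.3, kappa2=4.0e-7))  # placeholder values
```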
The evaluation of the thinnest sample using the GK equation is shown in Figure 5. The fitted coefficients are summarized in Table 1. It is important to mention that MCV equation using the presented parameters would show a wave-like propagation that is not observed in the experiments. Deviation from the Fourier prediction is weak but is clearly present, and has size dependent attributes. Concerning the ratio of parameters, i.e., investigating how considerably the Fourier resonance condition aτ/κ 2 = 1 is violated, the outcome can be seen in Table 2. As analysis of the results, it is remarkable to note the deviation of the GK fitted thermal diffusivity from the Fourier fitted one, and that this deviation is size dependent. For the thickest sample, which can be well described by Fourier's law, the fitted thermal diffusivity values are practically equal, and the ratio of parameters is very close to the Fourier resonance value 1. The next section is devoted to a possible explanation for the emergence of a generalized heat equation with higher time and space derivatives. All coefficients of the higher time and space derivative terms are related to well-known material parameters. The result also features size dependent non-Fourier deviation. Seeming Non-Fourier Heat Conduction Induced by Elasticity Coupled via Thermal Expansion While, in general, one does not have a direct physical interpretation of the phenomenon that leads to, at the phenomenological level, non-Fourier heat conduction here follows a case where we do know this background phenomenon. Namely, in case of heat conduction in solids, a plausible possibility is provided by an interplay between elasticity and thermal expansion. Namely, without thermal expansion, elasticity-a tensorial behavior-is not coupled to Fourier heat conduction-a vectorial one-in isotropic materials. However, with nonzero thermal expansion, strains and displacements must be in accord both with what elastic mechanics dictates and with what position dependent temperature imposes. The coupled set of equations of Fourier heat conduction, of elastic mechanics and of kinematic relationships, after eliminating the kinematic and mechanical quantities, leads to an equation for temperature only that contains higher derivative corrections to Fourier's equation. It is important to check how remarkable these corrections are. In the following section we present this derivation and investigation. The Basic Equations In all respects involved, we choose the simplest assumptions: the small-strain regime, a Hooke-elastic homogeneous and isotropic solid material, with constant thermal expansion coefficient, essentially being at rest with respect to an inertial reference frame. Kinematic, mechanical and thermodynamical quantities and their relationships are considered along the approach detailed in [41][42][43]. 
The Hooke-elastic homogeneous and isotropic material model states, at any position r, the constitutive relationship between stress tensor σ and elastic deformedness tensor D (which, in many cases, coincides with the strain tensor), where d and s denote the deviatoric (traceless) and spherical (proportional to the unit tensor 1) parts, i.e., Stress induces a time derivative in the velocity field v of the solid medium, according to the equation with mass density being constant in the in the small-strain regime; hereafter ← ∇ and → ∇ denote derivative of the function standing to the left and to the right, respectively, to display the tensorial order (tensorial index order) properly for vector/tensor valued functions. For the velocity gradient L and its symmetric part, one has where the Einstein summation convention for indices has also been applied. Again, using this convention, and the Kronecker delta notation, to any scalar field f , follow, which are also to be used below. The small-deformedness relationship among the kinematic quantities, with linear thermal expansion coefficient α considered constant, and absolute temperature T, is For specific internal energy e, its balance, after subtracting the contribution ė el coming from specific elastic energy e el and the corresponding elastic part tr σḊ of the mechanical power tr (σL), is where c is specific heat corresponding to constant zero stress (or pressure), temperature has been approximated in one term of (18) by an initial homogeneous absolute temperature value T 0 to stay in accord with the linear (small-strain) approximation, and heat flux q follows the Fourier heat conduction constitutive relationship with thermal conductivity k also treated as a constant. The Derivation The strategy is to eliminate σ in favor of (with the aid of) D, then D is eliminated in favor of L sym , after which we can realize that both from the mechanical direction and from the thermal one we obtain relationship between v · ← ∇ and T, which, eliminating v · ← ∇, yields an equation for T only. Starting with the thermal side, Meanwhile, from the mechanical direction, aiming at being in tune with (20): (where c is the longitudinal elastic wave propagation velocity); hence, summarizing the final result in two equivalent forms, The first form here tells us that we have here the wave equation of a heat conduction equation, the last term on the r.h.s. somewhat detuning the heat conduction equation of the r.h.s. with respect to the one on the l.h.s. (the underlined coefficient is the one becoming modified when its term is melted together with the last term). In the meantime, the second form shows the heat conduction equation of a wave equation, the last term on the r.h.s. detuning the underlined coefficient. Both forms show that coupling, after elimination, leads to a hierarchy of equations, with an amount of detuning that is induced by the coupling-for similar further examples, see [44]. We close this section by rewriting the final result in a form that enables to estimate the contribution of thermal expansion coupled elasticity to heat conduction: i.e., One message here is that, thermal expansion coupled elasticity modifies the thermal diffusivity a = k/( c) to an effective one a 2 = k/γ 2 = ( c/γ 2 ) · a (see the heat conduction on the r.h.s.). For metals, this means a few percent shift (1% for steel and copper, and 6% for aluminum) at room temperature. 
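To get a feeling for the magnitude of this shift before turning to the second message, the classical thermoelastic coupling constant can be evaluated for a representative metal. The sketch below is a rough cross-check only: it assumes the standard coupling constant δ = T₀α²(3λ+2μ)²/(ρc(λ+2μ)) as a stand-in for the γ coefficients of the derivation above, and it uses textbook property values for mild steel rather than measured data.

```python
# Rough cross-check (assumption: the classical thermoelastic coupling constant
#   delta = T0 * alpha^2 * (3*lam + 2*mu)**2 / (rho * c * (lam + 2*mu))
# is used here in place of the gamma coefficients of the text).
# Property values are textbook-like numbers for mild steel, not measured data.

E, nu = 200e9, 0.30          # Young's modulus [Pa], Poisson ratio
alpha = 12e-6                # linear thermal expansion coefficient [1/K]
rho, c = 7850.0, 460.0       # density [kg/m^3], specific heat [J/(kg K)]
T0 = 300.0                   # reference temperature [K]

lam = E * nu / ((1 + nu) * (1 - 2 * nu))   # Lame parameters
mu = E / (2 * (1 + nu))

delta = T0 * alpha**2 * (3 * lam + 2 * mu)**2 / (rho * c * (lam + 2 * mu))
print(f"relative shift of apparent thermal diffusivity ~ {delta:.1%}")  # about 1 %
```

The result, roughly one percent, is consistent with the shift quoted above for steel.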
The other is that, for a length scale ℓ (e.g., a characteristic sample size) and the corresponding Fourier time scale t_F = ℓ²/a, the r.h.s. is, to a (very) rough estimate, 1/ℓ² times a heat conduction equation, while the l.h.s. is (similarly roughly) 1/(c²t_F²) times the (nearly) same heat conduction equation (one with a₁ = k/γ₁), where c is the longitudinal elastic wave speed introduced above. In other words, the l.h.s. provides a contribution to the r.h.s. via a dimensionless factor of the order of (a/(cℓ))². This dimensionless factor is about 10⁻¹⁰ to 10⁻¹³ for metals, 10⁻¹⁴ for rocks and 10⁻¹⁵ for plastics with ℓ = 3 mm, a typical size for flash experiments. Therefore, the effect of the l.h.s. appears to be negligible with respect to the r.h.s. It is important to point out that the first phenomenon, the emergence of an effective thermal diffusivity, would remain unnoticed in the analogous one-space-dimensional calculation (no detuning of c on the r.h.s. there). It is revealed only in the full 3D treatment, which also exposes possible pitfalls of 1D considerations in general. As a conclusion of this section, thermal expansion coupled elasticity may introduce a few percent effect (a material dependent but sample size independent value) in determining thermal diffusivity from flash experiments or other transient processes (while its other consequences may be negligible). Pseudo-Temperature Approach Experimental results serve to check whether a certain theory used for describing the observed phenomenon is acceptable or not. The heat pulse (flash) experiment may show various temperature histories. Generally, the flash measurement results are in accordance with the Fourier theory. In some cases, as reported in [39,40], the temperature histories show "irregular" characteristics; such histories can be described with the help of various non-Fourier models [7,34,45,46]. A certain kind of non-Fourier behavior can be constructed as shown in the following. This is only an illustration of how two parallel Fourier mechanisms can result in a non-Fourier-like temperature history. The idea is strongly motivated by the hierarchy of Fourier equations in the GK model [44] as mentioned previously; however, their interaction is not described in detail. The sample that we investigate now is only a hypothetical one; we may call it a "pseudo-matter". We consider in the following that the pseudo-matter, formed by parallel material strips, is wide enough that interface effects may be neglected, i.e., the strips behave like insulated parallel channels. We also consider that only the thermal conductivities are different, and the strips have the same mass density and specific heat. During the flash experiment, after the front-side energy input, a simple temperature equalization process takes place in the sample in the case of adiabatic boundary conditions. Since the flash method is well developed, the effects of the real measurement conditions (heat losses, heat gain, finite pulse time, etc.) are well treated in the literature. Figure 6 shows two temperature histories with thermal diffusivities of different magnitude; both are solutions of the Fourier heat equation.
The mathematical formula that expresses the temperature history in the adiabatic case is [47] ν(ξ, Fo) = 1 + 2 Σ_{m=1}^{∞} cos(mπξ) exp(−m²π²Fo), where ν is the dimensionless temperature, i.e., ν = (T − T₀)/(T_max − T₀), with T₀ the initial temperature and T_max the asymptotic temperature corresponding to equilibrium with adiabatic boundary conditions, ξ is the normalized spatial coordinate (ξ = 1 corresponds to the rear side) and Fo = a·t/L² stands for the Fourier number (dimensionless time variable). This is an infinite series that converges slowly for short initial time intervals. An alternative formula with faster convergence for Fo < 1 can be derived using the Laplace transform [48], wherein p is the Laplace transform of ν. In the further analysis we use Equation (32) to calculate the rear-side temperature history. So far, we have described two parallel heat-conducting layers without direct interaction between them; however, let us suppose that they can exchange energy only at their rear side, through a very thin layer with excellent conduction properties. Eventually, this models the role of the silver layer used in our experiments to close the thermocouple circuit and to ensure that we measure the temperature of that layer instead of any internal one of the material. In effect, the silver layer averages the rear-side temperature histories of the parallel strips. We considered the mixing of temperature histories using the formula p(Fo) = Θ p₁(a = 10⁻⁶ m²/s, Fo₁) + (1 − Θ) p₂(a = 2.5·10⁻⁷ m²/s, Fo₂), (33) that is, taking the convex combination of different solutions of the Fourier heat Equation (5). Figure 7 shows a few possible cases of mixing (a short numerical sketch of this construction is given at the end of this section). Outlook and Summary This pseudo-material virtual experiment only demonstrates that there may be several effects causing non-Fourier behavior of the registered temperature data. Here, the assumed mixing of "Fourier temperatures" is analogous to the GK equation in the sense of the hierarchy of Fourier equations: dual heat-conducting channels are present and interact with each other. However, the GK equation is more general: there is no need to assume a particular mechanism to derive the constitutive equation. Comparing Equation (6) to Equation (25), the hierarchy of Fourier equations appears in a different way. While (6) contains the zeroth and first order time derivatives of the Fourier equation, (25) instead contains its second order time and space derivatives. Recalling that the first form of Equation (25) is derived using the assumption that thermal expansion is present besides heat conduction, it is natural to compare it to a ballistic (i.e., thermal expansion induced) heat conduction model. Let us consider such a model from [7], where τ₁ and τ₂ are relaxation times. Equation (35) has been tested on experiments, too [16]. Eventually, the GK equation is extended with a third order time derivative, and the coefficients are modified by the presence of τ₂. In contrast to Equation (34), it does not contain any fourth order derivative. In effect, the existing hierarchy of Fourier equations is extended: instead of τ and κ², the terms (τ₁ + τ₂) and (κ² + aτ₂) appear within (35). Although it is still not clear exactly what leads to over-diffusive heat conduction, the presented possible interpretations and approaches can be helpful for understanding the underlying mechanism. This is not the first time that over-diffusive propagation has been measured experimentally, but it is the first time that its size dependence is considered.
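Returning to the strip-mixing construction of the pseudo-matter, the following short sketch shows how such a mixed rear-side history can be generated numerically. It is an illustration under stated assumptions (Parker's adiabatic rear-side series is written out explicitly, the mixing weight Θ is hypothetical), not the evaluation code used for Figure 7.

```python
import numpy as np

# Minimal sketch of the "pseudo-matter" construction described above: two
# insulated parallel strips, each obeying Fourier's law with its own
# diffusivity, are averaged at the rear side with weight Theta, cf. Eq. (33).

def rear_side_nu(Fo, n_terms=200):
    """Dimensionless adiabatic rear-side temperature after a front-side pulse,
    nu(xi=1, Fo) = 1 + 2*sum_m (-1)^m exp(-m^2 pi^2 Fo)."""
    m = np.arange(1, n_terms + 1)
    return 1.0 + 2.0 * np.sum((-1.0) ** m * np.exp(-m**2 * np.pi**2 * Fo))

L = 3.84e-3                        # sample thickness [m]
a1, a2 = 1.0e-6, 2.5e-7            # strip diffusivities [m^2/s], as in Eq. (33)
Theta = 0.6                        # mixing weight (hypothetical)

t = np.linspace(0.05, 60.0, 400)   # time [s]
nu_mix = np.array([Theta * rear_side_nu(a1 * ti / L**2)
                   + (1.0 - Theta) * rear_side_nu(a2 * ti / L**2) for ti in t])
# nu_mix rises quickly (fast channel) and then approaches 1 more slowly
# (slow channel), giving a non-Fourier-looking, over-diffusive-like history.
```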
The simplest thermo-mechanical coupling predicts a size dependence of material coefficients that can be relevant in certain cases. All three approaches lead to a system of partial differential equations that can be called hierarchical. Author Contributions: T.Fü. developed the thermo-mechanical model presented in Section 3. Section 4 was suggested by G.G., who designed the experiments together with P.V. Á.L., Á.R., T.Fo. and M.S. performed and analyzed the experimental data. R.K. compiled and composed the paper. All the authors contributed equally to the paper.
On the Oscillating Course of d hkl − sin 2 ψ Plots for Plastically Deformed, Cold-Rolled Ferritic and Duplex Stainless Steel Sheets : This work deals with non-linear d hkl − sin 2 ψ distributions, often observed in X-ray residual stress analysis of plastically deformed metals. Two different alloys were examined: duplex stainless steel EN 1.4362 with an austenite:ferrite volume ratio of 50:50 and ferritic stainless steel EN 1.4016. By means of an in situ experiment with high-energy synchrotron X-ray diffraction, the phase-specific lattice strain response under increasing tensile deformation was analysed continuously with a sampling rate of 0.5Hz. From Debye–Scherrer rings of nine different lattice planes {hkl}, the d hkl − sin 2 ψ distributions were evaluated and the phase-specific stresses were calculated. For almost all lattice planes investigated, oscillating courses in the d hkl − sin 2 ψ distributions were observed, already occurring below the macro yield point and increasing in amplitude within the elasto-plastic region. By comparing the loaded and the unloaded state after deformation, the contribution of crystallographic texture and plastically induced intergranular strains to these oscillations could be separated. For the given material states, only a minor influence of crystallographic texture was observed. However, a strong dependence of the non-linearities on the respective lattice plane was found. In such cases, a stress evaluation according to the sin 2 ψ method leads to errors, which increase significantly if only a limited ψ range is considered. Introduction Duplex stainless steels are widely used in mechanical and chemical engineering due to their excellent combination of properties in terms of strength, ductility, and corrosion resistance.These advantageous material properties are achieved via the interaction of two phases, ferrite and austenite, which both exist in large volume fractions.Many manufacturing processes of metal components involve non-uniform plastic deformations and thereby cause the development of residual stresses.These 'internal stresses' are superimposed on the external loads during operation and can decisively influence the material behaviour, for example the service life of components subjected to cyclic loads [1].In plastically deformed duplex stainless steels, phase-specific micro-residual stresses are observed, which are superimposed on the macro-residual stresses.These kinds of microstresses develop because the ferritic and austenitic phases differ in their mechanical behaviour.The sign and magnitude of the phase-specific micro-residual stresses are affected by the degree of plastic deformation and the phase-specific elasto-plastic behaviour, which depends, i.e., on the specific material composition, crystallographic texture, and previous heat treatments [2]. 
The most widely used method for the analysis of phase-specific residual stresses on polycrystalline materials is the sin 2 ψ method using X-ray diffraction [3].The method is based on the measurement of diffraction lines from specific lattice plane families of type {hkl} of one phase under various sample inclinations ψ for a fixed azimuthal direction ϕ.A classical sample-fixed coordinate system, indicating angles ψ and ϕ, is shown in Figure 1a.Following Bragg's law the lattice spacing d hkl in the direction of the scattering vector m(ϕ, ψ) can be determined from the X-ray wavelength λ and diffraction angle θ hkl , as schematically depicted in Figure 1b.From d hkl , the lattice strain ε hkl is derived considering the lattice spacing d hkl 0 corresponding to the stress-free state.In general, the stress calculation from the measured strain data is based on the assumption that the stress tensor, averaged over the measurement volume, causes a linear d hkl −sin 2 ψ distribution.Linear distributions occur when there is a surface-parallel, uniaxial or biaxial stress state which is sufficiently homogeneous, i.e., shows no steep in-depth gradient within the information depth.Furthermore, the material volume irradiated by the X-rays must contain a sufficient number of randomly oriented grains, i.e., the grain sizes are very small in comparison to the irradiated sample volume [4].If shear stresses are present normal to the specimen surface, the d hkl −sin 2 ψ distribution shows an elliptical course.However, in cold-formed polycrystalline materials, pronounced non-linearities, i.e., oscillating courses in the d hkl −sin 2 ψ plots, can occur.This is due to the phase-specific crystallographic texture (elastic anisotropy) and the plastically induced microstresses (plastic anisotropy), which are also denoted as intergranular stresses [5].In situations that pronounced oscillations are observed, the sin 2 ψ method can lead to erroneous results and should no longer be applied.It is known that near-surface residual stress depth gradients can also cause non-linear d hkl −sin 2 ψ distributions [1].Because in-depth stress gradients play no role in this study, this is not further discussed. The lattice spacing d hkl , determined by diffraction methods, always represents a selective mean value of those crystallite orientations whose lattice planes {hkl} are perpendicular to the respective measurement direction m(ϕ, ψ).In terms of strain, this mean value can be separated into two parts: a mean strain part ε hkl ϕ,ψ , which is related to the mean stress tensor σ ij , and a second part ε hkl,pl.ϕ,ψ , which is related to orientation-dependent microstresses caused by previous plastic deformations [6].At this point, it should be emphasised that the superscript 'pl.' 
does not indicate plastic strain; instead, it indicates elastic intergranular strain induced by plastic deformation.Generally, the relation between σ ij and ε hkl is described by stress factors F hkl ij , which can be calculated with knowledge of the single-crystal elastic anisotropy and the orientation distribution function (ODF) [7].The crystallite coupling within the polycrystal is thereby taken into account using appropriate mathematical models, e.g., according to Voigt [8], Reuss [9] or Eshelby/Kröner [10,11].A detailed description of the F ij calculation approaches following different models is given in [12].The measured strain ε hkl ϕ,ψ in the direction m(ϕ, ψ) of a plastically deformed, polycrystalline material is calculated as follows [6]: Thus, the plastically induced intergranular strains ε hkl,pl. ϕ,ψ are independent from the acting mean stress tensor and are instead due to the history of plastic deformation, which is usually not known. Several works have already dealt with the experimental analysis of phase-specific stresses in plastically deformed duplex stainless steels, e.g., [13][14][15][16][17][18][19][20].Although materials with comparable chemical compositions and similar phase fractions were examined, different conclusions were drawn regarding the phase-specific yield strength and the formation of phase-specific microstresses.On the one hand, this can be attributed to the fact that phase-specific strength is affected by the specific crystallographic texture (orientation strengthening) and by the differences in the precise phase-specific chemical composition (solid solution strengthening).On the other hand, different measurement and evaluation approaches were used, which can also lead to different stress results for such complex material states.A systematic study on the evolution of non-linear d hkl −sin 2 ψ distributions for several lattice planes {hkl} caused by plastic deformation and the accompanying error in residual stress evaluation has not yet been performed.Usually, only the residual stress state is analysed.Without a comparison of the same material state under additional external loading, the influences of texture and intergranular strains cannot be easily separated.Furthermore, the data are frequently obtained by diffraction experiments using conventionally generated X-rays (lab X-ray applications) in reflection mode.In this case, mostly only a limited sin 2 ψ range is accessible due to the absorption at high inclination angles (ψ) or simply due to the geometric constraints of the applied measurement setup, and the oscillatory courses of d hkl vs. sin 2 ψ are not necessarily visible. 
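The decomposition just described is what the classical sin²ψ evaluation linearises: for a surface-parallel, (quasi-)uniaxial mean stress and negligible intergranular strains, d^hkl is linear in sin²ψ and the stress follows from the slope divided by d₀·½s₂^hkl. A minimal sketch with synthetic data illustrates the procedure applied later to the measured distributions; the diffraction elastic constant and d₀ below are typical, assumed values, not those used in this work.

```python
import numpy as np

# Minimal sketch of the classical sin^2(psi) evaluation (illustrative only;
# the diffraction elastic constant and d0 are typical, assumed numbers).

half_s2 = 5.8e-6   # 1/2 s2^{hkl} in 1/MPa (typical order for ferrite {211})
d0 = 1.1702        # strain-free lattice spacing in Angstrom (assumed)

# Synthetic "measurement": linear d vs sin^2(psi) for a 300 MPa uniaxial
# stress plus a little noise.
rng = np.random.default_rng(0)
sin2psi = np.linspace(0.0, 1.0, 15)
sigma_true = 300.0  # MPa
d_meas = d0 * (1.0 + half_s2 * sigma_true * sin2psi) \
         + rng.normal(0.0, 2e-5, sin2psi.size)

slope, intercept = np.polyfit(sin2psi, d_meas, 1)
sigma_eval = slope / (d0 * half_s2)
print(f"evaluated stress: {sigma_eval:.0f} MPa (input {sigma_true:.0f} MPa)")
```

When pronounced oscillations are superimposed on such a distribution, the same regression is still possible, but, as discussed below, the slope then depends on the lattice plane and on the sin²ψ range covered.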
The use of high-energy synchrotron X-ray in transmission mode, however, enables the analysis of mean phase-specific stresses for metal samples having a thickness of up to a few millimetres.Here, the diffraction data contain integral information over the sample thickness; hence, depth gradients of the crystallographic texture or residual stresses are not resolved.In contrast, the information gained reflects an overall material response that is unaffected by local deviations due to near-surface effects.By means of a 2D detector, full diffraction rings of several lattice planes can be recorded for polycrystalline samples.After azimuthal segmentation of the diffraction pattern, the diffraction profiles can be analysed for various azimuthal directions.Using these means, d hkl −sin 2 ψ distributions can be evaluated based on a single exposure [21,22].This approach allows for the determination of lattice spacings with polar angles of up to |ψ| = 90 • .Thus, it is a valuable tool for the systematic analysis of the development of oscillatory d hkl −sin 2 ψ distributions with elastic and elasto-plastic deformations. In the present work, the phase-specific lattice strain responses of a cold-rolled duplex stainless steel sheet and a ferritic stainless steel sheet were analysed under increasing tensile deformations up to a total strain of about ε = 0.12.The aim was to analyse the development of oscillatory courses in d hkl vs. sin 2 ψ for several lattice planes for singlephase and two-phase materials and to obtain a better comprehension of the respective contributions of elastic and plastic anisotropy.Therefore, in situ loading experiments using 2D high-energy synchrotron X-ray diffraction were carried out at the P07B@PETRA III beamline at Deutsches Elektronen-Synchrotron (DESY) Hamburg, Germany.During uniaxial deformation in the elastic and elasto-plastic regime, entire Debye-Scherrer rings of several lattice planes {hkl} for the ferritic phases were detected by means of a flatpanel detector.In the case of the duplex stainless steel, the Debye-Sherrer rings were also found for the austentitic phase.From these, the direction-dependent lattice spacings d hkl ϕ,ψ were evaluated. The results are presented and discussed in the following order.At first, the continuous evolution of the phase-specific lattice strain is analysed for selected directions with respect to the loading direction.Thereafter, d hkl −sin 2 ψ plots of the individual lattice plane families of both materials are discussed for four particular load increments including the unloaded state.It is investigated whether linear regression over oscillating distributions leads to comparable stress results for different lattice planes if the entire range in sin 2 ψ (from 0.1) can be considered.The ferritic phase of the duplex stainless steel and the ferritic stainless steel exhibit the same crystal structure.In comparing the d hkl −sin 2 ψ distributions of the single-phase material and the two-phase material, it is investigated if the second phase has an influence on the non-linearities.Finally, the influences of intergranular strains and crystallographic texture on the d hkl −sin 2 ψ courses are separated by comparing the loaded and unloaded state. 
Initial Material State Duplex stainless steel EN 1.4362 (Alloy 2304, German grade X2CrNiN23-4) and ferritic stainless steel EN 1.4016 (AISI 430, German grade X6Cr17) were used in this study, both in a cold-rolled state with a sheet thickness of 1.5 mm.The abbreviations DSS for duplex stainless steel and FSS for the ferritic stainless steel are used throughout this work.The materials' chemical compositions, as determined by optical emission spectroscopy, are shown in Table 1.Metallographic analysis on the DSS revealed a nearly equal volume fraction of the phases ferrite (α, body-centred cubic) and austenite (γ, face-centred cubic), see Figure 2a.No further phases could be determined.The ferritic phase exhibits relatively large grains that are elongated in the rolling direction with a mean diameter of approximately 13 µm.In contrast, the austenitic phase has smaller grains of with a rather spherical shape with sizes of about 4 µm in diameter [19].The ferritic stainless steel has rather large globular ferrite grains of sizes within a range of 10 µm to 25 µm, see Figure 2b.In addition, small carbide segregations were observed, regularly arranged in lines along the rolling direction (RD).However, the volume fraction of carbides is rather low and does not exceed 5%.Hence, the ferritic stainless steel is considered as single-phase material within this work. The crystallographic textures of the initial states were analysed by lab X-ray experiments using a four-circle X-ray diffractometer with 1 mm collimated Fe-filtered Co-Kαradiation (photon energy E CoKα = 6.93 keV, wavelength λ CoKα = 1.789Å).On the secondary side, a 4 mm slit aperture was installed in front of the point detector.From the measured incomplete pole figures of ferritic lattice planes ({200}α, {211}α, {220}α) and austenitic lattice planes ({200}γ, {220}γ, {311}γ) the phase-specific ODFs ( f (g)) and main texture components {hkl} uvw of both materials were calculated using the Matlab toolbox MTEX [23].To account for the depth gradient, the texture was repeatedly characterised after step-wise layer removal until half of the sheet thickness was reached.Each layer had a thickness of about 250 µm and was removed by grinding and subsequent electrochemical polishing.The volume fractions of the main texture components as well as the overall texture index J, which describes the degree of anisotropy [24], are depicted in Figure 3 over the distance to the surface.Both phases show ideal texture components that are frequently observed for body-centred cubic (bcc) and face-centred cubic (fcc) metals after rolling.Major components of the austenitic phase of the DSS are Taylor {4 4 11} 11 11 8 (also named Dillamore), copper {123} 111 , S1 {124} 211 , brass {110} 112 and Goss {110} 001 orientations.The ferritic phases of the DSS and the FSS both have components of α-fibre ({100} 110 , {115} 110 , {112} 110 ) and γ-fibre ({111} 110 , {111} 112 ).Yet, the texture of the FSS is less pronounced in comparison to the ferritic phase of the DSS.According to the texture index J, the ferritic phase in the DSS also exhibits a sharper texture than the austenitic phase.For both materials (DSS and FSS), only a negligible variation of the texture vs. depth was observed. 
In Situ Loading Experiment Tensile specimens were cut out of the sheet metals with their longitudinal direction aligned parallel to the sheet's rolling direction.The sample volume experiencing an homogeneous strain has a length, width, and thickness of 21 × 8 × 1.5 mm.For the tensile tests, a miniature tensile testing machine from Walter+Bai AG, Switzerland, with an attached 10 kN load cell was used.The macro stress-strain curves were determined up to a maximum strain of about ε t = 0.12 using an extensometer.The strain rate was approximately ε = 4 × 10 −5 s −1 .During the in situ experiments, i.e., during X-ray exposure, the extensometer was dismounted to avoid shadowing effects. The in situ high-energy X-ray diffraction (HEXRD) experiments were carried out at beamline P07B@PETRA III at DESY in Hamburg, Germany.Monochromatic synchrotron X-rays with a photon energy of E HE = 87.1 keV (wavelength λ HE = 0.1423 Å) and a beam size of 1 mm in width and 0.6 mm in height were used.Diffraction patterns were detected by a Perkin-Elmer flat -panel detector of type XRD 1621 with a 2048 × 2048 array of 200 × 200 µm pixels having a distance to the sample of 1365 mm.The experimental setup was calibrated using a Fe-powder reference sample. Determination of sin 2 ψ Courses from Debye-Scherrer Rings To enable direction-dependent data evaluation, the diffraction patterns were sectioned into 72 diffraction profiles via integration along the azimuthal angle η in the detector plane in steps of ∆η = 5 • using the software FIT2D [25].As exemplified for η = 90 • in Figure 4, each of the intensity vs. 2θ profiles corresponds to one azimuthal direction η.After linear background subtraction, the individual diffraction peaks were fitted with pseudo-Voigt functions using a self-written MATLAB routine.From the evaluated mean 2θ hkl positions, the lattice spacings d hkl were calculated according to Bragg's law (see Equation ( 1)).For better comparability of the evolution of lattice spacings of different lattice planes d hkl , they are expressed by the changes of the uniform lattice spacing d 100 using the following equation: In the laboratory system, each measurement direction is defined by the scattering vector m, which depends on the azimuthal angle η in the detector plane and the particular diffraction angle θ hkl , as schematically depicted in Figure 5a.For the evaluation of strains and stresses, the scattering vectors have to be expressed in respect to a sample-fixed reference system, which is generally defined by an azimuthal angle ϕ and a polar angle ψ.In Figure 5, two different stereographic projections of the scattering vectors m are illustrated, assuming a diffraction angle θ = 4.1 • .Figure 5b depicts the projection perpendicular to the sheet's normal direction (ND), whereas Figure 5c depicts the projection perpendicular to the sheet's transverse direction (TD).The respective projections are indicated by indices.The coordinate transformation was performed using a rotation matrix as described in [26].In classical application of the sin 2 ψ method, the stress in direction ϕ is evaluated from measured lattice spacings for various sample inclinations ψ under constant azimuthal direction ϕ.The path of the scattering vectors in coordinate system (c) approximately describes a path of various ψ TD angles for a fixed azimuthal angle ϕ TD = 0 • .The negligible offset from the LD-TD plane is simply due to the diffraction angle θ (compare Figure 5a).Considering the very small diffraction angles for the given 
experimental setup (θ hkl max = θ 220α ≈ 4.1 • ), this angular offset is ignored in further evaluation and the following approximations are made: Thus, the d ϕ,ψ -sin 2 ψ distributions dealt with in this work can be assigned to the sample-fixed LD-TD plane.Because the angles ϕ and ψ do not correspond exactly to ϕ TD and ψ TD , they are not indexed with 'TD'.The stress error made by this assumption was calculated assuming non-textured ferritic steel with a simulated uniaxial stress state (σ LD = 100 MPa, σ LD = σ LD = 0).For the maximum diffraction angle investigated (θ 220α ) the stress error is less than 0.5% and is thus within the range of usual measurement error.Comparable evaluation approaches have already been proposed by [21,22,27].To correct any inaccuracies in the determination of the beam centre, the lattice spacings d hkl ϕ,ψ determined from opposing azimuthal angles within the Debye-Scherrer rings are averaged: Stress Evaluation Although significant oscillations in the d hkl −sin 2 ψ distributions are expected in this study, the phase-specific stress evaluation is carried out according to the sin 2 ψ method.In this way, it can be verified, if the stress values determined from different lattice planes correlate, given that the entire range 0 ≤ sin 2 ψ ≤ 1 is covered.Gradients of residual stresses cannot be resolved with the transmission experiment, as only integral values across the sheet thickness are determined.In multi-phase materials however, there are homogeneous phase-specific micro-residual stresses which are balanced by the homogeneous micro-residual stresses of the other phases.The presence of such homogeneous microresidual stresses must be considered for all measurement directions in the sample reference system.Hence, only the differences of the mean stresses σ LD − σ TD ϑ of the respective phase ϑ can be obtained by the applied method: Because the uniform lattice spacing for the stress-free state d 100 0 of the respective phases is unknown, it is replaced by the average of the uniform lattice spacings of all analysed interference lines measured for the materials initial state.The stress error introduced by this assumption is less than 0.1 % for most materials if stresses are evaluated according to the sin 2 ψ method [4].In Table 2, the used d 100 0 values for DSS and FSS are provided.The diffraction elastic constants 1 2 s hkl 2 , as specified in Table 3, are calculated according to the self-consistent model, known as the Eshelby/Kröner model, following the iterative approach proposed by [28].The single-crystal elastic constants for the ferritic and austenitic phases were taken from [29,30], respectively. 
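As a quick plausibility check of the quoted geometry, Bragg's law can be evaluated for the {220}α reflection at 87.1 keV. In the sketch below, the uniform lattice spacing d¹⁰⁰ is taken as the usual cubic rescaling d^hkl·√(h²+k²+l²), which is an assumption since the corresponding equation is not reproduced in the text, and the ferrite lattice parameter is a textbook value rather than a fitted one.

```python
import math

# Quick geometry check for the transmission setup (illustrative only).

E_keV = 87.1
lam = 12.398 / E_keV            # X-ray wavelength in Angstrom (hc = 12.398 keV*A)
a_ferrite = 2.866               # assumed bcc lattice parameter in Angstrom

h, k, l = 2, 2, 0
d_hkl = a_ferrite / math.sqrt(h * h + k * k + l * l)
theta = math.degrees(math.asin(lam / (2.0 * d_hkl)))
d_100 = d_hkl * math.sqrt(h * h + k * k + l * l)   # uniform lattice spacing

print(f"lambda = {lam:.4f} A, theta_220 = {theta:.2f} deg, d100 = {d_100:.3f} A")
# lambda is about 0.142 A and theta_220 about 4.0 deg, consistent with the
# small diffraction angles quoted for this setup.
```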
Lattice Strain Evolution of DSS for Selected ψ Angles At first, the continuously recorded measurement data are shown for three selected polar angles ψ = 0 • , ψ = 45 • and ψ = 90 • in order to illustrate the evolution of lattice strain response from both phases of the DSS sample with increasing load.The strain response is given as a change in lattice strain ∆ε hkl ϕ,ψ , which is calculated with respect to the lattice spacing of the initial state d hkl init : When the lattice strain versus applied load deviates from a linear dependency, oscillations in the d hkl −sin 2 ψ plots can also be expected.The theoretical strain evolution for a linear dependency is calculated by the stress factors F 11 (ϕ, ψ) multiplied with the nominal applied stress σ n .The stress factors F 11 (ϕ, ψ) for the three particular directions ψ were calculated using the software isoDEC applying the Eshelby/Kroener model [31].The phase-specific ODFs from the initial material state were taken into account as input data. In Figures 6a and 7a the Debye-Scherrer rings from the austenitic and the ferritic phase are highlighted and the selected ψ directions are marked (ψ = η).The solid lines in Figures 6b and 7b depict the corresponding changes in lattice strain ∆ε hkl ϕ,ψ of the investigated austenitic and ferritic lattice planes, respectively.The dotted lines indicate the theoretical strain evolution, calculated by F 11 (ϕ, ψ) multiplied with σ n .The experimentally determined strain development ∆ε hkl ϕ,ψ of both phases is accurately predicted by the calculated values for almost all investigated lattice planes as long as the applied load is clearly below the macro yield strength (σ n R eS , compare Figures 6b and 7b).Hence, both constituents of the DSS share approximately the same load σ n within the purely elastic deformation, which seems plausible because of their only minor differences in elastic properties.When the applied load approaches the macro yield strength, significant deviations from the linear stress-strain relationship are observed.This is due to the plastic anisotropy of the individual crystallites, i.e., the direction-dependent yield strength [32].Crystallite orientations with low strength with respect to the direction of the applied load show plastic yielding already below the macro yield strength.This causes additional elastic strain within the crystallites that have a high-strength orientation, due to the constraints of the surrounding polycrystal [33].The selective nature of diffraction methods means that those strain heterogeneities become visible, especially when various {hkl} interference lines are studied.Depending on the particular polar angle ψ, positive or negative deviations from linear stress-strain-dependency are observed, which are related to oscillations in a d hkl −sin 2 ψ plot.The unfilled symbols indicate the residual strain after unloading.It can be seen that for both phases, the introduced intergranular strains also remain in the unloaded state. 
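The purely elastic expectation discussed above can be made concrete with a small numerical check. The sketch uses the quasi-isotropic limit and assumed diffraction elastic constants for ferrite {211}, not the Eshelby/Kröner values of Table 3, and only illustrates why measurement directions parallel and perpendicular to the load move in opposite directions in Figures 6 and 7 as long as the response is linear elastic.

```python
# Illustrative check of the elastic expectation Delta_eps = F11 * sigma_n in
# the quasi-isotropic limit (assumed DEC values for ferrite {211}).

s1 = -1.27e-6      # 1/MPa (assumed)
half_s2 = 5.81e-6  # 1/MPa (assumed)
sigma_n = 400.0    # applied nominal stress in MPa, below the macro yield point

eps_parallel = (s1 + half_s2) * sigma_n      # measurement direction along LD
eps_perpendicular = s1 * sigma_n             # measurement direction normal to LD

print(f"eps parallel to load:      {eps_parallel * 1e6:7.1f} x 10^-6")
print(f"eps perpendicular to load: {eps_perpendicular * 1e6:7.1f} x 10^-6")
# Opposite signs: tensile lattice strain along the load, compressive
# (Poisson-type) lattice strain perpendicular to it.
```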
Furthermore, in Figure 6b, a difference between the lattice strains determined by {111}γ and {222}γ interference lines is evident for applied loads beyond R eS .This systematic shift is due to the stacking fault probability in the austenitic phase which increases with plastic deformation and has been frequently observed for materials that exhibit a low stacking fault energy (SFE) [34].For the ferritic lattice planes {110}α and {220}α, no such strain differences exist (see Figure 7b), which can be explained by the usually high SFE of body-centred cubic metals. Stress Evaluation According to the sin 2 ψ Method The d 100 −sin 2 ψ plots are determined from Debye-Scherrer rings as described in detail in Section 2.3.For reasons of clarity, only results of four characteristic load states during the tensile deformation are shown.Those are the initial unloaded state (I), the macro yield point σ t = R eS (II), the maximum deformed state under load (ε t = 0.12, (III)), and the unloaded deformed state (IV).From two crystallographic-equivalent lattice planes, only one is depicted, namely {222}γ for the austenitic phase and {220}α for the ferritic phase.According to the theory of [34], it is assumed that the shift of lattice strain due to deformation stacking faults is non-directional.Hence, it does not affect the slope of d hkl −sin 2 ψ distributions and will not be considered further in this work.The stress differences σ LD −σ TD of the respective phase are calculated from the slopes of the linear regressions according to Equation ( 6) using the isotropic diffraction elastic constants given in Table 3. Stress errors are calculated for each sin 2 ψ evaluation, taking into account the variances of the regression line as described in [1]. DSS-Austenitic Phase In Figure 8, the d 100 −sin 2 ψ distributions for the four load states (I-IV) determined from austenitic lattice planes {200}γ (a), {220}γ (b), {311}γ (c) and {222}γ (d) are shown.The uniform lattice spacings d 100 are depicted over sin 2 ψ.Positive and negative ψ angles are indicated with white-filled and black-filled symbols, respectively.The variances in the individual d 100 values are propagated from the peak fit and are given by error bars.The result of the linear regression is shown as a red dotted line from which the stresses σ LD −σ TD γ are calculated.In the initial state (I), all austenitic lattice planes show a linear d 100 −sin 2 ψ dependency.Merely, minor scattering can be observed which results in small stress errors (≤ ± 3 MPa).The initial residual stress difference σ LD −σ TD γ of the austenitic phase is negative in sign, and the average from all lattice planes amounts to about −22 MPa. 
In the load state II, tendencies towards oscillating deviations are already visible for the {200}γ and the {220}γ planes.The stress evaluation of different lattice planes leads to an average value of about 453 MPa, from which {200}γ deviates the most, by approximately 19 MPa.The calculated stress errors are ±13 MPa at maximum.It is well known that oscillatory d 100 −sin 2 ψ distributions observed for {h00} or {hhh} lattice planes of cubic centred materials cannot be caused by crystallographic texture [35].Consequently, the non-linearities visible for {200}γ already indicate the influence of plastically induced intergranular strains.With further plastic deformation (load state III) the oscillations for {200}γ and {220}γ increase significantly in amplitudes.The d hkl −sin 2 ψ distributions of {311}γ and {222}γ lattice planes now also exhibit deviations from linearity, although those are much smaller compared to the latter.The evaluated stresses σ LD −σ TD γ for this load state range from 651 MPa to 906 MPa, with an average value of about 786 MPa.The oscillations result in higher calculated stress errors, which, however, significantly underestimate the true uncertainty in stress evaluation if the results from different lattice planes are considered. After unloading (load state IV), the deviations from the linear distribution almost completely remain for all lattice planes.The average stress difference σ LD −σ TD γ from all investigated planes calculates to about 51 MPa.However, it should be pointed out that depending on the respective lattice plane compressive residual stress ({220}γ), tensile residual stress ({200}γ, {311}γ) and residual stress values of close to zero are determined.A direct comparison of load state I and IV reveals the magnitude of strain heterogeneities introduced by the elasto-plastic tensile deformation, which is observable for all examined austenitic lattice planes.At load state II, minor d 100 oscillations are already visible for all lattice planes except for {211}α.However, the evaluated stresses of different lattice planes show comparable results, with an average value of about 553 MPa from which {211}α deviates the most, by approximately 23 MPa.The maximum stress error from linear regression is about ±22 MPa. For load state III, it can be observed that the non-linearities of d 100 vs. sin 2 ψ are increased for lattice planes {200}α and {220}α, whereas {211}α still shows an almost linear distribution.The stress values σ LD −σ TD α range from approx.732 MPa to 814 MPa with an average value of about 765 MPa.Whereas the error in stress calculation increases for lattice planes {200}α and {220}α, it decreases for {200}α compared to the previous load state II.This is due to the reduced local fluctuations from a continuous d 100 −sin 2 ψ distribution, which is evident for all lattice planes after plastic deformation. 
As already observed for the austenitic phase, the oscillations persist after unloading (load state IV).The residual stress difference σ LD −σ TD α from all three lattice planes averages to about −71 MPa with a minimum value of approx.−131 MPa and a maximum value of −25 MPa for {220}α and {200}α, respectively.A direct comparison of load states I and IV shows the extent of the oscillations introduced by tensile plastic deformation with simultaneous reduction in the local fluctuations which were present in the initial state.The latter indicates that the domain size is reduced significantly due to the increase in lattice defects caused by plastic deformation.This hypothesis could be confirmed by a peak profile analysis on the measurements data set according to Williamson-Hall, which allows for the separation of size and strain effects on diffraction line broadening [36].However, the measurement statistic of the interference profiles was not sufficient for a precise quantitative analysis of the respective contributions, hence, results on this evaluation is not presented here.Nonetheless, from comparison of load state I and IV a decrease in average domain sizes after plastic deformation could be qualitatively confirmed for both, the ferritic and austenitic phases. FSS-Ferritic Phase The results obtained from the FSS sample are presented in Figure 10 in the same manner as for the DSS sample.The load states I-IV correspond to equivalent positions on the macro stress strain curve of the FSS.The d 100 −sin 2 ψ distributions in the initial state (I) show a nearly horizontal line for all three lattice planes.The residual stress difference σ LD −σ TD α is close to zero with an average value of about −9 MPa confirming the expected stress-free state for a single-phase steel.Compared to the ferritic phase of the DSS even higher local fluctuations in d 100 vs. sin 2 ψ are determined, which supports the aforementioned assumption of grain size influences because the FSS exhibits the largest grains (see Figure 2). In load state II the average stress obtained from all lattice planes amounts to approx.291 MPa.Any oscillations that may already be present in the d 100 −sin 2 ψ distribution are superimposed by the local scattering deviations and can therefore not be clearly recognised. Those local fluctuations reduce significantly after plastic deformation (load state III) and the d 100 −sin 2 ψ oscillations are clearly visible for {200}α and {220}α planes.The average stress determined by the sin 2 ψ method is about 499 MPa, whereby {211}α, the only lattice plane with linear d 100 −sin 2 ψ distribution, shows the highest stress, with approximately 519 MPa. The non-linear distributions that are also present in the unloaded state (IV) lead to considerable differences in the evaluated residual stresses based on different lattice planes.The residual stress, determined by the almost linear d 100 −sin 2 ψ distribution of {211}α, is again close to zero with a value of about σ LD −σ TD α = −3 MPa. 
Comparison of DSS and FSS For equivalent lattice planes of the deformed FSS (Figure 10(IVa-c)) and the ferritic phase of the deformed DSS (Figure 9(IVa-c)), comparable features in the oscillating d 100 −sin 2 ψ distributions are observed.Hence, these oscillations are not affected by the presence of a second phase, i.e., the austenitic phase of the DSS.The lower oscillation amplitudes for the FSS compared to those observed for the ferritic phase of the DSS are due to the lower yield strength and thus the lower number of microstresses caused by plastic deformation. For a better overview, the evaluated stress values are listed together with the applied loads in Tables 4 and 5 for FSS and DSS, respectively.If stress values are determined from d 100 −sin 2 ψ distributions showing significant or marginal oscillations, they are shaded with light red of light pink, respectively.Because the FSS is considered a single-phase material, no homogeneous phase-specific residual stress should be observable.The measured data were obtained by a transmission experiment, so that a macro-residual stress depth gradient always integrates to zero in order to fulfil the equilibrium condition of residual stresses.This assumption is confirmed by the very low stress values determined by two lattice planes in the initial state and by the {211}α planes after unloading (Table 4).In the DSS, however, the mean phase-specific residual stresses over the sample thickness are not equal to zero because they are balanced by the mean stresses of the other phase.Because both phases have approximately equal volume fractions, the phase-specific micro-residual stresses of both phases must add up to zero.This is confirmed by the average stress differences in the initial state (I), which amount to about σ LD −σ TD γ = −22 MPa and σ LD −σ TD α = 21 MPa for the austenitic and ferritic phases, respectively.The unloaded state after deformation showed σ LD −σ TD γ = 51 MPa for the austenitic phase and σ LD −σ TD α = −71 MPa for the ferritic phase.The fact that the phase-specific residual stresses change due to uniaxial deformation shows that ferritic and austenitic phase exhibit different strength and/or hardening behaviour with respect to the direction of loading (LD RD).A further evaluation of the phase-specific triaxial stress tensor requires the precise lattice parameters for the stress-free state d 100 0 , which are not known in the present state and are difficult to determine.σ hkl TD must not be assumed to be zero because significant hydrostatic phase-specific stresses can arise in multi-phase materials.Therefore, the phase-specific hardening behaviour unfortunately cannot be derived from the presented results. 
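A small consistency check of the stress balance argued above can be done directly with the numbers quoted in the text, assuming exactly equal phase volume fractions (an idealisation of the measured roughly 50:50 ratio).

```python
# Consistency check of the phase-specific stress balance (values from the text;
# volume fractions assumed to be exactly 50:50).

f_alpha = f_gamma = 0.5

initial = {"alpha": 21.0, "gamma": -22.0}     # MPa, load state I
unloaded = {"alpha": -71.0, "gamma": 51.0}    # MPa, load state IV

for name, s in [("initial state (I)", initial), ("after deformation (IV)", unloaded)]:
    balance = f_alpha * s["alpha"] + f_gamma * s["gamma"]
    print(f"{name}: weighted sum = {balance:+.1f} MPa")
# The initial state balances almost exactly; the deformed state leaves a
# residual of about -10 MPa, i.e., of the order of the lattice-plane-to-
# lattice-plane scatter discussed in the text.
```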
Influence of Texture on d 100 -sin 2 ψ-Oscillations For the initial state (load state I) and the macro yield point (load state II), no considerable oscillations in the d 100 −sin 2 ψ distributions were observed for DSS and FSS.It is therefore assumed that no significant plastically induced intergranular strains ε pl.existed in the initial states.Furthermore, it is concluded that the initial crystallographic textures only minorly affect non-linear d 100 −sin 2 ψ distributions for the given material states.However, it should be noted that texture generally develops through plastic deformation.The observed significant increase in non-linearities with further tensile deformation might thus be affected by both elastic anisotropy (i.e., the crystallographic texture) and plastic anisotropy (i.e., intergranular strains).At this point, it should be mentioned that the crystallographic texture also influences the development of plastically induced intergranular strains.It is expected that different incompatibility stresses arise during plastic deformation of nontextured materials than in highly textured materials.However, no variation in textures was investigated within this study.The term 'influenced by texture' in this work is therefore exclusively associated with the effect of elastic anisotropy on the oscillatory courses in d hkl vs. sin 2 ψ. To separate the effects of texture and plastically induced strains, the results obtained for load state III and load state IV were examined in more detail.Generally, it can be assumed that the texture and the plastically induced intergranular strains ε pl.do not change during unloading given that the unloading process is purely elastic: The change in elastic strain between the loaded and unloaded state therefore only depends on the loading stress ∆ σ ij .Equations ( 2) and ( 8) yield: With respect to the change of the lattice spacing d 100 , the equation is: In Figure 11a, this approach is exemplified for the {220}γ lattice plane.The strongly pronounced oscillations which are present in load state III and IV disappear almost completely if the difference in lattice spacings of both states is calculated (load state V = III-IV).The remaining deviations from the regression line that might be caused by the crystallographic texture are comparatively small.Figure 11b depicts the same measurement results but here only d hkl values for the lower-half range of sin 2 ψ are considered, which correspond to sample inclinations |ψ| ≤ 45 • .In laboratory X-ray applications, the ψ range is frequently limited to such low angles due to the restrictions of the used instruments (e.g., mobile diffractometers) or shadowing caused by the geometry of the sample being investigated.A comparison of Figure 11a,b shows that the evaluated stresses differ drastically (IIIa/b) and even change signs for the residual stress state (IVa/b).Even for the difference of both load states (Va/b), in which the effect of plastically induced intergranular strains can be excluded, a considerable stress difference is evident.In the case of plastically deformed material states, it must be taken into account that non-linearities might be recognisable only in the upper tilt angle range.A stress evaluation based on the apparently linear d hkl −sin 2 ψ distribution within the lower ψ angle range in this case leads to strongly erroneous stress results. 
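A purely synthetic illustration of this range effect may be helpful before returning to Figure 11: an oscillating d vs sin²ψ distribution is fitted once over the full range and once over the restricted range sin²ψ ≤ 0.5 (|ψ| ≤ 45°). All numbers below are made up for demonstration and do not reproduce the measured distributions; the diffraction elastic constant and d₀ are assumed values.

```python
import numpy as np

# Synthetic demonstration: effect of a limited sin^2(psi) range on the fitted
# stress when plastically induced oscillations are present.

half_s2, d0 = 5.8e-6, 1.1702          # assumed DEC [1/MPa] and d0 [Angstrom]
sigma_true = 200.0                    # MPa, "true" mean stress

s2p = np.linspace(0.0, 1.0, 21)
oscillation = 1.5e-4 * np.sin(3.0 * np.pi * s2p)   # intergranular-strain-like term
d = d0 * (1.0 + half_s2 * sigma_true * s2p) + d0 * oscillation

def fitted_stress(mask):
    slope = np.polyfit(s2p[mask], d[mask], 1)[0]
    return slope / (d0 * half_s2)

print(f"full range     : {fitted_stress(s2p <= 1.0):6.0f} MPa")
print(f"sin^2(psi)<=0.5: {fitted_stress(s2p <= 0.5):6.0f} MPa")
# The restricted fit deviates strongly from the full-range value because the
# regression then samples only part of an oscillation period.
```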
The comparison of Figure 11a,b reveals that the stress errors, which are calculated from variances in linear regression, are far from the true stress inaccuracies. From all investigated lattice planes of the ferritic and austenitic phases within this study, only the {211}α plane remained almost unaffected regarding oscillatory d 100 −sin 2 ψ distributions for the applied deformation. Conclusions The in situ HEXRD experiments have shown that significant oscillations in the d hkl −sin 2 ψ distributions arise during elasto-plastic tensile deformation for both materials investigated: duplex stainless steel EN 1.4362 and ferritic stainless steel EN 1.4016. A comparison of the results from selected load states and deformation states resulted in the following findings, irrespective of the material considered: • Local fluctuations in d hkl −sin 2 ψ, which are present in the initial state, significantly reduce during tensile plastic deformation, presumably due to the decrease in domain sizes. • The oscillations of the d hkl −sin 2 ψ distributions, observed after plastic deformation, are only minorly affected by crystallographic texture.This is due to the rather weak textures of both cold-rolled steel grades investigated. • In contrast, the non-linearities are predominantly caused by plastically induced intergranular strains.Hence, they are observed also for lattice planes which are not susceptible to crystallographic texture ({h00}, {hhh}). • The crystallographic texture's influence on the development of plastically induced intergranular strains could not be analysed because only one initial texture state was investigated. • The oscillations occur for almost all examined interference lines, although they differ strongly in characteristics and amplitudes depending on the respective lattice planes.• Stress evaluation by linear regression of oscillating d hkl −sin 2 ψ distributions varies for different lattices planes, even if the entire sin 2 ψ range is covered. • The non-linearities frequently occur, in particular, under higher tilt angles, which cannot be accessed with stationary and mobile diffractometers using conventionally generated X-rays (lab X-ray applications).Thus, in case of plastically deformed material states, stress evaluation according to sin 2 ψ method might be highly erroneous.Even if presumably linear distributions are obtained within the limited sin 2 ψ range, the results must be interpreted with caution. Regarding the respective material, following conclusions can be drawn: • No phase-specific residual stresses are determined for the FSS.The stresses determined from lattice plane {211}α are in good accordance with the applied stress values. • The DSS exhibits phase-specific microstresses, which change with plastic deformation.The tensile residual stress of the ferritic phase is balanced by compressive residual stress of the austenitic phase prior to the deformation.After uniaxial deformation, the inverse is true. • Because only the stress differences σ LD −σ TD α,γ were determined, the phase-specific strain hardening could not be derived. • For the ferritic phases of DSS and FSS, similar characteristics in the d hkl −sin 2 ψ oscillations were observed.Thus, their evolution in the DSS is barely affected by the austenitic phase. 
We believe that the results of this study are well suited for the validation of models used for the prediction of elasto-plastic material behaviour.This concerns, for example, crystal plasticity models or EPSC (elasto-plastic self-consistent) models, which are able to predict the evolution of intergranular strains and texture during plastic deformation. Figure 1 . Figure 1.Sample reference system with scattering vector m defined by azimuthal angle ϕ and sample inclination ψ (a).Schematic visualisation of the lattice spacing d hkl and diffraction angle θ hkl (b). Figure 2 . Figure 2. Micrographs of longitudinal sections of duplex stainless steel 1.4362 (a) and ferritic stainless steel 1.4016 (b), determined by light optical microscopy after etching with Lichtenegger-Bloech and V2A etchant, respectively; rolling direction (RD) and normal direction (ND) are indicated. Figure 3 . Figure 3. Depth gradients of crystallographic texture from sheet surface to centre of the austenitic phase (a) and ferritic phase (b) of the DSS and the FSS (c), depicted by volume fractions of the main texture components (top row) and the texture index J (bottom row). Figure 4 . Figure 4. Exemplary diffraction profile of the duplex stainless steel sample after azimuthal integration of a segment over ±2.5 • at η = 90 • (a) and experimental setup with tensile testing machine and flat-panel detector (b). Figure 5 . Figure 5. Schematic visualisation of the scattering vector orientations m(θ, η) in the laboratory system (a).Scattering vectors for a specific diffraction angle of θ = 4.1 • in the sample-fixed system as stereographic projection perpendicular to the sheet's normal direction (ND), defined by azimuthal angle ϕ ND and polar angle ψ ND (b) and perpendicular to the transverse direction (TD), defined by azimuthal angle ϕ TD and polar angle ψ TD (c). Figure 11 . Figure 11.d 100 ϕ,ψ -sin 2 ψ plots for the {220}γ lattice planes of the DSS for the deformed state (III), deformed state after unloading (IV) and the difference between loaded and unloaded state (V); stress evaluation is performed considering the full sin 2 ψ range 0 ≤ sin 2 ψ ≤ 1 (a) and considering only the lower half 0 ≤ sin 2 ψ ≤ 0.5 for demonstration purposes (b). Table 3 . Diffraction elastic constants 1 2 s hkl 2 used for the austenitic phase and the ferritic phase. Table 4 . Stresses in the ferritic phase σ LD −σ TD α of the FSS for different load states (LS) at the applied true stress σ t and the true strain ε t ; evaluation according to sin 2 ψ method. Table 5 . Stresses in the austenitic phase σ LD −σ TD γ and ferritic phase σ LD −σ TD α of the DSS for different load states (LS) at the applied true stress σ t and the true strain ε t ; evaluation according to sin 2 ψ method.
Helicity amplitudes in $B \to D^{*} \bar{\nu} l$ decay We use a recent formalism of the weak hadronic reactions that maps the transition matrix elements at the quark level into hadronic matrix elements, evaluated with an elaborate angular momentum algebra that allows finally to write the weak matrix elements in terms of easy analytical formulas. In particular they appear explicitly for the different spin third components of the vector mesons involved. We extend the formalism to a general case, with the operator $\gamma^\mu -\alpha\gamma^\mu \gamma_5$, that can accommodate different models beyond the standard model and study in detail the $B \to D^{*} \bar{\nu} l$ reaction for the different helicities of the $D^*$. The results are shown for each amplitude in terms of the $\alpha$ parameter that is different for each model. We show that $\frac{d \Gamma}{d M_{\rm inv}^{(\nu l)}}$ is very different for the different components $M=\pm 1, 0$ and in particular the magnitude $\frac{d \Gamma}{d M_{\rm inv}^{(\nu l)}}|_{M=-1} -\frac{d \Gamma}{d M_{\rm inv}^{(\nu l)}}|_{M=+1} $ is very sensitive to the $\alpha$ parameter, which suggest to use this magnitude to test different models beyond the standard model. We also compare our results with the standard model and find very similar results, and practically identical at the end point of $M_{\rm inv}^{(\nu l)}= m_B- m_{D^*}$. In the present work we retake this line of research and study the polarization amplitudes in semileptonicB → Vνl decays, applied in particular to theB → D * ν l reaction. We look at the problem from a different perspective to the conventional works where the formalism is based on a parametrization of the decay amplitudes in terms of certain structures involving Wilson coefficients and form factors. A different approach was followed recently in the study of B or D weak decays into two pseudoscalar mesons, one vector and a pseudoscalar and two vectors [40]. Starting from the operators of the standard model at the quark level, a mapping is done to the hadronic level and the detailed angular momentum algebra of the different processes is carried out leading to very simple analytical formulas for the amplitudes. By means of that, reactions likeB 0 → D − s D + , D * − s D + , D − s D * + , D * − s D * + , and others, can be related up to a global form factor that cancels in ratios by virtue of heavy quark symmetry. The approach proves very successful in the heavy quark sector and, due to the angular momentum formalism used, the amplitudes are generated explicitly for different third components of the spin of the vectors involved. In view of this, the formalism is ideally suited to study polarizations in these type of decays. Work along the line of [40] is also done in [41] in the study of the semileptonic B, B * , D, D * decays intoνl and a pseudoscalar or vector meson. Once again, we can relate different reactions up to a global form factor. If one wished to relate the amplitudes of different spin third components for the same process, the form factor cancels in the ratio and the formalism makes predictions for the standard model without any free parameters. In the present work we extend the formalism and allow a (γ µ − αγ µ γ 5 ) structure for the weak hadronic vertex which makes it easy to make predictions for different values of α that could occur in different models BSM (α = 1 here for the SM). We evaluate different ratios for the B → D * ν l reaction. 
Work on this particular reaction, looking at the helicity amplitudes within the standard model, was done in [42]. A recent work on this issue is presented in [43] where the B → D * ν τ τ is studied separating the longitudinal and transverse polarizations. The same reaction, looking into τ and D * polarization, is studied in [44]. Helicity amplitudes are also discussed in the relatedB * → P lν l reactions in the recent paper [45]. The formalism of Ref. [41] produces directly the amplitudes in terms of the third component of the D * spin along the D * direction. This corresponds to helicity amplitudes of the D * . The formulas are very easy for these amplitudes and allow to understand analytically the results that one obtains from the final computations. Not only that, but they indicate which combinations one should take that make the results most sensitive to the parameter α that will differ from unity for models BSM. We find some observables which are very sensitive to the value of α, which should stimulate experimental work to investigate possible physics BSM. II. FORMALISM We want to study the B → D * ν l decay, which is depicted in Fig. 1 for B − → D * 0ν l l − The Hamiltonian of the weak interaction is given by where the C contains the couplings of the weak interaction. The constant C plays no role in our study because we are only concerned about ratios of rates. The leptonic current is given by and the quark current by In the evaluation of B − → D * 0ν l l − decay we need where t is the transition amplitude, and for simplicity which can be easily obtained with the result [46] L αβ = 2 where we adopt the Mandl and Shaw normalization for fermions [47]. In Ref. In evaluating the quark current, we use the ordinary spinors [48] where χ r are the Pauli bispinors and m, p and E p are the mass, momentum and energy of the quark. As in Ref. [46] we take where m B , p B , and E B are the mass, momentum and energy of the B meson, and the same for the c quark related to the D * meson. Theses ratios are tied to the velocity of the quarks or B mesons and neglect the internal motion of the quarks inside the meson. We evaluate the matrix elements in the frame where theνl system is at rest, where p B = p D * = p, with p given by where M (νl) inv is the invariant mass of the νl pair. By using Eq. (8) we can write and A , B would be defined for the D * meson, simply changing the mass in Eq. (7). In the present work, we are only interested in the B − → D * 0ν l l − decay, which means J = 0, J = 1 decay. As in [41] we need to evaluate L αβ Q α Q * β which sums over the polarizations ofν l l, but keeping M fixed. We have where M 0 and N i , written in spherical coordinates, is with C(· · · ) a Clebsch-Gordan coefficient. In addition to the p dependence (and hence M (νl) inv ) of these amplitudes, in [41] there is an extra form factor coming from the matrix element of radial B and D * quark wave functions. However, in our approach we normalize the different helicity contributions to the total and the effect of this extra form factor disappears. The magnitude L ij N i N * j can be written in spherical coordinates as M in Eqs. (15), (16), and (17) stands for the third component of the D * spin in the direction of D * . Hence these are the helicities of the D * . Note that in the boost from the B rest frame to the frame where B and D * have the same momentum andνl are at rest, the direction of D * does not change and the helicities are the same. 
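The explicit expression for p is not reproduced in the extracted text above. For a momentum common to the B and the D* in the rest frame of the ν̄l pair, standard two-body kinematics gives p = λ^{1/2}(m_B², m_{D*}², M_inv²)/(2 M_inv), with λ the Källén function; the small numerical check below (masses in GeV, approximate values, offered as an illustration of the kinematics rather than the authors' code) also confirms that p vanishes at the end point M_inv = m_B − m_{D*} quoted in the abstract.

```python
import math

def kallen(x, y, z):
    """Kallen triangle function lambda(x, y, z)."""
    return x * x + y * y + z * z - 2.0 * (x * y + y * z + z * x)

def p_common(m_B, m_Dstar, m_inv):
    """|p_B| = |p_D*| in the rest frame of the (nu, l) pair; masses in GeV."""
    return math.sqrt(max(kallen(m_B**2, m_Dstar**2, m_inv**2), 0.0)) / (2.0 * m_inv)

m_B, m_Dstar = 5.279, 2.007                    # approximate B and D*0 masses (GeV)
print(p_common(m_B, m_Dstar, 1.0))             # finite momentum at intermediate M_inv
print(p_common(m_B, m_Dstar, m_B - m_Dstar))   # -> 0 at the end point
```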
We can see that the sum of these expressions for the three helicities gives the same result as the sum obtained in Ref. [41] using properties of Clebsch-Gordan and Racah coefficients. III. RESULTS The differential width is given for where p D * is the D * momentum in the B rest frame and p ν theν momentum in the νl rest frame, The factor m ν m l in the numerator of Eq. (19) is due to the normalization used in [47] and cancels exactly the same factor appearing in the denominator of Eqs. (15), (16) and (17). inv goes to its maximum, then p → 0 and Bp, B p go to zero. Taking into account the behaviour of Bp and B p depicted in Fig. 2, we can see that when inv goes to its maximum, |t| 2 goes to the same value 2(AA ) 2 It is also interesting to see that This means that the differential width dΓ/dM (νl) inv for this difference goes as (1−BB p 2 )(B p− Bp) and the difference of these two distributions goes to zero, both as M going to its maximum. We show also these results in Figs. 3 and 4. The total differential width is given by , all divided by the total differential width R of Eq. (22). We also appreciate in Fig. 4 that the ratio of dΓ inv . In Fig. 4 we also see a smooth transition from 1 to 1 3 for the M = 0 case. The rapid transition to zero of some of the amplitudes discussed and the wide change of values for the (a), (b), (c) and (d) cases in the figure make these magnitudes specially suited to look for extra contribution beyond the SM. To give a further insight into this issue we stress that the reason for the zero strength at inv → 0. However, for M = ±1, M 0 = 0 and N µ goes to zero in that limit. This said, the models beyond the SM which could provide finite contribution for M = ±1, or a sizeably bigger one, are those that go beyond the γ µ − γ µ γ 5 structure in the quark current, like leptoquarks or right-handed quark currents of the type γ µ + γ µ γ 5 [49][50][51]. We discuss this case below. Some models BSM have quark currents that contain the combination γ µ + γ µ γ 5 . The models mentioned above could be accommodated with an operator We shall call a−b a+b = α and study the distributions for different M as a function of α. We have thus the operator γ µ − αγ µ γ 5 . Using the same formalism of [41] it is easy to see the results as a function of α. We obtain the following results: and N i written in spherical coordinates is Then, the different helicity contributions are given by 2) M = 1 3) M = −1 Since 1 − BB p 2 and B p − Bp go individually to zero for M We also see that now Then, it is also interesting to see what happens for the ratio 1 We show these results in Fig. 6. We can see that this magnitude keeps rising up as α goes This means that there is only one independent amplitude for all these processes. This is reminiscent of the heavy quark symmetry [64,65] where all form factors can be cast in terms of only one in the limit of infinite masses of the mesons. In view of this, let us face this issue here to see the heavy quark symmetry implicit in the approach of [41] which we follow here. The key point in our approach, which allows us to express the quark matrix elements in terms of the meson variables, is Eq. (8). Let is take the first relation p b m b = p B m B . In the B meson at rest there is a distribution of quark momenta due to the internal motion of the quarks, p in . If we make a boost to have the B with a velocity of v, we will have where we have split p in into a longitudinal and transverse part along the direction of v. 
We can write now The relative correction factor is but since p in,L has positive and negative components the correction is of order Let us remark that around M and their effect is further negligible. One can repeat the argumentation for the second relation of Eq. (8). This indicates that in theνl rest frame, where we evaluate the matrix elements, Eq. (8) is very accurate. However, it is only exact in the strict limit that m B , m D * go to infinite. Hence, it should not be surprising that our method implements automatically the symmetries of heavy quark physics. In order to test this hypothesis let us first study theB → Dνl transition. We have [66] < D, (M D → M D * for theB → D * ν l transition). Similarly (using 0123 = 1) we have [8,66] <D * ,λ,P |Jµ(0)|B,P > √ m B m D * In the heavy quark limit, with the quark masses going to infinite, one finds [8,66] h with ξ(w) the Isgur Weise function, and with a certain normalization of J µ , ξ(w) at the end This condition appears naturally in the quark model since for w = 1 the momentum transfer is zero and the wave functions with very large quark masses are also equal. Hence the quark transition form factor is unity. We take the D * polarization vectors consistent with our convention in [41] for the angular momentum states By using these polarization factors we compare the J 0 , J i (Jμ in spherical basis) matrix elements with M 0 and Nμ of the expressions found in [41]. We find Because of our normalization for J µ , all these functions are normalized to the value the symmetry of heavy quark physics, and provides an w dependence for these functions. It is interesting to compare our results with those of [9]. There a quark model calculation is done. and the quark matrix elements are evaluated, including the transition form factor from B to D * which we do not evaluate with the claim that it cancels in ratios of amplitudes for different M . We see that h + in [9] is qualitatively similar to ours, although it falls faster with w. The difference with us are of the order of 15% at the maximum value of w, indicating in any case a soft transition matrix element. Next, in order to connect with the standard model we follow the formalism of [18,67] where q µ = P Bµ − P D * µ . Once again, comparing this expression with our results for µ = 0, µ = 1, 2, 3 with M = 0, +1, −1, we obtain the following results: from where we find As in [18] (Eq.(B.5)), we define here h A 1 (w) as In [18,67] the A i , V form factors are parameterized as Our expressions in Eqs. (38), (39), (40), (41), (42) and (43) fulfill these conditions in the strict heavy quark limit with R 0 (w) = 1, R 2 (w) = 1, R 1 (w) = 1, such that R D * A i and R D * V are exactly equal to h A 1 . This is seen in Fig. 8. Diversions from the strict heavy quark limit of the standard model are incorporated in this formalism parameterizing h A 1 (w), R 0 (w), (38), (39), (42), and (43) as a function of w normalized to 1 at w = 1. The results for h A 1 R D * V , R D * A 0 , R D * A 2 are shown in Fig. 9. Comparison of Fig. 8 with Fig. 9 shows the difference of our approach with the standard model. We can appreciate a bigger slope as a function of w for the standard model (as already seen comparing with Ref. [9]) and also a different normalization at w = 1. Yet, the claim from our approach is that differences become much smaller when we use our approach to calculate ratios of amplitudes. 
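For reference, the recoil variable w used in this comparison is related to the momentum transfer in the standard heavy-quark convention; it is stated here for readability (it is the usual definition, consistent with the remark above that w = 1 corresponds to zero recoil, rather than recovered from the garbled equations):

$$ w = v_B \cdot v_{D^*} = \frac{m_B^2 + m_{D^*}^2 - q^2}{2\, m_B\, m_{D^*}}, \qquad q^\mu = P_B^\mu - P_{D^*}^\mu, \qquad \xi(1) = 1 . $$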
To see the accuracy of our model to provide ratios, we evaluate again the contribution of M = 0, ±1, divided by the sum of the three contributions, for different values of α, with the form factor of the standard model and compare the results with those obtained in Fig. 5. To evaluate those contributions in the standard model we look at the formulas of Eqs. (26), (27), (28), and looking at the expressions of Eqs. (38), (39), (40), (41), (42) and (43) we substitute, The results are shown in Fig. 10. One can appreciate some differences from to M inv | max the behaviour in both cases is so close can also be traced to the fact that for a certain range of p momentum the A 1 term is still largely dominant. Yet, this could be seen as a manifestation of a general behaviour of the helicity amplitudes close to the end point discussed in Ref. [68]. VI. CONCLUSIONS We have taken advantage of a recent reformulation of the weak decay of hadrons, where, instead of parameterizing the amplitudes in terms of particular structures with their corresponding form factors, the weak transition matrix elements at the quark level are mapped into hadronic matrix elements and an elaborate angular momentum algebra is performed that allows one to correlate the decay amplitudes for a wide range of reactions. The formal-ism allows one to obtain easy analytical formulas for each reaction in terms of the angular momentum components of the hadrons. One global form factor also appears in the approach related to the radial wave functions of the hadrons involved, but since this form factor is common to many reactions and in particular is exactly the same for the different spin components of the hadrons within the same reaction, it cancels in ratios of amplitudes or differential mass distributions. In the present paper we have taken this formalism and extended it to the case of hadron matrix elements with an operator γ µ −αγ µ γ 5 , which can accommodate many models beyond the standard model by changing α. We have applied the formalism to study the B → D * ν l reaction and the amplitudes for different helicities of the D * are evaluated. We see that dΓ dM (νl) inv depends strongly on the helicity amplitude and also on the α parameter. In particular the difference dΓ dM (νl) inv | M =+1 is shown to be very sensitive to the α parameter and changes sign when we go from α to −α. Such a magnitude, with its strong sensitivity to this parameter, should be an ideal test to investigate models beyond the standard model and we encourage its measurement in this and analogous reactions, as well as the theoretical calculations for different models. We have taken advantage to relate our approach to the standard model by calculating the form factors V (q 2 ), A 0 (q 2 ), A 1 (q 2 ), A 2 (q 2 ) in our approach and comparing them to the parameterization of the standard model. The form factors are qualitatively similar but one can observe differences. Yet, when one uses them to evaluate ratios of amplitudes, or partial differential mass distributions, the differences are very small, and near the end point w = 1 the distributions are practically identical.
4,586.4
2018-08-08T00:00:00.000
[ "Physics" ]
Light-induced metal-like surface of silicon photonic waveguides The surface of a material may exhibit physical phenomena that do not occur in the bulk of the material itself. For this reason, the behaviour of nanoscale devices is expected to be conditioned, or even dominated, by the nature of their surface. Here, we show that in silicon photonic nanowaveguides, massive surface carrier generation is induced by light travelling in the waveguide, because of natural surface-state absorption at the core/cladding interface. At the typical light intensity used in linear applications, this effect makes the surface of the waveguide behave as a metal-like frame. A twofold impact is observed on the waveguide performance: the surface electric conductivity dominates over that of bulk silicon and an additional optical absorption mechanism arises, that we named surface free-carrier absorption. These results, applying to generic semiconductor photonic technologies, unveil the real picture of optical nanowaveguides that needs to be considered in the design of any integrated optoelectronic device. W hen the size of a device is reduced to the nanoscale, its behaviour may be strongly affected by physical effects localized at the surface [1][2][3][4][5] , where phenomena not occurring in the bulk of the material may arise 6 . For instance, in semiconductor nanomembranes 1,2 and nanowires [3][4][5] , the impact of the surface can be so relevant that some properties, such as the electrical conductivity, are dominated by surface contributions 1,2 . In photonics, surface effects have been traditionally associated to imperfections and roughness on the walls of integrated waveguides 7,8 . However, on the surface of semiconductor waveguides such as in silicon (Si), the natural termination of the crystal lattice results in a distortion of the energy bands 1 and in the creation of intra-gap states 6 , that are localized in the forbidden gap and are responsible for the absorption of photons with energy lower than the bandgap of the bulk material 9 (Fig. 1a). Surface-state absorption (SSA) is thus a single-photon process generating free carriers at the core/cladding interface. So far, SSA has been exploited only to develop photonic devices such as near-infrared all-Si photodetectors [10][11][12][13][14] and all-optical Si modulators 15 . Actually, the impact of surface states on Si photonic waveguides is much broader and may become relevant in the behaviour of optoelectronic integrated devices, especially when their size approaches the nanoscale. The relevance of these issues has not been perceived yet because the effect of carrier generation induced by surface states on the electrical and optical properties of a waveguide has been neither quantified nor compared with bulk effects. In this work we show that at typical light intensity the surface of Si photonic waveguides exhibits a metal-like behaviour. In fact, below the high intensity threshold where non-linear bulk effects become relevant, an intermediate regime exists where the electric conductivity of weakly doped (10 15 cm À 3 ) Si nanowaveguides is dominated by the carrier generation induced by SSA. The density of the free carriers generated on the surface is more than 100 times larger than that of the free carriers thermally generated in the bulk, thus making the surface properties move towards those of metals. 
This effect overwhelms the conductivity of bulk Si in the absence of light, and becomes relevant in Si electro-optic devices where slightly doped regions are used, such as in modulators [16][17][18][19] and micro-heaters embedded in the waveguide core 14 . Furthermore, we provide the first direct evidence that surface free carriers are responsible for an additional optical absorption mechanism, hereinafter named surface free-carrier absorption (SCA), that increases linearly with light intensity. Our results reveal that in Si waveguides this is the largest loss contribution with linear dependence on optical power, being about one order of magnitude larger than two-photon absorption (TPA). Results The intermediate regime where the electric properties of the waveguide are dominated by SSA-induced carriers is schematically shown in Fig. 1. At low and high light intensity the waveguide conductivity s is dominated by the carriers located in the bulk of the Si core, that are respectively due to the waveguide doping and to TPA (Fig. 1b,d). Vice versa, at moderate light intensity the carrier photogeneration occurring at the core/cladding interface is so massive (Fig. 1c) that the surface is filled with carriers, s is essentially given by the conductivity of the surface, that exhibits a metal-like behaviour. In the following sections the existence of this metal-like surface is demonstrated by cross-relating the results of optical domain SCA measurements with electric domain measurements performed with a recently developed surface-state photodetector 10 Light-induced absorption at the surface. The optical absorption due to surface free carriers was measured according to a novel technique whose principle is sketched in Fig. 2a,b. Let us assume that light with sinusoidal intensity modulation at angular frequency o 0 around the optical carrier is injected in a photonic waveguide with total propagation loss a. Ideally, if a were constant with light intensity I, the output would exhibit only a spectral component at o 0 (Fig. 2a). Vice versa, if the waveguide loss depends on I also other spectral components, at harmonic frequencies 2o 0 , 3o 0 and so on, appear at the output of the waveguide (Fig. 2b). The latter is the real scenario experienced by Si photonic waveguides, where different intensity-dependent absorption mechanisms contribute to the overall loss of the waveguide. The intensity of the harmonic components I(o i ) at the output of the waveguide depends on the magnitude of each absorption mechanism. We model the dependence of the waveguide loss on light intensity as where a 0 , a 1 , a 2 are coefficients. The first term of equation (1), a 0 , that does not depend on light intensity, represents the loss associated to SSA 9 and to surface imperfections and roughness, that is the coupling of the optical mode with radiation modes (radiation loss) and with counter-propagating modes (backscattering) 7,8 . The second term of equation (1), a 1 I, that depends linearly on light intensity, includes the loss associated to TPA 20 , that occurs in the bulk of the waveguide core, and to SCA, that is due to the free carriers that fill the waveguide surface under light injection (see Supplementary Note 1 for details). The last term of equation (1), a 2 I 2 , that has a quadratic dependence on light intensity, represents the loss induced by the free carrier absorption (FCA) in the bulk of the waveguide core as a result of TPA mechanisms 20 . 
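As a rough illustration of this measurement principle (not the authors' analysis code; the loss coefficients, modulation depth and propagation length below are arbitrary placeholders, and the quasi-static propagation model dI/dz = -alpha(I)*I is an assumption), a short numerical propagation shows how an intensity-dependent loss of the form of equation (1) converts a purely sinusoidal input into output components at the harmonic frequencies:

```python
import numpy as np

# Intensity-dependent loss alpha(I) = a0 + a1*I + a2*I^2 distorts a sinusoidally
# modulated input and generates spectral components at 2*w0, 3*w0, ...
a0, a1, a2 = 1.0, 0.5, 0.1            # loss coefficients (illustrative units)
L, steps = 1.0, 2000                  # propagation length and integration steps
t = np.linspace(0.0, 1.0, 4096, endpoint=False)
I = 1.0 + 0.2 * np.sin(2 * np.pi * 10.0 * t)   # modulated input intensity (w0 = 10 cycles)

dz = L / steps
for _ in range(steps):                # quasi-static propagation: dI/dz = -alpha(I) * I
    I = I - (a0 + a1 * I + a2 * I**2) * I * dz

spec = np.abs(np.fft.rfft(I))
print(spec[10], spec[20], spec[30])   # fundamental, first and second harmonic amplitudes
```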
A summary of the absorption mechanisms and their dependence on light intensity is provided in Table 1. By measuring the harmonic components I(ω_i) at the waveguide output, quantitative information on the loss processes that occur can be obtained. This experimental technique is applied to channel waveguides fabricated in Si-on-insulator technology (doping 10¹⁵ cm⁻³). The rectangular Si core is buried in a glass cladding and has a height of 220 nm and width w. No roughness-smoothing treatment is performed at the core/cladding interface (see Methods section for details). All measurements are performed for quasi-transverse-electric polarized light at the wavelength of 1,550 nm. The modulation at ω₀ is applied to the light at the input of the waveguide by means of an external intensity modulator, while the spectral components at the chip output are measured with a photodetector and an electrical spectrum analyzer (see Methods section, Supplementary Fig. 1 and Supplementary Note 2 for details). Figure 2c shows the normalized ratio between the intensity of the fundamental and the first harmonic, r_ω = I(ω₀)/I(2ω₀), measured as a function of the optical power propagating in waveguides with w = 480 nm (red squares) and w = 1 µm (blue circles); the waveguide lengths are 7 mm and 9 mm, respectively. According to this technique, the loss coefficient a₀, which is constant with light intensity, does not affect the ratio between the harmonic components and therefore is not included in the analysis of the loss contributions. On the contrary, r_ω depends on a₁ and a₂, with a slope that increases for larger absorption coefficients. Dashed lines show r_ω calculated under the assumption that absorption of light occurs only in the bulk of the waveguide core (TPA and FCA only). A waveguide TPA coefficient of β_TPA = 0.7 cm GW⁻¹, which we measured in ref. 21 for this waveguide technology, has been used to evaluate a₁ and a₂. For the waveguide with w = 480 nm (w = 1 µm), at an optical power of 0 dBm, this results in a total loss due to TPA of about 6.5 × 10⁻³ dB cm⁻¹ (3.2 × 10⁻³ dB cm⁻¹) and in a loss due to FCA of about 1.3 × 10⁻³ dB cm⁻¹ (8 × 10⁻⁴ dB cm⁻¹) according to the equations of Soref and Bennett 22. It is worth noting that, once β_TPA is given, both the loss due to TPA and that due to FCA are determined. However, this does not match the measured r_ω, which for both waveguides deviates from the reference level (0 dB) at lower waveguide power and has a different slope. The observed behaviour can be explained only with a much larger a₁ while keeping the same value of a₂ (solid line); that is, a higher linear absorption coefficient is required without a correspondingly higher quadratic absorption coefficient. Therefore, this cannot be explained with a larger β_TPA, because an increase of the TPA-induced loss would result in an increase of the FCA loss as well. Indeed this is the contribution of SCA, that is, the loss induced by the free carriers that populate the waveguide surface. Figure 3 reports the absorption contributions with linear dependence on light intensity, that is SCA and TPA, as a function of the optical power for the two waveguides (shown also in Supplementary Fig. 2). Red squares and blue circles indicate the experimental points derived from the measurement of r_ω, whereas solid lines show the linear fit. At 0 dBm, the loss due to SCA amounts to about 4.5 × 10⁻² dB cm⁻¹ and 4.3 × 10⁻² dB cm⁻¹ for the waveguides with w = 480 nm and w = 1 µm, respectively.
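For orientation, the order of magnitude of the quoted TPA loss can be recovered from β_TPA and the launched power, provided an effective mode area is assumed; the area used below is a hypothetical placeholder, not a value from the paper.

```python
import math

# Back-of-envelope check of the TPA loss scale at 0 dBm.
beta_tpa_cm_per_gw = 0.7            # waveguide TPA coefficient quoted in the text
power_w = 1e-3                      # 0 dBm
a_eff_cm2 = 1e-9                    # assumed effective mode area ~0.1 um^2 (placeholder)
intensity_gw_cm2 = power_w / a_eff_cm2 / 1e9        # ~1e-3 GW/cm^2
alpha_tpa = beta_tpa_cm_per_gw * intensity_gw_cm2   # cm^-1
print(10 * math.log10(math.e) * alpha_tpa, "dB/cm") # a few 1e-3 dB/cm, same order as quoted
```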
Therefore, the absorption induced by the surface free carriers at the core/cladding interface is about one order of magnitude larger than the TPA in the bulk (dashed lines). This means that in our Si photonic waveguides SCA is essentially the main source of optical loss with a linear dependence on light intensity. At this power level, SCA is also higher than the loss associated with the doping, which is expected to be about 0.03 dB cm⁻¹ for a 10¹⁵ cm⁻³ p-type Si waveguide. It is also worth noting that, as shown in Fig. 3, the loss induced by SCA is similar for the two considered waveguides, which have different widths. This is because the integral of the light intensity of the modes along the waveguide surface (where surface states are located) is similar for the two waveguides, as confirmed by electromagnetic simulations. This results in a comparable local density of surface carriers and therefore in about the same SCA. Also, it is worth considering that the technique used to measure SCA depends neither on the material system employed for the waveguide fabrication nor on its geometry, and therefore can be directly applied to any photonic waveguide technology. Light-induced conductivity at the surface. We now evaluate the effects of the surface free carriers in the electrical domain. Figure 4 shows the density ΔN_s of free carriers located on the surface of the waveguide with w = 1 µm as a function of the optical power (blue circles). This carrier density is calculated from the measured SCA-induced loss of Fig. 3 using the equations given in ref. 22 and assuming that surface states are located within the first three to four atomic layers (∼1 nm) from the Si surface 23. Although the waveguide is excited with moderate optical power, the photogeneration of free carriers on its surface is massive. In fact, only −18 dBm is sufficient to fill the surface with twice as many carriers as are available in the bulk of the waveguide in the absence of light (∼10¹⁵ cm⁻³). Then, ΔN_s grows rapidly with light intensity, amounting to more than 10¹⁷ cm⁻³ at 0 dBm, which is respectively two and four orders of magnitude larger than the number of carriers located in the waveguide bulk in the absence of light and the number induced by TPA (∼10¹³ cm⁻³). This means that a 1 nm thick, highly conductive frame is created by the light at the border of the Si core, as shown in Fig. 1c. Furthermore, we compare the density of surface carriers calculated from the loss measurements of Figs 2 and 3 with that measured on the same waveguide with a surface-state photodetector that we developed (red triangles in Fig. 4). This detector, whose operation principle is detailed in ref. 10, provides in the electrical domain the density of free carriers located on the waveguide surface. Good agreement across the entire power range is found between ΔN_s measured in the optical (blue circles) and electrical (red triangles) domains, thus confirming the result achieved with the approach presented in this work. Similar results were found for the 480 nm wide waveguide (Supplementary Fig. 3). Finally, we investigate the effect of the photogenerated surface carriers on the overall electrical conductivity σ of the waveguide. Figure 5a shows the contribution to σ of the electrical carriers located in the Si core as a function of the optical power for the waveguide with w = 480 nm. Both the contribution of the surface (red squares) and those of the bulk of the waveguide are shown (blue line for the doping and green line for TPA).
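The conversion between free-carrier density and absorption referenced above (ref. 22) is commonly quoted in the form below for 1.55 µm; the coefficients and the modal-overlap factor in this sketch are stated as assumptions for illustration, not taken from the paper's data. Mapping a measured waveguide loss back to a surface density additionally requires the modal overlap with the ∼1 nm surface shell.

```python
import math

# Free-carrier absorption relation commonly quoted from Soref and Bennett at 1.55 um:
#   delta_alpha [cm^-1] = 8.5e-18 * dN_e + 6.0e-18 * dN_h, densities in cm^-3.
DB_PER_NEPER = 10.0 * math.log10(math.e)   # ~4.343, converts cm^-1 to dB/cm

def fca_loss_db_per_cm(dn_e, dn_h, overlap=1.0):
    """Modal free-carrier loss for a fractional overlap 'overlap' with the
    carrier-filled region (overlap=1 corresponds to a uniformly filled core)."""
    alpha = 8.5e-18 * dn_e + 6.0e-18 * dn_h    # material absorption, cm^-1
    return DB_PER_NEPER * overlap * alpha

print(fca_loss_db_per_cm(1e17, 1e17))               # bulk-filled core: ~6.3 dB/cm
print(fca_loss_db_per_cm(1e17, 1e17, overlap=0.01)) # thin surface shell, assumed 1% overlap
```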
For each physical process that contributes to s, the area A in which it occurs has been taken into account, so that in Fig. 5a we show s  A. This means that the contribution of the surface carriers is integrated only on the surface of the waveguide, whereas those related to the doping and to TPA are integrated on the whole core area. The red line indicates the total conductivity of the waveguide, that is the sum of the contributions due to doping, TPA and SSA. From this analysis, an electrical model of the Si waveguide emerges, that is shown in the inset of Fig. 5a. In the electrical domain, each contribution to the electrical conductivity can be represented as a resistance, so that under light injection the total resistance R of the waveguide is given by the following relation 1 1 where R dark , R bulk and R surface are respectively the resistances due to the doping in absence of light, to the bulk (TPA) and to the surface (SSA). At low light intensity (o2 dBm), the doping contribution dominates and s is essentially given by the carriers thermally generated in the bulk of the waveguide core; in this regime, R is well approximated by R dark . The bulk conductivity dominates also at high intensity (410 dBm), where free carriers are generated by TPA and R is mainly given by R bulk . In contrast, at moderate light intensity (in the range 2-10 dBm) the effect of Fig. 2, is about one order of magnitude larger than TPA, and therefore is the main source of optical loss with linear dependence on optical power. the surface emerges, and the main responsible to the waveguide photoconductivity are the surface free carriers. Therefore, in this regime the contribution of R surface to the total resistance R must be taken into consideration. For instance, at a waveguide power of 8.5 dBm, where the doping and TPA equally contribute to the waveguide conductivity (1.65 nS mm, cross point between blue and green lines in Fig. 5a), the surface conductivity is about 3 nS mm, that is twice as large as the bulk conductivity. The relevance of surface conductivity effects depends on the size of the waveguide. In fact, in a larger waveguide the conductivity due to the doping is higher, and therefore the lightinduced surface conductivity becomes less relevant, as shown in Fig. 5b for the 1 mm wide waveguide. Comparing the two waveguide structures considered in this work, it is worth noting that the impact of the surface carriers on the photoconductivity can be significantly different even though the surface carrier absorption is almost the same (Fig. 3). Finally, another consequence of the SSA-generated free carriers is the existence of a non-zero minimum conductivity for Si waveguides. As shown with black lines in Fig. 5, in presence of light the waveguide would exhibit a residual conductivity even in case the doping level were ideally reduced to zero. Discussion Results presented in this work demonstrate that the surface of Si photonic waveguides significantly changes its nature according to the intensity of the light propagating in the waveguide. At intensities that are typically used in linear applications (o10 dBm), a massive surface carrier generation occurs, that makes the surface appear as a metal-like frame, where the freecarrier density is more than two orders of magnitude higher than in the bulk of the waveguide. 
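The resistance relation quoted earlier in this section (the doping, bulk TPA and surface SSA contributions act as parallel conduction paths, so their conductances add) can be summarized in a one-line helper; the numerical values below are illustrative only.

```python
def total_resistance(r_dark, r_bulk, r_surface):
    """Total waveguide resistance when doping (R_dark), TPA-generated bulk carriers
    (R_bulk) and SSA-generated surface carriers (R_surface) conduct in parallel:
    1/R = 1/R_dark + 1/R_bulk + 1/R_surface."""
    return 1.0 / (1.0 / r_dark + 1.0 / r_bulk + 1.0 / r_surface)

# Illustrative values (ohms): when the surface path is the most conductive,
# it dominates the total, as described for moderate light intensity.
print(total_resistance(r_dark=1e6, r_bulk=5e6, r_surface=3e5))   # ~2.2e5 ohms
```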
The creation of this quasi-metal layer, that has been so far ignored, is responsible for two main effects: (i) at moderate light intensity, for instance between about 0 and 10 dBm for single mode waveguides, the electrical properties of the waveguide are dominated by the surface free carriers, that largely overcome the conductivity of the bulk of the waveguide. This effect poses the ultimate limit to the electric conductivity of Si nanowaveguides; (ii) an additional optical absorption mechanism is induced, that we named SCA and has a linear dependence on light intensity, as for TPA. In conventional channel Si nanowaveguides SCA can be higher than the loss associated to the doping, and even more than ten times stronger than TPA. Therefore, the loss contributions due to surface states is expected to become dominant in waveguides where roughness induced scattering loss is reduced 24 . This unveils a new picture of optical Si waveguides, according to which a realistic description cannot be given without taking into consideration surface effects at the core/cladding interface. The possibility to measure the optical and electric properties associated with the surface of Si waveguides enables also to access the necessary information to optimize the waveguide design, to either enhance or inhibit surface effects compared with the properties of the bulk of the waveguide. For example, surface effects are likely to play an important role in Si photonic modulators, based for instance on p-i-n junctions 16 or capacitor structures embedded in Si waveguides 17 , as well as integrating electro-optic organic 18 or 2D 19 cladding materials. In these devices, the waveguide core is typically contacted to the metal electrodes through Si layers, which are slightly doped to avoid optical loss, but introduce a series resistance that limits the maximum modulation speed. As the optical field overlaps with these Si connections, a light dependent resistance associated with the generation of surface free carriers has to be considered to optimize the device design. The relevance of SSA-induced carrier generation is as much pronounced as the waveguide size is reduced. Maximum impact is expected when the optical confinement is pushed so deeply into the subwavelength scale that the effective waveguide size approaches that of its own surface: this is for instance the case of nanoplasmonic structures, where the electric field is strongly localized and dramatically enhanced at a metal-Si interface 25 . In this case, a massive density of free carriers is expected to be generated by SSA well below the intensity threshold of non-linear effects. These considerations are not limited to Si photonics, but hold for any optical waveguide technology where surface phenomena are likely to appear, such as indium phosphide 26 , germanium 27 , gallium arsenide 28,29 and their compounds. The relevance of surface carrier generation processes is expected to depend on the energy gap of the core material compared with the wavelength of the light radiation and on the nature of the core/cladding interfaces. Therefore, the choice of the material used as waveguide cover layer and the quality itself of the surface, which is related to the fabrication processes, strongly affect the optical and electrical phenomena that occur at the waveguide surface. Methods Waveguides fabrication. 
Silicon photonic waveguides were fabricated from a commercial Si-on-insulator wafer with a 220-nm thick Si layer on a 2-µm thick oxide buffer layer. The waveguide pattern was written on a hydrogen silsesquioxane resist through electron-beam lithography and then transferred to the Si core by an inductively coupled plasma etching process according to the procedure described in ref. 30. The waveguide core is buried under a 1-µm thick cover layer, consisting of 550 nm of spun and baked hydrogen silsesquioxane and 450 nm of SiO₂ grown by plasma-enhanced chemical vapour deposition. Experimental setup. The light signal at the input of the waveguide is generated by a continuous-wave laser at 1,550 nm. A lithium niobate intensity modulator applies a weak sinusoidal modulation at a frequency of 500 kHz to the signal. An erbium-doped fiber amplifier and a variable optical attenuator are used to accurately control the light intensity at the input of the waveguide. Polarization controllers at the input and output of the modulator are used to control the polarization of the light injected into the modulator and coupled to the Si photonic waveguide, respectively. At the output of the waveguide the light is collected by a photodetector, whose output feeds an electrical spectrum analyzer that measures the harmonic components at the relevant frequencies. A schematic of the experimental setup can be found in Supplementary Fig. 1; additional details are given in Supplementary Note 2.
5,194
2015-09-11T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Adapting to the Dominant Language: Challenges and Coping Strategies This study investigates students' challenges in academic institutions as they transition to a dominant language. It likewise explores strategies to address these challenges, ultimately enhancing successful integration. It employed a qualitative research design. The 12 participants were selected using purposive sampling. They were Sama students studying at Mindanao State University – Tawi-Tawi College of Technology and Oceanography, adapting to the dominant language, Sinug. The responses in the focus group discussion were analyzed thematically to identify the challenges and coping strategies the participants faced and employed, respectively. The identified challenges are vocabulary barriers, cultural bias, identity crisis, social isolation, linguistic inequality, and identity formation. At the same time, the identified coping strategies are language immersion, peer support, speaking in the lingua franca, utilizing electronic resources, language exchanges, building vocabulary, and cultural immersion. Introduction Acquiring a second language (L2) is never easy.Learners must struggle with new vocabulary, grammar, sentence structure, idioms, pronunciation, and more (Rajik, 2022).In the context of education, the role of language goes beyond mere communication; it shapes identities, influences access to resources, and impacts academic success (Ellis, 2007).In educational institutions where a dominant language prevails, adapting becomes essential for students from linguistic minority backgrounds to engage fully within the academic community (Lou & Noels, 2019).It is often a critical factor in academic success and social integration.Beyond practical advantages, adapting to the dominant language in the academic community also catalyzes personal growth and cultural understanding (Kutor, et al., 2021).Moreover, it has a vital role outside the confines of the classroom to establish connections and access opportunities.This research study aims to investigate challenges and coping strategies in the dominant language adaptation processes of Sama students at Mindanao State University-Tawi-Tawi College of Technology and Oceanography (MSU-TCTO), where Sinug, which is primarily spoken by the Tausug in Sulu, Basilan and Tawi-Tawi (Estrera & Rajik, 2022), serves as the dominant language spoken by most students, faculty, and staff.The Sama people, an indigenous group in the Philippines, have their unique language, culture, and worldview.In this context, the subjects are referred to as the Sama in Tawi-Tawi and nearby provinces within the Bangsamoro Autonomous Region and Muslim Mindanao (BARMM), who are studying in MSU-TCTO.As they immerse themselves in the MSU-TCTO community, their adaptation experiences within the Sinug-dominant environment warrant further exploration. The Sama community, predominantly residing in the coastal areas of various regions in the Philippines, has distinct language and cultural practices.Although Tawi-Tawi is home to Sama people, they experience linguistic prejudice because Sinug, not Sinama, is the language of interaction used in commerce, offices, and educational institutions (Rajik & Tarusan, 2023), at least in the capital town, Bongao.Sinama, however, is predominantly spoken by the Sama community, particularly in the island municipalities. 
As a locale of the study, the MSU-TCTO community is a multilingual academic environment.It accommodates students from various cultural and linguistic backgrounds.It is a branch campus of the Mindanao State University (MSU) System.One of the key strengths of MSU-TCTO lies in its focus on technology and oceanography.With its abundant marine resources and coastal ecosystems, Tawi-Tawi province presents a unique environment for studying and exploring various aspects of oceanography.This is why many students from different linguistic backgrounds opted to study at this institution. The existing literature generally lacks specific research on the lived experiences of the students coming from a linguistic minority group who are adapting to the dominant language, specifically within the context of Sama students at MSU-TCTO.While some studies explore language adaptation and cultural assimilation among minority students in different educational settings (Lai, et al., 2015;Makarova & Birman, 2015;Portes, & Rivas, 2011), the unique experiences of Sama students in MSU-TCTO have yet to be examined in depth.This study aims to fill this literature gap by exploring the challenges they encounter and their coping strategies that arise from the adaptation process within the Sinug-dominant MSU-TCTO community. Methods A qualitative research design was employed to effectively explore the experiences of the Sama students in MSU-TCTO in suiting themselves to the dominant language.Qualitative methods allow an in-depth understanding of individuals' experiences, perceptions, and subjective realities within a specific context (Lindgren, et al., 2020).The following sections outline the key components of the methodology: Participants: The research involved a purposive sampling technique in selecting the 12 Sama students at MSU-TCTO who have experienced adapting to the Sinug language.The participants were chosen based on their language background (those who initially do not speak Sinug), length of exposure to Sinug, and their willingness to share their experiences. Data Collection: Semi-structured interviews were the primary data collection method in the focus group discussion.The interviews were conducted conversationally, allowing participants to express their unique experiences in terms of challenges and coping strategies.The interview questions were designed to elicit in-depth narratives about the participants' journey of language adoption.Probing questions were used to explore various aspects such as language use, cultural practices, and feelings of belongingness. Data Analysis: Thematic analyses were employed to analyze the qualitative data gathered from the interviews.The study involved transcribing and coding the interview data, translating the responses into English, and identifying common themes, patterns, and categories that emerged from the participants' responses.These themes were then organized and interpreted to derive meaningful insights about adopting the dominant language. Ethical Considerations: Ethical guidelines were followed throughout the research process.Participants were provided informed consent before their involvement in the study, ensuring their rights to privacy and confidentiality were respected.Likewise, anonymity was maintained by assigning pseudonyms to participants and using generic identifiers in reporting the findings. Results and Discussion The results highlight the challenges and coping strategies faced and employed by Sama students when adapting to the Sinug language. 
Challenges Faced by Sama Students in Adapting to Dominant Language The themes identified under the challenges that the Sama students faced while adapting to the dominant language spoken in MSU-TCTO are vocabulary barriers, cultural bias, identity crisis, social isolation, linguistic inequality, and identity formation.Table 1 illustrates the description of the themes to show how they are different from each other. Table 1: Identified Themes for Challenges and their Descriptions Theme Description Vocabulary barrier Students struggle with vocabulary in Sinug. Cultural bias Students struggle with communication styles, cultural references, social norms, nonverbal communication, and stereotyping. Identity crisis Students struggle with conflicting desires to assimilate while remaining true to their cultural heritage. Social isolation Students limit opportunities for meaningful interactions. Linguistic inequality Students faced discrimination because of their accent and pronunciation. Identity formation Students are required to modify their linguistic styles, creating a struggle to maintain authenticity. The themes identified posed the challenges in the process of dominant language adaptation by the Sama students in MSU-TCTO.For instance, the vocabulary barrier hinders the Sama students' understanding, especially when interacting with Tausug students, faculty, and staff outside the classroom.Participant 4 remarked, "As a Sama student, I do not have a strong command of Sinug and struggle with vocabulary.This has impeded me from articulating my thoughts when talking with Tausug on campus."[All responses were translated into English from Sinama].The participant's responses addressed the difficulties in language proficiency.The students acknowledged the unfamiliarity with Sinug vocabulary and the struggle to express themselves.In addition, the responses recognized the impact of vocabulary barriers on confidence and participation in extracurricular activities, which also impede their ability to build strong relationships with peers outside the classrooms. Participant 2: "There are words I don't know and can't understand.It leads to feelings of selfconsciousness and reduced confidence when speaking Sinug.This can limit my participation in extracurricular activities." Cultural bias, as one of the challenges faced by the Sama students, holds back comprehension of context-specific conversations.The participants' responses suggest that the Sinug carries cultural nuances and particular communication styles that differ from Sinama and its speakers.These differences can lead to misinterpretations if one is not accustomed to the Tausug communication styles.They also emphasize the importance of understanding cultural references and idioms specific to Sinug to fully comprehend concepts.It made it difficult for the Sama to fully understand and interpret language usage, potentially hindering their ability to communicate effectively and comprehend context-specific conversations.In addition, the participants faced difficulties in adhering to the norms of the Tausug during communication interactions.Because of unfamiliarity with these social norms, there is a tendency for the Sama to inadvertently violate cultural expectations, use inappropriate language, or struggle to navigate social interactions in the university setting. 
Cultural bias can also manifest in nonverbal communication, including gestures and voice.For instance, the Tausug shouting during a conversation does not imply they are fighting.And because the Sama are not used to it, they have different interpretations of these nonverbal cues, leading to misunderstandings or awkward interactions in their efforts to adapt to this dominant language.Furthermore, cultural bias can impair stereotypes and prejudice, leading to individuals facing discrimination or biases based on their native language or cultural background.These create additional challenges in adapting to the dominant language as they experience feelings of exclusion or a lack of acceptance within the university community. Participant 12: "I heard some Tausug calling us aho' aho' (aho' means 'yes'), which we perceived as derision.Not only that, but they also even name-call us or assassinate our characters. Identity crisis, likewise, can create a dilemma between wanting to assimilate into the dominant language's culture and preserving one's cultural identity.The participants acknowledged the internal conflict they experienced between wanting to fit in and be accepted in the dominant language's culture and remaining true to their cultural heritage.It likewise deters language development, adding to linguistic insecurities.They doubt their cultural identity and sense of self, which leads them to question their language abilities in Sinug. Participant 6: "I want to feel a sense of belonging and acceptance within the dominant language's community.I do it by assimilating into their culture.However, I want also to preserve my cultural identity." Social isolation significantly limits opportunities for the Sama students to experience meaningful interactions in Sinug.The students shared that because they could not speak Sinug, they chose to stay away from their Tausug roommates in the dormitory.Without regular engagement with the native speakers of the dominant language, it became more challenging for the Sama students to practice and develop proficiency in Sinug. Students also felt that Sinama is subjected to linguistic inequality.Their accent is mainly the target of prejudice.Participant 7 stated, "My Tausug friends always make fun of me because of my accent.When I speak in Sinug, I tend to have a dragging accent."This linguistic imbalance has contributed to the feelings of exclusion and inferiority among the Sama students when communicating with the Tausug.It has created barriers to effective communication on the part of the Sama, limited participation, and negatively impacted their self-confidence. Coping Strategies Employed by the Sama Students Coping strategies are techniques individuals use to deal with the challenges they face when learning the language of a particular community.In the context of the Sama students learning the Sinug language, the strategies they employed to overcome linguistic barriers and facilitate effective communication and integration into the language-dominant environment are shown in Table 2. Table 2: Identified Themes for Coping Strategies and their Descriptions Theme Description Language mmersion Students engage in conversations in Sinug. Peer Ssupport Students seek support from fellow Sama students who have successfully adapted to Sinug. Speaking in the ingua franca Students communicate in Filipino, the national language of the Philippines.Utilizing electronic resources Students make use of the Sinug-Sinama E-dictionary. 
Language exchanges Students engage in language exchange activities with fluent speakers of Sinug. Building vocabulary Students learn new words and phrases. Cultural immersion Students participate in cultural activities. Language immersion is one of the coping strategies employed by the Sama students at MSU-TCTO to adapt to the dominant language, Sinug.The students conversed with native Sinug speakers to learn how to speak the language.Through this, they are acquainted with an authentic linguistic setting that enhances their language-learning experience. Participant 3: "Immersing myself in conversations with native Sinug speakers has been a precious experience.Through regular interactions with them, I have noticed a significant improvement in my speaking in Sinug." Participant 3 acknowledged that immersing themselves in conversations with native Sinug speakers has been a precious experience.This demonstrates an understanding of the importance of practical, real-life interactions in language acquisition.He also emphasized that regular interactions with native speakers positively impacted their language proficiency in Sinug. The participants also acknowledged peer support as one of the coping strategies they employed in language adaptation.They highlight fellow Sama students' understanding of the challenges faced when learning Sinug.Participant 1 stated, "Peer support from fellow Sama students has played a crucial role in my language adaptation journey.Whenever I face challenges or feel overwhelmed, I turn to my peers who have successfully adapted to Sinug."The student's response indicates that peer support has been effective in helping them overcome obstacles and foster a sense of belonging within the Sinug-speaking institution. Participant 9: "Many older students have adapted to Sinug and are more proficient in the language.Whenever I have questions or need help, I approach them.They willingly share their experiences and provide language learning strategies." This student expressed a strong appreciation and reliance on the peer support system.He recognized the knowledge and proficiency of older Sama students in Sinug and actively seek their guidance and assistance.The student highlighted the willingness of their peers to share experiences and provide helpful resources for language learning.The peer support system is emphasized as fostering collaboration, inclusivity, and a supportive learning environment.This response reflects the significant role of peer support in the student's language adaptation process and their overall positive experience. In an equal way, students admit that whenever they face difficulty communicating with the Tausug, they tend to shift to Filipino to be understood.Participant 5 detailed, "I initially found it challenging to express myself effectively in Sinug.However, communicating in Filipino allows me to converse more confidently with native speakers who might not understand Sinama fluently.It has helped me convey my thoughts and understand their responses."In this response, the student highlights the role of Filipino as a lingua franca in facilitating communication with native Sinug speakers.The student recognizes the initial difficulty of expressing themselves in Sinug and acknowledges the importance of effective communication.By using Filipino, they can engage in conversations more confidently and understand the responses from native Sinug speakers.The student emphasizes the significance of regular practice and exposure to sharpen their Sinug skills over time. 
Participant 8: "Speaking in Filipino has been a valuable coping strategy for me when conversing with native Sinug speakers.While I aspire to become fluent in Sinug, I initially struggled to express myself comfortably.However, using Filipino as a bridge language allows me to establish meaningful connections and be understood by native speakers. Here, the student expresses how speaking in Filipino is a valuable coping strategy.The student acknowledges their aspiration to become fluent in Sinug but recognizes the initial challenges.Speaking in Filipino allows them to effectively communicate and establish connections with native Sinug speakers, even if they are not fluent in Sinug.The student emphasizes the positive outcomes of using Filipino, including gaining insights into Sinug culture and forming stronger relationships within the community. Another way to cope with the challenges the Sama students face in adapting to the dominant language is by utilizing electronic resources such as the Sinug-Sinama e-dictionary.Participant 6 conveyed, "Utilizing the Sinug-Sinama e-dictionary has been immensely helpful in my language learning journey.Whenever I encounter unfamiliar words or phrases in Sinug, I can quickly look them up in the dictionary."In this response, the student emphasizes the convenience and effectiveness of utilizing the Sinug-Sinama e-dictionary as a coping strategy for learning Sinug.They appreciate the accessibility of the electronic resource, which allows them to look up unfamiliar words and phrases quickly.The student highlights the benefits of having accurate translations, definitions, and contextual examples, which contribute to a deeper understanding of the language.The convenience and flexibility provided by the edictionary are noted as significant factors in supporting the students' language acquisition process. The participants likewise acknowledge language exchanges with the native speakers of Sinug as one of the coping strategies they employed in learning Sinug.Participant 11 stated, "I actively seek out fluent speakers of Sinug and converse with them.We take turns practicing each other's languages, and they help me correct my pronunciation and grammar."The participant exchanges language with fluent Sinug speakers to practice their language skills.They highlight the benefits of having language exchange partners, including correcting pronunciation and grammar and cultural discussions.The participant also emphasizes the positive impact on their confidence and social integration within the Sinug-speaking community. The participants are also working on building their vocabulary in Sinug.Participant 10 explained, "Improving my vocabulary has been an essential coping strategy in adapting to Sinug.I've found that incorporating new words and phrases into my daily routine has significantly improved my language skills.One technique I use is creating word lists.This helps me remember and apply the vocabulary in relevant contexts."The response indicates a conscious effort to grow the participant's vocabulary and its positive impact on his ability to express himself with precision and confidence in Sinug. Participant 1: "I regularly learn new words and phrases.For example, whenever I come across unfamiliar words, I look them up and try to incorporate them into my vocabulary." 
The participants acknowledge the positive impact of this strategy on their fluency and ability to express themselves accurately in Sinug.They demonstrate a proactive approach to language learning and adaptation by actively seeking out new words and phrases.The responses underscore the importance of vocabulary development and its contribution to overall language proficiency and effective communication in Sinug. Lastly, the participants have seen the significance of cultural immersion in adapting to the target language. Participant 2: "I actively participate in various cultural activities to immerse myself in the traditions, customs, and values of the Sinug-speaking community.By engaging in these activities, I not only get to learn about the language but also gain a deeper understanding of the local customs and traditions. Participant 9: "It helps me to appreciate the nuances of Tausug culture and enables me to connect with native speakers on a more profound level.Additionally, being involved in these activities allows me to build relationships with the community members and helps break down any initial barriers." In these responses, the participants highlight the significance of cultural immersion as a coping strategy for adapting to Sinug.They emphasize their active participation in cultural activities.The participants recognize that engaging in these activities provides them with a deeper understanding of Sinug culture, allowing them to appreciate the language and connect with native speakers on a more personal level.Furthermore, they note the social benefits of cultural immersion, such as building relationships and breaking down barriers.These responses highlight how cultural immersion contributes to language and cultural adaptation, fostering a sense of belonging and integration within the Sinug-speaking community. Conclusion This research explored Sama students' challenges and coping strategies in adapting to the Sinug language.The difficulties identified include vocabulary barriers, cultural bias, identity crisis, social isolation, linguistic inequality, and identity formation.These challenges significantly hinder the students' integration into the Sinug-speaking academic community. However, the participants demonstrated resilience and determination by implementing various coping strategies.These strategies encompass language immersion, peer support, speaking in the Lingua Franca, utilizing electronic resources, language exchanges, building vocabulary, and cultural immersion.Through these coping strategies, the participants actively engaged in language and cultural activities, fostering their language proficiency, and enhancing their understanding of the local customs and traditions. The findings of this research highlight the importance of providing resources and support systems to aid Sama students in overcoming the identified challenges.By addressing the vocabulary barrier, cultural bias, and linguistic inequality, educational institutions and communities can create an environment that encourages inclusivity and facilitates adaptation. 
This research stresses the significance of language proficiency, cultural understanding, and integration. By employing effective coping strategies, Sama students can navigate the complexities of adapting to the dominant Sinug language and culture, fostering a sense of belonging and identity within the Sinug-speaking community. It is hoped that the findings of this research will contribute to a more inclusive and supportive environment for Sama students and similar linguistic minority groups in their pursuit of adapting to dominant languages.
4,652.2
2023-12-27T00:00:00.000
[ "Education", "Linguistics" ]
Electron-positron pair production in ion collisions at low velocity beyond Born approximation

We derive the spectrum and the total cross section of electromagnetic $e^{+}e^{-}$ pair production in collisions of two nuclei at low relative velocity $\beta$. Both free-free and bound-free $e^{+}e^{-}$ pair production are considered. The parameters $\eta_{A,B}=Z_{A,B}\alpha$ are assumed to be small compared to unity but arbitrary compared to $\beta$ ($Z_{A,B}$ are the charge numbers of the nuclei and $\alpha$ is the fine structure constant). Due to the suppression of the Born term by a high power of $\beta$, the first Coulomb correction to the amplitude becomes important at $\eta_{A,B}\gtrsim \beta$. The effect of a finite nuclear mass is discussed. In contrast to the result obtained in the infinite-nuclear-mass limit, the terms $\propto M^{-2}$ are not suppressed by a high power of $\beta$ and may easily dominate at sufficiently small velocities.

Introduction

The process of electromagnetic $e^{+}e^{-}$ pair production in heavy-ion collisions plays an essential role in collider experiments. It has a long history of experimental and theoretical investigation. The process occurs in two different flavors, dubbed "free-free" and "bound-free" production, depending on whether the final electron is in the continuous spectrum or in a bound state with one of the nuclei. As for free-free pair production, the pioneering papers [1,2] appeared already in the 1930s and dealt with the high-energy asymptotics of the process. In the late 1990s, interest in the process was revived by the RHIC experiment and the approaching launch of the LHC experiment. In particular, the contribution of the higher orders in the parameters $\eta_{A,B} = Z_{A,B}\alpha$ (the Coulomb corrections) in the high-energy limit has been discussed intensively; see Refs. [3,4,5,6,7] and the review [8]. Interest in lepton pair production in collisions of slow nuclei arose long ago in connection with the supercritical regime, which takes place when the total charge of the nuclei is large enough (at least larger than 173); see Ref. [9] and references therein. Recently, in Ref. [10], the total Born cross section of free-free pair production was calculated exactly in the relative velocity $\beta$ of the colliding nuclei. It turns out that the cross section is strongly suppressed, scaling as $\beta^{8}$ at $\beta \ll 1$. A natural question is whether such a suppression also holds for the Coulomb corrections (the higher-order terms in $\eta_{A,B}$). In the present paper we show that the Coulomb corrections are less suppressed in $\beta$ than the Born term. We assume that $\beta \ll 1$ and $\eta_{A,B} \ll 1$ and take into account the higher-order terms in $\eta_{A,B}$ that are amplified with respect to $\beta$. We consider both free-free and bound-free pair production. In the next section we perform calculations in the approximation in which both nuclei have constant velocities, i.e., we treat the nuclei as infinitely heavy objects and neglect the Coulomb interaction between them. This approach has severe restrictions on the values of $\beta$. These restrictions are discussed in the third section, together with the qualitative modification of the results in the region where the constant-velocity approximation is not valid.

Pair production cross section

Let us first assume that the parameter $\eta_A$ is sufficiently small to be treated in the leading order. In particular, we assume that $\eta_A \ll \eta_B, \beta$.
We neglect the Coulomb interaction between the nuclei and work in the rest frame of the nucleus B with z axis directed along the momentum of the nucleus A. Since our primary goal is the total cross section of the process, we find it convenient to use the eigenfunctions of angular momentum as a basis. In this basis the cross section has the form where β is the relative velocity of the nuclei, q is the space components of the momentum transfer to nucleus A, ε = p 2 + m 2 is the electron energy, m is the electron mass, and the corresponding quantities with tildes are related to a positron. The summation in Eq. (1) is performed over all discrete quantum numbers related to the states of both particles, i.e., over the total angular momentum J , its projection M , and two possible values of L = J ± 1/2, related to the parity of the state. Within our accuracy, the matrix element M reads Here ω = ε +ε, U (η B , κ, ε|r) is the electron wave function with the energy ε, the total angular momentum J = |κ| − 1 2 , and L = J + 1 2 sgn κ. This wave function is the solution of the Dirac equation in the attractive potential −η B /r. The function V (η B ,κ,ε|r) is the negative-energy solution of the Dirac equation corresponding to the charge conjugation of the positron wave function, so that V (η B ,κ,ε|r) = iγ 2 U * (−η B ,κ,ε|r) . In the derivation of Eq. (2) we have used the gauge in which the photon propagator has the form The limit β ≪ 1 is quite special. From kinematic constraints, it is easy to conclude (cf. Ref. [10]) that the characteristic momentum transfers to both nuclei are of the order of m/β ≫ m. A simple estimate r ∼ β/m ≪ 1/mη B justifies using the small-r asymptotics of the Coulomb wave functions: where n = r/r, Ω κM = Ω JLM is the spherical spinor, and the radial wave functions read Here Let us assume for the moment that β ≪ η B 1. Then the underlined terms can be safely neglected due to the estimate r ∼ β/m. Moreover, due to the same estimate, the leading contribution to the sum in Eq. (1) is given by the terms with κ = ±1 andκ = ±1. If we also assume that η B ≪ 1, then only the contributions of two states with (κ,κ) = (+1, −1) and (κ,κ) = (−1, +1) survive. The underlined terms in Eq. (4) become important for η B β. In this region, in addition to the two states mentioned above, the states with (κ,κ) equal to also should be taken into account. Integrating over r in Eq. (2), substituting the result in Eq. (1), and integrating over q, we obtain the cross section σ f f of the free-free pair production: The relative order of the three terms in braces is regulated by the ratio η B /β. When this ratio is small, the last term dominates. This term coincides with the Born result obtained in Ref. [10], as should be. The parameter η B /β appears due to the "accidental" suppression of the Born amplitude of pair production and has nothing to do with the Sommerfeld-Gamov-Sakharov factor. The bound-free pair production can be treated exactly in the same way as the free-free pair production. It appears that an electron is produced mostly in ns 1/2 states (κ = −1). The positron spectrum reads where the Riemann zeta function ζ 3 = ∞ n=1 1 n 3 comes from summation over the principal quantum number. It is quite remarkable that Eq. (6) can be obtained from Eq. (5) by the simple substitution pεdε followed by the replacement ε → m. This substitution works because of the factorization of hard-scale r ∼ β/m and soft-scale r ∼ 1/mη B contributions. 
The total cross sections are obtained by the direct integration over energies (energy) 1 : The main contribution to the integral is given by the region p ∼ m,p ∼ m. Note the cancelation of the terms ∝ η B in braces for both free-free and boundfree cross sections. While this cancellation for the free-free case is a trivial consequence of the charge parity conservation, for the bound-free case it comes as a sort of surprise. As it concerns the free-free pair production, the results (5) and (7) can be reproduced in a completely independent way. Namely, one can obtain the matrix element of the process in conventional diagrammatic technique taking into account the diagrams shown in Fig. 1. and calculating the contribution of the region where all Coulomb exchanges have momenta ∼ m/β. 1 Note that Eq. (7) is in obvious contradiction with the results of Refs. [11,12]. The origin of discrepancy is different for these two papers. As it concerns free-free pair production, in Ref. [11] two definitions for the total momentum transfer from the nuclei (differing by the relative sign between momentum transfers from each nucleus) appear to be mixed. Meanwhile, in Ref. [12] the space components of momentum transfer from the projectile nucleus (of the order of m/β ≫ m!) are totally omitted in the annihilation current. This contribution gives the correct amplitude up to the Coulomb phase which cancels in the cross sections (5) and (7). Let us now assume that η A ∼ η B . Then the higher-order terms in η A should be treated on the same basis as those in η B . However, the account of these terms is not reduced to the substitution η A ↔ η B in Eqs.(5), (6), and (7). Speaking of the free-free pair production, the substitution η A ↔ η B should be taken into account on the level of matrix element, but not in the cross section. The relative phase between the contributions ∝ η A η 2 B and ∝ η 2 A η B to the matrix element can be fixed from the diagrammatic approach mentioned above. Then we obtain For the bound-free pair production we have Note that σ f f ≫ σ bf for η A , η B ≪ 1, in contrast to the statement in Ref. [12]. It is interesting that in the supercritical case, Z A + Z B > 173, the relation between σ f f and σ bf is opposite (see e.g., Ref. [13]), since in this case at β → 0 due to the energy conservation law the electron can be produced in the bound state but not in the free state. Account for the finite nuclear mass The results (8) and (9) are obtained in the limit M A , M B → ∞. We show in this section that the account for the finite nuclear mass leads to an essential modification of both free-free and bound-free pair production cross sections at sufficiently small β. One of the sources, which restrict the applicability of Eqs. (8) and (9), is a deviation of the nuclear trajectories from the straight lines due to the Coulomb interaction between the nuclei. This deviation can be neglected if a shift of the minimal distance between the nuclei is smaller than the impact parameter ρ : where M A and M B are the masses of the corresponding nuclei. Substituting ρ ∼ β/m we come to the constraint where M p is the proton mass and η max = max{η A , η B }. Let us discuss qualitatively the modification of the cross section at β mηmax Mp 1/3 . First of all let us consider the dependence of the cross section dσ f f on the impact parameter ρ at β ≪ η max ≪ 1. Similarly to the derivation of Eq. (8), we obtain Integrating over ρ we obtain the first term in Eq. (8). 
If we consider the classical motion of the nuclei interacting by the Coulomb field, we find that the minimal distance ρ between the nuclei and the relative velocity β at this point are expressed via the impact parameter ρ 0 and the relative velocity β 0 at infinity as Then the parameter a in Eq. (12) can be written as Calculating the minimum value of the quantity a with respect to κ, we find that Therefore, it follows from Eq.(12) that the cross section σ f f is exponentially small at mZ A Z B α/(M r β 3 0 ) ≫ 1. The same conclusion is valid for the boundfree cross section. Suppose now that the condition (11) holds. Since the results (8) and (9) are strongly suppressed by the factor β 6 , it is natural to ask whether the contributions formally suppressed with respect to m/M r may dominate at sufficiently small β. The answer is positive. Let us consider the "bremsstrahlung" mechanism of pair production, when the pair is produced by a virtual photon emitted by the scattered nucleus. We consider the case Z A Z B α/β ≫ 1 when the motion of the nuclei is classical. Then the cross section of the e + e − pair production can be written as a product of the cross section σ γ of bremsstrahlung of virtual photon with the energy ω =ε + ε and the probability of virtual photon conversion into e + e − pair. We assume that 2m < ω ≪ M r β 2 so that σ γ can be calculated in the non-relativistic dipole approximation. We have [14] dσ where ω 0 = Mr β 3 ZAZB α and the functions G(ν) has the following asymptotic forms It is seen from Eq. (16) that σ BS f f is exponentially small for m ≫ ω 0 , which is in agreement with our previous statement. For m ≪ ω 0 , the main contribution to the integral over ω is given by the region m ≪ ω ≪ ω 0 . Then, taking into account that Φ(x) ≈ − 2 3 ln x at x ≪ 1, we obtain in the leading logarithmic approximation It is seen that the contribution (18) to σ f f starts to dominate over (8) very soon as β decreases. Discussion and conclusion Let us discuss our results. We have calculated the infinite-mass limit of the free-free and bound-free pair production cross sections, Eqs. (8) and (9). As expected, the bound-free pair production cross section is much smaller than the free-free one, with the relative magnitude ∼ η 3 . In the region β η A,B both cross sections essentially deviate from the results obtained in the leading order in η A,B . This is due to the "accidental" suppression of the Born amplitude. In this connection, it is interesting to compare Eqs. (8) and (9) bf for the production of scalar particles. Using the same technique we easily obtain The result for σ (0) f f coincides with the asymptotics in Eq. (17) of Ref. [10] 2 . In contrast to the spinor case, the cross sections (19) do not contain the terms of relative order η A,B /β since the leading-order contribution in η A,B is not suppressed by the power of β anymore. We note that the Coulomb corrections in Eqs. (8) and (9) are still more strongly suppressed in β than the Coulomb corrections to the corresponding cross sections for scalar particles, though the suppression is only β 2 , which is to be compared with β 4 for the ratio of the leading terms in η A,B . Finally, we have obtained the contribution (18) of the bremsstrahlung mechanism which appears due to the account of the finite nuclear mass. It turns out that this contribution starts to dominate very soon when β decreases. This severely restricts the region of applicability of the results (8) and (9).
3,581.2
2016-07-19T00:00:00.000
[ "Physics" ]
Surrogate-assisted distributed swarm optimisation for computationally expensive geoscientific models Evolutionary algorithms provide gradient-free optimisation which is beneficial for models that have difficulty in obtaining gradients; for instance, geoscientific landscape evolution models. However, such models are at times computationally expensive and even distributed swarm-based optimisation with parallel computing struggle. We can incorporate efficient strategies such as surrogate-assisted optimisation to address the challenges; however, implementing inter-process communication for surrogate-based model training is difficult. In this paper, we implement surrogate-based estimation of fitness evaluation in distributed swarm optimisation over a parallel computing architecture. We first test the framework on a set of benchmark optimisation problems and then apply to a geoscientifc model that features landscape evolution model. Our results demonstrate very promising results for benchmark functions and the Badlands landscape evolution model. We obtain a reduction in computationally time while retaining optimisation solution accuracy through the use of surrogates in a parallel computing environment. The major contribution of the paper is in the application of surrogate-based optimisation for geoscientific models which can in the future help in better understanding of paleoclimate and geomorphology. Introduction Evolutionary algorithms are loosely motivated by the theory of evolution where species are represented by individuals in a population that compete and collaborate with each other, producing offspring over generations that to improve quality given by fitness measure [24,45].Particle swarm optimisation (PSO), on the other hand, is motivated by the flocking behaviour of birds or swarms represented by a population of particles (individuals) that compete and collaborate over time [1,26,27,35].Evolutionary and swarm optimisation methods have been prominent in a number of areas such as realparameter global optimization, combinatorial optimization and scheduling, and machine learning [47,53,49,1].Research in evolutionary and swarm optimisation has focused on different ways to create new solutions with mechanisms that are heuristic in nature; hence, different variants are available [42,26].A major challenge has been in applying them in large-scale or computationally expensive optimisation problems that require thousands of function evaluations where a single function evaluation can take minutes to hours, or even days [33].An example of an expensive function is a geoscientific model for landscape evolution problem [12], and deep learning models for big data problems [79].Computationally expensive optimisation can be addressed with distributed/parallel computing and swarm optimisation [6,62,3]; however, we need efficient strategies for representing the problem. 
Surrogate-assisted optimisation [31,40,52,81,30] provides a remedy for expensive models with the use of statistical and machine learning models to provide a low computational replicate of the actual model.The surrogate model is developed by training from available data generated during optimisation that features a set of inputs (new solutions) and corresponding output (fitness) given by the actual model.The method is also known as Bayesian optimisation where the surrogate model (acquisition function) is typically a Gaussian process model [10]; however, neural networks and other machine learning models have also been used [60].Evolutionary and swarm optimization methods have been used in surrogate-assisted and Bayesian optimization, and have been prominent in fields of engine and aerospace design [51,37,58,31], robotics [10], experimental design [29], and machine learning [63,60]. Although limited studies exist, surrogate-assisted optimisation has been applied to geoscience problems.Wang et al. [71] presented reliability-enhanced surrogate-assisted PSO in landslide displacement prediction where the method was used for feature selection and hyperparameter optimization.The PSObased surrogate model was used to search the hyperparameters and feature sets in the long short-term memory (LSTM) deep learning model for predicting landslide displacement.Gong et al. [28] presented an ensemble-based surrogate-assisted co-operative PSO for water contamination source identification.Zhou et al. [80] used a surrogate-assisted evolutionary algorithm that incorporated multi-objective optimization for oil and gas reservoirs focusing on good placement and hydraulic fracture parameters.The method employed global-local hybridization searching strategy via PSO with low-fidelity surrogate model that used a multilayer perception.Furthermore, Zhand et al. [78] used surrogate-assisted PSO for gas hydrate reservoir development.Wang et al. [72] presented a surrogateassisted model for the optimization of hyperspectral remote sensing images.Chen et al. [22] utilised a surrogate-assisted evolutionary algorithm for heat extraction optimization of enhanced geothermal systems. Evolutionary algorithms provide gradient-free optimisation which is beneficial for models that do not have gradient information, for instance, landscape evolution models [18,12].Some instances of such models are so expensive that even distributed evolutionary algorithms with the power of parallel computing would struggle.Hence, we need to incorporate efficient strategies such as surrogate-assisted optimisation that further improves their performance, but this becomes a challenge given parallel processing and inter-process communication for implementing surrogate estimation. 
Landscape evolution models are geoscientific models that can be used to reconstruct the evolution of the Earth's landscape over tens of thousands millions of years [23,20,8,48,7].These models guide geologists and climate scientists in better understanding Earth's geologic and climate history that can further also help in foreseeing the distant future of the planet [8,64].These models use data from geological observations such as bore-hole data and estimated landscape topography millions of years ago and require climate conditions and geological parameters that are not easily known.Hence, it becomes an optimisation problem to estimate these parameters which has been tackled mostly with Bayesian inference via Markov Chain Monte Carlo (MCMC) sampling in previous studies that used parallel computing [18], and surrogate assisted Bayesian inference [13].Motivated by these studies, we bring the problem of the landscape evolution model to the evolutionary and swarm optimisation community; rather than viewing it as an inference problem, we view it as an optimisation problem. In this paper, we implement a surrogate-based optimisation framework via swarm optimisation over a parallel computing architecture.We apply the framework for benchmark optimisation functions and a selected landscape evolution model.We investigate performance measures such as the accuracy of surrogate prediction given different types of problems that differ in terms of dimension and fitness landscape.The contribution of the paper is in the application of surrogate-based optimisation for geoscientific models using parallel computing.The estimation of parameters through surrogate-assisted estimation can in the future help in better understanding paleoclimate and geomorphology which can enhance knowledge about climate change. The rest of the paper is organised as follows.Section 2 provides background and related work, while Section 3 presents the proposed methodology.Section 4 presents experiments and results and Section 5 concludes the paper with a discussion for future research. Surrogate-assisted optimization PSO has been continuously becoming prominent in surrogate-assisted optimisation.Yu et al. [76] compared surrogate-assisted hierarchical PSO, standard PSO, and a social learning PSO for selected benchmark functions under a limited computational budget.Li et al. [44] presented a surrogateassisted PSO for computationally expensive problems where two criteria were applied in tandem to select candidates for exact evaluations.These included a performance-based criterion and a distance-based criterion used to enhance exploration that does not consider the fitness landscape of different problems.The results demonstrated better performance over several stateof-the-art algorithms for selected benchmark functions and a propeller design problem.Chen et al. [21] presented a hierarchical surrogate-assisted differential evolution algorithm for high-dimensional expensive optimization problems with RBF network for selected benchmark functions and an oil reservoir production optimization problem.Yi et al. [75] presented an online variable-fidelity surrogate-assisted harmony search algorithm with a multi-level screening strategy that showed promising performance for expensive engineering design optimization problems. Furthermore, Li et al. 
[43] presented an ensemble of surrogates assisted PSO of medium-scale expensive problems which used multiple trial positions for each particle and selected the promising positions by using the superiority and uncertainty of the ensemble simultaneously.In order to feature faster convergence and to avoid wrong global attraction of models, the optima of two surrogates that featured polynomial regression and RBF models were evaluated in the convergence state of particles.Liao et al. [46] presented multi-surrogate multi-tasking optimization of expensive problems to accelerate the convergence by regarding the two surrogates as two related tasks.Therefore, two optimal solutions found by the multi-tasking algorithm were evaluated using the real expensive objective function, and both the global and local models were updated until the computational budget was exhausted.The results indicated competitive performance with faster convergence that scaled well with an increase in problem dimension for solving computationally expensive single-objective optimization problems.Dong et al. [25] presented surrogate-assisted black wolf optimization for high-dimensional and computationally expensive black-box problems that featured RBF-assisted meta-heuristic exploration.The RBF featured knowledge mining that includes a global search carried out using the black wolf optimization and a local search strategy combining global and multi-start local exploration.The method obtained superior computation efficiency and robustness demonstrated by comparison tests with benchmark functions.Chen et al. [21] presented efficient hierarchical surrogate-assisted differential evolution for highdimensional expensive optimization using global and local surrogate models featuring RBF network with an application to an oil reservoir production optimization problem.The results show that the method was effective for most benchmark functions and gave a promising performance for a reservoir production optimization problem.Ji et al. [38] presented a dualsurrogate-assisted cooperative PSO for expensive multimodal problems which reported highly competitive optimal solutions at a low computational cost for benchmark problems and a building energy conservation problem.Ji et al. [39] further extended their previous approach using multi-surrogate-assisted multitasking PSO for expensive multimodal problems.Some of the prominent examples of surrogate-assisted approaches in the Earth sciences include modelling water resources [54,5], computational oceanography [69], atmospheric general circulation models [59], carbon-dioxide storage and oil recovery [4], and debris flow models [50]. 
Landscape evolution models and Bayeslands

Landscape evolution models (LEMs) use different climate and geophysical aspects such as tectonics or climate variability [73,65,55,11,2] and combine empirical data and conceptual methods into a set of mathematical equations that form the basis for driving model simulation. Badlands (basin and landscape dynamics) [55,56] is a LEM that simulates landscape evolution and sediment transport/deposition [34,32], starting from an initial topography exposed to climate and geological factors over time under given conditions (parameters) such as the precipitation rate and rock erodibility coefficient. The major challenge is in estimating the climate and geological parameters and those linked with the initial topography, since they can span millions of years of geological time depending on the problem. A way forward is to develop an optimisation framework that utilizes limited data. Since gradient information is not available, Badlands can be seen as a black-box optimisation model where the unknown parameters need to be found via optimisation or Bayesian inference. So far, the problem has been approached with Bayesian inference that employs MCMC sampling for the estimation of these parameters in a framework known as Bayeslands [14].

The Bayeslands framework had limitations due to the computational complexity of the Badlands model; hence, in our earlier works, we extended it using parallel tempering MCMC [19] that featured parallel computing to enhance computational efficiency. Although we used parallel computing with a small-scale synthetic Badlands model, the procedure remained computationally challenging since thousands of samples were drawn and evaluated. Running a single large-scale real-world Badlands model can take several minutes to hours, and even several days, depending on the area covered by the landscape evolution model and the span of geological time considered in terms of millions of years. Therefore, parallel tempering Bayeslands was further enhanced through surrogate-assisted estimation. We developed surrogate-assisted parallel tempering MCMC for landscape evolution models, where a global-local surrogate framework utilised surrogate training in the main process that managed MCMC replicas running in parallel [13]. We obtained promising results, where the prediction performance was maintained while lowering the overall computational time.

PSO

PSO is a population-based metaheuristic that improves the population over iterations with a given measure of accuracy known as fitness [41]. The population of candidate solutions is known as a swarm, while the candidate solutions are known as particles that get updated according to the particle's position and velocity. In the swarm, each particle's movement is typically influenced by its local best-known position, which gets updated when better positions are discovered by other particles. Equations 1 and 2 show the velocity and position update of a particle in a swarm, respectively:

v_i(t+1) = α v_i(t) + c_1 γ_1 [x_pbest,i − x_i(t)] + c_2 γ_2 [x_gbest − x_i(t)]   (1)

x_i(t+1) = x_i(t) + v_i(t+1)   (2)

where v_i(t) and x_i(t) represent the velocity and position of particle i at time step t, respectively; x_pbest,i and x_gbest are the particle's personal best and the swarm's best-known positions; c_1 and c_2 are the user-defined cognitive and social acceleration coefficients, respectively; γ_1 and γ_2 are random numbers drawn from the uniform U[0, 1] distribution; and α is a user-defined inertia weight. There are several variants in the way the particles get updated, which have their strengths and limitations for different types of problems [41,61,68,77,74,35].
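To make the update above concrete, the following is a minimal NumPy sketch of the canonical velocity and position update in Equations 1 and 2 for a single swarm; the array shapes, bound handling, and the `fitness` callable are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def pso_step(x, v, pbest, pbest_fit, gbest, fitness,
             alpha=0.729, c1=1.4, c2=1.4, bounds=(-5.0, 5.0)):
    """One canonical PSO update (Eqs. 1-2) for a swarm.

    x, v          : (n_particles, dim) positions and velocities
    pbest, gbest  : personal-best positions and the swarm-best position
    fitness       : callable mapping a position vector to a scalar (lower is better)
    """
    n, dim = x.shape
    g1 = np.random.uniform(0.0, 1.0, size=(n, dim))
    g2 = np.random.uniform(0.0, 1.0, size=(n, dim))

    # Eq. (1): inertia + cognitive + social components
    v = alpha * v + c1 * g1 * (pbest - x) + c2 * g2 * (gbest - x)
    # Eq. (2): move the particles and keep them inside the search bounds
    x = np.clip(x + v, bounds[0], bounds[1])

    # Evaluate fitness and refresh personal/swarm bests
    fit = np.array([fitness(p) for p in x])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
    gbest = pbest[np.argmin(pbest_fit)].copy()
    return x, v, pbest, pbest_fit, gbest
```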
Surrogate-assisted Distributed framework

We update the swarm using the canonical particle update method [41] and execute the swarms in a distributed swarm framework that employs parallel computing.

Inter-process Communication

In our framework, we exchange selected swarm particles after a certain number of generations using inter-process communication. During the exchange, we replace 20 percent of the weaker particles with stronger ones from other swarms. This enhances the exploration and exploitation properties of our framework for the optimization problem. Afterwards, the process continues, where local swarms create particles with new positions and velocities as shown in Figure 1. Hence, we feature distributed swarms and parallel processing for better diversity and computational efficiency. We execute distinct parallel processes for the respective swarms on central processing units (CPUs). Each process features the optimisation function, which is either a synthetic problem made computationally expensive using a time delay, or an application problem.

Surrogate Model

Suppose that the true function (model) is represented as F = g(x), where g(·) is the function and x is a solution or particle from a given swarm. Our surrogate model outputs a pseudo-fitness F̂ = ĝ(x) that approximates the true function via F = F̂ + e, where e represents the difference between the surrogate and the true function. The surrogate model provides this pseudo-fitness estimate to replace the true function when required by the framework.

S_prob is an important hyperparameter that controls the use of the surrogate in prediction. We do not want it to be too high in case we are not very confident, i.e. when enough data is not yet present for our surrogate model. We do not want it to be too low, since the entire optimisation process will then become time-consuming; hence, we need to tune this parameter. The surrogate model is trained by accumulating data from all the swarms, i.e. the input x_{i,s} and associated true-fitness F_{i,s} pairs, where s represents the particle and i represents the swarm. In order to benefit from the surrogate model in the optimization process, it is crucial to manage the surrogate training and surrogate usage. We cannot use the surrogate from the very beginning of the optimization, as its predictions will be close to random, and we also cannot wait too long, as the computational efficiency will be affected. Hence, to manage the surrogate training and its use, an important hyperparameter ψ is used, which refers to the surrogate interval measured in number of generations. ψ is the interval over which the collected data is used to train the model. The updated surrogate model is used until the next interval is reached, and the knowledge in the surrogate model is refined in the next stage (defined by ψ); this can be seen as a form of transfer learning. The collected input features (Φ), combined with the true fitness values (λ), create the training set Θ for the surrogate model:

Θ = [Φ, λ],   Φ = {x_{i,s}},   λ = {F_{i,s}},   i = 1, …, M   (3)

where x_{i,s} represents a given particle s from swarm i, F_{i,s} is the corresponding true fitness, and M is the number of swarms. Therefore, the surrogate training dataset (Θ = [Φ, λ]) is made up of the input features (Φ) and responses (λ) for the particles collected during each surrogate interval of ψ generations. The pseudo-fitness is then given by ŷ = f̂(x), where f̂ is the surrogate model trained on Θ.

Surrogate-assisted Framework

In Algorithm 1, we present further details about our framework that features surrogate-assisted optimisation using distributed swarms.
We implement the algorithm using distributed computation over CPU cores, as shown in Algorithm 1. The manager process is shown in black, where inter-process communication among swarms takes place and parameters are exchanged at regular intervals (given by φ). Furthermore, the surrogate model is also trained at regular intervals (ψ). The parallel swarms of the distributed framework are highlighted in pink in Algorithm 1.

Stage 0 features the initialization of particles in the swarm. We begin the optimisation process by initialising all the swarms (Stage 0.1) in the ensemble with random real numbers in a range as required by the optimisation function (model).

Once the swarms are initialised, we begin the evolution (optimisation) by first evaluating the particles in the swarms using the fitness (objective) function. Hence, we iterate over the surrogate interval (ψ) and evolve each swarm for φ generations, both user-defined parameters. We then update the best particle and best fitness for each of the respective swarms in the ensemble. Once these basic operations are done, we begin the evolution, where we create a new set of swarms for the next generation by velocity and position updates (Stage 1.1).

The crux of the framework is deciding whether to evaluate the fitness function (true fitness) or to use the surrogate model of the fitness function (pseudo-fitness) when computing fitness values. Stage 1.2 shows how to update the fitness using either the surrogate or the true fitness of the particle, depending on the interval and S_prob. This is not done until the very first surrogate interval has been reached; before that, all fitness evaluations come from the true function. In Stage 1.3, we calculate the moving average of the past three fitness values for a particular particle, F_past = mean(F_{g−1}, F_{g−2}, F_{g−3}), to combine with the surrogate model prediction (Stage 1.4). This is done so that we incorporate the recent history of the true model fitness with the surrogate-based estimated fitness, which is motivated by the autoregressive moving average (ARMA) model. Note that, if present, a surrogate-estimated fitness will also be considered as part of the past three fitness values. In that case, additional error will be added; however, this is further averaged with true values, as there cannot be two surrogate fitness values among them. In Stages 1.5 and 1.6, we calculate the actual fitness and save the values for future surrogate training. Our swarm particle update depends on x_pbest and x_gbest, and due to poor surrogate performance, some of the weaker particles can get higher fitness scores. In order to avoid this issue, we ensure that x_pbest and x_gbest come from true fitness evaluations. In Stage 2.1, at a regular interval (every φ generations), we prepare an exchange of selected particles with neighbouring swarms via elitism, where we replace a given percentage of weak particles (ranked by fitness values). β is a user-defined parameter that determines how many of the swarm particles are exchanged. We give recommendations for the value of β in the design of experiments. We select only 20% of elitist values, as we do not want the majority of the swarms to become similar, and we wish to maintain diversity.
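The decision in Stages 1.2–1.4 — whether to query the expensive model or the surrogate, and how to blend the surrogate prediction with recent fitness values — can be sketched as follows. This is a simplified illustration under our own assumptions: the `surrogate.predict` interface, the equal weighting of the surrogate prediction and the three-value moving average, and the `archive` list are ours, not the authors' released code.

```python
import random
import numpy as np

def estimate_fitness(x, history, surrogate, true_fitness,
                     s_prob=0.5, surrogate_ready=False, archive=None):
    """Return a fitness value for particle position x (Stages 1.2-1.4).

    history   : list of the particle's most recent fitness values (true or pseudo)
    surrogate : trained model with a .predict(X) method returning pseudo-fitness
    archive   : list collecting (x, F) pairs used later to (re)train the surrogate
    """
    use_surrogate = surrogate_ready and random.random() < s_prob
    if use_surrogate:
        # Stages 1.3-1.4: blend the surrogate prediction with the moving average
        # of the past three fitness values (ARMA-style smoothing).
        f_past = np.mean(history[-3:])
        f_hat = surrogate.predict(np.asarray(x).reshape(1, -1))[0]
        fitness = 0.5 * (f_hat + f_past)   # equal weighting is our assumption
    else:
        # Stages 1.5-1.6: evaluate the expensive model and store data for training.
        fitness = true_fitness(x)
        if archive is not None:
            archive.append((np.asarray(x), fitness))
    history.append(fitness)
    return fitness
```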
We note that before we consider the use of pseudo-fitness, we need to train the surrogate model with training data created from the true fitness. Hence, we need to collect the training data for the surrogate model from all the swarms in the ensemble. In Stage 3.0, the algorithm uses the surrogate training data collected in Stage 1.6 (Θ as shown in Equation 3). In Stage 4, the algorithm trains the surrogate model in the manager process with the data from Stage 3. The knowledge from the trained surrogate model is then used in the fitness estimation, as shown in Stage 1.4. Stage 5.0 implements the termination condition, where the algorithm signals the manager process to decrement the number of swarms alive (active) in order to terminate the swarm process. This is done when the maximum number of fitness evaluations (T_max) has been reached for the particular swarm. We use a neural network-based surrogate model with Adam optimisation [2]. In order to validate the performance of the algorithm, we measure the quality of the surrogate estimate using the root mean squared error (RMSE),

RMSE = sqrt( (1/N) Σ_{i=1}^{N} (F_i − F̂_i)² )

where F_i and F̂_i are the true and pseudo-fitness values, respectively, and N denotes the number of times the surrogate model is used for estimation. Figure 1 provides a visual description of the proposed algorithm, where multiple swarms are executed using the manager process that also controls the particle exchange and surrogate model updates.

Application: Landscape evolution models

Similar to optimizing mathematical functions, in certain problems we are required to evaluate a score using a simulation or other computationally expensive process, such as geoscientific models [48,67,20]. Landscape evolution models (LEMs) are a class of geoscientific models that evolve a given topography over a given time under given geological and climate conditions such as rock erodibility and precipitation [55]. LEMs are used to model and understand landscape and basin evolution back in time over millions of years, showing surface processes such as the formation of river systems and erosion/deposition, where sediments move from source (mountains) to sink (basins) [20]. LEMs help geologists and paleoclimate scientists understand the evolution of the planet and climate history over millions to billions of years; however, there are major challenges when it comes to data. LEMs generally require data regarding paleoclimate processes which is typically unavailable; hence, we need to estimate these quantities with methods such as MCMC sampling [17,15]. There is limited work in the literature where optimisation methods have been used to estimate the unknown parameters of LEMs, which is the focus of this study. We note that LEMs are typically computationally very expensive, with the cost depending on the resolution of the study area (points/kilometre) and the duration of the evolution (simulation) back in time (millions of years). Hence, large-scale study areas can take from hours to days to run a single LEM, even with parallel computing [55]. There is no gradient information in the case of the Badlands LEM used for this study, and hence estimating the model parameters is a challenge.
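Before turning to the Badlands application, the neural-network surrogate and the RMSE check described above can be illustrated with a small PyTorch sketch. The layer sizes, number of epochs, and learning rate below are placeholder choices of ours (the actual architecture is listed in Table 2 of the paper), not the released implementation.

```python
import torch
import torch.nn as nn

class SurrogateNet(nn.Module):
    """Small feed-forward surrogate mapping a parameter vector to a fitness estimate."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),          # single output unit: pseudo-fitness
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train_surrogate(model, phi, lam, epochs=200, lr=1e-3):
    """Fit the surrogate on (Phi, lambda) pairs collected during a surrogate interval."""
    x = torch.as_tensor(phi, dtype=torch.float32)
    y = torch.as_tensor(lam, dtype=torch.float32)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    # RMSE on the training data, matching the quality measure described above
    with torch.no_grad():
        rmse = torch.sqrt(loss_fn(model(x), y)).item()
    return rmse
```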
In order to demonstrate the optimisation procedure, we select problems where a synthetic initial topography has been created using present-day topography, as used in our previous work [17,15]. The selected LEM features a continental margin (CM) problem, chosen with the computational time of a single model run in mind, as it takes less than three seconds on a single central processing unit (CPU). The CM problem's initial topography is selected from the present-day South Island of New Zealand, as shown in Figure 2, and covers 136 by 123 kilometres. We provide a visualization of the initial and final topographies along with an erosion/deposition map for the CM problem in Figure 3. The CM problem features six free parameters (Table 1). The notable feature of these problems is that they model both elevation and erosion/deposition topography. We use the initial topography (Figure 3) and the true values given in Table 1, and run the Badlands LEM by simulating 1 million years to synthetically generate the ground-truth topography. We then create a fitness function with the ground-truth topography and set up experiments so that the proposed optimisation methods can recover the true values. The details of the fitness function are given in the following section.

Fitness function

The Badlands LEM produces a simulation of successive time-dependent topographies; however, only the final topography D_T is used for the topography fitness, since no successive ground-truth data is available. The sedimentation (erosion/deposition) data is typically used to ground-truth the time-dependent evolution of surface process models that include sediment transportation and deposition [55,14]. We adapt the fitness function from the likelihood function used in our previous work, which used Bayesian inference via MCMC sampling for parameter estimation in the Badlands LEM [17]. Ω represents the vector of free parameters, such as the precipitation rate and erodibility, which are independent and are optimised by our proposed PSO-based algorithms. The initial topography is given as a two-dimensional matrix D_{u,v}, where (u, v) corresponds to the location given by the latitude u and longitude v (Figure 2). Hence, our topography fitness function F_topo(Ω) compares the ground-truth topography against f(Ω), the final topography (at final time t = T) simulated by the Badlands LEM, where ν is the number of observations (grid points). The Badlands LEM also produces a sediment erosion/deposition topography at each time frame. We use a selected vector of locations (Figure 3, Panel c) with values z_t at time t simulated (predicted) by the Badlands LEM for a given Ω. The sediment fitness F_sed(Ω) is defined analogously over the predicted and ground-truth sediment values at these locations. The total fitness is the combination of the topography and sediment fitness. Note that the initial topography should not be confused with the initial state of the swarms of the PSO. The initial topography is derived from the present-day topography of the region and hence is not generated via any distribution [17,15].

Experiments and Results

In this section, we present the results of our framework on synthetic benchmark functions and the Badlands LEM. The experiments consider a wide range of performance measures, including optimization performance in terms of fitness score, computational time, and the accuracy of the surrogate model.
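Relating these performance measures back to the fitness function just defined, the sketch below shows one plausible RMSE-style form of the combined topography and sediment misfit. The equal weighting of the two terms and the `run_badlands` helper are our own assumptions; the exact expressions (referred to as Equations 4 and 5 in the results section) are not reproduced here.

```python
import numpy as np

def badlands_fitness(omega, run_badlands, d_true, sed_true, sed_locations):
    """Combined topography + sediment misfit for a parameter vector omega.

    run_badlands  : callable returning (final_elevation_grid, sediment_grid) for omega
    d_true        : ground-truth final elevation grid D_T
    sed_true      : ground-truth sediment values at the selected locations
    sed_locations : list of (u, v) grid indices marking the 'wells' in Fig. 3c
    """
    d_pred, sed_grid = run_badlands(omega)

    # Topography term: RMSE over all nu grid points of the final topography
    f_topo = np.sqrt(np.mean((d_pred - d_true) ** 2))

    # Sediment term: RMSE over the selected erosion/deposition locations
    sed_pred = np.array([sed_grid[u, v] for (u, v) in sed_locations])
    f_sed = np.sqrt(np.mean((sed_pred - sed_true) ** 2))

    # Total fitness: equal-weight combination (assumption), lower is better
    return f_topo + f_sed
```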
Experiment design

We provide the experimental design and parameter settings as follows. We implement distributed swarm optimization using parallel computing and inter-process communication, where the swarms run as separate processes and exchange solutions (particles) using the Python multiprocessing library. The synthetic benchmark optimisation functions are given in Equation 6 (Spherical), Equation 7 (Ackley), Equation 8 (Rastrigin), and Equation 9 (Rosenbrock). In Equation 7, we use the user-defined parameters a = 20 and b = 0.2. Similarly, in Equation 8, a, b, and c are user-defined parameters. Note that these functions are chosen due to their different levels of optimisation difficulty and the nature of their fitness landscapes. The Spherical function is considered a relatively easier optimisation problem since it does not have interacting variables. Ackley and Rastrigin are known to have many local minima, while Rosenbrock is known as a valley-shaped function.

We use M = 8 swarms which run as parallel processes that inter-communicate with each other at a regular interval (ψ = 1). We use the same values for certain hyperparameters (ψ = 1 and φ = 1) to simplify the algorithm. We exchange a subset of particles (the best 20% of the population) with the neighbouring swarm, replacing its worst 20%. This is implemented by setting β = 0.2 in Algorithm 1. We use a population size of 20 particles per swarm, an inertia weight w = 0.729, and social and cognitive coefficients c1 = 1.4 and c2 = 1.4. We determined these parameter values in trial experiments by taking into account the performance on different problems with a fixed number of evaluations. We use the minimum and maximum bounds on the parameters as described in Table 1. We run experiments for 30- and 50-dimensional (30D and 50D) instances of the respective benchmark problems. The dimension of the synthetic functions (e.g., Rosenbrock) is much larger than that of the LEM, since 30D optimization functions are more common in the literature. We set the total number of function evaluations to 100,000 and 200,000, respectively. In the case of the Badlands model, we use 10,000 function (model) evaluations for the CM problem.

We used the PyTorch machine learning library for implementing the surrogate model, which uses a neural network trained with Adam. The surrogate neural network architecture is given in Table 2, where the input dimensions are defined, i.e. for the 30D and 50D instances of the benchmark functions. In a similar way, we extend our approach for optimization of the Badlands model with the CM problem, featuring a 6-dimensional (6D) search space. The first and second hidden layers of the neural network-based surrogate model are shown in Table 2. We note that the output layer of the model for all problems contains only one unit, which provides the estimated fitness score.

Results for synthetic benchmark functions

We first present the results (fitness score) for the different problems using serial (canonical) PSO, distributed PSO (D-PSO), and surrogate-assisted distributed swarm optimisation (SD-PSO), as shown in Table 3. We use a fixed surrogate probability (S_prob = 0.5) and report the mean, standard deviation (std), and best and worst performance over 30 independent experimental runs with different random initialisations of the swarms, as shown in Table 3. Note that lower fitness scores indicate better performance.
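For reference, the benchmark objectives used above (Equations 6–9) have the standard textbook forms sketched below; where the paper does not state a constant (e.g., A = 10 for Rastrigin, c = 2π for Ackley), the usual default is assumed.

```python
import numpy as np

def sphere(x):
    """Eq. (6): separable, unimodal."""
    return np.sum(x ** 2)

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    """Eq. (7): many local minima around a single global minimum at the origin."""
    d = x.size
    return (-a * np.exp(-b * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(c * x)) / d) + a + np.e)

def rastrigin(x, A=10.0):
    """Eq. (8): highly multimodal with a regular grid of local minima."""
    return A * x.size + np.sum(x ** 2 - A * np.cos(2.0 * np.pi * x))

def rosenbrock(x):
    """Eq. (9): non-separable, valley-shaped; minimum at (1, ..., 1)."""
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)
```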
We find a significant reduction in elapsed time for all the problems in Table 3.Note that we added a 0.05 seconds time delay to the respective problems in order to make them slightly computationally expensive to depict real-world application problems (models).When we consider serial PSO with D-PSO and SD-PSO in terms of optimisation performance given by the fitness score, we find that D-PSO improves the PSO significantly for 30D and 50D cases of Rosenbrock, Spherical and Rastrigin problems.We also see improvement in Ackley's problem, but it's not as large as in previous problems.We find the variation in the results given by the standard deviation is lowered highly with D-PSO which shows it is more robust to initialization and has the ability to provide a more definitive solution. Moreover, in comparison of SD-PSO with D-PSO, we mostly get better fitness with SD-PSO, with a reduction of computation time due to the use of surrogates.Despite the use of surrogatebased fitness estimation, we observe that the fitness score has not greatly depreciated.It is interesting to note in Ackley 30 and 50D case, addition of surrogates improved the fitness score.We find a 30% reduction in computational time, it is expected this to increase if we had more time delay (rather than 0.05 seconds), which will be shown in the Badlands LEM experiments to follow. The RMSE of prediction of fitness by the surrogate model is shown in Table 4.We notice large RMSE values for the Rosenbrock problem when compared to the rest.In surrogate prediction accuracy (RMSE) given in Figure 5, we observe a constant reduction in RMSE with intervals (along the x-axis) in Ackley (30D and 50D), and Spherical (30D) model functions.In other problems, the RMSE is lower towards the end, but the trend is not that smooth.We note that in the Rosenbrock function, there exists an interval where our surrogate performs poorly causing a major decrease in accuracy. Figure 6 and 7 provide a visual description of surrogates prediction quality in the evolution process.We show the bar plots for mean values with a 95% confidence interval (shown by error bars) of the actual fitness and pseudo-fitness at regular intervals for different benchmark problems.We notice that the Rastrigin and Ackley problems have a better surrogate prediction with better confidence intervals when compared to the Spherical problem.Finally, we evaluate the effect of the surrogate probability for Rosenbrock and Rastrigin 30D problems.Figure 4 (Panel a) provides a graphical analysis of the effect of the fitness score and computational time (Panel b) given different surrogate probabilities.We observe that the computational time decreases linearly with an increase in the surrogate probability.On the other hand, the fitness score degrades with an increase of surrogate probability (Rosenbrock 30D); however, with an elbow-shaped curve -a trade-off can exist between time and optimization performance (surrogate probability of 0.6).In the case of the Rastrigin 30D, there is not a large difference in loss of fitness (surrogate probability ≤ 0.5) when compared to the Rosenbrock 30D; we note that these problems have distinctly different fitness landscapes which could explain about the difference in the performance. 
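The artificial expense mentioned at the start of this subsection (a 0.05 s delay per evaluation) can be emulated by wrapping a cheap benchmark function; the helper below is our own trivial illustration of that device, not the authors' code.

```python
import time

def make_expensive(f, delay=0.05):
    """Wrap a cheap benchmark so each evaluation costs an extra `delay` seconds."""
    def wrapped(x):
        time.sleep(delay)
        return f(x)
    return wrapped

# Example usage: slow_rosenbrock = make_expensive(rosenbrock)
```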
Results for the Badlands model Finally, we present results for the case of the Badlands CM problem which is a 6D problem.The results for CM problem highlighting our methods (PSO, D-PSO and S-DPSO) when compared to previous approaches (PT-Bayeslands and SAPT-Bayeslands) are shown in Table 5.The results show the computational time and prediction performance of the Badlands model in terms of elevation and erosion/deposition RMSE (using Equations 4 and 5, respectively) given the optimised parameters.The results show the mean and standard deviation from 30 experimental runs from independent initial positions.We see a major reduction in computational time using D-PSO when compared to PSO and find consistent performance in terms of elevation and erosion RMSE.The experiments used a surrogate model at an interval of 10 generations with a probability of 0.5.We observe that the SD-PSO further improved the performance by reducing computational time using surrogates.The RMSE of the estimation of fitness function by the surrogate model when compared to the actual Badlands model is shown in Table 4.We note that the RMSE in this case cannot be compared to the synthetic fitness functions (eg.Rosenbrock) since the fitness function is completely different.In synthetic fitness functions, there is no data whereas in Badlands LEM, we use Badlands prediction and ground-truth topography data to compute the fitness.Figure 5 (Panel e) shows surrogate training accuracy (RMSE) for different surrogate intervals where we observe a constant improvement of performance by the surrogate model over time (surrogate intervals).This implies that the surrogate model is improving as it gathers more data over time. In Figure 8, we show the change in CM topology over selected time-slices simulated by Badlands according to the parameters optimized by S-DPSO.The elevation RMSE in Table 5 considers the difference between ground-truth topography given in Figure 3.We notice that visually the final topography (present-day) in Figure 8 (Panel f) resembles Figure 3 (Panel b).Furthermore, we show in Figure 9 a cross-section (Panel a) for Badlands predicted elevation vs the ground-truth elevation for final or present-day topography.We also show the bar plot (Panel b) of predicted vs ground-truth sediment erosion/deposition at 10 selected locations taken from Figure 3 (Panel c).The cross-section and bar plots show that the Badlands prediction well resembled the ground-truth data, respectively.We obverse that the cross-section (Panel a) uncertainty is higher for certain locations as highlighted.The high uncertainty is in an area of a high slope below sea level which is reasonable given the effect of sediment flow due to precipitation. 
Discussion

We note that the surrogate-assisted method (SD-PSO) estimates the fitness and then performs the velocity and position update in the next generation as done in standard PSO (Equations 1 and 2). The surrogate fitness does not change the structure of the PSO; it is only used as a way to estimate the fitness and hence reduce the computational time. We note that the performance of an optimisation method depends on the nature of the optimisation benchmark problems [36] due to their fitness landscape modality, i.e., unimodal (Rosenbrock, Sphere) and multimodal (Rastrigin, Ackley), and separability (Rosenbrock is considered non-separable and Sphere separable). It is easier to construct optimisation methods for separable functions using a divide-and-conquer approach, which faces challenges in non-separable problems [57,16]. In terms of the results, we find that SD-PSO has done better than PSO and D-PSO (Table 3 and Table 5) for most problems. This could be due to the fitness estimation by the surrogate model, i.e. surrogate-based fitness estimates could have helped the algorithm escape local optima and hence achieve better fitness. In general, the results show that the distributed surrogate-assisted swarm optimisation framework can improve performance by decreasing computation time while retaining optimisation accuracy (fitness).

We highlight that the Badlands model does not provide gradient information regarding the parameters; hence, only gradient-free optimisation and inference methods can be used. In our previous work, MCMC sampling (PT-Bayeslands and SAPT-Bayeslands [14,19]) was used, where the parameter inference was implemented via a random-walk proposal distribution with MCMC replicas running in parallel. In this paper, the results show that the use of meta-heuristic (evolutionary) search operators from particle swarm optimisation provides better search features. The results motivate the use of the proposed methodology for expensive optimisation models, which can include other geoscientific models. Further use of surrogates in larger instances of the Badlands LEM can provide a significant reduction in computational time.

A major contribution of the framework is the implementation using parallel computing, which takes into account inter-process communication when exchanging particles (solutions) during the optimisation process. In our proposed framework, the surrogate training was implemented in the manager process and the trained parameters were transferred to the parallel swarm processes, where the local surrogate model was used to estimate the fitness of a particle when required (Figure 1). This implementation seamlessly updated the local surrogate model at regular intervals set by the user. We note that although no more than eight parallel swarm processes were used, the same implementation can be extended and amended for large-scale problems. We note that when the number of parameters in the actual model increases significantly, different ways of training the surrogate model can be explored. We also note that the number of particles per swarm is strongly dependent upon the problem at hand [41,35].
Another major contribution from the optimisation process for the case of the landscape evolution model is the estimation of the parameters, such as precipitation values in the Badlands model.We note that MCMC sampling methods provide inference whereas, optimisation methods provide an estimation of parameters.The major difference is that we represent the parameters using probability distribution in the case of inference, whereas optimisation methods provide single-point estimates.Through optimisation, we can estimate what precipitation values gave rise to the evolution of landscape which resulted in the present-day landscape.The landscape evolution model hence provides a temporal topography map of the geological history of the region under study.These topographical maps, along with the optimised values for geological and climate parameters (such as precipitation and erodibility) can be very useful to geologists and paleoclimate scientists. Although PSO has been selected as the designated evolutionary optimisation algorithm, the framework has the flexibility to enable the implementation of other evolutionary algorithms depending on the application problem.A wide range of PSO variants exist in the literature with strengths and limitations [66,70,9].In the case when the application involves combinatorial optimisation or a scheduling problem, then an appropriate evolutionary algorithm would be needed. Our experiments considered a simulation where the presentday topography of a selected region was used as the initial topography.The Badlands model ran depicting a million years in time, to simulate successive topographies.However, in the area of LEMs, the focus is generally to understand the environmental and climate history back in time that led to present-day topography.In such cases, we need to estimate the initial topog- raphy (eg. a million years back in time), which can be based on present-day topography.The estimation of initial topography is an optimisation problem of its own and joint optimisation of parameters such as precipitation can make the process very complex.However, this can be addressed in future research. Conclusions and Future Work We presented a surrogate framework that features parallel swarm optimisation processes and seamlessly integrates surrogate training from the manager process to enable surrogate fitness estimation.Our results indicate that the proposed surrogate-assisted optimisation method significantly reduces the computational time while retaining solution accuracy.In certain cases, it also helps in improving the solution accuracy by escaping from the local minimum via the surrogates.Although we used PSO as the designated algorithm, other optimisation algorithms, such as genetic algorithms, evolution strategies, and differential evolution can also be used. In future work, the parallel optimisation process could be improved by a combination of different optimisation algorithms which can provide different capabilities in terms of exploration and exploitation of the search space.The proposed framework can also incorporate other benchmark function models, particularly those that feature constraints and also be applied to discrete parameter optimisation problems which are expensive computationally. Ethical approval The data used in the manuscript is openly available and does not need any ethical approvals. Funding details There are no external funding sources to report. 
Conflict of interest

The authors do not have any conflict of interest with the manuscript and publication process.

Availability of data and materials

We provide an open-source implementation of the proposed algorithm in Python along with data and sample results.

Figure 1: Surrogate-assisted distributed swarm optimisation features surrogates to estimate the fitness of expensive models or functions.

Figure 2: (a) CM initial topography; (b) CM ground-truth topography; (c) CM erosion/deposition map, where yellow dots show the location (well) used for computing the sediment fitness (Equation 6).

Figure 3: The initial and evolved ground-truth topography and erosion/deposition after one million years for the CM problem, taken from [17,15]. The x-axis denotes the latitude and the y-axis the longitude from the location given in Figure 2; the elevation is given in metres and is also shown as a colour bar.

Figure 4: The effect of surrogate probability on fitness score and computational time.

(a) Cross-section reconstruction obtained by optimising the Badlands CM model using S-DPSO. The ground truth represents the actual topography, whereas the Badlands prediction reports the mean topography predicted by S-DPSO over 30 experimental runs; the 95% confidence interval represents the uncertainty. (b) Sediment prediction using S-DPSO, with the ground truth and the mean prediction over 30 experimental runs shown as above.

Figure 9: Prediction cross-section (Panel a) and sediment erosion/deposition (Panel b) with uncertainty given as a 95% confidence interval from 30 experimental runs.

R. Chandra contributed by writing, coding, experiments, and analysis of results. Y. Sharma contributed by coding, experiments, writing, and analysis.

Table 2: Model architecture for the surrogate model.

Table 3: Optimisation results on benchmark functions.
9,501.8
2022-01-18T00:00:00.000
[ "Computer Science", "Environmental Science", "Engineering" ]
Dynamic Occupancy Grid Map with Semantic Information Using Deep Learning-Based BEVFusion Method with Camera and LiDAR Fusion In the field of robotics and autonomous driving, dynamic occupancy grid maps (DOGMs) are typically used to represent the position and velocity information of objects. Although three-dimensional light detection and ranging (LiDAR) sensor-based DOGMs have been actively researched, they have limitations, as they cannot classify types of objects. Therefore, in this study, a deep learning-based camera–LiDAR sensor fusion technique is employed as input to DOGMs. Consequently, not only the position and velocity information of objects but also their class information can be updated, expanding the application areas of DOGMs. Moreover, unclassified LiDAR point measurements contribute to the formation of a map of the surrounding environment, improving the reliability of perception by registering objects that were not classified by deep learning. To achieve this, we developed update rules on the basis of the Dempster–Shafer evidence theory, incorporating class information and the uncertainty of objects occupying grid cells. Furthermore, we analyzed the accuracy of the velocity estimation using two update models. One assigns the occupancy probability only to the edges of the oriented bounding box, whereas the other assigns the occupancy probability to the entire area of the box. The performance of the developed perception technique is evaluated using the public nuScenes dataset. The developed DOGM with object class information will help autonomous vehicles to navigate in complex urban driving environments by providing them with rich information, such as the class and velocity of nearby obstacles. Introduction In urban areas, numerous dynamic and static objects such as vehicles, pedestrians, trees, and guard rails coexist.For safe autonomous driving, it is essential to accurately recognize various objects on the road and make safe decisions.Therefore, it is necessary to integrate environmental perception results from multiple sensors to effectively represent the road environment.One prominent technique among such representation methods is the grid map.The grid map collects sensor information to depict the surrounding environment from a bird's eye view (BEV) perspective in a grid format.Therefore, it is advantageous for representing the position of an object in the surrounding urban environment.Conventional static occupancy grid maps provide only the position information of the objects in each grid.Thus, it is impossible to represent the dynamic state of the objects.To overcome this limitation, dynamic occupancy grid maps (DOGMs) have been developed, which facilitate the dynamic states of objects occupying a grid cell [1].This capability provides a more accurate understanding of the situation by distinguishing between dynamic and static states for each object.In recent years, numerous studies have explored methods for generating dynamic grid maps using various sensors. 
Figure 1a presents the results of generating a DOGM using light detection and ranging (LiDAR).Here, a challenge arises in misclassifying static objects, such as walls, as dynamic objects due to occlusion by nearby objects.This misclassification can potentially lead to erroneous path prediction and decision making during autonomous driving.Thus, the objective of this study is to enhance object detection and recognition capabilities by leveraging camera-LiDAR sensor fusion, as depicted in Figure 1b.In particular, a more accurate classification of dynamic and static states is possible by considering the semantic information of objects.For more accurate velocity estimation, we have developed two different update models to incorporate oriented bounding boxes (OBBs), the outcomes of object detection from deep learning techniques, on the grid map.Virtual points have been generated to represent the OBBs on the grid, and occupancy probabilities and label information have been assigned to the grid cells, including the virtual points.This approach is a contrast to conventional LiDAR-based DOGM generation [1].Yet, the use of virtual points for OBBs, as shown in Figure 1b, allows for a more accurate representation of the shape of a vehicle.The schematic of this study is shown in Figure 2, which is divided into two parts: one using unclassified LiDAR points to build a conventional static occupancy grid map, and the other using camera and LiDAR measurements for deep learning-based object detection to build a dynamic occupancy grid map.The results from the object detection are classified into four movable labels-cars, bikes, pedestrians, and others-in a form of oriented bounding box.Our method assigns occupancy probabilities to these dynamic objects through a particle filter.Meanwhile, LiDAR points that are not classified as movable labels are utilized to construct a static occupancy grid map.As a result, we can achieve robust environmental perception by representing not only dynamic states such as speed and angle, but also static environmental information in a BEV format.Furthermore, this study compares two sensor models for extracting more accurate velocity information from OBBs.The two models differ in how occupancy probabilities are assigned: one uniformly assigns probabilities within the bounding box, whereas the other assigns probabilities only along the perimeter of the box.By comparing the speed estimation results of these two models with different occupancy probability assignment methods, we propose a sensor model that allows for more accurate speed estimation. Our main contributions are summarized as follows: • The robustness of environmental perception is enhanced by developing a DOGM that incorporates deep learning-based object recognition results through the BEVFusion camera-LiDAR sensor fusion method. • More accurate classification of the dynamic and static states of objects is realized by leveraging semantic information with potential dynamic states as input for the DOGM, thereby estimating not only the position and class of each object, but also inferring its velocity. • Performance of the estimation of the speed and direction of dynamic objects is enhanced by using the edge bounding box update model (EBBUM) for potential dynamic objects classified using camera-LiDAR sensor fusion. 
Sensor Fusion of LiDAR and Camera Various sensors are commonly used in autonomous driving, such as cameras, LiDAR, and radar.Cameras have the advantage of extracting a wealth of information about the surrounding environment, leading to extensive research on object detection using cameras.However, a drawback of cameras is their inability to perceive depth from a single camera image. In contrast, a LiDAR sensor offers the advantage of providing precise distance measurements in a 3D format. Recently, numerous studies on deep learning-based object detection methods using sensor fusion have been conducted.Methods for deep learning 3D object detection using cameras and LiDAR can be categorized into early, middle, and late fusion.Early fusion integrates raw data from different sensors.An example is the fusion of raw data from a camera-LiDAR sensor, which creates RGB-D information by combining features at the raw data level.This approach is used to generate feature maps for object detection [2].Middle fusion involves combining features or information extracted at an intermediate step. Although it integrates raw data from each sensor, similar to early fusion, the distinction lies in the use of separate neural networks for fusion.A study on a methodology that operates the LiDAR data voxelization pipeline and the camera pipeline in parallel has been conducted.Subsequently, features from each sensor are fused, considering the disparity in their importance, and are used to refine the final region proposal [3]. Late fusion involves an independent information process, followed by a combination of features obtained from each sensor.Some studies have enhanced the detection performance by leveraging geometric and semantic consistencies prior to the non-maximum suppression stage of the detector [4].As demonstrated by various research efforts, sensor fusion techniques efficiently combine features from different sensors, addressing weaknesses while retaining their strengths, which makes them advantageous for object recognition.Thus, in this study, we perform an object recognition technique using camera-LiDAR sensor fusion to enhance object detection performance. Dynamic Occupancy Grid Maps (DOGMs) A DOGM is an environment perception technique used in robotics and autonomous driving applications.A DOGM has the advantage of not only detecting static obstacles, but also identifying the position and velocity of dynamic objects, allowing safe navigation in dynamic environments.Several studies are currently being conducted on DOGMs, with an emphasis on estimating the states of dynamic and static objects and velocity [5]. Additionally, research has been conducted on the fusion of multiple radar sensors, instead of a single laser.This research also includes the detection of dynamic objects using gridbased object tracking and mapping methods, in conjunction with clustering algorithms [6].Research is not limited to creating DOGMs using only radar sensors; there are ongoing studies on generating DOGMs by employing various sensors.One such study uses inverse perspective mapping images and LiDAR to create DOGMs [7].Efforts are also underway to perform object-level tracking using DOGMs in dynamic environments [8,9].Furthermore, research has been conducted on applying deep neural networks, including recurrent neural networks (RNNs) and deep convolutional neural networks, to DOGMs for estimating the states of dynamic objects [10,11]. 
In general, it is computationally expensive to maintain a large number of particles to represent a complex map.However, as shown in Figure 3, a substantial portion of the surviving particles are drawn towards static objects, such as bushes, trees, and buildings, leading to fewer surviving particles being allocated to dynamic objects.This phenomenon makes it challenging to maintain stable velocity estimation performance for moving objects, resulting in the misclassification of static objects as dynamic and a decrease in the accuracy of object velocity estimation.The class information of objects is valuable for understanding urban environments, particularly in areas with various types of static and dynamic objects.In this context, research has been conducted on integrating maps with semantic information using LiDAR, specifically employing the PointPillar approach for semantic labeling.This study combines spatiotemporal and conditional variational deep learning methods to detect vehicles [12].Another study combines DOGMs and object-level tracking using LiDAR.The tracked objects are labeled with semantic information considering temporal changes [13].Furthermore, research has been conducted on the development of semantic DOGMs using deep learning techniques.One study employed a multitask RNN with LiDAR point clouds as input to predict occupancies, velocity estimates, semantic information, and drivable areas on a grid map [14].Studies have also used stereo cameras, not only LiDAR.One research effort involved extracting semantic and distance information by generating depth maps and using image segmentation to estimate the occupancy grid map [15,16].Another study used monocular RGB images to process semantic segmentation, employing Bayesian filters to fuse occupied grids and estimate semantic grids [17].However, using only a camera to create an occupancy grid map can lead to inaccurate distance information, and relying solely on LiDAR may lead to low object detection performance, making it challenging to discern objects.Therefore, the objective of this study is to enhance object detection performance by using both a camera and LiDAR as inputs, enabling a more accurate understanding of the current situation of the driving vehicle. 3D Object Detection Sensor fusion for object recognition has been extensively researched to enhance object detection capability by compensating for the weakness of each sensor through a combination of multiple sensors [18][19][20].BEVFusion is a method developed for 3D object detection using only a camera or a combination of a camera and LiDAR.This method generates feature maps for both the camera and LiDAR and fuses them for object recognition, thereby maintaining the advantages of both the camera and LiDAR [21].BEVFusion represents the features of cameras and LiDAR sensors in the BEV format, providing a framework for sensor fusion.Therefore, the distance and semantic information are preserved to enhance 3D recognition performance.In this study, BEVFusion utilizes a camera and LiDAR fusion to generate OBBs, which serve as input for DOGMs.This enables the estimation of not only the position and class, but also the velocity information of objects. 
Additionally, an environmental perception map was created using LiDAR points that were not classified as movable labels. The developed grid map classified dynamic and static objects in urban environments while extracting semantic information about dynamic objects, such as vehicles, motorcycles, bicycles, and pedestrians. To find the grids belonging to the OBB generated by 3D object detection, virtual points were created within the bounding box using the coordinates of the four vertices, as illustrated in Figure 4. Each virtual point x_box consists of the x and y coordinates (p_x, p_y), the velocity components (v_x, v_y), the yaw angle (θ), the label information (l), and the centroid coordinates of the detected object (P_c). Similar to how LiDAR point clouds occupy grids, the virtual points within the bounding box assign occupancy probabilities to the DOGM:

x_box = [p_x, p_y, v_x, v_y, θ, l, P_c].  (1)

Using the OBB obtained from deep learning as input to the DOGM, the measurement update model influences the survival of the predicted particles, leading to performance variations in the velocity estimation. Therefore, in this study, we employed two different update models for grid updates: the solid bounding box update model (SBBUM) and the EBBUM. Figure 5a shows the SBBUM, which assigns a uniform occupancy probability to the grids corresponding to the box area, and Figure 5b shows the EBBUM, which assigns occupancy probability only to a specific region corresponding to the box border. The thickness (T) of the border was set to 0.4 m, considering a sufficient number of particles to represent the shape of the vehicle and the grid size of the grid map.
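To illustrate the two update models, the following sketch generates virtual points for an oriented bounding box: the SBBUM variant fills the whole box, while the EBBUM variant keeps only points within a border of thickness T. The point spacing, border value, and function names are assumptions made for this example, not the paper's implementation.

```python
import numpy as np

def obb_virtual_points(center, length, width, yaw, label,
                       spacing=0.2, model="EBBUM", border=0.4):
    """Generate virtual points (p_x, p_y, theta, label) for an oriented bounding box.

    model="SBBUM": points cover the whole box area.
    model="EBBUM": points are kept only within `border` metres of the box edge.
    """
    # sample a regular grid in the box's local frame
    xs = np.arange(-length / 2, length / 2 + 1e-9, spacing)
    ys = np.arange(-width / 2, width / 2 + 1e-9, spacing)
    gx, gy = np.meshgrid(xs, ys)
    local = np.stack([gx.ravel(), gy.ravel()], axis=1)

    if model == "EBBUM":
        # distance of each point to the nearest edge in the local frame
        d_edge = np.minimum(length / 2 - np.abs(local[:, 0]),
                            width / 2 - np.abs(local[:, 1]))
        local = local[d_edge <= border]

    # rotate by the yaw angle and translate to the box centre
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    world = local @ R.T + np.asarray(center)

    return [dict(p_x=float(x), p_y=float(y), theta=yaw, label=label)
            for x, y in world]

# Example: a car-sized OBB; EBBUM keeps roughly the outline of the vehicle.
pts = obb_virtual_points(center=(10.0, 5.0), length=4.5, width=1.8,
                         yaw=np.deg2rad(30), label="car", model="EBBUM")
print(len(pts), pts[0])
```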
Equation (3) calculates the mass of the predicted occupied grid ( m c + (O) , and the label mass of the predicted grid ( m c + (L) .The free mass is determined by taking the smaller of two masses: the predicted occupied mass and the free mass, adjusted by applying the discount factor α(t) over time t.In Equation (4), α(t) denotes the discount factor representing the decrease in confidence over time intervals. In this case, the sum of the particle and label weights for each grid is constrained to be 1.If the sum of the predicted occupied weight (m c + (O)) or the sum of the label weights (m c + (L)) exceeds 1, normalization is performed.Labels (L) for m c + (L) in this study consist of car (l c ), bike (l b ), person (l p ), and other (l o ). (3) • Update During the update step, the combination of the measurement mass m ρ and predicted mass m + is performed using the Dempster-Shafer evidence theory using (5), where the event defined by A and B corresponds to the hypothesis of occupancy, freeness, and uncertainty.K represents the sum of the masses of an empty set defined by mutually exclusive events between Occupancy and Free. The label estimation step defined by ( 6) also employs the Dempster-Shafer evidence theory to combine the measurement and predicted mass of label information. In ( 6), l 1 and l 2 represent the event defined by the hypothesis of the four labels.Here, K represents the sum of the masses of empty labels defined by mutually exclusive events between labels. At time t + 1, occupancy mass (m c t+1 (O)) comprises persistent cells (m c p,t+1 ) and newborn cells (m c b,t+1 ).Persistent cells represent grids that existed in the map at the previous and subsequent steps.Newborn cells denote grids occupied by an object in the t + 1 step that were not occupied in the previous time step.p B denotes a birth probability of the newborn cells.The mass of newborn cells is distributed between the predicted and measured masses through the probability of p B . The occupancy mass of the newborn cell (m c b,t+1 ) is calculated using (7).Similarly, the label mass of the newborn cell (m c b,l,t+1 ) is determined by (8). Equation ( 9) represents the occupancy mass of the persistent cell (m c p,t+1 ), which is calculated by subtracting the mass of the newborn cells from the occupancy mass of the grid. Similarly, (10) expresses the label mass of the persistent cell (m c p,l,t+1 ), which is calculated by subtracting the label mass of the newborn cell from the label mass of the grid. The mass of the persistent cell in a grid is composed of occupancy mass (m c p,t+1 ) and label mass (m c p,l,t+1 ).They are calculated by the weights for each particle (w p,t+1 ) and label ( l p,t+1 normalized by the total number of persistent particles (ν c p ) using (11). Similarly, the weights associated with the occupancy mass (m c b,t+1 ) and the label mass (m c b,l,t+1 ) for the newborn grids are calculated using (12). These weights are normalized by the total number of newborn particles ( ν c b . • Resampling Finally, resampling is performed to generate new particles, reducing the uncertainty of the particle filter and leading to more accurate estimations.In each grid, the weights of persistent and newborn particles are combined, and the particles are resampled on the basis of their weights.New particles are assigned uniform weights.In this step, particles with lower weights are eliminated, and the overall number of particles is maintained by resampling with higher weights. 
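The occupancy update described above follows the standard Dempster-Shafer combination rule over the hypotheses Occupied, Free, and Unknown. The snippet below is a generic illustration of that rule for a single grid cell; the same pattern applies to the label masses, but the paper's exact mass definitions, birth probability handling, and label frame are not reproduced here.

```python
def ds_combine(m1, m2):
    """Dempster-Shafer combination of two basic belief assignments over the
    frame {O (occupied), F (free)}; 'U' denotes the mass on the unknown set.

    m1, m2: dicts with keys 'O', 'F', 'U', each summing to 1.
    """
    # conflict K: mass assigned to mutually exclusive hypotheses
    K = m1["O"] * m2["F"] + m1["F"] * m2["O"]
    norm = 1.0 - K
    occ = (m1["O"] * m2["O"] + m1["O"] * m2["U"] + m1["U"] * m2["O"]) / norm
    free = (m1["F"] * m2["F"] + m1["F"] * m2["U"] + m1["U"] * m2["F"]) / norm
    unk = (m1["U"] * m2["U"]) / norm
    return {"O": occ, "F": free, "U": unk}

# Example: a predicted cell mass combined with a measurement mass.
predicted = {"O": 0.4, "F": 0.1, "U": 0.5}
measurement = {"O": 0.7, "F": 0.1, "U": 0.2}
print(ds_combine(predicted, measurement))
```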
Dataset

The main sensors in the nuScenes dataset include one 32-channel LiDAR, six cameras, and one radar. Table 1 lists the specifications of the sensors in the nuScenes dataset. The dataset contains measurements from the LiDAR, updated at 20 Hz, and six cameras with a resolution of 1600 × 1200 pixels, captured at 12 Hz [22]. The nuScenes dataset comprises scenarios acquired in complex urban areas in Boston and Singapore. It comprises 1000 scenes, each lasting approximately 20 s, and includes labeling for 23 classes such as car, bus, bicycle, motorcycle, pedestrian, and barrier. For validation and analysis, two scenarios were selected: one with a dense forest and pond, and another with multiple objects, including bicycles.

Evaluation Process

In this study, the sensor fusion DOGM provides information about the dynamic and static states and velocity of each object. The algorithm developed in this study is validated by comparing the estimated velocity values from the sensor fusion dynamic grid map with the ground truth velocities annotated in the nuScenes dataset. Furthermore, OBBs contain angle information, and precise angle estimation is important for predicting the paths of objects. By using the OBBs as inputs for the sensor fusion DOGM, not only the velocity but also the direction of each object is estimated. Consequently, this study analyzes the direction error of the estimated box from the sensor fusion dynamic grid map and compares it with the ground truth obtained from the OBBs provided in the nuScenes dataset.

Results and Discussion

5.1. Scenario Involving Diverse Objects Such as Vehicles, Bicycles, and Pedestrians (Scene No. 98)

In this scenario, various types of objects that are commonly found in urban areas, such as cars, pedestrians, and bicycles, are present. There are both dynamic objects, such as cars and pedestrians, and static objects, such as traffic cones. The specific situation involves the ego vehicle traveling in a straight line, followed by a bicycle from behind. In this study, various objects were detected using a 3D object detection method called BEVFusion. Figure 7 shows the results validated on the nuScenes dataset with BEVFusion. The results demonstrate detection performance not only for large vehicles but also for smaller objects such as bicycles and traffic cones. Figure 8 presents the BEVFusion results in the BEV format, showing stable detection even for objects located beyond 50 m. This demonstrates robust detection performance in complex urban environments with multiple vehicles. To further analyze the performance, we compared two cases based on the occupancy state of the bounding boxes, using BEVFusion's 3D object bounding boxes as input for the sensor fusion DOGM.
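As a concrete illustration of the evaluation described above, the snippet below computes the mean absolute speed error and the mean absolute heading error (with angle wrapping) between estimated and ground-truth tracks. The array names and sample values are placeholders, not values taken from the nuScenes annotations.

```python
import numpy as np

def mean_abs_speed_error(v_est, v_gt):
    return float(np.mean(np.abs(np.asarray(v_est) - np.asarray(v_gt))))

def mean_abs_heading_error(yaw_est, yaw_gt):
    # wrap the difference to [-pi, pi] before averaging
    d = np.asarray(yaw_est) - np.asarray(yaw_gt)
    d = (d + np.pi) % (2 * np.pi) - np.pi
    return float(np.mean(np.abs(d)))

# placeholder tracks (m/s and radians) sampled at the grid-map update rate
v_est = [4.8, 5.1, 5.3, 5.0]
v_gt = [5.0, 5.0, 5.2, 5.1]
yaw_est = [0.10, 0.25, 0.40, 0.60]
yaw_gt = [0.12, 0.20, 0.45, 0.55]

print("speed MAE  [m/s]:", mean_abs_speed_error(v_est, v_gt))
print("heading MAE [rad]:", mean_abs_heading_error(yaw_est, yaw_gt))
```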
Figure 9 shows the results of the sensor fusion DOGM using the position and class information from the OBB as input.Figure 9a,c show the results when the SBBUM is used as the OBB input, and Figure 9b,d show the results when the EBBUM is used as the OBB input.The vehicles in Figure 9a,c occupy grids in a solid square shape, whereas those in Figure 9b,d occupy grids on the edge of the square shape.In both cases, stable dynamic and static classifications were observed even in scenarios with multiple vehicles and bicycles.For small objects, such as pedestrians, both cases accurately performed dynamic and static classification.The enlarged circular plots in Figure 9a,c show the velocity estimation of the bike behind the ego vehicle and the ground truth.Because the bike was making a left turn at the moment, the SBBUM had slower convergence to the ground truth than the EBBUM in estimating the orientation of the vehicle.In this scenario, the environment consists of static objects, such as trees and roadside structures.The ego vehicle follows a preceding vehicle that travels straight and then makes a left turn.Furthermore, another vehicle is behind the ego vehicle, and there are stationary motorcycles to the right of the ego vehicle.Figure 12 shows the results of using BEVFusion for object detection.The figure confirms the accurate recognition performance of the employed method for detecting various objects, such as motorcycles and cars, marked over the camera images.Figure 13 shows the BEVFusion results in the BEV format.Figure 13a,c represent the ground truth from the nuScenes dataset, and Figure 13b,d depict the outcomes after applying BEVFusion.The results show stable and reliable recognition of diverse objects. Figure 14 shows a scenario in which other vehicles are traveling in front of and behind the ego vehicle.The OBBs generated by BEVFusion were used to create sensor fusion DOGMs.Figure 14a,c show the results when the SBBUM is used for the OBB input, whereas Figure 14b,d show the results when the EBBUM is used for the OBB input.In both cases, accurate dynamic recognition of objects in front of and behind the ego vehicle was confirmed.The recognition of a stationary motorcycle along the roadside verifies the robust performance of the DOGMs.However, Figure 14a,b reveal that when the EBBUM is used, the accuracy of the direction of the velocity vector is more precise. 
Figure 15a compares predicted and persistent particles when the SBBUM is used, and Figure 15b shows the particles when the EBBUM is used. In the SBBUM's case, the predicted particles within the object bounding box maintain their original direction even when the object rotates. This can lead to delayed convergence to the ground truth, resulting in large orientation estimation errors when the object is making a turn. To improve convergence, the proposed EBBUM in Figure 15b assigns probability only along the edge of the bounding box to favor the survival of the predicted particles aligned with the direction of the OBB measurement. Consequently, particles predicted in the direction of the original motion are eliminated because the EBBUM assigns a low probability mass inside the bounding box. However, particles predicted in the direction of object rotation are more likely to survive, resulting in better accuracy of the velocity estimates for rotating objects. Therefore, the EBBUM is expected to provide a more accurate direction estimate of an object than the SBBUM, especially when the object is making turns. Figure 17 compares the direction estimations between the SBBUM and EBBUM. The SBBUM exhibited small errors in estimating the direction at the beginning. However, after 4.5 s, the vehicle following the ego vehicle started to rotate, and the velocity estimation results of the EBBUM converged faster to the ground truth direction than those of the SBBUM. In this result, the SBBUM had a mean absolute error of 0.153 radians, whereas the EBBUM reduced the error by 37%, resulting in 0.097 radians. This reduction in error is attributed to the faster convergence of the direction estimates to the ground truth during the turn.

Conclusions

In this study, we propose a robust perception method for urban environments, based on state-of-the-art DOGM technology combined with semantic information obtained from the BEVFusion results of camera-LiDAR sensor fusion. Using the Dempster-Shafer theory of evidence, a method of updating the occupancy and class information of a grid map has been developed, as well as the velocity information, which is important for an autonomous vehicle with uncertain sensor information to navigate in an urban environment where different types of dynamic and static objects coexist.
OBBs are outputs commonly used as fusion results of LiDAR sensors and cameras. To employ an OBB as measurement input to a DOGM, virtual points are assigned to the grid cells occupied by the OBB. Furthermore, two different update models are employed for the sensor fusion-based DOGM using OBBs: one assigns the occupancy probability to the entire grid area covered by the OBB (SBBUM), and the other assigns the occupancy probability only to the edges of the OBB (EBBUM). The speed and angle estimates are employed as evaluation metrics to compare the mean absolute errors from the ground truth provided by the public nuScenes dataset. In the case of the SBBUM, the speed errors for vehicles and bicycles were 0.4945 and 0.581 m/s, respectively. For the EBBUM, speed estimation errors of 0.5037 and 0.599 m/s were obtained for vehicles and bicycles, respectively. Both methods exhibited approximately similar performance in speed estimation. However, there was a significant improvement in estimating the directions of dynamic objects, especially during the scenario in which the moving directions of objects changed. During this scenario, the mean absolute heading angle error for the SBBUM was 0.153 radians, whereas that of the EBBUM was 0.097 radians. The proposed EBBUM showed a better angle estimation performance because the particles predicted in the direction of object rotation are more likely to survive.

In conclusion, the proposed method can be used to incorporate different results from deep learning studies to construct a map of the surrounding environment with uncertainty. Because the constructed grid map is from a BEV perspective, it can also be used for motion prediction of nearby dynamic objects or path planning of the ego vehicle.

In a future study, we plan to employ the outputs from different deep learning-based classification methods, such as semantic segmentation, to construct DOGMs. This extension will allow the DOGM to contain richer information about nearby environments, both static and dynamic.

Figure 1. Example of a DOGM using (a) only LiDAR measurements and (b) LiDAR and semantic information.

Figure 6. Configuration of semantic occupancy information in each cell, used to assign the occupancy mass m^c_t(O) and label mass m^c_t(L) using (2).

Figure 10. Comparison of the estimated speed of a bicycle following the ego vehicle in nuScenes scenario 98 with the ground truth speeds provided by the nuScenes dataset. The SBBUM exhibited a mean absolute speed error of 0.581 m/s, whereas the EBBUM exhibited an error of 0.599 m/s. Both models estimated speeds accurately compared with the ground truth values from the nuScenes dataset. Figure 11 compares the angle estimation of the bicycle following the ego vehicle with the ground truth; the angle estimation has a mean absolute error of 0.227 radians for the SBBUM and 0.135 radians for the EBBUM. The proposed EBBUM has a significantly lower angle estimation error than the SBBUM.
Figure 16. Comparison of the estimated velocity of the vehicle following the ego vehicle in nuScenes scenario 272 with the ground truth velocity. The graph shows the velocity estimation results for both the SBBUM and EBBUM cases, along with the ground truth velocity provided by nuScenes. Compared with the ground truth velocity from nuScenes, case (a) had a mean absolute speed estimation error of 0.4945 m/s and case (b) had a mean absolute speed estimation error of 0.5037 m/s. Both cases initially exhibited differences in speed estimation because of the convergence times of the particles; however, over time, the velocity errors between the ground truth and the sensor fusion DOGMs decreased.
6,790
2024-04-29T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Direct Neural Network Adaptive Tracking Control for Uncertain Non-Strict Feedback Systems With Nonsymmetric Dead-Zone In this paper, combined with the approximation of neural network, a novel direct adaptive alleviating tracking control algorithm is presented for a class of non-strict feedback uncertain nonlinear systems. Here, both nonlinear uncertainties and nonsymmetric dead-zone inputs are considered. First, according to some coordinate transforms and variable separation methods, the non-strict feedback form is converted into the normal form. Second, the relationship of state vector and error functions are established, and the inputs of dead-zone are compensated with adaptive approaches. This novel direct scheme assumes that the approximation error and optimal approximation norms of NN are to be bounded by unknown constants and can alleviate the number of online adjusted parameters so as to improve the robust control performance of the systems. At last, under Lyapunov theorem analysis, the uniformly ultimately boundness of all the signals in the closed-loop systems can be guaranteed and the dead-zone inputs can be compensated, the effectiveness of this algorithm is well demonstrated by simulation results. I. INTRODUCTION During the last decades, stability theory for uncertain systems with nonlinearities were discussed constantly [1]- [11], diverse adaptive approximation-based fuzzy or NN control schemes have been designed for uncertain systems with nonlinearities [5], [9]- [19]. Note that many of the these mentioned approximation-based fuzzy [7]- [10], [14], [17] or NN [3], [5], [6], [11]- [13] approaches were based on strict-feedback uncertain nonlinear systems [5], [12], [13] or pure-feedback nonlinear systems [8], [14]- [16], rather than uncertain non-strict feedback systems. In fact, the functions of non-strict feedback uncertain systems contain all the state variables of the system, that is to say, the above two structures strict feedback and pure-feedback forms are included in the The associate editor coordinating the review of this manuscript and approving it for publication was Choon Ki Ahn . non-strict feedback ones. So, the non-strict ones are more challenge and general for practical control systems. Recently, many adaptive researches and control strategies based on backstepping techniques and approximation of fuzzy or NN have been proposed for non-strict feedback uncertain nonlinear systems [17]- [23]. Combing with input saturation and output constraint, [17] discussed fuzzy control for non-strict feedback systems. [18] extended NN scheme to the non-strict feedback with backlashlike hysteresis uncertain systems. Considering a class of discrete-time systems, [19]established states NN reinforcement learning adaptive control approach. Based on finite-time adaptive control approaches, [20], [21] analyzed fuzzy states feedback control and output feedback dynamic surface control for non-strict feedback respectively. [22] extended NN adaptive command filter control to stochastic time-delayed systems with unknown input saturation. Neural control methods for VOLUME 8, 2020 This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ full-state constraints and unmodeled dynamics in non-strict feedback uncertain systems are designed [23]. But, many of these papers did not consider the unknown dead-zone inputs, especially for the more complex uncertain nonlinear nonsymmetric dead-zones. 
Unknown dead-zone input as one of the nonlinearities often occurs in the process of the practical engineering, which is a source of instability and limitation of performance of systems. Recently, the investigations of input dead-zone has attracted a great deal of attention [24]- [30]. Decentralized control for large-scale systems with actuator faults and tracking control for switched stochastic actuator deadzone systems were discussed in [24], [25]. [26] studied the non-backstepping VUFC algorithm for pure-feedback form. Based on switched nonlinear systems, [27] [28] extended time-varying tan-type barrier Lyapunov function adaptive fuzzy control and adaptive neural quantized control for states constrained systems and MIMO asymmetric actuator systems. Adaptive neural control [29] and fuzzy decentralized control [30] were proposed for unknown control directions systems and strong interconnected nonlinear systems in unmodeled dynamics. Based on robust optimal control method, [36] discussed the event-triggered physically interconnected mobile Euler-Lagrange systems. Although many researchers have extensively studied for non-strict feedback for nonlinear systems or for systems with unknown dead-zones, to the authors' best knowledge, very few investigators concentrated on non-strict feedback systems with uncertain nonlinearities and non-symmetric unknown nonlinear dead-zone inputs, and many adaptive parameters need to be adjusted in the recursive process of these backstepping or approximation-based approaches, due to updating parameters of NN optimal weight vector or the optimal approximation vector of fuzzy logic systems, which would affect the systems control performance and the online computation burden. As far as we know, for non-strict feedback nonlinear systems, no reports on novel alleviating computation NN control approach in the literature can be found. All of these motivate this paper. Motivated by the above considerations, aiming at alleviating the computation, this paper consider a novel adaptive NN tracking control for a class of non-strict feedback systems with nonsymmetric dead-zone inputs. Neural networks (NN) are utilized to approximate the unknown nonlinearities and nonlinear functions, and a robust NN state-feedback tracking control method is developed in the framework of backstepping design technique. This approach can not only compensate the effect of the non-symmetric dead-zone inputs but also improve the robust performance of the system by updating estimations of unknown bounds. Compared with the related existing literature, the main advantages and contributions of this paper proposed are listed below. 1) This established control scheme can compensate nonsymmetric dead-zone inputs, uncertainties and solve the problems of included non-affine structure states non-strict feedback simultaneously. Although the previous results in [17]- [23] also studied the same control design problem for non-strict feedback nonlinear systems, they do not consider uncertain non-symmetric dead-zones and have computing burden problem. 2) Based on NN novel alleviating computation control approach, at each design step, F-norm parameters and unknown constants are used to approximate the bound of optimal weight vector of NN and the approximation error. Thus, this approach needs to adjust only one parameter rather than the elements of the optimal approximation vectors of NN. 
As a result, compared with the traditional back-steppingbased and approximation-based scheme for nonlinear systems [4]- [14], [17], [23], [27], [29], [37], [38], the approach needs to adjust fewer parameters and the computational burden is significantly alleviated. The rest of this paper is organized as follows. Preliminaries and problem formulation and are explained in Sect. 2. A novel adaptive NN tracking control design procedure is presented in Sect. 3. Simulation is demonstrated in Sect. 4 to illustrate the availability of the approach, Sect. 5 gives the conclusion. II. PROBLEM STATEMENTS AND PRELIMINARIES A. PRELIMINARIES FORMULATION AND SYSTEM DESCRIPTIONS In this paper, we focus on a class of uncertain nonlinear timevarying non-strict feedback systems with unknown nonlinearities and non-symmetrical dead-zone inputs as follows: where the non-symmetrical dead-zone with input v(t) and output u(t) as shown in Fig. 1, and the dynamic model of unknown non-strict feedback dead-zone nonlinear systems [26] can be described as: (2) 220796 VOLUME 8, 2020 ∈ R n , and y ∈ R are the state vector and output of the systems respectively. u(t) ∈ R is the input of the system (output of the dead-zones ); v(t) ∈ R is the input to dead-zone. In this paper, f i (·), i = 1, 2, . . . , n and g i (·), i = 1, 2, . . . , n are unknown smooth nonlinear functions with f i (0) = 0, g i (0) = 0; i (·), i = 1, 2, . . . , n are smooth uncertain disturbance. m r (·), m l (·) for dead-zone are unknown nonlinear smooth functions; b r , b l represent unknown right and left slopes of dead-zone and dead-zone breakpoint parameters respectively. The control objective is to design robust adaptive NN controllers v(t) for the non-strict feedback systems (1), such that the following can be observed: 1) The system output y(t) = x 1 can track the desired trajectory reference signal y d (t) very small; 2) All the signals in the closed-loop systems are uniformly ultimately bounded. Where y d (t) and its kth order derivative y (k) d (t) (k = 1, 2, . . . , n) are assumed to be bounded and continuous. Similar to [32], [32], to facilitate control system design, we need the following Assumptions for the dead-zone of the control problem investigated in this paper. Asusmption 1 [26], [28]: The dead-zone outputs u(t) is assumed to be not available and the parameters b r and b l are assumed to be unknown constants, but their signs are known, i.e., b r > 0 and b l < 0. Remark 1: As stated in [25], [27], [32], [33], this non-strict feedback nonlinear model with unknown dead-zone input is a typical model for a hydraulic servo valve or a servo motor in many practical industrial mechanical processes. However, many results in these papers were based on traditional backstepping technique as well as the approximation features of FlSs or NN [17], [23], as we known that in the recursive process of these approximation and backstepping-based approaches, as the order increased, the design procedure can cause 'explosion of complexity' [25], [27], [33], many adaptive parameters were needed to be adjusted [29]- [33] even together with dynamic surface control (DSC) method [12], therefore, the online computation burden is rather heavy, especially in dealing with MIMO or non-strict feedback nonlinear systems [17], [23]. Different from these results [29]- [33], or the optimal control method to compensate the dead-zone [36], in this paper, we will explore a direct novel alleviating computation NN control method for nonlinear non-strict feedback systems. 
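Since the dead-zone model (2) is not reproduced in full above, the sketch below implements the usual form of a nonsymmetric dead-zone with break-points b_r > 0 > b_l and growth functions m_r, m_l: the output is m_r(v) to the right of b_r, m_l(v) to the left of b_l, and zero in between. The particular growth functions and break-point values are assumptions chosen only for illustration; in the paper they are unknown and handled by the adaptive design.

```python
import numpy as np

def dead_zone(v, b_r=0.5, b_l=-0.6,
              m_r=lambda v: 1.2 * (v - 0.5),    # assumed right growth function (zero at b_r)
              m_l=lambda v: 0.8 * (v + 0.6)):   # assumed left growth function (zero at b_l)
    """Nonsymmetric dead-zone: zero output inside (b_l, b_r), nonlinear growth outside."""
    if v >= b_r:
        return m_r(v)
    if v <= b_l:
        return m_l(v)
    return 0.0

# The controller output v(t) passes through the dead-zone before acting on the plant.
for v in np.linspace(-2.0, 2.0, 9):
    print(f"v = {v:+.2f}  ->  u = {dead_zone(v):+.3f}")
```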
Asusmption 2 [26]: Assume that the dead-zones' left and right growth functions m r (·), m l (·) are smooth, and there exist unknown positive constants k l0 , k l1 , k r0 , k r1 , such that In general, for convenience, m r (v(t)) and m l (v(t)) in above Eqs. are assumed to be true for v(t) ∈ (−∞, m l ] and for v(t) ∈ [m r , +∞) respectively. According to the differential mean value theorem, there exist ξ l ∈ (−∞, b l ] and ξ r ∈ [b r , +∞) such that Now define vectors (t) and (t) as follows: and where Based on Assumption 2, the dead-zone model (2) can be redefined as follows: d(v) can be calculated from Assumption 2 and above equations: where Asusmption 3: Assume that the signs of g i (x i ), (i = 1, 2, . . . , n) are known, and there exist positive parameters g i0 and g i1 , satisfying Remark 2: In this paper, dead-zone output u(t) is assumed to be not available, parameters b l and b r are assumed to be unknown but with b r > 0 and b l < 0 [33]- [35]. In addition, according to Assumption 2, we conclude that |d(v)| p * , and p * is an unknown positive constant and can be chosen as p [35]. B. RADIAL BASIS FUNCTION NEURAL NETWORK(RBF NN) In this paper, we will exploit RBF neural networks to approximate the unknown nonlinearities for system (1). Such as, an unknown smooth nonlinear function ψ(Z ) : R → R will be approximated on a compact set by the following RBF neural network . . , m are constant vectors called the center of the basis function, and η > 0 is a real number called the width of basis function. As pointed out in [5] and [6], according to the approximation property of the RBF network, for a continuous realvalued function ψ(Z ) : → R, is a compact, and any ς H > 0, by appropriately choosing i ∈ and η, i = 1, . . . , m, for some sufficiently large integer m, there exists an ideal weight vector W * ∈ R m such that the RBF network W T ξ (Z ) can approximate the given function ψ(Z )with the approximation error bounded by ς H . Remark 4: In this paper, based on F-norm approximation of NN, the proposed direct novel alleviating NN tracking control could algorithm guarantee that the adaptive adjusted parameters here are only one no matter how many states in the design procedure. Thus, this new approach can alleviate the online computation burden and improve the robust control performance. III. ADAPTIVE ROBUST RBF NN CONTROL DESIGN AND PERFORMANCE ANALYSIS Different from the similar backstepping-based results in feedback form with unknown dead-zone inputs in [1]- [6], [12]- [16], [29]- [34], [38], in this section, we will discuss a novel alleviating computation adaptive NN approximationbased tracking control approach in details for the nonlinear non-strict feedback plant in (1). The concrete design procedure contains n steps. First, from step 1 to step n − 1, virtual controllers α i and adaptive lawsθ i ,δ i , (i = 1, 2, . . . , n − 1) will be constructed, in step n, actual controller v(t) will be designed to ensure that the whole system is stable and the adaptive lawsθ i ,δ i will be given in the following design procedure. A. ADAPTIVE NN DESIGNING PERFORMANCE The coordinate transformation is given as follows: where α i−1 (i = 1, 2, . . . , n) are virtual controllers, which will be determined in i − 1th steps. To make the system achieve the desired performance, the system (1) is considered to be a series of subsystems. 
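As a concrete reading of the RBF network described above, the sketch below evaluates W^T xi(Z) with Gaussian basis functions xi_i(Z) = exp(-||Z - c_i||^2 / eta^2) and fits the weight vector to a sample function by least squares. The centres, width, and target function are assumptions made for illustration; in the paper the ideal weights are not fitted offline but are bounded and handled through the adaptive laws.

```python
import numpy as np

def rbf_features(Z, centers, eta):
    """Gaussian basis functions xi_i(Z) = exp(-||Z - c_i||^2 / eta^2)."""
    Z = np.atleast_2d(Z)                                       # (N, d)
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances to centres
    return np.exp(-d2 / eta ** 2)                              # (N, m)

# Approximate an unknown smooth function psi(Z) on a compact set (1-D here for simplicity).
psi = lambda z: np.sin(z) + 0.3 * z ** 2          # stand-in for the unknown nonlinearity
centers = np.linspace(-3, 3, 9).reshape(-1, 1)    # m = 9 centres placed on the compact set
eta = 1.0                                         # width of the basis functions

Z_train = np.linspace(-3, 3, 200).reshape(-1, 1)
Xi = rbf_features(Z_train, centers, eta)
W, *_ = np.linalg.lstsq(Xi, psi(Z_train.ravel()), rcond=None)  # weights in a least-squares sense

approx_err = np.max(np.abs(Xi @ W - psi(Z_train.ravel())))
print("max approximation error on the compact set:", approx_err)
```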
Different from designing a fractional order controller, here, based on backstepping design technique, NN approximation and the alleviating algorithm, we will give the detailed feasible virtual control signals controller, NN adaptive laws and actual controller design procedure in the following steps. The first feasible virtual control signal α 1 and adaptation lawsθ 1 ,δ 1 are considered as follows: where parameters c 1 > 0, τ 1 > 0 and γ 1 > 0 are positive design constants to be designed.θ 1 ,δ 1 are adaptive adjusted parameters to be designed later. The ith feasible virtual control signal α i and adaptation lawsθ i ,δ i are considered as follows: where parameters c i > 0, τ > 0 are positive design constants to be designed later.θ i ,δ i are adaptive adjusted parameters to be designed later. Finally, the independent actual controller v(t) and the nth adaptive lawsθ n ,δ n are designed as follows: where parameters c n > 0, τ > 0 are positive constants to be designed later.θ n ,δ n are adaptive adjusted parameters to be designed later. The following four lemmas will be used for control design in this Section. Remark 4: Lemma 4 gives the relationship between x and error signals z i , (i = 1, 2, . . . , n), together with (5), plays an important role in this paper, due to the nonlinear function f i (x) contains the whole state variables in the ith differentiate equation, which cannot be estimated by RBF NN directly. Then, it provide a variable separation approach to decompose the function f i (x) into a sum bounded functions with respect to z i , (i = 1, 2, , . . . , n). The main results are presented by the following theorem. Theorem 1: Consider the closed-loop system with unknown dead-zone input of the plant (1) and (2), the virtual controllers α 1 in (8),α i in (11) and adaptive lawsθ 1 in (9), δ 1 in (10),θ i in (12),δ i in (13),θ n in (15),δ n in (16), and the actual controller v(t) in (14), under Assumptions 1-4. Suppose that for i = 1, 2, . . . , n, the unknown functions H i (Z i ) can be approximated by RBF NN system W T i ξ (Z i ) in the sense that the approximation error ς i is bounded, then based on the bounded initial conditions, according to the Lyapunov stable analysis methods. 1) It can guarantee that all the signals in the closed-loop system are ultimately uniformly bounded(UUB). 2) The output y = x 1 can track the reference signals y d and make sure that the tracking error convergence to a small neighborhood of zero. Proof: There will contain n steps. Step1: Consider the first part in plant (1)ẋ 1 = g 1 (x 1 )x 2 + f 1 (x) + 1 (t). Define the first tracking error variable z 1 = x 1 − y d , and along its trajectory, we haveż 1 Define the first smooth Lyapunov function as follows: will be designed in the following analysis. The time derivative of V 1 iṡ According to Assumption 4 and Lemma 1-4, we conclude that, whereφ 1 (|z j ψ j |) = (n + 1)|ψ j |h 1 ((n + 1)z j ψ j ) VOLUME 8, 2020 And together with Lemma 1, we conclude another inequality. Define the ith smooth Lyapunov function as follows: i will be designed in the following analysis. where The time derivative of V i at t is. Based on Lemma 1, we have , and ε i , σ are positive constants to be designed. Substitute these inequalities (33), (34) intoV n (32), we have: Now, we choose the whole Lyapunov function for the plant (1), V = n i=1 V i , the derivative of V is concluded based on the above analysis. 
where c(n, k) = (n − k + 1)/2 Substituting this equality in to aboveV , we get, To facilitate the adaptive controller design, we will use RBF neural networks to approximate the nonlinearities, now define H 1 (·), H i (·), H n (·) as follows: For H i , (i = 2, 3, . . . , n), we define, Then, substituting H i (Z i ), (i = 1, 2, . . . , n) intoV in (37), we obtain: According to the definition of H i (Z i ) and the Lemma1-4 and Assumption 1-4, it can conclude H i (Z i ) are also smooth functions, then, based on the universe approximation lemma, we can use RBF NN to approximate the unknown smooth function H i (Z i ) on the compact space 1 , and H i (Z i ) can be rewritten as where Z i is the input of the NN system, W * T i and ς i denote the ideal optimal approximation parameter vector and the NN approximator error, respectively. For simplification, we define Throughout this paper, in order to alleviate online approximation parameters, we assume the following Assumption: Assumption 5: Based on the definition of θ i , on the compact i , we assume that the optimal approximation parameter vector W * T i and the NN approximator errors ς i , satisfy: where i = 1, 2, . . . , n, parameters θ i 0 and δ i 0 are unknown constants. Z i , W * i and ς i will be defined later. θ i 0,δ i 0 will be used to denote estimations of the θ i and δ i respectively. Throughout this paper,( ·) =( ·) − (·). Remark 5: There are a lot of significant results regarding adaptive fuzzy or NN control or FNN control algorithms for nonlinear systems with unknown dead-zones. However, many of these approximation control methods go through updating the estimations of each optimal parameter of FLSs [7]- [11], [20]- [23] NN, FNN directly, resulting the heavy online computation burden due to the rules of fuzzy, the hidden nodes of NN, or FNN are rather large generally. In this paper, Assumption 5 relaxes the conditions that the approximation errors or external disturbance are bounded with only unknown constants rather than known constants or satisfying square integrable condition. Only estimationsθ i ,δ i , (i = 1, 2, . . . , n, of parameters θ i , δ i need to adaptively adjusted. Thus, this novel proposed approach reduces the adjusted parameters and alleviate the on-line computation burden. According to approximation functions, H i , (i = 1, 2, . . . , n), virtute controllers α 1 , . . . , α n−1 , and actual controller v(t), and the adaptive lawsθ i ,δ i , (i = 1, 2, . . . , n) back intoV . Based on Young's inequalities, we obtain the following inequalities: For i = 1, 2, . . . , n, based on the Lemma 4, we could conclude the following inequalities hold: Similarly, based on the adaptation laws (24), (25) and Young's inequality, we have By substituting inequality (43), (44) back into (40), we acquire, If we choose the appropriate adjusted parameters and constants τ (1) i , τ (2) i , σ i , ε i , ρ (1) i , ρ (2) i , γ (1) i , γ (2) i , p * , d * , θ i , δ i and based on the Assumptions 1-4, Lemmas 1-4 and the RBF NN approximations, together with the virtual and actual controllers, we will have the following inequalities. where µ = min i , 2ρ (2) i } and Then, we obtainV Multiplying both sides of the above Eq. by e µt and it can be rewritten as Then, integrating the above equation over [0, t], we can obtain If we note that 0 < e −µt < 1 and ( /µ ) e −µt > 0, then, we can know the above Eq.holds as and we can conclude that Therefore, it can be shown that all the signals z i ,θ i ,δ i (i = 1, 2, . . . 
, n) in the closed-loop systems (1) are bounded. There exists T > 0, for T > √ 2µ/ , satisfying |z 1 | T for all t T , the tracking error z 1 = x 1 − y d converges to a neighborhood of zero. The proof is completed. Remark 6: Compared with many approximation control approaches, which involve updating the estimations of each optimal parameter of FLSs NN, and FNN directly [2]- [4], [6]- [11], [19], [22], [29]- [31], due to the hidden nodes of NN, or FNN and the rules of fuzzy are rather large generally, which result in the heavy online computation burden. Based on Assumption 5, at each design procedure for each system in this paper, fewer parameters need to be adjusted, we only need to approximate the unknown constant for the norm of the optimal parameter. So, this new approach can improve the robust control performance and alleviate the online computation burden. IV. SIMULATION EXAMPLE In this section, based on a practical one-link robot simulation system and its figure model can be seen in [38], the effectiveness of the presented control technique will be illustrated. The dynamics of one-link manipulator with the inclusion of motor [20], [38] can be described by the following equations: where τ d is the torque disturbance, τ represents the torque produced by the electrical system [20], [38], q is the link position,q is velocity, andq is acceleration. D = 100kg/m 2 is the mechanical inertia. u is the control input used to represent the electromechanical torque. B = 1Nms/ras is the coefficient of viscous friction at the joint,K m = 2NM /A is the back-emf coefficient, H = 0.1F is the armature resistance, N = 10 is a positive constant related to the mass of the load and the coefficient of gravity [38], and M = 20 H is the armature inductance [38]. When we introduce the variable change x 1 = q, x 2 =q, and x 3 = τ , and assume that the system exist unknown disturbance and unknown functions, x = [x 1 , x 2 , x 3 ] T is the state of the system, and y = x 1 is system output, u is the input of the system and the output of the dead-zone. Then, above one-link system can be re-expressed as       ẋ where g 1 (x 1 ) = 1+0.6x 2 1 , f 1 (x) = ((B/D)x 1 +x 2 )x 3 , 1 (t) = exp (−(B/D)(x 1 + x 2 )), g 2 (x 2 ) = (N /D + cos(x 1 x 2 ))x 2 , 3 , and 3 (t) = 1 K m sin(x 3 ). Then, we obtain the following third-order uncertain non-strict feedback nonlinear system with unknown dead-zone input: Choose the initial values x 1 (0) = y(0) = x 2 (0) = 0.5, x 3 (0) = 0.7. The unsymmetrical dead-zone inputs satisfies the dead-zone break points are chosen as: The objective of simulation is to apply the proposed novel adaptive NN tracking control approach for this three-order system, satisfy 1) the whole signals in this closed-loop system are bounded, 2) the output y = x 1 can track the reference signal y d = 0.25 sin(t) very well. Based on the novel adaptive robust NN tracking control approach in Sec3, the designed adaptive NN virtual controller α i adaptive laws θ i , δ i and actual controller v(t) are chosen as follows: where ], (j = 1, 2, . . . , 9). From the above simulation results, it can be clearly shown that the proposed control method can guarantee that all the signals in the closed systems are UUB, the proposed controller design method is effective. Compared with related results [20], [22], [23], [38], we need only one adaptive law to compensate the unknown dead-zones in non-feedback form. This method can reduce the online computing burden and simplify the design procedure considerably. V. 
V. CONCLUSION In this paper, a novel NN-based tracking control approach with reduced parameter adaptation has been proposed for a class of uncertain non-strict feedback systems with both asymmetrical dead-zone inputs and unknown nonlinear functions. Compared with existing results, we consider not only asymmetrical dead-zones but also the non-strict feedback structure. The presented scheme adopts a variable separation technique and an adaptive method to cope with the non-strict feedback structure and the unknown dead-zones, and the unknown functions are approximated by NNs. By using two unknown parameters to represent the bound of the approximation error and the bound of the norm of the optimal approximation vectors of the NN, the number of adjusted parameters is reduced. Furthermore, based on Lyapunov analysis, it has been shown that all the signals in the closed-loop system are UUB and that the tracking error is steered into a small compact set. Finally, simulation results illustrate the feasibility and effectiveness of this approach. In the future, we could extend this method to more complex systems, such as switched or stochastic nonlinear non-strict feedback systems.
6,209
2020-01-01T00:00:00.000
[ "Mathematics" ]
A Composite Likelihood Method for the Analysis of Multivariate Survival Data: An Application to a PBDE Study When dealing with multivariate survival data, featuring the association structure is a key difference from univariate survival analysis. In this paper, we explore the use of the composite likelihood framework to handle multivariate survival data, where only the lower dimensional survival distributions need to be specified. The development allows us to use available modeling schemes for bivariate survival data to characterize association structures of correlated survival times. The inference procedure is based on the pseudolikelihood, which is the product of the lower dimensional bivariate distributions. The proposed estimation procedure is assessed through simulation studies. As a genuine application, we apply the composite likelihood inference procedure to analyze the data from the polybrominated diphenyl ethers (PBDEs) study, where four types of PBDE congeners are available. The associations among the four PBDE congeners, and the relationships between the covariates and the PBDE congeners, are of interest. The results show that there is strong association among the concentrations of the four PBDE congeners, and statistically significant predictors of the concentrations of the four PBDE congeners are identified. Introduction Multivariate survival data arise in many settings where the association among times to events is a key difference from standard univariate survival data. Analysis methods for multivariate survival data may be broadly based on three types of model: marginal models, frailty models, and copula models. In marginal analysis (e.g., Wei, Lin and Weissfeld, 1989), the association among the survival times is ignored, with the focus on deriving point estimates of the marginal model parameters, and a sandwich-type variance estimator is invoked to account for ignoring the association structure. This method is usually applied in regression models to estimate covariate effects, but it is not useful if the association among survival times is of interest. Frailty models (e.g., Clayton 1978) assume that the association of the survival times is attributed to a random effect, and conditional on the random effect, the survival times are treated as independent. The joint survival function can be obtained by integrating out the random effect. Copula models, on the other hand, directly assume the joint survival distribution to be a function of the marginal survival functions. Both frailty models and copula models are useful tools, especially for analyzing bivariate survival data. For example, bivariate frailty models such as the gamma frailty model or Clayton model (Clayton 1978), and bivariate copula models such as the Clayton (Clayton 1978) or Frank models (Frank 1979), are widely used in practice. When the dimension of survival times is larger than two, the specification of the joint distribution, either through frailty models or copula models, as well as inference based on it, is often challenging. It is desirable to have methods that retain the advantages of simple specification of bivariate models and can also be used to handle general multivariate survival data. To this end, we explore the composite likelihood approach (Lindsay 1988; Cox and Reid 2004) to handle multivariate survival data. Composite likelihood inference has now become popular in various settings of multivariate data analysis.
To name a few, Parner (2001) utilized a composite likelihood approach in an adoption study, where the relationships between children and their adoptive or biological parents were postulated through pairwise survival distributions. Henderson and Shimakura (2003) applied the composite likelihood to handle longitudinal count data. Renard et al. (2004) considered the composite likelihood formulation under generalized linear mixed models. He and Yi (2011) explored a composite likelihood method for the analysis of binary data with missing observations. Lindsay, Yi and Sun (2011) investigated a weighted composite likelihood formulation to incorporate the importance of lower-dimensional likelihoods. Varin et al. (2011) provided a survey of recent developments in the theory and application of composite likelihood. Our research is motivated by the PolyBrominated Diphenyl Ethers (PBDEs) study. Polybrominated diphenyl ethers are a class of brominated flame retardants that are widely used in building materials, electronic equipment, and household products, to provide longer escape times and to reduce property damage. Environmental concerns about PBDEs have been raised in the past two decades because of their high lipophilicity and high resistance to degradation processes. PBDEs have been recognized as persistent organic pollutants and have become ubiquitous in the environment and in people (Hites 2004; Loeber 2008; Frederiksen et al. 2009). Health hazards of PBDEs have attracted increasing scrutiny, and prenatal exposure to PBDEs has been found to be associated with adverse neurodevelopment (Costa and Giordano 2007; Herbstman et al. 2010; Bellinger 2013). The association among the PBDEs and the identification of predictors of PBDE concentrations in the human body (e.g., breast milk and blood) are of great interest to health researchers. A study was conducted to characterize predictors of exposure to PBDEs, in which a multi-ethnic, low-income cohort of healthy pregnant women aged 16 to 35 was enrolled from two New York City prenatal clinics between September 2009 and December 2010. An extensive questionnaire including items on demographics, diet, and lifestyle was administered to these pregnant women, and their maternal serum samples were collected during the first half of pregnancy (Horton et al. 2013). While 12 PBDE congeners were measured in blood samples, it is of interest to focus on the most commonly detected PBDE congeners in the analysis, where the PBDE concentrations below the limit of detection were treated as left-censored observations. A previous study examined how the predictors may be associated with each congener marginally, using univariate parametric models with left-censoring accounted for (Horton et al., 2013). Such an analysis basically assumed that the predictors have different effects on the outcomes of the PBDE congeners. The feasibility of this assumption, however, has not been rigorously justified, and furthermore, the association structure among the PBDE congeners has been overlooked. It is our goal here to provide a more comprehensive analysis of the PBDE data. Our objectives specifically include assessment of the level of association among the PBDE congeners, identification of statistically significant predictors, and estimation of their effects on the level of concentrations of the four commonly detected PBDE congeners in blood samples. To this end, we propose to use the composite likelihood formulation to analyze the PBDE data.
We develop an estimation procedure that simultaneously estimates the association parameters and the covariate effects on the response variables. The proposed composite likelihood estimation procedure assumes that each survival "time" (defined to be the concentration of a PBDE congener in the blood) follows a Cox proportional hazards (PH) model, with a parametric Weibull baseline distribution, and that paired survival "times" follow a bivariate Clayton frailty model (He and Lawless, 2003). The parameter estimation is achieved by maximizing the composite likelihood, and the covariance estimates are obtained through a sandwich-type estimator. Although our method is motivated by the PBDE data, it applies to other multivariate survival data as well. Our method has several appealing features. It not only allows association structures to be accommodated in the analysis of multivariate survival data, but also takes advantage of existing modeling schemes. In addition, the implementation is straightforward. The remainder of the paper is organized as follows. Notation and model setup are introduced in Section 2, and the proposed estimation procedure using the composite likelihood formulation is described in Section 3. In Section 4, we conduct simulation studies to assess the performance of the estimation method. In Section 5, the proposed method is applied to analyze the PBDE data. Concluding remarks and discussion are presented in Section 6. Notation and Model Setup Suppose there are n clusters of survival data in the study. For i = 1, ..., n and j = 1, ..., m_i, let T_ij and C_ij be the survival time and censoring time for subject j in cluster i, let δ_ij = I(T_ij ≤ C_ij) be the censoring indicator, and let X_ij be the covariate vector of dimension p, where I(·) represents the indicator function and m_i is the number of subjects in cluster i. Write t_ij = min(T_ij, C_ij) for i = 1, ..., n and j = 1, ..., m_i. Suppose that the marginal regression model of T_ij is characterized by the Cox proportional hazards model, for i = 1, ..., n and j = 1, ..., m_i, where S_ij(·) is the marginal survival function of T_ij, Λ_j0(·) is the cumulative baseline hazard function, and β_j is the vector of regression parameters. In general, both Λ_j0 and β_j can be subject-dependent; in some settings, Λ_j0 and β_j are common across the subjects within clusters. For ease of exposition, we consider Λ_j0(·) = Λ_0(·) and β_j = β for some function Λ_0(·) and parameter vector β, where j = 1, ..., m_i. While the baseline cumulative hazard function Λ_0(t) can be delineated by various methods, here we take a parametric approach with Λ_0(t) modeled by a Weibull form, where λ and γ are positive parameters. Weibull distributions have been widely used in survival analysis due to their convenience and flexibility (Lawless 2003). Different values of γ enable us to describe various types of baseline cumulative hazard functions. Multivariate survival data may be handled with different strategies, which can be distinguished by the way they treat association structures of survival times within clusters. The simplest approach is to deliberately ignore the correlation among survival times within clusters and focus on marginal features of individual survival times (e.g., Wei, Lin and Weissfeld, 1989).
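As a concrete form of the marginal building block just described, the sketch below evaluates the Weibull-baseline PH survivor function and density. It assumes the common parameterization Λ0(t) = λ t^γ; the paper states only that Λ0 is of Weibull form with positive parameters λ and γ, so the exact scale convention here is an assumption, and the example values simply echo the simulation settings used later (λ = 1, γ = 1.5, β = (0.5, log 2)).

```python
import numpy as np

def cum_baseline_hazard(t, lam, gam):
    """Weibull baseline cumulative hazard, assumed form: Lambda0(t) = lam * t**gam."""
    return lam * np.power(t, gam)

def marginal_survivor(t, x, beta, lam, gam):
    """Marginal PH survivor S(t|x) = exp(-Lambda0(t) * exp(x'beta))."""
    return np.exp(-cum_baseline_hazard(t, lam, gam) * np.exp(np.dot(x, beta)))

def marginal_density(t, x, beta, lam, gam):
    """f(t|x) = hazard * survivor, with hazard lam*gam*t**(gam-1)*exp(x'beta)."""
    haz = lam * gam * np.power(t, gam - 1.0) * np.exp(np.dot(x, beta))
    return haz * marginal_survivor(t, x, beta, lam, gam)

# Example using values from the simulation-study settings described later.
x = np.array([1.0, 0.8])
beta = np.array([0.5, np.log(2.0)])
print(marginal_survivor(1.0, x, beta, lam=1.0, gam=1.5))
```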
In contrast, two other methods have been commonly adopted to describe association structures within clusters: one approach is to use frailty models (e.g., Clayton, 1978; He and Lawless, 2003), and the other is to employ copula models (e.g., He and Lawless, 2005) to incorporate association structures into the modeling of survival times within clusters. While these two schemes allow us to account for the potential correlation that is usually present in clustered survival data, they implicitly impose strict assumptions on the association structures. For instance, all the survival times T_ij in cluster i are basically treated equally; with frailty models, they are assumed to be independent conditional on the frailties (and possibly covariates), while under copula models a common parameter vector is used to facilitate the association. To deal with more general practical problems, we propose to model pairwise correlation structures within clusters and leave higher order association structures unspecified. Such an approach enables us to look into a broader range of problems with complex association structures, and commonly applied modeling strategies such as frailty models and copula models are included as special cases. To flesh out our ideas, we employ bivariate Clayton models (Clayton, 1978) to postulate pairwise survival times within clusters, bearing in mind that other bivariate models can be used as well. Specifically, for i = 1, ..., n and j, k = 1, ..., m_i with j ≠ k, the joint survivor function of T_ij and T_ik is given by model (2), which is constructed from the marginal survivor functions in (1), and ϕ is a positive pairwise association parameter. We note that as ϕ → ∞, model (2) corresponds to the scenario where T_ij and T_ik are independent. Tacitly, model (2) assumes that, given the covariates {X_ij, X_ik}, the survival times T_ij and T_ik are not influenced by the other covariates X_il with l ≠ j and l ≠ k. Inference Method Given the marginal model (1) and the bivariate model (2), we are interested in developing estimation procedures for the model parameters, denoted as θ = (β^T, γ, λ, ϕ)^T. Let L^(i)_jk(θ) be the pairwise likelihood of θ contributed by the pair of subjects j and k in cluster i, and let l^(i)_jk(θ) denote its logarithm, which is built from the bivariate model with the appropriate censoring contributions. Define l_p(θ) = Σ_i l^(i)(θ) to be the log pairwise likelihood function of θ, where l^(i) = Σ_{j<k} l^(i)_jk. Inference about θ may then be carried out based on l_p(θ). To conduct inference about θ by using the asymptotic result (4), we must estimate the covariance matrix Σ(θ) using the data. Typically, Σ(θ) can be estimated by the method of moments. Simulation Study Simulation studies are conducted to assess the performance of the pairwise likelihood method developed in Section 3 for estimation of the covariate effects β as well as the parameters (λ, γ, ϕ). Different parameter settings are considered, where we set m = 3. The sample size n is chosen to be 100 and 200 to examine the effect of the sample size on the proposed estimation method. For each parameter setting, 500 samples are simulated. Two-dimensional covariates (X_i1, X_i2) are generated, where X_i1 follows a binomial distribution with success probability 0.5, and X_i2 follows a normal distribution with mean 1 and variance 1. The coefficients of the covariates are fixed at β_1 = 0.5 and β_2 = log 2 for the marginal models. For the baseline cumulative hazard functions, we set the scale parameter λ = 1 and let the shape parameter γ = 0.5 or 1.5, representing a monotone decreasing or a monotone increasing hazard rate, respectively.
Five values of the association parameter ϕ are assumed: ∞ (i.e., survival times are independent), 3, 2, 1/2, and 1/3, ranging from weak to strong dependence among the survival times. Specifically, we simulate three associated survival times from formulae driven by U_i1, U_i2, U_i3, which are independently generated from the uniform distribution Uniform(0, 1). The censoring percentage is achieved by selecting a censoring time C such that the percentage of simulated survival times greater than C roughly equals the required percentage. Specifically, the value of C is set as C_1 = 2.2 for 20% censoring and C_2 = 0.5 for 40% censoring when γ = 0.5, and C_1 = 1.3 for 20% censoring and C_2 = 0.8 for 40% censoring when γ = 1.5. The simulated data are analyzed using the proposed method described in Section 3. The estimation bias (bias), the empirical standard error (SE), the model-based standard error (ME), as well as the coverage rate (CR) of 95% confidence intervals are reported. The analysis results are displayed in Tables 1-5, corresponding to ϕ = 1/3, ϕ = 1/2, ϕ = 2, ϕ = 3, and ϕ = ∞, respectively. Finite sample biases of the estimators λ̂, γ̂, β̂_1, and β̂_2 for the marginal model parameters are very small in all settings. Biases of the association parameter estimator ϕ̂ are small when the true value of ϕ is small (i.e., strong association), and biases tend to increase with increasing values of ϕ. This phenomenon agrees with what was observed in other studies. Model-based standard errors and empirical standard errors of λ̂, γ̂, β̂_1, β̂_2, and ϕ̂ are reasonably close, and coverage rates of 95% confidence intervals are around 95% for all scenarios except the setting of ϕ = ∞, where 95% coverage rates cannot be constructed. As expected, a larger sample size (n = 200) gives better estimation results than a smaller sample size (n = 100). The 95% coverage rate for ϕ̂ is closer to 95% for n = 200 than for n = 100 when the true ϕ is large. Different censoring proportions do not seem to make noticeable changes in the estimation of λ, γ, β_1, β_2; however, for the parameter ϕ (except for ϕ = ∞), an increasing percentage of censoring tends to increase the bias, as expected. In conclusion, the performance of the proposed method is satisfactory; the theoretical results in Section 3 are confirmed by the simulation studies. PBDE Data Analysis The proposed composite likelihood method is applied to analyze the data collected from the PBDEs study. The PBDE data contain observations from 316 pregnant women. The outcome variables include the concentrations of the four most commonly detected PBDE congeners, pb47, pb99, pb100, and pb153, in blood samples. Because of their positivity, the concentrations can be taken as "survival" outcomes, where a 0 value of the concentration is treated as left censored, indicating that the actual concentration is unknown but below the detection limit of the equipment. Most observed PBDE concentrations take small values, while only a few observations take extremely large values. Two extremely large observations of the four PBDE congeners are deleted from the analysis due to possible measurement error. The majority of observations of the congener pb47 are larger than those of the other three PBDE congeners. In order to make a suitable comparison among the four congeners, all the observations are standardized to assume similar ranges, with the original covariance structure retained.
The standardization is implemented by first subtracting the corresponding median value of the congener and then dividing by the corresponding interquartile range for each congener. Table 1 presents the summary statistics of the four PBDE congeners after the standardization. The correlation matrix of the four PBDE congeners remains the same after the standardization. Figure 1 displays the boxplot comparison of the values of the four PBDE congeners before and after the standardization. We further transform the standardized observations by subtracting them from 38 to convert left censoring to right censoring, which is more convenient to handle with survival techniques, where 38 is a number greater than all the standardized observations. Potential predictors include maternal education (MEDUC), race, body mass index categories (BMIC), the number of household electronics (Electronics), current employment status (CurrentWork), servings of solid dairy per week (Soliddairy), income categories (Income), and whether or not any processed meat was consumed (ProcessedMeat). All the potential predictors are categorical, with each category being assigned an integer value. Table 2 displays the categories of each predictor, the frequency, and the corresponding percentage. We employ (k − 1) dummy variables for each categorical predictor to reflect the effects of the different categories, where k represents the number of categories for a predictor. The proposed composite likelihood method is applied to the standardized PBDE data. For comparison purposes, we also apply the proposed method to the original data without standardization. We first build the full model, which contains all possible predictors, and consider the full model as a starting point. We then backward-eliminate insignificant predictors, where the significance level α = 0.1 is applied for variable elimination. Table 3 and Table 4 report the estimated coefficients of the predictors for the final model for the standardized and the original data, respectively. The estimates of the parameters λ, γ, and ϕ are presented in Table 5 for both the standardized and the original data. The large estimated positive values of γ show that the baseline risk increases as the concentration of the PBDE congeners increases. The small estimated values of ϕ demonstrate that the dependencies among the four PBDE congeners are high. From Table 3, for the standardized data, lower education (below the college level) is significantly associated with lower levels of the PBDE congeners. Higher usage of household electronics is significantly associated with higher levels of the PBDE congeners. Non-Hispanic Whites have significantly lower levels of the PBDE congeners compared to African Americans and Hispanics. Obese pregnant women have significantly lower levels of the PBDE congeners relative to underweight pregnant women. Pregnant women whose body weight was normal or overweight also tend to have lower levels of the PBDE congeners relative to underweight pregnant women, but this effect is statistically significant only at the 0.1 level and not at the 0.05 level. From Table 4, for the original data, the significant predictors are different from those obtained from the standardized data. Pregnant women with 5-10 servings of solid dairy per week have significantly higher levels of PBDE congeners than those having fewer than 5 or more than 10 servings. Processed meat consumption is significantly associated with higher levels of the PBDE congeners.
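To make the estimation machinery that produced these results concrete, the sketch below writes out the log-likelihood contribution of a single pair of right-censored survival times under a Weibull-baseline PH margin and a Clayton pair survivor. The parameterizations are assumptions of the sketch rather than transcriptions of equations (1)-(2): the margin uses S(t|x) = exp(-λ t^γ e^{x'β}), and the Clayton pair survivor uses S(t1, t2) = (S1^(-1/ϕ) + S2^(-1/ϕ) - 1)^(-ϕ), one parameterization consistent with the stated limit that ϕ → ∞ corresponds to independence.

```python
import numpy as np

def _surv(t, x, beta, lam, gam):
    return np.exp(-lam * t ** gam * np.exp(x @ beta))

def _dens(t, x, beta, lam, gam):
    haz = lam * gam * t ** (gam - 1.0) * np.exp(x @ beta)
    return haz * _surv(t, x, beta, lam, gam)

def clayton_pair_loglik(t1, d1, x1, t2, d2, x2, beta, lam, gam, phi):
    """Log-likelihood contribution of one (j,k) pair; d = 1 for an observed time,
    d = 0 for a right-censored time."""
    s1, s2 = _surv(t1, x1, beta, lam, gam), _surv(t2, x2, beta, lam, gam)
    a = s1 ** (-1.0 / phi) + s2 ** (-1.0 / phi) - 1.0   # Clayton "kernel"
    if d1 == 0 and d2 == 0:                              # both censored
        return -phi * np.log(a)
    f1, f2 = _dens(t1, x1, beta, lam, gam), _dens(t2, x2, beta, lam, gam)
    if d1 == 1 and d2 == 0:                              # first observed only
        return (-phi - 1.0) * np.log(a) + (-1.0 / phi - 1.0) * np.log(s1) + np.log(f1)
    if d1 == 0 and d2 == 1:                              # second observed only
        return (-phi - 1.0) * np.log(a) + (-1.0 / phi - 1.0) * np.log(s2) + np.log(f2)
    # both observed: mixed partial derivative of the pair survivor
    return (np.log(1.0 + 1.0 / phi) + (-phi - 2.0) * np.log(a)
            + (-1.0 / phi - 1.0) * (np.log(s1) + np.log(s2)) + np.log(f1) + np.log(f2))

# Summing these terms over all pairs within clusters and over clusters gives the
# log pairwise likelihood l_p(theta), which can then be maximised numerically.
x = np.array([1.0, 0.5])
print(clayton_pair_loglik(0.8, 1, x, 1.4, 0, x,
                          beta=np.array([0.5, np.log(2.0)]), lam=1.0, gam=1.5, phi=0.5))
```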
Conclusion and Discussion The usual analysis of multivariate survival data requires the specification of the joint survival function. When the dimension of the responses is more than two, the specification of the joint distribution is often difficult and its plausibility is not easy to check. In this paper, we propose to implement composite likelihood inference, which reduces the burden of specifying the joint distribution and is therefore appealing for handling cases with a large number of responses. In particular, we explore the pairwise likelihood formulation, where only the pairwise survival distributions need to be specified. Though other available bivariate survival models can also be used in our proposed estimation procedure, we use the Clayton bivariate model to delineate paired survival times. One advantage of the Clayton model lies in its interpretability from both the frailty and the copula model perspective (e.g., He 2014). The reliable performance of the proposed composite likelihood method is confirmed through simulation studies. We apply the proposed method to the PBDE data, and the analyses characterize the association among the four PBDE congeners and identify statistically significant predictors for the PBDE congeners. Although our development is motivated by the PBDE data, our method applies to any correlated survival data, including sequential survival data, clustered survival data, and their mixtures, where complex association structures can be facilitated using available bivariate modeling schemes. In our discussion here, we take a common ϕ when characterizing the association for any two survival times. However, this treatment is not an essential requirement for our method, and it can be readily relaxed to handle general correlated survival times which may have different levels of association between different pairs. One may group the survival times into different subfamilies, with each subfamily sharing a common association structure. In these cases the composite likelihood method is still valid with proper modification of the formulation. Copyrights Copyright for this article is retained by the author(s), with first publication rights granted to the journal. This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).
5,189.8
2019-01-14T00:00:00.000
[ "Mathematics", "Medicine" ]
Regression Models for Order-of-Addition Experiments The purpose of order-of-addition (OofA) experiments is to identify the best order in a sequence of m components in a system or treatment. Such experiments may be analysed by various regression models, the most popular ones being based on pairwise ordering (PWO) factors or on component-position (CP) factors. This paper reviews these models and extensions and proposes a new class of models based on response surface (RS) regression using component position numbers as predictor variables. Using two published examples, it is shown that RS models can be quite competitive. In case of model uncertainty, we advocate the use of model averaging for analysis. The averaging idea leads naturally to a design approach based on a compound optimality criterion assigning weights to each candidate model. Introduction The purpose of order-of-addition (OofA) experiments is to identify the best order in a sequence of m components in a system or treatment (Van Nostrand 1995). For example, m herbicides may need to be combined to obtain a herbicide mixture targeting a range of weed species, and it is not clear in which order the mixture components should be added to the tank (Mee 2020). Another example is a medical treatment involving m drugs, where the optimal order of administration needs to be determined (Table 1; Yang et al. 2020). [Table 1 about here] With m components, there are m! possible orders (see Table 1). The full design comprises one run for each possible order. When m is large, only a subset of the possible orders may be tested. With smaller m, replication of some or all orders may be feasible, thus providing an independent estimate of error. The choice of design for OofA experiments has received considerable attention recently. Most authors focus on D-optimal designs, primarily based on combinatorics for design generation, e.g. using block designs or component orthogonal arrays (Yang et al. 2020; Zhao et al. 2020; Huang 2021), whereas other authors proposed purely numerical search strategies (Voelkel 2019; Mee 2020; Winkler et al. 2020). There are currently two popular basic models for analysis, i.e., a model using pairwise ordering (PWO) factors (Van Nostrand 1995) and a model using component-position (CP) factors; which model is preferable will depend on the application. We conjecture, however, that for either model it will be beneficial to have components well spread out across positions over the runs of a design so that the optimal position of each component can be determined. The assumptions underlying the PWO and CP models are certainly plausible in principle, but they may not always be the best possible options. For use of OofA experiments in practice, it is useful to have a number of alternative models for analysis (Buckland et al. 1997). The purpose of this paper is to consider such alternative models and illustrate them using examples. The focus will be on regression models that use the position of components to define regressor variables. In particular, we will propose response-surface (RS) regression models, which can be thought of as accounting for both relative and absolute positions of the components at the same time. The general picture emerging from our review is that there is potentially quite a substantial number of regression models. Hence, we advocate model averaging (Buckland et al. 1997) as a viable analysis option. In the discussion, we will also consider the implications for design.
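As a concrete illustration of the PWO coding referred to above, the sketch below builds the pairwise-ordering regressors for each run of a full m = 3 design: for every pair of components j < k, the factor equals +1 if j appears before k in the run and −1 otherwise. The function and variable names are illustrative, not taken from the paper.

```python
from itertools import combinations, permutations

def pwo_row(order):
    """PWO regressors for one run: +1 if component j precedes component k (j < k), else -1."""
    pos = {comp: idx for idx, comp in enumerate(order)}
    return [1 if pos[j] < pos[k] else -1 for j, k in combinations(sorted(order), 2)]

# Full OofA design for m = 3 components: all m! = 6 orders and their PWO model rows.
m = 3
full_design = list(permutations(range(1, m + 1)))
X_pwo = [pwo_row(order) for order in full_design]
for order, row in zip(full_design, X_pwo):
    print(order, row)
```

The CP model, by contrast, would code for each component the absolute position it occupies in the run, which is the starting point for the response surface models discussed next.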
A second-order response surface model The CP model assumes that the effect of a component c depends solely on its absolute position. It should be pointed out that a regression on the absolute positions p_c (q_c) of the components can be considered as also accounting for relative position. This is because the m absolute positions q_c can be re-expressed by the absolute position of one common reference and m − 1 distances relative to the common reference. For example, taking the m-th component as the reference, we may replace q_c by the position of the reference plus the distance of component c from it. If this replacement is made in (10), we obtain a second-order model in these distances, meaning that the model allows for an optimal position for each c < m relative to the reference component m. Modifications of the PWO model: a nearest neighbour model and a tapered-effects model. Depending on the choice of tapering, the tapered model may not provide much improvement of the tPWO model over the PWO model (1). Criteria for judging the predictive accuracy of a model for a given design When m is large (m ≥ 10, say), even identifying the best order(s) from a fitted model may be computationally far from trivial, and the best strategy may well depend on the kind of model. In this paper, we will focus on the situation when m is small (m < 7, say). Then an obvious strategy is to predict the response for all possible orders and then perform all pairwise comparisons between the predictions in order to find the best one. One may simply use the order with the best prediction. Alternatively, a subset of best solutions could be identified, and the choice substantiated by significance tests. Specifically, Hsu's multiple comparison with the best (Hsu 1996, p.81) could be used to account for the multiple testing problem involved in identifying this subset. This approach is completely general and would work for any regression model. But it will become infeasible even for modest m, e.g. m = 10, where m! = 3,628,800. Even when predictions are not actually computed for all possible orders, however, identification of the best order still comes down, at least implicitly, to comparing the predictions of all possible orders. Thus, regardless of the value of m, in judging the efficiency of a design, it is appropriate to consider the predictions for all m! orders. Let the linear predictor for a design be given by Xβ, where β is the parameter vector, and let the full set of m! orders be represented by the matrix X_f. Thus, we are aiming to predict X_f β. From this, we may obtain the average pairwise variance (apv) of predictions as in Piepho (2019). When apv is to be used in design search, we may set σ² = 1, and the matrix involved needs to be computed only once. We note that the apv bears some resemblance to the I-optimality criterion (Goos et al. 2016), sometimes also denoted as V-optimality (Atkinson et al. 2007, p.143), which focuses on the average variance (av) of predictions over the experimental region. For the full set of m! orders in an OofA experiment this equals the average prediction variance over all orders, whereas (12) uses pairwise differences of predictions; a corresponding computation is done in OPTEX when the option CODING=ORTH is used (Atkinson et al. 2007, p.188). Model averaging Model choice may not be obvious, in which case model averaging (Buckland et al. 1997; Burnham and Anderson 2002, p.450) is a good option for obtaining predictions of X_f β. A model-averaged prediction is given by a weighted sum of the model-specific predictions (Buckland et al. 1997), where f_k denotes the prediction based on the k-th model g_k and the weights are computed from an information criterion I, such as the Akaike Information Criterion (AIC).
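The model-averaging weights just described can be computed in a few lines. The sketch below uses the standard AIC-difference form w_k ∝ exp(−Δ_k/2) from Buckland et al. (1997); since the paper's own weight formula is not reproduced in the text, treat this as the usual convention rather than a verbatim transcription, and note that the AIC values and predictions in the example are placeholders, not the paper's numbers.

```python
import numpy as np

def akaike_weights(aic):
    """Standard AIC-based model weights: w_k proportional to exp(-delta_k / 2)."""
    aic = np.asarray(aic, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

def model_averaged_prediction(predictions, weights):
    """Weighted average of model-specific predictions for each candidate order."""
    predictions = np.asarray(predictions, dtype=float)   # shape: (n_models, n_orders)
    weights = np.asarray(weights, dtype=float)
    return weights @ predictions

# Hypothetical AIC values for, say, PWO / tPWO / CP / RS fits (not the paper's numbers):
aic = [104.2, 103.5, 108.9, 103.1]
w = akaike_weights(aic)
preds = np.random.default_rng(1).normal(size=(4, 24))    # placeholder predictions
print(w, model_averaged_prediction(preds, w)[:5])
```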
The variance of the model-averaged prediction of the i-th order of application can be estimated from the expression in Burnham and Anderson (2002, p. 450). For alternatives to estimate both the weights and the variance of predictions, see Buckland et al. (1997). To evaluate the overall performance of the prediction, (16) may be averaged across the complete set of application orders. Examples In Figure 1, the second-order response surface model (10) is illustrated for m = 3 using the data of Table 1. Circles correspond to design points; the circle filled in red marks the component order with the largest predicted response. [Figure 1 about here] We further consider the data for N = 24 runs and m = 4 anti-tumor drugs in Table 3 of Yang et al. (2020). The fitted models are reported in Table 2. The RS model yields the best fit in terms of the root mean squared error (RMSE) and the Akaike Information Criterion (AIC) (Burnham and Anderson, 2002), whereas the PWO and tPWO models have a smaller apv. The model-averaged predictions for the ten best combinations are shown in Table 3. The RS model has the largest Akaike weight (w_k = 0.410), closely followed by the tPWO model (w_k = 0.376) (Table 2). In this example, it would be hard to choose between these two best-fitting models, so the model-averaged inference is very apt. The ranking of the model-averaged predictions for the ten best combinations agrees rather well with those obtained for the individual models, though there are some rank changes (Table 3). [Tables 2 and 3 about here] A third example is the data given in Table 4. The results in Table 4 show that the RS model fits best, rather better than the second-best CP model. The very large Akaike weight (w_k = 0.99996) for the RS model means that it dominates the model-averaged prediction in this example. The rankings of both the model-averaged prediction and the prediction based on the RS model, which agree perfectly, are rather different from those based on the other models (Table 5). [Tables 4 and 5 about here] Model selection Our review has revealed that there are several candidate regression models. This raises the question of which model should be used for analysis. The answer will very much depend on the application. If the best suitable model is known a priori for the application at hand, both the design and the analysis may proceed considering just that model. But what if model choice is not clear a priori? As regards analysis, one obvious option is to fit all candidate models and pick the best one based on some standard criterion such as an information criterion (Burnham and Anderson 2002) or apv. A better strategy to deal with model-selection uncertainty is model averaging (Buckland et al. 1997), as illustrated briefly in Section 2.5. The set of candidate models can be expanded in many ways. For example, each of the regression models considered in our review may be extended or modified. Mee (2020) suggested extending the PWO model by including interaction terms, e.g. for triplets of components applied in sequence. The tPWO model of Peng et al. (2019) gives rise to a whole family of models depending on the choice of the tapering function h(z). One might, for example, distinguish between components preceding a position and those following it, calling for a directional version of h(z). Conversely, our NN model (11) assumes that direction matters for the neighbour effects. This could be simplified by assuming a non-directional NN model. We do not want to expand this list of examples further, but merely re-iterate that there are potentially many candidate models.
A further option to expand the candidate set substantially is variable selection, e.g. by stepwise regression. This has been considered by various authors, e.g., Mee (2020) and Yang et al. (2020). With some proposed models, such as when higher-order interaction terms are included (Mee 2020), variable selection is a necessity when the number of runs is limited. Again, rather than settling for a single final model, which is a decision that is always subject to uncertainty and entails the risk of over- or underfitting (Burnham and Anderson, 2002; Heinze et al. 2018), model averaging may be a better strategy. It also seems prudent to limit the candidate set by making best use of prior knowledge and to focus on the more parsimonious models. Implications for design generation There is a growing body of literature on optimal design for OofA experiments, much of which focuses on a single model, which is the obvious strategy when the best model is known with certainty a priori. For example, Winkler et al. (2020) and Chen et al. (2020) focus on the PWO model, while Huang (2021) focuses on the CP model. Yang et al. (2020) and Zhao et al. (2020) consider both of these models in finding a design, first constructing component orthogonal arrays. These designs are globally optimal under the CP model. Then, among these designs, they select the best designs in terms of D-optimality for the PWO model. If there is uncertainty as regards the best model for analysis and there is a larger set of candidate models, however, it may not be sensible to optimize the design for particular models. Instead, a compound criterion could be optimized that averages an optimality criterion across a select set of candidate models. This idea has an obvious counterpart in analysis, i.e., model averaging (Buckland et al. 1997) (see Sections 2.4 and 2.5). Thus, for design generation, it seems reasonable to consider two quite different scenarios: (i) the most suitable model is known a priori; (ii) the best model is not known and a set of candidate models will be considered for analysis. In scenario (i), the obvious design strategy is to find an optimal design, e.g. one that minimizes apv, for the known best model. In scenario (ii), a compound criterion covering the set of candidate models may be used to find an optimal design; such a criterion weights the model-specific criteria by a_k, k = 1, ..., K, a set of non-negative weights (Atkinson et al., 2007, p.144; Dykstra 1971). D-optimal designs also usually do quite well for other optimality criteria, including I-optimality (Atkinson et al. 2007, p.153). Donev and Atkinson (1988) showed this for response surface designs. Table footnotes: CP = component position; RS = response surface; NN = nearest neighbour; RS-3 = third-order response surface model (A6); model-averaged predictions use Akaike weights (see eq. 14), with the fitted models including a dummy for the two component orthogonal arrays (batches) of the design. Figure 1. Contour plot for the fit of model (10) in the (p_1, p_2)-plane for the data of Table 1.
3,242.4
2021-01-26T00:00:00.000
[ "Computer Science" ]
Comparative transcriptomics and network analysis define gene coexpression modules that control maize aleurone development and auxin signaling The naked endosperm1 (nkd1), naked endosperm2 (nkd2), and thick aleurone1 (thk1) genes are important regulators of maize (Zea mays L.) endosperm development. Double mutants of nkd1 and nkd2 (nkd1,2) show multiple aleurone (AL) cell layers with disrupted AL cell differentiation, whereas mutants of thk1 cause multiple cell layers of fully differentiated AL cells. Here, we conducted a comparative analysis of nkd1,2 and thk1 mutant endosperm transcriptomes to study how these factors regulate gene networks to control AL layer specification and cell differentiation. Weighted gene coexpression network analysis was incorporated with published laser capture microdissected transcriptome datasets to identify a coexpression module associated with AL development. In this module, both Nkd1,2+ and Thk1+ appear to regulate cell cycle and division, whereas Nkd1,2+, but not Thk1+, regulate auxin signaling. Further investigation of nkd1,2 differentially expressed genes combined with published putative targets of auxin response factors (ARFs) identified 61 AL‐preferential genes that may be directly activated by NKD‐modulated ARFs. All 61 genes were upregulated in nkd1,2 mutant and the enriched Gene Ontology terms suggested that they are associated with hormone crosstalk, lipid metabolism, and developmental growth. Expression of a transgenic DR5–red fluorescent protein auxin reporter was significantly higher in nkd1,2 mutant endosperm than in wild type, supporting the prediction that Nkd1,2+ negatively regulate auxin signaling in developing AL. Overall, these results suggest that Nkd1,2+ and Thk1+ may normally restrict AL development to a single cell layer by limiting cell division, and that Nkd1,2+ restrict auxin signaling in the AL to maintain normal cell patterning and differentiation processes. In most maize lines, AL is a single, epidermis-like, cell layer of the endosperm (Becraft & Asuncion-Crabb, 2000). It features abundant lipid bodies and thickened cell walls (Kyle & Styles, 1977;Zheng & Wang, 2014). Regulated by a balance of gibberellic acid and abscisic acid (ABA) signaling, AL cells produce amylases during germination to digest storage materials in SE for seedling growth (Hoecker et al., 1999). AL also contains abundant indigestible fibers, antioxidants, ferulic acid, and minerals beneficial for health (Lillioja et al., 2013). The cellularization stage begins as anticlinal cell walls form between nuclei arranged around the periphery of the coenocyte. Then periclinal divisions generate a cellular peripheral layer and an alveolar interior layer (Olsen, 2001). Cell division in the peripheral layer is highly ordered with division planes oriented exclusively in either the anticlinal plane, to expand the number of cells in the peripheral layer and accommodate endosperm growth, or in the periclinal plane, to contribute new daughter cells to the endosperm interior Olsen, 2001). The highly ordered peripheral cells develop as AL, whereas the interior cells have random division planes and develop as subaleurone and SE Olsen, 2001). AL development is controlled by several gene networks and signaling pathways (summarized in Supplemental Figure S1). The defective kernel 1 (dek1) gene encodes a transmembrane protein functioning as a mechanosensitive calcium channel with a cytosolic calpain cysteine proteinase domain (Lid et al., 2002;Tran et al., 2017;Wang et al., 2003). 
Loss-of-function dek1 mutants completely lack AL, indicating that DEK1 is required as a positive regulator of AL cell fate (Becraft et al., 2002; Lid et al., 2002). When dek1 mutations were induced somatically at later stages of development in a normal endosperm background, AL cells lost their identity and transdifferentiated to SE cells. Conversely, late-stage somatic reversion of a dek1 mutant to wild type (WT) in the outermost cell layer of the endosperm caused transdifferentiation of SE to AL identity. This highlighted the plasticity of AL cell fate and indicated that positional signaling pathways regulated by DEK1 are active throughout endosperm development (Becraft & Asuncion-Crabb, 2000). Another transmembrane protein, CRINKLY4 (CR4), likewise acts as a positive regulator of AL cell fate. Core Ideas • Transcriptomes were analyzed of thk1 and nkd1,2 mutants involved in maize endosperm development. • thk1 and nkd1,2 genes coregulate cell division processes to control aleurone cell layer number. • nkd1,2 genes specifically regulate auxin signaling pathways in aleurone cell differentiation. In sal1 mutants, endosperm has multiple AL layers, suggesting that SAL1 is a negative regulator of AL cell fate. SAL1 colocalizes with DEK1 and CR4 in endocytic vesicles, suggesting that SAL1 may negatively regulate DEK1 and CR4 through internalization and protein sorting activity (Tian et al., 2007). The thk1 gene is another negative regulator of AL cell fate, and mutants cause multiple layers instead of the normal single layer of AL cells. The thk1 gene encodes a homolog of NOT1, a scaffold protein of the CCR4-NOT complex, a multifunctional regulatory complex that regulates cellular activities through various mechanisms, including promoting transcript turnover by deadenylation (Villanyi & Collart, 2015; Wu et al., 2020). Transcriptomic analysis of thk1 mutant endosperm showed that Thk1+ may regulate cell division, hormone signaling, and stress responses (Wu et al., 2020). Interestingly, thk1 mutants are epistatic to dek1, suggesting that Thk1+ functions downstream of Dek1+ in a signaling pathway that controls AL cell fate. The molecular mechanism of this signaling interaction remains obscure. NKD1 and NKD2 are zinc finger transcription factors (TFs) of the INDETERMINATE DOMAIN (IDD) family, encoded by the syntenic duplicated genes nkd1 and nkd2 (Yi et al., 2015). Double mutants of nkd1 and nkd2 (nkd1,2) show multiple layers of partially differentiated AL cells, indicating that Nkd1,2+ are required to restrict the number of AL cell layers as well as to promote the differentiation of AL cell characteristics (Yi et al., 2015). Nkd1,2+ functions are reminiscent of the Arabidopsis IDD TFs JACKDAW (JKD) and BALDIBIS (BIB). These bind to SCARECROW (SCR) and SCARECROW-LIKE23 (SCL23), functioning to restrict movement of the cell fate determinant SHORT-ROOT (SHR), resulting in a single cell layer of endodermis (Long, Goedhart, et al., 2015; Long, Smet, et al., 2015). Transcriptomic analysis showed that Nkd1,2+ regulate genes associated with cell division, cell differentiation, hormone signaling, starch biosynthesis, and nutrient storage, indicating a complex regulatory network for AL development (Gontarek et al., 2016). Results suggest that Thk1+ might interact with Nkd1,2+ to regulate endosperm and AL development (Wu et al., 2020). Triple mutants of nkd1,2 and thk1 showed more AL-like cell layers than either the nkd1,2 or thk1 mutant alone, suggesting additivity in regulating the number of AL layers.
However, the cells showed the partially differentiated AL characteristics of the nkd1,2 mutant, indicating that nkd1,2 was epistatic to thk1 with regard to AL cell differentiation. This suggests that Nkd1,2+ and Thk1+ function through independent pathways to restrict AL cell fate to a single cell layer and that Nkd1,2+ functions in AL cell differentiation downstream of Thk1+. In this study, our goal was to dissect the relationships among Nkd1,2+ and Thk1+ functions by performing a comparative analysis of nkd1,2 and thk1 mutant transcriptomes. Weighted gene coexpression network analysis (WGCNA) revealed the overlapping functions of Thk1+ and Nkd1,2+ in controlling AL cell layer number and AL cell differentiation. We identified auxin signaling, mainly regulated by Nkd1,2+, as an important element in AL differentiation, and the differential expression of a DR5-red fluorescent protein (RFP) auxin reporter between WT and nkd1,2 mutant kernels supported this prediction. Plant materials nkd1-Ds and nkd2-0766 mutants were obtained from the Ac/Ds project (Ahern et al., 2009) and were backcrossed to the W22 inbred for three generations. Homozygous WT, nkd1-Ds (nkd1), or nkd2-0766 (nkd2) single mutants and nkd1,2 double mutants were derived by self-pollinating a nkd1-Ds/+, nkd2-0766/+ heterozygote and propagating F2 individuals of the appropriate genotypes. All materials for RNA isolation were grown in the field at the Curtiss Research Farm, Iowa State University, Ames, IA, during 2019. Sixteen DAP endosperm samples were collected from four independent ears of each genotype for RNA extraction. The DR5-RFP transgenic maize line was obtained from David Jackson (Cold Spring Harbor Laboratory) and contains ER-localized RFP under the regulation of the synthetic auxin-responsive DR5 promoter (Gallavotti et al., 2008). The transgene was backcrossed to the B73 inbred for four generations and then crossed to the nkd1, nkd2 double mutant, also in a B73 background. The F1 plants were self-pollinated to generate ears segregating all three factors, DR5-RFP, nkd1, and nkd2. RNA extraction and sequencing Total RNA was extracted following the published procedure (Li et al., 2014). Coexpression network analysis Transcriptome datasets of laser capture microdissected (LCM) nkd1,2 (National Center for Biotechnology Information Gene Expression Omnibus [NCBI GEO] accession number: GSE61057) and thk1 (NCBI GEO accession number: GSE155296) endosperm were collected from published studies (Gontarek et al., 2016; Wu et al., 2020). (Accession numbers of transcriptome datasets are listed in Supplemental Table S2, and accession numbers of genes are listed in the Supplemental datasets.) The data were processed following the procedures described above. Whole-endosperm transcriptome data of nkd1,2 from this study were combined with the thk1 data to perform WGCNA. Batch effects were removed via the ComBat algorithm in the R package SVA (Johnson et al., 2007; Leek et al., 2012). Log2-transformed transcripts per million (TPM) data were filtered, and non-DEGs (with low TPM variation) across all genotypes were excluded. The WGCNA was performed according to published procedures (Langfelder & Horvath, 2007). A soft-thresholding power of 18 was chosen to build a signed network with scale-free topology. The merge threshold and minimum module size were set to 0.25 and 50, respectively. The network was visualized with Cytoscape 3.7.2 using a connectivity threshold of 0.06 (Shannon et al., 2003).
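A minimal sketch of the expression-matrix preparation just described (log2(TPM + 1) transformation followed by removal of genes with little variation across genotypes before network construction) is given below. The variance cutoff used here is a placeholder, since the study defines the filter only as excluding non-DEGs with low TPM variation, and batch correction itself is delegated to ComBat in the R package SVA rather than re-implemented here.

```python
import numpy as np
import pandas as pd

def prepare_expression_matrix(tpm: pd.DataFrame, min_variance: float = 0.5) -> pd.DataFrame:
    """Log2-transform a genes x samples TPM matrix and drop low-variation genes.

    The min_variance cutoff is illustrative; the study filtered out genes that were
    not differentially expressed (low TPM variation) across all genotypes.
    """
    log_tpm = np.log2(tpm + 1.0)                 # log2(TPM + 1) transformation
    keep = log_tpm.var(axis=1) >= min_variance   # retain genes with enough variation
    return log_tpm.loc[keep]

# Tiny random matrix standing in for the combined nkd1,2 / thk1 expression data.
rng = np.random.default_rng(0)
toy = pd.DataFrame(rng.gamma(2.0, 20.0, size=(100, 12)),
                   index=[f"gene_{i}" for i in range(100)],
                   columns=[f"sample_{j}" for j in range(12)])
filtered = prepare_expression_matrix(toy)
print(filtered.shape)
```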
Hypergeometric test Hypergeometric tests were performed as described (Zhan et al., 2015) to statistically evaluate whether two gene sets are significantly related by calculating the number of overlapping genes relative to the total number of protein-coding genes (39,324) in the B73 genome (Jiao et al., 2017). The overlaps among datasets were visualized as heatmaps that were built by the R package gplots (https://github.com/talgalili/gplots). Fluorescence microscopy and RFP quantification Kernels were collected from three independent ears, each segregating nkd1, nkd2, and the DR5-RFP reporter. At least four mature kernels per phenotype (either WT or nkd1,2) per ear were collected, genotyped, and used to quantify RFP expression (fluorescence). Genotyping primers are as follows: nkd1 (forward: CCGATCATGTATAG-CATTTCTTCTT; reverse: GCTTCTTGATCCCCGTCAG; followed by sequencing the PCR products for detection of the SNP TGCAC → TGTAC [Yi et al., 2015]), nkd2 (forward: CCAACGGCCACACGTATAGAACG; reverse: CAACAAATCGACAGGGAGCCGAG; the mutants should give no band) and DR5-RFP (forward: CCAAGCTGAAG-GTGACCAA; reverse: TCTTCTTCTGCATTACGGGG [Gallavotti et al., 2008]; and the transgene should give a 288-bp band). Then the kernels were hand-sectioned at the abgerminal region and observed under an Olympus BX60 epifluorescence microscope equipped with a Chroma mCherry filter set (excitation 560 nm, dichroic barrier filter 600 nm, emission 635 nm). All images were captured under the 10× objective with a Jenoptic C-5 camera set to 500-ms exposure time and using constant gain settings. The fluorescence intensities were quantified by ImageJ (https://imagej.nih.gov/ij/index.html). Fluorescence intensity between WT and nkd1,2 kernels for each ear was compared by Student's t test through JMP (pro14, SAS Institute). The pooled fluorescence intensity was compared via two-way analysis of variance (ANOVA) with Type III sums of squares (SAS v9.4, SAS Institute). Confocal fluorescence microscopy was performed by the Roy J. Carver High Resolution Microscopy Facility, Iowa State University. The WT and nkd1,2 kernels were collected at 30 DAP from the same segregating ears expressing the DR5-RFP transgene. The kernels were hand-sectioned and mounted in phosphate buffered saline solution under a coverslip on the confocal microscope (Leica SP5 X MP). Images were captured under the 20× objective with pinhole size set at 1 and 39% laser power. Excitation and emission parameters for red fluorescence were set at 555 nm and a range of 581-708 nm, respectively. Images from bright field and red fluorescence channels were merged by Fiji-ImageJ v1.53c. Comparative analysis of thk1 and nkd1,2 mutant transcriptomes Both thk1 and nkd1,2 mutants cause multiple AL layers, whereas nkd1,2 mutants also disrupt AL differentiation. We hypothesized that a comparative transcriptomic analysis of these mutants would allow the identification of genes that control the specification of AL layer number versus genes involved in AL cell differentiation. A transcriptomic analysis of thk1 mutant endosperm was recently published (Wu et al., 2020). This consisted of three genotypes: WT and two mutant alleles, thk1-iso15 and thk1-iso17. Although a transcriptomic analysis of LCM nkd1,2 mutant endosperm was previously published (Yi et al., 2015), we conducted a new analysis on nkd1-Ds and nkd2-0766 null mutant alleles using samples and methods more comparable to the thk1 data. 
Sixteen DAP endosperms were collected from WT, nkd1-Ds and nkd2-0766 single mutants, and the nkd1,2 double mutant, and RNA was isolated and subjected to 150-base paired-end RNA sequencing. An average of 36,928,566, 38,984,745, 40,343,136, and 50,055,590 unique reads were aligned to the B73 v4 reference genome for WT, nkd1-Ds, nkd2-0766, and nkd1,2, respectively, with read alignment rates of at least 71%. Genes were declared differentially expressed (DE) if they showed at least a twofold difference in expression between WT and mutant with an adjusted p-value below .05. In nkd1,2 mutants, there were 2,504 genes upregulated and 1,389 genes downregulated; in thk1 mutants, there were 1,194 genes upregulated and 405 genes downregulated (Wu et al., 2020) (Supplemental Figure S2). There were 452 DEGs (9% of total DEGs) overlapping between the two datasets. Gene Ontology term enrichment analysis showed that the overlapping DEGs are associated with cell wall organization or biogenesis, flavonoid biosynthesis, auxin signaling, and transmembrane transport. Differentially expressed genes specific to thk1 are mainly associated with cell division regulation, whereas DEGs specific to nkd1,2 are mainly associated with cell differentiation regulation, auxin signaling, carbohydrate metabolism, and lipid storage (Supplemental Figure S2).
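The DE-calling rule just stated (at least a twofold change with adjusted p < .05) can be expressed as a simple filter over a table of per-gene statistics. In the sketch below the column names are illustrative, and the upstream test and multiple-testing adjustment that produce 'log2fc' and 'padj' are assumed rather than specified by the text.

```python
import numpy as np
import pandas as pd

def call_degs(stats: pd.DataFrame, min_fold: float = 2.0, max_padj: float = 0.05) -> pd.DataFrame:
    """Flag differentially expressed genes using the thresholds described in the text.

    Expects columns 'log2fc' (mutant vs. WT) and 'padj' (adjusted p-value);
    these column names are illustrative.
    """
    degs = stats[(stats["log2fc"].abs() >= np.log2(min_fold)) & (stats["padj"] < max_padj)].copy()
    degs["direction"] = np.where(degs["log2fc"] > 0, "up", "down")
    return degs

# Toy table standing in for the nkd1,2-vs-WT test results.
toy = pd.DataFrame({"log2fc": [2.3, -0.4, -1.8, 0.9],
                    "padj":   [0.001, 0.20, 0.01, 0.03]},
                   index=["geneA", "geneB", "geneC", "geneD"])
print(call_degs(toy))
```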
The result showed that DEGs of nkd1 or nkd2 single mutants, nkd1,2 double mutant, and thk1 mutant are strongly associated with M1 ( Figure 1b). Highly significant overlaps were also observed among DEGs of nkd1,2 with modules M2, M3, and M4 and among thk1 DEGs with modules M6 and M7 (Figure 1b). To examine the relationship between nkd1,2 and related modules with respect to tissue types, the previously published RNA-seq datasets of LCM AL and SE of nkd1,2 mutant were incorporated in the test (Gontarek et al., 2016). The raw reads were realigned to B73v4 reference genome and DEGs of B73AL vs B73SE and nkdAL vs nkdSE were called. The upregulated genes represent genes preferentially expressed in AL of WT and nkd1,2 backgrounds, respectively, while downregulated genes represent genes preferentially expressed in SE (Supplemental Figure S4a and b). Specific to the WT background, there were 375 genes preferentially expressed in AL and 404 in SE (Supplemental Figure S4b, regions a and b, respectively). Specific to the nkd1,2 mutant background, there were 1,414 and 1,319 genes preferentially expressed in AL and SE, respectively (Supplemental Figure 4b, regions c and d). There are 364 ALpreferential and 269 SE-preferential genes shared in both WT and nkd backgrounds (Supplemental Figure 4b, regions g and h). Very few genes alter their tissue preference between the WT and nkd1,2 background (Supplemental Figure 4b, regions e and f). The hypergeometric test showed that AL-preferential genes in both WT and nkd1,2 backgrounds have significant overlap with M1 and M2 (Figure 1b), indicating that these two modules may be associated with AL development. Gene Ontology term enrichment analysis showed that M1 is associated with cell cycle regulation and hormone signaling, especially auxin signaling (Table 1). The module M3 is associated with heat response, whereas M2 is associated with sucrose metabolism and lipid storage. No enriched GO terms were detected for modules M4, M6, or M7. Eigengene-based connectivities, or module memberships (kME), of GO term-associated genes in each module represent correlations of each gene or group of genes with the module eigengenes (Langfelder & Horvath, 2007. The higher the kME value in a module, the higher correlation of the gene(s) with the corresponding module. The kMEs of genes associated with selected GO terms were calculated across each module and auxin signaling and cell cycle showed the highest kMEs for M1 (Figure 1c), consistent with the GO term-module relationship reported in Table 1. Taken together, these results suggest that Thk1+ and Nkd1,2+ coregulate M1, which is important for AL development, and that cell cycle regulation and auxin signaling may be important aspects of this process. Lipid storage and sucrose metabolism, showed the highest kMEs in M2 (Figure 1c), suggesting that this module is important for Thk1+ and Nkd1,2+ regulation of storage product metabolism in AL. Gene expression modules regulating AL development To analyze relationships between endosperm tissues and coexpression modules, hypergeometric tests were performed to investigate overlapping genes in the modules from this study with previously published endosperm tissue-specific LCM RNA-seq data (Zhan et al., 2015). The results showed that genes expressed in developing AL were significantly overrepresented in M1 and, to a lesser extent, in M2 ( Figure 2a). This is consistent with Figure 1b, where genes preferentially expressed in AL significantly overlap modules M1 and M2. 
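The module-versus-gene-set comparisons above all rest on the same hypergeometric overlap test described in the Methods: given a universe of 39,324 protein-coding genes, how surprising is the observed overlap between two gene sets? A minimal sketch follows; as a worked example it plugs in the DEG counts reported earlier (3,893 nkd1,2 DEGs, 1,599 thk1 DEGs, 452 shared), although the actual tests in the paper compare many different set pairs.

```python
from scipy.stats import hypergeom

def overlap_enrichment_p(n_universe: int, n_set1: int, n_set2: int, n_overlap: int) -> float:
    """Upper-tail hypergeometric p-value: probability of observing at least
    n_overlap shared genes between two sets drawn from a common gene universe."""
    # hypergeom(M, n, N): population size M, n "successes" in the population, N draws
    return hypergeom.sf(n_overlap - 1, n_universe, n_set1, n_set2)

# Worked example using the DEG counts reported in the text.
p = overlap_enrichment_p(n_universe=39_324, n_set1=3_893, n_set2=1_599, n_overlap=452)
print(f"P(overlap >= 452) = {p:.3g}")
```

The same function applies to module-versus-DEG and module-versus-tissue comparisons by swapping in the appropriate set sizes.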
In addition to AL, M1 also overlapped with conducting zone and embryo surrounding region (Figure 2a). Developing conducting zone and basal endosperm transfer layer genes also significantly overlapped with modules M3 and M4, respectively (Figure 2a). These results suggest that these coexpression modules may be involved in tissue-specific regulation of endosperm development. Within M1, thk1 and nkd1,2 DEGs have more significant overlap with AL-preferential genes than SE-preferential genes in both WT and nkd backgrounds (Figure 2b), indicating that Thk1+ and Nkd1,2+ regulate several biological processes in common that function in developing AL. DEGs of nkd1 or nkd2 single mutants have strong overlap between one another and with nkd1,2 double mutant; however, compared with the double mutant, the single mutants have less significant overlap with AL-preferential genes (Figure 2b). This suggests that Nkd1+ and Nkd2+ have unique functions as well as redundant functions, consistent with the partial redundancy reported previously (Yi et al., 2015). To further explore the functional relationships amongst genes within M1, the overlap was examined among DEGs of thk1, nkd1,2, and AL-preferential genes in WT and nkd backgrounds (Figure 2c). Three groups of particular interest are highlighted. Group I (84 genes) represents ALpreferential genes from a WT B73 background that are regulated by thk1. The GO term cytokinesis by cell plate formation (GO:0000911, FDR = 0.00023) was enriched (Table 1) in this group suggesting that thk1 may regulate mitotic cytokinesis during AL development. Group II (225 genes) represents AL-preferential genes from a nkd1,2 mutant background that may also be regulated by thk1. The same GO term as Group I was enriched with higher FDR (1.4 × 10 −14 ) ( Table 1), suggesting both thk1 and nkd1,2 may be involved in controlling cell division processes in developing AL. Group III (250 genes) represents AL-preferential genes from either WT or nkd mutant backgrounds that are regulated by nkd1,2 but not by thk1. The GO terms auxin-activated signaling pathway (GO:0009734, FDR = 0.0056) and pattern specification process (GO:0007389, FDR = 0.0055) were enriched, suggesting that nkd1,2 has specific functions in regulating auxin signaling and cell patterning during AL development and these are not shared by thk1. Nkd1,2+ regulate ARFs in developing AL Responses to auxin signaling are regulated in large part by auxin response factors (ARFs), TFs that activate or repress expression of auxin-regulated genes and control multiple developmental processes in plant tissues (Guilfoyle & Hagen, 2007;Leyser, 2018). Phylogenetic analysis of the ARF family identifies three clades (Galli et al., 2018). Transcription reporter assays showed that most Clade A ARFs (ARF-A) are transcriptional activators; most Clade B ARFs (ARF-B) are transcriptional repressors; and most Clade C ARFs (ARF-C) produced no change in target gene expression (Galli et al., 2018). Eight ARF genes were DE in nkd1,2 and/or thk1 mutants. Interestingly, all ARFs that were DE in nkd1,2 were upregulated, and most of them are ARF-A, whereas all ARFs DE in thk1 were downregulated, and most of them are ARF-B (Figure 3a). This suggests that Nkd1,2+ mainly repress the expression of ARF-A, and Thk1+ mainly activates expression of ARF-B (Figure 3b). 
Among the nkd1,2-regulated ARF genes, arf35 contains NKD1 and NKD2 binding motifs in cis-regions, whereas arf26 and arf29 contain NKD1 cis-binding motifs, identifying these as putative direct target genes of NKD1 and/or NKD2 regulation (Gontarek et al., 2016). Putative downstream target genes were also previously identified for several of the ARFs that are modulated by nkd1,2 or thk1 (ARF-A: ARF29, 34, and 35; ARF-B: ARF13). DNA affinity purification sequencing was used to identify cis-binding sites for these ARFs on a genome-wide scale (Galli et al., 2018). We examined the overlap of these putative ARF target genes with genes in coexpression modules, DEGs of nkd mutants, DEGs of the thk1 mutant, as well as AL- and SE-preferential genes. The results showed that ARF-A target genes have strong overlap with M1, and ARF34 and ARF13 targets have lesser overlaps with M2 and M7 (Supplemental Figure 5a). Also, nkd1,2 DEGs have strong overlap with selected ARF targets, whereas nkd1 or nkd2 single mutants do not (Supplemental Figure 5b), consistent with redundancy of nkd1 and nkd2 (Yi et al., 2015). There are 61 genes overlapping among ARF-A putative targets, nkd1,2 DEGs, and AL-preferential gene Groups I + II + III (Figure 2c and Supplemental Figure 6a). These 61 genes are most likely regulated by an NKD-ARF pathway associated with AL development. Differentially expressed genes of thk1 also overlap significantly with putative targets of selected ARFs, notably ARF34 (Supplemental Figure S5b). Note that ARF34 is not DE in the thk1 mutant (Figure 3a), suggesting these DEGs may be regulated independently of ARF34, or by a posttranscriptional mechanism. There are only two genes overlapping amongst putative targets of ARF13 and 35, thk1 DEGs, and the AL-preferential gene Group I (Supplemental Figure 6b), indicating that the ARF signaling pathways may not play a major role in thk1-mediated AL development. The 92 genes contained in both the ARF35 or ARF13 DAP-seq peaks and thk1 DEGs may be associated with functions that are not specific to AL. To focus on how ARFs regulate AL development, overlap was examined between selected ARF target genes and AL-preferential gene groups from Figure 2c. Group III (nkd-regulated AL) genes most significantly overlapped with ARF-A targets, Group II genes (thk1- and nkd-regulated AL) showed less significant overlap with ARF34 and ARF13, and there was no significant overlap between Group I (thk1-regulated WT AL) and targets of any ARF (Figure 3c).

Figure 4. Nkd1,2+ regulation of auxin signaling. (a) Expression comparison of auxin response factor (ARF) genes between wild type (WT) and nkd1,2. Normalized expression values were quantified by transcripts per million reads (TPM), and error bars represent standard errors. *, **, and *** represent p-value < 0.05, 0.01, and 0.0001, respectively, by t-test. (b) Quantification of red fluorescent protein (RFP) fluorescence from microscopic images of WT and nkd1,2 mutant kernels expressing DR5-RFP. Shown are means of kernels sampled from 3 independent ears. ***p-value < .001 by analysis of variance. Error bars represent standard errors. (c and d) Confocal fluorescence microscopy images of AL and adjacent regions in WT (c) and nkd1,2 mutant (d) kernels expressing the DR5-RFP transgene. AL, aleurone; SE, starchy endosperm; PE, pericarp. Scale bar = 100 μm. (e) NKD-modulated ARF-A putative target genes, and corresponding GO terms. Yellow highlights are significantly enriched Gene Ontology (GO) terms, and GO terms highlighted in green were not significantly enriched but are biologically relevant. The 61 genes represent the intersection of putative Clade A auxin response factor (ARF-A) targets, nkd1,2 DEGs, and AL-preferential gene Groups II + III. DEG, differentially expressed genes. (Refer to Supplemental datasets for gene_id and detailed information.)

These results suggest a close relationship between nkd1,2, ARF-A, and AL development and are consistent with the enriched GO term auxin-activated signaling pathway of Group III (Table 1). Differential expression of ARF genes in nkd1,2 (Figure 4a) suggested that Nkd1,2+ regulates auxin response. To test this prediction, a DR5-RFP transgene, with the synthetic auxin-responsive DR5 promoter driving RFP (Gallavotti et al., 2008), was crossed to nkd1,2 and expression was studied in kernels from segregating F2 ears. Expression of this auxin reporter was significantly higher in nkd1,2 mutant endosperm than in WT (Figure 4b, Supplemental Tables S1 and S2). In WT endosperm, RFP expression was most prominent in the AL cells (Figure 4c), whereas in nkd1,2 mutant endosperm, expression was observed deeper into the endosperm tissues (Figure 4d). There was significant variation among ears sampled from different plants, but within each ear, nkd1,2 showed significantly higher mean RFP fluorescence than WT (Supplemental Table S1, Supplemental Figure S7). These data suggest that Nkd1,2+ function to restrict auxin response levels, as well as its spatial distribution in the endosperm, and support the hypothesis that Nkd1,2+ control ARF-modulated auxin signaling in AL development. To explore the biological processes that are regulated by NKD-ARF-mediated auxin signaling in AL development, GO enrichment analysis was conducted on the 61 genes at the intersection of ARF-A putative targets, nkd1,2 DEGs, and AL-preferential genes (Supplemental Figure S6a). The results showed that these genes may be involved in developmental growth, auxin response, brassinosteroid (BR) response, and lipid metabolism, and, to a lesser extent, abscisic acid response, cell wall organization, and plant epidermis development (Figure 4e, Table 1). The 61 target genes of nkd1,2-modulated ARF-As are all upregulated in the nkd1,2 mutant (Supplemental datasets), which supports the proposed model in Figure 3b that Nkd1,2+ negatively regulate these genes through repression of ARF-As. Many of these genes belong to multiple GO terms, suggesting that they may be involved in more than one biological process.

DISCUSSION

Genetic studies suggested that Thk1+ and Nkd1,2+ functions interact to regulate the number of AL cell layers, AL cell differentiation, and other aspects of endosperm tissue development (Gontarek et al., 2016; Yi et al., 2011; Yi et al., 2015). THK1 is a NOT1 homolog, a scaffolding subunit of the multifunctional CCR4-NOT complex, whereas NKD1,2 are IDD family zinc finger TFs. As such, it is unclear how these regulatory functions are related. We sought to explore these relationships through comparative transcriptomics and gene network analysis of thk1 and nkd1,2 mutants. Nine percent of the total DEGs were DE in both mutants; these were enriched for functions associated with cell wall assembly, flavonoid biosynthesis, auxin signaling, and membrane transport. To better understand how the Thk1+ and Nkd1,2+ regulatory systems regulate these processes, WGCNA was performed using RNAseq data from mutants and corresponding WT, normalized from published studies (Gontarek et al., 2016; Yi et al., 2015) and additional RNA sequencing.
The RNAseq data used in the WGCNA study were derived from whole endosperm, composed of multiple tissues. To help determine tissue-specific aspects of expression, we turned to published LCM studies (Gontarek et al., 2016;Zhan et al., 2015). Although these data could not be compared directly due to differences in developmental stages and sampling methods, the overlap between tissue-specific genes and genes of our coexpression modules was informative. The coexpression module designated M1 was the largest module ( Figure 1a). Module M1 had more significant overlap with AL specific or preferential genes than with SE or any other endosperm tissue at either 8 or 15 DAP (Figure 1b, 2a). That the largest coexpression module appears to be associated with developing AL is consistent with the pronounced AL phenotypes of thk1 and nkd1,2 mutants Yi et al., 2015). Two major GO terms, cell cycle (GO:0007049) and auxinactivated signaling pathway (GO:0009734), were enriched in M1 (Table 1). The GO term cytokinesis by cell plate formation (GO:0000911) was enriched in genes of the overlap among DEGs of thk1 and nkd1,2, contained in M1 and which were AL-preferential (Figure 2c, Groups I and II; Table 1). The GO term auxin-activated signaling pathway was enriched in Group III (Figure 2c), containing M1 genes that were ALpreferential and DE in nkd1,2 but not DE in thk1. Recall that nkd1,2 and thk1 both produced multiple-AL-layer mutant phenotypes, whereas nkd1,2, but not thk1, showed significant AL differentiation defects Yi et al., 2015). As such, these results suggest that regulation of cell division might be associated with controlling AL cell layer number, whereas auxin-activated signaling pathways and cell wall formation might be associated with Nkd1,2+-regulated AL differentiation. The gene expression profiles provide several clues to how Nkd1,2+ and Thk1+ may function to coordinate cell patterning and differentiation during endosperm development. Intriguingly, four GRAS family TFs (gras2, gras29, gras33, and gras55) were DE in nkd1,2 mutants (Supplemental datasets). As reviewed by Kumar et al. (2019), GRAS family members interact with IDD family TFs in several systems to regulate cell division, cell patterning, cell specification, and hormone signaling. This includes the well-known system in root cell patterning where the GRAS factors SHORTROOT and SCARECROW act in concert with IDD factors including JACKDAW to control the number and identity of cell layers (Long, Goedhart, et al., 2015;Long, Smet, et al., 2015). In addition, five GRAS family TFs (gras7, gras37, gras38, gras42, and gras76) were DE in the thk1 mutant (Wu et al., 2020), but there was no overlap with nkd1,2 suggesting that Thk1+ and Nkd1,2+ regulate independent GRAS pathways. Similarly, several homeodomain TF genes of the outer cell layer (ocl) family, including ocl1, 3, and 4 were DE in nkd1,2 mutants and belong to module M1. In addition, ocl4 is an ALpreferential gene that is upregulated in thk1 mutant (Gontarek et al., 2016;Wu et al., 2020) (Supplemental datasets). Altered cell division and patterning in developing anthers of maize ocl4 mutants resulted in an extra subepidermal cell layer with endothecium characteristics, causing partial male sterility (Vernoud et al., 2009). Thus, it is reasonable to speculate that regulation of ocl4 in endosperm may contribute to controlling the number of AL layers. 
The regulation of cell cycle and cell division is critical for cell patterning and tissue differentiation, and genes involved in cell cycle and cell division were DE in both thk1 and nkd1,2 mutants. Several cell cycle genes were AL-preferential in the nkd1,2 background but were not DE in whole endosperm of nkd1,2 mutants (Figure 2c, Supplemental datasets), suggesting that Nkd1,2+ may preferentially regulate cell cycle and division in AL compared with other tissues. Prior studies showed that a significantly greater number of cell cycle genes were DE in AL than SE (Gontarek et al., 2016). Auxin plays an essential role in plant growth and development, and ARFs are key mediators of auxin signaling and regulators of auxin response genes (Guilfoyle & Hagen, 2007; Li et al., 2016; Roosjen et al., 2018). Both Nkd1,2+ and Thk1+ appear to restrict expression of auxin response genes in maize endosperm but likely through different pathways: Nkd1,2+ mainly inhibits genes for ARF-As, activators of auxin response genes, whereas Thk1+ mainly activates genes for ARF-Bs, repressors of auxin response genes (Figure 3a, b). Downstream target genes containing cis-binding sites for several of the DE ARFs were previously identified by DAP-seq (Galli et al., 2018). These putative ARF target genes are more significantly enriched among genes preferentially expressed in AL than SE and have more overlap with AL-preferential genes regulated by Nkd1,2+ than by Thk1+ (Supplemental Figure S5b, S6a and b).

Figure 5. Model of Nkd1,2+ and Thk1+ regulation of AL development. Nkd1,2+ and Thk1+ appear to repress cell cycle and division, which may be important for endosperm cell patterning by limiting the number of aleurone (AL) layers. Nkd1,2+ may promote AL differentiation by regulating developmental growth, hormone response and lipid metabolism via Clade A auxin response factor (ARF-A)-dependent pathways. Thk1+ may indirectly regulate ARF-B but has limited impact on AL differentiation.

Furthermore, no GO terms were enriched among the thk1 DEGs preferentially expressed in AL (Supplemental Figure S6b). This suggests that Nkd1,2+ may have a more prominent function than Thk1+ in regulating auxin signaling in AL development, which is consistent with the enriched GO term auxin-activated signaling pathway in AL-preferential genes that are regulated by nkd1,2 but not thk1 (Figure 2c, Group III; Table 1). Several putative targets of ARF34 or ARF13 are DE in thk1 or nkd1,2, respectively, even though these ARF genes themselves are not DE in the corresponding mutant (Supplemental Figure S5b). These targets might be regulated by factors other than the ARFs that are DE in the mutants, or the ARFs might be indirectly regulated by posttranscriptional mechanisms. Prior studies have implicated auxin in regulating AL development in a manner consistent with our results presented here. Developing maize endosperm has been shown to accumulate free indole-3-acetic acid (IAA), specifically in the AL layer, and treating plants with the auxin transport inhibitor N-1-naphthylphthalamic acid caused elevated levels of IAA and the formation of multiple layers of altered AL cells (Forestan et al., 2010). In our study, we found that nkd1,2 mutants contain elevated levels of ARF-A gene expression and elevated levels of auxin signaling, as reflected by expression of the DR5-RFP auxin reporter, in the multiple layers of abnormally differentiated AL cells.
These results suggest that auxin regulation is important for AL cell patterning and development and that Nkd1,2+ repression of ARF-A gene expression may restrict auxin signaling in the AL to maintain normal cell patterning and differentiation. Expression of ARFs could potentially be directly repressed by NKD1,2 binding to their promoters, as was previously predicted for arf26, arf29, and arf35 (Gontarek et al., 2016), or could be indirectly affected as would be the case for all thk1-regulated ARF genes given that THK1 is not a TF. One possibility is that indirect regulation could be via auxin metabolism or transport given that some ARFs contain auxin response elements in their promoters (Galli et al., 2018). As shown in Figure 4e, several aspects of AL development appear to be regulated by auxin and the NKD-ARF network. Of particular note are cell growth and morphogenesis and cell wall organization or biogenesis. Auxin signaling is well known to promote cell wall loosening and cell expansion by modifications of cell wall composition (Majda & Robert, 2018). The thinner walls and irregular shape of nkd1,2 mutant AL cells Yi et al., 2015) might be associated with excessive auxin signaling. Lipid metabolism and plant epidermis development are also noteworthy functions because lipid accumulation is a key aspect of AL cells and AL has been proposed to be homologous to the plant epidermis . Putative ARF targets that were DE in nkd1,2 are associated with signaling systems of several hormones, including auxin, BR, and ABA. This suggests that Nkd1,2+-ARF auxin signaling may mediate crosstalk with other phytohormones. NKD1,2-ARF-A regulates two homologs of GH3 (Zm00001d006753, aas2, and Zm00001d022017, aas8). GH3 genes are involved in IAA inactivation and are induced by both IAA and BR in Arabidopsis and rice (Oryza s ativa L.) (Goda et al., 2004;Zhang et al., 2015). In rice, OsARF19 activates expression of OsGH3-5 and BRASSINOS-TEROID INSENSITIVE1 (OsBRI1), and overexpression of OsARF19 altered the expression of genes associated with auxin and BR signaling. This resulted in an increase in leaf adaxial cell division; therefore, it is reasonable to speculate that auxin-BR crosstalk could also influence maize endosperm tissue architecture. ABA signaling is critical for promoting maturation and quiescence in AL cells and ABArelated genes, including viviparous1 (vp1) and responsive to aba17 (rab17), are specifically expressed in AL (Cao et al., 2007;Gontarek et al., 2016). VP1 is a TF required for ABA regulation of gene expression (Suzuki et al., 2003) and the vp1 gene is directly regulated by NKD1,2 (Gontarek et al., 2016). Auxin crosstalks with ABA to regulate seed maturation as well as root development. Arabidopsis ARF10 and ARF16 regulate ABSCISIC ACID INSENSITIVE 3 (ABI3), the ortholog of VP1, which triggers ABA-induced seed maturation and dormancy (Liu et al., 2013). Also, auxin signaling enhances VP1-mediated ABA response in maize roots (Suzuki et al., 2001). The networks reported here provide further examples of auxin-ABA crosstalk. In summary, Figure 5 illustrates a putative model regarding how Nkd1,2+ and Thk1+ networks regulate AL development. Both Nkd1,2+ and Thk1+ appear to limit the number of AL layers by restricting expression of cell cycle and cell
8,778
2021-07-29T00:00:00.000
[ "Biology" ]
Elucidation of functional consequences of signalling pathway interactions Background A great deal of data has accumulated on signalling pathways. These large datasets are thought to contain much implicit information on their molecular structure, interaction and activity information, which provides a picture of intricate molecular networks believed to underlie biological functions. While tremendous advances have been made in trying to understand these systems, how information is transmitted within them is still poorly understood. This ever growing amount of data demands we adopt powerful computational techniques that will play a pivotal role in the conversion of mined data to knowledge, and in elucidating the topological and functional properties of protein - protein interactions. Results A computational framework is presented which allows for the description of embedded networks, and identification of common shared components thought to assist in the transmission of information within the systems studied. By employing the graph theories of network biology - such as degree distribution, clustering coefficient, vertex betweenness and shortest path measures - topological features of protein-protein interactions for published datasets of the p53, nuclear factor kappa B (NF-κB) and G1/S phase of the cell cycle systems were ascertained. Highly ranked nodes which in some cases were identified as connecting proteins most likely responsible for propagation of transduction signals across the networks were determined. The functional consequences of these nodes in the context of their network environment were also determined. These findings highlight the usefulness of the framework in identifying possible combination or links as targets for therapeutic responses; and put forward the idea of using retrieved knowledge on the shared components in constructing better organised and structured models of signalling networks. Conclusion It is hoped that through the data mined reconstructed signal transduction networks, well developed models of the published data can be built which in the end would guide the prediction of new targets based on the pathway's environment for further analysis. Source code is available upon request. Background "Any classification in a division of objects into groups is based on a set of rules -it is neither true nor false (unlike, for example, a theory) and should be judged largely on the usefulness of the results" [1]. For many years, model organisms have been studied extensively by scientists as they tried to better understand the functional implication of processes initiated during cellular signalling, and how organisms can use this to respond to perturbations outside of the cell [2]. With the advent of high throughput experimentation, the identification and characterization of molecular components involved in transduction events became possible in a systematic way. In addition to this, the discovered interactions between each of these components promoted the reconstruction of reactions leading to signaling pathways. Thus, elucidating the functional consequences of these interactions will be crucial in understanding the ways in which cells respond to extra cellular cues and how they communicate with one another. 
Activities of biological cells are regulated by proteins carrying signals that modify the expression of different genes at any given time, and these extra-cellular signals drive cell proliferation and programmed cell death via complex signal transduction circuits comprising of receptors, kinases, phosphatases, transcription factors and many others. It is unsurprising that many components of these signal transduction circuits are oncogenes or tumour suppressors, emphasizing the importance of understanding signalling in normal tissues and targeting aberrant signalling in diseases [3]. Signalling networks which are chiefly based on interactions between proteins are the means by which a cell converts an external signal (e.g. stimulus) into an appropriate cellular response (e.g. cellular rhythms -periodic biological process observed in cell cycles or day-night cycles (circadian rhythms) of animals and plants) [4][5][6]. It is from the resulting basic cellular responses that complex behaviour in multi-cellular organisms emerges. Signal transduction pathways have typically been drawn as separate linear entities, however it has become increasingly clear that signalling pathways are extensively interconnected and are embedded in networks with common protein components and cross talk with other networks [7][8][9][10][11]. In addition to this, signal transduction networks do not depend merely on the shifting of relevant protein concentrations from one steady state level to another, rather, the signals often have a significant temporal variation that carries much more information that is propagated in a complex manner through the networks [12][13][14][15]. Traditionally, study of the complex behaviour of networks require dynamic models that contain both the biochemi-cal reactions as well as their rate constant counterparts [16][17][18][19]. This information is usually not accessible directly through experiments for systems less well studied. Fortunately for many biological systems partial prior knowledge about the connectivity patterns of the networks is becoming available and readily stored in databases [20][21][22][23], even though the detailed mechanisms still remain undiscovered. An important goal of this research therefore is to attain a reconstruction of the network of interactions that gives rise to signalling pathways in a biologically meaningful way, which in turn allows the mathematical analysis of the emerging properties of the network [24,25]. So far, a great deal of data has accumulated on signalling systems and these large datasets are thought to contain much information on the structure of their underlying networks. However, this information is hidden and requires advanced algorithms and methods, such as data mining and graph theories of network biology to make sense of it all [26][27][28]. Data mining deals with the discovery of hidden knowledge, unexpected patterns and new rules [29]; nevertheless, there are some limitations with this technique. A fundamental issue is that biological data repositories are normally presented in heterogeneous and unstructured forms [30][31][32][33]. Therefore, there is a great need to develop effective data mining methodologies to extract, process, integrate and discover useful knowledge from multiple data sources [34]. The retrieved knowledge can then be better organized and structured to develop models, which in the end, would guide the prediction of new targets based on the pathway's environment [24,[26][27][28]35,36]. 
In this report, we present a systems analysis framework to examine how protein-protein interactions within these systems relate to multi-cellular functions, and how high throughput technologies allow the study of the different aspects of signalling networks for modelling. We assume that since mammalian cells are constantly remodelling their transcriptional activity profiles in response to a combination of inputs, the understanding of their coordinated responses have been lacking, and in essence requires a framework which examines the system or systems by extracting information on their topological and functional properties. An example of a system activated in response to a variety of signals is the NF-κB pathway [19,[37][38][39][40] (a family of proteins which functions as DNAbinding proteins and transcription factors); the disruption of which in recent years have been shown to contribute towards the many human diseases presently known. We also know from literature [41][42][43] that the NF-κB network does not exist in isolation, since many of its mechanisms have been shown to integrate their activity with other cell signalling networks. Such as the p53 system [17,[44][45][46][47][48][49] (another transcriptional activator that plays an important role in the regulation of apoptosis) and the E2F-1 [50][51][52][53] -a cell cycle transcriptional target that controls the expression of a number of genes needed for DNA synthesis and progression into S phase [46,49,[54][55][56][57][58][59]. It is thought that the cooperation between p53, NF-κB and E2F-1 is most likely to reflect on their ability to function together to induce expression of target genes regulated by promoters containing p53, NF-κB and E2F-1 binding sites [53,60,61], since target genes translated to proteins in one way or another affect the individual system in a positive or negative way. To capture the possible events involved in the pathways, only proteins involved in the oscillatory feedback loops of the systems were considered -which are ubiquitous feature of the biological examples given which can be adapted to yield distinct system level properties [16,17,40,62]. To generate the networks, the molecular components and their interactions were extracted from publicly available datasets [20][21][22][23]. In addition, associations of these networks with some cell cycle proteins, in particular, the G1/ S phase cell cycle proteins [63,64] were also examined. Cell cycle proteins were considered since previously published literature showed some of its proteins to be activated by one pathway and to be relevant for the regulation of another [44,[65][66][67][68][69][70]; and thus may be useful in showing a level of complexity not visible by looking at the NF-κB and p53 systems alone. We next identified key nodes of significant influence in the isolated systems investigated using some graph theories of network biology, namely, degree, vertex betweenness, and clustering coefficient measures. We used shortest paths calculation to find connecting nodes, most likely responsible for the propagation of transduction signals across the networks. And cross referencing them with reference databases, the interpretation of the functional properties of these key nodes, as well as, the highly ranked connecting nodes within the systems were realised. 
The idea is that through the data-mined reconstructed signal transduction pathways, which are comparable to the previously modelled networks of the real system, a phenomenological model of all the published data can be derived, from which the key components of the system can be highlighted for further analysis. In fact, as we will show in this report, it is possible to reconstruct signalling networks in this way without additional constraint.

Methods

The development of high-throughput molecular assay technologies, as well as breakthroughs in information processing and storage technologies, provide integrated views of biological and medical information. Databases enabling systematic data mining on bio-molecular interactions, pathways and molecular disease associations are becoming increasingly available, which it is hoped will facilitate the understanding of the dynamics of biological function in complex diseases. Summarised below are descriptions of the analytical methods used in this study; see Figure 1 for a schematic representation of the framework. (Figure 1: A schematic representation of the modelling framework introduced.)

Definition of Reference Databases

Over the last few years many of the experimental data from gene expression studies have been made freely available for academic research in the form of reference databases [20][21][22][23], of which several exist. These different databases have their strengths and weaknesses, and there is no single method that is best for storing these data sets. A number of different approaches have been used to extract signalling data and integrate them so that biologically valid conclusions can be drawn from the vast and comprehensive data sets available [71,72]. Table 1 lists a description of the individual databases used in this study, each of which was used to retrieve information related to the proteins considered. These databases contain information on proteins, protein interactions and biological processes.

Data extraction and data-mining

The concerted efforts of genetics, molecular biology, biochemistry and physiology have led to the accumulation of an enormous amount of data on molecular components of signalling networks reported in the literature or stored in databases [73]. The availability of these vast amounts of data provides an opportunity for investigating further the design principles underlying the structure and dynamics of signalling networks [71,72,74]. However, these data are diverse and dispersed in different databases. For this reason, data mining is employed and takes the responsibility of mining this amount of data in the hope that it will return useful hypotheses supporting the life sciences. Owing to its capability of processing different kinds of data, data mining can integrate these spread-out data in a unified framework, thus solving more efficiently the problems that may arise due to their differences [29,30,32]. We started by looking into four databases: the Universal Protein Resource (Uniprot), the Interologous Interaction Database (i2d), Reactome and the Pathway Interaction Database (PID), which we have listed in Table 1. Since different databases have different names for each entry, the Uniprot name for identifying proteins was used as the standard, and all protein names were converted accordingly. To assure that the proteins extracted from Uniprot are the exact proteins from the organism of interest, a verification step was implemented, in which the identity of the mined data is confirmed through a literature search.
This step avoids the confusion and ambiguity that often occur when mining and integrating multiple data sources. Table 2 lists the search proteins considered in the study (highlighted proteins are proteins reported to be activated in one system and involved in the regulation of another). Using the i2d database, information on protein-protein interactions was extracted. Such information is potentially useful in identifying proteins and their families, the interplay with their interacting partners, the influence of certain proteins in a network, and key regulatory relationships which are most influenced by extracellular signals. More comprehensive knowledge concerning the proteins of interest and their connector proteins - for example, biological process, cellular component, coding sequence diversity, developmental stage, disease, domain, ligand, molecular function and post-translational modification - was also extracted. For elucidating the functional consequences of the interactions, the Reactome database - which gives pathway information by combining with graph information of the PID database - was the database of choice. Table 3 presents a list of pathways and/or processes the explored proteins were revealed to be involved in. The data mining implementation was done in the Perl programming language (http://www.perl.org/) and derived from the BioPython library (http://biopython.org/wiki/Main_Page).

Network Biology

The actions of specific proteins in a network have been investigated in this report. A network can be described as a series of nodes/vertices that are connected to each other by links. Formally, this is referred to as a graph and the links as edges [26,[75][76][77]. The nodes in biological networks are the gene products/proteins and the links are the interactions between two components [13,78]. A number of metrics have been used to characterise the networks of the systems studied:

• The first, the degree (or connectivity) k of a node/vertex, indicates how many links/edges the node has to the other nodes. Of particular importance is the degree distribution P(k), which measures the probability that a selected node has exactly k links. The degree distribution is used to distinguish between the different classes of network (which has not been reported in this account).
• The second, vertex betweenness (B_i), is a measure of the centrality and influence of nodes in the networks [79][80][81][82].
• The third, the average clustering coefficient C(k), characterises the overall tendency of nodes to form clusters or groups; C(k), the average clustering coefficient of all nodes with k links, is an important measure of the network structure [15].
• And finally, the shortest path, which is found between two vertices (or nodes) such that the sum of the weights of its constituent edges is minimized [82,83].

(Figure 2: Diagram of the shortest path calculation. An illustration showing how the shortest path discussed in the report is calculated. It is assumed that, from P1 to P5: p1 = (P1-P6-P7-P5) and l1 = 3; from P1 to P8: p2 = (P1-P6-P7-P8) and l2 = 3; from P1 to P10: p3 = (P1-P9-P10) and l3 = 2; from P1 to P11: p4 = (P1-P11) and l4 = 1.)

A graph G(E, V) consists of a set of vertices (V) and a set of edges (E) between them. An edge e_ij connects vertex v_i with vertex v_j. Here, undirected graphs are investigated since the studied protein interaction networks are undirected. An undirected graph has the property that e_ij and e_ji are considered identical.
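The metrics listed above, and the shortest-path/connector-node analysis formalised in the next section, can be computed with standard graph libraries. The sketch below uses Python and networkx rather than the authors' Perl implementation; the interaction pairs and the two protein sets are placeholders, not the mined i2d data.

```python
# Hypothetical sketch: graph metrics and connector-protein ranking with networkx.
from collections import Counter
import networkx as nx

interactions = [("P53_HUMAN", "MDM2_HUMAN"),
                ("P53_HUMAN", "IKKA_HUMAN"),
                ("MDM2_HUMAN", "TF65_HUMAN"),
                ("IKKA_HUMAN", "NEMO_HUMAN")]           # placeholder edge list
G = nx.Graph(interactions)                               # undirected: e_ij == e_ji

degree = dict(G.degree())                                # degree k_i
betweenness = nx.betweenness_centrality(G)               # vertex betweenness B_i
clustering = nx.clustering(G)                            # clustering coefficient C_i

# Degree distribution P(k): probability that a node has exactly k links.
n_nodes = G.number_of_nodes()
p_k = {k: c / n_nodes for k, c in Counter(degree.values()).items()}

# Shortest paths (BFS on the unweighted graph) between two systems,
# keeping direct links (l = 1) and single-connector links (l = 2).
p53_set = {"P53_HUMAN", "MDM2_HUMAN"}                    # placeholder node sets
nfkb_set = {"TF65_HUMAN", "NEMO_HUMAN"}
connector_freq = Counter()
for u in p53_set:
    for v in nfkb_set:
        if u in G and v in G and nx.has_path(G, u, v):
            path = nx.shortest_path(G, u, v)             # BFS for unweighted graphs
            l = len(path) - 1
            if l == 1:
                print(f"direct link: {u} - {v}")
            elif l == 2:
                connector_freq[path[1]] += 1             # middle node is the connector

# Frequency ranking f_i of connector proteins.
for protein, f_i in connector_freq.most_common():
    print(protein, f_i)
```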
Formally, the neighbourhood N_i for a vertex v_i is defined as its immediately connected neighbours, as in Eq. (1):

$N_i = \{ v_j : e_{ij} \in E \}$ (1)

where the degree k_i of a vertex is defined as the number of vertices, |N_i|, in its neighbourhood N_i. The betweenness centrality of a vertex v_i is defined as the number of shortest paths between pairs of other vertices that run through v_i, as in Eq. (2):

$B_i = \sum_{j \neq i \neq k} \frac{g_{jk}(i)}{g_{jk}}$ (2)

where i ≠ j ≠ k, g_jk is the number of equally short shortest paths between nodes v_j and v_k, and g_jk(i) is the number of those shortest paths on which node v_i is located [84]. The clustering coefficient C_i for a vertex v_i is given by the proportion of links between the vertices within its neighbourhood divided by the number of links that could possibly exist between them [15]. Therefore, if a vertex v_i has k_i neighbours, k_i(k_i − 1)/2 edges could exist among the vertices within the neighbourhood, and the clustering coefficient for undirected graphs can be defined as Eq. (3):

$C_i = \frac{2\,|\{ e_{jk} : v_j, v_k \in N_i,\ e_{jk} \in E \}|}{k_i (k_i - 1)}$ (3)

For the shortest path, given a real-valued weight function f: E → R and a start node v_i of V, we find a path p of P (the set of paths) such that

$\sum_{e \in p} f(e)$ (4)

is minimal among all paths connecting v_i and v_j. If the protein-protein interaction networks here constitute an unweighted graph, the weight function f can be considered as a path length l (the number of edges in path p). In this case, the shortest path problem is to find a path p having the minimal path length. A Breadth-First Search algorithm [82,83] has been employed to find the shortest paths between two nodes (the starting node v_i and destination node v_j) (see Figure 2). The shortest paths may have different path lengths (l = 1, l = 2, l = 3, l = 4, etc.). In the example shown in Figure 2, there are different shortest paths from start node P1 to destination nodes (P5, P8, P10, P11) via different connector nodes (P6, P7, P9). If the path length is 1, this signifies a direct connection, where two nodes are directly connected (e.g., P1 and P11). For the shortest paths with l = 2, there are three nodes: a start node (P1), a connector node (P9), and a destination node (P10). Using this form of analysis, the path lengths were used to obtain knowledge on the functional interactions between the proteins. For the purpose of this report we will only discuss findings for the shortest paths between two nodes of interest with path length l = 1 or l = 2, their connector nodes and their frequency ranking (f_i) [see Additional file 1: Suppl. 1-5 for the full list of shortest paths].

(Figure caption: Network representation of isolated p53, NF-κB and cell cycle systems.)

Results and Discussion

Recognising that individual signalling pathways do not act in isolation, an integrated approach to investigate the dynamic relationships between components, their organisation and regulation in signalling systems was undertaken. We started by searching the i2d database (containing 92,561 human protein interactions) for the proteins of interest. This search retrieved a total of 1,881 protein-protein interactions for components of the p53 and NF-κB networks (see Table 2). To increase confidence in the extracted interaction information, we excluded 47 interactions shown to have been derived from organisms other than human by homologous methods, so that the number of protein interactions obtained involving both the NF-κB and p53 networks was 1,834. Information on protein-protein interactions within the NF-κB and p53 pathways was also retrieved and analysed.
Finally, the interlinking connections between the NF-κB and p53 networks and proteins involved in the G1/S phase of the cell cycle (in particular, RB_HUMAN, CCND1_HUMAN, CDN1B_HUMAN, CD2A2_HUMAN, E2F1_HUMAN and CDN1A_HUMAN) were also investigated (see Table 4 for statistical information retrieved for the networks).

Network of Interactions

Following data extraction, descriptive analysis of the data was performed. The degree, betweenness and clustering coefficient values for the networks' components were calculated in order to ascertain the level of connectivity of the three systems. Figure 3 illustrates the molecular interactions obtained for the NF-κB, p53 and G1/S phase cell cycle networks. Figure 3A and Table 4 show that the p53 network contains 436 nodes and 506 interactions, seven of the nodes being articulation points (four original search nodes (in red) and three other associated nodes obtained from the extraction process (in cyan)). Articulation nodes (or cut vertices) [86,87] are nodes that play an important role in a network, in that the removal of such a node may drastically alter the network topology, leading to its fragmentation. Conversely, for the NF-κB network (see Table 4 & Figure 3B), 788 nodes and 1,352 interactions were observed. The articulation points were fifteen in number, fourteen of which were the search proteins considered (in yellow), together with an associated TIP60_HUMAN (in cyan) obtained during the extraction process. A subset of the highest connectivity or degree values is shown in Tables 5 and 6 [see Additional file 1: Suppl. 6-10 for connectivity values obtained for nodes not included in the Tables]. We found that for the three networks examined, the calculated degree for the initial list of proteins, with the exception of the CREL2 protein in the NF-κB network (Table 2), was much higher than that of the associated proteins found during the mining process, underscoring the central role of the initial list within their individual networks (search proteins are highlighted in Tables 5 and 6; please note that other nodes - TIP60_HUMAN in the NF-κB network (Figure 3B), and TCP4_HUMAN, PINX1_HUMAN and PM14_HUMAN in the p53 network (Figure 3A) - are associated articulation points). The highest-degree node (or connectivity) uncovered for the NF-κB network was IKKE_HUMAN, a protein responsible for inhibiting the NF-κB inhibitory subunits, with 324 interactions (see Table 5) [88]. This discovery suggests IKKE_HUMAN to be the most studied protein of the NF-κB system, and it may be a possible molecular target for therapy in the NF-κB system. In addition to this, four other proteins were found to have interacting proteins numbering over 100. These were: TF65_HUMAN (RelA), NEMO_HUMAN (IKKγ), NFKB2_HUMAN (p52), and NFKB1_HUMAN (p50) [note - this finding could also be a reflection of the fact that these proteins may be the most studied members of the NF-κB network]. Results obtained by vertex betweenness produced similar results to the degree of connectivity index reported in Tables 5 and 6. For the cell cycle network, the highly connected nodes were four in number (see Figure 3C and Table 4). Compared to the NF-κB network (Figure 3B), the p53 (Figure 3A) and cell cycle (Figure 3C) networks appeared to be sparse, with each node connected to a relatively small number of edges within the network, many of whom "know" each other.
The sparse nature could be explained by the fact that only proteins involved in the oscillatory feedback loops of the systems of interest, and not the entire published members were considered in this study. The highest-degree node for the p53 network was the P53_HUMAN protein, and -3]. These subgroups could also be described as network motifs [89][90][91], frequently recurring groups of interactions, usually highly conserved, which are thought to perform specific information processing roles in the networks; in some cases supporting their roles as oscillators [5,18,63,92]. Following the characterisation of the three networks with respect to their degree of connectivity, further calculations were made on their clustering coefficients. It was discovered that MDM2_HUMAN (mdm2) in the p53 network, REL_HUMAN (C-Rel) in the NF-κB network and E2F1_HUMAN (E2F-1) of the cell cycle were proteins found to have the highest clustering coefficient values; a finding reflecting on the nodes connectivity within their neighbourhood. That is to say, even though P53_HUMAN, RB_HUMAN and IKKE_HUMAN were found to be proteins with the most interaction within their individual networks; MDM2_HUMAN, REL_HUMAN (C-Rel) and E2F1_HUMAN were revealed to be proteins best at forming cliques in their networks. Having discovered for each system, the highly connected nodes, as well as the nodes with the most number of neighbours, it was of interest to study how all the individual system studied relates to each other. In order to do this, we set out to calculate the shortest paths and the frequency of proteins linking the systems to one another; thereby identifying key connector proteins thought to assist in the transmission of information (or cross talk) across the three networks. It was hoped that through this form of analysis, characteristics of the connector proteins linking the systems will be uncovered. Network of interactions between p53 and NF-κB pathways Since it has been suggested that the topology of a network affects the spread of information carried by a signal and thus diseases [34], the network of interactions between the p53 and NF-κB systems were investigated. Figure 4 illustrates the complex network formed between the p53 and the NF-κB systems, and the connector proteins linking them (proteins in the p53 network are denoted in red, and those of the NF-κB are in yellow - Figure 4A). We found 365 paths connect proteins in the p53 network to proteins in the NF-κB network; among which, only two are direct connections and 295 require a connector protein. The two direct interactions were revealed to be between: P53_HUMAN and IKKA_HUMAN, and P53_HUMAN and IKBA_HUMAN proteins; illustrating potential connection route to consider when creating a RB_HUMAN and E2F1_HUMAN proteins. Triangular connector nodes represent common components between RB_HUMAN and the two networks (in green), E2F1_HUMAN and the two networks (in blue), and RB_HUMAN and E2F1_HUMAN connections with the NF-κB and p53 networks (in yellow). Circular nodes in green denote RB_HUMAN connectors to p53 or NF-κB networks; and in blue for E2F1_HUMAN to p53 or NF-κB networks. The yellow and magenta circular nodes represent proteins connecting both E2F1_HUMAN and RB_HUMAN to members of the NF-κB (in yellow) and p53 (in magenta). Refer also to Tables 9, 10, 11, 12 and 13 for further information. Network representation of p53, NF-κB and cell cycle interactions unified model of the NF-κB and p53 system. 
Indirect links for the rest of the nodes were found to require protein mediators to act as connector proteins. The proteins acting as connectors between the two networks are shown in blue in Figure 4A, B and 4C. It is evident that the P53_HUMAN protein can itself act as a connecting protein between members of the NF-κB pathway and members of the p53 system (for example, CDN1A_HUMAN -P53_HUMAN -IKKA_HUMAN; and, MDM2_HUMAN -P53_HUMAN -IKKA_HUMAN). After having determined the shortest paths linking the p53 and NF-κB systems, the identified connector proteins linking the two systems were grouped according to their frequency values, and cross referenced with reference databases, for the interpretation of their functional prop-erties. Network of interactions between p53, NF-κB and the G1/ S phase of the Cell cycle Since it has been suggested, that some cell cycle proteins are activated by one pathway and are relevant for the regulation of another [44,[65][66][67][68][69], it was of interest to investigate the relationship between the NF-κB, p53 and the cell cycle systems. For this study, only events leading to the G1/S transition phase of the cell cycle, the point where NF-κB and p53 signal transduction events are active the most [93] were considered. We start by exploring the interactions between RB_HUMAN and E2F1_HUMAN cell cycle proteins, with members of the p53 and NF-κB networks. Figure 5 show the network obtained from this analysis. Proteins that link the proteins in the p53 and NF-κB networks to RB_HUMAN are denoted in green, whilst the proteins connecting the two networks to E2F1_HUMAN are in blue ( Figure 5A). Common protein shared between the p53 and NF-κB networks have been represented in the form of green triangles (for links to RB_HUMAN) and blue triangles (for links with E2F1_HUMAN) (see Figure 5B). Closer evaluation of the interactions linking the p53 network to the cell cycle proteins ( We repeated this analysis to include interactions between the rest of the G1/S cell cycle proteins (RB_HUMAN, CCND1_HUMAN, CDN1B_HUMAN, and E2F1_HUMAN) and the members of the p53 and NF-κB networks (see Figure 6 -only the connecting nodes linking CDN1B_HUMAN (p27, circle, yellow), CCND1_HUMAN (Cyclin D1, circle, magenta), RB_HUMAN (Rb, circle, green) and E2F1_HUMAN (E2F-1, circle, blue) to the p53 and NF-κB networks have been colour coded - Figure 6A and 6B). Shortest path lengths calculated for interactions between proteins in the p53 network and CDN1B_HUMAN, numbered 31 (all indirect links with path length = 2); and 26 for interactions with CCND1_HUMAN (25 indirect connection with path length = 2 and 1 direct connection {CDN1A_HUMAN -CCND1_HUMAN}). Similarly, for the NF-κB system, 83 shortest paths connecting CDN1B_HUMAN (33 of which have path length = 2), and 91 shortest paths connecting CCND1_HUMAN(46 of which are indirect links mediated by a single connector path length = 2) to members of the NF-κB network were determined (see Table 9 for shortest paths statistics). Frequency values and functional properties ascertained for nodes linking the p53 and cell cycle networks, as well for those linking the NF-κB with the cell cycle network have been reviewed in Table 10 Conclusion A network is usually thought of as a coherent system that comprises of units interacting in some kind of orchestrated and regulated fashion -such that the emergent behaviour of the whole (i.e. the network) is recognisable and can be characterised. 
Once some of the behaviour is recognised, the system can be described at a level of detail appropriate to the system's behaviour whilst ignoring the details of the constituent parts. Since molecular networks are large and complex, with their components and their interactions quite heterogeneous characterising the relationship between structure and dynamics of the system makes it far from straightforward. Although research aiming at coping with these challenges has become very popular, it is important to bear in mind that the current efforts can only profit from a combined theoretical and experimental approach. This is where the approach presented in this paper becomes beneficial. The idea is that by combining both the data driven and knowledge driven strategies, direct and or combinatorial interaction parameters of many protein can be captured from the information gained, and can thus be used to construct, guide and or unify dynamical models of signal transduction pathways from which a realistic model of the systems behaviour can be determined. The resulting dynamical model can then provide the conceptual and explanatory linkage between the observed phenomena and the predicted. This framework of computational modelling of molecular networks at various levels or organisation has the potential to allow cost effective experimentation and hypothesis exploration, computationally uncovering the behaviour of molecular species and combinatorial interactions that would be difficult and too expensive to carry out in a wetlab setting. While, network topology analysis is thus useful for showing which proteins in the network depend on which other protein, it does not give us any further information on the regulatory effects of these dependencies. Despite these methodological limitations, our results offer a view, demonstrating the importance of elucidating the functional roles key or shared components play in the propagation of signals across transduction systems. The main implication of the presented application is the recognition that changes in one signalling system, undoubtedly causes a ripple effect on the rest of the surrounding system -as shown by the extensive interconnection of the systems studied and their common shared components. It is hoped that the use of this form of analysis may also be beneficial in highlighting areas of research where very little is known for further future study.
7,236.6
2009-11-06T00:00:00.000
[ "Computer Science" ]
Adaptive Parallel Particle Swarm Optimization Algorithm Based on Dynamic Exchange of Control Parameters Updating the velocity in particle swarm optimization (PSO) consists of three terms: the inertia term, the cognitive term and the social term. The balance of these terms determines the balance of the global and local search abilities, and therefore the performance of PSO. In this work, an adaptive parallel PSO algorithm, which is based on the dynamic exchange of control parameters between adjacent swarms, has been developed. The proposed PSO algorithm enables us to adaptively optimize inertia factors, learning factors and swarm activity. By performing simulations of a search for the global minimum of a benchmark multimodal function, we have found that the proposed PSO successfully provides appropriate control parameter values, and thus good global optimization performance. Introduction In the various aspects of optimization, there are cases where a globally optimal solution is not necessarily obtainable.In such cases, it is desirable to find instead a semi-optimal solution that can be computed within a practical timeframe.To achieve this goal, heuristic optimization techniques are popularly studied and used, typified by genetic algorithms (GA), simulated annealing (SA) and particle swarm optimization [1] (PSO).In addition, since multipoint search algorithms like GAs and PSO can determine a Paretooptimal solution based on a one-time calculation, they are actively employed in applied research to handle multipurpose optimization problems. If the objective function under consideration is multimodal, then heuristic optimization techniques are desired to have qualities including a global solution search ability, maintained by preservation of solution diversity; a local solution search ability, maintained conversely by centralization of the solution search; and a balance between these two.Solution diversification and centralization strategies are factors universally shared by heuristic optimization techniques, and influence their performance.However, there are few precise and universal guidelines for configuring the values of the parameters that control these strategies: their configuration is problem-specific.Additionally, tuning of these parameters is not simple, and generally requires many preliminary calculations.Furthermore, which parameter values are suitable may vary at every stage of the solution search; pertinent examples include the configuration of crossover rate and spontaneous mutation rate in GAs and the temperature cooling schedule in SA. Focusing on PSO, a kind of multipoint search heuristic optimization technique, in this study we propose several parallel PSO algorithms in which control parameters are dynamically exchanged between a number of swarms and are adaptively adjusted during the solution search process.We also share our findings from an evaluation of algorithm performance on a minimum search problem for a multimodal objective function. 
Particle Swarm Optimization PSO is an evolutionary optimization calculation technique based on the concept of swarm intelligence. In PSO, the hypersurface of an objective function is searched as information is exchanged between swarms of search points, which simulate animals or insects. The next state of each individual is generated based on the optimal solution in its search history ("personal best"; pbest), the optimal solution in the combined search history of all individuals in the swarm ("global best"; gbest), and the current velocity vector. Briefly, assuming a population size N_p and problem dimension N_d, let x_i(t) and v_i(t) denote the position and velocity of an individual i (i = 1, ..., N_p) at the t-th step of the search. These two variables can be updated by means of the following equations, using the position and velocity at the t-th step:

$$v_i(t+1) = \gamma\,v_i(t) + c_1\,\mathrm{rand1}\,\bigl(\mathrm{pbest}_i(t) - x_i(t)\bigr) + c_2\,\mathrm{rand2}\,\bigl(\mathrm{gbest}(t) - x_i(t)\bigr) \quad (3)$$

$$x_i(t+1) = x_i(t) + v_i(t+1) \quad (4)$$

Here, pbest_i(t) represents the optimal solution discovered during the search through the t-th step by individual i itself, while gbest(t) represents the optimal solution discovered during the search through the t-th step by the swarm to which individual i belongs. The term γ represents inertia and takes a value in [0, 1] (inertia factor); c1 and c2 are weighting factors, respectively called the cognitive learning factor and the social learning factor (learning factors); and rand1 and rand2 are uniform random numbers in [0, 1]. The PSO solution search procedure is described below:
1. Decide population size and maximum number of search steps.
2. Set initial position and velocity of each individual.
3. Calculate objective function value for each individual.
4. Determine the optimal individual solution pbest for each individual and the optimal swarm solution gbest, and update these values.
5. Update the position and velocity of each individual according to Equations (3) and (4).
6. End search if desired solution accuracy is obtained or if the maximum number of steps is reached.
The algorithm behind PSO is simpler than a GA, another multipoint search heuristic optimization technique, making it easier to code and tending to lead to faster solution convergence. On the other hand, PSO sometimes loses solution diversity during the search, which readily invites excessive convergence. In response, improved PSO techniques have begun to be proposed around the world. Examples include distributed PSO and hierarchical PSO, which search the solution space with multiple different swarms [2] [3]; a method that performs a global search in its initial calculations but intensively searches the area of suboptimal solutions thereafter, similar to SA [4]; a technique that incorporates bounded rational randomness, like the "lazy ant" in ant colony optimization; and a method that avoids local solutions if the algorithm becomes caught in them for a while. As with many other heuristic optimization techniques, PSO includes several optional control parameters that analysts can set. Because these settings can greatly influence search performance, theoretical research on stability and convergence with respect to parameter values [5] [6] and research and development on PSO with adaptive parameter tuning functions (e.g., [7] [8]) are underway. The tuning of the quantum PSO (QPSO) [9] is simpler compared to standard PSO since QPSO has only a single control parameter.
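All of the parallel variants introduced below share the basic update of Equations (3) and (4). A minimal sketch of that update (not the authors' implementation; the default parameter values are placeholders, with the learning factors set to the value c0 = 1.4955 quoted later from Clerc et al.) is:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, gamma=0.9, c1=1.4955, c2=1.4955, rng=None):
    """One PSO update per Equations (3)-(4).
    x, v, pbest : arrays of shape (N_p, N_d); gbest : array of shape (N_d,)."""
    rng = rng if rng is not None else np.random.default_rng()
    r1 = rng.random(x.shape)   # rand1, uniform in [0, 1]
    r2 = rng.random(x.shape)   # rand2, uniform in [0, 1]
    v_new = gamma * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (3)
    x_new = x + v_new                                                    # Eq. (4)
    return x_new, v_new
```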
Proposed Method In this study, we focus on several parameters that control diversification and centralization in the solution search: the inertia factor γ, the learning factors c1 and c2, and the swarm activity (described in Section 3.3). Introducing concepts similar to those employed in the replica-exchange method [10] and the parallel SA method, we propose parallel PSO algorithms in which parameter values are adaptively adjusted via dynamic exchange of the above control parameters between multiple swarms during the solution search process. The replica-exchange method was developed in response to problems like spin glasses and protein folding, in which it is difficult to find the ground-energy state (a global optimum solution) because several semi-stable states (local optimum solutions) exist in the system. In the replica-exchange method, several replicas of the original system are prepared, which have different temperatures and never interact with each other. We encourage readers to imagine "temperature" here as the temperature parameter in Metropolis Monte Carlo simulations, i.e., it indicates the degree to which deterioration is permitted when making the decision to transition to a candidate next state. Solution searches in high-temperature systems exhibit behavior close to a random search, whereas solution searches in low-temperature systems exhibit behavior close to the steepest descent method. Solution search calculations are run independently and simultaneously for each replica, each at its respective constant temperature. At the same time, temperatures are exchanged periodically, after a certain number of search steps, between a given replica pair (with respective states X_k and X_{k+1}) having adjacent temperatures (T_k and T_{k+1}), according to the exchange probability w in the following equations:

$$w = \begin{cases} 1, & \Delta \le 0 \\ \exp(-\Delta), & \Delta > 0 \end{cases} \qquad \Delta = \left(\frac{1}{T_k} - \frac{1}{T_{k+1}}\right)\bigl(E(X_{k+1}) - E(X_k)\bigr)$$

Here, E(X) represents the energy of a replica at state X (i.e., the objective function value). Figure 1 shows a schematic diagram of the replica-exchange method (Figure 1: Schematic illustration of the replica-exchange method, with four replicas). In the replica-exchange method, high-temperature calculations correspond to retention of solution diversity, while low-temperature calculations correspond to a local solution search. Moreover, we can argue that it has some qualities of heuristic optimization algorithms for multimodal objective functions, in that its calculations are repeated as temperatures are probabilistically exchanged. Unlike SA, in which temperature falls monotonically, in this technique the temperature meanders if we focus on a single given replica. Thus, one can use this method to search a large solution space without becoming caught in a semi-stable state.
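The exchange criterion just described, which the parallel PSO variants below borrow, can be sketched as a small routine (assuming the standard Metropolis exchange rule with k_B = 1; a hedged illustration rather than the authors' code):

```python
import math
import random

def exchange_accepted(E_k, E_k1, T_k, T_k1, rng=random):
    """Metropolis decision for swapping two replicas with adjacent temperatures
    (T_k < T_k1); k_B = 1 is assumed. Returns True if the temperatures are exchanged."""
    delta = (1.0 / T_k - 1.0 / T_k1) * (E_k - E_k1)
    if delta >= 0:                            # the colder replica is the worse one: always swap
        return True
    return rng.random() < math.exp(delta)     # otherwise swap with probability exp(delta)
```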
Inertia-Factor Parallel PSO Here, we first propose a technique focusing on the inertia factor γ, a control parameter in Equation (3). The search trajectories of individuals with large γ are more curved, whereas those of individuals with small γ converge to an intermediate point between pbest and gbest (dependent on c1 and c2). Thus, efficient optimization should be achievable if, in the initial search, individuals are given large γ values and the solution search space is explored widely, while in its final stages, individuals are instead given small γ values and a solution is searched for intensively around pbest, gbest, and the area between them. For this reason, each individual's γ is typically reduced linearly with increasing search step t, according to the following equation:

$$\gamma(t) = \gamma_{\max} - (\gamma_{\max} - \gamma_{\min})\,\frac{t}{t_{\max}} \quad (7)$$

Here, γ_max and γ_min respectively represent the maximum and minimum inertia factors, and t_max is the maximum number of search steps. Note that there is no single optimal reduction schedule one can choose for the inertia factor γ: in truth, multiple techniques have been proposed besides the linear reduction described above [11], including exponential reduction [12] and stepwise reduction methods [13]. We consider N_s swarms with various different γ values in the Inertia-factor Parallel PSO (IP-PSO) proposed in this section. We assign the inertia factors γ_k (k = 1, ..., N_s) in this paper according to Equation (8). In IP-PSO, each swarm has its own gbest, the optimal solution found across all individuals in the swarm. Periodically, after a certain number of search steps, the objective function f(gbest) values of two swarms having adjacent γ_k values are compared; each γ_k value is then probabilistically exchanged (or not) according to the Metropolis decisions in Equations (9) and (10). Figure 2 shows a schematic diagram of IP-PSO. IP-PSO (and also the other proposed adaptive parallel PSOs described in Sections 3.2-3.4) employs the Metropolis criterion to determine the exchange acceptance of the control parameter rather than the move acceptance of each solution. The Metropolis decision will assign the smaller γ_k value to the swarm having the superior f(gbest) value with higher probability. (Note: "superior" here means "smaller", since this paper is concerned with minimum search problems.) As a result, a more intensive search can be performed in the vicinity of gbest. On the other hand, it is also possible to escape local optimum solutions by a global search, because the larger γ_k value is assigned to the swarm having the inferior f(gbest) value with higher probability. In addition, unlike the related methods mentioned above in which γ decreases monotonically, this method can escape local optimum solutions even if it becomes stuck during a search with a small γ value, because a larger γ value may be probabilistically assigned. The dynamic assignment of appropriate inertia factor γ values to each swarm according to the search conditions makes it unnecessary to configure a γ reduction schedule before carrying out optimization. The IP-PSO solution search procedure is described below:
1. Decide total population size, number of swarms, and maximum number of search steps.
2. Assign initial inertia factor values to each swarm according to Equation (8).
3. Set initial position and velocity of each individual.
4. Calculate objective function value for each individual.
5. Determine the optimal individual solution pbest for each individual and the optimal swarm solution gbest, and update these values.
6. Update the position and velocity of each individual according to Equations (3) and (4).
7. Periodically, after a certain number of search steps, compare objective function values between two swarms having adjacent inertia factor values; make the decision to exchange inertia factors according to Equations (9) and (10).
8. End search if desired solution accuracy is obtained or if the maximum number of search steps is reached.
Because each swarm can be simulated independently and simultaneously with only a slight communication cost, IP-PSO (and also the other proposed adaptive parallel PSOs described in Sections 3.2-3.4) is well suited to, and runs very efficiently on, massively parallel computers.
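A minimal sketch of the two IP-PSO ingredients just described is given below. The linear spread of initial γ_k values stands in for Equation (8), and the Metropolis-style exchange rule is an assumed replica-exchange-like form standing in for Equations (9) and (10), since only their qualitative behaviour is described above:

```python
import math
import numpy as np

def assign_inertia_factors(n_swarms, gamma_min=0.4, gamma_max=0.9):
    """Per-swarm inertia factors gamma_k; a linear spread between assumed bounds
    is used here in place of Equation (8), whose exact form is not reproduced."""
    return np.linspace(gamma_min, gamma_max, n_swarms)

def maybe_exchange_gamma(gammas, f_gbest, i, j, rng):
    """Metropolis-style exchange decision between swarms i and j holding adjacent
    inertia factors (gammas[i] < gammas[j]); a stand-in for Equations (9)-(10).
    The swarm with the smaller (better) f(gbest) tends to receive the smaller gamma."""
    delta = (1.0 / gammas[i] - 1.0 / gammas[j]) * (f_gbest[i] - f_gbest[j])
    if delta >= 0 or rng.random() < math.exp(delta):
        gammas[i], gammas[j] = gammas[j], gammas[i]
        return True
    return False
```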
Learning-Factor Parallel PSO Here, we propose a technique focusing on the learning factors c1 and c2, control parameters in Equation (3). For individuals with relatively large c1, PSO searches in the vicinity of the optimal solution in that individual's search history, pbest, whereas for individuals with large c2, it searches in the vicinity of the optimal solution in the search history of the swarm, gbest. Thus, efficient optimization should be realizable if, in the initial search, individuals are given large c1 and small c2 values to ensure solution diversity, while in its final stages, individuals are instead given small c1 and large c2 values in an attempt to centralize the search in the vicinity of gbest. Some time-change schedules have the learning factors c1 and c2 decrease (or increase) linearly with the increasing number of search steps [14]. For the Learning-factor Parallel PSO (LP-PSO) proposed in this section, we introduce the allocation parameter α (0 ≤ α ≤ 1), which regulates the balance between the learning factors c1 and c2, and define the LP-PSO learning factors c1′ and c2′ according to Equation (11), in which c0 is a constant. This paper uses c0 = 1.4955, a learning factor determined to be stable in the PSO stability analysis run by Clerc et al. [5]. When α = 1 in Equation (11), c2′ = 0 and the search is run in the vicinity of pbest; when α = 0, c1′ = 0 and the search is run in the vicinity of gbest. Similar to IP-PSO in Section 3.1, we consider N_s swarms having various different α values. We determine α_k (k = 1, ..., N_s) in this paper using Equation (12). Thereafter, as in IP-PSO, each swarm has its own gbest, the optimal solution found across all individuals in the swarm. Periodically, after a certain number of search steps, the objective function f(gbest) values of two swarms having adjacent α_k values are compared. Each α_k value is then probabilistically exchanged (or not) according to the Metropolis decisions, in the same manner as Equations (9) and (10). The decision will assign the smaller α_k value to the swarm having the superior f(gbest) value (i.e., small c1′ and large c2′) with higher probability. As a result, a more intensive search can be performed in the vicinity of gbest. On the other hand, it is also possible to escape local optimum solutions using a global search based on the pbest of each individual, because the larger α_k value (i.e., large c1′ and small c2′) is assigned to the swarm having the inferior f(gbest) value with higher probability. The assignment of appropriate learning factor c1′ and c2′ values to each swarm according to the search conditions makes it unnecessary to configure a time-change schedule before carrying out optimization.
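A sketch of the LP-PSO parameter handling follows. The normalization of the learning-factor split is an assumption (the text fixes only the limiting cases α = 1 and α = 0), and the even spread of α_k values stands in for Equation (12):

```python
import numpy as np

C0 = 1.4955   # stable learning factor from the stability analysis of Clerc et al. [5]

def learning_factors(alpha, c0=C0):
    """LP-PSO split of the learning factors (spirit of Equation (11)).
    The normalization c1' + c2' = 2*c0 is an assumption; the text only fixes the
    limits alpha = 1 -> c2' = 0 (pbest search) and alpha = 0 -> c1' = 0 (gbest search)."""
    c1 = 2.0 * alpha * c0
    c2 = 2.0 * (1.0 - alpha) * c0
    return c1, c2

def assign_alphas(n_swarms):
    """Per-swarm allocation parameters alpha_k in [0, 1]; an even spread is assumed
    in place of Equation (12)."""
    return np.linspace(0.0, 1.0, n_swarms)
```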
Activity Parallel PSO Here, we propose a technique focusing on a control parameter for swarm activity. Yasuda et al. [15] used molecular motion as an analogy for the movement of each individual in PSO, and defined the activity of a swarm, Act, as an index of the diversification/centralization of a solution search, according to Equation (13). Swarms with high activity have many individuals with high velocities, and search over a wide solution space. Swarms with low activity, on the other hand, have many individuals with low velocities, and so search intensively for local solutions. Activity is observed moment-to-moment, because searches in which activity decreases gradually and continually can yield favorable solutions. In the event that the measured activity is lower than the preset baseline activity, increasing the inertia factor γ of each individual promotes global searches; in the event that the measured activity is higher than the preset baseline activity, decreasing the inertia factor promotes local searches. These behaviors thus constitute adaptive parameter regulation (of the inertia factor). However, the baseline activity reduction schedule must be set appropriately in advance, such that it decreases gradually with increasing search steps. For the Activity Parallel PSO (AP-PSO) proposed in this section, we directly control swarm activity (i.e., what was measured in [15], the past study mentioned above) in a manner similar to temperature control methods in molecular dynamics applications. Each individual's velocity should be appropriately scaled at each search step in order to hold the measured, actual activity Act (defined by Equation (13)) at the target activity Act_0. Briefly, the scaling factor s is calculated according to Equation (14), and the velocity v_i of each individual is converted to s·v_i. We consider N_s swarms controlled by various different target activity Act_0 values. We determine Act_{0,k} (k = 1, ..., N_s) in this paper using Equation (15). Thereafter, as in IP-PSO and LP-PSO, each swarm has its own gbest, the optimal solution found across all individuals in the swarm. Periodically, after a certain number of search steps, the objective function f(gbest) values of two swarms having adjacent Act_{0,k} values are compared. Each Act_{0,k} value is then probabilistically exchanged (or not) according to the Metropolis decisions, in the same manner as Equations (9) and (10). The decision will assign the smaller Act_{0,k} value to the swarm having the superior f(gbest) value with higher probability. As a result, a more intensive search can be performed in the vicinity of gbest. On the other hand, it is also possible to escape local optimum solutions using a global search, because the larger Act_{0,k} value is assigned to the swarm having the inferior f(gbest) value with higher probability. Assigning an appropriate activity value to each swarm according to the search conditions makes it unnecessary to configure an activity reduction schedule in advance. PSO with Simultaneous Exchange of Multiple Control Parameters The proposed PSO techniques in Sections 3.1-3.3 above focus on only one kind of control parameter at a time, and assign parameter values that differ between each swarm. Nonetheless, adaptive control is also possible if several control parameters are simultaneously and dynamically exchanged between swarms. For example, we can consider N_s swarms each having a different inertia factor γ and target activity Act_0: we call this technique the Inertia-factor and Activity Parallel PSO (IAP-PSO). For IP-PSO in Section 3.1, a given γ_k value corresponds one-to-one with a given swarm; with IAP-PSO, a given (γ, Act_0)_k pair corresponds one-to-one with a given swarm. These control parameter pairs are exchanged between swarms. Figure 3 shows a schematic diagram of IAP-PSO. We can consider an Inertia-factor and Learning-factor Parallel PSO (ILP-PSO) in the same way: in it, the inertia factor and learning factors are simultaneously exchanged. Numerical Simulation and Discussion We evaluate the performance of the proposed techniques using a minimum search problem for the Rastrigin function, a representative multimodal function. Our Rastrigin function is represented by the following equation:

$$f(\mathbf{x}) = \sum_{i=1}^{N_d} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]$$

The Rastrigin function is multimodal, its variables are completely independent of each other, and it has a minimum value of f(0, ..., 0) = 0. We first evaluated the proposed PSO techniques in which only a single control parameter is exchanged in the search process. We observed the relationship between successful transitions in control parameter values and changes in the objective function value, and compared their performance with other techniques. Specifically, we compared a Linearly Decreasing Inertia factor PSO (LDI-PSO), in which the inertia factor linearly and continually decreases according to Equation (7) with increasing search step t, with the proposed techniques IP-PSO, LP-PSO, and AP-PSO. Table 1 shows the major simulation conditions for each technique.
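The benchmark function and the activity-based velocity control used in AP-PSO can be sketched as follows; the concrete form of the activity in Equation (13) is assumed here to be the mean absolute velocity, and the linear rescaling stands in for Equation (14):

```python
import numpy as np

def rastrigin(x):
    """Rastrigin benchmark used in the experiments; minimum value 0 at x = 0."""
    x = np.asarray(x)
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0, axis=-1)

def swarm_activity(v):
    """Swarm activity Act (Equation (13)); assumed here to be the mean absolute
    velocity per individual and dimension, in the spirit of [15]."""
    return np.mean(np.abs(v))

def scale_to_target_activity(v, act_target, eps=1e-12):
    """Rescale all velocities so the measured activity matches the target Act_0
    (stand-in for Equation (14)); the simple linear scaling factor is an assumption."""
    s = act_target / max(swarm_activity(v), eps)
    return s * v
```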
Figure 3. Schematic illustration of the Inertia-factor and Activity Parallel PSO (IAP-PSO), with four swarms. In the performance evaluation experiments, the number of dimensions was set at N_d = 100, and the initial coordinates and initial velocity of each individual were set according to uniform random numbers in the respective ranges.
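For concreteness, the whole IP-PSO procedure listed in Section 3.1 can be strung together as in the sketch below. It reuses the helper routines from the earlier sketches, runs the swarms sequentially in a single process (the real benefit of the method comes from running them in parallel), and uses placeholder values for the search range and exchange interval:

```python
import numpy as np

def run_ip_pso(f, n_swarms=8, pop=800, dim=100, steps=2000, exchange_every=50,
               x_range=(-5.12, 5.12), seed=0):
    """End-to-end sketch of the IP-PSO procedure steps listed above. Reuses
    pso_step, assign_inertia_factors and maybe_exchange_gamma from the earlier
    sketches; search range and exchange interval are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    lo, hi = x_range
    x = rng.uniform(lo, hi, (n_swarms, pop, dim))          # step 3: initial positions
    v = rng.uniform(-(hi - lo), hi - lo, x.shape)          #         and velocities
    pbest = x.copy()
    pbest_f = np.array([f(xs) for xs in x])                # step 4: objective values
    gbest_f = pbest_f.min(axis=1)
    gbest = np.array([xs[np.argmin(fs)] for xs, fs in zip(x, pbest_f)])
    gammas = assign_inertia_factors(n_swarms)              # step 2: initial gamma_k

    for t in range(steps):
        for k in range(n_swarms):                          # steps 4-6 for each swarm
            x[k], v[k] = pso_step(x[k], v[k], pbest[k], gbest[k],
                                  gamma=gammas[k], rng=rng)
            fx = f(x[k])
            better = fx < pbest_f[k]
            pbest[k][better] = x[k][better]
            pbest_f[k][better] = fx[better]
            if pbest_f[k].min() < gbest_f[k]:
                gbest_f[k] = pbest_f[k].min()
                gbest[k] = pbest[k][np.argmin(pbest_f[k])]
        if (t + 1) % exchange_every == 0:                  # step 7: exchange gammas
            order = np.argsort(gammas)                     # swarms sorted by current gamma
            for a in range(n_swarms - 1):
                maybe_exchange_gamma(gammas, gbest_f, order[a], order[a + 1], rng)
    best = np.argmin(gbest_f)                              # step 8 (fixed step budget here)
    return gbest[best], gbest_f[best]

# Example (with the rastrigin sketch above): run_ip_pso(rastrigin, steps=200)
```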
Figure 4 shows time series data for the objective function f(gbest) and the control parameter values obtained via each technique. The time series shown for LDI-PSO is the data for the best of eight search attempts, assuming 6400 individuals × 1 swarm. The time series shown for IP-PSO, LP-PSO, and AP-PSO are the respective data for the best swarm within a representative search attempt (eight search attempts in total), assuming 800 individuals × 8 swarms. In IP-PSO, adaptive control is realized through the dynamic exchange of inertia factor values, which occurs spontaneously without the need to configure a stepwise reduction schedule for the inertia factor such as the one used in the aforementioned [13]. Compared with LDI-PSO, which also uses the inertia factor as a control parameter and has a linear reduction schedule for it, IP-PSO achieves lower objective function values. Looking at LP-PSO, on the other hand, small α_k (i.e., small c1′ and large c2′) values are assigned from around the 350th search step onward, in response to the relatively small objective function values obtained in the initial search around that step number. The solution ceases to improve for a while thereafter, but large α_k (i.e., large c1′ and small c2′) values were assigned from step 1150 to around step 1700; as a result, the trajectory escapes the local optimum solution, and the solution continues to improve from around step 1400 onwards. After around the 1700th search step, small α_k values are assigned once more; as a result of this search centralization, the objective function continues to drop until the maximum (i.e., final) search step. For AP-PSO, the solution continually improves as the activity frequently fluctuates. The above results obtained with each proposed PSO technique show that the observed diverse shifts in control parameters depend on the search conditions. LP-PSO and AP-PSO achieved superior results to LDI-PSO and IP-PSO, with final objective function values of, respectively, 24.5 and 25.0 versus 135.3 and 80.6. We next evaluated the proposed PSO techniques in which multiple parameters are exchanged simultaneously. The four techniques compared were (1) the linearly decreasing inertia-factor and learning-factor PSO (LDIL-PSO), in which the inertia factor γ and the learning factors are linearly reduced; (2) the linearly decreasing inertia-factor and activity PSO (LDIA-PSO); (3) ILP-PSO; and (4) IAP-PSO. Figure 5 shows time series plots for the objective function f(gbest) obtained via each technique. The time series shown for LDIL-PSO and LDIA-PSO are the data for the best of eight search attempts, assuming 6400 individuals × 1 swarm. The time series shown for ILP-PSO and IAP-PSO are the data for the best swarm within a representative search attempt (eight search attempts in total), assuming 800 individuals × 8 swarms. We can see that the simultaneous adjustment of multiple control parameters improves search performance compared with Figure 4. This is true if we compare those techniques in which control parameters are linearly reduced (LDI-PSO vs. LDIL-PSO and LDIA-PSO) with one another, as well as if we compare those techniques in which control parameters are dynamically exchanged (LP-PSO and AP-PSO vs. ILP-PSO and IAP-PSO). This is because the simultaneous adjustment of multiple control parameters enables each swarm to approach the equilibrium state corresponding to the parameter values at the time more rapidly. ILP-PSO and IAP-PSO yielded mean final objective function values of 2.6 and 20.6, respectively. Performance enhancement was particularly pronounced in ILP-PSO, which achieved the best objective function value of all techniques. Table 1.
Summary of simulation conditions for performance evaluation of the proposed PSOs which exchange a single control parameter.
5,360.6
2016-08-11T00:00:00.000
[ "Computer Science", "Engineering" ]
Development of a High-Power Surface Grating Tunable Distributed-Feedback Bragg Semiconductor Laser Based on Gain-Coupling Effect: Lasers used for space communication, lidar, and laser detection in space-air-ground integration applications typically use a traditional 1550 nm band tunable distributed-feedback Bragg (DFB) semiconductor laser, which has low output power, a complex fabrication process, and high fabrication cost. In this paper, we present a gain-coupled surface grating-based 1550 nm DFB semiconductor laser that can be fabricated without the use of secondary epitaxial growth techniques or high-precision lithography. Periodic electrical injection is used to achieve a gain coupling effect. A tapered waveguide is added to achieve a high output power, and the use of AlGaInAs multiple quantum wells in the active region reduces the linewidth of the laser. A continuous-wave (CW) output power of 401.5 mW is achieved at 20 °C, the maximum side mode rejection ratio exceeds 55 dB, the measured 3 dB linewidth is 18.86 MHz, and a stable single-mode output is maintained with a quasi-continuous tuning range of 6.156 nm near 1550 nm from 10 °C to 50 °C. With its simple preparation method, low cost, excellent performance, and stable tuning, this laser has extremely high commercial value in applications such as space communication, lidar, and laser detection. Introduction The advantages of DFB semiconductor lasers have resulted in their use across a wide range of applications in optical communication, medicine, materials processing, and integrated optics [1][2][3][4]. These advantages include stable single-mode selection, tunable wavelength, direct modulation, and easy monolithic integration with other devices. 1550 nm DFB semiconductor lasers are a crucial component in laser ranging, free-space laser communication, and vehicle-mounted lidar applications due to their strong airborne penetration abilities and small attenuation [5][6][7]. According to coupled-mode theory, DFB semiconductor lasers can be divided into two modulation modes: gain-coupled modulation or refractive index-coupled modulation [8]. Both modulation methods have some shortcomings. The main issue with refractive index-coupled DFB semiconductor lasers based on a uniform grating with periodic modulation of the refractive index is that they have two modes with the same, and smallest, loss [9]. That is, there are two modes that occur simultaneously, and neither occurs at the Bragg wavelength. The solution to this problem is to use λ/4 wavelength phase-shift gratings. However, this not only takes more time and costs more but also introduces complex fabrication techniques such as epitaxial regrowth or fine nanoscale grating fabrication [10][11][12]. Another way to achieve Bragg-wavelength lasing is to use gain-coupled DFB semiconductor lasers based on a periodic modulation of the gain (loss) [8]. Compared with the index-coupled DFB semiconductor laser, the gain-coupled DFB semiconductor laser has stable dynamic single-mode characteristics [13]. Its modulation characteristics are better, and it is less affected by facet reflection. Even if the facets are not reflective, optimal performance can be achieved [14,15]. In addition, the gain-coupled DFB semiconductor laser has a wide range of spectral characteristics and is the best choice for manufacturing tunable lasers [16].
However, as with index-coupled DFB diode lasers, traditional gain-coupled DFB diode lasers also rely on complex epitaxial regrowth techniques, which reduces their advantages over index-coupled DFB diode lasers. In our previous studies, gain-coupled DFB semiconductor lasers with wavelengths of 795 nm, 905 nm, 990 nm, and 1045 nm were fabricated based on periodic electrode windows and surface grating structures [17][18][19][20]. The periodic electrical injection of the surface p-electrode causes the quantum wells in the active region to generate periodic gain differences, realizing the gain coupling effect and enabling the device to achieve a stable single longitudinal mode output [21][22][23]. Surface gratings can provide optical feedback as well as modulate the spectrum, enabling wavelength tunability and improving single longitudinal mode performance [24][25][26]. This method not only avoids complex epitaxial regrowth, nano-grating fabrication, and other time-consuming processes but can also be realized using ordinary i-line lithography technology, which reduces costs and greatly improves the yield. In this paper, we apply periodic p-electrodes and surface gratings to a 1550 nm DFB semiconductor laser. AlGaInAs multi-quantum wells, which have a smaller linewidth enhancement factor and improved temperature characteristics compared to InGaAsP equivalents, are selected, making it easier to achieve narrow linewidths [27,28]. In addition, since the output power of traditional 1550 nm tunable DFB lasers only reaches tens of milliwatts or even a few milliwatts [29,30], we added a tapered waveguide to increase the output power [31]. In this paper, the design of the device is described in Section 2, the measurement results and analysis are characterized in Section 3, and the conclusion is presented in Section 4. Materials and Methods The device consists of two parts: the front part is the DFB laser, and the rear part is the tapered waveguide, as shown in Figure 1a. In the DFB laser section, the period of the grating was 6.0 µm, and the ridge waveguide (W = 4.0 µm, L = 1.0 mm) was in the center of the grating. Electrode windows (R = 1.5 µm) were patterned on the grating to form periodic electrical injection channels, and there was no current injection anywhere except at the electrode windows. The topography of the surface grating and periodic electrode windows can be seen by scanning electron microscopy (SEM), as shown in Figure 1b. A tapered waveguide (L = 1.5 mm) with a 6° taper angle increased the output power of the device. The width of the tapered waveguide port was 150 µm. The insulating channel between the two parts is 250 nm, ensuring that the two parts are electrically driven independently without affecting each other. Ti/Pt/Au metal layers are deposited as p- and n-electrodes by magnetron sputtering. The lift-off technique is used to define the p-electrode shape. Thermal annealing is then applied to improve the metal contact and lower the electrical resistance. The lasers are then obtained after wafer cleaving, and the two cleaved facets are coated with a silicon oxide layer for anti-reflection and protection. The current injected into the front part is defined as I_DFB, and that into the rear part as I_Tap. The device (W = 500 µm, L = 2.5 mm) is soldered to a heat sink with the p-side facing down to form an ohmic contact and mounted on a thermoelectric cooling (TEC) controlled stage for temperature tuning.
No complex process technology is used in the entire process, which makes the fabrication simple and low-cost. The surface gratings, ridge waveguides, tapered waveguides, and periodic electrodes were all fabricated by i-line lithography, after which inductively coupled plasma (ICP) etching was used to etch them. The surface grating was fabricated by shallow ICP etching, which helped to reduce scattering loss. The periodic electrical injection creates periodic gain differences in the active area, as can be seen in Figure 1c. Considering that surface gratings cause simultaneous changes in the real and imaginary parts of the refractive index, we carefully designed them to avoid excessive refractive-index coupling and thereby prevent higher-order scattering, which can cause additional optical losses and increase the threshold. The coupling coefficient κ is expressed in terms of the vacuum wave number k0 = 2π/λ0 for the vacuum wavelength λ0, the optical confinement factor Γ of the grating in our device, the refractive index change Δn in the waveguide, the grating's groove width Lg, the period Λ of the l-th order grating (Λ = lπ/n_eff), the optical confinement factor Γ′ of the active region, and the gain/loss change Δg in the waveguide. By analyzing the coupling coefficient κ, the influence of the refractive index coupling on the device is reduced by decreasing Δn and Γ in the real part, while increasing Δg in the imaginary part makes the gain coupling occupy the main modulation role in practical operation. Calculation shows that the imaginary part is four orders of magnitude larger than the real part. This means that the strength of the refractive index coupling is negligible, and the device is mainly modulated by the gain coupling. The epitaxial structure of the device was fabricated by metal-organic chemical vapor deposition (MOCVD), as shown in Figure 1d. The active region of the device was made of AlGaInAs material, which reduced the linewidth of the device.
The 3 dB linewidth of the device was tested by the frequency-shifted non-zero delay self-heterodyne method. As shown in Figure 2, the output signal of the DFB laser entered coupler 1 after passing through the isolator and was then split into two paths. One path was delayed by the fiber ring and then sent to coupler 2; the other path was sent to coupler 2 through an acousto-optic modulator (AOM), and the two paths were mixed by the detector. Following this, the difference-frequency electrical signal was sent to the spectrum analyzer. The fiber-optic delay line was formed of a 50 km long single-mode fiber. Couplers 1 and 2 were single-input double-output types with a split ratio of 1:1. The AOM was an acousto-optic frequency shifter with a frequency up-shift of 80 MHz. Results and Discussion The power-current (PI) characteristics with CW operation at 20 °C are shown in Figure 3. The threshold current of the DFB laser was measured to be 20 mA. Figure 3 shows that when I_DFB was fixed, an increase in I_Tap resulted in increased output power. As the current was increased to 3.0 A, the output power reached 401.5 mW. Over this range, the curve maintained a good linear trend and did not reach the non-linear region of saturation, so we hypothesize that the output power would improve significantly at higher currents.
Figure 4 shows the spectra for different I_DFB values (from 50 mA to 700 mA, measured every 50 mA) at a fixed I_Tap value of 0.5 A at 20 °C (measured with an AQ6370C spectrum analyzer with a spectral resolution of 0.02 nm). As the current increased, a significant number of carriers were injected into the device, causing the refractive index of each layer to change. In addition, the injection of carriers caused a build-up of heat in the device, which eventually led to a narrowing of the bandgap of the gain material in the active region. This means that the gain peak and the corresponding lasing wavelength shift toward longer wavelengths, i.e., a red shift. The wavelength redshift in Figure 4 was from 1548.44 nm to 1550.52 nm (2.08 nm). The spectrum shows that several side peaks appeared in addition to the main laser peak as the current increased. Although side peaks appeared, the SMSRs at all lasing wavelengths were still over 37 dB, indicating that the device always displayed acceptable single-mode operating characteristics.
Figure 5 shows the variation curve of the central lasing wavelength obtained by changing I_DFB at different temperatures. When the temperature was constant, the lasing wavelength increased as I_DFB increased. Similarly, as the temperature increased from 10 °C to 50 °C, the lasing wavelength also increased. It can be seen from Figure 5 that the change in lasing wavelength with current at different temperatures is almost linear. This is because the surface Bragg grating in the device is minimally affected when the temperature increases, whereas the bandgap and refractive index change in an approximately linear relationship. This also limits the rate of change of the lasing wavelength of the DFB laser with temperature. However, it also makes the single-mode stability of the device excellent, and no mode-hopping phenomenon occurs. The achieved quasi-continuous tuning range is 6.156 nm (from 1547.468 nm to 1553.624 nm), and the rate of change of the lasing wavelength with temperature was 0.154 nm/°C.
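As a quick consistency check on the numbers above (a worked-arithmetic aside, not part of the original analysis), the quoted temperature coefficient multiplied by the 40 °C temperature span reproduces the quoted quasi-continuous tuning range:

```python
# Consistency check using the values reported in the text.
rate_nm_per_C = 0.154               # wavelength shift per degree Celsius
temp_span_C = 50 - 10               # tuning performed from 10 °C to 50 °C
print(rate_nm_per_C * temp_span_C)  # 6.16 nm, consistent with the reported 6.156 nm range
```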
As shown in Figure 6a, the maximum SMSR of the device exceeded 55 dB. At the same time, the SMSR at different temperatures and different values of I_DFB was also measured, as displayed in Figure 6b. It is clear that when the temperature and the current were lower, just after the laser had begun lasing, the SMSR increased proportionally with the increase in current. As I_DFB increased slightly, a side mode appeared, causing the side mode and the main mode to compete. This caused the SMSR to become smaller as I_DFB increased. This situation improved as the current continued to increase, because at high current levels the main mode dominates, which suppresses the growth of the side mode and increases the SMSR. At high temperatures, the lasing wavelength of the DFB laser moves to a longer wavelength, which means that the mode competition also moves to a longer wavelength. However, as the performance of the device decreases at high temperatures, non-radiative recombination consumes more carriers, and the remaining carriers are mainly provided to the lasing mode. This means that the side modes do not receive many carriers, and so the SMSR increases with the increase in I_DFB. The 3 dB linewidth test results are shown in Figure 7. The measured spectral range was 160 MHz with a resolution of 1 MHz. Figure 7a shows the 3 dB linewidth measured at 20 °C using the delayed self-heterodyne linewidth test system. In the spectrogram, the 3 dB linewidth represents the spectral width corresponding to the 3 dB drop from the maximum value of the spectral line amplitude. The measured spectral lines are of the Lorentz line type, so more accurate results can be obtained by performing a Lorentz fit on the measured spectral lines. The measured results were fitted by the Lorentz method, and the fitted 3 dB linewidth was 18.86 MHz. As the temperature increased, the linewidth generally displayed a clear trend. In theory, when the output power is not saturated, the linewidth of the semiconductor laser is proportional to the derivative of the output power. However, in practice, the linewidth is further broadened by the linewidth enhancement factor and the spatial hole burning effect. Therefore, the actual measured linewidth is affected by multiple factors. Increasing the differential gain and optimizing the coupling coefficient had a beneficial impact on reducing the linewidth of the semiconductor laser. The device maintained a stable single longitudinal mode output throughout the test period.
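The Lorentz fit used to extract the 3 dB linewidth can be sketched as follows; this is a generic illustration rather than the authors' analysis code, and the arrays `freq_mhz` and `power_db`, standing for the measured beat spectrum around the 80 MHz AOM shift, are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, fwhm, amp, offset):
    """Lorentzian line shape; fwhm is the full width at half maximum."""
    return amp * (fwhm / 2.0) ** 2 / ((f - f0) ** 2 + (fwhm / 2.0) ** 2) + offset

def fit_3db_linewidth(freq_mhz, power_db):
    """Fit the beat spectrum (converted from dB to linear power) and return the
    fitted FWHM in MHz, i.e. the 3 dB width of the self-heterodyne beat note."""
    power_lin = 10.0 ** (power_db / 10.0)
    p0 = [80.0, 20.0, power_lin.max(), power_lin.min()]   # initial guess near the AOM shift
    popt, _ = curve_fit(lorentzian, freq_mhz, power_lin, p0=p0)
    return popt[1]
```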
The performance characteristics of the DFB laser fabricated in this study were compared with those of other reported DFB lasers. The results are listed in Table 1. As can be seen from Table 1, no device performs well on all characteristic parameters. The device fabricated in this paper has a much higher output power than those reported in the other literature and maintains a high SMSR at high output power. Its tuning range and linewidth, however, are not as outstanding as its output power and SMSR. In future research, we will focus on improving the tuning range and reducing the linewidth to further improve the device. Conclusions In this paper, we demonstrated a high-power surface grating 1550 nm tunable DFB laser based on the gain-coupling effect. The gain coupling effect was realized by forming a periodic gain difference with periodic p-electrode windows. A tapered waveguide was introduced to improve the output power, and a multiple quantum well structure formed of AlGaInAs material was used in the active region to reduce the linewidth of the laser. Our device achieved consistently excellent performance.
This was exhibited by a CW output power of 401.5 mW at 20 °C, a maximum side-mode rejection ratio exceeding 55 dB, a measured 3 dB linewidth of 18.86 MHz, and a quasi-continuous wavelength tuning range of 6.156 nm from 10 °C to 50 °C. Our device does not use complex manufacturing techniques such as secondary epitaxial growth and can be manufactured by i-line lithography, which keeps the fabrication simple and low-cost and gives the laser high commercial value in applications such as space communication, lidar, and laser detection.
6,400.8
2022-04-29T00:00:00.000
[ "Physics" ]
Non Local Impact Ionization Effects in Semiconductor Devices Impact ionization processes define the breakdown characteristics of semiconductor devices. An accurate description of such phenomena, however, is limited to very sophisticated device simulators such as Monte Carlo. A new physical model for the impact ionization process is presented, which accounts for dead space effects and high energy carrier transport at a Drift Diffusion level. Such a model allows us to define universal impact ionization coefficients which are device-geometry independent. By using available experimental data these parameters have been calculated for In0.53Ga0.47As. INTRODUCTION Device simulation has become a fundamental part of the development and optimization of electronic devices. Next to standard approaches such as the Drift Diffusion method [1], which has led to several commercial software packages, newer techniques are also available, ranging from hydrodynamic approaches to particle-based simulations, such as the Monte Carlo or the Cellular Automaton methods [2,3]. In general, one pays for the enhanced physical content of the latter approaches with a heavier computational load. This is particularly true in the presence of high electric fields. While physical simulators require an accurate description of high energy effects, Drift Diffusion methods can rely on simplified phenomenological approaches. The aim of this paper is to show the inadequacy of some of these approaches, by focusing on the process which is mainly responsible for the breakdown of devices, namely impact ionization (II). Furthermore, we will indicate improvements which allow us to keep the speed advantages of Drift Diffusion methods while incorporating the main physical features of the II process. IMPACT IONIZATION IN DRIFT DIFFUSION AND MONTE CARLO SIMULATORS II is the phenomenon by which a hot carrier colliding with a valence electron creates a new pair of carriers, both available for conduction. A careful model of II must account for the high energy behaviour of carriers, since the threshold for the process is at least equal to the energy gap of the semiconductor under consideration. The best treatment in terms of physical accuracy is provided by the Monte Carlo method, which typically includes II by associating with each carrier an ionization probability per unit time, dependent on the carrier status (energy, velocity, momentum, etc.). Several degrees of depth are possible, based on simplified bands and phenomenological II rates, or on complete bands and microscopic II rates. The latter approach has been applied mainly to bulk semiconductors, due to its numerical complexity, while the former one can be applied to devices. Recently, a Monte Carlo analysis has been presented of an AlGaAs/GaAs Heterojunction Bipolar Transistor (HBT) [4,5], which perfectly reproduced available experimental results both on the multiplication factor and on the electroluminescence spectra of the device in a near-breakdown regime [6,7]. There, a three-valley non-parabolic model was implemented for electrons and holes, together with the Kane model for II. The main conclusions of that work have been: 1. The average energy and the ionization coefficients reach their maximum not at the base-collector junction, where the electric field reaches its maximum, but rather inside the collector. Such an effect is referred to as the "dead space" effect [8][9][10][11].
2. Electrons and holes gain considerable energy in the collector due to the presence of very high electric fields. As a result, the electron and hole distribution functions are very hot, leading to strong ionization processes and to radiative transitions within the conduction and the valence band, respectively [6], responsible for the observed electroluminescence. Although the Monte Carlo simulation is physically very accurate, it is also extremely time consuming. It would therefore be preferable, in the context of device modeling (and especially in the presence of ionization phenomena), to use faster numerical tools such as those based on the Drift Diffusion algorithm. There, the following equations are used:

$$J_p = q\,p\,\mu_p F - q D_p \frac{dp}{dx} \quad (1)$$

$$J_n = q\,n\,\mu_n F + q D_n \frac{dn}{dx} \quad (2)$$

$$\frac{dJ_p}{dx} = -\frac{dJ_n}{dx} = q\,(G - R) \quad (3)$$

In Eqs. (3), II is accounted for by means of the generation term, G. Unfortunately such approaches do not correctly describe hot carrier and non local effects, unless ad hoc phenomenological corrections are implemented. In the following, we will briefly revise the widely accepted way to deal with II in conjunction with Eqs. (1)-(3). LOCAL MODEL: DESCRIPTION AND FAILURE The most common method to deal with II phenomena in Drift Diffusion simulations is represented by the Local Model (LM) [12-15]. Within such a model the probability to generate an II event in (x, x + dx) for a carrier moving in the +x direction depends only on the local electric field. Under this hypothesis, II can be fully characterized by the mean free path between ionizing collisions, for both electrons, <l_n>, and holes, <l_p>. The II generation rate is linked to the <l_i> parameters of the LM through

$$qG = \alpha\,|J_n| + \beta\,|J_p|, \qquad \alpha = \frac{1}{\langle l_n \rangle}, \quad \beta = \frac{1}{\langle l_p \rangle} \quad (4\text{-}5)$$

for electrons and holes respectively. Equations (4-5) are usually introduced referring to a constant electric field, but can be extended to a generic field shape by assuming the II coefficients to be functions of the local electric field. Within the LM, the II process in a given semiconductor under a specified electric field is fully characterized by a pair of real numbers, namely the ionization coefficients. Equations (4-5) can be used to link the II coefficients (i.e. the inverse of the carrier mean free path between ionizing collisions) to macroscopically observable quantities, such as the multiplication factor or the breakdown length [12-14]. Such equations can, in turn, be applied to extract the II coefficients as a function of the electric field from experimental data [12,13]. As a matter of example, we report in Fig. 1a a comparison of the InGaAs electron ionization coefficients obtained by application of the LM from different authors [16][17][18]. The same quantity is plotted in Fig. 1b for GaAs [6,8]. Figure 1. Dispersion of the LM ionization coefficients measured by different authors for electrons in (a) In0.53Ga0.47As and (b) GaAs. The observation of Fig. 1 reveals that the ionization coefficients are dependent on the particular device under examination. In particular, their inverse cannot be taken as a reliable estimate of the mean free path between ionizing collisions.
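For reference, a minimal numerical sketch of how the LM turns field-dependent ionization coefficients into an observable multiplication factor is given below. It uses the textbook expression for pure electron injection at x = 0 rather than anything specific to this paper, and the coefficient values in the usage example are purely illustrative:

```python
import numpy as np

def electron_multiplication_LM(alpha, beta, x):
    """Local-Model electron multiplication factor for pure electron injection at
    x = 0 (textbook expression, quoted here for reference rather than taken from
    this paper): M_n = 1 / (1 - int_0^W alpha * exp(-int_0^x (alpha - beta) dx') dx).
    alpha, beta : electron/hole ionization coefficients (1/<l_n>, 1/<l_p>) on grid x."""
    diff = alpha - beta
    inner = np.concatenate(([0.0], np.cumsum(0.5 * (diff[1:] + diff[:-1]) * np.diff(x))))
    integrand = alpha * np.exp(-inner)
    return 1.0 / (1.0 - np.trapz(integrand, x))

# Example with constant coefficients over a 1 um region (illustrative numbers only):
# x = np.linspace(0.0, 1e-4, 1000)   # cm
# M_n = electron_multiplication_LM(np.full_like(x, 1e4), np.full_like(x, 5e3), x)
```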
MONTE CARLO ANALYSIS: THE DEAD SPACE CONCEPT The dispersion in the experimental determinations of the II coefficients shown in Fig. 1 arises from the inadequacy of the LM. In order to clarify this point we refer to a Monte Carlo simulated experiment, considering an electron current entering a region where a strong electric field is present. Figure 2. Number of ionizing collisions per unit time predicted with the Monte Carlo simulation, both for primary and secondary carriers, as a function of position; the quasi-exponential decay of the number of ionizing collisions due to the primaries allows dead spaces and DM ionization coefficients to be defined in the simulation. In Fig. 2, the number of ionizing collisions (per unit time and volume) is plotted as a function of position. Two types of processes are considered: the first ('primary') collisions of the carriers entering the high field region, and all other ('secondary') collisions, either of the primary carriers or of those that are generated within the field region. Figure 2 clearly illustrates a fundamental physical feature of the II process, that is the "dead-space" effect: a carrier must travel a certain distance before reaching the threshold energy for II. The "cold" injected primary electrons are not immediately available for II. Thus, a dead-space zone (d_n) where no ionizing collisions occur is present near the contact. Equivalently, the dead-space effect can be described by the associated energy (E_th,n) gained by the carriers over the d_n length. For a constant electric field, threshold energy and dead space are related by the equations

$$E_{th,p} = q\,|F|\,d_p; \qquad E_{th,n} = q\,|F|\,d_n \quad (6)$$

for holes and electrons respectively. It is often more convenient to speak of the threshold energy because such a quantity is generally less sensitive to the field value than d_i, and often, as a first approximation, it is considered to be constant for a given semiconductor. E_th represents the energy that a carrier must receive from the field to appreciably initiate II. Secondary carriers come into existence only after the primary ones collide, that is not before x = d_n. As can be seen in the figure, they also cannot ionize before another dead space, that is around x = 2d_n. The reason for such behaviour can be understood in the light of the microscopy of the ionizing event. When the primary carrier impinges on the valence electron, it releases its energy almost completely to generate a hole-electron pair roughly at rest. Accordingly, all secondary carriers can be assumed to start their motion close to zero kinetic energy. This behaviour, totally neglected in the LM, must be taken into account to adequately reproduce the II process. The mean free path between ionizing collisions is only a first order description of the process. The dead space concept gives further information and, if considered in a suitable form, can add accuracy to an adopted model, as we will see shortly.
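As a quick numerical illustration of the dead-space relation in Equation (6): with the electron threshold energy of 1.25 eV adopted later for In0.53Ga0.47As and an assumed field of 5 × 10^5 V/cm (an illustrative value chosen for this example), the electron dead space comes out at a few tens of nanometres:

```python
# Dead space from Eq. (6): E_th = q * |F| * d  =>  d = E_th / (q * |F|)
E_th_eV = 1.25        # electron threshold energy used later for In0.53Ga0.47As
F_V_per_cm = 5e5      # assumed field strength (illustrative value only)
d_cm = E_th_eV / F_V_per_cm   # the charge q cancels when E_th is expressed in eV
print(d_cm * 1e7)     # ~25 nm
```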
(4)-(5) must be substituted with more general expressions based on the P_i(x) functions. Different choices for the P_i(x) functions lead to different degrees of accuracy of the model and to different associated computational effort. If the P_i(x) functions are taken as simple exponentials of the form P_n(x) = \alpha_L e^{+\alpha_L x} for x <= 0 (here \alpha_L and \beta_L represent the ionization coefficients of the LM), the LM description is recovered.

As we have previously shown, most of the limitations implicit in the LM arise from the fact that the dead space effect is neglected. The simplest assumption that allows us to overcome such difficulty is to consider the P_i functions to be delayed exponential functions of the type

P_p(x) = \beta_D e^{-\beta_D (x - d_p)},  x >= d_p;   P_p(x) = 0,  x < d_p   (9)

that is, to assume that the probability of a hole (electron) initiated II event occurring between x and x + dx (x - dx) is 0 until the carrier travels a dead space length, and \beta_D dx (\alpha_D dx) afterwards [10]. The reason for such an assumption can be supported by the observation that the shape of the curve referred to as "primary" in Fig. 2 is proportional to P_n(-x). Indeed, it is very close to the delayed form of Eq. (9), and \alpha_D and d_n can be extracted from a fitting procedure. Similar arguments hold for holes. It is possible to show that the previous assumptions, which define the Delay Model (DM) of II, lead to the following expression for the generation rate [9] (see also [19], where carriers are used instead of currents)

\frac{dJ_n}{dx} = -\frac{dJ_p}{dx} = qG = \alpha_D J^H_n + \beta_D J^H_p   (10)

where the hot current components J^H_n(x) and J^H_p(x) are defined in Eqs. (11)-(12) as integrals of the carrier currents over the dead-space-shifted intervals (x, x + d_n) and (x - d_p, x), respectively. Equations (10)-(12) are valid if II is the only relevant generation-recombination mechanism. The appropriate boundary conditions are

J^H_p(x) = 0, x < d_p;   J^H_n(x) = 0, x > W - d_n;   J_p(0) = J_p0;   J_n(W) = J_nW   (13)

The DM is actually equivalent to the model developed in [10, 20] to study the mean gain of avalanche photodiodes. Equations (10)-(12) can be easily inserted in Drift Diffusion simulators, whose solution can still be accomplished via an iterative scheme, as proposed by Gummel [21]. Within the DM, the II characteristics of a given semiconductor under a specified electric field are expressed by two pairs of parameters: the ionization coefficients and the threshold energies for electrons and holes, respectively. Those parameters are linked to the mean free path between ionizing collisions through Eq. (7), which takes the form

<l_p> = 1/\beta_D + d_p;   <l_n> = 1/\alpha_D + d_n   (14)

The d_i terms are usually negligible in the lower field range, but their contribution increases with field strength. Similar results were reported for the first time in [19] by Y. Okuto and C. R. Crowell, but up to now they have not received considerable attention. It should be noticed that Eqs. (10)-(13) have been introduced referring to constant fields, but they can be easily extended by assuming both threshold energies and II coefficients to be dependent on the local electric field.

PARAMETER EXTRACTION FOR THE DELAY MODEL

A practical way to obtain the DM parameters is through general techniques of error minimization. Let us suppose that N different experimental values are available, such as the multiplication factors (M_i) for electrons and holes, for a certain number of devices under a specified bias condition. First, the field shape, F_i(x), corresponding to each value has to be determined. Then, a sample set of the unknown functions (\alpha_D(F), \beta_D(F), E_th,n(F), E_th,p(F)) is chosen, in correspondence to an arbitrary number (P) of field values, F_h (h = 1, ..., P), and a guess is made about the range of variation of the 4*P unknown samples \alpha_D(F_h), \beta_D(F_h), E_th,n(F_h), E_th,p(F_h).
The extraction procedure simply consists of two further steps: in the first, we randomly choose the value of the 4*P unknown samples within their relative range of variation. For each choice we simulate all the N structures using a suitable function to interpolate the samples, and we calculate an error function, which could be simply the root mean square of the differences between experimental and calculated log(M_i) values. By tracking the sample set which gave the minimum value, we reduce the searching range and update its bounds to keep the central value over the best sample set. When the error function decreases below a prescribed quantity, the last step starts, consisting of a simple gradient search for the minimum value of the error function.

A series of improvements can be thought of in order to enhance the method convergence and accuracy, depending on the particular device that provided the experimental data. The fundamental point is that, thanks to this method, we can successfully extract a set of highly reproducible parameters that can be used to describe the II process in a given semiconductor. If we restrict our attention to structures with a low overall multiplication factor (M - 1 << 1), many simplifications can be made to the full DM, which allow a faster parameter extraction.

We have applied the described algorithm to experimental data measured on npn InP/In.53Ga.47As HBTs [17, 18]. In such transistors, the electron current injected from the base grows in the collector region because of II processes, the strength of multiplication depending on the collector bias. The electron multiplication factor, M_n, is defined as the ratio between the collector current and the electron current injected from the base into the collector. With a suited procedure [18, 22] the M_n dependence on collector polarization for a given device can be extracted with great sensitivity. Both structures considered in [17, 18] operate in the range of collector biases that satisfy the relation M_n - 1 << 1. For the sake of simplicity, we have made the widely accepted assumption of considering E_th,n independent of the local electric field (at least for the range of fields considered). The procedure to extract an estimate of the \alpha_D(F) and d_n(F) functions proceeds by applying the algorithm described above. We calculated the field shapes in the collector region for the device reported in [18] by means of a Drift Diffusion simulator, using doping data obtained with C-V measurements [22]. For the structure reported in [17] we referred to the nominal doping value reported in the original work.

In Fig. 3 we plot the universal electron impact ionization coefficient for In.53Ga.47As extracted with our algorithm at the threshold energy of 1.25 eV in the considered field range. It is worth recalling that, if the LM were to be used (as actually done in [17, 18], see Fig. 1), the ionization coefficients for the two structures would differ from one another. The calculated values of Fig. 3, used in a DM-based Drift Diffusion simulator, allow a perfect calculation of the multiplication factors in both HBTs.

FIGURE 3 Universal DM ionization coefficient for electrons on In.53Ga.47As compared with previous LM determinations. The calculated values, used in the DM Drift Diffusion simulator, allow a perfect calculation of the multiplication factors in both HBT structures.
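The two-stage search described above (a shrinking random search over the sampled parameter values followed by a local refinement) is straightforward to prototype. The sketch below is only an illustration of that idea under stated assumptions: the forward model predict_log_M, the parameter ranges, and the measured multiplication factors are hypothetical placeholders rather than the solver used in this work, and a derivative-free Nelder-Mead step stands in for the final gradient search. A real extraction would replace the placeholder with a DM-based Drift Diffusion solution of Eqs. (10)-(13) for each device.

# Illustrative sketch (not the authors' code) of the two-stage DM parameter extraction:
# a shrinking random search followed by a local refinement of the error function.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical measured multiplication factors and peak fields for N bias points.
log_M_meas = np.log(np.array([1.02, 1.05, 1.11, 1.20]))
fields = np.array([3.0e7, 3.5e7, 4.0e7, 4.5e7])          # V/m, assumed values

def predict_log_M(params, F):
    """Placeholder forward model mapping (alpha_scale, Eth) to log(M) at field F.
    A real implementation would solve the DM Drift Diffusion equations instead."""
    alpha_scale, Eth = params
    alpha = alpha_scale * np.exp(-Eth * 1e8 / F)          # toy field dependence
    return np.log1p(alpha * 2e-7)                          # toy thin-region multiplication

def error(params):
    pred = np.array([predict_log_M(params, F) for F in fields])
    return np.sqrt(np.mean((pred - log_M_meas) ** 2))      # RMS of log(M) differences

# Stage 1: random search with a box that shrinks around the best sample found so far.
lo, hi = np.array([1e5, 0.5]), np.array([1e7, 3.0])        # guessed parameter ranges
best_p, best_e = None, np.inf
for _ in range(200):
    p = rng.uniform(lo, hi)
    e = error(p)
    if e < best_e:
        best_p, best_e = p, e
        width = (hi - lo) * 0.9                             # reduce the searching range
        lo = np.maximum(lo, best_p - width / 2)
        hi = np.minimum(hi, best_p + width / 2)

# Stage 2: local, derivative-free refinement around the best random sample.
res = minimize(error, best_p, method="Nelder-Mead")
print("extracted parameters:", res.x, "residual:", res.fun)

Working on log(M_i), as in the text, keeps devices with very different multiplication levels on a comparable footing in the error function.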
In conclusion, we have shown that it is possible to extend Drift Diffusion algorithms to include non-local effects of impact ionization, thus recovering the accurate physical description provided by Monte Carlo simulations.

The work was partially supported by Piano Nazionale "Materiali Innovativi e Avanzati" of the Italian Ministry for University and Research.
3,974.6
1998-01-01T00:00:00.000
[ "Physics", "Engineering" ]
MoRPI: Mobile Robot Pure Inertial Navigation

Mobile robots are used in industrial, leisure, and military applications. In some situations, a robot navigation solution relies only on inertial sensors and, as a consequence, the navigation solution drifts in time. In this paper, we propose the MoRPI framework, a mobile robot pure inertial approach. Instead of travelling in a straight line trajectory, the robot moves in a periodic motion trajectory to enable peak-to-peak estimation. In this manner, instead of performing three integrations to calculate the robot position as in a classical inertial solution, an empirical formula is used to estimate the travelled distance. Two types of MoRPI approaches are suggested, where one is based on both accelerometer and gyroscope readings while the other relies only on gyroscopes. Closed form analytical solutions are derived to show that MoRPI produces lower position error compared to the classical pure inertial solution. In addition, to evaluate the proposed approach, field experiments were made with a mobile robot equipped with two types of inertial sensors. In total, 143 trajectories with a time duration of 75 minutes were collected and evaluated. The results show the benefits of using our approach. To facilitate further development of the proposed approach, both dataset and code are publicly available at https://github.com/ansfl/MoRPI.

I. INTRODUCTION

Mobile robots are used in different applications while operating under various constraints. For example, they can be found in industry, hotels, and warehouses and can be used for delivery, agriculture, healthcare, and military applications. Besides the improvements in technology, price reductions for electronic sensors and devices have caused an increase in research and in demand. Therefore, many companies worldwide produce mobile robots to answer the demand and infiltrate new markets. In parallel, major breakthroughs in low cost inertial sensors based on micro-electro-mechanical system (MEMS) technology provide better accuracy and robustness. The inertial sensors, namely the accelerometers and gyroscopes, are packed in an inertial measurement unit (IMU) that is relatively very small (mm in scale), has low power consumption, and can be deployed easily in a variety of devices. In pure inertial navigation, the inertial measurements are integrated to obtain the position, velocity, and orientation of the platform. However, as the inertial sensor measurements contain noise and other types of errors, when integrated they cause the navigation solution to drift over time. To compensate for such drift, external sensors or vehicle constraints were suggested in the literature, and solutions have been proposed over the years. One method described in the literature is the use of absolute position measurements to obtain the location. A common approach is to use vision for navigation [2], [3]. Similarly, Lidar [4] and Sonar [5] can be used for localization. Using these methods, a pre-stored map can be saved in the robot's memory, and the robot compares its location to the saved map. Another common way to use the sensors is by scanning the environment and creating a map so the robot can estimate its position relative to features and landmarks, as in simultaneous localization and mapping (SLAM) [6]. A disadvantage of these methods is the sensitivity to changes. For example, when furniture is moved, this can confuse the navigation system. Moreover, reflections or a lack of proper light can blind the sensors. Another approach is to use active beacons.
Antennas, placed in known locations, can cover the environment so that triangulation [7] or trilateration [8] can be used to compute locations. GNSS is an example of this kind of navigation sensor. In order to obtain the location with beacons, they need to be situated in known locations, and a line of sight between the robot receiver and the beacons is mandatory. Therefore, GPS cannot be used indoors, in urban canyons, or in outer space. Another kind of method is dead reckoning. The IMU belongs to this group together with odometry. In odometry, sensors are installed next to the wheels [9], [10]. Relying on the predetermined wheel diameter and wheelbase, the position and heading are obtained. In some situations, external measurements are not available and the solution is based only on inertial sensors; hence, the navigation solution drifts in time. For example, GNSS signals are not available indoors and cameras suffer from lighting conditions. To cope with such situations, vehicle constraints could be applied. Several approaches were presented over the years using different types of prior knowledge as pseudo-measurements: for example, a model of the vehicle dynamics and operating environment, such as the vehicle travelling on a road [11], [12], using stationary updates for zero velocity and angular velocity [13], and modelling the sensor error [14]. In other navigation domains such as indoors, to cope with the navigation solution drift, instead of integrating the inertial sensor readings, an empirical formula estimates the travelled distance in a pedestrian dead reckoning (PDR) framework [15]. In recent years, such empirical formulas have been replaced by machine learning approaches to regress the change in distance in any required time interval [16], [17]. Recently, the quadrotor dead reckoning (QDR) framework was developed for pure inertial navigation of quadrotors, employing PDR guidelines to improve position accuracy [18]. In this paper, inspired by PDR and QDR, we derive the MoRPI framework: a mobile robot pure inertial navigation solution that operates for short time periods to bound the navigation solution drift when external sensors are not available. The main idea is to drive the robot in a periodic motion instead of a straight line trajectory, as the path planning of mobile robots is commonly made up of straight lines, and to adjust some of the PDR and QDR principles. This is done in MoRPI-A, where both accelerometer and gyroscope readings are used to determine the robot's two-dimensional position. In some scenarios, like narrow corridors where the amplitude should be small, the periodic motion may not be reflected in the accelerometer readings due to their high noise characteristics, so we also offer MoRPI-G, which uses only the gyroscope measurements to calculate the position of the robot. The contributions of this paper are: 1) The MoRPI framework copes with situations of pure inertial navigation in mobile robots. 2) MoRPI-G determines the mobile robot position using only gyroscope measurements. 3) An analytical error assessment of the MoRPI approach is provided and compared to the classical pure inertial solution. 4) Our dataset and code are publicly available and can be found here: https://github.com/ansfl/MoRPI. To evaluate the proposed approach, field experiments were made with a mobile robot equipped with two types of inertial sensors. In total, 143 trajectories with a time duration of 75 minutes were collected and evaluated.
Comparisons to the classical inertial navigation solution were made in two and three dimensions. The rest of the paper is organized as follows: Section II presents the INS equations and the QDR method. Section III describes the proposed MoRPI approach and provides an analytical assessment of its position error. Section IV explains the experiments and gives the results, and Section V gives the conclusions of this paper.

II. PROBLEM FORMULATION

In this section, we address the processing of inertial measurements within the inertial navigation system (INS) to calculate the robot navigation solution in three dimensions. Also, as mobile robots move in two dimensions, the INS equations are reduced to planar motion and presented here. Then, we briefly review the QDR approach.

A. Inertial Navigation System

The INS equations provide a solution for the position, velocity, and attitude based on the inertial sensor readings. As short time scenarios are addressed, the inertial frame (i-frame) is defined at the robot's starting point, and the body frame (b-frame) coincides with the inertial sensors' sensitive axes. Let the accelerometer measurement vector, the specific force vector expressed in the body frame, be denoted

f^b_{ib} = [f_x  f_y  f_z]^T   (1)

and the gyroscope measurement vector, the angular velocity vector expressed in the body frame, be denoted

\omega^b_{ib} = [\omega_x  \omega_y  \omega_z]^T   (2)

where the subscript ib stands for the body frame with respect to the inertial frame, and the superscript b denotes that the vector is resolved along the axes of the body frame. As our scenarios include low-cost inertial sensors and short time periods, the earth turn rate and the transport rate are neglected. Hence, the INS equations of motion are [19]:

\dot{p}^n = v^n   (3)
\dot{v}^n = C^n_b f^b_{ib} + g^n   (4)
\dot{C}^n_b = C^n_b \Omega^b_{ib}   (5)

where p^n is the position vector expressed in the navigation frame, v^n is the velocity vector expressed in the navigation frame, g^n is the gravity vector expressed in the navigation frame and assumed constant throughout the trajectory, C^n_b is the body to navigation orthonormal transformation matrix, and \Omega^b_{ib} is the skew-symmetric matrix of the angular rate, defined as

\Omega^b_{ib} = \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix}   (6)

where \omega_j, j = x, y, z, are the gyroscope measurements as defined in equation (2).

B. Two-Dimensions INS

Leveraging the wheeled robot planar motion, it is assumed the robot moves with nearly zero roll and pitch angles and only the motion in the x-y plane is relevant. Therefore, the body-to-navigation transformation matrix depends only on the yaw angle, \psi, and is given by [20]:

C^n_b = \begin{bmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}   (7)

Substituting (7) into (4) shows that f_z has no influence on the velocity and the position in the x-y plane, and thus it is not needed in the inertial calculation. In addition, as only the yaw angle is taken into account, the gyro measurements in the x-y plane, i.e., \omega_x, \omega_y, are neglected and only \omega_z is considered.

C. Quadrotor Dead Reckoning

In [18], an adaptation of PDR principles was used to derive the QDR approach for situations of pure inertial navigation for quadrotors. To that end, the accelerometer readings were used to detect a peak-to-peak event. Then, using a step length estimation approach, the peak-to-peak distance was estimated. In their analysis, the Weinberg approach [21] was employed to estimate the peak-to-peak distance. Originally, it was developed to cope with the limitations of constant stride length estimation approaches (based on user height). To that end, Weinberg proposed an empirical method taking into account the accelerometer readings during each stride. The underlying assumption of this approach is that the vertical bounce (impact) is proportional to the stride length.
In the QDR approach, the peak-to-peak distance estimation is given by Eq. (8), where s_w is the estimated peak-to-peak distance according to Weinberg's approach, and G_w is the approach's gain. To apply (8), the approach's gain needs to be determined prior to application. Once the peak-to-peak distance is found, it is used together with the gyro-based heading and the initial conditions to propagate the quadrotor position through Eqs. (9)-(10), where k is the time index.

III. PROPOSED APPROACH

Motivated by the QDR approach, our goal is to derive an accurate navigation solution for mobile robots using only inertial sensors for short time periods. Compared to the QDR approach, the mobile robot maneuvers are limited due to the indoor environment (corridors, for example). As a consequence, the periodic motion requires smaller accelerations, which may not be sensed using low-cost MEMS accelerometers. To cope with this challenge, in addition to applying and modifying QDR for mobile robots (MoRPI-A), we propose a gyroscope-only solution for positioning the mobile robot (MoRPI-G). We argue that, regardless of the limited space for maneuvering, the angular rate in the z direction (perpendicular to the robot's plane of motion) is dominant enough to be recognized and utilized for positioning the robot. Both of our MoRPI approaches consist of the following phases:

• Peak detection: The peaks during the motion are extracted as local maxima from the inertial measurements.
• Gain calculation: Prior to the application of the proposed approach, the empirical gain is estimated by moving the robot a known distance with a known number of periods while using the Weinberg approach. This procedure is repeated several times with slightly different maneuvers and the gain is taken as the average over all runs. Once obtained, this gain is used in real time to estimate the peak-to-peak distance.
• Peak-to-peak distance estimation: The 'step', in analogy to PDR, is the segment between two peaks. The peak-to-peak distance estimation is done using the Weinberg approach with the predefined gain and the inertial sensor readings.
• Heading determination: We use the heading extracted from the transformation matrix C^n_b to project the peak-to-peak distance into local planar coordinates.
• Position update: As a dead-reckoning method, the position is updated relative to the previous step while using the current heading angle and peak-to-peak distance.

Our proposed approach is illustrated in Figure 1. As discussed above, we distinguish between two MoRPI approaches based on the inertial sensors they employ for the peak-to-peak length estimation: 1) MoRPI-A: uses both accelerometers and gyroscopes. As applied in PDR and QDR, the advantage of this method over the INS is that it applies fewer integrations to the inertial sensor readings and, as a result, reduces the position drift. For clarity, we define the body coordinate frame axes: the x-axis points towards the moving direction, the z-axis points downwards, and the y-axis completes the orthogonal set. In PDR, the motion is expressed in the vertical direction; thus, the accelerometer z-axis readings are used to determine the step length. In QDR, the magnitude of the specific force vector is used instead. In the proposed approach, the y-axis accelerometer readings are used, as the applied periodic motion is exhibited and captured best in this direction. Thus, the peak-to-peak distance is calculated by Eq. (11), where s_A is the peak-to-peak distance and G_A is the gain of MoRPI-A.
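To make the peak-to-peak idea concrete, the following minimal Python sketch estimates segment distances from a single inertial channel and a pre-calibrated gain. It is an illustration, not the authors' released implementation: the fourth-root (Weinberg-style) form of the estimator, the find_peaks settings, and all variable names are assumptions introduced here.

# Sketch of gain-based peak-to-peak distance estimation in the spirit of MoRPI.
# The Weinberg-style fourth-root form and the parameter values are assumptions.
import numpy as np
from scipy.signal import find_peaks

def peak_to_peak_distances(signal, gain, min_separation=50):
    """signal: 1-D array of f_y samples (MoRPI-A-style) or omega_z samples (MoRPI-G-style).
    gain: empirically calibrated gain. min_separation: minimum samples between peaks
    (50 samples is roughly 0.5 s at an assumed 100 Hz rate)."""
    peaks, _ = find_peaks(signal, distance=min_separation)    # local maxima define the "steps"
    dists = []
    for a, b in zip(peaks[:-1], peaks[1:]):
        seg = signal[a:b]
        dists.append(gain * (seg.max() - seg.min()) ** 0.25)  # Weinberg-style segment estimate
    return np.array(dists), peaks

def dead_reckon(dists, headings_rad, x0=0.0, y0=0.0):
    """Propagate the planar position from segment distances and per-segment headings."""
    x, y = [x0], [y0]
    for s, psi in zip(dists, headings_rad):
        x.append(x[-1] + s * np.cos(psi))
        y.append(y[-1] + s * np.sin(psi))
    return np.array(x), np.array(y)

Feeding the y-axis specific force gives a MoRPI-A-style estimate, while feeding the z-axis angular rate gives a MoRPI-G-style one; in both cases the gain would be calibrated beforehand on runs with a known travelled distance.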
In general, it is necessary to determine G_A before using (11). To that end, the mobile robot is moved in a trajectory with the required dynamics, where the travelled distance of this trajectory is known. By plugging the accelerometer readings into each peak-to-peak distance estimate and summing the results, the gain value can be estimated. Commonly, this procedure is repeated to obtain a more accurate gain. 2) MoRPI-G: uses only gyroscopes. To cope with real-world situations of small amplitudes within the periodic motion (in the horizontal plane) that cannot be sensed by the accelerometers, we employ the gyro z-axis readings for estimating the robot's peak-to-peak distance using Eq. (12), where s_G is the peak-to-peak distance and G_G is the gain of MoRPI-G. G_G is extracted in the same manner as G_A, except that here, instead of the accelerometer readings, the angular rate \omega_z is employed. Regardless of how the peak-to-peak distance was estimated, i.e., by (11) or (12), the robot position is calculated by Eq. (13), where i = A, G depending on the approach. As relative positioning is used here, the initial position is set to zero.

A. Analytical Assessment

In this section, we offer an analytical assessment of the expected position error using the two- and three-dimensional INS while the robot moves in a straight line trajectory, compared to our proposed approach where the robot moves in a periodic motion trajectory over the same distance. Maintaining consistency with Section II, the earth and transport rates are neglected in the analysis. We employ the 15-state error model [22], [23] expressed in the navigation frame with the error state vector

\delta x = [ (\delta p^n)^T  (\delta v^n)^T  (\epsilon^n)^T  b_a^T  b_g^T ]^T   (15)

where \delta p^n is the position error vector expressed in the navigation frame, \delta v^n is the velocity error vector expressed in the navigation frame, \epsilon^n is the misalignment vector, b_a is the accelerometer bias residual vector, and b_g is the gyro bias residual vector, both expressed in the body frame. As short time periods are considered, we assume constant biases during the analysis. The resulting error state model is given in Eq. (16), where F is the system matrix and -(f^n \times) is the skew-symmetric form of the specific force vector expressed in the navigation frame. The solution of the set of first-order differential equations (16) is

\delta x(t) = \Phi(t)\, \delta x_{t0}   (18)

where \delta x_{t0} is the initial condition vector of the system and \Phi is the transition matrix. A closed-form solution of the transition matrix in (18) was offered in [24], [25] and is given in Eq. (19). As a straight line trajectory for short time periods is considered, we assume that the body and navigation frames coincide and, as a consequence, the accelerometer biases appear directly in the navigation frame equations, where b_{a,x}, b_{a,y}, and b_{a,z} are the biases of the accelerometer in the x, y, and z axes, respectively. Finally, as for both the INS and MoRPI approaches the initial position and misalignment errors have the same influence on the position error, we assume zero initial position and misalignment errors. Yet, the initial velocity error influences only the INS approaches due to the integration of the velocity states. In the MoRPI approach, the position is obtained from an empirical formula without the need to integrate velocity errors. Thus, only the initial velocity error is considered in our analysis. Taking into account (23)-(25), when solving (19), the position error is obtained in closed form. The heading error is the same for all the methods we examined. Therefore, the elements that depend on b_{g,z} were discarded. The resulting distance error is

e_3D = [ (\delta v_{t0,x}^2 + \delta v_{t0,y}^2) t^2 + (\delta v_{t0,x} b_{a,x} + \delta v_{t0,y} b_{a,y}) t^3 + \cdots ]^{1/2}   (28)

where \alpha = g + b_{a,z}.
When considering the two-dimensional INS, b_{g,x}, b_{g,y}, and b_{a,z} are not relevant for the position estimation; thus the distance error (28) reduces to

e_2D = [ (\delta v_{t0,x}^2 + \delta v_{t0,y}^2) t^2 + (\delta v_{t0,x} b_{a,x} + \delta v_{t0,y} b_{a,y}) t^3 + \cdots ]^{1/2}   (29)

As a consequence, the expected error of the two-dimensional INS is smaller than the three-dimensional one. In our MoRPI approaches, the distance error is based on the peak-to-peak distance obtained from the Weinberg approach, Eq. (11) for MoRPI-A and Eq. (12) for MoRPI-G. In this analysis, we focus only on MoRPI-A, as the same procedure can be applied exactly to MoRPI-G. As shown in (11), the peak-to-peak distance is a function of the gain and the specific force readings in the y-axis between the peaks. Let the peak-to-peak specific force difference entering (11) be defined as in Eq. (30). Note that, as constant biases are addressed, they are cancelled out in (30) and therefore have no influence on the distance error. Substituting (30) into (11) and linearizing to obtain the peak-to-peak error at peak k gives Eq. (32), where s_{A,k} is the true k-th peak-to-peak value, \delta s_k is the peak-to-peak error, and \delta G_A reflects the error of the actual gain that should have been applied, depending on the actual periodic motion, which differs from the expected one. Removing the true values of (11) from (32) yields Eq. (33). That is, the peak-to-peak distance error of the MoRPI-A approach depends only on the gain error and not on the biases of the accelerometers. The distance error of the whole trajectory is the sum of all peak-to-peak distance errors, Eq. (34), where N is the number of peaks. To summarize, the 3D INS distance error (28) and the 2D INS distance error (29) are polynomial in time and therefore expected to diverge much faster than the MoRPI-A approach. This is illustrated in Figure 2 using numerical values as described later in Section IV. In addition, the gain choice should fit the expected dynamics to obtain the best performance, otherwise performance degradation should be expected [26]. Hence, in practice, moving the vehicle differently than planned will yield a position error. This behaviour corresponds to working with an erroneous gain instead of the expected one. Therefore, to evaluate the gain error, we used \delta G_A = 5% and \delta G_A = 10% of the true gain.

A. Field Experiment Setup

A remote control car and a smartphone were used to perform the experiments and record the inertial data to create our dataset. The smartphone was rigidly attached to the car as shown in Figure 3. The model of the RC car we used is a STORM Electric 4WD Climbing car. The car dimensions are 385 x 260 x 205 mm with a wheelbase of 253 mm and a tire diameter of 110 mm. The car has a realistic suspension system that enables it to reach up to 40 kph and cross rough terrain. Two different smartphones, with different inertial sensors, were used in our experiments: 1) A Samsung Galaxy S8 smartphone with an IMU model LSM6DSL manufactured by STMicroelectronics. 2) A Samsung Galaxy S6 smartphone with an IMU model MPU-6500 manufactured by TDK InvenSense. The error parameters of both sensors are presented in Table I. In both smartphones, the inertial sensor readings were recorded with a sampling rate of 100 Hz. The smartphone was placed on the top of the car with the screen facing upward. At the starting point, the car was directed to the end point and the phone accelerometer x-axis was aligned to the direction of movement. At the beginning of each recording, the phone was mounted parallel to the floor.

B. Dataset

Five types of trajectories were made during the field experiments.
1) Straight Line: To evaluate the INS solution, 24 recordings of driving in a straight line were made with the Samsung Galaxy S8 cellphone as part of the autonomous platform's inertial dataset [27]. The length of the straight line trajectory was 6.3m and the recordings were done indoors. Each of the recordings contains at least three seconds of stationary conditions at the beginning and end of the trajectory. An example of typical recordings of this trajectory type is presented in Figures 4 and 5 for the accelerometers and gyroscopes, respectively. The direction of the motion is along the x-axis, therefore there is a spike in the specific force in that axis at the beginning of the motion, and then decreases towards zero because we tried to keep a constant velocity during the experiments. At the end of the motion, deceleration slowed down the car until it came to a complete halt. A slight force in the y and x axes can be observed as it was difficult to maintain a straight line along the course. 2) Periodic Motion -Short Route: To evaluate our proposed approach, a sine shaped trajectory was recorded 23 and 30 times with the two smartphones: Samsung Galaxy s8 and s6, respectively. The start and end points of the trajectory were the same as for the straight line trajectory, with the same distance of 6.3m. The recordings were done indoors with three seconds of stationary conditions at the beginning and end of the trajectory. An amplitude of approximately 0.1m was applied in periods of 1m length, with different velocities of the mobile robot. An illustration of this trajectory type with a straight line trajectory is presented in Figure 6. In addition, another 26 recordings were gathered using only the Samsung Galaxy S8 smartphone with longer periods of 1.5m and 0.2m of amplitude on the same route. The average time of the trajectories with periodic motion is 11s for 1.5m peaks and about 14.5s for the 1m peak trajectories. It is more than twice than in the straight line trajectories, which have an average duration of 5s for the same travelled distance (start to end point). 3) Periodic Motion -Long Route: In the same manner, as the short route, a sine shape trajectory was recorded ten times with the Samsung Galaxy s8 and ten times with the Samsung Galaxy s6 for a longer distance of 13m, which is about twice the short route. An amplitude of approximately 0.1m was examined with periods of 1m, with different velocities. These recordings were taken outdoors. The smartphones were placed together on the car, with the s8 in the same spot as in the short route recordings and the s6 on the front of the car, as shown in Figure 7. An example of the inertial sensor recordings during this Fig. 7. Setup of the RC car with two phones. trajectory is presented in Figures 8-9. The periodic motion is seen in the specific force f y and in the gyro ω z readings. 4) L-Shaped -Straight Lines: To examine the robustness of our method, an L-shaped trajectory was examined. The trajectory consists of an 18m straight line segment followed by a 90 degrees turn and a 10m straight line segment (L-shape trajectory). This trajectory was carried out on an asphalt surface, with a slope of approximately 15°downhill along the first 5m of the first segment. The total length of the trajectory, 28m, is more than twice the long route presented in Section IV-B3. Ten recordings were gathered using a Galaxy S6 smartphone. The smartphone was located on the front of the mobile robot similar to the location used in the long route. 
5) L-Shaped - Periodic Motion: The same L-shaped trajectory as in the previous section, recorded with the Galaxy S6 smartphone, is used here. Instead of moving in straight lines, a periodic motion was applied with periods of 1 m and an amplitude of 0.1 m. This trajectory was recorded ten times. 6) Summary: A total of 143 experiments with a total duration of 75 minutes were made. Among them, 83 experiments were recorded with the Samsung Galaxy S8 smartphone and 60 with the Samsung Galaxy S6 smartphone. One hundred and three experiments were made indoors on a floor, while 40 experiments were recorded outdoors on an asphalt surface. The number of experiments varies between the different types of trajectories because of unusable recordings due to the manual operation of the robot. The dataset is publicly available and can be downloaded from https://github.com/ansfl/MoRPI. The dataset of the periodic movement was split to have a variety of velocities in both train and test sets, where the train set was used to determine the gain, and the test set to examine our method. The groups were divided almost equally.

C. Indoor Experiments

1) Straight Line Trajectory: Equations (3)-(5) were used for calculating the mobile robot location in the INS mechanism. First, the raw inertial sensor readings were plugged into those equations in a naive approach denoted as RD for raw data. Second, to improve performance, a zero-order calibration for the gyroscopes was made by utilizing the stationary conditions at the beginning of the trajectory and taking the mean value in each axis as the bias. In addition, it was assumed that the smartphone is perfectly parallel to the floor, thereby aligning the z-axis with the direction of gravity. As a consequence, a zero-order calibration was also applied for the accelerometers, taking into account the local gravity value. This gyro and accelerometer calibration approach is denoted GAC. The same procedure was applied in the two-dimensional INS mechanism as described in Section II-B. The results for the three- and two-dimensional INS with the RD and GAC approaches are given in Table II. Using the raw data without any calibration, the 3D INS obtained an error of 3.38 m, corresponding to 53.7% of the travelled distance, while the 2D INS obtained a higher error of 3.91 m, corresponding to 62%. Applying zero-order calibration in the GAC approach has less influence over the 3D INS. Yet, the 2D INS error was reduced from 62% to 28.6% of the travelled distance. Those results show that after removing the biases of the inertial sensors, the 2D assumptions hold and therefore the performance improves.

TABLE II INS errors at the end of the trajectory, presented as percentages of the travelled distance.

A typical plot of the 2D and 3D INS solutions with the RD and GAC approaches is presented in Figure 10 to demonstrate the results discussed above. 2) Periodic Motion: All of the periodic motion recordings were analyzed to extract the peaks, the start point, and the end point of the motion. Then, for each segment, the peak-to-peak distance was calculated for the MoRPI-A method using (8) and for MoRPI-G using (12). To calculate the gain of each approach, the training dataset was used with the known travelled distance, allowing us to solve for the gain in each equation. The results provided in this section are for the test dataset only. The heading angle at each epoch, \Delta\psi_k, was calculated, as in the INS mechanism, using (5).
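A sketch of the zero-order calibration used in the GAC and GC variants is given below, assuming the first stationary seconds of each recording, a 100 Hz rate, and a level sensor; the names, the gravity value, and the sign convention are illustrative assumptions, not taken from the released code.

# Illustrative zero-order calibration: estimate constant biases from the initial
# stationary segment and remove them before dead reckoning. All constants assumed.
import numpy as np

FS = 100.0          # assumed sampling rate [Hz]
G_LOCAL = 9.81      # assumed local gravity magnitude [m/s^2]

def zero_order_calibration(acc, gyro, stationary_seconds=3.0):
    """acc, gyro: (N, 3) arrays of raw specific force and angular rate.
    Assumes the sensor is level and static during the first `stationary_seconds`,
    and that the body z-axis points down, so the static specific force is (0, 0, -g);
    flip the sign of G_LOCAL for an up-pointing z-axis."""
    n = int(stationary_seconds * FS)
    gyro_bias = gyro[:n].mean(axis=0)                                   # angular rate should be zero at rest
    acc_bias = acc[:n].mean(axis=0) - np.array([0.0, 0.0, -G_LOCAL])    # residual after removing gravity
    return acc - acc_bias, gyro - gyro_bias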
Besides using the raw data (RD) in MoRPI approaches, we also examined the influence of the gyroscope calibration (GC). Note that when using accelerometer and gyroscopes readings in the INS equations, integration is made on both of the sensor readings, which is the reason that GAC was applied in the straight line trajectories. However, in the proposed MoRPI approaches, the accelerometer readings are used only to detect the peaks and to determine the peak-to-peak distance using an empiric formula without any integration; therefore, only GC was applied. The calibration process was done in the same manner as the INS method, using the first three seconds of the recordings when the robot was in stationary conditions. Eventually, using the peak-to-peak distance and heading angle, the total distance of the trajectory was calculated using (9)-(10), for both MoRPI approaches. The results of the test dataset of the short route are presented in Table III for both smartphones and both MoRPI approaches as a function of the raw data used, RD, or GC, and as the designed peak-to-peak distance. As observed from the table, the proposed MoRPI approaches in all examined configurations greatly improved the 3D and 2D INS solutions. In particular, MoRPI-G obtained the best performance for both smartphone types, with gyro calibration, with a distance error of 4.60%-4.76% compared to the travelled distance. This corresponds to an improvement of the best INS result (2D INS with GAC) by approximately a factor of five. It is important to note that the variance of the results in each configuration shown in Table III is less than 2.5cm. The consequence of these results is that the gyroscope is more sensitive than the accelerometer in this process. Thus, peak detection is easier because the peaks are more discernible, so we received more uniform peaks. This also affected the gain calculation in addition to more robustness over different velocities. best approach with each method; i.e., the GAC from 3D and 2D INS, and GC from MoRPI-A and MoRPI-G, relative to the end point of 6.3m in the x-axis, and the improvement in percentages of each method relative to the others. Despite the longer time duration in periodic motion trajectories, compared to the straight line, the position error was significantly lower as described in Section IV-B2 and as expected from our analytical assessment in Section III-A. D. Outdoor Experiments 1) Periodic Motion: To further evaluate our approach, we performed outdoor experiments that differ from the indoor ones, by surface type (asphalt instead of a tiled floor) and trajectory distance (13m instead of 6.3m). Based on the analysis of the indoor experiments' results, we examine here only the 1m desired peak-to-peak distance using both smartphones. In addition, the gain that was calculated in the indoor experiments was used for fair comparison; i.e., all outdoor experiments were treated as a new test dataset to examine the robustness of the proposed approach. Finally, we examined the MoRPI-A and MoRPI-G methods with RD and GC approaches, as in the indoor experiments. The results, presented in Table V, show the same behaviour as in the indoor experiments: an improvement when using calibration and an improvement using MoRPI-G, and with similar accuracy. This is consistent with our assumption of linear error in the proposed method. 
There is a small difference in the error percentages in the long route, where the main cause is the human factor in the experiments, which becomes more significant as the distance increases. In addition, the poor outdoor conditions, where the ground is not level and included obstacles such as cracks and loose gravel, contributed to the error. Moreover, the setup of the recordings, with two phones recording simultaneously, changed the dynamics of the robot and influenced the results. 2) L-Shaped Trajectory: In the same manner as in the straight line trajectory, Section IV-C1, the INS equations were used with the two types of configuration: RD or GAC, and 2D or 3D. The error was calculated by the Euclidean distance of the achieved end point relative to the real end point coordinates (28m distance). The results, given in Table VI, show that without any calibration the errors are huge in both cases of pure inertial navigation, 2D and 3D, with 1523% and 1123%, respectively. Using zero-order calibration improves the results to 156% and 161%, for 2D and 3D, respectively, but they are still unusable for most applications. To evaluate MoRPI approaches on the L-shaped trajectory, Table VII. The results show the same behaviour as in the indoor and outdoor experiments: an improvement using calibration and improvement using MoRPI-G. In particular, the lowest error, when using a pure inertial navigation approach was 156% while MoRPI-G reduced the error to 8.2%. Focusing on MoRPI approaches, in this experiment, there is a degradation in accuracy compared to the short and long trajectories. The reasons for the degradation are: 1) A slope of 15 degrees in the first 5m of the trajectory. 2) The presence of a 90 degrees turn. 3) This experiment included a longer distance than the previous one and the errors caused by the manual operation were more significant. Despite all of the above issues, and the fact that this experiment was treated as a test, the results show that MoRPI approaches are robust even to complicated scenarios and greatly improve the standalone pure inertial solution. V. CONCLUSION To reduce the error drift in situations of pure inertial navigation we proposed MoRPI, a mobile robot pure inertial approach. To evaluate MorPI and baseline approaches, two different smartphones were mounted on a mobile robot and their inertial sensors were recorded in two different types of periodic motion, differing in surface type and length. A total of 143 trajectories were recorded with a total time of 75 minutes. Our results showed that the 2D INS with accelerometer and gyroscope calibration obtained the best performance in the baseline approaches, achieving an error of 1.8m for the 6.3m trajectory, which corresponds to 28.6% of the travelled distance. Using the MoRPI-A approach, the average error using the two smartphones was 6.5% of the travelled distance, while MoRPI-G obtained the overall best performance with an error of 4.68% of the travelled distance. This means that our proposed approach improved the INS approach by a factor of six. We showed that even for twice the distance, and as a consequence of a longer duration of movement, the error increased in a linear manner. For example, the error over 6.3m was 4.76% using the Samsung s8, and over 13m the error was 4.46%. Finally, an L-shaped trajectory, including a slope and a 90 degrees turn, was also examined. As in the other trajectories, MoRPI approaches greatly improved the pure inertial solution. 
The above experiment results and characteristics coincide with our analytical assessment closed form solution for the position error of the INS and MoRPI approaches. To conclude, in scenarios where pure inertial navigation is needed, our proposed approaches, MoRPI-A and MoRPI-G, provide a lower position error compared to the INS solution. In particular, MoRPI-G obtained the best performance using only the gyroscopes readings. All of the recorded data and code used for our evaluations are publicly available at https://github.com/ansfl/MoRPI. He is currently an Assistant Professor, heading the Autonomous Navigation and Sensor Fusion Laboratory, at the Hatter Department of Marine Technologies, University of Haifa, Israel. His research interests include data-driven based navigation, novel inertial navigation architectures, autonomous underwater vehicles, sensor fusion, and estimation theory.
8,198.4
2022-07-06T00:00:00.000
[ "Engineering", "Computer Science" ]
Covariate Selection for Mortgage Default Analysis Using Survival Models The mortgage sector plays a pivotal role in the financial services industry, and the U.S. economy in general, with the Federal Reserve, St. Louis, reporting Households and Nonprofit Organizations for One-to-Four-Family Residential Mortgages Liability Level at $10.8T in Q3 2020. It has been in the interest of banks to know which factors are the most influential predicting mortgage default, and the implementation of survival models can utilize data from defaulted obligors as well as non-default obligors who are still making payments as of the sampling period cutoff date. Besides the Cox proportional hazard model and the accelerated failure time model, this paper investigates two machine learning-based models, a random survival forest model, and a Cox proportional hazard neural network model DeepSurv. We compare the accuracy of covariate selection for the Cox model, AFT model, random survival forest model, and DeepSurv model, and this investigation is the first research using machine learning based survival models for mortgage default prediction. The result shows that Random survival forest can achieve the most accurate, and stable, covariate selection, while DeepSurv can achieve the highest accuracy of default prediction, and finally, the covariates selected by the models can be meaningful for mortgage programs throughout the banking industry. Introduction Home building and sales are one of economic engines driving the United States' $21T economy, and the Federal Reserve, St. Louis, reports Households and Nonprofit Organizations for One-to-Four-Family Residential Mortgages, Liabil-ity Level, at $10.8T in Q3 2020. Housing foreclosure and mortgage default were major drivers of the 2008 Great recession, however, to the contrary in 2020, existing housing sales are up, generating demand for mortgages even during the Cov-19 crisis and are a bright spot in the US economy as shown below. Figure 1 ([1]), shows a graphic for existing home sales during the Cov-19 pandemic which shows a strong increase starting in May 2020 in existing home sales totaling 6,690,000 by November 2020 generating revenue for banks but also requiring capital reserves, and consequently, predicting mortgage default will be valuable for a bank's decision on the amount of capital reserve to hold. Also, two important aspects involved in predicting performance evaluation and prediction interpretation, are respectively: 1) Prediction accuracy, and 2) the rank of covariate importance. Mortgage default data presents a binary classification problem with an obligor either defaulting or not defaulting, a logistic regression model seems to sufficiently handle this type of classification problem with 1 indicating default and 0 indicating nondefault, however, the classification of 0 as nondefault is incomplete, since the status of this loan is unknown after the end of the sampling period: for example, consider a performing mortgage loan with a loan term of 30 years that does not default or pay-off during the sampling period, then classification of 0 is incomplete, since the status of the loan is unknown from the end of the sampling period to the end of the 30-year loan term. However, this incomplete data can still give information on default probability, and should not be discarded, instead survival analysis is a modelling methodology that can incorporate this type of incomplete data. 
A wide variety of applications for survival analysis abound in economics and other social science disciplines ranging from unemployment analysis to the tendency of a convicted criminal to reoffend (recidivism). [2] examines the unemployment rate calculated from the Current Population Survey and indicates that information on the length of employment is only collected on those individuals unemployed at the time of the survey, and not for an individual's unemployed between surveys. [2] indicates that individuals are unemployed coming into the survey period presenting left censoring, and individuals are employed at the end of the survey period, but become unemployed after the end of the survey, presenting right censoring. [3] examines recidivism from the standpoint of survival analysis and indicates that ad hoc attempts to introduce time varying covariates, without the use of survival analysis introduces unintended consequences. Censoring and time varying covariates need to be formally introduced into the analysis through survival methodology to accommodate incomplete data, and proper likelihood functions need to be developed to examine censoring and time varying covariates. In this paper, several survival analysis methodologies will be compared in relation to their accuracy of default prediction and accuracy of covariate importance ranking. Mortgage prediction has been examined by other researchers: [4] used logistic regression on Hongkong residential mortgage data and found that current loan to value ratio (LTV) and unemployment are the two most important factors influencing default. [5] used mortgage data from one financial institution, and employed a Cox proportional hazard model, and the author found that among all the covariates, LTV, and debt-service-coverage ratio had largest impact. [6] compared four classification models, logistic regression, random forest, boosted regression trees, and generalized additive models, and the author found that random forest outperforms other models with prediction accuracy as the metric. Although these papers discussed the covariate importance based on different models there are several considerations deserving further investigation: First, these papers did not discuss model accuracy, which is a deficiency when discussing covariate importance, and second, most classification models did not utilize the incomplete data, which potentially could be a large fraction of the total dataset. To compare the accuracy of the models, training and test data will be constructed and a C-index will serve as the accuracy metric, and the motivating factor for using C-index as the performance metric is because for incomplete data, accuracy could not be defined. This paper will also generate covariate importance ranks for all the models, and construct a model-covariates matrix to discover the best model in terms of covariate importance. Finally, this paper is arranged as follows: Section 2 will be the introduction of survival theory and models including Cox, AFT, RSF and DeepSurv. Section 3 is building the covariates ranking with Cox model, AFT model, RSF model. Section 4 will be evaluating the effectiveness and accuracy of covariates ranking generated with the three models using DeepSurv model. Section 4 will be the results, discussion and conclusion. 
Censored Data

Censoring arises naturally in time-to-event data when the start or the end of an event is not precisely observed [10], and there are various censoring types, for example, right censoring, interval censoring, and left censoring. The most common type of censoring is right censoring, where the time-to-event is not observed; as an example consider mortgage default data, where mortgage default is the event of interest in terms of survival analysis. Now assume that for a given dataset, default is observed for 30% of borrowers, and each defaulted observation is recorded with an appropriate default date; however, for the remaining 70% of the borrowers in the data, default is not observed, and therefore the observations are recorded as right censored. Although right censored data may seem to be a case of missing data, since the time-to-event is not actually observed before the end of the study, these subjects are very valuable because they went a certain amount of time without experiencing an event, and this in itself is informative to the analysis. Using common statistical models, such as linear regression and logistic regression, for time-to-event data will result in biased estimation and misleading results, since these analyses cannot handle censored data where the survival experience is only partially known.

Survival Function

Survival analysis is a statistical technique for analyzing time-to-event data, and one fundamental relationship in survival analysis is the survival function. Assume T is a continuous random variable; then the probability of an individual surviving beyond time t can be defined as

S(t) = P(T > t)   (1)

The other basic quantity is the hazard function. It is also known as the hazard rate, the instantaneous death rate, or the force of mortality. The hazard function can be expressed as

\lambda(t) = \lim_{dt \to 0} P(t \le T < t + dt \mid T \ge t)/dt   (2)

It expresses the conditional probability that the event of interest will happen in the time interval dt given that it did not occur before. Combining Equation (1) and Equation (2), Equation (3) can also be derived,

S(t) = \exp(-\Lambda(t))   (3)

which shows that the survival and hazard functions provide equivalent information. \Lambda(t) = \int_0^t \lambda(u)\,du is called the cumulative hazard, which every model uses to calculate the survival function, S(t).

Concordance Index

In time-to-event data, because some outcomes are unknown, it is not possible to use accuracy or the area under the curve (AUC) to evaluate the performance of a model; however, [11] proposed a rank-based method to judge the prediction capability of survival models. Every survival model generates a risk score for each subject; usually, the risk score is the median survival time of a subject, and then all appropriate subject pairs are evaluated following the principles shown in [11]. For example, if both subjects of the pair are not censored, and the median survival time of A is larger than that of B, and the time to event of A is larger than that of B, [11] calls this a concordant pair. [11] also explained how to handle concordance when only one subject is censored, or both subjects are censored. Finally, the ratio of concordant counts to the counts of all valid pairs is the concordance index (C-index). The C-index ranges between 0.5 and 1; when the C-index is 0.5, the model is no better than a random guess, and when the C-index is 1, the model predicts perfectly. [12] found that the C-index is a weighted average of time-specific AUC values, which explains its popularity as the metric of choice for evaluating survival models.
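As a small illustration of the metric, the snippet below computes the C-index with the lifelines package; the data values are made up, and the choice of lifelines is an assumption for illustration, since the ranking principle itself is implementation independent.

# Toy C-index computation with lifelines; all values are hypothetical.
import numpy as np
from lifelines.utils import concordance_index

event_times = np.array([5.0, 8.0, 12.0, 20.0, 24.0])    # months to default or censoring
event_observed = np.array([1, 1, 0, 1, 0])              # 1 = default observed, 0 = right censored
risk_scores = np.array([30.0, 18.0, 25.0, 40.0, 60.0])  # e.g., model-predicted median survival times

# A good model assigns longer predicted survival to subjects whose events occur later.
print(concordance_index(event_times, risk_scores, event_observed))  # value between 0.5 and 1.0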
Data Introduction and Exploration

The dataset used in this paper has 50,000 U.S. mortgage borrowers (obligors) and is the dataset used by [13], which can be downloaded from their book's website. At the end of the tracking of each mortgage borrower, some borrowers were recorded as defaulted or as having finished payment. In this data, 30% of obligors defaulted, only 17% of the obligors were recorded as continuously paying, and the rest of the obligors finished their loan term. Each mortgage is associated with an origination time, record time, indicator of default or indicator of payoff, and other covariates associated with the mortgage. Names and explanations of all 15 covariates are in Table 1, and these 15 covariates can be grouped as macroeconomic variables (gdp, uer, hpi, interest_rate), loan related variables (LTV, LTV_orig, FICO_orig, investor_orig, balance, balance_orig, hpi_orig, interest_rate_orig), and property related variables (REtype_CO_orig, REtype_PU_orig, REtype_SF_orig). The four macroeconomic covariates are grouped, and their paired correlations calculated, as shown in Figure 2. Figure 2 shows the covariate hpi has a strong negative correlation with the covariate uer, which from the economic perspective is supported: as the economy gets stronger, more people are employed, decreasing the unemployment rate, and similarly a more active economy increases disposable income, which implies more homes are purchased, increasing the hpi and leading to a negative correlation between uer and hpi. The covariate gdp has a fairly strong positive correlation with hpi, where a sustained increase in economic activity increases gdp and similarly increases home sales, which in turn increases hpi, leading to a positive correlation between gdp and hpi. Likewise, a sustained increase in economic activity increases employment, which in turn decreases the unemployment rate, uer, presenting a fairly strong negative correlation between gdp and uer in Figure 2. Before fitting survival models, the dataset is preprocessed in two steps: first, the origination date of each mortgage is moved to 0; second, only the last record of each mortgage is kept, and the time from origination date to the last observation is computed. The default indicator variable takes on two values, the value 1 if the mortgage has defaulted during the sampling window, and 0 if the observation has not defaulted, that is, survived and is censored; finally, left censoring is avoided by assuming all loans start from the first observation. Before fitting any survival model, it is standard practice to generate Kaplan-Meier survival curves to explore the impact of each covariate on survival, and since most of the covariates in the mortgage dataset are continuous, dummy variables are generated as follows: for any continuous covariate, if the value is larger than the mean of the covariate, it is labeled as 1, otherwise it is labeled as 0. Figure 3 shows the Kaplan-Meier survival curves for all 15 covariates. In each panel, if the two curves overlap, the value of that covariate does not matter to the survival time of the mortgage, and if the two curves separate, the covariate impacts survival time.
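The mean-split Kaplan-Meier comparison described above can be reproduced with a few lines of lifelines code; the column names (time, default) and the plotting details below are illustrative assumptions about how the preprocessed dataset is organized.

# Sketch of the covariate-wise Kaplan-Meier comparison: dichotomize at the mean,
# fit one curve per group, and plot them on the same axes. Column names assumed.
from lifelines import KaplanMeierFitter
import matplotlib.pyplot as plt

def km_by_mean_split(df, covariate, time_col="time", event_col="default"):
    above = df[covariate] > df[covariate].mean()           # dummy: 1 if above the mean, else 0
    ax = plt.gca()
    for label, grp in [("above mean", df[above]), ("below/at mean", df[~above])]:
        kmf = KaplanMeierFitter()
        kmf.fit(grp[time_col], event_observed=grp[event_col], label=f"{covariate} {label}")
        kmf.plot_survival_function(ax=ax)                   # overlapping curves suggest little impact
    plt.xlabel("time since origination")
    plt.ylabel("survival probability (no default)")
    return ax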
Figure 3 shows that a few covariates have no impact on survival time, such as investor_orig, REtype_CO_orig, REtype_PU_orig, and REtype_SF_orig; however, for many of the covariates there is separation between the two curves, indicating an impact on survival time, for example, FICO_orig, interest_rate, uer, interest_rate_orig, balance, hpi, balance_orig, hpi_orig, LTV, gdp, LTV_orig. Note, the first graph examines the FICO score at origination: for FICO scores at origination above the mean there is less risk, represented by the higher turquoise survival curve, versus the lower FICO scores at origination, represented by the lower curve. Cox Proportional Hazards Model In D.R. Cox's famous 1972 paper [14], Regression Models and Life-Tables, in the Journal of the Royal Statistical Society, Cox proposed a model for handling time-to-event data that explains the effects of multiple continuous or categorical covariates, and this model, the Cox model, is expressed in Equation (4): \lambda(t | X_i) = \lambda_0(t) \exp(\beta_1 x_{i1} + ... + \beta_p x_{ip}), (4) where x_{i1}, ..., x_{ip} are the values of the covariates of subject i. The Cox model attempts to find the effect of covariates on the hazard rate, \lambda(t), by multiplying the baseline hazard rate, which changes with time, by an exponentiated linear combination of covariates. The above model implies that the effect of the covariates on the hazard rate does not change over time, and the Cox model is called a proportional hazards model since the ratio of the hazard rate of one subject, X_i, over that of another subject, X_j, is a constant. Lasso regression has the quality of shrinking and selecting covariates, and Tibshirani ([16]) shows that LASSO regression can select the best set of covariates compared with other covariate selection methods; covariate selection for the Cox model can be incorporated into LASSO regression, as illustrated in [17] [18]. This algorithm is included in the glmnet package of R, which is used in this paper, and as Tibshirani ([18]) indicates, all covariates should be standardized for the purpose of covariate selection, otherwise the coefficients cannot be compared; a Python sketch of an equivalent L1-penalized Cox fit is given below. From Equation (6) it can be concluded that when a coefficient is 0, the covariate has no impact on the survival function; when the coefficient is larger than 0, it reduces survival time, and when the coefficient is negative, it increases survival time. This explains why the coefficients of gdp and FICO_orig are smaller than 0: since a higher gdp growth rate and larger FICO scores have a positive impact on survival time, the coefficients are less than 0, which in turn positively affects survival time. The coefficient values for interest_rate, LTV, and hpi_orig are positive, since higher values of those risk drivers indicate the possibility of a shorter survival time: the interest rate is a measure of default risk, so the higher the interest rate, the higher the risk of default; for LTV, the larger the loan in relation to the value of the property, the higher the risk; and finally, for hpi_orig, the higher the house price at origination, the higher the mortgage payment, and the more difficult it is for the obligor to make larger payments over the business cycle. Now, given that all the covariates are standardized to the same magnitude, the absolute value of a coefficient reflects the extent to which survival time can be reduced and the survival function altered.
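A hedged Python stand-in for the L1-penalised Cox fit described above is sketched below, continuing from the preprocessing sketch earlier; the paper itself uses R's glmnet, so lifelines' CoxPHFitter with a pure Lasso penalty is only an approximate equivalent, and the covariate list is an assumption based on Table 1.

```python
# L1-penalised Cox fit on standardized covariates (illustrative sketch).
from lifelines import CoxPHFitter

covariates = ["gdp", "uer", "hpi", "interest_rate", "LTV", "FICO_orig",
              "balance", "hpi_orig", "LTV_orig", "interest_rate_orig"]

# `last` is the per-mortgage frame built in the preprocessing sketch above.
X = last[covariates + ["duration", "default"]].copy()
X[covariates] = (X[covariates] - X[covariates].mean()) / X[covariates].std()

cph = CoxPHFitter(penalizer=0.05, l1_ratio=1.0)   # pure Lasso penalty
cph.fit(X, duration_col="duration", event_col="default")

# Coefficients shrunk towards zero; near-zero entries are the dropped
# covariates. beta > 0 raises the hazard (shorter survival), beta < 0 lowers it.
print(cph.params_.sort_values(key=abs, ascending=False))
```

With standardized covariates, the non-zero entries of the penalised coefficient vector play the role of the selected covariates, and their signs can be read exactly as in the discussion above.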
Accelerated Failure Time Model Like the Cox model, the accelerated failure time (AFT) model is also a linear model, and the L1-regularized (Lasso penalty) AFT model will be employed on this data to choose the five most significant covariates that drive failure time. There are several parametric AFT models, and the Weibull AFT model is the most popular since it has characteristics of both a proportional hazards model and an accelerated failure time model. Equation (7) shows the Weibull AFT model: \log T_i = X_i \beta + \sigma \varepsilon_i, (7) where \varepsilon_i is an i.i.d. random variable that follows the log-Weibull (Gumbel) distribution and \sigma is a scale parameter. Since the Weibull AFT is a parametric AFT model, the expected survival time can be derived as in Equation (8), which gives a clear indication of how covariates impact survival time ([19]): E[T_i | X_i] = \exp(X_i \beta) \Gamma(1 + \sigma). (8) The Lifelines Python package is used in this section, and, as with the Cox model, all the covariates are standardized before applying the AFT model. From Equation (8) we can see that when a covariate's coefficient is 0, the covariate does not have an impact on survival time, and when the covariate's coefficient is larger than 0, it has a positive impact on survival time; therefore, the coefficients from the AFT model are usually opposite in sign to those from the Cox model, as shown in Table 3. The covariates selected with the AFT model are consistent with the Cox model, and the signs of the coefficients are opposite to those of the Cox model, which confirms the theoretical analysis; a Python sketch of this fit is given below, after the random survival forest discussion. Random Survival Forest The random survival forest (RSF) model derives from the Random Forest model of Breiman ([20]), and in contrast to the Cox model and the AFT model, which are parametric and continuous models, a random survival forest is a non-parametric, discrete model. It has the advantage that it does not depend on any distributional assumptions, and the drawback that it is hard to explain the quantitative effect of different covariates, although it can still generate a rank order of covariate importance. Like Random Forests, RSF models also produce hundreds of decision trees based on some splitting rule, and the most commonly used splitting rule is the log-rank statistic. For each tree, a subset of the covariates is selected randomly based on the square root of p, where p is the number of covariates; then, recursively, a covariate is chosen and its splitting value determined so that the left node and the right node of the tree have the maximum difference in the log-rank statistic ([21]). The log-rank statistic measures how large the difference in hazard rates of two groups is, and in the case of RSF it measures the difference in hazard rates between the left node and the right node of the current split. The covariate ranking in RSF is similar to that of the Random Forest: it calculates the drop in prediction accuracy on the test data when the selected covariate is excluded, and since RSF is an ensemble algorithm, there are efficient ways to implement this process ([22]). This paper uses the randomForestSRC package of R, and Figure 5 shows the covariate importance from RSF. Unlike the Cox and AFT models, whose coefficients have a quantitative meaning for how they impact survival time, the covariate ranking from RSF does not ([23]). Note that gdp, LTV, uer, interest_rate and hpi_orig are the top five covariates, as in the other rankings above.
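Below is a short lifelines sketch of the Weibull AFT fit discussed at the start of this subsection, continuing from the standardized frame X built earlier (the RSF fit is done in R's randomForestSRC and is not reproduced here). The penalizer value and the l1_ratio argument are assumptions about the exact penalty settings used.

```python
# Weibull AFT counterpart to the Cox fit (illustrative sketch).
from lifelines import WeibullAFTFitter

aft = WeibullAFTFitter(penalizer=0.05, l1_ratio=1.0)  # assumed Lasso settings
aft.fit(X, duration_col="duration", event_col="default")

# Positive AFT coefficients lengthen survival time, so the signs are expected
# to flip relative to the Cox coefficients, as discussed above.
print(aft.summary[["coef"]])

# Expected survival time per mortgage, the quantity in Equation (8).
print(aft.predict_expectation(X).head())
```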
DeepSurv DeepSurv is a deep learning algorithm based on a Cox model ([24]), keeping the structure of a neural network while retaining the Cox proportional hazards paradigm as a one-layer response output with appropriately updated parameters. The effects of covariates are modeled with a multi-layer neural network to capture the interactions between covariates, and like other deep learning neural network algorithms, DeepSurv uses alternating fully connected layers and dropout layers to avoid overfitting. DeepSurv also uses a scaled Exponential Linear Unit (SELU) as the activation function with a hazard function output, and finally, the loss function is the average negative log partial likelihood with regularization ([24]). Katzman briefly describes the algorithm as follows: "DeepSurv is a multi-layer perceptron similar to the Faraggi-Simon network. However, we allow a deep architecture (i.e., more than one hidden layer) and apply modern techniques such as weight decay regularization, Rectified Linear Units (ReLU) … Batch Normalization … dropout … stochastic gradient descent with Nesterov momentum … gradient clipping … and learning rate scheduling … The output of the network is a single node, which estimates the risk function h_θ(x) parameterized by the weights of the network" [24]. As seen above, DeepSurv is a highly flexible model, facilitated in part by modifying the basic gradient descent algorithm into a more adaptable method, and also, as noted, by allowing more neural network layers, introducing more parameters within the hidden-layer framework ([24]), giving more tractability to the neural network environment. In this paper, the implementation of DeepSurv is accomplished using the Python Pycox package ([25]). The default structure of the DeepSurv neural network is employed, which is composed of two hidden layers each with thirty-two nodes, a ReLU activation function, batch normalization, and 10% dropout; the data is split as 80% training data and 20% test data. Training data is used to fit the model, and test data is used for model evaluation by applying the C-index metric to determine the best model fit; finally, all the models were given a random state, so the results are repeatable. A Pycox sketch of this configuration is given below. Unlike the Cox model, which can identify the coefficients of covariates, DeepSurv is a black-box model, and consequently is not the optimal choice for coefficient explanation or selection; however, as with neural networks in general, DeepSurv is an excellent prediction model. Kim ([26]) and Zhu ([27]) claim DeepSurv can achieve higher prediction accuracy than other survival models, and this is confirmed by the results in Table 4, where all four models were fit with training data and predictions were made on the test data using all covariates. Table 4 shows that DeepSurv can indeed achieve much higher accuracy than the other models on the mortgage data, obtaining a C-index 16.15% higher than the next highest C-index score, attained by RSF. Next, DeepSurv is used as a tool to compare and evaluate the performance of the covariate rankings obtained from the other models: the covariate ranking is evaluated at 5 levels, first the top covariate, then the top 2 covariates, and so on until finally the top 5 covariates are evaluated. Table 5 shows the results.
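The following is a sketch of this default DeepSurv configuration using the Pycox package named above, continuing from the standardized frame X; the preprocessing details (dtype conversions, batch size, number of epochs) are assumptions rather than the authors' exact settings.

```python
# DeepSurv (Cox-PH neural network) via pycox: two hidden layers of 32 nodes,
# ReLU, batch norm, 10% dropout, 80/20 train/test split (illustrative sketch).
import numpy as np
import torchtuples as tt
from pycox.models import CoxPH
from pycox.evaluation import EvalSurv
from sklearn.model_selection import train_test_split

x = X[covariates].values.astype("float32")
durations = X["duration"].values.astype("float32")
events = X["default"].values.astype("float32")

x_tr, x_te, d_tr, d_te, e_tr, e_te = train_test_split(
    x, durations, events, test_size=0.2, random_state=0)

net = tt.practical.MLPVanilla(in_features=x.shape[1], num_nodes=[32, 32],
                              out_features=1, batch_norm=True, dropout=0.1)
model = CoxPH(net, tt.optim.Adam)
model.fit(x_tr, (d_tr, e_tr), batch_size=256, epochs=100, verbose=False)

# Baseline hazards are needed to turn the risk scores into survival curves.
model.compute_baseline_hazards(x_tr, (d_tr, e_tr))
surv = model.predict_surv_df(x_te)                 # survival curves per subject
ev = EvalSurv(surv, d_te, e_te, censor_surv="km")
print("test C-index:", ev.concordance_td())
```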
Note that the Cox model and the AFT model C-index level off from the top 4 to the top 5 covariates, and the RSF model C-index increases from the top 4 to the top 5 covariates by just 2.8%. Table 5 shows that RSF picks better covariates at every level, since its C-index outperforms the other two models at all levels. Cox and the AFT model perform similarly at each level because they share four of their top five covariates. Figure 6 shows the C-index on test data using the top N covariates of the RSF model, and this figure shows that the top five covariates from the RSF model essentially achieve the maximum C-index accuracy, as marked by the horizontal red line in the graphic. Notice that the C-index varies very little from 5 covariates to 14 covariates, and as with scree plots and elbow plots, the top 5 covariates are chosen as the most parsimonious set of model covariates that attains the highest C-index. Finally, in Section 3.2 and Section 3.3, the choice of a 5-covariate model with a selection of λ = 0.05 is supported by the conclusions gleaned from Table 5 and Figure 6 and the preceding paragraphs. Conclusions Determining the probability of mortgage default is a critical part of a bank's risk assessment profile, affecting originations, relationship management, and loss reserves; consequently, determining the best modeling algorithm is also critical. Further analysis shows that DeepSurv can achieve far better prediction accuracy than the other models in this study, and using the C-index as the measure of goodness-of-fit for the Cox, AFT, and RSF models, the RSF model achieves the best goodness-of-fit ranking. Among all 15 covariates, the RSF model picked 5 covariates which can successfully predict mortgage default, and finally, the chosen top 2 covariates are the gdp growth rate and the loan-to-value ratio, a result consistent with findings from the literature ([4] [5] [6]).
5,692.4
2021-03-01T00:00:00.000
[ "Economics" ]
RELEVANCE OF MGNREGA ASSETS One of the essential features of MGNREGA is to boost the engine of rural development through the creation of durable assets. The programme is presumed to transform the rural economy with the ultimate objective of sustainable development through enhancement of agricultural production. The different categories of works executed under the scheme are aimed at creating favourable conditions for villagers through environmental and infrastructure upgradation. The present study aims to reveal the significance of the assets and their worth to rural people on the basis of a micro-level field investigation in the district of Mandi, Himachal Pradesh. Introduction The Mahatma Gandhi National Rural Employment Guarantee Programme is the largest ever rural development programme of its type. The programme embeds unique features of guaranteed wage employment through the creation of durable assets, provision of social security under the Rashtriya Swasthya Bima Yojana, participative planning for the works to be carried out, creation of durable assets to sustain and enhance agricultural production, etc., making it different from all other development programmes in the past. In ten years of its implementation, the programme has emerged as a ray of hope for millions of villagers to earn a livelihood in their locality and to enhance their agricultural production with the help of assets constructed under the scheme. With the ultimate objective of strengthening the livelihood of rural people, the contribution of assets created under MGNREGA has always been a matter of concern since the implementation of the scheme in February 2006. With a total of 4,50,92,923 works completed under the scheme (till March 31, 2016), these assets assume further significance. Further, with a mandate of 60 per cent of works meant for the enhancement of agricultural and allied activities, the scheme envisions that the works undertaken will improve natural resource management and address the causes of chronic poverty such as drought, deforestation, barren land, etc., through sustainable assets. Literature Review MGNREGA, based upon the theme of the employment guarantee scheme (1982) in Maharashtra, has attracted a considerable amount of academic interest because of its size and implications for rural development. A study conducted by Ambush, Shankar and Shah (2007) provides a roadmap for effective implementation of the scheme by suggesting measures like deployment of full-time professional staff, social mobilisation for participative planning in deciding the shelf of projects, and intensive use of information technology practices. Deepak and Sova (2010) observed that one of the reasons for the poor performance of MGNREGA was that most of the works were concentrated only around water conservation and irrigation facilities. They recommended the convergence of MGNREGA with other schemes of public works administered by the government through the agriculture, horticulture and forest departments. Esteues et al. (2013) revealed that works like water conservation, land development, forestation, etc., have led to the enhancement of agricultural productivity and the regeneration of the natural resource base. Basu et al. (2013) quantified the environmental and socioeconomic benefits generated by the works taken up under the scheme. They observed a reduction in the vulnerability of the poor due to the implementation of works and increased environmental benefits from the works undertaken under the scheme.
Ranaware, Ashwini and Sudha (2015) evaluated the impact of MGNREGA works through their empirical study in Maharashtra. They revealed that MGNREGA works support agriculture and benefit a large number of small and marginal farmers. They suggested an increase in local participation, careful selection of works and better design to ensure the effectiveness of MGNREGA. MGNREGA is largely presumed to be a poverty alleviation programme through employment generation, although it derives its legitimacy from being an asset-generating programme. Recently, some researchers have focused upon the impacts of the scheme on environmental changes (Methew S., 2014), the socio-economic benefits generated (Basu et al., 2013), the vulnerability of agricultural output (Tashina et al., 2013), the contribution of assets towards agricultural production (Krishna et al., 2014) and the benefits of MGNREGA works for small and marginal farmers through enhanced agricultural output (Ranaware et al., 2015). While there is rich documentation of the implications of MGNREGA in terms of employment provided, wages and consumption, rural-urban migration, women's empowerment, etc., very little is known about the relevance of the assets and their utilisation towards the enhancement of the living standards of rural people. The present study proposes to contribute to the emerging body of knowledge by focusing on MGNREGA works and their relevance for rural people in the district of Mandi, Himachal Pradesh. Methodology The study focuses on making a qualitative assessment of the contribution of assets in terms of the benefits associated with them, such as an increase in agricultural output and the progression from traditional crops to cash crops with the help of MGNREGA assets. To fulfil the objective of the study, the following research issues have been outlined: to examine the relevance of assets in terms of their utilisation and contribution towards the betterment of rural lives; to identify the type of works required most by the villagers in their locality; and to carry out a time series analysis of the works performed in the last nine years under different categories. The study may throw some light on the problems, issues, constraints and limitations of assets contributing towards rural development and may indicate solutions to the problems of effective implementation of rural development programmes like MGNREGA. The study may also help in the formulation of better policies and strategies for the effective contribution of MGNREGA assets towards rural development. The study is based on both primary and secondary data. Secondary information was collected from the annual reports (Report to People, Sameeksha, etc.) of the Ministry of Rural Development, the District Rural Development Agency and the MGNREGA website. For collecting the primary data, a multistage sampling design was adopted. In the first stage, the district of Mandi in Himachal Pradesh was chosen purposively. In the second stage, all the blocks of district Mandi were selected to represent the district uniformly. From each of these ten blocks, four panchayats were chosen, based on their performance in terms of the employment provided for the construction of assets. Finally, 400 randomly selected beneficiaries from the 40 panchayats were interviewed with the help of a schedule to collect responses. The schedule was constructed to elicit both the significance of and the perception towards MGNREGA works.
Based on the data collected from the field, an assessment has been made of how decisions regarding the creation of assets are taken, how many villagers have shifted their agricultural pattern towards cash crops, which assets are most needed and useful for villagers, awareness about the repair and maintenance of assets, and whether the assets remain in use after construction and for how long. Hence the study aims to contribute to the emerging documentation of the contribution of assets towards the enhancement of living standards in rural areas. The scope of the study is confined to exploring the usefulness of the works rather than the cost-volume ratio of these works. The study throws some light upon the usage and usefulness of assets created under the scheme in the last ten years as well as the type of works that are required to be carried out in a specific area. The study also helps in the formulation of strategies towards asset generation in specific rural areas. Table I indicates the economic status of the surveyed beneficiaries. It is noted that surveyed beneficiaries possess 2-5 bigha of land, while only 16.5 per cent of families are in the below poverty line group. Mainly for widows and old persons, MGNREGA has become the only source of income to sustain their livelihood. As agriculture is the main occupation of households in rural areas, assets directed towards the promotion of agriculture and allied activities will boost the growth engine of rural development. Significantly, with the help of assets created under the scheme, agriculture has become the main source of income for 45 per cent of surveyed households. Assessing Impacts of Assets Created under MGNREGA MGNREGA provides ample opportunity for productive and durable asset creation. The assets created under different categories of works are primarily meant to foster agricultural production through works like land development, water conservation, water harvesting and micro irrigation, etc. The ultimate objective of the scheme is to raise the agricultural productivity of millions of farmers who will then be able to return to farming and will no longer need to depend on schemes like MGNREGA for their livelihood. There was also a decline of about 42.99 per cent in FY 2014-15, which was attributed to the change of Central government in Delhi. The new government wanted MGNREGA to be project-oriented rather than a need-based employment-generating programme. Hence, some modifications were proposed in the scheme, hampering the progress of works under it. These assets have the potential to transform rural development by improving irrigation facilities, enhancing land productivity and connecting remote areas to the input and output markets, having both direct and indirect benefits for villagers. Rural Infrastructure Construction of rural infrastructure under MGNREGA is an important tool to facilitate rural development. Works related to natural resource management for public and private usage, agriculture and horticulture development, disaster management, etc., are provisioned under the scheme. New works are also included under the scheme as per geographical requirements. Most Needed Works by Sample Households in Their Locality Section 16 of the Act mandates the meetings of the gram sabha to determine the priority of works to be carried out under the scheme. However, the preference for works shows a spatial variation among blocks.
Households producing cash crops and vegetables are inclined more towards water conservation and land development works, while households in far-flung remote areas prefer more rural connectivity works. Figure 2 depicts the preferences for different categories of works to be undertaken in the locality of respondents. The survey revealed that works related to land development (33 per cent), followed by rural connectivity (29.5 per cent), water conservation (18.8 per cent), flood protection (10.3 per cent) and drought proofing (4 per cent), are preferred by the respondents to be carried out in their locality. [Figure: Total Number of Works Completed from FY 2007-08 to 2015-16] A need was also felt to incorporate new works like fencing of agricultural land to safeguard against wild animals, collection of pine tree leaves to prevent forest fires in summer, plantation of fruit trees on personal land, etc. An Analysis of Works Done in Actual Of the total works (10,732) executed under different categories in FY 2015-16 in the district of Mandi, the majority of works, such as land development, irrigation facilities, flood control, renovation of water bodies, drought proofing, works on individual lands, and water conservation and water harvesting, support agricultural and allied activities directly or indirectly. It can be concluded from Figure 3 that these works constitute a total of 87.55 per cent of the total works, followed by works of rural connectivity (10.14 per cent). A few works of rural sanitation (1.83 per cent) and works like construction of fisheries (0.04 per cent) and Bharat Nirman Rajeev Gandhi Sewa Kendra (0.07 per cent) were also carried out under the ambit of MGNREGA. Hence it can be concluded that the works being carried out are directed towards the sustainable development of rural areas. Extent of Benefits Availability of employment opportunities in the native village is one of the direct benefits of the scheme. Altogether, MGNREGA works of land development, rural connectivity and water conservation have benefited villagers to a large extent. These works are considered by respondents as a great help in enhancing their agricultural production and shifting towards vegetable or cash crop production. Figure 4 depicts the relevance of MGNREGA assets for respondents. As is evident from the figure, the majority of respondents (36 per cent) are making use of these assets to enhance agricultural production, while others are benefiting from assets like cemented pathways, tractor roads, etc., constructed with the purpose of connecting rural areas. Usefulness of MGNREGA Assets The assets created under the scheme are aimed at rejuvenating the rural economy through land development, enhancement of the water level through water conservation, and environmental conservation, which will further enhance the ecosystem and enable thousands of farmers to return to agriculture. The assets constructed under MGNREGA are highly useful for villagers in many respects, viz. self-employment, water availability for the production of cash crops and vegetables, rural connectivity, etc. It has been observed that about 15.5 per cent of respondents have become self-dependent with the help of MGNREGA assets as they have shifted towards cash crop and vegetable production. Hence, it can be concluded that the assets generated under the scheme are playing a significant role by linking rural people with agricultural production.
Assessment of Usefulness of Assets It was observed that more than half of the beneficiaries perceive the assets to be very useful for development. Remarkably, only 9.5 per cent of respondents feel that the assets were useless and have no usage for them, while 84 per cent of beneficiaries presume that the assets are extremely useful for soil and water conservation and 70.6 per cent reported that the assets constructed under the scheme have changed the crop pattern, enabling them to produce more cash crops and vegetables on their land. Overall, 95.7 per cent of beneficiaries believe that the assets created are of good quality and in usage after 3-4 years of their construction. It was observed that for 62.5 per cent of total respondents the assets created under the scheme have increased their engagement in agricultural activities and hence have addressed the concern of employment in rural areas, a positive sign of transformational development. Responses of households surveyed on the assets created under MGNREGA (percentage of respondents):
Have improved agricultural productivity: 80
Are very useful for soil and water conservation: 84
Have changed the crop pattern from traditional crops to cash crops: 50.6
Have increased engagement in agriculture-related activities, hence solving the unemployment problem in rural areas: 42.5
Are of good quality and useful: 90.2
Assets are long lasting and are in use even after 5-6 years of creation: 64.3
Assets are repaired from time to time for their maintenance: 0.8
It was also observed that only a nominal fraction of 3 per cent of surveyed beneficiaries were aware of the maintenance provisions under MGNREGA. While the majority of respondents wish to repair assets like water tanks, they were unaware of the provision, resulting in the non-usage of water tanks created on private lands. A strong need was felt to repair many assets to make them useful. Conclusion The ultimate objective of MGNREGA is the upliftment of rural people through the creation of durable assets. These assets are aimed at rejuvenating the rural economy by enhancing agricultural production. After ten years of its implementation, it is time to assess the significance of these assets. Hence, the study is primarily aimed at examining the relevance of assets in terms of their utilisation, identifying the works most required to be carried out in the locality of beneficiaries, and evaluating the performance of the scheme in terms of total assets constructed in different years. The study provides evidence that the works carried out under MGNREGA support agriculture and have benefited many beneficiaries. It is revealed from the MGNREGA website that there has been an increase in the total number of works carried out over the period of ten years. The major objective of the scheme is to provide employment to rural households through asset-generating works meant to transform the rural economy. Although these assets are contributing effectively towards the enhancement of agricultural production and cash crop production in a few areas where there are good irrigation facilities, in rainfed areas the water tanks and rain harvesting structures are not in good condition and are not in use; most of the water tanks need to be repaired for their utilisation. Based on the findings of the study, it is recommended that the usefulness of MGNREGA assets be monitored and evaluated at the grassroots level. The ward members should ensure that the assets constructed on personal land are well maintained and remain in use after construction.
The record of the 'expected outcome' at the time of construction or execution of assets should be made mandatory. Villagers should be made accountable for the maintenance of assets constructed on their personal lands, as it was observed that a lot of water tanks on private land were not in use after their construction. Further, a progress report should be submitted by each panchayat on how the assets generated have transformed rural life in terms of agricultural output and on the condition of these assets in terms of utility and usage by rural people. An awareness campaign focusing on the provisions and entitlements of the scheme needs to be undertaken. Ultimately, we need to change the perception of villagers towards MGNREGA from that of being merely an employment programme to a programme which is meant for their own upliftment through the creation of durable assets in their locality, hence enabling them to return to agriculture and to become self-dependent through the effective utilisation of the millions of assets that have been created under the scheme. 1. The district reflects the diversity of a rural hill State in terms of the agro-climatic and geographic characteristics of Himachal Pradesh. The district has been performing exceptionally well since the implementation of the scheme. It has provided the highest employment to beneficiaries on a year-on-year basis among all 12 districts. The district also contains backward panchayats, which helps us to know how the programme is implemented and functioning in these panchayats. 2. MGNREGA works are classified into four categories: public works relating to natural resource management, individual assets for vulnerable sections, common infrastructure for self-help groups, and rural infrastructure. Public works largely include land development, construction of water conservation and water harvesting structures on private land, and creation of pathways for rural connectivity. Other works that need to be incorporated under the scheme may include collection of pine leaves to prevent forest fires in summer, fencing of agricultural land to prevent encroachment by wild animals, etc.
4,173.4
2018-03-01T00:00:00.000
[ "Agricultural and Food Sciences", "Economics", "Environmental Science" ]
Spectroscopic and Spectrometric Methods Used for the Screening of Certain Herbal Food Supplements Suspected of Adulteration Purpose: This study was carried out in order to find a reliable method for the fast detection of adulterated herbal food supplements with sexual enhancement claims. As some herbal products are advertised as "all natural", their "efficiency" is often increased by the addition of active pharmaceutical ingredients such as PDE-5 inhibitors, which can be a real health threat for the consumer. Methods: Adulterants potentially present in 50 herbal food supplements with sexual improvement claims were detected using two spectroscopic methods - Raman and Fourier Transform Infrared - known for their reliability, reproducibility, and easy sample preparation. The GC-MS technique was used to confirm the spectra of the potential adulterants. Results: About 22% (11 out of 50 samples) of the herbal food supplements with sexual enhancement claims analyzed by spectroscopic and spectrometric methods proved to be "enriched" with active pharmaceutical compounds such as sildenafil and two of its analogues, tadalafil, and phenolphthalein. The occurrence of phenolphthalein could be the reason for the non-relevant results obtained by the FTIR method in some samples. 91% of the adulterated herbal food supplements originated from China. Conclusion: The results of this screening highlighted the necessity for an accurate analysis of all alleged herbal aphrodisiacs on the Romanian market. This is the first such screening analysis carried out on herbal food supplements with sexual enhancement claims. Introduction In recent years, the consumption of herbal food supplements meant to improve sexual performance has seen a considerable increase. Most consumers trust products advertised as "100% Natural", considering them safe and free of side effects. 1 In spite of consumers' belief, this category of herbal supplements can be "tainted" with legal drugs such as sildenafil, tadalafil, and vardenafil, as well as their analogues. Unfortunately, not all of these substances have been tested from a pharmacological or pharmacokinetic point of view. 2,3 Phosphodiesterase type 5 inhibitors (PDE-5 inhibitors), namely sildenafil, tadalafil, and vardenafil, are drugs commonly used to treat erectile dysfunction and can be consumed only on medical prescription. 4-6 PDE-5 inhibitors are not recommended for patients on specific prescriptions such as organic nitrates (e.g. nitroglycerin, isosorbide dinitrate, isosorbide mononitrate, amyl nitrite, or nitrates used for the treatment of diabetes, hypertension, hyperlipidemia and ischemic heart disease), as they can cause serious and unpredictable falls in blood pressure. These blood pressure falls are often accompanied by other specific symptoms, among which headache, flushing, dyspepsia, nasal congestion, dizziness, myalgia, back pain, and abnormal vision are but a few to be mentioned. [7][8][9] A number of efficient analytical methods such as TLC, GC-MS, LC/MS/MS, LC-HR/MS, HPLC-DAD, HPLC-MS, and NMR have been developed in order to detect the PDE-5 inhibitor adulteration of food supplements. 1,[10][11][12][13][14][15] Attia et al. determined vardenafil hydrochloride as a pure active ingredient in drugs using TAI (thermal analysis investigation). 16 However, all these methods require laborious sample preparation and analysis while being very expensive from a financial point of view.
Therefore, for the quality screening of adulterated food supplements, a new approach was required: a non-destructive procedure, less laborious in terms of sample preparation and analysis, and a faster and more efficient method involving minimal costs. Among such methods, Raman and infrared spectroscopy proved to be the most efficient. According to the specialized literature, these methods have been applied for various purposes (Kim et al. [17][18][19][20]). Fourier Transform Infrared Spectroscopy with Attenuated Total Reflectance (ATR-FTIR) and IR spectroscopy were used to detect counterfeit drugs of the Viagra and Cialis type by Ortiz et al. and Custers et al. 21,22 Using the same techniques, Champagne et al. detected sildenafil and tadalafil in raw materials used as ingredients in food supplements. 23 Chuang et al. and Yang et al. used near-infrared spectroscopy (NIR) to analyze bioactive compounds in herbs and herbal medicines, respectively. 24,25 As consumers show an increasing demand for natural products (supplements), quality control, adequate risk assessment and clear regulation for botanicals and botanical preparations are highly required. While adulteration may be "economically motivated", the occurrence of such pharmacologically active compounds in herbal supplements may become a serious health threat for consumers. In this study, a qualitative screening of 50 samples of herbal food supplements collected from the local market was carried out. Raman spectroscopy and Fourier Transform Infrared Spectroscopy were used as rapid screening methods. These analytical methods are complementary and cheaper and can be used to identify several chemical functional groups. They are less time consuming in both sample preparation and analysis. As soon as adulteration was detected, the GC-MS technique was applied to confirm the adulterant spectra in the respective samples. Chemicals Reference standards of sildenafil (purity 98.8%) and tadalafil (purity 99.9%) were purchased from the European Directorate for the Quality of Medicines & HealthCare, European Pharmacopoeia (Strasbourg, France). Acetone and methanol used as solvents were provided by Merck, Germany. Commercial formulations of dietary supplements A total of 50 herbal food supplements promoted to improve sexual performance were analyzed. The samples were encoded from Hfsd1 up to Hfsd50. A total of 45 products were provided by the National Office for Medicinal, Aromatic Plants and Bee Products, 4 products were purchased online, while one product was bought from a local drugstore. The analyzed herbal supplements originated from different countries and were marketed as capsules, tablets, liquids in vials, sachets, and powders. Raman measurements Raman spectra were recorded on an NXR FT-Raman module with an InGaAs (Indium-Gallium Arsenide) detector and a CaF2 beam splitter. The power of the laser beam at the surface of the sample was about 0.3 mW. Each spectrum consisted of 64 co-added scans at a spectral resolution of 4 cm−1 in the range of 3701-100 cm−1. Omnic software version 8 (Nicolet Instrument Co., Madison, USA) was used to process the spectra. ATR-FTIR measurements A Nicolet 6700 FT-IR spectrometer (Nicolet Instrument Co., Madison, USA) with a DTGS (Deuterated Triglycine Sulphate) detector was used to record the absorption spectra. A single-bounce ZnSe/diamond crystal was used in the attenuated total reflectance (ATR) sampling system. A small amount of each homogenized sample was directly applied and pressed onto the diamond crystal.
The same pressure value was applied to all samples. Each spectrum was measured at a spectral resolution of 4 cm−1 and consisted of 64 co-added scans. Recordings were performed in the range of 3701-100 cm−1. The crystal was cleaned with acetone and dried in open air at room temperature after each measurement. To avoid possible contamination of the crystal, the background spectrum was measured under identical instrumental conditions after each acetone cleaning. Sample preparation The samples required minimal preparation. Each solid sample consisted of the content of one capsule, a tablet (that was crushed) or the contents of a sachet. The powder obtained from each sample was homogenized using a mortar and pestle. For Raman analysis the homogenized powder was inserted into a 6 mm diameter vial, which was then inserted into the equipment. Liquid samples did not require prior preparation. GC-MS analysis Stock solutions of sildenafil and tadalafil For the preparation of the stock solutions, 1 mg of each high-purity reference standard (sildenafil and tadalafil, respectively) was dissolved in 1 ml of absolute methanol. To obtain the reference ion chromatogram for each adulterant, a 1:10 dilution was performed. Samples 100 mg of homogeneous fine powder from each sample (from a sachet, by emptying a capsule or by crushing a tablet) was dissolved in 1 ml of absolute methanol. For the liquid products, 2 ml per sample were taken and diluted in 1 ml of absolute methanol. Samples were thoroughly vortexed, followed by 15 minutes of sonication and 5 minutes of centrifugation at 4000 rpm. The supernatant was collected and filtered through 0.2 µm membrane filters for GC-MS analysis. Raman spectroscopy The high-purity reference standards of sildenafil and tadalafil were analyzed. The specific bands corresponding to the characteristic functional groups of sildenafil were identified at 1698 cm−1 (a band that can be attributed to stretching vibrations of the C=O group) as well as at the doublet 1580/1563 cm−1 (which is specific to the C=C bond). For bonds containing nitrogen, Raman bands were present at 1529 cm−1, due to ʋ(N-C=N), and at 1238 cm−1, due to ʋ(C=N). The Raman bands registered at 1170 cm−1 and 648 cm−1 are attributed to the symmetrical ʋ(SO2) group and to the stretching vibrations ʋ(C-S), respectively. 19 Specific responses for tadalafil occurred in the 3100-3000 cm−1 and 1700-1500 cm−1 ranges, respectively. The characteristic spectral bands are consistent with the literature data. 20 These bands correspond to the vibrations of the unsaturated or aromatic C-H bond and to the vibrations of the unsaturated C=C bond, respectively. The Raman spectra of all 50 samples of dietary supplements were recorded according to the same procedure used for the reference standards. As seen in Table 1 (A), four herbal supplements adulterated with sildenafil were identified. Almost all characteristic bands of the high-purity reference standard were present in the spectra of the Hfsd50 sample (the 1170 cm−1 band was missing) and the Hfsd48 sample (the 1238 cm−1 band was missing), while two or three bands were absent in the spectra of the Hfsd4 and Hfsd49 samples. As shown in Table 1 (B), another seven samples of herbal food supplements, adulterated with tadalafil, were also detected. The Hfsd30 sample showed all the characteristic spectral bands of tadalafil, while in the spectra of the Hfsd12 and Hfsd29 samples one specific band was missing. In the spectra of the Hfsd27 and Hfsd32 samples, two and three bands, respectively, were missing.
It was also noticed that the spectra of the examined samples were of poor quality. In the spectra of the Hfsd28 and Hfsd31 samples, only a part of the characteristic bands of tadalafil were detected. As the Raman spectra of the Hfsd28 and Hfsd31 samples were not sufficiently relevant, a clear conclusion on their tadalafil adulteration could not be drawn. Raman spectroscopy, used to screen the 50 herbal food supplements with sexual enhancement claims, detected 9 adulterated samples, among which 4 products contained sildenafil and 5 products contained tadalafil. As already mentioned, two samples suspected of being adulterated with tadalafil showed no relevant spectra, missing three specific bands of the reference standard. ATR-FTIR spectroscopy The FTIR spectra of the 50 samples of herbal food supplements were also analysed against the sildenafil and tadalafil reference standards. Absorption peaks characteristic of PDE-5 inhibitors were registered in the 1800-525 cm−1 range. According to Champagne et al., this spectral range includes the 1720-1150 cm−1 domain, important for the detection of PDE-5 inhibitor analogues and homologues, respectively. 23 The sildenafil spectrum (Figure 1a) showed significant absorption peaks at 1698 cm−1 (characteristic of carbonyl groups (C=O)); 1579 cm−1 (specific to N-H bonds, occurring in the range of 1650-1500 cm−1); and 1489 cm−1 (resulting from C=C bonds in the benzene ring). The C-N bond of the functional group O=C-N absorbs at 1400 cm−1 (but in this experiment the absorption value was 1391 cm−1). Anzanello et al. reported that the C-H aromatic out-of-plane deformation occurred at 939 cm−1, which resulted in the addition of new peaks at 1172, 758, 619, and 587 cm−1. 27 The specific tadalafil absorption peaks of the FTIR spectrum (Figure 1b) were registered at 1675 cm−1 (characteristic of amide C=O) and 1646 cm−1 (aromatic C=C). The band at 1435 cm−1 belongs to the C-N stretching vibration, and the band at 746 cm−1 is representative of benzene. 27 Comparing the spectrum of the sildenafil reference standard with the spectra of all 50 analyzed samples of herbal food supplements, sildenafil adulteration of four samples (Hfsd4, Hfsd48, Hfsd49, Hfsd50) was noted (Table 2 (A)). All characteristic absorption peaks for sildenafil were identified in the spectrum of the Hfsd50 sample. The Hfsd48 sample spectrum did not show the characteristic absorption peaks at 1579 cm−1, 1172 cm−1, and 758 cm−1. In the spectrum of the Hfsd49 sample, the four bands at 1579 cm−1, 1489 cm−1, 1391 cm−1 and 758 cm−1 were absent. The Hfsd4 sample showed a very poor spectrum (6 bands were missing: 1698 cm−1, 1391 cm−1, 1172 cm−1, 758 cm−1, 619 cm−1 and 587 cm−1). This last result is not relevant if adulteration with sildenafil is to be considered. Out of the total analyzed samples, seven herbal food supplements (Hfsd12, Hfsd27, Hfsd28, Hfsd29, Hfsd30, Hfsd31, Hfsd32) were identified as adulterated with tadalafil (Table 2 (B)). All spectra of the adulterated samples showed the same absorption peaks as the tadalafil reference spectrum. Thus, FTIR analysis confirmed that the Hfsd28 sample was adulterated with tadalafil, whereas the application of Raman spectroscopy to the same sample could not detect this adulteration. The screening of the 50 herbal food supplements with sexual enhancement claims performed by the ATR-FTIR spectroscopic method resulted in the detection of 10 adulterated samples, with sildenafil (3 products) and tadalafil (7 products).
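As a purely illustrative aid (not the software used by the authors), the band-matching logic described above - checking which characteristic reference bands appear as peaks in a sample spectrum - can be sketched in a few lines of Python. The wavenumbers are those quoted in the text; the tolerance, the prominence threshold and the file name are hypothetical.

```python
# Illustrative band-matching sketch for screening a sample FTIR spectrum
# against the characteristic sildenafil bands quoted in the text.
import numpy as np
from scipy.signal import find_peaks

SILDENAFIL_BANDS = [1698, 1579, 1489, 1391, 1172, 939, 758, 619, 587]  # cm-1

def matched_bands(wavenumbers, absorbance, reference_bands, tol=8.0):
    """Return the reference bands near which a local absorbance peak is found."""
    peak_idx, _ = find_peaks(absorbance, prominence=0.01)  # assumed threshold
    peak_positions = wavenumbers[peak_idx]
    return [b for b in reference_bands
            if np.any(np.abs(peak_positions - b) <= tol)]

# Hypothetical usage with an exported two-column spectrum file:
# wn, ab = np.loadtxt("Hfsd48.csv", delimiter=",", unpack=True)
# hits = matched_bands(wn, ab, SILDENAFIL_BANDS)
# print(f"{len(hits)}/{len(SILDENAFIL_BANDS)} characteristic bands found:", hits)
```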
GC-MS Due to its capacity to separate, quantify and identify unknown organic compounds, GC-MS was used as a sensitive method to confirm the results obtained by the two spectroscopic techniques (Raman and FTIR). The advantage of this method is related to the sample preparation (no derivatization or hydrolysis procedure is needed) as well as to the much shorter detection time for the adulterant substances. Good chromatographic separation of the sildenafil and tadalafil reference standards was obtained at retention times of 38.66 minutes and 36.63 minutes, respectively. Identification of the two PDE-5 inhibitors (namely sildenafil and tadalafil) by GC-MS was facilitated by the presence of their molecular ions at m/z 474 (for sildenafil) and m/z 389 (for tadalafil). Out of the total number of analyzed samples, 11 herbal food supplements proved to be adulterated with sildenafil or tadalafil (Table 3). The adulterated products were mostly (91%) of Chinese origin. Surprisingly, phenolphthalein was detected in the Hfsd48 sample (Figure 2), in which sildenafil was also detected. It should be noted that phenolphthalein has been a banned substance both in Europe and in the USA since 1997 because of its potential carcinogenic effects. We suppose that phenolphthalein was introduced into the herbal food supplement with the intention of concealing the presence of sildenafil and thus preventing the detection of the adulterant (PDE-5 inhibitor) by spectroscopic analysis. The occurrence of phenolphthalein could be the reason for the non-relevant results obtained by the FTIR method for the Hfsd48 sample. The GC-MS technique proved to be more sensitive than the Raman and ATR-FTIR spectroscopic methods: in the Hfsd4, Hfsd49 and Hfsd50 samples, sildenafil was identified together with two other unknown compounds with a similar fragmentation pattern (probably analogues of sildenafil), as can be seen in Figure 3. We suppose that the sildenafil analogues (detected in the Hfsd4, Hfsd49 and Hfsd50 samples) as well as the phenolphthalein (detected in the Hfsd48 sample) identified by GC-MS could be responsible for the low-quality spectra recorded by the ATR-FTIR method used to screen the herbal food supplements. Using GC-MS analysis, the adulterant tadalafil was detected in seven herbal food supplements (the Hfsd12, Hfsd27, Hfsd28, Hfsd29, Hfsd30, Hfsd31, and Hfsd32 samples). The chromatograms of all these samples showed a common significant peak for tadalafil at a retention time of 37.4 min (see Figure 4). The GC-MS method, used to confirm the results of the spectroscopic methods applied to the same samples, showed that tadalafil could be detected with greater specificity than with the Raman technique (the Hfsd28 and Hfsd31 samples are relevant examples). About 22% of the herbal food supplements with sexual enhancement claims analyzed by spectroscopic and spectrometric methods proved to be "enriched" with active pharmaceutical compounds such as sildenafil and two of its analogues, tadalafil, and phenolphthalein. All these adulterants have been detected in similar alleged herbal aphrodisiacs by different researchers (see literature data). [13][14][15] Conclusion The screening performed on a total of 50 herbal food supplements promoted as natural aphrodisiacs showed that 11 samples (22% of the analyzed products) were adulterated with pharmaceutical compounds (PDE-5 inhibitors and their analogues) or chemical substances (phenolphthalein).
Adulterated food supplements are a real health threat for consumers, due to the misleading labelling ("100% natural products") and the undeclared pharmaceutical substances hidden in their composition. The consumption of these adulterated herbal food supplements can seriously harm health, as PDE-5 inhibitors and their analogues (the most frequent adulterants in sexual enhancement products) could interact with nitrate-based drugs and result in side effects or adverse reactions. As adulteration could affect not only the safety of end products but also the raw materials used by manufacturers, fast screening spectroscopic methods could have further applications in the market control and surveillance of natural products. In this study, the detection of adulterants using spectroscopic methods (Raman and Fourier Transform Infrared) was confirmed by gas chromatography coupled with mass spectrometry (GC-MS). The minimal sample preparation required by the spectroscopic methods, the very short analysis time (about 5 minutes per sample) and the minimal costs (no special reagents are needed) are the main advantages when a large number of samples have to be screened. Thus, spectroscopic methods proved to be a very useful tool for the control and surveillance of the herbal food supplement market. The screening of products suspected of being adulterated can be performed rapidly. As soon as non-compliant products are identified, the adulterants can be detected by the GC-MS technique, which proved to be a very sensitive key method able to confirm the spectroscopic results. The preliminary results obtained by the screening of 50 herbal food supplements with sexual enhancement claims highlighted the necessity for an accurate analysis of all alleged herbal aphrodisiacs commercialized on the Romanian market, due to their potential risk profile.
4,138.4
2017-06-01T00:00:00.000
[ "Chemistry", "Medicine" ]
High‐Efficiency Photo‐Induced Charge Transfer for SERS Sensing in N‐Doped 3D‐Graphene on Si Heterojunction Nitrogen‐doped three‐dimensional graphene (N‐doped 3D‐graphene) is a graphene derivative with excellent adsorption capacity, large specific surface area, high porosity, and good optoelectronic properties. Herein, N‐doped 3D‐graphene/Si heterojunctions were grown in situ directly on silicon (Si) substrates via plasma‐assisted chemical vapor deposition (PACVD), making them promising candidates for surface‐enhanced Raman scattering (SERS) substrates. Combined with theoretical simulations, the analyses show that incorporating N atoms into 3D‐graphene increases the electronic state density of the system and enhances the charge transfer between the substrate and the target molecules. The enhancement of the optical and electric fields benefits from the stronger light‐matter interaction provided by the natural nano‐resonator structure of N‐doped 3D‐graphene. The as‐prepared SERS substrates based on N‐doped 3D‐graphene/Si heterojunctions achieve ultra‐low detection limits for various molecules: 10−8 M for methylene blue (MB), 10−9 M for crystal violet (CRV), and 10−10 M for rhodamine (R6G). In a practical test, 10−8 M thiram was precisely detected in apple peel extract. The results indicate that SERS substrates based on N‐doped 3D‐graphene/Si heterojunctions have promising applications in low‐concentration molecular detection and food safety. Introduction Surface-enhanced Raman scattering (SERS) for chemical detection has the advantages of being fast, convenient, non-contact, reliable, and highly sensitive, and it has been widely applied in food safety, environmental monitoring, medical diagnosis, etc. [1][2][3] The enhancement mechanism of SERS originates from two aspects: the electromagnetic mechanism (EM) and the chemical mechanism (CM). [6][7] CM is generated by charge transfer (CT) between the substrate and the probe molecule. However, the development of SERS has been limited by the reliance on various noble metal nanoparticles (such as Au and Ag). The apparent lack of horizontal gap control in noble metal nanoparticles leads to signal fluctuations, resulting in poor spatial repeatability of molecular detection. In addition, the high chemical activity of noble metals inevitably affects the long-term stability and reproducibility of SERS substrates. [8] Therefore, it is still critical to construct high-sensitivity and high-reliability SERS substrates. Based on CM-dependent SERS enhancement (CT between probe molecules and the substrate), 2D-graphene and its derivatives have been used as SERS substrates for molecular detection. [11] N-doped 3D-graphene is assembled by growing N-doped 2D-graphene perpendicular to the substrate, and it has unique properties due to its novel 3D structure.
[12] The large specific surface area of N-doped 3D-graphene provides abundant adsorption sites for probe molecules. [15][16] Moreover, incorporating N atoms is beneficial to increasing the electronic state density of the system and enhancing the CT between the substrate and the target molecules. N-atom doping technology can open the graphene bandgap, effectively promoting the generation of CT between energy levels. Therefore, N-doped 3D-graphene interacts more strongly with incident light than 2D-graphene and can generate more photogenerated carriers. These unique properties suggest that N-doped 3D-graphene/Si will provide ideas for the structural design and multimodal molecular detection of SERS substrates. [17,18] In this work, we fabricated an N-doped 3D-graphene/Si heterojunction as a SERS substrate by a one-stage process, with excellent adsorption capacity, large specific surface area, high porosity, and good optoelectronic properties. Through density functional theory (DFT) calculations and scanning Kelvin probe microscopy (SKPM) analysis, and based on the unique physical/chemical features of N-doped 3D-graphene, the electron interaction between the target molecules and the substrate is shown to be enhanced, improving the charge transfer effects in the heterojunction. This SERS substrate is highly stable, ultrasensitive, inexpensive, convenient, and reusable for the detection of various molecules. The detection results show that the substrate is highly sensitive to various molecules: the lowest detection limits measured using a 532 nm laser were found to be 10−8, 10−9, 10−10, and 10−9 M for methylene blue (MB), crystal violet (CRV), rhodamine (R6G) and thiram, respectively. These results indicate that the N-doped 3D-graphene/Si substrate has broad application prospects in molecular detection and food safety.
Characterization of Structure and Optical Properties of N-Doped 3D-Graphene/Si Heterojunction
The cross-sectional scanning electron microscopy (SEM) image of N-doped 3D-graphene/Si is shown in Figure 1a, indicating a vertical heterojunction structure. The surface topography of N-doped 3D-graphene/Si is described in Figure S1, Supporting Information. The wettability of N-doped 3D-graphene was compared with that of 3D-graphene and pure Si, verifying the vertical features of N-doped 3D-graphene, as shown in Figure S2, Supporting Information. The 3D atomic force microscopy (AFM) image (Figure 1b) displays the morphological features of N-doped 3D-graphene, demonstrating the vertically aligned nature of N-doped graphene on the Si substrate. The height of N-doped 3D-graphene was measured along the dashed line in Figure 1b, as shown in Figure S3, Supporting Information, indicating a uniform structure with an average height of about 580 nm. The 3D AFM image and height distribution of 3D-graphene/Si are presented in Figure S4, Supporting Information. The Raman spectra of N-doped 3D-graphene compared with intrinsic 3D-graphene are shown in Figure 1c. Intrinsic graphene has three distinct peaks: the D peak (≈1350 cm−1), the G peak (≈1580 cm−1), and the 2D peak (≈2700 cm−1). It is worth noting that the G and 2D peaks of N-doped 3D-graphene appear blue-shifted and red-shifted, respectively, which originates from the introduction of N atoms. [19] The energy-dispersive X-ray spectrometry (EDS) spectrum in the inset of Figure 1d shows the presence of C, N, and Si elements. The cross-sectional EDS elemental maps in Figure 1d confirm the uniform N doping of the 3D-graphene. The chemical states of N-doped 3D-graphene are shown by the high-resolution XPS spectra of C 1s (Figure 1e) and N 1s (Figure 1f). The high-resolution C 1s spectrum can be fitted with three peaks at ≈284.6, ≈285.5, and ≈289.3 eV, corresponding to C=C, C=N, and C-N, respectively, [20] as shown in Figure 1e. Meanwhile, the high-resolution N 1s spectrum can be decomposed into three components at ≈398.3, ≈400.8, and ≈403.5 eV, as shown in Figure 1f, indicating the presence of pyridinic N, pyrrolic N, and graphitic N. [21,22]
These results prove that N atoms have been successfully bonded into the graphene lattice. The approximately 80% porosity of N-doped 3D-graphene gives it a large specific surface area (Figure 1g), which endows it with the ability to adsorb a large number of probe molecules. In contrast, the porosity of 3D-graphene is only 70%, as shown in Figure S4, Supporting Information. The interaction of light with the N-doped 3D-graphene/Si substrate was explored using finite-difference time-domain (FDTD) simulations. Details about the calculation method are presented in Supporting Information. Figure 1h exhibits the normalized power loss density distribution of the N-doped 3D-graphene/Si substrate. The power loss density distribution along the vertical direction is plotted in Figure S5a, Supporting Information, implying that the light absorption occurs inside the graphene and that the maximum absorption positions are in the top 3D-graphene. Furthermore, the normalized local electric field distribution of the N-doped 3D-graphene/Si substrate was simulated in 3D views, as shown in Figure 1i. The normalized local electric field distribution along the vertical direction is plotted in Figure S5b, Supporting Information, suggesting that the electric field enhancement is located in the graphene region. It can be estimated that the enhancement factor based on the electric field enhancement of N-doped 3D-graphene is 5.4 × 10². The detailed analysis and calculation process are given in the Supporting Information. The enhancement of the optical and electric fields benefits from the stronger light-matter interaction provided by the natural nano-resonator structure of N-doped 3D-graphene. The electromagnetic mechanism of N-doped 3D-graphene mainly originates from the nanogap structure; local surface plasmon resonance is caused by the high porosity and sharp edges of N-doped 3D-graphene. Furthermore, the Fermi level of graphene is regulated by the doping, so more active CT occurs in the system. Therefore, N-doped 3D-graphene-based SERS substrates exhibit high sensitivity and large enhancement factors.

Explore the Charge Transfer Mechanism by DFT Calculation of Two Structures
The intense light absorption of 3D-graphene provides the charge basis for photoinduced charge transfer (PICT). Typically, the efficiency of the PICT transition strongly depends on the intensity of vibronic coupling in the substrate-molecule system. [23,24]
To investigate the vibronic coupling between the probe molecule and the different graphene systems, we adopted first-principles density functional theory (DFT) to calculate the electronic density of states (DOS) of the two graphene systems. The results are displayed in Figure S6, Supporting Information. To further explain why the substrate composed of N-doped 3D-graphene/Si shows higher SERS enhancement than 3D-graphene/Si, the electrostatic potentials along the Z-direction of the two systems were examined; they show that the work function of N-doped 3D-graphene/Si is higher than that of 3D-graphene/Si. The work functions of 3D-graphene/Si and N-doped 3D-graphene/Si are 4.576 eV and 4.966 eV, respectively, as shown in Figure 2a. As shown in Figure 2b, the Fermi levels of 3D-graphene/Si and N-doped 3D-graphene/Si are located between the lowest unoccupied molecular orbital (LUMO) (−3.364 eV) and the highest occupied molecular orbital (HOMO) (−5.143 eV) of the R6G molecule, suggesting that CT is easily generated between 3D-graphene and R6G molecules. The Fermi energy level of the N-doped 3D-graphene/Si structure is closer to the HOMO of R6G, promoting the charge transfer between R6G and the SERS substrate and leading to better SERS performance. [25] To investigate the interfacial charge transfer between the 3D-graphene structures and the surface complex, we built R6G & 3D-graphene/Si and R6G & N-doped 3D-graphene/Si models, as shown in Figure 2c,f. Bader analysis was used to calculate the atomic charge transfer so that numerical values for electron transfer could be compared directly. Figure 2d,e shows the front and side views of the charge density difference for R6G & 3D-graphene/Si, and Figure 2g,h shows the corresponding views for R6G & N-doped 3D-graphene/Si. The calculation results indicate that 0.554 e− is transferred from R6G to the 3D-graphene/Si system, and 0.645 e− is transferred from R6G to the N-doped 3D-graphene/Si system. This suggests a significant increase in molecular polarizability, which promotes Raman scattering. [23]
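The level-alignment argument above can be made concrete with a few lines of arithmetic. The following Python sketch is not part of the original work; it simply takes the work functions and R6G frontier-orbital energies quoted above and assumes the Fermi level can be referenced to the vacuum level as the negative of the work function.

```python
# Minimal sketch (not the paper's code): compare the level alignment described above.
# Fermi levels are approximated as -(work function), with the vacuum level at 0 eV.
WORK_FUNCTION = {"3D-graphene/Si": 4.576, "N-doped 3D-graphene/Si": 4.966}  # eV
R6G_HOMO, R6G_LUMO = -5.143, -3.364  # eV, values quoted from Figure 2b

for substrate, wf in WORK_FUNCTION.items():
    e_fermi = -wf  # eV relative to vacuum
    inside_gap = R6G_HOMO < e_fermi < R6G_LUMO
    gap_to_homo = e_fermi - R6G_HOMO  # smaller value -> easier molecule-to-substrate CT
    print(f"{substrate}: E_F = {e_fermi:.3f} eV, "
          f"between HOMO and LUMO: {inside_gap}, "
          f"E_F - HOMO = {gap_to_homo:.3f} eV")
```

Running this places the Fermi level of the N-doped system roughly 0.18 eV above the R6G HOMO, versus roughly 0.57 eV for the undoped system, which is the quantitative content of the statement that the N-doped substrate lies "closer to the HOMO" of R6G.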
Investigate the Charge Transfer Mechanism by SKPM and CAFM
To demonstrate that the SERS performance of the N-doped 3D-graphene/Si substrate can also be enhanced via the CM, SKPM and conductive AFM (CAFM) were used to show that carrier transfer occurs between the probe molecules and the substrate. This work used a solution of the fluorescent dye molecule R6G for this purpose. Figure 3a,b depicts the surface potential distribution of N-doped 3D-graphene/Si without and with adsorbed R6G molecules, measured by SKPM. The surface potential increased significantly after R6G adsorption, indicating highly efficient CT between the R6G molecules and N-doped 3D-graphene/Si. The average surface potential of N-doped 3D-graphene/Si increased from 240 mV before to 600 mV after the adsorption of R6G molecules, as shown in Figure 3c. To further investigate the charge transfer process, different molecules (R6G, CRV, and MB) on N-doped 3D-graphene/Si were characterized by SKPM, as shown in Figure S7, Supporting Information. The surface potential increased significantly after R6G, CRV, and MB adsorption, indicating highly efficient charge transfer between the dye molecules and N-doped 3D-graphene/Si. CAFM was used to study the changes in the current map of R6G molecules adsorbed on N-doped 3D-graphene/Si under dark and light conditions, as shown in Figure 3d,e. The average surface current of N-doped 3D-graphene/Si increased from 2.8 nA in the dark to 8 nA under illumination, as shown in Figure 3f. The current under illumination is much larger than in the dark, indicating that PICT occurs. A photoelectric response test based on the N-doped 3D-graphene/Si structure was conducted to better understand the PICT process. Figure S8, Supporting Information, presents the photoresponse measurements of the N-doped 3D-graphene/Si and R6G/N-doped 3D-graphene/Si hybrid photodetectors. The probe solution was added onto the SERS substrate for each SERS detection. Taking the R6G molecule as an example, R6G molecules uniformly cover the surfaces of 3D-graphene/Si and N-doped 3D-graphene/Si by physical adsorption; the adsorption efficiency of N-doped 3D-graphene/Si is higher than that of 3D-graphene/Si due to its larger porosity. [28-30] The enhancement factors of the 3D-graphene/Si substrate and the N-doped 3D-graphene/Si substrate are 1.01 × 10⁴ and 2.78 × 10⁶, respectively. The detailed calculation method is presented in Supporting Information. Compared with the 3D-graphene/Si system, the N-doped 3D-graphene/Si system has a more abundant electronic density of states near the Fermi level and more photogenerated charges produced by the optical absorption enhancement, as shown in Figure S10, Supporting Information. It also has the Fermi energy level closest to the HOMO of the molecule. This makes the EF of the N-doped 3D-graphene/Si substrate far greater than that of the 3D-graphene/Si substrate. The Raman spectra of various concentrations of CRV, R6G, and MB on N-doped 3D-graphene/Si substrates are shown in Figure 4e-g, indicating that the characteristic peak intensities of the different probe molecules are positively correlated with their concentrations. CRV, R6G, and MB were detected with detection limits as low as 10−9 M, 10−10 M, and 10−8 M, respectively. In addition, linear fits of the characteristic peak intensity as a function of concentration were performed, as shown in Figure 4h-j. The linear correlation coefficients (R²) of CRV at 1621 and 1177 cm−1 are 0.865 and
0.883; the R² of R6G at 1651 and 1365 cm−1 is 0.976 and 0.985; and the R² of MB at 1626 and 1397 cm−1 is 0.853 and 0.846. These high linearities indicate that the N-doped 3D-graphene/Si substrate can detect low-concentration probe molecule solutions. The Raman spectra of various concentrations of CRV, R6G, MB, and thiram on 3D-graphene/Si substrates are shown in Figure S11, Supporting Information.

The uniformity, stability, and reproducibility of a SERS substrate are the key indicators for evaluating its capability. Therefore, 900 points were randomly selected on the surface of a 1 cm × 1 cm sample of the N-doped 3D-graphene/Si substrate. The intensity of the 2D peak was recorded, and the results are shown in Figure 5a; the corresponding relative standard deviation (RSD) was 5.86% (Figure 5c). The Raman spectra of N-doped 3D-graphene from 0 to 8 months are shown in Figure S12, Supporting Information. Both results show that the N-doped 3D-graphene/Si is stable and changes little over time. Similarly, 900 points were randomly selected on the surface of the R6G & N-doped 3D-graphene/Si substrate, recording the intensity at 1651 cm−1; the results are shown in Figure 5b. The RSD was 12.79% (Figure 5d), showing that the SERS signal undergoes only small fluctuations. We tracked the detection capability of the substrate over a long period, as shown in Figure 5e-g. The results show that the ability of the substrate to detect these probe molecules did not change with time, which proves that the substrate has ultrahigh stability. Figure 5h shows the 3D AFM surface topography of N-doped 3D-graphene before and after eight ethanol-washing cycles; no change in the vertical structure was observed. Moreover, this also demonstrates the excellent reusability and stability of the substrate. These results show that the N-doped 3D-graphene/Si substrate is a credible SERS platform.

To Explore the Practical Applications of the N-Doped 3D-Graphene/Si Substrate in Thiram Detection
Surface-enhanced Raman scattering detection of pesticide residues in fruits was studied to further demonstrate the practical application of N-doped 3D-graphene/Si substrates in food safety. Thiram, a representative pesticide residue, was dissolved in alcohol and its concentration adjusted. Figure 6a shows the process of SERS detection of thiram. To describe the combination of substrate and adsorbed molecule appropriately and, at the same time, calculate the vibrational spectra properly, we established and optimized the thiram molecular model in Figure 6b and calculated the vibrational frequencies for the optimized structures based on these models. Figure 6c shows the comparison of the experimental values with the calculated Raman spectrum. The calculated Raman frequencies are in good agreement with the experimental values, suggesting that our calculated model is reasonable. [31] Figure 6d shows the Raman spectra of thiram ethanol solutions with concentrations from 10−6 M to 10−9 M measured on N-doped 3D-graphene/Si substrates. Characteristic peaks can still be observed at concentrations as low as 10−9 M.
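The calibration statistics quoted in this and the preceding paragraphs (the map RSD and the R² values of the concentration fits) are straightforward to reproduce from mapped intensities. The sketch below is illustrative only and not the authors' analysis code; the arrays are placeholders standing in for the 900 mapped intensities and a measured dilution series, and the choice of fitting intensity against log10 of concentration is an assumption, since the paper does not state the exact functional form.

```python
import numpy as np

def rsd_percent(intensities):
    """Relative standard deviation of mapped peak intensities, in percent."""
    intensities = np.asarray(intensities, dtype=float)
    return 100.0 * intensities.std(ddof=1) / intensities.mean()

def calibration_r2(concentrations_M, peak_intensities):
    """Linear fit of intensity vs log10(concentration); returns slope, intercept, R^2."""
    x = np.log10(np.asarray(concentrations_M, dtype=float))
    y = np.asarray(peak_intensities, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    y_fit = slope * x + intercept
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Placeholder data standing in for a 900-point Raman map and a CRV dilution series.
rng = np.random.default_rng(0)
map_intensities = rng.normal(loc=1000.0, scale=60.0, size=900)
print(f"RSD of 2D-peak map: {rsd_percent(map_intensities):.2f}%")

crv_conc = [1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4]   # mol L^-1 (hypothetical series)
crv_I1621 = [120, 340, 560, 800, 1010, 1250]       # a.u. (hypothetical intensities)
slope, intercept, r2 = calibration_r2(crv_conc, crv_I1621)
print(f"Calibration at 1621 cm^-1: R^2 = {r2:.3f}")
```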
Furthermore, the residual thiram on the surface of apple peel was extracted with an ethanol solution and was successfully detected at 2.4 × 10−3 mg kg−1, which is lower than the maximum pesticide residue level in food allowed by EU standards (0.1 mg kg−1) (Figure 6e). By linearly fitting the characteristic peak intensities at 1381 and 1507 cm−1, R² was found to be 0.835 and 0.812, respectively, as shown in Figure 6f. These results prove that the N-doped 3D-graphene/Si substrate has high practicability. Compared with the earlier literature, the detection limit of the N-doped 3D-graphene/Si-based SERS substrate is better than that of most graphene composite substrates and other SERS substrates, as shown in Figure S13, Supporting Information.

Conclusions
The N-doped 3D-graphene/Si substrate fabricated via PACVD exhibits excellent adsorption capacity, large specific surface area, and high porosity, which can be attributed to its unique 3D structure. Combined with theoretical simulations, the analyses show that incorporating N atoms into 3D-graphene increases the electronic density of states of the system and enhances the CT between the substrate and the target molecules. The as-prepared SERS substrates are highly stable and maintain their performance after long-term storage. Besides, the reusability of the N-doped 3D-graphene/Si substrate greatly reduces the cost and enables large-scale application. The FDTD simulations show that light absorption occurs inside the graphene and that the maximum absorption position is in the top N-doped 3D-graphene. The additional charge carriers produced in the graphene under incident light also improve the chemical/charge transfer effects in the heterojunction. Hence, SERS substrates based on the N-doped 3D-graphene/Si heterojunction achieve ultra-low detection limits for various molecules. In a practical application, 10−8 M thiram was precisely detected in apple peel extract. This study provides highly stable, low-cost, reusable, ultrasensitive, and novel SERS substrates, which have broad application prospects in low-concentration molecular detection and food safety.

Experimental Section
Materials characterizations and simulation method: Materials characterizations and details of the simulation method can be found in Supporting Information.

N-doped 3D-graphene/Si prepared via PACVD: Si substrates with a size of 1 cm × 1 cm were placed in a PACVD quartz tube for the subsequent treatments (equipment model BTF-1200C-II-AS-PECVD, purchased from Anhui Beyike Equipment Technology Co., Ltd.). The quartz tube was first evacuated to ≈5 Pa and then heated to 750 °C in a mixed atmosphere of 10 sccm argon (Ar, 99.9999% purity) and 1 sccm hydrogen (H2, 99.9999% purity). After reaching the set temperature, the Ar and H2 flows were turned off simultaneously, and 5 sccm of methane (CH4) and 1 sccm of ammonia (NH3) were introduced. The plasma source was then turned on and set to 200 W. After 60 minutes of growth, CH4 and NH3 were turned off, and 10 sccm of Ar was introduced to bring the tube back to atmospheric pressure. The samples were taken out once the temperature of the quartz tube had dropped to room temperature.
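For readers who want the growth recipe above at a glance, the stated parameters can be collected into a simple configuration object. This is a minimal illustrative sketch, not vendor software for the PACVD system; the step names and dictionary layout are my own, while all numerical values come from the paragraph above.

```python
# Illustrative summary of the PACVD growth recipe described above.
# Step names and structure are invented for clarity; values are taken from the text.
PACVD_RECIPE = [
    {"step": "evacuate",  "pressure_Pa": 5},
    {"step": "heat",      "temperature_C": 750,
     "gases_sccm": {"Ar": 10, "H2": 1}},
    {"step": "growth",    "duration_min": 60, "plasma_power_W": 200,
     "gases_sccm": {"CH4": 5, "NH3": 1}},          # Ar/H2 switched off first
    {"step": "vent",      "gases_sccm": {"Ar": 10}},  # back to atmospheric pressure
    {"step": "cool_down", "unload_at": "room temperature"},
]

for step in PACVD_RECIPE:
    print(step)
```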
Probe molecule materials and Raman measurement: CRV, R6G, and MB (Aladdin Reagents, Shanghai, China) were dissolved in ethanol as dye probe molecules. A thiram solution of defined concentration was used as the pesticide probe solution. For the Raman tests, 5 μL of probe solution was dropped onto the substrate using a pipette. A 532 nm laser was used as the excitation source for the SERS tests, with a laser spot diameter of 12.5 μm. The laser power was set to 2.5 mW and the integration time was 10 s.

Simulation method: FDTD simulations were used to calculate the normalized electric field and power loss density distributions of the N-doped 3D-graphene/Si structure. The height of the N-doped 3D-graphene is 580 nm and the wavelength of the excitation source is 532 nm. Details about the simulation method are presented in Supporting Information.

Computational methods: All calculations were carried out using density functional theory (DFT) with the Perdew-Burke-Ernzerhof (PBE) form of the generalized gradient approximation (GGA) functional, as implemented in the Vienna ab initio simulation package (VASP). The energy cutoff for plane-wave expansions was set to 400 eV, and energy convergence to 10−5 eV atom−1 and force convergence to −2 × 10−2 eV Å−1 were set as the criteria for geometry optimization. The 3D-graphene/Si(100) model and the N-doped 3D-graphene/Si(100) model were constructed (see Supporting Information). The Brillouin zones were sampled with gamma-centered Monkhorst-Pack 5 × 5 × 1 k-point meshes for all models. For the slab models, a vacuum space of 15 Å was added along the nonperiodic direction to avoid interaction between periodic images. In addition, the DFT-D3 method was included to improve the description of the long-range weak van der Waals (vdW) interactions in all DFT calculations. The optimizations and Raman spectra of the molecules were carried out with the Gaussian 16 program, using the ωB97XD hybrid functional and the 6-31G(d) basis set, with a Raman frequency correction factor of 0.9490, which makes it easy and accurate to investigate the weak interaction changes between molecules. The DFT-D4 correction provided a satisfactory result and fit well with the experimental findings. The molecular orbital calculations were performed with the Multiwfn 3.8 package. Details about the calculation method are presented in Supporting Information.

Figure 2. a) Electrostatic potential of the 3D-graphene/Si structure and the N-doped 3D-graphene/Si structure, respectively. b) The ground-state charge transfer between R6G and the 3D-graphene/Si structure and the N-doped 3D-graphene/Si structure, respectively. c) The frontal view of the R6G & 3D-graphene/Si model. d) Front view of the charge distributions for R6G & 3D-graphene. e) Side view of the charge distributions for R6G & 3D-graphene. f) The frontal view of the R6G & N-doped 3D-graphene/Si model. g) Front view of the charge distributions for R6G & N-doped 3D-graphene. h) Side view of the charge distributions for R6G & N-doped 3D-graphene.

Figure 3. Investigation of the charge transfer mechanism by SKPM and CAFM. N-doped 3D-graphene/Si surface potential distribution a) before and b) after adsorption of R6G. c) The average surface potential value before and after adsorption of R6G. Current maps of the N-doped 3D-graphene/Si under different conditions: d) in the dark and e) in the presence of light. f) The average value of the currents produced under dark and light conditions.
Figure 4a,b separately shows the Raman spectra of CRV (10−4 M) and R6G (10−4 M) on the N-doped 3D-graphene/Si and 3D-graphene/Si substrates. The experimental values and calculated Raman spectra of these molecules are compared in Figure S9, Supporting Information, and the inset shows the corresponding molecular model. Comparing the characteristic peak intensities of the two molecules (CRV at 1177, 1371, 1585, and 1621 cm−1; R6G at 1187, 1365, 1511, and 1651 cm−1) on N-doped 3D-graphene/Si and 3D-graphene/Si, as shown in Figure 4c,d, indicates that the peak intensities on the N-doped 3D-graphene/Si substrate are almost three times those on the 3D-graphene/Si substrate. [26-30]

Figure 6. a) SERS measurement process using a wavelength of 532 nm. b) The structural model of the thiram molecule. c) Comparison of the experimental values (measured on the N-doped 3D-graphene/Si substrate) with the calculated Raman frequencies of thiram. d) Raman spectra of 10−6 M to 10−9 M thiram measured on the N-doped 3D-graphene/Si substrate. e) Raman spectrum of thiram extracted from apple peel, measured on the N-doped 3D-graphene/Si substrate. f) Intensities of the 1381 and 1507 cm−1 peaks plotted against the thiram concentration.
5,579
2022-12-10T00:00:00.000
[ "Materials Science", "Chemistry", "Engineering", "Physics" ]
p27Kip1 – p(RhoB)lematic in lung cancer‡ Abstract Lung cancer is the leading cause of cancer mortality worldwide, with adenocarcinomas of the non‐small cell lung carcinoma (NSCLC) subtype accounting for the majority of cases. Therefore, an urgent need exists for a more detailed dissection of the molecular events driving NSCLC development and the identification of clinically relevant biomarkers. Even though originally identified as a tumour suppressor, recent studies associate the cytoplasmically (mis)localised CDK inhibitor p27Kip1 (p27) with unfavourable responses to chemotherapy and poor outcomes in NSCLC, supporting the hypothesis that the protein can execute oncogenic activities. In a recent issue of The Journal of Pathology, Calvayrac and coworkers uncover a novel molecular mechanism that can explain this oncogenic role of p27. They demonstrate that cytoplasmic p27 binds and inhibits the small GTPase RhoB and thereby relieves a selection pressure for RhoB loss that is frequently observed in NSCLC. This is supported not only by studies with genetically modified mice, but also through identification of a cohort of human lung cancer patients with cytoplasmic p27 and continued RhoB expression, where this signature correlates with decreased survival. This not only establishes a potentially useful biomarker, but also provides yet another facet of the complex roles p27 undertakes in tumourigenesis. © 2018 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of Pathological Society of Great Britain and Ireland. p27 Kip1 (p27, encoded by CDKN1B) is a CDK inhibitor of the Cip/Kip family which regulates CDK activity and has a vital role in controlling progression through the G1 phase of the cell cycle [1]. Since its discovery, it has been assigned a multitude of additional biological functions including regulation of apoptosis, transcription and cell migration [1,2]. In accordance with its canonical role as a CDK inhibitor, it is well established that loss of p27 can contribute to the development of neoplastic disorders. Studies of Cdkn1b-deficient mice indicated that p27 is haplo-insufficient for tumour suppression. Animals lacking one or both alleles were found to be increasingly susceptible to carcinogen induced tumourigenesis, but the wild type allele was always retained in heterozygous animals [1,2]. This might hint towards a role of p27 in promoting tumour development. In addition, an increasing number of somatic and germline mutations have been identified and uncovered a role of CDKN1B as a tumour susceptibility gene in a number of human neoplasms [3]. Consistent with these observations, low p27 is associated with poor prognosis in various cancer types. Alternatively, p27 can mislocalise to the cytoplasm in human tumours, where it is unavailable for inhibition of nuclear CDKs [1,2]. Interestingly, a number of studies revealed that the role of p27 in oncogenesis is more complex than that of a simple tumour suppressor [1][2][3][4] (Figure 1). For example, tyrosine phosphorylation impairs CDK inhibition by p27 and converts the inhibitor into an activating assembly factor of CDK4,6 that can promote cell proliferation [1] (Figure 1). Additional and CDK-independent oncogenic functions of p27 have been proposed and were substantiated in a knock-in mouse with a mutant p27 that is unable to bind cyclin/CDK complexes (p27 CK− ) and thereby loses its function as a tumour suppressor. 
Strikingly, this p27 CK− allele caused a dominant increase in spontaneous tumourigenesis compared to both wild type and Cdkn1b −/− knockout mice [5,6]. Whereas the complete loss of p27 caused spontaneous tumours only in the pituitary, expression of the p27 CK− allele caused tumours in multiple organs, including the lung [5,6]. In K-Ras induced lung tumours, increased tumour numbers and aggressiveness were associated with increased Ras-induced cytoplasmic localisation of p27 CK−. In contrast, in the presence of c-Myc, p27 CK− remained nuclear and did not contribute to c-Myc induced transformation, suggesting that cytoplasmic localisation might be a prerequisite for the oncogenic activity of p27 CK− [6]. p27 also plays a complex role in lung cancer, displaying both tumour suppressor and oncogenic activities. The growth of human non-small cell lung carcinoma (NSCLC) cell lines carrying K-Ras oncogenes requires interphase CDK activity. In particular, CDK4 was shown to be essential for the progression of K-Ras driven NSCLC [7]. Inhibition of CDK4 by nuclear p27 might be crucial in limiting lung tumourigenesis. Furthermore, cytoplasmic p27 was observed to be associated with a decrease in overall survival and an unfavourable response to cisplatin-based chemotherapy in human NSCLC [8]. In a recent issue of The Journal of Pathology, Calvayrac et al uncovered a novel RhoB (Ras homolog gene family member B) dependent molecular mechanism that can explain an additional tumour-promoting function of cytoplasmic p27 [9]. Loss of RhoB expression is frequent in lung cancer, suggesting that it may have a tumour suppressor function in this tissue [10,11]. The three highly conserved Rho family GTPases RhoA, RhoB and RhoC share common roles in cytoskeletal reorganisation and cell motility [12]. However, RhoB has a unique C-terminus that is not only palmitoylated but can also be geranylgeranylated or farnesylated, which alters its function and localisation. Whereas farnesylated RhoB preferentially localises to the plasma membrane, geranylgeranylated RhoB can also localise to endosomes, multivesicular bodies and the nucleus. In most cancer types RhoA and RhoC are pro-tumourigenic [12]. The influence of RhoB activity on tumour development is less well studied and it appears that the small GTPase can play a two-sided role. Its effect on tumour initiation and progression seems to depend on subcellular localisation and cellular context [12]. Despite often being downregulated in lung cancer, RhoB seems to promote aggressive metastasis and resistance to therapy in lung adenocarcinoma [12]. Based on the observation that p27 can inhibit RhoA activation [1,2], Calvayrac et al speculated that p27 might also be able to bind to a conserved region in RhoB and thereby prevent its activation. They demonstrate that this is indeed the case and proceed to show that p27 inhibits the interaction of RhoB with two RhoGEFs, p115 RhoGEF (ARHGEF1) and Lbc (AKAP13). As initially described for RhoA, this inhibition is also independent of the ability of p27 to bind CDKs and involves the C-terminal eight amino acids of p27. These biochemical and cell biological observations were expanded with genetic evidence from mouse models, where Calvayrac and co-workers ascertained that p27 and RhoB are linked in lung tumourigenesis. They speculated that the inhibition of RhoB by p27 might abrogate the selective pressure for RhoB loss in lung cancer.
In support of this hypothesis, they observed that RhoB expression was preferentially lost in p27 knockout animals (64%), whereas RhoB remained more frequently expressed in mice expressing p27 or the p27 CK− allele (40 or 31%, respectively). In addition, and consistent with their model, Calvayrac et al also observed that absence of RhoB enhanced the mean tumour size in p27 −/− animals, but had no effect on tumour number or size in p27 CK− mice. This documents that at least some oncogenic activity of p27 is based on its inhibition of RhoB. It is somewhat unexpected that combined deletion of RhoB and p27 leads only to significantly increased tumour volumes compared to p27 deletion alone, but not to significantly increased mean tumour numbers. One would have expected that, in the absence of p27, additional loss of RhoB would have a cumulative effect, if the tumour suppressor activities of the two proteins act on independent pathways. The lack of significantly increased tumour numbers indicates that inhibition of RhoB by p27 is surprisingly less important for tumour formation than for tumour growth. The mouse model was based on urethane-induced tumours that frequently involve activating mutations in K-Ras (Q61 to R/L). Activated K-Ras causes a fraction of p27 or p27 CK− to localise to the cytoplasm [13], potentially enhancing its interaction with RhoB. The susceptibility of mice to urethane-induced carcinogenesis depends on genetic background, with C57BL/6J mice being resistant to carcinogen-induced lung tumourigenesis [14]. As p27 −/− and RhoB −/− transgenics were originally in different strains, Calvayrac et al generated a double heterozygous F1 generation in a mixed C57BL/6J/129S4 background. The F2 hybrids used in the experiments will carry some genomic heterogeneity that might also contribute to differences in the susceptibility to urethane-induced tumourigenesis. However, a third independent approach further corroborated the link between cytoplasmic p27 and maintained RhoB expression, as it was observed in a patient cohort that cytoplasmic p27 and RhoB immunostaining associated with poor clinical outcomes, and that RhoB was preferentially lost in lung tumours that do not express p27. Together, these findings are exciting and open up the prospect that p27 and RhoB staining could be used as a biomarker in human lung cancer. It remains to be determined if the lung cancer promoting activities of p27 CK− are solely due to RhoB inhibition. This could, for example, be addressed by a knock-in model containing p27 CK− that cannot bind to RhoB. This may be challenging, since recent studies indicate that the interaction of p27 with the related RhoA protein may involve additional factors that bind to the p27 C-terminus and a low affinity direct interaction of RhoA only with the p27 N-terminus [15]. RhoB influences multiple cellular pathways including apoptosis, DNA damage response, cell cycle progression, migration and invasion. It will be interesting to determine the functions that are crucial for the tumour suppressive role of RhoB in NSCLC. While this study established a role of the RhoB/p27 axis in lung cancer, it will be important to elucidate if this mechanism also contributes to tumourigenesis in other organs and to uncover the crucial pathophysiological pathway affected by the RhoB/p27 axis.
2,227.4
2019-02-04T00:00:00.000
[ "Medicine", "Biology" ]
Autonomous pick-and-place using the dVRK

Purpose Robotic-assisted partial nephrectomy (RAPN) is a tissue-preserving approach to treating renal cancer, where ultrasound (US) imaging is used for intra-operative identification of tumour margins and localisation of blood vessels. With the da Vinci Surgical System (Sunnyvale, CA), the US probe is inserted through an auxiliary access port, grasped by the robotic tool and moved over the surface of the kidney. Images from the US probe are displayed separately from the surgical site video within the surgical console, leaving the surgeon to interpret and co-register the information, which is challenging and complicates the procedural workflow. Methods We introduce a novel software architecture to support a hardware soft robotic rail designed to automate intra-operative US acquisition. As a preliminary step towards complete task automation, we automatically grasp the rail and position it on the tissue surface so that the surgeon is then able to manually manipulate the US probe along it. Results A preliminary clinical study, involving five surgeons, was carried out to evaluate the potential performance of the system. Results indicate that the proposed semi-autonomous approach reduced the time needed to complete a US scan compared to manual tele-operation. Conclusion Procedural automation can be an important workflow enhancement functionality in future robotic surgery systems. We have shown a preliminary study on semi-autonomous US imaging, and this could support more efficient data acquisition.

Introduction
Developments in robot-assisted laparoscopic surgery have enabled highly dexterous instrument manipulation with enhanced ergonomics, which facilitates precise movement within the anatomy without direct access to the surgical site. This has led to significant growth in robotic surgery as an alternative to traditional laparoscopic surgery [1]. Yet despite the increased uptake of surgical robotics, automation is not currently available in clinical practice due to significant technical difficulties in robust robot perception and control within soft-tissue anatomical areas, and also regulatory considerations [2]. Robotic-assisted partial nephrectomy (RAPN) is one of the most common robotic-assisted surgical procedures [1]. Removing tumours while retaining healthy tissue has been shown to maximise the patient's post-operative kidney functions [3]. Intra-operative US imaging in RAPN supports healthy tissue preservation, but manual control of the US probe significantly complicates surgical workflow [4]. As shown in Fig. 1, the endoscopic view of the surgical site and the US image are shown in the surgical console without any co-registration, and the surgeon must interpret multi-modal information and retain it after US probe manipulation for clinical decisions.

Fig. 1 Surgeons visualise the endoscope output on top and the US image on the bottom; they mentally compute the registration between the two images. Right: model representation of the system design; the suction rail is placed on the kidney surface, and the model of the US probe is equipped with an adaptor to slide it along the rail.

Computer-assisted interventions (CAIs) in RAPN have focused on enhancing surgical navigation and improving the management of US imaging probes or the fusion of endoscopic, US and CT modality data. This has been approached from both the image processing and understanding perspective, where US information can be used to infer information
such as tissue deformation [5], and from an information registration angle, co-registering multiple modalities [6]. Hardware solutions have also been proposed to guarantee repeatable grasping of the US transducer by augmenting the probe design [7,8] and subsequently detecting vessels in the US image and registering them to pre-operative CT data. Minor probe modifications have been used to add fiducial markers for automatic detection in the endoscopic image and estimation of the US pose for image overlay [9], although this work does not consider the renal cortex's curvature. Autonomous US scanning for tumour identification has also been reported [10] that considers the curvature of the tissue's surface and possible physiological motion within it, approximated with a periodic model. More recently, a similar approach has been extended to work under free-form motion, where the US scanning trajectory is manually defined and continuously updated to follow intra-operative tissue motion [11]. Most CAI approaches so far have relied on the current clinical protocol for intra-operative US scanning, where the laparoscopic tool freely moves over the scanning surface. In this paper, we present a new framework for automated localisation and placement of a pneumatically attachable flexible (PAF) rail [12,13] using the da Vinci Research Kit (dVRK). This is an incremental but novel step towards assisted US imaging during RAPN to advance surgical workflow beyond manual US probe management. More specifically, the paper proposes a new platform architecture, an algorithm and a pre-clinical user study with five surgeons. This has the following specific contributions:
• Trajectory generation using dynamic time warping motion planning;
• A control scheme based on visual servoing using endoscopic images during the pick-and-place of the rail;
• A comparison between two different surface registration techniques applied to ex vivo porcine kidneys [14];
• A pre-clinical study of system performance with five surgeons comparing semi-autonomous and manual execution of the same task.
Albeit a preliminary study, we investigate and compare the behaviour of expert surgeons and novices in their use of the device and their experience of the algorithm and workflow for automation.

Platform configuration
The dVRK system was used as the surgical robot underpinning our experiments. One of the patient side manipulators (PSM1) is equipped with a large needle driver (LND), while the other (PSM2) holds the Pro-Grasp tool. Figure 2 shows the set-up overview alongside the PAF rail, the two PSMs, the stereo endoscope and the porcine kidney. The black rail, presented by the authors in previous works [12,15], is attached to the kidney surface using a series of bio-inspired vacuum suckers. It is used as a guide on which the surgeon engages and slides the drop-in US probe (3D printed model) to identify vessels and resection margins.

Software configuration
A custom ROS architecture was developed embedding the different software algorithms, shown on the right part of Fig. 2. To enhance the accuracy of the system, a calibration process was performed as a first step (described in the "System calibration" section).
The robot calibration involves making minor adjustments to kinematic model parameters to account for factors like manufacturing tolerances, in order to increase model accuracy.

Fig. 2 On the bottom left, the ex vivo porcine kidney is shown with the two robotic tools, the stereo endoscope, the black sucker rail and the drop-in US probe. Four ink markers used for surface registration are highlighted on the kidney surface. Orthogonal clockwise reference frame systems are denoted by "/". The top left of the figure shows two frames from the left and right cameras of the stereo endoscope. On the right side, the ROS node architecture is summarised according to the different methodologies described in the labelled sections.

The PSMs are characterised by set-up joints and active joints. It is worth noticing that the instrument's tip accuracy is generally more sensitive to small angular errors in the base joints than in the more distal ones. Considering this, an approach similar to [14] was followed, attaching the base coordinate frame at the beginning of the active joints. The workspace calibration is defined by the transformation between the workspace and the arm's remote centre of motion. A vision node ("Rail detection and surface acquisition for kidney registration" section) was necessary to deal with the rail tracking and the kidney surface registration. The control scheme node ("Dynamic time warping trajectory planning and control features" section) was introduced to accomplish safety and accuracy requirements.

Notation
Scalars are represented by plain letters, e.g. λ, and vectors by bold symbols, e.g. x. Orthogonal clockwise reference frames are defined with the notation /, e.g. /ws. A 3D point in Cartesian space is expressed through the vector of its components, e.g. [x_P, y_P, z_P].

System calibration
The fiducial localisation error (FLE) quantifies how accurately spatial data points can be measured during image guidance [16], and in this work it was quantified for the stereo endoscope and for the two PSMs. The FLE is estimated by calculating the average of the measured distance values, in terms of Cartesian position, between the localised points and the known checkerboard dimensions following co-registration. The acquisition of the localised points will be explained in the experiments section. The FLE can be mathematically formulated as

FLE = (1/n) Σ_{i=1}^{n} ‖ p_i^l − p_i^k ‖,

where n represents the number of selected points, and p_i^l and p_i^k are respectively the Cartesian coordinates of the localised and known points.

Rail detection and surface acquisition for kidney registration
The PAF rail has a fiducial represented by a checkerboard composed of squares with a side length of 1 mm, used to estimate its pose. The pattern could easily be replaced with any other clinically compatible solution available on the market [17]. The checkerboard tracking and stereo triangulation functions from the MATLAB calibration toolbox were used to determine the location of the rail inside the workspace.

Algorithm 1 Kidney Registration
1: ink-surgical markers definition: 4 markers with a known geometry.
2: procedure PSM-probing 3: while m < markers do 4: Localise tool tip on the m-marker 5: Acquire q joints values 6: r = DK(q) DK: Direct Kinematics 7: end while 8: return r 9: end procedure 10: procedure ECM ink-markers 11: Color thresholding stereo pairs in HSV color space 12: k-means clustering 13: Centroids Triangulation 14: end procedure Knowing the rail's geometrical model, it is possible to compute the location of the grasping site using perspectiven-point (PnP [18]) pose estimation and forward driving the trajectory of the kinematics to grasp. Registration of kidney soft tissue for image guidance is important since the kidney surface represents the target structure for positioning the image guidance rail. Our experimental set-up follows a previous registration comparison [14] for phantom models adapted to a porcine kidney. Two methods were analysed as shown in the Algorithm 1 formulation: PSM-probing and ECM ink-markers. The PSM-probing procedure returns r , which is the probed Cartesian position. The results coming from each method are compared in terms of distances between the real makers representing the ground truth. Dynamic time warping trajectory planning and control features To automatically place the rail on the target position with the robotic tool, two main tasks are needed: generate a trajectory to position the rail and develop a control strategy to optimise the operational performance. Trajectory generation We separate the pick and place task in four stages which are shown in Fig. 3a. STEP I, the tooltip starts from the home position defined in the 3D space by [x H , y H , z H ]. STEP II, the robotic tool approaches the grasping site in the central part of the rail. [x G , y G , z G ] is the Cartesian position of the grasping point coming from the rail detection through the stereo camera and successively triangulate in the 3D space. STEP III, the tip moves back to the predefined home position. Finally, in STEP IV, the tool holding the rail moves towards the kidney surface to reach the target point [x T , y T , z T ]. The Cartesian position of the target point is computed as the centroid of the bounding box generated by the four ink markers. Dynamic time warping (DTW) [19] was used to estimate the path followed during the transition phases among the described steps: 10 different repetitions of the same locating task were executed in tele-operation by a trained operator. During these procedures, both the Cartesian and the joints values were acquired using the software framework of the dVRK (Fig.3b-left image). Considering two Cartesian trajectories at the time t j and t k , dynamic time warping between them can be formally defined as a minimisation of the cumulative distance over potential path between two time series elements, as shown in the following equation: where w i indicates a point (j,k) identifying one element from t j and one from t k which are aligned. w i represents each element of the matrix W defined as distance matrix (DM). The DM has the dimension of the element of t j times the element of t k , indicated with P in the equation. The values inside each cell of the DM are computed as: where T j and T k are two respective elements from t j and t k and D(·,·) represents the values of the previous computations. The procedure is then iterative replicate for the 10 trajectories. During STEP II, it is important to guarantee the correct orientation of the tool in order to achieve a solid grasp. 
This can be done tuning the last three joints of the PSM arm that are indicated with [q 4 , q 5 , q 6 ]. The joints values have been filtered and averaged in order to define the final value. To account for uncertainties and minor errors, some other control features were added to enhance the performance of driving the tool. Starting from the estimation of the grasping point: x G defines the position of the first target motion. This point is reconstructed in 3D space starting from the stereo pairs. In the dVRK, the baseline between the two cameras is only few millimetres and this reflects consistent uncertainty in the depth estimation, which are also enlarged by the small dimensions of the rail's fiducial. Once the depth component of the grasping position is estimated, it is then compared with the respective component of the kidney surface. Since the two objects are located on the same table surface, their depth estimation cannot differ more than the thickness of the kidney itself. This works as a safety initial control to ensure that the rail tracker is working correctly. Control strategy A further control policy based on visual servoing was added to enhance the performance of manipulating the tool, starting from STEP II. The system does not present any tool tracking node, but once the relationship between the rail and the tool is geometrically established, after the grasping phase, it is possible to infer the same information. The position of the tooltip is extracted dynamically and transposed in the /ws and compared to the position acquired through the dVRK and transposed in the same space. This error function is then minimised while proceeding to the next step. During STEP IV, an additional control measure is added in order to be sure that at the end of the task the sucker rail is located parallel to the kidney surface. This control was implemented comparing the known position of the rail and the registered kidney surface at the end of the task. Data acquisition for calibration Transformations between images and the robot coordinates were computed and the accuracy of the tooltips' position was examined through experiments. Forty-five image pairs of a 7 row by 10 column checkerboard acquired from different endoscopic poses were used as input for the stereo calibration ( Fig. 2-Stereo Calibration). Then, the MAT-LAB toolbox [20] was used for this step, which first solves the intrinsic and extrinsic parameters of the camera in a closed form considering zero lens distortion and as second step estimates all parameters simultaneously including the distortion coefficients using nonlinear least-squares minimisation. Seven additional image pairs of the checkerboard were acquired in order to determine the pose of the left camera inside the workspace. The corner intersections of the checkerboard have been extracted from these frames, and point-registered with the known dimensions [21] (Fig. 2-Left Camera Pose Estimation). For the two PSMs, equipped, respectively, with the LND and the Pro-Grasp, the FLE was characterised carefully probing 20 intersection points in a 3D printed checkerboard (side length 10 mm) for each of the tools (Fig. 2-Workspace Calibration). Every time a point in the grid was touched, the robot encoder's values were recorded and used in the forward kinematic model of the dVRK to localise the 3D Cartesian position. The points were then projected back using the known transformation, and compared with the checkerboard dimensions, this procedure was repeated for each PSMs. 
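A compact way to see how the probed points described above turn into an FLE value is sketched below. This is an illustrative Python snippet, not the authors' dVRK or MATLAB code; the point arrays are hypothetical stand-ins for the 20 probed checkerboard intersections and the known grid coordinates, and the function simply implements the averaged Euclidean distance defined in the "System calibration" section.

```python
import numpy as np

def fiducial_localisation_error(localised_pts, known_pts):
    """Mean Euclidean distance between localised and known 3D points (the FLE)."""
    localised_pts = np.asarray(localised_pts, dtype=float)  # shape (n, 3)
    known_pts = np.asarray(known_pts, dtype=float)          # shape (n, 3)
    distances = np.linalg.norm(localised_pts - known_pts, axis=1)
    return distances.mean(), distances.std()

# Hypothetical example: a 4 x 5 grid of checkerboard intersections with 10 mm pitch,
# standing in for the 20 probed points, plus small simulated probing errors.
xx, yy = np.meshgrid(np.arange(4) * 10.0, np.arange(5) * 10.0)
known = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)])  # mm
rng = np.random.default_rng(1)
probed = known + rng.normal(scale=0.6, size=known.shape)              # mm

fle_mean, fle_std = fiducial_localisation_error(probed, known)
print(f"FLE = {fle_mean:.2f} ± {fle_std:.2f} mm")  # compare with 1.10 ± 0.58 mm (PSM1)
```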
Lastly, the FLE components for the stereo endoscope were taken as a difference between the localised points inside the frame and the known checkerboard dimensions following co-registration. Task automation A preliminary test of the repeatability of the task is run to validate that the kinematics can be used for guiding the robot. The rail is positioned in the field of view of the endoscope and the LND tooltip starts from the home position. The task is completed when the tool has grasped the rail and precisely placed it on the kidney surface and headed back to the home position. The whole procedure is repeated for 6 times with the same initial conditions. The experiment is defined as follows: the rail is deployed in the field of view of the stereo endoscope without a pre-defined orientation to emulate the clinical protocol. During laparoscopic surgery, external devices are inserted inside the patient, "dropping" them via an auxiliary trocar. If the rail reaches an upside-down position, it is relocated by the assistant using the suction pipe so that the marker is always visible. Once the system is detected inside the endoscope field of view, the robot starts its motion grasping the target and locating it on the organ surface following the pre-planned trajectory. The automated part of the task is considered concluded when the rail is effectively in suction with the kidney itself and the surgeon can start the tele-operated sliding of the probe. A dataset of 40 acquisitions has been recorded to test the overall architecture. Pre-clinical user study Five surgeons took part in the acquisitions, with different years of experience in RAPN. The surgeons were not allowed to familiarise with the system before the testing and no instructions were given on how to execute the task in order to minimise their bias. Furthermore, all the participants conduct the study independently. During the acquisitions, they had to complete three main tasks described as follows: Location: the surgeon grasps the rail and place it over the kidney surface, ensuring that the suction line is firmly attached. The Sliding task follows: once the rail is in place the surgeon has to pair the probe adaptor with it and complete a full slide back and forth, concluding the task removing the probe from the rail. In the last task, Kidney Motion, the rail is grasped while paired with the kidney surface and used to move the kidney with circular movement in respect to the main longitudinal axis as can be visualised in Fig.4 on the right. Each task was repeated three times and the variables measured are: execution time T exe in minutes, success rate SR represented by a fraction where the denominator shows the number of attempts needed before succeeding the task, difficulty in using the system DF scored from 1 to 10, and how much they were willing to use the system in real clinical practice WU scored from 1 to 10. Calibration and kidney registration A quantile-quantile (QQ) plot (Fig. 5) was used to characterise the FLE distribution. The mean and the standard deviation obtained of the FLE magnitudes are the following: for PSM1 1.10 ± 0.58 mm, PSM2 4.33 ± 0.78 mm and ECM 0.93 ± 0.56 mm. Notably, the value related to the PSM2 equipped with the Pro-Grasp is significantly higher than the one with the LND. During the probing procedure, the nominal DH parameters provided with the intuitive surgical API were used for both the robotic tools. In the case of the Pro-Grasp, the probing procedure results are inaccurate due to the hardware design. 
The rounded shape of the Pro-Grasp tip makes difficult to isolate the same precise point to guarantee a repetitive probing, while the design of the LND allows for more exact and accurate acquisitions. Based on these outcome values, it has been decided to use the LND as grasping tool during the experimental acquisition instead of the Pro-Grasp, although the rail has been designed for that particular tool. Regarding the kidney registration, the accuracy is quantified by the error in the markers reconstruction for each method in terms of Euclidean distance between the markers themselves. Given the ground truth of 50 mm, the value obtained from surface tracking with the PSM tip is 51.23 ± 0.44 mm, while with the ECM triangulation of surgical ink markers is 54.10 ± 0.88 mm. Unsurprisingly, given the small baseline 5.4 mm of the stereo camera in the da Vinci endoscope, localisation registration with PSMs was more accurate than with endoscope-based technique. These results do not affect the experiment negatively since probing techniques can be potentially computed in real surgical environments. Automation results Results from the repeatability test are shown in Fig 5. The box plots show the mean values of the position estimated during y.o.e stays for "years of experience" in robotic surgical operation. The execution time is reported in minutes increased baseline. This would improve the accuracy of the results but move us a step further away from an environment more similar to the surgical one. Fig. 6 left side shows the results coming from the experiments. The success rate is of 35 acquisitions over 40, while in the remaining 5/40 the tooltip is not able to reach the rail. These failed executions can be attributed to the inaccuracy associated with the tracking and reconstruction of the rail in the 3D space. Additionally, in 4/40 acquisitions the task was correctly completed but with some clear error in the pose estimation of the rail (highlighted with the yellow circle in the Fig. 6 on the right side). As a matter of fact, the reconstructed position appears to be on the correct plane but parallel compared to the real one. When the tool tries to grasp the rail, it generates a small sliding movement on the plane, due to the fact that the reconstructed position appears to be further down compared to the robot reference frame. In this case, using external cameras with increased baseline would improve the success rate of this experiment. The average time among all the acquisitions to execute the autonomous part was 42 seconds. Table 1 reports the results obtained from the surgeons during the tests. Values are shown as the mean values among the three different repetitions of the same task. The different surgeons are represented by "S" followed by a number. Comparing the execution times between the automated task and the one executed by surgeons, it is possible to see how T exe increases by an average of 85 ± 109 s among all the executions. Discussion and conclusion In this paper, we have reported preliminary experiments showing that automatic positioning of a PAF rail is possible by combining motion planning and visual servoing. Our framework can be used to place the PAF rail onto tissue by autonomous instrument motion following a planned trajectory and subsequently the rail can be used to manoeuvre a US probe. We implemented and compared calibration accuracy of this approach with two different dVRK instruments. 
Experimentally our pre-clinical case study showed surgeons' interacting with automation of procedural sub-tasks. Results highlighted the need to build inherent user flexibility and make the system compatible with every tool, not only the LND and Pro-Grasp, meaning that design process for the rail system and handles can be improved. The proposed solution, although in a preliminary stage, showed promising results in terms of execution time. Multiple difficult challenges remain for translating such technology within more realistic experiments and a clinical environment. Examples include robust vision algorithm for taking into account tissue deformation or coping with dynamic effects like bleeding that obscures information inference for visual servoing. In addition to the positioning of the PAF rail, automation of the US probe manipulation requires additional motion planning and adaptation to tissue geometry. Further work is also needed for adaptive control in the presence of physiological motion and more comprehensive clinical workflow studies are necessary including the use of a functional US probe. These are some of the aspects ascribable to the failed executions highlighted in Fig. 6. We believe that the introduction of the described technical features in the experiment will improve the success rate of the experiments allowing to meet the clinical standards. Although some of the technical aspects related to the problem still need to be addressed as stated above, the authors believe that the adoption of this new device and the introduction of a new clinical protocol are fundamental to boost towards partial automation in RAPN. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecomm ons.org/licenses/by/4.0/.
5,875.8
2021-05-15T00:00:00.000
[ "Computer Science", "Medicine" ]
Non-Perturbative Explorations of Chiral Rings in 4d N = 2 SCFTs We study the conditions under which 4d N = 2 superconformal field theories (SCFTs) have multiplets housing operators that are chiral with respect to an N = 1 subalgebra. Our main focus is on the set of often-ignored and relatively poorly understood B̄ representations. These multiplets typically evade direct detection by the most popular non-perturbative 4d N = 2 tools and correspondences. In spite of this fact, we demonstrate the ubiquity of B̄ multiplets and show they are associated with interesting phenomena. For example, we give a purely algebraic proof that they are present in all local unitary N > 2 SCFTs. We also show that B̄ multiplets exist in N = 2 theories with rank greater than one and a conformal manifold or a freely generated Coulomb branch. Using recent topological quantum field theory results, we argue that certain B̄ multiplets exist in broad classes of theories with the Z 2 -valued ’t Hooft-Witten anomaly for Sp(N) global symmetry. Motivated by these statements, we then study the question of when B̄ multiplets exist in rank-one SCFTs with exactly N = 2 SUSY and vanishing ’t Hooft-Witten anomaly. We conclude with various open questions.

Introduction

Chiral operators play a fundamental role in 4d N = 2 quantum field theories (QFTs) at all length scales. At short distances, the allowed N = 2-preserving relevant deformations of N = 2 superconformal field theories (SCFTs) are chiral [1]. Expectation values of chiral operators parameterize the N = 2 moduli spaces of these SCFTs and initiate renormalization group (RG) flows to vacua where low-energy vector multiplets and hypermultiplets live. These effective multiplets are also chiral. Since chiral operators are so ubiquitous, it is no surprise that many of the most important non-perturbative insights into 4d N = 2 QFTs are intimately connected with these operators. For example, Seiberg-Witten geometries [2] encode the exact infrared (IR) prepotential, which is a chiral object. Higgs branches enjoy various non-renormalization theorems [3], and their associated chiral operators are closely related to 2d VOAs [4] and hidden infinite-dimensional symmetries. Given their prominence and the powerful geometrical and algebraic constraints on their spectra, one may have the impression that chiral sectors of N = 2 theories are completely understood, simple to characterize, and probe phenomena that are well known. Each of these statements is far from the truth. To better understand these points, it is helpful to first think about ultraviolet (UV) physics and understand which superconformal representations house chiral operators. As we will review in the next section, this question was answered in [5]. A basic but important point is that the operators directly connected with the relatively well-understood Coulomb branch physics of Seiberg-Witten theory and the physics of the Higgs branch sit in half-BPS multiplets. In the nomenclature of [6], these are the Ē and B multiplets (a more detailed discussion appears in the next section). In particular, vevs for the superconformal primaries (SCPs) of these multiplets parameterize the Coulomb and Higgs branches respectively.
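For orientation, it may help to record the standard quantum-number assignments of these two half-BPS families. The following is a conventions reminder rather than a result of the paper; we assume the normalization in which the label r of an Ē multiplet equals the dimension of its primary (as in the Ē_{6/5} generator of the (A_1, A_2) theory discussed later), and we write the Higgs-branch family with a hat to distinguish it from the B̄ multiplets that are the focus of the paper:

\[
\bar{\mathcal{E}}_r:\ \ \Delta = r,\ \ R = j = \bar{j} = 0, \qquad\qquad \hat{\mathcal{B}}_R:\ \ \Delta = 2R,\ \ r = j = \bar{j} = 0 .
\]

A vev for an Ē primary therefore spontaneously breaks U(1)_r and moves onto the Coulomb branch, while a vev for a B̂ primary breaks SU(2)_R and moves onto the Higgs branch.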
However, N = 2 SCFTs contain less protected multiplets with chiral primaries, and these multiplets give rise to interesting physics.For example, D multiplets are also intimately connected with 2d VOAs [4] and probe various subtle properties of the topology and punctures of class S compactification surfaces (e.g., see [7]).Moreover, some D multiplets contain the extra supercurrents of N > 2 SCFTs while others capture the physics of the Weinberg-Witten theorem. Still, from a purely algebraic point of view, the above multiplets are not the most general representations containing chiral operators.Indeed, while these multiplets have primaries that depend on at most two quantum numbers, there are more general multiplets with primaries that depend on three quantum numbers.These are the B multiplets and are the main focus of this paper. Given the greater freedom in their quantum numbers, one might wonder if B multiplets are ubiquitous in 4d N = 2 SCFTs.Unfortunately, the answer to this question is 2 Even these sectors are not fully understood in general.For example, it is believed (without proof) that any interacting 4d N = 2 SCFT has a Coulomb branch.But even basic properties of the Coulomb branch, such as the most general conditions under which its corresponding chiral ring is freely generated, are not known. 3More precisely, B multiplets are chiral with respect to half the supersymmetry and anti-chiral with respect to the other half.The remaining multiplets housing chiral operators (except for Ē) satisfy less restrictive shortening conditions.See the next section for further details. obscured by the fact that these multiplets are under less control then the Ē, B, and D representations.For example, B multiplets are not captured by any of the special limits of the superconformal index.Moreover, Seiberg-Witten theory and the 4d/2d correspondence of [4] do not directly detect these degrees of freedom. 4 One well-known instance where B multiplets appear is whenever a UV theory has a "mixed" branch.This is a branch of moduli space where low-energy vector multiplets and hypermultiplets co-exist.In other words, mixed branches include a Coulomb branch and a Higgs branch component at common points in moduli space.Therefore, we expect a B multiplet to appear in the following operator product where the B primary is a composite built out of a Coulomb branch Ē primary and a Higgs branch B primary.Giving an expectation value to the B primary gives a vev to both Ē and B primaries and initiates an RG flow to a mixed branch.However, in many theories, null relations set B = 0 and lead to geometrically separate Coulomb and Higgs branches. The main purpose of this paper is to explain a much broader array of phenomena that are captured by B multiplets beyond the existence of a mixed branch.Indeed, we will argue that • All local unitary 4d N > 2 SCFTs have B multiplets.We give an algebraic proof of this fact that follows purely from locality and unitarity (see Section 3.1). • All higher-rank 4d N = 2 SCFTs with conformal manifolds parameterized by gauge couplings 5 have B multiplets that exist at all points on the conformal manifold 6 (see Section 3.2). 4 Seiberg-Witten curves indirectly detect certain B multiplets in the low energy description of the Coulomb branch. 
5All known examples of 4d N = 2 conformal manifolds have a gauge coupling interpretation.Such families of SCFTs generally have "matter" sectors that are interacting isolated SCFTs (as opposed to only containing collections of free hypermultiplets whose symmetries are gauged). 6It is straightforward to construct B multiplets that exist at special points on the conformal manifold (or, more generally, for special values of a gauge coupling).For example, at zero gauge coupling we can construct, for SU (N ) (and N > 2), B primaries of the schematic form Trφ 2 O, where O is a BR primary transforming in the adjoint of SU (N ), and φ is the corresponding vector multiplet scalar.However, such operators are not protected from recombination and typically become part of long multiplets as we turn on the gauge coupling (here we have taken the generic case R > 1/2; in the special case of a free matter sector with R = 1/2, we do obtain a protected multiplet).Our interest is in B multiplets that are robust against quantum corrections, are present everywhere on the conformal manifold, and do not require considering special matter sectors. • Any 4d N = 2 SCFT with an Sp(N) symmetry having a Z 2 -valued 't Hooft anomaly [8] has a B multiplet if its Coulomb branch has at least one point consisting of purely free fields (see Section 3.3). Therefore, we will see that B multiplets are indeed ubiquitous and that they are related to various interesting phenomena. Given the above results, we are also motivated to study rank-one SCFTs with purely N = 2 SUSY, no Z 2 -valued 't Hooft anomaly, and no mixed branch.Indeed, the existence of B multiplets in these theories is not implied by the above results.For example, in the case of the rank-one N = 2 theory studied in [5], such multiplets were shown to be absent. The main tool used in that paper was N = 2 superconformal representation theory coupled with the dynamics of N = 1 → N = 2 SUSY enhancement along an RG flow to the IR. Using similar techniques, we will study various other rank-one theories amenable to such analysis.In all such isolated theories that we study (for simplicity, we stick to those of Argyres-Douglas type), we will see that B multiplets are absent. The plan of the paper is as follows.In the next section, we introduce various details of the superconformal analysis of chiral operators in 4d N = 2 SCFTs.In section 3 we present the general results described in the bullet points above.We are then motivated to make some conjectures on the spectrum of B multiplets in general theories.In the remainder of the paper, we study various rank-one SCFTs and conclude with a discussion of future directions. B multiplets and superconformal representation theory In this section, we briefly discuss the superconformal representation theory and ring structure of N = 2 multiplets that contain an operator that is chiral with respect to an N = 1 ⊂ N = 2 subalgebra.We conclude by explaining where B multiplets sit in this universe. 
Recall that an N = 2 superconformal field theory has an SU(2) R × U(1) R symmetry with eight Poincaré and eight special supercharges transforming as doublets under the R symmetry.Without loss of generality, we follow [5] and take our N = 1 subalgebra to be generated by the following Poincaré supercharges where the quantum numbers are (j, j) R,r , with (j, j) the left and right spin, R the SU(2) R weight, and r the U(1) r charge (note that, to get a superconformal subalgebra, we should also include the special supercharges With these conventions, the N = 1 chiral operators are those satisfying Such operators form a chiral ring (their OPEs are free from singularities), and the second condition in (2.2) is equivalent to demanding that O is non-trivial in this ring (here O ′ is any well-defined local operator in the theory).Note that we have suppressed any SU(2) R and Lorentz quantum numbers of O. In the context of N = 2 SCFTs, operators satisfying (2.2) can sit in various representations.We find homes for our chiral operators in these multiplets by acting on chiral superconformal primaries with the (chiral part) of the subalgebra orthogonal to (2.1) [5] 3) The analysis of [5] shows that operators satisfying (2.2) can only sit in the following positions in an N = 2 multiplet Here the leftmost operator is a chiral superconformal primary of an N = 2 multiplet (it has highest SU(2) R weight), and the remaining operators are successive Q 1 α descendants.Depending on the multiplet in question, some of the descendants may be null. In the language of [6], solutions to (2.4) are exhausted by multiplets in the so-called "full chiral sector" (FCS) [5] FCS := Ēr ⊕ BR ⊕ DR(j,0) ⊕ BR,r(j,0) . (2.5) Let us analyse these multiplets in turn: • The Ēr primary is U(1) r charged but has R = j = j = 0 [9].It is annihilated by all the Qi α .In this sense, it is maximally protected.According to the standard lore, it is also the most universal FCS multiplet: such multiplets exist in all known interacting 4d N = 2 SCFTs, and their vevs give coordinates on the Coulomb branch of these SCFTs.The superconformal primaries form the closed Coulomb branch chiral (sub)ring. 7More generally, these multiplets house three N = 1 chiral operators from The descendant states do not form part of the Coulomb branch chiral ring. • The BR multiplets have r = j = j = 0 and R > 0. All known examples of these multiplets parameterize Higgs branches of N = 2 SCFTs via the vevs of their primaries. They form a closed Higgs branch chiral (sub)ring.Therefore, while these multiplets are also common, they are more special than the Ē type.Indeed, the SCFT in question has to have sufficient "matter" for a Higgs branch to exist.These multiplets form Virasoro primaries under the 4d/2d map of [4] and hence are under good analytic control.BR multiplets house only one N = 1 chiral operator (the primary) As a result, the primary is also anti-chiral with respect to the orthogonal algebra in (2.3).Like the Ēr multiplet, BR is therefore maximally protected under the SUSY algebra (although the primary is not annihilated by all of the same supercharges). • The DR(j,0) multiplet has j = 0 and r = 1 + j.In general, it has R, j > 0, but it can also have j = 0 and R ≥ 0. 
This multiplet is ubiquitous in low energy effective theories: the D0(0,0) multiplet contains the chiral half of the free vector and is therefore present on the Coulomb branch of any theory.More generally, D multiplets with R > 0 appear if we also have decoupled hypermultiplets (or more complicated matter sectors with Higgs branches) appearing on the Coulomb branch. 8Indeed, consider 7 Such operators therefore give rise to coordinates in Seiberg-Witten geometries and their generalizations. This fact explains their ubiquity, although it is not completely clear to us if their ubiquity is also a consequence of the type of 4d N = 2 SCFTs we have been able to construct to date.Ideally, one would like to understand if such multiplets emerge from some more minimal set of algebraic criteria.[7]).These multiplets therefore seem to know interesting things about topology.They also give rise to Virasoro primaries under the 4d/2d map of [4] and are therefore under stringent analytic control.For generic R and j, they house the following N = 1 chiral primaries (2.9) • Finally, we consider the BR,r(j,0) multiplets of interest.They are clearly the most general FCS multiplets in the sense that they have three independent quantum numbers R > 0, j, and r > 1 + j (only j = 0).Moreover, all states in (2.4) are present (2.10) On a mixed branch, these multiplets are as common as D multiplets.For example, in the presence of free vectors, we have Ēn operators from the n-fold D×n 0(0,0) ∋ Ēn OPE.We can then repeat the IR OPE in (2.8) but with D0(0,0) → Ēn and DR(0,0) → BR,n(0,0) .We will show below that B multiplets exist whenever a theory has a freely generated Coulomb branch of rank at least two (in this sense they are slightly less ubiquitous than the D multiplets since they do not appear in the theory of a single free vector [5]). In interacting theories, we expect such multiplets to be more common than D multiplets.This is because r > 1 + j is an inequality (as opposed to the equality in the D case).For example, we expect Ēr × BR ∋ BR,r(0,0) , (2.11) whenever the SCFT supports a mixed branch.The vevs of the corresponding BR,r(0,0) chiral primaries parameterize these mixed branches. (2.12) Unless all the D primaries are minimally nilpotent in the chiral ring, they must give rise to corresponding B multiplets. 10 Combined with (2.11), this observation again suggests that B multiplets should be more common than D multiplets in interacting theories. We can also imagine constructing B multiplets via chiral ring products involving descendants of the multiplets discussed in previous bullets. 11For example, we can take where κ, κ 1 , κ 2 ∈ C are required to make the above operators superconformal primaries.12 9 See [10] for B production channels outside the chiral ring OPE (and footnote 6 for production channels that do not involve OPEs of bulk local operators).We will not discuss these channels in this paper. 10Indeed, in the next section, we will use algebraic techniques to show that the D1/2(0,0) multiplets housing extra N > 2 supercurrents are never minimally nilpotent. 11In some cases this is impossible.For example, where κ ∈ C is required to make the operator in question a superconformal primary.More generally, if we involve at most a single DR(j,0) primary, we must also take spin contractions (this is because the descendant in (2.9) has r = j). More generally, it is apriori possible that B multiplets can appear as chiral ring generators. 
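As a quick consistency check of the mixed-branch channel in (2.11), one can track how the quantum numbers add when the two primaries are multiplied in the chiral ring. This is only bookkeeping, under the assumptions that Δ(Ē_r) = r and Δ(B̂_R) = 2R (the standard assignments noted above) and that charges simply add in chiral ring products:

\[
\mathcal{O}_{\bar{\mathcal{E}}_r}\,\mathcal{O}_{\hat{\mathcal{B}}_R}:\qquad \Delta = 2R + r,\quad SU(2)_R\ \text{weight} = R,\quad U(1)_r\ \text{charge} = r,\quad j = \bar{j} = 0 ,
\]

which are exactly the quantum numbers expected of a B̄_{R,r(0,0)} primary. In particular, the requirement r > 1 + j is automatically satisfied here, since in an interacting theory the Ē generators have r > 1.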
Finally, we note that, at the level of multiplication of superconformal primaries in the chiral ring, BR,r(j,0) multiplets form (two-sided) ideals. Therefore, to summarize: in the absence of FCS chiral ring relations, we expect B multiplets to be present whenever the theory is interacting (since we then expect Ē multiplets). Moreover, we expect the corresponding chiral primaries to form ideals in the chiral ring and therefore to be crucial in understanding the full local operator algebras of interacting 4d N = 2 SCFTs. However, N = 2 theories often have FCS chiral ring relations (these relations will feature prominently in our rank-one discussion below), and so the above conclusion is too naive. Still, given how easy it is to generate such multiplets in the chiral ring, we expect B multiplets to be present in broad classes of theories and to detect various physical phenomena. Indeed, we will arrive at a few general results on these multiplets in the next section.

General results

In this section, we discuss several abstract results on the presence of B multiplets in broad classes of 4d N = 2 SCFTs. These results are connected with various physical phenomena.

Local unitary SCFTs with N ≥ 3 SUSY

Any local unitary SCFT with N ≥ 3 supersymmetry has a B multiplet. Indeed, by locality, any such theory has an N = 3 stress tensor multiplet. As a result, from an N = 2 perspective, the theory has a U(1) flavor symmetry (descending from the N = 3 R symmetry), which we will call U(1) G. The Noether current for this symmetry sits in a corresponding B̂_1^0 ∼= B_1 B̄_1 [0; 0]^(2;0),0 multiplet. Here we use the language of both [6] and [11]; the reason we introduce new nomenclature is that we will make a claim regarding the presence of B multiplets using N = 3 superconformal representation theory, and [6] only discusses N = 2 representations. (The additional superscript in B̂_1^0 ∼= B_1 B̄_1 [0; 0]^(2;0),0 refers to the fact that this multiplet has zero U(1) G charge; we follow the conventions of [12], and we will only write this superscript explicitly in cases where the U(1) G charge is relevant to the argument.)

The highest SU(2) R -weight component of this B̂_1 ∼= B_1 B̄_1 [0; 0]^(2;0) multiplet is the holomorphic moment map, M^11. Together with the highest SU(2) R -weight operators in the stress tensor multiplet, Ĉ0(0,0), and in the extra supercurrent multiplets, M^11 is related by the chiral algebra map of [4] to the generators of a 2d N = 2 super-Virasoro VOA [13]. Here J is a U(1) affine current, T is the 2d EM tensor, and the remaining operators are the 2d supercurrents (we refer the reader to [4,13] for more detailed discussions of this correspondence). In the 2d picture, the B̂_1 contribution to the OPE comes from χ(∂M^11) = ∂J (in these selection rules we only keep track of so-called "Schur" multiplets, since these are the multiplets housing the operators subject to the correspondence in [4]). To verify (3.2), we therefore need to check that there are no null relations involving JJ, ∂J, and T (otherwise, the 4d normal-ordered product in (3.2) vanishes according to the general prescription in [4]). We can use the bosonic part of the super-Virasoro algebra to compute the matrix of inner products. This matrix has determinant c^3 (c − 1)/864 and is therefore non-invertible only for c = 0, 1.
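The step from this determinant to the promised existence statement deserves to be spelled out. Under the 4d/2d correspondence of [4], the 2d central charge is fixed by the 4d conformal anomaly; we quote the standard relation of that correspondence here as an assumption about the conventions in use:

\[
c_{2d} = -12\, c_{4d} < 0 \ \ \text{for any local unitary 4d}\ \mathcal{N}=2\ \text{SCFT}
\qquad\Longrightarrow\qquad
\frac{c_{2d}^{3}\,(c_{2d}-1)}{864} \neq 0 .
\]

With a non-degenerate inner-product matrix there is no null relation among JJ, ∂J, and T, so the 2d normal-ordered product does not vanish and, by the prescription of [4], neither does the 4d normal-ordered product (M^11)^2. The next paragraph identifies the N = 3 multiplet in which this non-vanishing operator must sit.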
Where does the above multiplet sit in N = 3 representation theory?A moment's thought indicates it must sit inside the N = 3 stress-tensor multiplet self-OPE Note that the B 1 B1 [0; 0] (1,1;0) multiplet transforms in the 8 of SU(3) R .Now, recall that Clearly, B 1 B1 [0; 0] (4;0),0 cannot sit in the first three SU(3) R representations above.Using branching rules, it is also easy to check that (M 11 ) 2 transforms as part of a 27 of SU(3) R . As a result, we have arrived at our promised statement: any local unitary N ≥ 3 SCFT has a B multiplet.In N = 2 language, this multiplet can be understood as arising from the normal-ordered product of the extra supercurrent multiplet19 This is a particular example of the more general channel described in (2.12). Note that this discussion does not require the existence of a moduli space of vacua (although all known examples of N ≥ 3 theories have such moduli spaces).Moreover, although the 4d/2d VOA map of [4] doesn't directly detect B multiplets, we see that we can combine that map with locality and N > 2 SUSY to deduce the existence of B multiplets. Higher-rank SCFTs In the previous section, we saw that all local unitary N ≥ 3 SCFTs have B multiplets. However, these theories are special by virtue of their enhanced symmetry.It is natural to then wonder if B multiplets are always related to symmetry enhancement or other more special phenomena. In this section, we will see the answer is no.In particular, we will demonstrate the ubiquity of B multiplets.Indeed, we will see that, under relatively relaxed assumptions, higher-rank theories have such multiplets. Let us begin with a rank N ≥ 2 SCFT that is part of an N = 2 or N = 4 conformal manifold. 20All known exactly marginal deformations in N = 2 SCFTs involve gauge couplings for some non-abelian group, G.Such theories always admit at least one weak gauge coupling limit in which the theory factorizes into a sector consisting of the vector multiplets, V G , and one or more matter sectors, T i . 21If the theory has rank N ≥ 2, this means that the we have at least two generators of the Coulomb branch chiral ring.One generator must be the quadratic Casimir, Ē2 ∈ V G , and the other generator is either a higher Casimir of V G (e.g., if G = SU(3)) or else is a Coulomb branch generator, Ēr ∈ T i . 22 In general there is no invariant distinction between these two possibilities (e.g., as can be seen by looking at both sides of the duality in [16]). Let us now set the G gauge coupling to zero.Then, we see that Then, in the free IR theory we flow to by turning on a vev for a chiral operator in the UV, we will encounter operators similar to those in (3.11).For example, we have where i, j = 1, • • • , N denote the particular free vector component in the U(1) N free super-Maxwell theory present at generic points on the Coulomb branch.For N ≥ 2 such B multiplets clearly exist, and this logic therefore explains the comment on higher-rank Coulomb branches below (2.10). 21The T i need not be weakly coupled themselves.For example, consider the Minahan-Nemeschansky E 6 theory, MN E6 , appearing in the SU (2) duality frame of [16]. 22As an example of this latter phenomenon, consider the case of Ē3 ∈ MN E6 in the example of [16]. What can the multiplets in (3.12) descend from in the UV? 
Since superconformal recombination of B1/2,r(1/2,0) multiplets is forbidden, it is tempting to argue that the multiplets in (3.12) come from B1/2,r(1/2,0) multiplets in the UV theory.However, we should be careful: in the flow back up to the UV we break superconformal (and U(1) r ) symmetry.Therefore, we should understand whether these multiplets can sit inside larger non-conformal multiplets. To that end, let {λ a } denote the (generally infinite) collection of irrelevant couplings in the deformed IR theory that flows back up to the UV theory in question.Since supersymmetry and SU(2) R symmetry are both preserved, these couplings should sit as superconformal primaries in background / spurion multiplets with quantum numbers If O 3,i,j,α becomes part of a longer non-conformal representation but remains chiral and a (SUSY) primary after introducing the λ a , then, in the UV theory, we expect a corresponding B1/2,r(1/2,0) multiplet when superconformal symmetry re-emerges.However, this scenario is incompatible with spontaneous superconformal symmetry breaking. Instead, let us suppose that O 3,i,j,α is not a SUSY primary or is no longer chiral after turning on the λ a .To that end, first suppose O 3,i,j,α is a SUSY primary satisfying In the first equation in (3.14) we could have considered a more general linear combination of operators multiplying different polynomials (series) in the λ a .For simplicity, we have written a single such term with a single power of a coupling (but our arguments can be generalized straightforwardly to the most general case). To find the contradiction, let us first suppose that O 3,i,j,α α is an IR superconformal primary.Then it is a primary of an IR C1,r even after the irrelevant deformation.Then, to try to avoid a potential B1/2,r(1/2,0) multiplet in the UV, let us suppose O 3,i,j,α is now a descendant Let us first imagine that Õ3,i,j,α α is a SUSY primary after the irrelevant deformation.Then, in the UV it satisfies the shortening condition in (3.18).UV superconformal invariance implies that Õ3,i,j,α α is a C (or Ĉ) superconformal primary.However, (3.18) contradicts the spontaneous breaking of U(1) r in the flow to the IR. Let us suppose instead that Õ3,i,j,α α is a SUSY descendant.Then, it is a Q1 α descendant, and we require that Note that in the IR SCFT, Õα cannot be a descendant since then it is a Q1 α descendant, and (3.19) cannot hold.Suppose Õα is an IR superconformal primary.Then, in the IR SCFT, it must be in a D, B, Ĉ, or C multiplet with R ≥ 1/2.From (3.19), we see that Õα would be a member of such a multiplet with smaller scaling dimension than the operators in (3.12).More precisely, the irrelevant couplings have non-positive mass dimension and so in the IR SCFT ∆( Õα ) ≤ ∆(O 3,i,j,α ) − 1 = 5/2.Then, the only possibility is that Õα is an IR D1/2(1/2,0) primary.However, there are always fewer such operators than B operators of the type described in (3.12) as long as the Coulomb branch is genuine (i.e., just consisting of U(1) N super-Maxwell theory at generic points).More generally, as long as the Coulomb branch is freely generated, we can repeat the analysis starting around (3.14) for D1/2(1/2,0) to arrive at Statement 3: In any rank N ≥ 2 4d N = 2 SCFT with an N-dimensional freely generated Coulomb branch, there is at least one r such that B1/2,r(1/2,0) is in the spectrum. 
B multiplets and the Witten anomaly In theories with an Sp(n) global flavor symmetry (here Sp(1) ∼ = SU(2)), we may find a Z 2 -valued 't Hooft anomaly arising from large (background) gauge transformations associated with π 4 (Sp(N)) ∼ = Z 2 [8].We will argue that, under fairly lax assumptions, any theory possessing such an anomaly has a B1,r(0,0) multiplet (here r is the U(1) r charge of a generator of the Coulomb branch chiral ring). To understand this statement, we note that these anomalies are invariants of Sp(n)preserving RG flows.Since Coulomb branch operators are necessarily uncharged under flavor symmetries [17], RG flows onto the Coulomb branch triggered by turning on vevs for Coulomb branch chiral primaries preserve Sp(n) flavor symmetry. Therefore, let us assume that the theory has a Coulomb branch, and let us study flows onto this space.To get a handle on the possibilities in the IR, note that the arguments in [18] show the Z 2 -valued anomaly cannot be saturated by a TQFT (see [19] for another application of this fact).Therefore, on the Coulomb branch, we require massless degrees of freedom that match the UV Sp(n) anomaly. A simple example is the SU(2) N = 4 SYM theory.This theory has, from the N = 2 perspective, an Sp(1) flavor symmetry under which the components of the adjoint hypermultiplet, (Q a , Qa ) transform as doublets. 23Since a = 1, 2, 3, we have an odd number of doublets and hence a Z 2 anomaly.Now, consider the Sp(1)-preserving RG flow gotten by turning on a vev for the vector multiplet scalars.This vev results in an IR theory which is just U(1) N = 4 SYM.The abelian effective theory has a single doublet, (q, q), which realizes an Sp(1) symmetry and therefore also gives rise to a Z 2 anomaly. A more elaborate example involves the rank-one theory with Sp(5) symmetry discovered in [20].There we can also turn on an expectation value for the Coulomb branch generator and flow to a Coulomb branch which has five hypermultiplets at generic points.These fields realize the Sp(5) symmetry and exhibit the Z 2 anomaly as well (the hypermultiplets form a single 10 representation of Sp( 5)).Now, let us suppose we flow onto the Coulomb branch of a theory exhibiting the Z 2 't Hooft anomaly by spontaneously breaking superconformal symmetry via a vev for a Ē primary.Since no IR TQFT saturates the anomaly, we have a decoupled massless sector furnishing an Sp(n) holomorphic moment map, µ (we assume the flavor symmetry is locally realized).Therefore, in conjunction with the Coulomb branch operator, φ 2 , we can construct the normal-ordered product which is clearly a superconformal primary of the correct type (recall that µ has R = 1 and r = 0).Note that this multiplet transforms in the adjoint of Sp(n).For technical reasons that will become apparent later, let us assume that the IR theory is completely free (i.e., it consists of free vectors and hypers). Next, suppose we deform the theory and flow back up to the UV.What can (3.20)come from in the UV?We repeat the logic beginning around (3.14).In particular, to avoid a B1,r(0,0) multiplet in the UV, we need to have that (3.20) becomes a SUSY descendant or is no longer chiral after turning on some (generally infinite) irrelevant couplings, {λ a }, and flowing back to the UV. 
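Before following that argument in detail, it helps to record the quantum numbers of the candidate operator in (3.20). At a free point on the Coulomb branch the operator is (schematically) the normal-ordered product φ²μ, as the comparison with φμ made below also indicates. Assuming the standard free-field assignments Δ(φ) = r(φ) = 1, together with the values R(μ) = 1, r(μ) = 0 quoted in the text and Δ(μ) = 2 for a moment map, the charges simply add:

\[
\mathcal{O}_W \sim \phi^2 \mu:\qquad \Delta = 4,\quad R = 1,\quad r = 2,\quad j = \bar{j} = 0 ,
\]

which matches the B̄_{1,2(0,0)} assignment used for the free IR theory and fixes the bound ∆(Õ_W) ≤ ∆(O_W) − 1 = 3 that appears below.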
To that end, first suppose O W is a SUSY primary satisfying24 At short distances, the superconformal primary, O W , satisfies the shortening condition in (3.21) and is therefore in either a B1,r(0,0) or a C1,r(0,0) multiplet (note that a D1,(0,0) multiplet clearly contains too few degrees of freedom relative to the IR).As in the related proof of statement 3, neither possibility is consistent. Finally, if O W α is a superconformal descendant, then it must be a superconformal Q1 α descendant, but this statement contradicts (3.22).Therefore, (3.22) is inconsistent. Next let us consider the case that even after the irrelevant deformation.Then, to try to avoid a potential B1,r(0,0) multiplet in the UV, let us suppose O W is now a descendant Let us first imagine that ÕW α is a SUSY primary after the irrelevant deformation.Then, in the UV it satisfies the shortening condition in (3.25).UV superconformal invariance implies that ÕW α is a C (or Ĉ) superconformal primary.However, (3.25) contradicts the spontaneous breaking of U(1) r in the flow to the IR. Let us suppose instead that ÕW α is a SUSY descendant.Then, it is a Q1 α descendant, and we require that Note that in the IR SCFT, ÕW cannot be a descendant since then it is a Q1 α descendant, and (3.26) cannot hold.Suppose ÕW is an IR superconformal primary.Then, in the IR SCFT, it must be in a D, B, Ĉ, or C multiplet with R ≥ 0. From (3.26), we see that ÕW would be a member of such a multiplet with smaller scaling dimension than the operators in (3.20).More precisely, the irrelevant couplings have non-positive mass dimension and so in the IR SCFT ∆( ÕW ) ≤ ∆(O W ) − 1 = 3.Then, the only possibilities are that ÕW is an IR D1(0,0) , B1/2,2(0,0) , Ĉ1/2(0,0) , or a C0,1(0,0) primary.From (3.20), it is clear that this operator must transform in the adjoint of Sp(n). Let us suppose that we can go to a point on the Coulomb branch where the theory is completely free (this includes free hypermultiplets).Then, we see that to get an Sp(n) adjoint primary, we require two hypermultiplet scalars.This logic leaves a single free vector scalar for us to adjoin to get an operator of dimension three.As a result, we can immediately rule out the B, Ĉ, and C options. 25 We are left with the D option.For higher-rank Coulomb branches, we can always build more B operators of type (3.20), but in the rank-one case we cannot.Instead, for rank one, we can repeat the logic around (3.21) for φµ ∈ D1(0,0) rather than φ 2 µ ∈ B1,2(0,0) .Therefore, we arrive at Statement 4: Consider a 4d N = 2 SCFT with an Sp(n) flavor symmetry having a non-vanishing Z 2 -valued anomaly.If the theory possesses points on the Coulomb branch consisting purely of free fields, then it has a B1,r(0,0) multiplet, where r is the U(1) r charge of a Coulomb branch generator.This multiplet transforms in the adjoint of Sp(n). 26 Free theories and a conjecture on general 4d N = 2 SCFTs In this section, we would like to prove a few statements about the spectrum of B multiplets in free 4d N = 2 SCFTs and then use these statements to conjecture a general constraint on 4d N = 2 SCFTs. To that end, we begin with the following observation: Fact 1: In a theory of N free vectors and M free hypermultiplets, all BR,r(j,0) and DR(j,0) multiplets satisfy j ≤ R. 
To understand this statement, note that a free vector and a free hypermultiplet have the chiral fields listed in table 1.Let us construct highest SU(2) R and Lorentz-weight primaries of B and D multiplets in a theory of N free vectors and M free hypers.The only sources of j are the gauginos, λ 1 i,α (in particular, derivatives would lead to operators that are trivial in 25 For the latter case, note that to get R = 0 we should anti-symmetrize the free hyper SU (2) R indices. This implies that the operator is not in the adjoint of the flavor symmetry. 26We can arrive at this result more simply if we are willing to invoke the lore that a mixed branch implies the existence of operators in the (2.11) channel.Indeed, the existence of the mixed branch follows from the discussion around (3.20). R r Table 1: List of chiral fields in a theory with N abelian free vector multiplets the chiral ring and hence not highest-weight primaries of B or D multiplets).Since j ≤ R for the gaugino (and all other chiral fields), any word constructed out of letters in table 1 has j ≤ R. While the above argument provides a bound on spin versus SU(2) R quantum numbers, it is natural to ask if we can realize all of the above multiplets with j ≤ R. Indeed, this is the case: Fact 2: In a theory of N = 2n free vectors and M ≥ 1 free hypermultiplets, there exist values of r such that we have at least one BR,r(j,0) and one DR(j,0) multiplet for all R ≤ (N − 1)/2 and j ≤ R. To derive this set of facts, note that is a highest-weight primary of a Bu/2,v(0,0) multiplet if v > 1 and a Du/2(0,0) multiplet if v = 1 (clearly the above component fields are superconformal primaries and therefore so too is the product). To arrive at the claim for B multiplets, we can simply take the above operators and multiply by φ i Indeed, this statement follows from the fact that φ i is a superconformal primary and has r = 1 and j = 0. This logic also implies the following fact: Fact 3: For any j ≤ R, there exist values of r and a local unitary 4d N = 2 SCFT, T , such that BR,r(j,0) and DR(j,0) is in the spectrum of T . If we consider the full set of free theories, it is reasonable to imagine that we see all possible superconformal representations up to deformations of the U(1) r charge.The heuristic reason for this belief is that, in a general theory, we expect interactions to lead to new null states.At the same time, interactions cannot change the SU(2) R and Lorentz-spin quantization (they can only lead to changes in the quantization of U(1) r ). Moreover, the general arguments of [21] show that any local 4d N = 2 SCFT only has DR(j,0) multiplets for j ≤ R. Therefore, we are led to the following conjecture: Conjecture: BR,r(j,0) multiplets with j > R are forbidden in general local unitary 4d N = 2 SCFTs. 27ote that a bound of the above form on B implies the bound on D found in [21].Indeed, suppose this were not the case.Then, we would have a DR(j,0) multiplet with j > R. Taking the product of the corresponding highest-weight primary with a free vector φ primary would give a B multiplet with j > R. On the other hand, note that the D bound does not imply the B bound since not all B operators need to come from a product of the form D × Ē. Note also that the above bound on B implies a known bound on Ēr(j,0) ruling out j > 0 in these latter multiplets [9].Indeed, suppose that there were a j > 0 such that Ēr(j,0) existed in some SCFT, T .Then, we could take N decoupled copies of T to get Ēr ′ (j ′ ,0) with arbitrarily large j ′ > 0. 
Therefore, multiplying with a free hypermultiplet would give B violating the conjecture by an arbitrarily large amount. B multiplets in Rank-one theories We have shown that, from relatively minimal assumptions, theories with N > 2 SUSY, higher rank and a conformal manifold, a Z 2 -valued Sp(N) anomaly, or with freely generated higher-dimensional Coulomb branches must possess B multiplets.Therefore, in this section, we specialize to rank-one theories that do not satisfy any of these properties (and also have no mixed branches) in order to understand whether any of these theories have B multiplets. We focus on a subset of such theories that can be described by certain non-conformal N = 1 Lagrangians with accidental IR enhancement to N = 2 (e.g., see [23]).In this context, a necessary condition for a Lagrangian to be useful in carrying out precision spectroscopy is for the IR superconformal U(1) r and SU(2) R Cartan to be visible in the UV and unbroken along the RG flow.This property is typically absent in N = 2 RG flows and hence explains the utility of constructions involving accidental SUSY enhancement. General Strategy As described in section 2, the spectrum of all multiplets in the FCS except the B multiplets can easily be determined either from a Seiberg-Witten description (in the case of the Ē multiplets) or from the associated 2d VOA (in the case of the B and D multiplets). Therefore, to get a handle on the B spectrum, our strategy will be similar to the one adopted in [5]: we will study rank-one N = 2 theories with weakly coupled UV descriptions in terms of an N = 1 gauge theory that is connected to our SCFT of interest by a sufficiently "smooth" RG flow (here we require that the superconformal IR symmetry is visible along the RG flow). More precisely, we will use the fields in these Lagrangian descriptions to write down the full set of gauge-invariant local operators that are candidates to generate the N = 1 chiral ring (we assume the RG flow only acts to truncate this ring).This is the set of chiral operators that cannot be expressed as a product of two nontrivial chiral operators. 28e whittle down this list by demanding that these generators sit in certain IR N = 2 superconformal representations. In fact, in all the rank-one theories we will consider, we never find a situation where an operator in this smaller list belongs to a B multiplet.While we do not have a full understanding of when B multiplets do and do not furnish generators of the chiral ring more generally, this observation plays an important role in further results we derive about the spectrum of chiral operators in the particular theories we study. As a consistency check of our method, we can make contact with known results on the non-B part of the FCS when characterizing our generators.We can then use these operators to exhaustively attempt to construct B operators through chiral ring fusion. In particular, rank-one theories have a one-complex-dimensional Coulomb branch.We Moreover, all the examples we study here can be mapped to known 2d VOAs, and we can use these associated VOAs to conclude that there are no D multiplets in our spectra. As a result, the B generation channel in (2.12) is not available (recall that none of the theories we study here have N > 2 SUSY and so the result of section 3.1 does not apply). 
Therefore, any B multiplets in our theories of interest must be linear combinations of normal-ordered products of chiral operators sitting in Ēr and / or B1 multiplets.We construct all such products allowed by the OPE constraints and chiral ring relations in the theory, and we consider their linear combinations. Two kinds of B multiplets will be of particular interest in our analysis, so we single them out beforehand.These are obtained from the OPE in (2.11) and the second OPE in (2.14).In the rank-one theories we consider, these B multiplets often vanish as a result of chiral ring relations arising from the dynamics of the N = 1 → N = 2 RG flows. We now proceed to our analysis of the N = 1 chiral spectra of individual rank-one SCFTs.For simplicity, we stick to isolated theories of Argyres-Douglas type and to the SU(2) theory with four flavors. Table 2: UV fields in the N = 1 description of the (A 1 , A 2 ) theory [27,28].Here r and R are the IR U(1) r and SU(2) R Cartan respectively. The The chiral ring of this theory was analyzed in detail in [5], and so we merely summarize the story here.Recall that this is the original Argyres-Douglas theory [24] (referred to in [5,15,25] as the "Minimal" Argyres-Douglas (MAD) theory).It has a Coulomb branch chiral ring generator of dimension 6/5 sitting as a primary in a Ē6/5 multiplet.The theory has no Higgs branch, and, consistent with this fact, the associated 2d VOA is the Lee-Yang Virasoro vacuum module [26].Arguments in [15] then imply that there are no B or D multiplets in this theory. The N = 1 SU(2) gauge theory Lagrangian with fields in Table 2 29 and superpotential [27,28] W = Xφ 2 + Mφq ′ q ′ + φqq , (4.1) was used in [5] to argue that there are no B multiplets and where the chiral operators in Ē6/5 are located as follows The basic idea of the proof in [5] was to write down all possible chiral ring generators and argue that none can sit in B representations (i.e., generators cannot sit at locations described in (2.10)).Then, dynamical constraints from the superpotential (4.1) rule out chiral ring products of operators in (4.3) giving rise to a B multiplet (crucially, the channel described in (2.14) does not create B chiral operators). Table 3: UV fields in the N = 1 description of the (A 1 , A 3 ) theory [27,32].Here r and R are the IR U(1) r and SU(2) R Cartan. The This theory was not analyzed in [5].It is the simplest Argyres-Douglas theory with flavor symmetry [29] (SO(3) for the untwisted Hilbert space in this case [30]).The Coulomb branch chiral ring generator has dimension 4/3 and sits as a primary in an Ē4/3 multiplet. Unlike the previous case, the theory has a one-quaternionic-dimensional Higgs branch with the Higgs branch chiral ring generated by the holomorphic SO(3) moment map µ a ∈ Ba 1 (here a = ±, 0 indicates SO(3) flavor weight) subject to the relation This theory also has a known associated VOA [31]: the su(2) −4/3 affine Kac-Moody (AKM) algebra.This fact allows us to immediately rule out D multiplets.Indeed, su(2) −4/3 is (strongly) generated by the affine current (related, by the map in [4], to µ a which has r = 0).This means that any operator in the 2d VOA is built from normal-ordered products of (derivatives) of this current.Since the procedure in [4] used to construct the VOA respects the U(1) r symmetry, we conclude that all Schur operators in the (A 1 , A 3 ) theory are U(1) r neutral.Since D Schur operators necessarily have r = 0, they cannot be present. 
Therefore, the FCS can at most consist of Ē, B, and B multiplets.To get a handle on these latter multiplets, we consider the N = 1 Lagrangian with fields given in Table 3 and superpotential [27,32] W = α 0 q q + β 2 φ 2 .( As in the (A 1 , A 2 ) case, the UV theory is an SU(2) N = 2 gauge theory.However, this time the superpotential preserves the SU(2) flavor symmetry under which (q, q) transforms as a doublet.This fact is crucial in order to reproduce the IR symmetry discussion around (4.4). Operator R r j α 0 0 4/3 0 Table 4: List of N = 1 chiral generator candidates in the (A 1 , A 3 ) theory.Here r and R are the IR U(1) r and SU(2) R Cartan, and j is the left spin. multiplet exists in the theory.If this is a level-one descendant, then the primary has R = 1/2, r = 11/6, j = 1/2 (it cannot have j = 3/2, since we would then require that r > 5/2).However, there is no product of generators in Table 4 that has these quantum numbers.Therefore, φλ α λ β must be trivial in the IR FCS. As a result, we see that we have the following characterization of the generators of the FCS and the FCS itself where I is the ideal generated by the constraint in (4.4).We would like to understand if these generators can give rise to a B multiplet In particular, we can try to form B multiplets by taking products of operators in (4.6) and (4.7).At the quadratic level, there are several possibilities, that we now proceed to study. • α 2 0 : This is the superconformal primary of the Ē8/3 multiplet.In fact, since the Coulomb branch is freely generated, α k 0 will be the superconformal primary of the Ē4k/3 multiplet for all k ∈ N. • α 0 φλ α : This is the level one descendant of the Ē8/3 multiplet.Similar to the case above, α k−1 0 φλ α will be the level-one descendant of the Ē4k/3 multiplet for all k ∈ N. • β 2 φλ α : This is ruled out by the superpotential constraint, This constraint cannot receive quantum corrections because they would involve terms with R = 3/2 and r = 7/6 (recall that φ 2 hits a unitarity bound and decouples). • α 0 β 2 and (φλ α ) 2 : One linear combination of these two operators will be the leveltwo descendant of the Ē8/3 multiplet.Can we find a linear combination of these two operators that would be a superconformal primary?We see that the only candidate for the level-one descendant of such a multiplet is β 2 φλ α .However, this has already been set to zero by (4.10).Therefore, no linear combination of these two operators can be a superconformal primary. Therefore, 31α 0 qq = α 0 q q = α 0 q q = 0 .(4.12) These constraints cannot receive quantum corrections because they would involve operators with R = 1 and r = 4/3 (and having the same SO(3) weight as the operators in question). • β 2 φqq, β 2 φq q, and β 2 φqq: These operators vanish in the chiral ring as a result of the superpotential constraints, Table 5: UV fields in the N = 1 description of the (A 1 , D 4 ) theory [34,35].Here r and R are the IR U(1) r and SU(2) R Cartan.U(1) B and U(1) m are flavor symmetries corresponding to Cartans of the IR SU(3) flavor symmetry. We see that the only products that survive at the quadratic level are α 2 0 , α 0 φλ α , (φλ α ) 2 , and α 0 β 2 .Of these, the last two cannot be superconformal primaries (nor can any of their linear combinations be). We therefore see that the most general product of operators from the generating set that we can write down and which is a superconformal primary has the form α m 1 0 , (φqq) m 2 (φq q) m 3 (φq q) m 4 , m 1 , m 2 , m 4 ∈ N , m 3 = 0, 1 . 
(4.14) These are the Coulomb and Higgs branch operators respectively (recall the constraint in (4.4) that constraints m 3 ).Therefore, there are no B multiplets in the (A 1 , A 3 ) theory. The This Argyres-Douglas theory has SU(3) flavor symmetry and was originally discovered in [29].Its Coulomb branch chiral ring generator has dimension 3/2 and is a primary in a Ē3/2 multiplet.This theory has a two-quaternionic-dimensional Higgs branch and a corresponding chiral ring generated by the holomorphic SU(3) moment map transforming in the 8 (adjoint) representation, µ a ∈ Ba 1 (here we will take a to be an adjoint index) subject to the Joseph ideal constraint. As in the previous cases, this theory has a known associated 2d VOA [31]: the su(3) −3/2 AKM algebra.Using the same logic we used in the case of the (A 1 , A 3 ) theory in previous subsection, we can again rule out D multplets here too. As a result, the FCS can again at most consist of Ē, B, and B multiplets.To understand the spectrum of the B multiplets we study an N = 1 Lagrangian theory with fields given in Table 5 and superpotential [35] W = M 2 q 2 q2 + φq 1 q1 + β 2 φ 2 . (4.15) Note that (as in the closely related (A 1 , A 3 ) case) the φ 2 operator hits a unitarity bound and decouples in the IR.Moreover, only a SU(2) × U(1) ⊂ SU(3) flavor symmetry is manifest.Under this symmetry, (q 2 , q2 ) transforms as a doublet (and q 1 , q1 are singlets). We can construct a list of naive multiplets using exactly the same set of procedures as in the previous subsection.Although the number of fields involved here is larger, we have done this in Tables 6 and 7. It will again prove useful to identify the Ē3/2 and B1 chiral operators.Unitarity implies that the primaries of the B1 multiplets and the primary and level-one descendant of the Ē3/2 multiplet cannot be composites built out of products of gauge-invariant operators (the level-two descendant of Ē3/2 can at most be built out of a product of two gauge invariant operators).Therefore, we can immediately identify the holomorphic moment maps µ a ∈ {β 2 , q2 q 1 , q 1 q 2 , q1 q2 , q1 q 2 , φq 2 q2 , φq 2 q 2 , φq 2 q 2 } ∋ Ba 1 . For the Ē3/2 multiplet we have (4.17) We have mapped q1 q 1 to the level-two descendant of Ē3/2 using the fact that there is no other candidate built from a single generator in Table 4 or a product of two such generators that has the correct superconformal quantum numbers and is SU(2) × U(1) ⊂ SU(3) invariant (recall also that, by construction, φ 2 decouples from the IR chiral ring). In what follows, we will make use of the following superpotential constraints, Let us now proceed to discuss potential B chiral generators of the theory • Any operators with r = j in Tables 6 and 7 cannot sit in B multiplets (see (2.10)).Since they are no D multiplets, these operators (modulo those in (4.16)) are trivial Table 7: Remaining candidate chiral ring generators for the (A 1 , D 4 ) theory (continued from Table 6).Here R, r, and j are the IR SU(2) R Cartan, U(1) r charge, and left spin. is a unique candidate for the level-two descendant of this multiplet, (q 1 q1 ) 2 .As we will show below, this operator vanishes in the IR chiral ring as a result of a superpotential constraint.Therefore, this B multiplet cannot exist in the (A 1 , D 4 ) theory. 
• (q 1 q1 ) 2 (R, r(j, j)) = (2, 1, (0, 0)): We have the following superpotential constraint ∂W ∂φ a (q 1 q1 ) a = 0 , which leads to, We have already shown that φq 1 q1 is trivial in the IR N = 1 chiral spectrum.Therefore, we are left with, • φλ α B1 has ((R, r(j, j)) = (3/2, 1(1/2, 0))): These eight operators cannot form a B primary since r < 1 + j. 34 • q 1 q1 B1 has ((R, r(j, j)) = (2, 1/2(0, 0)): These operators cannot be B primaries since r < 1. 35 34 In fact, we can show these operators are trivial in the IR chiral ring.Indeed, if they form a levelone descendant, then the primary has ((R, r(j, j)) = (1, 3/2(0, 0)), but this B multiplet has already been removed by the superpotential constraint above.If they form a level-two descendant, then the primary has (R, r(j, j)) = (1/2, 2(1/2, 0)), and the only candidate operator is M 2 φλ α .However, this is itself a level-one descendant. 35In fact, these operators are trivial in the IR chiral ring.Indeed, if they form a level-one descendant, then the primary has (R, r(j, j)) = (3/2, 1(1/2, 0)), but this also has r < Therefore, we see that we cannot construct a B primary, and so there are no B multiplets in the (A 1 , D 4 ) SCFT. 36 In particular, we see that where I is the Joseph ideal constraint.In the interacting theory, we can again show that all chiral ring generators are in the Coulomb branch and Higgs branch chiral subrings. 37In this case, this means the chiral ring generators live in Ē2 or BM 1 multiplets (with M an SO(8) adjoint index).Since the gauge group is SU(2), the naive list of chiral generators is very similar to the lists in the Argyres-Douglas examples we treated before (which were based on SU(2) gauge theory Lagrangians), so we will keep the discussion brief. We use Table 8 to write down the list of UV chiral fields.The Lagrangian in this case is .27) then the primary has ((R, r(j, j)) = (1, 3/2(0, 0)), but we have already eliminated this B multiplet by the superpotential constraint. 36In fact, using the argument in footnotes 34 and 35, we directly see that all operators except those in the Coulomb branch and Higgs branch chiral rings vanish in the IR FCS. 37 By the interacting theory, we mean the interacting theory at generic points on the conformal manifold. Note that in terms of SU(4) ⊂ SO(8), we can define q i := Q i and qi := Q i+4 where i The naive chiral ring generators are the following: • φ 2 , φλ α , λ 2 : These are, respectively, the primary, the level one, and the level-two descendants of the Ē2 multiplet.In the interacting theory, λ 2 mixes with φ(Q a Q a ) via (4.27). • (there are 28 of these operators transforming in the adjoint of SO( 8)). These are the BA 1 Higgs branch generators housing the Noether currents of the flavor symmetry. • The other possible operators with j = 0 have the following form: φQQ (R = 1, r = 1).If they are non-trivial in the chiral ring, these operators can only sit in D multiplets.In the interacting theory, one linear combination with λ 2 becomes the level-two descendant of Ē2 . • At j = 1/2, the possible operators have either of two forms, multiplets. 
To summarize, we see that these operators may lie in the following multiplets: (1) D multiplets.However, in the interacting theory, these multiplets are not present due to the same logic as in the (A 1 , A 3 ) and (A 1 , D 4 ) cases: the associated chiral algebra is of AKM type ( so(8) −2 in this case [4]).To see this statement more concretely, note that, when we turn on interactions, thirty six linear combinations of the φQQ and λ 2 operators discussed above pair up with thirty-six of the thirty-seven stress tensor multiplets in the free theory to become long multiplets (the remaining stress tensor multiplet is protected along the full conformal manifold; see [36] for further details of this argument).(2) As descendants in a B1 2 , 5 2 ( 1 2 ,0) multiplet.However, we cannot construct a candidate for the superconformal primary of this multiplet in a rank-one theory.(3) As a descendant in a B1,2(0,0) multiplet (we can construct a candidate primary for this multiplet, but the multiplet itself is known to be absent in this theory since it does not have a mixed branch; more on this later). Therefore, we conclude that, in the interacting theory, the only non-trivial chiral generators lie in Ē2 or B1 multiplets. Once again, we proceed to take normal-ordered quadratic products of the above Coulomb and Higgs branch generators.There are two possible sources of B multiplets which we can form by taking products of the above generators.We now study each case individually. We can take a linear combination of these operators where κ ∈ C, and O is a superconformal primary of a B1,3(0,0) multiplet.It is trivial to check that this operator is present in the free theory.In the index, it is also easy to check that there is no B1,3(0,0) contribution at the leading order it can appear.The reason is that there are thirty-eight C0,2(0,0) multiplets in the free theory: thirty-seven from Ē2 × Ĉ0(0,0) and one of the form f ABC ǫ αβ ǫ IJ λ IA α λ JB β φ C , where we have contracted SU(2) R and Lorentz indices of the gauginos (note that the anti-symmetrization of gauge indices makes this latter operator a superconformal primary).On the other hand, there are thirtyseven B1,3(0,0) multiplets: thirty-six arising from Ē2 × D1(0,0) and one linear combination as in (4.29).The corresponding index contributions cancel up to a net C0,2(0,0) contribution (note that three of the C0,2(0,0) multiplets are flavor singlets and so are two of the B1,3(0,0) contributions). Finally, we can form a potential primary of a B1,2(0,0) multiplet from the operator, M 0 Tr(QQ) , (4.30) which would correspond to a mixed branch.These operators vanish in the chiral ring at leading order in the coupling (they pair up with an adjoint-valued C0,1(0,0) multiplet to become long multiplet descendants).This statement is consistent with the fact that the theory does not have a mixed branch.In fact, any such candidate obtained from the Ē2m × Bn OPE for m, n ∈ N will not correspond to a Bn,2m(0,0) multiplet for the same reason. Conclusion In this paper, we have explored various new roles that B multiplets play in 4d N = 2 SCFTs.We have shown that these operators are ubiquitous and are connected to many interesting phenomena.Our work also raises several questions: • Can our algebraic proof that the product of primaries in D1/2(0,0) × D1/2(0,0) is nonvanishing and produces a B multiplet be generalized to show that any N > 2 theory has an infinite chiral ring generated by the primary of D1/2(0,0) ? 
38Since this multiplet houses the extra supercurrents in its higher components, it would be interesting to understand if such an infinite chiral ring exists as a consequence of physics related to Ward identities for extended SUSY.If so, this may be a step in an abstract CFTbased proof that all N > 2 theories have a mixed branch of moduli space. • We saw that the existence of (adjoint valued) B multiplets can be a diagnostic of the Z 2 -valued Sp(N) 't Hooft anomaly in [8].What about more general 't Hooft anomalies?Can these operators help diagnose the existence of 2-groups and other more elaborate structures? • We saw that B multiplets form a natural ideal in the FCS ring.Can we use this fact to prove new results about chiral rings in 4d N = 2 SCFTs? • The B production channels we discussed do not lead to B chiral ring generators.It would be interesting to understand the most general conditions under which B multiplets can and cannot house chiral ring generators (see footnote 6).Can gauging discrete symmetries help? 38 Such a ring would contain an infinite number of B multiplets. . 22 ) In this case, O W α is highest SU(2) R weight and satisfies( Q1 ) 2 O W α = 0 .(3.23)Let us now understand where O W α can sit in the IR superconformal representation theory. will study examples where the corresponding Coulomb branch chiral ring is freely generated.At the level of superconformal representation theory, this means that we have a single Ēr generator giving rise to an N = 2 chiral ring of primaries via the n-fold OPEs (for n ∈ Z >0 ), Ē×n r ∋ Ērn .Most of the rank-one theories we study also have an N = 2 flavor symmetry.As we have seen in previous sections, the Noether currents sit in corresponding B1 multiplets transforming in the adjoint of the flavor symmetry.These multiplets are associated with the Higgs branch.For all the theories we consider, the B1 multiplets generate the Higgs branch chiral ring. . 13 ) These constraints cannot receive quantum corrections because they would involve other operators with R = 2 and r = 1/3 (and having the same SO(3) weight as the operators in question) since φ 2 decouples.Field Rep U(1) B U(1) . 24 ) This constraint cannot receive quantum corrections since all other operators with the same superconformal quantum numbers have already been shown to vanish in the IR chiral ring.•M 2 B1 has (R, r(j, j)) = (1, 3/2(0, 0)): Here we consider the eight so-called "mixed branch" primaries consisting of the product of the primaries in the Coulomb and Higgs branch chiral ring generators.The following chiral ring relation causes them to vanish: ∂W ∂q 2 φq 2 = M 2 φq 2 q2 = 0 .(4.25)By an SU(3) flavor rotation, all other products of the form M 2 B1 also vanish in the IR chiral ring. 4. 5 . 4 Finally, we discuss the N = 2 SU(2) SQCD with N f = Lagrangian theory of SU(2) SQCD with four flavors.Unlike the previous cases, this theory is not isolated (it has an exactly marginal coupling). Table 6 : List of candidate chiral ring generators for the (A 1 , D 4 ) theory (continued in Table7).Here R, r, and j are the IR SU(2) R Cartan, U(1) r charge, and left spin.U(1) B and U(1) m are N = 2 flavor symmetries. 
Table 8: List of chiral fields appearing in the Lagrangian of the SU(2) theory with four fundamental flavors. Here r, R, and j are the U(1)r, SU(2)R Cartan, and left spin, respectively. In the rightmost column we record the representation under the SO(8) flavor symmetry (the matter fields transform in the vector representation). If we want to write our matter field charges in terms of SU(4) ⊂ SO(8), we can define q 1 + j. If it is a level-two descendant,
15,002.4
2023-06-21T00:00:00.000
[ "Mathematics" ]
F-Box Protein FBXW17-Mediated Proteasomal Degradation of Protein Methyltransferase PRMT6 Exaggerates CSE-Induced Lung Epithelial Inflammation and Apoptosis Chronic obstructive pulmonary disease (COPD) is a chronic debilitating lung disease, characterized by progressive airway inflammation and lung structural cell death. Cigarette smoke is considered the most common risk factor of COPD pathogenesis. Understanding the molecular mechanisms of persistent inflammation and epithelial apoptosis induced by cigarette smoke would be extremely beneficial for improving the treatment and prevention of COPD. A histone methyl modifier, protein arginine N-methyltransferase 6 (PRMT6), is reported to alleviate cigarette smoke extract (CSE)-induced emphysema through inhibiting inflammation and cell apoptosis. However, few studies have focused on the modulation of PRMT6 in regulating inflammation and cell apoptosis. In this study, we showed that protein expression of PRMT6 was aberrantly decreased in the lung tissue of COPD patients and CSE-treated epithelial cells. FBXW17, a member of the Skp1-Cullin-F-box (SCF) family of E3 ubiquitin ligases, selectively bound to PRMT6 in nuclei to modulate its elimination in the proteasome system. Proteasome inhibitor or silencing of FBXW17 abrogated CSE-induced PRMT6 protein degradation. Furthermore, negative alteration of FBXW17/PRMT6 signaling lessened the proapoptotic and proinflammatory effects of CSE in lung epithelial cells. Our study, therefore, provides a potential therapeutic target against the airway inflammation and cell death in CS-induced COPD. INTRODUCTION Chronic obstructive pulmonary disease (COPD) is a chronic debilitating and progressive lung disease, leading to more than 3 million deaths worldwide each year, which is characterized by persistent inflammation in the lung parenchyma and small airways (Lozano et al., 2012;Vogelmeier et al., 2017). Smoking is the prominent risk for the developmental progress of COPD (Shapiro, 2001;La Rocca et al., 2007;Laniado-Laborin, 2009;Angelis et al., 2014;Vogelmeier et al., 2017). Long-term exposure to cigarette smoke induces severe damage in the lung epithelial cells and contributes to their death, and thus, the induction of immune response, and subsequent destruction of lung parenchyma (Yang et al., 2006;Comer et al., 2013;Angelis et al., 2014;Kim et al., 2016). Nevertheless, progressive and persistent injury of the lungs causes irreversible airflow limitation and severely impacts a patient's life (Vogelmeier et al., 2017). Although symptomatic treatment can attenuate symptoms and slow down the progression of disease, currently, COPD is still not curable (Lozano et al., 2012). Discovering associated signaling or key factors in immune response and cell death might provide a useful strategy for lessening the severity of pulmonary inflammation and aberrant apoptosis. Epigenetics is emerging to play an essential role in the modulation of cell fate decision and inflammatory responses, as well as in the pathogenesis of COPD (Bayarsaihan, 2011;Rivera and Ross, 2013;Wahlin et al., 2013;Hagood, 2014). Targeting epigenetic modifiers might be a potential choice for COPD therapies (Comer et al., 2015;Wu et al., 2018). 
Protein arginine N-methyltransferase 6 (PRMT6) is a type I histone methyl-modified enzyme, displaying a unique substrate specificity to catalyze the asymmetric dimethylation of histone H3 arginine 2 (H3R2me2a), as well as binding to the H3 tail to prevent methylation of H3K4 (Guccione et al., 2007;Hyllus et al., 2007). PRMT6 resides predominantly in the nucleus and is associated with gene transcription suppression (Frankel et al., 2002). Despite its epigenetic function, PRMT6 also plays an essential role in the methylation of non-histone proteins and is involved in a variety of life processes, including cell senescence (Phalke et al., 2012;Stein et al., 2012), cell cycle arrest (Kleinschmidt et al., 2012;Wang et al., 2012), cell apoptosis (Kang et al., 2015;Luo et al., 2015), and immune responses (Zhang et al., 2019). Evidence shows that PRMT6 controls inflammatory gene expression via regulating transcription factors involved in inflammation and cell death signaling, including nuclear factor kappa B (NF-κB) and G-protein pathway suppressor 2 (GPS2) (Di Lorenzo et al., 2014;Zhang et al., 2019). Ablation of PRMT6 negatively regulates p53 expression and thus induces cell senescence (Neault et al., 2012;Phalke et al., 2012). Our previous work proved that PRMT6 overexpression in the respiratory system alleviated the emphysema change in a cigarette smoke extract intraperitoneal-established COPD mouse model (He et al., 2017). Collectively, these studies implicate the important role of PRMT6 in the regulation of inflammation and cell fate decision, but its modulating mechanisms in CS-induced airway epithelial cell remain unclear. The ubiquitin-proteasome system (UPS) is a major machinery for the degradation of intracellular proteins, terminating the function of targeted proteins to regulate a multitude of cellular processes (Hershko and Ciechanover, 1986;Rock et al., 1994;Lecker et al., 2006;Schwartz and Ciechanover, 2009;Stintzing and Lenz, 2014). The enzymatic chains, including E1-activating enzyme, E2-conjugating enzyme, and E3 ligase, participate in connecting ubiquitin chains to target protein substrates (Hershko and Ciechanover, 1986;Schwartz and Ciechanover, 2009;Shen et al., 2013). The >1,000 identified E3 enzymes are the largest family, ensuring the specificity of substrate for ubiquitylation and destined degradation (Meyer-Schwesinger, 2019). Among the E3 enzymes, the SCF ligase complex is the largest and the most well-characterized subfamily (Nandi et al., 2006;Meyer-Schwesinger, 2019), while in this complex, the F-box protein (FBP) is the pivotal element for specific binding to substrate . In humans, more than 60 F-box proteins have been described, which can be classified into three subunits: F-box and WD-40 domain proteins (FBXWs), F-box and leucine-rich repeat proteins (FBXLs), and F-box-only proteins (FBXOs) Skaar et al., 2013). Several FBPs, such as β-Trcp, FBXW7, FBXW15, FBXO17, and FBXL19, are characterized as participating in cellular activities Zou et al., 2013;Suber et al., 2017;Ci et al., 2018;Yang et al., 2018). However, the function of multiple FBPs still awaits further discovery. FBXW17, an ortholog of FBXW12 in mouse genome, is a 466-amino-acid protein containing an F-box motif for the recognition of substrate and the recruitment of ubiquitylation (Jin et al., 2004). As an SCF protein member, FBXW17 has a similar sequence to other FBPs, which is recently reported to ubiquitinate FBXL19 for its degradation (Dong et al., 2020). 
However, its function and other molecular targets remain to be determined further depending on its role in SCF subunits. Here, we found that FBXW17 targeted PRMT6, selectively mediating the proteasomal degradation of PRMT6 to exaggerate cigarette smoke extract (CSE)-induced pulmonary inflammation and apoptosis. Our results might provide insights into developing new approaches to limit the severity of inflammation and apoptosis through regulating the proteasome machinery to dispose of indispensable epigenetic enzymes linked to CSEinduced lung injury. Patients and Samples The Research Medical Ethics Committee of the Second Xiangya Hospital of Central South University (Changsha, China) granted approval for this study. Fresh tissue samples from patients with COPD were collected from the Department of Thoracic Surgery, Second Xiangya Hospital of Central South University. Tissue samples from the lungs of patients who received thoracic surgery at the Second Xiangya Hospital were used for Western blotting. Cigarette Smoke Extract Preparation Cigarette smoke extract was prepared with slight modifications, as previously described (Kang et al., 2015). Briefly, one cigarette (Furong Brand, filtered cigarettes; 12 mg tar; 1.1 mg nicotine; 14 mg carbon monoxide per cigarette) was used for bubbling through 10 ml of serum-free medium until the cigarette smoke largely disappeared in the syringe. Then, the CSE solution was filtered through an aseptic 0.22-µm filter. The filtered CSE was considered as 100% and serially diluted with culture medium for subsequent study. The CSE solution was prepared freshly for all experiments. Plasmid and shRNA Transfection The plasmid vector containing V5, green fluorescence protein (GFP), or FLAG tag was constructed by and purchased from Cyagen (Shanghai, China) and Genchem (Shanghai, China). Plasmids were transfected into MLE12 cells using a Nucleofection TM II system (Amaxa Biosystems, Gaithersburg, MD, United States) as previously described (Zou et al., 2013). Briefly, 1 × 10 6 MLE12 cells were homogeneously suspended in 100 µl of electrotransfection buffer [1 × phosphate-buffered saline (PBS) with 20 mM HEPES], and 1-3 µg of plasmids was added into each cuvette. Electroporation was executed in the preset program of T-013. After electroporation, 1 ml hydrocortisone, insulin, transferrin, estradiol, and selenium (HITES) medium was immediately added to each cuvette. Transfected cells were cultured in six-well plate for 48 h and were then used for further assay. Scramble shRNA and small hairpin RNAs (shRNAs) against FBXW17 were purchased from GeneChem (Shanghai, China). shRNAs were transfected into cells by Lipofectamine 2000 reagents according to the manufacturer's instructions. After 72 h of incubation, transfected cells were used for further CSE treatment and analysis. In vitro Binding Assay MLE12 cells were lysed with an IP lysis buffer (catalog no. 87787, Thermo Fisher, United States) on ice for 30 min. Then, 1 mg of cell lysates from each group was subjected to immunoprecipitation by incubating with FLAG or PRMT6 primary antibodies overnight at 4 • C, followed by incubation with 40 µl of protein A/G-agarose for 2 h at room temperature. The immunoprecipitants were then washed three times with 1% Triton X-100 in ice-cold PBS. Ladder buffer was added to resuspend beads and boiled at 95 • C for 5 min. Western blotting with an enhanced ECL system was applied for detection. 
Immunofluorescence Staining MLE12 cells (2 × 10 5 ) were plated on 35 mm glass-bottom culture dishes at 70% confluence. Then, 4% paraformaldehyde was used to fix cells for a period of 20 min. Cells were then permeabilized with 0.1% Triton X-100 for 2 min. After being incubated with a 1:500 dilution of antibodies to GFP or FLAG tag, cells were immunoblotted with a 1:200 dilution of fluorescence-conjugated secondary goat anti-mouse antibody. The nucleus was stained with 4'6-diamidino-2-phenylindole (DAPI). Immunofluorescent imaging of the cell was performed on a Nikon confocal microscope. Flow Cytometry for Apoptosis Detection The apoptotic cells under FBXW17 overexpression and FBXW17 knockdown after CSE treatment were assessed by flow cytometry with annexin V-APC conjugate fluorescein (annexin V-APC) and propidium iodide (PI) staining. According to manufacturer's instruction (catalog no. 640932, BioLegend, CA, United States), the cells were trypsinized and washed twice with cold BioLegend's Cell Staining Buffer (catalog no. 420201). Then, the cells were spin down and resuspended in provided annexin V binding buffer at a concentration of 2 × 10 6 cells/ml. After that, 100 µl of cell suspension was transferred into a 5-ml test tube, followed by adding 5 µl of annexin V-APC and 10 µl of PI solution. The cells were gently mixed and incubate in the dark for 15 min at room temperature. Finally, 400 µl of annexin V binding buffer was added to each tube. The apoptosis was analyzed by flow cytometry with Beckman CytoFLEX (Beckman Coulter, United States). The annexin V-positive, PI-negative (the lower right quadrant of the dot plot) and annexin V-positive, PIpositive (the upper right quadrant of the dot plot) cells were considered as early and late apoptotic cells, respectively. The sum of both early and late apoptotic cells percentage was assumed as apoptosis. Statistical Analysis All quantified data are presented as mean ± SD. One-way analysis of variance (ANOVA) and an unpaired Student's t test were used for statistical analysis. A value of p < 0.05 was considered indicative of statistically significant difference. All the statistical analysis was carried out with GraphPad Prism 5 software. CSE Diminishes PRMT6 Protein Expression Accumulated data show that cigarette smoke induces lung epithelial cell dysfunction and death, following impairment to the integrity of the airway barrier contributing to the pathogenesis of COPD (Heijink et al., 2012;Amatngalim et al., 2016). PRMT6 was reported to be a negative regulator of cell apoptosis and inflammatory response (Kang et al., 2015;He et al., 2017He et al., , 2020Zhang et al., 2019). We first evaluated the protein level of PRMT6 in the lung tissue of COPD patients with Western blotting. We observed that the protein level of PRMT6 is aberrantly decreased throughout the lung tissue of COPD patients ( Figure 1A). To FIGURE 1 | Cigarette smoke extract (CSE) diminishes protein arginine N-methyltransferase 6 (PRMT6) protein expression. (A) Lung tissue of chronic obstructive pulmonary disease (COPD) patients and healthy controls were lysed for PRMT6 and β-actin immunoblotting. The densitometry results of PRMT6 protein expression are plotted in the right-hand panel. (B) CSE (2, 4, and 6%) was applied to treat BEAS-2B cells for 8 h. Cell lysate were collected and applied to immunoblotting. The plotted densitometric results were presented in the lower panels. (C) BEAS-2B cells were stimulated with 4% CSE for 2, 4, and 8 h. 
Immunoblotting was performed to examine PRMT6 protein expression. Lower panels showed the densitometric results of the blots. (D) Murine lung epithelial cells (MLEs) were treated with 2.5, 5, and 7.5% CSE. (E) CSE (5%) was applied to MLE12 cells for a range of time points. Cell lysates were analyzed with PRMT6 or β-actin immunoblotting. The relative expression of PRMT6 protein is plotted in the right-hand panel. Data represent n = 3 separate experiments. The graph shows mean ± SD and " * " denotes p < 0.05. The black line and " * " indicated the differences between groups. further verify whether cigarette smoke decreases the PRMT6 level in vitro, we first conducted CSE treatment in human bronchial epithelial BEAS-2B cells. Results from immunoblotting analysis showed that CSE decreased PRMT6 protein levels in both a concentration- and time-dependent manner, with 4% CSE significantly reducing the protein levels of PRMT6 over a period of 8 h (Figures 1B,C). To confirm this observation, mouse lung epithelial MLE12 cells were subjected to CSE stimulation. In MLE12 cells, CSE at a concentration of 5% markedly reduced the PRMT6 protein level at 8 h (Figures 1D,E). These data indicated that cigarette smoke diminished PRMT6 protein in a time-dependent manner, both in vivo and in vitro. PRMT6 Is Degraded via Proteasome System Proteins destined to be turned over are generally disposed of by the proteasome or lysosome machinery (Stintzing and Lenz, 2014). To investigate whether PRMT6 is unstable and subject to degradation, we first assessed the protein stability of PRMT6. Lung epithelial MLE12 cells were treated with the protein biosynthesis inhibitor cycloheximide (CHX) (20 µg/ml), and PRMT6 protein levels were then analyzed by immunoblotting. Results showed that CHX diminished endogenous PRMT6 mass in a time-dependent manner and that PRMT6 is a labile protein with a predicted half-life (t1/2) of ∼5 h. To further identify whether the proteasomal or lysosomal system is involved in the degradation of PRMT6 protein, we treated MLE12 cells with the proteasomal inhibitor MG-132 (20 µM) or the lysosomal inhibitor leupeptin (100 µM). Endogenous PRMT6 accumulated under MG-132 exposure, while it was hardly changed under leupeptin treatment (Figures 2A,B). These results support the conclusion that PRMT6 is an unstable protein degraded via the proteasome machinery rather than the lysosomal system. FBXW17 Targets the Proteasomal Degradation of PRMT6 F-box proteins, the substrate-recognition components of the SCF ubiquitin ligase, mediate binding to substrates for subsequent ubiquitin-proteasomal proteolysis (Kipreos and Pagano, 2000). To clarify which F-box protein determines PRMT6 degradation, several F-box overexpression plasmids were constructed and the PRMT6 level was examined. We found that overexpression of FBXW17 in MLE12 cells decreased the PRMT6 protein content (Figure 3A). To test whether FBXW17 specifically targeted PRMT6 degradation, we analyzed the effect of a randomly selected FBXW14-FLAG plasmid on PRMT6 degradation in MLE12 cells. Results showed that FBXW17, but not FBXW14, decreased the PRMT6 protein level in a dose-dependent manner (Figure 3B). Moreover, under treatment with 20 µg/ml CHX, the PRMT6 protein level decreased faster in FBXW17-overexpressing cells than in FBXW14-overexpressing cells (Figure 3C).
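The ∼5 h half-life quoted above comes from CHX-chase densitometry. As a minimal sketch of how such an estimate can be obtained, the snippet below fits a first-order decay to normalized band intensities and converts the rate constant into a half-life. The time points and intensities are invented for demonstration and the paper does not state which fitting procedure it used, so this is only an illustration of the general approach.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative CHX-chase densitometry (PRMT6 / beta-actin, normalized to t = 0 h).
# These numbers are made up for demonstration; the paper reports t1/2 ~ 5 h.
time_h = np.array([0, 2, 4, 6, 8])
signal = np.array([1.00, 0.78, 0.55, 0.44, 0.31])

def first_order_decay(t, k):
    """Single-exponential decay with rate constant k (1/h)."""
    return np.exp(-k * t)

(k_fit,), _ = curve_fit(first_order_decay, time_h, signal, p0=[0.1])
half_life = np.log(2) / k_fit
print(f"fitted k = {k_fit:.3f} 1/h, estimated t1/2 = {half_life:.1f} h")
```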
To investigate whether FBXW17-mediated PRMT6 degradation occurs via the proteasome pathway, we overexpressed FBXW17-FLAG in wild-type MLE12 cells under MG132 or leupeptin treatment. FBXW17 decreased the PRMT6 protein level under MG132 treatment but not under leupeptin treatment (Figure 3D), indicating that FBXW17 specifically targeted PRMT6 degradation via a proteasomal pathway. FBXW17 Interacts With PRMT6 and Colocalizes in the Nucleus In general, targeted proteins interact with F-box proteins and are then subjected to subsequent ubiquitin-proteasomal degradation (Skaar et al., 2013). To explore whether FBXW17 directly binds PRMT6 and mediates PRMT6 degradation, we overexpressed FLAG-tagged FBXW17 plasmids in MLE12 cells and immunoprecipitated the ectopically expressed FBXW17 protein using a FLAG antibody in the presence of MG132. Further analysis of the immunoprecipitated protein mix by PRMT6 immunoblotting demonstrated that PRMT6 was associated with FBXW17 (Figure 4A). In turn, we immunoprecipitated endogenous PRMT6 using a PRMT6 antibody under MG132 stimulation and analyzed the immunoprecipitants by immunoblotting with a FLAG antibody, which demonstrated that FBXW17 was also associated with PRMT6 (Figure 4B). To further elucidate the direct binding of FBXW17 and PRMT6, immunofluorescence was conducted by transfection of FBXW17-FITC (red) and PRMT6-GFP (green) plasmids. PRMT6 is considered to be a nuclear protein (Frankel et al., 2002). Our data also showed that overexpressed mouse PRMT6 with a green fluorescent protein (GFP) tag localized mainly in the nucleus. The merged figure indicated that FBXW17 and PRMT6 coexist mainly in the nucleus (Figure 4C). Overall, these results suggested that FBXW17 was associated with PRMT6 in the nucleus, indicating that FBXW17-mediated PRMT6 protein reduction might happen in the nucleus. FBXW17 Promotes CSE-Induced PRMT6 Degradation Previous data showed that CSE decreased both the protein and mRNA levels of PRMT6 (Kang et al., 2015; He et al., 2017). The levels of proteins within cells are determined not only by rates of synthesis but also by rates of degradation. However, whether CSE-mediated PRMT6 protein reduction can be blocked by a proteasome inhibitor is not clear, so here we stimulated MLE12 cells with CSE together with CHX or MG132. Notably, the protein synthesis inhibitor CHX markedly promoted CSE-induced PRMT6 protein reduction, while blockade of the proteasome machinery with the proteasome inhibitor MG132 protected PRMT6 protein from CSE-induced degradation (Figure 5A). The above data suggested that CSE promotes PRMT6 reduction through both mRNA downregulation and protein degradation. FBXW17 mediated PRMT6 proteasomal degradation (Figures 3, 4). To determine whether FBXW17 contributed to the CSE-mediated reduction in PRMT6 protein, we modulated FBXW17 with overexpression plasmids and shRNA in MLE12 cells and treated the cells with CSE. RT-qPCR results showed that FBXW17 was overexpressed by the FBXW17-FLAG plasmid and was successfully knocked down by shRNA (Figures 5B,C). PRMT6 protein decreased under FBXW17 overexpression (Figure 5B), while it increased after FBXW17 silencing (Figure 5C). Furthermore, the mRNA level of FBXW17 was upregulated by CSE (Figure 5D). Meanwhile, CSE decreased PRMT6 mRNA (Figure 5E). However, modifying FBXW17 had no influence on the PRMT6 mRNA level under CSE treatment.
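The RT-qPCR results above are reported as relative mRNA levels. The paper does not state which quantification method was applied, so as an illustration only, the sketch below uses the common 2^(−ΔΔCt) approach with made-up Ct values, normalizing PRMT6 to β-actin and comparing CSE-treated with control cells; the function and variable names are not from the paper.

```python
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Standard 2^-(delta-delta Ct) relative quantification.

    ct_target / ct_reference: Ct values of the gene of interest and the housekeeping
    gene in the treated sample; *_ctrl: the same values in the control sample.
    """
    delta_treated = ct_target - ct_reference
    delta_control = ct_target_ctrl - ct_reference_ctrl
    return 2.0 ** -(delta_treated - delta_control)

# Illustrative Ct values (not the study's data): PRMT6 vs. beta-actin,
# CSE-treated vs. untreated MLE12 cells.
fold_change = relative_expression(ct_target=26.8, ct_reference=17.2,
                                  ct_target_ctrl=25.4, ct_reference_ctrl=17.0)
print(f"PRMT6 mRNA fold change (CSE vs control): {fold_change:.2f}")
```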
Interestingly, silencing of FBXW17 significantly inhibited the reduction in PRMT6 protein induced by CSE; thus, overexpression of FBXW17 further promoted the protein reduction of PRMT6 in CSE-treated cells (Figure 5F). Hence, our results indicated that FBXW17 acted as an activator in CSE-induced PRMT6 protein degradation in the lung epithelial cell line. FBXW17/PRMT6 Signaling Involves in CSE-Induced Lung Epithelial Inflammation and Apoptosis Cigarette smoke induced aberrant activation of proteasome (Kim et al., 2011). F-Box proteins were suggested to be enrolled in cigarette smoke-stimulated protein degradation, indicating the potential roles of F-box protein in COPD development . Combined with the above data, we hypothesized that CSE-induced lung inflammation and apoptosis may be regulated by FBXW17/ PRMT6 signaling. We first tested whether overexpression of FBXW17 affected the apoptosis of lung epithelial cells. In line with our above observations of FBXW17 increasing in CSE-treated cells (Figure 5D), ectopic expression of FBXW17 increased the apoptosis rate of MLE12 cells ( Figure 6A, the middle panel). Given the detrimental effect on cell apoptosis, we further evaluated the effect of FBXW17 silencing on CSE-induced epithelial cell apoptosis. As expected, FBXW17 knockdown partly alleviated CSE-induced lung epithelial apoptosis ( Figure 6A, the lower panel). PRMT6 is suggested to target H3 histone arginine (R2) and cause asymmetric dimethylation of R2 (H3R2me2a), thereby inhibiting the occurrence of gene transcription (Hyllus et al., 2007). As previous studies showed, PRMT6 inhibits inflammation and apoptosis in endothelial cells with decreased methylation of the histone H3R2 site (Kang et al., 2015). Our data also proved that CSE reduced the PRMT6 level and H3R2me2a signal, as well as elevating proapoptotic protein Bax and inflammatory factors in lung epithelial cells (Figures 6C-E). In addition, the CSE-regulated PRMT6 level and its downstream signals were modified by FBXW17. Overexpression of FBXW17 enhanced the reduction in PRMT6 and the H3R2me2a signal in CSE treatment, accompanied by increased Bax and COX-2 expression ( Figure 6E). On the other hand, suppression of FBXW17 enhanced the inhibitory effects of PRMT6 and H3R2me2 on inflammation and apoptosis in lung epithelial cells induced by CSE ( Figure 6D). Collectively, these data indicated that FBXW17/PRMT6 signaling plays an important role in regulating CSE-induced lung epithelial inflammation and apoptosis. DISCUSSION In this study, we identified that (i) PRMT6 protein decreased in COPD; (ii) cigarette smoke extract decreased the unstable protein PRMT6 in epithelial cells; (iii) FBXW17 interacted with PRMT6 and specifically mediated the proteasomal degradation of PRMT6 protein in the nucleus; (iv) inhibition of the proteasome pathway and knockdown of FBXW17 partly blocked CSE-induced PRMT6 protein reduction; and (v) FBXW17/PRMT6 signaling was involved in CSE-induced lung epithelial inflammation and apoptosis. Noxious particles, including cigarette-smoke-induced airway inflammation, epithelial cell apoptosis, and oxidative stress, are considered to be the initial steps in COPD development (Tzortzaki and Siafakas, 2009). 
In addition, long-term exposure to cigarette smoke induces remodeling and narrowing of small airways and further causes the consequent destruction of the lung parenchyma as indicated by losing the alveolar attachments as a result of emphysema, and finally progression FIGURE 5 | FBXW17 promotes cigarette smoke extract (CSE)-induced protein arginine N-methyltransferase 6 (PRMT6) degradation. (A) The MLE12 cells were stimulated with 5% of CSE and cycloheximide (CHX) or MG132. Cell lysates were subjected to immunoblotting for PRMT6 and β-actin separately. The densitometry results of PRMT6 protein expression are plotted in the right-hand panel. (B) MLE12 cells were transfected with vector or FBXW17 overexpression plasmids via electroporation for 48 h. The messenger RNA (mRNA) of FBXW17 was detected using quantitative real-time PCR (qRT-PCR). The data was plotted in the right panel. The relative protein expression of FLAG and PRMT6 were assayed with Western blotting. Densitometry of FLAG and PRMT6 was presented in the lower panel. (C) Scramble small hairpin RNA (shRNA) and three kinds of FBXW17 shRNA were delivered into MLE12 cells by using Lipofectamine 2000 reagent for 72 h. FBXW17 mRNA and PRMT6 protein were determined. The plotted FBXW17 mRNA and PRMT6 protein expression were presented in the middle panel and the right panel. (D-F) Vector, FBXW17-FLAG plasmid, scramble shRNA, and FBXW17 shRNA were separately transfected into MLE12 cells. Before the cells were collected, the transfected cells were stimulated with 5% CSE for 6 h. Total RNA was collected by Trizol reagents. The mRNA levels of (D) FBXW17 and (E) PRMT6 were determined with qRT-PCR. (F) Cell lysate was collected and conducted with PRMT6 or β-actin immunoblotting. The densitometric data of PRMT6 protein expression was plotted in the lower panel. Data represent n = 3 separate experiments. The graph shows mean ± SD, and " * " denotes p < 0.05. The differences between each group were indicated by black line and " * ." to airflow limitation in the lung, which is the main pathological progression of COPD (Shapiro and Ingenito, 2005). Epigenetic modifications and related enzymes are believed to be involved in COPD pathogenesis. The most confirmed epigenetic enzyme in COPD is histone deacetylase 2 (HDAC2), and its reduction is considered to be linked to airway inflammation amplification FIGURE 6 | FBXW17/protein arginine N-methyltransferase 6 (PRMT6) signaling involves in cigarette smoke extract (CSE)-induced lung epithelial inflammation and apoptosis. (A) Vector, FBXW17-FLAG plasmids, scramble small hairpin RNA (shRNA), and FBXW17 shRNA were transfected into MLE12 cells separately. Annexin V and PI kits were used to detect the rate of apoptotic cells. Before detection, cells were treated with CSE for 6 h. The "percentage of cells ± SD" values were shown in all the quadrants of flow cytometry. (B) The plotted data of apoptosis in each group are presented. (C) MLE12 cells were transfected with either overexpressed or knockdown plasmids of FBXW17. Before collection, the cells were treated with 5% CSE for 6 h. Cell lysate was conducted with immunoblotting with indicated antibodies. (D) The plotted PRMT6, H3R2me2a, and Bcl-2 protein expression were shown. (E) The relative protein expression of H3K4me3, Bax, TNF-α, IL-1β, and COX-2 were presented. Data were represented by n = 3 separate experiments. The graph shows mean ± SD. " * " denotes p < 0.05 between indicated groups. 
and glucocorticoid resistance in COPD treatment (Barnes, 2009; Lai et al., 2018). Cigarette smoke dysregulates epigenetic enzymes and modulates inflammatory responses, thus participating in the progression of chronic respiratory disease (Wu et al., 2018; Zong et al., 2019). Our previous work has proved that cigarette smoke reduces the histone arginine methyltransferase PRMT6 protein level in vascular endothelial cells (Kang et al., 2015). Moreover, ectopic PRMT6 expression in the lung alleviated the emphysema morphology change in a CSE-established mouse model (He et al., 2017, 2020). In this study, we provide further evidence that PRMT6 is decreased in COPD and in CSE-stimulated lung epithelial cells, indicating its potential role in CSE-mediated inflammation and cell apoptosis in epithelial cellular models. Aberrant expression of PRMT6 is widely observed in cancers such as lung (Avasarala et al., 2020), liver (Chan et al., 2018), gastric (Okuno et al., 2019), and prostate (Vieira et al., 2014) cancer, as well as in viral infections (Zhang et al., 2019). However, the modulation of the PRMT6 protein has not been elucidated. Several histone modification enzymes are naturally subject to degradation, but evidence on whether this occurs through the ubiquitin-proteasomal machinery, specifically mediated by F-box components, is more limited (Zou and Mallampalli, 2014). For the first time, we report that PRMT6 is a short-lived protein that is degraded via a proteasome pathway specifically mediated by the F-box protein FBXW17, rather than via the lysosome. Coimmunoprecipitation of PRMT6 and FBXW17 showed that the ectopically expressed F-box subunit FBXW17 coexisted with PRMT6 and resulted in PRMT6 degradation. Frankel et al. (2002) showed that PRMT6 resides predominantly in the nucleus. Immunofluorescent staining of PRMT6 in our study also showed that it mainly localized in the nucleus together with ectopically expressed FBXW17. Moreover, the CSE-induced reduction in PRMT6 protein was partly rescued by the proteasome inhibitor MG132 or by FBXW17 knockdown. FBXW17-induced PRMT6 turnover also caused a shift in histone arginine methylation, thus influencing inflammatory and apoptotic gene transcription. Although others have shown that FBXW17 is restricted to the mouse genome, our data raise a new paradigm for the activation of the FBXW17 gene specifically within lung epithelia by CSE to control histone modification and related enzymes. F-box proteins contribute to the pathway of ubiquitin-mediated protein disposal by specifically providing substrates to the SCF superfamily of E3 ligases (Meyer-Schwesinger, 2019). SCF complex-mediated protein degradation has been proven to participate in numerous cellular activities, such as gene expression regulation and signaling transduction (Skowyra et al., 1997, 1999). Accumulated data have shown that F-box proteins play important roles in the modulation of inflammation or cell apoptosis. Zhao et al. (2012) reported that the F-box protein FBXL19 mediates the ubiquitin-proteasomal degradation of the indispensable receptor linked to interleukin 33 (IL-33) to limit sepsis-induced pulmonary inflammation. A recent study showed that FBXW17 ubiquitinates FBXL19 for its degradation. Silencing of FBXW17 attenuated lysophosphatidic acid (LPA)-induced epithelial cell migration (Dong et al., 2020), which suggested a potential role of FBXW17 in cell function regulation.
Small molecules inhibiting FBXO3 reduce cytokine release and lessen associated inflammatory diseases via destabilizing tumor necrosis factor receptor-associated factor (TRAF) protein Mallampalli et al., 2013). In lung epithelia, FBXO17 targets glycogen synthase kinase-3β (GSK3β) for its polyubiquitination and degradation, thus limiting LPS-induced inflammation (Suber et al., 2017). In addition, FBPs govern cell apoptosis via targeting BCL-associated proteins for degradation (Inuzuka et al., 2011;Duan et al., 2012). Here, we showed that FBXW17 overexpression exaggerated CSE-induced epithelial cell apoptosis and inflammation through repressing the antiapoptotic BCL-2 protein while promoting the proapoptotic protein Bax and inflammatory genes like IL-1β and COX-2. Moreover, silencing of FBXW17 ameliorated smoke-induced apoptosis and inflammation of epithelia. Our study highlighted the importance of manipulating the proteolytic processing of a methyltransferase by F-box protein for proteasomal degradation, which was suggested to profoundly attenuate CSE-mediated lung epithelial inflammation and cell apoptosis. Our research also added a paradigm of FBXW17 function in cell fate decision. In summary, these findings identify that inhibiting FBXW17/PRMT6 signaling is an important way to attenuate cigarette-smoke-induced inflammation and apoptosis of epithelial cell. This study also elucidates a novel molecular mechanism of PRMT6 regulation and reveals a crosstalk between epigenetic modification and proteasome system regulation. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Research Ethics Committee of the Second Xiangya Hospital of Central South University. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS YC was responsible for the study concept and design. TL, XH, and LL performed the experiments. TL, HZ, and SR contributed to analysis and interpretation of the data and responsible for critically revising the manuscript. All authors involved in editing the manuscript and approved the final version for submission. FUNDING This study was supported in part by grants from the National Natural Science Foundation of China (Nos. 81270100, 81873410, 81400032, and 81700070), the Fundamental Research Funds
6,795.6
2021-04-20T00:00:00.000
[ "Biology" ]
Multi-formation track initiation method based on density clustering Aiming at the problem that existing track initiation algorithms perform poorly when initiating multiple formation tracks under strong clutter, a multi-formation track initiation algorithm based on the Hough transform and density clustering is proposed. The algorithm first combines the motion information of the targets and the timing parameters of the detection points to filter the detection data set and exclude as many non-target points as possible. It then uses the Hough transform with a low threshold to process the obtained detection point set and obtain preliminary formation tracks. Finally, according to the characteristics of formation targets, the overall formation track is obtained by a density clustering method, which solves the problems of track crossing and confusion. Simulation results show that, compared with the standard Hough transform method and its derivative algorithm, the proposed algorithm can initiate the tracks of formation targets under strong clutter and has good performance. Introduction Track initiation technology, applied at the initial stage of target tracking, has been researched and developed for a long time. Its purpose is to determine track information before stable tracking of the target begins, and the accuracy of the initiation result directly affects the subsequent tracking process. The track initiation process is present in all radar data processing problems and is the first problem in multi-target tracking [1]. Existing track initiation algorithms are mainly divided into two categories: the first category is target-oriented sequential processing technology, and the second category is measurement-oriented batch processing technology. Sequential processing technology is only suitable for sparse-clutter environments, and its representative algorithms include the intuitive method and the logic method [2]. Batch processing technology is mainly suitable for clutter-dense environments, and most of its algorithms are related to the Hough transform method [3][4]. In modern warfare, attack weapons such as aircraft and missiles often attack the enemy in formations, which has become a powerful form of combat. Compared with traditional multi-target scenarios, the targets in a formation are spatially close and share essentially the same motion state, so their tracks easily cross and alias at initiation. Traditional multi-target track initiation algorithms therefore have difficulty handling formation targets effectively, and formation target track initiation has attracted the attention of scholars. According to the fineness with which the formation target is initiated, existing algorithms are divided into fine track initiation algorithms and overall track initiation algorithms. The basic principle of the fine track initiation algorithm [5][6] is to first use the motion state information of the formation target to segment the formation, then pre-interconnect the formation, and finally, based on the characteristics of the formation target, perform fine interconnection of all targets in the formation to initiate each target's track. Therefore, when the targets in the formation are dense and severely interfered with by strong clutter, the requirements on sensor performance are very high, which greatly reduces the practical value of this algorithm.
The overall track initiation algorithm [7][8][9] mostly segments the formation target, interconnects it, and estimates its speed, and regards the formation target as a whole in order to initiate the track and obtain the overall formation track information. Although this kind of algorithm has low requirements on sensor performance and is relatively easy to implement, the existing overall track initiation methods are greatly affected by clutter. When the targets in the formation are dense and accompanied by serious clutter interference, the algorithm performance is limited and the initiation results are quite biased. For example, the method in [7] does not consider the interference of clutter on formation targets, so its practical application value is not high; the method in [9] lacks a powerful means of removing clutter, and its clustering process is severely disturbed in a strong clutter environment, so there is a large error between the initiated track and the actual formation target track, which is prone to considerable confusion and crossing. Based on the above background, a formation track initiation algorithm based on density clustering and the Hough transform is proposed. The algorithm is performed in two stages, namely the initiation preparation stage and the initiation completion stage. The purpose of the initiation preparation stage is to eliminate, as far as possible, the traces not generated by targets, retain the real target data, and reduce the amount of calculation in the track initiation process; the purpose of the initiation completion stage is to complete the track initiation for the whole formation. Initial preparation stage based on Hough transform The main task of the initial preparation stage is to screen the set of detection points. Under the premise of ensuring that the true points originating from the targets are not missed, the points not generated by targets are eliminated to the greatest extent, providing a more accurate detection point data set for the initial completion stage. The initial preparation stage is mainly divided into two parts: the first part is point trace processing, and the second part is the Hough transform. Point processing The point trace processing part aims to eliminate the interference caused by strong clutter to the formation targets. The detection data set is combined with the targets' motion information and the timing parameters of the detection point traces, and the traces not generated by targets are excluded as much as possible. The trace data are filtered by requiring that the apparent speed between detection points at adjacent scan times lies within the speed limits of the target, v_min ≤ ||p_j(t_{k+1}) − p_i(t_k)|| / (t_{k+1} − t_k) ≤ v_max, (1) where p_i(t_k) and p_j(t_{k+1}) are the positions of detection points at the adjacent scan times t_k and t_{k+1}, and v_min and v_max are the lower and upper limits of the target speed, respectively. Knowing that the motion parameters of the targets in the formation are approximately the same, this particularity is used, through formula (1), to find as many true point traces from the targets as possible, remove the traces that do not meet the target speed information, and reduce the calculation amount of the algorithm. Considering that the number of detection points is large, the computational load can easily become excessive and slow the algorithm. Therefore, according to the timing parameters recorded when the detection points are acquired, only the speed screening between detection points at adjacent times is performed, which effectively reduces the cumbersome calculation between detection points and avoids comparing the detection point set of one scan against the sets of all other scans.
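To make the adjacent-scan speed screening concrete, the sketch below gates the detections of one scan against those of the previous scan using lower and upper speed limits, in the spirit of formula (1). It is illustrative only: the point coordinates, scan interval, and speed limits are invented, and the function and variable names are not from the paper.

```python
import numpy as np

def speed_filter(points_prev, points_next, dt, v_min, v_max):
    """Keep points in the next scan that are reachable from at least one point
    in the previous scan at a physically plausible speed (formula (1) style gate).

    points_prev, points_next: arrays of shape (N, 2) and (M, 2), x/y in meters.
    dt: scan interval in seconds; v_min, v_max: speed limits in m/s.
    """
    # Pairwise distances between detections of consecutive scans.
    d = np.linalg.norm(points_next[:, None, :] - points_prev[None, :, :], axis=2)
    v = d / dt
    reachable = (v >= v_min) & (v <= v_max)
    keep = reachable.any(axis=1)
    return points_next[keep]

# Toy example: two targets moving at ~250 m/s plus scattered clutter.
rng = np.random.default_rng(1)
prev_scan = np.array([[1000.0, 2000.0], [1200.0, 2100.0]])
true_next = prev_scan + np.array([250.0, 30.0])      # ~252 m/s over dt = 1 s
clutter = rng.uniform(0.0, 20000.0, size=(30, 2))
next_scan = np.vstack([true_next, clutter])
filtered = speed_filter(prev_scan, next_scan, dt=1.0, v_min=180.0, v_max=340.0)
print(f"{len(next_scan)} detections reduced to {len(filtered)} after speed gating")
```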
Hough transform This part aims to use the detection threshold of the Hough transform method to screen the detection points more deeply, and to provide, to the greatest extent, accurate detection point data for the initial completion stage. According to [1], the core formula of the Hough transform is ρ = x cos θ + y sin θ, (2) where x and y represent the horizontal and vertical coordinates of any detection point in the Cartesian coordinate system, and ρ and θ represent the coordinates of the point converted into the parameter space: θ is the angle between the normal and the abscissa axis, and ρ is the normal distance from the origin of the coordinate axes to the straight line. According to the core formula of the Hough transform, all the detected trace data are processed and converted from the rectangular coordinate system to the parameter space. The detection threshold is then used to filter out all the peaks in the parameter space that meet the condition. Finally, the selected peaks are converted back into the Cartesian coordinate system, so that a more accurate set of detection point data is obtained, and the track preparation stage is completed. Initial completion stage based on density clustering Considering that a formation target is composed of multiple targets that are close in space and maintain a stable formation structure, the tracks of the targets in the formation will suffer from problems such as crossing and confusion at initiation, and it is difficult to accurately initiate the track of each individual target in the formation. Therefore, based on the characteristic that the area where the formation target is located contains a large number of detection points with a high distribution density, the overall formation trajectory is initiated with a density clustering algorithm to complete the initiation. The DBSCAN algorithm [10] is a classic density clustering algorithm. It can cluster the data in an area according to density and find clusters of any shape. Therefore, the proposed algorithm combines the DBSCAN algorithm to initiate the overall trajectory of the formation target. The steps of this stage are as follows: 1) Use the radius parameter Eps to partition the set of detection points obtained after the initial preparation stage, and find the set to which each point belongs. Here Eps is the radius parameter in the DBSCAN algorithm, which reflects the density of the targets in the formation, and its value is related to the distance between the targets in the formation. Since the motion states of the targets in the formation are basically the same, Eps can take the maximum distance between the targets in each formation. 2) Based on the stable structure of formation targets, pick out the sets containing no fewer points than the quantity parameter MinPts, and obtain the formation detection point sets. Here MinPts is the quantity parameter in the DBSCAN algorithm; it is affected by the formation target density, and its value is related to the number of targets in the formation. Each resulting set is the detection point set of one formation at a given time, and the number of such sets is the number of formations at that time. 3) Find the center position of each formation's detection point set at each time; this center position is the center of the formation. Connecting the center positions corresponding to each formation at all times initiates the overall trajectory of the formation.
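A compact sketch of these two stages is given below: detections are accumulated in (θ, ρ) cells using formula (2), only points that vote for sufficiently populated cells are kept, and the survivors are then clustered with scikit-learn's DBSCAN so that each cluster centroid serves as the formation's overall track point for that scan. All parameter values (grid sizes, vote threshold, Eps, MinPts) are illustrative rather than the paper's settings, and for brevity the example processes a single scan, whereas the paper accumulates the detections from all scan periods.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def hough_filter(points, n_theta=90, n_rho=200, rho_max=20000.0, vote_threshold=3):
    """Keep detections whose (theta, rho) accumulator cell reaches the vote
    threshold, with rho = x*cos(theta) + y*sin(theta) as in formula (2)."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho = points[:, 0:1] * np.cos(thetas) + points[:, 1:2] * np.sin(thetas)  # (N, n_theta)
    rho_idx = np.clip(((rho + rho_max) / (2.0 * rho_max) * n_rho).astype(int), 0, n_rho - 1)
    keep = np.zeros(len(points), dtype=bool)
    for j in range(n_theta):
        votes = np.bincount(rho_idx[:, j], minlength=n_rho)
        keep |= votes[rho_idx[:, j]] >= vote_threshold
    return points[keep]

def formation_centers(points, eps, min_pts):
    """Cluster surviving detections with DBSCAN; each cluster centroid is taken
    as the overall track point of one formation for this scan."""
    if len(points) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(points)
    return [points[labels == k].mean(axis=0) for k in sorted(set(labels)) if k != -1]

# Toy single-scan example: a four-target formation plus uniform clutter.
rng = np.random.default_rng(2)
formation = np.array([[5000.0, 5000.0]]) + rng.normal(0.0, 60.0, size=(4, 2))
clutter = rng.uniform(0.0, 20000.0, size=(40, 2))
scan = np.vstack([formation, clutter])
survivors = hough_filter(scan)
centers = formation_centers(survivors, eps=300.0, min_pts=3)
print(f"{len(scan)} detections -> {len(survivors)} after Hough screening")
print("estimated formation center(s):", [np.round(c, 1) for c in centers])
```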
Assuming that the initial speeds are (270m/s, 230m/s), the upper limit of the target movement speed is , and the lower limit of speed is . Simulation environment Let the radar's direction finding error and ranging error be and respectively; the radar scan period is 1 s; the radar measurement range is =20000 m, and the sight range is , ; the number of clutters in each scanning period follows the Poisson distribution with the parameter , and the positions of the clutters are evenly distributed within the radar field of view. Assuming that the quantization levels of orientation and distance in the Hough transform are and , respectively, the orientation quantization unit and the distance quantization unit . Simulation results and analysis To simplify the language description in the text, let the algorithm in [9] be the C-H algorithm, the standard Hough transform method is the B-H algorithm, and the algorithm mentioned in this paper is the D-H algorithm. This paper compares the effects of the three algorithms of B-H algorithm, C-H algorithm and D-H algorithm to initiate the formation target track. Figure 1 is a graph of the detection data in 10 scan cycles at time . Fig. 2, Fig. 3 and Fig. 4 are the trajectory chart of the initial formation target of the three algorithms of B-H algorithm, C-H algorithm and D-H algorithm when . Comparing the three figures, it can be seen that at , there is a wrong track in the track started by the B-H algorithm, and there are phenomena such as track crossing and overlap, and the effect of the algorithm starting track is general; the initial trajectory effect of the C-H algorithm and the D-H algorithm is relatively good. There is no obvious false trajectory, and the overall trajectory of the formation target is found . Fig 2. the B-H algorithm starts the track graph when , there are a large number of erroneous trajectories in the trajectory initiated by the BH algorithm, which is also accompanied by a large number of trajectory intersections and overlaps. The algorithm performance is extremely poor; there are false tracks in the track started by the C-H algorithm, only a small part of the movement trend can be seen, and the accurate formation target movement track cannot be seen; the D-H algorithm has the best track start effect and is not affected by strong clutter. It can clearly see the overall movement trend of the formation and the initial track is accurate. Figure 10, Figure 11 and Figure 12 are the trajectory charts of the initial formation target of the three algorithms of B-H algorithm, C-H algorithm and D-H algorithm at time . Comparing the three figures, it can be seen that at , the track start effect of the B-H algorithm is very poor; the initial track effect of the C-H algorithm is poor, and some false tracks appear, and along with the phenomenon of many tracks crossing and overlapping, the accurate track of the formation target cannot be found; the D-H algorithm's track start effect is still very good, the performance is not affected by strong clutter, and it can output an accurate overall track of the formation. In general, the B-H algorithm is not suitable for starting the trajectory of the formation target. As the clutter intensity continues to increase, the C-H algorithm's track start effect becomes worse and worse, and it is not suitable for starting the track of the formation target under the strong clutter environment. However, the DH algorithm is less affected by clutter. 
It is suitable for both the trajectory of formation targets in a sparse initial clutter environment and the trajectory of formation targets in a dense initial clutter environment. Conclusion In this paper, the problem of formation target track trajectory under strong clutter is studied, and it is difficult to effectively solve this problem with the existing track start algorithm. A multi-form Hough transform track start algorithm based on density clustering is proposed. Simulation results show that this algorithm has two advantages compared to existing algorithms: (1) Aiming at the problem of strong clutter, a large amount of clutter is removed by combining the motion information of the target and the timing parameters of the detection trace. Based on the Hough transform method, the set of detection points and traces is further processed, and the points generated by the real target are selected as much as possible, which increases the accuracy of the initial track and reduces the amount of algorithm calculation. (2) Considering the close distance between the targets in the formation and the similar movement status, a formation is regarded as a whole, and the overall formation trajectory is obtained by density clustering. It solves the problems of crossing and overlapping of the starting track, and has good practical application value.
3,463.6
2020-09-05T00:00:00.000
[ "Engineering", "Computer Science" ]
COMPARATIVE EVALUATION OF SMEAR LAYER REMOVAL USING EDTA, PASSIVE ULTRASONIC IRRIGATION & DIODE LASER: A SCANNING ELECTRON MICROSCOPIC STUDY. Aim: To evaluate the efficacy of smear layer removal from the root canals using EDTA, passive ultrasonic irrigation, and diode laser during endodontic therapy. Materials and method: 30 extracted human mandibular first premolar teeth were selected. Access cavity preparation was done and the working length was determined. Instrumentation was initiated with ISO hand K-files from size 15 up to size 20, followed by ProTaper rotary files up to size F3; 2 ml of NaOCl was used as an irrigant after every instrument, followed by a rinse with 3 ml of distilled water. Samples were divided into three groups (group I: EDTA + PUI, group II: EDTA + diode laser, group III: EDTA + diode laser + PUI). Samples were divided longitudinally. Scanning electron microscope examination of the canals was done at the apical third. Result: The combination of diode laser and ultrasonics with EDTA had the least smear layer scores. Conclusion: The combination of diode laser and passive ultrasonic irrigation showed the best result in removal of the smear layer. Introduction:- The fundamental aim of non-surgical endodontic treatment is to shape and clean the root canals as thoroughly as possible to eliminate debris and microorganisms and achieve a three-dimensional fluid-tight seal. [1] A complete debridement of the root canal is essential to achieve effective disinfection and a three-dimensional obturation for a favorable long-term prognosis. However, after instrumentation of the root canals, an amorphous, irregular layer known as the smear layer is formed on the root canal walls. [2] Irrigation is defined as washing out a body cavity or wound with water or medicated fluid. Endodontic irrigation is the process of delivery of endodontic irrigants within the root canal. Irrigation is complementary to instrumentation in facilitating removal of bacteria, debris and necrotic tissue, especially from areas of the root canal that remain unprepared by mechanical instruments. [3] Effective irrigation depends on various irrigants and irrigation devices and techniques. Various solutions used for removing the smear layer include phosphoric acid, citric acid, maleic acid, ethylenediaminetetraacetic acid (EDTA), and MTAD (a mixture of tetracycline isomer, an acid, and a detergent). Sodium hypochlorite (NaOCl), at 1-5.25% concentration, is widely used as an irrigant in root canal treatment because it is bactericidal and has the ability to dissolve organic tissues, but it is not effective in removing the smear layer. [4] Traditional needle irrigation has been proved to be insufficient for complete cleaning of the complex anatomy of the root canal system (especially the lateral canals, isthmuses and the apical third); therefore, endeavors are being made to develop new irrigants and irrigating devices to improve root canal disinfection in everyday endodontic practice. [5] Various chemicals, ultrasonics, and lasers, in combination or alone, have been evaluated for removal of the smear layer with varying results. Passive ultrasonic irrigation has nowadays been used for removal of the smear layer. The ultrasonic irrigating system works on the principle of sonic activation of files to produce hydrodynamic intracanal fluid agitation. [6] Lasers have also been used to remove the smear layer, such as the argon laser, neodymium-doped yttrium aluminum garnet, CO2 laser, erbium-doped yttrium aluminum garnet and diode lasers.
Currently, a final irrigation sequence with the chelating agent EDTA and NaOCl is being used to remove the inorganic and organic components of the smear layer. [4] This study is designed to evaluate the efficacy of smear layer removal from the root canals using EDTA, passive ultrasonic irrigation, and diode laser during endodontic therapy. Materials And Method:- 30 extracted, noncarious human mandibular first premolar teeth were selected for the study. Inclusion criteria included single-rooted teeth with straight, patent roots and fully formed apices, extracted for periodontal or orthodontic reasons. Sample preparation:- The teeth were stored in 10% formalin solution till they were used for the study. The root surfaces were cleaned. After standardization, the working length of the specimens was determined by deducting 1 mm from the length of the #10/#15 K-file after it was passively placed in the canal until the tip of the instrument visibly penetrated the apical foramen. Access cavity preparation was done and the working length was determined. Instrumentation was initiated with ISO hand K-files from size 15 up to size 20, followed by ProTaper rotary files up to size F3. After every instrument, 2 ml of NaOCl was used as an irrigant, followed by a rinse with 3 ml of distilled water. Samples were then divided into three groups: Group I: EDTA + passive ultrasonic irrigation; Group II: EDTA + diode laser; Group III: EDTA + passive ultrasonic irrigation + diode laser. Group I: The root canals were irrigated with a final flush of 1 ml of 17% EDTA with passive ultrasonic irrigation for 1 min, using a no. 15, 21-mm IrriSafe ultrasonic file placed 1 mm short of the working length, followed by 3 ml of NaOCl. Group II: The root canals were irrigated with 1 ml of 17% EDTA and then diode laser application was done. The fibre-optic tip was introduced into the root canals up to the working length. The laser was then activated and gently withdrawn from the root canals and then reintroduced to the apex for a total cycle of 20 seconds. Group III: The root canals were irrigated with a final flush of 1 ml EDTA with passive ultrasonic irrigation for 1 min, followed by diode laser application. Sample Preparation For Scanning Electron Microscopy:- The teeth samples were grooved longitudinally on the external surface with a diamond disc at low speed without penetration into the root canals and then split into two halves with a chisel. Specimens were dehydrated with ethanol for 24 hrs. After dehydration, samples were placed in a vacuum chamber and sputter-coated with a 30 nm gold layer. The dentinal wall of the root canals was examined at the apical third at a magnification of ×2000 for the presence or absence of smear layer and patency of dentinal tubules. Photomicrographs of the root canals were taken at the apical level for scoring individually in a calibrated single-blind manner according to the rating system developed by Gutmann et al.
Gutmann rating system for remaining smear layer scores:- 1 = little or no smear layer, covering <25% of the specimen, most tubules visible and patent, or almost complete laser melting; 2 = little to moderate or patchy amounts of smear layer, covering 25-50% of the specimen, many tubules visible and patent, or laser melting; 3 = moderate amounts of scattered or aggregated smear layer, covering 50-75% of the specimen, minimal to no tubule visibility or patency, or scattered laser melting; 4 = heavy smear layer covering >75% of the specimen, no tubule orifices visible or patent, or no visible laser melting. Data were subjected to statistical analysis using one-way analysis of variance (ANOVA) in SAS. Result:- The scores at the apical third for all three groups were calculated as the mean score and standard deviation. At the apical third level, Group III followed by Group II had the least smear layer scores, with no significant difference between them. This was followed by Group I. Group III showed the least smear layer scores in the apical third region, and Group I showed the highest smear layer scores in the apical third region. At the apical third, the cleaning efficacy of the combination of laser and passive ultrasonic irrigation was better when compared to the laser and passive ultrasonic irrigation individually. 1. Statistically, this score is significantly lower compared to the other two groups; the p value is lower than 0.0001. 2. When compared individually, the SEM score of EDTA + PUI + LASER versus EDTA + LASER is lower and is also statistically significant; the p value is lower than 0.0001. 3. When compared individually, the SEM score of EDTA + PUI + LASER versus EDTA + PUI is lower and is also statistically significant; the p value is lower than 0.0001. Discussion:- The aim of this study was to evaluate the effect of three different irrigating protocols in removing the smear layer at the apical third of the dentinal wall. During shaping of the canals, debris and a smear layer are created. Irrigating solutions may be inefficient in removing the smear layer completely, especially from the apical thirds of root canals. [7] EDTA reacts with the calcium ions in dentine and forms soluble calcium chelates. EDTA decalcifies dentin to a depth of 20-30 μm in 5 min. The decalcifying process is self-limiting, because the chelator is used up. EDTA is most commonly used as a 17% neutralized solution (disodium EDTA, pH 7). The optimal working time of EDTA is 15 minutes, after which no more chelating action can be expected. [8] Recently, agitation of irrigating solutions with passive ultrasonic irrigation and laser devices has become popular. With ultrasonic activation, the temperature of the irrigant increases, reducing its surface tension and increasing its vibrational energy. [9] The conventional needle irrigation technique does not provide complete cleaning of the root canal system. The effectiveness and safety of irrigation depend on the means of delivery. Traditionally, irrigation has been performed with a plastic syringe and an open-ended needle placed into the canal space. [10] Passive ultrasonic irrigation (PUI) uses an ultrasonically activated file to energize the irrigant in the canal and to create acoustic streaming. Ultrasonic devices were first introduced in endodontics by Richman. Ultrasonic energy produces higher frequencies (25-30 kHz) than sonic energy but lower amplitudes, and the files operate in transverse vibration. [11] PUI was first described by Weller et al.
The term passive does not adequately describe the process, since it is in fact active; "passive" refers to the non-cutting action of the ultrasonically activated file. [12] The active streaming of the irrigant increases its potential to contact a greater surface area of the canal wall. After the canal has been shaped, a small file or a smooth wire is introduced at the centre of the canal, as far as the apical region; the canal is then filled with irrigating solution, and the ultrasonically oscillating file activates the irrigant. [11] Ultrasound is sound energy with a frequency above 25-30 kHz. Instruments driven by a piezoelectric element near 30 kHz exhibit a pattern of approximately three wavelengths, or six nodes and antinodes, spaced approximately 5 mm apart. The oscillation amplitude of the tip of such instruments is on the order of 10-100 µm in the direction of oscillation; there is also an oscillation perpendicular to the main oscillation direction with a relative amplitude of approximately 10%. [12] When a small file (size 10-20) is placed freely in the centre of the canal after preparation and ultrasonic activation is applied, the ultrasonic energy passes through the irrigating solution and exerts its acoustic streaming effect. [5] Acoustic streaming can be defined as a rapid movement of fluid in a circular or vortex-like motion around a vibrating file. The oscillatory part of the acoustic streaming makes the flow oscillate forward and backward together with the file; in doing so, the fluid exerts an alternating pressure and shear stress on the root canal wall. [13] The addition of ultrasonics to EDTA increased the smear-layer-removing efficacy of EDTA by enhancing its penetration into the narrow apical regions of the root canals. [4] Laser-assisted irrigation (LAI) with diode lasers has also been used for irrigation. DiVito et al. used this technique with a radial fibre tip at a subablative power setting and demonstrated significantly better removal of the smear layer compared with saline irrigation. The diode laser was able to remove the smear layer from the root canal surfaces; however, most of the dentinal tubules were obliterated with smear plugs. [14] The apical third showed less smear layer because the narrower canal diameter in the apical region results in a closer approximation of the laser tip to the root canal walls, melting and evaporating the smear layer more easily. [15] The results of the present study showed the diode laser combined with PUI and EDTA to be superior to PUI with EDTA and to the diode laser with EDTA alone in removing the smear layer. With the diode laser and EDTA, the smear layer was removed from the root canals, but the dentinal tubules were obliterated at the apical level. [15] These results are similar to those of Faria et al., who found absence of smear layer and partially obliterated dentinal tubules after application of a 980 nm diode laser to root canals irrigated with 1% NaOCl plus 17% EDTA. The diode laser showed comparatively better results than PUI in terms of smear layer removal. In this study, the apical third of the canals was least influenced by ultrasonic irrigation; ultrasonics is not able to effectively penetrate the apical vapor lock in the apical 3 mm of the canal.
It has been shown that once an ultrasonically activated tip leaves the irrigant and enters the apical vapor lock, acoustic microstreaming and/or cavitation becomes physically impossible, which is not the case with the apical negative pressure irrigation technique; acoustic microstreaming and cavitation are only possible in fluids, not in gases. [16] The oscillation of the tip of an ultrasonic instrument is also damped by constraining it within the root canal. Because the amplitude of the oscillation is largest at the instrument's tip, any attenuation affects the apical part most significantly, where the diameter of the canal is smallest. [12] The combined use of ultrasonics and EDTA followed by the diode laser with EDTA performed better than EDTA with either the diode laser or ultrasonics alone in removing the smear layer, suggesting that incorporating ultrasonics and the diode laser with EDTA might prove beneficial in increasing the ability of EDTA to remove the smear layer by enhancing its interaction with the root canal walls, particularly in the apical regions. The diode laser used alone proved to be more effective than PUI in the apical third. [4]

Conclusion:- Within the limitations of the current study, all the tested groups were able to remove the smear layer from the prepared root canals to different degrees. When used alone, the diode laser performed significantly better than passive ultrasonic irrigation. The combination of the diode laser and passive ultrasonic irrigation showed the best result in removal of the smear layer. The diode laser could be a good addition to the armamentarium used for smear layer removal and could increase the success rate of endodontic therapy.
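As a note on the statistical comparison reported in the Results section above, the following is a minimal sketch of how per-group smear layer scores could be compared with a one-way ANOVA. The score arrays below are hypothetical placeholders rather than the study's data, and scipy is used here for illustration instead of SAS.

# Hypothetical per-group Gutmann scores (1-4); not the study's data.
from scipy import stats

group_I   = [3, 4, 3, 3, 4, 3, 2, 4, 3, 3]   # EDTA + PUI
group_II  = [2, 2, 1, 2, 3, 2, 2, 1, 2, 2]   # EDTA + diode laser
group_III = [1, 1, 2, 1, 1, 2, 1, 1, 1, 2]   # EDTA + PUI + diode laser

# One-way ANOVA across the three irrigation protocols.
f_stat, p_value = stats.f_oneway(group_I, group_II, group_III)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")

# Pairwise follow-up comparison (e.g. Group III versus Group II), as in the Results.
t_stat, p_pair = stats.ttest_ind(group_III, group_II)
print(f"Group III vs Group II: p = {p_pair:.4g}")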
3,374
2018-03-31T00:00:00.000
[ "Materials Science" ]
Four-point functions of 1/2-BPS operators of any weights in the supergravity approximation We present the computation of all the correlators of 1/2-BPS operators in $\mathcal{N} = 4$ SYM with weights up to 8 as well as some very high-weight correlation functions from the effective supergravity action. The computation is done by implementing the recently developed simplified algorithm in combination with the harmonic polynomial formalism. We provide a database of these results attached to this publication and additionally check for almost all of the functions in this database that they agree with the conjecture on their Mellin-space form. Introduction Recently the problem of computing the four-point functions of 1/2-BPS operators in planar N = 4 SYM at strong coupling has received new attention, due to the appearance of a surprisingly simple formula for the correlators of arbitrary weights in the Mellin space, conjectured by the authors of [1,2]. This conjecture, based on symmetry and physical assumptions, matched all the examples known then in the literature [3][4][5][6][7][8][9], including the infinite family studied in [9]. However, the fact that all these examples are in one way or another degenerate has required more tests. The first step in this direction was to prove that one of the assumptions on which the work [1,2] was based, namely that the four-derivative Lagrangian in the effective supergravity action vanishes in general, holds. This statement, which was already observed in all the concrete examples [3][4][5][6][7][8][9], was recently proved in [10]. The next test was to check the conjecture on less degenerate cases, which was recently initiated in [11] by computing the 2345 and 3456 correlators. These functions consist of all-different weight operators and are far from the extremality condition. It was shown that these correlators perfectly match with the Mellin formula. Apart from that, the algorithm of computing supergravity correlators from the compactified string action was significantly simplified, opening the possibility for calculating more complicated cases. The knowledge of more complicated four-point functions could be used to further test the Mellin conjecture [1,2], but also to probe the non-planar spectrum of N = 4 SYM by computing 1/N corrections around the supergravity correlator [12][13][14][15]. In this work we combine the simplifications, obtained in [11], with the harmonic polynomial formalism [16,17]. The latter formalism provides a further significant simplification of the computational algorithm, allowing us to find the four-point functions of CPOs for practically any given weights. We discuss the explicit computation of all the correlation functions with weights up to 8 as well as 7 10 12 17 and 17 21 23 25 . This is a major improvement over the previously available set of correlators. We furthermore check whether these correlators match the Mellin conjecture. We make all the results available in a database attached to this publication. This paper, which follows the notation from [11], is organized as follows: in section 2 we recall how to compute four-point correlation functions from the type-IIB supergravity effective action. In section 3 we discuss how one can use the harmonic polynomial formalism to simplify the computation of the a, p and t tensors. In section 4 we discuss how we use this to compute new correlation functions. In particular, we discuss that all these new examples match the Mellin conjecture from [1,2]. In section 5 we conclude. 
In the appendix we discuss a way to obtain coordinate-space four-point functions from their Mellin-space form.

Four-point functions from type IIB supergravity
The central objects of our study are the four-point correlation functions of 1/2-BPS operators in planar N = 4 SYM theory in the strong coupling limit, where the x_i are the space-time coordinates and the t's are six-dimensional null vectors introduced to keep track of the R-symmetry. The operators are built from the scalar fields φ of N = 4 SYM, with normalization constants κ_k and fixed real numbers λ_{k,k1,k2}; we omit the total symmetrization over indices, as this gets taken care of in the contraction with the t vectors in (2.2). According to the AdS/CFT correspondence [19][20][21] these operators are dual to the Kaluza-Klein modes of type-IIB supergravity compactified on AdS_5 × S^5. When computing the four-point function of such operators, the boundary values of the Kaluza-Klein modes act as sources, with the generating functional given by the on-shell value of the string partition function exp(−S_IIB). This approach requires knowledge of the supergravity action up to fourth order in the fields, which was obtained in [22][23][24]. In these works explicit expressions were obtained for the coupling constants in the effective action in terms of so-called a, p and t tensors, which are effectively Clebsch-Gordan coefficients for representations of SO(6). Nevertheless, since the action, which splits into a contact and an exchange part, is still very complicated, and finding the a, p and t tensors requires additional work, executing the algorithm to compute four-point functions from this action is far from trivial. Recently in [11] significant simplifications of this algorithm were obtained: the first simplification makes it possible to bypass writing down the full Lagrangian, which becomes unpleasant very fast because of the growing number of descendants coupled to scalars in the cubic terms. Instead, the streamlined procedure allows one to obtain the four-point function for a given set of weights straight away. For example, the contribution to the non-normalized correlator coming from the exchange terms can be written as a sum over the s, t and u channels, in which the exchange Witten diagrams S, V and T are multiplied by the corresponding combinations of quadratic couplings ζ and cubic couplings S, T, Φ, A, C and G that were derived in [22][23][24]. The exchange Witten diagrams can be expressed in terms of exchange integrals; a simple method to compute them was developed in [25], and further generalizations appeared in [5] and [7]. The sums run over the possible set of exchanged fields in the relevant channel, restricted by the SU(4) selection rule. Finding the contact part also presents no difficulty once all the quartic couplings are computed, as was discussed in [11]. Therefore the main remaining difficulty in computing the correlator sits in finding the a, p and t tensors that form the building blocks of the couplings. Their explicit expressions depend on the chosen weights and their computation becomes complicated quite quickly. In [11] we discussed how one can somewhat simplify the procedure outlined in [5] by carefully analyzing the sums over the symmetric group necessary in doing the tensor contractions.
Harmonic polynomial formalism
However, the computation of the a, p and t tensors simplifies dramatically in the harmonic polynomial formalism, developed in [16,17] and applied to compute some supergravity correlators in [8,9]. It turns out that, after an appropriate normalization, the a, p and t tensors can be expressed as harmonic polynomials Y which carry the non-trivial dependence on the null vectors t_i. These functions are generalized eigenfunctions of the SO(6) Casimir operator L^2, satisfying an eigenvalue equation with eigenvalue C_nm. Moreover, one can solve this equation [17] and find that the Y^(a,b)_nm can be expressed explicitly in terms of Jacobi polynomials P^(a,b)_n in the variables y and ȳ, where (...)_m is the usual Pochhammer symbol; y and ȳ are related to the original σ and τ variables via σ = 1/4 (y + 1)(ȳ + 1) and an analogous relation for τ. It was discussed in [9] that the products of C tensors appearing in the scalar (a_125 a_345), vector (t_125 t_345) and tensor (p_125 p_345) harmonics for arbitrary weights with fixed exchange leg k_5 are proportional to these Y, with a t-dependent prefactor T and with k_5 parametrized, in each case, in terms of a nonnegative integer m. The proportionality coefficients B were worked out in [9]: given a set of weights k_1, k_2, k_3, k_4 ordered such that the relevant weight combinations are nonnegative, and an intermediate weight k_5 satisfying (3.8), they are expressed through quantities such as α_123 = (k_1 + k_2 − k_3)/2. This now allows for a straightforward evaluation of a_125 a_345, t_125 t_345 and p_125 p_345 for any weights, as all the complicated tensor structure is captured by Jacobi polynomials. To obtain the corresponding tensors in the t and u channels, e.g. a_135 a_245 and a_145 a_235, one simply reshuffles the t_i.

Results
Computation. The aforementioned simplifications, obtained in [11], in combination with the harmonic polynomial formalism reviewed in the previous section, allow one to compute the supergravity four-point function of the CPOs in (2.1) for practically any given weights in very little time. We implement the entire algorithm in Mathematica and compute all the non-trivial connected four-point functions k_1 k_2 k_3 k_4 with 2 ≤ k_1 ≤ k_2 ≤ k_3 ≤ k_4 ≤ 8 (94 in total, including 64 previously unknown correlators), which can be found in the database attached to this publication. Additionally, we compute two very high-weight cases, namely 7 10 12 17 and 17 21 23 25. The computation of these latter correlation functions takes 1 minute and 40 minutes, respectively, on a standard computer.
Verification. We have checked all of these four-point functions, except for the high-weight case 17 21 23 25, for consistency with the structure predicted by superconformal symmetry, see e.g. [26], using the method described in [11]. In order to find this structure one should extract the free part of the correlator: in principle the free part can be computed straightforwardly by performing Wick contractions between extended CPOs. However, in our implementation we find the free part by requiring consistency with superconformal symmetry: after reducing the D-functions, the difference between the four-point function and the prediction from superconformal symmetry splits naturally into four parts, with coefficients F_i that are rational functions of the x_i. The undetermined free part is contained completely in F_1 and its vanishing can be used to determine the free part. The vanishing of F_{2,3,4} is independent of the free part and provides a non-trivial check that the computed correlator is indeed consistent with superconformal symmetry.
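To make the harmonic polynomial formalism concrete, the following is a minimal sketch of its building blocks only: evaluating Jacobi polynomials P_n^(a,b) and the change of variables from (y, ȳ) to σ quoted above. The specific antisymmetrized combination and normalization defining Y^(a,b)_nm, and the corresponding relation for τ, follow the formulas of [9,17] and are not reproduced here; the snippet is an illustration, not the paper's Mathematica implementation.

# Sketch of the Jacobi-polynomial building blocks entering the Y^{(a,b)}_{nm};
# only the ingredients quoted in the text are used here.
import numpy as np
from scipy.special import eval_jacobi

def sigma_from_y(y, ybar):
    # Change of variables quoted in the text: sigma = (1/4)(y + 1)(ybar + 1).
    return 0.25 * (y + 1.0) * (ybar + 1.0)

def jacobi_P(n, a, b, x):
    # Jacobi polynomial P_n^{(a,b)}(x), the basic object in terms of which
    # the harmonic polynomials Y^{(a,b)}_{nm} are written.
    return eval_jacobi(n, a, b, x)

# Example evaluation of the building blocks at a sample point.
y, ybar = 0.3, -0.7
print(sigma_from_y(y, ybar))      # corresponding sigma
print(jacobi_P(2, 1, 1, y))       # P_2^{(1,1)}(y)
print(jacobi_P(2, 1, 1, ybar))    # P_2^{(1,1)}(ybar)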
Mellin representation. We can go one step further by checking whether the correlator matches the conjectured formula from [1,2]. In order to do this we first reorder the full correlator k_1 k_2 k_3 k_4 such that the weights satisfy k_1 ≤ k_2 ≤ k_3 ≤ k_4 (which we distinguish by writing G_k1k2k3k4 instead) and then, following [1,2], strip off an overall prefactor A_k1k2k3k4 that depends on the weights as in (4.9) of [2]. One can decompose G_k1k2k3k4 as G_k1k2k3k4 = G^free_k1k2k3k4 + RH, (4.4) where R is the factor dictated by superconformal symmetry and H is the dynamical function. The conjecture from [1,2] is a simple formula for the dynamical function H in Mellin space, up to an overall weight-dependent normalization, which was later determined in [15]. Using the free part extracted in the verification process we can easily find an expression for RH. We can furthermore find H in terms of D̄ functions by solving a set of linear equations obtained from the decomposition into different tensor components. This expression can directly be Mellin-transformed and compared to the conjectured formulae, and we find agreement for all checked correlation functions: these are all the 91 (of which 61 new) correlation functions with weights up to and including 8, except for the three with lowest weight k_1 ≥ 7, as well as 7 10 12 17. In particular, we find agreement with the derived normalization function from [15], expressed in our notation in terms of the function L defined in [2].
Database. With this publication we include a database of all the non-trivial correlators k_1 . . . k_4 with 2 ≤ k_1 ≤ k_2 ≤ k_3 ≤ k_4 ≤ 8. Moreover we include the results for the high-weight cases 7 10 12 17 and 17 21 23 25. For each correlation function there is a subfolder with the name k_1 k_2 k_3 k_4 containing up to five plain txt files:
• Fullcorrelatork_1k_2k_3k_4.txt contains the full correlator as we compute it directly from the action, in the notation from [11],
• Freepartk_1k_2k_3k_4.txt contains the free part as we extract it from consistency with superconformal symmetry (also in the notation from [11]),
• Hk_1k_2k_3k_4.txt contains a coordinate-space expression for the dynamical function H in the notation of [1,2], as it follows from our direct computation,
• HfromMellink_1k_2k_3k_4.txt contains a much shorter coordinate-space expression for H in the notation of [1,2], derived from its Mellin-space form (the construction of which we discuss in the appendix),
• RZconjk_1k_2k_3k_4.txt contains two entries: the first entry is yes if this four-point function coincides with the Mellin conjecture and no otherwise; the second entry is the value of the overall scaling function f(k_1, k_2, k_3, k_4).

Conclusion
In this work we have demonstrated that the simplified algorithm obtained in [11], together with the harmonic polynomial formalism, allows one to compute four-point functions of CPOs of reasonable weights very fast; for example, the computation of the 7 10 12 17 correlator now takes only a minute. Attached to this publication we provide a database of all the correlators with weights up to and including 8. As an application of this new simplified algorithm we confirm for most of these correlation functions that they match the Mellin formula from [1,2], thereby further corroborating this conjecture. These new correlators go far beyond the previously known set of four-point functions.
One could use the discussed simplifications to derive a closed formula for the coordinate-space correlation function from the supergravity action, which could in turn be used to prove the Mellin conjecture in full generality. The main remaining obstruction sits in the fact that the correlation function cannot at present be written as a closed formula in terms of D-functions: the representations appearing in the tensor product decomposition, parametrizing both the couplings and the exchanged fields, depend on the external weights in an intricate way, and the exchange Witten diagrams are found algorithmically.

Appendix: obtaining coordinate-space four-point functions from their Mellin-space form
If we manage to rewrite this expression (the Mellin-space integrand of the dynamical function H) as a linear combination of the Γ_p1p2p3p4, each summand in that sum gives rise to an integral of the form (A.2) after an appropriate identification of the ∆_i with the p_i. Therefore, the problem of finding a coordinate-space expression is reduced to finding a linear combination of the Γ_p1p2p3p4 that reproduces the integrand, as in (A.7), where the coefficients c are numbers and the sum over the p_i is finite. The representation on the right-hand side of (A.7) is usually not unique, which reflects the fact that the D̄ functions are not independent. The first step towards an expression as on the right-hand side of (A.7) is to rewrite its left-hand side as a pure product of gamma functions and linear factors. This can be done by applying the basic property xΓ(x) = Γ(x + 1) repeatedly to some of the gamma functions in the numerator, such that factors in the numerator finally cancel the denominator. This yields an intermediate form consisting of some constant C multiplying products of gamma functions together with linear factors x_{1,2} in s, x_{3,4} in t and x_{5,6} in ũ. Note that each of the three sets of prefactors in s, t and ũ might be empty. Suppose first for simplicity that there is only one such factor, −s + s_1; absorbing it into shifted gamma functions and repeating the procedure recursively for the list of factors in (A.9), we can find a linear combination of products of gamma functions equal to (A.6), such that we have found a representation as in (A.7). Exchanging sum and integral, we find that each summand is of the form (A.2), such that after matching the coefficients we obtain an expression for the inverse Mellin transform in terms of D̄ functions. We have applied this algorithm to all the correlation functions in our database and included the result in the database. All cases have been checked explicitly against our coordinate-space results. In some cases a more minimal representation may exist, but due to the automated nature of our procedure this is unavoidable. It is noteworthy that in exchanging the sum and integral during this procedure we do not run into any of the domain issues that exist for the full correlator, as described in [2], which give rise to the free part upon inverse Mellin-transforming. After all, all we are doing is rewriting the integrand using a global property of the gamma function.
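To illustrate the gamma-function manipulation used in the appendix, here is a minimal sketch (in Python with sympy, purely as an independent check rather than the paper's Mathematica code): it demonstrates xΓ(x) = Γ(x + 1) and verifies that a linear factor multiplying a gamma function can be traded for a combination of shifted gamma functions, which is the elementary step behind matching the integrand to a sum of Γ_p1p2p3p4.

# Elementary steps behind the rewriting used in the appendix, checked with sympy.
import sympy as sp

s, a, s1, x = sp.symbols('s a s1 x')

# x*Gamma(x) = Gamma(x + 1): gammasimp performs this rewriting directly,
# and also collapses ratios of gammas into polynomial prefactors.
print(sp.gammasimp(x * sp.gamma(x)))                 # gamma(x + 1)
print(sp.gammasimp(sp.gamma(x + 2) / sp.gamma(x)))   # x*(x + 1)

# A linear factor times a gamma function is therefore a combination of
# shifted gamma functions:
# (-s + s1) * Gamma(a - s) = (s1 - a) * Gamma(a - s) + Gamma(a - s + 1).
lhs = (-s + s1) * sp.gamma(a - s)
rhs = (s1 - a) * sp.gamma(a - s) + sp.gamma(a - s + 1)
vals = {s: 0.37, a: 2.1, s1: 1.4}
print(sp.N((lhs - rhs).subs(vals)))                  # ~ 0, numerical spot-check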
3,985.8
2018-08-21T00:00:00.000
[ "Mathematics" ]
Time-resolved ion spectrometry on xenon with the jitter-compensated soft x-ray pulses of a free-electron laser Atomic inner-shell relaxation dynamics were measured at the free-electron laser in Hamburg, FLASH, delivering 92 eV pulses. The decay of 4d core holes created in xenon was followed by detection of ion charge states after illumination with delayed 400 nm laser pulses. A timing jitter of the order of several hundred femtoseconds between laser- and accelerator-pulses was compensated for by a simultaneous delay measurement in a single-shot x-ray/laser cross-correlator. After sorting of the tagged spectra according to the measured delays, a temporal resolution equivalent to the pulse duration of the optical laser could be established. While results on ion charge states up to Xe4+ are compatible with a previous study using a high-harmonic soft x-ray source, a new relaxation pathway is opened by the nonlinear excitation of xenon atoms in the intense free-electron laser light field, leading to the formation of Xe5+.

Introduction
The high photon energies of (hard or soft) x-rays enable them to penetrate deeply into the atomic electron shell, thereby delivering detailed information about the electronic structure. If the radiation is pulsed, the dynamical interplay of electrons during the relaxation of highly excited states becomes accessible [1]-[3]. Light pulses with wavelengths in the ultraviolet to visible range, on the other hand, act on loosely bound valence or Rydberg electrons, which may then initiate secondary processes, e.g. electron transfer, dissociation or isomerization in molecules. Consequently, utilizing visible-pump/x-ray-probe or x-ray-pump/visible-probe techniques, a wealth of ultrafast dynamical processes in atoms and molecules can be studied. The new generation of accelerator-based free-electron lasers (FELs) for the soft x-ray (FLASH [4], FERMI [5]) and hard x-ray ranges (LCLS [6], SCSS [7], the European XFEL [8]) is therefore designed to deliver synchronized femtosecond (fs) light pulses from an external laser. In spite of a convincing performance in terms of pulse energy, energy range and tuneability, the applicability for visible/x-ray correlation experiments is limited by the degree of synchronization that can technically be realized. Previous studies at FLASH [9]-[11] indicate a residual laser/x-ray jitter of the order of 250 fs root mean square (rms). In spite of x-ray pulse durations of a few tens of fs [12,13], pump-probe experiments performed in the usual manner, by averaging data for each setting of a scanned optical delay line, will therefore be limited in resolution to about 600 fs full-width-at-half-maximum (FWHM). Approaches that have been seriously investigated in order to overcome this jitter problem include improved synchronization schemes [14], seeding of the FEL's undulator with laser-generated radiation [15,16] and generating short as well as long wavelengths in two serial undulators traversed by the same bunch of accelerated electrons [17]. As long as a solution for a highly synchronized visible and soft x-ray combination is not available, an alternative approach can be followed where the relative timing between laser and x-ray pulses is measured on a shot-to-shot basis, so that the tagged data of a simultaneously performed experiment can be sorted accordingly. This concept has proven its feasibility in time-resolved x-ray diffraction studies at the sub-picosecond pulse source (SPPS) at the Stanford
Linear Accelerator Center (SLAC) [18], where the timing information was deduced from electro-optical sampling (EOS) [19] of the relativistic electron bunches. In a previous work we introduced a single-shot cross-correlator that directly compares the arrival times of visible and soft x-ray pulses at the experimental end-station of FLASH [20], thus removing timing uncertainties connected with the x-ray generation and the beam transport from the undulator to the experiment. In this work we present soft x-ray-pump/visible-probe experiments that demonstrate the degree of improvement achieved with this tagging concept. Relaxation dynamics following inner-shell excitation of atoms often occur on a timescale of a few fs to a few tens of fs. As the system to be investigated initially, we have selected 4d core-hole creation in xenon (Xe) atoms, because the subsequent Auger decay has already been studied with sub-femtosecond temporal resolution using ionizing pulses from a laser-based high-harmonic generation source [2]. Precise knowledge of the very short Xe-NOO Auger decay time constant (6.0 ± 0.7 fs) allows us to determine the achievable resolution with high precision. While in this case the atomic dynamics mainly serves as a standard for an assessment of the achieved jitter compensation, the use of very intense soft x-ray pulses from FLASH yields evidence of a decay channel not previously observed. We attribute its nonlinear behavior to a two-photon excitation of a core electron shell. Furthermore, with an ultrafast dynamical process as a reference, a direct comparison between the laser/x-ray cross-correlation and the EOS technique is possible, thereby contributing to the question as to whether a measurement of the electron timing can substitute for an all-optical cross-correlation.

Experimental setup
A scheme of the experimental geometry is shown in figure 1; the cross-correlator setup is installed behind the pump-probe chamber, thus both experiments share the same x-ray beam.
Figure 1. Scheme of the experimental geometry: x-ray and 400 nm pulses are overlapped at a small angle in an Xe gas target. Ions created in the interaction region are measured by a TOF ion mass spectrometer. A cross-correlator setup is installed behind the pump-probe experiment. The x-ray-induced reflectivity changes for the 800 nm light are imaged onto a CCD camera.
The experiment is performed at beamline BL1 of the FLASH facility operating at a photon energy near 92 eV (13.4 nm wavelength), at a 5 Hz repetition rate in single-bunch mode. The nominal x-ray focus position is between the pump-probe experiment and the x-ray/visible cross-correlator. A mode-locked Ti:sapphire laser system (800 nm, 120 fs FWHM, pulse energy 2 mJ), provided by the FLASH facility, is electronically synchronized with the 1.3 GHz master clock. The 800 nm pulses are frequency doubled in a 350 µm thick BBO crystal to 400 nm and 300 µJ pulse energy. A beam splitter reflects the 400 nm beam to the pump-probe experiment, while the x-ray/optical cross-correlator is operated with the remaining 800 nm light. In the interaction region of the pump-probe experiment, the 400 nm probing laser pulse is focused to a 40 µm FWHM spot size with an estimated average intensity of 2 × 10^14 W cm^-2. For the x-ray pump pulse, an average target intensity of the order of 8 × 10^11 W cm^-2 is estimated by assuming a 35 fs FWHM pulse duration [12] and a spot size of about 150 µm FWHM.
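The intensity estimates quoted above follow from pulse energy, pulse duration and spot size alone. A minimal sketch of this arithmetic is given below; the 400 nm numbers are taken from the text, while the x-ray pulse energy on target (a few µJ) is an assumed value used only to illustrate how the 8 × 10^11 W cm^-2 order of magnitude arises.

# Average intensity from pulse energy, duration and focal spot: I = E / (tau * area).
import math

def average_intensity(energy_J, duration_s, spot_fwhm_cm):
    # Simple circular-spot estimate using the FWHM as the beam diameter (an assumption).
    area_cm2 = math.pi * (spot_fwhm_cm / 2.0) ** 2
    return energy_J / (duration_s * area_cm2)

# 400 nm probe: 300 uJ, 120 fs FWHM, 40 um FWHM spot  -> ~2e14 W/cm^2, as in the text.
print(f"{average_intensity(300e-6, 120e-15, 40e-4):.1e} W/cm^2")

# Soft x-ray pump: 35 fs FWHM, 150 um FWHM spot; an assumed ~5 uJ on target
# reproduces the quoted ~8e11 W/cm^2 order of magnitude.
print(f"{average_intensity(5e-6, 35e-15, 150e-4):.1e} W/cm^2")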
The x-ray and laser beams overlap at an angle of ~2° in an Xe gas target, thus keeping the blurring of the temporal overlap between pump and probe pulses below 17 fs. The pressure in the pump-probe chamber with and without target gas was ~10^-6 and ~10^-7 mbar, respectively. Different charge states of Xe ions created in the target volume are detected by a time-of-flight (TOF) ion mass-to-charge spectrometer. Scans with different visible laser intensity at the target reveal that the Xe2+ ion yield is almost solely determined by the x-ray pulse intensity. Within the evaluated range of intensities, it shows a linear dependence on the x-ray pulse energy measured by the gas-monitor detector in the FLASH tunnel [4]. Thus, single-shot ion spectra are normalized to the ion yield of Xe2+ to compensate for shot-to-shot intensity fluctuations of the x-ray pulses. Temporal prealignment between x-ray and laser pulses is accomplished with ±10 ps accuracy using a high-bandwidth copper photocathode [9]. The arrival time of the optical laser with respect to the x-ray pulse can be scanned within a time window of several nanoseconds (ns), from negative values (optical probe first) to positive values (x-ray pump first), by an optical delay line available at FLASH. According to the mathematical model used [2], the inflection point of the Xe3+ transient profile corresponds to the coincidence of the peaks of the x-ray and laser pulses. Before the cross-correlator setup, another delay stage for the visible beam compensates for the optical pathway difference between the two experiments. A detailed description of the x-ray/visible cross-correlator can be found in [20]. Briefly, the x-ray and the visible pulse are non-collinearly overlapped in space and time on an Si3N4 surface. Along its path, the x-ray pulse changes the reflectivity of the sample for the optical pulse, which is imaged onto a CCD array. The spatial position of the reflectivity change determines the individual x-ray arrival time. Due to a longer damage-free lifetime, an Si3N4 wafer instead of a GaAs [20] wafer was used in the current study. A typical single-shot image of the reflectivity change on Si3N4 is shown in figure 2(a), where the space coordinate is already converted into a time coordinate by geometrical considerations. The area where the FEL pulse has excited the Si3N4 sample prior to the arrival of the optical pulse appears brighter. Note that the x-ray pulse induces a reflectivity enhancement for 800 nm on Si3N4, instead of a reflectivity decrease as observed for GaAs. To improve the image quality, the 800 nm light passes through a spatial frequency filter before hitting the substrate. Raw CCD images were background-subtracted and filtered in order to correct for the remaining interference fringes and inhomogeneous illumination. An average of the horizontal signal pixels within the region of interest (ROI) marked in figure 2(a) is shown in figure 2(b). The slope width of the rising edge of about 110 fs is in good agreement with the laser pulse duration of 120 fs. The inflection point is further used as an arrival-time marker for the x-ray pulse. A change of one pixel on the CCD chip corresponds to a 17 fs change of the x-ray relative arrival time.
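The arrival-time extraction described above (ROI averaging, locating the inflection point of the reflectivity edge, and converting pixels to femtoseconds at 17 fs per pixel) can be summarized in a short sketch. The array names and the synthetic test image below are placeholders, not the actual analysis code of the experiment.

# Sketch of single-shot arrival-time extraction from the cross-correlator image.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

FS_PER_PIXEL = 17.0   # calibration quoted in the text

def edge_model(x, amp, x0, width, offset):
    # Error-function step: x0 is the inflection point (arrival-time marker).
    return offset + 0.5 * amp * (1.0 + erf((x - x0) / (np.sqrt(2.0) * width)))

def arrival_time_fs(image, roi_rows):
    # Average the horizontal signal pixels within the ROI, then fit the edge.
    profile = image[roi_rows, :].mean(axis=0)
    x = np.arange(profile.size)
    p0 = [profile.max() - profile.min(), profile.size / 2, 5.0, profile.min()]
    popt, _ = curve_fit(edge_model, x, profile, p0=p0)
    return popt[1] * FS_PER_PIXEL   # inflection-point pixel converted to fs

# Synthetic test image with a reflectivity step at pixel 130.
x = np.arange(300)
img = np.tile(edge_model(x, 1.0, 130.0, 6.0, 0.1), (50, 1))
img += 0.02 * np.random.default_rng(0).normal(size=img.shape)
print(arrival_time_fs(img, slice(10, 40)))   # about 130 * 17 fs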
Charge-states of Xe under moderate x-ray FLASH irradiance
Photoionization of Xe (ground-state electron configuration [Kr] 4d^10 5s^2 5p^6) at 92 eV photon energy preferentially ionizes electrons from the 4d shell because of the 4d → f giant resonance, as is known from energy-resolved synchrotron studies [21]. The relaxation pathways of the created vacancies via single (A1) and double (A2) Auger decay, involving several intermediate states, were directly observed by detecting the 4d photoelectron and two Auger electrons in coincidence [22]. Hence, one-photon photoionization of the 4d shell in Xe leads to Xe2+ and Xe3+ final states with two or three electron vacancies in the outer 5p shell, respectively. The time constants τ_A1 = 6.0 ± 0.7 fs and τ_A2 = 30.8 ± 1.4 fs were deduced from time-resolved experiments [2], in good agreement with energy-resolved measurements [21,23]. Charge states higher than Xe3+ cannot be created under low-irradiance conditions at a photon energy near 92 eV, because the threshold for Xe4+ production is approximately 105 eV. With increasing irradiance, higher charged Xe states appear [24]. Even under our moderate irradiance conditions (~10^12 W cm^-2), Xe4+ charge states are already observed. Besides nonlinear excitation, an additional contribution from the third harmonic of the FLASH undulator at a photon energy near 3 × 92 eV = 276 eV has to be considered, the intensity of which is expected to be near 1% of the fundamental [4]. The Xe3+ and Xe4+ ion yields unperturbed by the optical laser are shown as a function of the x-ray pulse intensity (Xe2+ ion yield) in figures 3(a) and (b), respectively. The Xe3+ ion yield shows a linear behavior and is fitted with y = 0.71 x (red solid line in figure 3(a)). The Xe4+ ion yield, by contrast, depends nonlinearly on the x-ray intensity and is best fitted with a third-order polynomial function (red solid line in figure 3(b)), indicating that linear (probably FLASH's third harmonic) as well as nonlinear (most probably two-photon) processes are involved in the Xe4+ formation. Possible pathways for the latter case are 4d^-2 → 4d^-1 5p^-2 → 5p^-4 and 4d^-1 → 5p^-2, 4d^-1 5p^-2 → 5p^-4. The data for figure 3 were collected within 1004 subsequent pulses, demonstrating that shot-to-shot intensity fluctuations of FLASH pulses can be very strong. This introduces substantial noise into the time-dependent pump-probe spectra and complicates the data analysis. The problem can be eliminated by a long acquisition time and by sorting the data according to the x-ray pulse intensity into groups with high, moderate and low Xe2+ ion yield. A substantial improvement is achieved by this approach, allowing detection of the transient Xe5+ signal (see below), which could not be resolved in the non-sorted data set.
Exploiting the intrinsic temporal resolution of FLASH with an ion-charge-state chronoscopy experiment
To assess the maximum possible temporal resolution of pump-probe experiments at FLASH, the transient Xe3+ and Xe4+ ion yields are considered. The energy levels and transitions in Xe ions relevant to this study are shown in figure 4(a), as compiled from [2,25]. The vacancy in the 4d shell created by the x-ray pump pulse decays to Xe2+ in a 5p^-2 configuration and, via intermediate states, mainly 5s^-1 5p^-2 ns(np), to Xe3+ in a 5p^-3 configuration (gray short-dashed arrows). Some of these intermediate states, presumably 5s^-1 5p^-2 6p as well as 5p^-3 nl (not shown in figure 4(a)), are below the threshold for Xe3+.
They are long-lived compared to the timescale of the experiment. The laser pulse (blue solid arrow) can ionize these states, inducing an additional time-dependent increase in the Xe3+ ion yield. The transient contribution to the Xe4+ ion yield arises from a series of shake-up satellites 4d^-1 5p^-1 np populated by the x-ray pulse. These highly excited Xe1+ states decay via the cascade discussed in [2]. Additionally, the 4d^-1 5p^-1 configuration can be directly photoionized by the x-ray pulse. This state can be further ionized by the optical laser to 4d^-1 5p^-2, initiating the 4d^-1 5p^-2 → 5p^-4 decay channel, before it decays to Xe3+ in a 5p^-3 configuration. The averaged Xe3+ and Xe4+ ion yields as a function of the time delay between x-ray (pump) and visible laser (probe) pulses are shown in figures 4(b) and (c), respectively. Transient changes of the ion yields are observed within a broad time window of about 0.7 ps for both signals (black circles). The data were collected by scanning an optical delay line with a 10 second acquisition time per 200 fs step (about 50 ion traces per step), without any compensation for the arrival-time jitter. The ion yields remain enhanced over the whole range of positive delays [2], this being indicative of a very long lifetime of the intermediate states. Our characterization of these long-lived excited states reveals that the transient ion yield of Xe3+ is reduced by approximately a factor of two after 500 ps (figure 5). This cannot be an artefact due to the limited residence time of the Xe2+ ions in the laser focus, estimated to be 5 to 8 ns.
Discussion of deduced time constants
The temporal resolution and the lifetimes of the intermediate states are deduced from fitting the transient profiles (a sketch of such a fit model is given at the end of this article). The profile of the transient signal is generally described by a convolution integral over an instrument response function R(t) and a response from the sample S(t). R(t) can be approximated by a Gaussian with a width σ = (σ_Laser^2 + σ_Xray^2 + σ_res^2)^(1/2), where σ_Laser = 51 fs, σ_Xray = 15 fs [12] and σ_res are the Gaussian widths of the laser pulse, the x-ray pulse and a residual timing uncertainty, respectively. If the temporal jitter of the x-ray pulse is completely compensated for by the x-ray/visible cross-correlator, σ is expected to be 53 fs. For instantaneous photoionization followed by an ultrafast Auger decay (τ_A1 ≪ σ), the intermediate state is immediately populated and decays exponentially with the time constant τ. Evaluating the convolution integral, fit functions for the transient Xe3+ and Xe4+ ion yields are obtained, assuming for Xe3+ that τ → ∞ on the timescale considered here; they involve the error function erf and an amplitude B. The fitting routine is based on the Matlab R2008b Optimization Toolbox 4.1. An example of single-shot data (black open circles) and the corresponding fit curves (red solid lines) for the Xe3+ and Xe4+ ion yields is shown in figures 6(a) and (b), respectively. The fit parameter σ = 53 ± 11 fs (standard deviation, s.d.) corresponds well with the expected value of 53 fs, demonstrating the efficient compensation of the temporal jitter by the x-ray/visible cross-correlator. The decay time τ = 103 ± 26 fs extracted from the Xe4+ transient ion yield curve is considerably longer than the value of τ_A2 = 30.8 ± 1.4 fs found in [2]. A possible physical cause for this discrepancy from the previous result could be the participation of additional long-lived intermediate electronic states, owing to the 400 nm probe wavelength (800 nm in [2]), and contributions from nonlinear excitation of the Xe 4d core level (unavailable in [2]).
However, since the experimentally observed ion charge state does not carry information about its energetic origin, it is not clear which state could exhibit a slower secondary decay and why this state would preferentially be probed with 400 nm radiation. A more technical reason for the discrepancy could be an asymmetric temporal profile of the 400 nm laser pulse, which would effectively mimic an exponential decay if its trailing edge happened to be shallower than the rising edge. In our experiment, we could confirm a symmetrical profile only for the 800 nm fundamental of the laser. The temporal characterization of its second harmonic at 400 nm, which is technically much more challenging, was not possible. Although the simulation of second harmonic generation in a BBO crystal did not indicate this, we cannot preclude a residual spectral phase of odd order that would lead to an asymmetrical profile. It is evident that the extraction of unambiguous temporal information about the Auger processes investigated here requires considerably shorter and better controlled light pulses.
Double ionization of the Xe 4d core level
As mentioned above, a common analysis of the data in groups of increasing Xe2+ ion yield substantially improves the signal-to-noise ratio. In a group with very high Xe2+ ion yield (x-ray pulse energy higher than 3 µJ), Xe5+ ions were observed for the first time (figure 7). The transient profile of the Xe5+ ion yield is fitted with the same model function as Xe4+, and a decay time of τ = 81 ± 36 fs is deduced (the error given is a statistical one; see the discussion of systematic errors in section 5). The threshold for Xe5+ formation is about 168 eV, indicating that only excitation by FLASH's third harmonic and/or two-photon ionization can be responsible for this process. V. Jonauskas et al [25] describe the most intense de-excitation pathways following 3d ionization (black dashed arrows in figure 4(a)), ending in the Xe 5p^-4 and 5p^-5 configurations. The non-vanishing Xe4+ ion yield, as well as its nonlinear dependence on the Xe2+ ion yield in the laser-unperturbed spectra (figure 3(b)), indicates a 4d^-2 → 5p^-4 decay channel. An optical laser can further ionize the 4d^-2 state and the 4d^-1 5s^-1 5p^-1 intermediate state, activating the 4p^-1 4d^-1 → 5p^-5 decay channel. Xe ionic states have been extensively studied as a function of the intensity of FLASH pulses up to 10^16 W cm^-2 [24, 26 and references therein]. The degree of nonlinear photoionization was found to be significantly higher in Xe than in the rare gases argon, neon and krypton [26]; the 4d → f giant resonance was held responsible for this behavior. For ion charge states up to Xe6+, a stepwise multi-photon absorption accompanied by Auger decay between the individual steps was proposed as a possible mechanism [24]. Our observation of a transient Xe5+ ion yield requires, however, the formation of short-lived Xe 4d^-2 states in the absorption process. Hence, these results indicate that already at lower intensity the 4d shell can be doubly ionized by two-photon absorption before a single vacancy is refilled by Auger decay.
Photon-photon versus photon-electron cross-correlation
With the EOS setup available at FLASH [27], the electron bunch arrival time can be measured simultaneously with the x-ray pulse arrival time at the experimental end-station. Ascribing both arrival times to the same pump-probe data set, the temporal resolution of both approaches can be directly compared.
Runs with a correlation coefficient of 0.98 are selected for further analysis. Figures 6(c) and (d) show the same pump-probe data set as figures 6(a) and (b), but the time axis corresponds to the electron bunch arrival time measured by EOS instead of the x-ray pulse arrival time measured at the experimental end-station. Applying the same fitting algorithm as discussed in the previous section, σ is deduced to be 75 ± 16 fs (s.d.). Comparing this value with the expected one, we estimate a remaining uncertainty in the temporal resolution of σ_res = 55 fs, or 130 fs FWHM. The statistical nature of the SASE process, together with any possible jitter sources associated with the long x-ray beam transport to the experiment location, is not compensated for by EOS. An additional source of jitter can be connected with the optical laser transport to the location of the EOS device in the FLASH tunnel. The optical laser pulse undergoes broadening within the 153 m long optical fiber; moreover, the pulse arrival time can drift due to thermal length changes and microphonic pick-up. Thus, sophisticated pulse recompression and fiber feedback schemes have to be realized [27]. For some runs a long-term drift in the optical laser pulse arrival time is observed as a linear offset in correlation plots of the electron bunch arrival times versus the x-ray pulse arrival times (data not shown). Such runs were excluded from the comparison.
Summary and outlook
Electron relaxation in Xe atoms was initiated by excitation of the 4d shell with 92 eV soft x-ray pulses of the free-electron laser FLASH. The transient population and depopulation of intermediate states was probed with intense 400 nm laser pulses, promoting Rydberg electrons into the continuum and thus leaving ions with up to fivefold positive charge. Arrival-time fluctuations between soft x-ray and laser pulses were efficiently compensated for using a simultaneously operated optical cross-correlator for the tagging of individual shots with measured time delays. Ion-charge-state chronoscopy of the Xe3+ and Xe4+ ionic states was used to evaluate the corresponding improvement in the timing of x-ray-pump/laser-probe experiments. With an instrumental response time of approximately 120 fs (FWHM), the temporal resolution is now clearly dominated by the laser pulse duration. Using shorter laser pulses, the precision of the tagging technique utilized in this work has the potential to push the temporal resolution towards the intrinsic limit dictated by the soft x-ray pulse duration of 35 fs FWHM [12]. A direct comparison with an electron/laser cross-correlation based on EOS suggests that the optical-optical correlation yields superior precision if a timing jitter below 50 fs rms is required. So far, the achieved resolution, restricted by the optical laser pulse duration, has not permitted a precise determination of time constants for the Xe NOO Auger decay. It will, however, become sufficient for following the dynamics of processes evolving on a timescale of a few tens of femtoseconds, e.g. secondary steps in an Auger cascade [2] or interatomic Coulombic decay [28]. As an observation of the latter requires the preparation of very dilute gaseous targets, only the high flux of accelerator-based soft x-ray sources makes such studies feasible. The significance of high soft x-ray intensities is underlined by the observation of an Xe5+ ionic state, which we assign to a double ionization of the Xe 4d core level and the transient decay of which could be followed.
Time-resolved nonlinear x-ray physics is a promising objective for existing and upcoming accelerator-based x-ray sources. In order to fully resolve the dynamics, the temporal resolution still has to be optimized. The tagging technique utilized in this work has contributed a substantial improvement and will further aid this process.
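Referring back to the fit model for the transient ion yields described in the discussion of deduced time constants, the following is a minimal sketch of that type of analysis: a Gaussian instrument response convolved with a step (for Xe3+, τ → ∞) or with an exponential decay (for Xe4+). The closed forms below are the standard Gaussian-convolved step and exponential; the exact normalization and parametrization used in the original Matlab analysis may differ, and the data array names are placeholders.

# Fit models for transient ion yields: Gaussian IRF convolved with a step or
# with an exponential decay (standard closed forms, used here as a sketch).
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf, erfc

def step_conv_gauss(t, B, t0, sigma):
    # Step response (tau -> infinity), e.g. the Xe3+ transient.
    return 0.5 * B * (1.0 + erf((t - t0) / (np.sqrt(2.0) * sigma)))

def exp_conv_gauss(t, B, t0, sigma, tau):
    # Exponential decay with time constant tau convolved with a Gaussian IRF,
    # e.g. the Xe4+ transient.
    dt = t - t0
    return (0.5 * B * np.exp(sigma**2 / (2.0 * tau**2) - dt / tau)
            * erfc((sigma / tau - dt / sigma) / np.sqrt(2.0)))

# Hypothetical usage with placeholder delay/yield arrays (delays in fs):
# popt, _ = curve_fit(exp_conv_gauss, delays_fs, xe4_yield,
#                     p0=[1.0, 0.0, 53.0, 100.0])
# sigma_fit, tau_fit = popt[2], popt[3]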
5,670
2009-12-01T00:00:00.000
[ "Physics" ]
An Enhanced Approach for Realizing Robust Security and Isolation in Virtualized Environments —Transitioning into the next generation of supercomputing resources, we're faced with expanding user bases and diverse workloads, increasing the demand for improved security measures and deeper software compartmentalization. This is especially pertinent for virtualization, a key cloud computing component that's at risk from attacks due to hypervisors' integration into privileged OSs and shared use across VMs. In response to these challenges, our paper presents a two-pronged approach: introducing secure computing capabilities into the HPC software stack and proposing SecFortress, an enhanced hypervisor design. By porting the Kitten Lightweight Kernel to the ARM64 architecture and integrating it with the Hafnium hypervisor, we substitute the Linux-based resource management infrastructure, reducing overheads. Concurrently, SecFortress employs a nested kernel approach, preventing the outerOS from accessing the mediator's memory, and creating a hypervisor box to isolate untrusted VMs' effects. Our initial results highlight significant performance improvements on small-scale ARM-based SOC platforms and enhanced hypervisor security with minimal runtime overhead, establishing a solid foundation for further research in secure, scalable high-performance computing.

INTRODUCTION
Advances in computing technology have ushered in a paradigm shift in how computational resources are deployed and utilized in recent decades. The rapid growth and adoption of virtualization technologies are at the forefront of this transition [1]. By abstracting the physical hardware from the software, virtualization enables the creation of multiple isolated Virtual Machines (VMs) that can run concurrently on a single physical machine, resulting in significant improvements in resource utilization and cost efficiency [1]. However, as with any technology, virtualization brings with it new challenges, most notably in the area of security [1]. The hypervisor, the abstraction layer that allows the creation of VMs, is a lucrative target for attackers [2]. If an attacker successfully compromises the hypervisor, they may gain control of all VMs running on the system, resulting in a significant security breach [3]. Furthermore, while the isolation of VMs from each other and the host system is beneficial for security, it can also be used by attackers to conceal malicious activity [3]. This necessitates the development of enhanced security solutions capable of effectively protecting the hypervisor and the virtual machines that run on it [4]. Several technologies have been developed to this end, providing various mechanisms for securing virtualized environments [4]. Hafnium, ARM TrustZone, and SecFortress, for example, provide unique security solutions that can significantly improve the security of virtualized environments [1,2,5].
Hafnium is a microkernel-based VM monitor that provides secure isolation between virtual machines [5]. It enables each VM to run in its own isolated environment, memory-separated from other VMs [5]. This means that even if an attacker gains control of one VM, they cannot access the memory of other VMs [5]. Hafnium also ensures control flow integrity (CFI), which prevents unauthorized changes to a VM's control flow [5].
Fig. 1 depicts the default Hafnium system architecture. Hafnium, as illustrated in the diagram, relies on a primary VM to make scheduling decisions and explicitly invoke context switches to secondary VMs via a privileged hyper-call interface.
ARM TrustZone, on the other hand, provides a secure execution environment within a processor that allows sensitive tasks to be run in a separate environment from the rest of the system [1]. ARM TrustZone can protect the system from both external and internal threats by ensuring that only authenticated and verified code is allowed to run within this secure environment [1].
SecFortress approaches security differently. Its goal is to protect the hypervisor by isolating it from the rest of the operating system [2]. SecFortress ensures that an attack on one VM or the outer operating system does not compromise the hypervisor or any other VMs by providing each VM with its own dedicated hypervisor box and preventing direct interaction between the hypervisor and the outer operating system [2]. Fig. 2 presents the architecture of SecFortress.
Each of these technologies offers a distinct solution for securing virtualized environments, but their full potential may be realized only when they are integrated into a unified security framework. The purpose of this paper is to investigate the integration of these technologies into a multi-layered security solution for virtualized environments, with the goal of providing comprehensive protection against both external and internal threats [3]. The complexities of integrating various security mechanisms, the potential performance overheads of multiple security layers, and the requirement for ongoing updates to address new security vulnerabilities all present significant challenges to the implementation of such a framework. These challenges, however, can be addressed with careful planning and design, paving the way for a more robust and secure computing environment in the virtualization era.

II. BACKGROUND
Virtualization, enabled by lightweight kernels or hypervisors, has transformed computing by allowing multiple virtual machines (VMs) to run on a single physical host [5]. This operation is enabled and managed by a hypervisor, a key component of the virtualization stack. Despite playing an important role in resource management and VM isolation, hypervisors are vulnerable to security threats from both guest VMs and the host environment [5]. As a result, the design and implementation of secure, lightweight hypervisors are critical to protecting VMs and ensuring system security [5].
ARM TrustZone is a hardware-based security extension for ARM processors [1], and Hafnium is a lightweight security isolation layer for virtual machines on ARM platforms [5]. The TrustZone technology creates a Trusted Execution Environment (TEE) by partitioning the system into secure and non-secure worlds, whereas Hafnium isolates VM memory and provides a unique security-focused virtualization solution [1]. Hafnium's minimalist design reduces potential attack surfaces, allowing it to be a lightweight hypervisor focused on memory isolation between VM instances while leaving performance and availability guarantees to the host OS [5].
The nested kernel concept takes a different approach to security by embedding a small, lightweight, and isolated kernel within a larger one [2]. This strategy achieves logic isolation by tracking all changes to the virtual-to-physical mapping and removing sensitive instructions from untrusted components, protecting physical memory and lowering the Trusted Computing Base (TCB) in complex systems [2].
Securing our digital infrastructure remains critical in the era of virtualization and cloud computing. Building upon the strengths of Hafnium, ARM TrustZone, and SecFortress, this paper proposes a new multi-layered security strategy to fortify security at both the VM and hypervisor levels, thereby protecting against both internal and external threats. While promising, the combination of these technologies presents certain challenges, including the complexity of managing the various security mechanisms, potential performance overhead due to multiple security layers, and the necessity for continuous updates to counter newly discovered security vulnerabilities [3,4].
The successful integration of these security solutions into an organization's infrastructure necessitates careful design, a comprehensive implementation strategy, and meticulous planning. Despite the obstacles, such an approach holds the potential to significantly enhance the security posture of virtualized and cloud computing environments, making them more resilient against potential attacks [6,7].
As we forge ahead, extensive testing, performance optimization, and continuous updates will play pivotal roles in overcoming these challenges. By harnessing the unique advantages provided by Hafnium, ARM TrustZone, and SecFortress, we can lay the groundwork for a more secure, robust, and resilient virtualized environment. The path to achieving this goal is strewn with difficulties, but by working collaboratively, we can hope to stay one step ahead of evolving cybersecurity threats and secure our virtualized infrastructure effectively [7].

III. RELATED WORKS
Various approaches have been proposed to secure the runtime of hypervisors. HyperLock and DeHype, for instance, deconstruct KVM by assigning a separate isolated hypervisor instance to each VM, similar to the isolated context each VM has in SecFortress [4]. However, unlike SecFortress, they don't protect the hypervisor against a compromised host OS. SecFortress also differs from systems like MultiHype, which supports running multiple hypervisors on a single physical platform, in that it ensures a smaller Trusted Computing Base (TCB) and stronger isolation by creating a single hypervisor box for each VM [8].
Nexen, SecVisor, and seL4, along with SecFortress, utilize the nested kernel to reconstruct the virtualization platform [9]. However, they have their limitations; Nexen doesn't consider the security vulnerabilities in its shared service domain, SecVisor focuses on kernel integrity protection within guest VMs rather than isolation among different VMs, and seL4, being a formally verified microkernel, doesn't include common OS components [9]. Another memory isolation implementation, HyperWall, requires support from specific hardware like FPGAs, in contrast with SecFortress's ability to be deployed on commercial x86 platforms [9].
Hardware-based defense technologies have also emerged. Intel TDX, for instance, isolates VMs from the hypervisor by adding a secure arbitration mode [10]. AWS Nitro Enclaves and Arm CCA offer different approaches to creating isolated VM execution environments [11]. However, these systems focus either on protecting the guest from an untrusted hypervisor or on protecting applications rather than the entire VM, which differs from SecFortress's target of bidirectional isolation protection between VMs and the hypervisor.
In this paper, we have furthered the research into secure HPC OS/Rs by presenting preliminary results of our approach and an initial proof-of-concept implementation [12]. We have identified several potential research directions, such as evaluating our approach on more realistic systems and workloads, designing I/O mechanisms that maintain secure system isolation without imposing significant performance overheads, and investigating dynamic partitioning approaches for secure partitions and VM images [12]. Also, while we have used the Hafnium hypervisor as a starting point for secure virtualization in HPC, we are still evaluating its long-term suitability [12]. The necessary modifications to support HPC workloads, the need for a potential new hypervisor architecture tailored to HPC environments, and the upcoming ARM platform (ARMv9), which introduces significant security, isolation, and trusted computing features, are all factors that could impact the direction of future research in this area [11,12].

A. Design
The TrustZone-Assisted SecFortress solution is a painstakingly designed architecture aimed at improving security in a virtualized environment. The innovative design combines TrustZone technology's robust isolation capabilities with the SecFortress hypervisor's flexible and comprehensive security services. As a result, hypervisors and their associated virtual machines benefit from a powerful combination of hardware- and software-enforced security mechanisms.
The secure boot mechanism is at the heart of our design. We ensure a trustworthy startup process by utilizing TrustZone technology, which allows only authenticated software to launch. This significantly reduces potential threats from unauthorized or malicious software, allowing the system to operate in a trusted state from the start.
Our solution embeds a trust anchor within TrustZone's secure world to establish a root of trust within the system. This provision has important implications for improving hypervisor security and supporting other security functions such as cryptographic key management and secure storage. It ensures a higher level of trust in the system, essentially laying the groundwork for all subsequent security protocols to operate on.
Our design addresses the difficult challenge of securely handling interrupts and I/O operations. The SecFortress solution includes a mediator component that acts as a go-between for the hypervisor and the rest of the system. The mediator significantly reduces potential vulnerabilities that could be exploited during these operations by managing and securely handling interrupts and I/O operations.
Inter-VM communication can be a security risk if not properly managed. Our design ensures secure communication between VMs via the SecFortress solution's mediator. This intermediary validates each communication request to ensure it comes from a reliable source before it reaches its intended destination (a conceptual sketch of this validation step follows below). This feature prevents unauthorized access and potential data leaks, thereby strengthening the system's overall security.

To provide a secure environment for sensitive processes and applications, our solution design makes use of TrustZone's hardware-based isolation capabilities to create a secure execution environment within the hypervisor. This strategic inclusion effectively insulates these processes from potential threats in the "normal" world, thereby protecting the integrity of our secure world.

The comprehensive isolation of hardware and software components is a critical aspect of our design. This is accomplished through TrustZone's hardware-level isolation, which creates two distinct environments: one for the hypervisor and one for the VMs and outerOS. This hardware isolation is supplemented by the software-level isolation of the SecFortress mediator. As a result, our design fortifies the hypervisor's shell, effectively isolating it from the rest of the system and potential attack vectors.

Our design also prioritizes system integrity and resilience in the face of a variety of potential attacks. TrustZone technology protects the integrity of the hypervisor by preventing unauthorized changes to its code. SecFortress's security services, which validate the integrity of data and communication channels within the system, add to this. In terms of resilience, the secure boot feature ensures that the system starts up in a trusted state, while the isolation provided by TrustZone and the mediator makes the system difficult to compromise, increasing its resistance to potential threats.

Our design prioritizes secure communication and data handling. The SecFortress mediator is at the heart of all communication between the VMs, the outerOS, and the hypervisor, validating the origin and destination of each communication request. TrustZone technology protects data integrity and confidentiality by isolating sensitive data within a secure world. This two-pronged approach significantly reduces the risk of data exposure or tampering from malicious processes.

The TrustZone-Assisted SecFortress solution design promotes adaptability. The solution is designed to be compatible with a wide range of hardware platforms and hypervisors with minimal modifications, allowing it to be deployed across a wide range of systems. The design's scalability, which is further enhanced by its modularity, enables it to protect systems of various sizes, from individual servers to large data centers, without compromising security.

The TrustZone-Assisted SecFortress solution is also optimized for efficiency and performance. With the SecFortress mediator minimizing overhead in handling communication between the VMs, the outerOS, and the hypervisor, and TrustZone ensuring efficient use of hardware resources, the impact on overall system performance is kept low.
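As a rough illustration of the mediator-validated inter-VM communication described above, the sketch below models the mediator as the single point through which every message must pass. The channel policy, component names, and delivery function are hypothetical stand-ins, not the actual SecFortress interfaces.

```python
from dataclasses import dataclass

# Illustrative communication policy: which source may talk to which destination.
ALLOWED_CHANNELS = {
    ("vm1", "vm2"),
    ("vm2", "vm1"),
    ("outerOS", "vm1"),
}

@dataclass
class Message:
    src: str
    dst: str
    payload: bytes

def deliver(msg: Message) -> None:
    # Hypothetical delivery into the destination's mailbox.
    print(f"delivered {len(msg.payload)} bytes from {msg.src} to {msg.dst}")

class Mediator:
    """Conceptual go-between: no VM delivers a message directly to another."""

    def __init__(self, policy):
        self.policy = policy
        self.audit_log = []

    def send(self, msg: Message) -> bool:
        allowed = (msg.src, msg.dst) in self.policy
        self.audit_log.append((msg.src, msg.dst, allowed))
        if not allowed:
            # Reject requests from untrusted or unapproved sources.
            return False
        deliver(msg)
        return True

if __name__ == "__main__":
    mediator = Mediator(ALLOWED_CHANNELS)
    print(mediator.send(Message("vm1", "vm2", b"hello")))      # True: approved channel
    print(mediator.send(Message("vm3", "vm1", b"malicious")))  # False: not in policy
```

The design choice illustrated here is that policy enforcement and auditing live in one small component, which keeps the code that must be trusted for inter-VM isolation small.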
The solution design incorporates built-in fail-safe mechanisms to ensure that the system remains secure even if a component fails. For example, if the mediator component detects an error, it activates built-in fail-safe routines to prevent system-wide failure. Similarly, TrustZone technology ensures that any breaches in the normal world do not impact the secure world, adding an extra layer of security.

The solution is designed to work seamlessly with a wide range of systems, taking into account both modern and older systems that may not have built-in security measures. It is compatible with various system architectures and hypervisor types, giving it versatility in securing various systems. The design also includes recovery and maintenance measures capable of detecting potential security threats, initiating protective actions, and allowing for easy system updates and patches. This not only ensures long-term security but also improves the system's overall security capabilities.

The TrustZone-Assisted SecFortress solution is designed for the future, combining scalability, robustness, compatibility, efficiency, and fail-safe mechanisms. Its comprehensive and adaptable design supports a variety of system architectures and hypervisors, as well as a wide range of virtual machines, and can handle increasing load and traffic. Because of its adaptability, it is an effective security solution for complex virtualized environments. Furthermore, the solution reduces costs while maintaining security by leveraging TrustZone's existing hardware security features and SecFortress's minimalistic design. Our design also considers future developments in the cybersecurity landscape. It is easily upgradable to handle new types of security threats or to incorporate advancements in hypervisor and virtualization technology. This foresight distinguishes our solution, making it a comprehensive, robust, and scalable option for securing hypervisor environments today and in the future.

The TrustZone-Assisted SecFortress solution is divided into several layers or levels, as illustrated in Fig. 3:

Level 1 - Hardware Layer: This includes the actual hardware platform. At this level, TrustZone technology creates two distinct environments: the "secure world" and the "normal world". The most sensitive processes and data reside in the secure world.

Level 2 - TrustZone Technology: TrustZone technology primarily operates at this level, providing fundamental hardware-based isolation capabilities and establishing a root of trust within the system. It also manages the secure boot mechanism, which ensures that only authenticated software can run.

Level 3 - Mediator Component: At this level, the mediator component operates, providing a layer of software-based isolation on top of TrustZone's hardware-based isolation. It also handles interrupts, I/O operations, and inter-VM communications securely, acting as a liaison between the hypervisor and the rest of the system.

Level 4 - Hypervisor: At this level, the SecFortress hypervisor operates. It communicates directly with TrustZone technology to take advantage of hardware-based isolation capabilities for enhanced security. It also hosts the secure execution environment for sensitive processes and applications.
Level 5 - Virtual Machines (VMs) and outer OS/Primary VM using the Kitten lightweight kernel: At this level, the virtual machines and the outer/primary VM operating system operate. They communicate with the hypervisor and the mediator, enabling secure communication and operations.

These components interact in a variety of ways to form a comprehensive, secure, and adaptable system. TrustZone technology, for example, serves as the foundation for system security by creating a secure execution environment and managing the secure boot process. The SecFortress hypervisor, in turn, relies on this secure environment to run VMs and manage their communication via the mediator component.

B. Implementation

The implementation of the TrustZone-Assisted SecFortress solution begins with configuring the ARM-based System-on-a-Chip (SoC) to use TrustZone technology. This process begins with the system boot-up, during which the Secure World is configured before any other system components are initialized.

To establish a clear demarcation between the Secure and Non-Secure Worlds, the memory layout and IRQ controllers must be carefully adjusted. TrustZone's Monitor Mode is set up as an extra execution level within the Secure World to host the mediator software, which manages secure transitions between the two worlds.

After properly configuring the Secure and Non-Secure Worlds, we proceed to integrate the Hafnium hypervisor into the system's Non-Secure World. The isolation capabilities of Hafnium are critical to the system's security. It is intended to create isolated partitions for each virtual machine (VM), effectively isolating them from one another and from the hypervisor itself. The configuration of Hafnium is meticulously adjusted during this phase to manage VMs, enforce memory access policies, and control inter-VM communication. Fig. 4 depicts the memory access view of each SecFortress component. For example, the left-most large box represents the VM's memory access view: each VM has complete access to its own memory but no access to the memory of others. The mediator has access to all memory and controls the page tables to determine the memory view of each component. Illegal memory accesses across components result in page faults, which are detected by the mediator (a conceptual sketch of this check is given below).

We integrate the Kitten lightweight kernel into the Hafnium hypervisor after it has been established. The Kitten kernel is the foundation of our hypervisor layer. It is designed to be extremely efficient in high-performance computing environments. The integration of the kernel entails prioritizing the optimization of memory management, task scheduling, and I/O operations, which improves overall system performance.

We begin development of the mediator software after successfully integrating the Hafnium hypervisor and the Kitten kernel. This task requires programming languages compatible with the ARM architecture and the TrustZone environment. Memory protection, instruction protection, and control flow management are among the critical security services enforced by the mediator software. It is also intended to intercept and manage VM exits, ensuring safe context switches between system components.
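The following is a small conceptual model, in Python, of the per-component memory views sketched in Fig. 4 and of the mediator's handling of a trapped access: each component is confined to its own ranges, the mediator owns the mapping, and an out-of-view access surfaces as a fault. The address ranges and component names are invented for illustration and are not the actual SecFortress layout.

```python
# Conceptual model of the per-component memory views from Fig. 4.
# Ranges are (start, end) physical addresses; all values are illustrative only.
MEMORY_VIEWS = {
    "mediator":  [(0x0000_0000, 0xFFFF_FFFF)],  # mediator sees everything
    "outerOS":   [(0x0000_0000, 0x3FFF_FFFF)],
    "hyp_box_1": [(0x4000_0000, 0x5FFF_FFFF)],
    "hyp_box_2": [(0x6000_0000, 0x7FFF_FFFF)],
}

class PageFault(Exception):
    pass

def check_access(component: str, addr: int) -> None:
    """Raise a fault if the component touches memory outside its view."""
    for start, end in MEMORY_VIEWS.get(component, []):
        if start <= addr <= end:
            return
    raise PageFault(f"{component} attempted illegal access to {hex(addr)}")

def handle_vm_exit(component: str, addr: int) -> None:
    """Mediator entry point for a trapped memory access (VM exit)."""
    try:
        check_access(component, addr)
        print(f"{component}: access to {hex(addr)} permitted")
    except PageFault as fault:
        # In the real design the mediator would isolate or terminate the offender.
        print(f"mediator detected fault: {fault}")

if __name__ == "__main__":
    handle_vm_exit("hyp_box_1", 0x4000_1000)  # within its own view
    handle_vm_exit("hyp_box_1", 0x6000_0000)  # cross-box access, faults
```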
The addition of VM image integrity checks and encryption measures improves system security even further. Before each startup, VM image integrity checks are performed to ensure that the VM images have not been tampered with. Furthermore, the VM images are encrypted with strong encryption algorithms, ensuring the security of the data stored within the VMs. The cloud provider's key management service secures the encryption keys.

We chose to implement software updates for the TrustZone-Assisted SecFortress virtualization layer via offline install packages in order to maintain system security. This method gives us more control over the software versions that are running on the system. System administrators install these updates manually, lowering the risk of online threats. We employ several strategies to reduce performance overhead, ensuring that the system remains operational and does not impair overall performance. These strategies include the use of a nested MMU extension to create memory protection domains, efficient memory management with the mediator's internal allocator, hard-coding of sensitive instruction entry points during system boot-up, and the minimization of memory mapping updates within the hypervisor box [13].

Implementing the TrustZone-Assisted SecFortress solution is a meticulous process that includes system initialization, hypervisor setup, kernel integration, mediator software development, VM security measures, software update procedures, and performance overhead reduction strategies. The end result of this comprehensive implementation process is a robust and secure virtualization environment suitable for secure cloud computing applications [13,14].

V. RESULTS AND EVALUATION

This section describes the thorough evaluation and security analysis of the TrustZone-Assisted SecFortress solution, including protection against mediator tampering, the level of isolation and confidentiality, denial-of-service (DoS) mitigation, performance assessment, and the results of practical attack scenarios.

A. Security Analysis

The comprehensive security analysis of the TrustZone-Assisted SecFortress solution begins with protection against mediator tampering. Because the mediator plays such an important role in the system, its integrity is critical. Its security is therefore ensured by secure boot and code integrity checks during system boot, a process that protects the mediator's code by write-protecting it, preventing attackers from tampering with it. This write protection extends to dynamic checks within non-TCB components, protecting sensitive instructions and code from bypass attacks.

Effective isolation and confidentiality are critical security objectives of the TrustZone-Assisted SecFortress solution, which it achieves by isolating hypervisor boxes from the outerOS and from other hypervisor boxes. Memory access control and paging mechanisms are used to restrict access to sensitive memory regions. The outerOS is prevented from accessing the memory of the hypervisor boxes by unmapping hypervisor-box-related memory regions in the kernel master page table. Additionally, zeroing physical pages before assigning them to hypervisor boxes strengthens the integrity defense against attacks.
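A minimal sketch of the page-reassignment path implied by the analysis above, unmapping a frame from the outerOS master table and zeroing it before a hypervisor box may use it, is shown below. The dictionaries here are toy stand-ins for real page tables and physical memory, used only to make the ordering of the two steps explicit.

```python
PAGE_SIZE = 4096

# Toy physical memory and a toy outerOS master page table (frame -> mapped?).
physical_memory = {frame: bytearray(b"\xAA" * PAGE_SIZE) for frame in range(8)}
outeros_master_table = {frame: True for frame in range(8)}

def assign_to_hypervisor_box(frame: int, box: str) -> None:
    """Scrub and unmap a frame before a hypervisor box may use it."""
    # 1. Unmap from the outerOS view so it can no longer read or write the frame.
    outeros_master_table[frame] = False
    # 2. Zero the page so no stale data leaks to or from the new owner.
    physical_memory[frame][:] = b"\x00" * PAGE_SIZE
    print(f"frame {frame} scrubbed and assigned to {box}")

if __name__ == "__main__":
    assign_to_hypervisor_box(3, "hyp_box_1")
    assert not outeros_master_table[3]
    assert physical_memory[3] == bytearray(PAGE_SIZE)  # all zeros
```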
In terms of Denial-of-Service (DoS) attacks, the TrustZone-Assisted SecFortress solution defends itself by meticulously checking the memory and registers of guest VM states before returning control to them. While this comprehensive measure reduces the possibility of crashes or interference with other VMs by addressing any state mishandling, it does not completely prevent DoS attacks originating from the host.

B. Performance Evaluation

A performance evaluation was carried out to determine the overhead and efficiency of the combined solution. The solution was tested using various performance benchmarks on a Pine A64-LTS SBC and on Ubuntu 18.04.5 with Linux 5.2 for Intel VT platforms. The benchmarks focused primarily on CPU and memory performance, with consistent results demonstrating minimal overhead for virtualization and secure isolation. Even when lightweight kernels and ARM TrustZone-based mechanisms were used, there was no significant performance degradation.

A series of tests, including the Stream and RandomAccess micro-benchmarks, were used to assess memory and I/O performance. These tests revealed that Hafnium and ARM TrustZone introduced minor overhead in some scenarios, but overall performance was satisfactory.

The combined solution was then evaluated with application performance benchmarks such as the HPCG mini-app and a subset of the NAS Parallel Benchmark suite. These tests demonstrated that the solution was capable of running a full mini-app benchmark with minimal overhead and of providing secure isolation for HPC applications and workloads.

C. Practical Attack Evaluation

The TrustZone-Assisted SecFortress solution was subjected to realistic attack scenarios in order to validate the effectiveness of the security measures. CVE analysis of Linux/KVM vulnerabilities was included in these scenarios. The evaluation revealed that the security mechanisms of the solution were effective in preventing privilege escalation, information leakage, memory corruption, and denial-of-service attacks from a compromised outerOS or malicious VMs. In essence, the combined solution's isolation was critical in mitigating the impact of compromised components and safeguarding sensitive data and memory regions.

Finally, the TrustZone-Assisted SecFortress solution combines lightweight kernels, Trusted Execution Environments (TEEs), the Hafnium hypervisor, and ARM TrustZone to achieve robust security isolation in virtualization environments. It effectively addresses both security and performance concerns, thereby assisting in the development of more secure and efficient virtualized systems. Comprehensive security and performance evaluations confirm its resistance to common attacks and its ability to handle high-performance computing workloads with minimal overhead. As virtualization technology evolves, it promises to be a solid foundation for secure and high-performance virtualization environments.
VI. CONCLUSION

In this paper, we presented an enhanced approach to securing the hypervisor runtime and a new use case for lightweight kernels as resource management services in securely isolated HPC systems. As part of this effort, we integrated the ARM64-ported Kitten LWK with the Hafnium hypervisor in our SecFortress solution to support secure virtual machine instances on a compute node. SecFortress strategically partitions the virtualization platform into a trusted mediator, an isolated outerOS, and multiple restricted hypervisor box instances, improving security isolation in high-performance computing platforms. The new approach prevents the outerOS from accessing the hypervisor's memory, with each hypervisor box instance limited to the least amount of memory access. As a result, even if one instance is compromised, the integrity and confidentiality of other instances are not jeopardized. We have provided an initial proof-of-concept implementation and preliminary evaluation, which show that our approach incurs no significant performance overheads on a variety of HPC benchmarks. SecFortress's experimental results show that it can defeat exploits against the host OS and VMs with negligible performance overhead, implying that SecFortress could be an effective solution for improving both virtual machine security and performance. This work has also identified a number of future challenges and made a compelling case for security isolation and trusted computing as key features of next-generation HPC platforms. Fully supporting security isolation in a scalable and performant manner will most likely pose a significant challenge for HPC OS/R architectures, necessitating future research. We believe our integrated solution provides a promising foundation for the evolution of more secure and efficient virtualized systems.
5,845
2023-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Therapeutic efficacy of equine botulism heptavalent antitoxin against all seven botulinum neurotoxins in symptomatic guinea pigs

Botulinum neurotoxins are highly toxic and are potential agents for bioterrorism. The development of effective therapy is essential to counter the possible use of these toxins in military and bioterrorism scenarios, and to provide treatment in cases of natural intoxication. Guinea pigs were intoxicated with a lethal dose of botulinum neurotoxin serotypes A, B, C, D, E, F or G, and at onset of the clinical disease intoxicated animals were treated with either BAT® [Botulism Antitoxin Heptavalent (A, B, C, D, E, F, G)–(Equine)] or placebo. BAT product treatment significantly (p<0.0001) enhanced survival compared to placebo for all botulinum neurotoxin serotypes and arrested or mitigated the progression of clinical signs of botulism intoxication. These results demonstrated the therapeutic efficacy of BAT product in guinea pigs and provided supporting evidence of effectiveness for licensure of BAT product under FDA 21 CFR Part 601 (Subpart H Animal Rule) as a therapeutic for botulism intoxication with serotypes A, B, C, D, E, F or G in adult and pediatric patients.

Introduction

Botulinum neurotoxins (BoNTs) are considered to be some of the most toxic substances known, with an estimated human lethal dose fifty (HLD 50 ) of 1 ng/kg body weight [1]. Produced from spore-forming Gram-positive bacteria belonging to the genus Clostridium, BoNTs cause paralysis by blocking the release of acetylcholine at peripheral cholinergic nerve terminals of the skeletal and autonomic nervous systems [2]. BoNTs have been classified as category A biothreat agents in the United States [3]. The rationale behind this designation is the extreme potency of the toxin, the relative ease with which it can be isolated and used with malice, and the severity of the clinical disease caused by the toxin [4]. In the United States, a total of 182 confirmed and 13 probable cases of botulism were reported to the Centers for Disease Control and Prevention (CDC) in 2017 [5]. In Europe, 201 suspected and 146 confirmed cases were reported in 2015 by a total of 18 European Union/European Economic Area (EU/EEA) countries, with a notification rate of <0.1 per 100,000 population. Human botulism mortality rates have been reported as high as 60% [6,7]; however, with improved supportive care including respiratory support and antitoxins, mortality rates have decreased significantly in recent years [5,8]. The duration of hospitalization and length of stay in the intensive care unit (ICU) continues to present a significant burden to the healthcare system. Humans are susceptible to all seven serotypes; thus, any one of them could be used for bioterrorism [9][10][11][12][13][14][15][16][17][18][19][20]. However, only certain serotypes are associated with human botulism, including BoNT serotypes A, B, E and F. There are currently no FDA approved vaccines available for prevention of botulism in humans against any of the seven serotypes, and until the licensure of BAT® [Botulism Antitoxin Heptavalent (A, B, C, D, E, F, G)-(Equine)] in the United States, no therapeutic was available to treat intoxication by all seven BoNT serotypes. The approval of equine botulism antitoxin in the past was based on clinical experience; however, the botulism incidence is too low to conduct carefully controlled clinical trials.
Therefore, BAT product was developed for licensure in the United States under 21 CFR Part 601 (Subpart H, Animal Rule), 'Approval of Biological Products When Human Efficacy Studies Are Not Ethical or Feasible.' Under this rule, approval is based on adequate and well-controlled animal efficacy studies in two animal models, to establish that the drug is reasonably likely to produce clinical benefit in humans, in addition to establishing safety in humans. Furthermore, a clearly defined trigger for initiation of treatment is required for use in animal efficacy studies for the treatment indication. BAT product was licensed by the United States Food and Drug Administration on 22 March 2013. It is currently the only botulism antitoxin licensed for the treatment of symptomatic botulism following documented or suspected exposure to any of the known seven BoNT serotypes in adults and pediatric patients. BAT product is a sterile solution of F(ab') 2 and F(ab') 2 -related antibody fragments prepared by blending plasma obtained from horses immunized with a specific BoNT serotype (A, B, C, D, E, F or G) of botulinum toxoid and toxin into a final heptavalent product. The guinea pig was selected as a relevant model for efficacy evaluation because of its susceptibility to all seven BoNTs [21]. In addition, there is a large body of data demonstrating the reproducibility and usefulness of guinea pigs for the efficacy evaluation of botulism vaccines and antitoxins [21][22][23][24]. Although the subtle signs of botulism such as ptosis are not visible in guinea pigs, intoxication results in muscular weakness, respiratory distress, paralysis and death mimicking the human clinical scenario [21]; thus, the response in these animals is predictive of the response in humans, an important consideration for product evaluation. The therapeutic efficacy of BAT product in comparison to a placebo (46.6% vs.0%) against BoNT serotype A in rhesus macaques has been reported previously [25]. The post-exposure prophylactic efficacy of BAT product in comparison to the placebo against all seven serotypes (>95% survival vs.0% for each serotype) in guinea pigs has also been established [21]. The therapeutic efficacy of BAT product (i.e. when given after confirmed signs of intoxication) against any of the seven BoNT serotypes mimicking the clinical use of BAT product had not been demonstrated. For the evaluation of therapeutic efficacy, the BAT product was administered to symptomatic guinea pigs to mimic the product use in a clinical setting. The results of these pivotal studies along with the demonstrated effectiveness of BAT product in rhesus macaques [25] provided evidence of effectiveness for successful licensure. Experimental plan Three separate studies were conducted. Study 1 was conducted to evaluate the efficacy of BAT product in rescuing animals intoxicated with 4x GPIMLD 50 (guinea pig intramuscular lethal dose 50) of BoNT serotypes A, C, D or F. This study was conducted in two phases; Phase 1 examined BoNT serotypes A and F, and Phase 2 examined BoNT serotypes C and D. Between thirty-one to thirty-five Hartley Guinea Pigs approximately gender-balanced were randomly assigned to either BAT product or placebo-control treatment groups for each BoNT serotype. Study 2 was conducted to determine an intoxication dose for each of BoNT serotypes A, B, C, D, E, F and G that would be highly lethal while providing an adequate window to allow for the rescue of animals in a pivotal efficacy study. 
Seventy Hartley Guinea Pigs were randomized into seven groups of 10 animals each. Groups were gender balanced. Six groups received a single intramuscular injection of either BoNT serotype A or BoNT serotype E toxin at a dose equivalent to 4.0x, 2.0x or 1.5x GPIMLD 50 . A seventh group received saline only and acted as concurrent controls. Study 3 was the pivotal 21-day survival study to demonstrate the efficacy of BAT product in rescuing animals intoxicated with BoNT serotypes A, B, C, D, E, F or G. Four hundred and seventy-six Hartley Guinea Pigs were randomized into fourteen groups of 34 animals each. Groups were gender-balanced. Animals were assigned to either BAT product or placebo-control treatment groups for each BoNT serotype. Due to a large number of animals used in this study, the study was conducted in seven separate phases, one for each BoNT serotype. Animals, husbandry and veterinary care All experiments were conducted with Hartley guinea pigs (Cavia porcellus) supplied by Charles River Laboratories (Kingston, NY and Raleigh, NC locations). Each animal was received from the supplier with a surgically implanted jugular vein catheter. Animals that were in good health, free of malformations, and exhibiting no signs of clinical disease were released from quarantine by the BBRC facility veterinarian. Animal husbandry was in accordance with the standards specified in the "Guide for the Care and Use of Laboratory Animals" [26]. Animals were individually housed in polycarbonate cages in stainless steel racks, equipped with automated watering systems maintained on 24-hour continuous room lighting to allow for clinical observations. The bedding material utilized was Sani-chips1 hardwood heattreated chips. Animals received both water and PMI Certified Guinea Pig Diet 5026 ad libitum. Housing room temperatures were maintained at 68 to 75˚F, and relative humidities were 32 to 70% while study animals were present. To reduce stress on the animals and to provide optional shelter from continuous room lighting, each animal was provided with a tinted individual plexiglass "hut" within the cage. The huts were removed from cages after observation of the first severe clinical sign or if the shelter interfered with the animal's mobility. Animals were identified by individual cage cards and ear tags. Guinea pigs were randomized pre-study intoxication on Day -1 to the treatment or control group. The dose of neurotoxin administered was verified at BBRC using a mouse potency assay in male CD-1 (ICR) mice according to procedures described by Cardella [22]. Botulinum neurotoxin intramuscular intoxication Botulinum neurotoxin serotypes A, B, C, D and E were produced at the University of Wisconsin; BoNT serotypes F and G were produced at Metabiologics, Inc. (Madison, Wisconsin). Potencies of all BoNT serotypes are given in S1 Table. Botulinum neurotoxin serotypes A, B, C, D, E and F were received as ammonium sulfate precipitates and were reconstituted in phosphate-buffered saline (PBS). Botulinum neurotoxin serotype G was received in PBS, pH 6.2 (ammonium sulfate was removed by the manufacturer; therefore, no reconstitution was required). All BoNTs used in this study were in the complex form [27] consisting of the toxin and non-toxin-associated proteins. The LD 50 was established previously [21]. 
The toxin was administered as a single 0.1 mL intramuscular (IM) injection of a specific BoNT serotype (A to G, at doses equivalent to 4x GPIMLD 50 to 1.5x GPIMLD 50 ; see S1 Table) into the muscles of the right hind leg. The toxin dose administered was verified by mouse potency assay.

Test and placebo control article intravenous administration

A preliminary study (Study 1) was conducted to evaluate the efficacy of BAT product against a limited number of toxin serotypes. For this study, guinea pigs were intoxicated with BoNT toxins (A, C, D or F) at 4.0x GPIMLD 50 via the IM route. Animals were treated IV with a single dose of placebo or a single scaled human dose of BAT product based on previous studies [21,25]. Briefly, assuming an average human weight of 70 kg and a BAT product dose of 1 vial/person, the dose volume/kg of one scaled human dose equals 1/70 of a vial, or 0.16 mL/kg based on the 11.17 mL fill volume for the lot of BAT product used for these studies. This is consistent with FDA guidance, which states that for biologicals with molecular weight >100 kDa, the dose should be normalized on a mg/kg basis [28]. The toxin neutralization capacity administered, based on the label claim for the lot of BAT product used for these studies, is given in S2 Table. The product was administered immediately after the first observed moderate/severe clinical sign (treatment trigger) of intoxication. For Study 2, no test or control article was administered.

For Study 3, guinea pigs were intoxicated with BoNT toxins (A, B, C, D, E, F or G) at 1.5x GPIMLD 50 via the IM route. The trigger for treatment with a single dose of placebo or a single scaled human dose of BAT product was defined as the fourth consecutive occurrence of a moderate or severe clinical sign of intoxication. Within 45 minutes of the trigger for treatment, each animal was intravenously administered either the test or the control article. The treatment was administered via the indwelling venous catheter. Catheter patency was confirmed by visualization of blood in the catheter lumen immediately prior to treatment.

The test article was Botulism Antitoxin Heptavalent (serotypes A, B, C, D, E, F and G)-(equine), Lot #2060401Y, manufactured by Emergent BioSolutions Canada Inc. (Winnipeg, Manitoba, Canada). It is a sterile solution which should be stored at -15 to -25˚C. The manufacturing process, label claims for potency, and toxin neutralization capacity for this product are described in detail by Emanuel and Kodihalli [21,25]. The same lot of BAT product was used for both Study 1 and Study 3. Toxin neutralization potency for each serotype ranged from 1,229 U/vial (BoNT serotype G) to 10,690 U/vial (BoNT serotype E; see S2 Table). The control article was Botulism Antitoxin-Placebo, Lot #10703480, from Emergent BioSolutions Canada Inc. (Winnipeg, Manitoba, Canada). Botulism Antitoxin Placebo (normal equine immune globulin) was manufactured using a procedure similar to the manufacture of BAT product described elsewhere [21]. The placebo had a protein concentration of 50 mg/mL and a potency of <0.38 Units/vial against all seven BoNTs. This material is described as a clear to opalescent liquid essentially free of foreign particles in a 20 cc Type 1 glass container. The same lot of Botulism Antitoxin Placebo was used for both Study 1 and Study 3. The test and control article dilution material was normal saline (0.9% sodium chloride USP, lot #J8H009) manufactured by Baxter. It was stored at controlled room temperature per the manufacturer's specifications.
Euthanasia criteria Any animals meeting a criterion for euthanasia were pre-terminally euthanized. The three criteria were: (1) any animal having a 25% or greater weight loss (when compared to last preintoxication body weight) in conjunction with any concurrent severe sign of intoxication; (2) any animal that has two consecutive observations of total paralysis; and (3) any animal that did not meet either of the first two criteria but was judged to be moribund. Only the Study Director (or the Battelle staff veterinarian in consultation with a lead technician if Study Director was not available) determined if an animal was moribund. Animals that required euthanasia were first administered 0.3 mL xylazine hydrochloride (20 mg/mL) and 0.4 mL ketamine hydrochloride (100 mg/mL) by IM injection and then administered a lethal dose of Fatal-Plus (a euthanasia agent containing pentobarbital). Clinical observations Efficacy of BAT product in animals intoxicated with 4x GPIMLD50 of botulinum neurotoxin (study 1). Observations were initiated 12 hours post intoxication and performed hourly until every animal received treatment. Following treatment of the last animal, and continuing through Day 7, observations were made once every 3 hours and from Day 8 to 21 twice daily at least 6 hours apart. Determination of botulism neurotoxin intoxication dose to demonstrate efficacy (study 2). Observations were made at 6 hours post-challenge for BoNT serotype E, and within 18 hours post-challenge for BoNT serotype A. Animals were observed frequently (hourly to once every 8 hours for BoNT serotype A, half-hourly to hourly for BoNT serotype E) until study termination (Day 14). Animals judged to be in poor and deteriorating condition were euthanized. Pivotal therapeutic efficacy of BAT product in guinea pigs intoxicated with 1.5x GPIMLD50 of botulism neurotoxin (study 3). A pilot study was conducted prior to the pivotal efficacy study with a toxin dose of 1.5x GPIMLD 50 for a few of the serotypes in which animals reverted to being asymptomatic (data not shown) after the onset of moderate clinical signs including right hind limb weakness (treatment trigger). To avoid treating animals with transient clinical signs, an objective, unambiguous and reliable trigger for treatment consistent across all serotypes was determined to be observation of four consecutive signs of any moderate (salivation, lacrimation, weak limbs, right hind limb weakness, changes in breathing sounds or patterns) or severe signs (forced abdominal respirations, total paralysis), although not necessarily four consecutive observations of the same sign of intoxication, by trained personnel. To ensure that the clinical sign assessment was objective and reproducible, the personnel conducting clinical observations were required to pass a proficiency test prior to study start confirming their ability to identify symptoms in guinea pigs after intoxication. Observations were initiated within 6 hours post-intoxication for BoNT serotypes C, E and F; and within 12 hours post-intoxication for BoNT serotypes A, B, D and G. Guinea pigs were monitored for signs of intoxication either hourly ± 15 minutes (BoNT serotypes A, B, C, D and G) or every half hour ± 15 minutes (BoNT serotypes E and F). As soon as each animal showed its fourth consecutive moderate/severe clinical sign (i.e. trigger) of botulism, it was treated within 45 minutes with either BAT product or placebo-control (as appropriate). 
Animals were treated upon four consecutive observations of moderate or severe signs of botulinum intoxication to provide confidence that animals are showing the actual onset of clinical disease. The majority of animals were treated based on observation of right hind limb weakness (defined as the animal failing to exhibit a clutch response to a blunt object inserted across the rear leg claws) or change in breathing sounds or pattern (defined as change in breathing with audible sounds, excessive deep or shallow or irregular breathing). Each animal was intravenously administered with BAT product or placebo control (1.0 mL per 500 g body weight) article via an indwelling venous catheter adjusted to the correct volume immediately prior to administration. Time of administration was recorded immediately postdose. Following treatment of the final animal in each serotype, observations were reduced to every 3 hours until study Day 10, or later if there were no clinical signs, they were reduced to once every 6 hours and from Day 15 to twice daily (at least 6 hours apart) until study termination on Day 21. Data analysis Statistical analyses were performed using Stata (version 11.1). Survival was the primary endpoint, secondary endpoints including the incidence of clinical signs, time to death and clinical severity scores were analyzed. These secondary endpoints provide additional evidence of the efficacy of BAT product. As Study 3 was the pivotal study for licensure under the Animal Rule (21 CFR 601.90) [29], analyses were conducted in a manner similar to that done for clinical trials. Specifically, an intent-to-treat (ITT) analysis set for each serotype was used, consisting of only those animals that were intoxicated with botulinum neurotoxin and survived to receive the test or placebo control article as appropriate for the treatment group to which they were assigned. Two animals (one intoxicated with BoNT serotype C and one intoxicated with BoNT serotype D) which died whose preceding clinical course was not consistent with BoNT intoxication and progression were retained for the analysis as the cause of death could not be determined based on pathology in these animals. For each treatment and placebo group, the survival rate at 14-or 21-days post-intoxication was calculated, along with an exact 95% confidence interval for the survival rate using the Clopper-Pearson method. Two-tailed Fisher's exact tests were used to determine if there was a statistically significant difference between survival rates for the BAT product treatment group and the placebo control group or each serotype. Kaplan-Meier curves along with log-rank tests were used to compare the time to death between the BAT product treatment group and the placebo control groups for each serotype. The median time to death was determined along with a two-sided 95% confidence interval for each group using the product-limit method. The incidence of clinical signs was calculated, along with an exact 95% confidence interval, using the Clopper-Pearson method. Two-tailed Fisher's exact tests were then used to compare the incidence of clinical signs between the BAT product treatment group and the placebo control groups for each serotype. Kaplan-Meier curves along with log-rank tests were used to compare the time to onset of clinical signs between the BAT product treatment group and the placebo control group for each serotype. 
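For readers who want to reproduce the style of analysis described above, the following Python sketch computes a two-tailed Fisher's exact test on 21-day survival counts and Clopper-Pearson exact confidence intervals for the survival proportions. The counts used here are illustrative placeholders for one serotype with 34 animals per arm, not the study's actual data.

```python
from scipy import stats

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) confidence interval for a survival proportion."""
    lower = 0.0 if k == 0 else stats.beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else stats.beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

# Hypothetical counts: 33/34 survivors with treatment vs. 0/34 with placebo.
treated_alive, treated_n = 33, 34
placebo_alive, placebo_n = 0, 34

table = [[treated_alive, treated_n - treated_alive],
         [placebo_alive, placebo_n - placebo_alive]]
odds_ratio, p_value = stats.fisher_exact(table, alternative="two-sided")

print("treated survival 95% CI:", clopper_pearson(treated_alive, treated_n))
print("placebo survival 95% CI:", clopper_pearson(placebo_alive, placebo_n))
print("two-tailed Fisher's exact p =", p_value)
```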
The median time to onset of clinical signs was determined along with a two-sided 95% confidence interval for each group using the product-limit method. This analysis was performed for each clinical sign, and for the grouped clinical signs, by serotype. The assessment of clinical severity was calculated for each animal in the analysis set, wherein mild clinical signs (lethargy) were assigned a value of "1", moderate signs (salivation, lacrimation, right hind limb weakness, weak limbs, change in breathing sounds or patterns) were assigned a value of "2", and severe signs (forced abdominal respirations, total paralysis) a value of "3". For those animals which succumbed or were euthanized, a score of "20" was assigned for that time point and for all subsequent time points to the end of the study. At each clinical observation time point, the clinical severity scores were calculated (cumulative for all the clinical signs observed at that time point for each animal) and averaged for each treatment group. For animals that survived to study end, the final sacrifice record was not used in the analysis.

Ethics statement

The research was conducted in compliance with the Animal Welfare Act (AWA, 7 U.S.C.).

Efficacy of BAT product in guinea pigs intoxicated with 4x GPIMLD 50 of neurotoxin (study 1)

The in vivo therapeutic efficacy of BAT product was evaluated in groups of guinea pigs (n = 31 to 35/group) that were intoxicated intramuscularly (IM) with the respective BoNT serotypes (A, C, D, F) at 4.0x guinea pig intramuscular lethal dose fifty (GPIMLD 50 ). Animals were treated intravenously (IV) with a single scaled human dose of BAT product or placebo immediately after the first observed moderate/severe clinical sign (treatment trigger) of intoxication. All placebo-treated animals died for all BoNT serotypes tested, confirming the lethality of the selected challenge dose. Five out of 35 guinea pigs treated with BAT product survived in the BoNT serotype C group, and 2/31 survived in the BoNT serotype F group (Table 1). There were no survivors in the BoNT serotype A (0/33) or BoNT serotype D (0/33) groups. Survival observed with BAT product treatment compared to placebo was very low (0%-14%); consequently, survival was not statistically different between the treatment and placebo groups for any of the four BoNT serotypes tested. All animals that died had clinical observations consistent with BoNT intoxication before death. The mean and median times to death were also determined for each group. The time to onset of moderate clinical signs (treatment initiation) was consistent between BAT product and placebo control groups for all BoNT serotypes; however, there was a delay in time to onset of severe clinical signs (immediately preceding death) in treatment groups for three (A, C, D) of the four BoNT serotypes tested (Table 2). The clinical progression was very rapid at the 4x GPIMLD 50 dose of botulinum toxin for all four BoNT serotypes tested. There was an overlap in the time to onset of moderate (treatment trigger) and severe signs (Table 2). The delay in treatment while waiting for the onset of signs, together with the rapid clinical course, resulted in survival of only 0-14% of animals (depending on BoNT serotype), compared to 0% survival in the placebo groups. The time between the onset of clinical signs (time of treatment) and death was rapid and insufficient for BAT product treatment to prevent mortality, despite the excess neutralization capacity available in the dose of BAT product administered [21] (S2 Table).
Thus, the guinea pig model using an exposure dose of 4x GPIMLD 50 is not appropriate for evaluation of the therapeutic efficacy of BAT product due to the rapid progression of clinical disease at the higher toxin dose. Consequently, it was necessary to extend the duration of clinical signs (time between treatment and death) by selecting a lower and more appropriate toxin dose to provide a greater opportunity for demonstration of the therapeutic efficacy of BAT product in this model.

Determination of botulism neurotoxin intoxication dose to demonstrate efficacy (study 2)

An extensive time course and lethality evaluation of a range of toxin doses (4x, 2x, and 1.5x GPIMLD 50 ; n = 10) of BoNT serotypes A and E was conducted to establish the toxin dose that provided the longest clinical course while still resulting in mortality, for use in the pivotal therapeutic efficacy study. The selected BoNT toxins represent typical (serotype A) and fast-acting (serotype E) toxins. All animals intoxicated with BoNT serotypes A and E died or were euthanized before study Day 7. The median times to onset of clinical signs at 2x and 4x GPIMLD 50 were comparable for both BoNT serotypes; however, time to onset was longer at 1.5x GPIMLD 50 for BoNT serotype A (Table 3). The duration of clinical signs decreased as the challenge dose increased for both BoNT serotypes. The median time from clinical onset to death in the 1.5x GPIMLD 50 dose group was approximately 4.6x and 2.6x longer than that of the 4x GPIMLD 50 dose group for BoNT serotypes A and E, respectively. The time to death of animals intoxicated with the high toxin dose (4x GPIMLD 50 ) was approximately 3.2x and 1.6x faster than that of animals exposed to the low dose (1.5x GPIMLD 50 ) for BoNT serotypes A and E, respectively (Table 4). In general, animals challenged with BoNT serotype E had a shorter clinical course than those intoxicated with BoNT serotype A. A toxin dose of 1.5x GPIMLD 50 was selected as the revised toxin dose for use in therapeutic efficacy studies for all seven toxin serotypes. This dose was expected to produce a more prolonged clinical course and consequently would provide an opportunity to demonstrate the therapeutic effect of BAT product while still resulting in complete mortality of control animals. Moderate clinical signs, including right hind limb weakness, were identified and selected as early signs for use as a trigger for treatment initiation.

Pivotal therapeutic efficacy of BAT product in guinea pigs intoxicated with 1.5x GPIMLD 50 of neurotoxin (study 3)

The pivotal study was a randomized, blinded, and controlled GLP study. A total of 616 guinea pigs were randomized to fourteen gender-balanced groups (n = 34) and were intoxicated with a dose equivalent to 1.5x GPIMLD 50 of the appropriate BoNT serotype (serotype A, B, C, D, E, F or G) given as a single intramuscular (IM) injection to the right hind limb. Due to the large sample size, the study was conducted independently for each BoNT serotype. At four consecutive occurrences of any moderate or severe signs, animals were treated IV with one human-scaled dose of BAT product or placebo. There was a statistically significant (Fisher's exact test, p<0.0001) enhancement in survival achieved with the 1x scaled human dose of BAT product when compared to placebo for all BoNT serotypes (Table 5).
Most treated animals, along with all placebo control animals, continued to progress from the trigger (right hind limb weakness in most cases) to develop systemic clinical signs such as a change in breathing and weak limbs. Treatment with BAT product resulted in virtually complete survival irrespective of the intoxicating BoNT serotype; however, mortality was lower than expected in the BoNT serotype G placebo group. Even with the lower rate of mortality in the placebo group, a statistically significant improvement in survival of guinea pigs exposed to BoNT serotype G was achieved with BAT product treatment. Two BAT product-treated animals died before the study end at 21 days: one intoxicated with BoNT serotype C (died on day 14) and one intoxicated with BoNT serotype D (died on day 8). In both deaths, the preceding clinical course was not consistent with BoNT intoxication progression. The cause of death could not be determined based on pathology. All placebo control animals (if surviving to the first weighing at either day 7 post-intoxication or upon exhibiting poor and deteriorating condition) lost weight before death. All but eight BAT product-treated animals gained weight throughout the post-intoxication period. The difference in weight changes between the treatment groups supports an overall clinical benefit with BAT product treatment (S1 Fig). The median time to death could not be estimated for many of the treated groups since no mortality was observed for BoNT serotypes A, B, E, F and G, and only one death each was noted for serotypes C and D (Table 5). The treated groups had a significantly (p<0.0001) longer time to death compared to placebo controls for all seven serotypes.

Table 5. Summary of survival with Fisher's exact test comparisons and Kaplan-Meier median time to death with log-rank test comparisons between BAT product-treated (1x scaled human dose) and placebo control groups in guinea pigs intoxicated with 1.5x GPIMLD 50 BoNT serotypes.

Treatment with BAT product substantially reduced the overall incidence of severe clinical signs compared to placebo (Table 6). Despite immediate intervention after the onset of clinical signs, almost all (~99%) BAT product-treated animals developed the clinical sign of change in breathing rate/sound or pattern, in addition to the right hind limb weakness (which was the treatment trigger in the majority of cases), at rates consistent with the placebo group. However, there was a reduced incidence of other moderate clinical signs, including weak limbs, lacrimation and salivation, in treatment groups compared to placebo groups. The overall incidence of weak limbs in treatment groups ranged between 8.8% (BoNT serotype D, 3/34) and 76.5% (BoNT serotype F, 26/34), compared to 100% (34/34) in each of the placebo groups. The incidence of lacrimation in placebo groups ranged between 8.8% (3/34, BoNT serotype A) and 50% (17/34, BoNT serotype D), compared to a single BAT product-treated animal (BoNT serotype G). Salivation in treated animals was observed with exposure to BoNT serotype F (17.6%, 6/34) and BoNT serotype G (11.8%, 4/34), but was observed for all BoNT serotypes in placebo groups, with incidence rates of between 8.8% (3/34) for BoNT serotype E and 91.2% (31/34) for BoNT serotype G.

Table 6. Incidence of clinical signs in BAT product-treated (1x scaled human dose) and placebo control groups of guinea pigs intoxicated with 1.5x GPIMLD 50 BoNT serotypes A, B, C, D, E, F or G.
Incidence of severe signs of botulism was also higher in the placebo groups compared to treatment groups. For example, between 17.6% (BoNT serotypes C and D, both 6/34) and 97.1% (BoNT serotype F, 33/34) of placebo group animals exhibited forced abdominal respirations compared to only two BAT product-treated animals (one BoNT serotype C and one BoNT serotype F animal). Similarly, between 20.6% (BoNT serotype G, 7/ 34) and 100% (BoNT serotypes A and E, both 34/34) of placebo-treated animals progressed to total paralysis (severe sign requiring euthanasia) compared to 0% (0/34 for each serotype) of animals in the treatment groups. These results are indicative of the continued progression of the disease from mild to severe in placebo groups compared to the rapid arrest and subsequent reversal of the progress of illness among treated animals ( Table 6). Severe signs (forced abdominal respiration and total paralysis) of botulism were observed almost exclusively in the placebo control animals, although the exact incidence varied with each BoNT serotype. There was no incidence of forced abdominal respiration in the treated group for BoNT serotype A. A significantly lower incidence of forced abdominal respiration in most BoNT serotypes (serotypes A, B, D, E, F and G) and total paralysis in all seven BoNT serotypes was observed in treated groups compared to placebo groups (p<0.05). Forced abdominal respiration was observed transiently (two consecutive half-hourly observations) for one animal intoxicated with BoNT serotype F and treated with BAT product. A second animal intoxicated with BoNT serotype C and treated with BAT product was found dead approximately three hours after first exhibiting forced abdominal respiration. Clinical progression was comparable between treatment and placebo groups but diverged approximately 21-58 hours post-treatment depending on the BoNT serotype (Fig 1). In general, the clinical severity scores demonstrate that for a period following intoxication and treatment, the clinical progression was comparable between BAT product and placebo groups; however, later, the clinical scores diverged. After this divergence the clinical severity score for BAT product-treated animals generally decreased as animals began to recover. In contrast, the clinical severity score for placebo-treated animals for most BoNT serotypes dramatically increased due to the onset of severe clinical signs or death. The clinical severity score continued to rise for placebo control animals until the end of the study or until all were dead or euthanized (Fig 1). Discussion BAT product is an equine-derived heptavalent antitoxin licensed under the Animal Rule (21 CFR 601.90-95) for treatment of symptomatic botulism following documented or suspected exposure to BoNT serotypes A, B, C, D, E, F or G in adult and pediatric patients. The demonstrated efficacy of BAT product in rhesus macaques [25] along with the therapeutic effectiveness of BAT product against all seven BoNT serotypes in guinea pigs exhibiting clinical signs consistent with botulism provided the evidence of effectiveness in support of licensure under the Animal Rule in the US. This report for the first time demonstrates the therapeutic efficacy of a botulinum antitoxin against all seven BoNT serotypes in symptomatic guinea pigs. Guinea pigs are susceptible to all seven BoNT serotypes [22,30,31]. Our detailed clinical course studies in guinea pigs confirmed the susceptibility to all seven serotypes [21]. 
While the primary disease of botulism (progressive paralysis resulting in death) is comparable between guinea pigs, rhesus macaques and humans, specific details such as the onset of clinical disease differ between the species [32]. The results of the studies described here demonstrate the effectiveness of BAT product against all seven BoNT serotypes when administered to systemically intoxicated guinea pigs after the onset of definitive clinical signs of botulism. Consistent with our previous studies in macaques [25], intervention with BAT product did not result in an immediate cessation of disease progression, likely due to a portion of the toxin having already entered neuronal cells where it is no longer accessible to the BAT product. The significant survival (nearly 100%) obtained in guinea pigs is due to the administration of BAT product as soon as possible following the onset of non-transient signs of intoxication in each animal. This finding is similar to previous reports of improved survival in humans with early treatment [33][34][35][36].

Although survival rates were significantly higher in BAT product-treated animals irrespective of BoNT serotype, the mortality rate in placebo controls was not universal. In particular, a significant proportion (50%) of placebo control animals intoxicated with BoNT serotype G survived to the end of the study. The lower than expected mortality was not believed to be due to an error in dosing, as toxin dose formulation results confirmed the target dose (S3 Table). The lower than expected mortality is instead likely due to an inaccurate estimate of the LD 50 . This is critical, as the 95% confidence intervals for the estimates of one GPIMLD 50 for all seven BoNT serotypes are wide given the nature of the dose-response, where a small increase in toxin dose is capable of changing survival rates from 0% to 100% [21]; this is compounded by the relatively broad acceptance criteria associated with the toxin potency estimation due to the in vivo assay method used (S3 Table). Also, the actual dose delivered was 15% less than the target dose based on dose formulation analysis of the challenge material. To address this uncertainty, the sample size determinations were made assuming survival rates of up to 65% for placebo-treated animals and not less than 95% for BAT product-treated animals.

Fig 1. Guinea pigs were intoxicated with 1.5x GPIMLD 50 of BoNT serotypes A, B, C, D, E, F or G and subsequently treated with 1.0x BAT product (dashed red line) or placebo (solid blue line). Treatment was initiated after four consecutive observations of moderate or severe signs of botulinum intoxication. Animals were assigned a score of 1 (mild signs of intoxication), 2 (moderate signs of intoxication) or 3 (severe signs of intoxication) at each timepoint. A value of 20 was assigned for the time at which an animal succumbed or was euthanized, and for all subsequent time points to the end of the study.

Clinical severity scores are relevant for assessing the predictive efficacy of BAT product in human patients because of their comparability to the clinical scenario. In addition to the survival benefit, the treatment also reduced the severity of the disease. Although intravenous administration of BAT product resulted in an immediate distribution within the circulatory system, the severity scores of treated animals were comparable to placebo controls until 2-3 days post-intoxication.
The severity score for placebo control animals in most serotypes dramatically increased after that time resulting in death or euthanasia. In contrast, almost all treated animals (>98%) recovered completely by day 21. When observed as a cohesive whole, these data demonstrate the therapeutic efficacy of BAT product when given after the onset of systemic clinical disease. These findings are consistent with the clinical experience, where administration of antitoxin did not result in immediate cessation in the clinical progression but did minimize the subsequent severity of the disease [35]. The duration of the recovery phase in human cases can range from several days to many months depending on the severity of the disease, serotype involved and time of treatment [10,34,37,38]. Depending on the severity, botulism intoxication can require extended periods of hospitalization and intensive care, which may not be feasible in a mass intoxication scenario [39]. Reducing the duration of hospital stays and the need for intensive care support with antitoxin treatment provide an opportunity for existing health care systems to continue to function in a mass exposure event scenario. While animal-derived anti-BoNT immunoglobulins can be immunogenic and may cause adverse events when administered to another species [39], no adverse events were noted in the animal studies presented. Overall, BAT product was well tolerated, consistent with the subsequently demonstrated favorable risk-benefit profile in patients with confirmed or suspected botulism treated with BAT product [36,40]. The significant protection obtained using the heptavalent antitoxin may be due to the polyclonal nature of the product that can target many different regions of the toxin and provide broader biological activity by interfering at various steps in the toxin pathway. Several monoclonal antibodies (mAbs) under development against BoNT toxin serotypes (A, B, E and F) have shown efficacy in animal models mostly in the form of a cocktail consisting of two or more mAbs to cover the breadth of response against each target toxin and counter naturally occurring toxin subtypes. The potential modification of toxin for use as a bioweapon limits the utility of mAbs as therapeutics [41][42][43][44][45][46][47]. Also, there are significant barriers to supply and cost with a monoclonal antibody treatment. Therefore, in a mass exposure event scenario, without knowing the exact serotype involved, a heptavalent product that can neutralize the entire spectrum to BoNT serotypes with a single dose is an effective countermeasure for bioterrorism concerns. For emergency preparedness and response, the United States government, through the Biomedical Advanced Research and Development Authority, has stockpiled BAT product in the Strategic National Stockpile. The mechanism of action of BAT product is by the clearance of toxin in circulation and inhibiting the binding of the toxin to the neuronal cell surface receptor [48,49]. Published reports suggest that there is a correlation between the toxin dose and potential therapeutic window, and that antitoxin treatment is ineffective in experimental animals exposed to relatively high doses of BoNT [7,50]. 
Based on the standard neutralization capacity of one unit of toxin [21], and the large excess of antitoxin administered (relative to the toxin exposure dose), the failure to rescue animals intoxicated with 4x GPIMLD 50 of botulinum toxin in Study 1 can be attributed to the rapid progression of the disease due to the high intoxication dose of BoNT. This is evidenced by the overlap in times to onset of moderate and severe signs of botulism intoxication in the placebo groups intoxicated with BoNT serotypes A, D and F. It is likely that neurons internalized lethal amounts of toxin before treatment. Thus, it was necessary to identify an intoxication dose that provided a wider window of opportunity for treatment while remaining highly lethal to the control group. There was a clear relationship between clinical progression and the toxin dose in Study 2, with an adequate window of opportunity at a lower but still highly lethal toxin challenge dose under experimental conditions. The data reported here show that a challenge dose of 1.5x GPIMLD 50 of botulinum toxin is both relevant and reasonable for the evaluation of therapeutics against botulinum intoxication. In conclusion, a single dose of BAT product administered to symptomatic guinea pigs following exposure to lethal quantities of BoNT (A, B, C, D, E, F or G) resulted in a statistically significant survival benefit compared to placebo control. Also, the progression of the clinical signs associated with botulinum intoxication was arrested with BAT product treatment. The results of these pivotal efficacy studies against all seven BoNT serotypes in guinea pigs, along with the efficacy against BoNT serotype A in rhesus macaques [25], provided evidence of the effectiveness of BAT product in support of licensure under the Animal Rule in the US. Currently, BAT product is the only FDA-approved treatment for symptomatic botulism following documented or suspected exposure to botulinum neurotoxin serotypes A, B, C, D, E, F or G in adult and pediatric patients.
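The sample size assumptions quoted above (placebo survival of up to 65% versus at least 95% with BAT product) lend themselves to a simple power calculation. The sketch below estimates, by simulation, the power of a one-sided Fisher exact test for a range of group sizes; it is only an illustration of that kind of calculation, not the statistical method actually used in the study protocol, and the significance level and simulation settings are assumptions.

```python
# Illustrative power simulation for a two-arm survival comparison.
# Assumed design parameters from the text: placebo survival up to 65%,
# BAT-treated survival not less than 95%. This is only a sketch, not the
# sample-size methodology used in the studies.
import numpy as np
from scipy.stats import fisher_exact

def simulated_power(n_per_group, p_placebo=0.65, p_treated=0.95,
                    alpha=0.05, n_sim=2000, rng=np.random.default_rng(0)):
    """Estimate power of a one-sided Fisher exact test by simulation."""
    hits = 0
    for _ in range(n_sim):
        surv_p = rng.binomial(n_per_group, p_placebo)
        surv_t = rng.binomial(n_per_group, p_treated)
        table = [[surv_t, n_per_group - surv_t],
                 [surv_p, n_per_group - surv_p]]
        # One-sided test: treated survival greater than placebo survival.
        _, p_value = fisher_exact(table, alternative="greater")
        hits += p_value < alpha
    return hits / n_sim

for n in (10, 15, 20, 25, 30):
    print(n, round(simulated_power(n), 3))
```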
9,515.8
2019-09-17T00:00:00.000
[ "Medicine", "Biology" ]
Synthesis and characterization of WS2/SiO2 microfibers Tungsten disulfide polycrystalline microfibers were successfully synthesized by a process involving electrospinning, calcination, and sulfidation steps. We used an aqueous solution of silicotungstic acid (H4SiW12O40) and polyvinyl alcohol as precursors for the synthesis of composite fibers by the needle-less electrospinning technique. The obtained green composite fibers (av. diam. 460 nm) were converted by calcination in air to tungsten oxide WO3 fibers with traces of SiO2 and a smaller diameter (av. diam. 335 nm). The heat treatment of the WO3 fibers under flowing H2/H2S/N2 stream led to conversion to tungsten disulfide WS2 with retention of the fibrous morphology (av. diam. 196 nm). Characterization of the intermediate and final fibers was performed by the XRD, SEM, TEM, HAADF STEM EDS, elemental analyses ICP-OES, and IR spectroscopy methods. Introduction Electrospinning is a well-established method for the preparation of many types of materials with general nanofibrous morphologies [1]. An electrode, in the form of a syringe needle or free surface of a polymer solution, is electrically charged and a Taylor cone is formed from the droplet or surface. A jet of the evaporating solution is drawn toward a grounded or negatively charged counter electrode which serves simultaneously as a collector of solidified polymer fibers. This technique, closely resembling electrospraying, is exploited for obtaining various fibrous morphologies: micro-and submicron fibers, nanofibers, hollow fibers, Janus-type fibers, porous structures, interconnected fibrous membranes, branched fibers and many other shapes [1][2][3]. The process is controllable by many variables, where the composition of the spinning solution is a dominant factor next to the applied voltage, electrode distance, feed rate, and several others [3]. Electrostatic spinning is used for the preparation of purely organic fibers where almost all regular and even modified polymers were successfully electrospun to form ultrafine or at least submicrometric fibers. However, not only these classical systems could be processed, and nowadays, we can witness intensive research on the fabrication of inorganic, purely ceramic, and mixed composite fibers by this technique [4,5]. Many oxides, carbides, nitrides, and multimetallic compounds, such as perovskites and spinels, have been reported [1]. Inorganic nanofibers could be used in many already existing applications including energy storage, battery cathodes or anodes, catalysts, fillers in composite materials, gas sensors, and as precursors of fine ceramics [6]. Following the electrospinning process, green composites of inorganic precursors and organic polymers are formed. To obtain pure oxide ceramic fibers, it is necessary to remove the organic polymer matrix, in most cases, via high temperature burn up in an air atmosphere. Additional variables are thus involved, such as maximum firing temperature, time and rate of heating, and the used atmosphere. The ambient gas has a special importance during the annealing treatment after removal of the polymer in air atmosphere. By using various gaseous mixtures or pure gasses, it is possible to obtain diverse inorganic compounds in the form of ultrafine fibers. For example, reducing atmosphere could lead in some cases to purely metallic polycrystalline fibers or utilization of, e.g., ammonia, methane, and hydrogen sulfide, produces nitrides, carbides, or sulfides, respectively [7][8][9]. 
An advantage of the described method is the facile scale-up of micro-and nanofiber fabrication process via needle-less electrospinning from the solution surface, which potentially allows building an industrial-level production line with continuous operation [10][11][12]. Electrospinning of tungsten-containing fibers has to be based on a soluble inorganic precursor of this element. Simultaneously, it has to be transformed by after-spinning processes to a form, which is suitable for subsequent reactions without impurity elements, which could cause the formation of undesirable phases. In practical terms that dictates usage of water-soluble ammonium tungstates, tungstic acid, or other precursors forming tungsten trioxide by heat treatment in an oxygen atmosphere. Selection of the right precursor could be a challenging task due to the limited solubility of some precursors, such as the already mentioned tungstic acid or ammonium paratungstate. Much more soluble ammonium metatungstate, however, suffers from a high cost, which could be prohibitive for production on a multigram scale. Sodium tungstate, which is easily soluble in aqueous solvents, however, cannot be used due to the presence of unwanted sodium cations, which will remain embedded in the material even after a high-temperature treatment. Finding a proper precursor for industrial scale-up from laboratory conditions, therefore, is not straight forward. Silicotungstic acid, H 4 SiW 12 O 40 , is polyoxometalate acid consisting of 12 tungsten atoms and one silicon atom all compensated by oxides and four hydrogens forming a Keggin-type structure where silicon resides in the central tetrahedral cavity of a W 12 cage. This compound is mostly applied as a catalyst for a broad spectrum of reactions, even on an industrial scale. Production of ethyl acetate and acetic acid from ethylene via catalysis by silicotungstic acid was commercialized by Showa Denko [13]. Also, a significant application could be found as an additive to fuel cell membranes enhancing proton conductivity [14]. Silicotungstic acid is highly soluble in aqueous systems and we used it here as an inorganic precursor for the electrospinning process. In the present work, the silicotungstic acid/polyvinyl alcohol (PVA) solution was electrospun via a scaled-up needle-less electrospinning procedure to green composite fibers that by calcination in air formed submicron polycrystalline fibers consisting of WO 3 /SiO 2 . Subsequently, H 2 S treatment of the annealed tungsten oxide fibers was undertaken, according to a procedure which was extensively used for sulfidation of pure WO 3-x nanoparticles [27], providing tungsten disulfide WS 2 microfibers. The electrospinning process was performed under an ambient atmosphere in an air-conditioned room. The details of the electrospinning parameters are provided below. The prepared solutions were characterized before electrospinning by conductometry, viscosimetry, and surface tension measurements. Preparation of electrospinning solution PVA (150 g) was dissolved in deionized water (1150 g) by stirring and heating for several hours providing a 11.5 wt% solution. Silicotungstic acid hydrate (120 g) was dissolved in deionized water (200 g). Both solutions were combined at ambient temperature and homogenized by intensive stirring for several hours. A clear colorless viscous solution was formed and further characterized with respect to its physical properties ( Table 1). 
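As a quick consistency check of the solution composition, the masses given above (150 g of PVA dissolved in 1150 g of water and 120 g of silicotungstic acid hydrate dissolved in 200 g of water) can be combined in a short mass-balance calculation; the resulting weight fractions reproduce the approximate contents quoted in the next paragraph.

```python
# Mass-balance check of the spinning-solution composition using the masses
# given in the preparation section (150 g PVA in 1150 g water, 120 g
# silicotungstic acid hydrate in 200 g water).
m_pva, m_water_pva = 150.0, 1150.0
m_sta, m_water_sta = 120.0, 200.0

pva_stock_wt = 100 * m_pva / (m_pva + m_water_pva)   # ~11.5 wt% PVA stock
m_total = m_pva + m_water_pva + m_sta + m_water_sta  # combined solution mass
pva_final_wt = 100 * m_pva / m_total                 # ~9.3 wt% PVA
sta_final_wt = 100 * m_sta / m_total                 # ~7.4 wt% H4SiW12O40

print(f"PVA stock: {pva_stock_wt:.1f} wt%")
print(f"Final PVA: {pva_final_wt:.1f} wt%, final silicotungstic acid: {sta_final_wt:.1f} wt%")
```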
The final polymer and tungsten precursor contents in the prepared solution were approx. 9.3 and 7.4 wt%, respectively. Needle-less electrospinning The prepared solution (approx. 800 cm 3 ) was transferred to the electrode vessel with a partially submerged electrode. A grounded counter electrode in the form of a stretched wire was covered with a large sheet of aluminum foil (approx. 0.8 × 0.4 m), and the electrode distance was set to 17.0 cm. The speed of the rotating electrode was set to 30 rpm. The applied voltage was set to 50 kV. The process ran for 4 h, by which time half of the solution had been consumed and the aluminum collector was covered by a thick layer of nonwoven felt of a green composite of PVA and silicotungstic acid. Electrospinning was interrupted due to a noticeable increase in the viscosity of the solution. The increased viscosity was most probably caused by an increased rate of solvent evaporation from the relatively high surface area of the electrode and the solution. The solvent evaporation was exacerbated by ventilation of the chamber. The prepared fibrous material was peeled off, analyzed by SEM and TGA/DSC and used in the further processing. Calcination and high-temperature treatment Collected layers of electrospun fibers were treated in a muffle oven at 600°C, which was set according to the TGA/DSC measurement. The oven was programmed to achieve the maximum temperature in 4 h, followed by another 4 h at the constant temperature and finally spontaneous cooldown to ambient temperature. After this heat treatment, the material was collected and analyzed. High-temperature reaction with H 2 S The high-temperature sulfidation was carried out in a reducing atmosphere. The conditions of the process were similar to those used for the synthesis of other sulfide nanostructures [27]. The powder was placed into a quartz boat and then inserted into a quartz reactor. The reactor was purged continuously with N 2 in order to remove traces of O 2 and moisture that would otherwise interfere with the course of the reaction. Then, the reaction gases, forming gas (H 2 /N 2 at a ratio of 10/90) and H 2 S, were added to the flow. Following this step, the reactor was inserted into a horizontal furnace, which was preheated to 840°C and maintained at this temperature for 2 h. At this point, the boat was moved out of the oven and left to cool spontaneously to room temperature. During this entire series of steps, the reactor was purged with a continuous flow of the reaction gases. Several experiments were run with various reaction conditions in order to study their possible influence on the result of the sulfidation process. The total flow rate was about 130-150 cm 3 min -1 . The composition of the gas flow was varied slightly between 5 and 10 cm 3 min -1 H 2 and 7-10 cm 3 min -1 H 2 S, with the rest being nitrogen carrier gas. The precursor powder was either placed directly into the quartz boat or into prefilled quartz crucibles, which were then placed in the boat. Characterization of the solutions and powders Thermogravimetric analysis Thermogravimetric analysis and differential scanning calorimetry (TG/DSC) were performed using a Netzsch Jupiter STA 449 instrument with a heating rate of 10 K min -1 and a maximum temperature of 1000°C. Electrical conductivity, viscosity and surface tension of the solution The electrical conductivity of the solutions was measured with a Cond51 conductometer (XS Instruments). Viscosity measurement was performed on an Alpha Fungilab rotational viscosimeter.
Surface tension was determined by a Sigma 700 tensiometer equipped with a Wilhelmy probe. Elemental analysis Samples for elemental analysis were mineralized by a heat-induced reaction between excess sodium peroxide, glycerine, and the analyzed material itself, followed by quantitative dissolution in deionized water of known volume. A typical procedure was as follows: a precise amount of a material (e.g., 0.2012 g) was mixed with two droplets of glycerin in a small pressure reactor and covered with sodium peroxide (approx. 3 g). The reactor was tightly sealed and heated with a burner until a slight snap-like sound was induced. After cooldown, the content of the reactor was quantitatively transferred to an analytical flask and filled with deionized water. The prepared solution was analyzed by inductively coupled plasma optical emission spectrometry (ICP-OES). IR spectroscopy Infrared spectra were obtained on a Bruker Tensor 27 FTIR spectrometer with a Bruker Alpha-Platinum ATR system. Powder X-ray diffraction Powder X-ray diffraction (XRD) measurements were performed with a GNR Europe 600 diffractometer with a Co (λ Kα = 1.79030 Å) lamp and using a TTRAX III (Rigaku, Tokyo, Japan) θ-θ diffractometer. This set-up was equipped with a rotating copper anode X-ray tube operating at 50 kV/200 mA. A scintillation detector aligned with the diffracted beam was used after a bent graphite monochromator. The samples were scanned in specular diffraction mode (θ/2θ scans) from 10 to 80 degrees (2θ) with a step size of 0.025 degrees and a scan rate of 0.5 degrees per minute. Phase identification and quantitative analysis were performed using the Jade 2010 software (MDI) and the PDF-4+ (2016) database. Electron microscopy The nanofibrous materials were characterized by scanning electron microscopy (SEM) using Versa 3D (FEI/Thermo Fisher Scientific, Czech Republic) and Zeiss Sigma 500 microscopes. Transmission electron microscopy (TEM) characterizations were performed on an FEI Tecnai G2 instrument at 200 kV equipped with a 4k CCD camera FEI Eagle. The samples for the TEM measurements were dispersed in methanol, and 4 µL of the suspension was dripped on a Quantifoil copper grid and allowed to dry by evaporation at ambient temperature. The TEM analysis and HAADF STEM EDS measurements of WS 2 fibers were performed with a Titan Themis Z at 200 kV, equipped with two Rose-Haider double-hexapole aberration correctors (probe and image), a Super-X large solid angle X-ray detector for EDS and a OneView high-speed CMOS camera for wide-field TEM imaging. The sample for the TEM measurements was dispersed in ethanol, then dripped on a copper grid and dried at ambient conditions. SEM micrographs were analyzed by the ImageJ software for fiber diameter and size distribution. Results and discussion The composition of the electrospun solution was optimized for maximal simplicity of the mixture. Silicotungstic acid was chosen as a precursor that is widely used in industry and satisfies the demands for high water solubility and a high content of tungsten. Polyvinyl alcohol (PVA) was found to be suitable for electrospinning, being an inexpensive, easily accessible, and water-soluble material. It is available in a range of average chain lengths, different degrees of hydrolysis, and polydispersity [29,30]. Mowiol 18-88 was chosen from all variants as the most common polymer of the PVA families, which is also well soluble in comparison with fully hydrolyzed grades [31].
Conductometric characterization of the starting solution provided an elevated value of 6.8 mS cm -1 in comparison with 1 mS cm -1 for the PVA solution. This is due to the dissociation of the added silicotungstic acid. The surface tension (61.7 mN m -1 ) was lower than the tabulated value for water, 72.86 mN m -1 [32], at approximately the same temperature. In general, a lower surface tension means that a lower electrical force is needed to initiate the electrospinning process, because the Taylor cone is in a stable equilibrium between the electrical force field and the surface tension [1]. A decrease in the surface tension also helps to prevent the formation of beaded structures on the fibers [1]. The electrospinning was performed via multi-jet spinning from a free surface enhanced by blades on the electrode. The rotation speed of the electrode was set to ensure that the electrode was wetted by the solution during the entire cyclic movement. The transferred mass was collected on an aluminum foil electrode, where at first a thin layer of a white fabric formed, which turned into a thick 3D web of white material by the end of the electrospinning process. The fibrous mat was peeled off, collected, and analyzed by scanning electron microscopy (SEM) and thermal analysis (TGA/DSC). The SEM measurements provided insight into the morphology of the prepared mats, which consist of submicron fibers with a broad distribution and an average thickness of 460 ± 212 nm (Fig. 2). These results correlate well with literature values for PVA electrospun fibers [33], which lie in the submicron range with typical thicknesses of hundreds of nanometers. It is possible to observe a few flat ribbon-like fibers, which are typical for PVA solutions with a relatively high concentration and viscosity [1]. Thinner fibers are present with a diameter around 200-300 nm (Fig. 3, black). TG/DSC analysis (Fig. 4) was used to find a proper calcination temperature. Heating the sample causes a steady decrease in mass, followed by exothermic burnout of the polymer matrix with an onset temperature of 437°C. After the exothermic removal of most of the organic material, a smaller decrease in mass is observed; however, the mass remains stable above 550°C. Based on the results of the TG/DSC measurement, we selected a calcination temperature of 600°C. The green composite fiber mat was transferred into an alumina crucible and calcined under air by steadily increasing the temperature up to 600°C within 4 h, followed by another 4 h at this temperature. Calcination was concluded by free cooldown to ambient temperature. The white fibers were transformed into yellow-greenish brittle flakes, which were further analyzed by SEM, TEM, XRD, ICP-OES, and IR spectroscopy methods. The measured stoichiometric ratio (Table 2) implies a complete conversion of the silicotungstic acid precursor to the bulk fibers. From the electron microscopy analysis, it is evident that the calcined material has a polycrystalline fibrous structure in the submicron range of fiber thickness. The SEM images of the calcined material (Fig. 5a) show an entangled array of fibers with a cylindrical cross section and a fairly even surface morphology. On the other hand, Fig. 5b shows that in some other areas many fibers exhibit rough surfaces with oxide bulges, indicating secondary precipitation of the oxide upon cooling of the specimen. Individual fibers are composed of crystallites (Fig. 5c). The average diameter was measured with the ImageJ software from 50 individual fibers.
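The diameter statistics quoted here and below were obtained by measuring individual fibers in the SEM images with the ImageJ software. A minimal sketch of that post-processing step is shown below; the diameter values in the list are placeholders for illustration, not the measured data.

```python
# Minimal sketch of the diameter statistics step: given per-fiber diameters
# exported from ImageJ (the values below are placeholders, not measured data),
# compute the mean, the standard deviation and a histogram of the distribution.
import numpy as np

diameters_nm = np.array([210., 290., 340., 380., 420., 460., 510., 300.,
                         270., 350., 330., 310., 450., 390., 280., 360.])

mean_d = diameters_nm.mean()
std_d = diameters_nm.std(ddof=1)   # sample standard deviation
counts, edges = np.histogram(diameters_nm, bins=np.arange(100, 701, 50))

print(f"average diameter: {mean_d:.0f} +/- {std_d:.0f} nm")
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:.0f}-{hi:.0f} nm: {c}")
```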
Analysis of the size histogram (Fig. 3, gray) revealed an average diameter of 335 nm with a relatively broad distribution. In comparison with the green fibers, the average diameter of the (WO 3 ) 12 /SiO 2 fibers, i.e., following calcination, decreased; however, when the standard deviation is taken into account, both samples are comparable even from the mean thickness point of view. The slight shrinking of the fiber thickness can be attributed to the removal of the organic matrix from the green composite during calcination. Comparison of the two histograms shows a similar size distribution; however, the calcined sample also contains very thin fibers with diameters around 100 nm. It is also remarkable that the calcined fibers are curled compared to the green fibers, which are quite straight. Furthermore, many annealed fibers are broken into short sections, indicating a lower fracture toughness of the annealed fibers. Detailed SEM measurements, together with TEM analysis (Fig. 6), describe the (WO 3 ) 12 /SiO 2 fibers as agglomerates of nanoparticles with a crystalline structure, analyzable by FFT, which provides a measurement complementary to the XRD method. The result of this analysis is shown in the Supplementary material (Fig. 1S). The spacing of the crystal planes correlates well with the Miller indices from the XRD diffractogram. A diffractogram with the corresponding Miller indices and the x-axis in Å is shown in the Supplementary material (Fig. 2S). The results of the FFT analysis and XRD are summarized in Table 3. Determination of the fine structure of thick fibers by TEM analysis was challenging (Fig. 6a). However, analysis of thinner fibers (less than 150 nm) permitted observation of the crystal planes (Fig. 6b). The XRD measurement (Fig. 7) provided diffractions of the WO 3 phase according to the COD database card [COD-1528915]. Silica remains amorphous under these synthetic conditions and presents no diffractions. Several planes in the crystal lattice of WO 3 were recognized (Table 3), mainly those with the Miller indices (200), (020), and (002). The prepared material was also analyzed by infrared spectroscopy. The IR spectra are shown in the Supplementary material (Fig. 3S). The bands observed in the infrared spectra are consistent with the presence of W-O-W (400-1000 cm -1 ) and Si-O-Si (1000-1200 cm -1 ) deformation vibrations [34]. A high-temperature sulfidation reaction was carried out on the fibers in forming gas (H 2 /N 2 ) and H 2 S with the aim of converting WO 3 to WS 2 while maintaining the fiber morphology. The reaction converted the green-yellowish fibers into a black powder. The XRD diffractogram (Fig. 8) of the sulfided fibers shows that all the peaks can be indexed to the 2H-WS 2 phase, COD-9012191. This finding suggests that the silica islands in the fiber have not been affected and remain amorphous following this high-temperature (840°C) reaction. The SEM analysis (Fig. 9) of the sulfided samples revealed that the average diameter of the fibers was 196 ± 54 nm. The images indicate that, in analogy to the oxide fibers, two kinds of morphologies are present: fibers with rough surfaces (a) and fibers with even surfaces (b). In the fibers with rough surfaces, the flakes are arranged as nanoflowers [35], which seem to emanate from a single bulge on the oxide fiber surface. In the fibers with even surfaces, the a-b plane of the flakes is presumably parallel to the fiber surface, i.e., their c-axis is perpendicular to the growth axis of the fiber.
High-resolution TEM of a "rough" fiber clearly shows that the nanoflakes stand out of the fiber surface (⟨hk0⟩ direction perpendicular to the fiber surface). From the TEM images (Fig. 10), it can be seen that the fibers are made of small layered flakes arranged randomly with respect to the fiber axis. The measured interlayer spacing is 6.2 Å, which corresponds well to the WS 2 interlayer distance. From the HAADF STEM EDS measurements (Fig. 11) and quantitative measurements of selected areas (Table 4), it can be concluded that the fibers are composed of tungsten and sulfur with a 1:2 atomic ratio (WS 2 ) interspersed by SiO 2 islands. The analysis indicates that the amorphous silica serves as a binder for the WS 2 flakes in the fiber. Such a configuration imparts mechanical robustness to the WS 2 flakes, which are well separated from each other in the fiber. Therefore, the present fiber morphology could serve as a highly reactive catalyst for different reactions or as a sensor for different gases. Conclusion We prepared tungsten disulfide WS 2 fibers by a three-step process. In the first step, we used the needle-less electrospinning technique with an aqueous solution of silicotungstic acid (H 4 SiW 12 O 40 ) and polyvinyl alcohol (PVA) as precursors for the preparation of composite green fibers. The obtained fibers (av. diam. 460 nm) were converted in the second step by calcination in air into tungsten oxide WO 3 fibers with traces of SiO 2 coming from the H 4 SiW 12 O 40 precursor and with a smaller diameter (335 nm). In the final step, we heated the WO 3 fibers under a flow of H 2 /H 2 S/N 2 and converted them to tungsten disulfide WS 2 fibers (av. diam. 196 nm). Based on this experience, we can presume that silicotungstic acid could be a fruitful candidate for water-based, scaled-up electrospinning followed by after-treatment toward WO 3 -based materials and other structures. [Figure caption: high-resolution TEM image of WS 2 fibers; inset, FFT of image (b).] Declarations Conflict of interest The authors declare no conflicting financial interest.
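As a small supplementary illustration of how a lattice spacing relates to the diffraction angles measured with the radiation sources used in this work, Bragg's law can be applied to the roughly 6.2 Å interlayer spacing of 2H-WS 2 quoted above. The Co Kα wavelength is the one given in the experimental section; the Cu Kα wavelength of 1.5406 Å is a standard tabulated value.

```python
# Bragg's law: n*lambda = 2*d*sin(theta). Convert a lattice spacing d to the
# expected diffraction angle 2-theta for the Co and Cu radiation used here.
import math

wavelengths = {"Co Ka": 1.79030, "Cu Ka": 1.5406}  # in angstrom

def two_theta_deg(d_angstrom, wavelength_angstrom, n=1):
    """Return 2-theta in degrees, or None if the reflection is inaccessible."""
    s = n * wavelength_angstrom / (2.0 * d_angstrom)
    if s > 1.0:
        return None
    return 2.0 * math.degrees(math.asin(s))

# Example: the ~6.2 A interlayer spacing of 2H-WS2 (the (002) reflection).
for source, lam in wavelengths.items():
    print(source, round(two_theta_deg(6.2, lam), 2))
```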
5,062.4
2021-03-17T00:00:00.000
[ "Materials Science" ]
Study on a diagonal hammer of three-metal composite casting with a handle-protecting block Hammer crushers are widely used in the cement, ceramic, mining, electricity and other industries. The hammer head is one of the most important parts of the crusher, and its wear performance directly affects the service life of the hammer crusher and its economic cost. Bimetal composite hammer heads often suffer from hammer-handle wear and from difficulty in forming the composite, while hammer heads with cast-in alloy blocks often have the inserted blocks come off. To address these problems, a block-protected, bimetal composite cast straight diagonal hammer was designed that makes full use of the wear resistance of high chromium cast iron and the toughness of low alloy steel. A reasonably designed handle-protecting block structure solves the difficulty of controlling the composite interface in actual production and the problem of hammer-head wear, contributing to energy saving, emission reduction and environmental protection. Introduction Hammer crushers are widely used in the cement, metallurgy, mining, coal, building materials, transportation and other industries; the mined ore is broken into grinding material by the high-speed rotating hammers, so the workload is very heavy. The hammer head is one of the shortest-lived and most heavily consumed parts of the crusher. Hammer heads are basically of three types: single-material hammer heads, hammer heads with cast-in alloy blocks, and double-liquid (bimetal) composite hammer heads. High manganese steel was an early and common hammer material, but when the impact of the crushed material is not strong, the work hardening of high manganese steel cannot develop, so it wears quickly and has a short service life; a single-material hammer head cannot meet the requirements of both high toughness and high wear resistance. Hammer heads with cast-in alloy blocks often lose their inserted blocks, and bimetal composite hammer heads often suffer from hammer-handle wear and composite difficulties. After the wind quenching heat treatment, the hardness of the high chromium cast iron is above 62 HRC with an impact toughness of 3~6 J/cm2; the hardness of the low alloy steel is 180~250 HB with an impact toughness greater than 70 J/cm2; the handle-protecting block has a hardness above 55 HRC [9] with an impact toughness of 6~8 J/cm2. Conclusions (1) The design presented here can completely solve the block come-off problem of hammer heads with cast-in alloy blocks, as well as the handle-wear and composite-difficulty problems that often appear in bimetal composite diagonal hammers. (2) Using liquid-liquid composite casting technology, the composite interface shows a regular serrated state, and a completely metallurgically bonded bimetallic interface bonding area can be obtained; liquid-solid compounding achieves a certain mechanical connection and partial metallurgical bonding. (3) Through the wind quenching heat treatment process, and under equal protection of the handle by the block, the service life of the straight diagonal hammer is 3~5 times longer than that of a single high manganese steel hammer head, and its wear resistance and toughness are obviously improved. 2 Testing materials and methods
2.1 The handle-protecting block (shown in figure 1a) is made of wear-resistant high carbon high chromium steel, the hammer handle (shown in figure 1b) is made of low alloy steel with good toughness, and the hammer end (shown in figure 1c) is made of highly wear-resistant high chromium cast iron. The hammer head studied here has a simple structure; it not only solves the handle-wear phenomenon, but also solves the composite-difficulty problem caused by the external environment and by insufficient worker skill. Without affecting the original function, it greatly extends the service life of the whole part and reduces the maintenance and operating costs of the enterprise. The structure of the straight diagonal hammer is shown in figure 2 [2]. Figure 1 Schematic decomposition of the block-protected straight diagonal hammer: a) handle-protecting block, b) hammer handle, c) hammer end (side). Figure 2 Structure of the block-protected straight diagonal hammer: 1 side (hammer end), 2 hammer handle, 3 handle-protecting block, 4 jack, 5 waist-type hole. 2.2 Determination of the chemical composition of the composite materials In order for the handle-protecting block, the hammer end, and the hammer handle to each achieve the desired wear resistance and toughness, this article uses high carbon high chromium steel with good wear resistance for the handle-protecting block, high chromium cast iron with high wear resistance for the hammer end, and low alloy steel with good toughness for the hammer handle. According to the required effects of the alloying elements and the performance needed in each region, the chemical composition of each part of the block-protected straight diagonal hammer was determined as shown in table 1. 2.3 Heat treatment process This study uses the wind quenching heat treatment process: the hammer is heated to an austenitizing temperature of 1020 °C and then air quenched; it is subsequently tempered at 230 °C and again air cooled. The heat treatment process was studied to observe the resulting performance of the straight diagonal hammer, thus ensuring good comprehensive mechanical properties. The heat treatment process curve is shown in figure 3 [3]. Figure 3 Heat treatment process curve. 2.4 Figure 5 Microstructure characteristics of the three-metal composite interfaces: a) high chromium cast iron/low alloy steel, b) low alloy steel/high carbon high chromium steel. Figure 5(a) shows the microstructure characteristics of the liquid-liquid composite. In the metallographic photos, the composite interface shows a small serrated profile, with melting and mutual interpenetration; there are no visible defects at the interface, the structure is compact, and an ideal metallurgical bonding state is achieved [7]. Figure 5(b) shows the microstructure of the solid-liquid composite. In the metallographic image, the interface is free of microcracks, blowholes, inclusions and other defects, and it forms a nearly straight line similar to the liquid-liquid bimetal composite. Thus, at the interface between the liquid low alloy steel and the solid high carbon high chromium steel, a complete metallurgical bonding state is achieved [8].
Figure 6 Microhardness across the bimetallic interface bonding area. From the analysis of the microhardness: if the interface is well bonded over its area, the microhardness should show a smooth gradient transition. The measured hardness of the low alloy steel is about 350 HV, the hardness at the solid-liquid bonded interface is about 510 HV, and the hardness at the liquid-liquid bonded interface is about 580 HV. Figure 7 a) Drawing of the block-protected straight diagonal hammer; b) photograph of the actual part.
1,430.8
2016-01-01T00:00:00.000
[ "Materials Science" ]
Simulating Spin Waves in Entropy Stabilized Oxides The entropy stabilized oxide Mg$_{0.2}$Co$_{0.2}$Ni$_{0.2}$Cu$_{0.2}$Zn$_{0.2}$O exhibits antiferromagnetic order and magnetic excitations, as revealed by recent neutron scattering experiments. This observation raises the question of the nature of spin wave excitations in such disordered systems. Here, we investigate theoretically the magnetic ground state and the spin-wave excitations using linear spin-wave theory in combination with the supercell approximation to take into account the extreme disorder in this magnetic system. We find that the experimentally observed antiferromagnetic structure can be stabilized by a rhombohedral distortion together with large second nearest neighbor interactions. Our calculations show that the spin-wave spectrum consists of a well-defined low-energy coherent spectrum in the background of an incoherent continuum that extends to higher energies. I. INTRODUCTION Entropy stabilization means that a single phase is stabilized by the mixing entropy gained from randomly distributing a number of elements, typically five or more, over a single crystal lattice. [1] In 2015 a new family was added to the class of high entropy materials with the discovery of the high entropy oxides (HEOs). By mixing equimolar amounts of MgO, CoO, NiO, CuO and ZnO, Rost et al. [2] found that Mg 0.2 Co 0.2 Ni 0.2 Cu 0.2 Zn 0.2 O (hereafter MgO-HEO) could be stabilized in a simple rock-salt structure in which the oxygen atoms occupy one of the face-centered cubic (FCC) sublattices and the cations are randomly distributed over the other FCC sublattice. The entropy stabilized nature of the phase is apparent when considering that CuO and ZnO do not form in the rock-salt structure, which forces Cu and Zn into octahedral coordination. Rost et al. [2] demonstrated that the sample could be switched between a multiphase structure and the aforementioned single phase rock-salt structure by repeated heating and cooling, thereby proving the reversibility of the entropy-driven transition. Furthermore, a combination of X-ray diffraction, X-ray absorption fine structure and scanning transmission electron microscopy with energy dispersive X-ray spectroscopy showed that the MgO-HEO sample was chemically and structurally homogeneous [2]. The HEOs are not only interesting from a basic scientific point of view, but also exhibit promising functional properties, such as high Li-ion storage capacity and cycling stability [3], colossal dielectric constants [4], superionic conductivity [5] and a high ratio of elastic modulus to thermal conductivity [6]. In addition to rock-salt HEOs, HEOs with perovskite [7][8][9], fluorite [10,11] and spinel [12] structures have been discovered, together with high entropy carbides [13,14], borides [15] and chalcogenides [16]. The magnetic properties of MgO-HEO had remained largely unexplored until recently. Neutron scattering experiments have now revealed antiferromagnetic order, and the magnetic excitations are gapped, with a spin-gap on the order of ∼ 7 meV [22]. The gap is gradually buried in quasielastic fluctuations upon heating towards T N . Interestingly, the spin-wave excitations persist above T N , up to room temperature [18]. Unlike in binary oxides such as NiO and CoO, no strong lambda-anomaly has been observed in the specific heat [18,19]. Recent muon spin relaxation measurements [22] indicate that MgO-HEO undergoes a broad continuous AFM phase transition starting at 140 K and being fully ordered at 100 K, explaining why the specific heat anomaly is very weak and broad.
The discovery of magnetic long-range order in MgO-HEO leads to a number of questions. How can the experimentally observed AFM long range order be stabilized in the presence of the randomly distributed moments of Co, Ni and Cu and the 40% of spin vacancies created by the non-magnetic Mg and Zn ions? What is the nature of spin-wave excitations in such an extremely disordered magnetic environment? How do the magnetic properties of MgO-HEO compare with those of Ni 1−x Mg x O [23], Ni 1−x Zn x O [24] and Co 1−x Mg x O [25], which also display antiferromagnetic order in the presence of similarly large spin-vacancy concentrations? In this paper we theoretically investigate the magnetic ground state and spin-wave excitations in the high entropy oxide MgO-HEO. We model the Co 2+ , Ni 2+ and Cu 2+ cations with spins of size 3/2, 1 and 1/2 respectively and the Mg 2+ and Zn 2+ cations as being spin-vacancies. We randomly distribute the cations on a rhombohedrally distorted face-centered cubic lattice and analyze the magnetic properties via linear spin-wave theory in combination with the supercell approximation. We find that large next-nearest neighbor exchange couplings and large rhombohedral distortions are required in order to stabilize the experimentally observed antiferromagnetic ground state with propagation vector q = (1/2, 1/2, 1/2). The spin-wave excitation spectrum of our model for MgO-HEO consists of a coherent component at low energies and an incoherent component at high energies. We also model Co 0.33 Ni 0.33 Cu 0.33 O containing moment-size disorder only and Ni 0.6 Mg 0.4 O containing spin-vacancy disorder only. We find qualitative differences in the spin-wave spectra of these systems compared with that of MgO-HEO. Finally we also investigate the influence of disorder on the spin-wave gap and find that it is proportional to the average moment size per lattice site. II. METHODS In order to qualitatively investigate the ground state and spin-wave excitations in MgO-HEO we use a simplified model. X-ray absorption spectroscopy [18] and extended x-ray absorption fine structure [26] are indicative of the cations in MgO-HEO being divalent. In addition, electron paramagnetic resonance [19] and Density Functional Theory (DFT) [27] have shown that Co 2+ is in the high spin state. Based on these experimental and theoretical results, we treat the Zn 2+ and Mg 2+ cations as spin-vacancies and the Co 2+ , Ni 2+ and Cu 2+ cations as spins with moment sizes 3/2, 1 and 1/2 respectively, ignoring orbital contributions here. To describe the interactions between the magnetic moments, we take into account the nearest and next-nearest neighbor exchange couplings. In the rock-salt structure, the nearest neighbor cations are connected via a cation-oxygen-cation group that has a bond angle of 90 degrees. For the second neighboring cations, the mediating oxygen ions are located in between the two cations, such that the cation-oxygen-cation bond angle is 180 degrees. Therefore, according to the Goodenough-Kanamori rules for superexchange [28,29], the nearest neighboring spin interactions are ferromagnetic and the next-nearest neighboring interactions are stronger and antiferromagnetic. This is also what has been concluded from DFT calculations for CoO [30], NiO [31] and MgO-HEO [32] and from some inelastic neutron scattering studies for NiO [33] and CoO [34]. Finally, we incorporate a rhombohedral distortion into our model.
On an undistorted FCC lattice the antiferromagnetic order of the second kind will be a mixture of four degenerate ordering wave vectors: q = (1/2, 1/2, 1/2), q = (−1/2, 1/2, 1/2), q = (1/2, −1/2, 1/2) and q = (1/2, 1/2, −1/2). A rhombohedral distortion with the trigonal axis along the [111] direction will naturally stabilize the q = (1/2, 1/2, 1/2) configuration over the other three ordering wave vectors [35]. Putting it all together, we consider the following model, in which r labels the sites on the disordered cation FCC lattice, ⟨r, r′⟩ 1p the first nearest neighbors within the ferromagnetic (111) planes, ⟨r, r′⟩ 1a the first nearest neighbors between the ferromagnetic (111) planes and ⟨r, r′⟩ 2 the second nearest neighbors. Overall our simplified model depends on four parameters, J 1 , J 1 ′ = J 1 (1 − ∆), J 2 and K, where ∆ quantifies the strength of the rhombohedral distortion and K the magnetic anisotropy. This model captures the disorder due to the random distributions of the Co 2+ , Ni 2+ and Cu 2+ moment sizes and the Mg 2+ and Zn 2+ spin-vacancies. For simplicity, disorder in the magnetic exchange couplings [32] has been ignored. To study the spin wave excitation spectra of our model for MgO-HEO, we use a combination of linear spin-wave theory [36] and the supercell approximation [37]. The linear spin-wave formalism consists of two steps. First the classical ground state configuration is determined. To this end we use the conjugate gradient method as implemented in the YaSpinWave computer program [38]. In the second step, the classical ground state is used to perform a Holstein-Primakoff transformation so that our interacting quantum-spin model (1) can be approximated as a quadratic boson Hamiltonian that can be solved with exact diagonalization. To treat the strong magnetic disorder in our model for MgO-HEO, we use the supercell approximation, in which the disordered system is approximated by a large number of supercells, within which the cations are disordered but beyond which they are artificially periodic. For other studies using linear spin-wave theory in combination with the supercell approximation, we refer to Refs. [39,40], and for alternative methods to simulate magnetic excitations in disordered systems we refer to Refs. [41,42]. III. GROUND STATE Before studying the spin-wave excitations in MgO-HEO, we first focus on the stability of the q = (1/2, 1/2, 1/2) AFM ground state depicted in Fig. 1. In the ordered FCC spin lattice this AFM ground state is stabilized in the parameter regime |J 2 /J 1 | > 1 − ∆ [35]. Here J 1 > 0 and J 2 < 0 are the FM nearest and the AFM next nearest neighbor exchange couplings, respectively, and ∆ parametrizes the strength of the rhombohedral distortion, see Fig. 1. From this inequality we can see that the q = (1/2, 1/2, 1/2) AFM ground state becomes more stable upon increasing the magnitude of the second nearest exchange coupling and the rhombohedral distortion. Figure 1 shows that intuitively this makes sense. By inducing a rhombohedral distortion, the magnitude of the FM exchanges within the (111) planes (J 1 ) will increase relative to those between the (111) planes (J 1 ′ = J 1 (1 − ∆)). The AFM J 2 exchange only couples moments between the (111) planes and therefore helps to further stabilize the q = (1/2, 1/2, 1/2) AFM ground state. The same mechanisms are expected to be at work in the disordered case. In Fig. 2 we consider a 2 × 2 × 2 orthogonal supercell relative to the conventional FCC unit cell.
This supercell contains 6 S = 3/2, 6 S = 1 and 6 S = 1/2 moments corresponding to the randomly positioned Co, Ni and Cu cations respectively. That leaves 14 randomly distributed spin vacancies corresponding to the Mg and Zn cation sites. Figure 2(a) illustrates that for J 2 = J 1 and ∆ = 0, the ground state is some non-collinear configuration. Upon increasing the second nearest exchange to J 2 = −15J 1 and the rhombohedral distortion to ∆ = 0.5, the q = (1/2, 1/2, 1/2) AFM configuration is stabilized despite the disorder (cf. Fig. 2b). We note that the positions of the moments within the supercells in this work have been randomized under the constraint that in the q = (1/2, 1/2, 1/2) AFM configuration the total moment is zero. To accomplish this we divide the FCC lattice into two sets of parallel (111) planes. The planes for which in the q = (1/2, 1/2, 1/2) AFM state the moments are up (e.g. the purple plane in Fig. 2b) contain the same number of Co, Ni and Cu cations as the planes for which in the q = (1/2, 1/2, 1/2) AFM state the moments are down (e.g. the blue plane in Fig. 2b). This implies that the number of Co, Ni and Cu cations per supercell needs to be even. In order to satisfy the equiatomic limit as closely as possible under this constraint, we choose the 2 × 2 × 2 FCC supercells to contain 6 magnetic cations of each type and 14 non-magnetic cations. We now quantitatively investigate the stability of the q = (1/2, 1/2, 1/2) AFM ground state by varying the second neighbor exchange coupling and the rhombohedral distortion parameter over a range of values. For each of the 4 × 4 parameter combinations listed in Table I, we simulate 100 orthogonal disordered Co 6 Ni 6 Cu 6 Mg 6 Zn 8 supercells, again under the constraint that in the q = (1/2, 1/2, 1/2) AFM configuration the total moment is zero. For each supercell we minimize the classical energy via the conjugate gradient method, starting from five different random spin configurations. The configuration with the lowest energy is assigned to be the ground state. The ground state configuration is typically found in three out of five energy minimizations. In total 4 × 4 × 100 × 5 = 8000 energy minimizations have been performed to produce Table I. To determine whether the ground state energy corresponds to the q = (1/2, 1/2, 1/2) AFM configuration, we considered the average of the absolute difference for each of the spin vectors in these configurations. Appendix A contains more details about how we differentiate spin-configurations. As expected, the percentage of cases for which the q = (1/2, 1/2, 1/2) AFM ground state is realized increases as a function of the next nearest exchange coupling J 2 and the rhombohedral distortion ∆, see Table I. Yet the strength of the parameters required to stabilize the q = (1/2, 1/2, 1/2) AFM ground state is considerably higher than for the ordered case, due to the disorder. For example, for the ordered case with ∆ = 0.2, J 2 = −0.8J 1 is sufficient. Our model for the disordered MgO-HEO with J 2 = −15J 1 realizes the q = (1/2, 1/2, 1/2) AFM ground state in only 40% of the configurations. We have also repeated the same energy minimization in the presence of magnetic anisotropy K = 0.05J 1 , see Table III. The number of configurations for which the q = (1/2, 1/2, 1/2) AFM ground state is realized remains essentially the same, differing by at most a few percentage points from the isotropic case shown in Table I.
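For each supercell the classical energy is minimized as described above. A minimal sketch of how such a classical energy could be evaluated is given below; the explicit form written in the comments is reconstructed from the definitions of J 1 , J 1 ′, J 2 and K in the text and should be treated as an assumption, and the neighbor-list construction as well as the conjugate-gradient search (performed in this work with the YaSpinWave program) are not shown.

```python
# Sketch of the classical energy for the spin model assumed above:
#   E = -J1 * sum_(in-plane 1st nn) S_i.S_j
#       -J1*(1-Delta) * sum_(out-of-plane 1st nn) S_i.S_j
#       -J2 * sum_(2nd nn) S_i.S_j
#       -K * sum_i (S_i^z)^2
# Neighbor lists (pairs of site indices) are assumed to be precomputed for the
# rhombohedrally distorted FCC supercell; building them is not shown here.
import numpy as np

def classical_energy(spins, bonds_1p, bonds_1a, bonds_2,
                     J1=1.0, delta=0.5, J2=-15.0, K=0.05):
    """spins: (N, 3) array of spin vectors (length = moment size, zero for vacancies)."""
    def bond_sum(pairs):
        if len(pairs) == 0:
            return 0.0
        i, j = np.asarray(pairs).T
        return np.einsum("ij,ij->", spins[i], spins[j])   # sum of dot products

    energy = -J1 * bond_sum(bonds_1p)                      # FM, within (111) planes
    energy += -J1 * (1.0 - delta) * bond_sum(bonds_1a)     # FM, between (111) planes
    energy += -J2 * bond_sum(bonds_2)                      # AFM (J2 < 0), 2nd neighbors
    energy += -K * np.sum(spins[:, 2] ** 2)                # easy-axis anisotropy along z
    return energy
```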
Having understood the stability regime of the classical q = (1/2, 1/2, 1/2) AFM ground state for our MgO-HEO model, we move on to study the corresponding spin-wave excitation spectrum. To that end we will focus on the parameters J 2 = −15J 1 and ∆ = 0.5, for which the q = (1/2, 1/2, 1/2) AFM ground state was found to be most stable within our set of parameters. Before analyzing the influence of disorder in our MgO-HEO model, it is instructive to consider the spin-wave spectra of the ordered CoO, NiO and CuO FCC spin models shown in Fig. 3. We notice two things. First, the energy range of the spin-wave spectra is proportional to the moment magnitude, S = 3/2, 1 and 1/2, of the Co, Ni and Cu cations, respectively. Second, we convoluted the spin-wave spectra with a Gaussian of width 0.5J 1 for the purpose of visualization; other than that, the spin-wave bands of these ordered models are sharp and well defined. In Fig. 4 we present the spin-wave spectra for our MgO-HEO model. These spectra are obtained from averaging over 100 configurations, each containing 250 FCC lattice sites on average, over which the Co, Ni and Cu moments and the Mg and Zn spin-vacancies are randomly distributed. We again use the constraint described in the previous section that in the q = (1/2, 1/2, 1/2) AFM state the total moment vanishes. To better average the disorder configurations, we not only randomly distribute the cations within the supercells, but also randomly change the shapes of the supercells themselves, as App. B explains. Comparing the spin-wave spectrum of the strongly disordered MgO-HEO model in Fig. 4 to those of our models for the ordered CoO, NiO and CuO systems in Fig. 3 reveals notable differences. First, the spin-wave spectrum of the MgO-HEO model displays large broadenings in energy and momentum space corresponding to the short lifetimes and short mean free paths of the spin-waves due to the presence of strong disorder. Another striking difference is that for the MgO-HEO model the spin-wave spectrum can be characterized by two regions. In the low energy regime, roughly within [0, 10]J 1 , the spin-wave spectrum exhibits a relatively sharp dispersion. At higher energies, roughly within [10, 100]J 1 , the spin-wave spectrum broadens to such an extent that there is barely any dispersion left. This featureless incoherent part of the spin-wave spectrum should experimentally be difficult to distinguish from the background. Therefore, most likely the observed spin-wave dispersions in inelastic neutron scattering correspond to the coherent part of the spin-wave spectrum in our theoretical calculations. To further illustrate the coherent and incoherent portions of the spin-wave spectrum for the MgO-HEO model, Fig. 9 in App. D presents energy distribution curves for eight fixed momenta near the q = (1/2, 1/2, 1/2) AFM ordering wave vector. Note that for the spin-wave spectra of our disordered models (see Figs. 4-6 and 9) a logarithmic scale is used for the spin-wave intensity. Next we dissect the consequences of the two different types of disorder in our MgO-HEO model: the moment size disorder induced by the Co, Ni and Cu cations and the spin-vacancy disorder induced by the Mg and Zn cations. In Fig. 5 we focus on the moment size disorder by presenting the spin-wave spectrum of the Co 0.33 Ni 0.33 Cu 0.33 O model, in which there is only moment size disorder but no spin-vacancy disorder. We note two differences between Fig. 5 and Fig. 4. First, the low energy part of the spectrum of the Co 0.33 Ni 0.33 Cu 0.33 O model is much sharper than for our MgO-HEO model.
This illustrates that the scattering of low energy spin-waves against moment size disorder is strongly suppressed as a function of energy. The situation is analogous to the case of low energy acoustic phonons scattering against mass disorder, for which in 3D the effect of the mass perturbation is proportional to the quadratic power of the energy; see Ref. [43] and references therein. A second difference is that the magnon propagation velocity, that is, the slope of the low-energy spin-wave band close to the q = (1/2, 1/2, 1/2) AFM ordering wave vector, is significantly higher. Similarly, the maximum energy of the spin-wave intensity is also higher. The reason for the reduction of the magnon propagation velocity and the maximum energy of the spin-wave intensity in the MgO-HEO is the presence of the spin vacancies, which make it energetically less costly for the remaining spins to rotate away from the AFM ground state. In Fig. 6 the spin-wave excitation spectrum for the Ni 0.6 Mg 0.4 O model, which contains spin-vacancy disorder only, is presented. Finally, we investigate the influence of magnetic disorder on the spin-wave gap. Table II lists the spin-gaps induced by an easy-axis anisotropy K = 0.05J 1 for various model systems. For the CoO, NiO and CuO models without disorder, we see that the spin-wave gap is proportional to the size of the cation moments, as expected. To extract the spin-gap for our disordered models, we fit the spin-wave excitation spectrum at the q = (1/2, 1/2, 1/2) AFM ordering wave-vector with a Gaussian. The spin-wave gap that we obtain for Co 0.33 Ni 0.33 Cu 0.33 O is nearly equal to that for our NiO model. This suggests that the spin-wave gap is proportional to the average moment size: (1/3)(3/2 + 1 + 1/2) = 1. This is further confirmed by the fact that the spin-wave gaps of the MgO-HEO and the Ni 0.6 Mg 0.4 O models are close in energy. For these two models we also notice that the spin-gap is roughly proportional to (1 − x), with x the vacancy concentration. For example, the spin-wave gap of NiO multiplied by the moment concentration is 0.6 · 4.17 = 2.51, i.e. approximately equal to the spin-wave gap of Ni 0.6 Mg 0.4 O of 2.57, and of MgO-HEO of 2.43. Overall, we find that the spin-gap in our model is roughly proportional to the average moment size if we count the spin-vacancies as having zero magnetic moment. V. DISCUSSION In our study we have qualitatively modeled the magnetic ground state and spin-wave excitations in MgO-HEO with a rhombohedrally distorted FCC spin model that depends on four global parameters and fully takes into account the moment size and spin-vacancy disorder. One aspect not modeled here and to be considered in future studies is the influence of disorder in the exchange couplings on the spin-wave excitations. Recently, the exchange couplings in MgO-HEO have been computed from first principles [32]. From fitting the total energy of 3 magnetic configurations in 6 ordered cation distributions, the ratio of the second to first nearest neighbor exchange couplings was found to vary from J 2 = −11.4J 1 for Ni-Ni pairs to J 2 = −2.6J 1 for Cu-Cu pairs, with an average ratio of J 2 = −6.6J 1 . This is reasonably consistent with our conclusion that J 2 = −5J 1 is needed to stabilize a significant portion of the disordered configurations in the q = (1/2, 1/2, 1/2) AFM ground state. In this same study [32] an average fitting of the exchange coupling was performed via calculations in a large disordered supercell and led to the conclusion that J 2 = −15.1J 1 , very close to the value used in our study of the spin-wave excitations.
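The spin-gap values discussed above were extracted by fitting the computed spectrum at the q = (1/2, 1/2, 1/2) ordering wave vector with a Gaussian. The sketch below illustrates only that fitting step; the "spectrum" used here is synthetic stand-in data with an assumed gap, not output of the actual spin-wave calculation.

```python
# Sketch of the spin-gap extraction: fit the energy-distribution curve at the
# q = (1/2, 1/2, 1/2) ordering wave vector with a Gaussian and take its center
# as the gap. The spectrum below is synthetic stand-in data, not a result.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(e, amplitude, center, width):
    return amplitude * np.exp(-0.5 * ((e - center) / width) ** 2)

energies = np.linspace(0.0, 10.0, 200)              # in units of J1
true_gap = 2.4                                      # assumed value for illustration
spectrum = gaussian(energies, 1.0, true_gap, 0.5)
spectrum += 0.02 * np.random.default_rng(1).normal(size=energies.size)

popt, _ = curve_fit(gaussian, energies, spectrum, p0=(1.0, 3.0, 1.0))
print(f"fitted spin gap: {popt[1]:.2f} J1")
```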
Another aspect that would be important to consider is the influence of local distortions on the exchange couplings. Local Jahn-Teller distortions of the oxygen anions surrounding the Cu 2+ cations have been observed in MgO-HEO via extended X-ray absorption fine structure spectroscopy [26], electron paramagnetic resonance and X-ray diffraction studies [44]. From DFT [45] it is concluded that these Jahn-Teller distortions are randomly oriented. Such distortions could induce a large spread in the exchange couplings even for first or second nearest neighbor couplings of a fixed inter-species pair. For example, for the high entropy alloy CrFeCoNi, DFT calculations show that the nearest neighbor Fe-Fe interactions can fluctuate roughly between +10 and −10 meV depending on the local chemical environment [46]. A realistic spin model can be benchmarked against inelastic neutron scattering. Features that can be used to constrain the model are the experimentally observed spin-wave gap of 7 meV [22] and the magnon propagation velocity [18]. In addition, our current study has found that the spin-wave excitations in MgO-HEO consist of a coherent part and a featureless incoherent part. Inelastic neutron scattering experiments on MgO-HEO are most likely only able to resolve the coherent portion of the spin-wave spectrum. The maximum of the coherent spectrum (in our current simulations roughly equal to 10J 1 ) is an additional quantitative aspect of the spin-wave spectrum that can be benchmarked against inelastic neutron scattering. Another interesting aspect to consider in future investigations is the magnetic frustration induced by the competition between the first and second nearest neighbor exchange couplings. In this study we have worked under the assumption that below the Néel temperature MgO-HEO orders in the q = (1/2, 1/2, 1/2) AFM ground state. This is what has been reported based on neutron diffraction studies on MgO-HEO, and it is also the ground state that has been experimentally observed for the end members NiO and CoO [20,21]. However, it is possible that no global rhombohedral distortion takes place. Indeed, as of yet, no global distortion has been observed experimentally, although this could also be due to limited resolution. In the absence of a rhombohedral distortion, the AFM ground state of MgO-HEO could be a mixture of the four ordering wave vectors q = (1/2, 1/2, 1/2), q = (−1/2, 1/2, 1/2), q = (1/2, −1/2, 1/2) and q = (1/2, 1/2, −1/2), as is the case for the ordered FCC model with weak nearest FM and strong next nearest AFM couplings [35]. Note that such a ground state is not a mixture of single q domains; even within the domains the four q vectors are mixed. The AFM ground state has been resolved from powder diffraction, for which it is not possible to distinguish between the single and the mixed q state. Interestingly, in the absence of a rhombohedral distortion, the first nearest neighbor interaction is frustrated. As can be seen in Fig. 1, half of the nearest FM exchange couplings, labeled J 1 ′, are frustrated in the q = (1/2, 1/2, 1/2) configuration. A moderate frustration, expressed as the ratio of the Curie-Weiss temperature over the Néel temperature, has in fact been reported for MgO-HEO [19].
Frustration of the nearest neighbor interaction could possibly also explain some of the other curious magnetic properties of MgO-HEO, such as the broad phase transition ranging from 140 to 100 K [22], the persistence of spin-wave excitations up to room temperature [18], and the bifurcation of the field-cooled and zero-field-cooled magnetic susceptibility [18,19]. VI. CONCLUSION In this study we have investigated the spin-wave excitations in the high entropy oxide Mg 0.2 Co 0.2 Ni 0.2 Cu 0.2 Zn 0.2 O (MgO-HEO). To model the magnetic interactions in MgO-HEO, we utilize an FCC spin model with FM nearest and AFM next-nearest neighbor interactions in the presence of a rhombohedral distortion and magnetic anisotropy. The Co, Ni and Cu cations are modeled as spins with moment sizes of 3/2, 1 and 1/2 respectively, whereas the Mg and Zn cations are taken to be spin-vacancies. We employ linear spin-wave theory in combination with the supercell approximation to treat the disorder in order to theoretically investigate the ground state and the spin-wave excitations. Stabilizing the experimentally reported q = (1/2, 1/2, 1/2) AFM ground state in our disordered model for MgO-HEO requires significantly larger second nearest neighbor interactions and rhombohedral distortions compared to the case without disorder. The calculated spin-wave spectrum of MgO-HEO consists of a coherent dispersive part at low energies, whereas at higher energies the spectrum is incoherent due to the disorder-induced broadening. To differentiate the influence of moment size disorder and spin-vacancy disorder we compute the spin-wave spectra of Co 0.33 Ni 0.33 Cu 0.33 O and Ni 0.6 Mg 0.4 O and find these to be qualitatively different from each other and from that of MgO-HEO. We also investigated the spin-wave gap for various cation compositions and find that for the considered model it is proportional to the average moment size per lattice site. The United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan). Appendix A: Differentiating spin configurations To quantify how close the ground state of a disordered supercell is to the q = (1/2, 1/2, 1/2) AFM configuration, one could compute the q = (1/2, 1/2, 1/2) AFM energy relative to the ground state energy. However, for comparing among different parameter sets the energy isn't a good measure, because for a given configuration the energy varies as a function of the parameters. Therefore, instead, we use the spin-configuration of the ground state to determine its closeness to the q = (1/2, 1/2, 1/2) AFM configuration. To this end we define the difference between two spin configurations as the absolute difference of the spin-vectors averaged over the spins in the configurations. While doing this, we have to be mindful of the rotational invariance when computing this difference. For example, in the isotropic case the difference between two spin-configurations should not depend on an overall rotation of either of the spin-configurations. To this end we define the difference between spin-configurations A and B as follows.
We first perform an overall rotation of configuration B such that its first spin aligns as much as possible with the first spin of configuration A and then compute the average absolute difference of the spin-vectors. To avoid biasing our measure of difference toward the first spin, we repeat the procedure, aligning each of the other spins in turn as much as possible. Following the above, the difference between spin-configurations B and A is given by

∆S = (1/N²) Σ_{r'=1}^{N} Σ_{r=1}^{N} | S^A_r − U^A_{r'} S^B_r |,

where N is the total number of spins, S^A_r is the spin-vector of spin r in configuration A, and U^A_{r'} S^B_r is the spin-vector of spin r in configuration B after the overall rotation U^A_{r'} that aligns spin r' of configuration B as much as possible with the corresponding spin r' of configuration A. For isotropic interactions, the rotation matrix is an element of the 3D rotation group, U^A_{r'} ∈ SO(3). In that case, spin r' of configurations A and B can be perfectly aligned. In the presence of an easy-axis anisotropy K > 0, the rotation matrix is an element of the product of the group of 2D rotations around the easy axis and the group of reflections in the plane perpendicular to the easy axis: U^A_{r'} ∈ SO(2) × Z_2. For example, suppose we have an easy axis along the z-direction and two spins S^A = (cos φ_A sin θ_A, sin φ_A sin θ_A, cos θ_A) and S^B = (cos φ_B sin θ_B, sin φ_B sin θ_B, cos θ_B). The allowed transformations can only rotate the azimuthal angle φ_B onto φ_A and flip the sign of the z-component, so the two spins can be perfectly aligned only if θ_B equals θ_A or π − θ_A.

Having defined a measure to differentiate two spin-configurations, we have to choose a cut-off below which we consider two spin-configurations to be equal. In this work, we consider a spin-configuration A to be in the q = (1/2, 1/2, 1/2) AFM ground state if ∆S < 10⁻⁴. For example, the configuration in Fig. 2(b) (Fig. 2(a)) has ∆S equal to 0.00002 (2.12636) and is therefore considered to be (not to be) in the q = (1/2, 1/2, 1/2) AFM ground state.

To better average over the disorder configurations, we not only randomly distribute the cations within the supercells but also randomly vary the shapes of the supercells themselves [37]. To construct the non-orthogonal supercells, we first consider the primitive cell of the antiferromagnetic order of the second kind with ordering wave vector q = (1/2, 1/2, 1/2), illustrated in Fig. 7. We find the q = (1/2, 1/2, 1/2) AFM order to be more stable in these non-orthogonal supercells compared to the orthogonal ones used to determine the ground state configuration in section III. We select the sizes of these supercells to be between 115 and 135 q = (1/2, 1/2, 1/2) AFM primitive cells, corresponding to 230 to 270 cation sites. We allow the unit cell angles to vary between 80 and 100 degrees. In Fig. 8 we illustrate one of the non-orthogonal supercells used for computing the disorder-configuration averaged spin-wave spectra. It is spanned by A_1 = −a_1 + 5a_2 − 5a_3, A_2 = −4a_1 + 4a_2 + 2a_3 and A_3 = 5a_1 + a_2 − 3a_3. It contains 248 cations, and its unit cell angles are α = 83.1354°, β = 90°, and γ = 85.904°.
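As a concrete illustration of the ∆S measure defined above, the sketch below implements the isotropic (SO(3)) case with NumPy: for each choice of reference spin, configuration B is rotated so that this spin coincides with the corresponding spin of configuration A, and the absolute spin-vector differences are then averaged. This is a schematic reimplementation under that assumption, not the authors' code, and it omits the SO(2) × Z_2 restriction used in the anisotropic case.

```python
import numpy as np

def rotation_aligning(b, a):
    """Minimal rotation matrix mapping unit vector b onto unit vector a (Rodrigues)."""
    v, c = np.cross(b, a), np.dot(b, a)
    if np.allclose(v, 0):
        if c > 0:
            return np.eye(3)
        # antiparallel: rotate by 180 degrees about any axis perpendicular to b
        axis = np.cross(b, np.eye(3)[np.argmin(np.abs(b))])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

def delta_S(A, B):
    """Average |S^A_r - U_{r'} S^B_r|, averaged over the spin r' used for alignment."""
    total = 0.0
    for rp in range(len(A)):
        U = rotation_aligning(B[rp], A[rp])
        total += np.mean(np.linalg.norm(A - B @ U.T, axis=1))
    return total / len(A)

# Quick check: a collinear Neel-like pattern and a globally rotated copy of it
# are recognised as the same configuration (delta_S ~ 0).
signs = np.array([1, -1] * 32)
A = np.outer(signs, [0.0, 0.0, 1.0])
R = rotation_aligning(np.array([0., 0., 1.]), np.array([1., 1., 1.]) / np.sqrt(3))
print(delta_S(A, A @ R.T))
```

For collinear configurations such as the q = (1/2, 1/2, 1/2) Néel state, aligning a single spin fixes the whole configuration, which is why a globally rotated copy is recognized as identical here.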
7,169.4
2020-11-02T00:00:00.000
[ "Physics", "Materials Science" ]
“Impact of financial ratios on technology and telecommunication stock returns: evidence from an emerging market” This study focuses on the relationship between financial ratios and the technology and telecommunication stock returns listed on the Istanbul Stock Exchange. Since technology and telecommunication sector has become an important part of the Turkish economy and is attractive for investors and shareholders, the results play a critical role for all stakeholders. This academic work aims to determine, through the application of panel data analysis, using both the Parks-Kmenta estimator and the Two-way Mixed Effects Model, whether the Price-to-Sales, Earnings per Share (EPS), Debt-to-Equity, and EBITDA Margin financial ratios affect the returns of technology and telecommunication stock returns listed on the Istanbul Stock Exchange. According to empirical findings, Earnings per Share (EPS), EBITDA Margin, and Price-to-Sales ratios have statistically significant effects on technology and telecommunication companies’ stock returns. Higher EPS and EBITDA Margin ratios generate higher returns for the next quarters, and lower Price-to-Sales ratios lead to higher returns for the following periods. Furthermore, the results obtained using the Two-way Mixed Effects Model show that the Debt-to-Equity ratio is negatively related to stock returns. INTRODUCTION Technology and telecommunication sector has become an important part of the Turkish economy and has grown up rapidly over recent years.Technology and telecommunication sector demonstrated a 7.7% annual growth on average, while the annual average growth rate of GDP in Turkey was 6.5% between 2012 and 2018.In 2018, the total sector size reached up to TL 131.7 billion (around USD 28 billion), indicating a growth of 15% on the TL basis.It seems that technology and telecommunication industries are expected to grow for the following years because of achieving higher growth rates and increasing market size. Companies operating in technology and telecommunication sector in Turkey raise funds with equity and debt financing.The companies offer shares to the public and sell some portion of their shares to international private equity funds to provide equity financing.Due to high capital and investment expenditures in the sector, the companies borrow in local and foreign currency to provide debt financing. Because of the reasons mentioned above, the stock market performance of technology and telecommunication companies plays an important role for investors, creditors, and shareholders. This study aims to identify the effects of financial ratios on technology and telecommunication stock returns, which are critical for investors, creditors, and shareholders. The factors that influence a stock return or price are a controversial issue in the financial literature.Particularly after the 1980s, various researchers studied financial ratios as one of the major determinants of stock price or stock return.However, few studies were conducted on technology and telecommunication stocks in the literature, so this study is also expected to fill this gap. LITERATURE REVIEW The literature contains many studies investigating the relationship between stock returns and different financial ratios.However, there are few studies about the companies, which are operating in the technology and telecommunication sector. 
Over sixty years ago, Kendall and Hill (1953) analyzed whether changes in the price of a stock could be estimated according to past returns, and ultimately showed that stock prices have a random walk over time.Subsequent studies used other predictive determinants -the book-to-market ratio, the earnings-price ratio, the liquidity ratios, interest rates, and dividend yields -as the regressors of empirical tests (e.g., Fama & French, 1992;Campbell & Yogo, 2006;Ferrer & Tang, 2016).Campbell and Shiller (1998) showed that price-earnings multiple and dividend price ratios had a significant effect on predicting stock returns in the long run.They updated this study in 2001 and confirmed that the ratios were instructive in predicting how stock prices change in the future.Ferrer and Tang (2016) indicated that some financial ratios, including price-earnings multiple, asset turnover, and dividend payout ratio, have a significant impact on the stock prices of companies traded on the Philippines Stock Exchange. Aras and Yilmaz (2008) investigated the predictability of stock returns in twelve emerging stock markets by analyzing several financial ratios, including dividend yield, market-to-book ratio, and price-to-earnings ratio, from 1997 to 2003.According to the result of that study, in all countries except South Africa, a relationship between the market index returns and the market-to-book ratio at a 1% level of statistical significance is observed.The dividend yield also plays a prominent role in forecasting the stock returns according to the findings in the study.On the other hand, the study indicated P/E was only statistically significant for Poland, South Africa, Taiwan, and Turkey.Lewellen (2004) examined whether book-to-market ratio, dividend yield, and earnings-to-price ratio could be handled as the estimators of the stock returns on the NYSE during the period 1946-2000.The research demonstrates significant support for the claim that B/M and E/P can be used as estimators of the stock returns on the NYSE over the period between 1963 and 1994.Fama and French (1992) pointed out a significant interplay among book-to-market ratios, firm size, and returns of securities belonging to non-financial firms.Moreover, they investigated whether the leverage ratio has a considerable effect on stock returns or not.They defined leverage in two ways.The first one is the ratio of book assets to market equity (market leverage), and the second one is the ratio of book assets to book equity (book leverage).Using the first definition, they found a positive relationship between leverage ratio and stock returns, although they found a negative effect on the same variables when using the second definition.The main conclusion is that for the period between 1963 and 1990, stock returns' cross-sectional variation was related to earnings-to-price ratio, size, and BV/EV (book/market equity). In another study related to stock portfolios' average returns, Fama and French (2007) concluded that dividends contribute more to average stock returns, with a low value of price-to-book ratios versus stocks with higher PBs, during the period 1964-2006. 
Novy-Marx (2013) and Titman and Wei (2004) pointed out the incompleteness of Fama and French's (1992) three-factor model for expected Investment Management and Financial Innovations, Volume 17, Issue 2, 2020 http://dx.doi.org/10.21511/imfi.17(2).2020.07returns since three factors defined in their study do not capture most of the variation in average returns related to profitability and investment.Fama and French (2015) subsequently developed a five-factor asset pricing model by adding profitability and investment factors to the three-factor model.Their five-factor model explains most of the cross-section variance (between 71% and 94%) of expected returns for size, B/M, operating profit, and investment portfolios. Petcharabul and Romprasert (2014) studied technology sector stocks in Thailand for 15 years.They used several financial ratios and, as a result, ROE and PE were merely interrelated to stock returns at a 95% significance level.Campbell and Yogo (2006) used an efficient test after trying a conventional t-test to find evidence for the predictability of stock returns using financial ratios.Their findings represented that the dividend-price ratio predicts stock returns at annual frequencies.Moreover, their test revealed that the earnings-to-price ratio predicts the returns at both monthly and annual frequencies.Saji and Harikumar (2015) investigated 32 firms from the Information Technology (IT) sector, traded in the Indian Stock Exchange over the period 2000-2010.They used some financial ratios to find out if they are related to stock returns.According to the findings, earnings growth, E/P ratio, and stock returns are positively correlated.They pointed out that having low P/E ratio or high E/P ratio would provide better stock returns in the long term in India. Since few studies were conducted on technology and telecommunication stocks in the literature, one of the main purposes of this paper is to examine the interplay between financial ratios and technology and telecommunication sector companies' stock returns traded on the Istanbul Stock Exchange, which is an emerging market. DATA AND METHOD Eleven firms are operating in the technology and telecommunication industry listed on the Istanbul Stock Exchange during the time window of this study.The quarterly data for this study are obtained from Bloomberg Terminal for the period between December 31, 2008 and September 30, 2016.Thirty-two (32) quarters and 11 firms generate 352 observations in balanced panel data analysis.The selection process for the firms is based on the following criteria.Firstly, the firms must have been listed on the ISE before December 31, 2008.Secondly, the stocks of the firms must not have been suspended or off the list during the research period.Eleven (11) stocks out of 18 technology and telecommunication firms on the ISE fulfill the above requirements. 
Four market-based financial ratios, Price-to-Sales, Debt-to-Equity, EBITDA Margin, and Earnings per Share (EPS), have been selected to measure the effects on stock returns.The dependent variable, stock return is calculated as "(Price of stock t -Price of stock t-1 /Price of stock t-1 )", where t refers to quarter.The data, which were gathered from Bloomberg Terminal, were adjusted for stock splits and dividends.Price-to-Sales ratio, which is computed as price per share divided by sales per share, is generally used as a measure of market value multiple, particularly for technology companies.Though similar to price-earnings ratio, analysts look at Price-to-Sales ratio for companies that have negative earnings because price-earnings ratios are not meaningful in such cases.Another financial ratio selected for the study, Earnings per Share (EPS), is calculated as the difference between net income and preference dividends divided by the weighted average of common shares outstanding.Debt-to-Equity ratio, which shows the financial leverage of a company, is measured by the company's total debt over equity.Lastly, EBITDA Margin, which refers to a company's operating profitability, is measured as EBITDA/Sales.To describe and analyze the data, Stata 11.0 statistical software is used.Table 1 summarizes the descriptive statistics of the selected variables. Before employing the regression model to analyze the predictive power of independent variables on stock returns, several tests were conducted to determine the right model and to test whether the suitable variables were selected or not.Multicollinearity, which means correlation among independent variables, is one of the undesired problems often found in regression models. If there is a high correlation between two or more independent variables, it becomes harder to identify which independent variable actually affects the dependent variable.In Table 2, a correlation matrix is shown to indicate correlation coefficients among variables.In Table 3, the Variance Inflation Factor (VIF) belonging to each independent variable is shown.VIF is a measure of tolerance, which indicates how much the variance of the coefficient estimate is being inflated by multicollinearity.VIFs have been observed for two different scenarios, a model without dummy variables and a model with time and unit dummy variables.A commonly given rule of thumb is that VIFs of 10 and higher or big values of the correlation coefficient, e.g., 0.80 and above, may be the reason for concern of multicollinearity.In the present study, there is no multicollinearity concern, as shown in Tables 2 and 3. Heterogeneity enables the avoidance of biased results in panel data analysis.For the data, heterogeneity across stocks and quarters is demonstrated in Figure 1 and Figure 2, respectively.The authors have taken the mean of return according to id and time, the dependent variable, and generated graphs, which show how the dependent variable varies across id and time, respectively. 
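Since the multicollinearity screening above relies on pairwise correlations and variance inflation factors, a compact sketch of the VIF computation is given below (NumPy only; the variable layout is hypothetical). Each VIF_j equals 1/(1 − R_j²), where R_j² is obtained by regressing regressor j on the remaining regressors.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of the regressor matrix X (n x k)."""
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out[j] = 1.0 / (1.0 - r2)
    return out

# Hypothetical layout: columns are P/S, EPS, D/E and EBITDA Margin
# vifs = vif(np.column_stack([ps, eps, de, ebitda_margin]))
```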
Just like heterogeneity and multicollinearity, the stationarity of variables is a crucial topic in panel data analysis. Gujarati (2003) states that the terms non-stationary, random walk, and unit root can be treated as synonymous. Stationarity matters for any statistical model, such as the present one, because it preserves model stability and provides a framework in which averaging (used in autoregressive and moving-average processes) can properly describe the time series behavior. Unit root tests have become broadly popular over the past several years for testing stationarity, so the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) unit root tests were applied. Under the null hypothesis of both tests, the variable contains a unit root. The unit root test results are shown in Table 4; individual intercepts and individual linear trends are used as exogenous variables in the tests. As shown in Table 4, the Debt-to-Equity and Price-to-Sales ratio variables contain unit roots, and all other variables are stationary at a 95% confidence interval. For those two variables, the tests were reapplied after taking the first difference of each variable. As can be seen from Table 5, a stationary model can be built for the differenced data for both variables, so the Debt-to-Equity and Price-to-Sales ratio variables are difference-stationary.

After testing stationarity, multicollinearity, and heterogeneity, the existence of individual and time effects is examined. According to the outcomes of both the F-test and the Breusch-Pagan LM test, the existence of individual and time effects for both the fixed and random effects specifications is obvious. If the individual effect (µ_i) and the time effect (λ_t) are presumed to be fixed parameters to be estimated, and the remainder disturbances (v_it) are presumed to be stochastic, independently and identically distributed with zero mean and constant variance, Formula (1) represents a two-way fixed effects error component model (Baltagi, 2013):

Y_it = α + X′_it β + µ_i + λ_t + v_it.   (1)

If the individual effect (µ_i), the time effect (λ_t), and the remainder stochastic disturbance term (v_it) are independent of each other, and if the independent variables (X_it) are independent of µ_i, λ_t, and v_it for all i and t, then we have the two-way random effects model (Baltagi, 2013). To determine whether the two-way random effects or the two-way fixed effects model is the appropriate method for estimating the model, the Hausman specification test is applied. This test examines whether the individual effects are correlated with the regressors; its results are summarized in Table 8. The estimated model is given in Formula (2):

Return_it = β_0 + β_1 (P/S)_it + β_2 EbitdaMargin_it + β_3 EPS_it + β_4 (D/E)_it + µ_i + λ_t + U_it.   (2)

In Formula (2), β_0 denotes the constant term, and β_1, ..., β_4 denote the coefficients of the explanatory variables, which are the Price-to-Sales ratio (P/S), EBITDA Margin (EbitdaMargin), Earnings per Share (EPS), and Debt-to-Equity ratio (D/E). The other parameters µ_i, λ_t, and U_it denote the unobservable stock-specific effect, the unobservable time effect, and the remainder disturbance, respectively. The subscript i represents stocks, and t represents quarters. For the two-way mixed effects model with µ_i fixed and λ_t random, λ_t and U_it have zero mean and constant variance, and λ_t is independent of the remainder. The µ_i are assumed to be fixed parameters to be estimated and are also independent of the remainder and the other regressors. As for the two-way fixed effects model, both µ_i and λ_t are fixed parameters and are independent of the remainder and the other explanatory variables.
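To make the mechanics of the two-way error component model explicit, here is a minimal within-estimator sketch for a balanced panel (hypothetical array names; the paper's estimations are carried out in Stata). For a balanced panel, demeaning sequentially by stock and by quarter reproduces the two-way within transformation y_it − ȳ_i. − ȳ_.t + ȳ_.., after which OLS on the transformed data gives the fixed effects slope estimates.

```python
import numpy as np

def demean_two_way(v, ids, times):
    """y_it - ybar_i. - ybar_.t + ybar_.. via sequential demeaning (balanced panel)."""
    out = np.asarray(v, dtype=float).copy()
    for g in (ids, times):
        group_mean = np.bincount(g, weights=out) / np.bincount(g)
        out -= group_mean[g]
    return out

def two_way_fe(y, X, ids, times):
    """Two-way fixed effects (within) estimates of the slope coefficients."""
    y_t = demean_two_way(y, ids, times)
    X_t = np.column_stack([demean_two_way(X[:, j], ids, times)
                           for j in range(X.shape[1])])
    beta, *_ = np.linalg.lstsq(X_t, y_t, rcond=None)
    return beta

# Hypothetical usage: ids/times are 0-based codes for the 11 stocks / 32 quarters
# beta = two_way_fe(returns, np.column_stack([ps_diff, eps, de_diff, ebitda_margin]), ids, times)
```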
Autocorrelation in the data should be tested before the estimation. Autocorrelation, also referred to as serial correlation, is an indicator of the relationship between a variable's current value and its past values. The error component model generalized by Lillard and Willis (1978), with the assumption of first-order autocorrelation for the remainder disturbances (V_it), is as follows:

V_it = ρ V_i,t−1 + ε_it,   (3)

where |ρ| < 1 and ε_it is identically and independently distributed with zero mean and constant variance. As Bertrand, Duflo, and Mullainathan (2004) also warned, autocorrelation causes the standard errors of the coefficients to be understated. Moreover, autocorrelation leads to an overestimation of the level of significance and of the t-statistics. In this study, the Wooldridge test for autocorrelation is implemented. Under the null hypothesis of the test, no first-order autocorrelation exists. The test results are summarized in Table 9. According to the test results, the null hypothesis cannot be rejected, so it can be inferred that there is no first-order autocorrelation for the model.

Cross-sectional dependence is examined by performing Pesaran's test, which applies to panels in which both N and T tend to infinity. Under the null hypothesis of this test, the residuals are uncorrelated across the stocks or units. Moreover, the Breusch-Pagan LM test could also be used to test cross-sectional dependency for large T and small N values (Tatoğlu, 2016). The null hypothesis of this test states that the residuals are independent across the individuals. It can be inferred from the results summarized in Table 10 and Table 11 that the null hypothesis is rejected. The results of both tests consistently indicate that the regression residuals for the data are dependent, or correlated, across stocks. Moreover, according to Pesaran's result, the average absolute correlation between the residuals of two stocks is 0.25.

One of the main assumptions of the standard panel data model is that the regression disturbances are homoscedastic across entities and periods (Baltagi, 2013). Homoscedasticity is a restrictive assumption for panels that include individuals of different sizes, because cross-sectional entities of varying size may show different variation. Although an attempt is made to use data on individuals (stocks) from the same sector in this study, it is difficult to meet this assumption because of the varying company sizes. When heteroscedasticity is present but homoscedasticity is assumed, the estimates of the regression coefficients will not be satisfactory; moreover, the standard errors of those estimates will be biased, so robust standard errors should be computed in the presence of heteroscedasticity (Baltagi, 2013). The modified Wald test is employed to test for heteroscedasticity in this study. The null hypothesis of the test states that the variance is constant across the individuals or periods.

As summarized in Table 12 and Table 13, the test results indicate that the regression disturbances have non-constant variance across time and across stocks, which means there is heteroscedasticity in the model. As a result of all the tests conducted before estimating the coefficients of the regressors, one has to deal with cross-sectionally dependent and heteroscedastic disturbances in the two-way error component regression model. On the other hand, no first-order autocorrelation exists for the model, according to the Wooldridge test results. In the next section, the use of robust standard errors to correct for the presence of cross-sectional dependence and heteroscedasticity will be discussed, and the regression results will be presented.
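As a complement to the diagnostics just described, the following sketch computes Pesaran's CD statistic and the average absolute residual correlation (the 0.25 figure mentioned above) from a T × N matrix of residuals; this is a schematic reimplementation, not the Stata routine used in the paper, and it assumes a balanced panel.

```python
import numpy as np

def pesaran_cd(resid):
    """Pesaran CD statistic from a T x N matrix of regression residuals.

    Under the null of cross-sectional independence, CD ~ N(0, 1)."""
    T, N = resid.shape
    corr = np.corrcoef(resid, rowvar=False)          # N x N pairwise correlations
    iu = np.triu_indices(N, k=1)
    cd = np.sqrt(2.0 * T / (N * (N - 1))) * corr[iu].sum()
    avg_abs_corr = np.abs(corr[iu]).mean()           # average absolute off-diagonal correlation
    return cd, avg_abs_corr
```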
RESULTS According to the results of the tests stated in section 2, one should deal with heteroscedasticity and cross-sectional dependency in the model.The literature describes some methodologies that can deal with cross-sectional heteroskedastic disturbances.In this section, regression results using robust estimators will be presented.The regression model used in this study is presented again in Formula(4): Parks (1967) conducted the first study to consider periodical and spatial correlation with heteroscedasticity (Tatoğlu, 2016).Then, Kmenta (1986) described an alternative method based on the feasible generalized least square estimation algorithm to deal with autocorrelated and cross-sectional heteroskedastic disturbances.For the model, there is no autocorrelation, so the data are not transformed for compatibility with AR(1) correlation.Therefore, the Parks-Kmenta method could have been employed.That is, using this method for our model, generalized least square estimation under heteroscedasticity and cross-sectional dependency is applied to predict the effects of regressors on stock returns.The method could be appropriate when N is small, and T is large.However, and importantly, it is infeasible when N is large, and T is small (Baltagi, 2013). First, the Parks-Kmenta estimator was employed for robust estimation.There is no autocorrelation, but there is cross-sectional dependence with heteroscedasticity in the model; therefore, the panels (correlated) and correlation (independent) option were used.Table 14 summarizes the regression results estimated using the Parks-Kmenta methodology.The number of observations is 352, with 11 groups (stocks) and 32 time-periods (quarters). According to the regression results computed using the Parks-Kmenta estimator, Price-to-Sales, EBITDA Margin, and EPS ratios have a statistically significant effect on stock returns.The value of Wald statistics is meaningful in terms of statistical significance. According to Hausman test results summarized in Table 8, the two-way mixed effects regression model with fixed individual effects and random time effects is one of the efficient models to which one could apply to the data.Moreover, an "i.id" variable was added, which refers to fixed individual effects and "all:R.Time" variable, which refers to random time effects (Tatoğlu, 2016).To maintain the independence of residual errors by allowing heteroscedasticity with respect to individual effects, we added a "residuals (independent, by(id))" option into the command in Stata (Statacorp, 2009).In Table 15, the regression results computed employing two-way mixed effects maximum likelihood regression, are summarized.The values and signs of regressor coefficients obtained by estimating a two-way mixed effects model are similar to the previous results.Due to the existence of heteroscedasticity across individuals, standard deviations of each residual were computed independently using Stata.For the MLE DISCUSSION This study examines through application of panel data analysis, whether the Price-to-Sales, EPS, Debt-to-Equity, and EBITDA Margin financial ratios affect the returns of stocks listed on the Istanbul Stock Exchange operating in the technology and telecommunication sector.According to empirical findings, using both the Parks-Kmenta Estimator and the Two-way Mixed Effects Model, the results are similar, and they confirm each other. 
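To make the estimation strategy concrete, the sketch below outlines a Parks-Kmenta-style FGLS step as described above: generalized least squares with cross-sectional heteroscedasticity and contemporaneous correlation but no AR(1) transformation, matching the finding of no autocorrelation. It is a schematic NumPy reimplementation with hypothetical array names, not the Stata routine used in the paper, and it assumes a balanced panel with T > N so that the N × N residual covariance matrix is invertible.

```python
import numpy as np

def parks_kmenta_fgls(y, X, ids, times):
    """FGLS with cross-sectionally heteroscedastic and contemporaneously
    correlated disturbances (no AR(1) term), for a balanced panel."""
    n_id, n_t = ids.max() + 1, times.max() + 1
    Xc = np.column_stack([np.ones(len(y)), X])

    # Step 1: pooled OLS residuals, arranged as a T x N matrix
    b_ols, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    E = np.zeros((n_t, n_id))
    E[times, ids] = y - Xc @ b_ols

    # Step 2: N x N contemporaneous covariance and its inverse square root
    sigma = E.T @ E / n_t
    w, V = np.linalg.eigh(sigma)
    sigma_inv_half = V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    # Step 3: whiten each quarter's cross-section block, then run OLS (= GLS)
    y_star = np.empty_like(y, dtype=float)
    X_star = np.empty_like(Xc, dtype=float)
    for t in range(n_t):
        rows = np.flatnonzero(times == t)
        rows = rows[np.argsort(ids[rows])]           # fixed ordering of the stocks
        y_star[rows] = sigma_inv_half @ y[rows]
        X_star[rows] = sigma_inv_half @ Xc[rows]
    beta, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
    return beta                                      # constant first, then slopes
```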
According to the regression results computed using the Parks-Kmenta estimator, the Price-to-Sales, EBITDA Margin, and EPS ratios have a statistically significant effect on stock returns. EBITDA Margin is calculated as EBITDA/Sales, and EBITDA is found by adding depreciation and amortization (non-cash expenses) to operating profit. EBITDA refers to the operating profit of a company generated from its core business; it reflects the operational cash flow of the business by eliminating non-operating factors. As EBITDA grows, the value of a business grows, and it positively affects the expected rate of return of the shareholders.

On the other hand, EPS is measured by dividing net earnings by the weighted average of common shares outstanding. A company with a high EPS can generate high profits that can be distributed to shareholders as dividends or reinvested in the company for new investment projects as retained earnings. Therefore, higher EPS results in higher stock returns.

According to the findings, the Price-to-Sales ratio also has a statistically significant but negative relationship with stock returns. The Price-to-Sales ratio is one of the main revenue multiples in relative valuation and is widely used to value technology companies. It is computed as price per share divided by sales per share, or as market value of equity divided by sales. As with other multiples, other things remaining equal, firms that trade at a low Price-to-Sales multiple are considered cheap relative to firms that trade at a high Price-to-Sales multiple. If a firm's Price-to-Sales multiple is lower than the average of similar (comparable) firms in the same industry, the stock is viewed as undervalued, whereas a higher Price-to-Sales multiple compared to the average of similar firms operating in the same industry is interpreted as overvalued. Therefore, lower Price-to-Sales ratios lead to higher returns in the following periods for technology and telecommunication stocks listed on the Istanbul Stock Exchange.

The results obtained by employing the two-way mixed effects model show that the Debt-to-Equity ratio is negatively related to stock returns (Table 15). The Debt-to-Equity ratio is calculated as total liabilities over shareholders' equity and is used to evaluate a company's financial leverage. It indicates how much debt and how much equity a company uses to finance its assets. A high Debt-to-Equity ratio can be interpreted as a measure of high financial risk: such a company incurs high interest expense, may have difficulty repaying its financial obligations, and may therefore face default or bankruptcy.

Table 1. Descriptive statistics of the variables in the sample.
Table 2. Correlation matrix of variables.
Table 4. Stationarity test results for all variables.

Under the null hypothesis of the F-test, all unit- or time-specific effects are equal to zero, and the alternative hypothesis is that at least one individual or time-specific effect is non-zero for the model. Setting the time and the individual variables as fixed and applying the F-test, it is observed that there should be both individual and time-specific effects in the model.
For the data, an F(10, 337) value of 2.84 and an F(31, 316) value of 3.60 are reported for stock-specific effects and time-specific effects, respectively. The F-table value of F(10, 337) is 1.858 and the F-table value of F(31, 316) is 1.4877. According to those results, a two-way fixed effects model could be suitable for the data. The results of the F-test are reported in Table 6, where µ_i denotes an individual or stock-specific effect and λ_t denotes a time-specific effect.

Table 6. F-test results for testing individual and time-specific effects. For individual effects (H_0: all µ_i = 0; H_a: any µ_i ≠ 0); for time effects (H_0: all λ_t = 0; H_a: any λ_t ≠ 0).

As shown in Table 6, a two-way fixed effects error component model could be suitable as the regression model. However, what about the individual effects and time effects in the random effects model? Breusch and Pagan (1980) derived the Lagrange Multiplier (LM) test for H_0: σ_µ² = σ_λ² = 0. Accordingly, the Breusch-Pagan LM test is employed to test whether there are individual and time effects. As reported in Table 7, the Chi-square results obtained for the data are higher than the Chi-square(1) table value, which is 3.84. It can thus be stated that the null hypothesis of this test is rejected at a 95% confidence interval, illustrating the presence of individual and time-specific effects for the random effects model.

Table 7. Breusch-Pagan LM test results for random effects.
Table 5. Stationarity test results for first differenced variables.
Table 9. Wooldridge autocorrelation test results.
Table 11. Pesaran's test of cross-sectional independence.
Table 12. Modified Wald test for heteroscedasticity across stocks.
Table 13. Modified Wald test for heteroscedasticity across quarters.
Table 15. Regression results computed using the two-way mixed effects model.

Table 14 indicates that EBITDA Margin and EPS have a positive relationship with stock returns, as expected. If a technology and telecommunication company increases its EBITDA Margin and EPS, the stock return rises.
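A compact sketch of the effects F-test reported in Table 6 follows (hypothetical data layout): the restricted model is pooled OLS with a constant, the unrestricted model adds stock (or quarter) dummies, and the statistic is the usual restricted-versus-unrestricted F ratio. With 11 stocks, 32 quarters, 4 regressors, and 352 observations, the degrees of freedom come out as F(10, 337) and F(31, 316), matching the values quoted above.

```python
import numpy as np

def rss(y, Z):
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    r = y - Z @ beta
    return r @ r

def f_test_effects(y, X, groups):
    """F-test for joint significance of group (stock or quarter) dummies."""
    n = len(y)
    g = groups - groups.min()
    D = np.eye(g.max() + 1)[g][:, 1:]            # group dummies, first group dropped
    Z_r = np.column_stack([np.ones(n), X])       # restricted: pooled OLS
    Z_u = np.column_stack([Z_r, D])              # unrestricted: adds the dummies
    df1 = D.shape[1]
    df2 = n - Z_u.shape[1]
    F = ((rss(y, Z_r) - rss(y, Z_u)) / df1) / (rss(y, Z_u) / df2)
    return F, df1, df2

# e.g. f_test_effects(returns, X, ids) should yield the F(10, 337) statistic above
```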
5,828.2
2020-05-18T00:00:00.000
[ "Business", "Economics" ]
Prediction of subcellular location of apoptosis proteins by incorporating PsePSSM and DCCA coefficient based on LFDA dimensionality reduction Background Apoptosis is associated with some human diseases, including cancer, autoimmune disease, neurodegenerative disease and ischemic damage, etc. Apoptosis proteins subcellular localization information is very important for understanding the mechanism of programmed cell death and the development of drugs. Therefore, the prediction of subcellular localization of apoptosis protein is still a challenging task. Results In this paper, we propose a novel method for predicting apoptosis protein subcellular localization, called PsePSSM-DCCA-LFDA. Firstly, the protein sequences are extracted by combining pseudo-position specific scoring matrix (PsePSSM) and detrended cross-correlation analysis coefficient (DCCA coefficient), then the extracted feature information is reduced dimensionality by LFDA (local Fisher discriminant analysis). Finally, the optimal feature vectors are input to the SVM classifier to predict subcellular location of the apoptosis proteins. The overall prediction accuracy of 99.7, 99.6 and 100% are achieved respectively on the three benchmark datasets by the most rigorous jackknife test, which is better than other state-of-the-art methods. Conclusion The experimental results indicate that our method can significantly improve the prediction accuracy of subcellular localization of apoptosis proteins, which is quite high to be able to become a promising tool for further proteomics studies. The source code and all datasets are available at https://github.com/QUST-BSBRC/PsePSSM-DCCA-LFDA/. Background Protein maintains a highly ordered operation of the protection of the cell system [1]. At the cellular level, proteins work only in specific locations. It is necessary to fulfill the protein's function that subcellular locations provide a specific chemical environment and set of interaction partners [2]. Apoptosis is cell physiological death which is closely related to intracellular control [3]. Cancer and autoimmune disease occurs when blocking apoptotic protein appears, ischemic damage or neurodegenerative disease occurs when unwanted apoptosis appears [4]. Studying proteins involved in the apoptotic process can help us understand the pathogenesis of the disease and provide a variety of therapeutic targets. It is very valuable to get information on apoptosis protein subcellular localization, which can help us understand the apoptosis proteins function, cell apoptosis mechanisms and drug development [5]. Therefore, it is a challenging task using the machine learning method to construct the protein subcellular location prediction model. Currently, apoptosis protein subcellular localization prediction has made great advancement. Zhou and Doctor [36] constructed 98 protein apoptosis protein dataset, using the amino acid composition and covariance discriminant method, the overall prediction accuracy reached 72.5% by jackknife test. Huang et al. [37] obtained the accuracy rate of 77.6% by combining the protein instability index with the support vector machine. However, the prediction capacity of this method was unbalanced. Especially, for other class proteins (exclude cytoplasmic, membrane and mitochondrial proteins), the prediction accuracy did not exceed 50%. 
Bulashevska and Eils [38] used Bayesian classifier based on Markov chain model to construct ensemble classifier, and the prediction accuracy of 98 apoptosis proteins was further improved by jackknife test. Zhang et al. [15] constructed a new apoptosis protein dataset of 225 proteins. They used encoding approach with grouped weight as feature extraction method for protein sequences and support vector machine as classifier (named as EBGW_SVM). The overall prediction accuracy was 83.1% using jackknife test. The feature extraction method of the protein sequence takes into account the distribution of residues with the same unique characteristic, but ignores the physical and chemical properties of the protein sequence. Chen and Li [39] constructed a dataset containing 317 apoptosis protein sequences and obtained higher prediction accuracy, which combined support vector machine and increment of diversity (named as ID_SVM) by using jackknife test. Similarly, Ding et al. [40] used the Fuzzy K-nearest neighbor (FKNN) algorithm and the overall prediction accuracy was 90.9% using CL317 dataset. Qiu et al. [41] used the DWT_SVM method to obtain high prediction accuracy rates of 97.5, 87.6 and 88.8% for CL317, ZW225 and ZD98 datasets, respectively by jackknife test. The above methods ignore the biological information of the protein sequence, so the prediction method of homologous similarity based on the protein sequence and protein functional domain is proposed. Yu et al. [42] proposed a novel pseudo-amino acid model which extracted the sequence characteristics of proteins using amino acid substitution matrices and auto covariance transformation and used support vector machine as classifier. The results of prediction accuracy obtained by jackknife test were 90.0 and 87.1% on the CL317 and ZW225 datasets, respectively. Liu et al. [43] used tri-gram encoding based PSSM as feature extraction method, then used SVM-RFE algorithm to reduce feature vectors, finally the best feature vectors were input to the SVM classifier. The prediction accuracy were 95.9, 97.8 and 96.9% on the CL317, ZW225 and ZD98 datasets, respectively. Dai et al. [44] treated the difference between the N-segment and C-segment of the protein in subcellular location prediction, and proposed a model based on golden ratio segmentation to improve subcellular localization prediction, and achieved a better predictive effect. Xiang et al. [45] introduced evolutionaryconservative information to represent protein sequences. Meanwhile, according to the proportion of golden section in mathematics, the position-specific scoring matrix (PSSM) is divided into several blocks. The overall accuracy of ZD98 and CL317 datasets were 98.98 and 91.11%, respectively by using SVM classifier. Liang et al. [46] combined the Geary autocorrelation function and detrended cross-correlation coefficient methods based on PSSM to extract the protein sequences from the CL317, ZW225 and ZD98 datasets. Under the jackknife test, the overall prediction accuracy were 89.0 84.4 and 91.8%, respectively. Using only a feature is difficult to have a big breakthrough in the prediction of subcellular localization. At present, researchers usually combined multiple feature extraction methods of protein sequences to obtain more comprehensive protein sequence information. However, the feature vectors of the protein sequences obtained by fusing a variety of features are usually very high. 
Highdimensional data contains a lot of redundant information, which may seriously affect the performance of the classifier. Dimensionality reduction methods can help us eliminate redundant information and are widely used in data classification and pattern recognition. At present, many researchers introduce a variety of methods to reduce dimension in the subcellular localization prediction, such as SVD (singular value decomposition) [47], Backward feature selection [48], CFS (correlation-based feature selection) [49], Forward selection [50], PSO (particle swarm optimization) [51], mLASSO (multi-label least absolute shrinkage and selection operator) [52], GA (Genetic algorithm) [53] and so on. In this paper, we presents a new method for predicting subcellular localization of apoptosis proteins, called PsePSSM-DCCA-LFDA. Firstly, obtain sequence information from apoptotsis protein sequences by combining PsePSSM algorithm and DCCA coefficient. Then, the LFDA method is used to reduce the dimension and noise information in the original high-dimensional space. Finally, using SVM as classifier to predict protein subcellular localization. By jackknife test, the optimal parameters of the model are determined under different ξ values, S values, different dimensionality reduction methods and selection of different dimensions, and established PsePSSM-DCCA-LFDA prediction model. Using the most rigorous jackknife test, the overall prediction accuracy are 99.7, 99.6 and 100%, respectively for CL317 dataset, ZW225 dataset and ZD98 dataset. The results show that the PsePSSM-DCCA-LFDA method can get better prediction effect than other existing methods. Pseudo-position specific scoring matrix (PsePSSM) In order to obtain the evolutionary information of the protein sequences, the protein sequences of the CL317, ZW225 and ZD98 datasets are aligned with the non-redundant (NR) database (ftp://ftp.ncbi.nih.gov/blast/db/) using the PSI-BLAST program [54], and obtain the position specific scoring matrix (PSSM) [55] of the corresponding protein sequences. The NR database contains 85,107,862 protein sequences. We use three iterations and E-value is 0.001 in PSI-BLAST program. The BLOSUM62 matrix is used as substitution matrix for generating the PSSM. PSSM can be expressed for a protein sequence P as the following Eq. (1). where L is total number of amino acids in the protein sequence, E i, j represents the evolution information of amino acids in protein sequences. The rows of PSSM represent the corresponding amino acids positions in protein sequences, and columns of PSSM indicate the 20 amino acid types that may be mutated. The PSSM value ranges from − 9 to 11. Since the length of the protein sequence in the CL317, ZW225 and ZD98 datasets is inconsistent, the corresponding PSSM dimension for the protein sequence in the dataset is different, which is difficult for our subsequent study. In this paper, PsePSSM [56] algorithm is used to extract the features of protein sequences, and the PSSM of different protein sequences is transformed into a uniform vector. First, the elements of PSSM are normalized by Eq. (2), whose PSSM value ranges from 0 to 1. where x is the original PSSM value. Then, a protein sequence can be expressed using PsePSSM as follows: where P j ¼ P L i¼1 P i; j =L ð j ¼ 1; 2; ⋯; 20Þ , P j represents the average score of the all amino acid residues which are mutated to j amino acid type in the protein P. 
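The following sketch turns the PsePSSM description above into code (NumPy). It assumes a simple min-max scaling for the normalization step, since Eq. (2) is not reproduced here, and it also computes the lag-ξ squared-difference terms θ defined in the next paragraph; with ξ = 3 it returns the 80-dimensional vector used later in the paper.

```python
import numpy as np

def psepssm_features(pssm, xi=3):
    """PsePSSM feature vector (20 + 20*xi values) from an L x 20 PSSM.

    Assumes a min-max scaling to [0, 1] for the normalization step; the
    paper's Eq. (2) may use a different normalizing function."""
    P = np.asarray(pssm, dtype=float)
    P = (P - P.min()) / (P.max() - P.min())          # assumed form of Eq. (2)
    feats = [P.mean(axis=0)]                          # the 20 average scores P_bar_j
    for lag in range(1, xi + 1):                      # sequence-order information
        feats.append(((P[:-lag] - P[lag:]) ** 2).mean(axis=0))
    return np.concatenate(feats)                      # shape: (20 + 20*xi,)
```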
θ_ξ,j = (1/(L−ξ)) Σ_{i=1}^{L−ξ} (P_{i,j} − P_{i+ξ,j})²,   j = 1, 2, ⋯, 20;  0 < ξ < L,

where θ_ξ,j is the sequence-order information of the protein sequence, j is the amino acid type, and ξ is the contiguous distance along the sequence. From the above, a protein sequence generates a (20 + 20×ξ)-dimensional feature vector using the PsePSSM algorithm.

Detrended cross-correlation analysis coefficient

According to the evolutionary information expressed by the protein sequence, we can obtain the corresponding position-specific scoring matrix (PSSM), as shown in Eq. (1). In order to extract more protein sequence information from the PSSM, the detrended cross-correlation analysis coefficient (DCCA coefficient) method [57-59] is applied to the PSSM. The DCCA coefficient is based on the detrended covariance method, in which least squares linear fitting and trend elimination are carried out for non-stationary signals. The evolutionary information expressed in the form of the PSSM is used as the set of attributes, with each amino acid type considered as one property, and the PSSM is treated as the time series of all attributes. Since the size of the PSSM for each protein sequence is L×20, we treat the 20 columns of the PSSM as 20 non-stationary time series [46,60]. After normalizing the PSSM using Eq. (2), consider any two different columns {m_i} and {n_i} of the PSSM (i = 1, 2, ⋯, L), where L is the length of the protein sequence. First, Eq. (4) is used to calculate the new (integrated) time series M_k and N_k:

M_k = Σ_{i=1}^{k} m_i,   N_k = Σ_{i=1}^{k} n_i,   k = 1, 2, ⋯, L.   (4)

Then the time series M_k and N_k are divided into L−S overlapping segments, each containing S+1 data points, and a least squares linear fit is performed within each segment to obtain the fitted values M̃_{i,k} and Ñ_{i,k}. Eq. (5) is used to calculate the detrended covariance of each segment:

f²_mn(S, i) = (1/(S+1)) Σ_{k=i}^{i+S} (M_k − M̃_{i,k})(N_k − Ñ_{i,k}).   (5)

In particular, f²_mm(S, i) is obtained by replacing N with M in Eq. (5). Next, the covariance over all L−S segments (the whole time series), calculated using Eq. (6), is

F²_mn(S) = (1/(L−S)) Σ_{i=1}^{L−S} f²_mn(S, i).   (6)

In particular, F²_mm(S) is obtained analogously from f²_mm(S, i). Finally, the DCCA coefficient of the two different time series {m_i} and {n_i} is calculated using Eq. (7):

ρ_DCCA = F²_mn(S) / (F_mm(S) F_nn(S)).   (7)

As can be seen from Eq. (7), ρ_DCCA depends on the length L of the protein sequence and on the length S+1 of the overlapping segments. Its value ranges over −1 ≤ ρ_DCCA ≤ 1, where 1 represents perfect cross-correlation, 0 indicates no cross-correlation, and −1 represents perfect anti-cross-correlation [61]. In total, the DCCA coefficient algorithm generates a 190-dimensional feature vector for a protein sequence, one coefficient for each of the 190 distinct column pairs.

Local Fisher discriminant analysis

This paper uses a supervised dimensionality reduction method, local Fisher discriminant analysis (LFDA) [62]. LFDA has the form of an embedded linear transformation and can be computed easily by solving a generalized eigenvalue problem. Let the protein data matrix be X = [x_1, x_2, ⋯, x_n], x_i ∈ R^d, where n is the number of protein samples and d is the dimension of the extracted protein sequence features. The class labels are y_i ∈ {1, 2, ⋯, c}, n_ℓ is the number of samples of category ℓ, and Σ_{ℓ=1}^{c} n_ℓ = n. The local within-class scatter matrix S^(w) and the local between-class scatter matrix S^(b) are calculated using Eqs. (8) and (9). It is worth noting that A is an affinity matrix, whose element A_{i,j} is the affinity between x_i and x_j. In this paper, we use the affinity matrix A_{i,j} = exp(−‖x_i − x_j‖/(σ_i σ_j)) defined by Zelnik-Manor and Perona [63], where σ_i = ‖x_i − x_i^(K)‖ represents the local scaling around the data sample x_i and x_i^(K) is the K-th nearest neighbor of x_i.
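For concreteness, here is a schematic NumPy version of the DCCA coefficient features of Eqs. (4)-(7) (a sketch, not the authors' MATLAB code): the integrated profiles are detrended with a least squares line in each overlapping window of S + 1 points, and one ρ_DCCA value is produced for each of the 190 column pairs of the normalized PSSM. The min-max normalization is the same assumption as in the PsePSSM sketch, and the per-segment averages are accumulated as sums because the 1/(L−S) factors cancel in the ratio of Eq. (7).

```python
import numpy as np

def dcca_coefficient(m, n, S=40):
    """rho_DCCA between two equal-length series, following Eqs. (4)-(7)."""
    m, n = np.asarray(m, float), np.asarray(n, float)
    L = len(m)
    M, N = np.cumsum(m), np.cumsum(n)                   # integrated profiles, Eq. (4)
    t = np.arange(S + 1)
    F2_mn = F2_mm = F2_nn = 0.0
    for i in range(L - S):                              # overlapping segments
        Mw, Nw = M[i:i + S + 1], N[i:i + S + 1]
        dM = Mw - np.polyval(np.polyfit(t, Mw, 1), t)   # least squares detrending
        dN = Nw - np.polyval(np.polyfit(t, Nw, 1), t)
        F2_mn += (dM * dN).mean()                       # Eq. (5), accumulated for Eq. (6)
        F2_mm += (dM * dM).mean()
        F2_nn += (dN * dN).mean()
    return F2_mn / np.sqrt(F2_mm * F2_nn)               # Eq. (7)

def dcca_features(pssm, S=40):
    """190 pairwise DCCA coefficients between the 20 normalized PSSM columns."""
    P = np.asarray(pssm, float)
    P = (P - P.min()) / (P.max() - P.min())             # assumed min-max normalization
    pairs = [(i, j) for i in range(20) for j in range(i + 1, 20)]
    return np.array([dcca_coefficient(P[:, i], P[:, j], S) for i, j in pairs])
```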
The literature [63] proved that in the experiment for high-dimensional data, when K = 7, better results can be obtained, so this article selected K = 7. Solve LFDA transformation matrixT LFDA Matrix after the dimension reduction becomes: Therefore, through the Eq. (11), we eliminate the redundant information contained in the high-dimensional data obtained after the original protein sequence feature extraction. In other words, the fusion PsePSSM algorithm and DCCA coefficient algorithm on the apoptosis protein sequence after the feature extraction matrix X, through the transformation matrix T LFDA , matrix Z is obtained after dimensionality reduction. Support vector machine Support vector machine (SVM) is a supervised machine learning method based on statistical learning theory, which is proposed by Vapnik et al. [64]. Because of its excellent learning and generalization ability, especially the ability to deal with high dimensional sparse vector, it has become a hotspot in the field of data mining and machine learning. In recent years, SVM has also been widely used in the field of bioinformatics. In the field of proteomics research, it has been widely used to predict membrane protein types [65,66], G protein-coupled receptors [67,68], protein structure [69][70][71][72][73], protein-protein interaction [74][75][76], protein subcellular localization [77][78][79][80], protein post-translational modification sites [81][82][83][84] and other protein structure and function of the study. SVM is used to solve a two-class classification problem. SetD = {(x i , y i )| i = 1, 2, ⋯, n} is a training set, wherex i ∈ R d represent sample i, which has d dimension feature vectors, y i ∈ {+1, −1}is class labels of sample i. SVM transforms a linearly indivisible sample of low-dimensional input space into high-dimensional feature space to make it linearly separable. In this study, we choose the radial basis function (RBF) to perform prediction. Because RBF kernel function is the most widely used kernel function and its superiority for solving nonlinear problem [17,18,[41][42][43][44][45][46], which is defined as follows: where γ is the kernel width parameter, x i and y j are the feature vectors of the i-th and j-th protein sequences, respectively. The egularization parameter C and the kernel parameter γ are optimized based on CL317 and ZW225 datasets by K-fold cross validation using a grid search strategy to obtain the highest overall prediction accuracy by using the LIBSVM software [85], which can be freely downloaded from http://www.csie.ntu.edu.tw/~cjlin/libsvm/. In this paper, C is allowed to take a value only between 2 −5 and 2 15 , and γ only between 2 -15 and 2 5 . SVM is originally designed for two-class classification, but CL317, ZW225 and ZD98 are multi-class classification data. At present, three kinds of strategies can be solved multi-classification: one-versus-one (OVO), oneversus-rest (OVR) [86] and direct acyclic graph SVM (DAGSVM) [87]. LIBSVM software implements the "one-versu-one" (OVO) strategy for multi-class classification. The OVO strategy sets up a classifier between any two categories,so if k is the number of classes, then k(k − 1)/2 classifiers are constructed. During the testing phase, the test samples are submitted to all classifiers, k(k − 1)/2 classification results are obtained, and the final result is generated by voting. That is to say, the most voting category is the final class. 
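The grid search over the regularization parameter C ∈ [2⁻⁵, 2¹⁵] and the RBF kernel width γ ∈ [2⁻¹⁵, 2⁵] described above can be sketched as follows with scikit-learn, whose SVC class is built on LIBSVM and uses the same one-versus-one multi-class strategy. The exponent step of 2 and the 5-fold cross-validation are illustrative assumptions, and Z and labels stand for the reduced feature matrix and the subcellular class labels.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# RBF kernel K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2); one-versus-one multi-class.
param_grid = {
    "C":     2.0 ** np.arange(-5, 16, 2),     # 2^-5 ... 2^15
    "gamma": 2.0 ** np.arange(-15, 6, 2),     # 2^-15 ... 2^5
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")
# search.fit(Z, labels)                       # Z: LFDA-reduced features, labels: classes
# print(search.best_params_, search.best_score_)
```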
It is worth noting that when there are two categories of voting the same results, we choose the class appearing first of the vote as the final category for the sake of simple operation. Performance evaluation and model building In statistical prediction, there are four validation tests: self-consistency test, independent dataset test, k-fold cross-validation and jackknife test, which are often used to evaluate the prediction performance [78,80]. In this paper, the jackknife test [88,89] is used to examine the performance of the prediction model. The jackknife test requires testing each sample in the dataset. Specifically, each time one sample is selected as an independent test sample in the dataset, and the remaining samples are used as a training set to establish a prediction model until all the samples have been tested in the dataset. We use four standard performance measures to evaluate the model performance, including sensitivity (Sens), specificity (Spec), Matthews correlation coefficient (MCC) and overall accuracy (OA), as follows: where TP represents the numbers of the correctly identified positives, TN represents the numbers of correctly identified negatives, FP represents the numbers of the negatives identified as positives, FN represents the numbers of the positives identified as negatives. In addition, to assess the generalization performance of the model, the receiver operating characteristic (ROC) curves were used. The AUC is the area calculated under ROC curve plotted by FP rate vs TP rate, which is a quantitative indicator of the robustness of the model. Its values range from 0 to 1. For convenience, the method is called PsePSSM-DCCA-LFDA in this paper, which is used to predict apoptosis protein subcellular localization. To provide an intuitive picture, the flowchart of PsePSSM-DCCA-LFDA method is shown in Fig. 1. We have implemented it in MATLAB R2014a. The PsePSSM-DCCA-LFDA prediction model is detailed below: 1) Input CL317 dataset and ZW225 dataset, respectively, which contain apoptosis protein sequences and the class label corresponding to all kinds of proteins; 2) The 20 + 20 × ξdimension feature vector is generated by PsePSSM algorithm. Using DCCA coefficient, the protein sequence is extracted to generate 190 dimension feature vectors. By combining these two methods, the two different apoptosis protein datasets generate the corresponding feature extraction matrices ofX = 317 × (190 + (20 + 20 × ξ))and X = 225 × (190 + (20 + 20 × ξ)), respectively; 3) Using the LFDA method to solve T LFDA ¼ arg max and the matrix Z is obtained by removing the redundant information in the apoptosis protein sequences; 4) The matrix Z after dimensionality reduction are input into the SVM classifier, and the protein subcellular localization prediction is performed by jackknife test; 5) According to the accuracy of prediction, the optimal parameters of the model are selected, including the ξ values andSvalues of parameters, the selection of the dimension reduction algorithm and the dimensionality; 6) Calculate the Sens, Spec, MCC, OA and AUC values of the model, and evaluate prediction performance of the model; 7) Using the independent testing dataset ZD98 to test the PsePSSM-DCCA-LFDA prediction model. Selection of optimal parameter ξ andS In this study, the apoptosis protein sequences are extracted by the fusion PsePSSM algorithm and the DCCA coefficient algorithm, and obtain the feature information in the protein sequences. 
It is worth noting that both the PsePSSM algorithm and the DCCA coefficient algorithm can control the validity of the algorithm to extract the feature information of the protein sequence by adjusting some of the parameters in the algorithm. How to get the best parameters of these two feature extraction algorithms is very important for us to construct a protein subcellular localization prediction model. In order to discover the merits of the feature parameters, we use CL317 and ZW225 datasets as the research object, the best parameters of the model are selected by the prediction accuracy under different parameters. In this paper, the PsePSSM algorithm is used to carry out feature extraction on protein sequences, and the ξ value indicates the sequence-order information of the amino acid residues in the protein sequence. If the ξ value is set too large, the feature vector dimension of the protein sequence is too high, resulting in more redundant information, which affects the prediction effect. If the ξ value is set too small, the feature vector contains very little sequence information, and the features of the protein sequence of the apoptosis protein dataset cannot be extracted comprehensively. To find the optimal For the different ξ values, the apoptosis protein datasets CL317 and ZW225 are classified by SVM respectively. The SVM is used to select the radial basis function (RBF) and the results are tested by jackknife method. The overall prediction accuracy of each class protein and overall prediction accuracy in the apoptosis protein datasets are obtained, as are shown in Tables 1 and 2. Table 1 shows that the OA of CL317 dataset are different with constant change of ξ value. The highest prediction accuracy of mitochondrial proteins reach 70.6% when ξ = 3, which is 32.4 and 14.7% higher than when ξ values are 0 and 1, respectively. The prediction accuracy of membrane proteins is 87.3% when ξ = 10. The OA of CL317 dataset reach 83.6 and 83.9% when ξ = 3 and ξ = 10, respectively, higher than that when ξ values are taking other values. Table 2 shows that the OA of ZW225 dataset are different with constant change of ξ value. For cytoplasmic proteins, the highest predictive accuracy is 81.4% when ξ values are 1, 3, 4 and 6, respectively. For membrane proteins, the highest prediction accuracy is 93.3% when ξ = 0, which is 7.9% higher than when ξ = 5. From the overall prediction accuracy, when ξ values are 3 and 6, the OA of ZW225 dataset reach 77.3 and 77.8%, respectively, which is 6.6 and 7.1% higher than that when ξ = 0. To select the optimal parameters of the PsePSSM algorithm in the subcellular prediction model of apoptosis proteins, CL317 and ZW225 datasets are selected as the training datasets. Fig. 2 shows the OA changes when different ξ values are chosen in CL317 and ZW225 datasets. It can be seen from Fig. 2 that the prediction accuracy of the two datasets is changing with the change of the ξ value. In addition, CL317 and ZW225 datasets reach the highest accuracy, whenξ = 3 andξ = 6, respectively. But in order to unify the model parameters, ξ = 3 is chosen in the model. Therefore, the PsePSSM algorithm is used to extract the protein sequence, and each protein sequence to obtain 20 + 20 × ξ = 20 + 20 × 3 = 80 dimension feature vector. In the feature extraction process by using DCCA coefficient, the selection of Svalue has a crucial influence on the construction of the model. Sis used to determine the length of each overlapping portion of the detrended cross-correlation analysis. 
Because the length of the shortest protein sequence in the benchmark dataset is 50, the maximum value allowed for S is 49. To find the optimal Svalue in the model, set Svalues from 5 to 49 in turn. For the different Svalues, the apoptosis protein datasets CL317 and ZW225 are classified by SVM respectively. The SVM is used to select RBF and the results are tested by jackknife method. The prediction accuracy of each class protein and the overall prediction accuracy in the apoptosis protein datasets are obtained, as are shown in Tables 3 and 4. Table 3 shows that the OA of CL317 dataset are different with constant change of S value. The accuracy of cytoplasmic proteins reach 97.3% when S = 30 and S = 35, respectively. The highest prediction accuracy of mitochondrial proteins reach 92.7%, when S = 45 and S = 49, respectively, which is 12.7% higher than whenS = 5. The accuracy of nuclear proteins is 67.3% when S = 49, which is 21.1% higher than whenS = 5. The accuracy of secreted proteins is 88.2% when S = 25. From overall prediction accuracy, the OA of CL317 dataset is 85.8% when S = 35, which is 9.8% higher than when S = 5. Table 4 shows that the OA of ZW225 dataset are different with constant change of Svalue. The accuracy of cytoplasmic proteins reach 87.1% whenS = 49. The highest prediction accuracy of membrane proteins reach 91.0% when S values are 20, 25 and 40, respectively. The accuracy of nuclear proteins reach 75.6% when S = 40 and S = 45, respectively. From the overall prediction accuracy, the OA of ZW225 dataset is 82.7% when S = 40, which is higher than other parameters. In our current study, two apoptosis protein datasets CL317 and ZW225 are selected as the training datasets. To determine the optimal parameters of DCCA coefficient algorithm in the model, Fig. 3 shows the change of OA in CL317 dataset and ZW225 dataset by choosing differentS. It can be seen from Fig. 3 thatSvalues are different for the highest overall prediction accuracy of two datasets. The average overall prediction accuracy of CL317 and ZW225 datasets is the highest when S = 40. That is, the DCCA coefficient algorithm chooses optimal parameterS = 40. At this time, the 190 dimension feature vector can be obtained by extracting each protein sequence by DCCA coefficient method. Selection of dimensionality reduction method and optimal dimension The increasing dimension of the dataset makes the classification more difficult and the development to a certain extent can cause curse of dimensionality. For high-dimensional data, firstly, dimensionality reduction is carried out, and then data after dimensionality reduction is input into the learning system. In order to achieve the ideal protein subcellular localization prediction accuracy, the PsePSSM and DCCA coefficients are first fused to extract features of the protein sequences. In the discussion of section 3. [92] and LFDA (Local Fisher Discriminant Analysis) dimensionality reduction method are used to compare the effect of protein subcellular localization overall prediction accuracy by using these four dimensionality reduction methods. In this study, we use the SVM to classify with the radial basis kernel function, and the results are tested by jackknife method. The overall prediction accuracy of subcellular localization of two apoptosis protein datasets are obtained with different dimensionality reduction methods and under different dimensions, as shown in Tables 5 and 6. 
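Because the comparisons in Tables 5 and 6 hinge on the LFDA projection, a compact sketch of that step is included here (NumPy/SciPy). The within- and between-class weight matrices follow the standard construction of the cited LFDA method, the affinity uses the local scaling with K = 7 described earlier, the small ridge added to S^(w) is only a numerical safeguard, and class labels are assumed to be coded 0, ..., c−1.

```python
import numpy as np
from scipy.linalg import eigh

def lfda(X, y, dim=10, knn=7):
    """Local Fisher Discriminant Analysis projection (a compact sketch).

    X: (n, d) feature matrix; y: (n,) integer class labels coded 0..c-1."""
    n, d = X.shape
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    sigma = np.sort(dist, axis=1)[:, knn]                  # local scaling (K-th neighbour)
    A = np.exp(-dist / np.outer(sigma, sigma))             # affinity matrix of the text

    same = (y[:, None] == y[None, :])
    n_l = np.bincount(y)[y]                                 # class size of each sample
    Ww = np.where(same, A / n_l[None, :], 0.0)              # local within-class weights
    Wb = np.where(same, A * (1.0 / n - 1.0 / n_l[None, :]), 1.0 / n)

    def scatter(W):
        # 0.5 * sum_ij W_ij (x_i - x_j)(x_i - x_j)^T via a graph-Laplacian identity
        Wsym = 0.5 * (W + W.T)
        Lap = np.diag(Wsym.sum(axis=1)) - Wsym
        return X.T @ Lap @ X

    Sw, Sb = scatter(Ww), scatter(Wb)
    evals, evecs = eigh(Sb, Sw + 1e-9 * np.eye(d))          # generalized eigenproblem
    T = evecs[:, np.argsort(evals)[::-1][:dim]]             # top-`dim` directions
    return X @ T                                            # reduced representation Z
```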
As can be seen from Table 5, for the CL317 dataset, the choice of dimensionality reduction method and dimension has a significant effect on the accuracy of protein subcellular prediction. When the Laplacian Eigenmaps and AKPCA methods are used and the dimension is 50, the CL317 dataset obtains its highest overall prediction accuracy, which is 84.9 and 89.6%, respectively. When the PCA method is used and the dimension is 40, the highest overall prediction accuracy of the CL317 dataset is 89.3%. When the LFDA method is used and the dimension is 10, the overall prediction accuracy is the highest, 99.7%. This shows that the best dimension for the CL317 dataset differs between dimensionality reduction methods. Comparing the overall prediction accuracy across the different dimensionality reduction methods and dimensions, we find that the LFDA method with dimension 10 gives the highest overall prediction accuracy, 10.1% higher than the AKPCA method with dimension 50. Fig. 4 shows this more intuitively: for the CL317 dataset, the highest overall prediction accuracy of the model is achieved when the LFDA dimensionality reduction method is selected and the dimension is 10. As can be seen from Table 6, for the ZW225 dataset, the choice of dimensionality reduction method and dimension likewise has a significant effect on the accuracy of protein subcellular prediction. When the PCA method is selected and the dimension is 30, the highest overall prediction accuracy of 85.8% is achieved on the ZW225 dataset. When the Laplacian Eigenmaps method is used and the dimension is 30 or 50, the overall prediction accuracy is the highest, 82.2%. When the AKPCA method is used and the dimension is 40, the highest overall prediction accuracy is 86.2%. When the LFDA method is used and the dimension is 10, 20, 30, 40, 50, 60, 70 or 80, the highest overall prediction accuracy is 99.6%. This indicates that the choice of the optimal dimension is closely related to the dimensionality reduction method used. Comparing the overall prediction accuracy across the different dimensionality reduction methods and dimensions, we find that the LFDA method with dimension 10 gives the highest overall prediction accuracy, 17.4% higher than the Laplacian Eigenmaps method with dimension 30. Fig. 5 shows this more intuitively: for the ZW225 dataset, the highest overall prediction accuracy of the model is achieved with the LFDA method at dimensions 10, 20, 30, 40, 50, 60, 70 or 80. Since the two apoptosis protein datasets CL317 and ZW225 are used as the training sets, and in order to unify the parameters of the model, the LFDA dimensionality reduction method is adopted in this paper and the optimal dimension is set to 10. Effect of feature extraction algorithm on results A feature extraction method converts the character representation of a protein sequence into a numerical representation, using a feature vector to represent the protein sequence information. The PsePSSM method captures the homology and sequence-order information of the amino acids in the protein sequences.
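The dimension scan summarized in Tables 5 and 6 above can be reproduced with a short cross-validation loop. The sketch below uses scikit-learn (an assumption of this illustration, not a detail from the paper): a reducer and an RBF-SVM are wrapped in a pipeline and each candidate dimension is scored by leave-one-out (jackknife) accuracy. PCA is shown as a stand-in reducer; an LFDA implementation would be substituted in the same pipeline slot.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def jackknife_accuracy(X, y, n_components):
    """Overall accuracy (OA) under leave-one-out (jackknife) testing for
    an RBF-SVM after reducing the fused features to n_components."""
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=n_components),  # LFDA would go here
                          SVC(kernel="rbf"))
    scores = cross_val_score(model, X, y, cv=LeaveOneOut())
    return scores.mean()

# X: n_samples x 270 matrix of fused PsePSSM (80) + DCCA (190) features
# y: class labels; scan candidate dimensions 10, 20, ..., 80
# best_dim = max(range(10, 90, 10), key=lambda d: jackknife_accuracy(X, y, d))
```

Leave-one-out is affordable here because the benchmark datasets contain only a few hundred sequences each; for larger datasets a k-fold split would be the natural substitute.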
The DCCA coefficient method is an extension of DCCA and DFA (detrended fluctuation analysis). Here, only the evolutionary information represented in the form of the PSSM is adopted as the considered property. The PsePSSM algorithm and the DCCA coefficient method are combined to obtain more protein sequence information, but this produces high-dimensional features containing more redundant variables, which can degrade the model. The LFDA dimensionality reduction method uses the local within-class scatter matrix and the local between-class scatter matrix to remove redundant information, based on the feature information of the protein sequences in the dataset and the corresponding class labels. In this paper, the optimal feature extraction algorithm is selected by comparing the influence of different feature extraction methods on the prediction results. The prediction results on the two apoptosis protein datasets CL317 and ZW225 are shown in Tables 7 and 8. Furthermore, we analyze the robustness of the model under the different feature extraction algorithms using ROC curves. The ROC curve is defined for positive versus negative (two-class) classification, whereas apoptosis protein subcellular localization prediction is a multi-class problem. We therefore first use the one-versus-rest (OVR) strategy to transform the multi-class problem into a set of two-class problems: one class is selected as the positive samples and all other classes as the negative samples [69]. The true positive rates and false positive rates of these two-class problems are then averaged to obtain the final result [43]. Figures 6 and 7 show the ROC curves obtained by the four different feature extraction methods for the CL317 dataset and the ZW225 dataset, respectively. Table 7 shows that the OA of the CL317 dataset differs between feature extraction algorithms. The OA of the PsePSSM algorithm reaches 83.6%, which is 1.9% lower than that of the DCCA coefficient algorithm. The OA of the PsePSSM-DCCA algorithm is 86.8%, which is 3.2 and 1.3% higher than the PsePSSM and DCCA coefficient algorithms, respectively. The LFDA algorithm is then used to reduce the dimensionality of the fused features. The accuracy of each class is clearly improved by using the LFDA algorithm, and the OA of the CL317 dataset reaches 99.7%. For the PsePSSM algorithm, the accuracy of secreted proteins reaches 70.6%, which is lower than that of the DCCA coefficient, PsePSSM-DCCA and PsePSSM-DCCA-LFDA algorithms. The accuracy of secreted proteins is 100% with the PsePSSM-DCCA-LFDA algorithm, which is 29.4% higher than with the PsePSSM method. Fig. 6 shows that PsePSSM-DCCA-LFDA covers the largest area under the ROC curve, with an AUC value of 0.9842, while the AUC values of PsePSSM, DCCA coefficient and PsePSSM-DCCA are 0.9591, 0.9520 and 0.9587, respectively. Table 8 shows that the OA and the per-class accuracy on the ZW225 dataset also differ between feature extraction algorithms. The OA of the PsePSSM algorithm reaches 77.3%, which is 5.4, 7.6 and 22.3% lower than the DCCA coefficient, PsePSSM-DCCA and PsePSSM-DCCA-LFDA algorithms, respectively. The OA of the PsePSSM-DCCA algorithm is 84.9%, which is 7.6 and 2.2% higher than the PsePSSM and DCCA coefficient algorithms, respectively. Using the LFDA algorithm to reduce the dimensionality of the PsePSSM-DCCA features, the prediction accuracy of the four protein classes in the ZW225 dataset is improved remarkably, and the OA of the model reaches 99.6%.
Fig. 7 shows that PsePSSM-DCCA-LFDA covers the largest area under the ROC curve, with an AUC value of 0.9805, while the AUC values of PsePSSM, DCCA coefficient and PsePSSM-DCCA are 0.9380, 0.9386 and 0.9464, respectively. After analyzing and comparing the prediction results and the robustness of the prediction model on the CL317 and ZW225 datasets under the four different feature extraction methods, we choose PsePSSM-DCCA-LFDA as the feature extraction method in this paper. Performance of prediction model In the PsePSSM-DCCA-LFDA prediction model, protein sequence information is extracted by fusing the PsePSSM and DCCA coefficient methods, and the subcellular localization of the apoptosis protein datasets is then predicted by SVM after LFDA dimensionality reduction. According to the above analysis, ξ = 3 is selected for PsePSSM and S = 40 for the DCCA coefficient; the LFDA method is used to reduce the dimension of the dataset, with the optimal dimension set to 10; and the RBF is selected as the kernel function of the SVM. The most rigorous jackknife test is used to evaluate the CL317 and ZW225 datasets, and the main results are shown in Table 9. As can be seen from Table 9, the OA of the CL317 dataset is 99.7% under the jackknife test. The sensitivity of each class is 100% except for cytoplasmic proteins, whose sensitivity is 99.1%. The specificity of each class is 100% except for mitochondrial proteins. The OA of the ZW225 dataset is 99.6% under the jackknife test. The sensitivity, specificity and MCC of mitochondrial and nuclear proteins are 100%, 100% and 1, respectively. The sensitivity of cytoplasmic proteins is 100%, and the specificity and MCC are 99.4% and 0.99, respectively. Comparison with other methods In this section, to demonstrate the effectiveness of the proposed PsePSSM-DCCA-LFDA method, we compare it with other recently reported prediction methods on the same apoptosis protein datasets. All the methods are evaluated using the jackknife cross-validation test. Tables 10 and 11 detail the comparison between the proposed method and other prediction methods on the CL317 and ZW225 datasets, respectively. As can be seen from Table 10, the OA of the CL317 dataset is 99.7% using PsePSSM-DCCA-LFDA, which is 2.2-17% higher than the other prediction methods. The overall accuracy of our method is higher than that of ID [93], ID_SVM [39], DF_SVM [21], FKNN [40] and others. The sensitivity for each protein class is also listed. For example, the sensitivities of mitochondrial, nuclear, secreted, endoplasmic reticulum and membrane proteins reach 100% with our method, while those of ID [93] are 85.3, 82.7, 88.2, 83.0 and 81.8%, respectively. For cytoplasmic proteins, the sensitivity of our method is 99.1%, which is also the highest and is 12.7% higher than that of the Auto_Cova [42] method. For the CL317 dataset, our proposed method therefore achieves satisfactory prediction results. As can be seen from Table 11, the OA of the ZW225 dataset is 99.6% using PsePSSM-DCCA-LFDA, which is about 16.5, 15.6, 13.8 and 12.5% higher than EBGW_SVM [15], DF_SVM [21], FKNN [40] and Auto_Cova [42], respectively. Especially for the most difficult case, mitochondrial proteins, the predictive accuracy is improved to 100% by our method, which is 40% higher than that of EBGW_SVM [15] and 36% higher than that of DF_SVM [21]. This indicates that the proposed model performs very well for the prediction of mitochondrial proteins among apoptosis proteins.
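The evaluation quantities used above, namely the macro-averaged one-versus-rest ROC/AUC of Figures 6 and 7 and the per-class sensitivity, specificity and MCC of Table 9, can be computed as in the following sketch. The use of scikit-learn and the exact averaging convention are assumptions of this illustration.

```python
import numpy as np
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc, confusion_matrix

def ovr_macro_auc(y_true, y_score, classes):
    """One-versus-rest ROC analysis: y_score is an n_samples x n_classes
    array of decision values or probabilities; per-class AUCs are
    macro-averaged into a single number."""
    Y = label_binarize(y_true, classes=classes)
    aucs = []
    for k in range(len(classes)):
        fpr, tpr, _ = roc_curve(Y[:, k], y_score[:, k])
        aucs.append(auc(fpr, tpr))
    return float(np.mean(aucs))

def per_class_metrics(y_true, y_pred, classes):
    """Per-class sensitivity, specificity and MCC (one-vs-rest), plus OA."""
    cm = confusion_matrix(y_true, y_pred, labels=classes)
    total = cm.sum()
    oa = np.trace(cm) / total
    report = {}
    for k, c in enumerate(classes):
        tp = cm[k, k]
        fn = cm[k, :].sum() - tp
        fp = cm[:, k].sum() - tp
        tn = total - tp - fn - fp
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
        mcc = (tp * tn - fp * fn) / denom if denom else 0.0
        report[c] = {"sensitivity": sens, "specificity": spec, "mcc": mcc}
    return oa, report
```

For an SVM classifier, the per-class scores needed by the first function can be obtained, for example, from a one-versus-rest decision function or class probabilities, collected over the jackknife folds.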
In general, for the ZW225 dataset, our proposed method achieves satisfactory prediction results. To further validate the practical prediction ability of the model, we use the independent dataset ZD98 to test it. When using PsePSSM, ξ = 3 is selected; when using the DCCA coefficient, S = 40 is selected; the LFDA method is used to reduce the dimension of the dataset with the optimal dimension of 10; and the RBF is selected as the kernel function of the SVM. The results on the ZD98 dataset are obtained by the jackknife cross-validation method and compared with other reported prediction methods. Table 12 shows the predictive results of subcellular localization on the ZD98 dataset. As can be seen from Table 12, the OA of the ZD98 dataset using PsePSSM-DCCA-LFDA is 3.1-11.2% higher than that of the other methods: 9.2% higher than ID [93] and 11.2% higher than ID_SVM [39] and DWT_SVM [41]. In addition, the sensitivities of mitochondrial, cytoplasmic and membrane proteins reach 100% with our method, while those of ID_SVM [39] are 84.6, 95.3 and 93.3%, respectively. Especially for mitochondrial proteins, the prediction accuracy of DWT_SVM [41] is 53.9%, which is 46.1% lower than that of our method, showing that our method performs well for mitochondrial protein prediction. The accuracy of the other proteins class with the algorithm proposed in this paper is 100%, which is 41.7% higher than that of the ID_SVM method [39]. In conclusion, the above results indicate that the prediction model we construct can significantly improve the prediction accuracy of protein subcellular localization and achieves satisfactory prediction results. Discussion In this paper, we propose a novel method for predicting apoptosis protein subcellular localization, called PsePSSM-DCCA-LFDA. When using PsePSSM, ξ = 3 is selected; when using the DCCA coefficient, S = 40 is selected; the LFDA method is used to reduce the dimension of the dataset with the optimal dimension of 10; and the RBF is selected as the kernel function of the SVM. The overall prediction accuracies are 99.7, 99.6 and 100% for the CL317, ZW225 and ZD98 datasets, respectively, under the most rigorous jackknife test, which is better than other state-of-the-art methods. The OA of the CL317 dataset is 99.7% using PsePSSM-DCCA-LFDA, which is 2.2-17% higher than other prediction methods. The OA of the ZW225 dataset is 99.6%, which is 1.8-16.5% higher than other prediction methods. The OA of the ZD98 dataset is 3.1-11.2% higher than that of other methods. PsePSSM-DCCA-LFDA thus demonstrates good performance in predicting apoptosis protein subcellular localization, which is mainly due to the following reasons: 1. Both the PsePSSM algorithm and the DCCA coefficient method extract feature information from the PSSM corresponding to the protein sequences. Although both algorithms mine the evolutionary information of protein sequences to obtain the best numerical representation, they differ: PsePSSM feature extraction takes the sequence-order information of the protein sequence into account, whereas the DCCA coefficient treats the columns of the PSSM as non-stationary time series, removes local trends by least-squares fitting, and measures the detrended cross-correlation between pairs of PSSM columns.
2. LFDA can effectively remove redundant information from the protein sequence features without losing important information in the apoptosis protein sequences. 3. The SVM classification algorithm can deal with high-dimensional data, avoids over-fitting, and effectively discards non-support vectors. Protein subcellular localization information can help explain disease mechanisms and provide a theoretical basis for treatment. Medical studies have found that abnormal subcellular localization of proteins occurs when cells become pathological. Furthermore, abnormally localized proteins provide molecular markers for the early diagnosis of diseases and can become molecular targets for the design of new drugs, thereby contributing to the goal of curing diseases. Currently, our method is still trained on small datasets: CL317, ZW225 and ZD98 are widely used benchmark datasets, and it is difficult to collect large-scale experimentally verified data. In the next step, we will build a large-scale protein subcellular localization dataset for prediction research. Conclusion With the advent of the big-data age, the gap between the number of proteins in public databases and their functional annotations is widening. A critical challenge of bioinformatics is to develop automated methods for determining the structures and functions of proteins quickly and accurately [94]. In this paper, a novel method for protein subcellular localization prediction is proposed. We use the LFDA dimensionality reduction method and the SVM algorithm to predict apoptosis protein subcellular localization. Firstly, we fuse the PsePSSM and DCCA coefficient methods to carry out feature extraction on the protein sequences. Then, the extracted feature vectors are reduced in dimension using the LFDA method, and the subcellular localization of apoptosis proteins is predicted by the SVM algorithm. Under the jackknife test, the OA on the three benchmark datasets reaches 99.7, 99.6 and 100%, respectively. The results show that the PsePSSM-DCCA-LFDA method performs well compared with other methods that use the same benchmark datasets. Since a user-friendly and publicly accessible web server is one of the important factors in building a practical predictive system [78,88], for the convenience of researchers we will develop a web server or standalone version of the prediction method presented in this paper.
9,606.8
2018-06-19T00:00:00.000
[ "Computer Science" ]
Optimum Design of Infinite Perforated Orthotropic and Isotropic Plates : In this study, an attempt was made to introduce the optimal values of effective parameters on the stress distribution around a circular/elliptical/quasi-square cutout in the perforated orthotropic plate under in-plane loadings. To achieve this goal, Lekhnitskii’s complex variable approach and Particle Swarm Optimization (PSO) method were used. This analytical method is based on using the complex variable method in the analysis of two-dimensional problems. The Tsai–Hill criterion and Stress Concentration Factor (SCF) are taken as objective functions and the fiber angle, bluntness, aspect ratio of cutout, the rotation angle of cutout, load angle, and material properties are considered as design variables. The results show that the PSO algorithm is able to predict the optimal value of each effective parameter. In addition, these parameters have significant effects on stress distribution around the cutouts and the load-bearing capacity of structures can be increased by appropriate selection of the effective design variables. The main innovation of this study is the use of PSO algorithm to determine the optimal design variables to increase the strength of the perforated plates. Finite element method (FEM) was employed to examine the results of the present analytical solution. The results obtained by the present solution are in accordance with numerical results. Introduction Nowadays, the design of metal and composite plates with cutouts is of a great importance due to their extensive application in different industries [1,2]. It is well known that, due to geometric changes in different structures, highly localized stresses are created around discontinues areas, at which structural failure usually occurs [3]. Therefore, the analysis of this phenomenon, called stress concentration, has a significant importance for designers of engineering structures. The fracture strength of these structures depends strongly on the stress concentration caused by cutouts. Stress concentration and fracture criterions are very important in evaluating the reliability of engineering structures [4]. For instance, designing vehicles with the purpose of weight reduction in order to decrease fuel consumption and utilize engines with less power are some applications of these plates. In this study, according to the extensive usage of different types of cutouts and considering a long process of trial and error to find their optimum design, particle swarm optimization (PSO) algorithm (see, e.g., [5]) is employed for the integrity of the search process in obtaining the optimum design. The main innovation of this paper is the use of PSO algorithm to determine the optimal design variables to increase the strength of the perforated plates. Literature Review Complex potential method established by G.V. Kolosov and N.I Muskhelishvili (see, e.g., [6][7][8]) has been applied for anisotropic plates by Green and Zerna [7], Lekhniskii [9], Sih et al. [10], Lekhniskii [11], Bigoni and Movchan [12], Radi et al. [13], Craciun and Soós [14], Craciun and Barbu [15], and Chaleshtari and Jafari [16]. Tsutsumi et al. [17] investigated the solution of a semi-infinite plane with one circular hole. Their solution was based on repeatedly superposing the solution of an infinite plane with one circular hole and of a semi-infinite plane without holes to eliminate the stresses arising on both boundaries. 
Applying Lekhnitskii's method [9,11], Rezaeepazhand and Jafari [18] presented an analytical solution for the stress analysis of orthotropic plates with different cutouts and evaluated the stress distribution around a quasi-square cutout in orthotropic plates. They studied the effect of various parameters such as load angle, fiber angle, and cutout orientation for perforated orthotropic plates. Yang et al. [19] presented an analytical solution for the stress concentration problem of an infinite plate with a rectangular cutout under biaxial tensions. Rao et al. [20] found stress distribution around square and rectangular cutouts using Savin's formulation [6]. Sharma [21] used Mushkhelishvili's complex variable approach [8] and presented the stress field around polygonal shaped cutouts in infinite isotropic plates. The effect of cutout shape, bluntness, load angle, and cutout orientation on the stress distribution was studied for triangular, square, pentagonal, hexagonal, heptagonal, and octagonal cutout shapes. Banerjee et al. [22] studied stress distribution around the circular cutout in isotropic and orthotropic plates under transverse loading using three-dimensional finete element models created in ANSYS. They investigated the effects of plate thickness, cutout diameter, and material on the amount of stress concentration in orthotropic plates. Marin et al. [23] studied the structural stability of an elastic body with voids and straight cracks in dipolar elastic bodies. Using the method of singular integral equations (see, e.g., [24]), Kazberuk et al. [25] presented the stress distribution in the quasi-orthotropic plane weakened by semi-infinite rounded V-notch. Optimal structures with irregular geometry but with simple fields inside were investigated by Vigdergauz [26,27,28], Grabovsky and Kohn [29,30], Vigdergauz [31,32,33]. The related problem of an optimal shape of a cavity in an elastic plane was considered by Cherepanov [34], Banichuk [35], Banichuk and Karihaloo [36], Vigdergauz [28], Banichuk et al. [37], Vigdergauz and Cherkayev [38], Markenscoff [39], and Cherkaev et al. [40]. In addition, Cherepanov [34,41] proposed an effective exact solution of some inverse plane problems of the theory of elasticity concerning the determination of equally strong outlines of holes. Sivakumar et al. [42] studied the optimization of laminate composites containing an elliptical cutout by the genetic algorithm (GA) method (see, e.g., [43]). In this research, design variables were the stacking sequence of laminates, thickness of each layer, the relative size of cutout, cutout orientation, and ellipse diameters. The first and second natural frequencies were considered as a cost function. Cho and Rowlands [44] showed GA ability to minimization of tensile stress concentration in perforated composite laminates. Chen et al. [45] used a combination of PSO and finite element analysis to optimize composite structures based on reliability design optimization. Zhu et al. [46] considered the optimization of composite strut using the GA method and Tsai-Wu failure criterion [47]. They paid attention to minimizing the weight of the structure and increasing the buckling load. Fiber volume fraction and stacking sequence of laminates were considered as design variables. Artar and Daloglu [48] used the GA to determine the optimum variable to achieve suitable steel frames. 
Moussavian and Jafari [49] calculated the optimal values of effective parameters on the stress distribution around a quasi-square cutout using different optimization algorithms such as Particle Swarm Optimization (PSO), GA, and Ant Colony Optimization (ACO) [50]. To achieve this goal, the analytical method based on Lekhnitskii's method was employed to calculate the stress distribution around a square cutout in the symmetric laminated composite. Jafari and Rohani [51] studied the optimization of perforated composite plates under tensile stress using GA method. The analytical solution was used to determine the stress distribution around different holes in perforated composite plates. Using GA, Jafari and Hoseyni [52] introduced the optimum parameters in order to achieve the minimum value of stress around different cutouts. Vosoughi and Gerist [53] proposed a hybrid finite element (FE), PSO, and conventional continuous GA (CGA) for damage detection of laminated composite beams. The finite element method (FEM) was employed to discretize the equations. Their design variables were damage ratios, the number of damaged elements, and the number of layers. Manjunath and Rangaswamy [54] optimized the stacking sequence of composite drive shafts made of different materials using PSO. The optimum results obtained by PSO are compared with results of GA and found that PSO yields better results than GA. Ghashochi Bargh and Sadr [55] used the PSO algorithm to the lay-up design of symmetrically laminated composite plates for maximization of the fundamental frequency. The design variables were the fiber orientation angles, edge conditions, and plate length/width ratios. Several algorithms are valid alternatives to PSO. Some of these alternatives are not heuristic algorithms but they have a strong theory behind them [56][57][58]. This paper aims to introduce a suitable mapping function and optimal cutout geometry in the perforated orthotropic plate under uniaxial tensile loads, biaxial loads, and shear loads. The design variables are cutout orientation, the aspect ratio of the cutout, bluntness, load angle, and fiber angle. Minimizing normalized stress around the cutout the Tsai-Wu criterion is considered as a cost function of the particle swarm optimization algorithm. The normalized stress is the ratio of the maximum value of circumferential stress at the edge of the cutout to the nominal or applied stress, which is called stress concentration factor (SCF). Theoretical Formulation The problem studied in this paper is an infinite plate containing a quasi-square cutout. As shown in Figure 1, the plate is under biaxial loading at an angle θ 1 (load angle) with respect to the x-axis. The square cutout has arbitrary orientations such that its major axis is directed at an angle θ 3 (rotation angle) with respect to the x-axis and fiber angle is θ 2 [59]. In this paper, the stress function is converted to an analytical expression with undetermined coefficients and displacements, and stresses can be calculated by stress function being determined. In this case, it is assumed that the plate has a linear elastic behavior. Because of the traction-free boundary conditions on the cutout edge, the stresses σ ρ and τ ρθ at the cutout edge are zero and the circumferential stress σ θ is the only remaining stress. 
The equilibrium equations are satisfied by introducing F(x, y) as stress function [60][61][62] according to Equation (1) The orthotropic stress-strain relation for plane problems in terms of the components of the reduced compliance matrix is as follows [62]: The constitutive vector-matrix equation of a orthotropic material in the global coordinate system is as follows [62]: where S ij are the components of the orthotropic compliance matrix. The inverted relation is [62] with the components of the stiffness matrix C ij . The transform rules for S ij into C ij and vice versa are presented in [62]. For plane stress state, σ z = τ xz = τ yz = 0 is assumed, which means the last equation is degenerated to It is obvious that, if the shear stresses τ xz = τ yz = 0, the conjugated strain must be zero if we have orthotropic material behavior. Finally, the remaining part of the last equation is Thus, we have three constitutive equations and one constraint The strain ε z can now be substituted in the expressions for σ x and finally we get In a similar manner, σ y can be expressed. The equations for both stresses can be solved with respect to the strains ε x and ε y and finally the reduced compliance components can be computed. By replacing stress-strain relations in compatibility relation, we obtain and rewriting the resultant equation in terms of stress function, the compatibility equation for orthotropic material yields: Lekhniskii [60] showed that this equation can be transferred to four linear operators of first order D k : and we obtain the characteristic equation as follows It can be proved, in general, that Equation (5) has four complex conjugate roots (µ 1 = µ 2 = ±i, µ 1 = µ 2 = − ± i) and the general expression for the stress function is: where [. . .] indicates the real part of the expression inside the brackets and z k = x + µ k y and µ k , k = 1, 2 are the roots of the characteristic equation of anisotropic materials. Finally, the stress components in terms of two potential functions of ϕ(z 1 ) and ψ(z 2 ) are expressed [52]: where with σ as applied load (see Figure 1). In the above-presented equations, by taking appropriate values of λ describing the type of loading and θ 1 for stress applied at infinity (σ ∞ x , σ ∞ y , τ ∞ xy ), uniaxial loading, equibiaxial loading, and shear loading can be considered. The following values of λ and θ 1 may be taken into Equation (8) to obtain various cases of in-plane loading: • inclined uniaxial tension: λ = 0 and θ 1 = 0; • equibiaxial tension: λ = 1 and θ 1 = 0; and We denote by ϕ (z 1 ), ψ (z 2 ) the derivatives of the functions ϕ(z 1 ) and ψ(z 2 ) with respect to z 1 and z 2 . These analytic functions can be determined by applying the boundary conditions. To calculate the stress components in the polar coordinates system, we use the following equations In these equations, Ω is the angle between the positive x-axis and the direction ρ ( Figure 2). Conformal Mapping To apply the Lekhnitskii's method to quasi-square cut out, establishing a relation between the cutout and a circular cutout is necessary [63]. A conformal mapping can be used to map the external area of a quasi-square cutout in z-plane into the area outside the unit circle in ξ-plane ( Figure 3). Such a mapping function is represented thus: where x and y are obtained as follows: The parameter w determines the bluntness factor and changes the radius of curvature at the corner of the cut out ( Figure 4). As can be concluded from Equations (12) and (13), w = 0 presents the circular cutout. 
The integer n in the mapping function determines the shape of the cutout: the number of cutout sides is n + 1. The bluntness w and the cutout orientation θ3 are important parameters that influence the stress distribution around the different cutouts. The parameter c is the aspect ratio of the cutout (length/width ratio), and Figure 5 shows the effect of these parameters on the cutout geometry. With increasing c at a constant value of w, the cutout is elongated in one direction. For circular and elliptical cutouts, c = 1 and c ≠ 1, respectively, and in both cases w is equal to zero. For an elliptical cutout, c is the ratio of the diameters of the ellipse (c = b/a), where a and b are the semi-major and semi-minor axes of the ellipse, respectively. Particle Swarm Optimization Particle Swarm Optimization (PSO) is a population-based stochastic search optimization algorithm [64,65]. The algorithm starts with a number of randomly generated initial solutions and searches for the optimum by moving these solutions through consecutive iterations. In each iteration, the position of each particle in the search space is updated based on the best position the particle has found itself and the best position found by the whole swarm during the search process. In each iteration, the particle velocities and positions are updated according to Equations (14) and (15), respectively [66], where Vi(t) and Xi(t) are the current velocity and position of the particle. Let Xi(t) = {x1(t), . . . , xNvar(t)} be the position of a particle in an Nvar-dimensional search space at iteration t. We denote by Xi(t + 1) and Vi(t + 1) the updated position and velocity, respectively, and by ω the inertia weight coefficient that controls the exploration and exploitation of the search space. c1 and c2 are two positive constants called the cognitive and social coefficients, respectively. A high inertia weight causes the particles to search new areas and perform a global search, whereas a low inertia weight keeps the particles within a limited area. When c1 increases, the particles tend to move toward their best individual experience and their motion toward the best group experience decreases, whereas increasing c2 makes the particles move toward the best group experience at the expense of their best individual experience. Let r1, r2 ∈ [0, 1] be two random numbers, and let Pi(t) and p*i(t) be the best individual and best group positions, respectively. Choosing appropriate values for c1, c2 and ω accelerates convergence, helps to find the global optimum, and prevents premature convergence to a local optimum. Here, the c1 and c2 parameters are updated as in Equation (16), where c1,f, c2,f, c1,i and c2,i are constant values. In addition, Equation (17) is used for the ω operator, where ωi and ωf are the initial and final values of the weight factor, respectively; I is the particle's current iteration; and Imax is the maximum number of iterations [67]. In an Nvar-dimensional problem, a particle is a row vector with Nvar elements, defined as P = [p1, p2, . . . , pNvar]. To begin the algorithm, a number of these particles (the initial population) must be created. The failure criterion and the SCF are taken as the cost function (C.F.) for the orthotropic and isotropic plates, respectively.
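Equations (14)-(17) are only described verbally here, so the following Python sketch shows one common realization: velocity and position updates with an inertia weight decaying linearly from ωi to ωf and cognitive/social coefficients varying linearly between their initial and final values. The particular coefficient bounds, the clipping of positions to the variable ranges, and the random seed are assumptions of this illustration; only the population size (40) and iteration limit (50) of Table 1 are taken from the paper. For the present problem, the decision vector would collect the design variables (load angle, fiber angle, cutout orientation, bluntness, aspect ratio) and cost would evaluate the SCF or the Tsai-Wu based cost function.

```python
import numpy as np

def pso_minimize(cost, lb, ub, n_particles=40, n_iter=50,
                 w_i=0.9, w_f=0.4, c1_i=2.5, c1_f=0.5, c2_i=0.5, c2_f=2.5,
                 seed=0):
    """Minimal PSO sketch following the update scheme of Eqs. (14)-(17):
    inertia weight and cognitive/social coefficients vary linearly with
    the iteration count. Coefficient bounds here are illustrative."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    x = rng.uniform(lb, ub, size=(n_particles, dim))       # positions
    v = np.zeros_like(x)                                    # velocities
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()                 # global best
    g_cost = pbest_cost.min()
    for t in range(n_iter):
        frac = t / max(n_iter - 1, 1)
        w = w_i + (w_f - w_i) * frac                        # Eq. (17)
        c1 = c1_i + (c1_f - c1_i) * frac                    # Eq. (16)
        c2 = c2_i + (c2_f - c2_i) * frac
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # Eq. (14)
        x = np.clip(x + v, lb, ub)                               # Eq. (15)
        costs = np.array([cost(p) for p in x])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = x[better], costs[better]
        if pbest_cost.min() < g_cost:
            g_cost = pbest_cost.min()
            g = pbest[np.argmin(pbest_cost)].copy()
    return g, g_cost
```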
It should be mentioned that, in [41,68], an alternative approach is presented. The SCF is defined as the ratio of the von Mises stress, i.e., the maximum value of the circumferential stress at the edge of the cutout (σθ), to the nominal or applied stress. In the case of a composite lamina, the strength is calculated using the Tsai-Wu criterion: where σf is the failure stress following from the Tsai-Wu criterion and σ1, σ2, τ6 are the transformed stress components in the material principal coordinates [62], which are calculated from σx, σy, τxy obtained in Equation (7). We denote by F1 and F2 the longitudinal and transverse tensile strengths, respectively, and by F6 the shear strength. In this case, the simplified Tsai-Wu criterion is used (no linear terms, orthotropic material behavior, plane stress state), as suggested by Tsai and Wu [47]. The Tsai-Wu criterion is a degenerated Gol'denblat-Kopnov (tensor-polynomial) criterion [69], which is an extension of the anisotropic von Mises [70] or orthotropic Hill [71] criterion. For isotropic materials, the cost function is defined as follows: with σ von Mises as the failure stress following from the von Mises criterion. By evaluating the C.F. for the variables p1, p2, p3, . . . , pNvar, the cost of each particle is obtained: Moreover, the value range of the design variables is defined as follows: Finally, each particle is updated based on its own best performance according to the condition: The velocity and position of a particle are updated on the basis of the best position among the particles according to the condition: if f(Xi(t + 1)) < f(P*i(t)), then P*i(t + 1) = Xi(t + 1). The values of the effective parameters for the PSO algorithm are listed in Table 1 (population size 40; maximum number of iterations 50). The convergence diagrams for the SCF and the fracture criterion (Tsai-Wu) with a quasi-square cutout (c = 1) under uniaxial loading are shown in Figures 6 and 7, respectively.
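For the isotropic case, the cost function above is simply the largest von Mises stress along the cutout edge normalized by the applied stress. The short sketch below makes that explicit; the boundary stress samples are assumed to come from the analytical solution (Equations (7) and (10)) or from an FE model, and the plane-stress von Mises expression used here is the standard one rather than a formula quoted from the paper.

```python
import numpy as np

def von_mises_plane_stress(sx, sy, txy):
    """Von Mises equivalent stress for a plane-stress state."""
    return np.sqrt(sx**2 - sx * sy + sy**2 + 3.0 * txy**2)

def scf_cost(boundary_stresses, applied_stress):
    """Cost function for the isotropic plate (sketch): maximum von Mises
    stress along the cutout edge divided by the applied stress.

    boundary_stresses: iterable of (sigma_x, sigma_y, tau_xy) tuples
    evaluated at sampled points of the cutout boundary."""
    vm = [von_mises_plane_stress(*s) for s in boundary_stresses]
    return max(vm) / applied_stress
```

Handing this function (or a Tsai-Wu based counterpart for the orthotropic lamina) to the PSO routine sketched earlier closes the optimization loop over the design variables.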
Solution Verification To examine the results obtained from the present analytical method, FEM (ABAQUS software) was employed. For this purpose, the optimum parameters for the quasi-square cutout were first determined using the PSO program code. Then, the cutout geometry was modeled in ABAQUS in accordance with the optimum parameters obtained from the program execution. To achieve an optimum mesh density and increased accuracy in the finite element results, the mesh was finer around the cutout than near the external boundaries of the plate. Accordingly, for an isotropic plate under shear loading, Figures 8 and 9 show the optimum stress distribution modeled in ABAQUS and MATLAB, respectively (θ3 = 90°, w = 0.078). The values obtained from the analytical solution and FEM are compared in Figure 10. The angle θ indicates the location of points on the cutout boundary relative to the horizontal axis. For isotropic plates, because of the symmetry of the stress distribution around the cutout, results are provided up to θ = 180°. The good agreement between the results obtained by the present solution and FEM shows the accuracy and precision of the present analytical solution. A comparison of the present results in a special case (θ3 = 0°) under shear loading with the results of Pilkey and Pilkey [72] for an elliptical cutout in an isotropic plate is shown in Figure 11. As shown in this figure, the investigation was conducted by varying the aspect ratio of the cutout. The conformity of the results obtained from the two methods indicates the accuracy of the present analytical solution. For an orthotropic plate containing an elliptical cutout with aspect ratio c = 2, the failure strength based on the Tsai-Wu failure criterion was compared with the results obtained by Ukadgaonker and Rao [73]. For this case, the fiber and rotation angles were taken as 60° and 0°, respectively, and the perforated plate was subjected to biaxial tension. Table 2 shows the agreement of the present solution with Ukadgaonker and Rao [73]. Results and Discussions The mechanical properties of the materials used are given in Table 3. The normalized stress and the Tsai-Wu criterion are considered as the cost functions for the PSO algorithm. Isotropic Plates For an isotropic plate with a quasi-square cutout, the variation of the optimal SCF with the bluntness parameter for different in-plane loadings is shown in Figure 12. According to this figure, the results for uniaxial and biaxial loadings differ from those for shear loading. For biaxial loading, the C.F. rises with increasing w, and the minimum C.F. occurs at w = 0. For uniaxial and shear loadings, the minimum C.F. occurs at w = 0.052 and w = 0.078, respectively. w = 0 indicates a circular cutout. In other words, for an isotropic plate with a quasi-square cutout under uniaxial and shear loadings, with w = 0.052 and w = 0.078, respectively, the minimum SCF is less than the SCF of a circular cutout. By changing the value of c, the aspect ratio of the cutout can be controlled. According to Equations (12) and (13), because the aspect ratio parameter c acts on the y-direction of the mapping function, the cutout shape is stretched in the y-direction. To study the effect of c, its value is considered between 1 and 2 (1 < c < 2). Figure 13 shows the effect of the aspect ratio c on the C.F. for various in-plane loadings at the optimal values of load angle and rotation angle and w = 0.05.
According to this figure, the C.F. varies linearly with c and, except for equibiaxial loading, the C.F. decreases with increasing c. The optimal values of the cost function for circular and elliptical cutouts (w = 0) are shown in Table 4 (uniaxial tensile, equibiaxial, and shear loading), and those for the rectangular cutout at different values of w are shown in Table 5. Figure 14 shows the variation of the normalized von Mises stress (cost function) around the cutouts in the optimal condition for the isotropic plate. Table 6 shows the cost function and one of the optimal modes for the quasi-square cutout in the isotropic plate (c = 1) when all effective parameters, such as rotation angle, load angle, and bluntness, are considered as design variables; the last column of this table gives the percent difference (P.D.) between the optimal C.F. of the quasi-square cutout and the corresponding value for a circular cutout. The stress distributions around the square cutout in the optimal condition for different values of w under uniaxial and biaxial tensile loading are shown in Figures 15 and 16. Orthotropic Plates For an orthotropic material, the ratio of the maximum stress created around the cutout to the applied stress is called the SCF. The variation of the SCF with the bluntness parameter w for a Graphite/Epoxy (T300/5208) plate with a quasi-square cutout under different in-plane loadings is illustrated in Figure 17. According to this figure, the minimum values of the C.F. for all three types of loading occur at non-zero values of w: the minimum C.F. occurs at w = 0.035, w = 0.020, and w = 0.045 for uniaxial, biaxial, and shear loadings, respectively. Since w = 0 is equivalent to a circular cutout, this means a square cutout leads to a lower SCF than a circular cutout. Figure 18 shows the effect of the aspect ratio of the cutout on the SCF for different types of loading. In this case, for w = 0.05, the optimal results are obtained at the optimal values of load angle, fiber angle, and rotation angle. According to this figure, there is a nearly linear relation between SCF and c. The optimal values of the cost function for circular and elliptical cutouts at different values of c and for the rectangular cutout at different values of the bluntness parameter w are tabulated in Tables 7 and 8, respectively. Figure 19 shows the stress distribution around quasi-square and elliptical cutouts for the graphite/epoxy plate in the optimal condition. Figures 20 and 21 show the variation of the cost function based on the Tsai-Wu failure criterion with the bluntness parameter w. The results of Figure 20 are for a square cutout (c = 1) under biaxial and shear loadings, whereas Figure 21 shows the strength variation of the graphite/epoxy plate with w for different values of c; unexpectedly, the optimal value of w is not zero. This means that, by selecting appropriate values of the bluntness parameter, the strength of the graphite/epoxy plate with a rectangular cutout, based on the Tsai-Wu criterion, is higher than that of a circular cutout. For different values of the bluntness w and the aspect ratio c, the optimal values of the effective parameters are listed in Table 9. Similar results are presented in Table 10 for the triangular cutout. For all values of w, the strength increases with increasing c.
In this paper, we present the results for the square cutout in more detail, while for the other cutouts only the final results are presented. Table 7 gives the optimal values of the design variables for different aspect ratios of the cutout (w = 0) under uniaxial tensile, equibiaxial, and shear loading. Table 11 gives the optimal values of all design variables for the quasi-square cutout (c = 1) to achieve the greatest fracture strength. As shown in this table, the maximum value of the Tsai-Wu strength occurs at w = 0. P.D. in this table refers to the percent difference between the optimal C.F. of the rectangular cutout and the corresponding value for a circular cutout. Finally, the optimal values of the design variables for the other cutouts are listed in Table 12. As shown in this table, for cutouts with an odd number of sides, the highest strength for all in-plane loads occurs at w = 0; this behavior is not always seen for cutouts with an even number of sides. For perforated composite plates made of different materials, the optimal values of all design variables (c = 1) are listed in Table 13. The results are obtained using the PSO algorithm, with the perforated plate subjected to uniaxial loading. They show that, for all materials, the optimal value of the bluntness parameter w is not zero. That is, for the case of c = 1, a square cutout with a certain value of w leads to a higher failure strength than a circular cutout. The percent difference between the failure strength of the plate with a square cutout and that with a circular cutout is shown in this table. The optimal cost function (Tsai-Wu strength) is highly dependent on the mechanical properties of the materials: the highest percentage difference is obtained for Boron/Epoxy and the lowest for E-glass/Epoxy, and the optimal value of the bluntness parameter w differs between materials. Conclusions In this study, the PSO algorithm was used to determine the optimal values of the parameters affecting the stress distribution around different cutouts in orthotropic/isotropic infinite plates under in-plane loading. The failure strength obtained from the Tsai-Wu criterion was considered as the cost function of the PSO algorithm. The analytical solution based on Lekhnitskii's method was used to calculate the stress components around the cutout. The results show that the bluntness (w), the aspect ratio of the cutout (c), the fiber angle (θ2), the load angle (θ1), and the cutout orientation (θ3) have significant effects in reducing the cost function, and that a higher failure strength can be achieved by appropriate selection of these parameters. In addition, the effect of the material properties of the perforated plates on the optimal design variables was studied; the optimal values of the design variables depend strongly on the mechanical properties of the perforated plate.
7,046.2
2020-04-11T00:00:00.000
[ "Engineering" ]
Energy-Efficient Autonomous Navigation of Solar-Powered UAVs for Surveillance of Mobile Ground Targets in Urban Environments In this paper, we consider the navigation of a group of solar-powered unmanned aerial vehicles (UAVs) for periodical monitoring of a set of mobile ground targets in urban environments. We consider the scenario where the number of targets is larger than that of the UAVs, and the targets spread in the environment, so that the UAVs need to carry out a periodical surveillance. The existence of tall buildings in urban environments brings new challenges to the periodical surveillance mission. They may not only block the Line-of-Sight (LoS) between a UAV and a target, but also create some shadow region, so that the surveillance may become invalid, and the UAV may not be able to harvest energy from the sun. The periodical surveillance problem is formulated as an optimization problem to minimize the target revisit time while accounting for the impact of the urban environment. A nearest neighbour based navigation method is proposed to guide the movements of the UAVs. Moreover, we adopt a partitioning scheme to group targets for the purpose of narrowing UAVs’ moving space, which further reduces the target revisit time. The effectiveness of the proposed method is verified via computer simulations. Introduction Unmanned aerial vehicles (UAVs) have found numerous applications in both military and civilian domains. They are excellent tools for target surveillance and monitoring [1][2][3], thanks to their flexibility. Because using a single UAV is often inefficient to conduct a complex mission, employing a UAV team is the trend in order to complete missions quickly [4]. When multiple UAVs conduct some missions, they are often regarded as a multiagent system. In the past few decades, the coordination issue of multiagent systems has attracted great attention from different research communities [5][6][7]. This paper pays attention to the moving target surveillance by a group of UAVs. A practical application of the considered scenario is that, in wireless sensor networks, the sensor nodes collect data from the environment. UAVs function as data sinks to collect the sensory data from sensor nodes [8]. In general, the number of available UAVs is smaller than that of the sensor nodes. Thus, the UAVs carry out a periodical surveillance of the sensor nodes. Because UAVs often have limited onboard battery capacity, their operation duration is constrained. Installing solar-panels enables the UAVs to harvest energy from the sun, which is promising for prolonging the lifetime of the UAVs in the sunny daytime [9]. We consider the surveillance of mobile targets by the solar-powered UAVs in urban environments. The tall buildings have some negative impact on the mission. Firstly, they may create some shadow region, where the Line-of-Sight (LoS) between the UAVs and the sun is blocked, so that the UAVs cannot harvest energy if they fly in the shadow region. Besides, the buildings may also block the LoS between the UAVs and considered targets. This may prevent a UAV from successfully surveying a target. The problem of interest is formulated as an optimization problem to minimize the target revisit time by planning the UAVs' paths. To address the problem, we present a path planning method that is based on the Rapidly-exploring Random Tree (RRT). This method can quickly find a feasible path for the UAV to intercept the target in the scenario where the target moves along a known trajectory. 
We then consider the case with one UAV and multiple targets. We present a nearest neighbour (NN) based navigation method. The so-called NN involves both the UAV-target distance as well as the uncertainty level of a target. Finally, we consider the multi-UAV and multi-target case. We partition the targets into groups according to the distance information between the targets and the UAVs. Subsequently, each UAV takes care of the targets in its own partition. The proposed autonomous navigation algorithm that navigates a UAV team in order to periodically survey a group of mobile ground targets is the main contribution of this paper. It is computationally efficient and easily implementable online, since it is a randomized RRT-based approach. Extensive simulation results are reported in order to confirm the effectiveness of the developed method. The reminder of the paper is organized, as follows. Section 2 briefly reviews the relevant work. Section 3 presents the system models and formulates the problem. Section 4 presents the proposed UAV navigation approaches. Section 5 reports the computer simulation results, and Section 6 gives the concluding remarks. Related Work The target surveillance problem that is considered in this paper has not been considered in any existing publications. In this section, we present some closely relevant publications, so that we can distinguish the contributions of the paper with others. The target surveillance problem has been investigated from different levels in the literature. In terms of sensing, a large number of image/video processing strategies have been developed in order to estimate states of the targets from the measured images/videos [10][11][12][13][14]. In these publications, attention has been paid to the quality of detection for a single target. In the scenario with multiple targets, how to allocate the UAV resource becomes necessary to achieve a good quality of surveillance. Many operational research results, such as the conventional travelling salesman problem (TSP) [15] and the vehicle routing problem (VRP) [16], are the common tools for planning the UAVs' paths. When there are enough UAVs, the coverage control has been investigated to achieve the optimal sensing coverage of the targets [17,18]. In cases where moving targets are to be monitored, and to maintain the quality of sensing, the reactive algorithms have been proposed [2,19]. This paper focuses on the scenario where the number of UAVs is not enough to persistently monitor the targets, so a periodical surveillance is conducted by the UAVs. As the targets are moving in the considered region, the problem is more relevant to the time-dependent TSP [20] and the moving-target TSP [21,22]. In the time-dependent TSP [20], the common setting is to find the shortest tour for the salesman in a graph with time-dependent edges. In the moving-target TSP [21,22], the targets are assumed to move with a constant speed along a fixed direction. The problem considered here is different from them. Firstly, the targets move along some trajectories, so that their speeds and moving directions may change with time. Secondly, the existence of buildings in the urban environment requires the UAVs to avoid collision with the buildings. Thirdly, the UAVs need to harvest energy from the sun to enable the UAVs to operate in the given time period. However, each path depends on the UAV's initial position, the buildings' positions and the target's trajectory, which is challenging to be known in advance. 
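Since the later sections build on an RRT-style planner, a minimal sketch of the basic algorithm is given below for orientation. It is a generic goal-biased RRT in the horizontal plane, not the navigation law proposed in this paper; the step size, goal bias, and the collision_free predicate (for example, a straight-segment test against the building models introduced in the following section) are assumptions of the sketch.

```python
import numpy as np

def rrt_plan(start, goal, collision_free, bounds, step=5.0,
             goal_tol=5.0, max_iters=5000, goal_bias=0.1, seed=0):
    """Basic goal-biased RRT in the horizontal plane (sketch).

    collision_free(p, q) should return True when the straight segment
    p -> q avoids all obstacles; bounds = (xmin, xmax, ymin, ymax)."""
    rng = np.random.default_rng(seed)
    goal = np.asarray(goal, float)
    nodes = [np.asarray(start, float)]
    parents = [-1]
    xmin, xmax, ymin, ymax = bounds
    for _ in range(max_iters):
        sample = goal if rng.random() < goal_bias else \
            np.array([rng.uniform(xmin, xmax), rng.uniform(ymin, ymax)])
        i_near = int(np.argmin([np.linalg.norm(sample - n) for n in nodes]))
        direction = sample - nodes[i_near]
        dist = np.linalg.norm(direction)
        if dist < 1e-9:
            continue
        new = nodes[i_near] + direction / dist * min(step, dist)
        if not collision_free(nodes[i_near], new):
            continue
        nodes.append(new)
        parents.append(i_near)
        if np.linalg.norm(new - goal) <= goal_tol and collision_free(new, goal):
            path, i = [goal], len(nodes) - 1
            while i != -1:                      # walk back to the root
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None
```

For a moving target, the goal would be replaced at replanning time by the target's predicted interception point, which is the role the RRT plays in this work.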
Thus, both the moving-target TSP and the time-dependent TSP cannot be applied to address the problem that is considered in the paper. Path planning plays an important role in this work. Among various path planning algorithms, RRT is a sampling-based approach. It generates a feasible (but may not be optimal) path quickly, even if the environment is complicated. Many publications have reported that this method can be easily applied in real-time applications, such as mobile ground robots [23] and autonomous driving [24]. To improve the solution quality and computing speed, many RRT variants have been developed. Specific attention has been paid to the generation of samples and the control of the searching step length. A lower bound tree-RRT is designed to find out the near optimal path [25]. Besides, a node control strategy is proposed in order to restrict the expansion of the random tree [26]. Because of the computational efficiency, RRT-based approaches are generally suitable to run in real-time, and it also has potential to be implemented in a decentralized manner [27]. We adopt the RRT approach in this paper. However, as will be shown in the following sections, we do not have a fixed destination for a UAV. Instead, the destination of a UAV moves. Our objective is generate a feasible path in real-time, so that the UAV can intercept the target as soon as possible. System Model and Problem Statement Suppose that we have a team of solar-powered UAVs labelled 1, 2, . . . , I. We consider that these solar-powered UAVs fly at a fixed altitude z in an urban area to conduct some missions. For UAV i, let p i (t) = [x i (t), y i (t), z] be its position in the ground frame at t, θ i (t) be the horizontal heading angle with respect to the x-axis; and, v i (t) and ω i (t) be its linear and angular speeds, respectively. The motion of UAV i can be described by the following equations [28,29]: In this paper, the effect of wind has not been considered. The following constraints hold for any UAV at any time: Here, Ω max i and V max i are the given constants, and D ⊂ R 2 is the considered area. The movement of many UAVs can be described by (1) and (2); see [30][31][32][33]. It is worth pointing out that, in (2), the linear speed v i (t) can take a negative value. This allows for a UAV to move backward when necessary. In Section 5, we will see some UAV trajectories with sharp turns, and the reason is that a negative linear speed is applied. This avoids making a large turn by moving along a circle. Table 1 summarizes the frequently used symbols in the paper, together with their meanings. Let P sun i (t) be the harvesting power of the solar energy. It can be computed as follows [34] P sun where η represents the solar cell efficiency, A i represents the area of the solar cells, and φ gives the incidence angle. The incidence angle φ is further dependent on the azimuth angle α z and the elevation angle α e of the sun, and in the day time, both α z and α e vary with time. Symbol Meaning Horizontal distance between UAV i and target j The UAVs consume energy when they are flying. For the energy consuming power, we follow the model that was used in [35]: where λ 0 , λ 1 and µ are the blade profile power, the induced power and the mean rotor induced velocity in hovering, respectively; U tip represents the tip speed of the rotor blade; κ is the fuselage drag ratio; s represents the rotor solidity; ρ is the air density; and, S i rotor disc area. Let Q i (t) denote the residual energy of the battery of UAV i. 
We havė Moreover, Q max i represents the upper bound of Q i (t). Each UAV carries a ground-facing camera, and the camera's observation angle is denoted by α ∈ (0, π); see Figure 1. If a target locates in a disc centred at p i (t) of the radius and has LoS with the UAV, it can be observed by the UAV. We assume that a gimble is available on the UAV, so that, no matter how the UAV moves, the camera always faces the ground. Now, we model the buildings. In this paper, each building is modelled as the smallest prism to enclose this building. Each prism has two parallel and congruent bases and a number of flat sides that are perpendicular to the xy-plane; see Figure 2. Each prism is characterized: Ψ, ψ and h. Ψ is a κ-by-2 matrix and ψ is a κ-by-one vector. They determine the shape and size of the base. h is a scalar indicating the height of the prism. For a point (x, y, z), if it is inside a prism, (7) holds: Given the environment information, Ψ, ψ and h are known for each building. Subsequently, we can have a subset of space, denoted by X building , which corresponds to these buildings. At any time, the UAVs must not be inside X building . Clearly, avoiding X building is similar to the collision avoidance with steady obstacles [36,37]. Having the model of buildings is not sufficient to characterize the observation region of a UAV. We also need a method to determine whether a position in the air and a position on the ground have LoS. For this purpose, we consider the straight line segment between two points p and q, which is described as − → pq , and τ is a free variable. Whether p and q have LoS can be tested by looking for the intersection points between the the line segment connecting p and q (8) and any prism (7). Because the environment information is known, whether p and q have LoS can be easily confirmed. We introduce a function LOS(p, q, X building ): With this function, we can also test whether a UAV and the sun have LoS. To this end, the sun's location needs to be known. Let V, a unit vector, denote the sunlight direction. With V, we can imagine that the sun is at q sun = p − Vτ, where the parameter τ takes a large value so that the sun is far from the point p. We need to place the sun at a relatively far position. The reason is that we use the line segment to verify whether two points have LoS. When the sun is placed closely to the point p, we may not obtain the correct verification. Let b i (t) be a binary variable indicating whether UAV i has LoS with the sun. Subsequently, UAV i's residual energy can be computed by: There are N ground mobile targets in the given urban environment to be periodically surveyed. These targets can be some sensor nodes to measure the environment information of interest. Instead of continuously transmitting the sensory data, they only upload their sensory data to a control unit via the UAVs in proximity. This setting can prolong the lifetime of the nodes when the sensory data are of large size, such as videos. We assume that the UAVs know the current positions of the targets, and this information can be provided by the targets, since the energy consumption of reporting the position information can be ignored compared to the large size of sensory data. We also assume that the targets' future positions are predictable. This assumption is reasonable, since, when the targets carry out some pre-defined missions, their trajectories can be known. Let q j (t) ∈ R 3 denote target j's location (j = 1, . . . , N) at time t. 
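As a concrete illustration of the containment test (7) and the LoS test described above, the sketch below samples points along the segment between p and q and checks whether any of them falls inside a prism. The sampling-based test and the step count are simplifications introduced here (the paper works with exact segment-prism intersections), and the building geometry is hypothetical.

```python
import numpy as np

def inside_prism(point, Psi, psi, h):
    """Containment test (7): Psi @ [x, y] <= psi component-wise and 0 <= z <= h."""
    x, y, z = point
    return bool(np.all(Psi @ np.array([x, y]) <= psi)) and 0.0 <= z <= h

def los(p, q, buildings, n_samples=200):
    """Return 1 if sampled points on p + tau*(q - p), tau in [0, 1], avoid every prism, else 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    for tau in np.linspace(0.0, 1.0, n_samples):
        point = p + tau * (q - p)
        if any(inside_prism(point, Psi, psi, h) for (Psi, psi, h) in buildings):
            return 0
    return 1

# Hypothetical axis-aligned building: 20 m x 20 m base, 50 m tall, centred at (50, 50).
Psi = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
psi = np.array([60, -40, 60, -40], dtype=float)
buildings = [(Psi, psi, 50.0)]

print(los([0, 0, 65], [100, 100, 60], buildings))  # passes above the 50 m roof -> 1
print(los([0, 50, 30], [100, 50, 30], buildings))  # blocked at 30 m altitude -> 0
```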
In this paper, we consider that I < N and that the targets spread over the considered environment. Subsequently, there may be some time in which a target is not under surveillance. Intuitively, the uncertainty level of a target relates to the time during which it is not under surveillance. Thus, a significant objective of the surveillance system is to keep the uncertainty level of the targets as low as possible. This can be formulated as the minimization of the maximum target revisit time. Let τ_j denote the time gap between two consecutive visits of target j. Let d_ij(t) denote the horizontal distance between target j and UAV i at time t. Let s_j(t) be a binary variable indicating whether target j is under surveillance at time t. Subsequently, we have s_j(t) = 1 if there exists i such that LOS(p_i(t), q_j(t), X_building) = 1 and d_ij(t) ≤ R, and s_j(t) = 0 otherwise. Afterwards, we can use s_j(t) to calculate τ_j. Specifically, τ_j is the gap between two consecutive time instants at which target j is visited. Note that, if there is only one visit during the mission period [0, T], i.e., at t_1, then τ_j = T − t_1. If there is not any visit during the mission period, then τ_j = T. The problem under investigation is to develop a navigation method for the UAVs modelled by (1), subject to the constraints (2), the requirement of staying outside X_building, and the energy dynamics above, so as to minimize the maximum revisit time max_j τ_j. The problem under consideration is difficult to address optimally. Although we can have the trajectories of the targets and predict their positions for the period [0, T], it is still hard to plan the trajectories of the UAVs in advance. The main reason lies in the complexity of the flying space in urban environments. In particular, due to the existence of buildings, the trajectory of UAV i (suppose that it is assigned to survey target j) depends on both the trajectory of target j and the position of UAV i at which it is assigned this task. Furthermore, the UAV's position depends on its last task. The coupling of the high-level task assignment problem and the low-level trajectory planning problem makes it too complex to be addressed optimally. Even if an optimal solution can be obtained, it may take so long to compute that it cannot be applied online. Lifetime of UAVs Before presenting the navigation method, it is necessary to discuss the lifetime of the UAVs. As shown in (3), for a UAV, the energy harvesting power varies with time because of the movement of the sun. Subsequently, the maximum harvested energy amount of UAV i over the mission period is given by ∫_0^T η A_i cos φ(t) dt. According to (4), the energy consuming power P_i increases with the speed v_i in the first and second terms, while it decreases with v_i in the third term. Additionally, the third term weighs more when v_i is relatively small, while the first and second terms weigh more when v_i is relatively large. Thus, the energy consuming power P_i in (4) first decreases and then increases with v_i. Figure 3 shows an illustrative plot of the relationship between P_i and v_i. There exists an optimal linear speed such that the energy consuming power is minimized. Let P_i^opt denote the minimum energy consuming power. We assume that Q_i^max is the initial energy of UAV i. The necessary condition for the UAV to conduct the surveillance mission is as follows: Q_i^max + ∫_0^T η A_i cos φ(t) dt ≥ P_i^opt T. (15) Once (15) holds, the UAV can complete the surveillance mission with the optimal linear speed, although this does not guarantee the performance of surveillance. Moreover, a hidden assumption of (15) is that the UAV always has LoS with the sun in [0, T]. Otherwise, the harvested energy amount is smaller than ∫_0^T η A_i cos φ(t) dt.
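The power-curve discussion and the feasibility condition (15) can be summarized numerically. The sketch below scans a speed grid for the minimum of a propulsion-power model of the form (4) and then checks whether the initial battery energy plus the maximum harvestable solar energy covers flying at that optimal speed for the whole mission. The coefficients, the constant incidence angle, and the mission numbers are illustrative placeholders, not the paper's values.

```python
import math

def propulsion_power(v, lam0=80.0, lam1=90.0, mu=4.0, u_tip=120.0,
                     kappa=0.6, s=0.05, rho=1.225, rotor_area=0.5):
    """Propulsion power of the form (4): blade-profile + parasite + induced terms."""
    profile = lam0 * (1.0 + 3.0 * v**2 / u_tip**2)
    parasite = 0.5 * kappa * s * rho * rotor_area * v**3
    induced = lam1 * math.sqrt(math.sqrt(1.0 + v**4 / (4.0 * mu**4)) - v**2 / (2.0 * mu**2))
    return profile + parasite + induced

# Minimise P(v) over a speed grid to obtain the optimal cruising speed and P_opt.
speeds = [0.1 * k for k in range(1, 201)]           # 0.1 ... 20 m/s
v_opt = min(speeds, key=propulsion_power)
p_opt = propulsion_power(v_opt)

# Necessary condition (15): initial capacity + harvestable energy >= P_opt * T,
# with a constant incidence angle as a stand-in for the time-varying phi(t).
T, dt, eta, area = 500.0, 1.0, 0.2, 0.5
harvest = sum(eta * area * math.cos(math.radians(30)) * dt for _ in range(int(T / dt)))
q_max = 5.0e4                                        # initial battery energy [J]
feasible = q_max + harvest >= p_opt * T
print(v_opt, p_opt, feasible)
```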
However, if (15) does not hold, the UAV cannot conduct the mission. If the capacity Q max i takes some larger value, it is allowed to have some part of the UAV trajectory having no LoS with sun. The margin of capacity has the potential to reduce the revisit time. There are two feasible paths for the UAV to intercept the target, as shown in Figure 4. The red one has some part behind the building that prevents the UAV from harvesting energy, while the green path enables the UAV to harvest energy during the trip. The red one leads to a shorter time for the UAV to survey the target than the green. If there is no margin capacity, then the UAV has to follow the green path. Otherwise, it can follow the red one to reduce the revisit time. Though the margin energy capacity brings benefit in terms of the reduction of the revisit time, it also creates challenges in the management of this amount of energy. Specifically, the UAV may need to survey several targets during the mission. Subsequently, it is difficult to decide how to allocate the margin capacity to the tasks. In the subsequent parts, we assume that there is no such margin capacity, and the UAVs fly at their corresponding optimal linear speeds. Accordingly, the UAVs always look for paths having LoS with sun. We leave the complex case with margin capacity for future study. One UAV and One Target In the simplest case, we consider that one UAV is assigned to survey a target. Suppose that it is at t 0 the UAV starts to move to the target. We denote the initial positions of the UAV and the target as p(t 0 ) and q(t 0 ), respectively, and the initial heading angle of the UAV as θ(t 0 ). We use x(t) = [p(t), θ(t)] to represent the state of the UAV. Suppose we know the trajectory of the target in the considered area D, and we make a prediction for its future positions in the time interval [t 0 , t 0 + T 0 ]. Formally, we know q(t), where t ∈ [t 0 , t 0 + T 0 ]. We select a suitable T 0 , so that the UAV can intercept the target before t 0 + T 0 . For the purpose of implementing the method online, we adopt the computationally efficient RRT approach. The common setting of the RRT approach looks for a feasible path between a start position and a destination, and the obtained path can avoid obstacles. In our problem, the buildings that are taller than the UAV flying altitude are regarded obstacles. Our problem has some additional features compared to the common setting. Firstly, different from a stationary destination, the destination in our problem, i.e., the target, is moving. Secondly, the UAV should avoid shadow region, because it needs to always harvest energy during the trip. The objective is to find a feasible UAV path, such that, at time instant t ∈ [t 0 , t 0 + T 0 ], the target locates inside the vision cone of the UAV and they have LoS. Starting from the current UAV position, i.e., p(t 0 ), we generate a random tree. The termination condition of the tree generation process is that d(t) ≤ R and LOS(p(t), q(t), X building ) = 1 at time t, where d(t) is the horizontal UAV-target distance. We present all of the procedures in Algorithm 1. Let T denote the random tree and δ be a sampling interval. T consists of a set of vertices. A vertex is annotated with control inputs, parent vertex and timestamp. Firstly, we initialize the tree T with x(t 0 ). Subsquently, we keep generating random samples in the space, find the nearest vertex from the tree to the sample, choose the suitable control inputs to generate a new vertex x new . 
We further check the conditions that whether x new belongs to X building and whether it has LoS with the sun. When both are verified, we associate the parent vertex, the control inputs and the timestamp with this vertex, and add it to the random tree. Here, the timestamp is an integer indicating the number of steps from the initial vertex to this vertex. We stop growing the tree once there exist a timestamp k and a vertex x * with the timestamp of k, such that d(t 0 + kδ) ≤ R and the position of x * has LoS with q(t 0 + kδ). Algorithm 1: Rapidly-exploring random tree (RRT)-based path planning algorithm to intercept a target. Input: x(t 0 ), X building , q(t) where t ∈ [t 0 , t 0 + T 0 ] Output: T Initialize T . while There do not exist a timestamp k and a vertex x * with the timestamp of k, such that d(t 0 + kδ) ≤ R and x * has LoS with q(t 0 + kδ) do Randomly generate a node in the space. Select the closest vertex from T to the node. Choose the appropriate control inputs to generate x new , such that x new cannot be closer to the node after δ. if x new / ∈ X building and has LoS with the sun then Associate with x new its parent, the applied control inputs and timestamp, and add x new to T . end end When the tree growing process terminates, we obtain the final vertex on the UAV path, i.e., x * . We also know that it takes k steps for the UAV from its initial position to reach x * . By a standard backtracking algorithm, we can find the path from x * back to the initial position. An example is shown in Figure 5, where t 0 is set as 0. In this example, we stop growing the random tree, since, at time 3δ, the condition d(3δ) ≤ R holds (suppose LOS(p(3δ), q(3δ), X building ) = 1). One UAV and Multiple Mobile Targets Now, we focus on the case, where one UAV periodically surveys multiple mobile targets. The problem is a generalization of some variants of TSP. Different from the moving-target TSP [21,22], which assumes that the targets move at constant speeds in fixed directions, the targets in our problem may adjust their headings as well as speeds. Although we can make reasonably accurate predictions on the targets' movements, the time that is needed to intercept a target also depends on how the UAV moves. Thus, the time-dependent TSP [20], which assumes knowing the cost of the time-dependent arcs cannot be used directly to address our problem. We present a nearest neighbour (NN) based navigation algorithm. Different from the common sense of NN, which uses distance information as a metric, we consider not only the distance, but also the uncertainty level of a target. A target's uncertainty level is modelled as a non-decreasing function of time since its last visit. A typical uncertainty function, denoted by γ(t), is shown in Figure 6. Here, the target was last seen at instant t 0 . For t ∈ [t 0 , t 0 + σ], γ(t) = 0. After t 0 + σ, γ(t) increases with time. When σ = 0, γ(t) is a monotonically increasing function of time. We use the symbol λ j (t) to describe how close target j is to the UAV, and it is defined, as follows: where L j (t) represents the length of the path for the UAV to intercept target j. The path can be found by the method that is discussed in Section 4.2. The metric λ couples the UAV-target distance and target's uncertainty level. The target with the maximum value of λ is the nearest neighbour (NN). The navigation method is shown in Algorithm 2. It first selects the NN. Subsequently, the UAV follows the path to move towards the NN. 
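For illustration, the NN selection step can be sketched as follows. Since the exact expression for λ_j is not reproduced above, the sketch assumes the simple ratio λ_j(t) = γ_j(t)/L_j(t), which couples a short interception path with a high uncertainty level in the way the text describes; the uncertainty growth rate and all numbers are hypothetical.

```python
def uncertainty(t_now, t_last_seen, sigma=0.0, rate=1.0):
    """Uncertainty gamma(t): zero for sigma seconds after the last visit, then linear growth."""
    elapsed = t_now - t_last_seen
    return 0.0 if elapsed <= sigma else rate * (elapsed - sigma)

def select_nn(t_now, last_visit, path_length):
    """Pick the target maximising lambda_j; lambda_j = gamma_j / L_j is an assumed form.

    last_visit : dict target_id -> time of last visit
    path_length: callable target_id -> length L_j(t) of the RRT interception path
    """
    def metric(j):
        return uncertainty(t_now, last_visit[j]) / max(path_length(j), 1e-6)
    return max(last_visit, key=metric)

# Toy usage with hypothetical numbers: target 2 was seen long ago, so it wins.
last_visit = {1: 90.0, 2: 10.0, 3: 60.0}               # last-visit times [s]
lengths = {1: 120.0, 2: 200.0, 3: 150.0}               # interception path lengths [m]
print(select_nn(100.0, last_visit, lambda j: lengths[j]))  # -> 2
```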
The UAV will choose a new NN once the selected NN is surveyed. Note that since the targets are moving, it is possible that the UAV can have some other targets in view before the NN. In this case, the UAV will update the targets' status, but not change the selected NN. As a heuristic algorithm, it does not guarantee obtaining the optimal guidance. However, since it is a randomized method, it can complete the calculation quickly. [37,38]. Exchanging the selected NN can avoid the case where two UAVs choose the same target as NN. Additionally, a UAV should share the status of a target once this target is surveyed. By doing this, all of the UAVs have a whole picture of the status of the targets, so that they can select their NNs more effectively. Without this sharing, a UAV may move towards a target that was recently surveyed, and this may increase the average revisit time of the targets. With the shared positions of other UAVs, each UAV can construct a partition of the targets. In particular, a target is assigned to UAV i if where d ij (t) is the horizontal distance between target j and UAV i. Let S i (t) denote the set of targets that are more closer to UAV i. Algorithm 3 summarizes the navigation algorithm for this scenario. Each UAV determines its partition according to the locations of other UAVs and the targets. Subsequently, it selects its NN in the partition. After that, it starts to pursuit the target. Whenever it views a target other than the NN, it shares the status of this target across the team. When the selected NN has been surveyed, the UAV repeats the above procedures. The difference between Algorithm 3 and Algorithm 2 is an additional operation in Algorithm 3 to update the partition of a UAV according to the UAVs' and targets' locations. Because a UAV needs to calculate the distance from a target to I UAVs, the additional computation complexity is O(N I) for the situation with N targets. It is worth pointing out that partitioning the targets following (17) assigns a target to exact one group. Subsequently, the NN selected by each UAV is unique. Additionally, considering the existence of buildings and the randomness of the RRT-based path planning, two UAVs' paths may be close. In this case, i.e., when two UAVs are within a range of d sa f e , one of the UAVs, such as the one with the smaller index, continues to fly as planned, while the other UAV tries to avoid the former UAV. To achieve this, the collision avoidance ability should be embedded. Many available UAV products that have such a function, such as DJI Mavic Pro 2, Mavic 2 Zoom, skydio 2, etc, can be used. Fortunately, this is a common ability in most commercial UAVs. When the two UAVs are at least d sa f e apart, the latter re-starts to intercept its NN. Remark 1. The considered problem in this paper may not be stable. A simple example is that all the targets move away in different directions. Subsequently, the revisit time increases with time. Simulations We show the performance of the proposed navigation algorithms. An urban environment is constructed in MATLAB with various buildings, see Figure 7a. The heights of the buildings are between 30 to 120 m. Five targets move on the xy-plane in the environment, and their trajectories are shown in Figure 7b. We first consider using one UAV to survey the targets. The used parameters are listed, as follows: V max = 10 m/s, Ω max = 0.5 rad/s, z = 65 m, R = 50 m, T = 500 s and σ = 0. 
It is worth pointing out that the parameters of V max and Ω max depend on the maneuverability of the UAV in use. Our method is not restricted to these parameters. The function γ(t) is set to be increasing linearly with time. As will be seen in the below results, because the UAV flight height is taller than some buildings and lower than the others, a UAV needs to avoid the taller buildings but can fly above the lower buildings. To make results more understandable, we assume that during the 500 s, the incidence angle does not change. The fixed value can response to the average value during the 500 s. Subsequently, we can pre-compute the shadow range. Similar to X building , the shadow region is not allowed for the UAV to enter. Given an initial state, the trajectory of the UAV is obtained by applying Algorithm 2. In particular, the randomized Algorithm 1 is used to generate UAV trajectories to intercept the five targets. In our simulation, for the case with five targets, Algorithm 1 (runs on a normal computer with an Intel(R) Core(TM) i7-8565U CPU and 8.00 G RAM) takes less than 1 s to return five random trajectories. Although the onboard processor may not be as powerful as a normal computer, the algorithm can be coded in the more efficient language C. Moreover, even if the practical computation time may be longer than the simulation environment, in practice, the UAV can start to compute the random trajectories before it intercepts the intended target. Experimental verification of the proposed methods is left as our future work. To make the trajectories visible, we demonstrate the 2D views in Figure 8 for each period of 100 s. We also record the movements of the UAV and the targets in some videos, and links for both 2D and 3D views are available at the caption of Figure 8. From Figure 8f, we can see that the five targets are visited twice or three times in the operation period, and the maximum revisit time is about 220 s. For the same targets movements, we increase the maximum of the UAV linear speed to V max = 15 m/s, and the UAV's trajectory is shown in Figure 9. From Figure 9f, we can see that target 1 was visited three times and all the other targets are visited four times in the operation. The maximum revisit time is reduced from 220 s to 180 s. Finally, we add one more UAV to survey the same targets. Here, V max = 10 m/s, and all of the other parameters remain the same as above. We show the trajectories of the two UAVs in Figure 10. From Figure 10f we can see that compared to the results by one UAV, the maximum revisit time is reduced from 220 s to 150 s. The maximum angular speeds, i.e., Ω max , has little impact on the maximum revisit time, since it only influences the UAV trajectory when the UAV makes a turn. Conclusions We considered using UAVs to conduct periodical surveillance of moving targets. We formulated an optimization problem in order to minimize the target revisit time. We proposed autonomous navigation algorithms for different cases. Because these algorithms are based on RRT, they inherit the advantage of computational efficiency, and they are promising to be applied in real-time. The simulation results showed their effectiveness. One limitation of the current work lies in the assumption that the targets' trajectories are known. When these trajectories are unavailable, the accuracy of the target position predictions may significantly decrease. Consequently, the UAVs may lose some targets. 
One future research work is to extend the algorithms by including the searching operation. Another direction is to conduct experiments, since this is the most effective way to practically verify the computational efficiency of the proposed method. To this end, some ground mobile robots can play the role of targets and a small number of UAVs can be tested in order to conduct the surveillance mission in a laboratory environment.
A Modified Robotic Manipulator Controller Based on Bernstein-Kantorovich-Stancu Operator With the development of intelligent manufacturing and mechatronics, robotic manipulators are used more widely. There are complex noises and external disturbances in many application cases that affect the control accuracy of the manipulator servo system. On the basis of previous research, this paper improves the manipulator controller, introduces the Bernstein–Kantorovich–Stancu (BKS) operator, and proposes a modified robotic manipulator controller to improve the error tracking accuracy of the manipulator controller when observing complex disturbances and noises. In addition, in order to solve the problem that the coupling between the external disturbances of each axis of the manipulator leads to a large amount of computation when observing disturbances, an improved full-order observer is designed, which simplifies the parameters of the controller combined with the BKS operator and reduces the complexity of the algorithm. Through a theoretical analysis and a simulation test, it was verified that the proposed manipulator controller could effectively suppress external disturbance and noise, and the application of the BKS operator in the manipulator servo system control is feasible and effective. Introduction The robotic manipulators have the characteristics of flexible movement and adaptability to harsh environments, and have been used in more and more fields in recent years, such as ocean exploration [1,2], health care [3], agricultural planting [4], and other fields. With the increase in manipulator application fields, its disturbance types are becoming more and more complex, so how to ensure its positioning accuracy and trajectory accuracy under complex disturbances has become an important research direction. Aiming at the high-precision trajectory control of the manipulator, in recent years, many scholars have proposed a series of control and optimization methods to design and develop the controller of the manipulator. Shang et al. [5] proposed a PD controller with fuzzy logic combined with an RBF neural network optimization for the parameter perturbation caused by the telescopic flexible manipulator when the linkage length changes and the nonlinear rotation angle fluctuates, which reduces the trajectory error caused by the change of the manipulator structure parameters. Yang et al. [6] proposed an improved PID controller in order to identify the torque and stiffness of the manipulator, which improves the trajectory tracking accuracy of the manipulator by correcting the servo motor trajectory and satisfies the suppression of the disturbance generated by nonlinear friction. Fan et al. [7] designed a recursive neural network by collecting motion data of redundant manipulators and established a controller for the case of incomplete or unknown manipulator dynamics models to improve the trajectory tracking accuracy of industrial manipulators. The control method with an obvious effect on manipulator disturbance suppression is the sliding mode control, which can identify and track the disturbance for the characteristics of an external disturbance so that the accuracy of the manipulator's controller trajectory tracking control can be improved [8,9]. 
It can be seen from the methods mentioned in the above documents that to improve the trajectory tracking accuracy and positioning accuracy of the manipulator, an accurate control model is required, and the controller needs to be improved to identify the noise and disturbance in the manipulator's working environment. Therefore, designing an advanced observer for the controller is an effective method for improving the trajectory-tracking accuracy of the robotic manipulator. By designing an appropriate disturbance observer to estimate the disturbance, and integrating the disturbance estimation into the manipulator controller, the negative effects of noise and disturbance on the trajectory tracking control can be greatly attenuated. Therefore, many scholars have studied and designed the disturbance observer. Shi et al. [10] have developed a new integral disturbance observer to actively compensate the uncertainty of the system and combined it with an adaptive sliding mode control to improve the transient performance of the manipulator controller. However, the simulation verification of this method on a 2-DOF robot manipulator cannot show the application effect on redundant robots that are widely used in industry. Gao and Chen [11] proposed a fixed-time extended state disturbance observer and proved its stability by improving the extended state observer. However, while this method can make the robot complete the task in a limited time, the global trajectory tracking accuracy cannot be guaranteed. Zhao et al. [12] proposed a boundary control based on the disturbance observer and explored that disturbance suppression can still be achieved under the premise that the time derivative of the disturbance received by the manipulator is not zero. However, this research uses a single-link flexible manipulator, which is difficult to apply to common manipulators. Jung and Lee [13] analyzed the application similarity of the disturbance observer and time delay controller in a manipulator, and found that the time delay controller can achieve better results in Cartesian space. However, the definition of manipulator disturbance is not complex enough to completely represent the case of manipulators in practical application. In recent years, some disturbance observers designed based on sliding mode observers have also achieved good results in the application of manipulator controllers, and the design of initial parameters is low in difficulty and easy to achieve. However, with the change of external disturbance characteristics, these parameters are difficult to evolve quickly and accurately, resulting in weak adaptability of the controller and difficulty applying in the presence of complex disturbances [14,15]. Therefore, a novel Bernstein-Kantorovich (BK) operator is added to the disturbance observer in this paper. By using its excellent approximation characteristics in statistics and data convergence estimation characteristics, the updating speed and accuracy of the parameters in the controller and disturbance observer are improved, and the identification accuracy of the manipulator controller for disturbance is improved. The BK operator and its improved methods have been applied in the engineering field in recent years. They are mainly used to optimize the process of updating controller parameters to make it approach quickly and ensure accuracy [16]. 
Moreover, with the development of the BK operator, some improved BK operators, such as λ-BK [17,18], α-BK [19], q-BK [20], and other methods, have also been developed [21][22][23][24]. In this paper, a new BKS operator, introduced in [25], is implemented in the manipulator controller to optimize the parameter update process of the disturbance observer and improve the trajectory tracking accuracy of the robotic manipulator controller. We improve the robotic servo controller to address the uncertainty of the robotic manipulator model and the trajectory tracking problem caused by unknown and complex external perturbations. The main contributions of this paper are as follows: 1. Combining the BKS operator with the perturbation observer in the robotic controller. 2. A multi-disturbance simulation based on the new perturbation observer and the previously studied trajectory tracking controller [26] is performed on the robotic manipulator to verify the effectiveness of the method. 3. An experimental platform of the manipulator is built to verify the effectiveness of the proposed controller on an actual manipulator, which reflects the feasibility of the method in practical engineering applications. Figure 1 shows the experimental setup. A manipulator model is built, and motion control is realized by using a single-chip computer (model STM32F13RBT6). The trajectory optimization is updated and compensated by the upper computer. The insulation cup near the robot can be regarded as an obstacle or a coordinate reference. In this experimental environment, a vibration device is installed on the base to provide background noise; that is, noise such as simulated mechanical vibration is added to the disturbance process. Model Derivation and Control Objectives Manipulator Dynamic Modeling The dynamic equation of an n-DOF manipulator under external disturbance can be given by the following equation: M(q)q̈ + C(q, q̇)q̇ + G(q) + F(q̇) = τ + τ_d, (1) where q, q̇, and q̈ ∈ R^n are the vectors of manipulator joint positions, velocities, and accelerations, respectively; M(q) ∈ R^{n×n} is the positive definite inertia matrix of the manipulator joints; C(q, q̇) ∈ R^{n×n} represents the Coriolis and centrifugal force matrix of the manipulator joints in motion; G(q) ∈ R^n represents the gravity vector of the manipulator joints; F(q̇) ∈ R^n is the friction term, which is generally used to represent the dynamic friction in the state of joint motion (the static friction at start-up is not considered here); τ ∈ R^n represents the control torque of each joint of the manipulator, that is, the torque required by each joint to complete the task in an ideal state; and τ_d ∈ R^n represents an uncertain external disturbance, which can also be regarded as the additional torque required by the manipulator to suppress the error caused by the external disturbance. In the application of an actual robot manipulator, there are factors such as geometric errors, sudden load changes, and environmental noise, as well as uncertain events such as obstacle avoidance, collisions, and human interference. These factors mean that the values of the physical parameters of the model cannot be obtained accurately, which causes errors in the controller system parameters; the uncertainty of the system can be expressed as: M(q) = M_0(q) + ∆M(q), C(q, q̇) = C_0(q, q̇) + ∆C(q, q̇), G(q) = G_0(q) + ∆G(q), F(q̇) = F_0(q̇) + ∆F(q̇), (2) where the symbol with the subscript 0, such as M_0, represents the nominal matrix of mechanical parameters of the robotic manipulator, and the symbol with ∆ in front represents the parameter variation caused by uncertainty in the manipulator control system, such as ∆M. Substituting Equation (2) into Equation (1), we obtain the dynamic equation that expresses the uncertainty of the manipulator, Equation (3). Rearranging Equation (3) gives Equation (4). In order to facilitate subsequent calculation and derivation, the parameters in Equation (4) can be classified according to the characteristics of the dynamic model, yielding Equation (5),
where D(t) represents the error caused by system uncertainty and disturbance to the joint motion, which can also be regarded as the compensation of the manipulator to suppress the uncertainty. f 0 q, . q is a bounded and known nonlinear function obtained from the manipulator model. In order to ensure the task trajectory accuracy of the robot manipulator, it is necessary to ensure its end joint motion state. By designing the sliding mode surface of the controller, the robustness of the sliding mode control is adopted to ensure that the controller can track the desired trajectory in the presence of system uncertainty. where q d ∈ R n represents the given task of the manipulator, that is, the expected trajectory; e = q − q d represents the trajectory tracking error caused by system uncertainty; t f is the expected time to complete the task, that is, the given finite time. The controller designed in this paper needs to establish the following assumptions and lemmas for the system and uncertainty to ensure the stability of the system. It is assumed that the joint position and velocity state of the manipulator are data that can be measured by the joint or servo motor sensor. Uncertainty of ψD(t) meet |ψD(t)| ≤ ω 0 and ψ . D(t) ≤ ω 1 , where diagonal matrix ψ is a constant, which is based on mechanical properties of the manipulator. The constants ω 0 and ω 1 are unknown and positive. When the above assumptions are satisfied, the stability of the following invariant systems can be proved according to the correlation lemma. Suppose there is a continuous function V(z) in the system shown in Equation (5) which satisfies the following conditions, i.e., it is a positive definite function on D ⊆ R n and satisfy the constraint condition: If V(z) is a continuous function and D ⊆ R n is a positive definite function, the following constraints can be obtained: When k > 0 and λ ∈ (0, 1), it can be seen from Equation (8) that the defined manipulator control system is locally finite-time stable. In addition, if D = R n is satisfied, the defined system is globally finite-time stable. It can be seen from the existing robot manipulator motion model that the constraint on the disturbance is mainly the amplitude of the disturbance, but the change characteristics of the disturbance are not discussed. From the actual application, it can be clearly seen that if the nonlinearity of the disturbance is too strong, the controller and observer have insufficient ability to suppress the disturbance, and the impact on the motion control accuracy is very obvious. To solve this problem, this paper proposes a modified control method of a robotic manipulator which can ensure high-precision motion control when the manipulator works in complex environments such as underwater, picking, and so on. Bernstein-Kantorovich-Stancu Operator In this section, we apply the BKS operator to the manipulator and obtain the recursive relationship of BKS polynomial according to the Bernstein polynomial [27]. Then, some properties of the BKS operator applied to the robotic manipulator controller are given according to the above-derived model. The modified BKS operator is defined in [25] as where 0 ≤ α ≤ β and f represent a continuous function defined on the interval [0, 1]. It is clear that each of these nth BKS basis polynomials are a linear combination of Bernstein basis polynomials for the higher order observer coefficients of complex manipulators, and the higher order combination of BKS basis polynomials is proved below. 
The Bernstein basis polynomial [25] can be defined as For the sliding mode observer, which has an order up to the second order, it is necessary to derive the case when n is an even number. where * denotes the floor function, and C i n = n i for n ≥ i ≥ 0. The associated shape parameters λ i are constrained to be λ −1 = λ n−1 = 0 [25]. However, if the observer is of full order, and n is odd, then it is called an nth extended Bernstein basis polynomial and the associated parameters satisfy the following conditions: By derivation, it follows that in case λ i , α, and β are any real numbers satisfying the above conditions, the following moments were obtained in [25] Based on the form of Equation (13), according to the role of dynamics in the movement process of the robot arm, we can use it in the dynamic model of the robot arm in two ways. One is that the time when the sliding mode controller passes through the sliding mode surface is taken as the starting point of the sampling time, and the subsequent time when the sliding mode controller continuously passes through the sliding mode surface is taken as the subsequent sampling time. The other way is to set the boundary layer of the sliding mode control. When the control function output crosses the boundary layer, it is used as the starting point of the sampling time. If the output is always outside the preset boundary layer, the sampling time interval is set according to the actual application results. Both methods have corresponding application scenarios. The former is suitable for the continuous occurrence of external force disturbance, while the latter is suitable for the occurrence of disturbance according to impulse noise. In order to ensure that the operator converges in the course of the iteration, it is necessary to prove D α,β n,λ i,k ,k (x). The range of its interval can be obtained as follows: In addition, the following results were obtained in [25]: and lim Therefore, it can be proved that the modified BKS operator proposed in this paper does not cause the divergence of the control system due to the change of the operator coefficients caused by the perturbation when the alternative parameters are involved in the calculation in the control system and can be applied in the manipulator control system. Controller Design The design of the manipulator controller needs to be based on the actual engineering task requirements. In order to more clearly express the dynamic control of the manipulator in the process of completing the task, a simple definition of the finite-time generator is created. It is mainly aimed at the dynamic control of manipulator tasks, assuming that there is a double integral system .. e(t) = u(t) in the manipulator. According to the work in [28], in order to achieve local convergence of the manipulator control system within the desired target time of the task, it is necessary to ensure that the input u(t) in the controller tracks the desired trajectory of e(t) and . e(t). Manipulator's Task Definition This section focuses on the manipulator configuration error caused by the difference between the given motion trajectory output by the controller and the expected trajectory of the human-made task when the robot manipulator performs the task, and the mapping relationship between the coordinates in the manipulator system and the actual motion coordinates. 
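Before moving to the error function, the Bernstein basis underlying the BKS operator of the previous subsection can be illustrated with a minimal sketch of classical Bernstein-polynomial approximation on [0, 1]. The BKS-specific shape parameters λ_i, α, and β from [25] are not reproduced here, so this only shows the underlying approximation idea; the test function and grid are arbitrary choices.

```python
from math import comb

def bernstein_basis(n, i, x):
    """Classical Bernstein basis polynomial b_{i,n}(x) = C(n, i) x^i (1 - x)^(n - i)."""
    return comb(n, i) * x**i * (1 - x)**(n - i)

def bernstein_approx(f, n, x):
    """Degree-n Bernstein polynomial of f evaluated at x in [0, 1]."""
    return sum(f(i / n) * bernstein_basis(n, i, x) for i in range(n + 1))

# The basis forms a partition of unity, and the approximation converges uniformly
# to a continuous f as n grows.
f = lambda x: x**2
for n in (6, 40):
    err = max(abs(bernstein_approx(f, n, k / 100) - f(k / 100)) for k in range(101))
    print(n, err)   # the error shrinks as n increases (cf. n = 6 vs. n = 40 in Section 5)
```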
The error function can be written as: We need to assume that Equation (18) has a quadratic differential with respect to time, and then define that x(q) is the trajectory of the manipulator obtained from the solution of the positive kinematics, and x d is the desired trajectory defined in the given task requirements, then . e = . x = J . q where J = ∂e/∂q is the Jacobian matrix expression of the desired trajectory of the task. .. Substitute Equation (5) into (19) to obtain the following form: .. where q are the influence of the error between the expected trajectory and the actual trajectory of the manipulator during a given task on the motion control of the manipulator. Then we can obtain the dynamic model of the manipulator movement under the influence of the tasks as follows: where means the generalized matrix inverse of J A −1 , and u represents the parameter used to adjust the input of the manipulator controller. In defining the manipulator dynamics model, it is mentioned that the inertia matrix, A, is a positive definite matrix, so the following assumptions can be given: where A min and A max are defined as positive constants. In addition, we also need to define another constant α to satisfy the constraints of 0 < α < 1 and where the uncertainty of the dynamic model of the manipulator controller in the above model derivation section is represented by A. Based on this, the manipulator dynamics model can be written as where M max is a normal number representing the upper limit of the inertia matrix variation. It is worth mentioning that if there is no other interference, that is, satisfying A = A, then M = I can be obtained. The Designed Controller To solve the trajectory tracking problem using only the position feedback, the control inputs and observer dynamics can be written as: .ê where K d and K p are positive the diagonal gain matrices;ê(t) =q(t) − q d (t) is the approximate accurate value observer output of e(t); L d and L p are the observer gain matrices, and D(t) represents the estimation of D(t). The following assumptions are considered for the values of the above parameters: where µ d , k d , γ and l d are positive constants, satisfying k d > µ d , and I n shows the identity matrix of dimension n. Substituting Equation (26) into Equation (20) and using the definition of Equation (28) yields .. Then we define where . q 0 (t) = .q (t) − µ d q(t) and q(t) = e(t) −ê(t) = q(t) −q(t) denotes the position estimation error of the manipulator joints. Therefore, we can write the closed-loop error dynamics equation of the system as: .. . In Section 2.2 we proved that the proposed BKS operator is a mathematical tool, which is a simplified calculation and it is easier for making polynomials used to estimate different functions. By combining multiple polynomials, the BKS operator is able to estimate any complex and real-valued function with any degree of accuracy. The ideal form of the dynamic equation of the robotic manipulator can be regarded as a linear system. The traditionally obtained algebraic linear system is derived from the least squares solution of the minimum norm using accurate data. However, the introduced disturbance cannot be completely regarded as a nonlinear part. Some complex noise disturbances are difficult to be completely distinguished from the linear system in the discrete sampling process. Therefore, we need to adopt a method to suppress this noise as much as possible while retaining the linear system, even if the basis function of the control system is smoother. 
The BKS operator can introduce a regularization feature to obtain a smoother solution. In Equation (26), u(t) is the auxiliary function for constructing the sliding mode controller, and the dynamic model in Equation (5) is substituted. The mean square error of u(t) is approximated to the unknown function in the integral equation of the first kind by the given form of the BKS operator. The obtained linear equations are transformed into algebraic linear equations, and a more stable numerical solution can be obtained after approximating with a higher-order modified BKS operator. According to the design of the controller, we rewrite Equation (9) as Equation (34). In Equation (34), C_k^{m+p} denotes the binomial coefficient, and t^{(k,−α)} denotes the kth-order factorial power of t with increment (−α). Therefore, we can express D(t) in a linear form based on the BKS operator and the approximation theorem as D(t) = W_D^T Z_D + ε_D, where the number of basis functions used is determined by the filtering accuracy of the controller, the ideal weighting vector is given by W_D, the approximation error is denoted by ε_D, and Z_D is the basis function vector. W_D, Z_D, and ε_D are determined by the filter and observer in the controller. The selection of the basis function parameters has an impact on the filtering accuracy. By establishing the relationship between the probability density function of the filter estimation error and the filter gain matrix, the filtering dynamic system is uniformly bounded in the mean square sense, which indicates that a feedback particle filter would cause system divergence at low sampling rates. In this paper, full-order observers are used to determine the filtering accuracy. The estimate of D(t) can be written using the same set of basis functions, where the corresponding BKS operator weight approximation error is denoted by W̃_D. A simulation case was used to verify the performance of the proposed manipulator controller combined with the BKS operator. The trajectory of each joint used in the simulation, the controller inputs, and the perturbations added to joint #1 are shown in Figure 2. The simulation results are shown in Figure 3. During the activation of the joint servo motor, a disturbance exists between 6 and 8 s. Figure 3a shows the controller without the BKS operator, which lacks sufficient suppression capability in the face of the externally imposed disturbance, while the controller with the BKS operator has a higher accuracy in tracking the disturbance, as shown in Figure 3b, where the impact caused by the imposed disturbance is significantly reduced. However, it is worth noting that only one joint of the manipulator is perturbed in this simulation. Due to the series connection of the manipulator, the disturbance of one joint will affect the other joints.
Taking the above simulation as an example, if all three joints are disturbed, then there is a complex coupling relationship between the total disturbances of the joints. When the controller uses the standard model to calculate such problems, it consumes a lot of time, which cannot meet the requirements of real-time control in the actual application of the manipulator and has a great impact on the communication throughput. We propose an effective solution to this problem. Design of the Modified Observer Real-time control is an important performance index of the manipulator, so the observer in the articulated rotary-axis servo control system should consider several influencing factors at the same time, including parameter observation accuracy, observation speed, parameter setting, and computational complexity [29]. The controller method proposed above can only guarantee the observation accuracy of the servo control system, so a new full-order state observation scheme for manipulator servo systems is designed. Combining it with the BKS operator allows the controller to take into account both observation performance and execution capability, and to achieve a compromise between system tracking capability and noise sensitivity by introducing only one adjustable parameter. At the same time, checking whether the value of this parameter is reasonable can optimize the performance of the required observer, thereby simplifying the controller parameter adjustment steps.
Considering that there is a transmission device such as a reducer in the servo motor of the manipulator, the observer of the manipulator servo system actually considers the data of two rotating bodies. So, the models of two observers can be designed at the same time, that is, the motor side of the joint and the actuator. The state directly observed on the motor side includes the motor positionθ m and speed ω m . From this, the motor-side observer model can be obtained: In the designed motor-side observer, the motor position θ m measured by the encoder can be used to correct the estimated value in real time, so as to meet the requirement of the observer in order to continuously improve the observation accuracy. We use the difference ∆θ(k) between the directly measured actual value and the predicted estimated value as one of the input signals for the next sampling point of the motor-side observer. In addition, the driver of the servo motor can also obtain the electromagnetic torque T e in real time by measuring the q-axis current. Thus, the transfer torque T s can be obtained indirectly by observing the motor speed and acceleration derived from Equation (45). Directly observed conditions, include load positionθ l , speedω l , and accelerationâ l . By observing the load speed and acceleration, the load torque T L can be indirectly observed. The equation is as follows: Considering that the load-side observer does not have a direct input signal, it is necessary to obtain the position difference between the motor-side and the load-side and solve the following differential equation: In Equation (48), T s is the value obtained by the observer on the motor-side, and θ m is measured by the encoder of the motor. The purpose of correcting the state observation is achieved by solving the equation, and then the input signal of the load-side observer is constructed. The parameter form of the BKS operator in the fixed gain matrix of the controller can be further derived as follows: where α, β, and γ are dimensionless constants that can actually be solved analytically, and the solution can be represented by a parameter, that is, the ratio of motion state to observation uncertainty, defined as: Then, all feedback gains from the servo system observer can be derived in terms of λ. When this parameter is obtained, the optimal steady-state gain parameters α, β, and γ, as well as the resulting performance, can be obtained by Equation (51) The influence of traditional observer parameters on observation performance is not obvious. In order to simplify the process of parameter formulation and to keep the stability of the best gain matrix of the observer, an adjustable parameter s = √ 1 − α is introduced. At this point, through the explicit expression of the elements in K, s can be obtained, as shown in Equation (52). By analyzing the stability of the observer, the value range of s can be refined. . (52) The relationship between s and λ can then be derived as shown in Equation (53). By analysis, s is a monotonically decreasing function of λ. Since λ > 0 and s ∈ (0, 1), the performance of the observer can be adjusted according to s. In order to ensure the stability of the controller of the manipulator, it is necessary to refer to the poles and zeros of the system to realize the requirement of adjusting the amplitude and phase of the unit impulse response of the system. Therefore, the value interval of s can be determined by the position of the pole in the characteristic equation of the observer. 
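The role of a single smoothing parameter can be illustrated with a generic discrete position-velocity-acceleration (alpha-beta-gamma style) tracking loop. The specific gain expressions of this paper, which tie α, β, and γ to the single parameter s, are not reproduced here; the gains below are placeholders chosen only to show how such a loop predicts, compares against the encoder measurement (the ∆θ(k) term), and corrects each state.

```python
def abg_observer_step(x, v, a, z_meas, dt, alpha, beta, gamma):
    """One predict-correct step of a position/velocity/acceleration tracking observer.

    x, v, a : current estimates of position, velocity, acceleration
    z_meas  : new position measurement (e.g., motor encoder angle)
    """
    # Predict one sampling interval ahead.
    x_pred = x + v * dt + 0.5 * a * dt**2
    v_pred = v + a * dt
    # Innovation: difference between measurement and prediction (the Delta-theta term).
    r = z_meas - x_pred
    # Correct each state with its own gain; larger gains track faster but pass more noise.
    x_new = x_pred + alpha * r
    v_new = v_pred + (beta / dt) * r
    a_new = a + (2.0 * gamma / dt**2) * r
    return x_new, v_new, a_new

# Placeholder gains; in the paper all three are parameterized by the single value s.
alpha, beta, gamma, dt = 0.5, 0.2, 0.05, 0.001
x = v = a = 0.0
for k in range(5):
    z = 0.01 * k          # synthetic encoder readings
    x, v, a = abg_observer_step(x, v, a, z, dt, alpha, beta, gamma)
```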
From the perspective of the servo system model of the manipulator joints, the characteristic equation relating the state matrix to the system output can be expressed as Equation (54). According to Equation (54), the transfer function between the input signal and the observed state shows that the amplitude of Equation (55) is equal to 1 when z = 1. This indicates that, after reaching steady state, the observation of the position is able to track the actual value without error, regardless of the input signal. Simulation and Discussion This section first explains the calculation results of the BKS operator in the process of manipulator controller parameter tuning and gives appropriate controller parameters. Then, four disturbance cases are used to test the manipulator model to verify that the proposed controller can suppress the disturbance in the manipulator control process. Finally, to illustrate the role of the proposed observer in the controller, signals with a certain amount of mechanical vibration noise are used to show the tracking effect of the proposed observer on noise and disturbance. In the simulation, a personal computer with an Intel Core CPU and 8 GB RAM running MATLAB 2021a is used to simulate the designed controller. Disturbance Cases and Control Parameters In the subsequent simulations, we introduced the following four kinds of disturbances into the three joints of the manipulator: two cases with different joint strengths between 6-8 s and two cases with multiple high-frequency disturbances between 5-9 s. The forms of the four disturbances are shown in Figure 4.
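For reproducibility, disturbance profiles of the kind shown in Figure 4 can be emulated with simple signal generators. The exact amplitudes, frequencies, and waveforms used in the paper are not given here, so the values below are placeholders that only match the described timing (step-like disturbances in 6-8 s, high-frequency disturbances in 5-9 s).

```python
import math

def step_disturbance(t, amplitude, t_on=6.0, t_off=8.0):
    """Case 1/2 style: a constant torque disturbance applied between t_on and t_off."""
    return amplitude if t_on <= t <= t_off else 0.0

def hf_disturbance(t, amplitude, freq_hz, t_on=5.0, t_off=9.0):
    """Case 3/4 style: a high-frequency torque disturbance applied between t_on and t_off."""
    return amplitude * math.sin(2.0 * math.pi * freq_hz * t) if t_on <= t <= t_off else 0.0

# Placeholder amplitudes and frequencies for the three joints (not the paper's values).
def case_profiles(t):
    return {
        "case1": [step_disturbance(t, a) for a in (1.0, 1.0, 1.0)],
        "case2": [step_disturbance(t, a) for a in (2.0, 3.0, 1.5)],
        "case3": [hf_disturbance(t, 0.5, f) for f in (5.0, 8.0, 12.0)],
        "case4": [hf_disturbance(t, 1.0, f) for f in (10.0, 15.0, 20.0)],
    }

samples = [case_profiles(0.01 * k) for k in range(1001)]  # 10 s sampled at 100 Hz
```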
In the simulation, the degree of freedom of the manipulator is 6, so we can take n = 6. At this setting, the error between the trajectory obtained using the basis function of D(t) determined by the BKS operator in the numerical simulation and the expected trajectory is shown in Table 1. The comparison with other BK operators shows the advantages of the BKS operator proposed in this paper. As shown in Table 1, compared with the classical BK operator [18], the q-BK operator [24] (q = 0.6), the α-BK operator [19,30] (α = 0.5), and the λ-BK operator [31] (λ = −0.3), the error of the BKS operator proposed in this paper is smaller; under the condition n = 6 used by the model in this paper, its error is the smallest. If the calculation is iterated further, testing n = 40 yields a more reasonable basis function, which lowers the model error. The value of s can be adjusted according to the user's requirements for the manipulator and its adaptation to the environment. For example, if the observer is used for a task with strict completion-time requirements, observation speed is the most important indicator and s should be appropriately decreased. For a high-precision system, such as precision machining, or a system that requires very smooth data, the value of s should be appropriately increased. The adjustable parameter can thus be tuned to the different observation requirements. Considering the requirements of most working conditions, s = 0.55 is selected as the observer parameter in this paper. The parameters of the BKS operator are α = 0.5, β = 0.7, and γ = 0.1. The parameters of the controller are shown in Table 2. Table 2. Parameters of the controller in simulation. Simulation Results of the Proposed Controller Using the BKS Operator In this section, the four disturbance cases shown in Figure 4 are used to verify the controller's ability to suppress different disturbances in different environments, taking the starting process of the servo motors at each joint of the manipulator as the experimental setting. The control effects of the proposed improvements on the various characteristic disturbances are illustrated by comparing the controller with and without the BKS operator proposed in this paper. Case 1 and Case 2 in Figure 4 differ only in the amplitude of the disturbance applied to the three joints; however, Figures 5 and 6 show that the vibration caused by the larger-amplitude disturbance of Case 2 during the starting process of the servo motor is less obvious than that in Case 1, because the disturbances acting on the joints are coupled.
That is to say, the disturbance acting on a front joint may be transmitted to the joint at the end, and the disturbance compensation of the end joint will in turn affect the control signal of the front joint. Therefore, it is not possible to judge whether the stability of the manipulator servo system is severely affected from the given disturbance amplitude alone; however, comparing Figure 5a with Figure 5b and Figure 6a with Figure 6b shows that the controller with the BKS operator proposed in this paper can effectively suppress the influence of external disturbance on the manipulator servo motor. Figures 7 and 8, respectively, show the simulation results of Case 3 and Case 4 in Figure 4. The disturbances in these two cases are high-frequency and have a longer duration. The simulation results show that, without the BKS-operator-based control method proposed in this paper, the applied disturbance cannot be suppressed, and the controller cannot effectively reject it over the whole application process. This demonstrates that the manipulator servo control system with the BKS operator proposed in this paper can effectively suppress complex disturbances, and that it is effective and feasible. In addition, one more point should be noted. Comparing (a) and (b) in Figures 5-8 shows that the starting stabilization time of the servo system without the BKS operator proposed in this paper is longer than that of the controller with the BKS operator. This shows that the controller with the BKS operator can not only improve the stability of the manipulator in complex disturbance environments but also improve the starting performance of the servo system.
Experiment Results of the Manipulator Controller in a Complex Environment In order to verify the advantages of the robotic manipulator control method proposed in this paper, we compared it with a recent method in the same field [14], using the Case 2 disturbance of Figure 4, carried out both without background noise and with background noise added to the disturbance, to illustrate the effectiveness of the method in this paper. In this part, we used a manipulator model with an open-source driver and compared the application effect of the different methods on the same model. This also reflects the positive influence of the proposed servo control method on the trajectory tracking control of the controller. The control flow of the model is shown in Figure 9.
As shown in the position estimation and velocity estimation in Figure 10, the method in [14] can observe the position and velocity during the operation of the manipulator servo system, and, as shown in the disturbance estimation in Figure 10, it can control the manipulator joints reasonably well. However, in the initial stage of operation, that is, during the start-up process, obvious observation distortions appear. The error between the estimated and given results shows that, although the method is stable when observing the position, speed, and disturbance, obvious errors remain, and the observation error is especially large during the starting process. Compared with the disturbances imposed in Figures 6-8, the influence on the stability of the system is not obvious. This shows that the method proposed in [14] does not update its model quickly enough after the system state changes, and it is not suitable for manipulators that require high positioning and trajectory accuracy. Figure 10. Estimation of control signals and disturbances using the method in [14]. The experimental results of the method proposed in this paper are shown in Figure 11. Compared with Figure 10, both the position estimation accuracy and the speed estimation accuracy are improved. In particular, for the tracking of the disturbance error, not only is the estimation error for the disturbances of Figures 6-8 greatly reduced, but the proposed method can also quickly track the control signal of the upper manipulator servo system during the starting process, ensuring the overall control accuracy. Then, when the disturbance occurs, we apply high-frequency vibration noise with different amplitudes to the manipulator to simulate the mechanical vibration noise encountered in actual applications.
Figure 11. Estimation of control signals and disturbances using the proposed method. As shown in Figure 12, the method in [14] can estimate this complex disturbance to a certain extent, but the details after the two disturbances are combined are not estimated accurately, so a large estimation error arises, which in turn leads to a large estimation error in the velocity signal at the estimated location. Figure 12. Estimation of control signals and disturbances with noise using the method in [14]. However, using the method proposed in this paper, the details of the combination of noise and disturbance can be estimated. As shown in Figure 13, the influence of the simulated mechanical vibration noise on the disturbed signal is visible in the disturbance estimation, and the estimation error is also significantly reduced. At the same time, the position and speed estimation errors show the advantages of the proposed method over the method in [14] when applied to a manipulator controller. The effectiveness and feasibility of the proposed method are thus demonstrated experimentally. Figure 13. Estimation of control signals and disturbances with noise using the proposed method. Conclusions In this paper, we combine the BKS operator and the sliding mode controller of a robotic manipulator to establish a novel trajectory tracking control scheme for the robot manipulator servo system, in order to solve the problem of low trajectory tracking accuracy under complex external perturbations.
The BKS operator can accurately estimate the parameter update trend of the system and the observer based on the previous sampled data and the results of the perturbation observer; by introducing this estimation function, the steady-state error of the servo control system caused by the perturbation estimation error is effectively reduced. The proposed control strategy retains the advantages of the traditional sliding mode controller while improving the tracking accuracy of the robotic servo system under complex perturbations. Finally, simulations and experiments comprehensively verify the effectiveness and feasibility of the proposed method. The scheme proposed in this paper is applicable to scenes with complex disturbances and noise, or to industrial tasks with high accuracy requirements. However, in scenes with low accuracy requirements or only a single disturbance, the computational load of the proposed method is larger than that of traditional methods, so it is not suitable for cases with strict running-time requirements. Therefore, in the future, we will develop a disturbance identification system to switch between different control strategies in complex scenes where the disturbance form changes frequently. While improving the control accuracy, we also need to ensure the running speed of the robotic manipulator so that it can be applied to a wider range of application cases.
13,171.2
2022-12-24T00:00:00.000
[ "Computer Science", "Engineering" ]
Incorporate Online Hard Example Mining and Multi-Part Combination Into Automatic Safety Helmet Wearing Detection Automatic detection of workers wearing safety helmets at the construction site is essential for safe production. Aiming at the low recognition rate caused by factors such as background and lighting when safety helmets are detected with traditional machine learning methods, this paper proposes an object detection framework that combines Online Hard Example Mining (OHEM) and multi-part combination. In our framework, we first use multi-scale training and additional anchor scales to enhance the robustness of the original Faster RCNN algorithm to objects of different scales and to small objects. Then, OHEM is used to optimize the model and counter the imbalance between positive and negative samples. Finally, the person wearing the helmet and its parts (helmet and person) are detected by the improved Faster RCNN. The multi-part combination method uses the geometric information of the detected objects to determine whether a worker is wearing a helmet. Experiments show that, compared with the original Faster RCNN, the detection accuracy is increased by 7%. The framework also has better detection performance for partially occluded and differently sized objects, showing good generalization and robustness. I. INTRODUCTION Various risk factors threaten the safety of workers in the complex environments of chemical plants, power substations, and construction sites. The causes of injury and fatality include falls, slips, chemical corrosion, being struck by objects, and electrocution. Being struck by falling objects and falls to a lower level are the leading hazards. According to Occupational Safety and Health Administration (OSHA) statistics, about 5-6% of fatal accidents in the United States are caused by falling objects [1], and about one third of deaths result from falls to a lower level [2]. Therefore, people working in such places must wear safety helmets to protect themselves from falling objects and from falls [3]. Automatically detecting workers wearing safety helmets at the construction site and providing corresponding feedback in the monitoring system is crucial for safe production. With the development of computer technology, automatic visual detection has been widely used in industrial applications, and many related studies have been conducted on helmet wearing detection [4], [5]. Wu and Zhao [4] divided the entire helmet wearing detection process into two parts. First, workers were detected by combining the frequency domain information and the Histogram of Oriented Gradients (HOG) of the image. Then, color and Circle Hough Transform (CHT) features were combined for safety helmet detection. The method achieved a certain detection effect; however, its overall accuracy is low and only safety helmets of a specific color can be detected. Rubaiyat et al. [5] used Local Binary Patterns (LBP), Hu Moment Invariants (HMI), and the Color Histogram (C.H.) of the image to extract features of helmets of different colors, and then a hierarchical Support Vector Machine (SVM) was used to recognize safety helmets. The above methods are based on traditional machine learning for object detection. They mostly rely on subjective feature selection, which requires a solid professional foundation and rich experience.
Moreover, feature selection is time-consuming, and the generalization ability of these methods is poor; they struggle to adapt to changes in conditions such as lighting. With the rapid development of deep learning in recent years, more and more researchers have applied deep learning methods to complex tasks such as image classification [6], object recognition [7], and image segmentation and detection [8]. Object detection algorithms based on deep learning fall mainly into two categories. One is the RCNN series, such as Fast RCNN [9], Faster RCNN [10], and R-FCN [11]. Faster RCNN modularizes object detection (region proposal generation, feature extraction, object classification, and location refinement) into a single deep network framework, fully implementing end-to-end object detection. The detection results of such algorithms are more accurate, but they are slower. The other type converts the detection problem into a regression problem, for example YOLOv3 [12], SSD [13], and RetinaNet [14]. Such algorithms run faster, but their detection accuracy is lower, especially for small objects. Because detecting small helmet targets on the construction site is challenging, this paper proposes an object detection framework that combines Online Hard Example Mining (OHEM) and multi-part combination. In the framework, the OHEM strategy is employed to coarsely detect the personnel wearing safety helmets and their safety helmets. Then the multi-part combination method is used to calculate the belonging relationship between the parts and the person in order to detect the wearing of the safety helmet accurately. The main contributions of this paper are as follows: (1) To address the difficulty of learning hard negative samples, an OHEM learning strategy is proposed that first selects hard negative samples and then feeds them into the network again for retraining, so that the network pays more attention to these hard negative samples. (2) A multi-part model is proposed to determine whether a component is present at the corresponding position within the regions of interest (ROIs), which further eliminates false objects and improves the detection accuracy. (3) The framework can automatically detect the wearing of safety helmets in different construction site scenarios and obtains better detection accuracy and robustness than other state-of-the-art methods. The rest of this paper is organized as follows. Section II introduces existing safety helmet wearing detection methods. A detailed description of our proposed method is presented in Section III. In Section IV, the experimental results on our datasets are reported. Finally, the conclusion is provided in Section V. II. RELATED WORK Safety helmet wearing detection has been studied extensively in the computer vision literature. Helmet wearing detection is the basis for analyzing production safety on the construction site and provides essential technical support for intelligent video surveillance in enterprises. Most safety helmet wearing detection methods are based on traditional machine learning. For example, in 2013, Lin et al. [14] designed a helmet wearing detection system for traffic scenarios. The method detected whether a helmet was worn based on the detection of the motorcycle driver, used the upper 1/5 of the motorcycle driver's object area as a potential helmet region, and extracted local binary pattern and HOG features of the object.
Then three types of classifiers, Naive Bayes, Random Forest, and SVM, were trained for comparative experiments. The results showed that the trained random forest classifier had the best detection performance, with an accuracy of 93.08%. In 2014, Silva et al. [15] first used an Adaptive Mixture of Gaussians (AMG) to extract moving objects and then detected motorcycle drivers. Through a sub-window calculation, the human head was framed, converted to a grayscale image, and mean-filtered to remove noise; binarization and the Hough transform were then applied to find the circular head region. The LBP, HOG, and wavelet transform (W.T.) features of the head region were extracted and combined in pairs. The experimental results showed that detection based on the combination of HOG and LBP features was the best, with an accuracy as high as 94.04%. In 2018, [16] proposed a method for helmet recognition based on feature fusion. First, a head image was extracted from the acquired video. Then, the LBP (texture), Hu moment invariant (geometry), and color histogram (color) feature vectors of the head image were extracted. Finally, the head image was classified into four categories (red hard hat, yellow hard hat, blue hard hat, and no hard hat) using a hierarchical SVM (HSVM). The method in [16] not only monitored whether the worker wore a helmet but also recognized the color of the helmet. Traditional machine learning object detection algorithms require researchers to design specific, adaptable features for each detection task. Such methods optimize the feature extraction and classifier training phases separately, so the two stages do not influence each other, but they are susceptible to environmental changes. In recent years, we have witnessed advances in object detection using deep learning, which often significantly outperforms traditional computer vision methods. For example, in 2017, Wu and Zhao [4] developed a system for automatically detecting whether motorcycle drivers wear helmets in traffic scenes. First, an adaptive image subtraction method was used to acquire dynamic objects from video images. Then two different convolutional neural networks were used to perform motorcycle driver detection and helmet detection. The experiment used two data sets: one containing a single object per image without small or blurred objects, and the other containing multiple objects per image with occlusion and small objects. The average detection accuracy was as high as 92.87%. This method addresses helmet wearing in traffic scenes, where the background, categories, and postures are relatively simple. Also, it uses two deep convolutional neural networks, which is cumbersome and increases the computational complexity. Inspired by the wide application of Faster RCNN in the field of object detection, we propose in this study a framework for safety helmet wearing detection by improving Faster RCNN. III. PROPOSED METHOD This paper proposes an object detection framework that combines OHEM [17]-[19] and multi-part combination [20]-[22], adopting Faster RCNN as the backbone. Firstly, a multi-scale strategy is adopted in the network training stage to enhance robustness to object size, and the number of anchors is increased to improve the detection accuracy for small objects.
Then, the OHEM method is used to automatically select hard samples, which are fed into the network again for retraining. During the training process, samples of workers wearing safety helmets are often treated as hard negative samples, and retraining on these samples makes the network pay more attention to them. Finally, according to the geometric information relating workers and safety helmets, a multi-part model is proposed to eliminate falsely detected objects and recover missed objects, improving the detection accuracy. In the following, we discuss our framework in detail. A. MULTI-SCALE TRAINING At an actual construction site, the size difference between targets such as helmet-wearing workers and helmets is large, and the sizes of similar targets in the same image also differ. To detect safety helmets of different sizes, we use the pyramid method to extract multi-scale image semantics. The original Faster RCNN network sets the short side of the image to 600 pixels while keeping the original aspect ratio of the input image. With only one scale, the network generalizes poorly to objects of different sizes. In this paper, a multi-scale strategy is adopted during training so that the Faster RCNN network learns and extracts features of the object at different scales. During training, the input image is randomly resized, while preserving its original proportions, so that the shorter side takes one of the pixel sizes 480, 600, or 750; one of the three scales is randomly selected and fed to the network for training. Experiments show that multi-scale training enables the network to learn objects of various dimensions, making it robust to object size. To improve the network's ability to detect small targets, we also modified the anchor parameters. On top of the default settings, a set of 64 × 64 anchors (smaller than the default) allows the network to detect more small targets. During training, the RPN uses 12 anchors, with sizes 64 × 64, 128 × 128, 256 × 256, and 512 × 512 and aspect ratios of 1:1, 1:2, and 2:1. Experiments show that the added 64 × 64 scale enables the detection of smaller targets. B. OHEM A hard sample is a sample in which a wrong object is classified as correct with a high confidence. During Faster RCNN training, many ROIs are generated randomly by the RPN. Because the object occupies only a small proportion of the image, there is a large imbalance between the number of positive and negative samples, and the trained model tends to favor negative samples. To make the network pay more attention to hard samples, OHEM is incorporated into the backbone network for safety helmet wearing detection; it selects hard samples automatically without requiring a fixed positive-to-negative sample ratio. The structure of Faster RCNN with OHEM is shown in Fig. 1. The part of Faster RCNN after the ROI pooling layer is called the ROI network. With the OHEM method, the original ROI network is expanded into two ROI networks that share network parameters. One of them is read-only: all of its operations are forward passes, and its main functions are to compute and sort the loss values of all region proposals and to select the 128 region proposals with the largest loss values. The other is the standard ROI network, which contains both forward and backward operations. Its input is the hard samples selected by the read-only ROI network, and its output is the predicted classification results and bounding-box coordinates. In summary, an extra ROI network is added to select hard examples, which are then used to train the standard ROI network. This algorithm does not need a preset ratio between positive and negative samples to address the imbalance problem, and it improves the accuracy of object detection. The experiments show that the OHEM strategy enhances the discrimination ability of the algorithm and improves the detection accuracy of the network.
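A rough, framework-agnostic sketch of the hard-example selection step is given below. It assumes that per-ROI loss values have already been computed by the read-only forward pass; the selection of the 128 largest losses follows the description above, while the placeholder losses are random values used only for illustration.

import numpy as np

def select_hard_examples(roi_losses, num_hard=128):
    """Online Hard Example Mining: keep the ROIs with the largest loss.

    roi_losses: 1-D array of per-ROI losses from the read-only forward pass.
    Returns the indices of the num_hard hardest ROIs, which are then fed to
    the standard ROI network for the backward pass.
    """
    order = np.argsort(roi_losses)[::-1]          # sort losses, largest first
    return order[:num_hard]

# Illustrative usage with random placeholder losses for 2000 region proposals
rng = np.random.default_rng(0)
losses = rng.gamma(shape=1.0, scale=0.5, size=2000)
hard_idx = select_hard_examples(losses, num_hard=128)
print(len(hard_idx), losses[hard_idx].min() >= np.median(losses))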
C. MULTI-PART COMBINATION Whether a worker wears a helmet is determined mainly by whether there is a helmet in the worker's head area. Images of helmets, workers wearing helmets, and workers not wearing helmets are labeled, and the optimized Faster RCNN network is then used for model training. Since the helmet occupies only a small part of the helmet-wearing region, the network may confuse workers who do not wear helmets with workers who do, resulting in wrong detections. According to the geometric relationship between the helmet and the worker, the positional relationship between the object and its parts is calculated to eliminate falsely detected objects. We propose a multi-part combination method to detect the helmet on the worker's head, as shown in Fig. 2. The training dataset labeled with these types of targets is input into the optimized Faster RCNN network for training. After the initial detection with the optimized Faster RCNN framework, the detection confidence threshold is lowered so that more targets (workers wearing helmets and workers not wearing helmets) and parts (helmets) are retained. For each detected helmet-wearing worker, the relationship between the part and the worker is judged by calculating their overlap ratio, and the relative positional relationship between the part and the worker is used to determine the object category. We take the upper 1/3 of the target as the potential helmet area. If the part has the highest overlap ratio with the target and the relative positional relationship is correct, the target is judged to be a worker wearing a helmet; otherwise, it is a false detection. For a target detected as a worker not wearing a helmet, we check whether there is a helmet at the top of the target; if there is, the target is a false detection. The Faster RCNN part of the framework handles dataset format conversion and model optimization, and the multi-part combination method determines whether the corresponding part, such as a safety helmet, exists in the object area. The whole process of our framework is as follows (a rough code sketch of the geometric check is given after this list). (a) Obtain datasets from the VOC2012 dataset [23]-[25], the Internet, and other sources; label the helmet, the worker, and the worker wearing a helmet; and convert the data to the VOC2007 dataset format. (b) Import the processed dataset into the improved Faster RCNN model for training. (c) Based on the optimized model, reduce the detection confidence threshold and detect workers wearing helmets, safety helmets, and so on; then eliminate the isolated parts. (d) For the remaining pending objects, calculate whether there is a matching part. If there is, it is our object; otherwise, remove it.
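As a hedged illustration of the geometric check in step (d), the sketch below tests whether a detected helmet box overlaps the upper 1/3 of a detected worker box, using the standard intersection-over-union; the acceptance value of 0.3 and the toy coordinates are illustrative assumptions, not thresholds taken from the paper.

def iou(box_a, box_b):
    """Standard intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def wears_helmet(worker_box, helmet_boxes, iou_thresh=0.3):
    """Return True if some helmet box overlaps the top 1/3 of the worker box."""
    x1, y1, x2, y2 = worker_box                       # image y grows downwards
    head_region = (x1, y1, x2, y1 + (y2 - y1) / 3.0)  # upper third of the worker
    return any(iou(h, head_region) > iou_thresh for h in helmet_boxes)

# Illustrative usage with made-up detections
worker = (100, 50, 180, 300)
helmets = [(110, 55, 170, 110)]
print(wears_helmet(worker, helmets))   # True for this toy example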
After detection with the improved Faster RCNN, the detected objects fall into two categories: workers wearing helmets and related parts such as safety helmets. If the confidence of an object area is less than 0.95 [26], [27], the relative positional relationship and the overlap rate between the part and the detected object are calculated to judge their affiliation. If the overlap rate is the largest and the relative positional relationship is correct (for example, the safety helmet lies in the top 1/3 of the object area), we can confirm that this is our object. Here PartArea is the area of the helmet or other part after detection, OverallArea is the object area, and the IoU is the overlap rate between the part and the object. Finally, isolated detection results are removed, and what remains is our object, i.e., a worker wearing a helmet. IV. EXPERIMENT ANALYSIS The data set we created, the comparative methods, and the performance metrics used to validate our approach are presented in this section. The proposed algorithm is compared with current typical object detection algorithms on our data set. The experiments used the Caffe (Convolutional Architecture for Fast Feature Embedding) deep learning framework for the related code and parameter training. The network framework of Faster RCNN uses the VGG 16 network. Experimental environment configuration: GPU GeForce GTX 1080Ti, CUDA 8.0, Ubuntu 16.04, 12 GB memory. A. DATASETS AND EVALUATION METRICS Images of workers at the construction site are the basis for studying helmet wearing on the construction site, but at present there is no publicly available image data set of workers at construction sites. Image data are an indispensable element of image processing tasks, and the quality and quantity of image data have a significant impact on the results of helmet wearing detection. Following the construction criteria of the PASCAL VOC dataset, and considering the requirements of this method and the characteristics of the detection targets, we establish a more standardized construction site worker image dataset containing multiple scenarios and multiple targets. Because there is no public dataset for safety helmet wearing detection research, the data used in this experiment were collected from the VOC2012 dataset, self-collection, and online collection. A total of 7000 images were collected, including monitoring pictures of different quality under various background scenes at construction sites and substations. Some example images are shown in Fig. 3. According to the experimental requirements, the data were converted into the VOC2007 dataset format, with each part labeled manually; an example is shown in Fig. 4. In addition, an extra 200 monitoring images of actual work scenes were collected for testing to verify the effectiveness of the proposed method. To evaluate the effectiveness of the proposed method for object detection, the experiments use precision and recall [28], computed as Precision = TP / (TP + FP) and Recall = TP / (TP + FN) (Eq. (2) and Eq. (3)), where TP (True Positive) is a positive sample predicted as positive by the model, FP (False Positive) is a negative sample predicted as positive, and FN (False Negative) is a positive sample predicted as negative.
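For concreteness, the two metrics can be computed directly from the detection counts. The small sketch below simply applies the definitions above; the counts are made up for illustration.

def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative counts only, e.g., for a test set of a few hundred objects
p, r = precision_recall(tp=340, fp=25, fn=37)
print(f"precision = {p:.3f}, recall = {r:.3f}")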
B. COMPARE THE DETECTION EFFECTS OF THE ORIGINAL FASTER RCNN AND THE IMPROVED FASTER RCNN ON THE SAME SAMPLES In order to verify the effectiveness of the improved Faster RCNN, 7000 pictures in VOC2007 format are used as the training set. The original Faster RCNN network and the improved Faster RCNN (trained with multi-scale training, additional anchors, and OHEM) are both trained. The two models were tested using 200 actual-scene monitoring images (containing 377 objects), and their results are shown in Table 1. Table 1 shows that the improved Faster RCNN improves test accuracy by 3.85% and recall by 8.23%. Compared with the original Faster RCNN network, the accuracy and recall of the optimized Faster RCNN are greatly improved, while false detections and missed detections are reduced. For helmet-wearing workers, reducing false detections lowers the false alarm rate, whereas reducing missed detections increases the true alarm rate. The optimized Faster RCNN network is therefore robust in complex scenes including chemical plants, substations, and building construction. Fig. 5 shows the detection results of the two algorithms on an actual picture. The green, purple, and yellow boxes are the worker, the safety helmet, and the worker wearing the safety helmet, respectively; the category name and confidence value are displayed above each bounding box. As can be seen from Fig. 5, the improved Faster RCNN is significantly better than the original Faster RCNN: Fig. 5(b) detects more occluded targets and small targets than Fig. 5(a), and the detected targets have higher confidence values and more accurate positions. Experiments show that the improved Faster RCNN network effectively optimizes the model. C. THE EFFECT OF TRAINING THE NETWORK USING DIFFERENT STRATEGIES To verify the effectiveness of the different strategies, the network was trained and tested with each of them; the detection performance is shown in Table 2. Comparing strategy 1 with strategy 2, the detection accuracy increases by 0.79% because a set of 64 × 64 anchors is added to the network; the number of anchors is increased from 9 to 12 so that the network can detect small objects. Comparing strategy 2 with strategy 3, the test precision of the network model improves by about 1.15%, because the model adopts the multi-scale training strategy, which gives the network a certain robustness to objects of different sizes. Comparing strategy 2 with strategy 4, the detection accuracy of strategy 4 improves by about 1.52% due to the OHEM mechanism, which addresses the overly large negative sample space during training and enhances the discriminative power of the network. In conclusion, all three strategies improve the detection performance of the network model. D. DETERMINE THE CONFIDENCE THRESHOLD To obtain more objects and parts, such as workers wearing safety helmets and safety helmets, in the initial detection stage, a lower confidence threshold should be set. The experiment discusses the confidence threshold based on the improved Faster RCNN framework. Table 3 shows the object detection results under different confidence thresholds. As shown in Table 3, when the confidence threshold is 0.2, workers wearing safety helmets have the lowest miss rate, while the false detection rate is lower than with a threshold of 0.1.
To maximize detection accuracy, the confidence threshold is therefore selected as 0.2. As shown in Fig. 6, when the confidence threshold is 0.2, the worker wearing a safety helmet is detected in Fig. 6(a) with a confidence of 0.548. Parts with lower confidence are also detected, so more objects and parts can be obtained in the initial detection stage. However, wrong objects will also be detected when the confidence threshold is lower: as shown in Fig. 6(b), the left worker is wrongly detected as a worker wearing a safety helmet. Therefore, the detected objects need to be filtered by the multi-part combination method. E. MULTI-PART COMBINATION METHOD After reducing the detection confidence threshold, the multi-part combination method is employed to detect the object more accurately. Some examples of detection results are shown in Fig. 7. After reducing the confidence threshold, the isolated parts are filtered out, and the rest are the objects expected to be detected. Fig. 7(a) shows the detection result with a high confidence threshold, and Fig. 7(b) the result after reducing the threshold; Fig. 7(b) detects more objects and parts that are missed in Fig. 7(a). Using the multi-part combination method, the overlap rate and relative positional relationship between the parts and the object are calculated; if the relationship is correct, the object is judged to be a worker wearing a safety helmet. Accordingly, the three objects in Fig. 7(a) are detected as workers wearing safety helmets. When the confidence threshold decreases, a misdetected target appears in Fig. 7(c). Using the multi-part combination method, we calculate the relative positional relationship of the parts and find that it is incorrect (the position of the safety helmet is wrong). Therefore, the misdetected object is removed, as shown in Fig. 7(d). After the multi-part combination method is applied, the misdetected objects are removed, yielding the final detection results. F. COMPARED WITH OTHER AUTOMATIC HELMET DETECTION METHODS To further verify the effectiveness of the proposed method, some representative object detection methods, in addition to the original Faster RCNN model, are selected for analysis and comparison, including the traditional machine learning method HOG+SVM. Experiments are performed using the training and test datasets described here, and the test accuracy and recall rate of each method are compared. The experimental results are shown in Table 4. The proposed method outperforms the traditional feature-based method HOG+SVM, SSD, YOLOv3, RetinaNet, and the optimized Faster RCNN algorithm. Compared with the HOG+SVM method, the test accuracy and recall rate are greatly improved, because the workers' movements are varied and the working environment is complex; when occlusion or deformation occurs, the HOG gradient features become weak and the target cannot be extracted. Compared with the optimized Faster RCNN network, the proposed method integrates the geometric position information of the target, so the detection accuracy is improved by about 3%. Exploiting the positional relationship between the target and the helmet is therefore feasible for accurately detecting whether a worker wears a helmet. G. DETECTION OF DIFFERENT SCENES AND DIFFERENT IMAGE QUALITY The test sets contain images of different scenes and qualities, as shown in Fig. 8. In Fig.
8(a), our method detected multiple objects of various sizes. It also shows better detection robustness in poor lighting, with multiple objects, and under partial occlusion, as shown in Fig. 8(b), (c), and (d). Our method can automatically detect workers wearing safety helmets in different scenarios, demonstrating its robustness. V. CONCLUSION To address the problem of small safety helmet objects in the detection of workers wearing safety helmets, we propose a deep learning object detection framework that integrates the OHEM mechanism and the multi-part method. The pyramid method is used to obtain multi-scale features of the image, and the OHEM mechanism is introduced to select hard samples, which are then sent back to the network for retraining so that the network can learn them. Through the multi-part combination method, the position information of the helmet and the worker is combined to detect workers wearing helmets. Experimental results show that the proposed method effectively improves the accuracy of automatic helmet detection and remains robust in low-light environments and for occluded images. However, workers' postures vary, and our method can only roughly select the relative position of the safety helmet and other parts; in the future, we will address this problem with pose estimation. XIN LYU is currently a Lecturer with the College of Computer and Information, Hohai University. He has published more than 60 articles. His research interests include cryptography, network information security, and privacy-preserving theory and technology. SHOUKUN XU is currently a Professor of software engineering with Changzhou University. His research interests include deep learning and image processing in chemical production. YARU WANG is currently pursuing the master's degree with Changzhou University, Changzhou, China. Her research interests include deep learning and image processing in chemical production. YUSHENG WANG is currently pursuing the master's degree with Changzhou University, Changzhou, China. His research interests include deep learning and image processing in chemical production. YUWAN GU received the Ph.D. degree in agricultural engineering from Jiangsu University, Zhenjiang, China, in 2016. She was a Lecturer of computer science and technology with the School of Information Science and Engineering, Changzhou University. Her research interests include machine learning and image processing.
6,627.6
2020-12-16T00:00:00.000
[ "Engineering", "Computer Science" ]
Apoptosis of Sensory Neurons and Satellite Cells after Sciatic Nerve Transection in C57BL/6J Mice The rate of axonal regeneration after sciatic nerve lesion in adult C57BL/6J mice is reduced when compared to other isogenic strains. It was observed that such low regeneration was not due merely to a delay, since neuronal death was observed. Two general mechanisms of cell death, apoptosis and necrosis, may be involved. By using the terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) technique, we demonstrated that a large number of sensory neurons, as well as satellite cells found in the dorsal root ganglia, were intensely labeled, indicating that apoptotic mechanisms were involved in the death process. Although almost no labeled neurons or satellite cells were observed one week after transection, a more than tenfold increase in TUNEL labeling was detected after two weeks. The results obtained with the C57BL/6J strain were compared with those of the A/J strain, which has a much higher peripheral nerve regeneration potential. In A/J mice, almost no labeling of sensory neurons or satellite cells was observed after one or two weeks, indicating the absence of neuronal loss. Our data confirm previous observations that approximately 40% of C57BL/6J sensory neurons die after sciatic nerve transection, and indicate that apoptotic events are involved. Also, our observations reinforce the hypothesis that the low rate of axonal regeneration occurring in C57BL/6J mice may be the result of a mismatch in the timing of the neurons' need for neurotrophic substances and the production of the latter by non-neuronal cells in the distal stump. Subsequent to a peripheral lesion, a broad range of events is initiated, both in the proximal and distal stump of the nerve (1). In the distal stump, a process called Wallerian degeneration starts as a local inflammation, characterized by an extensive recruitment of macrophages, which scavenge axon and myelin debris, stimulate Schwann cell proliferation and produce several signaling molecules such as cytokines (2-4). At the same time, the damaged neurons display a series of cytoplasmic alterations characterized as chromatolysis (5). It is known that such a process is the initial morphological step in a cascade of events which will be responsible for neuronal survival as well as axonal regeneration, or for neuronal death, which can occur by apoptosis or necrosis (6).
Necrosis is characterized by rapid cell disintegration, cellular swelling, organelle dysfunction and passive cell disassembly (7). The products of cytoplasmic leakage activate the immune system, leading to macrophage invasion and local inflammation. In contrast, apoptosis is an active process, extensively identified under electron microscopy by a series of morphological alterations (7). It is characterized by early chromatin condensation followed by internucleosomal DNA cleavage, cell shrinkage, reorganization of the cytoskeleton, organelle relocation and production of apoptotic bodies. Apoptosis also requires the transcription of certain genes, protease and endonuclease activation and the expression of phagocytic signals on the cell surface. The entire process ends as a silent and quick cell self-destruction, without detectable inflammation (8-10). It has been reported in the literature that C57BL/6J mice have a lower axonal regeneration potential after a crush lesion when compared to other isogenic strains. Although Lu et al. (11) proposed that such a defect would be a delay rather than a permanent neuronal impairment, Lainetti et al. (12) observed a large percentage of sensory neuron death four weeks after axotomy. It has been proposed that such neuronal death may be the result of a mismatch in the timing of the neuronal need for trophic substances and their production by the non-neuronal cells in the nerve (13). In this respect, apoptotic mechanisms may be involved and may be related to the previously described loss of C57BL/6J dorsal root ganglion (DRG) neurons. Taking into account the facts reported above, the aim of the present study was to characterize neuronal loss in C57BL/6J mice using the terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) method for the detection of apoptosis.
Six adult male mice of the C57BL/6J and A/J strains were used. The animals were anesthetized (0.25 g ketamine and 0.02 g xylazine dissolved in 5 ml of distilled water, 0.2 ml/25 g body weight, ip) and the left sciatic nerve was exposed and transected at the mid-thigh level. The proximal stump was ligated with an 8-0 Ethicon stitch and a 2-mm segment of the distal stump was resected in order to avoid regeneration. After one and two weeks (N = 3 for each strain and survival time) the animals were sacrificed with an overdose of chloral hydrate (0.6 mg/kg, ip) and perfused transcardially with 150 ml of 4% paraformaldehyde in 0.1 M PBS. The L4, L5 and L6 spinal cord segments and the L5 DRG were dissected out, left in the same fixative for 24 h, washed in PBS and processed for paraplast embedding. Transverse sections of the spinal cords (7 µm) and DRG were obtained, transferred to albumin-coated slides and stored until use. After deparaffinization, endogenous peroxidase was inactivated with 3% H2O2 in distilled water for 5 min at room temperature. The slides were transferred to a humidified chamber, the equilibration buffer solution was applied (Oncor, s7110-1) and the slides were incubated for 5 min at room temperature. The equilibration buffer was shaken off and the TdT enzyme solution (Oncor, s7110-2 and 3) applied for 60 min at 37°C. The reaction was stopped by applying the stop/wash solution (Oncor, s7110-4) for 30 min at 37°C. The sections were incubated with 2% BSA (Sigma Chemical Co., St. Louis, MO, USA) for 10 min and covered for 30 min with an anti-digoxigenin peroxidase complex (Dako ABC kit, Dako A/S, Glostrup, Denmark). The peroxidase was detected with a 3,3',5,5'-diaminobenzidine solution, washed in distilled water, counterstained with hematoxylin and mounted in Entelan (Merck, Darmstadt, Germany). The number of TUNEL-positive neurons and satellite cells in the DRG was obtained by counting four alternating sections from each animal. Six fields of each selected section were captured using a video camera (Olympus U-CMAD-2) connected to an Olympus BX60 microscope (objective, 100X), and the TUNEL-positive cells were counted with the Image Pro Plus 3.0 software (Media Cybernetics, Baltimore, MD, USA). The total number of neurons and satellite cells of each section was obtained as the sum of the numbers counted in the six sample areas of each section. For the spinal motoneurons, alternate sections of the lumbar intumescence were used and processed as described above. The numerical data are reported as mean ± SD. One week after sciatic nerve transection, almost no TUNEL labeling was detected in the DRG sections from A/J mice. Also, counterstaining revealed no signs of neuronal or satellite cell degeneration. The same pattern was observed two weeks after lesion (Figure 1). With respect to the spinal motoneurons found in the ventral horn of the lumbar spinal cord, no labeling was observed after one or two weeks.
Analysis of sections from C57BL/6J mice obtained one week after transection revealed the absence of neuronal labeling on the contralateral side. On the lesioned side, only a reduced number of TUNEL-positive neurons (18.7 ± 6.1) and satellite cells (16.7 ± 6.1) was found. For the neurons, the number of labeled profiles represented 3% of the sampled cells. However, two weeks after lesion, a marked increase in TUNEL-labeled sensory neurons (237.3 ± 13.6) was observed on the ipsilateral side, representing approximately 45% of the population studied. Such an increase in labeling occurred simultaneously in the satellite cells (232.7 ± 41.7; Figures 1 and 2). No labeling was detected on the contralateral side, and the number of normal neurons (508.0 ± 21.2) was similar to that found one week after lesion (509.1 ± 10.1).

Although a number of labeled cells showed the basic characteristics of apoptosis, i.e., chromatin condensation, cell shrinkage and fragmentation, some of the TUNEL-positive neurons and satellite cells were morphologically normal. No TUNEL labeling was found in the motor nuclei of the lumbar intumescence from C57BL/6J mice after one or two weeks.

Peripheral nerve transection in neonatal rats induces extensive death of lesioned motoneurons in the spinal cord and of primary afferent sensory neurons in the DRG (14,15). However, this kind of lesion in adult animals does not result in extensive neuronal death, although a series of alterations known as chromatolysis takes place in the cell body (14). In fact, Ekström (16) reported that only a small proportion of the lesioned neurons became apoptotic after sciatic nerve crush in vivo. Also, Schwann cells at the site of injury became TUNEL labeled in the first 2 h after injury.

The survival of adult neurons after disconnection from the target is probably due to their relative independence of certain neurotrophic molecules produced by the non-neuronal cells and the target organ. This also means that neurons from adult individuals have an intrinsic regenerative potential, which is triggered by their disconnection from the target. In this respect, after lesion, adult neurons are able to switch from a transmitting to a growth mode, producing cytoskeletal proteins such as neurofilaments, as well as CGRP and GAP-43 (17). Later, the timing between axonal elongation and the generation of a supportive microenvironment distal to the lesion seems to be essential for neuronal survival and the success of regeneration (18).

Interestingly, C57BL/6J mice have been reported to be deficient in regenerating myelinated fibers in the peripheral nerves and DRG (11,12). Although their motoneurons are able to regenerate as efficiently as those from other isogenic strains such as A/J, DBA/1J and C3H/HeJ, poor sensory neuron survival has been observed. Our results confirm these observations and indicate that apoptotic mechanisms are not present after motoneuron axotomy in A/J and C57BL/6J mice. In this case, the motoneurons probably receive the necessary amount of trophic substances, such as brain-derived neurotrophic factor and ciliary neurotrophic factor, to assure cell survival during the critical period without contact with the distal stump and the target organ.
Despite the fact that, in general, sensory neuron death is unlikely after peripheral damage in adult animals, it has been proposed as a possible explanation for the observations made in C57BL/6J mice, based on horseradish peroxidase retrograde labeling (12). The present data confirm this hypothesis and show that it is a characteristic of C57BL/6J mice, since almost no DRG neuron death was identified after A/J sciatic nerve transection, in line with what has been reported in the literature for NMRI mice (16). Also, our observations indicate that a major proportion of the cell death is the result of apoptosis and that satellite cells are also affected. By using the TUNEL technique, it was possible to identify cells with early chromatin fragmentation before any morphological sign of cell death could be noticed. Neuronal damage was therefore detected prior to cell death, at least four weeks earlier than in the previous report (12), showing that neuronal and satellite cell death occurs around the second week post-lesion. Such timing is consistently different from that reported by Ekström (16) and reinforces the fact that adult C57BL/6J mice have a greater neuronal loss after peripheral lesion compared to other strains.

The low level of TUNEL labeling one week after axotomy may be explained by the fact that, soon after lesion, neurons are able to survive disconnected from the target. Conversely, after this period there is an increasing need for a certain amount of trophic factors, such as nerve growth factor (NGF), synthesized by non-neuronal cells in the distal stump (19). If such substances are not available, neuronal death can be triggered, also resulting in the loss of satellite cells, as observed two weeks after axotomy. Interestingly, at this point we found that approximately 45% of the DRG neurons were TUNEL positive, indicating that a large number of labeled neurons would probably die within a short period of time. This is consistent with the results obtained after horseradish peroxidase tracing, where only about 60% of the neurons survived (12).

Taken together, our results support the hypothesis that DRG neuron loss is the result of a putative mismatch between the neuronal need for trophic factors and their production by the cells in the distal stump. The possible lack of trophic support results in apoptotic neuronal death, which also includes the satellite cells in the DRG. This hypothesis is in agreement with a previous report which showed that, if a predegenerated nerve was grafted to the proximal stump, the regenerative performance of C57BL/6J mice was greatly enhanced (13). Also, if exogenous NGF is administered after sciatic nerve transection, sensory neuron death can be partially reduced (20). Other studies are underway in order to identify the possible apoptotic mechanisms involved in C57BL/6J DRG cell loss after peripheral nerve lesion.
Figure 1 - Transverse sections of lumbar spinal cord and L5 dorsal root ganglia after sciatic nerve transection. A, C57BL/6J mice, two weeks after lesion. Observe the number of TUNEL-positive neurons (arrows). Normal neurons are also seen (asterisks). B, Detail of the previous picture showing apoptotic sensory neurons (thick arrows) and satellite cells (thin arrows). C, C57BL/6J normal motoneuron (arrow), two weeks after injury. D, C57BL/6J TUNEL-labeled sensory neuron (arrow) found one week after nerve transection, surrounded by normal cells. E, F, A/J mice, one and two weeks after lesion. Observe the absence of TUNEL labeling. Bar = 10 µm.

Figure 2 - Number of TUNEL-labeled dorsal root ganglion neurons and satellite cells from C57BL/6J mice, one and two weeks after sciatic nerve transection. The number of normal neurons is also presented. Observe the reduced number of TUNEL-positive neurons (3%) and satellite cells one week after lesion. Two weeks after injury, the number of apoptotic neurons (45%) and satellite cells was greatly enhanced.
3,312.4
2001-03-01T00:00:00.000
[ "Biology", "Medicine" ]
Storing Reproducible Results from Computational Experiments using Scientific Python Packages —Computational methods have become a prime branch of modern science. Unfortunately, retractions of papers in high-ranked journals due to erroneous computations as well as a general lack of reproducibility of results have led to a so-called credibility crisis. The answer from the scientific community has been an increased focus on implementing reproducible research in the computational sciences. Researchers and scientists have addressed this increasingly important problem by proposing best practices as well as making available tools for aiding in implementing them. We discuss and give an example of how to implement such best practices using scientific Python packages. Our focus is on how to store the relevant metadata along with the results of a computational experiment. We propose the use of JSON and the HDF5 database and detail a reference implementation in the Magni Python package. Further, we discuss the focuses and purposes of the broad range of available tools for making scientific computations reproducible. We pinpoint the particular use cases that we believe are better solved by storing metadata along with results the same HDF5 database. Storing metadata along with results is important in implementing reproducible research and it is readily achievable using scientific Python packages. Introduction Exactly how did I produce the computational results stored in this file?Most data scientists and researchers have probably asked this question at some point.For one to be able to answer the question, it is of utmost importance to track the provenance of the computational results by making the computational experiment reproducible, i.e. describing the experiment in such detail that it is possible for others to independently repeat it [LMS12], [Hin14].Unfortunately, retractions of papers in high-ranked journals due to erroneous computations [Mil06] as well as a general lack of reproducibility of computational results [Mer10], with some studies showing that only around 10% of computational results are reproducible [BE12], [RGPN + 11], have led to a so-call credibility crisis in the computational sciences. The answer has been a demand for requiring research to be reproducible [Pen11].The scientific community has acknowledged that many computational experiments have become so complex that more than a textual presentation in a paper or a technical report is needed to fully detail it.Enough information to make the experiment reproducible must be included with the textual presentation [RGPN + 11], [CG12], [SLP14].Consequently, reproducibility of computational results have become a requirement for submission to many high-ranked journals [Edi11], [LMS12]. But how does one make computational experiments reproducible?Several communities have proposed best practices, rules, and tools to help in making results reproducible, see e.g.[VKV09], [SNTH13], [SM14], [Dav12], [SLP14].Still, this is an area of active research with methods and tools constantly evolving and maturing.Thus, the adoption of the reproducible research paradigm in most scientific communities is still ongoing -and will be for some time.However, a clear description of how the reproducible research paradigm fits in with customary workflows in a scientific community may help speed up the adoption of it.Furthermore, if tools that aid in making results reproducible for such customary workflows are made available, they may act as an additional catalyst. 
In the present study, we focus on giving guidelines for integrating the reproducible research paradigm in the typical scientific Python workflow.In particular, we propose an easy to use scheme for storing metadata along with results in an HDF5 database.We show that it is possible to use Python to adhere to best practices for making computational experiments reproducible by storing metadata as JSON serialized arrays along with the results in an HDF5 database.A reference implementation of our proposed solution is part of the open source Magni Python package. The remainder of this paper is organized as follows.We first describe our focus and its relation to a more general data management problem.We then outline the desired workflow for making scientific Python experiments reproducible and briefly review the fitness of existing reproducibility aiding tools for this workflow.This is continued by a description of our proposed scheme for storing metadata along with results.Following this specification, we detail a reference implementation of it and give plenty examples of its use.The paper ends with a more general discussion of related reproducibility aiding software packages followed by our conclusions. The Data Management Problem Reproducibility of computational results may be considered a part of a more general problem of data management in a computational study.In particular, it is closely related to the data management tasks of documenting and describing data.A typical computational study involves testing several combinations of various elements, e.g.input data, hardware platforms, external software libraries, experiment specific code, and model parameter values.Such a study may be illustrated as a layered graph like the one shown in figure 1.Each layer corresponds to one of the elements, e.g. the version of the NumPy library or the set of parameter values.The edges in the graph mark all the combinations that are tested.An example of a combination that constitutes a single simulation or experiment is the set of connected vertices that are highlighted in the graph in figure 1.In the present study, we focus on the problem of documenting and describing such a single simulation.A closely related problem is that of keeping track of all tested combinations, i.e. the set of all paths through all layers in the graph in figure 1.This is definitely also an interesting and important problem.However, once the "single simulation" problem is solved, it should be straight forward to solve the "all combinations" problem by appropriately combining the information from all the single simulations. Storing Metadata Along With Results For our treatment of reproducibility of computational results, we adopt the meaning of reproducibility from [LMS12], [Hin14].That is, reproducibility of a study is the ability of others to repeat the study and obtain the same results using a general description of the original work.The related term replicability then means the ability of others to repeat the study and obtain the same results using the exact same setup (code, hardware, etc.) as in the original work 1 .As pointed out in [Hin14], reproducibility generally requires replicability. 
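Returning to the layered-graph view of a computational study discussed above, the bookkeeping of tested combinations can be made concrete with a small sketch (the layer contents are hypothetical); each path through the layers corresponds to one single simulation to be documented:

    # Sketch: enumerate every path through the layers of figure 1 (hypothetical layer contents).
    from itertools import product

    layers = {
        "input_data": ["dataset_A", "dataset_B"],
        "platform": ["workstation", "cluster_node"],
        "numpy_version": ["1.9.2", "1.10.1"],
        "script": ["my_experiment.py"],
        "parameters": [{"n": 100}, {"n": 1000}],
    }

    combinations = list(product(*layers.values()))
    print(f"{len(combinations)} single simulations to document")
    for combo in combinations[:3]:
        print(dict(zip(layers.keys(), combo)))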
The lack of reproducibility of computational results is oftentimes attributed to missing information about critical computational details such as library versions, parameter values, or precise descriptions of the exact code that was run [LMS12], [BPG05], [RGPN + 11], [Mer10].Several studies have given best practices for how to detail such metadata to make computational results reproducible, see e.g.[VKV09], [SNTH13], [SM14], [Dav12]. Here we detail the desired workflow for storing such metadata along with results when using a typical scientific Python workflow in the computational experiments.That is, we detail how to document a single experiment as illustrated by the highlighted vertices in figure 1. The Scientific Python Workflow In a typical scientific Python workflow, we define an experiment in a Python script and run that script using the Python interpreter, e.g. This is a particularly generic setup that only requires the availability of the Python interpreter and the libraries imported in the script.We argue that for the best practices for detailing a computational study to see broad adoption by the scientific Python community, three elements are of critical importance: Any method or tool for storing the necessary metadata to make the results reproducible must 1. be very easy to use and integrate well with existing scientific Python workflows.2. be of high quality to be as trustworthy as the other tools in the scientific Python stack.3. store the metadata in an open format that is easily inspected using standard viewers as well as programmatically from Python. These elements are some of the essentials that have made Python so popular in the scientific community 2 .Thus, for storing the necessary metadata, we seek a high quality solution which integrates well with the above exemplified workflow.Furthermore, the metadata must be stored in such a way that is is easy to extract and inspect when needed. Existing Tools Several tools for keeping track of provenance and aiding in adhering to best practices for reproducible research already exist, e.g.Sumatra [Dav12], ActivePapers [Hin15], or Madagascar [Fom15].Tools like Sumatra, ActivePapers, and Madagascar generally function as reproducibility frameworks.That is, when used with Python, they wrap the standard Python interpreter with a framework that in addition to running a Python script (using the standard Python interpreter) also captures and stores metadata detailing the setup used to run the experiment.E.g. when using Sumatra, one would replace python my_experiment.py with [Dav12] $ smt run -e python -m my_experiment.py 1.Some authors (e.g.[SLP14]) swap the meaning of reproducibility and replicability compared to the convention, we have adopted. 2. See http://cyrille.rossant.net/why-using-python-for-scientificcomputing/for an overview of the main arguments for using Python for scientific computing.This idea of wrapping a computational simulation is different from the usual scientific Python workflow which consists of running a Python script that imports other packages and modules as needed, e.g.importing NumPy for numerical computations.This difference is illustrated in figure 2. 
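For concreteness, the plain workflow that figure 2 contrasts with a wrapping framework might look like the following sketch; the script contents and function names are hypothetical stand-ins for a real experiment.

    # my_experiment.py -- a minimal sketch of the typical scientific Python workflow.
    # A framework such as Sumatra wraps the call "python my_experiment.py", whereas an
    # importable reproducibility library would simply be imported below, next to NumPy.
    import numpy as np


    def run_my_experiment(n=1000, seed=0):
        """Toy computation standing in for a real simulation."""
        rng = np.random.default_rng(seed)
        samples = rng.standard_normal(n)
        return {"mean": float(samples.mean()), "std": float(samples.std())}


    if __name__ == '__main__':
        results = run_my_experiment()
        print(results)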
We argue that an importable Python library for aiding in making results reproducible has several advantages compared to using a full blown reproducibility framework. A major element in using any tool for computational experiments is being able to trust that the tool does what it is expected to do. The scientific community trusts Python and the SciPy stack. For a reproducibility framework to be adopted by the community, it must build trust as the wrapper of the Python interpreter that it effectively is. That is, one must trust that it handles experiment details such as input parameters, library paths, etc. just as accurately as the Python interpreter would have done. Furthermore, such a framework must be able to fully replace the Python interpreter in all existing workflows which use the Python interpreter. A traditional imported Python library does not have these potentially staggering challenges to overcome in order to see wide adoption. It must only build trust among its users in the same way as any other scientific library. Furthermore, it would be easy to incorporate into any existing workflow. Thus, ideally we seek a solution that allows us to update our my_experiment.py to have a structure like the short snippet reproduced before Fig. 3 below, in which the metadata is stored just before the experiment is run. Interestingly, the authors of the Sumatra package have to some degree pursued this idea by offering an API for importing the library as an alternative to using the smt run command line tool.

Equally important to how the results are obtained is how they can be inspected afterwards. Thus, one may ask: How are the results and the metadata stored, and how may they be accessed later on? For example, Sumatra by default stores all metadata in a SQLite database [Dav12] separate from the simulation results (which may be stored in any format), whereas ActivePapers stores the metadata along with the results in an HDF5 database [Hin15]. The idea of storing (or "caching") intermediate results and metadata along with the final results has also been pursued in another study [PE09].

We argue that this idea of storing metadata along with results is an excellent solution. Having everything compiled into one standardized and open file format helps keep track of all the individual elements and makes it easy to share the full computational experiment including results and metadata. Preferably, such a file format should be easy to inspect using a standard viewer on any platform, just like the Portable Document Format (PDF) has made it easy to share and inspect textual works across platforms. The HDF5 Hierarchical Data Format [FP10] is a great candidate for such a file format due to the availability of cross-platform viewers like HDFView and HDFCompass as well as its capabilities in terms of storing large datasets. Furthermore, HDF5 is recognized in the scientific Python community with bindings available through e.g. PyTables, h5py, or Pandas [McK10]. Also, bindings for HDF5 exist in several other major programming languages.
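As a minimal sketch of the storage scheme argued for here (assuming h5py as the HDF5 interface and illustrative dataset names, not the layout of any particular package), the metadata dictionary is serialized to JSON and written into the same HDF5 file that holds the results, and it can be read back and deserialized later:

    # Sketch: results and JSON-serialized metadata stored side by side in one HDF5 file.
    # Group/dataset names ('results/curve', 'metadata/annotations') are illustrative only.
    import json

    import h5py
    import numpy as np

    result = np.linspace(0.0, 1.0, 11) ** 2                      # stand-in for a real result
    metadata = {
        "description": "toy experiment",
        "parameters": {"n_points": 11, "exponent": 2},
        "numpy_version": np.__version__,
    }

    with h5py.File("my_experiment.hdf5", "w") as db:
        db.create_dataset("results/curve", data=result)
        db.create_dataset("metadata/annotations",
                          data=json.dumps(metadata, indent=2),
                          dtype=h5py.string_dtype())

    # The JSON string is readable in any HDF5 viewer and easily deserialized from Python.
    with h5py.File("my_experiment.hdf5", "r") as db:
        raw = db["metadata/annotations"][()]
        annotations = json.loads(raw.decode() if isinstance(raw, bytes) else raw)

    print(annotations["parameters"])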
Suggested Library Design Our above analysis reveals that all elements needed for implementing the reproducible research paradigm in scientific Python are in fact already available in existing reproducibility aiding tools: Sumatra may serve as a Python importable library and the ActivePapers project shows how metadata may be stored along with results in an HDF5 database.However, no single tool offers all of these elements for the scientific Python workflow.Consequently, we propose creating a scientific Python package that may be imported in existing scientific Python scripts and may be used to store all relevant metadata for a computational experiment along with the results of that experiment in an HDF5 database. Technically, there are various ways to store metadata along with results in an HDF5 database.The probably most obvious way is to store the metadata as attributes to HDF5 tables and arrays containing the results.However, this approach is only recommended for small metadata (generally < 64KB) 8 .For larger metadata it is recommended to use a separate HDF5 array or table for storing the metadata 9 .Thus, for the highest flexibility, we propose to store the metadata as separate HDF5 arrays.This also allows for separation of specific result arrays or tables and general metadata.When using separate metadata arrays, a serialization (a representation) of the metadata must be chosen.For the metadata to be humanly readable using common HDF viewers, it must be stored in an easily readable string representation.We suggest using JSON [ECM13] for serializing the metadata.This makes for a humanly readable representation.Furthermore, JSON is a standard format with bindings for most major programming languages 10 .In particular, Python bindings are part of the standard library (introduced in Python 2.6) 11 .This would effectively make Python >=2.6 and an HDF5 Python interface the only dependencies of our proposed reproducibility aiding library.We note, though, that the choice of JSON is not crucial.Other formats similar to JSON (e.g.XML 12 or YAML 13 ) may be used as well.We do argue, though, that a humanly readable format should be used such that the metadata may be inspected using any standard HDF5 viewer. Magni Reference Implementation A reference implementation of the above suggested library design is available in the open source Magni Python package [OPA + 14].In particular, the subpackage magni.reproducibility is based on this suggested design.Figure 3 gives an overview of the magni.reproducibilitysubpackage.Additional resources for magni are: In magni.reproducibility, a differentiation is made between annotations and chases.Annotations are metadata that describe the setup used for the computation, e.g. the computational environment, values of input parameters, platform (hardware/OS) details, and when the computation was done.Chases on the other hand are metadata describing the specific code that was used in the computation and how it was called, i.e. they chase the provenance of the results. 
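The split between annotations and chases can be illustrated with a standard-library-only sketch; the helper names below are hypothetical and are not the magni.reproducibility API, which provides its own data-gathering and io functions.

    # Sketch of the annotation/chase distinction using only the standard library.
    # Function names are hypothetical; magni.reproducibility provides its own helpers.
    import datetime
    import json
    import platform
    import subprocess
    import sys


    def collect_annotations(parameters):
        """Metadata describing the computational setup (an 'annotation')."""
        return {
            "datetime": datetime.datetime.now().isoformat(),
            "platform": platform.platform(),
            "python": sys.version,
            "parameters": parameters,
        }


    def collect_chases(script_path):
        """Metadata chasing the provenance of the code that was run (a 'chase')."""
        try:
            git_revision = subprocess.check_output(
                ["git", "rev-parse", "HEAD"], text=True).strip()
        except (OSError, subprocess.CalledProcessError):
            git_revision = "unknown"
        with open(script_path) as handle:
            source = handle.read()
        return {"main_file": script_path, "git_revision": git_revision, "source": source}


    print(json.dumps(collect_annotations({"n_points": 11}), indent=2))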
Requirements Magni uses PyTables as its interface to HDF5 databases. Thus, had magni.reproducibility been a package of its own, only Python and PyTables would have been requirements for its use; the full set of requirements for magni (as of version 1.5.0) is larger. We now give several smaller examples of how to use magni.reproducibility to implement the best practices for reproducibility of computational results described in [VKV09], [SNTH13], [SM14]. An extensive example of the usage of magni.reproducibility is available at doi:10.5278/VBN/MISC/MagniRE. This extensive example is based on a Python script used to simulate the Mandelbrot set using the scientific Python workflow described above. An example of a resulting HDF5 database containing both the Mandelbrot simulation result and the metadata is also included. Finally, the example includes a Jupyter Notebook showing how to read the metadata using magni.reproducibility.

Quality Assurance The Magni Python package is fully documented and comes with an extensive test suite. It has been developed using best practices for developing scientific software [WAB + 14], and all code has been reviewed by at least one person other than its author prior to its inclusion in Magni. All code adheres to the PEP8 style guide (https://www.python.org/dev/peps/pep-0008/), and no function or class has a cyclomatic complexity [McC76], [WM96] exceeding 10. The source code is under version control using Git, and a continuous integration system based on Travis CI (https://travis-ci.org/) is in use for the git repository. More details about the quality assurance of magni are given in [OPA + 14].

Related Software Packages Independently of the tool or method used, making results from scientific computations reproducible is not only for the benefit of the audience. As pointed out in several studies [Fom15], [CG12], [VKV09], the author of the results gains at least as much in terms of increased productivity. Thus, using some method or tool to help make the results reproducible is a win for everyone. In the present work we have attempted to detail the ideal solution for how to do this for the typical scientific Python workflow.

A plethora of related alternative tools exists for aiding in making results reproducible. We have already discussed ActivePapers [Hin15], Sumatra [Dav12], and Madagascar [Fom15], which are general reproducibility frameworks that allow for wrapping most tools, not only Python based computations. Such tools are definitely excellent for some workflows. In particular, they seem fit for large fixed setups which require keeping track of several hundred runs that only differ by the selection of parameters, and for which the time cost of initially setting up the tool is insignificant compared to the time cost of the entire study. That is, they are useful in keeping track of the full set of combinations in a large computational study, as marked by all the edges in the layered graph in figure 1. However, as we have argued, they are less suitable for documenting a single experiment based on the typical scientific Python workflow. Also, these tools tend to be designed for use on a single computer and thus do not scale well for big data applications which run on compute clusters.
Another category of related tools is graphical user interface (GUI) based workflow managing tools like Taverna [OAF + 04] or Vistrail [SFC07]. Such tools seem to be specifically designed for describing computational workflows in particular fields of research (typically bioinformatics related fields). It is hard, though, to see how they can be effectively integrated with the typical scientific Python workflow. Other, much more Python oriented tools are the Jupyter Notebook as well as Dexy. These tools, however, seem to have more of a focus on implementing the concept of literate programming and documentation than on reproducibility of results in general.

Conclusions We have argued that metadata should be stored along with computational results in an easily readable format in order to make the results reproducible. When implementing this in a typical scientific Python workflow, all necessary tools for making the results reproducible should be available as an importable package. We suggest storing the metadata as JSON serialized arrays along with the results in an HDF5 database. A reference implementation of this design is available in the open source Magni Python package, which we have detailed with several examples of its use. All of this shows that storing metadata along with results is important in implementing reproducible research and that it is readily achievable using scientific Python packages.

Fig. 1: Illustration of a typical data management description problem as a layered graph. In this exemplified experiment, several combinations of input data, hardware platforms, software libraries (e.g. NumPy), algorithmic/experimental setup (described in a Python script), and parameter values are tested. The challenging task is to keep track of both the full set of combinations tested (marked by all the edges in the graph) as well as the individual simulations (e.g. the combination of highlighted vertices).

Fig. 2: Illustration of the difference between a full reproducibility framework (on the left) and an importable Python library (on the right). The reproducibility framework calls the metadata collector as well as the Python interpreter, which in turn runs the Python simulation script, which e.g. imports NumPy. When using an importable library, the metadata collector is imported in the Python script alongside e.g. NumPy. The resulting script structure referred to in the text is:

    if __name__ == '__main__':
        reproducibility_library.store_metadata(...)
        run_my_experiment(...)

Fig. 3: Illustration of the structure of the magni.reproducibility subpackage of Magni. The main modules are the data module for acquiring metadata and the io module for interfacing with an HDF5 database when storing as well as reading the metadata. A subset of the available functions is listed next to the modules.
4,608.8
2016-08-29T00:00:00.000
[ "Computer Science" ]
Fast Sample Adaptive Offset Jointly Based on HOG Features and Depth Information for VVC in Visual Sensor Networks Visual sensor networks (VSNs) can be widely used in multimedia, security monitoring, network camera, industrial detection, and other fields. However, with the development of new communication technology and the increase of the number of camera nodes in VSN, transmitting and compressing the huge amounts of video and image data generated by video and image sensors has become a major challenge. The next-generation video coding standard—versatile video coding (VVC), can effectively compress the visual data, but the higher compression rate is at the cost of heavy computational complexity. Therefore, it is vital to reduce the coding complexity for the VVC encoder to be used in VSNs. In this paper, we propose a sample adaptive offset (SAO) acceleration method by jointly considering the histogram of oriented gradient (HOG) features and the depth information for VVC, which reduces the computational complexity in VSNs. Specifically, first, the offset mode selection (select band offset (BO) mode or edge offset (EO) mode) is simplified by utilizing the partition depth of coding tree unit (CTU). Then, for EO mode, the directional pattern selection is simplified by using HOG features and support vector machine (SVM). Finally, experimental results show that the proposed method averagely saves 67.79% of SAO encoding time only with 0.52% BD-rate degradation compared to the state-of-the-art method in VVC reference software (VTM 5.0) for VSNs. Introduction Recently, the advances in imaging and micro-electronic technologies enable the development of visual sensor networks (VSNs) [1,2]. By integration of low-power and low-cost visual sensors, VSNs can obtain multimedia data such as images and video sequences. As the key applications in VSNs, video transmission and compression technology have been increasingly used in the field of communication and broadcasting. Especially with the development of Internet of Things [3][4][5][6] and 5G techniques [7,8], the transmission of video and multimedia information in mobile communication have become the current hot technology, and improving the compression performance of mobile videos could combine the mobile application with communication better in VSNs. Due to the increasing pressure of video storage and transmission [9,10], more and more efficient video coding standards have been put out in the last few decades. High-Efficiency Video Coding (HEVC/H.265) [11] is developed by Joint Collaborative Team of Video Coding (JCT-VC). Compared with advanced video coding (AVC/H.264), HEVC achieves equivalent subjective video quality with approximately 50% bit rate reduction. As the upcoming standard with the most advanced video coding technology, versatile video coding (VVC/H.266) [12,13] can reduce the bit rate by 40% while maintaining the same quality compared to HEVC. Therefore, it is very suitable for high-resolution and different formats of videos in VVC, such as virtual reality (VR) video [14] and ultra high-definition video [15]. However, block-based coding structures and quantization structures are still inherited, which cause artifacts in VVC, such as blocking artifacts, ringing artifacts, and blurring artifacts [16]. In order to reduce the ringing artifacts and distortions, VVC also adopts the sample adaptive offset (SAO) filter as in HEVC [17,18]. 
The thought of SAO is to reduce the distortion between the original samples and reconstructed samples by conditionally adding an offset value to each sample inside coding tree unit (CTU) [18]. Although the SAO process effectively improves the coding quality, it brings computational redundancy [19] as SAO not only refers to each original sample and reconstructed sample to collect statistic data, but also uses recursive rate distortion optimization (RDO) calculation to select the best SAO parameters [20]. Moreover, the coding complexity of VVC has increased greatly at the same time, which may be four to five times more complex than the current HEVC video coding standard [21][22][23]. Moreover, new video applications in VSNs need more bandwidth and less delay [24] when transmitting wireless communication, which brings great challenges to video coding and transmission in VSNs. Therefore, reducing the coding complexity of VVC becomes an important issue for VSNs. Thus, this paper proposes a SAO acceleration method to reduce the SAO coding time, thereby reducing the coding complexity of VVC and improving the efficiency of video transmission for VSNs. The main contributions of this paper can be summarized as follows. (1) A new depth-based offset mode selection scheme of SAO is proposed for VVC. According to the partition depth of CTU, the edge offset (EO) mode and the band offset (BO) mode are adaptively selected. (2) A histogram of an oriented gradient (HOG) feature-based directional pattern selection scheme is proposed for EO mode. The HOG features [25] of CTU are extracted and input to the support vector machine (SVM). The best directional pattern is output, skipping the RDO calculation process and sample collection statistics of the other three directional patterns. The rest of the paper is organized as follows. Section 2 introduces the related work. Section 3 describes the overview of SAO algorithm in VVC. Section 4 introduces the proposed method. Experimental results are shown in Section 5. Section 6 concludes this paper. Related Work In recent years, many researchers have proposed improved methods to reduce the computational complexity of SAO. They can be classified into two categories: The first category focuses on reducing the complexity by improving the SAO algorithm directly. Joo et al. [26] proposed a fast parameter estimation algorithm for SAO by using the intra-prediction mode information in the spatial domain instead of searching all EO patterns exhaustively to simplify the decision of the best edge offset (EO) pattern. Furthermore, they also proposed to make a simplified decision of the best SAO edge offset pattern by using the dominant edge direction [27], which reduced the RDO calculation and sped up the SAO encoding process in HEVC. Zhang et al. [28] proposed to distinguish videos according to texture complexity and performed an adaptive offset process by reducing some unnecessary sample offsets to improve the video coding efficiency. Gendy et al. [29] proposed an algorithm to reduce the complexity of SAO parameter estimation by adaptively reusing the dominant mode of corresponding set of CTUs, which saved SAO encoding time. Although the above two methods save SAO encoding time, the BD-rate gain loss is large. Sungjei Kim et al. [30] proposed to decide the best SAO parameters earlier by exploiting a spatial correlation between current and neighbor SAO types, which reduced the parameter calculation of other SAO patterns. 
The other category focuses on reducing the complexity of SAO by parallel processing on a central processing unit (CPU) and graphic processing unit (GPU). Zhang et al. [31] designed the corresponding parallel algorithms for SAO by exploiting GPU multi-core computing ability, and a parallel algorithm of statistical information collection, calculation of the best offset and minimum distortion, and SAO merging was proposed. D. F. de Souza et al. [32] optimized the deblocking filter and SAO by using GPU parallelization in a HEVC decoder for an embedded system. Wang et al. [33] redesigned the statistical information collection part, which computes offset types and values, to make it well suitable for GPU parallel computing. Later, Wang et al. [34] designed the pipeline structure of HEVC coding through the cooperation of CPU and GPU. Through the joint optimization of deblocking filter and SAO, the parallelism can be improved and the computational burden of CPU can be reduced. Although the above methods have achieved good results in the research of SAO acceleration, these technologies are all designed for HEVC, and many new technologies have been added to VVC, such as multi-tree partitioning, independent coding of luma and chroma component, Cross-Component Linear Prediction (CCLM) prediction mode, Ref. [35][36][37] Position Dependent intra Prediction Combination (PDPC) technology, and so on [38,39]. These new technologies cause different encoding characteristics between VVC and HEVC. Therefore, the acceleration method of SAO for VVC needs to be re-studied. It should be noted that the proposed method utilizes depth information for SAO acceleration. Although the work in [26] is also depth-based, it is designed for HEVC. The block partition mode of VVC has large differences compared with that of HEVC. Concretely, VVC adopts the new quad-tree with nested multi-type tree (QTMT) [35,36]. Similar to HEVC, each frame is first divided into CTUs, and then further divided into smaller coding units (CUs) of different sizes. In the QTMT structure, there are five ways to split blocks, including horizontal binary tree (BH), vertical binary tree (BV), horizontal ternary tree (TH), vertical ternary tree (TV), and quad-tree (QT), and the five possible partition structures are shown in Figure 1. This division pattern means that the shape of CU includes square and rectangular. In VVC, the maximum value of CU partition is 128 × 128, and the minimum depth value is 0; the minimum partition of CU is 4 × 4, and the maximum depth is 6. In Figure 2, a possible CTU partitioning with the QTMT splits is depicted. Overview of SAO in VVC SAO, as a key technology of loop postprocessing, mainly consists of three steps: sample collection statistics, mode decision and SAO filtering. First, in the process of sample statistics collection, eight SAO offset patterns of each CTU need to be traversed, including four EO patterns, one BO pattern, two merge patterns, and one SAO off pattern. We need to traverse all possible offset patterns to collect statistic information. Then, for the mode decision, the encoder will perform RDO calculation for each pattern by the statistical data. We choose the best pattern according to the RDO calculation results of the eight patterns. Finally, in the filtering process, we add an offset value for each reconstructed sample. SAO consists of two types of algorithms: Edge Offset (EO) and Band Offset (BO). The two methods are described as follows. 
Edge Offset (EO) EO classifies samples based on direction, using four one-dimensional directional patterns: horizontal (EO_0°), vertical (EO_90°), 135° diagonal (EO_135°), and 45° diagonal (EO_45°). As shown in Figure 3, "c" represents the current sample, and "a" and "b" are its two adjacent samples. The classification of the current sample "c" is based on the comparison between "c" and its two neighboring samples. For a given EO pattern, samples are divided into five categories according to the relationship between the current pixel and its neighbor pixels. Table 1 summarizes the classification rules for each sample. The offset values are always positive for categories 1 and 2, and negative for categories 3 and 4, which indicates that EO tries to reduce the difference between the current sample and its neighbors. Table 1. Sample classification rules for EO (c is the current sample, a and b its two neighbors along the chosen direction): Category 1: c < a and c < b; Category 2: (c < a and c = b) or (c = a and c < b); Category 3: (c > a and c = b) or (c = a and c > b); Category 4: c > a and c > b; Category 0: none of the above. Band Offset (BO) BO divides the pixel range into 32 bands, where each band contains the pixels in the same intensity interval. Each band's offset value is the average difference between the original and reconstructed samples in that band. Four consecutive bands are selected to calculate the differences in pixel values between original samples and reconstructed samples, and only the four offsets of these consecutive bands are signaled to the decoder. The schematic diagram of BO mode is shown in Figure 4. Proposed SAO Method In this section, first the motivation of the proposed method is analyzed. Next, the simplification scheme of offset mode selection is introduced. Then, the simplification scheme of EO mode is presented. Finally, the process of the proposed method is summarized with a flowchart. Motivation For the BO mode, the pixel values of the compensated samples are concentrated in the four consecutive bands, so the BO mode performs better for regions where the pixel values are concentrated in small ranges. Therefore, the BO mode can be quickly selected according to the pixel distribution of different CTUs. For the EO mode, which compensates for high-frequency distortion caused by quantization, neighbor pixels are referenced for pattern selection. Therefore, the best EO pattern is closely related to the main local edge features; in local areas, consecutive samples along the local edge direction are more likely to have similar values than other samples. The HOG feature is a feature descriptor used for object detection in computer vision and image processing, formed by calculating and counting the histogram of gradient directions in local areas of an image. Therefore, the HOG features extracted from CTUs can be used to select the optimal pattern: the extracted HOG features are used as the input of the SVM, and the best EO pattern is directly selected from the output of the SVM. Simplification of Offset Mode Selection The BO mode works in areas where the pixel values are concentrated, and the pixel distribution is closely related to the picture content. We observe that regions with complex texture have complex pixel distributions with decentralized pixel values, whereas regions with simple texture have concentrated pixel distributions. At the same time, regions with complex texture are usually encoded with small CUs, and regions with simple texture are usually encoded with large CUs.
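As a stand-alone illustration of the classification rules in Table 1 (not code from the VTM encoder), the category of a sample given its two neighbors along the chosen EO direction can be written as:

    # Sketch of the EO sample classification in Table 1 (illustration only, not VTM code).
    # c is the current sample; a and b are its two neighbors along the chosen EO direction.
    def eo_category(c, a, b):
        if c < a and c < b:
            return 1                      # local minimum: positive offset
        if (c < a and c == b) or (c == a and c < b):
            return 2                      # corner below neighbors: positive offset
        if (c > a and c == b) or (c == a and c > b):
            return 3                      # corner above neighbors: negative offset
        if c > a and c > b:
            return 4                      # local maximum: negative offset
        return 0                          # none of the above: no offset applied


    assert eo_category(10, 12, 15) == 1
    assert eo_category(20, 12, 15) == 4
    assert eo_category(12, 12, 12) == 0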
For example, Figure 5 depicts the partition result of each CTU in the 87th frame of the sequence BasketballPass under the all intra (AI) configuration, where the quantization parameter (QP) is 22. We can see that larger CUs are selected for encoding flat areas, such as floors and walls, while smaller CUs are selected to encode regions with complex texture, such as the human head and the basketball. Moreover, to show the difference in pixel distribution between complex and simple regions, a block from a complex region is selected and its pixel distribution is shown in Figure 6a, and a block from a simple region is selected and its pixel distribution is shown in Figure 6b. It can be seen that for the block from the complex region (Figure 6a), the pixel values are spread over a wide range, whereas for the block from the simple region (Figure 6b), they are concentrated in a narrow range. Therefore, the partition depth of a CTU can be used to measure the pixel distribution, where the depth of a CTU denotes the maximum depth of the CUs inside it. Concretely, a smaller depth means simpler texture, which indicates a more concentrated pixel distribution. Therefore, a CTU with small depth can directly choose BO mode as the offset mode, because all the concentrated pixels of this CTU can be covered by four consecutive bands. On the other hand, a CTU with large depth contains a decentralized pixel range; if BO mode is adopted, many samples cannot be covered by the four consecutive bands, which will degrade the offset performance. Therefore, in this paper, if depth < δ, the BO mode is selected as the SAO type; otherwise, the EO mode is selected. Obviously, the threshold δ directly influences the performance of the proposed method. Thus, we conducted experimental tests to select a proper value for δ. We counted, for each depth, the proportion of CTUs for which BO mode is the best mode between EO and BO, where the test sequences are Johnny and KristenAndSara under the low delay with B picture (LB) configuration, and the number of test frames is 100 for each sequence. As shown in Figure 7, the depth values are mostly concentrated at 0 and 1 when the best mode is BO mode, which illustrates that BO mode works well in areas with lower complexity, while EO mode performs better in areas with higher complexity. Therefore, we set δ = 2: if depth < 2, BO mode is directly selected as the best mode between EO and BO; otherwise, EO mode is directly selected. Simplification of EO Mode This section describes the simplification of EO mode. First, the gradient computation is introduced. Then, the HOG features calculation is presented. Finally, the classification based on SVM is analyzed. Gradient Computation The HOG features are extracted by calculating and accumulating the histogram of gradient directions over local areas of the picture. Therefore, we divide the CTU picture into small cells to calculate the gradient amplitude and gradient direction. The Scharr operator and the Sobel operator are two common operators for computing gradients. The principles and structures of the two operators are similar, but the central elements of the Scharr operator carry more weight, so its accuracy is higher. Therefore, in this paper, we choose the Scharr operator to calculate the edge gradient.
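A minimal sketch of the resulting depth-based pre-selection follows; the depth value is assumed to be supplied by the encoder's partitioning stage.

    # Sketch of the depth-based offset mode pre-selection with threshold delta = 2.
    # 'depth' is the maximum CU partition depth inside the CTU (0 for a single 128x128 CU,
    # up to 6 when 4x4 CUs are present).
    DELTA = 2


    def preselect_offset_mode(depth):
        if depth < DELTA:
            return "BO"   # concentrated pixel range: four consecutive bands suffice
        return "EO"       # decentralized pixel range: EO handles local edge distortion better


    assert preselect_offset_mode(1) == "BO"
    assert preselect_offset_mode(3) == "EO"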
The directional gradients of the Scharr operator are calculated as

Gx_{i,j} = [-3 0 3; -10 0 10; -3 0 3] * I_{i,j},   Gy_{i,j} = [-3 -10 -3; 0 0 0; 3 10 3] * I_{i,j},

where * denotes convolution over the 3 × 3 neighborhood of sample (i, j), and Gx_{i,j} and Gy_{i,j} represent the gradient in the horizontal direction and the gradient in the vertical direction, respectively. The gradient amplitude can be roughly estimated as

G_{i,j} ≈ |Gx_{i,j}| + |Gy_{i,j}|.

The decision of the EO pattern is made from the horizontal and vertical gradients, the HOG is established from the gradient amplitude G_{i,j}, and the direction angle of the gradient θ_{i,j} is calculated as

θ_{i,j} = arctan(Gy_{i,j} / Gx_{i,j}).

HOG Features Calculation In the process of HOG features calculation, each pixel in a cell votes for a direction-based histogram channel. According to the gradient direction and gradient amplitude of each pixel in the cell, the gradient amplitude value is added to the histogram channel to which the current pixel belongs. The histogram channels are evenly distributed over the range of 0-180° or 0-360°. EO modes are classified according to four kinds of positional information (horizontal, vertical, 135° diagonal, and 45° diagonal) between the current pixel and its neighbor pixels, and we divide 0-180° into 9 bins (20° for each part) as histogram channels. Due to changes in local illumination, the range of gradient intensity is very large. Therefore, groups of adjacent cells are treated as spatial regions called blocks, over which normalization is performed to achieve better extraction results; the histograms of the cells in a block form the block histogram, which serves as the feature descriptor. After the calculation of the block gradient histograms, all the block gradient histograms in a CTU represent all the features within the CTU, and all the block feature vectors are concatenated to form the final feature vector of each CTU. Figure 8 shows the visualization of a picture based on HOG features, where 4 cells form a block.

Classification Based on SVM The best pattern is predicted by the SVM. The SVM algorithm finds the best hyperplane in a multidimensional space as a decision function, so as to achieve classification between classes. For a given training set S = {(x_i, y_i)}, x_i represents the feature vector of a training sample, y_i represents its label, and y_i = 1 and y_i = -1 denote positive and negative samples, respectively. The hyperplane f(x) can be written as

f(x) = ω^T x + b = Σ_{i=1}^{m} α_i y_i x_i^T x + b,

where ω is the normal of the hyperplane, m is the number of support vectors, α_i is the Lagrange multiplier, and b is the bias. The objective function is

min_{ω,b,ξ} (1/2)||ω||^2 + C Σ_i ξ_i,  subject to y_i(ω^T x_i + b) ≥ 1 - ξ_i, ξ_i ≥ 0,

where ξ_i is the slack variable and C is the penalty factor. The kernel function κ(x, x_i) is used to map the original space to a higher dimensional space, and f(x) can be rewritten as

f(x) = Σ_{i=1}^{m} α_i y_i κ(x, x_i) + b.

In this paper, we choose the radial basis function as the kernel function:

κ(x, x_i) = exp(-γ ||x - x_i||^2),

where γ defines the impact of a single sample. The samples can be classified according to the obtained hyperplane. In this paper, there are four directional patterns of EO as candidates for SAO. Therefore, we design four one-versus-rest SVM models. For each model, the positive examples are the CTUs whose best EO pattern is the model's pattern, and the negative examples are the CTUs with the remaining three patterns. The HOG features of the CTUs are used as the input of the SVM to train the four SVM models off-line, and the best EO pattern is directly selected through the models.

Summary Combining the simplification of offset mode selection and directional pattern selection, the flowchart of the proposed SAO acceleration method is summarized in Figure 9.
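The per-pixel gradient computation described above can be sketched with OpenCV and NumPy as follows; this is an illustration of the formulas, not the implementation integrated into VTM.

    # Sketch of the per-pixel Scharr gradient, amplitude approximation and orientation binning.
    import cv2
    import numpy as np

    ctu = np.random.randint(0, 256, (128, 128), dtype=np.uint8)   # stand-in for a luma CTU

    gx = cv2.Scharr(ctu, cv2.CV_32F, 1, 0)        # horizontal derivative, kernel [-3 0 3; -10 0 10; -3 0 3]
    gy = cv2.Scharr(ctu, cv2.CV_32F, 0, 1)        # vertical derivative

    magnitude = np.abs(gx) + np.abs(gy)           # fast approximation of sqrt(gx^2 + gy^2)
    angle = np.degrees(np.arctan2(gy, gx)) % 180  # gradient direction folded into [0, 180)

    # each pixel votes its magnitude into one of nine 20-degree orientation bins
    bin_index = np.clip((angle // 20).astype(int), 0, 8)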
Concretely, first, the depth information is used to evaluate the pixel distribution, and SAO is accelerated based on the depth information. If the depth is smaller than 2, BO mode is selected as the best offset mode; otherwise, EO mode is selected as the best offset mode. Second, for the EO mode, the HOG features of each CTU are extracted, and the best directional pattern of EO mode is predicted based on HOG features and SVM. Next, the best mode is selected by comparing the RDO values of EO or BO mode and SAO off state. Then, we compare this mode with the best pattern in SAO merge mode to select the final offset mode and obtain the SAO offset information. Finally, the offset value is added to the reconstructed samples. Figure 9. Flowchart of proposed overall sample adaptive offset (SAO) algorithm. Experimental Result In this section, first the experimental design is introduced. Then, the performance of HOG and SVM is analyzed. Finally, the acceleration performance of different methods is compared. Experimental Design The proposed method and other acceleration methods are implemented on the VVC reference software (VTM5.0) [40]. The test platform is a Dell R730 server, which has two 12-core Intel(R) Xeon(R) E5-2620 V3 CPUs with a main frequency of 2.4 GHz made in China. Our experimental materials are from the standard sequences of JCT-VC proposals [41], as shown in Table 2. All the experimental sequences are encoded with four modes, which contains AI mode, random access (RA) mode, low delay with P picture (LP) mode, and LB mode. The main encoding parameter configurations are listed in Table 3. Effectiveness Verification of HOG Features The parameters and data regarding the HOG features are calculated and shown. The first sequence in each class is selected as the training sequences, and the remaining sequences are used as the testing sequences. In this paper, a self-made data set is used. For training of each configuration, we select a total of about 16,000 CTUs of each training sequence for each QP, and we extract the HOG features of CTUs in the training sequences to make the data set. The size of the CTU is 128 × 128 (For the CTUs whose size is not 128 × 128 at the boundary, pixels of boundary are used to fill the size to 128 × 128 when calculating the HOG features). The cell size of the extracted HOG features extraction is 8 × 8, and the size of each block is 16 × 16. The size of blockStride is 16 × 16, so there is no overlap among the blocks. We divide 0-180 • into 9 parts (20 • for each part) as histogram channels. The radial basis function (RBF) is selected as the kernel function of SVM, and the penalty factor C is set to 10 and the parameter γ is set to 0.09. Figure 10 shows the CTU pictures for each best EO pattern and their corresponding HOG features maps. The four EO patterns correspond to the four labels of SVM, and the HOG features of each CTU are used as the input of SVM to predict. In this paper, each CTU contains 8 × 8 (64) blocks, while each block contains 4 cells, and each cell is divided into 9 bins. Therefore, a CTU contains a total of 8 × 8 × 4 × 9 (2304) dimensional features. Taking these features as the input of SVM, the best EO pattern can be predicted directly by comparing the probability that the current CTU belongs to each EO pattern. Table 4 shows the prediction accuracy of EO Pattern in this paper and the works in [26,27]. 
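Before turning to the comparison, the feature extraction and one-versus-rest classification described above can be sketched with OpenCV and scikit-learn using the stated parameters (128 × 128 window, 16 × 16 blocks and stride, 8 × 8 cells, 9 bins, RBF SVM with C = 10 and γ = 0.09); the training data below is purely hypothetical and the pattern indexing is illustrative.

    # Sketch of CTU-level HOG extraction and one-versus-rest SVM pattern prediction.
    import cv2
    import numpy as np
    from sklearn.svm import SVC

    # 128x128 window, 16x16 blocks, 16x16 stride, 8x8 cells, 9 bins
    # -> 8*8 blocks * 4 cells * 9 bins = 2304 features per CTU
    hog = cv2.HOGDescriptor((128, 128), (16, 16), (16, 16), (8, 8), 9)


    def ctu_features(ctu_luma):
        return hog.compute(ctu_luma).reshape(-1)

    # hypothetical training set: CTUs and their best EO pattern (0: 0deg, 1: 90deg, 2: 135deg, 3: 45deg)
    train_ctus = [np.random.randint(0, 256, (128, 128), dtype=np.uint8) for _ in range(40)]
    train_labels = np.arange(40) % 4
    X = np.stack([ctu_features(c) for c in train_ctus])

    # four one-versus-rest RBF models, as in the paper (C = 10, gamma = 0.09)
    models = []
    for pattern in range(4):
        clf = SVC(kernel="rbf", C=10, gamma=0.09)
        clf.fit(X, (train_labels == pattern).astype(int))
        models.append(clf)


    def predict_best_eo_pattern(ctu_luma):
        feats = ctu_features(ctu_luma).reshape(1, -1)
        scores = [m.decision_function(feats)[0] for m in models]
        return int(np.argmax(scores))             # pattern with the largest decision score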
Compared with the methods in [26,27], the algorithm based on HOG features fully combines the features of the images and shows higher prediction accuracy (Table 4: Intra-based EO [26], 72.20%; Sobel-based EO [27], 76.30%).

Figure 12 summarizes the distribution of depth values of CTUs under different configurations. From the figure, it can be seen that for the sequences with higher texture complexity, the depth of the CTUs is larger, such as the sequences in ClassB, ClassC, and ClassD; for those sequences with lower texture complexity, there are more CTUs with smaller depth than in the sequences with high texture complexity, such as the sequences in ClassE and ClassF. This is also in line with our expectations.

Acceleration Performance Comparison We evaluate the performance of the algorithm with the Bjøntegaard [42] metric (BD-rate) and the reduction of SAO encoding time ∆T, defined as the relative decrease in SAO encoding time with respect to the anchor encoder, ∆T = (T_anchor - T_proposed) / T_anchor × 100%. Table 5 summarizes the BD-rate and run time reduction of the proposed method compared with VTM5.0. The result shows that the proposed method averagely achieves 63.68%, 65.09%, 71.46%, and 70.93% SAO encoding time saving with 0.20%, 0.33%, 0.96%, and 0.59% coding performance gain degradation for AI, RA, LP, and LB, respectively. The comparison shows that the proposed method can effectively reduce the SAO encoding time with only a small BD-rate performance loss for VSNs. Table 6 compares the proposed method with the other three methods in [27,28,30] under the AI configuration. It can be seen that the proposed method averagely achieves 63.68% SAO encoding time saving, which is better than the methods in [27] (53.43%), [28] (9.62%), and [30] (48.34%). Compared with the method in [27], it can be seen that the proposed algorithm achieves further computational complexity reduction with a much smaller increment in the BD-rate. This is because the prediction accuracy of the best EO pattern obtained with the HOG features is higher than that of [27], which directly uses the Sobel operator; in addition, this paper further optimizes the coding efficiency by incorporating the depth. Compared with the method in [28], we can see that the SAO encoding time saved by this paper in VVC is much greater than that in [28]. On the one hand, the algorithm in [28] directly uses the depth information to turn on SAO adaptively, and due to the block partition method based on QTMT, there are more choices of block partition in VVC, which leads to great differences in block partition structure between VVC and HEVC. On the other hand, the optimization of EO mode is not considered in [28]. Compared with the method in [30], it can be seen that the SAO encoding time saved by this paper in VVC is greater than that in [30] with almost the same BD performance loss. This is because in [30] at least two patterns of EO mode and BO mode must be calculated; moreover, when the best mode is SAO off, all possible SAO patterns must be calculated, which consumes a lot of time. Figure 13 evaluates the rate-distortion curves of Bit Rate and PSNR of the Cactus sequence in AI, RA, LP, and LB. It can be seen that the two curves almost coincide, which indicates that the encoding performance of the fast SAO algorithm proposed in this paper is similar to that of the default algorithm of VVC in terms of objective quality. This means that the proposed algorithm greatly reduces the SAO encoding time and improves the encoding efficiency with almost no loss of SAO encoding quality for VSNs.
Figures 14 and 15 compare the subjective quality of the Johnny and BasketballPass sequences encoded with the default VVC algorithm and with the algorithm proposed in this paper; the comparison is made at QP 22 under the AI configuration. As shown in Figures 14 and 15, the differences in subjective quality between the two algorithms are barely visible to the naked eye, which shows that the subjective quality loss caused by the proposed algorithm is negligible.

Conclusions
The complex calculation of SAO is a bottleneck for real-time VVC transmission in VSNs. In order to solve the time-consuming problem of the SAO encoding process in VVC, this paper proposes a fast sample adaptive offset algorithm jointly based on HOG features and depth information for VSNs. First, the depth of each CTU is utilized to simplify the offset mode selection. Then, for EO mode, HOG features and an SVM are used to predict the best directional pattern, skipping 75% of the pattern-selection calculations in EO mode. Finally, experimental results show that the proposed method reduces the SAO encoding time by 67.79% with negligible objective and subjective degradation compared with the state-of-the-art method in the VVC reference software, which is meaningful for real-time encoding applications in VSNs. In future work, we will study how to optimize the BO mode in VVC and make the method applicable to all patterns and sequences.
6,598.2
2020-11-26T00:00:00.000
[ "Computer Science" ]
3D Multiple-Antenna Channel Modeling and Propagation Characteristics Analysis for Mobile Internet of Things
The demand for optimization design and performance evaluation of wireless communication links in a mobile Internet of Things (IoT) motivates the exploitation of realistic and tractable channel models. In this paper, we develop a novel three-dimensional (3D) multiple-antenna channel model to adequately characterize the scattering environment for mobile IoT scenarios. Specifically, taking into consideration both accuracy and mathematical tractability, a 3D double-spheres model and an ellipsoid model are introduced to describe the distribution regions of the local scatterers and remote scatterers, respectively. Based on the explicit geometry relationships between transmitter, receiver, and scatterers, we derive the complex channel gains by adopting the radio-wave propagation model. Subsequently, the correlation-based approach for theoretical analysis is performed, and the detailed impacts of the antenna deployment, scatterer distribution, and scatterer density on the vital statistical properties are investigated. Numerical simulation results have shown that the statistical channel characteristics in the developed simulation model nicely match those of the corresponding theoretical results, which demonstrates the utility of our model.

Introduction
The Internet of Things (IoT) connects a multitude of dissimilar sensors and devices with the Internet through various communication links in a robust and efficient manner to support complex and ubiquitous interactions between physical objects [1]. Such an emerging trend has a steady and sustained penetration into various domains, including industries, intelligent transportation systems, healthcare, smart cities, smart space, and smart grids [2][3][4][5][6][7]. In recent years, mobile and wireless communications have become important enabling technologies to allow the growth of the IoT vision [8]. Typical examples of this are wireless sensor networks (WSNs) and wireless sensor actor networks (WSANs). As essential integral parts of the IoT paradigm, WSNs and WSANs consist of a collection of sensor nodes connected through wireless channels and provide digital interfaces to real-world things [9]. Accompanied by technologies in the information field like big data and blockchain, the large-scale data collected from WSNs can be tapped for potential value for service consumers [10,11]. Moreover, the revolutionary technologies in fifth (and beyond) generation (5G) systems like massive multi-input multi-output (MIMO) are expected to provide high spectral efficiencies and high data rates to satisfy the enormous traffic demands of heterogeneous and scattered communicating units [12,13]. In general, the wireless signal is particularly vulnerable to multipath fading effects as the result of reflection, diffraction, and scattering phenomena. Accordingly, the communication system performance is strictly restricted by the underlying propagation characteristics. In particular, the spatial-temporal correlation properties that result from a dense antenna array or a lack of rich scattering are capable of degrading the performance of multiple-antenna systems.
The proposed model invokes the geometrical 3D double-spheres model and ellipsoid model, which is sufficiently generic and adapted to various realistic IoT environmental conditions featuring both local scatterers and remote scatterers by adjusting the corresponding model parameters. Subsequently, according to the exact geometrical relationship among the azimuth angle of departure (AAoD), the azimuth angle of arrival (AAoA), the elevation angle of departure (EAoD), and the elevation angle of arrival (EAoA), we derive the critical channel correlation characteristics of the proposed model and investigate the impact of the antenna deployment, scatterer distribution state, and scatterer density on these characteristics. In addition, the corresponding simulation model is presented by leveraging efficient parameter calculation methods. Our study extends the research of channel modeling and provides insights for the design and deployment of mobile IoT communication systems.

The remainder of this paper is organized as follows. In Section 2, we provide the 3D channel model for mobile IoT wireless communication systems. Herein, we derive complex channel gains and determine the distribution of effective scatterers in detail. In Section 3, channel statistical properties are derived, and the simulation parameter calculation method is proposed. Section 4 presents the numerical simulation results. Finally, the conclusions are drawn in Section 5.

Description of Theoretical Model
A typical wireless communication scenario for mobile IoT environments is considered, where the mobile transmitter (MT) and mobile receiver (MR) are in motion. v T and v R are the mobile velocities of the MT and MR, respectively, with mobile directions γ T and γ R. It is assumed that the MT and MR are equipped with M T and M R uniform linear array (ULA) antennas with omnidirectional patterns (i.e., the antenna patterns can be normalized to 1). The antenna elements are spaced with separations δ T and δ R. The p-th (p ∈ {1, 2, ..., M T}) antenna of the MT and the q-th (q ∈ {1, 2, ..., M R}) antenna of the MR are denoted as T p and R q, respectively. Moreover, O T and O R denote the antenna centers of the MT and MR, respectively. Note that the proposed model can be generalized to other kinds of antenna arrays, such as circular or spherical multielement antenna arrays. The wave propagation environment is characterized by 3D effective scattering with LoS and non-line-of-sight (NLoS) components, and the NLoS components consist of SB rays and DB rays. In the proposed 3D multiple-antenna regular-shaped geometry-based stochastic model, the distribution region of the local scatterers is modeled by the double-spheres model, as illustrated in Figure 1. Likewise, the distribution of the remote scatterers is modeled by the ellipsoid model, as shown in Figure 2.
The distance between O T and O R is denoted as D. Here, the single-sphere model with center O T and radius R T is represented as M 1, and the single-sphere model with center O R and radius R R is represented as M 2. Besides, the ellipsoid model is represented as M 3, whose focal points are O T and O R. The ellipsoid's semi-length on the major axis is a. It is assumed that there exist N 1 effective scatterers located on M 1, and the n1-th scatterer (n1 = 1, 2, ..., N 1) on M 1 is represented by the symbol S 1 (n1). N 2 effective scatterers are located on M 2, and the n2-th scatterer (n2 = 1, 2, ..., N 2) on M 2 is represented by the symbol S 2 (n2). Similarly, it is assumed that there exist N 3 effective scatterers located on M 3, and the n3-th scatterer (n3 = 1, 2, ..., N 3) on M 3 is represented by the symbol S 3 (n3). For NLoS rays, the waves from the MT antenna elements impinge on the scatterers located on M 1, M 2, or M 3 before they arrive at the MR antenna elements, such as the SB ray T p - S 3 (n3) - R q and the DB ray T p - S 1 (n1) - S 2 (n2) - R q. The notations and parameters in the model are defined in Table 1. In addition, since the antenna array is generally compact in multiple-antenna systems, it is reasonably assumed that min {R T, R R, a - 0.5D} >> max {δ T, δ R}.
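The geometry just described can be made concrete with a short numerical sketch. The sphere radii below (R_T = R_R = 20 m) are assumed values for illustration only, the scatterers are drawn for an isotropic case, and the antenna-element offsets within the ULAs are ignored; the paper instead derives closed-form path lengths.

```python
# Simplified numerical sketch (not the paper's closed-form derivation) of the
# double-spheres / ellipsoid geometry: scatterers are placed on M1, M2, and M3,
# and single-/double-bounce path lengths are measured between the array centers.
import numpy as np

rng = np.random.default_rng(0)
D, a = 300.0, 200.0                                  # values from Section 4
R_T, R_R = 20.0, 20.0                                # assumed radii (illustrative)
O_T = np.array([0.0, 0.0, 0.0])
O_R = np.array([D, 0.0, 0.0])

def on_sphere(center, radius, n):
    """Scatterers uniformly distributed on a sphere surface (isotropic case)."""
    v = rng.normal(size=(n, 3))
    return center + radius * v / np.linalg.norm(v, axis=1, keepdims=True)

def on_ellipsoid(n):
    """Scatterers on the prolate spheroid M3 with foci O_T, O_R and semi-major axis a."""
    c = D / 2.0                                      # half the focal distance
    b = np.sqrt(a**2 - c**2)                         # semi-minor axes
    az = rng.uniform(0.0, 2.0 * np.pi, n)
    el = rng.uniform(-np.pi / 2.0, np.pi / 2.0, n)
    x = c + a * np.cos(el) * np.cos(az)              # centered between the foci
    y = b * np.cos(el) * np.sin(az)
    z = b * np.sin(el)
    return np.stack([x, y, z], axis=1)

S1, S2, S3 = on_sphere(O_T, R_T, 50), on_sphere(O_R, R_R, 50), on_ellipsoid(50)

def path(*points):
    """Total ray length along a sequence of points (Tx, scatterer(s), Rx)."""
    pts = np.stack(points)
    return np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()

d_los = path(O_T, O_R)                               # equals D
d_sb3 = path(O_T, S3[0], O_R)                        # equals 2a for any point on M3
d_db = path(O_T, S1[0], S2[0], O_R)                  # double bounce via M1 and M2
```

The single-bounce path via the ellipsoid equals 2a for every scatterer on M3, which is the ellipsoid property invoked below for the SB3 rays.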
In this proposed 3D channel model, the physical characteristics of a multiple-antenna channel can be described by a complex fading envelope matrix H(t). The element in H(t) that represents the diffuse component of the transmission link from T p to R q is h pq (t). It is assumed that the received complex fading envelope h pq (t) is a superposition of LoS, SB, and DB components.

Line-of-Sight Component
The LoS component is characterized by the following quantities: Ω pq is the total power of the T p - R q link; K pq designates the Ricean factor, defined as the ratio of the signal power in the dominant component to the scattered power; λ is the wavelength, λ = c/f c, where c is the speed of the wave and f c is the carrier frequency; and f LoS pq denotes the Doppler frequency of the LoS component due to the motion, determined by f Tmax = v T /λ and f Rmax = v R /λ, the maximum Doppler frequencies with respect to the MT and MR, respectively. We also have α LoS = π because of the assumption that min {R T, R R, a - 0.5D} >> max {δ T, δ R}. The LoS path length can then be calculated in terms of k p = 0.5 M T + 0.5 - p and k q = 0.5 M R + 0.5 - q.

Single-Bounced Component
In this model, we assume that there are three single-bounced subcomponents: SB1 from M 1, SB2 from M 2, and SB3 from M 3. Their weights η SB1, η SB2, and η SB3 specify the contributions of the SB1, SB2, and SB3 rays to the total NLoS power. The Doppler shifts of the SBi components follow from the motion of the MT and MR. According to the law of cosines and the ellipsoid properties, the geometric relation (9) is obtained, and based on (9) the path length of SBi can be computed. For SB rays, there exists a correlation among the AAoD, AAoA, EAoD, and EAoA; according to the geometrical relationship, the exact dependence among these angles can be derived.

Double-Bounced Component
Similarly, the DB components are weighted by η DB, which denotes the contribution of the DB rays to the total NLoS power, and all weights satisfy the constraint η SB1 + η SB2 + η SB3 + η DB = 1. The Doppler shift and the path length of the DB components follow from the geometry in the same manner.

Distribution of Effective Scatterers
In the proposed 3D model, the six discrete variables α n1 T, α n2 R, α n3 R, β n1 T, β n2 R, and β n3 R determine the locations of the effective scatterers on M 1, M 2, and M 3. In the theoretical model, the effective scatterers are assumed to be infinite in number. Accordingly, the abovementioned discrete random variables can be replaced by the continuous random variables α 1 T, α 2 R, α 3 R, β 1 T, β 2 R, and β 3 R, as shown in Figure 3.
We leverage the von Mises distribution and the cosine distribution applied in [26] to depict the probability density functions (PDFs) of the continuous random variables in the azimuth and elevation planes, respectively. Here α 0 and β 0 denote the mean azimuth and elevation angles, respectively, β m represents the maximum range of elevation-angle deviation from the mean value β 0, and k (k ≥ 0) is the control factor for the concentration of the distribution relative to α 0. Note that a larger k implies more concentrated scatterers: if k = 0, the scatterer distribution is isotropic, whereas k > 0 characterizes a non-isotropic scattering environment. Furthermore, I 0 (·) is the zeroth-order modified Bessel function of the first kind.

Spatial-Temporal Correlation Function
The normalized spatial-temporal correlation function (ST-CF) between any two complex fading envelopes is defined in (24). Since the LoS component, the SB components, and the DB components are independent zero-mean complex Gaussian random processes, formulation (24) can be represented by the normalized correlation function of each component, as in (25). Note that various other existing correlation functions can be obtained from the ST-CF as special cases. For instance, the 2D spatial CF, defined as ρ pq,p'q' (δ T, δ R), equals the ST-CF at τ = 0, and the temporal correlation function, defined as ρ pq,p'q' (τ), can be obtained from the ST-CF at δ T = δ R = 0. Substituting the corresponding von Mises PDF and cosine PDF into (25), the ST-CF of all components can be derived, where E[·] is the expectation operator. It is notable that the integral variables in the ST-CF of the SB1 components are α 1 T and β 1 T, while those in the SB2 components are α 2 R and β 2 R, and those in the SB3 components are α 3 R and β 3 R. Furthermore, the other variables in (30) and (31) can be replaced by the corresponding integral variables using the exact geometrical relationship. Using the Fourier transform, the corresponding Doppler power spectral density (Doppler PSD) can be obtained from the ST-CF, where F T [·] denotes the Fourier transform.
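As an illustration of how a finite set of scatterer angles might be drawn from these distributions for the simulation model, the sketch below uses equal-probability (quantile) sampling. The paper's own efficient parameter calculation method and its exact cosine-PDF form are not reproduced in the text above, so both the sampling rule and the cosine form used here are assumptions.

```python
# Hedged sketch: discrete azimuth/elevation angles generated at equal-probability
# quantiles of a von Mises PDF and of one common cosine elevation PDF.
import numpy as np
from scipy.stats import vonmises

def von_mises_angles(n, alpha0, k):
    """n discrete azimuths at equal-probability quantiles of a von Mises PDF with
    mean alpha0 and concentration k (k = 0 reduces to an isotropic azimuth)."""
    u = (np.arange(n) + 0.5) / n
    if k == 0:
        return -np.pi + 2.0 * np.pi * u              # uniform on [-pi, pi)
    return vonmises.ppf(u, kappa=k, loc=alpha0)

def cosine_angles(n, beta0, beta_m):
    """n discrete elevations from the (assumed) cosine PDF
    f(b) = pi/(4*beta_m) * cos(pi*(b - beta0)/(2*beta_m)), |b - beta0| <= beta_m,
    obtained by inverting its CDF at equal-probability quantiles."""
    u = (np.arange(n) + 0.5) / n
    return beta0 + (2.0 * beta_m / np.pi) * np.arcsin(2.0 * u - 1.0)

# Example: N3 = 50 scatterers on M3 with alpha0 = 3*pi/4, k = 6, beta_m = pi/6.
alphas = von_mises_angles(50, 3 * np.pi / 4, 6)
betas = cosine_angles(50, 0.0, np.pi / 6)
```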
Simulation Model
The 3D theoretical channel model described in Section 2 is grounded on the assumption that the number of effective scatterers is infinite, which is not realizable in actual mobile IoT communication environments. To develop a corresponding simulation model from the theoretical model, it is necessary to determine the unknown simulation model parameters, i.e., the discrete angles α n1 T, α n2 R, α n3 R, β n1 T, β n2 R, and β n3 R. An efficient simulation parameter calculation method is leveraged in this work for the AAoA (AAoD) and, similarly, for the EAoA (EAoD).

Numerical Results and Analysis
In this section, the statistical propagation properties of the proposed 3D multi-antenna channel model and the evaluation of the simulation model are investigated in detail using numerical results. The simulation platform is built in MATLAB 2019. Some fixed parameters are adopted as follows: f c = 5.9 GHz, D = 300 m, a = 200 m. To simplify the illustration, we assume the same antenna spacing and antenna array elevation angle at the MT and MR, i.e., δ T = δ R = δ and ψ R = ψ T = ψ.

Isotropic Scattering Scenarios
When the communications between the MT and MR take place in isotropic scattering scenarios, we have k 1 = k 2 = k 3 = 0. Figure 4 shows the spatial CF of the proposed model for different antenna spacings δ/λ and antenna array elevation angles ψ. We assume the NLoS condition, i.e., K pq = 0, and the other model parameters are set as θ T = θ R = 0, β 1 Tm = β 2 Rm = β 3 Rm = π/6, and η SB1 = η SB2 = η SB3 = η DB = 0.25. Apparently, in isotropic scattering scenarios, the spatial correlations exhibit an oscillating decrease as the antenna spacing δ/λ at the MT/MR increases. This is because a shorter spatial distance between array elements brings about a more similar channel response, which agrees with the measurements in [31]. In addition, when the antenna spacing δ/λ is small, such as δ/λ < 1, the spatial correlations tend to be smaller as the antenna array elevation angle ψ decreases and reach the first local minimum faster. Besides, the spatial correlation function oscillates more smoothly with increasing antenna spacing δ/λ in the case of a larger antenna array elevation angle ψ. In future deployments of mobile IoT communication, due to the promotion of massive MIMO as well as economic considerations, antenna arrays will tend to be miniaturized and compact. Accordingly, adjusting the antenna elevation angle may be considered to obtain a richer horizontal space, which leads to increased spatial correlation. Therefore, the actual antenna layout should strike a reasonable trade-off between horizontal spatial redundancy and channel spatial correlation.
From Figure 5, we can observe that the spatial CFs have a similar oscillation frequency but distinct oscillation amplitudes for different values of β M. Overall, the amplitude decreases with both the increase in antenna spacing δ/λ and the increase in the maximum range of elevation angle deviation β M. Building on these observations, we can conclude that the larger the maximum range of elevation angle deviation β M, the smaller the spatial correlation between antenna elements. This conclusion can be explained qualitatively: the expansion of the scatterer distribution range in the vertical dimension means a richer scattering environment, and different antenna elements are less likely to be affected by the same set of scatterers. These observations also indicate that 2D wireless channel models tend to overestimate the channel spatial correlation properties.

For different values of the antenna array orientation θ, the resulting spatial correlation functions are approximately indistinguishable. The negligible differences between these spatial correlation functions result from the geometrical relationship for SB rays, which means that the AAoA and AAoD cannot obey the uniform distribution simultaneously. Hence, we can conclude that there is no correlation between the spatial correlation and the antenna array orientation under isotropic scattering.
Non-Isotropic Scattering Scenarios
The non-isotropic scattering scenarios can be characterized by setting k to a non-zero value. The impact of the control factor for the scatterer distribution concentration k on the spatial CF is shown in Figure 7. The corresponding parameter settings are: K pq = 0, θ T = θ R = 0, β 1 Tm = β 2 Rm = β 3 Rm = π/6, ψ = 0, η SB1 = η SB2 = η SB3 = η DB = 0.25, α 1 T0 = π/4, α 2 R0 = α 3 R0 = 3π/4. Comparing the spatial correlation for different degrees of scatterer concentration, we find that the spatial correlation increases significantly with k when the antenna spacing δ/λ is in a small value range. The physical meaning can be understood as follows: when the effective scatterers are more closely distributed, the multiple-antenna array elements are more strongly influenced by effective scatterers in the same area, and the spatial correlation between antennas is stronger. In addition, it can be observed that, compared to the highly concentrated scatterer scene where k is large, the spatial correlation in the low scatterer concentration scene declines faster amid more intense fluctuations.
The influence of the parameter α 0 on the spatial correlation characteristics is shown in Figure 8, with (a) θ T = θ R = 0 and (b) θ T = θ R = π/4. The antenna spacing δ/λ and α 1 T0 are chosen as the research variables, with a corresponding constraint imposed on the other mean azimuth angles. The other scenario parameters are chosen as: K pq = 0, k = 6, ψ = 0, β 1 Tm = β 2 Rm = β 3 Rm = π/6, η SB1 = η SB2 = η SB3 = η DB = 0.25. Here, we denote by ω the angle between the mean azimuth angle α 0 and the antenna array orientation θ, i.e., ω = |α 0 - θ|. Comparing the distinct spatial correlation properties in Figure 8a,b, we can conclude that the spatial correlation properties are closely related to the angle ω. To be specific, when ω is within the 0-π/4 range, the spatial correlation decreases monotonically with the increase in ω. On the contrary, the spatial correlation increases monotonically if ω is in the π/4-π/2 range. The interesting observation is that the spatial correlation achieves its global minimum when the angle ω is a right angle, which provides certain insights for the design and deployment of the antenna array in mobile IoT communication systems.

Spatial-Temporal Correlation
Scatterer density is an important feature reflecting the communication conditions in mobile IoT wireless transmission scenarios. Herein, we focus our attention on the impact of scatterer density on the spatial-temporal correlation. For a sparse scatterer density, the LoS component bears a significant amount of power. Additionally, SB rays rather than DB rays are more likely to exist, and the local scatterers located in the double-spheres model have a relatively weaker effect on channel propagation than the remote scatterers. Conversely, for a dense scatterer density, the LoS component is relatively weak, and the DB rays are the primary components of the received signal. Therefore, mobile IoT scenarios with different scatterer densities can be characterized adequately in the proposed channel model by utilizing an appropriate Ricean factor and weights of power contribution.
Figure 9 illustrates the spatial-temporal correlation with considerations for scatterer density, where the corresponding model parameters capturing the scatterer density features are set as follows: (1) for high scatterer density, K pq = 0.2, η SB1 = η SB2 = 0.115, η SB3 = 0.055, η DB = 0.715; (2) for sparse scatterer density, K pq = 2.186, η SB1 = η SB2 = 0.252, η SB3 = 0.481, η DB = 0.005. Isotropic scattering is chosen as the mobile IoT communication environment, and the other scenario parameters are ψ = 0, θ = 0, β 1 Tm = β 2 Rm = β 3 Rm = π/6, γ T = γ R = π/2. From Figure 9, we can observe that the scatterer density significantly affects the spatial-temporal correlation: higher scatterer density leads to significantly lower correlation properties thanks to the richer scattering.

Simulation Model
The performance evaluation of the simulation model lies in how well it fits the statistical characteristics of the theoretical model when the number of scatterers is limited. Here, the theoretical ST-CF is regarded as the channel-characteristic fitting target of the simulation model, and the absolute error is introduced as the measure of the quality of the approximation between the theoretical model and the simulation model; it is defined in terms of ρ(δ T, δ R, τ) and ρ̂(δ T, δ R, τ), the ST-CFs obtained from the theoretical model and the simulation model, respectively. In Figures 10 and 11, we compare the difference of the simulation ST-CF from the desired theoretical ST-CF, using this error measure, for isotropic scattering scenarios and non-isotropic scattering scenarios, respectively. The number of discrete scatterers on M 1, M 2, and M 3 is selected in the numerical simulation as N 1 = N 2 = N 3 = 50.
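The validation step just described can be sketched as follows, where rho_theory and rho_sim are hypothetical callables returning the theoretical and simulated ST-CF, respectively.

```python
# Small sketch of the model-validation step: the simulated ST-CF is compared
# against the theoretical target through an absolute-error surface.
import numpy as np

def stcf_abs_error(rho_theory, rho_sim, deltas, taus):
    """Absolute error |rho_theory - rho_sim| on a grid of (delta, tau) points;
    delta is the antenna spacing (in wavelengths) and tau the time lag (seconds)."""
    err = np.empty((len(deltas), len(taus)))
    for i, d in enumerate(deltas):
        for j, t in enumerate(taus):
            err[i, j] = abs(rho_theory(d, t) - rho_sim(d, t))
    return err

# Example grid: antenna spacings up to 3 wavelengths, time lags up to 10 ms.
deltas = np.linspace(0.0, 3.0, 31)
taus = np.linspace(0.0, 0.01, 21)
# error_surface = stcf_abs_error(rho_theory, rho_sim, deltas, taus)
```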
The scenario parameters in Figure 10 are: K pq = 0, η SB1 = η SB2 = η SB3 = η DB = 0.25, k 1 = k 2 = k 3 = 0, ψ = 0, θ = 0, β 1 Tm = β 2 Rm = β 3 Rm = π/6, γ T = γ R = π/2; those in Figure 11 are: K pq = 0, η SB1 = η SB2 = η SB3 = η DB = 0.25, k 1 = k 2 = k 3 = 6, ψ = 0, and θ = 0. The results obtained in Figures 10 and 11 show that the ST-CFs of the mathematical theoretical model and the simulation model match very well, demonstrating the validity of our simulation model. In addition, the fitting of the spatial-temporal correlation characteristics in the isotropic scattering environment is better than that in the non-isotropic scattering environment.

Conclusions
In this paper, we have developed and studied a novel 3D multiple-antenna theoretical channel model and a corresponding simulation channel model for mobile IoT environments. In the proposed model, the double-spheres model and the ellipsoid model are leveraged to characterize the effective local scattering region and the effective remote scattering region, respectively. Flexible parameters endow the model with the ability to adapt to various mobile IoT scenarios, which provides the capacity to investigate the impact of the scatterer distribution state, antenna deployment, and scatterer density. We derive the ST-CF and the corresponding Doppler power spectral density for both isotropic and non-isotropic scattering scenarios. It has been demonstrated that the scatterer distribution concentration influences the oscillation trend of the spatial correlation and that a higher scatterer density leads to significantly lower correlation properties. In addition, the angle between the mean azimuth angle and the antenna array orientation is shown to be a critical factor in determining the spatial correlation in a non-isotropic environment. These conclusions, observed from numerical simulations, can provide enlightenment for the optimized design of mobile IoT communication systems. Finally, excellent agreement is achieved between the theoretical model and the simulation model, which validates the utility of our analysis and derivations.
8,407
2021-02-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Frequency decoding of periodically timed action potentials through distinct activity patterns in a random neural network Frequency discrimination is a fundamental task of the auditory system. The mammalian inner ear, or cochlea, provides a place code in which different frequencies are detected at different spatial locations. However, a temporal code based on spike timing is also available: action potentials evoked in an auditory-nerve fiber by a low-frequency tone occur at a preferred phase of the stimulus—they exhibit phase locking—and thus provide temporal information about the tone's frequency. Humans employ this temporal information for discrimination of low frequencies. How might such temporal information be read out in the brain? Here we employ statistical and numerical methods to demonstrate that recurrent random neural networks in which connections between neurons introduce characteristic time delays, and in which neurons require temporally coinciding inputs for spike initiation, can perform sharp frequency discrimination when stimulated with phase-locked inputs. Although the frequency resolution achieved by such networks is limited by the noise in phase locking, the resolution for realistic values reaches the tiny frequency difference of 0.2% that has been measured in humans. Author Summary Humans can resolve tiny frequency differences of only 0.2%, much below the frequency interval of a semitone in Western music which is about 6%.How is this astonishing frequency resolution achieved? Sound is detected within the inner ear, or cochlea, in which auditory-nerve fibers fire action potentials upon acoustic stimulation.Which of an ear's 30,000 auditory-nerve fibers fire indicates the frequency of a pure tone.However, the timing of spikes from a single auditorynerve fiber can also provide information about the signal's frequency.Recent psychoacoustic experiments on the human perception of tones show that humans indeed employ this temporal information.The neural mechanisms, however, remain unclear. Here we show that a class of neural networks-with random connections, temporal delays, and coincidence detection-can read out the frequency information provided in the spike timing of auditory neurons.We employ methods from statistical physics as well as numerical simulations to demonstrate the frequency resolution that such networks can achieve resembles that of human observers. Introduction A sound impinging on the eardrum elicits a wave of displacement of the basilar membrane within the cochlea [1,2].Mechanosensitive hair cells on the basilar membrane transduce the membrane's vibration into electrical signals that are transmitted to the associated auditory-nerve fibers [3,4].Through position-dependent resonance along the basilar membrane the cochlea establishes a place code for frequencies: high frequencies evoke traveling waves that peak near the organ's base whereas the waves elicited by lower frequencies culminate at more apical positions. 
A temporal code may, however, supplement or even supersede the place code.In response to a pure sound at a frequency below about 300 Hz, an auditory-nerve fiber fires action potentials at every cycle of stimulation and at a fixed phase [3,5].Above 300 Hz the axon starts to skip cycles, but action potentials still occur at a preferred phase of the stimulus.The quality of this phase locking decays between 1 kHz and 4 kHz, however, and phase locking is lost for still higher frequencies.Phase locking below 4 kHz is sharpened in the auditory brainstem by specialized neurons such as spherical bushy cells that receive input from multiple auditory-nerve fibers [6,7].These cells can fire action potentials at every cycle of stimulation up to 800 Hz (Supplementary Figure 1).Temporal information about the stimulus frequency is therefore greatest for frequencies below 800 Hz, declines from 800 Hz to 4 kHz, and vanishes for still greater frequencies.In some species, such as the barn owl, phase locking can continue up to 10 kHz [8]. Phase locking is employed for sound localization in the horizontal plane [9,10].A sound coming from a subject's left, for example, reaches the left ear first and hence produces a phase delay in the stimulus at the right ear compared to that at the left.Auditory-nerve fibers preserve this phase difference, which is subsequently read out by binaurally sensitive neurons through delay and coincidence detection.A temporal delay generally results when one neuron signals another: the signal propagates along the axon of the transmitting neuron and along the dendrites of the receiving cell, producing delays of up to 20 ms with only a few microseconds of jitter [11][12][13].Coincidence detection occurs when two or more synchronous incoming spikes are required for a neuron to fire: the signals must arrive at the nerve cell's soma within a certain time window τ, comparable to the membrane's time constant, in order for their effects to add and initiate an action potential. Phase locking can also provide information about the frequency of a pure tone, for the duration between two subsequent neural spikes is on average the signal's period or a multiple thereof.Evidence for the usage of this information in the brain comes from psychoaoustic studies that show that human frequency discrimination is superior for the lower frequencies at which phase locking is available and that discrimination of these frequencies worsens when the phase information is perturbed [14][15][16][17][18]. It remains unclear how the temporal information on frequency is read out in the brain. The usage of delay and coincidence detection has been proposed, but the exact mechanism has not been specified and the resulting frequency resolution has not been determined [19].Other studies have proposed mathematical schemes for determining a signal's frequency from phase locking, the neural implementation of which remains unclear [20,21].Here we study how a random recurrent neural network with delay and coincidence detection can encode frequency in its activity pattern when stimulated with phase-locked, cycle-by-cycle input. 
Results
Denote by T the period and by L the duration in cycles of a signal, such that the phase-locked, cycle-by-cycle external spikes arrive at the network neurons around times 0, T, 2T, 3T, ..., (L-1)T. Assume that the first external spike triggers an action potential at each neuron; because of adaptation, two coincident spikes are needed for the generation of subsequent spikes. If a neuron i projects to another neuron j, then its first spike arrives there at a later time t_ij that represents the delay between the cells. If that time differs by no more than the small amount τ from the time T at which the second external spike arrives at neuron j, the two spikes act in concert to elicit an action potential; otherwise neuron j remains silent. If active, neuron j may trigger spikes in other neurons, specifically those for which the time delay from neuron j also matches the signal period. Sustained network activity results when the connectivity C between neurons (the average number of internal connections that a neuron receives) exceeds a certain value (Figure 2; Supplementary Methods).

How can we quantify this network's pattern of activity? Let the network comprise N neurons and denote a neuron as active if it fires spikes in response to at least half of the external spikes and as inactive otherwise. The network's activity may then be summarized by a binary vector x_T, whose component x_T(i) equals 1 if neuron i (i = 1, ..., N) is active under stimulation at a period T and x_T(i) = 0 otherwise. The fraction a of active nodes follows as the mean of these components.

An analytical approximation provides insight into the dependence of the network's activity on its connectivity and size. Assume that a neuron j is active if it receives at least one active connection, in which we define a connection from another neuron i to neuron j as active if spikes from neuron i traveling to neuron j can elicit action potentials there in at least half of the trials. Denote the average number of active connections that a neuron receives by B. The probability that a neuron does not receive any active connection then equals the fraction of inactive neurons, and the resulting relation can be solved through the Lambert W function. Further analysis shows that the average number of active connections can be approximated as B = 2τC/(t_max - t_min), and the analytically derived average network activity is then in excellent agreement with numerical simulations (Figure 1c and Supplementary Methods).

The fraction of active nodes does not depend on the network size N if the network connectivity C is independent of N. Because the probability c of a connection between two neurons follows as c = C/(N - 1), the resulting networks are sparse. We denote the network connectivity at which half of the neurons are active by C*. Because an activity pattern is most informative when half of the neurons are active, we employ this connectivity in the following.
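The analytical approximation above can be made concrete in a few lines. The exact equation is not reproduced in the text, so the sketch below assumes the self-consistent reading 1 - a = exp(-aB), with B = 2τC/(t_max - t_min); this is one interpretation, chosen because it reproduces the half-activity connectivity C* ≈ 1.85 quoted later.

```python
# Hedged sketch of the analytical activity approximation, under the assumption
# that the inactive fraction obeys 1 - a = exp(-a*B) with B = 2*tau*C/(t_max - t_min).
import numpy as np
from scipy.special import lambertw

TAU, T_MIN, T_MAX = 0.6e-3, 1.2e-3, 2.8e-3           # seconds, as in the Methods

def mean_active_connections(C):
    return 2.0 * TAU * C / (T_MAX - T_MIN)

def active_fraction(C):
    """Nontrivial solution of a = 1 - exp(-a*B), via the Lambert W function."""
    B = mean_active_connections(C)
    if B <= 1.0:
        return 0.0                                    # only the trivial solution a = 0
    return float(1.0 + lambertw(-B * np.exp(-B)).real / B)

# Half of the neurons are active when B = 2*ln(2),
# i.e. C* = ln(2) * (t_max - t_min) / tau, roughly 1.85.
C_star = np.log(2.0) * (T_MAX - T_MIN) / TAU
print(round(C_star, 2), round(active_fraction(C_star), 2))   # ~1.85, ~0.5
```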
Inaccuracy in phase locking results in noise: spikes from spherical bushy cells, for example, exhibit a phase distribution that is approximately Gaussian around the mean value with a standard deviation s that can be as small as one-twentieth of a cycle [7]. Input spikes therefore arrive at the network neurons at times ξ1, T+ξ2, ..., (L-1)T+ξL, in which ξk (k = 1, 2, ..., L) is a random Gaussian variable with zero mean and standard deviation s. Although small, this noise evokes slightly different neural activity patterns upon repeated stimulation. The brain may nevertheless learn the mean pattern at period T over many repeated trials. We define this mean pattern X_T such that a neuron counts as active in X_T if it is active in at least half of the trials. Stimulation at another signal period T′ evokes a different mean activity pattern X_T′. We can quantify its difference from the mean pattern X_T through the relative Hamming distance, which specifies the fraction of neurons that differ in their activity between the two patterns. Analytical calculations and numerical simulations show that this distance increases linearly in the absolute period difference |T′-T| and vanishes for T′=T (Figure 3a; Supplementary Methods). The network thus decodes stimulation periods through distinct patterns of mean activity. Each such pattern may then selectively activate a particular downstream neuron.

The identification of the period from an individual signal is inevitably limited by the noise in the timing of the external spikes. When a network is stimulated at a period T, its single-trial activity pattern x_T differs both from the mean pattern X_T at period T and from the mean pattern X_T′ at another period T′. Correct discrimination between T and T′ thus requires the pattern x_T to be closer to X_T than to X_T′. The relative Hamming distance d(x_T, X_T) fluctuates from trial to trial; an analytical approximation shows that the contributions of individual neurons can be regarded as effectively independent, and the trial-to-trial distribution of d(x_T, X_T) has a certain standard deviation σ (Figure 3b; Supplementary Methods). The distance d(x_T, X_T′) has a different mean but the same standard deviation σ (Figure 3d). The two distributions can be differentiated with at least 95% accuracy when the mean values D(x_T, X_T) and D(x_T, X_T′) differ by 4σ or more. When is this condition fulfilled?

Because larger system sizes N imply averaging over more neurons, and as confirmed by analytical and numerical computations, the standard deviation σ decreases as N^(-1/2), in accordance with the central limit theorem (Figure 3c; Supplementary Methods). We can therefore resort to a network that is large enough to yield a sufficiently small variance in the relative Hamming distance. Correct discrimination of a signal's period between T and T′ is then feasible as soon as the mean values differ sufficiently.

How does the mean value D(x_T, X_T′) depend on the difference in periods? Analytical and numerical computations show that D(x_T, X_T′) increases linearly in |T′-T| when |T′-T| > ΔT, for a threshold difference ΔT (Figure 3d; Supplementary Methods). When the periods of the two signals are closer, |T′-T| < ΔT, the distance remains near its value at T′=T. The threshold ΔT thus provides a measure for the smallest period difference that a particular network can resolve. The threshold value ΔT is proportional to the mean distance D(x_T, X_T) between a single-trial pattern x_T and the mean pattern X_T at period T (Figure 4; Supplementary Methods). Because the pattern evoked by a signal of greater length L involves more averages over external spike times, and hence over the phase noise, it results in a smaller mean value D(x_T, X_T) (Figure 5a; Supplementary Methods). However, a larger network size N does not affect this mean value (Figure 3c).
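The pattern statistics used in this argument can be sketched as follows, assuming the single-trial activity vectors are available as binary arrays with one row per trial; the function names are illustrative rather than the authors' code.

```python
# Small sketch of the pattern statistics described above: mean activity patterns,
# relative Hamming distances, and the 4-sigma separation criterion for reliable
# discrimination of two stimulation periods.
import numpy as np

def mean_pattern(trials):
    """A neuron counts as active in the mean pattern if active in >= half the trials."""
    return (trials.mean(axis=0) >= 0.5).astype(int)

def hamming(x, y):
    """Relative Hamming distance: fraction of neurons whose activity differs."""
    return float(np.mean(x != y))

def discriminable(trials_T, trials_Tp):
    """Check the 4-sigma criterion between d(x_T, X_T) and d(x_T, X_T')."""
    X_T, X_Tp = mean_pattern(trials_T), mean_pattern(trials_Tp)
    d_same = np.array([hamming(x, X_T) for x in trials_T])
    d_other = np.array([hamming(x, X_Tp) for x in trials_T])
    sigma = d_same.std(ddof=1)
    return (d_other.mean() - d_same.mean()) >= 4.0 * sigma
```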
The threshold ΔT also decreases as L^(-1/2) with increasing signal length L (Figure 5b). Longer signals indeed provide more information that can enhance frequency resolution. The improvement in resolution is less than that expected from the Fourier uncertainty principle, in which frequency resolution is inversely proportional to signal length [22]. The threshold ΔT for a given signal length is proportional to the noise in the phase-locked input signal but independent of the neural network's parameters (Supplementary Methods). For a realistic value of the standard deviation s of one-twentieth of a cycle and for a signal of length L=200 cycles, we obtain a period resolution ΔT/T of about 0.2% (Figure 5b). This value agrees well with the human frequency resolution measured in psychoacoustic experiments [15].

Discussion
Our results demonstrate that the timing of action potentials, even without the cochlear place code, allows for accurate frequency discrimination. A simple, randomly connected network with a range of signal-propagation delays between neurons can use phase-locked inputs to replicate the striking frequency discrimination of the human auditory system. The network encodes different input frequencies in distinct activity patterns. We have defined those patterns in the simplest possible way, through the mean activity of the network neurons during stimulation. Further statistics of the spike trains fired by the network neurons, such as temporal correlation between the spikes from one neuron as well as correlation between spikes from different neurons, could lead to finer discrimination of activity patterns and hence improve the precision of frequency discrimination.

The activity patterns that we have defined here may be read out by downstream neurons. A given downstream neuron may detect a particular activity pattern of the network if it receives excitatory connections from the neurons that are active for this pattern and inhibitory connections from the network neurons that are inactive for this pattern. The properties and precision of such a downstream read-out will be investigated in future studies.

In our simulations we have employed a range of temporal delays between network neurons that encompasses about an octave. Frequency discrimination by such a network is accordingly restricted to a spectral band of less than an octave, and many networks, each with a distinct range of temporal delays, are required to cover a broader frequency range. Where might such structures exist in the brain? The inferior colliculus displays a tonotopic array of multiple frequency-band laminae, each of which analyzes about one-third of an octave [23,24]. Substantial signal processing appears to be performed within each lamina, potentially including pitch detection [25,26]. Frequency discrimination through frequency-dependent network activity patterns as proposed here might therefore occur in these laminae. Simultaneous recordings from many interconnected neurons within one lamina would be required for an experimental test of this hypothesis.

Neural networks can exhibit emergent computational abilities such as synchronous information transmission, memory, and speech recognition that are not present at the level of individual nerve cells [27][28][29][30][31][32]. It remains uncertain to what extent such brain functions depend upon a neuron's precise action-potential timing as opposed to its average firing rate [33,34].
Although our study is specific to the auditory system, it may also aid a more general understanding of the usage of temporal codes for other brain functions.

Methods
Random neuronal networks are constructed by assigning to every pair (i, j), i ≠ j, of neurons a connection from i to j with a low probability c. The average number of connections emerging from an individual neuron accordingly reads C = c(N-1) and equals the average number of a neuron's incoming connections. To each connection from a neuron i to another neuron j we assign a time delay t_ij that is drawn randomly between a minimal time t_min and a maximal time t_max. In our simulations we have employed t_min = 1.2 ms, t_max = 2.8 ms, and a period T = 2 ms. The probability distributions in Figure 3b show a typical result from one random network. All other numerical results, including mean activity patterns and the statistics of distances between individual trials and mean patterns, have been obtained by averaging over at least 100 different random networks.

We measure the arrival of an action potential at a neuron's soma by the time at which the maximum of the depolarization occurs. Each neuron fires an action potential upon arrival of a signal's first external spike. The initiation of subsequent action potentials requires that two action potentials arrive at the neuron's soma within a time window τ; in our simulations we have employed τ = 0.6 ms. Such a difference between generation of the first spike and later ones could result, for example, from adaptation in a neuron. Generation of an action potential is followed by a refractory period, for whose duration we have assumed 1.2 ms.

For simulations of network dynamics we have developed a fast, event-based algorithm that stores the propagating spikes and their arrival times at each neuron. At each step in the algorithm we compute the earliest subsequent time at which a neuron fires a spike, determine to which neurons that spike propagates as well as the associated arrival times, and appropriately update the list of incoming spikes at those neurons. The mean activity patterns as well as the statistics of the pattern distance of a single trial from a mean pattern have been computed from at least 100 trials for each network realization.

Phase-locked input
An external spike at neuron j together with an internal spike received from neuron i causes neuron j to fire with a probability p(T, t_ij) that depends on the signal period and on the delay of the connection. The connection from i to j is active if the number of such firings associated with L external spikes is at least L/2; the corresponding probability is denoted q(T, t_ij). Denote the index set of neurons that have a forward connection to neuron j as I_j. The average number B of a neuron's active connections then follows from these probabilities.
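A minimal sketch of such an event-based simulation is given below. It follows the rules stated in the Methods (the first external spike fires every neuron, two inputs within the coincidence window τ trigger later spikes, and each spike is followed by a refractory period), but the data structures are illustrative and details such as how the depolarization maximum is measured are simplified.

```python
# Hedged sketch of an event-based network simulation with delays, coincidence
# detection, refractoriness, and jittered phase-locked external input.
import heapq
import numpy as np

def simulate(delays, T, L, tau=0.6e-3, refractory=1.2e-3, s=0.0, rng=None):
    """delays: list of dicts; delays[i] maps each target neuron j to the delay t_ij.
    Returns, for each neuron, how many spikes it fired during the L input cycles."""
    if rng is None:
        rng = np.random.default_rng()
    N = len(delays)
    last_input = [-np.inf] * N        # most recent unpaired input per neuron
    last_spike = [-np.inf] * N        # last firing time per neuron (refractoriness)
    spike_counts = np.zeros(N, dtype=int)
    events = []                       # (arrival_time, neuron, is_external)
    for k in range(L):                # phase-locked external spikes with jitter s
        t_k = k * T + (rng.normal(0.0, s) if s > 0 else 0.0)
        for j in range(N):
            heapq.heappush(events, (t_k, j, True))

    def fire(j, t):
        last_spike[j] = t
        last_input[j] = -np.inf       # the pending input has been consumed
        spike_counts[j] += 1
        for target, d in delays[j].items():            # propagate along connections
            heapq.heappush(events, (t + d, target, False))

    while events:
        t, j, is_external = heapq.heappop(events)
        if t - last_spike[j] < refractory:
            continue                                    # input during refractory period
        if is_external and spike_counts[j] == 0:
            fire(j, t)                                  # first external spike always fires
        elif t - last_input[j] <= tau:
            fire(j, t)                                  # two coincident inputs
        else:
            last_input[j] = t                           # remember the unpaired input
    return spike_counts
```

A neuron can then be classified as active if its spike count reaches at least L/2, matching the definition used in the Results.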
Denote by ΔB(x,y) the average number of a neuron's active connections that differ between the two patterns x and y. If this difference is small, as it is for the small differences in the signal period that we consider, the pattern distance d(x,y) can be approximated as depending linearly on the difference ΔB(x,y).

We start by computing the distance d(X_T, X_T′) between the mean pattern X_T at period T and the mean pattern X_T′ at period T′. Consider a connection from a neuron i to another neuron j with a time delay t_ij. For the mean patterns we can ignore the phase noise in Equation S1: the connection is thus active during a signal period T when T−τ < t_ij < T+τ and vanishes otherwise. Analogously, a signal period T′ yields an active connection when T′−τ < t_ij < T′+τ and an inactive connection otherwise. Assume that T < T′; the other case follows by analogy. Only when the time delay t_ij lies in the intervals (T−τ, T′−τ) or (T+τ, T′+τ) does the connection differ in its activity between the two patterns, and for the uniform delay distribution the probability of having such a time delay is $2(T'-T)/(t_{max}-t_{min})$. The average number ΔB(X_T, X_T′) of a neuron's active connections that differ between the two patterns follows as

$$\Delta B(X_T, X_{T'}) = \frac{2C\,(T'-T)}{t_{max}-t_{min}},$$

and the pattern distance follows from the linear dependence on ΔB noted above. As observed numerically, the pattern distance increases linearly in |T′−T| (Figure 3a). For the parameters employed in our simulations, the above expression yields a slope that is comparable to but about 20% greater than the numerical value.

Let us now compute the distance d(x_T, X_T′) between a single-trial pattern x_T during period T and the mean pattern X_T′ during another period T′. This distance varies from trial to trial (Figure 3b). Again, we consider a connection from a neuron i to another neuron j that induces a certain time delay t_ij. As before, the mean pattern X_T′ is unaffected by the phase noise: the connection is active when T′−τ < t_ij < T′+τ and vanishes otherwise. For the single-trial pattern x_T, however, the activity may fluctuate: as computed above, the connection is active with probability q(T,t_ij) and inactive otherwise (Equation S4). To capture this stochasticity we introduce a random binary variable Z_ij that is one when the connection from neuron i to neuron j differs in its activity between the single-trial pattern x_T and the mean pattern X_T′; otherwise Z_ij is zero. We find that Z_ij=1 with a probability r(T,T′,t_ij) and Z_ij=0 with a probability 1−r(T,T′,t_ij), in which r(T,T′,t_ij) = 1−q(T,t_ij) when T′−τ < t_ij < T′+τ and r(T,T′,t_ij) = q(T,t_ij) otherwise.

To compute the integral in Equation S18 we employ a piecewise linear approximation q_lin(T,t) of q(T,t) (Figure S2). The standard deviation σ therefore decreases as $N^{-1/2}$ with an increasing system size N and as $L^{-1/4}$ for a greater signal length L, in agreement with our numerical results (Figures 3c and 5a). The standard deviation is moreover independent of the signal periods T and T′, as we have also found numerically (Figures 3b and 4). The values predicted by the above analytical expression are about 30% lower than those calculated numerically.

The linear approximation q_lin(T,t) results, through Equation S11, in a piecewise linear approximation r_lin(T,T′,t) for r(T,T′,t) that we employ to approximate the integral in Equation S21 (Figure S2). Two cases then emerge. First, for small period differences, $|T'-T| < \pi s/\sqrt{2L}$, the pattern distance is quadratic in T′−T and has a nonvanishing minimum at T′=T, as seen in numerics (Figure 4).
Second, for larger period differences, $|T'-T| \ge \pi s/\sqrt{2L}$, the pattern distance increases linearly in |T′−T|, as we have already found numerically (Figure 4). Because the value $\Delta T = \pi s/\sqrt{2L}$ separates the two regimes, we consider it to be the network's threshold for period discrimination. The above analytical expression shows that ΔT is independent of the system size N but decreases as $L^{-1/2}$ with increasing signal length L, in agreement with our numerical results (Figure 5b).

Equations S23 and S25 show that the mean distance D(x_T, X_T′) is invariant under exchange of T and T′: D(x_T, X_T′) = D(x_T′, X_T), in agreement with numerical results. Because the standard deviation σ does not depend on either T or T′, it follows that the distributions of d(x_T, X_T′) and d(x_T′, X_T) are identical.

Figure 1. Schematic diagrams of a neural network with delay and coincidence detection. a, Each neuron (light gray) can receive phase-locked inputs from a preceding neuron through external nerve fibers (orange). The network neurons are randomly connected (light blue) with the indicated characteristic delays in signal propagation. To fire an action potential, a neuron requires two temporally coincident spikes, such as one from an external and one from an internal source. b, When a periodic signal arrives through the external nerve fibers (red), the internal connections whose signal delay is approximately matched to the signal period induce spikes in their target neurons, resulting in a pattern of active internal connections (dark blue) and active neurons (black borders). c, A different signal period evokes a distinct pattern of active connections and active neurons.

Figure 2: Patterns of network activity. a, Each horizontal line depicts the activity of a single neuron in the network. Because every cell fires a spike upon receiving the first input signal, the raster of action potentials displays a vertical black line at its outset. The generation of spikes subsequently requires two incoming action potentials that temporally coincide. The connections between neurons induce different delays, so an activity pattern results in which some neurons fire at almost every cycle whereas others remain silent. Noise in the timing of the external spikes introduces variation in the firing of each neuron. b, The fraction of active neurons depends on the mean connectivity C, the average number of internal connections that each neuron receives. An analytical approximation (black line) confirms that the fraction of active neurons is independent of the network size N. Half of the neurons are active at a connectivity C* ≈ 1.85.

Figure 3: Distances of activity patterns evoked by different signal periods.

Figure 4: Influence of network size N and signal length L.

Figure 5: Dependence of period resolution on signal length L.

In conjunction with Equation S4, the relation for B yields an analytical dependence of the fraction a of active nodes on the connectivity C. This dependence is shown as a black line in Figure 2b and agrees excellently with numerical results. We find that a = 0.5 for a connectivity of C* ≈ 1.85; the corresponding value B* follows from the network connectivity C* through Equation S6.
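The linear growth of the mean-pattern distance with the period difference, derived above, can be illustrated with a short self-contained sketch. The rule used below, that a neuron is active in the mean pattern whenever it has at least one incoming connection with T − τ < t_ij < T + τ, is a simplification of the activity definition in the paper; the connectivity and all names are illustrative assumptions.

```python
import numpy as np

def mean_pattern(delays_in, T, tau=0.6):
    """Score a neuron as active if at least one incoming delay matches the period."""
    return np.array([np.any(np.abs(d - T) < tau) for d in delays_in], dtype=int)

def pattern_distance(x, y):
    """Relative Hamming distance between two binary activity patterns."""
    return float(np.mean(x != y))

rng = np.random.default_rng(1)
N, C = 500, 5.0                    # number of neurons, mean incoming connections
t_min, t_max, T = 1.2, 2.8, 2.0    # delay range and reference period (ms)
n_in = rng.poisson(C, size=N)
delays_in = [rng.uniform(t_min, t_max, size=k) for k in n_in]

x_T = mean_pattern(delays_in, T)
for dT in (0.01, 0.02, 0.05, 0.10):
    d = pattern_distance(x_T, mean_pattern(delays_in, T + dT))
    print(f"T' - T = {dT:.2f} ms  ->  d(X_T, X_T') = {d:.3f}")
```

With these values the printed distances grow roughly in proportion to T′ − T, in line with the behaviour reported for Figure 3a.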
5,836.6
2012-06-08T00:00:00.000
[ "Computer Science" ]
Introduction on Data Analysis and Graphical Representation of Data

In everyday life, most people use spreadsheets to maintain their professional or research work. A small number of statisticians and researchers use Microsoft Excel for its built-in statistical computations and mathematical functions. This paper aims to guide researchers and statisticians through techniques used by professional data management teams, so that they save time and gain a better knowledge of data analysis. By doing the analysis themselves, they obtain correct and accurate results from their collected data. Through Microsoft Excel, researchers can collect data, summarize data, interpret data and perform the specific techniques that suit their work. The main aim of the study is to enrich their knowledge of data analysis and graphical representation by computational methods.

Introduction

Data can be defined as a systematic record of a particular quantity. It is the different values of that quantity represented together in a set. It is a collection of facts and figures to be used for a specific purpose such as a survey or analysis. When arranged in an organized form, data can be called information. The source of data is also an important factor.

Types of Data

Data types are an important concept in statistics that needs to be understood in order to apply statistical measurements to your data correctly and to draw valid conclusions about it. Examples include speed, salary and word count.

Data Collection

In statistical studies, data collection is the first step. Data collected by an investigator from personal experimental studies are called primary data. In this method, the investigator interacts directly with the respondents and gathers the information. When data are obtained from some secondary source such as journals, magazines, reports etc., they are called secondary data.

Analyzing Data

Data analysis is defined as a process of cleaning, transforming, and modeling data to discover useful information for business decision-making. Its purpose is to extract useful information from data and to take decisions based upon the data analysis. There are several types of data analysis techniques that exist based on business and technology.

Diagnostic Analysis: This analysis is useful to identify behaviour patterns in data.

Predictive Analysis: This analysis makes calculations about future outcomes based on past or present data. It can forecast or estimate values based on the available information.

Prescriptive Analysis: This analysis is the last phase of business analytics. It goes beyond the question "what is likely to happen?" to suggest what should be done.

Classification of Data

Connor defined classification as "the process of arranging things in groups or classes according to their resemblances and affinities and gives expression to the unity of attributes that may subsist amongst a diversity of individuals". Classification is also called categorization of data, and data can be broadly classified into four types.

Tabulation of Data

Tabulation is also known as the frequency distribution of a variable.
The main objective of tabulation is to reduce the data and to make comparison easy. In the tabular form of data, the required interpretation is easily reached. The preparation of a table depends upon the size and nature of the data. Data collected from the field and recorded in the way they were collected are raw data. Monthly reports of trade and commerce, food stocks and hospital statistics are some of the examples. These tabulated data may need further "finishing" before being interpreted.

MS Excel: Microsoft Excel is a spreadsheet program included in the Microsoft Office suite of applications. MS Excel provides features such as calculation, graphing tools, pivot tables and a macro programming language called Visual Basic for Applications. It also offers a suite of statistical analysis functions and other tools that can be used to run descriptive statistics and to perform several different statistical tests used in business and research [1][2][3][4].

Data Entry: Data entry in the worksheet is an easy process. There is no need to create a structure as in an Access table. We can enter different types of fields in rows and columns with headings (Russel A. Stultz (1997)) [4][5][6][7].

Graphical Representation of Data in MS Excel: The data can be displayed with the help of graphs and diagrams instead of classification and tabulation. There are many reasons to draw a graph; the most important reason is that one simple graph says more than words. Before proceeding to formal statistical calculation, graphical representation is usually suggested in order to understand the data in an easy way. Graphs give us a visualization of the data (Tukey JW 1990) [6]. The most frequently used graphs are the bar chart, pie chart, histogram and scatter diagram.

Bar Chart: In a bar chart the value for each category is shown by a vertical or horizontal bar whose length is proportional to the value. If the bars are in the vertical direction, it is called a column chart / bar chart. A bar chart is used to represent the values of categorical variables like gender, age groups, social categories, etc. To construct a bar chart in Excel, first enter the data in a worksheet with headings in two columns with a row (or two rows and a column). Select all cells with headings. Click on the "Insert" tab and then click on column chart.

Pie Chart: First enter the data in a worksheet to construct a pie chart. Then select the data of the first series and choose the pie chart in the Insert tab of the Excel sheet. The pie chart will appear with different sectors as per the data values. A circle is divided into sectors having an area which is proportional to the frequencies or percentages of cases under the various categories. A pie chart is used to represent the distribution of categorical data as a percentage of the total values. This chart is popularly used to describe financial and economic data.

Histogram and Frequency Distribution in Excel: A histogram is a graphical presentation of a frequency distribution in which the variable characteristics are presented on the x-axis of the graph while the frequencies are on the y-axis. In a histogram, each bin allows a continuation of values without a gap between the adjacent columns. The curve of the histogram consists of a series of adjacent rectangles drawn for a grouped frequency. A frequency distribution table in Excel gives a detailed account of how the data are spread out. A frequency distribution table is usually paired with a histogram. A frequency distribution table with a histogram can be made in Excel as follows.
For this, the Data Analysis ToolPak must be installed (Sarma, K.V.S. (2010)) [1]. Enter the data into a worksheet in columns with headings, then click on the "Data" tab and choose "Data Analysis" (Fig. 8: Dialogue box of Data Analysis). In this dialogue box click on Histogram and then select a location where you want the output to appear in the worksheet (Fig. 9: Histogram Chart and Frequency Polygon).

Scatter Diagram: The scatter diagram is the simplest method of studying the relationship between two variables. It shows the direction of the correlation between the variables and is a primary tool in correlation and regression studies. The graphical presentation is in the form of dots, so it is also called a dot diagram. It is easy to understand and makes the given data easy to interpret. The construction of this chart is similar to that of the other charts: it can be created by choosing the scatter diagram among the chart options in the Excel sheet.
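For readers who prefer a scriptable alternative to the ToolPak steps above, the short Python sketch below carries out the same frequency-distribution, histogram and scatter-diagram construction. It is only an illustration: the data, bin edges and column names are invented, and pandas/matplotlib are assumed to be available.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative data: 50 exam scores (any numeric worksheet column would do).
rng = np.random.default_rng(42)
scores = rng.normal(65, 12, size=50).round()

# Frequency distribution table with user-chosen bins, analogous to the
# bin range supplied to Excel's Histogram tool.
bins = np.arange(30, 101, 10)
freq_table = (pd.Series(pd.cut(scores, bins=bins, right=True))
                .value_counts()
                .sort_index()
                .rename("Frequency"))
print(freq_table)

# Histogram and a simple scatter diagram of two related variables.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))
ax1.hist(scores, bins=bins, edgecolor="black")
ax1.set(title="Histogram", xlabel="Score", ylabel="Frequency")

hours = rng.uniform(1, 10, size=50)
ax2.scatter(hours, hours * 5 + rng.normal(0, 6, size=50))
ax2.set(title="Scatter diagram", xlabel="Hours studied", ylabel="Score")
plt.tight_layout()
plt.show()
```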
1,766.8
2020-11-03T00:00:00.000
[ "Computer Science" ]
FAM5C Contributes to Aggressive Periodontitis Aggressive periodontitis is characterized by a rapid and severe periodontal destruction in young systemically healthy subjects. A greater prevalence is reported in Africans and African descendent groups than in Caucasians and Hispanics. We first fine mapped the interval 1q24.2 to 1q31.3 suggested as containing an aggressive periodontitis locus. Three hundred and eighty-nine subjects from 55 pedigrees were studied. Saliva samples were collected from all subjects, and DNA was extracted. Twenty-one single nucleotide polymorphisms were selected and analyzed by standard polymerase chain reaction using TaqMan chemistry. Non-parametric linkage and transmission distortion analyses were performed. Although linkage results were negative, statistically significant association between two markers, rs1935881 and rs1342913, in the FAM5C gene and aggressive periodontitis (p = 0.03) was found. Haplotype analysis showed an association between aggressive periodontitis and the haplotype A-G (rs1935881-rs1342913; p = 0.009). Sequence analysis of FAM5C coding regions did not disclose any mutations, but two variants in conserved intronic regions of FAM5C, rs57694932 and rs10494634, were found. However, these two variants are not associated with aggressive periodontitis. Secondly, we investigated the pattern of FAM5C expression in aggressive periodontitis lesions and its possible correlations with inflammatory/immunological factors and pathogens commonly associated with periodontal diseases. FAM5C mRNA expression was significantly higher in diseased versus healthy sites, and was found to be correlated to the IL-1β, IL-17A, IL-4 and RANKL mRNA levels. No correlations were found between FAM5C levels and the presence and load of red complex periodontopathogens or Aggregatibacter actinomycetemcomitans. This study provides evidence that FAM5C contributes to aggressive periodontitis. Introduction Aggressive periodontitis is characterized by a rapid and severe periodontal destruction in young systemically healthy subjects, and can be subdivided into localized and generalized forms according to the extension of the periodontal destruction [1].Epidemiological surveys have shown that the prevalence of aggressive periodontitis varies among ethnic groups, regions and countries, and may range from 0.1% to 15% [2,3].A greater prevalence is reported in Africans and African descendent groups than in Caucasians and Hispanics [4,5]. There are many reports in the literature describing families with multiple aggressive periodontitis affected individuals, suggesting familial aggregation [6][7][8].Several research groups have used segregation analysis to determine the likely mode of inheritance for this trait.The patterns of disease in these families have led investigators to postulate both dominant and recessive modes of Mendelian inheritance for aggressive periodontitis [9][10][11].Segregation analysis that included the families in the present study suggested an excessive disease transmission from heterozygous parents.This model provides support for the hypothesis that a few loci, each one with relatively small effects, contribute to aggressive periodontitis, with or without interaction with environmental factors [12]. 
Candidate gene approaches have been used to study aggressive periodontitis, but the results so far are very diverse and conflicting [13,14]. A case-control genome wide association study suggested a role for GLT6D1 in aggressive periodontitis in Germans [15]. One linkage study in African American families [16] showed that aggressive periodontitis is linked to the marker D1S492, located on chromosome 1q. A susceptibility locus for aggressive periodontitis was determined between the markers D1S196 and D1S533. This region of chromosome 1 (from base pair 165,770,752 to base pair 192,424,848) includes the cytogenetic regions from 1q24.2 to 1q31.3. In this study, we first investigated this chromosomal region for genetic variants that contribute to aggressive periodontitis in a clinically well-characterized group of families, several of African descent (Table 1), segregating this condition. The hypothesis of this study is that genetic variation located between 1q24.2 and 1q31.3 contributes to aggressive periodontitis. Since the present genetic studies provide evidence that the FAM5C gene contributes to aggressive periodontitis, we also investigated the pattern of FAM5C expression in periodontal lesions and its possible correlations with inflammatory/immunological factors and pathogens commonly associated with periodontal diseases in a second population presenting aggressive periodontitis, compared to periodontally-healthy controls.

Genetic results

All markers studied (Table 2) were in Hardy-Weinberg equilibrium (data not shown). Non-parametric linkage analysis showed no linkage between genetic markers in 1q24.2-1q31.3 and aggressive periodontitis (Table 3 [48]). Association could be seen between aggressive periodontitis and markers in FAM5C, rs1935881 and rs1342913. Both the A allele (common allele) of marker rs1935881 and the G allele (rare allele) of marker rs1342913 were observed to be over-transmitted among cases (p = 0.03 for both; complete results in Table S3). The results of PLINK also suggested an association between aggressive periodontitis and the same marker alleles: most common allele A of marker rs1935881 (OR = 0.50, 95% CI 0.15-1.66, p = 0.07) and rare allele G of marker rs1342913 (OR = 3.2, 95% CI 1.17-8.73, p = 0.03). No linkage disequilibrium was apparent between these two markers (Table S4). Haplotype analysis also showed an association between the haplotype A-G (rs1935881-rs1342913; p = 0.009) and aggressive periodontitis (Table 4). Additional haplotypes including these two markers also had suggestive association results (Table S5).

The rs1935881 wild type allele A is conserved in several mammals, while the G allele of rs1342913 is conserved back to zebrafish (Figure 1). The TRANSFAC program predicted transcription factors binding at the sites of rs1935881 and rs1342913 (Table S6).

Sequencing of FAM5C coding regions did not disclose any etiologic mutations. Two variants were found in highly conserved intronic regions of the FAM5C gene: rs57694932 and rs10494634. TRANSFAC predicted changes in binding site affinity with these variants (Table S6). These two variants were genotyped in the entire population but they did not show an association with aggressive periodontitis (Tables 3 and S7). They are in moderate linkage disequilibrium (D' = 0.655) with the two markers associated with aggressive periodontitis (rs1935881 and rs1342913), which suggests they do not explain the association observed.
Discussion

Aggressive periodontitis is a group of infrequent types of periodontal disease with rapid attachment loss and bone destruction initiated at a young age. Though a variety of factors, such as microbial, environmental, behavioral and systemic disease, are suggested to influence the risk of aggressive periodontitis, an individual's genetic profile is a crucial factor influencing the systemic or host response-related risk [17,18]. This is the first report that provides evidence of an association between variation in FAM5C and aggressive periodontitis. Our work supports the initial findings of linkage [16] between chromosome 1q and aggressive periodontitis.

The family-based study design that we used is robust to problems resulting from population admixture or stratification [19]. Brazil is a trihybrid population of Native Indians, Caucasians with Portuguese ancestry and Africans [20]. The last National Research for Sample of Domiciles census in Rio de Janeiro revealed that in this city 53.4% are white, 46.1% are black, and 0.5% are Asian or Amerindian [21]. Table 1 describes additional demographic variables of the families studied.

We found evidence of association between aggressive periodontitis and FAM5C, but not linkage. Since marker allele-disease association and linkage between a disease locus and a marker locus are two different events, linkage without evidence of association and association without evidence of linkage are possible observations [22]. In linkage analysis, we take advantage of the process of forming new allelic combinations (recombination) to identify loci that are linked to the disease. One can argue that these alleles are necessary for the disease to happen. However, an association can exist if the disease-causing variants are in linkage disequilibrium with the associated marker/locus. An association can also exist if the associated genetic marker is a susceptibility locus that increases the probability of developing the disease. By themselves, these alleles are not sufficient for disease manifestation. If the linkage disequilibrium hypothesis is correct, there will be evidence for linkage. If the susceptibility locus hypothesis is correct, there may be strong evidence against linkage [22].
The FAM5C gene (NM199051-1, Gene ID: 339479) is located on chromosome 1q31.1, comprises eight exons and encodes a protein of 766 amino acids named FAM5C (family with sequence similarity 5, member C; aliases BRINP3, DBCCR1L, RP11-445K1.1). FAM5C was originally identified in the mouse brain as a gene that is induced by bone morphogenetic protein and retinoic acid signaling [23]. Importantly, FAM5C is localized in the mitochondria, and over-expression of this molecule leads to increased proliferation, migration, and invasion of non-tumorigenic pituitary cells [24], a phenotype relevant to the cellular changes of smooth muscle cells that are associated with the formation and vulnerability of an atherosclerotic plaque [25,26]. FAM5C alleles are also implicated in the risk of myocardial infarction [27]. Through complex signaling cascades, mitochondria have the ability to activate multiple pathways that modulate cell proliferation and, inversely, promote cell arrest and programmed cell death [28], all phenomena relevant in the pathogenesis of periodontal diseases.

Our exploratory genome wide scan analysis unveiled new candidate loci for aggressive periodontitis. The regions on chromosomes 2, 3, 5, 6, and 18 included many associated markers (Table 5), spanned large segments and included several hundred genes, but fine-mapping approaches such as the one used in this study can considerably reduce the time and cost needed to study these loci. Out of the most studied genes in aggressive periodontitis [IL1-A and IL1-B (2q14), IL-4 (5q31.1), IL-10 (1q31-q32), FcγRIIa, FcγRIIb, and FcγRIIIb (1q23), and TNFA (6p21.3)], IL-4 and TNFA map in the intervals with suggestive association results. IL-10 maps in the interval analyzed in the present study (1q24.2 to 1q31.3) and FcγRIIa, FcγRIIb, and FcγRIIIb are just outside of it.

Interestingly, this preliminary genome wide scan analysis did not suggest linkage to 9q34.3. This locus was recently shown to be associated with aggressive periodontitis in Germans [15]. Since the families studied here are from a distinct geographic location, it is possible that the role of GLT6D1 in 9q34.3 in these families is less pronounced. Future investigations in our study population include replication of the German genome wide scan finding.

Since literature data are scarce to suggest a mechanism linking FAM5C to the pathogenesis of aggressive periodontitis, we next investigated its pattern of expression in periodontal lesions and possible correlations with inflammatory/immunological and microbial factors classically associated with the periodontitis outcome. FAM5C expression was found to be significantly higher in diseased tissues, and to present a slight but significant correlation with IL-1β, IL-17A, IL-4 and RANKL expression (Figure 2). The pro-inflammatory cytokine IL-1β has been classically associated with inflammatory cell influx and osteoclastogenesis in the periodontal environment [29], and a similar role for IL-17A was recently suggested [30].
Table 5. List of markers associated with aggressive periodontitis in the genome wide scan analysis.

Interestingly, both cytokines are positive regulators of RANKL expression, the master regulator of osteoclast differentiation and activation, which is thought to account for alveolar bone loss throughout the periodontal disease process [31]. Conversely, IL-4 was described as an inhibitor of RANKL expression, but in certain conditions it may increase osteoclast activity [32]. While some studies suggest a possible destructive role for IL-4 in both chronic and aggressive periodontitis [33,34], other studies suggest that this cytokine has a protective role against tissue destruction [35,36]. Therefore, it is possible to suppose that FAM5C may somehow modulate or interfere in the cytokine network in diseased periodontal tissues, and consequently impact disease outcome. Interestingly, while destructive cytokine expression has been linked to the presence of classic periodontopathogens [33], FAM5C mRNA levels were not associated with the presence or load of red complex periodontopathogens or Aggregatibacter actinomycetemcomitans, reinforcing the putative strong genetic control of its expression in periodontal tissues. In summary, this study provides evidence that variation in FAM5C might contribute to aggressive periodontitis, and that the markers rs1935881 and rs1342913 are candidate functional variants (based on multispecies nucleotide sequence comparisons and electronic transcription factor binding site predictions - Figure 1 and Table S6) or are in linkage disequilibrium with still unknown disease-predisposing alleles. Future work will investigate whether expression profiles of FAM5C are associated with genetic variation in the gene.

Subjects (Genetic Studies)

Three hundred and seventy-one subjects from 54 pedigrees (75 nuclear mother-father-affected child trios) were recruited at the Periodontology Department of the Rio de Janeiro State University (Rio de Janeiro, RJ, Brazil) and UNIGRANRIO (Duque de Caxias, RJ, Brazil) (Figure S1). One additional family was recruited at Guarulhos University (Guarulhos, SP, Brazil) and included the father, mother and sixteen offspring (Figure S2). All subjects were of Brazilian descent. The protocol for the study was reviewed and approved by the Ethics Committees of the Rio de Janeiro State University, Guarulhos University, and the University of Pittsburgh, and written informed consent was obtained from all individuals prior to any research activity. Aggressive periodontitis was diagnosed according to the 1999 international classification of periodontal diseases [1] and positive individuals were assigned as affected. If individuals were edentulous and reported having lost all their teeth at a young age (before 35 years), for no obvious reasons such as trauma or extensive cavities, this was recognized as a potential indicator that they started as an aggressive periodontitis case and we also designated them as affected. In addition, the following information was collected by the same examiner from all probands and family members: affection status, gender, age, family relationship and ethnicity, cigarette smoking habits, current medications taken and general health status. In addition, clinical data (pocket probing depth and clinical attachment level) and radiological examinations were collected from all participants. Individuals with co-existing morbidities (e.g. diabetes) or smokers were not defined as affected, to minimize the risk of inadvertently including chronic periodontitis in the analysis.
Isolation of genomic DNA

Saliva samples were collected from all of the 389 individuals with the Oragene™ DNA Self-Collection Kit (DNA Genotek Inc., Kanata, ON, Canada). The DNA was extracted using the protocol for manual purification of DNA from 0.5 mL of Oragene™/saliva. The DNA integrity was checked and quantified using absolute quantification in real-time PCR as suggested by Applied Biosystems (Foster City, CA, USA).

Selection of single nucleotide polymorphisms (SNPs)

The region between markers D1S196 and D1S533 on chromosome 1 (1q24.2-1q31.3), covering about 26 million base pairs, was studied using data from the International HapMap Project [37] and the University of California Santa Cruz Genome Bioinformatics site, and viewed through the software Haploview [38]. Based on pairwise linkage disequilibrium, haplotype block structures, and the structure of genes, we identified the 14 most informative single nucleotide polymorphisms in the region (Table 2).

Subjects (Gene expression studies)

One hundred and three subjects (57 healthy controls and 46 presenting aggressive periodontitis) were recruited at the Department of Periodontics, University of Ribeirão Preto Dental School (UNAERP). All subjects were of Brazilian descent. The protocol for the study was reviewed and approved by the Ethics Committee of UNAERP and written informed consent was obtained from all individuals prior to any research activity. All subjects were diagnosed as described above for the genetic analysis.

Gene expression analysis

One biopsy of gingival tissue from each periodontally-healthy subject (N = 57) was taken from sites that showed no bleeding on probing, probing depth smaller than three millimeters, and clinical attachment loss smaller than one millimeter, during surgical procedures performed for esthetic, orthodontic or prosthetic reasons. Samples included junctional epithelium, gingival crevicular epithelium and connective gingival tissue. One biopsy of gingival tissue from each aggressive periodontitis patient (N = 46) was taken from the gingival margin to the bottom of the gingival pocket of affected sites, and included junctional epithelium, periodontal pocket epithelium, and connective gingival or granulation tissue. These samples were collected during surgical therapy of the sites that exhibited persistent bleeding on probing and increased probing depth three to four weeks after the basic periodontal therapy (non-responsive sites), as previously described [41]. The extraction of total RNA from periodontal tissue samples was performed with Trizol reagent (Invitrogen, Carlsbad, CA, USA), and the cDNA synthesis was accomplished as previously described [41]. Real-time PCR mRNA experiments were performed in a MiniOpticon system (BioRad, Hercules, CA, USA), using SybrGreen MasterMix (Invitrogen, Carlsbad, CA, USA), with 2.5 ng of cDNA in each reaction and primers previously described [41]. Calculations for determining the relative levels of gene expression were made from triplicate measurements of the target gene, with normalization to β-actin in the sample, using the cycle threshold (Ct) method and the 2^−ΔΔCt equation, as previously detailed [41].
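As a small illustration of the 2^−ΔΔCt calculation referred to above, the sketch below computes a fold change from triplicate Ct values. The numbers and the function name are invented for the example and are not data from the study.

```python
import numpy as np

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Relative mRNA level by the 2^-ddCt method.

    ct_target / ct_reference: triplicate Ct values for the gene of interest and the
    normalizer (e.g. beta-actin) in the diseased sample; *_ctrl: the same
    measurements in the healthy control sample.
    """
    d_ct_sample = np.mean(ct_target) - np.mean(ct_reference)            # dCt, diseased
    d_ct_control = np.mean(ct_target_ctrl) - np.mean(ct_reference_ctrl)  # dCt, control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical triplicate Ct values (target gene vs. beta-actin).
fold_change = relative_expression(
    ct_target=[26.1, 26.3, 26.0],       ct_reference=[18.2, 18.1, 18.3],
    ct_target_ctrl=[28.9, 29.1, 29.0],  ct_reference_ctrl=[18.4, 18.3, 18.5])
print(f"Expression relative to the control tissue: {fold_change:.2f}-fold")
```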
Bacterial DNA quantification

In order to allow the detection of Porphyromonas gingivalis, Tannerella forsythia, Treponema denticola, and Aggregatibacter actinomycetemcomitans, periodontal crevice/pocket biofilm samples were collected with a sterile paper point ISO #40 from the same site biopsied, previously to the surgical procedure [41]. Bacterial DNAs were extracted from the plaque samples using the DNA Purification System (Promega, Madison, WI, USA). Real-time PCR mRNA or DNA analyses were performed in a MiniOpticon system (BioRad, Hercules, CA, USA), using SybrGreen MasterMix (Invitrogen, Carlsbad, CA, USA), with 5 ng of DNA in each reaction and the primers previously described [41]. The positivity of bacterial detection and the bacterial counts in each sample were determined by comparison with a standard curve comprised of specific bacterial DNA (10^9 to 10^2 bacteria) and negative controls [41]. The sensitivity range of bacterial detection and quantification of our real-time PCR technique was 10^1 to 10^8 bacteria for each of the four periodontal pathogens tested.

Statistical analysis

Calculations of linkage disequilibrium were computed with the Graphical Overview of Linkage Disequilibrium (GOLD) software [42] for both the squared correlation coefficient (r^2) and Lewontin's standardized disequilibrium coefficient (D'). The program Rutgers Map Interpolator (www.compgen.rutgers.edu/map-interpolator/) was used to convert the physical positions of the 14 markers from base pairs to centiMorgans. Non-parametric linkage analysis was performed with the program Merlin [43,44]. Alleles and haplotypes were tested for association with aggressive periodontitis with the programs Family-Based Association Test (FBAT) [45,46] and PLINK version 1.05 [47]. To generate odds ratios, the most common allele was used as reference. In the analysis, only probands and relatives with aggressive periodontitis were considered as affected individuals, while relatives who could not be definitely diagnosed with aggressive periodontitis were considered as unaffected individuals (including healthy individuals and individuals with chronic periodontitis). Data were analyzed with and without the family recruited at Guarulhos University.

Analyses regarding gene expression were performed with the t test or by ANOVA, followed by Tukey's test. Multiple logistic and linear regression analyses were performed to evaluate possible associations between the expression of FAM5C and inflammatory/immunological and microbial factors. Values of p < 0.05 were considered statistically significant.

Follow up experiments after preliminary results

Increasing genotyping density. After the first genotyping results and association analysis, seven additional markers were chosen in the proximity of the rs1342913 marker. The same criteria described previously were used to select these additional markers (Table 2). Genotypes were generated and analyzed as described above.
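The two linkage disequilibrium measures mentioned above, r^2 and Lewontin's D', can be written down in a few lines. The sketch below is only illustrative: the haplotype and allele frequencies are invented, and in practice they would be estimated from the genotype data (for example by GOLD or Haploview) rather than supplied by hand.

```python
def ld_measures(p_ab, p_a, p_b):
    """Lewontin's D' and the squared correlation r^2 for two biallelic markers.

    p_ab: frequency of the haplotype carrying allele A at marker 1 and allele B
    at marker 2; p_a, p_b: marginal frequencies of alleles A and B.
    """
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(d) / d_max if d_max > 0 else 0.0
    r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d_prime, r2

# Hypothetical frequencies for two markers such as rs1935881 (A) and rs1342913 (G).
d_prime, r2 = ld_measures(p_ab=0.18, p_a=0.70, p_b=0.20)
print(f"D' = {d_prime:.3f}, r^2 = {r2:.3f}")
```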
FAM5C sequencing. The coding regions, including the exon-intron boundaries of FAM5C, were sequenced in eleven unrelated individuals carrying two copies of the haplotype A-G of rs1935881-rs1342913 (nine diagnosed with aggressive periodontitis and two unaffected relatives - Figure S1). As a positive control for good DNA quality, one sample from the Centre d'Étude du Polymorphisme Humain - Fondation Jean Dausset (obtained through the Coriell Institute for Medical Research, Camden, NJ, USA) was also sequenced. This sample originated from an anonymous healthy individual. The FASTA sequences of FAM5C exons were obtained based on data from the Ensembl Genome Browser (www.ensembl.org). Primer3 (version 0.4.0) (www.primer3.sourceforge.net) was used to design primers covering each exon and exon-intron boundary. FAM5C has 8 exons (Figure S3). Primer sequences and polymerase chain reaction conditions are available as a Supporting Document (Table S1). Since no etiologic variants were identified in FAM5C coding regions, five highly conserved FAM5C intronic sequences were identified in the University of California Santa Cruz Genome Bioinformatics database (www.genome.ucsc.edu) and sequenced (Table S2). Two single nucleotide variants were identified in the conserved regions. These two variants (Table 2) were genotyped in all samples and the data were analyzed as described above.

Bioinformatic analysis. The program ENDEAVOUR [49] was used to perform gene prioritization in the selected region based on genes already described in the literature as associated with the target disease. A list of 10 genes previously described [14] as showing evidence of involvement with periodontitis in humans was used. Secondly, we used the program TRANSFAC® 7.0 Public 2005 (www.gene-regulation.com) in order to assess the likely transcription factors binding to the sites of the variants associated with aggressive periodontitis in this study. Finally, the BLAST function (Basic Local Alignment Search Tool) of NCBI (National Center for Biotechnology Information, www.ncbi.nlm.nih.gov) was used to make sequence comparisons between humans and other species in selected nucleotide sequences.

Genome Wide Scan. The family recruited at Guarulhos University (Figure S2) was not associated with markers in 1q (data not shown) and we decided to investigate whether this family, in addition to pedigree 24 (Figure S1), would yield the identification of additional loci contributing to aggressive periodontitis, since we previously showed that more than one locus may contribute to the disease [12]. Genome wide genotyping was performed with the GeneChip 500K arrays (Affymetrix, Santa Clara, CA, USA) at the Genomics and Proteomics Core Laboratories, University of Pittsburgh. In brief, two aliquots of 250 ng of DNA each are digested with NspI and StyI, respectively, an adaptor is ligated and the molecules are then fragmented and labeled. At this stage each enzyme preparation is hybridized to the corresponding array. Samples were processed in a 96-well plate format; each plate carried a positive and a negative control, up to the hybridization step. A total of 443,816 markers were genotyped. The data were analyzed using the PLINK software.
Figure 3. Summary of bacterial DNA quantification results. For red complex bacteria, "0" indicates sites with no bacteria, "1" indicates the detection of one species, "2" indicates the detection of two species, and "3" indicates the detection of three species. FAM5C mRNA levels were not associated with the presence or load of red complex periodontopathogens or Aggregatibacter actinomycetemcomitans. doi:10.1371/journal.pone.0010053.g003

Figure S1. Families recruited in Rio de Janeiro, Brazil. Black color indicates affected individuals. White color indicates unaffected individuals. Arrows indicate probands. Blue color indicates individuals who could not be examined. Found at: doi:10.1371/journal.pone.0010053.s001 (3.99 MB TIF)

Figure S2. Family recruited at Guarulhos University, Brazil. Black color indicates affected individuals. White color indicates unaffected individuals. Arrow indicates the proband. Found at: doi:10.1371/journal.pone.0010053.s002 (0.66 MB TIF)

Figure S3. FAM5C localization on chromosome 1q. Schematic representation of chromosome 1 (top). In the middle is the linkage disequilibrium plot generated for chromosomal region 1q31 including the FAM5C gene. Below is the schematic representation of the FAM5C gene.

Table 1. Ethnic background of the families and number of individuals by affection status and gender in 55 families with at least a proband affected with aggressive periodontitis, and average age of the probands. doi:10.1371/journal.pone.0010053.t001

Table 3. Results of non-parametric linkage analysis. The first two lines indicate the maximum possible scores for this dataset. These are followed by analysis results at each location: cM position, Z score, p-value assuming normal approximation, delta [48], logarithm of odds score [48], and p-value [48]. A positive non-parametric logarithm of odds score indicates excess allele sharing among affected individuals; a negative non-parametric logarithm of odds score indicates less than expected allele sharing among these groups of individuals. doi:10.1371/journal.pone.0010053.t003

Table 4. Haplotype results for two-, three- and four-marker windows in FAM5C. doi:10.1371/journal.pone.0010053.t004
5,160.4
2010-04-07T00:00:00.000
[ "Medicine", "Biology" ]
Stochastic Comparisons of Weighted Distributions and Their Mixtures

In this paper, various stochastic ordering properties of a parametric family of weighted distributions and the associated mixture model are developed. The effect of stochastic variation of the output random variable with respect to the parameter and/or the underlying random variable is specifically investigated. Special weighted distributions are considered to scrutinize the consistency as well as the usefulness of the results. Stochastic comparisons of coherent systems made of identical but dependent components are made, and a result for the comparison of Shannon entropies of weighted distributions is developed.

Introduction

In the literature, weighted distributions have been extensively applied to model data arising in nature, as they provide additional insight and adequacy when modelling data obtained from a variety of sampling surveys (cf. Rao [1], Patil and Rao [2] and Patil [3]). Let X be a random variable with cumulative distribution function (cdf) F and probability density function (pdf) f, and let w(·, θ) be a non-negative function such that E(w(X, θ)) exists and is finite for all θ ∈ χ, where χ is an arbitrary subset of R. Then X_w is taken to be a random variable with the weighted distribution associated with f, having pdf

$$f_w(x,\theta) = \frac{w(x,\theta)\, f(x)}{E\bigl(w(X,\theta)\bigr)}. \qquad (1)$$

Many families of statistical distributions arise as members of the family of weighted distributions in (1) (see, e.g., the typical weighted distributions in Sections 3.1 and 3.2). Suppose that the hazard rate function h corresponds to the pdf f, so that h(x) = f(x)/F̄(x), where F̄ ≡ 1 − F is the survival function of X. In the spirit of Jain et al. [4], the hazard rate and the reversed hazard rate of X_w can be expressed through the weight function and the baseline rates; in particular, the reversed hazard rate of X_w is

$$\tilde{h}_w(x,\theta) = \frac{w(x,\theta)}{A(x,\theta)}\,\tilde{h}(x),$$

where A(x, θ) = E[w(X, θ) | X ≤ x] and $\tilde{h}(x) = f(x)/F(x)$ is the reversed hazard rate function of X.

The density in (1) may be used to model data randomly drawn from a population at a certain level θ of some quantity of interest. For example, θ could be a particular age for an individual, a certain time point or a given threshold with a specific amount. In many realistic circumstances it is acknowledged that the parameter θ may not be constant, so that heterogeneity occurs that is sometimes incalculable and unexplained. In addition, it often happens in practical situations that data from several populations are mixed. To model such data sets, mixture models are used. For example, measurements of the life lengths of a device may be gathered regardless of the manufacturer, or data may be gathered on humans without regard, say, to blood type. If the ignored variable has a bearing on the characteristic which is being measured, then the data follow a mixture model. To all intents and purposes, it is hard to find data that are not some kind of a mixture, because there is almost always some relevant covariate that is not observed. The study of reliability properties of various mixture models has recently received much attention in the literature. When a mixture model is fitted to survival data, the mixing operation can change the pattern of aging for the lifetime unit under consideration in some favorable way (see, for example, Finkelstein and Esaulova [6], Alves and Dias [7], Arbel et al. [8], Cole and Bauer [9], Bordes and Chauveau [10], Li and Liu [11], Amini-Seresht and Zhang [12], Misra and Naqvi [13] and Badía and Lee [14]). Mixture models capture heterogeneity in data by decomposing the population into latent subgroups, each of which is governed by its own subgroup-specific set of parameters.
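To make the construction in (1) concrete, the following small Python sketch builds the pdf of a weighted distribution numerically. The choice of an Exp(1) baseline with the size-biased weight w(x, θ) = x^θ is purely illustrative, and SciPy is assumed to be available.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def weighted_pdf(f, w, theta, support=(0.0, np.inf)):
    """pdf of the weighted distribution f_w(x) = w(x, theta) f(x) / E[w(X, theta)]."""
    norm, _ = quad(lambda x: w(x, theta) * f(x), *support)
    return lambda x: w(x, theta) * f(x) / norm

# Illustrative example: size-biased weight w(x, theta) = x**theta on an Exp(1) baseline.
base = stats.expon()
f_w = weighted_pdf(base.pdf, lambda x, th: x ** th, theta=1.0)

print(quad(f_w, 0, np.inf)[0])                 # integrates to ~1
print(f_w(1.5), stats.gamma(a=2).pdf(1.5))     # matches the Gamma(2, 1) density
```

With θ = 1 the weighted pdf coincides with a Gamma(2, 1) density, which provides a quick sanity check of the normalization.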
To represent a general formulation of the mixture model in the case of our study, consider the density associated with (1),

$$f^{*}(x) = \int_{\chi} \frac{w(x,\theta)\, f(x)}{\mu(\theta)}\, dG(\theta), \qquad (4)$$

where µ(θ) = E(w(X, θ)) and G is the cdf of the random variable Θ. It is known that

$$v(x) = \int_{\chi} \frac{w(x,\theta)}{\mu(\theta)}\, dG(\theta)$$

plays the role of the weight function through which f is altered to f*. This signifies that the mixture density in (4) can be thought of as the density of a weighted distribution with weight function v, for which E(v(X)) = 1. In situations where Θ is designated by a discrete random variable, a finite mixture model is considered. To this end, the model (4) is developed as

$$f^{*}(x) = \sum_{i} \frac{w(x,\theta_i)\, f(x)}{\mu(\theta_i)}\, g(\theta_i), \qquad (5)$$

where g(θ_i) represents the value of the probability mass function (pmf) of Θ at θ_i for i = 1, 2, ... . Throughout the paper, it is assumed that the output random variables following the mixture weighted distribution (4) have absolutely continuous distribution functions.

To the best of our knowledge, there has been no work in the literature addressing the various stochastic properties of parametric weighted distributions and their mixtures in a generality that would be attractive to a broader audience. There is a need for an effective study in this direction. The main objective of this paper is to initiate such a study and to investigate the impact of the association of the model with a parameter on some general stochastic aspects of the model. The rest of the paper is organized as follows. In Section 2, some useful notions of stochastic orders and some further stochastic properties are presented. In Section 3, some special applied weighted distributions are introduced. In Section 4.1, the preservation of several ordinary as well as relative stochastic orderings is studied. In Section 4.2, preservation properties of some stochastic orders in the extended mixture model of weighted distributions are secured and, finally, in Section 4.3 a link to information theory is provided.

Preliminaries

Assume that the random variables X and Y have distribution functions F and G, survival functions F̄ = 1 − F and Ḡ = 1 − G, density functions f and g, hazard rate functions h_X = f/F̄ and h_Y = g/Ḡ, and reversed hazard rate functions h̃_X = f/F and h̃_Y = g/G, respectively. To compare the magnitudes of random variables, some notions of stochastic orders are introduced below.

Definition 1. The random variable X is said to be smaller than the random variable Y in the
(i) likelihood ratio order (denoted by X ≤_lr Y) if g(x)/f(x) is non-decreasing in x;
(ii) hazard rate order (X ≤_hr Y) if Ḡ(x)/F̄(x) is non-decreasing in x;
(iii) reversed hazard rate order (X ≤_rh Y) if G(x)/F(x) is non-decreasing in x;
(iv) usual stochastic order (X ≤_st Y) if F̄(x) ≤ Ḡ(x) for all x;
(v) relative hazard rate order (X ≤_rhr Y) and (vi) relative reversed hazard rate order (X ≤_rrh Y), as defined in Rezaei et al. [28] and Kayid et al. [21].

It is known that the following implications hold:

$$X \le_{lr} Y \;\Rightarrow\; X \le_{hr} Y \;\Rightarrow\; X \le_{st} Y \quad\text{and}\quad X \le_{lr} Y \;\Rightarrow\; X \le_{rh} Y \;\Rightarrow\; X \le_{st} Y.$$

The notions of totally positive of order 2 (TP2) and reverse regular of order 2 (RR2) are defined as follows. A non-negative function h(x, y) satisfies

$$\epsilon_1\epsilon_2\bigl[h(x_1,y_1)\,h(x_2,y_2) - h(x_1,y_2)\,h(x_2,y_1)\bigr] \ge 0$$

for all x_1 ≤ x_2 and for all y_1 ≤ y_2, where ε_1 and ε_2 equal +1 or −1. If ε_1 = +1 and ε_2 = +1, then h is said to be TP_2. If ε_1 = +1 and ε_2 = −1, then h is said to be RR_2. It is readily pointed out that the TP_2 [RR_2] property of h(t, x) is equivalent to saying that h(t, x_2)/h(t, x_1) is non-decreasing [non-increasing] in t whenever x_1 ≤ x_2, after making the conventions that a/0 = +∞ when a > 0 and a/0 = 0 if a = 0. In view of the foregoing statements and by assuming h_Y = h_2 and h_X = h_1, as well as h̃_Y = h̃_2 and h̃_X = h̃_1, one observes that X_1 ≤_rhr X_2 holds if, and only if, h_i(x) is RR_2 as a function of (i, x) ∈ {1, 2} × ζ, where ζ is the common support of X and Y. In a similar manner we can establish an analogous characterization for the relative reversed hazard rate order in terms of the reversed hazard rates.

Special Weighted Distributions

In this section, several special parametric weight functions are presented, making the investigation of the main model (4) more developed.
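The order relations and implications listed above can be checked numerically for specific distributions. The sketch below uses an arbitrary pair of exponential distributions and verifies that a monotone density ratio (likelihood ratio order) is accompanied by the hazard rate and usual stochastic orderings; the distributions are an illustrative assumption and SciPy is assumed to be available.

```python
import numpy as np
from scipy import stats

def is_nondecreasing(a, tol=1e-12):
    return bool(np.all(np.diff(a) >= -tol))

# X1 ~ Exp(rate 1), X2 ~ Exp(rate 1/2): the density ratio g/f = 0.5*exp(x/2) is
# increasing, so X1 <=lr X2, which should imply X1 <=hr X2 and X1 <=st X2.
x = np.linspace(0.01, 10, 500)
f = stats.expon(scale=1.0)   # X1
g = stats.expon(scale=2.0)   # X2

lr = is_nondecreasing(g.pdf(x) / f.pdf(x))                 # likelihood ratio order
hr = bool(np.all(g.pdf(x) / g.sf(x) <= f.pdf(x) / f.sf(x)))  # hazard rate order
st = bool(np.all(g.sf(x) >= f.sf(x)))                      # usual stochastic order
print(lr, hr, st)   # expected: True True True
```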
First, some general formations of the weight function are considered by which many important families of weighted distributions are included. In all of the cases we assume that the weight function has a finite mean with respect to the underlying distribution. Distribution-Free Weight Functions Here, several weight functions which do not depend on the underlying distribution are given. Suppose that w i , i = 1, 2 are two non-negative functions of x and that k i , i = 1, 2 are two proper functions of θ so that the following weight functions satisfy the requirement that µ(θ) < ∞. Substituting any of these weight functions in the density (4) leads to a particular model that might be of some interest in a context. Semiparametric Models Models where the parameters of interest are finite-dimensional and the nuisance parameters are infinite-dimensional are called semiparametric models. There are some choices for the weight function w(x, θ) that are functional of the underlying distribution function F, including the parameter θ within. Below, we list some kinds of those choices whose associated weight function depend on the underlying distribution. Stochastic Orderings In this section, preservation properties of some stochastic orders under the formation of the weighted model in the fixed as well as the random levels of the parameter θ are studied. Weighted Distribution with Specific Parameter Here, in the same vein as Misra et al. [16] several preservation properties on likelihood ratio, hazard rate and reversed hazard rates orders can be established in the sense of the model (1). Suppose that X i is a random variable with pdf f i and cdf F i , for i = 1, 2, and assume that X iw i follows the weighted distribution of X i with weight function w i (x) = w(x, θ i ) having pdf where θ 1 and θ 2 are two fixed numbers in χ. In the next round, as will be presented, conditions for stochastic orders made of X 1w 1 and X 2w 2 to emulate the same type of stochastic orders between X 1 and X 2 are obtained. The following Proposition is a direct conclusion of Theorem 3.2 in Misra et al. [16]. Preservation properties of the stochastic orders considered in Proposition 1 have been procured for some special weighted distributions by Izadkhah et al. [17] including the models of proportional (reversed) hazard rates, upper (lower) records, right (left) truncation, moment generating and size-biased distributions. Izadkhah et al. [18] obtained sufficient conditions for preservation of reversed mean residual life order and Izadkhah et al. [19] presented some conditions under which the mean residual life order is preserved under weighting. For the sake of completeness, the preservation properties of the likelihood ratio, the hazard rate and the reversed hazard rates orders are studied for some of the parametric weighted distributions considered in Sections 3.1 and 3.2. Suppose that X 1 and X 2 are two non-negative random variables with distribution functions F 1 and F 2 , survival functions F 1 = 1 − F 1 andF 2 = 1 − F 2 and density functions f 1 and f 2 , respectively. Suppose that w 2 and k 1 are both non-decreasing (or non-increasing) functions. By Proposition 1(i), if X 1 ≤ lr X 2 then X 1w 1 ≤ lr X 2w 2 . Let us further assume that k 1 (θ i ) > 0 for i = 1, 2. Then, by Proposition 1(ii) X 1 ≤ hr X 2 implies X 1w 1 ≤ hr X 2w 2 provided that w 1 and w 2 are both non-decreasing. In parallel, if w 1 and w 2 are both non-increasing then using Proposition 1(iii), X 1 ≤ rh X 2 concludes that X 1w 1 ≤ rh X 2w 2 . 
Let us suppose that w 2 and k 1 are both non-decreasing (or non-increasing) functions. By Proposition 1(i), X 1 ≤ lr X 2 implies X 1w 1 ≤ lr X 2w 2 . If w 1 , w 2 and k 1 are all non-decreasing functions then Proposition 1(ii) establishes that X 1 ≤ hr X 2 implicate X 1w 1 ≤ hr X 2w 2 . In a similar manner, if w 1 , w 2 and k 1 are all non-increasing functions then from Proposition 1(iii) it is deduced that X 1 ≤ rh X 2 implies X 1w 1 ≤ rh X 2w 2 . We assume that w 2 and k 1 are both non-decreasing (or non-increasing) functions. Proposition 1(i) guarantees that X 1 ≤ lr X 2 implies X 1w 1 ≤ lr X 2w 2 . If w 1 is non-decreasing and w 2 and k 1 are non-increasing functions then by Proposition 1(ii) X 1 ≤ hr X 2 yields X 1w 1 ≤ hr X 2w 2 . In the dual case, if w 1 is non-increasing and further w 2 and k 1 are both non-decreasing functions then Proposition 1(iii) concludes that X 1 ≤ rh X 2 gives X 1w 1 ≤ rh X 2w 2 . Some relative stochastic orders including the relative (reversed) hazard rate and relative mean residual life orders have attracted the attention of researchers in the last decade (cf. Di-Crescenzo and Longobardi [20], Kayid et al. [21], Misra and Francis [22], Misra et al. [23], Ding et al. [24], Ding and Zhang [25], Misra and Francis [26] and Misra and Francis [27]). We reminisce about the definition of these orders from Rezaei et al. [28] and Kayid et al. [21] [see, for example, Definition 1(v) and (vi)]. In the next theorem, the study of preservation of the relative hazard rate and the relative reversed hazard rate orders are initiated for a well-known class of semiparamtric distributions. For i = 1, 2, denote by h iw i (t, θ i ) ( h iw i (t, θ i )) the hazard rate (resp. the reversed hazard rate) of X iw i , where w i and is supposed to be valid as a weight function. Before stating the result, we introduce some notations. The symbol sign = is used to denote the similar sign. Thus, for all t > 0, we have: By assumption, h 2 (t)/h 1 (t) is non-increasing in t > 0. It suffices only to prove that: The assumption X 1 ≤ hr X 2 yields h 1 (t) ≥ h 2 (t), for all t > 0, which further concludes thatF 1 (t) ≤ F 2 (t), for all t > 0. Therefore, is non-positive (resp. non-negative) for all t > 0, if, and only if, for all x 1 ≤ x 2 and for all θ 1 ≤ θ 2 it holds that which is validated by assumption. To present the result about the preservation of the relative reversed hazard rate order we introduce some other notation. Let us define for x ∈ [0, 1], is non-decreasing (resp. non-increasing) in x, for all θ and non-decreasing (resp. non-increasing) in θ, for all x, in which ξ * (x, θ) is non-decreasing (non-increasing) in x for all θ ∈ χ, then X 1 ≤ rrh X 2 implies that X 1w 1 ≤ rrh X 2w 2 . The weight functions considered in Theorems 1 and 2 encompass some particular cases which may be of independent interest. In that regard, the following corollary is resulted. x v i (u, θ i ) du for i = 1, 2. Then (i) X 1w 1 ≤ rhr X 2w 2 if, and only if, ξ 2 (x,θ 2 ) ξ 1 (x,θ 1 ) is non-increasing in x. (ii) X 1w 1 ≤ rrh X 2w 2 if, and only if, Proof. We only prove the assertion (i) as the proof of (ii) is similarly accomplished. Note that analogously as in the proof of Theorem 1, we can get It can be seen that, for all t ≥ 0, d dt which is non-positive if, and only if, or equivalently if ξ 2 (x, θ 2 )/ξ 1 (x, θ 1 ) is non-increasing in x ∈ [0, 1] according which the ratio h 2w 2 (t, θ 2 )/h 1w 1 (t, θ 2 ) is also non-increasing in t > 0, that is, X 1w 1 ≤ rhr X 2w 2 . The proof is complete. 
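As a quick numerical companion to the preservation results discussed above (Proposition 1 and the surrounding examples), the sketch below takes two likelihood-ratio-ordered exponential baselines and the non-decreasing weight w(x) = x, and confirms that the weighted versions remain ordered in the likelihood ratio sense. The baselines and the weight are illustrative choices, not those singled out in the paper.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def weighted_pdf(f, w, lo=0.0, hi=np.inf):
    """pdf of the weighted distribution with weight w and baseline density f."""
    norm, _ = quad(lambda x: w(x) * f(x), lo, hi)
    return lambda x: w(x) * f(x) / norm

# Baselines ordered in the likelihood ratio sense: X1 ~ Exp(1) <=lr X2 ~ Exp(1/2).
f1, f2 = stats.expon(scale=1.0).pdf, stats.expon(scale=2.0).pdf
w = lambda x: x                      # a non-decreasing weight (size bias)

f1w, f2w = weighted_pdf(f1, w), weighted_pdf(f2, w)

x = np.linspace(0.05, 15, 400)
ratio = np.array([f2w(t) / f1w(t) for t in x])
print("weighted densities keep the lr order:", bool(np.all(np.diff(ratio) >= -1e-10)))
```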
The following corollary is a useful observation in the context of Theorem 3 as it illustrates that a typical parametric family of weighted distributions enjoys the relative hazard rate and the relative reversed hazard rate ordering properties in some cases. Corollary 2. Suppose that the random variable X(θ 1 ) and X(θ 2 ) for θ 1 , θ 2 ∈ χ have density functions x v(u, θ) du, we have: In reliability and survival theories, feature of ordering for lifetime of coherent systems is a relevant subject to be studied. To this end, Navarro et al. [29] obtained a representation of the system reliabilitȳ F Sys as a distorted function of the common component reliabilityF such thatF Sys (t) =F(t), where h is an non-decreasing function depending on the structure of the underlying system and the survival copula of the joint distribution of the component lifetimes. In this context, they have shown that the reliability function of a coherent system with dependent identically distributed (DID) components can be written as a distorted function of the common component reliability function. The following lemma is due to Navarro et al. [29]. Lemma 1. Let τ(X) be the lifetime of a coherent system formed by n DID components with the vector of random lifetimes X = (X 1 , X 2 , . . . , X n ) with common survival functionF. Then the reliability function of τ(X) can be written asF 1] is an non-decreasing continuous function such that h(0) = 0 and h(1) = 1. The function h is called the domination (or distortion) function which is characterized through the structure function φ(·) of the system (see, e.g., Barlow and Proschan [30]) and on the survival copula C of X 1 , X 2 , . . . , X n . In the set up of the particular weighted distributions given in Section 3.2 , the survival function of the arisen weighted distribution can be commuted to a distorted survival function, as specified earlier in Lemma 1, for which the domination function is characterized by the associated weight function. To this purpose, consider the weight function w(x, θ) = v(F(x), θ) and notice that in this case X w has the survival function 1] plays the role of a parametric domination function. Note that h θ (·) : [0, 1] → [0, 1] is a non-decreasing continuous function with h θ (0) = 0 and h θ (1) = 1. In the reversed direction, if h θ is a distortion (domination) function and v(x, θ) = h θ (x), for any x ∈ [0, 1] then 1 0 v(x, θ) dx = 1 and thusF w (t, θ) = h θ (F(t)). Therefore, there is a unique relationship between v(·, θ) and h θ (·) that is the studies of weighted distributions in the context of semiparametric models entertain the studies of distorted survival functions and vice versa. The parameter θ may be an appropriate quantity that affects the magnitude of system's lifetime. In the case when DID components construct the system, θ may be related to the dependency of the component lifetimes in a way that the survival copula in Lemma 1 depends on θ. For instance, in the case where the Archimedean copula or the FGM copula is adopted to model the association of lifetime of components in a coherent system. The following results are useful to analysis of relative ordering properties of coherent systems as to the best of our knowledge such a study has not been developed in the literature thus far. The following proposition is a direct consequence of Theorem 3. where h 1 and h 2 are two domination functions. Let T 1 and T 2 have respective survival functions h 1 (F) and h 2 (F). 
Then, The following example illustrates an application of Proposition 2. In the following example, we show that Proposition 2 can also be applied to systems with DID components. The following example reveals a relative ordering property in the Marshall-Olkin family of distributions. Example 8. Suppose that the incorporated weight function is whereθ = 1 − θ and θ > 0. The random variable X w has survival function so that h θ (u) = θu/(1 −θu) which is considered to be the relevant domination function. Note that the family of distributions characterized via (8) is called the proportional odds family of distributions which is due to Marshall and Olkin [31]. Let T 1 and T 2 be two random variables with respective survival functions h θ 1 (F) and h θ 2 (F) such that θ 1 > θ 2 . It can be seen that It follows that d dx that is ξ i (x) is RR 2 in i = 1, 2 and x > 0. Thus, according to Proposition 2(i) we deduce that T 1 ≤ rhr T 2 . Comparisons of Mixture Weighted Distribution In this segment, the problem of preservation of a number of stochastic orderings in the mixture weighted model is investigated. The study is carried out in two different settings, where firstly the random parameter varies in distribution while the underlying distribution remains unchanged and secondly the underlying distribution is changed in the case when the random parameter is fixed in distribution. The results obtained by Kayid et al. [32] are developed to entertain more dynamic weighted distributions. It is followed up that some stochastic orders of random parameters as well as the underlying random variables are transmitted to the random variables with the associated mixture weighted distribution. Give thought to Θ i as a random variable with the pdf g i , the cdf G i and the sfḠ i = 1 − G i , for i = 1, 2. Contemplate the random variable X * i , i = 1, 2 having pdf from which the cdf F * i and the sfF * i = 1 − F * i of X * i are procured after somewhat plain algebraic calculations, respectively, by where the bivariate functions A and B and the function µ are all determined as earlier in Section 1. In the rest of the paper, it is taken for granted that the random variables Θ 1 and Θ 2 are independent. Denote by h * i and h * i the hazard rate and the reversed hazard rate of X * i , respectively. It can be seen, after some integral calculation, that and The following result demonstrates the likelihood ratio order preservation in the model (9). Proof. It is not impenetrable to realize that X * 1 ≤ lr X * 2 if, and only if, In spirit of (9), one gets By the assumption of Θ 1 ≤ lr Θ 2 we can rely on the fact that It is also obvious that The general composition theorem of Karlin [15] concludes the desired result. In the setup of the model (9), the reversed hazard rate order of the random parameters is relocated into the overall random variables. The last result establishes the reversed hazard rate ordering preservation in the baseline-varied mixture weighted model of (13). Proof. First, we denote by h i and h * * i the reversed hazard rate functions of X i and X * * i , respectively, for i = 1, 2. For all x > 0, for all x > 0, and θ ∈ χ. Thus It can be seen that w(x, θ)/A 1 (x, θ) is non-decreasing in θ ∈ χ. From assumption, for all T ≥ 0. Lemma 7.1(a) of Barlow and Proschan [30] is applicable in (15) and provides the proof. 
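Example 8 lends itself to a direct numerical check. In the sketch below (illustrative Python, with an exponential(1) baseline chosen purely for convenience), the proportional odds distortion h_θ(u) = θu/(1 − (1 − θ)u) is applied to the common baseline survival, the hazard rates of T_1 and T_2 are obtained by numerical differentiation, and the ratio h_{T2}(t)/h_{T1}(t) is seen to decrease from θ_1/θ_2 toward 1, consistent with T_1 ≤ rhr T_2 for θ_1 > θ_2.

```python
import numpy as np

# Numerical illustration of Example 8 (proportional odds / Marshall-Olkin family).
# The distortion is h_theta(u) = theta*u / (1 - (1-theta)*u); with a common
# baseline survival F_bar, T_i has survival h_{theta_i}(F_bar(t)).  For
# theta_1 > theta_2 the ratio h_{T2}(t)/h_{T1}(t) should be non-increasing.
# Baseline chosen here: exponential(1).

def hazard_of_distorted(theta, t, eps=1e-6):
    """Hazard rate of T with survival h_theta(exp(-t)), via central differences."""
    def surv(s):
        u = np.exp(-s)
        return theta * u / (1.0 - (1.0 - theta) * u)
    s = surv(t)
    ds = (surv(t + eps) - surv(t - eps)) / (2.0 * eps)
    return -ds / s

t = np.linspace(0.01, 8.0, 2000)
theta1, theta2 = 3.0, 0.5                   # theta1 > theta2
ratio = hazard_of_distorted(theta2, t) / hazard_of_distorted(theta1, t)

print("ratio near t = 0:", round(float(ratio[0]), 3))      # close to theta1/theta2 = 6
print("ratio at large t:", round(float(ratio[-1]), 3))     # approaches 1
print("non-increasing:", bool(np.all(np.diff(ratio) <= 1e-8)))
```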
A Link to Information Theory The concept of entropy in information theory has played a prominent role in a broad area of science including statistical thermodynamics, urban and regional planning, business, economics, finance, operations research, queueing theory, spectral analysis, image reconstruction, biology and manufacturing (see, for example, El Gamal and Kim [35], Brillouin [36], Khinchin [37] and Grant [38]). Here before closing the paper, we impose a stochastic ordering property that leads to ordering of entropies of weighted distributions with weight functions given in Section 3.2. The extension of the Shannon entropy from the discrete case to the absolutely continuous case when dealing with lifetime random variables is defined by where f is the pdf of non-negative random variable X with an absolutely continuous distribution function. Note that log, with convention 0 log(0) = 0 stands for the natural logarithm. However, it is found that the entropy is related to the concept of dispersion of (random) variables. Being aware of this certitude, it is useful to concentrate on dispersion measures of probability distributions as well as their related stochastic dispersion orderings. In spirit of Theorem 3.B.20(a) and Theorem 3.B.20(b) in Shaked and Shanthikumar [34] if X or Y has an increasing hazard rate function, then and if X or Y has a decreasing hazard rate function, then In accordance with Corollary 4.4 in Bartoszewicz [45], if X or Y has a decreasing reversed hazard rate function, then X ≤ disp Y =⇒ X ≤ rh Y, and if X or Y has an increasing reversed hazard rate function, then If X and Y are two random variables with supports S X = (l X , u X ) and S Y = (l Y , u Y ), respectively, then according to Theorem 3.B.13(a) in Shaked and Shanthikumar [34] when l X = l Y > −∞, and also according to Theorem 3.B.13(b) in Shaked and Shanthikumar [34] when u X = u Y < ∞, The weight functions w θ (x) and v θ (x) considered in the following theorem depend on x only through F(x) and G(x), respectively. Theorem 9. Let X w θ and Y v θ be the weighted versions of X and Y with weight functions w θ (x) = d θ (F(x)) and v θ (x) = d θ (G(x)), respectively, where θ ∈ χ is a parameter and d θ is a non-negative function so that
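As a small companion to the entropy discussion, the sketch below numerically evaluates the differential entropy H(X) = −∫ f(x) log f(x) dx for exponential lifetimes with decreasing rate (hence increasing dispersion) and compares it with the closed form 1 − log(rate). It is only an illustration of the dispersion–entropy link referred to above, not part of the paper's results.

```python
import numpy as np

# Differential (Shannon) entropy H(X) = -integral f log f, computed on a grid,
# for exponential lifetimes.  A smaller rate gives a more dispersed lifetime
# and a larger entropy; closed form for exponential(rate): 1 - log(rate).

def differential_entropy(pdf, grid):
    f = pdf(grid)
    f = np.clip(f, 1e-300, None)          # convention 0*log(0) = 0
    return -np.trapz(f * np.log(f), grid)

grid = np.linspace(1e-6, 60.0, 200000)
for rate in (2.0, 1.0, 0.5):              # decreasing rate = increasing dispersion
    pdf = lambda x, r=rate: r * np.exp(-r * x)
    print(f"rate={rate:>4}: numeric H={differential_entropy(pdf, grid):.4f}, "
          f"closed form={1.0 - np.log(rate):.4f}")
```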
6,102.6
2020-07-30T00:00:00.000
[ "Mathematics" ]
Production of recombinant cellulase enzyme from Pleurotus ostreatus ( Jacq . ) P . Kumm . ( type NRRL-0366 ) Successful utilization of Pleurotus ostreatus (Jacq.) P. Kumm. (type NRRL-0366) mushroom as a type of edible locally isolated mushroom in Egypt at the Mushroom Research Center (Mubarak City for Scientific Research and Technology Applications), to produce extensive hydrolyzing cellulase complex enzymes. This hydrolysis was approached in submerged culture supplemented with avicel PH101 as a substrate for endo-,exoglucanase production. The avicel concentration 6% yielded the maximum enzyme activities (2.46, 1.80 U/ml) for both endo-and exoglucanase activities on basal medium at 27°C, initial pH value of 5.5 for 12 days on rotary shaker (180 rpm) incubation period. Cellulase enzyme was amplified using specific PCR and the amplicone was cloned using TOPO TA cloning vector. The cellulolytic activity of the recombinant protein was examined and high activity was obtained compared with the standard ones. The avicel was used as a sole carbon source in the fermentation medium and the results revealed that, avicel induced the cellulolytic activity of the examined organism compared with those grown on medium deficient of avicel. INTRODUCTION Mushrooms were known more than three thousand years ago by the ancient Egyptians (Hassan et al., 2010).They were considered a luxury food, were eaten only by the nobility, and known as the food of the gods.In the 1940 ' s some European foreigners live in Egypt, they cultivated mushrooms on a very small scale and collected wild mushrooms during the winters.In the 1980 ' s the mushroom cultivation farms were established in Tanta and Faquos could not satisfy the demands from the hotels, tourists, and local residents.Mushroom became a new and alternative demand for poultry and animal protein fresh mushrooms.The demand is increasing rapidly as consumers discover the delicious meaty flavor of mushrooms (Daba et al., 2008).The commercial cultivation began in 1988, when several universities and Food technology institutes established research that are responsible for training growers in mushroom cultivation and marketing.*Corresponding author.E-mail<EMAIL_ADDRESS>+20145574211.Fax: +2033911794. Mushrooms have been used as medicinal materials from 100 years ago.In the fields of Chinese the oriental medicine, dried mushrooms are used as diuretics and some other species have recently been getting attention as carcinostatic substances (Mizuno et al., 1995;Wasser and Wise, 1999).The control and improvement of edible fungus cultures have provoked considerable interest in the past few years because mushroom production is economically important.Pleurotus spp. is third place in worldwide production of edible mushrooms after Agaricus bisporus and lentinula edodes (Chang, 1999), these mushrooms yield the possibility of successful cultivation on a variety cheap substrates such as rice straw (Kaul and Janardhanon, 1970;Ghada et al., 2008), banana pseudostems (Jandaik and Kapoor, 1974). Mycelial growth of Pleunotus spp. 
is fast, and various lignocellulosic waste products can be used as a culture substrate (Yildiz et al., 2002).Cellulose is the only renewable carbon source that is available in large quantities and can be a solution to the problems of energy, chemicals, and food.Cellulose can be hydrolyzed by acid or enzymatic treatment, yielding soluble products of low molecular weight such hexoses and pentoses, the high cost of the production of these enzymes has hindered the industrial application of cellulose bioconversion (Lange, 2007).One of the different approaches to overcome this hindrance is to make continuous search for organisms with secretion of cellulase enzymes in copious amounts and to optimize enzyme production with them. Cellulase is used for commercial food processing in coffee.It performs hydrolysis of cellulose during drying of beans.Furthermore, cellulases are widely used in textile industry and in laundry detergents (Elisashvili et al., 2009).They have also been used in the pulp and paper industry for various purposes, and they are even used for pharmaceutical applications.Cellulase is used as a treatment for phytobenzoars, a form of cellulose benzoar found in human stomach.In this study, we investigated the influence of avicel PH101 concentration (as a carbon source) on the production of different enzymatic activities of cellulase complex, complete enzymatic hydrolysis of enzymes require 3 types of enzymes, namely cellobiohydrolase, endoglucanase or carboxymethyl cellulase (CMCase) and 6-glucosidases (Bhat, 2000), in this paper we are dealing with the endo, and exoglucanase (avicelase) activities by Pleurotus ostreatus under submerged condition.The effect of fermentation time on the growth and enzymatic cellulase activities of the fungus in a laboratory study was also represented.The total protein was extracted and determined from 8 and 4 samples with avicel and the other without avicel. Mushroom cultivation Strain P. ostreatus (Jacq.)P. Kumm.(type NRRL-0366) was provided by the Agricultural Research Service (Peoria, U.S.A.), and reactivated monthly in Petri dishes containing sterile solid potato-dextrose agar medium.The mycelia growing in dishes were incubated at 25°C for 7 days, and then stored in a refrigerator at 4°C.Potato dextrose agar slant was inoculated with spores from wild P. ostreatus strain, slants were incubated at 25°C for 10 days.The mycelium culture (Figure 1A) obtained was used for production of mushroom fruit bodies (Figure 1b) using rice straw as a substrate according to the method described by Chang and Milles (1982). Enzyme production Cultivation was preformed on culture basal submerged medium, (mushroom medium) (gm/100 ml) Containing of 0.2% yeast extract, 0.2% peptone, 0.1% K2HPO4, 0.05% KH2PO4, 0.05% MgSO4, and supplemented with 2% microcrystalline cellulose ( Avicel PH101) as a carbon source, pH adjusted to 5.5, the culture medium was sterilized by autoclaving at 121°C for 15 min, inoculated by aseptically adding 2 ml of seed culture to 100 ml of sterilized media and incubated at 27°C on rotary shaker (180 rpm) for 12 days, cultures were centrifuged at 4000 Xg for 15 min at 4°C, the supernatant was used as the crude enzyme to measure the activity of endoglucanase (CMCase), exoglucanase (avicelase) activities and extracellular protein. 
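As a convenience for reproducing the culture conditions above, the short helper below scales the basal submerged medium to an arbitrary batch volume. It assumes the stated percentages are w/v (g per 100 ml), which is how the recipe reads; the component values are those given in the text, and the function itself is only an illustrative aid.

```python
# Illustrative helper for scaling the basal submerged medium described above.
# Percentages are taken as w/v (g per 100 ml), which is an assumption about
# how the recipe is expressed.

BASAL_MEDIUM_PCT = {           # grams per 100 ml of medium
    "yeast extract": 0.2,
    "peptone": 0.2,
    "K2HPO4": 0.1,
    "KH2PO4": 0.05,
    "MgSO4": 0.05,
    "Avicel PH101": 2.0,       # carbon source; varied from 0 to 10% in the study
}

def scale_medium(volume_ml, avicel_pct=2.0):
    """Return grams of each component needed for `volume_ml` of medium."""
    recipe = dict(BASAL_MEDIUM_PCT, **{"Avicel PH101": avicel_pct})
    return {name: pct * volume_ml / 100.0 for name, pct in recipe.items()}

print(scale_medium(500, avicel_pct=6.0))   # a 500 ml batch at the optimal 6% avicel
```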
Endoglucanase activity (Carboxymethyl -cellulase, CMCase) Activity of endoglucanase was estimated according to technique proposed by (Ghose, 1987), using a reaction mixture containing 1 ml of 2% carboxymethyl cellulose (CMC) in 0.05 M acetate buffer (pH 4.8) and 1 ml of culture supernatant.The reaction mixture was incubated at 50°C for 60 min and the reducing sugar produced was determined by dinitrosalicylic acid-DNS method of (Miller, 1959) using glucose as a sugar standard, Blanks were prepared with inactivated enzymes.One unit (1U) of endoglucanase activity was defined as the amount of enzyme releasing 1 mg of reducing sugar per min. Exoglucanase (Avicelase) activity The activity of exoglucanase was determined as described previously, for endoglucanase one, but the incubation was carried out with 1 ml of 1% avicel suspension instead of carboxymethyl cellulose.Extracellular protein was measured in culture supernatant by the method of (Lowry et al., 1951) with bovine serum albumin as standard. Determination of cellulase enzymes molecular weight using (Sodium dodecyl sulfate polyacrylamide Gel electrophoresis) SDS-PAGE Gel electrophoresis (BioRad, USA) was carried out according to Laemmli (1970) method on 12% SDS-PAGE.Suitable volume of 12% SDS polyacrylamide separating gel was prepared by mixing 37.5 ml of 30% stock solution of acrylamide (acrylamide BDH, 146 g; Bis-acrylamide, 4 g in 500 ml distilled water), 22.5 ml of 1.5 M Tris-HCl, pH 8.8, 29.1 ml of distilled water and 0.9 ml of 10% SDS solution.0.5% final concentration of TEMED (v/v) and 1.5 ammonium sulphate (w/v) were added just before pouring the gel.The mixture was poured using a pipette into Biorad Buchler electrophoresis unit (BioRad, USA), and then overlaid carefully with isopropanol to the level of the gel surface.The gel was left for polymerization at room temperature for about 30 min.Staking gel (5%) was prepared by mixing 5 ml of 30%, 1 acrylamide, 0.3 ml of 10% SDS, 37.5 ml 1 M Tris-HCl buffer (pH 6.8), and 20.5 ml of distilled water.TEMED was added at concentration of 0.5% (v/v) and ammonium sulphate was added also at concentration of 1.5 (w/v).The polymerization was carried out as previously described, where a comb was inserted to prepare the wells.The comb was removed from the staking gel after polymerization, then the gel was installed to the reservoir containing buffer solution of 0.025 M Tris-HCl, 0.192 M glycine (pH 8.3) and 0.1% SDS (w/v).Samples were prepared by mixing small volume of P. ostreatus mushroom sample containing about 1 mg/ml protein with (X2) application buffer, 0.125 M Tris-HCl (pH 6.8), 4% SDS, 10% 2-mercapto ethanol, 10% glycerol and 0.02% bromophenol blue, and then exposed to 100°C in water bath for 1 min.Each sample was applied to a separate well in the slab gel along with a prestained SDS molecular weight marker (14-205 K Daltons).Electrophoresis was carried out at constant current 25 mA for about 1.5 h.The gel was stained with comassie blue, 0.06% comassie brilliant blue R-250 in 50% methanol and 10% acetic acid.The gel was destained overnight in a mixture of 60 ml methanol, 40 ml acetic acid, and 800 ml distilled water.The gel will viualized on gel documantation system. DNA extraction from fungal mycelium DNA extraction was performed using 10 to 15 mg (wet weight) of freshly subcultured fungi.The fungal mycelium were grinded in liquid nitrogen using mortar and pestle and then the ground mycelium was subjected to DNA extraction using QIAGene, DNA extraction kit (QiaGen, Germany). 
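The unit definition given above for the DNS assay (1 U = 1 mg reducing sugar released per min) translates into a simple calculation once a glucose standard curve is available. The sketch below is illustrative only: it assumes a linear standard curve read at 540 nm, a 2 ml reaction (1 ml CMC plus 1 ml supernatant) and a 60 min incubation, and the absorbance readings are placeholders rather than data from the study.

```python
import numpy as np

# Illustrative activity calculation for the DNS endoglucanase assay described
# above.  Assumptions not stated in the paper: linear glucose standard curve
# at 540 nm and a 2 ml total reaction volume.  One unit (U) is taken as 1 mg
# reducing sugar released per min, as defined in the text, and activity is
# reported per ml of culture supernatant.

def fit_standard_curve(conc_mg_ml, absorbance):
    """Least-squares line A = slope*conc + intercept for the glucose standards."""
    slope, intercept = np.polyfit(conc_mg_ml, absorbance, deg=1)
    return slope, intercept

def cmcase_activity(a_sample, a_blank, slope, intercept,
                    reaction_vol_ml=2.0, incubation_min=60.0, enzyme_vol_ml=1.0):
    conc = (a_sample - a_blank - intercept) / slope      # mg reducing sugar per ml reaction
    released_mg = conc * reaction_vol_ml                 # total mg released in the assay
    return released_mg / incubation_min / enzyme_vol_ml  # U per ml of supernatant

# hypothetical readings, for illustration only
slope, intercept = fit_standard_curve(np.array([0.0, 0.25, 0.5, 1.0]),
                                      np.array([0.02, 0.26, 0.51, 1.01]))
print(f"{cmcase_activity(0.75, 0.03, slope, intercept):.3f} U/ml")
```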
Cellulase amplification using specific PCR Polymerase chain reaction PCR amplification was carried out using two cellulase specific primers, primer F; 5-ATA GAA TTC TTR TCN GCR RTT YTG RTG RAA CAA and the reverse primer R; ATA GAA TTC ATY TGG GAY TGY TGY AAR CC-3.PCR reaction was performed in a total volume of 50 l and contain 5 l (5 x Green Go Taq flexi buffer (promega, USA ), 5 l (5 x colorless Go Taq flexi buffer (Promega, USA), (100 mM Tris-HCl (pH 8.8 at 25°C) 500 mM KCl),5 l MgCl2 (25 Mm) ( promega, USA), 2 l 4 dNTPs mixture (10 mM of each) (BIORON, Germany), 4 l C. DNA of chymosin,4 l of each primer (20 pmol / l), 2 U Taq polymerase (5 U / l) (promega, USA), the reaction volume was completed to 50 l with sterile distilled H2O.The reaction mixtures were subjected to Daba et al. 1199 amplification as follows: initial denaturation step at 95°C for 3 min, followed by 35 cycles of amplification with denaturation at 95°C for 1 min, annealing at 55°C for 1 min and extension at 72°C for 1 min ending with extension at 72°C for 10 min, the thermocycler (PTC-200 Peltier, USA) was used for performing PCR amplification, as described by Shih et al. (2002). Cloning and subcloning cellulase gene Cellulase of amplified PCR products was done by T/A based cloning protocol by using TOPO TA Cloning ® (with pCR ® 2.1-TOPO ® Cloning vector) and (a TOP 10 E-coli strain) (Invitrogen TM , USA).The chymosin gene was released from the pCR® 2.1-TOPO® vector using EcoRI restriction enzyme, meanwhile the released fragment was purified by EzWay TM Gel Extraction kit (KOMBIOTECH.Korea) and ligated to the lineraized prokaryotic expression pPROEX HT (life technologies, USA) and the cloning was done according to the protocols outlined by Life Technologies, Invitrogen.For more acceptable protein for the human, we tried to subclone the functional gene into PichiaPink™ Yeast Expression system (Invitrogen company) according to the manufacture procedure. Cellulase purification using 6x Histidine affinity-tagged method Cellulase purification was carried out by Ni-NTA resin matrix (QIAGEN Inc., USA).The induced bacterial cells was pelleted and resuspended in 4 volumes of lysis buffer (50 mM Tris-HCl (PH 8.5 at 4°C), 5 mM 2-mercaptoethanol, 1 mM PMSF).The suspension was sonicated until 80% of the cell was lisyed.The cell debris was removed by centrifugation, the supernatant was removed to a new tube (crude supernatant).Affinity purification was done according to the protocols outlined by Life Technologies, Invitrogen. Solubilization and renaturation of chymosin protein The procedures developed by Marston et al. (1984), the inclusion pellets were solubilized in 8 M urea buffer (pH 8).The urea mixture was incubated at 25°C for 1 h before the insoluble molecules were removed by centrifugation.The urea solution was then diluted in a high pH buffer (pH 10.7) for renaturation of chymosine.After the insolubilization in 8 M urea, the inclusion body solution was diluted with phosphate buffer pH 10.7, the solution was incubated at 25°C for 1 h and then adjusted to pH 8 and incubation was continued at 25°C for 1 h.The solution was transferred to dialyze against buffer (20 mM Tris/HCl pH 8.0, 50 mM NaCl, 1 mM EDTA) at 4°C overnight.The folded streptokinase were then assay as thrombolytic agent. 
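The forward and reverse primers quoted above contain IUPAC degenerate bases (R, Y, N), so each oligo actually encodes a pool of sequences. The small helper below (not from the paper) expands those codes and reports the degeneracy of each primer, which is useful when checking primer pools before in-silico PCR; the primer strings are the ones quoted in the Methods with spaces removed.

```python
from itertools import product

# Expand IUPAC degenerate bases in the cellulase primers quoted above and
# report how many concrete sequences each degenerate oligo encodes.

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
         "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

def degeneracy(primer):
    """Number of distinct sequences encoded by a degenerate primer."""
    d = 1
    for base in primer:
        d *= len(IUPAC[base])
    return d

def expand(primer, limit=2):
    """First `limit` concrete sequences encoded by the degenerate primer."""
    out = []
    for combo in product(*(IUPAC[b] for b in primer)):
        out.append("".join(combo))
        if len(out) >= limit:
            break
    return out

forward = "ATAGAATTCTTRTCNGCRRTTYTGRTGRAACAA"   # spaces removed from the quoted primer
reverse = "ATAGAATTCATYTGGGAYTGYTGYAARCC"

for name, p in [("F", forward), ("R", reverse)]:
    print(name, "degeneracy =", degeneracy(p), "| e.g.", expand(p))
```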
Plasmid miniprep Plasmid miniprep (QIAGEN, Germany) was performed on the clones to harvest the vectors potentially with insert.The spectrophotometry was applied to estimate the quantity and quality of the plasmid samples.The plasmids were restriction digested at 37°C for 4 h by NdeI (4 unit/ug DNA) (NEB, USA) and SacII (4 unit/ug DNA) (NEB, USA) then heat inactivated at 65°C for 20 min to further confirm the insert.6 ul of the digested products were loaded and resolved on agarose-TAE gel after electrophoresis.Each clone was also submitted for DNA sequencing done by a local vendor. The recombinat cells were cultivated on LB (Lysogeny broth) medium, the positive transformants carrying the pGEM-T Easy Vector were Ampicillin resistant. Influence of avicel concentration An evaluation of carbon source utilization by P. ostreatus suggested that the production of high -titer cellulase necessitated an increase in the concentration of avicel as a carbon source, by adding different levels of avicel ranging from 0 to 10%.The results of this investigation are represented in Table 1 these results collectively indicate that the substrate concentration has a variable effect on the metabolic activities of the tested organism.The highest activities (2.46 and 1.80 U/ml) for endo-,exoglucanase respectively, were detected at substrate concentration 6%.Some cellulases that were lower than the expected maximum cellulases titer were however obtained at avicel concentration greater than 6%, the activities decreased gradually and 10% of avicel concentration showed a low end, exoglucanase activities representing 39.43 and 42.22% of the activity obtained at 6% substrate level this results according to Duboism et al. (1956) This relatively lower titer may be attributed to the adsorption of cellulase produced on to the avicel, the repression was concentration -dependant (Bindu et al., 2006;Suzuki et al., 2008;Waeonukul et al., 2009). Influence of fermentation time The enzymes were produced both during the growth and stationary phases, so the process of these enzymes production and secretion is a growth associated one (Domingues et al., 2000).As it is previously observed, the values of cellulases activities and extracellular protein were higher at 6% avicel concentration, in case of endoglucanase, a low activity was obtained when the fungus was growing and when the stationary phase was reached.Results shown in Table 2 represent a sudden increase in activity until it reached a maximum at 192 h (8 days).A decrease was observed after wards probably due to the depletion of nutrients in the medium which stressed the fungal physiology resulting in the inactivation of secretary machinery of the enzymes (Nochure et al., 1993), as in the case of exoglucanase, reaching a minimum at 280 h.The highest value of activity was achieved at 240 h. 
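The 10% avicel activities above are reported as a percentage of the 6% optimum; the small helper below shows that conversion. The numbers passed in are placeholders for illustration, not the values in Tables 1 and 2.

```python
# Illustrative helper for expressing activities as a percentage of the maximum,
# which is how the 10% avicel values are reported relative to the 6% optimum
# in the text.  The input values below are placeholders, not the paper's data.

def relative_activity(activities):
    """Map {condition: activity in U/ml} to {condition: % of the maximum}."""
    peak = max(activities.values())
    return {cond: round(100.0 * value / peak, 2) for cond, value in activities.items()}

print(relative_activity({"2% avicel": 1.10, "6% avicel": 2.40, "10% avicel": 0.95}))
```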
Determination of the molecular weight of protein The molecular weight of cellulase enzyme was determined by SDS polyacrylamide gel electrophoresis applied to 12% SDS polyacrylamide gel.The molecular weight of one band was calculated from the relation between molecular weight of the marker and the relative mobility of the calculated molecular weight was found to be 45,000 daltons, (Figure 2).These results agree with (Cailler 1986).However, gel electrophoresis in the presence of sodium dodecyl sulphate, as described by Weber and Osborn (1969) indicated that at least five proteins were present in cellulase.Activity assays on the SDS-polyacrylamide gel showed that the protein with the highest molecular weight had cellulolytic activity.The presence of several components, revealed by SDS-polyacrylamide electrophoresis, and of both carboxymethyl cellulose and avicelase activities could indicate that the protein obtained by preparative electrophoresis is a multienzyme complex.Protein extraction was carried out for the examined 8 samples, 4 samples with avicel and the other without.The data presented in Figure 3 revealed that, the protein pattern is the same in the examined 8 samples but it differ in band intensity.The band intensity which considered as indicator for protein expression was broad and strong in case of the medium contains avicel (2, 4, 6 and 8%), but it was so narrow and weak in the medium free of avicel.Moreover, the avicel induced the cellulolytic activity of the examined organism compared with these grown on the other medium. Cloning and in vitro-transcription of the cellulase enzyme About 200 bp of the cellulase enzyme was amplified using specific PCR, and the obtained amplicone was cloned using the TOPO TA cloning kit.The recombinant clones were selected on medium contains the avicel as a sole carbon source.The recombinant purified protein was assayed for it cellulolytic activity and the results revealed that a high activity was obtained compared with the standard ones (Stahl, 1997).We used PCR-based methods to clone and sequence four previously unidentified cellulase cDNAs: cbhI-I, cbhI-II, cbhII-I and egII.CbhI-I, cbhI-II and cbhII-I consist of 1710, 1610 and 1453 bp, respectively, and encode for 512, 458 and 442 amino acids, respectively.EgII consists of 1180 bp encoding for 310 amino acids, and belongs to family 61 of the glycosyl hydrolases.CbhI-I, cbhII-I and egII all have a modular structure, with the catalytic domain (CD) and cellulose-binding domain (CBD) located at the Cterminus in cbhI-I and egII, and at the N-terminus in cbhII-I.CbhI-II shows high homology to cbhI-I but lacks a CBD.Northern blotting revealed that cbhI-I, cbhI-II and cbhII-I were coordinately expressed at various stages of the mushroom developmental cycle (substrate colonization to mature fruit body), although the number of cbhI-I transcripts was much smaller.No egII expression was detectable during the substrate colonization phase but transcription levels increased as fruit body morphogenesis progress (Shaojun et al., 2006). Table 1 . Extracellular protein content and cellulolytic activities of P.ostreatus at different avicel concentration.Under shaked condition. Table 2 . Enzymatic activities and extracellular protein of P. ostreatus versus the time of fermentation.
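The 45 kDa estimate above comes from the standard SDS-PAGE calibration in which log10 of marker molecular weight is approximately linear in relative mobility (Rf). The sketch below fits that line and interpolates an unknown band; the marker masses span the 14-205 kDa prestained marker mentioned in the Methods, but the Rf values are hypothetical.

```python
import numpy as np

# Molecular-weight estimation from SDS-PAGE: log10(MW) of the markers is
# roughly linear in relative mobility (Rf), so an unknown band's MW can be
# interpolated from the fitted line.  Marker Rf values are hypothetical.

marker_kda = np.array([205, 116, 97, 66, 45, 29, 14])
marker_rf  = np.array([0.08, 0.22, 0.28, 0.40, 0.52, 0.68, 0.88])   # hypothetical

slope, intercept = np.polyfit(marker_rf, np.log10(marker_kda), deg=1)

def estimate_mw(rf):
    """Estimated molecular weight (kDa) for a band with relative mobility rf."""
    return 10 ** (slope * rf + intercept)

print(f"band at Rf=0.52 -> ~{estimate_mw(0.52):.0f} kDa")   # near the 45 kDa region
```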
4,162.2
2011-05-18T00:00:00.000
[ "Environmental Science", "Chemistry" ]
Person Reidentification Model Based on Multiattention Modules and Multiscale Residuals At present, person reidentification based on attention mechanism has attracted many scholars’ interests. Although attention module can improve the representation ability and reidentification accuracy of Re-ID model to a certain extent, it depends on the coupling of attention module and original network. In this paper, a person reidentification model that combines multiple attentions and multiscale residuals is proposed. The model introduces combined attention fusion module and multiscale residual fusion module in the backbone network ResNet 50 to enhance the feature flow between residual blocks and better fuse multiscale features. Furthermore, a global branch and a local branch are designed and applied to enhance the channel aggregation and position perception ability of the network by utilizing the dual ensemble attention module, as along as the fine-grained feature expression is obtained by using multiproportion block and reorganization. Thus, the global and local features are enhanced. The experimental results on Market-1501 dataset and DukeMTMC-reID dataset show that the indexes of the presented model, especially Rank-1 accuracy, reach 96.20% and 89.59%, respectively, which can be considered as a progress in Re-ID. Introduction As an important video intelligent analysis technology, person reidentification (Re-ID) uses computer vision technology to realize the identification and matching of target pedestrians in a multicamera network with nonoverlapping fields of view. at is, given a pedestrian test image (Probe), the pedestrian image is retrieved under crossmonitoring equipment from all gallery images (Gallery) [1]. Compared with the perspective scenes of fixed monitoring equipment, person reidentification technology solves the problem of its visual limitations and can be well matched with pedestrian detection and pedestrian tracking scenes. It is a key technology for target tracking, urban intelligent security, and prevention and control of public places such as supermarkets, airports, stations, exhibition halls, and exhibition centers. In recent years, research in the field of person rerecognition has focused on representation learning [2], metric learning [3], local features [4], video sequences [5], and Generative Adversarial Networks (GAN) generated images [6]. e performance of the person reidentification system is getting better and better, and it has made great progress. However, in the real scene, the problems of different image scales, low resolution, occlusion, and illumination differences that seriously affect the recognition effect are still not well resolved. erefore, in recent years, many scholars have begun to study the attention mechanism and local features in order to improve the performance of the recognition model. At present, there has been some progress in the basic network and local features. In terms of basic networks, most of models used ResNet50 as the backbone network, but also improved lightweight convolution models as the backbone network were also selected, such as Omni-Scale Feature Learning Network (OSnet) [7] and Robust-Re-ID [8]. e improvement ideas of these networks to the basic network mainly focus on the research and application of the attention mechanism. For example, Hou et al. 
proposed Interactionand-Aggregation Network (IA-Net) [9], which constructs an IA block by combining channel attention and spatial attention and then embeds it in the residual network to aggregate channel and spatial information. Chen et al. proposed the Mixed High-Order Attention Network (MHN) [10], which uses the high-order attention distribution High-Order Attention (HOA) module to obtain more feature information. Chen and Ding et al. proposed Attentive But Diverse Network (ABD-net) [11], which combines channel attention and position attention in parallel and adds them to the local feature network for feature fusion. Xia et al. proposed Second-Order Nonlocal Attention (SONA) network [12], which adds covariance matrix to the first-order nonlocal attention structure to make it second-order structure. is method effectively enhances the information flow between residual convolution blocks. In terms of local features, image horizontal block is a common step in local feature extraction [4]. However, the disadvantage is that the requirements for image alignment are relatively high. If the two images are not aligned up and down, it is likely that body parts will be misplaced and compared, such as head and background contrast, which will increase the probability of model judgment error. erefore, Zhang et al. designed a dynamic alignment network AlignedReID [13], which can automatically align image blocks from top to bottom without additional information. Some literatures used some prior knowledge to align pedestrians. For example, Zhao et al.'s spindle net [14] first estimated the key points of travelers with the attitude estimation model and then used affine transformation to align the same key points. In addition, Sun et al. proposed a Partbased Convolutional Baseline (PCB) [15] method to divide the feature map into six blocks horizontally and used refined part pooling (RPP) method for local alignment. Later, some researchers found that the combination of global features and local features can improve the expression ability of the network, and the local features are divided more carefully. Wang et al. proposed Multiple Granularities Network (MGN) [16] based on discriminative features, which uses a more detailed combination of local features and global features and achieves quite good recognition results. In addition, Zheng et al. [17] proposed the pyramid block model, which integrates the local and global information and the progressive clues between them and solves the occlusion problem to a certain extent. However, there are still some problems to be solved in the above-mentioned attention mechanism and local features research. For example, some attention mechanisms need multiple matrix calculations, and the coupling with the original network is not ideal when joining the basic network. In addition, in the segmentation of feature map, the more blocks, the better local feature expression, but it will increase the amount of model parameters. If there are few blocks, the recognition rate will not be improved. In this regard, we designed a person reidentification model based on attention fusion and multiscale residuals. e model mainly solves the problems of poor coupling between attention mechanism and original network and scientific expression of local features. e model uses an improved ResNet50 as the backbone network and is designed with global and local branch structures. 
e paper added Combined Attention Fusion Module (CAFM) and Multiscale Residual Fusion Module (MSFM) to the original ResNet50 [18] to effectively concatenate the feature information between residual blocks and better integrate multiscale features. e global branch uses a Dual Ensemble Attention Module (DEAM) to enhance the network's channel aggregation and location awareness capabilities. e local branch is divided into finegrained features by multiproportion block method to further refine the local features. In the experiment, the network in this paper has achieved good results on the Market-1501 and DukeMTMC-reID datasets, and the indicators are better than other Re-ID networks. Person Reidentification Model Based on Attention Fusion and Multiscale Residuals e algorithm model framework of this paper is shown in Figure 1. Firstly, the improved ResNet50 network and local and global branches are used to extract pedestrian features of Probe and Gallery, respectively. en the similarity between Probe and Gallery pedestrian features is calculated. Finally, the similarity scores are sorted to obtain the retrieval results of all the images of Probe in the Gallery. e model in this paper is based on the ResNet50 network with multiple improvements and model extensions to enhance feature extraction and expression capabilities and effectively improve the recognition rate. e feature extraction network designed in this paper is shown in Figure 2 e Improved ResNet 50 Network. ResNet 50 contains a total of 50 convolutional layers, which are input layer, output layer, and 48 hidden convolutional layers. 48 hidden layers are divided into four stages in the form of 3 + 4 + 6 + 3 convolution residual bottleneck. In this paper, CAFM and MSFM are introduced into the backbone network for improvement, and the step size of the downsampling convolutional layer in Stage4 is changed from 2 to 1. e following is a detailed description of CAFM and MSFM. Combined Attention Fusion Module (CAFM). e attention mechanism is a very important and effective method in deep learning [19]. Its essence is to linearly weigh the relationship between things to obtain a new representation. In the field of person reidentification, the attention mechanism is often used to focus the attention of the network on the pedestrian's body, thereby eliminating the influence of factors such as background and occlusion. We designed the combined structure of Channel Attention Module (CAM) [11] and Nonlocal Attention Module (NAM) [20] and embedded it in the residual structure, which can fully concatenate the information between the residual blocks and increase the network's attention to the target feature. e specific structure of the module is shown in Figure 3. In Figure 3, the input characteristics firstly fuse the channel information through the channel attention module (CAM), and the important location is perceived by the nonlocal attention module (NAM), and finally the attention information features integrated with the original input features. e Channel Attention Module is shown in Figure 3(b). It integrates all the relevant features in the channel map and selectively strengthens the correlated channel map. Nonlocal Attention Module (NAM) is shown in Figure 3(c). In theory, NAM is a position attention mechanism. Each position value of its output is a weighted average of other position values, which represents the dependence between pixels and other pixels. 
Among them, the function of softmax is to map the feature value between 0 and 1 to get the attention map. Multiscale Residual Fusion Module (MSFM). Multiscale Residual Fusion Module is a dynamic selection mechanism that enables each neuron to select different receptive fields according to the size of the target feature [21]. e Multiscale Residual Fusion Module structure designed in this paper is shown in Figure 4(b), which is mainly divided into two parts: multiscale feature extraction and feature selective fusion. e multiscale feature extraction part is mainly used for convolution extraction with different sizes of convolution kernels [22]. Taking into account the requirements of parameter weights and network performance, the module of this paper selects three sizes of convolution kernels of 3 × 3, 5 × 5, and 7 × 7 to extract features by group convolution. Feature selective fusion part is to fuse multichannel information for weight selection [20]. According to the selected weights, the feature images of convolution kernel with different sizes are fused. e Multiscale Residual Fusion Module adopts a gate mechanism to control the information flow into the different branches of the next convolutional layer [23]. is mechanism fuses the information of all branches to realize the adaptive adjustment of the receptive fields of different sizes of neurons. is module first performs simple pixel-level addition and fusion of multibranch features to obtain feature U ∈ R C×H×W , as shown in formula (1), where U r , U f , U s ∈ R CHW are the output features of the three convolution channels. U uses global average pooling to encode global information to generate statistical information S ∈ R C on the channel. e c-th element S c in S is obtained by compression calculation on the H × W dimension of U. en normalize and nonlinearly operate on S to produce a compact feature Z ∈ R C×1 , which is obtained through a fully connected layer. en softmax is operated on Z on the channel to get the soft attention information α, β, c ∈ R C×1 between the three branch channels. Finally, α, β, c are multiplied and fused together with U t , U f , U s in the channel dimension to obtain a multiscale fusion feature. e calculation formula is as follows: Among them, E � e AZ + e BZ + e CZ ; matrices A, B, C ∈ R C×C represent the weight matrix of three branches, respectively, which are used to selectively fuse different scale features. Complexity Module (DEAM) to further enhance the global features in space and channel dimensions on the basis of the output characteristics of the backbone network [24]. Based on the idea of ensemble learning [25], global branch designs two pairs of CAM and NAM into two-stage deep attention module [26]. As shown in Figure 5, this module models the semantic correlation information of spatial dimension and channel dimension, respectively. In the first stage, two basic CAM and NAM are integrated to realize the preliminary extraction of channel and position features. In the second stage, two improved CAM and NAM are integrated, and channel attention and position attention of the first stage are integrated, respectively. rough the weighted connection with the first stage, the second stage collects the information of attention block in the first stage, which further strengthens the learning ability of attention module. In the second stage, the information weighted fusion mode is shown in equation (5), where μ and τ are manually set parameters. 
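A compact way to see how the selective fusion in MSFM operates is the PyTorch sketch below: three grouped convolutions (3 × 3, 5 × 5, 7 × 7) are summed pixel-wise, squeezed by global average pooling into a compact descriptor Z, and a softmax across branches yields the weights α, β, γ used for the final fusion. This is an illustrative reimplementation of the idea, not the authors' released code, and the reduction ratio, group count and layer choices are assumptions.

```python
import torch
import torch.nn as nn

# Sketch of an MSFM-style selective multiscale fusion block: sum the branch
# features, squeeze them into a compact descriptor Z, produce per-branch
# channel weights with a softmax across branches, and fuse.

class MultiScaleFusion(nn.Module):
    def __init__(self, channels, reduction=16, groups=8):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=groups, bias=False)
            for k in (3, 5, 7)
        ])
        hidden = max(channels // reduction, 8)
        self.squeeze = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                     nn.Conv2d(channels, hidden, 1),
                                     nn.ReLU(inplace=True))
        # one 1x1 conv per branch maps Z back to per-channel logits
        self.expand = nn.ModuleList([nn.Conv2d(hidden, channels, 1) for _ in range(3)])

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]       # U_r, U_f, U_s
        u = sum(feats)                                         # pixel-level fusion
        z = self.squeeze(u)                                    # compact feature Z
        logits = torch.stack([e(z) for e in self.expand], 0)   # (3, N, C, 1, 1)
        weights = torch.softmax(logits, dim=0)                 # alpha, beta, gamma
        return sum(w * f for w, f in zip(weights, feats))

x = torch.randn(2, 256, 24, 8)
print(MultiScaleFusion(256)(x).shape)    # torch.Size([2, 256, 24, 8])
```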
Local Feature Based on Multiproportion Block and Reorganization. e design of local branch network is mainly to strengthen the expression of local features. is paper uses multiproportion block method to refine the finegrained features and selects the part information with obvious features to enhance the network expression. What is different from the past is that we adopt a multiproportion block method to feature reorganization. In this method, the information of important parts is reused through block reorganization, and the information of secondary parts is weakened or discarded in varying degrees [27]. is design is derived from the observation of a large number of pedestrian pictures in reality and public datasets. It is found that the upper body features in the pictures are significantly stronger than the lower body. For example, the upper body has important fine-grained features such as human faces, hair, clothes logos, and hats, but the lower body has only monotonous legs and shoes, and there is no obvious distortion with the change of posture. In terms of occlusion, the lower body is the easiest to be occluded, such as people riding a bicycle, carrying a handbag to block their legs, and walking and being blocked by lawns and motor vehicles. If the occluded parts or weak features receive more attention, the rerecognition effect will be reduced. Some literatures have given more proportion of block patterns, but the more blocks, the heavier the network burden, and the improvement of identification index is not obvious. After experimental analysis, we designed a more optimized multiproportion block reorganization method, and the principle is shown in Figure 6. We first cut the H dimension of the feature map horizontally with different proportions of 1/2, 2/3, and 3/4 and then select the top 1/2, bottom 1/2, top 2/3, and top 3/4 of the feature map to be expressed as the local feature in cooperation with the global branch. is method not only reflects the importance of the top body characteristics but also covers the characteristics of the lower body. rough the upper 1/2, top 2/3, and top 3/4 blocks, characteristics such as head, torso, hands, and accessories are strengthened many several times. e bottom 1/2 block contains the characteristics of the lower body such as legs and shoes. Experiments e person reidentification network model experiment proposed in this paper uses NVIDIA V100 16G graphics card to accelerate calculations in the CUDA10 environment and is implemented based on the PyTorch open-source framework and Python language programming. Experimental Datasets and eir Extension. e experimental data in this article come from the Market-1501 dataset and DukeMTMC-reID dataset. e Market-1501 dataset was collected on the campus of Tsinghua University. e images come from 6 different cameras, one of which is of low resolution. e dataset contains a total of 32,668 pictures of pedestrians with 1501 IDs. e training set has 751 IDs and a total of 12936 images. e test set has 750 IDs and a total of 19,732 images. In all training sets, there are on average 17.2 pieces of training data for each ID. e DukeMTMC-reID dataset was collected at Duke University in the United States. e images came from 8 different cameras, and the borders of the pedestrian images were manually marked. e dataset provides training set and test set and has pedestrian attributes (gender, long and short sleeves, backpack, etc.) annotations. e training set contains 16522 images with a total of 702 IDs. 
On average, each ID has 23.5 training images, and the test set contains 17,661 images. In order to improve the expression ability and generalization ability of the model, we expanded and preprocessed the dataset image, adopted random erasing, random horizontal flip, and data standardization [25], and adjusted the image size uniformly. e size is 384 × 128, and the RGB three-channel image is normalized with the mean value Experimental Parameters and Methods. In training, the ResNet50 model parameters pretrained on ImageNet dataset are used to initialize the model, and triplet hard loss with batch hard mining (TriHard loss) and cross entropy loss are used to accelerate convergence. e weight of triplet loss is 6 Complexity 0.3, the margin is set to 1.2, and the weight of ID loss is 1, with label smoothing. In the model training, we first fix the initial weight of ResNet50 in the first 10 epochs and only train the weights of attention module and branch network and then continue training after releasing the last 90 epochs. e initial learning rate was 0.0003 when batch size was set to 32, and the learning rate decreased to 0.1 times at 30 and 60 epoch. In the experiment, Adam optimizer is selected to train the network. e momentum parameter is set to 0.9 and the weight attenuation coefficient is set to 0.0005. We also used label smoothing [28] and BNNeck [21] training methods. In the test, the performance of the model was evaluated by two widely used indexes: Cumulative Match Characteristic (CMC) curve and mean average precision (mAP). , and local branch network (local-net). Among them, local-net (n-1)/n means that the local branch is divided into n blocks with different proportions, and the upper (n-1)/n part of the feature map is selected for local feature expression. We conducted experiments on the Market-1501 and DukeMTMC-reID datasets, respectively, and the comparison indicators were the mean average precision (mAP) and Rank-1 accuracy. e specific experimental results are shown in Table 1. Experiment and Result It can be seen from Table 1 that, on the Rank-1 accuracy of the Market-1501 dataset, the accuracy increased by 0.8% after adding DEAM based on the baseline (92.93%). On the basis of DEAM, we conducted more detailed experiments on local-net. It can be seen from the table that as the number of blocks increases, the accuracy rises. But when the number of blocks n is 5, the accuracy starts to decline, so our local-net selects the network structure when n is 4, and the Rank-1 accuracy reaches 95.57%. en we added CAFM and MSFM to the comparative experiment based on the DEAM. It is found that the Rank-1 accuracy increases by about 0.5% after adding the two modules, but the mAP does not improve. After adding local-net, it is obvious that mAP and Rank-1 accuracies have increased significantly, which fully shows the importance of local branch. Finally, we combined all the improvement methods, and the mAP reaches 88.18% and Rank-1 accuracy reaches 96.20%. On the DukeMTMC-reID dataset, the Rank-1 accuracy increases with the addition of the above modules, reaching 89.59%. From the overall ablation experiment, it is found that, in the Market-1501 dataset, the improvement method that has the greatest effect on the index improvement is the local-net design. is fully shows the effectiveness of multiproportion block scheme and the importance of local features. In terms of other modules, although the DEAM module is not as obvious as local-net, the accuracy is also greatly improved. 
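The training recipe described above can be wired together in a few lines of PyTorch. The sketch below is illustrative only: the model and training loop are placeholders, nn.TripletMarginLoss stands in for the batch-hard (TriHard) variant actually used, and the label-smoothing value is an assumption since only its use, not its magnitude, is stated in the text.

```python
import torch
import torch.nn as nn

# Optimisation setup using the quoted hyper-parameters: Adam with lr 3e-4 and
# weight decay 5e-4, lr dropped by 10x at epochs 30 and 60, ID loss (weight 1.0)
# with label smoothing plus a triplet loss with margin 1.2 and weight 0.3.

model = nn.Linear(2048, 751)             # placeholder for the full Re-ID network
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4,
                             betas=(0.9, 0.999), weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60], gamma=0.1)

id_loss = nn.CrossEntropyLoss(label_smoothing=0.1)   # smoothing value assumed
triplet = nn.TripletMarginLoss(margin=1.2)           # stand-in for batch-hard TriHard

def total_loss(logits, labels, anchor, positive, negative):
    """ID loss (weight 1.0) plus 0.3 * triplet loss, as in the text."""
    return id_loss(logits, labels) + 0.3 * triplet(anchor, positive, negative)

for epoch in range(90):
    # ... forward/backward passes over the training loader would go here ...
    scheduler.step()
```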
For CAFM and MSFM modules, the improvement effect of single module may not be obvious. But, after using them together, especially with the local branch design, the experiment has achieved good results. To sum up, the modules we designed are effective. In order to investigate the role of each module more comprehensively, we conducted a series of experiments on the Market-1501 dataset and obtained the CMC curve in the range of Ranks-1-40 of each module network, as shown in Figure 7. e abscissa of the CMC graph represents the number of hits, and the ordinate represents the hit probability of each rank. e five curves in Figure 7 are the performance of the ResNet50-based baseline after adding DEAM, local-net, CAFM, and MSFM. It can be seen from Table 1 and Figure 7 that the design modules and practices in this article have an improved identification index. e model in this paper incorporates the advantages of DEAM, localnet, CAFM, and MSFM, and the overall performance is greatly improved compared to the pure ResNet50. With the increase of each module, the performance also has a certain degree of improvement. Comparison with Other Re-ID Models. We have compared the model with the person reidentification models based on Stripe, GAN, and Global Feature and Attention [7] in recent years. e experimental results are shown in Table 2. e scatter plot of several network indicators based on the Market-1501 dataset is shown in Figure 8. As can be seen in Table 2, the Rank-1 (96.2%) accuracy of this model on the Market-1501 dataset is the same as Robust-ReID and is better than several other network models. mAP (88.2%) is better than other network models, except that it is slightly lower than Robust-Re-ID, SONA, and ABD-net. On the DukeMTMC-ReID dataset, the Rank-1 (89.6%) accuracy is higher than other network models except Robust-Re-ID, but the mAP (76.4%) is slightly lower than several models. is may be because the local part of the block combination has more features and there is greater redundancy. In the similarity comparison, although the first hit rate is high, the index on the n-th hit rate is not high, which leads to lower mAP. We will continue to pay attention and study this issue. In addition, we also performed a Re-ranking [36] experiment on the model in this paper, and the results are listed in the last row of Table 2. e results showed that mAP and Rank-1 accuracies reached 94.6% and 96.5% on the Market-1501 dataset, respectively, and mAP and Rank-1 accuracies reached 89.5% and 91.9% on the DukeMTMC-ReID dataset, respectively. At the same time, the advantages of this model can be clearly seen in Figure 8. e overall index is better than the other 18 network models except Robust-Re-ID, especially in the Rank-1 accuracy. Visualization Experiment and Results. We select four query sets on the Market-1501 dataset and perform Re-ID feature extraction [37] and similarity matching experiments [38] in the Gallery library. e matching results are shown in Complexity 8 Complexity Figure 9. Figure 9 mainly shows the matching results of the images in Ranks-1-10. e green box indicates that the image matches the Gallery library image correctly, belonging to the same ID, and the red box indicates the matching error. It can be seen that, for most pedestrian images, although there are some factors such as background interference, low resolution, and misalignment, the model in this paper can match correctly and achieve high accuracy. 
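For completeness, the sketch below shows how the Rank-k (CMC) and mAP figures quoted throughout this section are computed from a query-to-gallery distance matrix. It follows the generic protocol and omits the same-camera and junk-image filtering applied on Market-1501 and DukeMTMC-reID, so it is a simplification rather than the exact evaluation code.

```python
import numpy as np

# Generic CMC (Rank-k) and mAP computation from a query-to-gallery distance
# matrix; same-camera filtering is omitted for brevity.

def cmc_and_map(dist, q_ids, g_ids, max_rank=10):
    g_ids = np.asarray(g_ids)
    cmc = np.zeros(max_rank)
    aps = []
    for i, qid in enumerate(q_ids):
        order = np.argsort(dist[i])                     # gallery sorted by distance
        matches = (g_ids[order] == qid).astype(float)
        if matches.sum() == 0:                          # no positive gallery sample
            continue
        first_hit = int(np.argmax(matches))
        if first_hit < max_rank:
            cmc[first_hit:] += 1
        precision = np.cumsum(matches) / (np.arange(len(matches)) + 1)
        aps.append((precision * matches).sum() / matches.sum())
    return cmc / len(aps), float(np.mean(aps))

# toy example: 2 queries, 4 gallery images
dist = np.array([[0.1, 0.9, 0.4, 0.8],
                 [0.7, 0.2, 0.9, 0.3]])
cmc, mAP = cmc_and_map(dist, q_ids=[1, 2], g_ids=[1, 3, 1, 2], max_rank=4)
print("Rank-1:", cmc[0], "mAP:", round(mAP, 3))
```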
It can also be seen from the matching errors of the two red boxes in Figure 9 that the reason for the matching error may be that the backpack covers the important information of the upper body of pedestrians or there are few positive samples, and the search is complete in Rank-8. erefore, there are many reasons for model matching errors, such as less positive samples, less feature information, and more interference from occlusion and background. Conclusions Applying attention mechanism to improve the performance of person reidentification model has become a hot topic and has made some progress indeed. However, there are still some problems to be solved, such as matrix operation, coupling problem with original network, and optimizing attention. In this paper, person reidentification model, attention mechanism, and block scheme are studied, and a person reidentification model based on multiattention and multiscale residuals is proposed. Multiple attention models are added to the backbone network and global branches. e multiscale residuals and multiproportion block and reorganization are used to obtain better local and global features. e experimental results show that the model in this paper has some progress in indicators and has certain advantages. Further, we also notice that although the indicators have improved to some extent, they are still not ideal, and there are still errors in Rank-n. We also analyze the reasons for them. In the next steps, we will continue to study Re-ID problems under occlusion and small samples conditions and try to give improvement or solutions. Another research direction is model lightweight. On the basis of ensuring the recognition rate, the lightweight model can reduce the requirement of system computing power and is better applied to the economic and efficient scene. Data Availability e data used to support the findings of this study are available from the corresponding author upon request.
5,804.4
2021-01-01T00:00:00.000
[ "Computer Science" ]
Minus-C subfamily has diverged from Classic odorant-binding proteins in honeybees Odorant-binding proteins (OBPs) in insects bind to volatile chemical cues that are important in regulating insect behavior. It is hypothesized that OBPs bind with specificity to certain volatiles and may help in transport and delivery to odorant receptors (ORs), and may help in buffering the olfactory response and aid the insect in various behaviors. Honeybees are eusocial insects that perceive olfactory cues and strongly rely on them to perform complex olfactory behaviors. Here, we have identified and annotated odorant-binding proteins and few chemosensory proteins from the genome of the dwarf honey bee, Apis florea, using an exhaustive homology-based bioinformatic pipeline and analyzed the evolutionary relationships between the OBP subfamilies. Our study confirms that the Minus-C subfamily in honey bees has diverged from the Classic subfamily of odorant-binding proteins. INTRODUCTION Insects are a diverse class of Arthropods with a highly sensitive olfactory system. Olfactory information helps in mate selection, mating and oviposition, foraging for food, and social behavior (Hildebrand and Shepherd 1997). Odorant-binding proteins (OBPs) are abundantly present in the sensillar lymph of insects with the presence of at least 50 OBP genes reported in some species like Drosophila (Hekmat-Scafe et al. 2002), and in the nasal mucus of many vertebrate animal species (Bianchet et al. 1996;Loebel et al. 2000;Mastrogiacomo et al. 2014;Scaloni et al. 2001;Tegoni et al. 1996;White et al. 2009;Zhu et al. 2017;Manikkaraja and Bhavika et al. 2020). Despite their abundance and diversity, the role of OBPs in olfactory coding is yet to be completely explored (Larter et al. 2016). Insect OBPs are small, soluble globular proteins, 10-30 kDa, that are further characterized by alpha-richness, and the presence of six highly conserved cysteine residues (C1-C6) with conserved disulfide spacing (Vogt et al. 1981;Pelosi and Maida, 1990) that stabilize its tertiary structure. Alpha helix-rich OBPs found in insects do not show structural homology with vertebrate OBPs, characterized with a classical lipocalin fold (Flower 1996). It has been hypothesized that OBPs bind to ligands and solubilize them to aid transport and delivery towards odorant receptors. Genome-wide surveys to identify odorantbinding proteins in insect orders have been previously performed for various insect species in existing literature. Previous studies have predicted the presence of odorant-binding proteins in various species including Apis mellifera (order: Hymenoptera) (Forêt and Maleszka 2006), Drosophila melanogaster (order: Diptera) (Hekmat-Scafe et al. 2002;Graham and Davies 2002), Anopheles gambiae (order: Diptera) (Manoharan et al. 2013), and Periplaneta americana (order: Blattodea) (He et al. 2017) using homology-based bioinformatic approaches as a typical start point. Previous work in our laboratory (Karpe et al. 2016) has identified odorant receptors (ORs) in Apis florea using an exhaustive genomic pipeline. In order to complement the search of ORs (Karpe et al. 2016) towards a better understanding of odor coding (Missbach et al. 2015), this study investigated odorant-binding proteins (OBPs) in Apis florea. Apis florea or the red dwarf honey bee exhibits the complex behavior of eusociality, where there is a reproductive division of labor within a colony that comprises a female queen, male drones, and female worker bees. 
While worker bees perform important tasks such as foraging, guarding the colony hive, maintenance, and other diverse tasks for the colony, the queen and drone perform reproductive roles (Page and Robinson 1991). Members of the species exhibit haplodiploidy (Halling et al. 2001) system of genetic inheritance, where the males in this species are haploid, possessing half the number of chromosomes as diploid females. Apis florea is geographically distributed with a preference for warm climate (Otis 1991) in regions such as mainland Asia, the southern border of the Himalayas, the plateau of Iran, Oman and Vietnam, southeast China, and peninsular Malaysia (Hepburn et al. 2005;Oldroyd and Nanork 2009;Moritz et al. 2010) and display open nesting typically on low-lying tree branches in shaded regions (Wongsiri et al. 1997;Hepburn et al. 2005). Apis florea are important pollinators of tropical and ornamental plants as well as agricultural crops. They primarily feed on pollen and nectar from flowering plants. Like other honeybees, the body of Apis florea is studded with various types of sensilla among which olfactory sensilla (sensilla basiconica and sensilla chaetica) are prominent structures (Gupta 1986(Gupta , 1992. The antenna of the insect is typically the main site for olfactory receptors (Wigglesworth 1965). The antennae of Apis florea harbor hair-like sensillae trichodea types I, II, III, IV, sensilla basiconica, sensilla placodea, and sensilla ampullaceal (Gupta 1992;Kumar et al. 2014;Suwannapong et al. 2011). Insect OBPs, although highly divergent, are classified on the basis of conserved cysteine signature into Classic (six cysteines), Minus-C (loss of two conserved cysteines), Plus-C (additional cysteine residues and one proline) (Zhou et al. 2004), and atypical (~ 10 cysteines and long C-terminus) (Hekmat-Scafe et al. 2002;Xu et al. 2003) and Dimer OBPs (two cysteine signatures). Rapid identification of repertoires of putative OBPs across various insect genomes has been suggestive of the idea that the ecological niche of an insect species may correlate with an abundance of OBPs and social behavior (Zhou et al. 2020). While reference Dipteran fruitfly Drosophila melanogaster and Japanese encephalitis vector Culex quinquefasciatus have been found to have 51 and 110 putative OBPs respectively (Hekmat-Scafe et al. 2002;Manoharan et al. 2013), previous studies in Hymenopteran OBPs have also found species-specific differences in OBPs including 21 OBPs in eusocial Apis mellifera (Foret et al. 2006), 7 OBPs in fig wasp Ceratosolen solmsi ) that lives in closed spaces, and 90 in diamondback moth Plutella xylostella (Vieira et al. 2012) that lives in open spaces. Using Apis mellifera as a closely related reference genome and a revised annotation of Apis mellifera OBPs, we thus investigated the identification, annotation, and subfamily-based classification of putative OBPs from the genome of Apis florea and examined their evolutionary relationships using in silico approaches (Karpe et al. 2017;Mam and Sowdhamini, 2020). In order to have a standard query set from Apis mellifera, overlapping and unique hits from three sources were retained in one final dataset. Unique hits were those that were not reciprocal best hits across datasets, and further depending on sequence identity were labelled as isoforms or distinct. Based on homology with the final AmelOBP dataset, AfloOBPs were identified, scored, and annotated (explained in detail in later sections). 
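The cysteine-signature classification described above (six conserved cysteines for Classic, four for Minus-C, additional cysteines for Plus-C and atypical OBPs) suggests a first rough screen based on cysteine counts. The heuristic below is only a sketch: real subfamily assignment in this kind of study depends on the positions and spacing of C1-C6 in an alignment rather than raw counts, and the example sequences are synthetic toy strings.

```python
# Rough heuristic screen (not the annotation pipeline used in the study) that
# flags candidate OBP subfamilies from the conserved cysteine signature:
# ~6 cysteines for Classic, 4 for Minus-C, more for Plus-C/atypical.

def tentative_subfamily(mature_seq):
    n_cys = mature_seq.upper().count("C")
    if n_cys == 4:
        return "Minus-C (candidate)"
    if n_cys == 6:
        return "Classic (candidate)"
    if n_cys > 6:
        return "Plus-C / atypical (candidate)"
    return "unclassified"

# synthetic toy sequences, for illustration only
examples = {
    "toy_classic": "ADLCSKECLKDCSPEACQKLCAEHCYN",   # 6 cysteines
    "toy_minus_c": "ADLSKECLKDSPEACQKLCAEHCYN",     # 4 cysteines
}
for name, seq in examples.items():
    print(name, "->", tentative_subfamily(seq))
```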
AmelOBPs were pooled from the NCBI non-redundant protein database (29 putative AmelOBPs) and a previous study (Foret et al. 2007; 21 AmelOBPs) to obtain a filtered set of query protein sequences. The filtered dataset contained sequences that were common to both or unique to either dataset (Supplementary Data). Reciprocal homology was performed using the filtered query set and the AmelOBPs from a recent study (Vieira and Rozas 2011; 21 AmelOBPs). An e-value cutoff of 1e-10 was used (a schematic sketch of this reciprocal-best-hit filtering is given below). The resultant matches, together with unmatched OBPs (putative OBPs with no reciprocal hit; 10 protein sequences), formed the final dataset of annotated AmelOBPs (Supplementary Table 1). Preparing query dataset from Insecta OBPs Protein sequences annotated as OBPs were obtained for organisms within Insecta from the literature (Supplementary Tables 2 and 3). A non-redundant dataset was prepared using CD-HIT (Li and Godzik, 2006) with a filtering threshold of 95% sequence identity. Query protein to subject genome alignments Genomic alignments were obtained using Exonerate (Slater and Birney 2005) with maximum intron sizes of 500, 2000, 5000, and 10,000, using BLOSUM62 (Henikoff and Henikoff 1992) as the substitution matrix. In order to identify phylogenetically distant orthologs, PAM250 (Dayhoff et al. 1978) was also used as a substitution matrix. The genomic alignments were processed as per the methodology of previous in-house studies (Karpe et al. 2016, 2017, 2021). The pipeline involves thoroughly scanning and scoring alignments to the genome based on length, degree of similarity, and the best match of the scaffold location in the subject genome to the query sequence. The unique set of genomic alignments was then processed further to translate amino acids from the corresponding in-frame codons. The resultant set of gene models and protein sequences was also manually corrected for missing start and stop codons and missing N-terminal and C-terminal amino acids, and annotated as "complete," "partial," or "pseudogene." For the purpose of further evolutionary analysis, only Apis florea OBPs obtained with Apis mellifera as the query (Sect. 2.1) have been discussed (Supplementary Table 4). The results for Apis florea OBPs obtained by querying OBPs from other insect orders (Sect. 2.2) have been provided as supplementary information (Supplementary Table 5). Homology-based validation and nomenclature The predicted Apis florea OBPs (AfloOBP) were subjected to reciprocal homology with our manually curated AmelOBP dataset, as explained above. The final dataset of predicted AfloOBPs comprised the resultant matches as well as unique sequences with no corresponding reciprocal hits in the AmelOBP dataset. Each predicted AfloOBP protein sequence was thus annotated with respect to its AmelOBP homolog, if present, as well as its status as "complete" or "partial." Secondary structure prediction The secondary structure of the protein sequences was predicted using the neural network-based PSIPRED v3.2 (Buchan et al. 2013). Detection of signal peptide and subcellular localization The N-terminal signal peptide was detected using SignalP 4.1 (Nielsen et al. 1997; Petersen et al. 2011) and SignalP 6.0 (Teufel et al. 2022). These tools use neural networks and Hidden Markov Models to determine signal peptides in each protein sequence. The predicted signal peptide for a given sequence was cleaved off, and the "mature" sequence was used for multiple sequence alignment and phylogeny.
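The reciprocal-homology filtering described above can be illustrated with a short, self-contained script. The sketch below is not the authors' code: it assumes BLASTP has already been run in both directions with tabular output (the file names are hypothetical), and it applies the stated 1e-10 e-value cutoff when collecting reciprocal best hits.

```python
# Minimal sketch of a reciprocal-best-hit (RBH) filter over BLASTP tabular output.
# Assumes BLASTP was run with -outfmt 6 (qseqid sseqid pident length mismatch
# gapopen qstart qend sstart send evalue bitscore); file names are hypothetical.

EVALUE_CUTOFF = 1e-10

def best_hits(blast_tab_path):
    """Return {query_id: best_subject_id}, keeping the highest-bitscore hit under the e-value cutoff."""
    best = {}  # query -> (subject, bitscore)
    with open(blast_tab_path) as fh:
        for line in fh:
            cols = line.rstrip("\n").split("\t")
            q, s, evalue, bits = cols[0], cols[1], float(cols[10]), float(cols[11])
            if evalue > EVALUE_CUTOFF:
                continue
            if q not in best or bits > best[q][1]:
                best[q] = (s, bits)
    return {q: s for q, (s, _) in best.items()}

def reciprocal_best_hits(fwd_tab, rev_tab):
    """Pairs (a, b) such that b is a's best hit and a is b's best hit."""
    fwd = best_hits(fwd_tab)   # e.g., AmelOBP queries vs. candidate set
    rev = best_hits(rev_tab)   # candidate set vs. AmelOBP queries
    return [(a, b) for a, b in fwd.items() if rev.get(b) == a]

if __name__ == "__main__":
    # Hypothetical file names for the two BLASTP directions.
    for a, b in reciprocal_best_hits("amel_vs_candidates.tab", "candidates_vs_amel.tab"):
        print(a, b)
```

Sequences left without a reciprocal partner after this step would correspond to the "unmatched OBPs" retained separately in the final dataset.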
Prediction of subcellular localization was performed through DeepLoc (Armenteros et al. 2017), an algorithm based on deep neural networks. Preparing dataset of insect OBPs for rooted and unrooted phylogeny In order to prepare an outgroup for the rooted phylogeny, annotated chemosensory proteins of Apis mellifera (AmelCSPs) were obtained from a previous study, namely AmelCSP1, AmelCSP2, AmelCSP3, AmelCSP4, AmelCSP5, and AmelCSP6. In order to construct the phylogeny, protein sequences of OBPs from representative insect species spanning 11 insect orders were obtained from previous literature and the UniProt database (The UniProt Consortium 2019). The insect orders, corresponding species, and the number of species-specific OBPs have been tabulated in Supplementary Table 1. Structure-based sequence alignment and phylogenetic analysis A structure-based seed template was obtained from the PASS2.5 database (Gandhimathi et al. 2012), with the SCOP ID of the fold being 47565. PASS2 is a database of alignments of proteins organized as structural superfamilies. Such structure-guided alignments (seed templates) are useful for guiding sequence alignments of protein families that are diverse in sequence but conserved at the level of their structures, e.g., insect OBPs. After the removal of signal peptides, the dataset of "mature" AfloOBP sequences was aligned against the seed template using the G-INS-i algorithm, the BLOSUM30 substitution matrix, and 1000 iterations in MAFFT (Katoh et al. 2002, 2013). The output of the multiple sequence alignment was checked for the best-fit model of evolution using ProtTest v.3.4.2 (Darriba et al. 2011; Guindon and Gascuel 2003), as determined by the AIC and BIC scores. The phylogeny was constructed with RAxML (Stamatakis 2006, 2014) using the maximum likelihood method with 1000 bootstraps and the LG amino acid substitution matrix (Le and Gascuel 2008), with the proportion of invariant sites and gamma rate heterogeneity ("LG + I + G") model of evolution (see the sketch below). Tips labelled as seed are structure-based templates of insect OBPs derived from the PASS2.5 database (Gandhimathi et al. 2012). They were used to verify the clustering observed in the sequence-based phylogeny, as insect OBPs are known to have low sequence identity among themselves. A seed tip label contains the following information in this order: (a) abbreviated form of the insect species, (b) the letter "d," (c) PDB ID, and (d) PDB chain ID. The phylogenetic tree was visualized and annotated using iTOL (Letunic and Bork 2006, 2016). Validation of gene models using RNA-seq exon data from NCBI Gene models that were obtained through our in-house in silico homology-based pipeline and those obtained from the literature (Fouks et al. 2021) were validated and/or corrected using publicly available RNA-seq exon data from the NCBI Apis florea Annotation Release 102. RESULTS AND DISCUSSION We filtered and re-annotated OBPs from the closely related reference genome Apis mellifera using a homology-based approach (see Sect. 2). The final dataset (Supplementary Table 1) comprised 25 AmelOBP protein sequences. A genome-wide survey of Apis florea using these AmelOBPs revealed 22 novel OBP protein sequences, comprising 15 complete sequences and 7 sequences partial towards the N-terminus, the C-terminus, or both, with an average exon number of 5 (Supplementary Tables 4, 6). A parallel study has recently provided an annotation of 22 OBPs in Apis florea (Fouks et al. 2021).
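The alignment-and-phylogeny workflow described in the methods above (signal-peptide removal, MAFFT G-INS-i alignment against the PASS2.5 seed with BLOSUM30, then a maximum-likelihood tree under LG + I + G with 1000 bootstraps) could be driven by a small wrapper such as the sketch below. The specific command-line flags and file names are assumptions for illustration; the authors' exact invocations are not given in the text.

```python
# Sketch of the alignment + tree-building steps as a driver script.
# File names are placeholders; flag choices are assumed, not taken from the paper.
import subprocess

def run(cmd, stdout=None):
    print("+", " ".join(cmd))
    subprocess.run(cmd, stdout=stdout, check=True)

# 1) Align mature AfloOBP sequences against the PASS2.5 structural seed alignment
#    with the G-INS-i strategy (--globalpair --maxiterate 1000) and BLOSUM30.
with open("afloobp_aligned.fasta", "w") as out:
    run(["mafft", "--globalpair", "--maxiterate", "1000", "--bl", "30",
         "--seed", "pass25_seed_alignment.fasta", "afloobp_mature.fasta"],
        stdout=out)

# 2) Maximum-likelihood tree with rapid bootstrapping under LG + I + G
#    (PROTGAMMAILG in RAxML) and 1000 bootstrap replicates.
run(["raxmlHPC", "-f", "a", "-m", "PROTGAMMAILG",
     "-p", "12345", "-x", "12345", "-#", "1000",
     "-s", "afloobp_aligned.fasta", "-n", "afloobp_tree"])
```

Model selection with ProtTest would be run beforehand on the same alignment; the RAxML model string above simply encodes the "LG + I + G" choice reported in the text.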
We also checked whether the addition of OBPs from other insects when performing a genome-wide survey in A. florea would increase the number of identified AfloOBP genes. For this purpose, our final query dataset of AmelOBPs was used as input against several Hymenopteran genomes (Sect. 2; Supplementary Table 5) to obtain gene models and predict OBPs in each such genome using the methodology detailed in Sect. 2.3. Here, through an extensive filtering strategy, we identified 30 putative OBPs orthologous to those of other social bees (Apis dorsata, Apis cerana, Melipona quadrifasciata), the facultatively social bee Eufriesea mexicana, the solitary bee Megachile rotundata, the social wasp Polistes dominula, and the bumblebees Bombus terrestris and Bombus impatiens. Of these 30 putative sequences from the family Apidae (Supplementary Table 5), 22 genes were very similar in gene structure to those identified using only AmelOBPs. The remaining 8 OBPs were a result of remote homology searches (with the PAM250 matrix or allowing very long intronic regions), of which 5 also had significant overlap with the 22 good-quality gene models mentioned above (Supplementary Table 5). In the end, expanding the search space by using many insect OBPs and remote homology searches yielded only 3 novel OBPs on the NW_003789197.1 scaffold. These three hits were not the best hits in bidirectional BLAST with their respective query sequences. As such remote homologs may sometimes belong to another related protein family, we decided not to include them in our further analysis. Chemosensory proteins in Apis florea Furthermore, a similar approach yielded 8 putative chemosensory proteins (CSPs) in the Apis florea genome, out of which 7 putative CSPs showed a significant e-value for the presence of the OS-D domain (Supplementary Table 5). The 7 CSPs identified have orthology with Atta colombica (leafcutter ant), Bombus ignitus (bumblebee), Apis cerana cerana (Asian honey bee), Camponotus floridanus (Florida carpenter ant), Trachymyrmex zeteki (fungus-farming ant), and Trichogramma pretiosum (endoparasitoid wasp) (Supplementary Table 5). The CSP gene family is highly conserved and shows an expansion in the flour beetle Tribolium castaneum (20 genes; Forêt et al. 2007) and a reduction in bees, wasps (10 genes; Werren et al.), and Dipterans (4 in fruit flies and 8 in mosquitoes; Vieira and Rozas 2011). Although 2 CSP genes have been identified in Apis cerana (Diao et al. 2018), studies have confirmed 6 CSPs in honeybee species (Wanner et al. 2004; Forêt and Maleszka 2006; Forêt et al. 2007; Fouks et al. 2021) such as Apis florea, Apis mellifera, and Apis dorsata. Our finding is thus in agreement with the current knowledge of insect CSPs in the literature. Although CSPs have typically been detected in pheromone glands (e.g., mandibular glands) and reproductive organs (Zhu et al. 2019), members of the CSP family, like OBPs, have also been associated with nonsensory functions. CSPs have been implicated in embryonic development, insecticide resistance (Liu et al. 2014), and limb regeneration (Nomura et al. 1992). Comparative study on AfloOBP predicted protein sequences Finally, out of the 22 OBP genes, 15 are complete AfloOBP genes (i.e., annotated as having both a start and a stop codon), and 16 translated protein sequences were predicted to have signal peptide sequences. The average length of the signal peptides predicted in our AfloOBP dataset was 19 amino acids. Cleavage positions ranged from the 16th to the 24th amino acid in the sequence.
Secondary structure analysis revealed an alpha-helix-rich state of OBPs with high confidence. Typically, 6-7 alpha helices per complete AfloOBP sequence were predicted. We used a reciprocal BLASTp approach to compare the protein sequences of AfloOBPs obtained through our pipeline with those of the recent study (also reporting 22 OBPs) (Fouks et al. 2021). Our analysis reports 20 OBPs as reciprocal best-hit matches (Supplementary Table 8). However, triangular associations are likely with more sequences in their dataset. For example, AfloOBP17-like shows complete query coverage and identity with OBP17 (Fouks et al. 2021), but OBP17 shows identity only to AfloOBP17. Similarly, the isoform AfloOBP19-like did not have reciprocal matches with OBPs in Fouks et al. (2021) and has been annotated as a long non-coding RNA (Supplementary Table 9). OBP20PSE (61 amino acids) showed full query coverage with AfloOBP19 in our dataset. The exon coordinates in the gene model for OBP20PSE were also found in our prediction from protein-to-genome alignments using Exonerate (Sect. 2.3; Supplementary Table 6). Interestingly, OBP22PSE (216 amino acids) was annotated in a recent and parallel study (Fouks et al. 2021) and corresponds to a gene under negative selection (Fouks et al. 2021). It did not have reciprocal matches in our dataset. Additionally, we observed that OBP22PSE was positive for the PBP/GOBP domain against the Pfam database, with a considerably higher e-value (3.4e-06) than other OBPs. With a length of 216 amino acids, we expected it to be a double-domain or atypical OBP but, interestingly, observed only 1 domain with a relatively weak e-value. As our study created a revised annotation of Apis mellifera OBPs by combining three studies (Sect. 2.1) before using it to identify OBPs in Apis florea, we retain our annotation independently. Comparative study on gene models of AfloOBPs and revised annotation using RNA-seq data We performed the GWS work independently and deposited the manuscript in bioRxiv in 2020, prior to the 2021 release of the publication by the Fouks et al. group (2021). We present a detailed comparative analysis of the gene models below, using a combination of RNA-seq data from NCBI and our computational genome-wide survey. As discussed earlier, the study by Fouks et al. (2021) also annotated two genes, AfloOBP20PSE and AfloOBP22PSE. The annotation for AfloOBP20PSE by Fouks et al. (2021) comprised two exons only. We have revised the model to a five-exon model using RNA-seq exon data and corrected the exon-intron boundaries. AfloOBP22PSE is a novel gene model predicted by Fouks et al. (2021). However, we report the presence of two splice variants and present corrected gene boundaries for the same. Recent research on long non-coding RNAs in Hymenoptera reveals their involvement in various neuronal processes in ants (Shields et al. 2018) and adult honeybee workers (Sawata et al. 2004; Kiya et al. 2012), as well as in olfactory behaviors (Liu et al. 2019) in honeybees. Their potential involvement in regulating behavioral plasticity, ovary activity, and division of labor in honeybees through targeting mRNAs is yet to be fully understood (Shi 2020). We are the first to report the annotation of a long non-coding RNA (lncRNA) sequence in Apis florea through both our in silico homology-based GWS pipeline and publicly available RNA-seq exon data. The sequence contains a peroxisomal targeting signal and has a weak PBP/GOBP domain.
The novel sequence AfloOBP19-like at scaffold NW_003791127.1 indicates a weak PBP/GOBP domain but is a long non-coding RNA. It is a two-exon gene (289 bases) with a peroxisomal targeting signal. In general, using RNA-seq exon data, we have corrected gene boundaries in most models in both Fouks et al. (2021) and our in silico prediction to maintain in-frame translation of the protein sequences. Some annotated gene boundaries introduced frame shifts that corrupted the translated amino acid sequences; these have been corrected throughout. For example, we have improved the gene boundaries for the last exon in Fouks et al.'s (2021) annotated OBP19. On the other hand, the C-terminus has been extended in the case of OBP3. We have tabulated the differences among the models in Supplementary Table 9. AfloOBP1 is encoded by six exons with a well-defined N-terminal signal peptide and C-terminus. This is an improvement on the gene boundaries in the homology-based annotations, both ours and that of Fouks et al. (2021) (Supplementary Table 9). Our homology-based annotation lacked a signal peptide and the C-terminal exon. The gene model for OBP1 by Fouks et al. (2021) has slightly differing gene boundaries at the third and fifth exons, which translate to frame shifts and do not align with their predicted protein sequence. The RNA-seq-based annotation for OBP1 overcomes these limitations (Supplementary Table 9). AfloOBP2 is encoded by five exons and is characterized as complete in our in silico analysis. In AfloOBP3, the amino terminus has been extended further and gene boundaries have been adjusted to remove frame shifts. The OBP7 X1 splice variant has been annotated by both homology-based pipelines, ours and that of Fouks et al. (2021). We have corrected the gene boundaries using publicly available RNA-seq exon data and have annotated the AfloOBP7 X2 isoform as well. For AfloOBP8_2, splice variants X1 and X2 have been annotated. For both OBP10 and OBP12, the X1 isoform was originally annotated through our GWS, whereas X2 was annotated by the Fouks et al. (2021) study. In the case of the OBP12 gene model, the predicted exon 1 (886,573-886,608) is absent from the RNA-seq exon data. We did not find transcriptomic evidence for OBP17-like, which was a predicted partial gene in our in silico pipeline. Despite partial overlap with OBP17, the exon-intron boundary (1,817,764-1,817,790) of OBP17-like did not correlate well with the transcriptomic data. Evolution of OBP subfamilies in honeybees Sequences AfloOBP1-AfloOBP13 were found to display the conserved cysteine signature of the Classic and Minus-C subfamilies, as do their orthologs in Apis mellifera, comparable to that of AgamOBP (Figure 1). Multiple sequence alignment revealed conserved cysteine profiles specific to the Classic and Minus-C subfamilies in the Apis florea genome (Figure 2). Sequences AfloOBP14-AfloOBP21 were found to show the conserved Minus-C cysteine signature, in which the cysteine residues at the conserved second and fifth positions are missing. (Figure 1 shows the conserved cysteine signatures in the Apis florea genome across the Classic and Minus-C subfamilies; the cysteine signature of OBPs from Anopheles gambiae is given for reference.) Our analysis shows that the conserved cysteine signature for both subfamilies in Apis florea is similar to the representative signature observed in a previous study (Xu et al. 2009). The conserved cysteine signature for the Classic subfamily in the Hymenopteran insect order was determined as C1-X(23-35)-C2-X(3)-C3-X(27-45)-C4-X(7-14)-C5-X(8)-C6, where X(n) denotes n intervening residues (Xu et al. 2009).
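Such cysteine-spacing signatures lend themselves to a simple pattern check. The following sketch is our own illustration, not part of the study's pipeline: the Classic pattern encodes the Hymenopteran spacing reported above (Xu et al. 2009), while the Minus-C pattern (second and fifth cysteines absent, with correspondingly merged spacing ranges) is an approximation introduced here for demonstration only.

```python
import re

# Classic Hymenopteran signature (Xu et al. 2009):
# C1-X(23-35)-C2-X(3)-C3-X(27-45)-C4-X(7-14)-C5-X(8)-C6
CLASSIC = re.compile(r"C.{23,35}C.{3}C.{27,45}C.{7,14}C.{8}C")

# Approximate Minus-C pattern assumed here: the same scaffold with the
# second and fifth cysteines missing, so the flanking spacings are merged.
MINUS_C = re.compile(r"C.{27,39}C.{27,45}C.{16,23}C")

def classify(mature_seq: str) -> str:
    """Very rough subfamily guess from cysteine spacing alone (illustrative only)."""
    if CLASSIC.search(mature_seq):
        return "Classic"
    if MINUS_C.search(mature_seq):
        return "Minus-C"
    return "unassigned"

if __name__ == "__main__":
    # Toy sequence for demonstration only (not a real OBP).
    toy = ("M" + "A" * 5 + "C" + "A" * 30 + "C" + "A" * 3 + "C" +
           "A" * 35 + "C" + "A" * 10 + "C" + "A" * 8 + "C" + "A" * 4)
    print(classify(toy))  # expected: Classic
```

In practice such a check would only complement, not replace, the alignment- and phylogeny-based assignment used in the study.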
Our study has identified 13 Classic and 9 Minus-C OBPs in Apis florea. We observe the Classic cysteine signature to be similarly conserved as C1-X(27-37)-C2-X(3-4)-C3-X(33-43)-C4-X(9-13)-C5-X(8-9)-C6. We tested various models of evolution on our insect OBP data (Darriba et al. 2011; Guindon and Gascuel 2003). Of these, the best-fitting model of evolution was "LG + I + G". We generated our phylogenetic tree using these parameters with 1000 bootstraps. Phylogenetic inference revealed the clustering of Minus-C OBPs as a subclade of the Classic OBP subfamily comprising members of both Apis mellifera and Apis florea OBPs (Figure 3A). Moreover, a conserved cysteine signature specific to the chemosensory protein (CSP) family was observed in the outgroup chemosensory proteins (AmelCSP) (Figure 2). The AmelCSPs used as the outgroup clustered distinctly (Figure 3A) from the odorant-binding proteins input to the phylogeny, with a 100% bootstrap value. Minus-C OBPs were found to cluster together, with a 60% bootstrap value, closest to AfloOBP9, annotated as a Classic OBP. However, the topology of the Minus-C clade with the Classic AfloOBP13 as outgroup has good bootstrap support in general. OBPs of the Minus-C subfamily, AfloOBP14-20, emerge closest to AfloOBP13, a Classic OBP with an observed six-cysteine signature. Interestingly, all the other Classic OBPs cluster distinctly in a clade corresponding to the insect Classic subfamily; however, AfloOBP13 clusters closely with the Minus-C group in a distinct subclade with high confidence (Figure 3B). Interestingly, the antennal OBP MsexABP1 (Vogt et al. 2015) from the Lepidopteran insect Manduca sexta clustered close to the Minus-C clade, along with those of other bee species (Hymenoptera), with high bootstrap support of 97%. We found that most Classic OBPs in Apis florea are phylogenetically more distant from Minus-C (bee OBPs) than clades representing atypical OBPs in Dipterans and Plus-C insect OBPs. (Figure 2: Multiple sequence alignment of OBPs from Apis florea (AfloOBPs). The alignment also contains OBPs from its phylogenetic neighbor Apis mellifera (AmelOBP). Chemosensory proteins from Apis mellifera (AmelCSPs) constitute the outgroup.) It is possible that Minus-C OBPs in honey bees may have evolved from a single ancestral Classic OBP of the species (similar to AfloOBP13, AmelOBP13) by deletion/mutation of the second and fifth cysteines. However, we also acknowledge that we have used representative organisms covering 11 insect orders with a focus on Hymenoptera. It is also plausible that Minus-C OBPs in insects may have evolved independently under positive selection pressure. The evolution and insect order-specific occurrence of the Minus-C, Plus-C, and atypical subfamilies of insect OBPs may have functional roles and would be interesting to investigate. Taken together, our observations from a comprehensive bioinformatic analysis strongly suggest that Minus-C OBPs are likely to have evolved from a Classic OBP subfamily member in honeybees. Similar co-clustering of Classic and Minus-C OBPs across other insects suggests a recurrent need for such variation. It is possible that the evolution of a subfamily could be an adaptation to the local niche of the insect species for functional specificity (Zhou et al. 2020). CONCLUSION In a step towards understanding the role of OBPs in insects, a bioinformatics-based approach was used. We have curated OBPs of Apis mellifera from three sources and used them to query the genome of Apis florea.
To study the evolutionary relationships, we used OBPs from 11 insect orders from diverse habitats (Supplementary Table 3). Our phylogeny was constructed using a novel structural template-based approach (Gandhimathi et al. 2012) that addresses the challenges faced in sequence alignments due to low sequence identities. A total of 22 OBPs, including isoforms, have been identified and annotated from the genome of the eusocial Asian red dwarf honeybee, Apis florea, using a modified in-house pipeline. Our results include AfloOBPs that have previously been identified by the automated pipeline of NCBI, with a query coverage and identity of 100% each with their respective subject sequences (AfloOBP9 and AfloOBP11) (Supplementary Table 7). Our annotated data includes complete OBPs that were identified as having incomplete exons at the N-termini and C-termini and/or were labelled as uncharacterized by the automated pipeline of NCBI. We also observe that the numbers of OBP genes in Apis florea (22) and the western honeybee, Apis mellifera (25), are similar despite the differences in their respective ecological niches. We have analyzed the characteristic conserved features of these OBPs using computational methods and phylogeny, resulting in the discovery of new gene models as well as improvements on existing gene models from NCBI. The presence of the conserved cysteine pattern and disulfide spacing, domain analysis, size, and predicted secondary structure further strengthen their identity as putative insect OBPs. The Classic OBP subfamily clade appears to have expanded to Minus-C OBPs in our study on the dwarf honeybee, Apis florea, and also in a few other insect orders (Vieira et al. 2007; Sanchez-Gracia and Rozas 2008). It is suggested that the expansion of OBPs in the last common ancestor of honeybees explains their unique chemosensory behavior. Although a study on bumble bee OBPs (Sadd et al. 2015) identified only the Classic subfamily (16 OBPs in Bombus terrestris) and its eight orthologs in Apis mellifera, the Minus-C subfamily in honeybees had previously been identified by Foret and Maleszka (2006). Using phylogeny, our study also traces the evolution of the Classic OBP AfloOBP13 and its Minus-C relatives, AfloOBP14-AfloOBP21. (Figure 3: Rooted phylogeny (A) of OBPs from the sister species Apis florea (Aflo; in bright green) and Apis mellifera (Amel; in purple). Members of the alignment template are labelled with the prefix "seed" and colored based on order, whereas the outgroup consisting of Apis mellifera chemosensory proteins (AmelCSP) is labelled in black. The Minus-C subfamily has been indicated in a rectangular box. The bootstrap values of the branches are indicated on the nodes as percentage values. Unrooted phylogeny (B) of OBPs from representative members of 11 insect orders represents the phylogenetic clades. Inner branches and leaf labels denoting insect species are colored based on order, whereas the outer strips denote the OBP subfamily. The subfamily annotation label for a sequence has been provided only if the criteria of high confidence in annotation have been met (i.e., phylogeny and cysteine pattern conservation). Subfamily labels have been left blank in case of limited or no information about the subfamily classification of the OBP sequence. The Classic subfamily is colored in brown, Minus-C in dark blue, Plus-C in dark green, two-domain in bright pink, and atypical in red. The outer circle denotes members' clades. The inner branch colors and label colors are colored as per order. Hymenoptera is denoted in violet. The bootstrap values of the branches are indicated on the nodes as percentage values. An interactive, user-friendly version of the phylogeny will be provided at http://caps.ncbs.res.in/download/Apid) The exon count of B. terrestris OBPs is typically 4 or 5, similar to Apis mellifera and Apis florea. However, the Classic OBP AfloOBP10 (ortholog to AmelOBP10) has 6 exons. The strong phylogenetic similarity between Apis mellifera and Apis florea OBPs strongly suggests that a similar birth-and-death model of evolution occurs in the OBP gene repertoire of Apis florea as well. Interestingly, AmelOBP3 was found to be under positive selection in Apis mellifera (Fouks et al. 2021). Inhibition of OBP7 in response to glyphosate stress was also accompanied by a decrease in metabolites contributing to chemosensory pathways, such as L-malic acid, histamine, and gamma-aminobutyric acid. There is an increase in differentially expressed OBPs in response to glyphosate and commercial formulation-based stress in both A. cerana cerana and A. mellifera ligustica (Zhao et al. 2020). Significant changes in the expression of OBP4, OBP16, OBP18, and OBP21 are observed in A. mellifera as a stress response to sublethal doses of imidacloprid, a nicotine-mimicking insecticide. It is worthwhile to note that AmelOBP16, AmelOBP18, and AmelOBP21 belong to the Minus-C subfamily, which could explain the positive selection of the Minus-C subfamily. This strengthens the case for testing the response of their respective orthologs (Supplementary Table 4) to various abiotic stress factors in Apis florea. We used a query set of Apis mellifera OBPs derived from three studies and used this revised query set to annotate OBPs in Apis florea using our in silico homology-based approach, as described in the present study. Furthermore, we used available RNA-seq exon data to refine our in silico predictions of the exon-intron boundaries for each gene model. We have also considered the AfloOBP gene models in a recent study (Fouks et al. 2021). Overall, we corrected exon boundaries and annotated novel isoforms and missing exons. We also report a long non-coding RNA in Apis florea. We have provided an improved annotation for Apis florea OBPs that incorporates all the above features into our work (Supplementary Table 9). FUNDING BM received support from NCBS-TIFR and the Tata Education and Development Trust. SDK received support from the Shyama Prasad Mukherjee Fellowship from the Council of Scientific and Industrial Research (CSIR) and a Bridging Postdoctoral Fellowship from NCBS-TIFR. RS received support and funding from the JC Bose fellowship (JBR/2021/000006) and from the DBT-Bioinformatics Centre (BT/PR40187/BTIS/137/9/2021). DATA AVAILABILITY All the main data generated or analyzed during this study are included in this published article [and its supplementary information files]. Related datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. CODE AVAILABILITY Not applicable.
7,322
2023-02-01T00:00:00.000
[ "Biology", "Environmental Science" ]
Design and Evaluation of an Asynchronous VR Exploration System for Architectural Design Discussion Content: Design discussion is crucial in the architectural design process. To enhance the spatial understanding of 3D space and discussion effectiveness, some systems have recently been proposed to support design discussion interactively in an immersive virtual environment. The entire design discussion can be archived and potentially become course material for future learners. In this paper, we propose an asynchronous VR exploration system that aims to help learners explore such content effectively and efficiently, anywhere and at any time. To improve effectiveness and efficiency, we also propose a summarization-to-detail approach over the application space, by which students can observe a visualization of the spatial summarization of actions and participants' dwell time, or the temporal distribution of dialogues, and then locate an important or interesting region or dialogue for further exploration. To explore the discussion content further, students can call up the preview to see a time-lapse animation of the object operations and understand the changes to the models, or the playback to view the discussion details. We conducted an exploratory user study with 10 participants to evaluate the user experience, user impression, and effectiveness of learning the design discussion course content using our asynchronous VR design discussion content exploration system. The results indicate that the interactive VR exploration system presented can help learners study the design discussion content effectively. Participants also provided positive feedback and confirmed the usefulness and value of the system presented. Our applications and lessons learned have implications for future asynchronous VR exploration systems, not only for architectural design discussion content, but also for other applications, such as industrial visual inspections and educational visualizations of design discussions. Introduction 1. Research Background Recently, affordable, accessible, and powerful hardware has emerged in consumer VR, enabling new developments in applications. In recent years, we have seen widespread applications in gaming, training, social activities, and education. VR is becoming an important medium in design fields such as product design, architecture design, landscape design, and urban planning [1][2][3]. In architectural design, the design process evolves from early conceptual design to later detailed design. For each design stage in the design studio, review or discussion is vital for students to learn by doing [4,5]. Instructors and students meet each other face-to-face; the designer or instructor presents the design work; and the instructor then coordinates the discussion, asks questions, and provides suggestions. Traditionally, the participants have discussions based on the physical model and 2D drawings prepared by the designer and communicate via voice and sketching on paper or a whiteboard. However, firstly, the physical model can usually only be viewed from the outside and cannot be modified during the discussion. Second, 3D spatial understanding from 2D drawings or 2D sketching on paper may require considerable mental effort. Lastly, it is difficult for participants to go back to a place previously visited or to a previous state of discussion.
To overcome these problems and improve the effectiveness of design discussion, some research has proposed using VR as a representation medium for design reviews [5]. More recently, VR has been fully used to offer a design discussion platform in which participants can navigate and view the virtual model, discuss through voice and 2D/3D sketching, and modify objects during the discussion [6]. It has been reported that 3D spatial understanding of the architecture model and sketching during the discussion requires little mental effort; the support for instant model modification greatly enhances the discussion's effectiveness, since participants can see what is suggested during the discussion [6]. Furthermore, in such systems, since discussion logs are archived, rollback functionality has been proposed to guide participants back to a previously visited place for further discussion in that state, or to a previous discussion state to initiate a new direction of discussion [6]. The archived discussion log is outstanding course material because students who were not in the design discussion class will have the opportunity to learn architecture design by asynchronously viewing and interacting with the discussion content in their own time. Furthermore, students who were in the discussion class can also learn more by interactively reviewing the design discussion content in a post-hoc fashion. Problems and Proposed Approaches Since the design discussion course normally takes one or two hours, a challenging problem is how to present the course material in VR such that students can learn the architecture design effectively and efficiently. At the beginning of this research, we conducted a requirement interview with a group of architecture students and teachers who had previously experienced the traditional architecture design discussion. We found that learners are more concerned about what has happened in the course than about the sequence of discussions, and that realizing what has happened to the architecture models makes understanding the content more effective and efficient. Based on this observation, we proposed a learning strategy called "summarization-to-detail". Learners are first offered an overview of the discussion by showing the spatial distribution of object changes and participants' dwelling time, as well as the temporal distribution of dialogues; from the summarization visualization, learners can easily filter out a time interval for further exploration by selecting a region or a dialogue of interest. In our requirement interview, we also observed that learners prefer to see the overview of object modifications before viewing the detailed discussion related to those modifications. Therefore, the second problem is how to provide an overview of the object modifications occurring within the filtered time interval. Since each object modification could take time, and there might be several modifications occurring within the time interval, a time-lapse animation of the objects' changes is a reasonable choice. Moreover, the object modifications could happen in different locations and the animation is normally shown in a few seconds, so it is not always easy for learners to follow and see the time-lapse animation of object changes in the 3D Virtual Environment (VE). For this problem, our solution is to present the time-lapse animation in the Worlds in Miniature (WIM) [7] of the VE so that the learner can inspect the animations without changing view and position.
To further meet what we learned from the requirement interview (that the overview of object modification is preferred to be seen before viewing the detailed discussion) with the "summarization-to-detail" strategy, the learner's exploration normally starts with a preview and then proceeds with playback, or directly enters playback, and of course the learner can go back and forth between preview and playback. The preview offers a highlight of objects that were modified within the time interval, followed by a time-lapse animation of the changes to those objects. Highlights and animation are shown in the WIM of the VE, as well as in the 3D VE. Learners can switch their view from the WIM to the 3D environment for a closer look via a zoom-in-like operation that teleports the learner to an appropriate position, with the same view direction as that for the WIM. The playback is the normal playback of the discussion content in the virtual environment. Learners can follow the teacher's position and view, or use their own view, and in this manner can experience the discussion in the same way as students who were actually present in the design discussion in real time. To evaluate the effectiveness of the proposed system, we conducted an exploratory user study with 10 participants. Eight of them were architecture majors. Participants were asked to explore the discussion content on the design of VR architecture using our system, and we compared and contrasted the effectiveness and learning experience of using the presented system. The eight architecture majors were also asked to draw their own designs based on the design logic they learned from exploring the design discussion under both conditions. The user study revealed that the VR exploration system presented is capable of facilitating a more effective learning experience of the design discussion content. The participants also gave us several examples of positive feedback and confirmed the usefulness and value of our new system. Architectural Design Review and Discussion The most popular medium for reviewing architectural design is drawing or sketching. During the review or discussion course, participants express their ideas by drawing on paper or sketching on a whiteboard. The ambiguity and amorphousness of sketching is very helpful for architects in improving the ideation process [8]. Sketching also plays an important role in idea exchange and communication between participants [9]. However, 2D sketching requires significant mental effort in 3D spatial understanding and in its association with the 3D architecture model. After computers became a possible choice for the design process, many studies have been concerned with whether or not digital media can provide a better representation for discussion, making 3D scene visualization and navigation possible. Recently, virtual reality (VR) has become a popular digital medium. VR can be used to improve the architectural education environment [3], not only because it offers a much better sense of space, but also because the presence of other users makes the experience feel more realistic to participants. John Messner [10] has presented ongoing research to improve construction education through the use of VR and 4D CAD modeling (3D design plus time) for construction processes and projects. The students were also very engaged with this type of interactive learning experience. Schnabel et al.
[11] compared the understanding of spatial volume within VR with representation using conventional media. They demonstrated that implementing design in VR has significant advantages for the design process by enhancing the perception and understanding of 3D volumes. Bruder et al. created the Arch-Explore user interface, which allows users to explore 3D models of architectural buildings through natural real walking [12]. They integrated multiple travel techniques to allow the exploration of virtual spaces larger than the real-world tracked space, such as redirected walking for natural walking and virtual portals for large-scale teleportation. Other researchers have designed and developed integrated systems for architecture design discussion in VR. Hsu et al. [6] constructed a multi-user system that integrated 3D modeling and procedural modeling with a virtual reality platform in order to support interactive architectural design discussion. The system offers a minimap for users to see each other's position, and they can transfer themselves to wherever they want by selecting the position on the minimap. This system also supports instant editing, 3D sketching, and on-surface sketching. Additionally, a rollback mechanism is designed, where users can roll back to a place previously visited for further discussions and then resume the discussion, or go back to a previous state of discussion to initiate a new discussion direction. Spatial-Temporal Data Summarization Visualization Spatial-temporal data includes spatial location, time, and thematic attributes, which are very different in nature and make data understanding and visualization a difficult task. Cockburn et al. have provided an interesting review of interface techniques that enable users to work with and view different contextual and focused views of data, which can be leveraged for a variety of information visualization applications, potentially including architectural design and post-experience visualization [13]. The interface and interaction techniques they explored spanned Overview and Detail, Zooming, and Focus and Context techniques. To visualize the paths of the participants, Kraak [14] first revealed the visited trajectories as a space-time cube. Coulton et al. [15] proposed a method to combine spatial and temporal information using the human geographers' technique of paths to provide 3D visualizations of the participants' movements. To summarize human actions in spatial and temporal space, Zeng et al. [16] proposed a system that translated activities into a series of line segments from source to terminal city in 3D space, stored in a spatial index that allows quick retrieval of relevant data. Landesberger et al. [17] presented a spatial-temporal bidirectional linkage method, allowing users to select objects in space to see how they develop over time, or to select object groups according to their temporal behavior and locate them in space. Guo et al. [18] introduced EventThread, which categorizes events into time-specific clusters and visualizes the clusters interactively. Shapiro [19] showed how space-time visualization can powerfully support specific types of learning environment designs, such as in museum studies and in social studies education. Worlds in Miniature (WIM) are miniature virtual replicas of virtual space within the virtual world that have been used as a navigational and interaction tool in a variety of applications spanning the virtuality continuum [20][21][22]. Recently, Danyluk et al.
examined the WIM design space in XR applications and defined design dimensions that span Size and Scope, Abstraction, Geometry, Reference Frames, Multiples, and Virtuality types [23]. They mapped new application examples of WIM that are the best exemplars of these categories, which inspired our WIM design for the asynchronous visualization of architectural design information. Summarization-to-Detail Exploration and Visualization It is quite common for human beings to learn or explore information from overview to detail. For example, in information visualization, it is common to use a summarization-to-detail approach. The very first visual information-seeking mantra, "overview first, zoom and filter, then details on demand", proposed by Shneiderman [24], emphasizes the interaction between the overview and the details. The overview provides a summarized view and allows the user to find regions where further exploration in more detail may lead to productive findings. Zhang et al. [25] proposed methods that automatically categorize numerical attribute values by exploiting hidden domain knowledge, and then proposed an interestingness measure for graph summaries to point users to the potentially most insightful summaries. Nam et al. [26] proposed a framework for the exploration of high-dimensional data that provides both overview and detail and adopted a sightseeing paradigm for data and space navigation that allows a better understanding of high-dimensional relationships. Chen et al. [27] visualized the traffic data of highways and allowed semantic zooming, which continuously changes the appearance of the graph to allow visualization from overview to detail. Sarikaya et al. [28] identified key factors in the design of summary visualization. The analysis of design factors provided a more principled understanding of design practices for summary visualization. Finally, a recent review based on a Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis of the use of immersive technologies in Architectural Engineering and Construction (AEC) has revealed that, while there is plenty of use of XR technologies to facilitate design, there is little work on asynchronous design discussion visualization and interaction for users of these applications (i.e., educators, students, observers, and clients of the architectural design decision process) [29]. Overview As we mentioned in Section 1.2, the architecture design discussion may begin with an overall presentation about the design solution, and each segment of the following discussion usually focuses on a particular area of the model. Additionally, the requirement interview we conducted with a focus group of architecture students, teachers, and designers suggests that learners are more concerned with what has happened in the course than with the sequence of events. Therefore, realizing what has happened to the architecture models makes understanding the content more effective and efficient. Furthermore, the requirement interview also pointed out that it is preferable to see the overview of object modifications before listening to the relevant discussion.
Based on the interview observations, we proposed a learning strategy, called "summarization-to-detail", in which the learner first sees an overview of the actions that occurred during the discussion and then filters out a time interval of interest for further exploration; see Figure 1. The summary of actions consists of the spatial summarization of object modifications and participants' dwell time, as well as the temporal distribution of the dialogues, with an optional Theme River. Spatial summarization is visualized as heat maps on a 2D minimap of the architecture model, and the temporal distribution of dialogue is simply visualized as blocks along the timeline. A time interval will automatically be filtered when the learner selects a region on the minimap or a dialogue block. As shown in Figure 1, with the filtered time interval, the learner can explore the content using preview then playback, or directly by playback. The learner can go back and forth between preview and playback or between summarization and detail. In the preview, objects that were modified within the time interval will be highlighted, followed by a time-lapse animation of the object modifications. Playback is the replay of the actions that occurred within the time interval. The learner can experience the playback with his or her own view, or with a view that is closest to the teacher's. The summarization visualization on the minimap, dialogues, timeline, Theme River, and buttons for preview and playback are organized on the main menu; see Figure 2. The menu is displayed on a translucent 2D plane as a heads-up display, initialized to hover in near-field space at a reachable distance from the participant. The minimap with heat maps is at the top left. The heat maps can render the density of actions that took place on objects in the environment, or the proportion of time participants spent in the architectural space during the design discussion. In addition to the heat maps, on the minimap we also display the modified objects as red dots and represent participants and their positions with icons. The blocks with numbers inside, to the right of the minimap, are floor numbers for the architecture model. Below the minimap there is a tree structure representing branches resulting from the discussion. Each branch represents a version of the model resulting from the discussion. Note that, by default, all branches are selected, and the learner can select one of the branches. The summarization visualization depends on the branches selected. Below the timeline there are blue blocks that represent dialogues. Two buttons are used to move the activated dialogue to the left or right. The three buttons below the dialogue blocks are for playback, pause, and preview, from left to right. With a filtered time interval, once the preview button is activated, the main menu disappears, and a WIM is shown to the right of the learner, surrounded by the virtual architecture model. Highlights and time-lapse animations are displayed in the WIM, as well as in the 3D virtual environment. The WIM will rotate according to the change in the learner's viewpoint so that the display of the highlight and animation always faces the learner. The WIM is rendered with a translucent background so that the learner can overlook the whole model and observe the highlight and animation functionality. If the learner wants to have a closer look at a highlight or animation, we provide a zoom-in-like operation that teleports the learner to an appropriate position in the 3D
environment, with the same viewing direction as that for the WIM. Summarization Visualization We archived data from VR-based design discussion courses that included the movement of each participant, the content of the discussion (e.g., voice, sketching), the model modification log, bookmarks, and rollback information. Using this archived information, we interactively visualize the design discussion data in our VR system for learners using a combination of interaction metaphors that are explained below. Minimap and Spatial Summarization Visualization The minimap is a widely used user interface. It provides a top-down view of the architecture model. For a multi-story architecture model, each floor has a minimap. Each minimap is a rendered image of the models on the floor, on which a region partition is applied based on the model design and the designer's opinions. On the minimap, we overlay a heat map of actions regarding objects, including modification, creation, and sketching, or a heat map of the participants' dwelling time, with red dots representing the modified objects and icons representing the participants; see Figure 2. With the minimap and its augmented information, the learner can have a clearer understanding of the model space, where the modified objects are, and the density distributions of object modifications and participants' dwelling time. The heat map can be generated as follows. The map plane is a square in Unity and is first aligned with the minimap. Suppose that the heat map has a resolution of n × m, with nm vertices. The density value at each vertex is the accumulation of weighted values contributed by the actions that occurred at the vertex or on the edges or faces adjacent to that vertex (a schematic sketch of this accumulation is given at the end of this subsection). Timeline and Temporal Summarization Visualization The targets of the design discussion are the architecture model and the model space. Therefore, ideally, content exploration can best be driven by the region of interest in the model space. However, when a participant comments on the design, the comment could be regarding a general design principle or be about the design of some objects that are not in the same region as the participant. We finally decided to drive the content exploration based on the time interval of interest, which is quite similar to the video players that most people are familiar with. Therefore, when a region on the minimap or a dialogue is selected, the corresponding time interval will be filtered, and the content exploration can be triggered for that time interval. In addition to the dialogue blocks along the timeline, we utilize a Theme River to summarize the number of model operations, including transformation, sketching, and object creation, along the timeline, as shown in Figure 3. The Theme River is a popular visualization that shows changes in events or variables over time. However, in our initial pilot study, which aimed to assess the system functionality and identify any potential problems, we found that it is not easy for architecture students to grasp the information presented by the Theme River. Therefore, in the current system, the Theme River functionality is optional. If, from the Theme River, the learner finds a certain period of time interesting, he or she then needs to set the time interval manually for further detailed exploration. To this end, the Theme River is placed above and aligned with the timeline, as illustrated in Figure 3.
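Returning to the heat map construction described above, the per-vertex density accumulation can be sketched in a few lines of plain Python (the system itself is implemented in Unity). The grid layout, weighting scheme, and normalization below are our own assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def action_heatmap(actions, n=64, m=64, extent=((0.0, 1.0), (0.0, 1.0))):
    """Accumulate weighted action counts onto a vertex grid over the minimap plane.

    actions: iterable of (x, y, weight) tuples in minimap coordinates.
    extent:  ((xmin, xmax), (ymin, ymax)) of the minimap plane.
    The grid has n x m cells, i.e. (n+1) x (m+1) vertices (our own layout choice).
    """
    (xmin, xmax), (ymin, ymax) = extent
    density = np.zeros((n + 1, m + 1))           # one value per grid vertex
    for x, y, w in actions:
        # Cell indices of the action; clamp to the grid.
        i = min(int((x - xmin) / (xmax - xmin) * n), n - 1)
        j = min(int((y - ymin) / (ymax - ymin) * m), m - 1)
        # Spread the weight over the four vertices adjacent to that cell.
        for di in (0, 1):
            for dj in (0, 1):
                density[i + di, j + dj] += w / 4.0
    # Normalize to [0, 1] so the values can be mapped to a color ramp.
    if density.max() > 0:
        density /= density.max()
    return density

# Example: three object modifications with equal weight.
heat = action_heatmap([(0.2, 0.3, 1.0), (0.21, 0.31, 1.0), (0.8, 0.7, 1.0)])
print(heat.shape, heat.max())
```

The same accumulation can be reused for the dwelling-time heat map by replacing the per-action weight with the time each participant spent at a sampled position.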
Branches The VR architectural design discussion system provides rollback functionality that allows the teacher to return to a previous state of discussion and initiate a new direction of discussion [6]. As a result, the archived log may have more than one version, represented as a tree structure in which a branch is formed by traversing from the root to a leaf; see the bottom left of the main menu in Figure 2. By default, all branches are selected. Once a branch is selected on the main menu, all summarization information will be updated accordingly. Filtering of Time Interval The time interval for a region is derived as the time distribution range of the dialogues and object operations related to the region (a schematic sketch of this derivation is given after the Preview subsection below). Object operations are associated with a region by their locations. However, it is not always easy to find which regions are related to a dialogue or which dialogues are related to a region, since the target objects of a discussion dialogue may not be in the same region as the speaker. One way to do this is to interpret the content of the dialogue to identify the target objects, automatically or manually. Alternatively, for a dialogue, we can find the regions that received the most attention from the participants during the time period of the dialogue. This is based on a phenomenon we observed in the design discussion: the teacher usually makes sure that most participants are focused on the target objects before he or she starts the voice communication. In the current version, we segmented the voice communication for a whole course into dialogues and manually identified the target objects for each dialogue. Note that, due to the branching of the design discussion, the derived time interval for a region may need to be divided into k subintervals if k branches are involved in the exploration. The time interval for a dialogue is the range of the time span of the dialogue itself and of the operations on the target objects of the dialogue. Preview The preview functionality aims to provide an effective and efficient way for learners to learn where and how objects are operated on within the filtered time interval. To this end, two problems need to be carefully dealt with. First, since an object operation may take a while, we depict the modification process with a time-lapse animation that lasts for a few seconds. Note that the time-lapse animation of a model modification was generated by rendering the whole model change over a pre-specified time interval. Second, within the filtered time interval, different object operations are often spread over a large area. Hence, it is difficult for the learner to follow and see the time-lapse animations due to the possibility of motion sickness. Worlds in Miniature (WIM) is chosen to provide an overview of the animations. For a filtered time interval, once the preview button on the main menu is triggered, the WIM, rendered with transparency, will immediately be displayed in front of the learner, within the learner's personal space. Since the animation of the object operations may cover a large area, the WIM needs to rotate and scale automatically so that the part that covers the object highlight and the time-lapse animation always faces the learner and is displayed at a proper scale. Moreover, the WIM is also surrounded by the 3D virtual architecture environment; see Figure 4. In cases where it is needed, we also provide an interface for the learner to translate, rotate, or scale the WIM manually.
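Returning to the time-interval filtering described earlier in this section, the derivation of an interval from the dialogues and object operations associated with a region can be outlined as follows. This is a minimal sketch under assumed log fields (start time, end time, region label); it is not the authors' implementation, and it omits the per-branch splitting into k subintervals mentioned above.

```python
from dataclasses import dataclass
from typing import Iterable, Optional, Tuple

@dataclass
class LoggedEvent:
    kind: str          # "dialogue" or "operation" (assumed log schema)
    start: float       # seconds from the beginning of the course
    end: float
    region: str        # minimap region the event is associated with

def interval_for_region(events: Iterable[LoggedEvent], region: str) -> Optional[Tuple[float, float]]:
    """Time interval spanned by all dialogues and object operations tied to a region."""
    related = [e for e in events if e.region == region]
    if not related:
        return None
    return min(e.start for e in related), max(e.end for e in related)

def interval_for_dialogue(dialogue: LoggedEvent, operations: Iterable[LoggedEvent]) -> Tuple[float, float]:
    """Interval covering the dialogue itself plus the operations on its target objects
    (approximated here as operations sharing the dialogue's region)."""
    related = [op for op in operations if op.region == dialogue.region]
    start = min([dialogue.start] + [op.start for op in related])
    end = max([dialogue.end] + [op.end for op in related])
    return start, end
```

In the actual system the dialogue-to-object association was made manually, as noted above, so the region-sharing shortcut in the second function is only a stand-in for that manual mapping.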
Although the WIM is advantageous for examining the object changes over the whole model, the learner may want to have a closer look at the highlight and animation. We provide a zoom-in-like functionality that, once triggered, will teleport the learner to an appropriate location in the 3D virtual environment, with the same viewing perspective as that used for viewing the WIM; see Figure 5. Additionally, viewing in the 3D model space provides better immersion than the WIM does. Switching between these two perspectives (WIM space or virtual model space) allows the learner to achieve a thorough viewing experience in the preview. Playback Playback is a replay of the design discussion. The architecture model, model modifications, sketching, voice dialogues, and movements of the participants will be displayed in the virtual environment. The learner observes and experiences the design discussion in the same way as those who actually joined the discussion; see Figure 6a,b. Note that in Figure 6b we see a snapshot of the discussion scenario in which there is an icon placed on top of the avatar who is speaking, and the sketched objects as well as the objects that have been modified are highlighted with different colors. One problem with playback is how the learner can move effectively to observe it. In the VR architecture design discussion course, participants might move frequently and quickly, and the target objects of the discussion might be some distance away from the participants. These observations imply that it is difficult for the learner to move effectively in order to see the playback, even with the help of the minimap. Note that, in playback mode, the learner needs to pay almost full attention to the discussion, so "how the learner moves" should be almost effortless. Since the teacher plays a leading role in the discussion, we provide a "follow teacher" functionality that transfers the learner to a position at the side of the teacher (i.e., the rear right side of the teacher) and with a viewing direction similar to the teacher's.
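The "follow teacher" placement amounts to a small piece of vector math: offset the learner to the teacher's rear right and adopt a similar viewing direction. The sketch below is our own plain-Python illustration (the system itself runs in Unity), and the offset distances and axis conventions are arbitrary assumptions for demonstration.

```python
import math

def follow_teacher_pose(teacher_pos, teacher_yaw_deg, back=1.5, right=1.0):
    """Return a position and yaw for a learner placed at the teacher's rear right.

    teacher_pos: (x, y, z) in world coordinates.
    teacher_yaw_deg: teacher's heading around the vertical axis, in degrees.
    back, right: assumed offsets (meters) behind and to the right of the teacher.
    """
    yaw = math.radians(teacher_yaw_deg)
    # Forward and right unit vectors on the ground plane for this yaw
    # (left-handed, y-up convention as in Unity).
    fwd = (math.sin(yaw), 0.0, math.cos(yaw))
    rgt = (math.cos(yaw), 0.0, -math.sin(yaw))
    x, y, z = teacher_pos
    learner_pos = (x - back * fwd[0] + right * rgt[0],
                   y,
                   z - back * fwd[2] + right * rgt[2])
    # The learner adopts a viewing direction similar to the teacher's.
    return learner_pos, teacher_yaw_deg

pos, yaw = follow_teacher_pose((10.0, 0.0, 5.0), 90.0)
print(pos, yaw)
```

Keeping the transfer as a single teleport, rather than a continuous chase of the teacher's avatar, also reduces the risk of motion sickness noted earlier for the preview animations.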
Gesture-Based Interaction and User Interface Current implementation offers two options for the interface: HTC controller or gesture recognition supported by Leap Motion.In the interest of space, the interaction metaphor with gesture-based controls is elaborated below.In the proposed system, two categories of gestures are defined: traveling and system control.For traveling, there are gestures for forward movement, for ascending movement, and for descending movement, as shown in Figure 7.To control the system, there are gestures for pointing and manipulating WIM; see Figure 8a,b.Since these gestures differ significantly, it is unlikely that users will falsely trigger an unexpected functionality.However, it also necessary to train the user in this interaction metaphor for movement and control of the WIM.The "follow teacher" can be triggered on the main menu and on a menu that can be summoned only when the main menu is gone by using the pointing gesture; see Figure 8c.It is worth mentioning that, during the playback, we highlight the objects that have been modified and mark the time of the last modification on such objects to reduce the learner's mental effort in memorizing what has happened.The proposed system has three types of menu interface: the main menu described in Section 3.1 (Figure 2), the menu for supporting functionalities, and the menu to interact with WIM.For the main menu, we first considered designing it as a head-mounted menu, but we found that the menu often moved slightly.The accuracy and stability issues of the Leap Motion gesture tracking system further complicated the problem.The main menu is summoned by turning over the left-hand side, will appear statically in front of the user, and will disappear automatically when the preview or playback button is triggered, or when it does not receive the user's attention for a few seconds.In this way, we made the main menu work like a head-mounted menu. An auxiliary menu provides supporting functionality: "follow teacher" and "turn on/off the highlight of modified objects in the virtual model scene".The auxiliary menu can be summoned by a pointing gesture only when the main menu is closed.As shown in Figure 8c, the menu contains two buttons and appears in front of the users' right index finger and will disappear when the user lays down the right hand or when the main menu is turned on. The third is the menu to manipulate the WIM.The menu will appear below the WIM when the time-lapse animation is over or paused.In this menu, there are translation, rotation, and scaling options to choose to manipulate WIM using the WIM gesture described in the previous subsection. An Example Scenario In this section, we will describe an example scenario to explore the content of the design discussion using our system.At first, when the system is activated, the learner is placed in the virtual model space from which he or she can navigate and look around to examine the model space.Before or while navigating the virtual model space, the learner can activate the functionality that highlights all the sketched objects and the objects that have been modified during the discussion by using the auxiliary menu (Figure 8c).With this action, the student can become familiar with the model space and conceptualize some ideas about the targets of the design discussion. 
To begin exploring the content, the learner summons the main menu by turning over his or her left hand.The main menu will be shown and fixed in front of the user and within the learner's personal space.The learner looks at the summarization visualization shown on the main menu, trying to grasp the distribution of actions over the model space and dialogue blocks along the timeline.The learner may decide to select a region with the densest object modifications or a dialogue about the course theme for exploration.Once the learner makes the selection, the filtered time interval will be shown above the timeline, and then he or she can activate the preview or playback button.Once the preview button is activated, the main menu disappears and the WIM is shown on the right side of the learner, surrounded by the virtual environment.In the WIM and virtual environment, the objects modified within the time interval will be highlighted, followed by the time-lapsed animation of the object modification.During animation in WIM, the learner can switch to the virtual environment for a closer look by activating the zoom-in button.During or after the preview, the learner then decides to look at the detailed discussion within the filtered time interval, summons the main menu again, and triggers the playback button.After exploring the content for the current time interval, the learner selects another region or dialogue for another exploration.This process repeats until the learner is satisfied with exploring the design discussion in a post-hoc fashion. User Evaluation We conducted an exploratory user study with 10 participants to assess user experience, user impression, and effectiveness of learning the design discussion course content using our asynchronous VR design discussion content exploration system.The research question we explored was "To what extent is the asynchronous architectural design discussion visualization system effective from a usability and user experience perspective?" Participants The age range of the participants was 20 to 25 years.Half of them were women, the other were men.Eight out of ten participants were architecture majors who had experience with the traditional architectural design discussion course and little VR knowledge.The other two participants were computer science majors who had experience using VR devices, but had not experienced architecture design or design discussion. Course Material The course material used in this study was gathered from a VR architectural design discussion course that took place in March 2019.The course lasted about an hour, and the participants included a teacher and two students.The model for the design discussion is a four-story building with a large platform on the left wing; see Figure 9.During the course, the participants discussed and revised the design of the platform to make it more aesthetic and functional.The teacher began with a series of introductions on the structure of the building.This process lasted about 15 min.After that, the teacher made some comments on the design of the building and explained some design logic and its goals, followed by some suggestions for reviewing the design.He then drew diagrams using 2D/3D sketching functionality in the architecture design discussion system to explain the modification he suggested to apply to the platform.Students were then helped to perform the necessary operations for the modifications.Finally, at the end of the course, the teacher gathered the students to discuss and evaluate the results. 
There are two design discussion branches in this course.The teacher decided to roll back to a previous state of the discussion about 30 min after the class began, and, from that point, initiated a new direction for discussion. Study Design Through user study, we want to investigate whether users can grasp and understand the content of the VR design discussion course thoroughly and efficiently.In addition to questionnaires, participants were asked to draw their own design based on the design logic they learned from the course.The designs were graded by the teacher who led the discussion. Procedure There are three phases in the user study.First, in the training phase, participants received a brief introduction to the system and then practiced with the system with enhanced training information, such as text and verbal guidance for each functionality by a facilitator.After practicing, the participants would be tested on five tasks to see if they understood fully how to use the system.Each task requires the coordinated use of several different functionalities. After the training phase, the participants began exploring the course content using the system in the learning phase, which took approximately 20 min.Subsequently, participants completed post-experiment questionnaires, including the simulator sickness questionnaire (SSQ) [30], NASA TLX [31], presence [32], co-presence [33], system usability satisfaction [34], and the system evaluation questionnaire designed by the experimenter.Participants were also asked to draw their own design that met the design logic learned in learning phase.The SSQ questionnaire was completed by the participants before and after the learning phase for comparison.We computed the average difference in the SSQ scores of the pre-learning phase score subtracted from the post-learning phase score for each item in the questionnaire.A Wilcoxon signed rank test conducted between pre-and post-SSQ scores did not reveal a significant difference.The most notable result comes from the symptom in question 2, which represents the fatigue symptom.Approximately 80% of the participants felt more tired after the user study.Due to the fact that our system requires a significant amount of mental effort, the users needs to stay focused for some time.The symptoms in Questions 1, 3, 4, and 5 refer to "general discomfort, headache, eye strain" and "difficulty focusing", respectively.Approximately 30-40% of the participants felt that these symptoms increased after the user study.One possible reason for this result might be unfamiliarity with VR equipment.Most of the participants were novice users of VR, and may not have been able to adapt to the VR environment as quickly as we expected.The symptoms in Questions 8, 15, and 16 refer to "nausea, stomach awareness" and "burping", respectively.According to the results, no users experienced these symptoms before or after using the VR system, which was highly encouraging.Figure 10 shows the results of the SSQ questionnaire. NASA TLX Questionnaire The NASA Task Load Index (NASA-TLX) is a widely used, subjective, multidimensional assessment tool that rates perceived workload in a task using a system.The average workload scores of all participants are shown in Figure 11. 
As shown in Figure 11, the value for the temporal demand is approximately 30, while the values for other dimensions range from 100 to 160.Since experiencing the course content is a critical aspect in learning, in the learning phase we did not impose any kind of time limit and hence induced no time pressure on the participants.On the other hand, the frustration level received the highest demand in the study.This may be due to the poor performance of gesture recognition supported by the Leap Motion system for the menu interaction in front of the participant, and this should be improved in the future.Since the purpose of the system is to offer the participants an effective learning experience in VR, instead of the frustration level, the mental demand is expected to be the one with the highest value, as the participants would focus on the course content for quite a long time during the learning phase.However, we found that mental demand was somewhat lower than effort or frustration. Presence and Co-Presence Questionnaire Our VR system could potential enhance both presence via the object, environment, and architectural details to some extent, and co-presence via the shared presence of static avatars that represented students, teachers, or designers in the architectural design discussion content interactive visualization experience.The 17 items in the survey were rated on a Likert scale of 1 (strongly disagree) to 7 (strongly agree). In the responses to items 8, 9, and 10, "Somehow I felt that the virtual world would surround me", "I felt present in the virtual space", "I did not pay attention to the real world", participants' mean presence scores ranged from five to six (slightly agree to moderately agree), indicating that participants commonly had the feeling of being present in the virtual environment and being totally immersed.The rather low score for item 5, "I did not feel present in the virtual space", also agrees with the above observation.The rather low score received for item 11, "The virtual world seemed more realistic than the real world", is reasonable, since the architecture model we reviewed at this design stage is not a final model, and it is a plain model without appropriate light and material setting.The low score (slightly disagree to neither agree nor disagree) for item 1, "I was aware of the real world while navigating in the virtual world (i.e., sounds, room temperature, other people, etc.)", may be due to the fact that there were approximately 10 people inside the room during the user study, although each participant is in an isolated area surrounded by curtains.The response to item 4, "The experience in the virtual environment seemed consistent with my real-world experience", also received a low score.It is hard to interpret this result, as there seems to be no corresponding real-world experience in this case.Figure 12 shows the results of the Presence and Co-presence Questionnaire. 
According to the feedback, we can see that users were quite focused on learning in the VR environment during the user study; they were able to describe what happened in the design discussion and learn from it. During learning, it is acceptable for users to focus on the content and not be affected by what happens in the real world. This matched our goal of making the whole learning experience efficient, so that the user can always find interesting content easily instead of spending time on a less attractive part of the design discussion content. However, users were divided on whether the system provides a convincing rendering effect that makes the environment appear realistic. We employed the IBM system usability questionnaire, which consists of 20 questions, to measure system effectiveness, efficiency, and satisfaction, rated on a scale from 1 to 5. We found that most users agreed that our system was easy to use, useful, and satisfactory in helping them during the learning phase of the interactive design discussion content exploration task. The responses to the items "The information provided for the system is easy to understand" (M = 3.75, SD = 0.8), "The information is effective in helping me complete the task and scenarios" (M = 3.6, SD = 1.1), and "The organization of information on the system screen is clear" (M = 3.7, SD = 1.2) received relatively high scores from the participants. We also found that the participants approved of the quality of information that our system provided and considered it helpful. The rather low score for the item "The system gives error messages that clearly tell me how to fix problems" (M = 2.85, SD = 1.5) may be due to the fact that the system does nothing when a menu selection fails because of poor gesture recognition. Similarly, for the item "The interface of this system is pleasant" (M = 3.0, SD = 1.8), the relatively low score implies that beautification of the interface is necessary in the future. System Evaluation Questionnaire This researcher-designed questionnaire's purpose was to assess the features of our VR system from both a quantitative and a qualitative perspective. The questionnaire is divided into two parts: system overall evaluation and system interactive functionality.
System Overall Evaluation In the questionnaire, there were seven questions that evaluated what the participants felt about the system, including "The system was interactive, enjoyable, satisfying, user friendly, usable, not frustrating to use, and useful" and rated it on a scale of 1 (strongly disagree) to 5 (strongly agree).Results are shown as items 1 to 7 in Figure 13.Based on feedback, most of the participants considered the system interactive (item 1) and usable (item 5), which means that most of them were able to learn and gained familiarity with how to interact with the system during the user study and found the system to work normally without any interruptions due to errors.Items 2, 3, and 4 received mean scores ranging from 3.6 to 3.8, indicating that participants tended to agree that the system is enjoyable, satisfying, and user-friendly.Finally, item 6, "The system was not frustrating to use", received a rather low mean score of 3.1, implying that participants may have encountered some glitches such as gesture recognition problems.In order to compare the proposed system with the video-based learning system, we included items 8 to 11 to assess what the participants felt about the difference between the two systems, including "The VR exploration system provides a better understanding of 3D building space, gives a better learning experience, provides more effective learning due to the summarization-to-detail approach than the video-based system", and item 11, "By using VR exploration system, you have completely realized the content of this VR architecture design discussion course". System Interactive Functionality In order to evaluate the functional design of the system, the questionnaire assessed the user's feelings about 11 functionalities provided in the system on a scale of 1 (worst) to 10 (best).The 11 functions evaluated were (1) color maps on minimap, (2) temporal distribution of dialogues in the timeline, (3) Theme River, (4) summarization visualization for understanding what happened in the discussion course, (5) preview, (6) Worlds in Miniature visualization and interaction, (7) playback, (8) summarization-to-detail approach for facilitating the understanding of the design discussion content, (9) the menu's readability, (10) the menu's usability, and (11) the menu's convenience in interaction.The results are shown in Figure 14.Overall, all features received a mean score of more than 7 points.Features 2, 6, and 10 even received a mean score of 8.It seems that users consider the dialogue list to be helpful for them to realize the distribution of dialogues in the course.They were also fond of the Worlds in Miniature functionality, which offers a global perspective of the object modifications that occurred during the discussion. 
Learning Effectiveness Assessment Besides the evaluation of the system's features and user experience, it was also necessary to see whether the participants really understood the content of the discussion course. The eight architecture majors were also asked to draw their own designs based on the design logic they perceived from the course content. The hand-drawn designs were graded by the teacher who led the design discussion for the course content; see Figure 15 for the learning effectiveness scores of each participant who took the test. The designs were evaluated on three different aspects: consistency, completeness, and originality. In the course material for the user study, the teacher mentioned revision advice for the building, such as staggered construction, a protruding roof, walls with holes, and more thin frames. Participants needed to add these features to their design to receive a high consistency score. The more such features were present in the design, the higher the grade given with respect to completeness. However, participants were also expected to make their design unique, which contributed to the originality score. As shown in Figure 15, the eight participants were able to obtain scores higher than 75%, with an average of 81%. It appears that participants were able to grasp and understand the design logic behind the design discussion and were able to create their own designs afterwards. From this, we can conclude that the proposed system was successful in storytelling and in extracting the focal discussion in the interactive immersive VR design discussion content exploration afforded by our system. Discussion of Qualitative and Quantitative Results In the system evaluation questionnaire, we also asked participants to express their opinions about what they liked or did not like the most about the features provided in the system and the overall learning experience. Most of the participants considered the system useful and helpful for understanding and learning the design discussion course described here. Participants enjoyed the strengths of the system, such as the summarization visualization, timeline filtering, experiencing the design logic in 3D space, minimap visualization of important events, high levels of immersion in the discussion conversation, teacher following, ease of grasping the course content, the minimap and Worlds-in-Miniature interactions, being able to perceive what objects changed over time, the playback functionality, and the overall discussion exploration model employed for the summarization-to-detail approach.
We also received negative feedback regarding the system's functionality, such as the lack of ease of control of the gesture-based interaction and the lack of aesthetics in the design of the menu system. Since none of the participants had any experience using gesture-based input devices before, they needed some time to become familiar with the gesture-based interaction metaphor in our system. The effective operational characteristics of the Leap Motion-based gesture tracking device were also limited; sometimes users lost tracking of their hands if their hands left the tracker workspace, and gesture recognition was susceptible to tracker latency and jitter. Therefore, our system received a high score on "frustration" in the NASA TLX questionnaire. As for the design of the menus, our system rendered the menu as a pop-up display that may have been too large to allow comfortable interaction from the perspective of some users. Some of the participants said that they had difficulty interacting with the menu as it required large movements of their hands. Additionally, it was also suggested that the aesthetics of the menus be enhanced. From a subjective quantitative perspective, we were excited to find that the overall metrics of presence, system usability satisfaction, overall system functionality evaluation, and, most importantly, the learning performance of the architectural design using our system were high. However, in future iterations of our research, we hope to take advantage of advances in gesture tracking technology to improve the usability and user experience of gesture-based interaction in our architecture design discussion visualization system. Conclusions and Future Works We have presented a novel asynchronous VR interactive exploration system for learners to grasp and understand the design logic or concept from architecture design discussion content archived in VR. We conducted an exploratory user study evaluating the effectiveness and efficiency of the proposed system from a usability and user experience perspective. We found that participants could be immersed in the design discussion scenario or atmosphere and obtained a robust and useful spatial understanding of architecture design, decision processes, and contextual information in the design process. Additionally, with the proposed summarization-to-detail approach, participants were able to follow and explore the design discussion process effectively and efficiently. The results of our study revealed that the asynchronous VR exploration system could be potentially effective for architecture design education. From a broader impacts perspective, our application and lessons learned have implications not only for the design and development of future asynchronous VR exploration systems for architectural design discussion content, but also for other applications, such as industrial visual inspections and educational visualizations of design discussion content in general.
In future work, we aim to improve the interaction interface and to enhance the main menu and Worlds-in-Miniature (WIM) interaction metaphors within our system. In our system, gesture-based interaction is used to interface with menus and as a command and control tool for navigation and WIM manipulation. The current implementation is powered by Leap Motion and suffers from accuracy and stability problems. A better gesture-based interface will greatly enhance the learning experience. Currently, the WIM is a scaled-down model of the entire architecture model. Therefore, occlusion might become a problem when a time-lapse animation of object changes appears in only a small portion of the WIM. We will investigate a WIM for a partial architecture model in a future version of the system.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Figure 2. The main menu of the proposed system: (a) function introduction; (b) run-time demonstration. The learner may select a region or a dialogue of interest to filter a time interval for exploration.
Figure 3. Theme River is placed above and aligned with the timeline.
Figure 4. In preview, the learner examines the highlighted objects and time-lapse animation in the WIM displayed in the learner's personal space and surrounded by the 3D virtual architecture model space.
Figure 5. After the zoom-in-like function is triggered, the learner is teleported to an appropriate position in the virtual model space.
Figure 6. (a) Avatars in the scene: a snapshot of playback in which the avatar with the black shirt represents the teacher character and the avatars with brown shirts are students. (b) Bird's-eye view of the model: a snapshot of playback in which the avatar who is speaking has an icon above them; objects that have been modified are highlighted with different colors.
Figure 9. Architecture model used in the design discussion course.
Figure 13. Results of the System Overall Evaluation questionnaire.
Figure 14. Results of the System Interactive Functionality questionnaire.
Figure 15. Results of the Learning Effectiveness assessment questionnaire.
Funding: This research was funded by the Ministry of Science and Technology, ROC (Taiwan), under MOST 110-2221-E-A49-114.
Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of National Chiao Tung University (NCTU-REC-109-078E, 22 October 2021).
11,807.6
2023-12-27T00:00:00.000
[ "Engineering", "Computer Science", "Education" ]
On the Genesis of the Marshall-Olkin Family of Distributions via the T-X Family Approach: Statistical Modeling : In the last couple of years, there has been an increased interest among statisticians in defining new families of distributions by adding one or more additional parameter(s) to the baseline distribution. In this regard, a number of families have been introduced and studied. One such example is the Marshall-Olkin family of distributions, which is one of the most prominent approaches used to generalize existing distributions. Whenever we see a new method, the natural questions that come to mind are (i) what is the genesis of the newly proposed method and (ii) how is the proposed method obtained. No doubt, the Marshall-Olkin family is a very useful method and has attracted researchers. But, unfortunately, the authors did not provide an explanation of the genesis of the method, that is, of how this family of distributions is obtained. To address this issue, in this article, an attempt has been made to provide a straightforward computation of the genesis of the Marshall-Olkin family that somehow completes its derivation. The genesis of the Marshall-Olkin family is based on the T-X family approach. Furthermore, we have shown that other extensions of the Marshall-Olkin family can also be obtained via the T-X family method. Finally, a real-life application from insurance science is presented to illustrate the newly proposed extension of the Marshall-Olkin family. Introduction Statistical distributions are very worthwhile in describing and predicting real-world phenomena. In the recent era, several new statistical distributions have been proposed by extending the classical distributions for modelling data in applied areas, particularly in the practice of insurance and economics [1][2][3][4][5][6][7], engineering [8][9][10] and biological studies [11][12][13][14][15], among others. However, the traditional distributions are not flexible enough to model real-world phenomena of nature. Therefore, it is always of interest to obtain extended versions of the existing distributions for modeling real-life data. This has been done through many different approaches [16,17]. One such approach to generalizing the traditional distributions is the Marshall-Olkin (MO) family approach. The cumulative distribution function (cdf) of the MO family is given by Eq. (1), where F(x; ξ) is the cdf of the baseline distribution, which may depend on the vector parameter ξ ∈ R, and σ is an additional parameter. The probability density function (pdf) corresponding to Eq. (1) is given by Eq. (2), where σ̄ = 1 − σ. The MO family method has been used extensively to generalize existing distributions. The MO family of distributions is one of the most prominent approaches used to generalize existing distributions. But the authors did not provide the genesis and the derivation of the MO family, and hence the derivation in the paper is not complete. In this article, an effort has been made to provide the genesis and a straightforward computation of the MO family. There may also be other methods through which the MO family can be obtained. However, this article offers the derivation of the MO family via the T-X family approach [18]. This paper is organized as follows: The genesis of the MO family is presented in Section 2. Further developments are discussed in Section 3. A special sub-case of the newly proposed family is presented in Section 4. A real-life application to financial sciences is presented in Section 5.
Finally, the paper is concluded in the last section. Genesis of the Marshall-Olkin Family of Distributions This section offers the derivation of the MO family of distributions via the T-X family approach. Let v(t) be the pdf of a random variable, say T, where T ∈ [m, n] for −∞ ≤ m < n < ∞, and let W[F(x; ξ)] be a function of the cdf of a random variable, say X, depending on the vector parameter ξ and satisfying the conditions given below. Recently, Alzaatreh et al. [18] defined the cdf of the T-X family of distributions by Eq. (3), where W[F(x; ξ)] satisfies the conditions stated above. The pdf corresponding to Eq. (3) follows by differentiation. Using the T-X idea, several new classes of distributions have been introduced in the literature. Let T follow the exponential distribution with parameter θ; its cdf is then given by Eq. (4). If θ = 1, then from Eq. (4) we obtain Eq. (5). The pdf corresponding to Eq. (5) is Eq. (6). If T follows Eq. (6), then substituting the appropriate choice of W[F(x; ξ)] into Eq. (3) and solving, we get Eq. (7), which is equal to Eq. (1) and thus provides the cdf of the MO family. Other Extensions of the MO Family Other extensions of the MO family can also be obtained through the T-X approach, for example, the Kumaraswamy, exponentiated, and alpha power transformed versions. Exponentiated Version of the MO Family If v(t) is the pdf given by Eq. (6), then substituting the corresponding W[F(x; ξ)] into Eq. (3) yields the exponentiated Marshall-Olkin (EMO) family given by Eq. (8), where a > 0 is an additional shape parameter. The density function corresponding to Eq. (8) can easily be obtained by differentiation. Kumaraswamy Version of the MO Family This section offers the derivation of the Kumaraswamy version of the MO family of distributions. If v(t) represents the density function in Eq. (6), then substituting the corresponding W[F(x; ξ)] into Eq. (3) yields the Kumaraswamy Marshall-Olkin (Ku-MO) family given by Eq. (9). The density function of the Ku-MO family can be obtained by differentiating Eq. (9). Alpha Power Transformed Version of the MO Family If v(t) follows Eq. (6), then substituting the corresponding W[F(x; ξ)] into Eq. (3) yields the alpha power transformed version of the MO family given by Eq. (10). On differentiating Eq. (10), we get the density function of the alpha power transformed Marshall-Olkin (APTMO) family. Exponentiated Version of the Alpha Power Transformed Marshall-Olkin Family If v(t) follows Eq. (6) and W[F(x; ξ)] is chosen accordingly (again involving − log α), the exponentiated alpha power transformed Marshall-Olkin family is obtained from Eq. (3). A Sub-Model Description of the APTMO Family In this section, we investigate a special sub-case of the APTMO family. The density function corresponding to Eq. (10) is given by Eq. (11). Consider the distribution and density functions of the Weibull random variable with shape parameter γ. Then, the cdf of the APTMO-Weibull distribution is given by Eq. (12), and the density function corresponding to Eq. (12) is given by Eq. (13). For (i) σ = 0.5, γ = 1 and (ii) σ = 1, γ = 0.5, and different values of α and θ, plots of the pdf of the APTMO-Weibull distribution are sketched in Fig. 1. Statistical Modeling In this section, we demonstrate the flexibility of the APTMO-Weibull distribution by using heavy-tailed insurance claims data. The data set represents the unemployment insurance initial claims per month from 1971 to 2018, and it is available at https://data.worlddatany-govns8z-xewg. By applying the APTMO-Weibull distribution to these data, we observed that the APTMO-Weibull model can be used quite effectively to provide the best description of the real phenomena of nature. Corresponding to the insurance claims data, the fitted cdf and density plots of the APTMO-Weibull (α = 1.2, γ = 0.8, θ = 0.5, σ = 0.9) are presented in Fig. 2.
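As a quick check of the construction, the APTMO-Weibull cdf can be assembled from the standard building blocks: a Weibull baseline F(x) = 1 − exp(−θx^γ), the Marshall-Olkin map G = F/(σ + (1 − σ)F), and the alpha power transform (α^G − 1)/(α − 1) for α > 0, α ≠ 1. The following Python sketch assumes these standard textbook forms, which may differ in detail from the authors' exact parameterization; the function names are illustrative only.

```python
import numpy as np

def weibull_cdf(x, theta, gamma):
    # Assumed Weibull baseline: F(x) = 1 - exp(-theta * x**gamma)
    return 1.0 - np.exp(-theta * np.power(x, gamma))

def mo_cdf(F, sigma):
    # Standard Marshall-Olkin map applied to a baseline cdf F
    return F / (sigma + (1.0 - sigma) * F)

def apt_cdf(G, alpha):
    # Standard alpha power transform; reduces to the identity as alpha -> 1
    if np.isclose(alpha, 1.0):
        return G
    return (np.power(alpha, G) - 1.0) / (alpha - 1.0)

def aptmo_weibull_cdf(x, alpha, sigma, theta, gamma):
    return apt_cdf(mo_cdf(weibull_cdf(x, theta, gamma), sigma), alpha)

def aptmo_weibull_pdf(x, alpha, sigma, theta, gamma, eps=1e-6):
    # Numerical derivative of the cdf; the analytical pdf could be used instead
    return (aptmo_weibull_cdf(x + eps, alpha, sigma, theta, gamma)
            - aptmo_weibull_cdf(x - eps, alpha, sigma, theta, gamma)) / (2.0 * eps)

# Example with the parameter set reported for the insurance claims fit
x = np.linspace(0.01, 5.0, 200)
cdf = aptmo_weibull_cdf(x, alpha=1.2, sigma=0.9, theta=0.5, gamma=0.8)
```

Evaluating aptmo_weibull_cdf on the claims data and comparing it with the empirical cdf would reproduce the kind of diagnostic shown in Fig. 2.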
The probability-probability (PP) plot and Kaplan-Meier survival plots are sketched in Fig. 3. The graphical sketching provided in Fig. 3 indicates that the APTMO-Weibull distribution provides a better fit and could be chosen as an adequate model to analyze the heavy-tailed insurance claims data. In this article, a straightforward computation of the genesis of the MO family of distributions is provided. It is shown that the MO family of distributions can be obtained via the T-X family approach. It is also shown that other extensions of the MO family, such as the exponentiated version and the Kumaraswamy version, can be obtained using the T-X method. Furthermore, a sub-case of the APTMO family is considered and a real data set from the insurance sciences is analyzed. The real-life application shows that the newly proposed APTMO-Weibull distribution can be used quite effectively to model data in insurance sciences and other related fields. We hope that the method of genesis provided in this paper will attract researchers, who will use it to obtain extensions of other existing families of distributions. Data Availability: This work is mainly a methodological development and has been applied on secondary data related to the insurance science data, but if required, data will be provided. Funding Statement: This study is supported by the Department of Statistics, Yazd University, Yazd, Iran. Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
1,996.4
2021-01-01T00:00:00.000
[ "Mathematics" ]
“Assessing the stability of the banking system based on fuzzy logic methods” The functioning of the country's banking system is the basis for ensuring its economic development and stability. The state of the banking system often causes financial crises; therefore, ensuring its stable work is one of the main tasks of monetary policy. Meanwhile, it is important to find approaches to a comprehensive assessment and forecasting of the stability of the banking system that would allow obtaining adequate results. Based on a sample of data generated for the period from 2008 to the 1st quarter of 2020 with a quarterly breakdown, an integrated stability index of Ukraine's banking system was estimated. The analysis was based on 23 variables that characterize certain aspects of the functioning of the Ukrainian banking system. Using principal component analysis, five factors have been identified that have the greatest impact on ensuring the stability of the banking system. They were used to form an integrated index based on the application of the Mamdani fuzzy logic method. The results obtained adequately reflected the state of stability of the banking system for the analyzed period, which coincided in time with the crisis phenomena occurring in the Ukrainian banking system. The obtained value of the integrated index characterizes the stability of Ukraine's banking system at the average level, since it depends not only on the internal state of the system, but also on the influence of external factors, both national and international. INTRODUCTION Ensuring the stability of the banking system is one of the key tasks of countries' economic policies, regardless of their economic levels. This is due to the fact that the history of financial crises shows that, for the most part, the banking system was the source of their development, both at the level of individual national economies and at the regional and global levels. Therefore, today it is important to identify negative trends in ensuring the stability of banking systems. It is necessary to find methods and models that could take into account the influence of the maximum number of factors that have both positive and negative impact on the stability of the banking system. In addition, it is necessary to build an integrated index that accumulates both positive and negative manifestations in the behavior of individual indicators. The purpose of the paper is to identify the factors and build an integrated index reflecting the level of stability of Ukraine's banking system using fuzzy logic methods. LITERATURE REVIEW Ensuring financial stability is an acute issue not only for developing countries, but also for countries with a high level of economic development. Many scholars around the world have studied the relationship between monetary policy and financial stability; the overwhelming majority have concluded that the monetary policy pursued by countries' central banks and implemented at the bank level has a decisive influence.
Barnea Salter and Tarko (2019) argue that the problem of ensuring financial stability goes far beyond economics and is often political and institutional in nature. Ijaz, Hassan, Tarazi, and Fraz (2020) investigated the impact of banking system stability on the financial stability and economic growth of countries. They took panel data from 38 European countries from 2001 to 2017 as a basis and employed a fixed-effect estimator and a system generalized method of moments to control unobserved heterogeneity, endogeneity, dynamic effect of economic growth and inverse causality in its estimation. Grytten and Koilo (2019) went from reverse and, using eleven Eastern European countries as an example, tested the hypothesis of financial instability as an explanatory factor of the financial crisis put forward by Minsky and Kindleberger. They used a cyclical approach based on two blocks of indices -the real and financial sectors of the economy. Among the indicators of the financial sector, the key ones are those that characterize the state of monetary policy in the studied countries. The authors concluded that the uncontrolled increase in money supply and the credit boom led to overheating the economy and, as a consequence, to the financial crisis and the crisis of the real economy, reaffirming the decisive role of the banking system in ensuring economic stability. Younsi and Nafla (2019) examined panel data from 40 developed and developing countries using a re-gression model and concluded on the complementarity and importance of monetary variables and the level of bank soundness and their significant impact on the financial stability and economic development of countries. Vučinić (2015) conducted a comparative analysis of indicators characterizing financial stability of three countries: Montenegro, Serbia and the Netherlands. The author notes that financial stability is ensured according to various areas of financial activity: the activities of the Central Bank as a regulator, credit ratings assigned by rating agencies, including Standard and Poor's and Moody's, the state of macroeconomic development of countries (GDP dynamics, employment, inflation), the state of public finances and fiscal deficit, the state of the banking system (credit risk, liquidity risk, market risk, operational risk, capital adequacy, profitability of the banking sector). In addition, special attention is paid to the compliance of the banking sector with the requirements of the Basel Committee on Banking Supervision. Hausenblas, Kubicová, and Lešanovská (2015) analyzed the state of the banking system of the Czech Republic and its stability in the face of changes in the structure of interbank risks and the balance of regulatory characteristics. In this context, S. Kuzucu and N. Kuzucu (2017) also conducted a study using the Turkish banking system as an example. They focused on the need to comply with the standards of the Basel Capital Accords. Considerable attention is paid to ensuring the stability of the banking system in countries with Islamic banking. Among the studies on this topic, it is worth noting Rashid (2017), Korbi and Bougatef (2017), Mawardi (2020), Subbar and Vladimirovich (2020), Rizvi (2020). Barra and Zotti (2019) emphasize that the stability of the banking system depends on the type of banks and, to a lesser extent, on the level of concentration in the system. 
Gulaliyev, Ashurbayli-Huseynova, Gubadova, Ahmedov, Mammadova, and Jafarova (2019) propose to calculate the integrated banking stability index using the Minimax normalization method. This index was used to analyze the financial stability of the banking sector of 29 countries, as well as to form a risk map taking into account the main macroeconomic indicators of national economies. Research on stability of Ukraine's banking system is quite popular, since this issue is extremely relevant, given that banks are major participants in the financial market, as well as significant political and financial turbulence that have a significant impact on the banking sector. Bondarenko, Zhuravka, Aiyedogbon, Sunday, and Andrieieva (2020) investigated the impact of problem debt of banks, the profitability of the Ukrainian banking system on its stability and proved the existence of mutual influence between the studied indicators. Haber, D'yakonova, and Milchakova (2018) highlight the main problems that arose in the Ukrainian banking system as a result of the National Bank of Ukraine reforms and, at the same time, the adaptation of the banking system in the context of the development of financial technologies. Kozmenko, Shkolnyk, and Bukhtiarova (2016) analyzed the state of the banking system using self-organizing Kohonen maps. For the assessment, 32 banks were selected from different classification groups by the size of assets according to the National Bank of Ukraine's classification and 15 indicators were used that characterize the effectiveness of banks. Based on the calculations, five groups of banks were obtained: powerful banks, stable banks, problem banks, banks in crisis state and banks in the bankrupt stage. Given that the banks with the largest assets were included in the first two groups, Ukraine's banking system could be considered quite stable. Similar studies were conducted by Shkolnyk et al. (2018). They analyzed the indicators of 49 Ukrainian banks and the trajectory of their patterns, made a forecast of the state of both individual banks and the banking system as a whole. Equally important are studies that determine the interaction between the state of the banking system and monetary policy and the state of public finances. Most of them conclude about the main role of banks in pursuing financial policy of the state. Shvets (2020) analyzes the interaction between active monetary policy and the golden rule of public finance. The analyzed works mainly use the methods of correlation-regression analysis. The stability of the banking system depends on many factors, and cause-and-effect relationships cannot always be described by linear models, so fuzzy logic methods must be used. The latter are increasingly being used to model various processes in finance. To assess the stability of the Ukrainian banking system in this study, the Mamdani fuzzy inference model was chosen. The basis for this decision was a study by S. S. Izquierdo and L. R. Izquierdo (2018). These authors analyzed the features of this model, its significant advantages and disadvantages, and concluded that there was significant potential for applying the Mamdani method to modeling social and other complex systems, as well as using this model and results in such studies. Marfalino, Putra, Guslendra, and Yulia (2018) use Mamdani's (1994) fuzzy logic method to determine the optimal price for financial services. 
Musayev, Madatova, and Rustamov (2018) also use the Mamdani fuzzy logic method to assess the impact of tax administration reforms on tax potential. Hachami, Alaoui and Tkiouat (2019) used fuzzy logic methods to determine the impact of the type of micro-enterprises activity on the performance of microfinance investments. Boloș, Bradea, Sabău-Popa, and Ilie (2019) used Mamdani's fuzzy logic model to detect the financial sustainability risk of the assets owned by a company). Dalevska, Khobta, Kwilinski, and Kravchenko (2019) used the Mamdani method to model macroeconomic dynamics according to UN data for 189 countries. Thus, Mamdani's fuzzy logic model is widely used to model the state of economic systems. DATA AND METHODOLOGY In the course of the study, data were selected that characterize the state of the Ukrainian banking system by quarters for the period from 2008 to the 1st quarter of 2020, that is, the number of periods is 49. 23 variables were selected: Var1 is the ratio of regulatory capital to risk-weighted assets, Var2 is the ratio of Tier 1 regulatory capital to risk-weighted assets, Var3 is the ratio of non-performing loans excluding reserves to capital; Var4 is the ratio of non-performing loans to total gross loans, Var5 is the share of loans of depository cor-porations in total gross loans, Var6 is the rate of return on assets, Var7 is the rate of return on capital, Var8 is the ratio of interest margin to gross income; Var9 is the ratio of non-interest expenses to gross income, Var10 is the ratio of liquid assets to total assets, Var11 is the ratio of liquid assets to short-term liabilities, Var12 is the ratio of net open position in foreign currency to equity; Var13 is the ratio of capital to assets, Var14 is the ratio of large open positions to capital, Var15 is the ratio of the gross position of financial derivatives in assets to equity, Var16 is the ratio of the gross position of financial derivatives in liabilities to equity; Var17 is the ratio of trading income to gross income, Var18 is the ratio of personnel costs to non-interest expenses, Var19 is the spread between interest rates on loans and deposits (basis points); Var20 is the spread between the highest and lowest interbank rates (basis points); Var21 is the ratio of customer deposits to total gross loans (excluding interbank); Var22 is the ratio of foreign currency loans to total gross loans; Var23 is the ratio of foreign currency liabilities to total liabilities. Data were processed using eViews and MatLab packages. The research was carried out in the following sequence: • A correlation matrix was built and tested for multicollinearity, and all analyzed variables were checked for compliance with the normal distribution law. • Factor analysis was carried out using the principal components method. Using factor analysis allows determining the minimum number of hypothetical quantities that correspond to a larger number of output variables. Thus, there is a certain systematization of the analyzed variables. The use of the principal components method, along with other methods of factor analysis makes it possible to determine a sufficient number of factors to assess the stability of the banking system. Principal component method is the most common dimensionality reduction approach. • Mamdani's fuzzy inference model is used to define the comprehensive assessment of the stability of Ukraine's banking system based on factors determined by the principal com-ponent method. 
This model describes the relationship between the inputs and outputs of the knowledge base through "IF … THEN" fuzzy rules. Transparency is one of the main advantages of the Mamdani fuzzy model. To improve the accuracy of the model, it is trained, that is, the weights of the rules and the membership functions of the fuzzy terms are adjusted. Training a fuzzy model is a non-linear optimization task. In this case, the Mamdani base can be interpreted as a partition of the space of influencing factors into zones with blurred boundaries, within which the response function takes on a fuzzy value. Fuzzy inference is performed on a fuzzy knowledge base in which the values of the input and output variables are specified by fuzzy sets; that is, the crisp inputs are first fuzzified through their membership functions. The firing strength of each rule is computed with a t-norm, which is realized here by the minimum operation, and a fuzzy set for the output y is obtained that corresponds to the input vector X*. Further, a transition is made from the fuzzy set defined on the universal set of fuzzy terms {d1, d2, …, dm} to a fuzzy set on the output interval: the membership function of each output term is "cut" (minimum operation) at the corresponding firing level, and the resulting fuzzy set y* is obtained by combining the cut sets, which are then aggregated with the maximum operation. The crisp value of the output y*, corresponding to the input vector X*, is determined by defuzzification of the aggregated fuzzy set, in this case by the center-of-gravity method. Thus, an integral value is obtained that characterizes the stability of Ukraine's banking system. The indicators of banks as a whole are rather uneven and have significant fluctuations over time. At the same time, checking the variables used in the model for compliance with the normal distribution law showed that the overwhelming majority of them comply; however, for certain indicators, in particular Var15 (the ratio of the gross position of financial derivatives in assets to equity), there is no normal distribution due to the instability of transactions with derivatives, changes in the legal framework regulating such transactions, and the underdevelopment of the derivatives market in Ukraine. Calculations The analysis of the correlation matrix showed that Var1 and Var2 (0.97), as well as Var6 and Var7 (0.99), had strong correlations, that is, they had signs of multicollinearity. Therefore, it was decided to exclude Var2 and Var7 from further analysis. Thus, in further calculations, the values of 21 variables are used. To determine the factors affecting the state of stability of Ukraine's banking system, a factor analysis was carried out using the principal component method. The analysis revealed five main factors explaining 81% of the variance. These factors are: factor 1, the state of the capital formed in the Ukrainian banking system, which includes Var3, Var6, Var12, Var14, and Var18; factor 2, the state of the formed assets of banks (Var4, Var5, Var10, and Var21); factor 3, performance, which includes not only the yield on basic operations, but also on transactions with securities carried out by banks, including derivatives (Var8, Var9, Var16, and Var17); factor 4, the level of dollarization of both assets and liabilities of banks (Var22 and Var23); and factor 5, which characterizes the role of transactions in the interbank market (Var20). The values of all five factors in each of the 49 quarters studied are given in Appendix C.
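The factor extraction and the fuzzy model were built in eViews and MatLab; neither the rotation used in the factor analysis nor the actual rule base is listed in the paper. The following Python sketch is therefore only an illustration of the pipeline: standardization of the indicators, extraction of five principal components, and a toy Mamdani block (two inputs, three rules) with min implication, max aggregation, and center-of-gravity defuzzification. The membership functions, the rules, and the random placeholder data are assumptions, not the authors' specification.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative stand-in for the 49 quarters x 21 indicators used in the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(49, 21))                      # placeholder data, not the real indicators
X = (X - X.mean(axis=0)) / X.std(axis=0)           # standardize before extracting components
factors = PCA(n_components=5).fit_transform(X)     # five principal components, as in the paper

def tri(x, a, b, c):
    """Triangular membership function with a < b < c."""
    x = np.asarray(x, dtype=float)
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

y = np.linspace(0.0, 1.0, 501)                     # universe of the stability index
out_low = tri(y, -0.5, 0.0, 0.5)
out_mid = tri(y, 0.25, 0.5, 0.75)
out_high = tri(y, 0.5, 1.0, 1.5)

def stability_index(f1, f2):
    """Toy Mamdani inference: min implication, max aggregation, centroid defuzzification."""
    f1_low, f1_high = tri(f1, -6, -3, 0), tri(f1, 0, 3, 6)   # terms for standardized factor scores
    f2_low, f2_high = tri(f2, -6, -3, 0), tri(f2, 0, 3, 6)
    rules = [
        (min(f1_low, f2_low), out_low),                       # IF f1 low  AND f2 low  THEN low
        (min(f1_high, f2_high), out_high),                    # IF f1 high AND f2 high THEN high
        (max(min(f1_low, f2_high), min(f1_high, f2_low)), out_mid),
    ]
    aggregated = np.zeros_like(y)
    for strength, out_set in rules:
        aggregated = np.maximum(aggregated, np.minimum(strength, out_set))  # cut and aggregate
    return float((aggregated * y).sum() / (aggregated.sum() + 1e-12))       # center of gravity

index_per_quarter = [stability_index(f[0], f[1]) for f in factors]
```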
The Mamdani fuzzy inference model was used to conduct a comprehensive assessment of the stability of the banking system. The input factors for the construction of this model were the five factors indicated above. For such a model, term sets are defined for each of the input and output variables and, accordingly, a membership function is specified for each of the terms. The values of the comprehensive assessment of the stability of the banking system were then calculated for each of the specified quarterly periods (49 quarters) (see Figure 2). The highest indicator of the stability of the Ukrainian banking system was observed in 2012, followed by a gradual decline that became significantly steeper in 2014 and 2015. During this period, there was a significant decrease in the volume of banks' assets and significant losses due to the political events that led to the financial and economic crisis in Ukraine: the loss of control over part of the territory in the east of the country and the loss of the Autonomous Republic of Crimea. In addition, during 2014-2017, the National Bank of Ukraine carried out a large-scale reform to cleanse the banking system of low-quality banks with low transparency; 75 of 180 banks are currently operating in the market. At the same time, the approaches to monetary policy changed radically, and the structure of the National Bank of Ukraine itself was reformed. All this is reflected in the integrated index, which shows significantly unstable dynamics during this period. Starting from the 3rd quarter of 2017, the value of the integrated index remained almost at the same level and amounted to 50%. This is due to the fact that during this period the largest stage of the reform of the banking system was actually completed. At this time, the maximum number of banks had been withdrawn from the market, and those banks that continued to leave the market had insignificant assets and did not significantly affect the performance of the banking system as a whole. CONCLUSION The study leads to the following conclusions. The stability of banking systems is assessed using different methods and different sets of indicators. A review of scientific publications on the issue shows that the vast majority of studies are carried out using the correlation-regression method, which does not always take all factors into account, especially with non-linear dependencies. To take into account the influence of various factors as much as possible, it was decided to use the fuzzy logic method, in particular, the Mamdani fuzzy inference model. The results obtained in the course of modeling made it possible to identify five main factors that have a decisive impact on the stability of the banking system of Ukraine. These factors are as follows: the state of assets and liabilities formed by banks, the level of efficiency of banking operations, the volume of formed assets and liabilities in foreign currency, as well as the state of the interbank market. The application of Mamdani's fuzzy logic model allowed determining the integrated stability index of Ukraine's banking system. Its fluctuations on a quarterly basis from 2008 to the 1st quarter of 2020 are explained by the corresponding events that occurred in certain periods of time and influenced, both positively and negatively, the state of the Ukrainian banking system. Further research is planned to determine how the stability of the banking system of Ukraine can be affected by the economic situation in the country and on world markets in the pandemic.
The indicators of the first quarter of 2020 entered into the model did not show any fluctuations, but further studies should take into account the lagging effect, which can largely manifest itself in the second half of 2020.
4,683
2020-10-07T00:00:00.000
[ "Economics" ]
Time-efficient simulations of fighter aircraft weapon bay A cavity flow exhibits aero-acoustic coupling between the separated shear layer and reflecting waves within the walls of the cavity, which leads to the emergence of dominant modes. It is of primary importance that this flow mechanism inside the cavity is understood to provide insights and control the relevant parameters and that it can be properly predicted using state-of-the-art CFD tools. In this study, an open-cavity configuration with doors attached on the sides and a length-to-depth ratio of 5.7 has been studied numerically using the TAU code developed by the German Aerospace Center for transonic flows, with three simulation methods: DES with wall functions and SST-SAS with either resolved wall flow or wall function techniques. The free-stream conditions investigated are Mach number (Ma) 0.8 with Reynolds number (Re) 12 × 10^6. The Rossiter modes occurring in the cavity due to the acoustic feedback mechanism have been numerically computed and validated. The SST-SAS model is around 90% more computationally efficient compared to the hybrid RANS-LES model while providing excellent accuracy in predicting the Rossiter modes. The SST-SAS model with wall functions is 50% more computationally efficient than wall-resolving SAS simulations, showing good behaviour in predicting modal frequencies and shapes, with further scope for improvement in the spectral magnitude levels. Introduction Historically, research of cavity flows has been done by aerospace companies, specifically for weapon bays. A cavity flow presents complex unsteady flow and acoustic processes due to the shedding of the separated shear layer from the front edge of the cavity. This causes severe limitations for operating weapon bays, landing gears, etc. In weapon bays, the deployment of stores could be improved by controlling the flow mechanisms existing inside the cavity, which requires a fundamental understanding of cavity flow physics. Additionally, at present, due to the requirement of stealth characteristics of the aircraft, the investigation of cavity flows has become even more crucial. In general, there are closed, transitional, and open-cavity flow types [1]. Categorization of the cavity flow type can be based on several factors such as length-to-depth ratio (L/D) and Mach number (Ma). In the closed cavity configuration, the shear layer flow separates from the front edge of the cavity, loses its energy, and reattaches to the cavity before separating again. There exist two large-scale recirculations at either corner of the cavity. In the open-cavity flow, there is one large-scale recirculation caused by the shear layer, which carries enough energy to cross the length of the cavity.
The shear layer in the open-cavity flow type impinges on the rear wall, which then acts as an acoustic source to initiate sustained flow oscillations inside the cavity. In the transitional cavity, the flow reattachment to the ceiling of the cavity is unstable. There are many articles in the literature that investigate the modal tones produced in the cavity. One explanation that is accepted by many authors is the Rossiter flow oscillation model (Eq. 1). Rossiter [2] postulated a semi-empirical model to estimate the dominant modal frequencies produced in the cavity. The model is based on the observation that vortices shed from the shear layer convect downstream and generate pressure waves at the rear wall; these reflected pressure waves travel upstream and excite new aerodynamic disturbances in the shear layer. This feedback process continues and leads to a self-sustained oscillation process. Much of the cavity flow research has been experimental. The study by Rossiter [2] was one of the foremost studies that provided a solid understanding of the physics of the acoustic-flow dynamic interaction. The model by Rossiter (Eq. 1) is still widely used to predict the modes, particularly in subsonic and transonic flow conditions. However, the model has shown some inaccuracies in supersonic flow conditions. Heller et al. [3] improved the Rossiter model by assuming that the speed of sound inside the cavity is equal to that at the stagnation conditions of the freestream, thereby extending its validity to supersonic flows as well. Handa et al. [4] studied the generation and propagation of pressure waves experimentally and showed their periodic nature. The study explains the process by which the pressure waves are generated and attempts to clarify the relationship between the shear-layer motion, pressure-wave generation, and the pressure oscillation inside the cavity. After extensive wind-tunnel studies on the cavity, several attempts have been made to study cavity flows numerically, mostly on the M219 cavity [5], to describe the flow physics inside the cavity and accordingly determine the resonant modes. Henderson et al. [6] carried out time-accurate simulations using the RANS k-model, and it has been shown that such models are unable to predict the broadband spectra. To determine both the narrowband and broadband spectra accurately, advanced turbulence-resolving methods such as DES based on the Spalart-Allmaras (SA) and k-ω SST models have been used [7]. Many of the numerical studies pertaining to cavity flows were based on the URANS approach. Chang et al. [8] studied 3D incompressible flow past an open cavity using the SA model. Although the predictions of the mean velocity field from the URANS and the scale-resolving simulation were similar, the study found that the URANS predictions show poor agreement with LES and experimental results for the turbulent quantities. Woo et al. [9] studied the three-dimensional effects of supersonic cavity flow due to the variation of cavity aspect and width ratios using the RANS k-turbulence model. The compressible Navier-Stokes equations were solved with the fourth-order Runge-Kutta method and an FVS method with van Leer's flux limiter. The study concluded that mode 2 appeared as the dominant oscillation frequency regardless of the aspect ratio of the cavity in two-dimensional flow, whereas modes 1 and 2 appeared in three-dimensional cavities of small aspect ratios. 
With an increase in the aspect or width ratios, only mode 2 or 3 appeared as the dominant frequency. It is understood that, due to the nature of the URANS formulation, the method has an inherent inability to detect the modes accurately. Therefore, a number of studies have been dedicated to scale-resolving turbulence models such as LES. The study by Larcheveque et al. [10] shows the accuracy of employing LES or DES methods for a 3D cavity case where doors are present and aligned vertically. DES simulations are still expensive, whereas the scale-adaptive simulation (SAS) approach developed by Menter [11] has been shown to provide results nearly as good as DES or LES. Girimaji et al. [12] evaluated the scale-adaptive simulation of M219 cavity flows for transonic flow conditions. The SAS results showed good agreement with the experimental data for the M219 cavity at a tenth of the time required for detached-eddy simulations. As SAS simulations are still quite expensive for use in the industrial design process, a further reduction in the computational time is sought. Therefore, in the pursuit of reducing the computational effort for cavity simulations, SAS simulations using the wall function technique have been investigated in this study, and their merits and demerits have been outlined in terms of acoustic prediction. In this study, a novel open-cavity configuration with opened doors at the sides [13] has been investigated numerically at transonic flow conditions using two scale-resolving methods, namely a hybrid RANS-LES method with wall functions (DES-WF) and scale-adaptive simulation (SAS). The scale-adaptive simulation was also carried out using the wall function technique (SAS-WF) to investigate the feasibility of simulating cavity flows this way. The numerical simulations have been performed using the DLR-TAU computational fluid dynamics (CFD) code [14] under the flow conditions of Ma 0.8 and Re 12 × 10^6. The numerically computed RMS values and wall spectra have been validated against the experimental data, which have been made available for this study by Airbus Defence and Space [13]. Model configuration and mesh generation The cavity configuration used in this study has a length-to-depth ratio (L/D) of 5.7 and a length-to-width ratio (L/W) of 4.16 (see Fig. 1). The cavity is cut into a flat side along the centre line of the cavity rig at a certain distance from the rig's sharp leading edge. Under the transonic flow condition of Ma = 0.8 with a flow Reynolds number of 12 × 10^6, the cavity is expected to exhibit resonance. The doors are placed on either side of the cavity with positive Z pointing into the cavity. The probe locations are placed equidistantly along the cavity ceiling and are named L1 to L8, as shown in Fig. 1. The cavity geometry has been spatially discretized with a mesh consisting of unstructured elements with tetrahedral, prism, hexahedral, and pyramid cells. A RANS model was used to estimate the integral length scale, and according to the estimates, cell sizes have been chosen appropriately during the meshing stage. Figure 2 shows the DES-WF mesh that is used for this study. In Fig. 2a, an overview of the mesh distribution is shown. As the motivation of the study is to investigate the flow mechanisms as efficiently as possible in terms of computational time, only the region of interest has been meshed with the highest level of refinement, as shown in Fig. 2. By adapting the mesh proportion inside the cavity, as shown in Fig. 
2b, one can save a significant number of mesh nodes. In the cavity geometry, a 50% reduction in the number of prism cells has the potential to reduce the total number of mesh nodes by almost 40%, which suggests that one can save a significant amount of computational time by reducing the number of prism layers while adopting a wall function technique. The model has been meshed in half and mirrored about the symmetry axis to avoid asymmetric grid effects. The DES-WF mesh is composed of 12.5 × 10^6 grid nodes. In the SAS mesh, the cell size in the shear layer and inside the cavity is almost double that of the DES-WF mesh, and the mesh contains around 5 × 10^6 grid nodes. The number of mesh nodes in the SAS-WF mesh is about half of that of the SAS mesh. Moreover, a non-dimensional wall coordinate (y+) of more than 100 has been set along the walls of the cavity for the DES-WF and SAS-WF cases. Flow solver and turbulence modelling The numerical simulations in this study have been carried out using the DLR-TAU code, a finite volume (FV) flow solver based on the compressible Navier-Stokes formulation developed by the German Aerospace Center (DLR) [14]. The RANS approach that is popular in industry for turbulence modelling loses many of the intricate details of the flow field when it is employed for unsteady cavity flow simulation. An alternative is an LES-based model, which resolves the major part of the energy-carrying eddies and models the isotropic sub-grid scales [15]. As the cavity configuration has a boundary layer that develops on the cavity rig upstream of the cavity and then leads to the shear layer formation, an LES model would require an uncompromised resolution of the boundary layer and a substantial computing time for this application. Therefore, the following numerical approaches for turbulence modelling are employed in this study. Hybrid RANS-LES approach In the authors' previous work [16], the hybrid RANS-LES approach with a wall-resolved technique was investigated and the first promising results for this cavity configuration were published. In the present study, some of the numerical settings used in the previous study [16] have been optimised, and the DES part of this study differs from the earlier work by using matrix dissipation and adopting the wall function approach for the cavity flow. By applying matrix dissipation in this study, the artificial dissipation is reduced to prevent excessive damping of the resolved turbulent structures. The SA-neg IDDES model [17] is based on the standard one-equation Spalart-Allmaras model, which solves a transport equation for the eddy viscosity [18] with a production term P and a destruction term. The model represents the standard SA model, except that the length scale d in the destruction term is modified. In the SA model, d is the distance to the nearest wall. In the IDDES model [19] that is used in this study, d is replaced with a modified length scale d̃, defined in terms of the grid scale Δ = max(Δx, Δy, Δz) and the shielding function f_d, which is designed to be unity in the LES region and zero elsewhere. The SA-neg model is the same as the "standard" version when the turbulence variable ν̃ is greater than or equal to zero. When the kinematic eddy viscosity would become negative, Eq. 2 is modified such that the turbulent eddy viscosity in the momentum and energy equations is set to zero [20]. Scale-adaptive approach To only resolve turbulence where significant fluctuations exist, the work by Menter et al. 
[21] suggested a modified turbulence model which adds a source term Q_SAS, based on the local von Karman length scale L_vK, to the dissipation rate equation. This scale-resolving technique has been used in this study with the standard k-ω SST model [22] as the base model, in conjunction with the wall function technique. The governing equations of the SST-SAS model differ from the k-ω SST model by this additional source term. Unlike LES, the model also remains well defined if the mesh cells get coarser and do not allow resolving scales well within the inertial range. This makes it attractive in the present application, where the aero-acoustic effects are mostly governed by the larger turbulent scales, which, in turn, need to be predicted accurately. Wall functions approach There are two classical wall boundary conditions, namely low-Re and high-Re type boundary conditions. The low-Re boundary condition imposes no-slip at the wall and requires a finer mesh. The aim of grid-independent wall functions is to provide a boundary condition at solid walls that enables flow solutions independent of the location of the first grid node above the wall. In this study, wall functions based on the universal law of the wall are employed for the DES-WF and SAS-WF simulations, whereas the low-Re boundary condition is used for the SAS simulation. The RANS equations are solved only down to the first grid node above the wall, where they are matched with an adaptive wall function solution. The matching condition (Eq. 6) ensures that the wall-parallel components of the RANS solution and the wall function are equal at the wall distance y of the first node; this condition is solved for the friction velocity u_τ using Newton's method. The shear stress is then prescribed at the wall node. In all the simulations, a second-order central scheme and a backward Euler scheme have been used to discretize the spatial fluxes and temporal terms, respectively. The time step size has been chosen such that the convective Courant-Friedrichs-Lewy (CFL) number is well below 1.0. Results and discussion In this section, the experimental data will first be analysed and the effect of the FFT window length will be determined to support the validation of the numerical simulations. Moreover, this section presents the physics of the cavity flow, including the validation of the DES-WF, SAS, and SAS-WF simulations. The commonalities and differences between the simulation methodologies used will be highlighted and the corresponding findings will be outlined. Acoustic spectral analysis In this subsection, the Rossiter modes are first estimated using the semi-empirical model, which will then be followed by the flow statistics with respect to the prediction of the resonant modes by the different simulation methodologies and the RMS pressure. Theoretical estimation of Rossiter modes The pressure signal that varies over a time series has been extracted along the cavity ceiling and transformed into frequency space to investigate the modes existing in the signal. Analytically, the frequencies at which the Rossiter modes occur can be computed through Heller's modified Rossiter oscillation model (Eq. 7) [3]. Effect of signal length on the RMS and FFT values A total of 20.0 s of pressure measurement data have been made available for the validation of the simulation results. Prior to deciding on the length of the time series to be simulated, awareness of the effect of signal length on the FFT and RMS statistics is sought. To estimate this effect, some fundamental analysis of the raw data has been performed. 
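As an aside to the theoretical estimate above: Heller's modified Rossiter relation, referenced as Eq. 7, is commonly quoted in Strouhal-number form and can be evaluated with a short script. The sketch below uses the usual literature constants (phase lag alpha ≈ 0.25, vortex convection ratio kappa ≈ 0.57) and placeholder values for the cavity length and free-stream velocity, which are not quoted in the text; it illustrates the formula rather than the exact parameter set used in the paper.

import numpy as np

def heller_rossiter_frequencies(mach, L, U_inf, modes=(1, 2, 3, 4),
                                alpha=0.25, kappa=0.57, gamma=1.4):
    # Heller-modified Rossiter model:
    # St_m = f_m * L / U_inf = (m - alpha) / (Ma / sqrt(1 + (gamma - 1)/2 * Ma^2) + 1/kappa)
    denom = mach / np.sqrt(1.0 + 0.5 * (gamma - 1.0) * mach**2) + 1.0 / kappa
    return {m: (m - alpha) / denom * U_inf / L for m in modes}

# Placeholder cavity length [m] and free-stream velocity [m/s] (assumed, not from the text).
print(heller_rossiter_frequencies(mach=0.8, L=0.5, U_inf=270.0))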
The 20.0 s of measurement data have been split into two groups: one group containing 160 samples of 0.125 s each and the other group containing 40 samples of 0.5 s each. Figure 3a shows the RMS pressure for both sample groups. In the 0.125 s case, the deviation in the RMS values is around 350 Pa near the front edge of the cavity between x/L = 0 and x/L = 0.2, and its value increases along the cavity length, reaching around 500 Pa near x/L = 0.9. In the 0.5 s case, the deviation in the RMS values is around 100 Pa near x/L = 0.2 and increases to 350 Pa towards x/L = 0.9. Figure 3b shows the FFT results of the two sample groups and displays a variation of around 8-9 dB/Hz in the amplitude levels for the 0.125 s case, whereas the variation is around 4-5 dB/Hz for the 0.5 s case. This underlines the importance of selecting an appropriate signal length for the validation of the simulation data. Performance in terms of acoustic prediction A fast Fourier transform (FFT) has been performed based on Welch's method to decompose the pressure fluctuations into their frequency components. The input for the FFT has been the pressure data, which have been collected at over 1000 locations on the mid-plane of the cavity. From all these locations, the amplitudes of the first four resonance modes were identified and interpolated on the mid-plane to visualise the shape of the modes inside the cavity. The results can be seen in Fig. 4, showing the SPL distribution of Rossiter modes 1, 2, 3, and 4 on the plane y = 0. Rossiter mode 1 has a node in the centre of the cavity and anti-nodes at both ends, and its front part is significantly overlaid by the shear layer, which suppresses the mode with its own frequency. The higher-order Rossiter modes 2, 3, and 4 correspond to the standing waves resulting from the organised vortical structures between the front and rear wall of the cavity. It is also observed that the lip of the cavity is overlaid by the shear layer in all the modes. Figure 5 shows the FFT plots of the experimental data and of the DES-WF, SAS, and SAS-WF simulations for the probe locations L1 and L8. For the experimental data, the band formed by the group with 0.5 s samples is shown in the FFT. Due to the expensive nature of the DES-WF simulation, the length of the time series simulated with DES-WF is 0.15 s, whereas the length of the series simulated in both the SAS and SAS-WF simulations is 0.25 s. The time series of the simulations have been processed for the FFT analysis using the Hamming window function, with the maximum offset length of the FFT windows equivalent to the integral time scale computed through the autocorrelation function. The lowest frequency that the simulated data can resolve is kept around 40 Hz for all the simulations. From the experimental data, the dominant modes at probe location L1 are 1, 2, and 3, whereas the dominant modes at L8 are 2 and 3. At probe location L1, mode 1 is predicted well by the DES-WF and SAS simulations. Mode 2 is slightly over-predicted by the SAS simulation, whereas mode 3 is slightly under-predicted by the DES-WF simulation. The SAS-WF simulation tends to over-predict the modes, but shows the tendency to capture the frequencies as well as the DES-WF and SAS simulations. As the pressure fluctuations are higher near the rear wall, it is worth analysing the performance of the simulations with the FFT of probe location L8. It is clearly seen that the SPL levels in general are higher at probe location L8 than at probe location L1. 
At probe location L8, mode 1 is captured quite well by the SAS simulation compared to the DES-WF and SAS-WF simulations. Modes 2, 3, and 4 are captured adequately well by the DES-WF and SAS simulations. Although the DES-WF results lie slightly outside the experimental range formed by the 0.5 s samples, they show reasonable agreement in capturing the modes considering that the length of the series simulated is 0.15 s. Table 1 shows the frequencies of the modes computed from the modified Rossiter model and the measured data together with the SAS simulation results. In the SAS simulations, the predicted modes fit extremely well with the frequencies of the experimental data and also with the theoretical modes, as shown in Table 1. At probe location L1, the magnitude of the dominant mode is predicted well, with a slight over-prediction of mode 2. At probe location L8, all the modes are predicted exceptionally well in terms of relative magnitudes, absolute magnitude, and frequencies. Considering that severe pressure fluctuations exist towards the rear wall, the prediction of the modes at probe location L8, with its higher SPL levels, is extremely sensitive, and the fact that the SAS simulation has captured the features of this location shows the reliability of the SAS simulation results. In the SAS-WF simulation, the prediction of the Rossiter frequencies is still quite good. It is observed that there is an offset of 3-4 dB/Hz when compared with the SAS simulation results. Considering that the SAS-WF simulations are 50% computationally cheaper than the SAS simulations, the results seem quite promising. To summarise the results, it is observed that the overall behaviour of all simulations is extremely good in terms of frequency prediction. However, the magnitude levels between the simulations show a noticeable difference. In particular, the SAS simulations fit the magnitude levels as well as the DES-WF simulations. The SAS-WF simulations show some good trends in predicting the modal frequencies and shapes, with scope for improvement in their prediction capability. Figure 6 shows the plot of the root mean square (RMS) of pressure along the centreline of the cavity ceiling compared with the measured data. In the DES-WF simulation, the predicted RMS of pressure fits the experimental data extremely well. In the SAS simulation, the predicted values fit the experimental data within the first 30% of the cavity length, are over-predicted in the middle region, and are captured reasonably well towards the rear portion. The reason for the over-prediction is related to the delayed prediction of the resolved structures in the shear layer (see Fig. 7). The activation of the Q_SAS term has been delayed, and thereby the predicted shear layer breakdown shows a different behaviour from the DES-WF simulation. This delayed prediction of the shear layer has the consequent effect of higher fluctuations over the mid-section of the cavity. In the SAS-WF simulation, the RMS profile follows the trend of the DES-WF simulation quite well but over-predicts the values significantly towards the regions of higher pressure RMS. The over-predicting behaviour of SAS-WF is also related to the distribution of the resolved turbulent kinetic energy, as shown in Fig. 7. The shear layer breakup has been considerably delayed compared to both the DES-WF and SAS simulations, and clearly this has increased the scale of the fluctuations by a significant margin in the second half of the cavity. 
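The spectral post-processing used above (Welch's method with a Hamming window and a lowest resolved frequency of about 40 Hz, together with the RMS pressure along the ceiling) can be reproduced on any wall-pressure time series with a few lines of Python. The sampling rate and the synthetic test signal below are illustrative assumptions only, not values taken from the measurements or simulations.

import numpy as np
from scipy.signal import welch

def pressure_spectrum(p, fs, f_min=40.0):
    # Welch PSD with a Hamming window; nperseg sets the lowest resolvable frequency ~ fs/nperseg.
    nper = int(fs / f_min)
    f, pxx = welch(p - np.mean(p), fs=fs, window="hamming", nperseg=nper, noverlap=nper // 2)
    spl = 10.0 * np.log10(pxx / (20e-6) ** 2)   # spectral level in dB/Hz re 20 micro-Pa
    return f, spl

fs = 50_000.0                                   # assumed sampling rate [Hz]
t = np.arange(0.0, 0.25, 1.0 / fs)              # 0.25 s record, as simulated with SAS
rng = np.random.default_rng(0)
p = 300.0 * np.sin(2 * np.pi * 350.0 * t) + 30.0 * rng.standard_normal(t.size)  # synthetic tone + noise
f, spl = pressure_spectrum(p, fs)
p_rms = np.sqrt(np.mean((p - p.mean()) ** 2))   # RMS pressure, as plotted along the ceiling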
Flow field visualisation In this section, some of the flow features of the cavity, such as the resolution of the turbulent structures, the boundary layer profile, and the turbulent kinetic energy profile, are investigated, and the performance of the simulations in terms of acoustic levels is discussed. Visualisation of flow structures To visualise the structures in the cavity configuration, the Q-criterion has been computed and is shown in Fig. 8a. The attached boundary layer upstream of the cavity separates from the front edge and starts to shed vortices of varying scales. During their lifetime, the vortical structures combine with other structures as they are convected downstream. After impinging on the rear wall of the cavity, some of the flow structures are ejected and travel downstream, while some travel upstream. Figure 8b shows the highly turbulent behaviour at the downstream corner of the cavity, with the flow redirected from the rear wall and interacting with the oncoming shear flow components. Figure 8b also shows a characteristic feature of an open-cavity configuration: the shear layer starts developing from the front edge as a narrow region in the transverse direction, grows in the streamwise direction, reaches its maximum width near the centre of the cavity, and reduces in width as it approaches the rear edge of the cavity. The prediction of the shear layer width by the simulations is more clearly visible in the distribution of the resolved turbulent fluctuations in the streamwise and crosswise directions, u′w′, which is shown in Fig. 9. Furthermore, the distribution is such that the streamwise and crosswise velocity fluctuations are more intense towards the aft wall in the case of the SAS-WF simulation compared to the SAS simulation, while having their maximum inside the core of the shear layer. Figure 10 presents the flow-field resolving capability of the DES-WF, SAS, and SAS-WF simulations inside the cavity by showing the vorticity magnitude in the plane y = 0. One can expect the flow field resolution from the DES-WF simulation to be high, and this is indeed the case, as seen in Fig. 10a, which is used as the reference to investigate the resolving capability of the other turbulence models. Figure 10b shows the corresponding snapshot from the SAS simulation, and Figure 10c shows the flow-field snapshot from SAS-WF. The fine-scale structures are clearly less pronounced than in the SAS simulation. The wall functions upstream of the cavity did not produce resolved structures, and this has led to the difference in the resolving capability of this variant. L_vK prediction between SAS and SAS-WF To further investigate the difference in the turbulent field resolution between the SAS and SAS-WF simulations, the distribution of the von Karman length scale has been investigated. The only difference between the SAS and SAS-WF meshes is the number of prism layers close to the wall. The SAS mesh has 35 prism layers with a y+ value of less than 1.0, whereas the SAS-WF mesh has 10 prism layers with a y+ value greater than 100. It is therefore worthwhile to investigate the von Karman length scale L_vK predicted by the SAS and SAS-WF simulations, as shown in Fig. 11, especially close to the walls. The von Karman length scale represents a key element in triggering the model to allow the generation of resolved turbulence in scale-adaptive simulations. As seen in Fig. 11, L_vK is predicted strongly over a larger region in the SAS simulation, whereas in the SAS-WF simulation the region of L_vK presence is more limited. 
One can clearly observe that there is a difference near the upstream wall of the cavity between the two simulations. The authors believe that the usage of wall functions has caused the SAS model to operate in pure RANS mode near the upstream wall of the cavity, which has led to the difference in the resolving capability of the model inside the cavity between the SAS and SAS-WF simulations. If the model had operated in the resolving mode close to the front edge of the cavity, SAS-WF could have better predicted the shear layer growth and its breakdown, as observed in the SAS and DES-WF simulations. In Fig. 12, the asymptotic near-wall flow profile at a distance of 0.1L upstream of the cavity is shown. It is noticed that, as a result of the RANS behaviour close to the wall without resolved structures, the thickness of the boundary layer based on the 99% U_∞ measure in the SAS-WF simulation is larger than the thickness predicted by the DES-WF and SAS simulations. The boundary layer developed upstream of the cavity has an important effect on the growth of the shear layer. Most, if not all, of the eddy viscosity contained in the boundary layer is transferred to the shear layer, making it more stable than in the DES-WF and SAS simulations. This thicker shear layer with its higher turbulent energy content cannot break down as early as in the SAS simulation, and the process of shear layer breakdown is thereby delayed, as the shear layer contains most of the energy-carrying eddies and they do not dissipate enough energy. This leads to an over-prediction of the energy levels inside the cavity, as seen in Figs. 5 and 6. Moreover, the predicted shape factor (i.e., the ratio of displacement to momentum thickness) has been determined as 1.24 in the case of the DES-WF simulation at a distance of 0.1L upstream of the cavity, with a local Re_x = 2.8 × 10^6. Further, it is observed that, with respect to the DES-WF case, there is a nominal over-prediction of 5-10% in the displacement and momentum thicknesses in the SAS simulation, whereas around 20% over-prediction is found in the case of the SAS-WF simulation, both showing deviations of the shape factor as low as 3% from the DES-WF case. Finally, the 99% thickness for the DES-WF reference case has been found to be 60L x, which coincides with the SAS prediction. Figure 13 further confirms the presence of more energy inside the cavity by showing the mean turbulent kinetic energy profile for four slices at x/L = 0.19, 0.37, 0.56, and 0.94. It is evident that the turbulent kinetic energy produced by the SAS-WF simulation is higher than in the SAS simulation. The thicker boundary layer profile in SAS-WF has transferred most of its energy to the shear layer, and therefore the turbulent kinetic energy is maximal at the cavity lip. Further downstream, at the locations x/L = 0.56 and x/L = 0.94, more energy is seen to be transferred inside the cavity in the SAS-WF simulation, which leads to the higher pressure fluctuations in SAS-WF seen in Fig. 6. Conclusion and outlook In this study, a novel cavity configuration with sidewise doors has been studied numerically with three simulation methodologies, namely DES with wall functions (DES-WF) and SAS with wall-resolved and wall-function near-wall treatments (SAS and SAS-WF), under the transonic flow conditions of Ma 0.8 and Re 12 × 10^6. The correlation of the Rossiter modes with the flow processes has been identified in detail through the FFT of the DES-WF simulation results. 
It has been shown that all three simulation methodologies can capture the Rossiter frequencies well, with a marginal over-prediction of the spectral magnitudes by the SAS-WF simulation. The reason for the over-prediction behaviour in the SAS-WF simulation has been investigated by means of the boundary layer profile and the resolved fluctuations inside the cavity. The commonalities and differences between the SAS and SAS-WF simulations were investigated and outlined using the von Karman length scale and the vorticity fields. In terms of computational cost, the DES-WF simulation is estimated to be around 50% cheaper than a wall-resolved DES simulation, whereas the SAS simulation is estimated to be 90% faster than the DES simulations, and the SAS-WF simulation is twice as fast as the SAS simulation. As the cheapest of the three simulations carried out in this study, SAS-WF shows good trends in predicting the modal frequencies and shapes. The reasons for its moderate over-prediction behaviour have been investigated and outlined in this study. To overcome these numerical issues in SAS-WF simulations, future work will address the breakdown phenomena of the shear layer in detail by incorporating a synthetic turbulence forcing term. Funding Open Access funding enabled and organized by Projekt DEAL. This work has been carried out with financial support from Airbus Defence and Space (ADS) under the project "Analysis of Unsteady Effects in Fighter Aircraft Aerodynamics", which is gratefully acknowledged. The authors would like to thank the German Aerospace Center (DLR) for providing the TAU code and Ennova Technologies, Inc. for the meshing software. The authors would also like to acknowledge the Gauss Centre for Supercomputing for making the required computing hours available to this study. Data availability Datasets generated during the study can be made available by the corresponding author upon request. Conflict of interest The authors have no competing interests to declare that are relevant to the content of this article. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
7,739.2
2023-01-01T00:00:00.000
[ "Engineering", "Physics" ]
Optimization of double-layer sound absorber in a broadband frequency range using transfer matrix method and Evolution Strategies algorithm. Curtailment of the ambient noise level to provide a better living environment is immensely important. Accordingly, acoustic isolation via different combinations of porous materials is the most widely used passive soundproofing system. The present study focuses on the optimization of single and double-layer absorbers at different frequencies. To this end, the transfer matrix and the Evolution Strategy (ES) method are first explained. Afterward, the optimization of single and double-layer absorbers is considered for up to 10 parameters (material porosity, air gap, and perforated plate characteristics, among others) at a frequency of 350 Hz and is compared with the results obtained through other methods (the Genetic Algorithm, among others). It is illustrated that the ES algorithm provides better optimization in this field. Subsequently, since the incident sound in most cases is a combination of different frequencies, the broadband optimization of the single and double-layer absorbers is considered in three frequency ranges (100-800 Hz, 800-1600 Hz, 1600-3000 Hz), with an increment of 1 Hz, for three different materials (polyester, fiber, and foam). After the optimization, the resulting optimum parameters are presented in the form of characteristics charts of the optimized materials for the different frequency ranges, as a reference for material designers and manufacturers. Also, the absorption coefficients of all optimized cases are calculated and presented in the range of 100 Hz to 3 kHz as a reference for absorber selection in different situations. Finally, by presenting the improvement chart of double-layer versus single-layer combinations, it is shown that the double-layer combination can improve the absorption coefficient of different materials by up to 4% at different frequencies depending on the material (4% for polyester and foam under 800 Hz, 3-4% for polyester and fiber for 800-1600 Hz, and 2.6% for foam in 1600-3000 Hz). / optimization / foam / fiber / polyester / transfer matrix / Evolution Strategy (ES) Algorithm / broadband frequency range Introduction In 1970, the World Health Organization (WHO) carried out an assessment of the global disease burden from occupational noise, in which the noise characteristics and their relevance to workers' health were studied and quantified. It was concluded that a high level of environmental noise in a working area is the main cause of workers' psychological and physiological diseases [1]. It has always been a challenge for engineers to design an optimized sound absorber in order to obtain suitable absorption in a broadband frequency range [2]. To achieve this goal, a combination of multi-layered absorbers has proved to be very effective [2,3]. While porous materials have many applications in sound absorption, they only exhibit suitable absorptive properties at high noise frequencies. Therefore, for appropriate noise reduction and sound absorption in the low-frequency range, a resonator absorber is used as a coating for the porous material. This type of covering improves the absorption properties for low-frequency noise while protecting the porous material. The absorption mechanism for this type of coating is based on the Helmholtz resonator function. 
The resonators can be of various types, such as membrane, perforated plate, micro-perforated plate, and slot plate; a perforated plate resonator is used in the present study as the coating for the porous material. As far as porous sound absorbers are concerned, different models have been introduced which provide the sound absorption coefficient using three different approaches [4]: the Empirical Model (EM), the Phenomenological Model (PM), which is based on the cylindrical tube, and the Microstructure Model (MM). The latter is based on a set of cylinders through which air passes. The formulations of Delany and Bazley [5] are among the many examples of the application of empirical models [6]. In the past couple of decades, great efforts have been devoted to the use of phenomenological models [7][8][9][10][11][12][13][14][15][16], while major contributions have been made to the application of microstructure models [17][18][19][20][21]. In order to achieve an optimum design, different optimization tools have been developed over the years. In the case of optimal noise control, sound absorption and transfer loss have been studied using the Genetic Algorithm (GA), Simulated Annealing (SA), and topology methods. As an example, Yoon [22] optimized fiber materials using the topology method, while Duhring et al. [23] developed an acoustical design using this method of optimization. Different arrangements of porous layers have been studied by Lee et al. [24] in order to achieve maximum transfer loss based on the topology method. In another attempt, Lee et al. [25] optimized two-dimensional foams for maximum sound absorption. In a similar approach, the efficiency of a muffler was improved using the topology method [26], while Yoon et al. [27] optimized acoustic-structure interaction problems using the topology method. The simulated annealing method was also used by Ruiz et al. [3] in order to optimize the absorption properties of a micro-perforated multi-layered panel. Also, the Genetic Algorithm (GA) optimization tool was used by Meng et al. [28] to improve the absorption characteristics of meta-materials. In the present study, first, a single-layer absorber with a particular arrangement is optimized using the Evolution Strategy (ES) algorithm, and the obtained results are compared against those from the Genetic Algorithm (GA) and Gradient methods at a particular noise frequency [29,30]. Subsequently, a double-layer sound absorber is optimized under thickness limitations at a particular frequency using the ES and GA methods [31,32]. Accordingly, different characteristics of single-layer and double-layer sound absorbers are optimized to obtain the best absorption coefficients. This is done in three different frequency bands for three different porous materials, namely foam, fiber, and polyester, using the transfer matrix and ES optimization methods. Later, by comparing the absorption coefficients of the single and double-layer absorbers, the effectiveness of the double-layer layout is evaluated for these materials. In the next sections, the mathematical formulation of the transfer matrix method and the Evolution Strategy is presented, followed by the optimization of single and double-layer combinations of foam, fiber, and polyester. Mathematical formulation The transfer matrix is a powerful method which is capable of modeling sound propagation in porous materials with or without resonator coatings. With this method, the plane-wave assumption is applied to the incident and transmitted waves through the absorber layers. 
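For reference, the layer-by-layer impedance translation that underlies the transfer matrix method can be written in the standard transmission-line form shown below. This is a reconstruction under the plane-wave assumption with the e^{jωt} convention, not a reproduction of the paper's own equation numbering; the indexing follows the next paragraph, and letting the backing impedance go to infinity recovers the hard-backing cotangent form mentioned there:

$$ z_{s,i+1} \;=\; z_i\,\frac{z_{s,i} + \mathrm{j}\, z_i \tan(k_i d_i)}{z_i + \mathrm{j}\, z_{s,i}\tan(k_i d_i)}, \qquad z_{s}\big|_{\text{hard backing}} \;=\; -\,\mathrm{j}\, z_i \cot(k_i d_i), $$

where z_i, k_i, and d_i denote the characteristic impedance, wavenumber, and thickness of layer i.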
The impedance of any arbitrary surface is given by the impedance-translation relation of [2] (Eq. 1), in which z_si+1 and z_si are the impedances at X_i+1 and X_i, respectively (as shown in Fig. 1), while k_i is the wavenumber in the ith layer. When a hard backing is placed behind the absorber, the above formula reduces to equation (2) [2]. It is then possible to adapt equation (2) to the single-layer absorber shown in Figure 2; here, z_0 and k_0 are the air impedance and wavenumber, respectively [2]. In the meantime, the impedance for medium 2 can be written as in [2], and, with the addition of z_2 to z_p, the impedance of the absorber's surface can then be evaluated [2]. Using empirical formulas [5], it is possible to calculate the impedance and wavenumber of the porous material; the coefficients c_i should be defined for each of the materials. Additionally, equation (9) is used to evaluate the impedance on the surface of the perforated panel [33]. Consequently, the equations for the absorption coefficient of the single and double-layer sound absorbers can be formulated using the transfer matrix approach. Absorption coefficient for single-layer absorbers The absorption coefficient for a single-layer absorber (as shown in Fig. 2) is defined in terms of the parameters p% and L. Evolution Strategy (ES) algorithm The Evolution Strategy (ES) algorithm was first introduced by Rechenberg [34] in 1973 as an innovative method. The steps performed in most ES algorithms are: Step 1: Determination of the population size, maximum number of generations, and mutation rate. Step 2: Generation of the initial population (as parents) using random numbers (which gives a set of chromosomes). Step 3: Calculation of the fitness function for each chromosome. Step 4: Evaluation of the termination criterion, moving to Step 5 in case the criterion is not satisfied. Step 5: Production of the next generation using the following methods: elitism (selection of a particular number of elite chromosomes from the community); applying mutation to a particular set of community members (mutants) and generating children for the next generation. Step 6: Returning to Steps 3 and 4. For applying the ES algorithm to any desired problem, the operators and design parameters for the considered problem should be established first. Optimization of a single or double-layer absorber using the ES algorithm is done in the same way, and hence the operators of the absorption problem are introduced and discussed in this section. A chromosome in the ES algorithm is given as a set (x_1, x_2, ..., x_n, s), in which the x_i are the problem variables, given as real numbers, and s is the step length for the mutation. The value of s is determined using the one-fifth success rule during the execution of the algorithm. Rechenberg [31] mathematically proved that when the number of successful mutations accounts for about one fifth of all mutations, the speed of convergence to the optimum solution increases. Based on this rule, in the present study, five variables are introduced. Each chromosome of the population has five genes which stand for the following parameters: Gene 1 for the porous material thickness (L_1), Gene 2 for the specific resistance of the porous material (R), Gene 3 for the diameter of the holes in the perforated panel (d), Gene 4 for the porosity of the perforated panel (e), and Gene 5 for the total thickness of the absorber (L). Therefore, continuous random numbers are used for the production of the initial generation (parents). 
It must be noted that these random numbers should not violate the limitations of the problem at hand. After producing the initial generation, a fitness function value is assigned to each chromosome. Based on this fitness function, some elite chromosomes are chosen as new parents to be used in the mutation process, which leads to the production of children for the next generation. In the present study, the normal mutation method is applied to the parent chromosomes to produce mutants. In this method of mutation, a random, normally distributed number is added to all genes, as shown in equation (15). It is noteworthy that, after this mutation process, generated chromosomes that are theoretically impossible ought to be fixed. This basically means that the generated genes should not violate the limitations of the problem; if this situation does occur, the respective chromosome should be repaired. In order to repair the chromosome, the value of the violated gene is set to the allowable limit for that parameter. The process of elitism is used to generate the next generation. In other words, a particular number of elite chromosomes are chosen and transferred to the next generation. The rest of the required chromosomes for the next generation are selected randomly from the chromosomes of the previous generation and the newly generated chromosomes resulting from the mutation process. The Evolution Strategy algorithm is continued until the termination criterion is fulfilled. The flowchart of the applied ES algorithm is illustrated in Figure 4. Validation tests In order to validate the results of the optimization using the Evolution Strategy algorithm, the obtained results are compared against the results of other methods under similar conditions. Accordingly, the optimization results for both the single and double-layer absorbers are compared with the results of Chang et al. [30,31]. Single-layer absorber The following limitations are set as the design and optimization parameters for the case of the single-layer sound absorber, which is to be optimized at a noise frequency of 350 Hz: 1000 ≤ R ≤ 50000 rayls/m; p% = 50. The thickness of the perforated panel is kept fixed at q = 0.01 m. Therefore, to assess the validity of the Evolution Strategy optimization algorithm, the results obtained using ES are compared against the results of reference [29] for similar design parameters, but using the Genetic Algorithm (GA) and Gradient methods. The results of the comparison are displayed in Table 1. It is shown that by applying the ES algorithm, a higher absorption coefficient can be achieved, while the other design parameters and limitations are respected. Therefore, the results of the ES optimization prove to be favorable. Double-layer absorber The design parameters and limitations of the optimization for the double-layer absorber are based on the studies of Chang et al. [31]. It should be noted that, in the present study, the thickness of the perforated plate for both layers is fixed at q = 0.0006 m = 0.6 mm, and the other design parameters, as introduced in Section 2, are given by L′_1 = (1 − r_t1) × L_1. The results of the optimization using the Genetic Algorithm (GA) [31] and the Evolution Strategy (ES) at a particular frequency of 350 Hz are displayed in Table 2. Broadband optimization ES algorithm The Evolution Strategy (ES) algorithm has proved to be a robust and effective tool for sound absorption optimization at a predefined frequency for both single and double-layer sound absorbers. 
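Before turning to the broadband implementation, a minimal sketch of the ES loop described above (random initial parents, normal mutation, repair of violated genes to the allowable limits, elitism, and one-fifth-rule step adaptation) is given below. The bounds and the dummy fitness function are placeholders; in the actual problem the fitness would be the transfer-matrix absorption coefficient of Section 2.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative bounds for the five genes [L1 (m), R (rayls/m), d (m), e, L (m)] -- assumed values.
lo = np.array([0.005, 1000.0, 0.001, 0.05, 0.02])
hi = np.array([0.10, 50000.0, 0.010, 0.50, 0.15])

def fitness(x):
    # Placeholder objective standing in for the mean absorption coefficient of the transfer-matrix model.
    z = (x - lo) / (hi - lo)
    return 1.0 - float(np.mean((z - 0.6) ** 2))

mu, lam, generations = 10, 40, 200
step = 0.1 * (hi - lo)                               # mutation step length s for each gene
parents = lo + rng.random((mu, 5)) * (hi - lo)       # initial population (parents)

for _ in range(generations):
    idx = rng.integers(0, mu, size=lam)
    children = parents[idx] + rng.normal(size=(lam, 5)) * step   # normal mutation
    children = np.clip(children, lo, hi)                         # repair violated genes to the limits
    pool = np.vstack([parents, children])                        # (mu + lambda) elitism
    parents = pool[np.argsort([-fitness(x) for x in pool])[:mu]]
    # One-fifth success rule: widen the step if mutations succeed often, narrow it otherwise.
    worst_elite = fitness(parents[-1])
    success = np.mean([fitness(c) > worst_elite for c in children])
    step = step * (1.22 if success > 0.2 else 0.82)

print("best genes:", parents[0], "fitness:", round(fitness(parents[0]), 4))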
Table 2 lists the optimization results in the case of the double-layer absorber using the Evolution Strategy and Genetic Algorithms. The algorithm is now implemented for the optimization of single and double-layer absorbers in a broadband frequency range extending from 100 Hz to 3 kHz for three different materials: foam, fiber, and polyester. This range is divided into three bands: low frequency from 100 Hz to 800 Hz, medium frequency from 800 Hz to 1600 Hz, and relatively high frequency from 1600 Hz to 3000 Hz. In each frequency range, the initial design parameters for both single and double-layer absorbers are selected randomly within the framework of the design limitations and criteria. Subsequently, with a step of 1 Hz and using the transfer matrix method presented in Section 2, the absorption coefficients at each frequency are evaluated, and ultimately the mean absorption coefficient over the entire range is calculated. This averaged parameter is then used as the fitness function for the elitism and mutation operators in order to generate the offspring for the next generation. The ultimate goal is to obtain a set of optimized design parameters which results in the highest possible averaged absorption coefficient in the desired frequency range. Broadband optimization of single and double-layer materials As described in the previous section, the design parameters are initially selected randomly to give an initial averaged absorption coefficient for the selected frequency range, based on the formulas of the transfer matrix approach. This averaged absorption coefficient is then maximized using the ES algorithm. Before proceeding to the analysis and optimization, the coefficients (c_i) for each material should be identified, as given in Table 3. By substituting the values for these three materials into the single and double-layer optimization code and by defining the frequency range of the optimization, the best material properties for the three frequency ranges are determined. Therefore, eighteen optimized materials are obtained. The resulting optimized parameters (defined in Sect. 2) and the mean absorption coefficients for the different materials in the different ranges are listed in Table 4. Characteristics charts To obtain a more comprehensive comparison of the optimized materials, a new "at a glance" representation is introduced in this section, in which all material characteristics can be presented in a single chart, called the Characteristics Chart. To illustrate all characteristics in one chart, the first difficulty is the large difference in the range of values of the different characteristics. As an example, the values of flow resistivity are on the scale of tens of thousands, while the porosity is always under unity. Therefore, the values of all characteristics should first be normalized, i.e., a value between zero and one should be assigned to each characteristic. Here, the values of each parameter have been normalized by its maximum value over all materials. For example, in the single-layer absorbers, all values of flow resistivity and porosity are divided by 23138 and 0.43, respectively, which are the maximum values of flow resistivity and porosity among all single-layer absorbers. The same operation is performed on the other characteristics. Afterward, a radar chart is used to plot the values of each material in one chart. In these charts, there are as many axes as there are material characteristics, i.e., 5 axes for single-layer materials and 10 for double-layer absorbers. 
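The normalisation and radar-chart construction just described can be sketched in a few lines of Python; the parameter values below are placeholders rather than the optimized results of Table 4.

import numpy as np
import matplotlib.pyplot as plt

labels = ["L1", "R", "d", "e", "L"]                   # single-layer characteristics
data = np.array([                                     # placeholder rows: foam, fiber, polyester
    [0.050, 23138.0, 0.0040, 0.43, 0.100],
    [0.040, 18000.0, 0.0030, 0.30, 0.090],
    [0.060, 15000.0, 0.0050, 0.25, 0.110],
])
norm = data / data.max(axis=0)                        # normalise each axis by its maximum value

angles = np.linspace(0.0, 2.0 * np.pi, len(labels), endpoint=False)
angles = np.concatenate([angles, angles[:1]])         # close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for row, name in zip(norm, ["foam", "fiber", "polyester"]):
    values = np.concatenate([row, row[:1]])
    ax.plot(angles, values, label=name)
    ax.fill(angles, values, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.legend(loc="upper right")
plt.show()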
The characteristics charts of the single and double-layer foam optimized for 100-800 Hz are illustrated in Figure 5. It is observed in Figure 5 that all characteristics of a material are shown in one chart, so that a good overview of the material characteristics is obtained. In Figures 6 and 7, the optimized characteristics charts of each single and double-layer absorber in the different frequency ranges are plotted together. It is quite evident from these characteristics charts that the parameters of a material may vary dramatically to obtain the best performance in the three frequency ranges, especially the porosity, the flow resistivity, and the thickness of the absorber. Although the average absorption coefficient is used as the target for the optimization, it does not offer a good description of the performance of the absorber. Therefore, the absorption coefficients of the absorbers have been plotted versus frequency for the three ranges in Figure 8. As evidenced in Figure 8, for the mid and high frequency ranges, the double-layer formation seems to improve the overall behavior of the absorber. However, for low frequencies, one may conclude that the single-layer formation is a better choice. Efficacy analysis of double-layer formation For a better assessment of the efficacy of the double-layer formation at different frequencies and for different materials, the Improvement Percentage is defined as in equation (20): Improvement Perc.|frequency = [(α_Double layer − α_Single layer) / α_Single layer]|frequency × 100. The Improvement Percentage has been calculated for each formation and is illustrated in Figure 9. As observed in Figure 9, the maximum improvement obtained is slightly more than 4%. Also, it is observed that at low frequencies, foam is the only material improved by the double-layer formation. It is therefore deduced from Figure 9 that, although there is a slight improvement of the absorption coefficient, using double-layer porous materials for the frequency ranges specified here requires careful design and optimization. Conclusions In the present study, the Transfer Matrix approach and the Evolution Strategy (ES) algorithm are used in order to achieve an optimized design of single and double-layer formations of foam, fiber, and polyester porous absorbers with maximum sound absorption. Because the main goal in many engineering and industrial applications is to obtain optimal absorption in a particular range of frequencies, the focus of the present study is to optimize the absorbers so that they have appropriate absorption properties over a broadband frequency range. Single and double-layer absorbers are first optimized at a particular frequency using the ES algorithm, and the obtained results are compared with those of the Genetic Algorithm (GA) and Gradient methods. This comparison shows that the ES algorithm offers favorable results under the limitations imposed on the design problem, with superiority over the absorption coefficient obtained by the widely used Genetic Algorithm. Single and double-layer absorbers are then optimized in three different frequency ranges extending from 100 Hz to 3000 Hz, divided into bands of low, medium, and relatively high frequencies, for the three porous materials. The averaged absorption coefficients in each frequency range are maximized using the ES algorithm for both single and double-layer sound absorbers. To present the optimized parameters of the absorbers, the characteristics charts of the absorbers are introduced. 
Also, for a better assessment of the efficacy of the double-layer formation, an Improvement Percentage parameter has been defined and estimated for each optimized material and formation in each frequency range. The obtained results indicate that the maximum improvement reached by the use of the double-layer formation is slightly higher than 4%. Furthermore, it is shown that the materials are affected differently by double layering. Accordingly, the improvement of the absorption coefficient is 4% for polyester and foam below 800 Hz, 3-4% for polyester and fiber between 800 Hz and 1600 Hz, and 2.6% for foam between 1600 Hz and 3000 Hz. Overall, the presented findings indicate that, although there is a slight improvement of the absorption coefficient, using double-layer porous materials for the frequency ranges specified here requires careful design and optimization. The Evolution Strategy (ES) algorithm has also proved to be robust for single and multi-layer sound absorber optimization problems. Compliance with ethical standards The authors of this study received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. The authors also declare that they have no conflict of interest.
4,848.2
2018-01-01T00:00:00.000
[ "Physics" ]
Improved sensing detector for wireless regional area networks: In wireless communication, noise and fading affect the radio signals. Multiple antennas are one of the solutions to mitigate these effects. In this paper, the authors propose an improved sensing detector for wireless regional area networks. The presented scheme uses a two-detector concept; the detectors employ multiple antennas and follow selection combining to select the best signals. The proposed model not only improves the detection performance but also decreases the sensing time. Of the two detectors, the first is an energy detector with a single adaptive threshold and the second is an energy detector with two adaptive thresholds. The thresholds are adaptive as they depend on the noise variance (σ_w²), and the value of this noise variance changes according to the noise signal. Both detectors work simultaneously and their outputs are then fed to a decision device which takes the decision using the OR rule. In the proposed scheme, more than one antenna is used, and the scheme is compared with existing sensing techniques. Results show that the proposed improved sensing detector technique, with the number of antennas (N_r) = 2, improves the detection performance by 24.6, 53.4, 37.9, and 49.6% compared to the existing schemes (i.e., the EDT-ASS-2015 scheme, ED and cyclo-2010, adaptive SS-2012, and conventional ED, respectively) at −12 dB SNR. Meanwhile, the proposed technique also decreases the sensing time by 47.0, 49.0, and 53.2 ms. ABOUT THE AUTHORS Jyotshana Kanti received the Bachelor of Technology degree (Hons.) and Master of Technology degree in Computer Science & Engineering from Uttarakhand Technical University, Dehradun. She has published several research papers in reputed SCI/ISI indexed journals as well as IEEE conferences. She has written one book and a few book chapters with reputed publishers. Geetam Singh Tomar received his UG, PG, and PhD degrees in electronics engineering from universities in India and a PDF from the University of Kent, UK. He is the director of the THDC Institute of Hydropower Engineering and Technology, Tehri, India, and is also the director of Machine Intelligence Research Labs, Gwalior, India. He is associated with professional societies such as IEEE as a Senior Member, and IETE and IE (I) as a Fellow. He is chief editor of 5 international journals, has published more than 150 research papers in international journals and conferences, and has written 5 books and book chapters with IGI Global Publication/CRC Press. PUBLIC INTEREST STATEMENT Cognitive radio is a 5G technology that addresses bandwidth scarcity problems. To resolve such issues, four functions are proposed by IEEE 802.22. Spectrum sensing is one of them, in which cognitive radio (CR) users sense the primary users' (PU) licensed band. In this paper, we have proposed an "improved sensing detector for Wireless Regional Area Networks". The presented scheme uses a two-detector concept; the detectors employ multiple antennas and follow selection combining to select the best signals. Further, of the two detectors, the first is an energy detector with a single adaptive threshold and the second is an energy detector with two adaptive thresholds. Both detectors work simultaneously and their outputs are then fed to a decision device which takes the decision using the OR rule. The final results show that the proposed model improves the detection performance and decreases the sensing time. 
Furthermore, the proposed scheme is combined with cooperative spectrum sensing (CSS), which shows that the proposed detection technique detects the PU signal at an SNR level of −20 dB without any difficulty. Introduction In the present scenario, technology is growing day by day, and the cognitive radio network (a 5G technology) is one example of this. In a cognitive radio network (CRN), spectrum sensing plays an important role. Various techniques have been proposed by researchers to sense the licensed signal. For example, in Maleki, Pandharipande and Leus (2010), the authors proposed a two-stage spectrum sensing technique (ED and cyclo-2010) to improve detection performance. This technique used two detectors: the first stage consisted of an energy detector and the second stage of a cyclostationary detector, giving better sensing performance; however, it was computationally more complex and required a longer sensing time. Then, in Ejaz, Hasan, and Kim (2012), the authors proposed an adaptive spectrum sensing scheme (adaptive SS-2012). In this scheme, only one of the two detection stages is performed at a time, selected on the basis of the estimated SNR. The presence of the cyclostationary detector in Ejaz et al. (2012) made the system implementation more complicated. Furthermore, in Sobron, Diniz, Martins, and Velez (2015), the authors presented an energy detection technique for adaptive spectrum sensing (EDT-ASS-2015). Here, the authors focused on a cost function that depended upon a single parameter which, by itself, contained the aggregate information about the presence or absence of primary users. In this paper, we optimize the detection performance using multiple antennas with two detectors; the two detectors, ED_SAT and ED_TAT, perform the sensing operation simultaneously. The thresholds are adaptive, which is why the chance of the sensing failure problem occurring is negligible (Liu, Hu, & Wang, 2012). The detector outputs go to a decision device (DD), which takes the final decision using the OR rule; if the output of the DD is 1, the frequency band is busy (H_1), otherwise it is free (H_0). The main difference between this paper and others (Ejaz et al., 2012; Maleki et al., 2010; Sobron et al., 2015) is that none of those techniques addressed the spectrum sensing failure problem (Liu et al., 2012) or the fading problem. The adaptive threshold scheme reduces the sensing failure problem, while multiple antennas mitigate the fading problem. We further combine the improved sensing detector (ISD) with a cooperative spectrum sensing scheme. Here, all the CRs perform local observations using the ISD technique. The local decision is made and reported to the fusion center (FC) in the form of a binary bit, i.e., 0 or 1. The FC then makes a final decision using a hard decision rule. In the proposed model we have used the hard decision rule because it has better detection performance when the number of cooperating CR users is large (Akyildiz, Lo, & Balakrishnan, 2011), and it provides slightly better performance at a low probability of false alarm (P_f) (Teguig, Scheers, & Le Nir, 2008). The novelty of this paper is that the proposed model uses multiple antennas to mitigate the fading problem, and the detectors use adaptive thresholds to mitigate the sensing failure problem. The final decision is made by the DD; simulation results show that the proposed model enhances the detection performance at P_f = 0.1, performs well at low SNRs, and reduces the sensing time as well. The rest of the paper is organized as follows: Section 2 presents the system description. Section 3 describes the proposed system model. Section 4 shows the numerical results and analysis. 
Finally, Section 5 concludes the paper. System description: The PU signal is detected using the following hypothesis test on the received signal (Bagwari & Singh, 2012): H_0: r(n) = w(n) (PU absent), and H_1: r(n) = h·x(n) + w(n) (PU present) (1). In Equation (1), r(n) is the signal received by each CR user, x(n) is the PU licensed signal, w(n) is AWGN (additive white Gaussian noise) with zero mean, i.e. w(n) ~ N(0, σ_w²), σ_w² is the noise variance, and h is the gain of the Rayleigh fading channel. H_0 is the null hypothesis, indicating the absence of the PU, and H_1 is the alternative hypothesis, indicating that the PU is present. Figure 1 illustrates the proposed system model of the ISD. N_r antennas are implemented at each CR user, and N is the number of samples transmitted by the PU. In Figure 1, a maximal-ratio combining (MRC) scheme is not considered since it incurs spectrum sensing overhead due to channel estimation. Moreover, a combining scheme based on the sum of the decision statistics of all antennas in the CR is not analytically tractable. Therefore, we assume that each CR contains a selection combiner (SC) that outputs the maximum of the N_r decision statistics calculated for the different diversity branches, x = max(x_1, x_2, x_3, ..., x_Nr). The output of the SC is applied to an upper stream and a lower stream. In Figure 1, the upper stream carries the ED with a single adaptive threshold; this detector is similar to the conventional ED except for the adaptive threshold, which makes it an advanced version of the conventional ED. The ED with a single adaptive threshold calculates the energy (X) of the received signal (Bagwari & Singh, 2012), compares it with the adaptive threshold (λ_1), and then passes its output (L_1) to the decision device (DD) as a binary bit. If the calculated energy (X) is greater than or equal to the adaptive threshold (λ_1), the detector output (L_1) is bit 1; otherwise it is bit 0. Similarly, the lower stream carries the ED with two adaptive thresholds (ED_TAT); this detector differs from the upper-stream detector because it has two adaptive thresholds. The two-adaptive-threshold concept is useful for reducing the sensing failure problem (Liu et al., 2012). ED_TAT computes the energy, compares it with its threshold (λ_2), and produces the output (L_2). If the computed energy is greater than or equal to λ_2, the output L_2 is bit 1; otherwise it is bit 0. The outputs of the detectors (ED_SAT and ED_TAT) go to the decision device (DD), which adds L_1 and L_2 using the OR rule: if the sum of L_1 and L_2 is greater than or equal to 1, this indicates H_1 (the channel is busy); otherwise it indicates H_0 (the channel is free), as shown in Figure 1 (a small illustrative sketch of this decision flow is given below). An improved sensing detector: Suppose x_j(k) is the received signal at the jth antenna for the kth data stream, the sensing channel between the PU and the CR is assumed to be a Rayleigh fading channel, N is the total number of samples to be sensed by the CR, and N_r is the number of antennas. The overall output of the SC then follows as given above. It is seen from Figure 1 that individual antennas are allocated to the cognitive radio; the antenna branch with the maximum gain is selected and passed to the detectors for further processing. The probability of detection of the ISD and the total error probability of the ISD can then be defined, where P_d^ED_SAT and P_d^ED_TAT are the probabilities of detection of the ED_SAT and ED_TAT detectors respectively, and P_f^ED_SAT and P_f^ED_TAT are their probabilities of false alarm.
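A minimal numerical sketch of this decision flow may help fix the ideas. It is not the authors' implementation: the per-branch energy statistic, the handling of the region between the two ED_TAT thresholds, and all names (`isd_decide`, `lam1`, `lam21`, `lam22`, `lam2`) are illustrative assumptions layered on the description above.

```python
import numpy as np

def isd_decide(r, lam1, lam21, lam22, lam2):
    """Sketch of the improved sensing detector (ISD) decision flow.

    r    : complex or real baseband samples per antenna branch, shape (N_r, N)
    lam1 : single adaptive threshold of ED_SAT
    lam21, lam22 : lower/upper adaptive thresholds of ED_TAT
    lam2 : final ED_TAT comparison threshold
    Returns 1 for H1 (band busy) or 0 for H0 (band free).
    """
    # Selection combining: keep the branch with the largest decision statistic.
    energies = np.sum(np.abs(r) ** 2, axis=1)
    X = float(energies.max())

    # Upper stream: energy detector with a single adaptive threshold (ED_SAT).
    L1 = 1 if X >= lam1 else 0

    # Lower stream: energy detector with two adaptive thresholds (ED_TAT).
    if X < lam21:
        L2 = 0                      # clearly noise only
    elif X > lam22:
        L2 = 1                      # clearly PU present
    else:
        # Confused region: the paper quantizes this interval; as a
        # simplification (assumption) we compare X with lam2 directly.
        L2 = 1 if X >= lam2 else 0

    # Decision device: OR rule on the two detector outputs.
    return 1 if (L1 + L2) >= 1 else 0

# Hypothetical use with the trade-off threshold values quoted later in the paper;
# the toy input is scaled only so the call runs end to end.
rng = np.random.default_rng(0)
samples = rng.normal(size=(2, 1000)) * 0.03     # noise-only toy input
print(isd_decide(samples, lam1=1.25, lam21=0.9, lam22=1.2, lam2=1.014))
```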
P_r is the probability factor that a channel is reported to ED_SAT; therefore, the probability that a channel is reported to the ED_TAT detector is (1 − P_r). P_r depends on the SNR of the channels to be sensed: P_r < 0.5 indicates a very noisy channel, while P_r ≥ 0.5 indicates a channel that is less noisy or has a good SNR. Hence the overall probability of false alarm and probability of detection depend directly on P_r (0 ≤ P_r ≤ 1). Energy detector with single adaptive threshold (ED_SAT): The energy detector plays an important role in a CRN for detecting the PU signal because of its simplicity and ease of implementation (Urkowitz, 1967). Figure 2 shows the internal architecture of the ED with a single adaptive threshold (ED_SAT). Here the received PU licensed signal is passed through a square-law device, which gives the detected signal energy (X); this energy is compared with the single adaptive threshold (λ_1) to decide whether the PU is present or absent. 3.1.1.1. Expression of the single adaptive threshold. The single adaptive threshold (λ_1) is defined as in Equation (7), following Tandra and Sahai (2008), where N is the number of samples, N_r is the number of antennas, Q⁻¹(·) denotes the inverse Gaussian tail probability Q-function, P_f is the probability of false alarm, and σ_w² is the noise variance. Analyzing Equation (7), the threshold (λ_1) is directly proportional to the noise variance (σ_w²); the noise variance depends on the noise signal, which is random in nature and changes with time, so σ_w² varies and λ_1 changes with it. The threshold is therefore adaptive, and its value changes at every time instant. Considering Figure 2, the output of decision device-I of ED_SAT is defined accordingly. 3.1.1.1.1. Probability of detection for the ED_SAT detector. The final expression for the probability of detection is given in Equation (9) (Sobron et al., 2015), where N is the number of samples, Q(·) denotes the Gaussian tail probability Q-function, λ′ is the corresponding normalized threshold, σ_x² is the PU signal variance, and σ_w² is the noise variance. 3.1.1.1.2. Probability of false alarm for the ED_SAT detector. The final expression for the probability of false alarm is given in Equation (10) (Sobron et al., 2015), where N is the number of samples and λ′′ is the analogously normalized threshold. 3.1.1.1.3. Total error probability for the ED_SAT detector. The total error rate is the sum of the probability of false alarm (P_f) and the probability of missed detection (P_m); hence the total error probability follows as in Bagwari, Kanti, Tomar, and Samarah (2015), where (1 − P_d) gives the probability of missed detection (P_m). An illustrative numerical sketch of these quantities is given below.
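Since Equations (7)-(12) are not reproduced in the text above, the following sketch uses textbook Gaussian-approximation forms for an energy detector rather than the paper's exact expressions; the function names, the approximation, and the example values are assumptions made for illustration only.

```python
import numpy as np
from scipy.stats import norm

def ed_sat_threshold(sigma_w2, N, Pf_target):
    # Adaptive single threshold: it scales with the current noise-variance
    # estimate sigma_w2, which is what makes it "adaptive" (standard
    # Gaussian approximation, not necessarily the paper's Equation (7)).
    return sigma_w2 * (N + np.sqrt(2.0 * N) * norm.isf(Pf_target))

def ed_sat_performance(lam1, sigma_w2, sigma_x2, N):
    # Approximate false-alarm / detection probabilities via the Q-function,
    # with Q(x) computed as the Gaussian survival function norm.sf(x).
    Pf = norm.sf((lam1 - N * sigma_w2) / (np.sqrt(2.0 * N) * sigma_w2))
    s2 = sigma_w2 + sigma_x2                  # received power under H1
    Pd = norm.sf((lam1 - N * s2) / (np.sqrt(2.0 * N) * s2))
    return Pf, Pd, Pf + (1.0 - Pd)            # total error = Pf + Pm

# Example: N = 1,000 samples, unit noise power, PU at -12 dB SNR.
lam1 = ed_sat_threshold(sigma_w2=1.0, N=1000, Pf_target=0.1)
print(ed_sat_performance(lam1, sigma_w2=1.0, sigma_x2=10 ** (-12 / 10), N=1000))
```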
Energy detector with two adaptive thresholds (ED_TAT): This is a simple ED circuit except for its thresholds; in this model we use two adaptive thresholds. Here Y_1 is the output value of the upper part and Y_2 the output value of the lower part; the values of Y_1 and Y_2 are then added using an adder, and finally, using Equations (13), (15) and (16), the local decision (DD-II) of ED_TAT is expressed as Equation (17). In Equation (17), the resultant value (Z) is compared with the threshold (λ_2) to maintain an overall system probability of false alarm (P_f) of 0.1: if Z is greater than or equal to λ_2 the signal is present, otherwise it is absent. 3.1.2.1. Two adaptive threshold scheme for spectrum sensing. In a CRN it is very difficult for the detector to identify the correct signal when the noise and the PU signal overlap, as shown in Bagwari and Tomar (2013a); two adaptive thresholds are a fruitful solution to this problem. The region lying between the thresholds λ_21 and λ_22 is known as the confused region (Bagwari & Tomar, 2013a); λ_22 is the upper bound and λ_21 the lower bound. In Bagwari and Tomar (2013a), if the energy of the received signal lies beyond λ_21 or λ_22, the detector generates bit 0 or bit 1 respectively, whereas between λ_21 and λ_22 the confused region is divided into equal quantization levels, which are then converted into decimal values. Correlating Bagwari and Tomar (2013a) with Figure 3, if X lies beyond λ_21 or λ_22, the upper part of Figure 3 (ED_TAT) gives Y_1, and the final expression follows accordingly. Expression of the two adaptive thresholds: In the proposed double-threshold decision, the maximum noise variance gives the value of the upper threshold λ_22 and the minimum noise variance gives the value of the lower threshold λ_21; the mathematical expressions for the two adaptive thresholds (λ_21 and λ_22) are given in Equations (21) and (22). As discussed earlier, a threshold is adaptive if it depends on the noise variance; in Equations (21) and (22), ρ is a constant parameter that quantifies the size of the uncertainty, with ρ > 1. Probability of detection for the ED_TAT detector: Suppose that r_i is the normalized version of the received sample r(n); the cumulative distribution function (CDF) of the ED_TAT is given in Equation (23), where a is an arbitrary constant whose value is two. The zero-mean primary signal r(n) with average power σ_r² is independent of the circularly symmetric complex Gaussian noise w_i(n) with variance σ_w², and h_i denotes the Rayleigh faded channel, which is independent of w_i(n); hence |h_i| is Rayleigh distributed with variance σ_h²/2. Thus the probability distribution function (PDF) of the ED_TAT detector for H_j (where j = 0, 1), given in Equation (24), is exponentially distributed. Note that S = σ_h² σ_x² / σ_w² is the average signal-to-noise ratio (SNR) of the sensing channel. Finally, using Equations (24) and (25), the probability of detection for ED_TAT is obtained. Probability of false alarm for the ED_TAT detector: Considering Equation (24), the corresponding PDF is again exponentially distributed, and using Equations (24) and (30) the probability of false alarm for ED_TAT is calculated. Total error probability for the ED_TAT detector: The total error rate is the sum of the probability of false alarm (P_f) and the probability of missed detection (P_m), and is obtained using Equation (29). Decision device: This device takes the final decision on whether the PU frequency band is free or not, using the OR rule. Figure 1 illustrates the working operation of the ISD technique: the CR receiver senses the received signal, performs the respective sensing operations using the ED_SAT and ED_TAT detectors, and then makes the final decision via the decision device (DD) as to whether the PU band is available.
Cooperative spectrum sensing with the proposed ISD: The CSS technique is used to mitigate shadowing and fading in order to improve both the local and the global sensing performance in a CRN (Bagwari & Tomar, 2013b, 2014; Do & Mark, 2012; Ling-Ling, Jian-Guo, & Cheng-kai, 2011). Here all CRs use the ISD spectrum sensing scheme to sense the PU signal. Once all CRs have taken their local decisions individually, they send them in the form of a binary bit, i.e. 0 or 1, to the FC over error-free orthogonal channels so that the final decision can be taken. In Figure 4, let there be k CRs, each of which transmits its local decision O_i to a single common FC. Finally, the FC combines the binary-bit decisions of all CRs, each of which runs the proposed ISD scheme, and makes the global decision indicating the presence or absence of the PU signal. In Equation (38), D is the sum of all the local decisions O_i from the CRs. The FC uses a hard decision rule to decide whether the PU signal is present: a signal is declared present if and only if any of the CRs senses a signal. According to the hard decision rule, if O is greater than or equal to 1 the signal is detected, and if O is smaller than 1 the signal is not detected. Equation (41) gives the global, or final, decision of the FC. The performance of the overall proposed system can then be analyzed via P_D: using the hard decision rule in CSS, the probability of detection (P_D) of the FC is expressed in Equation (43), where P_d is the probability of detection of an individual CR user and can be calculated using Equation (3). A short illustrative sketch of this fusion rule is given below.
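The hard OR-rule fusion just described is straightforward to reproduce. Under the common assumption of independent CRs with identical local detection probability (the paper's Equation (43) is not reproduced in the text above), the FC detection probability takes the familiar 1 − (1 − P_d)^k form; the function names below are ours.

```python
def fc_global_decision(local_bits):
    # Hard OR-rule fusion at the fusion centre: the PU is declared present
    # if any cooperating CR reports a 1.
    return 1 if sum(local_bits) >= 1 else 0

def fc_detection_probability(pd_local, k):
    # FC detection probability under the OR rule, assuming k independent
    # CRs with identical local detection probability pd_local.
    return 1.0 - (1.0 - pd_local) ** k

print(fc_global_decision([0, 0, 1]))                 # -> 1 (PU present)
for k in (3, 5, 10):
    print(k, round(fc_detection_probability(0.4, k), 3))
```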
Numerical results and analysis: In our simulations we evaluate the systems using a QPSK modulation scheme and a Rayleigh channel. In this section, the proposed ISD scheme is compared with conventional energy detection, the energy detection technique for adaptive spectrum sensing-2015 (EDT-ASS-2015) (Sobron et al., 2015), adaptive spectrum sensing-2012 (Ejaz et al., 2012), ED and cyclostationary-2010 (Maleki et al., 2010), and hierarchical with quantization-2012 detection (Liu et al., 2012). The parameters used for simulation are given in Table 1. In the simulation shown in Figure 5 (probability of detection vs. SNR at P_f = 0.1, with N = 1,000, QPSK modulation, and a Rayleigh fading channel), we employ the ISD technique with 1,000 samples and set the threshold so that the system achieves a false alarm probability of 0.1. In the simulation environment the values of λ_1, λ_2, λ_21, and λ_22 vary at every iteration, but in this case we chose λ_1 = 1.25, λ_21 = 0.9, λ_22 = 1.2, and λ_2 = 1.014 as trade-off values. The simulations compare the detection performance of the proposed ISD scheme with the EDT-ASS-2015 scheme, ED and cyclo-2010, adaptive SS-2012, and the conventional-ED scheme. It can be seen from Figure 5 that as the number of antennas increases, the probability of detection also increases. The proposed ISD scheme with N_r = 3 antennas outperforms the N_r = 1 and N_r = 2 cases as well as the EDT-ASS-2015, ED and cyclo-2010, and adaptive SS-2012 schemes in terms of probability of detection at −12 dB SNR, under the constraint of a probability of false alarm (P_f) of 0.1. IEEE 802.22 states that if P_f is set at 0.1, the minimum acceptable detection probability is 0.9; the proposed ISD scheme meets this while detecting the PU signal at approximately −12.5 dB SNR. The spectrum sensing time is the total time taken by cognitive radio users to detect the licensed frequency band. The sensing time is computed from Equation (44), where T_ISD is the total time taken by the proposed sensing technique for SS, and T_SC, T_ED_SAT, and T_ED_TAT are the SS times of the selection combiner (SC), the ED_SAT detector, and the ED_TAT detector respectively. The SC sensing time is calculated from Equation (45), where C is the total number of channels sensed by secondary users and S_SC is the average detection time for each channel, in which M_SC is the number of samples during the observation interval and B is the channel bandwidth. The ED_SAT sensing time is calculated in terms of S_1, the mean sensing time for each channel, in which M_ED_SAT is the number of samples during the observation interval, B is the channel bandwidth, C is the number of sensed channels, and P_r is the probability factor that a channel is reported to ED_SAT; the detection time of the energy detection then follows as Equation (48). Similarly, the ED_TAT detector sensing time is calculated in terms of S_2, the mean sensing time for each channel, in which M_ED_TAT is the number of samples during the observation interval and (1 − P_r) is the probability factor that a channel is reported to the ED_TAT detector; the detection time of the ED_TAT detector follows as Equation (50). The decision device (DD) sensing time is calculated from Equation (51), where C is the total number of channels sensed by secondary users and S_0 is the average detection time for each channel, given in Equation (52), in which M_0 indicates the total number of samples during the observation interval and B denotes the channel bandwidth. Thus, the overall spectrum sensing time is calculated by substituting Equations (45), (48), (50), and (51) into Equation (44). Figure 6 shows the graph of spectrum sensing time versus SNR. The proposed scheme at N_r = 2 requires less sensing time than the existing schemes. There is an inverse relation between SS time and SNR: as SNR increases, sensing time decreases. Equation (54) was used for plotting the graph of sensing time against SNR, with the parameter values defined in Table 1. At −20 dB SNR, the proposed scheme at N_r = 2 requires approximately 46.7 ms, while the existing schemes (EDT-ASS-2015, Adaptive SS-2012, ED and Cyclo-2010) require around 47.0, 49.0, and 53.2 ms of sensing time respectively. The SS time is directly related to the number of samples received by the CR user: the more time is devoted to sensing, the less time is available for transmission, which degrades the CR throughput. This is known as the sensing efficiency problem (Lee & Akyildiz, 2008) or the sensing-throughput tradeoff (Liang, Zeng, Peh, & Hoang, 2008) in SS. Figure 7 shows the probability of detection (P_d) versus SNR for the proposed CSS with ISD scheme, CSS-EDT-ASS-2015, and the hierarchical with quantization-2012 scheme; in CSS we consider only three CR users. Simulation results show that cooperative SS with ISD outperforms EDT-ASS-2015 and hierarchical with quantization-2012: CSS with ISD improves detection performance by around 12.5 and 19.1% compared with EDT-ASS-2015 and hierarchical with quantization-2012 at −12 dB SNR respectively. CSS with ISD achieves a detection probability of 0.9 at −12.5 dB with N_r = 2, while the EDT-ASS-2015 and hierarchical with quantization-2012 detection schemes achieve the same detection probability at −11 and −10.5 dB respectively. In Figure 8, we plot the probability of detection (P_d) versus SNR for different numbers of cooperative CR users, k = 3, 4, 5, 6, 7, 8, 9, 10, with P_f = 0.1, N_r = 2, and N = 1,000. It can be concluded from Figure 8 that the probability of detection increases with SNR for each number of CRs.
The probability of detection is highest for k = 10, which implies that for N = 1,000, N_r = 2, and P_f = 0.1, only ten CR users are required to decide on the presence of the PU using the ISD spectrum sensing scheme. When k = 10, P_f = 0.1, and the SNR is approximately −20 dB, the proposed SS model achieves a detection probability of 0.9, which is the SS requirement of IEEE 802.22 (Cordeiro, Challapali, Birru, & Shankar, 2005). Conclusion: In this paper, an ISD for WRANs has been proposed. This scheme enhances detection performance and reduces both the bit error rate and the sensing time. Numerical results show that the proposed ISD scheme with N_r = 2 outperforms the existing schemes (EDT-ASS-2015, ED and cyclo-2010, adaptive SS-2012, and conventional-ED) by 24.6, 53.4, 37.9, and 49.6% at −12 dB SNR respectively. It is also shown that the proposed scheme has a shorter sensing time than the EDT-ASS-2015, Adaptive SS-2012, and ED and Cyclo-2010 schemes, which require 47.0, 49.0, and 53.2 ms respectively at −20 dB SNR. The ISD has also been implemented with the CSS scheme, which further shows that when k = 10, N_r = 2, and P_f = 0.1, the proposed detector is able to detect the PU licensed signal at −20 dB SNR. Overall, the results show that the proposed scheme performs better than the existing schemes.
6,090
2017-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
COMPUTER-AIDED DECISION-MAKING IN CONSTRUCTION PROJECT DEVELOPMENT . One of the most difficult problems in construction is taking objective decisions. A decision-making process is very complicated and time consuming (due to the complex nature of construction projects). Many experts with extensive knowledge of construction industry take subjective decisions related to verbal methods of decision-making. Difficulties are related mostly to the creation of a set of relevant criteria, providing answers to the decision-maker’s questions. A set of proper criteria and mathematical tools (such as computer calculation algorithms with multi-criteria analysis) could significantly improve objective decision-making. The paper presents ESORD – an informatics tool allowing to establish a hierarchy (ranking) of different types of solutions on the basis of mathematical calculation. The authors present a comparison of different methods used for multi-criteria decision-making. Introduction One of the main problems faced by every investor/project manager is selecting an implementation variant for an investment project. The difficulty related to this issue emerges as early as at the stage of investment preparation, once requirements and expectations of the investor are defined in the functional-utility program. At individual stages of the lifecycle of an undertaking, analysed phenomena are very complex, which is mainly due to specific traits, characteristics, complexity and nature of construction processes and relations between them. On the other hand, a description of these relations is based mainly on expert opinions and should take into account both measurable factors and those difficult to measure (Ustinovichius et al. 2006;Kildienė et al. 2014;Vodopivec et al. 2014); besides, its quality depends largely on expert knowledge and experience of decision-makers. The issue of decision-making constitutes an integral part of every field of science and art. A decision-making process is an activity which results in taking a specific decision. The entity involved in a decision-making process is a decision-maker, expressing specific preferences, assessing possibilities and results, and choosing the final decision-making variant (Książek 2010a;Tyszka 1986;Brown 2012;Yazdani-Chamzini et al. 2013a;Ustinovichius et al. 2011;Ghosh et al. 2012). Analysis of the decision-making situation is the first task of a decisionmaker. The decision-making situation is a set of all elements, dependent on and independent of an assessor, which exert impact on the decision to be made. In the process of formulation of the decision-making problem, factors independent of a decision-maker include a set of variants to be examined (the so-called conditions restricting the decision), while factors dependent upon the decision-maker include criteria for assessment of solutions, described by technical and economic indicators, most adequate for the given decision-making situation, expressed in specific units (Zavadskas et al. 2014b;Yazdani-Chamzini et al. 2013b;Zolfani et al. 2013). Assessment of characteristics of a given variant may be both quantitative (objective) and qualitative (difficult to measure) (Ustinovichius 2004;Simanavičienė et al. 2014). The difficulty in decision-making is not only due to the level of complexity of a task as well as complexity and designation of variants, but also -expectations of the assessing person. 
On the other hand, preferences of the expert are largely dependent on the point of view of the decision-maker who has caused development of the given opinion or assessment. The authors believe that due to the above reasons, computer-based implementation of calculation algorithms of selected methods of assessment and ranking of solutions is an efficient tool that allows to obtain aggregated variant assessments and results in a more efficient decision-making process. Detailed information concerning the issues of valuation of criteria, as well as the psychological aspect of decision-making has been presented in Brauers et al. (2013), Hashemkhani et al. (2013), Ustinovichius et al. (2007), Tyszka (1986) and Zavadskas et al. (2012Zavadskas et al. ( , 2014a. 1. Main assumptions of the methodology of assessment and ranking of solutions in the construction trade According to the authors, the calculation algorithms of various types of methods of multi-criteria assessment and the theoretical apparatus -including sociology, psychological theory of decision-making and decision-making analysis -contribute to greater effectiveness of the decision-making process, and allows for avoidance of substantial mistakes that could interfere with the quality and reliability of the decision made. In practice, individual tools are often used selectively, which often garbles the assessment results. Experts are expected to make assessments in accordance with their professional knowledge and the construction art -reliable, objective and considering the specific character of the given decision-making situation. It would be difficult, however, to clearly define individual preferences, system of values, and motivations of an expert. Expert opinions are formulated on the basis of their knowledge and experience, and they depend upon such factors as availability of information and the level of complexity of the task, emotional state and mood, selfesteem and susceptibility to group influence, and the mode of perception of a given phenomenon (Ustinovichius 2007;Turskis et al. 2013;Zavadskas et al. 2014a). Sometimes, difficulties associated with decisionmaking arise from an assessor's fear to assume responsi-bility, make a mistake or be rejected by the community. Therefore, in order to as much as possible eliminate the causes of interference with decisions made, an original survey of decision-maker preferences has been developed, as well as the decision-making variant ranking procedure, implemented as the ESORD calculation tool (Expert System for Assessment of Developer Solutions) (Tyszka 1986;Kozielecki 1977). In the opinion of the authors, the purpose of the survey developed within the framework of this research was -in the first place -to clearly and precisely define criteria for assessment of variants, referring to the problem of selecting the best investment (e.g. premises, building) from the perspective of expectations particular to potential recipients (users) (Peng, Tzeng 2013;Zalewska, Zalewski 2012). IT tool to support decision-making in the construction industry The ESORD IT tool contains groups, types and kinds of criteria entered into the system, which have been defined in accordance with the aspect of selection of the apartment (house) variant with reference to preferences of potential purchasers and users. Figure 1 presents the overall block diagram of algorithms used by ESORD, using the variant ranking methods (Książek 2010b(Książek , 2011. 
Algorithm solving the presented problem As a result of conducted surveys, the basic group of criteria for the assessment of residential construction facilities (Table 1) The apartment plan the survey, using the preferences specified by respondents. The collection of criteria segregated into importance groups, their importance levels and preferences defined by decision-makers with respect to a residential construction facility constitute the starting point for the methodology of solution assessment suggested by the authors. In the opinion of the authors, it will be possible to improve prioritisation of the solutions and select the best one using the obtained survey results, correlated using a supporting IT tool. ESORD orders the variants using the entire group of implemented assessment method algorithms or using only those indicated by the user (decision-maker). As a result, one receives a table and a visual presentation of results in the form of aggregated assessments (resulting from the use of a given group of methods) and classification of the considered variants using each of the methods of multicriteria assessment. Figure 1 presents the general chart of activities during the assessment of decision variants within the framework of the suggested methodology. In order to arrange the project variants, the following multi-criteria assessment methods were applied: ELECTRE (Roy 1991), ideal point (Hwang, Yoon 1981;Zalewski 2013), AHP (Saaty 1994;Gudienė et al. 2014), total, weighted total (MacCrimmon 1968), and the method using elements of fuzzy logic (Zadeh et al. 1975;Zadeh 1978;Corriere et al. 2013;Radziszewska-Zielina 2011;Kaya, Kahraman 2014). In order to implement the suggested methodology, the ESORD IT tool was developed (Expert System of Developer Solutions Assessment). The software uses algorithms of the above-listed methods and detailed results of surveys that constitute the basis to define the level of importance of specific criteria. This chapter presents performance of specific calculations leading to prioritisation of the considered variants, using numeric examples from the ESORD IT tool (Książek 2010c). The above chart was expanded so that IT implementation of algorithms of specific methods of assessment to the ESORD software was possible. Input data for all methods Input data for all calculation methods constitute: 1. Calculation algorithms of methods (Książek 2010b(Książek , 2011Książek, Nowak 2009). 2. List of criteria (described in Table 1) assessed by a group of respondents within the framework of surveys conducted by the authors (Krzemiński, Książek 2008). 3. Criteria importance levels (obtained on the basis of preferences specified by the respondents within the framework of surveys conducted by the authors and generated by ESORD). 4. A collection of decision variants implemented in the system, subject to assessment (the so-called apartment database presented in Table 2). 5. Survey of decision-maker's preferences concerning the examined residential construction facility (Książek 2010a). ESORD calculates the so-called main weights vector for specific criteria in order to determine the level of their completion. 
In order to determine the vector, the following marks were adopted for the main criteria: L Kassessment of Type 1 user for the Facility location criterion; In K -assessment of Type 1 user for the Technical infrastructure of the facility criterion; Ko K -assessment of Type 1 user for the Facility structure criterion; F Kassessment of Type 1 user for the Rooms functionality criterion; S K -assessment of Type 1 user for the Apartments finishing standard criterion; B K -assessment of Type 1 user for the Safety criterion; C K -assessment of Type 1 user for the Cleanness and ecology of the facility criterion; O K -assessment of Type 1 user for Attitude towards the facility criterion; K K -assessment of Type 1 user for the Costs criterion; L W -the main weights vector index value for the Facility Location criterion. The specific main weights indexes for the criteria are calculated using the following formula: where: NK -name (mark) of a given criterion. Therefore, for example, the value of L i W index in relation to the Facility location criterion is calculated in the following way: (2) It should be noted that while calculating the main weights vector, the system does not include the weights value for detailed criteria. For main criteria, the received results are presented in Table 3. Calculations for the selected assessment methods ESORD software has implemented calculation algorithms of the selected methods of multi-criteria assessment including the average (total) method, weighted average (weighted total) method, ELECTRE method, ideal point method, AHP method and the calculation method using fuzzy logic. Assessment of each criterion is made in accordance with the following dependence: O -NK criterion assessment given by Type 2 user; n -means the n sub-criterion within a given NK criterion. Raising the point assessment of a given criterion to a power was introduced to the system in order to enable more extensive differentiation of the final assessments of variants. In order to calculate values of the assessments of the considered decision variants for specific criteria, the system performs calculations in accordance with the dependence presented below: where: NK w W -assessment of "w" variant according to "NK" criterion. Based on the conducted calculation procedure, the system generates the results of ordering specific decision variants subject to assessment. The final assessments for specific solutions received after using the methodology are presented in Table 4. Table 5 contains a visual presentation of the orders of preferential variants generated by ESORD for specific assessment methods. Table 5. Visualization of the order of preferential variants for applied methods Methods Visualization of the order of preferential variants Average method Weighted average method ELECTRE method Continued Table 5 Methods Visualization of the order of preferential variants Ideal point method AHP method Calculation method using elements of fuzzy logic Expert assessment was applied to ten decision variants introduced to ESORD system. Their detailed description is included in Table 2. The analysis of the obtained results was conducted using the selected methods of multi-criteria analysis (average, weighted average, ELECTRE, ideal point, AHP and the calculation method using fuzzy logic). Table 6 presents the obtained list of preferential variants ordered from the best to the worst. Figure 2 presents the obtained order of the analysed solutions. 
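The weighting and scoring steps described above can be illustrated with a short sketch. The normalisation below is our reading of Equations (1)-(2) (each main-criterion score divided by the sum over all main criteria), and all numerical values, dictionary keys, and the simple weighted-sum scoring are hypothetical; ESORD additionally applies the other implemented methods (ELECTRE, ideal point, AHP, fuzzy logic).

```python
import numpy as np

# Type 1 user's importance scores for the nine main criteria (hypothetical values).
main_scores = {
    "location": 8, "infrastructure": 6, "structure": 5, "functionality": 9,
    "finishing standard": 7, "safety": 9, "cleanness/ecology": 6,
    "attitude": 4, "costs": 10,
}

# Main weights vector: each score divided by the sum of all main-criterion scores.
total = sum(main_scores.values())
weights = np.array([v / total for v in main_scores.values()])

# Assessments of two decision variants against the nine criteria
# (rows = variants, columns in the order of main_scores; hypothetical data).
variants = np.array([
    [7, 6, 8, 7, 6, 8, 7, 5, 6],   # variant 1
    [6, 7, 7, 8, 7, 7, 6, 6, 7],   # variant 2
])

# Weighted-average score per variant: the simplest of the implemented methods.
scores = variants @ weights
print(dict(zip(["variant 1", "variant 2"], np.round(scores, 3))))
```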
A and B methods may be compared by calculating cosθ , namely, the angle between their vectors in accordance with the dependence: where: A B -lengths (norm, value) of vectors A and B; ⋅ A B -the scalar product of vectors A and B; cosθ -the angle between vectors A and B. Tables 7-8 present the assessment of the correlation level of the ordering results for variant examples for the selected multi-criteria assessment methods in the linguistic form, and the numerical values reflecting themcos θ. The analysis of correlation of the results pertaining to ordering examples of decision variants for the selected multi-criteria assessment methods (Table 9) shows that specific numerical values fall between 0.738 and 0.999. Therefore, in order to compare correlation levels of the results within the framework of the assessment methods used in the calculation procedure, the following thresholds were adopted: − below 0.85 -low correlation of results; − 0.85-0.94 -medium correlation; − 0.95 -high correlation; − over 0.95 -very high correlation. − High correlation occurs between the results of ELECTRE, ideal point and AHP methods. For those methods, variant 2 was placed on the second place in the classification. In the ELECTRE method, variant 10 was on the first place, while for AHP and ideal point methods, variant 1 was the best. The specifics of ordering solutions in the ELECTRE method is that at the last stage of the calculation procedure, the calculations are made in a binary system, so the variants with slightly differentiated assessment may become equal (for example variant 7 and 8 or variants 3, 4 and 9). High correlation occurs also between the results of AHP, total and weighted total methods. − Medium correlation occurs between the results of ELECTRE, entropy, total and weighted total meth-ods. Although in the case of the above methods (apart from ELECTRE), the order of the three first variants is the same (namely 1 ≻ 2 ≻ 7), the differences in their assessments are quite significant, presumably because of high differentiation of the calculation algorithms. In the ELECTRE method, the order of the three first variants is different only because variant 10 is in the first place, namely 10 ≻ 2 ≻ 1. − Low correlation occurs between the results of the method using the elements of fuzzy logic, ideal point, entropy, total, weighted total, ELECTRE and AHP. For the method using elements of fuzzy logic and ELECTRE, variant 10 is the best. For the remaining methods, variant 1 is the best. Further order of the objects is as follows: 2 ≻ 1 ≻ 5 for ELECTRE method and 1 ≻ 7 ≻ 8 for the method using fuzzy logic. In the case of average, weighted average and ideal point methods -the order of the first three preferential variants is the same, namely, 1 ≻ 2 ≻ 7. For AHP method, the order of the first three preferential variants is as follows: 1 ≻ 2 ≻ 8. In the opinion of the authors, the approach of an average software user towards the obtained order of decision variants is important. Based on the above calculation example, it can be stated that among ten examples of apartments which were considered, variants with numbers 1, 2, 7 and 10 in all methods (apart from variant 7 in ELECTRE method and variant 2 in the method using fuzzy logic) were ordered (in various sequences) on the places from 1 to 4. For example, the variants with numbers 1 and 2 were the best for six of the considered methods. Variant 7 was placed on the third place of the preferential list for five methods. 
Variant 10 was placed fourth or fifth, accordingly; only in the method using elements of fuzzy logic and in the ELECTRE method did variant 10 obtain the highest position. In the opinion of the authors of this paper, no user is going to check all ten variants generated by the software, but only the first three or four, as those closest to meeting their expectations. Therefore, the calculation method applied to order the decision variants should be the least labour-consuming, which is easy to achieve using the ESORD tool. Conclusions: Based on the conducted research and analyses, the following conclusions may be drawn: 1. The expectations of an assessor towards a specific decision variant depend significantly on their approach in a given decision-making situation. 2. Because of the specifics of design solutions in residential construction, no single universal method can provide a comprehensive treatment of the problem. In a given decision-making situation, however, it is possible to obtain a reliable assessment result and, within a specific system of criteria, select the variant that best matches the expectations of the future user of an apartment.
3,900.4
2015-01-30T00:00:00.000
[ "Engineering", "Computer Science" ]
Topological and Time Dependence of the Action-Complexity Relation We consider the dependence of the recently proposed action/complexity duality conjecture on time and on the underlying topology of the bulk spacetime. For the former, we compute the dependence of the CFT complexity on a boundary temporal parameter and find it to be commensurate with corresponding computations carried out in terms of the rate of change of the bulk action on a Wheeler-DeWitt (WDW) patch. For the latter, we compare the action/complexity relation for $(d+1)$-dimensional Schwarzschild AdS black holes to that of their geon counterparts, obtained via topological identification in the bulk spacetime. The complexity/action duality holds in both cases, but with the proportionality changed by a factor of 4, indicating sensitivity to spacetime topology. I. INTRODUCTION The importance of dualities between quantum field and gravity theories is difficult to overestimate. The AdS/CFT correspondence [1], the first and most successful, posits the existence of a d-dimensional conformal field theory (CFT) on the boundary of a (d + 1)-dimensional asymptotically anti-de Sitter (AdS) spacetime, and has therefore led to several dualities between quantities observed in AdS (for example black holes in the bulk) and those in the CFTs defined on their boundaries. Recently Watanabe et al. [2] introduced a duality between a quantum information metric (or Bures metric) defined in the CFT on the boundary of an AdS black hole, and the volume of a time slice in the AdS. Their work was motivated by Susskind's idea [3] that it would be interesting to find a quantity in a CFT that might be dual to a volume of a co-dimension-1 time slice of an AdS black hole spacetime. More recently a similar idea was proposed suggesting a correspondence between computational complexity in a CFT and the action evaluated on a Wheeler-DeWitt (WDW) patch in the bulk [4]. In specific terms the conjecture is C = A_WDW/(πħ) (1), where the WDW patch refers to the region enclosed by past and future light sheets that are sent into the bulk spacetime from a time slice on the boundary. Subsequent work [5,6] was devoted to a better understanding of how one evaluates the right-hand side of this relation. Complexity is concerned with quantifying the degree of difficulty of carrying out a computational task. However, a sufficiently clear definition of its meaning in the CFT remains to be fully formulated. One attempt to this end [7] proposes a function providing a measure of the minimum number of gates necessary to reach a target state from a reference state in the CFT. This proposal is motivated by an earlier attempt [8] to provide a geometric interpretation of quantum circuits, which consisted of the definition of two states (a reference and a target state) along with a unitary operator mapping the former to the latter. The minimum number of gates required to synthesize the unitary operator is interpreted as a minimal length between the identity operator and that unitary operator in the manifold of unitaries. This manifold is endowed with a local metric known as the Finsler metric. The aforementioned proposal [7] chose instead the Fubini-Study metric, and the computational complexity obtained from some fixed reference and target states (related by unitaries involving a squeezing operator) appeared to be somewhat similar to the action on a WDW patch in the bulk.
Furthermore, a time dependent expression of the complexity derived from the CFT computations remains to be derived, despite previous work computing the rate of change of the conjectured complexity in terms of the rate of change of the action on a Wheeler deWitt (WDW) patch at late time [4][5][6]9]. It is of particular interest to determine how computational complexity grows in the late boundary-time limit. Attempts to build a timedependent complexity from CFTs [10][11][12] yielded an expression for complexity that did not grow linearly at late time as conjectured. Furthermore, using a recent proposal for circuit complexity [13], it has been shown [14] that that complexity growth dynamics has two distinct phases: an early regime whose evolution is approximately linear is followed by a saturation phase characterized by oscillations around a mean value. To this end, one goal of the current paper is to compute from the CFT perspective the dependence of complexity on boundary time in the late time limit. The other goal of our paper is to understand if and how equation (1) is sensitive to topological effects. The simplest spacetimes that allow the most straightforward exploration of such effects is the AdS black hole in (d+ 1) dimensions with an identification that renders it an RP d geon [15]. The complexity of the AdS black hole spacetimes has been studied recently [6], but their geon counterparts have not (though there has been recent work incorporating a different form of topological identifica-tion in the BTZ case (d = 2) [16]). In the particular case d = 2, the BTZ-geon is obtained by placing further identifications on the BTZ black hole; the boundary of the Euclidean continuation of the BTZ spacetime is an RP 2 space, whereas that of its geon counterpart is a Klein bottle [15,17]. Previous work [18] demonstrated that the quantum information metric [2] was sensitive to spacetime topology in this case, and so it is reasonable to expect complexity to have a similar dual dependence on bulk topology. Our paper is organized as follows. In section 2, the notion of complexity will be revisited and written in term of control functions, introduced as the Hamiltonian components in a basis of generalized Pauli matrices. The same steps will be followed in section 3, but here the manifold of unitaries will be taken to be SU (1, 1), which is non compact. A useful expression of the complexity will then be derived. Section 4 will specify our considerations to Gaussian states as they are very central in the understanding of quantum information processing with continuous variables. The reference and target states will both be taken to be Gaussian states. The complexity of a d dimensional CFT will be expressed in section 5, as well as its rate of change in the late time limit. To attain this, a time-dependent target state will be chosen, and thus the unitary map between the reference and target state will have time dependence. Section 6 will be devoted to the complexity of the Schwarzschild-AdS d+1 spacetime and its geon counterpart as a quotient space, along with its equivalent quantum system, and in section 7 the rate of change of the action in the bulk evaluated on a WdW patch for both the AdS d+1 black hole and the AdS d+1geon will be computed. The result will be two similar correspondence relations that illustrate the sensitivity of (1) to the topology of the bulk. The last section will be a conclusion and discussion, in which our results will be summarized in the context of previous work. II. 
COMPLEXITY AND COST FUNCTION Here we intend to define computational complexity in a quantum theory and study its evolution in terms of a single parameter. We revise the notion of complexity introduced in [8] as a quantity obtained from two fixed (in time) states and a unitary operator mapping one state to the other. We follow the same steps in the case where at least one of the states (from which the complexity is constructed) is time-dependent. This complexity can be understood as the minimum number of resources required to reach a given configuration of a quantum system starting from an initial configuration thereof. We will be working with quantum systems (more specifically CFTs) whose set of unitary operators corresponds to SU (2 n ). To this end, let us consider a quantum system whose Hamiltonian in an SU (2 n ) basis takes the form [8] where σ i are the 4 n − 1 basis matrices of SU (2 n ) and γ i (t) are the components of the Hamiltonian in that basis. These are functions of the variable t defined in the interval [s i , s f ], and are referred to as control functions. The evolution of an arbitrary operator V in the manifold SU (2 n ), whose Hamiltonian is of the form (2), satisfies the equation [8] where I is the identity operator. We have also defined t in the interval [s i = 0, s f = 1]. We now introduce two states, an initial reference state |R and a final target state |T , whose relationship is given by with U the unitary operator introduced in (3). It can be reached or approximated by a combination of unitary gates of SU (2 n ). In this context, computational complexity is defined as an expression quantifying the minimum number of gates or operators required to synthesize U . To make this concrete we introduce a cost function as a functional of the control function via the relation [8] C f (γ) = 1 0 f (γ(t))dt (5) where the function f is a given distance function. We define complexity by minimizing the cost function via In order to be more specific on the nature of the function f (γ), let us define the tangent space to the unitary manifold SU (2 n ) at the point U as T U SU (2 n ) (or T to be short). Thus, we identify f (γ) with a metric function mapping elements of the tangent bundle T M (M = SU (2 n )) at a point U to elements of the set of scalars R. That is, f : T M → R. We can reformulate f (γ) in terms of a new metric function via [8] where y = The coordinates y i are determined for a given unitary operator in equation (A-2) in the appendix. The cost function (5) is proportional to the length associated with the metric function F (U, y), and will have the form [8] l F (s) = I dtF (s(t), [s] t ) (9) where s : I → M maps elements of an interval I to those of the manifold M = SU (2 n ), s(t) is a point on the manifold and [s] t the tangent space to the manifold at that point. The complexity measure (6) is obtained by minimizing l F (s) over the interval from reference to target state. There are various different types of functions F (U, y) that one can employ to compute (9). We will only enumerate those that involve an L (1) -norm and an L (2) -norm along the path, namely [8] where p(wt(σ i )) and q(wt(σ i )) are weight functions. Suppose that the target state is a state that depends on a parameter σ (not to be confused with the basis functions σ i ) defined in the interval [s i , s f ]. 
The expression (4) in this case takes the form Introducing the Fubini-Study metric [7] ds F S (σ) = dσ |∂ σ |Ψ(σ) | 2 − | Ψ(σ)|∂ σ |Ψ(σ) | 2 (12) we find yielding the length as function of σ associated with the FS metric. The above expression tells us about the evolution of the computational complexity as a function of σ. We shall postpone the question as to whether the current metric is an L (1) -or L (2) -norm in the coming sections. III. SU (1, 1) MANIFOLD AND METRIC GENERATION We now review the steps required for the derivation of the unitary operator mapping the reference to the target state and thus the Fubini-Study metric that the unitary yields [7], but with complexity reformulated to be timedependent. For simplicity we shall deal with quantum systems whose manifolds of unitaries are non compact and isomorphic to SU (2 n ) (with n = 1). We shall specifically work with the group SU (1, 1) which admits the Poincare disk as the manifold associated with its coset SU (1, 1)/U (1). Coherent states, which are either characterized by complex eigenvalues of a non compact generator of the group SU (1, 1) [19] or by points of a coset space of the same group [20], can be defined for a unitary irreducible representation of SU (1, 1). SU (1, 1) coherent states are the result of a two mode squeezing operator acting on a Fock state. ξ is a complex parameter and K ± are generators of the SU (1, 1) group that we will define explicitly in the next few steps. We start with a target state |Ψ(σ) (where σ is a parameter in the time interval [s i , s f ]) in a d dimensional CFT, which obeys the equation (11) with a reference state being a two-mode state of some momentum spaces. This two-mode state consists of a product state | − → k , − − → k of two basis states, one mode representing a state of positive momentum − → k and the other of negative momentum − − → k . This can also be expressed in terms of the quantum numbers associated with the momenta |n k , n −k . We also consider the unitary operator U (σ) to be of the form and Λ a momentum cut-off parameter. Note that the direction that only gives an overall phase to the state is modded out . are the generators of the SU (1, 1) algebra. These latter quantities can be written in term of annihilation opera- and satisfy the commutation relations It is straightforward to show that (15) can be put into the form [21] where the new functions γ + ( − → k , σ), γ − ( − → k , σ) and γ 0 ( − → k , σ) read as It is desirable to obtain the simplest possible form of (19). This can be done by imposing the conditions [7] on the reference state, yielding and so only the factor involving γ + needs to be taken into account. The quantity δ d−1 (0) comes from the com- k that appear in the generator K 0 . Now that we have managed to find a reduced form of the unitary operator U (σ), we will chose a reference state and attempt to derive the complexity using the Fubini-Study metric (12). By choosing a reference state annihi- we obtain, when omitting the variables and the integrals We find that (24) becomes upon choosing N so that the target state is normalized. Inserting (25) in the Fubini-Study metric, we get (see also appendix (A-4)) Restoring the variables and the integrals, we obtain a more general form of the complexity (13) with the expression with γ ′ + = ∂γ + /∂σ and V d−1 the (d − 1)-dimensional volume of a time slice. Upon comparison with (10) we see that (28) is an L (n) -norm. 
We will mostly use the case where n = 1 as it leads to a function easier to integrate as well as to a complexity whose rate of change corresponds to that of the action evaluated in the bulk. Note that the gates for different k's are not allowed to act in parallel in order to obtain the C (1) norm. IV. GAUSSIAN STATES Here we briefly review the Gaussian states of a quantum system [7]. Such states play a central role in quantum information processing with continuous variables as well as in quantum field theory where the vacuum states of some field theories (for example, quantum electrodynamics) appear to be Gaussian states. We shall choose the reference and target states to be Gaussian states. Consider a scalar field theory in a d dimensional spacetime with the Hamiltonian density where m is the mass of the field Φ(x) and π(x) is its conjugate momentum. These obey the commutation rules The field and its conjugate momentum in terms of the annihilation a k and creation operators a † k are explicitly given by with with all other commutators zero. It is helpful to write things in momentum space where the Hamiltonian can be expressed in a more elegant form as . 1: (a) Conformal diagram of a BTZ (d = 2) black hole. As we can see, a CFT is defined at each boundary thereof. (b) Quantum circuit which consists in an unitary U acting on n qubits. In the context of the current work its associated complexity can be regarded as equivalent to the action integral evaluated on a WDW patch in BTZ black hole. and the field and its associated momentum become In the sequel we consider a CFT for which the field is massless (m = 0). A pure Gaussian state |S is a state for which [7] where α k = ω k corresponds to the ground state |m of the theory . We can consider the target state to be the ground state. To construct the reference state |R(M ) we write the Bogoliubov transformation [7] b and require where β + k = cosh 2r k , β − k = sinh 2r k and r k = log( 4 M/ω k ). This corresponds to a state with α k = M in (36). V. CONFORMAL FIELD THEORY IN d DIMENSIONS Employing the formalism of the previous sections, we now compute the complexity defined in the CFT dual of an AdS gravitational theory. The spacetimes we have in mind for the latter are AdS black holes which, according to the AdS/CFT correspondence, admit CFTs on their boundaries. The Penrose diagram for the AdS d+1 black hole is illustrated in figure 1. The BTZ case can be described as a quotient space of AdS d+1 with d = 2. Here we aim to derive the computational complexity associated to quantum theories defined in the boundary CFTs. States on such CFTs are described by thermofield double (TFD) of finite temperature, defined in a thermal circle of period β with H 1,2 the free Hamiltonians, |n 1,2 the eigenstates of the free Hamiltonians defined on the CFT 1,2 and E n their corresponding energies. These states on the CFT 1 can be assigned to the positive momentum modes − → k and the ones on the CFT 2 to the negative momentum modes − − → k of a scalar field theory. We see that for a free scalar field theory. The state |T F D(0) is annihilated by operators b ± − → k defined via a Bogoliubov transformation as with tanh θ k = e −βω k /2 . We can regard the states in the boundaries as twomode states where one side of the diagram (figure 1a) corresponds to states of a conformal scalar field theory with positive momentum − → k and the other side to a scalar field theory with negative momentum states − − → k . 
The total Hamiltonian of the system according to (34) will be where ω k = k, a 1 = a− → k and a 2 = a − − → k . Using (41), the total Hamiltonian (42) in the basis (17) has the form and so (39) becomes with Equation (44) using the transformation of the unitary operator (19). We obtain a state equivalent to (24) and (25), but where γ ± = −i sinh(2θ k ) sin Ξ cos Ξ + i cosh(2θ k ) sin Ξ Ξ = 2ω k t and ω k = k. (47) In term of the parameter σ the control function γ + can be written as It is easy to check that γ + = γ + (k, σ) as a function of σ, satisfies the conditions γ + (k, s i ) = 0 and corresponding to reference and target state respectively. It appears that the control function is time-dependent and this fact will imply a time-dependent complexity. In order to compute the complexity in the simplest possible manner we consider situations in which the control function obeys the condition |γ + | < 1, which is holds if the operator is unitary. Now that we have assembled all the ingredients, the complexity (29) as a function of t is as detailed in eq. (B-1) in the appendix. The computational complexity can be understood as the minimum number of gates needed to synthesize a unitary operator U (figure 1b). Before proceeding further, we define the total energy of the scalar field as (see (D-2) in the appendix) Hence the complexity (50) takes the form Note that the rate of change of the complexity for very large t is with n d = 2(2 d − 1)ζ(d) a dimensionless constant. Equation (53) means that the variation of the complexity with respect to time at late time is proportional to the total energy E of the CFT . This total energy E will later be identified with the mass of the AdS black hole dual to the CFT. VI. GEON AND DIRECT PRODUCTS In this section we repeat these computations in the context of the AdS d+1 -geon. The AdS d+1 black hole has the metric which, in Kruskal coordinates (Ũ ,Ṽ , x i ) with i = 1 to d − 1, takes the form where f and r are smooth functions of (Ũ ,Ṽ ). The AdS d+1 -geon is the quotient spacetime resulting from a freely activing involutive isometry applied to the AdS d+1 black hole [15]. It is obtained via the identification [15,23] J : (Ũ ,Ṽ , which corresponds to the change in the spacetime coordinates. P (x i ) = −x i is the antipodal map on the (d − 1)-dimensional sphere S d−1 , which corresponds to κ = 1 in (54). The state associated with the CFT on the geon boundary is the thermofield single [24] |Ψ g = e −(β/4+it)H |C where |C is the cross-cap state, consisting of an entangled state between left-and right-moving modes of a free boson CFT (see figure 2a). In terms of the modes j n andj n of the holomorphic and anti-holomorphic conserverd currents J = i∂X andJ = i∂X, respectively, it is solution to [25] [j n + (−1) nj −n ]|C = 0 (59) and thus takes the form which clearly shows entanglement between the left-and right-moving modes of the CFT. In the case of the geon space, we claim that due to the reflection coming from the involution J the metric function F (U, y), satisfies The right-hand side of (61) saturates the geon metric function. This make sense when the complexity is regarded as the minimum time required to approximate the unitary. The presence of first and second terms on the right hand side of (61) is depicted in figures 2a and 2b. Thus the unitary operator U ′ and the tangent space vectors y ′ to the manifold of unitary operators at U ′ correspond to those where the spacetime coordinates for the left-modes are (−t, −x i ). 
Equation (61) can be understood as the metric function of a quantum system consisting of the direct product of two other quantum systems (figures 3a and 3b). Indeed, let us suppose that F A , F B , and F AB are the metrics given in equation (7) on SU (2) nA , SU (2) nB and SU (2) nA+nB , respectively. The metric F AB of the system composed of a unitary U on the n A qubit and a unitary V on the n B qubits is [8] where H A ∈ SU (2) nA and H B ∈ SU (2) nB (omitting the tensor factors I A ⊗ . and . ⊗ I B acting trivially on V and U , respectively). The Finsler metrics F A , F B and F AB are said to form an additive triple of Finsler metrics. Equation (62) leads to the inequality . The quantity we are now going to compute is the complexity corresponding to the metric F (U ′ , y ′ ) in (61). We first introduce the notion of an F-Isometry. A In the tangent space to the manifold at s(t), it acts like with h * defined as such that the F-Isometry reads as Under the identification (57), the momentum components transform as From the above relations we infer that the quantities k = are invariant under these transformations. Hence the control function is still invariant under these transformations. Thus, the geon transformation is an F-Isometry, and still obeys the condition |γ + | < 1. The complexity is therefore equal to twice that of the AdS d+1 black hole since the two contributions from the geon metric contribute equally to the complexity and the rate of change thereof is Equations (71) and (72) hold for any (d + 1) dimensional AdS geon with d ≥ 2. For any limiting value of t, the geon complexity is still twice the amount obtained in (52). More explicitly, we have (73) VII. RATE OF VARIATION OF THE ACTION In this section we verify the action-complexity conjecture in the context in which we have been working: between an action evaluated in the bulk (on a particular patch) and the complexity computed in the CFTs at the boundaries of the Schwarzschild AdS black holes and their geon counterparts. Consider a Schwarzschild-AdS black hole in d + 1 dimensions whose metric is given by where k = 0 for planar black holes. We aim to compute the action evaluated on a WDW patch, as shown in the figure 4a, for this black hole. The different contributions to the action from the bulk and the boundary terms are [5,6] with the cosmological constant (not to be confused with the cut-off parameter in the CFTs) Λ = −d(d − 1)/(2l 2 ) and the curvature radius R = −d(d + 1)/l 2 . The first term in (75) accounts for the bulk contribution. The other terms are the boundary contributions. The second term is the surface or Gibbons-Hawking-York term, in which K represents the extrinsic curvature. The third term comes from the null hypersurfaces with κ a parameter related to the tangent vector to these hypersurfaces. The fourth term (Hayward term) is a joint term involving the junctions of spacelike/timelike hypersurfaces [26][27][28][29]. The last term is also a joint term involving the junctions of null hypersurfaces. Evaluating the bulk contributions, we obtain for the four quadrants of figure 4a where v = t + r * and r * = dr/f . The surface contributions lead, for the four quadrants in figure 4a, to with h the induced metric on the surface. The only nonzero contributions are those coming from the singularities (r = 0). The null surface contributions are with x µ = (λ, θ A ) parametrizing the null hypersurfaces and γ the induced metric on them. 
κ satisfies the equation k^µ ∇_µ k^ν = κ k^ν, and k^µ = ∂x^µ/∂λ are the tangent vectors to these surfaces. It is possible to choose everything to be affinely parametrized such that κ = 0. We can thus infer that the null surfaces do not contribute to the action. The joint (Hayward) term contributions take the standard form; in our case there is no contribution coming from this term since there are no spacelike/timelike junctions for the chosen patch (figure 4a). The contribution of the last term for the four quadrants follows. It is important to recall that here the only nonzero contributions are those of the junctions in the region near the singularities (r = ǫ_0 with ǫ_0 very small). We also have to keep in mind that those contributions only appear when we consider black holes with hyperbolic metrics (k = −1) whose horizon radii are smaller than the AdS radius (r_h < l). We shall not consider these kinds of black holes any further; they lead to similar conclusions. After summing up all these contributions we find that the rate of change of the action at late time is set by the mass term M*, with M* given in appendix (C-3). We shall see in the next few steps that the mass term M* can be identified with the total energy E of the scalar field. Focusing now on the geon case, since in figure 4b only half of the patch (two quadrants) contributes to the action, the total action for the geon space will be half of that of the AdS_{d+1} black hole. In fact, the time in the geon conformal diagram (see figure 4b) moves up for both the left and right CFTs. The geon action can be interpreted in the AdS context as in (82). This can be justified by the fact that a given point in the geon diagram has two images in the AdS diagram. For symmetric time evolution (t_1 = t_2 = t/2) the second term on the right-hand side of (82) is time independent, whereas the first term is time dependent and is only evaluated on half the patch of the AdS black hole. The rate of change at late time for the geon action then follows, and we thus obtain for d ≥ 2 the relation (84). Setting the total energy E of the CFTs to be equal to the mass term M* of the AdS_{d+1} black hole, we infer that the complexity (53) defined in the CFTs at the boundaries of the AdS_{d+1} black holes can be expressed in terms of the AdS_{d+1} action (81) as in Equation (85), which is the conjectured relation. Making use of equations (73) and (84) we find the same relation for the AdS_{d+1} geon except for a factor of 4, indicative of the sensitivity of the complexity to the underlying topology of the spacetime. In [16] the action was computed at t = 0 for the BTZ geon on a WDW patch partitioned into non-intersecting pieces associated with each boundary and a remaining interior piece. It was found that the action evaluated on each partition is precisely half the WDW patch-action of the corresponding two-sided BTZ wormhole (t = 0) and is independent of the black hole mass. VIII. CONCLUSION We have derived the computational complexity of a CFT defined on the boundary of an AdS_{d+1} black hole as a function of a temporal variable t, and have explicitly computed the small-t and large-t limits. The quantity t can be regarded as the boundary time parameter, yielding the rate of change of the CFT complexity. Up to a factor, this equals n_d times the rate of change of the bulk action evaluated on a WDW patch, as conjectured [4,5].
Our results are commensurate with previous work [7], where the target state was defined for a fixed value of time and where a different control function was employed, resulting in a dimensionless complexity proportional to V d−1 Λ d−1 (see discussions in the appendices). Similar results have been derived in the context of the cMERA circuit [30][31][32]. In contrast to this, we began with a particular configuration of the TFD state defined on the boundaries of an AdS d+1 black hole as the target state and obtained a more complex control function depending on the parameter t. This led us to a dimensionless expression (50) for the complexity that is a function of t, which is proportional to V d−1 Λ d−1 as well (see appendix E). We have also established a correspondence between the geon quotient space of the AdS d+1 black hole and a quantum system consisting of a product of two quantum systems. We found that the complexity of the CFT on the boundary of the AdS d+1 geon is twice that of the its AdS d+1 black hole counterpart. Furthermore, we found that the rate of change of the bulk action of the AdS d+1 geon evaluated on a WDW patch is half of that of the AdS d+1 black hole. We therefore infer that the complexity/action relationship is sensitive to the topology of the bulk spacetime: there exists the same kind of correspondence relation between the complexity of a CFT and the bulk action of a geon evaluated on a WDW patch (86), but with the additional (topological) factor of 4. It would be interesting to compute in future investigations the computational complexities C (n) (with n > 1) associated with the same control function γ + ( − → k , σ) and see whether they can lead to desired and more general forms of the complexity C (1) . Likewise an exploration of the computational complexities C (n) (with n ≥ 1) for charged and/or rotating AdS black holes (and their geon counterparts [15]) should also provide further insight. thereof, the tangent to SU (2 n ) at this point U , admits the coordinates For the metric function F 1 (U, y) = f (γ), the complexity or length (Euclidean distance) associated with it reads In the Poincare disk model (with γ i (i = +)), the complexity or length (hyperbolic distance) associated with the metric F 1 (U, y) has the form (A-4) B. Complexity This subsection is devoted to the derivation of the final form of the computational complexity C (1) (t). As introduced earlier in the previous sections, it has the form where we employ the control function yielding in turn C. AdS/CFT (Planar black holes) Here we review some useful notions on the metric of Schwarzschild-AdS black hole, particularly the planar one, as well as the metric of its boundary CFT. A planar Schwarzschild-AdS black hole in d + 1 dimension has the metric (C-1) Changing variables to z = l/r, (C-1) becomes The mass of this black hole is The metric of the CFT on the boundary of the black hole is of the form (κ = 0) for planar black holes. It can be rewritten as and we can labelt as t. D. Total energy of the scalar field Here we compute the total energy of the scalar field knowing the probability densities of the Hamiltonian eigenstates |n, n . Starting with the state |T F D(0) in (40) we find that the density matrix is obtained from the expression . E. Comparing methods for Computing Complexity We compare here our approach in section II to a recent proposal [13] in which a lattice was used to study the complexity of a free scalar field theory. 
The distinction between the two approaches consists of the choice of gates, the distance or metric function, and the regularization method. 1. Choice of Gates The approach of ref. [13] is to minimize over all gates obtained by considering the exponential of bilinear generators of the form Φ(x 1 )π(x 2 ) (squeezing operator). They found that optimal circuits (in absence of penalty factors in the cost functions) admit normal mode decompositions and require for their construction only generators of the form Φ( , which are momentum preserving. These generators have the form G k =x kp−k +p kx−k with x a on the lattice. By contrast, in our approach we consider Hamiltonian operators consisting of combinations of generators G 2k = Φ( − → k )Φ(− − → k ) and G 3k = π( − → k )π(− − → k ). We thus minimize over the gates constructed from the generators G 2k and G 3k . 2. Choice of Metric Instead of a Finsler metric [13] (as studied by Nielsen [8]), we use the Fubini-Study metric, and subsequently derive a time-dependent complexity which reads as with β the period of the thermal circle in which is defined the TFD state (reference state). Choice of Regularization Method The methods of [13] yielded the result for the complexity (28) with n = 2, where the frequencies and where δ is the lattice spacing. In d − 1 dimensions, the lattice volume is V = L d−1 = N δ d−1 with N the number of sites. For QFTs the complexity is dominated by ultraviolet (UV) modes (ω k = 1/δ). The leading term thus reads as The square root in (E-3) comes from the cost function F 2 . To obtain an expression similar to the one proposed in [33]: an F 1 cost function was employed [13], yielding the complexity where ω 0 is some arbitrary frequency. A similar result [7] was obtained by considering the same set of gates, i.e. G 1k = Φ( − → k )π(− − → k )+π( − → k )Φ(− − → k ) employed in ref. [13] along with a Fubini-Study metric. The complexity was found to have the form which, when n = 1 and M = Λ, becomes where Λ is the cut-off and M a parameter that characterizes the reference state. This result is in accordance with (E-4). In our approach, in order to get the proposed holographic complexity (E-4), we assume that the period β of the thermal circle is of the order of the lattice spacing δ. Indeed, for β very small and t of the order of β (t ∼ β), the complexity becomes and is similar to the proposed expression in ref. (E-4) with β ∼ δ.
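To make the appendix-E comparison concrete, the sketch below evaluates an F1-type complexity sum over the normal modes of a small periodic lattice, along the lines of the construction attributed to ref. [13]. The lattice dispersion relation, the per-mode cost, and the arbitrary reference frequency omega0 are assumptions made for illustration; the paper's explicit expressions are not reproduced in the text above.

```python
import itertools
import numpy as np

def lattice_complexity_F1(N, delta, m=0.0, omega0=1.0):
    """F1-type complexity sum over the normal modes of an N x N periodic lattice (d - 1 = 2).

    Assumes the standard lattice dispersion
        omega_k^2 = m^2 + (4/delta^2) * sum_i sin^2(k_i * delta / 2)
    and a per-mode cost 0.5*|log(omega_k/omega0)|; both are illustrative
    assumptions, not formulas quoted from the paper.
    """
    total = 0.0
    ks = 2.0 * np.pi * np.arange(N) / (N * delta)            # allowed momenta of the periodic lattice
    for kx, ky in itertools.product(ks, repeat=2):
        omega2 = m**2 + (4.0 / delta**2) * (np.sin(kx * delta / 2)**2 + np.sin(ky * delta / 2)**2)
        omega = np.sqrt(omega2)
        if omega > 0.0:                                       # drop the exact zero mode when m = 0
            total += 0.5 * abs(np.log(omega / omega0))
    return total

for delta in (0.2, 0.1, 0.05):
    N = int(round(2.0 / delta))                               # keep the physical size L = N*delta fixed
    C = lattice_complexity_F1(N, delta)
    print(f"delta = {delta:5.2f}   N = {N:3d}   C_F1 = {C:10.2f}   C_F1*delta^2 = {C * delta**2:7.3f}")
```

Halving the lattice spacing at fixed physical size shows the ultraviolet dominance discussed above: the sum is carried almost entirely by the modes with omega_k of order 1/delta, so C grows essentially like the number of lattice sites V/delta^(d−1), up to a slowly varying logarithm.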
8,706.6
2018-04-19T00:00:00.000
[ "Physics" ]
Determination of Local Magnetic Material Properties Using an Inverse Scheme The precise knowledge of material properties is of utmost importance for motor manufacturers to design and develop highly efficient machines. Due to different manufacturing processes, these material properties can vary greatly locally and the assumption of homogeneous material parameters for the electrical steel sheets is no longer feasible. The goal of our research project is to precisely determine these local magnetic material properties using a combined approach of measurements, numerical simulations, and the applications of inverse methods. In this article, we focus on the identification of the local linear permeability of electrical sheets considering cutting edge effects. In doing so, the electrical sheets are divided into subdomains, each assigned with a linear magnetic material model. The measurement data are generated artificially by solving the magneto-static case using the finite element (FE) method and overlay these data with a Gaussian white noise. Based on the measured and simulated data, we apply our inverse scheme to determine the parameters of the linear material model. To ensure solvability of the ill-posed inverse problem, a Tikhonov regularization is used and the regularization parameter is computed via Morozov’s discrepancy principle. I. INTRODUCTION V ARIOUS cutting processes, e.g., punching, water cutting or laser cutting, are an essential part in the production chain of electrical steel sheets.It is generally known that these processes lead to a deterioration of the magnetic properties, especially in the area of the cutting edges (see [1] and [2]), and can significantly reduce the efficiency of electrical machines.Therefore, the accurate knowledge of the local magnetic properties is of utmost importance for motor manufacturers and must be taken into account during the design and development process. A widely used approach to determine the influence of cutting edges is to increase the cutting length to bulk material ratio (see [3] and [4]).This is achieved by cutting an electrical steel sheet into several smaller strips, such that the total width of all strips aligned next to each other matches the original sheet size.Varying the strip width leads to different cutting length to bulk material ratio combination, which are then measured with a single sheet tester (SST) or Epstein frame. 
In our approach, we combine methods based on measurements, numerical simulations, and inverse schemes to determine the magnetic material behavior locally.To generate measured data, a sensor-actuator (SA) system is developed, capable of locally exciting the electrical steel sheets and measure the magnetic flux density.Using an appropriate SA model, numerical data are produced using the finite element (FE) method.Based on the measured and numerical data, an inverse problem is solved to determine the search-for unknown parameters of an assumed material model.In this article, the general methodology is presented and the accuracy as well as convergence of our approach is investigated.Therefore, we restrict ourselves to a linear (no dependence on the field itself) magnetic material behavior for the electrical steel sheets.The measured data are artificially generated by forward simulations solving the magnetic field for the magneto-static case applying the FE method and overlay this data with a Gaussian white noise.Starting with an initial guess for the unknown material parameter and considering the artificially generated measurement data, the inverse scheme is applied to determine the search-for parameter for the considered material model.Due to the ill-posedness of the inverse problem, a Tikhonov regularization [5] is applied to ensure solvability. II. SA MODEL The SA system in Fig. 1 is considered to be a stacked iron core and is excited with two excitation coils.The system is capable of magnetizing the electrical sheets and measuring the magnetic flux density locally, using a sensor array.The sensor array includes S Hall-and/or GMR-sensors and measures the x-, y-, and z-components of the magnetic field.Two electrical steel sheets (Sample 1 and Sample 2) are placed concisely together along the cutting edge (see Section II-B).Since both samples originate from the same batch and unchanged cutting process parameters are considered, identical and symmetric material behavior for sample 1 and sample 2 can be assumed. For the generation of measurement data, sample 1 and sample 2 are measured at N different positions along the x-direction.Due to the high change in material behavior in the vicinity of cutting edges, the density of measurement positions in this area are much higher than in the bulk material.For the sake of completeness, between each measurement position, the sheets are demagnetized to ensure, no residual magnetism is present (assumption for the numerical simulation).The resulting measuring data contain the three magnetic field components for each sensor and measurement position B meas x,i, j , B meas y,i, j , and B meas z,i, j with i = 1, 2, . . ., S the sensor positions and j = 1, 2, . . ., N the measurement positions.Based on these data, the amplitude of the magnetic field density is computed using the euclidean norm and results in the measurement data A. Numerical Model to Consider Edge Effects The deterioration of the material parameters due to the cutting edges is pronounced within a small range of millimeters up to maximum centimeters and depends strongly on the used cutting technique as well as cutting process parameters.To appropriately represent the large material change in this area during the simulation, each electrical steel sheet is divided into M non-equidistant distributed subdomains m , where the subdomain size in the immediate vicinity of the cutting edges is much smaller than in the bulk material (see Fig. 2). 
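A minimal sketch of the post-processing described in the measurement paragraph above: the amplitude of the measured flux density is formed from its x-, y-, and z-components by the Euclidean norm, for S sensors and N measurement positions. The array contents below are placeholders; only the sensor and position counts follow the numerical example given later in the article.

```python
import numpy as np

S, N = 6, 6                       # number of sensors and measurement positions
rng = np.random.default_rng(0)

# Placeholder component data B_meas_x[i, j], B_meas_y[i, j], B_meas_z[i, j] in tesla.
Bx = rng.normal(0.0, 0.1, size=(S, N))
By = rng.normal(0.0, 0.1, size=(S, N))
Bz = rng.normal(0.5, 0.1, size=(S, N))

# Amplitude of the flux density at each sensor and position via the Euclidean norm.
B_meas = np.sqrt(Bx**2 + By**2 + Bz**2)
print(B_meas.shape)               # (S, N): one amplitude per sensor and measurement position
```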
In the numerical simulation, the electrical steel sheet is generally subject to a certain material behavior that is characterized by a corresponding material model μ (linear, nonlinear, or hysteretic model). Depending on the chosen model, a different number of selectable parameters defines the material model μ = f(a_1, . . ., a_K); these parameters must be determined such that the model behavior fits the real material behavior. In order to take the cutting edge effects into account, the selected material model is assigned to each subdomain, whereby the model parameters for each subdomain a_{1,m}, . . ., a_{K,m} can be selected independently. Thus, the searched-for parameter vector reads p = (a_{1,1}, . . ., a_{K,1}, a_{1,2}, . . ., a_{K,2}, . . ., a_{1,M}, . . ., a_{K,M})^T. The advantage of this approach is that no adaptation of the material model is necessary to take into account factors influencing the magnetic material behavior, e.g., residual stresses, microstructure, etc., since these are inherently included in the model parameters for each subdomain. B. Sample Arrangement and Magnetization of the Electrical Steel Sheets For the proposed method, it is crucial that a flux density change is clearly present due to the degradation of magnetic properties caused by the cutting edges. For this purpose, we consider the following arrangements and evaluate the magnetic flux densities at the measurement point s_1 for the case without (homogeneous linear relative permeability µ_r,5^exact in Table II) and with cutting edge effects, using the sample model as shown in Fig. 5 and the permeabilities µ_r,m^exact listed in Table II. The arrangement with only one sheet shows hardly any magnetization in the vicinity of the cutting edges (see Fig. 3). Thus, the change in flux density is caused predominantly by geometry, and changing the material properties in the non-magnetized area does not lead to any significant field change. Consequently, the sensitivity (∂B/∂μ) can be classified as very small. For the second case, two sheets are placed parallel along the cutting edges. As shown in Fig. 4, with this arrangement the magnetization is clearly pronounced in the cutting edge area; therefore, a high sensitivity can be assumed. The assumptions made regarding the sensitivity are confirmed by computing the relative change of the magnetic flux density due to neglected and considered cutting edge effects. III.
INVERSE SCHEME The inverse scheme calculates the searched-for parameter vector p based on the measured magnetic flux densities B_meas and the simulated magnetic flux densities B_sim. Therefore, a nonlinear least squares problem has to be solved to find the optimal parameter p_opt such that the error norm between B_meas and B_sim is minimized. Due to the inevitable measurement noise in the data, difficulties in solving the nonlinear least squares problem occur. More precisely, small perturbations in the measurement data have a pronounced negative effect on the computed parameters and cause the solution strategy to diverge. From a mathematical point of view, this can be stated as an ill-posed problem. To overcome this problem, a Tikhonov regularization is applied to ensure convergence. In doing so, the minimization problem reads as in (2), with F_i(x_j, p) = B_sim,i(x_j, p) − B_meas,i(x_j), N_p the number of measurement positions, N_s the number of sensor positions, B_sim,i(x_j, p) the simulated magnetic flux density, B_meas,i(x_j) the measured magnetic flux density, α the regularization parameter, A the magnetic vector potential, and J the electric current density. Finding the optimal parameter p_opt of the minimization problem (2) is performed iteratively, using a quasi-Newton scheme [6], with I the identity matrix, q the search direction, p_ref the a priori information, λ the line search parameter (determined by the Armijo rule), and B the approximated Jacobian obtained with Broyden's update formula. Since an iterative solution strategy is used, a stopping criterion must be defined; therefore, the following error norm is used. A. Regularization Parameter The choice of an appropriate regularization parameter is crucial for finding an optimal solution during the iterative procedure. Since an a priori upper bound δ for the error norm, with B_exact the exact data without noise, is available, the discrepancy principle of Morozov is used. Therefore, an initial regularization parameter α_init is chosen such that the regularization term is pronounced compared to the error term ∥F_i(x_j, p)∥_2^2. This is necessary due to the initially poorly approximated Jacobian B, which shows a very high condition number; otherwise, the optimization procedure diverges. During the iteratively solved minimization problem, α_init is decreased in each iteration step by the factor a until the discrepancy condition is fulfilled. For all computations, a = 0.9 and α_init = 1 have been chosen. IV. NUMERICAL RESULTS As an example, electrical steel sheets with dimensions 32 × 0.5 × 20 mm are characterized. The affected area due to cutting is w_CE = 5 mm. As described in Section II-A, each steel sheet is decomposed into ten subdomains (see Fig. 5), with x_m ∈ [0.5, 1, 1.5, 2, 11] in mm. Furthermore, a linear material behavior is considered, and the material model assigned to each subdomain is defined by μ_i^lin = µ_0 µ_r,i, with µ_r,i the relative permeability for the subdomain i and µ_0 the permeability of vacuum. Since symmetrical material properties are assumed, the searched-for parameter vector of the minimization problem reads p = (µ_r,1, µ_r,2, µ_r,3, µ_r,4, µ_r,5)^T. (13)
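The structure of the iteration described in Section III above can be sketched in a few dozen lines. The forward model below is a stand-in for the 3-D magneto-static FE solve, and the explicit forms of the regularized normal equations and of Broyden's rank-one update are assumptions based on the standard quasi-Newton/Tikhonov formulation, since the paper's displayed equations are not reproduced in the text. Only the overall structure — Broyden-updated Jacobian, Tikhonov term with prior p_ref, Armijo line search, and Morozov-driven reduction of α by the factor a = 0.9 — mirrors the description above; the safety factor tau and all numerical values are invented for illustration.

```python
import numpy as np

def forward(p):
    """Hypothetical forward model B_sim(p); in the article this is a 3-D magneto-static FE solve."""
    A = np.array([[1.0, 0.4, 0.1],
                  [0.3, 1.2, 0.2],
                  [0.1, 0.5, 0.9],
                  [0.6, 0.2, 1.1]])
    return A @ np.tanh(p)                       # mild nonlinearity, purely illustrative

def residual(p, B_meas):
    return forward(p) - B_meas                  # F_i = B_sim,i - B_meas,i

def cost(p, B_meas, p_ref, alpha):
    F = residual(p, B_meas)
    return 0.5 * F @ F + 0.5 * alpha * np.sum((p - p_ref) ** 2)

# synthetic data (parameters are in arbitrary scaled units, not real permeabilities)
p_exact = np.array([0.4, 0.9, 1.6])
B_exact = forward(p_exact)
rng = np.random.default_rng(1)
B_meas = B_exact * (1.0 + 0.10 * rng.standard_normal(B_exact.shape))   # 10 % Gaussian noise
delta = np.linalg.norm(B_meas - B_exact)        # a-priori noise bound for Morozov's principle

p = np.array([1.0, 1.0, 1.0])                   # initial guess
p_ref = p.copy()                                # a-priori information
alpha, a, tau = 1.0, 0.9, 1.1                   # alpha_init = 1, reduction factor a = 0.9; tau is assumed

# crude initial Jacobian by forward differences (deliberately rough, as in the text)
B_jac = np.zeros((B_meas.size, p.size))
for j in range(p.size):
    dp = np.zeros_like(p); dp[j] = 0.1
    B_jac[:, j] = (forward(p + dp) - forward(p)) / 0.1

for it in range(200):
    F = residual(p, B_meas)
    # regularized normal equations: (B^T B + alpha I) q = -(B^T F + alpha (p - p_ref))
    g = B_jac.T @ F + alpha * (p - p_ref)       # gradient of the regularized cost
    q = np.linalg.solve(B_jac.T @ B_jac + alpha * np.eye(p.size), -g)

    # Armijo backtracking line search
    lam = 1.0
    while cost(p + lam * q, B_meas, p_ref, alpha) > cost(p, B_meas, p_ref, alpha) + 1e-4 * lam * (g @ q) and lam > 1e-8:
        lam *= 0.5

    p_new = p + lam * q
    s = p_new - p
    if s @ s > 0:                               # Broyden rank-one update of the approximate Jacobian
        B_jac += np.outer(residual(p_new, B_meas) - F - B_jac @ s, s) / (s @ s)
    p = p_new

    if np.linalg.norm(residual(p, B_meas)) <= tau * delta:
        break                                    # Morozov's discrepancy principle satisfied
    alpha *= a                                   # otherwise shrink the regularization parameter

print("recovered (scaled) parameters:", np.round(p, 3))
```

Keeping the Tikhonov term active until the discrepancy condition is met is what stabilizes the early iterations, where the Broyden-approximated Jacobian is still poorly conditioned.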
In total, the electrical steel sheets are measured at six different positions with the SA system. To test the proposed inverse scheme, it is assumed that the exact relative permeabilities µ_r^exact as well as a priori information regarding the material change are known. Due to the inverse procedure, an initial guess of the relative permeabilities is necessary. All the considered values of the initial µ_r^init, reference µ_r^ref (a priori information), and exact µ_r^exact relative permeability for each subdomain are listed in Table II. The permeabilities (µ_r^init, µ_r^ref, and µ_r^exact) for subdomain 5 are equal. This is based on the assumption that the material properties of the bulk material are known from SST or Epstein measurements; thus, this parameter is assumed to be constant during the iterative procedure. The measurement data B_meas are generated artificially by 3-D forward simulations solving the magnetic field for the magneto-static case with the exact permeabilities. Furthermore, the generated data are overlaid by a Gaussian white noise with 10% standard deviation. The results of the FE simulation for the exact and noisy measurement data are shown in Fig. 6. Based on the given data, the searched-for parameter p is computed using the proposed method, and the results are shown in Fig. 7. Overall, a smooth convergence of the parameters can be observed. The remaining error of the parameters is due to the following points. First, due to the measurement noise, only a solution in the vicinity of the real solution (without noise) can be found. Second, depending on the regularization parameter, the regularization term and subsequently the a priori information are included in the calculated parameters. V. CONCLUSION In this article, a methodology for determining the local magnetic material properties taking into account cutting edge effects is described. Therefore, a concept of a SA system is proposed, capable of magnetizing the electrical steel sheets and measuring the magnetic flux density locally using 3-D Hall sensors. Based on the measured and simulated data, an inverse problem is solved to determine the unknown parameters of a defined material model. This is performed by a quasi-Newton method, where the Jacobian is approximated via a Broyden update formula. Due to the ill-posedness, a Tikhonov regularization is used to ensure solvability of the iterative solution strategy. The methodology is applied to an example considering a spatially varying linear magnetic permeability. The results show a fast and smooth convergence of the model parameters. For future work, this procedure will be tested for a nonlinear material model. Fig. 1. Concept of the SA system with two electrical steel sheets (sample 1 and sample 2); s_i denotes the sensors in the sensor array. Fig. 2. Example of sample discretization into M subdomains (color coded), each subdomain assigned a material model μ_m; the sample width is w_S, w_CE is the affected area due to cutting, and x_m is the length of the subdomains. TABLE I: ... FOR CONSIDERED ARRANGEMENTS, with B_CE the norm of the magnetic flux density taking into account cutting edge effects and B_const the norm of the magnetic flux density considering homogeneous material behavior. Fig. 5. Measurement position 1 of the SA system with six sensors for the characterization of an electrical steel sheet with ten subdomains, each subdomain assigned a linear material model μ_m^lin.
Fig. 6. Measurement data B_meas, exact (without noise) and with noise, for the number of sensors N_S = 6 and the number of measurement positions N_P = 6. Fig. 7. Convergence behavior and error of the searched-for parameter p. TABLE II: Considered initial µ_r^init, reference µ_r^ref, and exact µ_r^exact relative permeabilities for each subdomain.
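A minimal sketch of how the artificial measurement data described in Section IV can be produced: take the noise-free FE amplitudes and overlay Gaussian white noise with a 10 % standard deviation. Whether the noise is applied multiplicatively or additively is not stated explicitly in the text, so the multiplicative form used here is an assumption, and the field values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder for the noise-free FE amplitudes |B| at N_S = 6 sensors and N_P = 6 positions.
B_exact = np.full((6, 6), 1.2)            # tesla, illustrative constant value

noise_level = 0.10                         # 10 % standard deviation
B_meas = B_exact * (1.0 + noise_level * rng.standard_normal(B_exact.shape))

delta = np.linalg.norm(B_meas - B_exact)   # upper bound on the data error, used by Morozov's principle
print(f"noise norm delta = {delta:.4f}")
```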
3,479
2024-03-01T00:00:00.000
[ "Mathematics" ]
Uncovering a novel mechanism: Butyrate induces estrogen receptor alpha activation independent of estrogen stimulation in MCF-7 breast cancer cells Abstract Butyrate is a promising candidate for an antitumoral drug, as it promotes cancer cell apoptosis and reduces hormone receptor activity, while promoting differentiation and proliferation in normal cells. However, the effects of low-dose butyrate on breast cancer cell cultures are unclear. We explored the impact of sub-therapeutic doses of butyrate on estrogen receptor alpha (ERα) transcriptional activity in MCF-7 cells, using RT-qPCR, Western blot, wound-healing assays, and chromatin immunoprecipitation. Our results showed that sub-therapeutic doses of sodium butyrate (0.1 - 0.2 mM) increased the transcription of ESR1, TFF1, and CSTD genes, but did not affect ERα protein levels. Moreover, we observed an increase in cell migration in wound-healing assays. ChIP assays revealed that treatment with 0.1 mM of sodium butyrate resulted in estrogen-independent recruitment of ERα at the pS2 promoter and loss of NCoR. Appropriate therapeutic dosage of butyrate is essential to avoid potential adverse effects on patients’ health, especially in the case of estrogen receptor-positive breast tumors. Sub-therapeutic doses of butyrate may induce undesirable cell processes, such as migration due to low-dose butyrate-mediated ERα activation. These findings shed light on the complex effects of butyrate in breast cancer and provide insights for research in the development of antitumoral drugs. Introduction Breast cancer is a significant global health concern, with 70% of breast tumors being estrogen-dependent (Meneses-Morales et al., 2014).While tamoxifen therapy is an effective treatment for hormone receptor-positive breast cancer in premenopausal women, prolonged administration can lead to tumor resistance and increase the risk of developing bone and uterus cancer (Barrios-García et al., 2014).Currently, efforts are focused on developing improved antitumoral strategies to combat human breast cancer. Short-chain fatty acids (SCFAs) have been found to play roles in epigenetic regulation (Fellows and Varga-Weisz, 2020).Butyrate, a SCFA produced by the intestinal fermentation of dietary fiber by associated microbiota (Louis and Flint, 2017), has been the subject of significant research due to the "butyrate paradox," which describes the differential effects of treatment on normal and tumor cells (Donohoe et al., 2012).In particular, butyrate treatment at doses over 2 mM acts as a carbon source in colonocytes but drives apoptosis mechanisms in tumor colon cells (Berni Canani et al., 2012).Additionally, butyrate has been shown to induce a decrease in the expression of estrogen, progesterone, and prolactin receptors (DeFazio et al., 1992;Ormandy et al., 1992;Hamer et al., 2008).These findings suggest that butyrate may hold promise as a potential therapeutic agent for breast cancer treatment. 
Butyrate is a promising agent for treating cancer, particularly hormone receptor-dependent cancers such as breast cancer (Chen et al., 2019b;He et al., 2021;Jaye et al., 2022).Studies have shown that butyrate can enhance the efficacy of established therapies such as doxorubicin, irinotecan, or oxaliplatin when used as an adjuvant (Chen et al., 2019a;He et al., 2021).However, the clinical use of butyrate is limited by its rapid metabolization in the liver and enterocytes following oral or rectal administration, resulting in poor plasma concentrations that are lower than therapeutic requirements (Davis et al., 2000;Blaak et al., 2020).To address this limitation, new delivery systems are currently under development to achieve stable plasma concentrations of butyrate (Roda et al., 2007;Donovan et al., 2017;Wang et al., 2023). The intestinal microbiota is the primary source of shortchain fatty acids, and the butyrate concentration in the colon ranges from 14.7 to 24.4 mM (Salimi et al., 2017;Blaak et al., 2020).In contrast, plasma concentrations are typically less than 20 μM (Olsson et al., 2021;Martinsson et al., 2022;Tang et al., 2022).Experimental conditions demonstrating butyrate's antitumoral activity typically involve at least 2 mM (Meneses-Morales et al., 2019).However, subtherapeutic concentrations of butyrate (less than 0.5 mM) have received little attention in antitumoral research due to their perceived lack of anticancer action.Nonetheless, previous reports have shown that treatment with less than 0.5 mM of butyrate can induce ligand-independent transcription of prostatic-specific antigen in a prostate cancer cell line (Sadar and Gleave, 2000) and induce estrogen receptor alpha mRNA in a breast cancer cell line treated with a concentration of 0.3 mM of butyrate (DeFazio et al., 1992).Another report showed increased proliferation of a colon cancer cell line treated with 0.5 mM of butyrate (Donohoe et al., 2012). Breast cancer is a complex and challenging disease to treat, and butyrate has emerged as a promising candidate for its antitumoral potential.However, subtherapeutic doses of butyrate are a plausible scenario in the clinical setting, and its effects on cancer cells are poorly understood.Thus, this study aimed to investigate the cellular responses to subtherapeutic doses of butyrate in a breast cancer cell line as a model.Our findings reveal that butyrate can activate hormone receptors, stimulate transcription of estrogen-dependent genes, and promote migration of breast cancer cells.By elucidating the effects of low-dose butyrate treatment on breast cancer cells, we can better understand the mechanisms underlying butyrate's antitumoral potential and optimize its clinical use for breast cancer treatment. 
Cell culture and Treatment The MCF-7 breast cancer cell line, representative of the luminal A subtype and characterized by the expression of estrogen receptor (ER) and progesterone receptor (PR), was procured from ATCC (Manassas, VA) by the Instituto de Investigaciones Biomédicas, UNAM.Subsequently, it was graciously provided to the Facultad de Ciencias Químicas, UJED.Cultivation of MCF-7 cells was carried out in DMEM medium supplemented with 10% FBS, antibiotics, and antimycotic agents until reaching confluence.The cells were then seeded in six-well plates and, after 24 hours, were washed with PBS and maintained in DMEM without phenol-red and 10% charcoal-stripped FBS for 4 days to reach hormone deprivation conditions.To study the effects of butyrate on MCF-7 cells sodium butyrate (NaB) was purchased from Sigma-Aldrich (St. Louis, MO, USA), five different concentrations (0.1 to 2 mM) and one without treatment (control condition) were used.The cells were treated with sodium butyrate for 16 hours and then harvested for further analysis. Western blotting MCF-7 cells were seeded in p100 plates and incubated with five sodium butyrate treatments (0.1 to 2 mM) and one control condition for 24 and 48 hours.The cells were then harvested and lysed using Triton x-100 buffer plus 2 mM sodium decavanadate pH 7.6 to release nuclear receptors from chromatin.The protein extracts were quantified using the Bradford method.30 micrograms of each total protein extract were loaded onto SDS-PAGE gels, transferred to PVDF membranes, and incubated overnight with primary antibodies against beta-actin and estrogen receptor alpha (Santa Cruz, CA).The proteins were visualized using a secondary horseradish-peroxidase-conjugated antibody and an enhanced chemiluminescence (BM Chemiluminescence Western Blotting Kit (Mouse/Rabbit), Roche).The results were digitalized using a ChemiDoc Bio-Rad® gel imaging system. Wound-healing assay MCF-7 cells were seeded in 6-well dishes.After confluence, the monolayer was "scratch-wounded" in triplicate, washed with PBS and treated with five sodium butyrate treatments (0.1 to 2 mM) and one control condition.Images of the cells were captured at the beginning and every 24 hours for three days to monitor cell migration and wound closure.The migration rate of the cells was quantified using ImageJ and Fiji plugin. 
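A small sketch of the wound-closure quantification mentioned above: given wound areas measured in ImageJ/Fiji at 0, 24, 48 and 72 h, the percentage of closure is computed relative to the initial scratch area. The area values below are placeholders, not data from the study.

```python
# Hypothetical wound areas (in pixels^2) exported from ImageJ/Fiji for one well.
areas = {0: 185000, 24: 140000, 48: 95000, 72: 52000}

a0 = areas[0]
for t, a in areas.items():
    closure = (a0 - a) / a0 * 100.0      # percent of the initial wound area that has closed
    print(f"{t:2d} h: wound closure = {closure:5.1f} %")
```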
Chromatin immunoprecipitation To investigate the binding of estrogen receptor alpha (ERα) to the pS2 gene promoter in response to butyrate and estradiol treatments, we performed chromatin immunoprecipitation (ChIP) assays. MCF-7 cells were treated with sodium butyrate or a control condition for 45 min, crosslinked with formaldehyde, and sonicated to fragment the chromatin. Then, 2 mg of specific anti-ERα antibody or anti-luciferase antibody as a control was added to 2 mg of chromatin extract, and the mixture was incubated overnight at 4°C. We used a DNA region located 3 kb upstream of the pS2 promoter as a negative control. After immunoprecipitation, the DNA-protein complexes were eluted, reverse-crosslinked, and purified. The pS2 gene promoter region and the control region were amplified by PCR using the immunoprecipitated chromatin as a template; the primer sequences were: 5'-CCG GCCATCTCTCACTATGAA-3' (forward) and 5'-GGTCATCTTGGCTGAGGGATCT-3' (reverse) for the pS2 promoter region; 5'-AGCTGGGTGTCCTTGTAAAG-3' (forward) and 5'-AGTTT GGGAGGAAGTGGATC-3' (reverse) for the pS2 control region. The PCR products were separated on a 2.5% agarose gel, visualized with GelRed, and quantified by densitometry analysis using the ChemiDoc gel imaging system and Quantity One software (Bio-Rad). Statistical analysis All experiments were performed as independent triplicates, and the results are expressed as the mean ± standard error of the mean. Statistical significance was assessed using Student's t-test or ANOVA, with a predetermined significance level of 0.05, as outlined in the figure legends. Data analysis was carried out using the OriginPro 2021 statistical software. Results Treatment with subtherapeutic doses of sodium butyrate (0.1 and 0.2 mM) increased the expression of estrogen receptor alpha (ERα) and the estrogen-responsive genes pS2 and Cathepsin D in MCF-7 cells, as measured by RT-qPCR (Figure 1). The ERα transcript was upregulated by 30% with low-dose sodium butyrate treatment (Figure 1A), while pS2 was upregulated by 20% and Cathepsin D by as much as 80% (Figures 1B and 1C). These findings suggest that low-dose butyrate induces estrogen-independent ERα transcriptional activity in MCF-7 cells. As previously reported, the administration of a therapeutic dose of butyrate (>2 mM) resulted in a decrease in the expression of ERα and pS2 transcripts. The effects of subtherapeutic doses of butyrate on ERα protein expression were investigated in MCF-7 cells using western blot analysis. After treatment for 24 and 48 hours, a slight increase in ERα protein expression was observed beyond 24 hours (Figures 2A and 2B). However, these differences were not statistically significant. On the other hand, treatment with higher doses of sodium butyrate (≥1 mM) resulted in a decrease in ERα protein expression, which is consistent with previous reports.
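A sketch of the statistical comparison described in the methods above, using SciPy in place of OriginPro: triplicate normalized values for a control and two treatment groups are compared by Student's t-test and one-way ANOVA at the 0.05 level. All numbers are invented placeholders, not measurements from the study.

```python
from scipy import stats

# Hypothetical normalized triplicate values (e.g., band intensity relative to control).
control   = [1.00, 0.95, 1.05]
nab_0p1mM = [1.28, 1.35, 1.22]
nab_2mM   = [0.55, 0.61, 0.49]

t, p = stats.ttest_ind(control, nab_0p1mM)
print(f"t-test control vs 0.1 mM NaB: t = {t:.2f}, p = {p:.4f}")

f, p_anova = stats.f_oneway(control, nab_0p1mM, nab_2mM)
print(f"one-way ANOVA across groups: F = {f:.2f}, p = {p_anova:.4f}")
print("significant at 0.05" if p_anova < 0.05 else "not significant at 0.05")
```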
Previous studies have suggested that estrogen receptor ligands such as tamoxifen can modulate cell migration (Lymperatou et al., 2013;Sabol et al., 2014;Han et al., 2018).To investigate whether estrogen-independent activation of estrogen receptor by subtherapeutic doses of butyrate can influence cell migration, we performed wound-healing assays in MCF-7 cells treated with different concentrations of NaB (Figure 3A).As shown in Figure 3B, treatment with 0.1 and 0.2 mM of sodium butyrate led to a faster wound-area reduction compared to the control condition, indicating enhanced cell migration.In contrast, higher doses of sodium butyrate (≥1 mM) did not induce wound-area reduction (Figure 3C), suggesting that the effect on migration is specific to subtherapeutic doses of butyrate.Consistent with these findings, 72 h wound-healing assays revealed a significant increase in wound closure with subtherapeutic doses of butyrate compared to therapeutic ones (Figure 3D). To investigate the underlying mechanisms of butyrateinduced estrogen receptor activation, we performed chromatin immunoprecipitation assays to evaluate whether butyrate activates ERα through genomic mechanisms.Our results showed that treatment with 0.1 mM of sodium butyrate for 45 minutes led to estrogen receptor alpha-enriched recruitment at the pS2 promoter region, and to a lesser extent, with 0.2 mM (Figure 4A).We further investigated the effect of butyrate treatment on co-regulator recruitment at the pS2 promoter by performing ChIP assays with NCoR and pCAF antibodies.Our results showed a loss of binding of the transcriptional co-repressor NCoR to the pS2 promoter with 0.1 mM of sodium butyrate treatment and an increased binding with 0.2 mM (Figure 4B).In contrast, our assays with MCF-7 cells under the conditions of 0.1 and 0.2 mM of NaB for 45 minutes showed no significant statistical differences in co-activator pCAF recruitment (Figure 4C).We used an anti-luciferase antibody for the control chromatin immunoprecipitation, and PCR control reactions with primers specific to a region three kb upstream of the pS2 promoter as recruitment-negative control did not yield amplification products (not shown). Taken together, our findings demonstrate that subtherapeutic doses of butyrate can activate estrogen receptor-mediated transcription and enhance cell migration in MCF-7 cells.Our chromatin immunoprecipitation assays suggest that these effects may be mediated through genomic mechanisms involving estrogen receptor alpha recruitment and co-regulator binding as for the pS2 promoter.These results provide new insights into the potential role of butyrate in modulating estrogen receptor signaling in breast cancer. Discussion In this study, we investigated the influence of subtherapeutic doses of butyrate on ERα activity and its cellular implications.Our chromatin immunoprecipitation assays showed that subtherapeutic doses of butyrate induce estrogen independent ERα transcriptional activity, such as for the enhanced ERα recruitment to the pS2 promoter region.This finding is significant because it reveals a previously unknown mechanism by which butyrate regulates estrogen receptor activity. 
Previous studies have investigated the effects of butyrate treatment on gene expression in various cancer cell lines.For example, Sadar and colleagues (2000) reported on the role of butyrate in regulating the expression of prostatespecific antigen (PSA) in LNCaP prostate cancer cells.They discovered that low concentrations of butyrate (0.2-0.5 mM) increased PSA mRNA levels, while higher concentrations (0.5-5 mM) decreased its expression.Furthermore, their results suggested that butyrate could activate androgen receptor (AR) transactivation activity in a ligand-independent manner.Our study using real-time PCR revealed a statistically significant difference in the mRNA levels of ERα, pS2, and Cathepsin-D under low sodium butyrate treatments.Specifically, we observed an increase in the mRNA levels of pS2 and Cathepsin-D in MCF-7 cells treated with 0.1-and 0.2-mM sodium butyrate, which suggests that subtherapeutic doses of butyrate can induce ERα transcriptional activity.However, higher concentrations of butyrate were found to decrease the mRNA levels of ERα and pS2, consistent with previous reports (DeFazio et al., 1992;Sun et al., 2005), these actions could be linked to the HDAC inhibitor role of butyrate (Donohoe et al., 2012).In the case of Cathepsin-D mRNA, we observed a further increase in mRNA levels following treatment with 1 and 2 mM of NaB, which is likely due to the induction of apoptosis, as previously reported (Minarowska et al., 2007). Although an increase in ERα mRNA levels was observed, western blot assays did not show any significant changes in ERα protein levels after 0.1-and 0.2-mM sodium butyrate treatments at 24 h or 48 h.However, higher concentrations of sodium butyrate resulted in a decrease in estrogen receptor protein levels, consistent with previous reports (DeFazio et al., 1992).These findings emphasize the multifaceted effects of butyrate on estrogen receptor regulation. Our "wound-healing" assays revealed a significant increase in the speed of scratch closure in MCF-7 monolayers treated with subtherapeutic doses (0.1 and 0.2 mM) of sodium butyrate, indicating the potential of butyrate to induce collective cell migration.Prior investigations have consistently indicated an inhibitory impact of various concentrations of butyrate (ranging from 0.1 to 2 mM and higher) on the proliferation of MCF-7 and other breast cancer cell lines.This inhibition was determined through MTT or CCK-8 assays conducted over a 4-day period, with measurements recorded at 24-hour intervals (Li et al., 2015;Salimi et al., 2017).These findings were further substantiated in the context of a colon cancer cell line by Li et al. in 2018.The researchers replicated similar experiments utilizing HCT116 cells and the CCK-8 assay.As a result, these consistent findings reinforce the proposition that the observed enhancement in wound closure is more plausibly attributed to an augmentation in cell migration rather than the induction of cell proliferation.Future studies should investigate the impact of butyrate on ERα-negative cell lines, such as MDA-MB-231, to determine whether the effect of butyrate on cell migration is ERα-dependent or related to the enhanced histone acetyltransferase (HAT) activity induced by lower butyrate concentrations (Donohoe et al., 2012). 
It is important to note that the concentration of butyrate in plasma is typically less than 20 μM (Olsson et al., 2021;Martinsson et al., 2022;Tang et al., 2022).In order to achieve antitumoral effects, concentrations higher than 2 mM are typically required (Meneses-Morales et al., 2019).Our results demonstrate a dual influence of butyrate concentration on estrogen receptor activity, indicating a narrow therapeutic window for butyrate.This suggests the necessity of a fine balance tuning between subtherapeutic concentrations and antitumoral effects. According to previous reports, low-dose butyrate treatment has the potential to increase the availability of acetyl groups and activate histone acetyltransferases (HATs) (Donohoe et al., 2012).In this study, we sought to investigate whether our findings could be attributed to genomic mechanisms of regulation.To this end, we conducted chromatin immunoprecipitation assays and found that subtherapeutic doses of sodium butyrate (0.1 mM) led to an estrogenindependent recruitment of estrogen receptor alpha to the pS2 promoter in MCF-7 cells.These results suggest that low-dose butyrate treatment may induce ERα transcriptional activation through estrogen-independent mechanisms. After examining the effects of butyrate on the recruitment of representative nuclear receptor co-regulators, NCoR and pCAF, our study did not yield conclusive evidence.Consequently, the precise mechanisms by which butyrate facilitates the recruitment of nuclear hormone receptors to their regulated promoters' cognate sequence remain unclear.To gain a better understanding of these mechanisms, it is essential to conduct additional research that considers the temporal dynamics of co-regulator recruitment under butyrate treatment.Such research would help to clarify the molecular pathways involved in butyrate-induced co-regulator recruitment and its downstream effects on nuclear hormone receptor activity at regulated promoters. The current study has some limitations that should be acknowledged.Firstly, we employed a single-cell line (MCF-7) to examine the impact of butyrate on ERα transcriptional regulation.Although MCF-7 is a well-established cellular model for estrogen receptor-dependent breast cancer, this choice may constrain the generalizability of our results.Future investigations could broaden the scope of our findings by exploring the effects of butyrate on various other cell lines.Such efforts would provide a more comprehensive understanding of the potential applications and limitations of butyrate as a treatment for breast cancer. To summarize, our study adds to the expanding body of research on the influence of butyrate on gene expression and underscores the potential therapeutic risks of butyrate in cancer treatment.Our findings demonstrate that even subtherapeutic doses of butyrate can elicit estrogen-independent ERα transcriptional activity, which could have significant implications for treating estrogen receptor-positive breast cancer.These results indicate that butyrate has the potential to be a valuable addition to existing breast cancer therapies, nonetheless, additional studies are needed to further understand the mechanistic underpinnings of butyrate's effects on ERα transcriptional regulation and to optimize its potential for clinical use in treating breast cancer. 
Figure 1 - Figure 1 -Subtherapeutic doses of sodium butyrate (NaB) can enhance estrogen receptor-mediated transcription in a ligand-independent manner.The results demonstrate the RT-qPCR assessment of mRNA expression for ERα (A), pS2 (B), and Cathepsin D (C) in MCF-7 cells after a 16-hour treatment with butyrate.The data were normalized to beta-actin, and the experiment was repeated three times (*p<0.05). Figure 2 - Figure 2 -High doses of sodium butyrate (NaB) significantly decrease the levels of estrogen receptor protein.Western blot analysis of ERα protein after 24 h (A) or 48 h (B) of treatment did not reveal any statistically significant increase in response to 0.1-and 0.2-mM concentrations of sodium butyrate.However, higher concentrations of NaB led to a decrease in ERα protein signal (n = 3; *p<0.05;**p<0.01). Figure 3 - Figure 3 -Subtherapeutic doses of sodium butyrate (NaB) significantly increased cell migration as evaluated through the "scratch-wound" healing assay.The results obtained at different time points (A) with subtherapeutic (B), and therapeutic (C) doses of NaB showed differential effects, as demonstrated by the percentage of wound closure observed after the 72-hour assay (D).The data were normalized to the control condition (n=3; **p<0.01). Figure 4 - Figure4-Butyrate induces ligand-independent recruitment of the estrogen receptor to the pS2 promoter.We performed pS2 promoter-specific PCR and used total chromatin as a positive control for Input (5%) amplification (Up) and antibody-precipitated chromatin from MCF-7 cells treated with sodium butyrate as a template (Down).We performed triplicate experiments and generated graphs to show the recruitment of estrogen receptor alpha (A), NCoR (B), and pCAF (C) under different sodium butyrate (NaB) treatments (n = 3; **p<0.01;***p<0.001),using densitometry analysis.
4,492.6
2024-03-08T00:00:00.000
[ "Medicine", "Biology" ]
Adsorption Capacity and Mechanism of Expanded Graphite for Polyethylene Glycol and Oils Expanded graphite (EG) shows higher adsorption capacity for oils such as salad oil and SD300 oil than polyethylene glycol (PEG) with different MW (4000, 10000, 20000). To illustrate their different adsorption mechanism, adsorption capacities of EG for these pollutants are firstly detected. And then stepwise adsorption for oils is carried out with EG which has been saturated first by PEG with different MW. Then difference between stepwise adsorbance of oil is checked with deviation analysis. Scanning electronic microscopy (SEM) analysis is used to show structure difference of EG adsorbed different adsorbates. It is testified adsorption isotherms of PEG are all type I, PEG molecules lay flat on EG surface and equilibrium adsorbance decrease with the increase of PEG MW. Adsorbance for SD 300 oil and salad oil can reach 131.3 g/g and 127.8 g/g respectively. Deviation analysis for stepwise adsorbance of oil shows no statistical significance. EG saturated firstly by PEG, still has an average adsorption capacity of 98 g/g for SD300 oil and 85 g/g for salad oil and it does not change with the initial PEG concentration. SEM photos illustrate the adsorption of oil on EG is mainly filling. In the adsorption of PEG water solution, there is severe breakage of the V-type pore and shrinkage of the particle. Introduction Accidents of oil tankers and oiliness wastewater have caused serious environmental problems.It gives not only environmental problem but also great loss of energy resource.Thus, effectively recovering technology is needed.Adsorption treatment with porous material 1 is one of current methods.Wide application of PEG in industries, such as medicament, metal forming, cosmetics and food makes, it become another major wastewater source. EG is a kind of porous material, it has attracted attention of scientists and engineers as an adsorbent with a high adsorption capacity for organic materials, such as heavy oil and organic molecules [2][3][4] .Pores in EG are described using a 4-level model 5 , these four level pores are expected to act quite differently in adsorption performance of EG for various liquids.In the adsorption of heavy oils 6,7 , large open space among entangled worm-like particles, first-level and second-level pores of EG are found to be very important, but microporous or mesoporous pores are useless.Adsorbance of grade A heavy oil was detected as 83.0 g on 1.0 g of EG with a bulk density 8 of 0.006 g/cm 3 and the pores with the size of 0.004 to 4 µm was too low to explain the adsorption capacity of the measured oil. In the adsorption of PEG with active carbon as adsorbent, Zhao et al 9 reported the adsorbed molecules lay flat on active carbon surface and isotherms are all Langmuir type.Chang et al 10,11 indicated a high adsorption capacity of 303 g/g during 14 days for PEG with an average MW of 6000 from copper electroplating solutions at 288-313 K. Contrast to the adsorption on activated carbon, basic study of PEG on EG is scarce.The porous structure of EG makes it have adsorption capacity for PEG.Therefore, purpose of this study is: with salad oil, Thermal oil, PEG (4000, 10000, 20000) as reference compounds, based on adsorption experiment, to detect the adsorption capacity of EG for these adsorbates and to testify their different adsorption mechanism on EG.SEM observation, stepwise adsorption and deviation analysis of stepwise adsorption are carried out simultaneously. 
Experimental EG is firstly prepared according to literature 12 and then it was expanded in KSW heating oven (Huacheng Oven Factory of Tientsin) at 900 °C.Structural parameters of EG were characterized by bulk density, specific surface area and pore cubage as listed in Table 1.These data were detected with Pore Master 60GT instrument (Quantachrome Instruments, USA) under varying pressures of 0.818 PSIA to 59667.199PSIA. Adsorbates characteristic Thermal oil and salad oil were used in the experiment, the viscosity was determined as 0.059 and 0.016 (100 mL/g) repectively at 25 °C with an Ubbelohde viscometer.Pure substances were used in the experiment.PEG with different MW of 4000, 10000, 20000 was used as adsorbate.They are purchased from the Fu Cheng Chemical Factory (Catalog No.20050707).Simulated PEG wastewater is prepared by dissolving PEG in distilled deionized water at various concentrations.In quantitative analysis 13 , Dragendoff was used as colored reagent of PEG and absorbance of the colored complex (color reaction lasted 10 min) was detected with T6 New Century UV spectrophotometry (Puxi Tongyong Instrument Limited Company of Beijing).Absorbance values were recorded at the wavelength for maximum absorbance (λ max ) (as listed in Table 2) and its solution was initially calibrated for concentration in terms of absorbance units. Adsorption for oil Batch adsorption experiments have been carried out with about 0.200 g EG (m 1 ) and 100.0 mL oil in 250 mL flask with plug and mixed well gently.At different intervals of time, EG is filtrated with wire gauze and quantified for estimation of balance adsorbance for oil.Balance time is detected as about 24.0 h at 25 °C.Incremental weight of wire gauze is calculated as m 2 .Adsorbance q e of oil on EG is calculated according to equation (1): Adsorption for PEG Static adsorption experiments of PEG have been undertaken by taking about 0.200 g (m 1 ) EG with 100.0 mL (V) PEG solution of known initial concentrations C 0 in different conical glass flasks.Adsorption equilibrium time at 25 °C was about 40 min for PEG (4000), about 60 min for PEG (10000) and about 180 min for PEG (20000).Samples were analysed by using standard spectrophotometry technique.Adsorbance was determined according to equation (2): Stepwise adsorption of oil A series simulated PEG wastewater were prepared with concentration of 50, 200, 500 mg/L.Adsorption experiments for PEG were firstly carried out according to method mentioned in the adsorption of PEG.EG, saturated by different concentration of PEG solution, was filtrated with wire gauze and placed for 30.0 min and then it was successively used for the adsorption of oil.After equilibrium, filtration with wire gauze and placed for 30.0 min, it was dried at 110 °C for about 7.0 h to insure a constant of m 2 .The stepwise adsorbance of oil was calculated according to equation (1). Adsorption capacity of EG for oil EG was found to have much better adsorption capacity for oily materials.In the experiment, saturated adsorbance during 24.0 h was used to show its adsorption capacity.The value was detected as 131.3 g for SD300 and 127.8 g salad oil for every gram of EG with a expanded volume of 320 mL /g. 
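The numbered equations (1) and (2) referred to in the adsorption procedures above are not reproduced in the text, so the sketch below uses the forms they most plausibly take given the surrounding definitions: a gravimetric adsorbance for oil based on the weight gain of the wire gauze, and the standard solution-depletion adsorbance for PEG. Both formulas and all numerical values are assumptions for illustration, not the paper's exact expressions or data.

```python
def oil_adsorbance(m1, m2):
    """Plausible form of Eq. (1): grams of oil retained per gram of EG.

    m1: mass of dry EG (g); m2: weight gain of the wire gauze after filtration (g),
    assumed to include both the EG and the adsorbed oil.
    """
    return (m2 - m1) / m1

def peg_adsorbance(c0, ce, volume_ml, m1):
    """Plausible form of Eq. (2): solution-depletion adsorbance in mg PEG per g EG."""
    return (c0 - ce) * volume_ml / m1     # c0, ce in mg/mL; volume in mL; m1 in g

# Illustrative numbers only:
print(f"q_e(oil) = {oil_adsorbance(0.200, 26.5):.1f} g/g")
print(f"q_e(PEG) = {peg_adsorbance(0.500, 0.410, 100.0, 0.200):.1f} mg/g")
```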
Adsorption capacity and adsorption isotherm of EG for PEG

Static adsorption capacities of EG for PEG (4000, 10000, and 20000) were measured. Figure 1 shows typical type I isotherms, and the equilibrium adsorbance is less than 50 mg/g. The planar PEG molecules may adopt particular conformations on the EG surface, which could reduce the number of available adsorption sites and hinder further adsorption. As shown in Figure 1(b), the adsorbance decreases with increasing PEG MW; a similar result was obtained for the adsorption of PEG on activated carbon [9].

The Langmuir equation (3) [14] was used to treat the isotherm data:

1/Q = 1/Q0 + A/(Q0 × C)   (3)

where Q0 is the maximum adsorbance (mmol/g), C is the equilibrium PEG concentration (mg/mL), and A is the equilibrium concentration of PEG corresponding to half the saturation adsorbance (mg/mL). The molecular area (a) of PEG was calculated from the maximum adsorbance Q0 and the total pore area listed in Table 1. The results shown in Figure 2 suggest a linear relationship between MW and the molecular area a [(nm)2/molecule of PEG], which indicates that the PEG molecules lie flat on the EG surface, just as in the adsorption of PEG on activated carbon [9].

Stepwise adsorption of EG for oil: further evidence for the adsorption mechanism

The stepwise adsorbances of oil on EG saturated with PEG solutions of different concentrations were measured, and the results are shown in Figures 4 and 5 (stepwise adsorption capacities of SD300 oil and salad oil on EG, respectively). The adsorbance of oil on EG first saturated with de-ionized water is taken as the blank value. No obvious difference among the adsorbances is observed.

To test whether the differences among the stepwise oil adsorbances are statistically significant, deviation analysis was carried out. The blank value is taken as the mean of the deviation analysis [9]. Deviations between the mean and each oil adsorbance corresponding to the various initial PEG concentrations (within groups), and between the mean and the oil adsorbances corresponding to the different PEG molecular weights (among groups), were calculated; 12 groups of data were used. The results in Tables 3 and 4 both lead to the conclusion of no statistical significance; in other words, the stepwise adsorbance of oils depends neither on the PEG concentration nor on the PEG molecular weight.

Compared with adsorption on fresh EG, the stepwise adsorbance of oil declines markedly: EG first saturated with PEG still retains an average adsorption capacity of 98 g/g for SD300 oil and 85 g/g for salad oil, and this value does not change with the initial PEG concentration. The reduction is attributed to the breakage of the V-type structure (as shown in Figure 3(b)), deformation of pores, and shattering of the particles under the surface tension between EG and water during PEG adsorption. The persistence of a substantial stepwise oil adsorbance confirms that EG adsorbs PEG and oil by different mechanisms. Adsorption of oil on EG is mainly filling of the first- and second-level pores; once this structure is destroyed, the oil adsorption capacity decreases markedly. Adsorption of PEG is monolayer, with the molecules lying flat on the EG surface. EG that has been used for the treatment of PEG wastewater can therefore be used subsequently for the removal of oil contamination.
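As an illustration of the isotherm treatment and the deviation analysis described above, the sketch below fits the linearized Langmuir form of equation (3) by least squares, estimates the molecular area a from Q0 and an assumed total pore area, and runs a one-sample t-test of the stepwise oil adsorbances against the blank value. The concentrations, adsorbances, and pore area are placeholders, not data from the paper.

```python
# Sketch of the Langmuir fit (eq. 3) and the deviation analysis (Tables 3-4).
# All numerical values below are illustrative placeholders only.
import numpy as np
from scipy import stats

# Linearized Langmuir: 1/Q = 1/Q0 + A/(Q0*C)  ->  regress 1/Q on 1/C.
C = np.array([0.05, 0.1, 0.2, 0.5, 1.0])         # equilibrium conc., mg/mL (placeholder)
Q = np.array([2.1, 3.0, 3.8, 4.5, 4.8]) * 1e-3   # adsorbance, mmol/g (placeholder)

slope, intercept, r, _, _ = stats.linregress(1.0 / C, 1.0 / Q)
Q0 = 1.0 / intercept             # maximum adsorbance, mmol/g
A = slope / intercept            # half-saturation concentration, mg/mL

# Molecular area a from Q0 and the total pore area S (m^2/g, placeholder for Table 1).
N_A = 6.022e23
S_m2_per_g = 40.0
a_nm2 = S_m2_per_g * 1e18 / (Q0 * 1e-3 * N_A)    # nm^2 per molecule, assuming flat monolayer coverage

# Deviation analysis: one-sample t-test of stepwise oil adsorbances against the blank.
blank = 98.0                                     # blank adsorbance, g/g (placeholder)
stepwise = np.array([97.2, 99.1, 96.8, 98.5])    # placeholder group values, g/g
t_stat, p_value = stats.ttest_1samp(stepwise, popmean=blank)

print(f"Q0 = {Q0:.3e} mmol/g, A = {A:.3f} mg/mL, a = {a_nm2:.2f} nm^2, p = {p_value:.2f}")
```

A large p-value in such a test would correspond to the paper's conclusion that the stepwise oil adsorbances do not differ significantly from the blank.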
(2) Adsorption of PEG on EG is monolayer, and the PEG molecules lie flat on the pore surfaces, which results in a much smaller adsorbance. EG saturated with PEG still holds abundant pores on the surfaces of the worm-like particles. In the stepwise adsorption of oil on EG saturated with PEG of different molecular weights and concentrations, there is no statistically significant difference among the stepwise oil adsorbances. The adsorbent can therefore be used for the removal of PEG and of oil step by step.

Figure 1. Adsorption isotherms of PEG (1000, 4000, 10000, and 20000) on EG at 25 °C: (a) adsorbance expressed in mg/g; (b) adsorbance expressed in mmol/g.

Figure 2. Molecular area a of PEG [(nm)2/molecule] as a function of molecular weight, obtained from the Langmuir fit of equation (3) and the total pore area in Table 1.

Figure 3. SEM micrographs of EG: (a) EG; (b) EG saturated with PEG (10000); (c) EG saturated with SD300 oil; (d) EG saturated with salad oil. Panel (a) shows the special V-type structure of EG described in reference [5]. Compared with (a), panel (b) shows obvious breakage of the V-type structure, although the pores on the surfaces of the worm-like particles are still visible, while panels (c) and (d) show EG filled thoroughly by SD300 oil and by salad oil, respectively. The adsorption of oil into EG may consist of a two-step process of adsorption and filling [7]; the "wrapping space" and the first- and second-level pores of the EG particles play the major role in determining the adsorbance. In contrast, as shown above, the adsorption of PEG on EG is monolayer adsorption, with the molecules lying flat on the EG surface.

Table 1. Structural parameters of expanded graphite.

Table 2. MW and wavelength of maximum absorbance (λmax) of PEG.

Table 3. t-test of the stepwise adsorbances of SD300 oil on EG.

Table 4. t-test of the stepwise adsorbances of salad oil on EG. (a) The blank adsorbance of salad oil on EG is taken as the mean, and the deviation analysis is carried out among the stepwise adsorbances of salad oil corresponding to various initial concentrations of PEG (10000). (b) The deviation of the stepwise adsorbance of salad oil on EG is calculated between PEG (20000)